Date: Wed, 14 Oct 2020 11:26:57 -0700
In-Reply-To: <20201014182700.2888246-1-bgardon@google.com>
Message-Id: <20201014182700.2888246-18-bgardon@google.com>
References: <20201014182700.2888246-1-bgardon@google.com>
Subject: [PATCH v2 17/20] kvm: x86/mmu: Support write protection for nesting in tdp MMU
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson,
    Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang,
    Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon
X-Mailing-List: linux-kernel@vger.kernel.org

To support nested virtualization, KVM will sometimes need to write
protect pages which are part of a shadowed paging structure or are not
writable in the shadowed paging structure. Add a function to write
protect GFN mappings for this purpose.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine.
This series introduced no new failures.

This series can be viewed in Gerrit at:
	https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c     |  4 +++
 arch/x86/kvm/mmu/tdp_mmu.c | 50 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.h |  3 +++
 3 files changed, 57 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8fcf5e955c475..58d2412817c87 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1553,6 +1553,10 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 		write_protected |= __rmap_write_protect(kvm, rmap_head, true);
 	}
 
+	if (kvm->arch.tdp_mmu_enabled)
+		write_protected |=
+			kvm_tdp_mmu_write_protect_gfn(kvm, slot, gfn);
+
 	return write_protected;
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 94624cc1df84c..c471f2e977d11 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1078,3 +1078,53 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
 		put_tdp_mmu_root(kvm, root);
 	}
 }
+
+/*
+ * Removes write access on the last level SPTE mapping this GFN and unsets the
+ * SPTE_MMU_WRITABLE bit to ensure future writes continue to be intercepted.
+ * Returns true if an SPTE was set and a TLB flush is needed.
+ */
+static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
+			      gfn_t gfn)
+{
+	struct tdp_iter iter;
+	u64 new_spte;
+	bool spte_set = false;
+
+	tdp_root_for_each_leaf_pte(iter, root, gfn, gfn + 1) {
+		if (!is_writable_pte(iter.old_spte))
+			break;
+
+		new_spte = iter.old_spte &
+			~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
+
+		tdp_mmu_set_spte(kvm, &iter, new_spte);
+		spte_set = true;
+	}
+
+	return spte_set;
+}
+
+/*
+ * Removes write access on the last level SPTE mapping this GFN and unsets the
+ * SPTE_MMU_WRITABLE bit to ensure future writes continue to be intercepted.
+ * Returns true if an SPTE was set and a TLB flush is needed.
+ */
+bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
+				   struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	struct kvm_mmu_page *root;
+	int root_as_id;
+	bool spte_set = false;
+
+	lockdep_assert_held(&kvm->mmu_lock);
+	for_each_tdp_mmu_root(kvm, root) {
+		root_as_id = kvm_mmu_page_as_id(root);
+		if (root_as_id != slot->as_id)
+			continue;
+
+		spte_set = write_protect_gfn(kvm, root, gfn) || spte_set;
+	}
+	return spte_set;
+}
+
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index dc4cdc5cc29f5..b66283db43221 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -40,4 +40,7 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot);
 void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
 				       const struct kvm_memory_slot *slot);
+
+bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
+				   struct kvm_memory_slot *slot, gfn_t gfn);
 #endif /* __KVM_X86_MMU_TDP_MMU_H */
-- 
2.28.0.1011.ga647a8990f-goog