From: Ben Gardon <bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson,
	Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson,
	Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong,
	Ben Gardon
Subject: [PATCH 19/22] kvm: mmu: Support write protection for nesting in tdp MMU
Date: Fri, 25 Sep 2020 14:22:59 -0700
Message-Id: <20200925212302.3979661-20-bgardon@google.com>
In-Reply-To: <20200925212302.3979661-1-bgardon@google.com>
References: <20200925212302.3979661-1-bgardon@google.com>
X-Mailer: git-send-email 2.28.0.709.gb0816b6eb0-goog
X-Mailing-List: linux-kernel@vger.kernel.org

To support nested virtualization, KVM will sometimes need to write
protect pages which are part of a shadowed paging structure, or which
are not writable in the shadowed paging structure. Add a function to
write protect GFN mappings for this purpose.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine. This series introduced no new failures.

This series can be viewed in Gerrit at:
	https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/kvm/mmu/mmu.c     |  5 ++++
 arch/x86/kvm/mmu/tdp_mmu.c | 57 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.h |  3 ++
 3 files changed, 65 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 12892fc4f146d..e6f5093ba8f6f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1667,6 +1667,11 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 		write_protected |= __rmap_write_protect(kvm, rmap_head, true);
 	}
 
+	if (kvm->arch.tdp_mmu_enabled)
+		write_protected =
+			kvm_tdp_mmu_write_protect_gfn(kvm, slot, gfn) ||
+			write_protected;
+
 	return write_protected;
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index a2895119655ac..931cb469b1f2f 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1161,3 +1161,60 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
 		put_tdp_mmu_root(kvm, root);
 	}
 }
+
+/*
+ * Removes write access on the last level SPTE mapping this GFN and unsets the
+ * SPTE_MMU_WRITEABLE bit to ensure future writes continue to be intercepted.
+ * Returns true if an SPTE was set and a TLB flush is needed.
+ */
+static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
+			      gfn_t gfn)
+{
+	struct tdp_iter iter;
+	u64 new_spte;
+	bool spte_set = false;
+	int as_id = kvm_mmu_page_as_id(root);
+
+	for_each_tdp_pte_root(iter, root, gfn, gfn + 1) {
+		if (!is_shadow_present_pte(iter.old_spte) ||
+		    !is_last_spte(iter.old_spte, iter.level))
+			continue;
+
+		if (!is_writable_pte(iter.old_spte))
+			break;
+
+		new_spte = iter.old_spte &
+			~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
+
+		*iter.sptep = new_spte;
+		handle_changed_spte(kvm, as_id, iter.gfn, iter.old_spte,
+				    new_spte, iter.level);
+		spte_set = true;
+	}
+
+	return spte_set;
+}
+
+/*
+ * Removes write access on the last level SPTE mapping this GFN and unsets the
+ * SPTE_MMU_WRITEABLE bit to ensure future writes continue to be intercepted.
+ * Returns true if an SPTE was set and a TLB flush is needed.
+ */
+bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
+				   struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	struct kvm_mmu_page *root;
+	int root_as_id;
+	bool spte_set = false;
+
+	lockdep_assert_held(&kvm->mmu_lock);
+	for_each_tdp_mmu_root(kvm, root) {
+		root_as_id = kvm_mmu_page_as_id(root);
+		if (root_as_id != slot->as_id)
+			continue;
+
+		spte_set = write_protect_gfn(kvm, root, gfn) || spte_set;
+	}
+	return spte_set;
+}
+
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 10e70699c5372..2ecb047211a6d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -40,4 +40,7 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot);
 void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
 				       const struct kvm_memory_slot *slot);
+
+bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
+				   struct kvm_memory_slot *slot, gfn_t gfn);
 #endif /* __KVM_X86_MMU_TDP_MMU_H */
-- 
2.28.0.709.gb0816b6eb0-goog
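
As a usage sketch for readers of the series: kvm_tdp_mmu_write_protect_gfn()
is meant to be called with kvm->mmu_lock held, as the lockdep assertion and
the kvm_mmu_slot_gfn_write_protect() hunk above show, and it only modifies
SPTEs; the caller is responsible for any TLB flush. The minimal caller below
is hypothetical and not part of this patch (the name
example_write_protect_gfn is invented for illustration), assuming the
2020-era tree where mmu_lock is still a spinlock:

/*
 * Illustrative sketch only: write protect one GFN in a memslot and
 * flush TLBs if any SPTE was changed. In-tree, this combination is
 * what kvm_mmu_slot_gfn_write_protect() and its callers provide.
 */
static bool example_write_protect_gfn(struct kvm *kvm,
				      struct kvm_memory_slot *slot,
				      gfn_t gfn)
{
	bool flush = false;

	spin_lock(&kvm->mmu_lock);

	/*
	 * The rmap-based shadow MMU write protection would run here;
	 * the TDP MMU path is only taken when the TDP MMU is in use.
	 */
	if (kvm->arch.tdp_mmu_enabled)
		flush |= kvm_tdp_mmu_write_protect_gfn(kvm, slot, gfn);

	spin_unlock(&kvm->mmu_lock);

	/* The new helper only marks SPTEs; the caller must flush. */
	if (flush)
		kvm_flush_remote_tlbs(kvm);

	return flush;
}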