Date: Fri, 25 Sep 2020 14:22:41 -0700
In-Reply-To: <20200925212302.3979661-1-bgardon@google.com>
Message-Id: <20200925212302.3979661-2-bgardon@google.com>
References: <20200925212302.3979661-1-bgardon@google.com>
X-Mailer: git-send-email 2.28.0.709.gb0816b6eb0-goog
Subject: [PATCH 01/22] kvm: mmu: Separate making SPTEs from set_spte
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson,
	Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson,
	Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong,
	Ben Gardon

Separate the functions for generating leaf page table entries from the
function that inserts them into the paging structure.
This refactoring will facilitate changes to the MMU synchronization model
to use atomic compare / exchanges (which are not guaranteed to succeed)
instead of a monolithic MMU lock.

No functional change expected.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine. This commit introduced no new failures.

This series can be viewed in Gerrit at:
	https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon
Reviewed-by: Peter Shier
---
 arch/x86/kvm/mmu/mmu.c | 52 +++++++++++++++++++++++++++---------------
 1 file changed, 34 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 71aa3da2a0b7b..81240b558d67f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2971,20 +2971,14 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
 #define SET_SPTE_WRITE_PROTECTED_PT	BIT(0)
 #define SET_SPTE_NEED_REMOTE_TLB_FLUSH	BIT(1)
 
-static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-		    unsigned int pte_access, int level,
-		    gfn_t gfn, kvm_pfn_t pfn, bool speculative,
-		    bool can_unsync, bool host_writable)
+static u64 make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
+		     gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool speculative,
+		     bool can_unsync, bool host_writable, bool ad_disabled,
+		     int *ret)
 {
 	u64 spte = 0;
-	int ret = 0;
-	struct kvm_mmu_page *sp;
-
-	if (set_mmio_spte(vcpu, sptep, gfn, pfn, pte_access))
-		return 0;
-	sp = sptep_to_sp(sptep);
 
-	if (sp_ad_disabled(sp))
+	if (ad_disabled)
 		spte |= SPTE_AD_DISABLED_MASK;
 	else if (kvm_vcpu_ad_need_write_protect(vcpu))
 		spte |= SPTE_AD_WRPROT_ONLY_MASK;
@@ -3037,27 +3031,49 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	 * is responsibility of mmu_get_page / kvm_sync_page.
 	 * Same reasoning can be applied to dirty page accounting.
 	 */
-		if (!can_unsync && is_writable_pte(*sptep))
-			goto set_pte;
+		if (!can_unsync && is_writable_pte(old_spte))
+			return spte;
 
 		if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
 			pgprintk("%s: found shadow page for %llx, marking ro\n",
 				 __func__, gfn);
-			ret |= SET_SPTE_WRITE_PROTECTED_PT;
+			*ret |= SET_SPTE_WRITE_PROTECTED_PT;
 			pte_access &= ~ACC_WRITE_MASK;
 			spte &= ~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
 		}
 	}
 
-	if (pte_access & ACC_WRITE_MASK) {
-		kvm_vcpu_mark_page_dirty(vcpu, gfn);
+	if (pte_access & ACC_WRITE_MASK)
 		spte |= spte_shadow_dirty_mask(spte);
-	}
 
 	if (speculative)
 		spte = mark_spte_for_access_track(spte);
 
-set_pte:
+	return spte;
+}
+
+static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
+		    unsigned int pte_access, int level,
+		    gfn_t gfn, kvm_pfn_t pfn, bool speculative,
+		    bool can_unsync, bool host_writable)
+{
+	u64 spte = 0;
+	struct kvm_mmu_page *sp;
+	int ret = 0;
+
+	if (set_mmio_spte(vcpu, sptep, gfn, pfn, pte_access))
+		return 0;
+
+	sp = sptep_to_sp(sptep);
+
+	spte = make_spte(vcpu, pte_access, level, gfn, pfn, *sptep, speculative,
+			 can_unsync, host_writable, sp_ad_disabled(sp), &ret);
+	if (!spte)
+		return 0;
+
+	if (spte & PT_WRITABLE_MASK)
+		kvm_vcpu_mark_page_dirty(vcpu, gfn);
+
 	if (mmu_spte_update(sptep, spte))
 		ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;
 	return ret;
--
2.28.0.709.gb0816b6eb0-goog
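[Editorial sketch, not part of the patch.] The commit message motivates the split: once SPTE computation is a pure function, the installation step can later become an atomic compare / exchange that may fail under a finer-grained locking model. The shape of that pattern can be sketched in a self-contained user-space analogue; the mask values and both helpers below are hypothetical stand-ins, not KVM code.

```c
/*
 * Illustrative user-space sketch only -- not KVM code. The masks and the
 * simplified make_spte()/try_set_spte() helpers are hypothetical stand-ins
 * showing the compute-then-install split described in the commit message.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SPTE_PRESENT	(1ULL << 0)	/* hypothetical "present" bit */
#define SPTE_WRITABLE	(1ULL << 1)	/* hypothetical "writable" bit */

/* Pure computation: build the new entry value without touching memory. */
static uint64_t make_spte(uint64_t pfn, bool writable)
{
	uint64_t spte = SPTE_PRESENT | (pfn << 12);

	if (writable)
		spte |= SPTE_WRITABLE;
	return spte;
}

/*
 * Separate installation step: a compare-exchange that fails if another
 * thread changed *sptep since old_spte was read; the caller must then
 * re-read the entry and retry, rather than relying on a big lock.
 */
static bool try_set_spte(uint64_t *sptep, uint64_t old_spte, uint64_t new_spte)
{
	return __atomic_compare_exchange_n(sptep, &old_spte, new_spte,
					   false, __ATOMIC_ACQ_REL,
					   __ATOMIC_ACQUIRE);
}
```

Because `make_spte()` never writes memory, it can be called speculatively and its result thrown away when the cmpxchg loses a race, which is exactly why the patch pulls the computation out of `set_spte()`.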