Date: Tue, 22 Jun 2021 10:56:58 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-14-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 13/54] KVM: x86/mmu: Rename unsync helper and update related comments
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky

Rename mmu_need_write_protect() to mmu_try_to_unsync_pages() and update
a variety of related, stale comments.  Add several new comments to call
out subtle details, e.g. that upper-level shadow pages are write-tracked,
and that can_unsync is false iff KVM is in the process of synchronizing
pages.

No functional change intended.

Signed-off-by: Sean Christopherson
---
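
For context, a minimal sketch of the new return-value convention, lifted
from the make_spte() hunk below (illustrative only, not meant to be
applied as part of the diff):

	/*
	 * 0 means every shadow page reachable from the gfn was marked
	 * unsync (or no shadow page exists) and the SPTE may be made
	 * writable; -EPERM means the SPTE must be write-protected.
	 */
	if (mmu_try_to_unsync_pages(vcpu, gfn, can_unsync)) {
		/* Unsync failed or is disallowed, keep the SPTE read-only. */
		ret |= SET_SPTE_WRITE_PROTECTED_PT;
	}
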
 arch/x86/kvm/mmu/mmu.c          | 34 ++++++++++++++++++++++++---------
 arch/x86/kvm/mmu/mmu_internal.h |  3 +--
 arch/x86/kvm/mmu/spte.c         | 10 ++++++++--
 3 files changed, 34 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 77296ce6215f..0171c245ecc7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2458,17 +2458,33 @@ static void kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 	kvm_mmu_mark_parents_unsync(sp);
 }
 
-bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
-			    bool can_unsync)
+/*
+ * Attempt to unsync any shadow pages that can be reached by the specified gfn;
+ * KVM is creating a writable mapping for said gfn.  Returns 0 if all pages
+ * were marked unsync (or if there is no shadow page), -EPERM if the SPTE must
+ * be write-protected.
+ */
+int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync)
 {
 	struct kvm_mmu_page *sp;
 
+	/*
+	 * Force write-protection if the page is being tracked.  Note, the page
+	 * track machinery is used to write-protect upper-level shadow pages,
+	 * i.e. this guards the role.level == 4K assertion below!
+	 */
 	if (kvm_page_track_is_active(vcpu, gfn, KVM_PAGE_TRACK_WRITE))
-		return true;
+		return -EPERM;
 
+	/*
+	 * The page is not write-tracked, mark existing shadow pages unsync
+	 * unless KVM is synchronizing an unsync SP (can_unsync = false).  In
+	 * that case, KVM must complete emulation of the guest TLB flush before
+	 * allowing shadow pages to become unsync (writable by the guest).
+	 */
 	for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn) {
 		if (!can_unsync)
-			return true;
+			return -EPERM;
 
 		if (sp->unsync)
 			continue;
@@ -2499,8 +2515,8 @@ bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
 	 *                      2.2 Guest issues TLB flush.
 	 *                          That causes a VM Exit.
 	 *
-	 *                      2.3 kvm_mmu_sync_pages() reads sp->unsync.
-	 *                          Since it is false, so it just returns.
+	 *                      2.3 Walking of unsync pages sees sp->unsync is
+	 *                          false and skips the page.
 	 *
 	 *                      2.4 Guest accesses GVA X.
 	 *                          Since the mapping in the SP was not updated,
@@ -2516,7 +2532,7 @@ bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
 	 */
 	smp_wmb();
 
-	return false;
+	return 0;
 }
 
 static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
@@ -3461,8 +3477,8 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 	 * flush strictly after those changes are made. We only need to
 	 * ensure that the other CPU sets these flags before any actual
 	 * changes to the page tables are made.  The comments in
-	 * mmu_need_write_protect() describe what could go wrong if this
-	 * requirement isn't satisfied.
+	 * mmu_try_to_unsync_pages() describe what could go wrong if
+	 * this requirement isn't satisfied.
	 */
 	if (!smp_load_acquire(&sp->unsync) &&
 	    !smp_load_acquire(&sp->unsync_children))
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 18be103df9d5..35567293c1fd 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -122,8 +122,7 @@ static inline bool is_nx_huge_page_enabled(void)
 	return READ_ONCE(nx_huge_pages);
 }
 
-bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
-			    bool can_unsync);
+int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync);
 
 void kvm_mmu_gfn_disallow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_mmu_gfn_allow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 8e8e8da740a0..246e61e0771e 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -147,13 +147,19 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
 		/*
 		 * Optimization: for pte sync, if spte was writable the hash
 		 * lookup is unnecessary (and expensive). Write protection
-		 * is responsibility of mmu_get_page / kvm_sync_page.
+		 * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
 		 * Same reasoning can be applied to dirty page accounting.
 		 */
 		if (!can_unsync && is_writable_pte(old_spte))
 			goto out;
 
-		if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
+		/*
+		 * Unsync shadow pages that are reachable by the new, writable
+		 * SPTE.  Write-protect the SPTE if the page can't be unsync'd,
+		 * e.g. it's write-tracked (upper-level SPs) or has one or more
+		 * shadow pages and unsync'ing pages is not allowed.
+		 */
+		if (mmu_try_to_unsync_pages(vcpu, gfn, can_unsync)) {
 			pgprintk("%s: found shadow page for %llx, marking ro\n",
 				 __func__, gfn);
 			ret |= SET_SPTE_WRITE_PROTECTED_PT;
-- 
2.32.0.288.g62a8d224e6-goog
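
A rough sketch of the cross-vCPU ordering the comments above rely on
(illustrative only; distilled from the comments in this patch, not code
taken from it):

	/*
	 * vCPU A: mmu_try_to_unsync_pages()   vCPU B: kvm_mmu_sync_roots()
	 *
	 *   sp->unsync = true;
	 *   smp_wmb();
	 *   <SPTE made writable>               if (!smp_load_acquire(&sp->unsync) &&
	 *                                          !smp_load_acquire(&sp->unsync_children))
	 *                                              return;   (nothing to sync)
	 *
	 * The write barrier makes sp->unsync visible before any writable
	 * SPTE exists, so if vCPU B observes a stale 'false' there is not
	 * yet a writable mapping that would require a sync and TLB flush.
	 */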