Date: Wed, 12 Jun 2024 13:47:46 -0700
In-Reply-To: <9477c21a-4b18-4539-9f82-11046e43063c@intel.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20240419085927.3648704-1-pbonzini@redhat.com>
 <20240419085927.3648704-4-pbonzini@redhat.com>
 <9477c21a-4b18-4539-9f82-11046e43063c@intel.com>
Subject: Re: [PATCH 3/6] KVM: x86/mmu: Extract __kvm_mmu_do_page_fault()
From: Sean Christopherson
To: Xiaoyao Li
Cc: Paolo Bonzini, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
isaku.yamahata@intel.com, binbin.wu@linux.intel.com, rick.p.edgecombe@intel.com
Content-Type: text/plain; charset="us-ascii"

On Mon, Apr 22, 2024, Xiaoyao Li wrote:
> On 4/19/2024 4:59 PM, Paolo Bonzini wrote:
> > From: Isaku Yamahata
> >
> > Extract out __kvm_mmu_do_page_fault() from kvm_mmu_do_page_fault(). The
> > inner function is to initialize struct kvm_page_fault and to call the fault
> > handler, and the outer function handles updating stats and converting
> > return code.
>
> I don't see how the outer function converts return code.
>
> > KVM_PRE_FAULT_MEMORY will call the KVM page fault handler.
>
> I assume it means the inner function will be used by KVM_PRE_FAULT_MEMORY.
>
> > This patch makes the emulation_type always set irrelevant to the return
> > code. kvm_mmu_page_fault() is the only caller of kvm_mmu_do_page_fault(),
> > and references the value only when PF_RET_EMULATE is returned. Therefore,
> > this adjustment doesn't affect functionality.
>
> This paragraph needs to be removed, I think. It's not true.

It's oddly worded, but I do think it's correct.  kvm_arch_async_page_ready()
doesn't pass emulation_type, and kvm_mmu_page_fault() bails early for all
other return values:

	if (r < 0)
		return r;
	if (r != RET_PF_EMULATE)
		return 1;

That said, this belongs in a separate patch (if it's actually necessary).

And _that_ said, rather than add an inner version, what if we instead
shuffle the stats code?  pf_taken, pf_spurious, and pf_emulate should
_only_ ever be bumped by kvm_mmu_page_fault(), i.e. should only track page
faults that actually happened in the guest.  And so handling them in
kvm_mmu_do_page_fault() doesn't make any sense, because there should only
ever be one caller that passes prefetch=false.

Compile tested only, and kvm_mmu_page_fault() is a bit ugly (but that's
solvable), but I think this would provide better separation of concerns.
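To make the intent concrete, here is a standalone toy model of the proposed split (illustrative names in plain C, not the kernel code): both the real fault handler and the async #PF page-ready path funnel into a common "do page fault" worker, but only the real handler bumps pf_taken, since a page-ready "fault" was already counted when the guest originally faulted.

```c
#include <assert.h>

/* Hypothetical sketch, not KVM code: RET_PF_* and pf_taken mirror the
 * kernel names, everything else is simplified away. */
enum { RET_PF_FIXED = 1, RET_PF_SPURIOUS, RET_PF_EMULATE };

struct vcpu_stats { long pf_taken; };

/* Common worker: resolves the fault and touches no stats at all. */
static int do_page_fault(int prefetch)
{
	(void)prefetch;         /* stand-in for the real fault handling */
	return RET_PF_FIXED;
}

/* Real handler, i.e. the only prefetch=false caller: counts the fault. */
static int real_page_fault(struct vcpu_stats *stats)
{
	stats->pf_taken++;
	return do_page_fault(0);
}

/* Async #PF page-ready path: re-runs the fault without counting it. */
static int async_page_ready(struct vcpu_stats *stats)
{
	(void)stats;
	return do_page_fault(1);
}
```

One guest fault followed by its async page-ready completion leaves pf_taken at exactly 1, which is the invariant the shuffle below preserves without needing a prefetch check in the worker.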
--
From: Sean Christopherson
Date: Wed, 12 Jun 2024 12:51:38 -0700
Subject: [PATCH 1/2] KVM: x86/mmu: Bump pf_taken stat only in the "real" page
 fault handler

Account stat.pf_taken in kvm_mmu_page_fault(), i.e. the actual page fault
handler, instead of conditionally bumping it in kvm_mmu_do_page_fault().
The "real" page fault handler is the only path that should ever increment
the number of taken page faults, as all other paths that "do page fault"
are by definition not handling faults that occurred in the guest.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 2 ++
 arch/x86/kvm/mmu/mmu_internal.h | 8 --------
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f2c9580d9588..3461b8c4aba2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5928,6 +5928,8 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 	}
 
 	if (r == RET_PF_INVALID) {
+		vcpu->stat.pf_taken++;
+
 		r = kvm_mmu_do_page_fault(vcpu, cr2_or_gpa, error_code, false,
 					  &emulation_type);
 		if (KVM_BUG_ON(r == RET_PF_INVALID, vcpu->kvm))
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index ce2fcd19ba6b..8efd31b3856b 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -318,14 +318,6 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		fault.slot = kvm_vcpu_gfn_to_memslot(vcpu, fault.gfn);
 	}
 
-	/*
-	 * Async #PF "faults", a.k.a. prefetch faults, are not faults from the
-	 * guest perspective and have already been counted at the time of the
-	 * original fault.
-	 */
-	if (!prefetch)
-		vcpu->stat.pf_taken++;
-
 	if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && fault.is_tdp)
 		r = kvm_tdp_page_fault(vcpu, &fault);
 	else

base-commit: b7bc82a015e237862837bd1300d6ba1f5cd17466
-- 
2.45.2.505.gda0bf45e8d-goog


From 1dc69d38a8d51c9d8ad833475938cb925f7ea4cf Mon Sep 17 00:00:00 2001
From: Sean Christopherson
Date: Wed, 12 Jun 2024 12:59:06 -0700
Subject: [PATCH 2/2] KVM: x86/mmu: Account pf_{fixed,emulate,spurious} in
 callers of "do page fault"

Move the accounting of the result of kvm_mmu_do_page_fault() to its
callers, as only pf_fixed is common to guest page faults and async #PFs,
and upcoming support for KVM_PRE_FAULT_MEMORY won't bump _any_ stats.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 19 ++++++++++++++++++-
 arch/x86/kvm/mmu/mmu_internal.h | 13 -------------
 2 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3461b8c4aba2..56373577a197 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4291,7 +4291,16 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 	    work->arch.cr3 != kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu))
 		return;
 
-	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, work->arch.error_code, true, NULL);
+	r = kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, work->arch.error_code,
+				  true, NULL);
+
+	/*
+	 * Account fixed page faults, otherwise they'll never be counted, but
+	 * ignore stats for all other return types.  Page-ready "faults" aren't
+	 * truly spurious and never trigger emulation.
+	 */
+	if (r == RET_PF_FIXED)
+		vcpu->stat.pf_fixed++;
 }
 
 static inline u8 kvm_max_level_for_order(int order)
@@ -5938,6 +5947,14 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 
 	if (r < 0)
 		return r;
+
+	if (r == RET_PF_FIXED)
+		vcpu->stat.pf_fixed++;
+	else if (r == RET_PF_EMULATE)
+		vcpu->stat.pf_emulate++;
+	else if (r == RET_PF_SPURIOUS)
+		vcpu->stat.pf_spurious++;
+
 	if (r != RET_PF_EMULATE)
 		return 1;
 
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 8efd31b3856b..444f55a5eed7 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -337,19 +337,6 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	if (fault.write_fault_to_shadow_pgtable && emulation_type)
 		*emulation_type |= EMULTYPE_WRITE_PF_TO_SP;
 
-	/*
-	 * Similar to above, prefetch faults aren't truly spurious, and the
-	 * async #PF path doesn't do emulation.  Do count faults that are fixed
-	 * by the async #PF handler though, otherwise they'll never be counted.
-	 */
-	if (r == RET_PF_FIXED)
-		vcpu->stat.pf_fixed++;
-	else if (prefetch)
-		;
-	else if (r == RET_PF_EMULATE)
-		vcpu->stat.pf_emulate++;
-	else if (r == RET_PF_SPURIOUS)
-		vcpu->stat.pf_spurious++;
 
 	return r;
 }
--
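For reference, the net effect of the two patches on stats accounting can be modeled in a few lines of standalone C (illustrative names, not the kernel code): the guest fault path counts fixed, emulated, and spurious results, while the page-ready path counts only fixed faults, since page-ready "faults" aren't truly spurious and never trigger emulation.

```c
#include <assert.h>

/* Hypothetical sketch of the per-return-type accounting after it moves
 * into the callers of "do page fault". */
enum { RET_PF_FIXED = 1, RET_PF_SPURIOUS, RET_PF_EMULATE };

struct vcpu_stats { long pf_fixed, pf_emulate, pf_spurious; };

/* Guest fault path: every interesting return type gets a counter. */
static void account_guest_fault(struct vcpu_stats *stats, int r)
{
	if (r == RET_PF_FIXED)
		stats->pf_fixed++;
	else if (r == RET_PF_EMULATE)
		stats->pf_emulate++;
	else if (r == RET_PF_SPURIOUS)
		stats->pf_spurious++;
}

/* Async page-ready path: only fixed faults are worth counting, as the
 * other outcomes don't reflect real guest activity on this path. */
static void account_async_fault(struct vcpu_stats *stats, int r)
{
	if (r == RET_PF_FIXED)
		stats->pf_fixed++;
}
```

With the accounting in the callers, the common worker no longer needs to branch on prefetch at all, which is the separation of concerns argued for above.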