Date: Mon, 13 May 2024 13:56:33 +0800
Subject: Re: [PATCH 07/17] KVM: x86/mmu: Use synthetic page fault error code to indicate private faults
From: Xiaoyao Li
To: Paolo Bonzini, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Sean Christopherson
In-Reply-To: <20240507155817.3951344-8-pbonzini@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 5/7/2024 11:58 PM, Paolo Bonzini wrote:
> From: Sean Christopherson
>
> Add and use a synthetic, KVM-defined page fault error code to indicate
> whether a fault is to private vs. shared memory.  TDX and SNP have
> different mechanisms for reporting private vs. shared, and KVM's
> software-protected VMs have no mechanism at all.  Usurp an error code
> flag to avoid having to plumb another parameter to kvm_mmu_page_fault()
> and friends.
>
> Alternatively, KVM could borrow AMD's PFERR_GUEST_ENC_MASK, i.e. set it
> for TDX and software-protected VMs as appropriate, but that would require
> *clearing* the flag for SEV and SEV-ES VMs, which support encrypted
> memory at the hardware layer, but don't utilize private memory at the
> KVM layer.
>
> Opportunistically add a comment to call out that the logic for software-
> protected VMs is (and was before this commit) broken for nested MMUs, i.e.
> for nested TDP, as the GPA is an L2 GPA.  Punt on trying to play nice with
> nested MMUs as there is a _lot_ of functionality that simply doesn't work
> for software-protected VMs, e.g. all of the paths where KVM accesses guest
> memory need to be updated to be aware of private vs. shared memory.
>
> Signed-off-by: Sean Christopherson
> Message-Id: <20240228024147.41573-6-seanjc@google.com>
> Signed-off-by: Paolo Bonzini

Reviewed-by: Xiaoyao Li

> ---
>   arch/x86/include/asm/kvm_host.h |  7 ++++++-
>   arch/x86/kvm/mmu/mmu.c          | 14 ++++++++++++++
>   arch/x86/kvm/mmu/mmu_internal.h |  2 +-
>   3 files changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 12e727301262..0dc755a6dc0c 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -273,7 +273,12 @@ enum x86_intercept_stage;
>    * when emulating instructions that triggers implicit access.
>    */
>   #define PFERR_IMPLICIT_ACCESS		BIT_ULL(48)
> -#define PFERR_SYNTHETIC_MASK		(PFERR_IMPLICIT_ACCESS)
> +/*
> + * PRIVATE_ACCESS is a KVM-defined flag used to indicate that a fault
> + * occurred when the guest was accessing private memory.
> + */
> +#define PFERR_PRIVATE_ACCESS		BIT_ULL(49)
> +#define PFERR_SYNTHETIC_MASK		(PFERR_IMPLICIT_ACCESS | PFERR_PRIVATE_ACCESS)
>
>   #define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK |	\
>   				 PFERR_WRITE_MASK |		\
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 3609167ba30e..eb041acec2dc 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5799,6 +5799,20 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
>   	if (WARN_ON_ONCE(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
>   		return RET_PF_RETRY;
>
> +	/*
> +	 * Except for reserved faults (emulated MMIO is shared-only), set the
> +	 * PFERR_PRIVATE_ACCESS flag for software-protected VMs based on the
> +	 * gfn's current attributes, which are the source of truth for such
> +	 * VMs.  Note, this is wrong for nested MMUs as the GPA is an L2 GPA,
> +	 * but KVM doesn't currently support nested virtualization (among
> +	 * many other things) for software-protected VMs.
> +	 */
> +	if (IS_ENABLED(CONFIG_KVM_SW_PROTECTED_VM) &&
> +	    !(error_code & PFERR_RSVD_MASK) &&
> +	    vcpu->kvm->arch.vm_type == KVM_X86_SW_PROTECTED_VM &&
> +	    kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(cr2_or_gpa)))
> +		error_code |= PFERR_PRIVATE_ACCESS;
> +
>   	r = RET_PF_INVALID;
>   	if (unlikely(error_code & PFERR_RSVD_MASK)) {
>   		r = handle_mmio_page_fault(vcpu, cr2_or_gpa, direct);
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 797b80f996a7..dfd9ff383663 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -306,7 +306,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>   		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
>   		.req_level = PG_LEVEL_4K,
>   		.goal_level = PG_LEVEL_4K,
> -		.is_private = kvm_mem_is_private(vcpu->kvm, cr2_or_gpa >> PAGE_SHIFT),
> +		.is_private = err & PFERR_PRIVATE_ACCESS,
>   	};
>   	int r;
>