From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yan Zhao, Isaku Yamahata, Michael Roth, Yu Zhang, Chao Peng, Fuad Tabba, David Matlack
Subject: [PATCH 05/16] KVM: x86/mmu: Use synthetic page fault error code to indicate private faults
Date: Tue, 27 Feb 2024 18:41:36 -0800
Message-ID: <20240228024147.41573-6-seanjc@google.com>
In-Reply-To: <20240228024147.41573-1-seanjc@google.com>
References: <20240228024147.41573-1-seanjc@google.com>
X-Mailer: git-send-email 2.44.0.278.ge034bb2e1d-goog
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

Add and use a synthetic, KVM-defined page fault error code to indicate
whether a fault is to private vs. shared memory.  TDX and SNP have
different mechanisms for reporting private vs. shared, and KVM's
software-protected VMs have no mechanism at all.  Usurp an error code
flag to avoid having to plumb another parameter to kvm_mmu_page_fault()
and friends.

Alternatively, KVM could borrow AMD's PFERR_GUEST_ENC_MASK, i.e. set it
for TDX and software-protected VMs as appropriate, but that would
require *clearing* the flag for SEV and SEV-ES VMs, which support
encrypted memory at the hardware layer, but don't utilize private
memory at the KVM layer.

Opportunistically add a comment to call out that the logic for
software-protected VMs is (and was before this commit) broken for
nested MMUs, i.e. for nested TDP, as the GPA is an L2 GPA.  Punt on
trying to play nice with nested MMUs as there is a _lot_ of
functionality that simply doesn't work for software-protected VMs,
e.g. all of the paths where KVM accesses guest memory need to be
updated to be aware of private vs. shared memory.
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 11 +++++++++++
 arch/x86/kvm/mmu/mmu.c          | 26 +++++++++++++++++++-------
 arch/x86/kvm/mmu/mmu_internal.h |  2 +-
 3 files changed, 31 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1e69743ef0fb..4077c46c61ab 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -267,7 +267,18 @@ enum x86_intercept_stage;
 #define PFERR_GUEST_ENC_MASK	BIT_ULL(34)
 #define PFERR_GUEST_SIZEM_MASK	BIT_ULL(35)
 #define PFERR_GUEST_VMPL_MASK	BIT_ULL(36)
+
+/*
+ * IMPLICIT_ACCESS is a KVM-defined flag used to correctly perform SMAP checks
+ * when emulating instructions that trigger implicit access.
+ */
 #define PFERR_IMPLICIT_ACCESS	BIT_ULL(48)
+/*
+ * PRIVATE_ACCESS is a KVM-defined flag used to indicate that a fault occurred
+ * when the guest was accessing private memory.
+ */
+#define PFERR_PRIVATE_ACCESS	BIT_ULL(49)
+#define PFERR_SYNTHETIC_MASK	(PFERR_IMPLICIT_ACCESS | PFERR_PRIVATE_ACCESS)
 
 #define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK |	\
 				 PFERR_WRITE_MASK |		\
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 408969ac1291..7807bdcd87e8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5839,19 +5839,31 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 	bool direct = vcpu->arch.mmu->root_role.direct;
 
 	/*
-	 * IMPLICIT_ACCESS is a KVM-defined flag used to correctly perform SMAP
-	 * checks when emulating instructions that triggers implicit access.
 	 * WARN if hardware generates a fault with an error code that collides
-	 * with the KVM-defined value.  Clear the flag and continue on, i.e.
-	 * don't terminate the VM, as KVM can't possibly be relying on a flag
-	 * that KVM doesn't know about.
+	 * with KVM-defined synthetic flags.  Clear the flags and continue on,
+	 * i.e. don't terminate the VM, as KVM can't possibly be relying on a
+	 * flag that KVM doesn't know about.
 	 */
-	if (WARN_ON_ONCE(error_code & PFERR_IMPLICIT_ACCESS))
-		error_code &= ~PFERR_IMPLICIT_ACCESS;
+	if (WARN_ON_ONCE(error_code & PFERR_SYNTHETIC_MASK))
+		error_code &= ~PFERR_SYNTHETIC_MASK;
 
 	if (WARN_ON_ONCE(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
 		return RET_PF_RETRY;
 
+	/*
+	 * Except for reserved faults (emulated MMIO is shared-only), set the
+	 * private flag for software-protected VMs based on the gfn's current
+	 * attributes, which are the source of truth for such VMs.  Note, this
+	 * is wrong for nested MMUs as the GPA is an L2 GPA, but KVM doesn't
+	 * currently support nested virtualization (among many other things)
+	 * for software-protected VMs.
+	 */
+	if (IS_ENABLED(CONFIG_KVM_SW_PROTECTED_VM) &&
+	    !(error_code & PFERR_RSVD_MASK) &&
+	    vcpu->kvm->arch.vm_type == KVM_X86_SW_PROTECTED_VM &&
+	    kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(cr2_or_gpa)))
+		error_code |= PFERR_PRIVATE_ACCESS;
+
 	r = RET_PF_INVALID;
 	if (unlikely(error_code & PFERR_RSVD_MASK)) {
 		r = handle_mmio_page_fault(vcpu, cr2_or_gpa, direct);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 1fab1f2359b5..d7c10d338f14 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -306,7 +306,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 
 		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
 		.req_level = PG_LEVEL_4K,
 		.goal_level = PG_LEVEL_4K,
-		.is_private = kvm_mem_is_private(vcpu->kvm, cr2_or_gpa >> PAGE_SHIFT),
+		.is_private = err & PFERR_PRIVATE_ACCESS,
 	};
 	int r;
-- 
2.44.0.278.ge034bb2e1d-goog
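
[Editorial illustration, not part of the patch]

A minimal sketch of how the synthetic flag is intended to be consumed: a
vendor-specific fault path would translate its hardware-reported "private"
indication into PFERR_PRIVATE_ACCESS before handing the fault to the common
MMU, so that kvm_mmu_do_page_fault() can derive .is_private directly from the
error code instead of re-querying memory attributes.  The helper name and the
use of PFERR_GUEST_ENC_MASK as the hardware signal below are assumptions for
illustration only; they are not taken from this patch or series.

/*
 * Illustrative sketch: a hypothetical vendor entry point that maps a
 * hardware-reported private (encrypted) access onto KVM's synthetic
 * error code flag before invoking the common page fault handler.
 */
static int example_handle_guest_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa,
					   u64 error_code)
{
	/*
	 * Hardware reports the access as private via its own error code bit
	 * (PFERR_GUEST_ENC_MASK is assumed here as the hardware signal);
	 * translate it into the KVM-defined synthetic flag so the common MMU
	 * sees a single, vendor-agnostic indication of a private fault.
	 */
	if (error_code & PFERR_GUEST_ENC_MASK)
		error_code |= PFERR_PRIVATE_ACCESS;

	/* No instruction bytes are available for this example. */
	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
}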