From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
	erdemaktas@google.com, Sean Christopherson, Sagi Shahar, Kai Huang,
	chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com
Subject: [PATCH v19 013/130] KVM: x86: Use PFERR_GUEST_ENC_MASK to indicate fault is private
Date: Mon, 26 Feb 2024 00:25:15 -0800
Message-Id: 
X-Mailer: git-send-email 2.25.1
In-Reply-To: 
References: 
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Isaku Yamahata <isaku.yamahata@intel.com>

SEV-SNP defines PFERR_GUEST_ENC_MASK (bit 34) in the page-fault error code
to indicate that the guest page is encrypted.  Use the bit to designate
that the page fault is private and therefore requires a memory-attributes
lookup.

The vendor KVM page fault handler should set the PFERR_GUEST_ENC_MASK bit
based on its fault information; it may use the hardware value directly or
derive the bit from it.  For KVM_X86_SW_PROTECTED_VM, consult the memory
attributes to determine whether the fault is private.  For async page
faults, carry the bit and feed it back into the KVM page fault handler.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
Changes v4 -> v5:
- Eliminate kvm_is_fault_private() by open-coding the function
- Make the async page fault handler carry the is_private bit

Changes v3 -> v4:
- Rename back struct kvm_page_fault::private => is_private
- Catch up with rename: KVM_X86_PROTECTED_VM => KVM_X86_SW_PROTECTED_VM

Changes v2 -> v3:
- Revive PFERR_GUEST_ENC_MASK
- Rename struct kvm_page_fault::is_private => private
- Add check for KVM_X86_PROTECTED_VM

Changes v1 -> v2:
- Introduce fault type and replace is_private with fault_type.
- Add kvm_get_fault_type() to encapsulate the difference.
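As a quick illustration of the error-code handling described above, the
standalone C sketch below (userspace code, not part of the patch; the file
name, demo values, and the fault_is_private() helper are hypothetical)
models the core idea: bit 34 of the 64-bit page-fault error code marks the
fault as private, and the common fault path classifies the fault by
testing that bit, as the new .is_private initializer in mmu_internal.h
does.

/*
 * Illustrative, self-contained model of the PFERR_GUEST_ENC_MASK handling
 * (userspace C, not kernel code; helper and values are hypothetical).
 * Build with: cc -std=c11 -o pferr_demo pferr_demo.c
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BIT_ULL(n)              (1ULL << (n))
#define PFERR_GUEST_ENC_BIT     34      /* matches the new define in kvm_host.h */
#define PFERR_GUEST_ENC_MASK    BIT_ULL(PFERR_GUEST_ENC_BIT)

/* Models ".is_private = err & PFERR_GUEST_ENC_MASK" in mmu_internal.h. */
static bool fault_is_private(uint64_t error_code)
{
        return error_code & PFERR_GUEST_ENC_MASK;
}

int main(void)
{
        /* A vendor fault handler (or the SW_PROTECTED_VM path) sets the bit... */
        uint64_t error_code = 0x4 | PFERR_GUEST_ENC_MASK;       /* hypothetical value */

        /* ...and the common MMU path classifies the fault from it. */
        printf("fault is %s\n", fault_is_private(error_code) ? "private" : "shared");
        return 0;
}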
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/mmu/mmu.c          | 24 +++++++++++++++++-------
 arch/x86/kvm/mmu/mmu_internal.h |  2 +-
 3 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 57ce89fc2740..28314e7d546c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -264,6 +264,7 @@ enum x86_intercept_stage;
 #define PFERR_SGX_BIT 15
 #define PFERR_GUEST_FINAL_BIT 32
 #define PFERR_GUEST_PAGE_BIT 33
+#define PFERR_GUEST_ENC_BIT 34
 #define PFERR_IMPLICIT_ACCESS_BIT 48
 
 #define PFERR_PRESENT_MASK BIT(PFERR_PRESENT_BIT)
@@ -275,6 +276,7 @@ enum x86_intercept_stage;
 #define PFERR_SGX_MASK BIT(PFERR_SGX_BIT)
 #define PFERR_GUEST_FINAL_MASK BIT_ULL(PFERR_GUEST_FINAL_BIT)
 #define PFERR_GUEST_PAGE_MASK BIT_ULL(PFERR_GUEST_PAGE_BIT)
+#define PFERR_GUEST_ENC_MASK BIT_ULL(PFERR_GUEST_ENC_BIT)
 #define PFERR_IMPLICIT_ACCESS BIT_ULL(PFERR_IMPLICIT_ACCESS_BIT)
 
 #define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK | \
@@ -1836,6 +1838,7 @@ struct kvm_arch_async_pf {
 	gfn_t gfn;
 	unsigned long cr3;
 	bool direct_map;
+	u64 error_code;
 };
 
 extern u32 __read_mostly kvm_nr_uret_msrs;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ccdbff3d85ec..61674d6b17aa 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4246,18 +4246,19 @@ static u32 alloc_apf_token(struct kvm_vcpu *vcpu)
 	return (vcpu->arch.apf.id++ << 12) | vcpu->vcpu_id;
 }
 
-static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
-				    gfn_t gfn)
+static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu,
+				    struct kvm_page_fault *fault)
 {
 	struct kvm_arch_async_pf arch;
 
 	arch.token = alloc_apf_token(vcpu);
-	arch.gfn = gfn;
+	arch.gfn = fault->gfn;
 	arch.direct_map = vcpu->arch.mmu->root_role.direct;
 	arch.cr3 = kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu);
+	arch.error_code = fault->error_code & PFERR_GUEST_ENC_MASK;
 
-	return kvm_setup_async_pf(vcpu, cr2_or_gpa,
-				  kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch);
+	return kvm_setup_async_pf(vcpu, fault->addr,
+				  kvm_vcpu_gfn_to_hva(vcpu, fault->gfn), &arch);
 }
 
 void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
@@ -4276,7 +4277,8 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 	    work->arch.cr3 != kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu))
 		return;
 
-	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true, NULL);
+	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, work->arch.error_code,
+			      true, NULL);
 }
 
 static inline u8 kvm_max_level_for_order(int order)
@@ -4390,7 +4392,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		trace_kvm_async_pf_repeated_fault(fault->addr, fault->gfn);
 		kvm_make_request(KVM_REQ_APF_HALT, vcpu);
 		return RET_PF_RETRY;
-	} else if (kvm_arch_setup_async_pf(vcpu, fault->addr, fault->gfn)) {
+	} else if (kvm_arch_setup_async_pf(vcpu, fault)) {
 		return RET_PF_RETRY;
 	}
 }
@@ -5814,6 +5816,14 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 	if (WARN_ON_ONCE(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
 		return RET_PF_RETRY;
 
+	/*
+	 * This is racy with updating memory attributes with mmu_seq. If we
+	 * hit a race, it would result in retrying page fault.
+	 */
+	if (vcpu->kvm->arch.vm_type == KVM_X86_SW_PROTECTED_VM &&
+	    kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(cr2_or_gpa)))
+		error_code |= PFERR_GUEST_ENC_MASK;
+
 	r = RET_PF_INVALID;
 	if (unlikely(error_code & PFERR_RSVD_MASK)) {
 		r = handle_mmio_page_fault(vcpu, cr2_or_gpa, direct);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 21f55e8b4dc6..0443bfcf5d9c 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -292,13 +292,13 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		.user = err & PFERR_USER_MASK,
 		.prefetch = prefetch,
 		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
+		.is_private = err & PFERR_GUEST_ENC_MASK,
 		.nx_huge_page_workaround_enabled =
 			is_nx_huge_page_enabled(vcpu->kvm),
 
 		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
 		.req_level = PG_LEVEL_4K,
 		.goal_level = PG_LEVEL_4K,
-		.is_private = kvm_mem_is_private(vcpu->kvm, cr2_or_gpa >> PAGE_SHIFT),
 	};
 	int r;
 
-- 
2.25.1
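For completeness, here is a second standalone sketch (again userspace C,
not kernel code; the struct and helpers below are hypothetical stand-ins
for kvm_arch_setup_async_pf(), kvm_arch_async_page_ready(), and the new
error_code field of struct kvm_arch_async_pf) showing the design choice in
the async-page-fault hunks: only PFERR_GUEST_ENC_MASK is preserved when
the fault is queued, so the replayed fault is still classified as private
or shared, while every other error-code bit is dropped, consistent with
the old behavior of replaying with an error code of 0.

/*
 * Standalone illustration of the async-#PF carry added by this patch
 * (userspace C, not kernel code; names are hypothetical stand-ins).
 */
#include <stdint.h>
#include <stdio.h>

#define BIT_ULL(n)              (1ULL << (n))
#define PFERR_GUEST_ENC_MASK    BIT_ULL(34)

/* Stand-in for the new u64 error_code field in struct kvm_arch_async_pf. */
struct async_pf_model {
        uint64_t error_code;
};

/* Models kvm_arch_setup_async_pf(): stash only the ENC bit. */
static void setup_async_pf(struct async_pf_model *apf, uint64_t fault_error_code)
{
        apf->error_code = fault_error_code & PFERR_GUEST_ENC_MASK;
}

/*
 * Models kvm_arch_async_page_ready(): replay with the carried bit
 * (previously the replay hard-coded an error code of 0).
 */
static uint64_t replay_error_code(const struct async_pf_model *apf)
{
        return apf->error_code;
}

int main(void)
{
        struct async_pf_model apf;

        setup_async_pf(&apf, 0x7 | PFERR_GUEST_ENC_MASK);       /* hypothetical fault */
        printf("replayed error code: %#llx\n",
               (unsigned long long)replay_error_code(&apf));
        return 0;
}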