Subject: Re: [RFC PATCH v5 047/104] KVM: x86/mmu: add a private pointer to struct kvm_mmu_page
From: Paolo Bonzini
Date: Tue, 5 Apr 2022 16:58:56 +0200
To: isaku.yamahata@intel.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@gmail.com, Jim Mattson, erdemaktas@google.com, Connor Kuehl, Sean Christopherson
In-Reply-To: <499d1fd01b0d1d9a8b46a55bb863afd0c76f1111.1646422845.git.isaku.yamahata@intel.com>

On 3/4/22 20:49, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> Add a private pointer to kvm_mmu_page for private EPT.
>
> To resolve a KVM page fault on a private GPA, an additional page is
> allocated for the Secure EPT besides the private EPT page.  Add a memory
> allocator for it, and top up that allocator before resolving a KVM page
> fault, just as is done for shared EPT pages.  For the TDP MMU, the
> allocation is done by alloc_tdp_mmu_page() and the memory is freed by
> kvm_tdp_mmu_zap_all(), called from kvm_mmu_zap_all().  Since a private
> EPT page needs to carry one more page used for the Secure EPT, add a
> private pointer to struct kvm_mmu_page for that purpose, along with
> helper functions to allocate/free a page for the Secure EPT and to
> check whether a given kvm_mmu_page is private.
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  1 +
>  arch/x86/kvm/mmu/mmu.c          |  9 ++++
>  arch/x86/kvm/mmu/mmu_internal.h | 84 +++++++++++++++++++++++++++++++++
>  arch/x86/kvm/mmu/tdp_mmu.c      |  3 ++
>  4 files changed, 97 insertions(+)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index fcab2337819c..0c8cc7d73371 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -689,6 +689,7 @@ struct kvm_vcpu_arch {
>  	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
>  	struct kvm_mmu_memory_cache mmu_gfn_array_cache;
>  	struct kvm_mmu_memory_cache mmu_page_header_cache;
> +	struct kvm_mmu_memory_cache mmu_private_sp_cache;
>
>  	/*
>  	 * QEMU userspace and the guest each have their own FPU state.
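(Illustration only, not part of the patch: mmu_private_sp_cache follows the
kernel's usual kvm_mmu_memory_cache topup-then-allocate pattern -- fill the
cache in a context where allocation may sleep, then take objects infallibly
while resolving the fault.  A minimal standalone userspace sketch of that
pattern follows; every name in it is invented for the sketch.)

#include <stdio.h>
#include <stdlib.h>

#define CACHE_CAPACITY 5		/* stand-in for PT64_ROOT_MAX_LEVEL */

struct memory_cache {
	int nobjs;
	void *objects[CACHE_CAPACITY];
};

/* Fill the cache up to @min objects; allocation may sleep/fail here. */
static int cache_topup(struct memory_cache *mc, int min)
{
	while (mc->nobjs < min) {
		void *obj = calloc(1, 4096);
		if (!obj)
			return -1;	/* -ENOMEM in the kernel */
		mc->objects[mc->nobjs++] = obj;
	}
	return 0;
}

/* Take one pre-allocated object; cannot fail after a successful topup. */
static void *cache_alloc(struct memory_cache *mc)
{
	if (mc->nobjs == 0)
		return NULL;
	return mc->objects[--mc->nobjs];
}

static void cache_free_all(struct memory_cache *mc)
{
	while (mc->nobjs)
		free(mc->objects[--mc->nobjs]);
}

int main(void)
{
	struct memory_cache private_sp_cache = { 0 };

	if (cache_topup(&private_sp_cache, CACHE_CAPACITY))
		return 1;
	/* "Fault path": taking a page cannot fail now. */
	void *sept_page = cache_alloc(&private_sp_cache);
	printf("pre-allocated page at %p\n", sept_page);
	free(sept_page);
	cache_free_all(&private_sp_cache);
	return 0;
}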
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6e9847b1124b..8def8b97978f 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -758,6 +758,13 @@ static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
>  	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
>  	int start, end, i, r;
>
> +	if (kvm_gfn_stolen_mask(vcpu->kvm)) {
> +		r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_private_sp_cache,
> +					       PT64_ROOT_MAX_LEVEL);
> +		if (r)
> +			return r;
> +	}
> +
>  	if (shadow_init_value)
>  		start = kvm_mmu_memory_cache_nr_free_objects(mc);
>
> @@ -799,6 +806,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
>  {
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
> +	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_private_sp_cache);
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_gfn_array_cache);
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
>  }
> @@ -1791,6 +1799,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
>  	if (!direct)
>  		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
>  	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
> +	kvm_mmu_init_private_sp(sp);
>
>  	/*
>  	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index da6166b5c377..80f7a74a71dc 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -53,6 +53,10 @@ struct kvm_mmu_page {
>  	u64 *spt;
>  	/* hold the gfn of each spte inside spt */
>  	gfn_t *gfns;
> +#ifdef CONFIG_KVM_MMU_PRIVATE
> +	/* associated private shadow page, e.g. SEPT page */
> +	void *private_sp;
> +#endif
>  	/* Currently serving as active root */
>  	union {
>  		int root_count;
> @@ -104,6 +108,86 @@ static inline int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
>  	return kvm_mmu_role_as_id(sp->role);
>  }
>
> +/*
> + * The TDX vcpu allocates the page for the root Secure EPT page and assigns
> + * it to the CPU's Secure EPT pointer; KVM doesn't need to allocate and link
> + * to that Secure EPT page.  Dummy value to make is_private_sp() return true.
> + */
> +#define KVM_MMU_PRIVATE_SP_ROOT ((void *)1)
> +
> +#ifdef CONFIG_KVM_MMU_PRIVATE
> +static inline bool is_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return !!sp->private_sp;
> +}
> +
> +static inline bool is_private_spte(u64 *sptep)
> +{
> +	return is_private_sp(sptep_to_sp(sptep));
> +}
> +
> +static inline void *kvm_mmu_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return sp->private_sp;
> +}
> +
> +static inline void kvm_mmu_init_private_sp(struct kvm_mmu_page *sp)
> +{
> +	sp->private_sp = NULL;
> +}
> +
> +/* Valid sp->role.level is required. */
> +static inline void kvm_mmu_alloc_private_sp(struct kvm_vcpu *vcpu,
> +					    struct kvm_mmu_page *sp)
> +{
> +	if (vcpu->arch.mmu->shadow_root_level == sp->role.level)
> +		sp->private_sp = KVM_MMU_PRIVATE_SP_ROOT;
> +	else
> +		sp->private_sp =
> +			kvm_mmu_memory_cache_alloc(
> +				&vcpu->arch.mmu_private_sp_cache);
> +	/*
> +	 * Because mmu_private_sp_cache is topped up before starting KVM page
> +	 * fault resolving, the allocation above shouldn't fail.
> +	 */
> +	WARN_ON_ONCE(!sp->private_sp);
> +}
> +
> +static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
> +{
> +	if (sp->private_sp != KVM_MMU_PRIVATE_SP_ROOT)
> +		free_page((unsigned long)sp->private_sp);
> +}
> +#else
> +static inline bool is_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return false;
> +}
> +
> +static inline bool is_private_spte(u64 *sptep)
> +{
> +	return false;
> +}
> +
> +static inline void *kvm_mmu_private_sp(struct kvm_mmu_page *sp)
> +{
> +	return NULL;
> +}
> +
> +static inline void kvm_mmu_init_private_sp(struct kvm_mmu_page *sp)
> +{
> +}
> +
> +static inline void kvm_mmu_alloc_private_sp(struct kvm_vcpu *vcpu,
> +					    struct kvm_mmu_page *sp)
> +{
> +}
> +
> +static inline void kvm_mmu_free_private_sp(struct kvm_mmu_page *sp)
> +{
> +}
> +#endif
> +
>  static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
>  {
>  	/*
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 8db262440d5c..a68f3a22836b 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -59,6 +59,8 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
>
>  static void tdp_mmu_free_sp(struct kvm_mmu_page *sp)
>  {
> +	if (is_private_sp(sp))
> +		kvm_mmu_free_private_sp(sp);
>  	free_page((unsigned long)sp->spt);
>  	kmem_cache_free(mmu_page_header_cache, sp);
>  }
> @@ -184,6 +186,7 @@ static struct kvm_mmu_page *alloc_tdp_mmu_page(struct kvm_vcpu *vcpu, gfn_t gfn,
>  	sp->role.word = page_role_for_level(vcpu, level).word;
>  	sp->gfn = gfn;
>  	sp->tdp_mmu_page = true;
> +	kvm_mmu_init_private_sp(sp);
>
>  	trace_kvm_mmu_get_page(sp, true);

Reviewed-by: Paolo Bonzini
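(Also for illustration only: kvm_mmu_alloc_private_sp() special-cases the
root level because the TDX vcpu itself provides the root Secure EPT page, so
KVM only records the KVM_MMU_PRIVATE_SP_ROOT sentinel and must skip it when
freeing.  A standalone userspace model of that sentinel lifecycle, with
invented names, under the assumption of a 4-level root:)

#include <assert.h>
#include <stdlib.h>

#define PRIVATE_SP_ROOT ((void *)1)	/* mirrors KVM_MMU_PRIVATE_SP_ROOT */

struct mmu_page {
	int level;
	void *private_sp;
};

/* The root page is provided elsewhere: only mark it; otherwise allocate. */
static void alloc_private_sp(struct mmu_page *sp, int root_level)
{
	if (sp->level == root_level)
		sp->private_sp = PRIVATE_SP_ROOT;
	else
		sp->private_sp = calloc(1, 4096);
}

/* Freeing must skip the sentinel: it never came from an allocator. */
static void free_private_sp(struct mmu_page *sp)
{
	if (sp->private_sp != PRIVATE_SP_ROOT)
		free(sp->private_sp);
}

int main(void)
{
	struct mmu_page root = { .level = 4 }, leaf = { .level = 1 };

	alloc_private_sp(&root, 4);
	alloc_private_sp(&leaf, 4);
	assert(root.private_sp == PRIVATE_SP_ROOT);
	assert(leaf.private_sp && leaf.private_sp != PRIVATE_SP_ROOT);
	free_private_sp(&root);		/* no-op: sentinel */
	free_private_sp(&leaf);
	return 0;
}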