Date: Mon, 7 Feb 2022 22:42:14 +0000
From: David Matlack
To: Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, seanjc@google.com, vkuznets@redhat.com
Subject: Re: [PATCH 20/23] KVM: MMU: pull CPU role computation to kvm_init_mmu
References: <20220204115718.14934-1-pbonzini@redhat.com> <20220204115718.14934-21-pbonzini@redhat.com>
In-Reply-To: <20220204115718.14934-21-pbonzini@redhat.com>

On Fri, Feb 04,
2022 at 06:57:15AM -0500, Paolo Bonzini wrote:
> Do not lead init_kvm_*mmu into the temptation of poking
> into struct kvm_mmu_role_regs, by passing to it directly
> the CPU role.
>
> Signed-off-by: Paolo Bonzini
> ---
>  arch/x86/kvm/mmu/mmu.c | 21 +++++++++------------
>  1 file changed, 9 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 01027da82e23..6f9d876ce429 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4721,11 +4721,9 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
>  	return role;
>  }
>
> -static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
> -			     const struct kvm_mmu_role_regs *regs)
> +static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, union kvm_mmu_role cpu_role)
>  {
>  	struct kvm_mmu *context = &vcpu->arch.root_mmu;
> -	union kvm_mmu_role cpu_role = kvm_calc_cpu_role(vcpu, regs);
>  	union kvm_mmu_page_role mmu_role = kvm_calc_tdp_mmu_root_page_role(vcpu, cpu_role);
>
>  	if (cpu_role.as_u64 == context->cpu_role.as_u64 &&
> @@ -4779,10 +4777,9 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
>  }
>
>  static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
> -				const struct kvm_mmu_role_regs *regs)
> +				union kvm_mmu_role cpu_role)
>  {
>  	struct kvm_mmu *context = &vcpu->arch.root_mmu;
> -	union kvm_mmu_role cpu_role = kvm_calc_cpu_role(vcpu, regs);
>  	union kvm_mmu_page_role mmu_role;
>
>  	mmu_role = cpu_role.base;
> @@ -4874,20 +4871,19 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
>  EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);
>
>  static void init_kvm_softmmu(struct kvm_vcpu *vcpu,
> -			     const struct kvm_mmu_role_regs *regs)
> +			     union kvm_mmu_role cpu_role)
>  {
>  	struct kvm_mmu *context = &vcpu->arch.root_mmu;
>
> -	kvm_init_shadow_mmu(vcpu, regs);
> +	kvm_init_shadow_mmu(vcpu, cpu_role);
>
>  	context->get_guest_pgd = get_cr3;
>  	context->get_pdptr = kvm_pdptr_read;
>  	context->inject_page_fault = kvm_inject_page_fault;
>  }
>
> -static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs)
> +static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu, union kvm_mmu_role new_role)
>  {
> -	union kvm_mmu_role new_role = kvm_calc_cpu_role(vcpu, regs);
>  	struct kvm_mmu *g_context = &vcpu->arch.nested_mmu;
>
>  	if (new_role.as_u64 == g_context->cpu_role.as_u64)
> @@ -4928,13 +4924,14 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu, const struct kvm_mmu_role
>  void kvm_init_mmu(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu);
> +	union kvm_mmu_role cpu_role = kvm_calc_cpu_role(vcpu, &regs);

WDYT about also inlining vcpu_to_role_regs() in kvm_calc_cpu_role()?

>
>  	if (mmu_is_nested(vcpu))
> -		init_kvm_nested_mmu(vcpu, &regs);
> +		init_kvm_nested_mmu(vcpu, cpu_role);
>  	else if (tdp_enabled)
> -		init_kvm_tdp_mmu(vcpu, &regs);
> +		init_kvm_tdp_mmu(vcpu, cpu_role);
>  	else
> -		init_kvm_softmmu(vcpu, &regs);
> +		init_kvm_softmmu(vcpu, cpu_role);
>  }
>  EXPORT_SYMBOL_GPL(kvm_init_mmu);
>
> --
> 2.31.1