From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: dmatlack@google.com, seanjc@google.com, vkuznets@redhat.com
Subject: [PATCH 05/23] KVM: MMU: pull computation of kvm_mmu_role_regs to kvm_init_mmu
Date: Fri, 4 Feb 2022 06:57:00 -0500
Message-Id: <20220204115718.14934-6-pbonzini@redhat.com>
In-Reply-To: <20220204115718.14934-1-pbonzini@redhat.com>
References: <20220204115718.14934-1-pbonzini@redhat.com>

The init_kvm_*mmu functions, with the exception of shadow NPT, do not need
to know the full values of CR0/CR4/EFER; they only need to know the bits
that make up the "role".  This cleanup however will take quite a few
incremental steps.  As a start, pull the common computation of the struct
kvm_mmu_role_regs into their caller: all of them extract the struct from
the vcpu as the very first step.
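To make the shape of the change concrete outside the KVM sources, here is a
minimal, self-contained C sketch of the same pattern; the names below (vcpu,
mmu_role_regs, init_tdp_mmu, init_soft_mmu, init_mmu) are simplified stand-ins
for illustration, not the actual KVM definitions.  The idea is only that the
caller takes one snapshot of the register state and hands it to every init
path by const pointer, instead of each path re-deriving it:

/*
 * Hypothetical, simplified analogue of the refactoring, not KVM code:
 * hoist the common snapshot into the single caller.
 */
#include <stdio.h>

struct vcpu {
	unsigned long cr0, cr4, efer;
};

struct mmu_role_regs {
	unsigned long cr0, cr4, efer;
};

/* Snapshot taken once, by the caller. */
static struct mmu_role_regs vcpu_to_role_regs(const struct vcpu *v)
{
	struct mmu_role_regs regs = { v->cr0, v->cr4, v->efer };
	return regs;
}

/* Callees receive the precomputed snapshot by const pointer. */
static void init_tdp_mmu(struct vcpu *v, const struct mmu_role_regs *regs)
{
	printf("tdp mmu: cr0=%#lx\n", regs->cr0);
}

static void init_soft_mmu(struct vcpu *v, const struct mmu_role_regs *regs)
{
	printf("soft mmu: cr4=%#lx\n", regs->cr4);
}

/* The single caller computes the regs once and passes them down. */
static void init_mmu(struct vcpu *v, int tdp_enabled)
{
	struct mmu_role_regs regs = vcpu_to_role_regs(v);

	if (tdp_enabled)
		init_tdp_mmu(v, &regs);
	else
		init_soft_mmu(v, &regs);
}

int main(void)
{
	struct vcpu v = { 0x80050033, 0x370ef8, 0xd01 };

	init_mmu(&v, 1);
	return 0;
}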
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3add9d8b0630..577e70509510 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4736,12 +4736,12 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
 	return role;
 }
 
-static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
+static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
+			     const struct kvm_mmu_role_regs *regs)
 {
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;
-	struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu);
 	union kvm_mmu_role new_role =
-		kvm_calc_tdp_mmu_root_page_role(vcpu, &regs, false);
+		kvm_calc_tdp_mmu_root_page_role(vcpu, regs, false);
 
 	if (new_role.as_u64 == context->mmu_role.as_u64)
 		return;
@@ -4755,7 +4755,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
-	context->root_level = role_regs_to_root_level(&regs);
+	context->root_level = role_regs_to_root_level(regs);
 
 	if (!is_cr0_pg(context))
 		context->gva_to_gpa = nonpaging_gva_to_gpa;
@@ -4803,7 +4803,7 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu,
 }
 
 static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
-				    struct kvm_mmu_role_regs *regs,
+				    const struct kvm_mmu_role_regs *regs,
 				    union kvm_mmu_role new_role)
 {
 	if (new_role.as_u64 == context->mmu_role.as_u64)
@@ -4824,7 +4824,7 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
 }
 
 static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
-				struct kvm_mmu_role_regs *regs)
+				const struct kvm_mmu_role_regs *regs)
 {
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;
 	union kvm_mmu_role new_role =
@@ -4845,7 +4845,7 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
 
 static union kvm_mmu_role
 kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu,
-				   struct kvm_mmu_role_regs *regs)
+				   const struct kvm_mmu_role_regs *regs)
 {
 	union kvm_mmu_role role =
 		kvm_calc_shadow_root_page_role_common(vcpu, regs, false);
@@ -4930,12 +4930,12 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);
 
-static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
+static void init_kvm_softmmu(struct kvm_vcpu *vcpu,
+			     const struct kvm_mmu_role_regs *regs)
 {
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;
-	struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu);
 
-	kvm_init_shadow_mmu(vcpu, &regs);
+	kvm_init_shadow_mmu(vcpu, regs);
 
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
@@ -4959,10 +4959,9 @@ kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *
 	return role;
 }
 
-static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
+static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs)
 {
-	struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu);
-	union kvm_mmu_role new_role = kvm_calc_nested_mmu_role(vcpu, &regs);
+	union kvm_mmu_role new_role = kvm_calc_nested_mmu_role(vcpu, regs);
 	struct kvm_mmu *g_context = &vcpu->arch.nested_mmu;
 
 	if (new_role.as_u64 == g_context->mmu_role.as_u64)
@@ -5002,12 +5001,14 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 
 void kvm_init_mmu(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu);
+
 	if (mmu_is_nested(vcpu))
-		init_kvm_nested_mmu(vcpu);
+		init_kvm_nested_mmu(vcpu, &regs);
 	else if (tdp_enabled)
-		init_kvm_tdp_mmu(vcpu);
+		init_kvm_tdp_mmu(vcpu, &regs);
 	else
-		init_kvm_softmmu(vcpu);
+		init_kvm_softmmu(vcpu, &regs);
 }
 EXPORT_SYMBOL_GPL(kvm_init_mmu);
 
-- 
2.31.1