From: Sean Christopherson <seanjc@google.com>
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Yu Zhang, Maxim Levitsky
Subject: [PATCH 20/54] KVM: x86/mmu: Add struct and helpers to retrieve MMU role bits from regs
Date: Tue, 22 Jun 2021 10:57:05 -0700
Message-Id: <20210622175739.3610207-21-seanjc@google.com>
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>

Introduce "struct kvm_mmu_role_regs" to hold the register state that is
incorporated into the mmu_role.  For nested TDP, the register state that
is factored into the MMU isn't vCPU state; the dedicated struct will be
used to propagate the correct state throughout the flows without having
to pass multiple params, and also provides helpers for the various flag
accessors.

Intentionally make the new helpers cumbersome/ugly by prepending four
underscores.  In the not-too-distant future, it will be preferable to
use the mmu_role to query bits, as the mmu_role can drop irrelevant bits
without creating contradictions, e.g. clearing CR4 bits when CR0.PG=0.
Reserve the clean helper names (no underscores) for the mmu_role.

Add a helper for vCPU conversion, which is the common case.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 66 +++++++++++++++++++++++++++++++++---------
 1 file changed, 53 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5e3ee4aba2ff..3616c3b7618e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -176,9 +176,46 @@ static void mmu_spte_set(u64 *sptep, u64 spte);
 static union kvm_mmu_page_role
 kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu);
 
+struct kvm_mmu_role_regs {
+	const unsigned long cr0;
+	const unsigned long cr4;
+	const u64 efer;
+};
+
 #define CREATE_TRACE_POINTS
 #include "mmutrace.h"
 
+/*
+ * Yes, lots of underscores.  They're a hint that you probably shouldn't be
+ * reading from the role_regs.  Once the mmu_role is constructed, it becomes
+ * the single source of truth for the MMU's state.
+ */
+#define BUILD_MMU_ROLE_REGS_ACCESSOR(reg, name, flag)			\
+static inline bool ____is_##reg##_##name(struct kvm_mmu_role_regs *regs)\
+{									\
+	return !!(regs->reg & flag);					\
+}
+BUILD_MMU_ROLE_REGS_ACCESSOR(cr0, pg, X86_CR0_PG);
+BUILD_MMU_ROLE_REGS_ACCESSOR(cr0, wp, X86_CR0_WP);
+BUILD_MMU_ROLE_REGS_ACCESSOR(cr4, pse, X86_CR4_PSE);
+BUILD_MMU_ROLE_REGS_ACCESSOR(cr4, pae, X86_CR4_PAE);
+BUILD_MMU_ROLE_REGS_ACCESSOR(cr4, smep, X86_CR4_SMEP);
+BUILD_MMU_ROLE_REGS_ACCESSOR(cr4, smap, X86_CR4_SMAP);
+BUILD_MMU_ROLE_REGS_ACCESSOR(cr4, pke, X86_CR4_PKE);
+BUILD_MMU_ROLE_REGS_ACCESSOR(cr4, la57, X86_CR4_LA57);
+BUILD_MMU_ROLE_REGS_ACCESSOR(efer, nx, EFER_NX);
+BUILD_MMU_ROLE_REGS_ACCESSOR(efer, lma, EFER_LMA);
+
+struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mmu_role_regs regs = {
+		.cr0 = kvm_read_cr0_bits(vcpu, KVM_MMU_CR0_ROLE_BITS),
+		.cr4 = kvm_read_cr4_bits(vcpu, KVM_MMU_CR4_ROLE_BITS),
+		.efer = vcpu->arch.efer,
+	};
+
+	return regs;
+}
 
 static inline bool kvm_available_flush_tlb_with_range(void)
 {
@@ -4654,14 +4691,14 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, bool base_only)
 }
 
 static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
-				    unsigned long cr0, unsigned long cr4,
-				    u64 efer, union kvm_mmu_role new_role)
+				    struct kvm_mmu_role_regs *regs,
+				    union kvm_mmu_role new_role)
 {
-	if (!(cr0 & X86_CR0_PG))
+	if (!____is_cr0_pg(regs))
 		nonpaging_init_context(vcpu, context);
-	else if (efer & EFER_LMA)
+	else if (____is_efer_lma(regs))
 		paging64_init_context(vcpu, context);
-	else if (cr4 & X86_CR4_PAE)
+	else if (____is_cr4_pae(regs))
 		paging32E_init_context(vcpu, context);
 	else
 		paging32_init_context(vcpu, context);
@@ -4672,15 +4709,15 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
 	reset_shadow_zero_bits_mask(vcpu, context);
 }
 
-static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
-				unsigned long cr4, u64 efer)
+static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
+				struct kvm_mmu_role_regs *regs)
 {
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;
 	union kvm_mmu_role new_role =
 		kvm_calc_shadow_mmu_root_page_role(vcpu, false);
 
 	if (new_role.as_u64 != context->mmu_role.as_u64)
-		shadow_mmu_init_context(vcpu, context, cr0, cr4, efer, new_role);
+		shadow_mmu_init_context(vcpu, context, regs, new_role);
 }
 
 static union kvm_mmu_role
@@ -4699,12 +4736,17 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
 			     unsigned long cr4, u64 efer, gpa_t nested_cr3)
 {
 	struct kvm_mmu *context = &vcpu->arch.guest_mmu;
+	struct kvm_mmu_role_regs regs = {
+		.cr0 = cr0,
+		.cr4 = cr4,
+		.efer = efer,
+	};
 	union kvm_mmu_role new_role = kvm_calc_shadow_npt_root_page_role(vcpu);
 
 	__kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base);
 
 	if (new_role.as_u64 != context->mmu_role.as_u64)
-		shadow_mmu_init_context(vcpu, context, cr0, cr4, efer, new_role);
+		shadow_mmu_init_context(vcpu, context, &regs, new_role);
 
 	/*
 	 * Redo the shadow bits, the reset done by shadow_mmu_init_context()
@@ -4773,11 +4815,9 @@ EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);
 static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;
+	struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu);
 
-	kvm_init_shadow_mmu(vcpu,
-			    kvm_read_cr0_bits(vcpu, KVM_MMU_CR0_ROLE_BITS),
-			    kvm_read_cr4_bits(vcpu, KVM_MMU_CR4_ROLE_BITS),
-			    vcpu->arch.efer);
+	kvm_init_shadow_mmu(vcpu, &regs);
 
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
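As an aside for reviewers unfamiliar with the token-pasting pattern behind
BUILD_MMU_ROLE_REGS_ACCESSOR: the sketch below is a minimal, self-contained
userspace rendering of the same idea.  The simplified flag values and the
main() harness are stand-ins invented purely for illustration; only the
macro itself mirrors what the patch adds.

/* Illustration only -- simplified bit values, not the kernel's headers. */
#include <stdbool.h>
#include <stdio.h>

#define X86_CR0_PG (1ul << 31)   /* paging enable (simplified) */
#define EFER_LMA   (1ull << 10)  /* long mode active (simplified) */

struct kvm_mmu_role_regs {
	unsigned long cr0;
	unsigned long cr4;
	unsigned long long efer;
};

/*
 * The ## operator pastes tokens, so the (cr0, pg, X86_CR0_PG) invocation
 * expands to a function named ____is_cr0_pg() that tests that one bit.
 */
#define BUILD_MMU_ROLE_REGS_ACCESSOR(reg, name, flag)			\
static inline bool ____is_##reg##_##name(struct kvm_mmu_role_regs *regs)\
{									\
	return !!(regs->reg & flag);					\
}
BUILD_MMU_ROLE_REGS_ACCESSOR(cr0, pg, X86_CR0_PG);
BUILD_MMU_ROLE_REGS_ACCESSOR(efer, lma, EFER_LMA);

int main(void)
{
	struct kvm_mmu_role_regs regs = { .cr0 = X86_CR0_PG };

	/* Prints "pg=1 lma=0": PG is set in cr0, LMA is clear in efer. */
	printf("pg=%d lma=%d\n",
	       ____is_cr0_pg(&regs), ____is_efer_lma(&regs));
	return 0;
}

The generated names read like the bits they test while hiding the struct
access behind one macro, which is what lets the patch define ten accessors
in ten one-line invocations.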
-- 
2.32.0.288.g62a8d224e6-goog