From: Chenyi Qiang
To: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Xiaoyao Li
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC v3 3/7] KVM: MMU: Rename the pkru to pkr
Date: Thu, 5 Nov 2020 16:18:00 +0800
Message-Id:
<20201105081805.5674-4-chenyi.qiang@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201105081805.5674-1-chenyi.qiang@intel.com>
References: <20201105081805.5674-1-chenyi.qiang@intel.com>

PKRU represents the PKU register utilized in the protection key rights
check for user pages. Protection Keys for Supervisor Pages (PKS) extends
the protection key architecture to cover supervisor pages.

Rename the *pkru* related variables and functions to *pkr*, which stands
for both PKRU and PKRS. This makes sense because both registers have the
same format, and PKS and PKU can also share the same bitmap to cache the
conditions where protection key checks are needed.

Signed-off-by: Chenyi Qiang
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu.h              | 12 ++++++------
 arch/x86/kvm/mmu/mmu.c          | 18 +++++++++---------
 3 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d44858b69353..7567952febd9 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -382,7 +382,7 @@ struct kvm_mmu {
	 * with PFEC.RSVD replaced by ACC_USER_MASK from the page tables.
	 * Each domain has 2 bits which are ANDed with AD and WD from PKRU.
	 */
-	u32 pkru_mask;
+	u32 pkr_mask;
 
 	u64 *pae_root;
 	u64 *lm_root;
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 9c4a9c8e43d9..a77bd20c83f9 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -190,8 +190,8 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
	u32 errcode = PFERR_PRESENT_MASK;
 
	WARN_ON(pfec & (PFERR_PK_MASK | PFERR_RSVD_MASK));
-	if (unlikely(mmu->pkru_mask)) {
-		u32 pkru_bits, offset;
+	if (unlikely(mmu->pkr_mask)) {
+		u32 pkr_bits, offset;
 
		/*
		 * PKRU defines 32 bits, there are 16 domains and 2
@@ -199,15 +199,15 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
		 * index of the protection domain, so pte_pkey * 2 is
		 * is the index of the first bit for the domain.
		 */
-		pkru_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;
+		pkr_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;
 
		/* clear present bit, replace PFEC.RSVD with ACC_USER_MASK. */
		offset = (pfec & ~1) +
			((pte_access & PT_USER_MASK) << (PFERR_RSVD_BIT - PT_USER_SHIFT));
 
-		pkru_bits &= mmu->pkru_mask >> offset;
-		errcode |= -pkru_bits & PFERR_PK_MASK;
-		fault |= (pkru_bits != 0);
+		pkr_bits &= mmu->pkr_mask >> offset;
+		errcode |= -pkr_bits & PFERR_PK_MASK;
+		fault |= (pkr_bits != 0);
	}
 
	return -(u32)fault & errcode;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1f96adff8dc4..d22c0813e4b9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4301,20 +4301,20 @@ static void update_permission_bitmask(struct kvm_vcpu *vcpu,
 * away both AD and WD. For all reads or if the last condition holds, WD
 * only will be masked away.
 */
-static void update_pkru_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+static void update_pkr_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
				bool ept)
{
	unsigned bit;
	bool wp;
 
	if (ept) {
-		mmu->pkru_mask = 0;
+		mmu->pkr_mask = 0;
		return;
	}
 
	/* PKEY is enabled only if CR4.PKE and EFER.LMA are both set. */
	if (!kvm_read_cr4_bits(vcpu, X86_CR4_PKE) || !is_long_mode(vcpu)) {
-		mmu->pkru_mask = 0;
+		mmu->pkr_mask = 0;
		return;
	}
 
@@ -4348,7 +4348,7 @@ static void update_pkru_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
		/* PKRU.WD stops write access. */
		pkey_bits |= (!!check_write) << 1;
 
-		mmu->pkru_mask |= (pkey_bits & 3) << pfec;
+		mmu->pkr_mask |= (pkey_bits & 3) << pfec;
	}
}
 
@@ -4370,7 +4370,7 @@ static void paging64_init_context_common(struct kvm_vcpu *vcpu,
	reset_rsvds_bits_mask(vcpu, context);
	update_permission_bitmask(vcpu, context, false);
-	update_pkru_bitmask(vcpu, context, false);
+	update_pkr_bitmask(vcpu, context, false);
	update_last_nonleaf_level(vcpu, context);
 
	MMU_WARN_ON(!is_pae(vcpu));
@@ -4400,7 +4400,7 @@ static void paging32_init_context(struct kvm_vcpu *vcpu,
	reset_rsvds_bits_mask(vcpu, context);
	update_permission_bitmask(vcpu, context, false);
-	update_pkru_bitmask(vcpu, context, false);
+	update_pkr_bitmask(vcpu, context, false);
	update_last_nonleaf_level(vcpu, context);
 
	context->page_fault = paging32_page_fault;
@@ -4519,7 +4519,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
	}
 
	update_permission_bitmask(vcpu, context, false);
-	update_pkru_bitmask(vcpu, context, false);
+	update_pkr_bitmask(vcpu, context, false);
	update_last_nonleaf_level(vcpu, context);
	reset_tdp_shadow_zero_bits_mask(vcpu, context);
}
@@ -4667,7 +4667,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
	context->mmu_role.as_u64 = new_role.as_u64;
 
	update_permission_bitmask(vcpu, context, true);
-	update_pkru_bitmask(vcpu, context, true);
+	update_pkr_bitmask(vcpu, context, true);
	update_last_nonleaf_level(vcpu, context);
	reset_rsvds_bits_mask_ept(vcpu, context, execonly);
	reset_ept_shadow_zero_bits_mask(vcpu, context, execonly);
@@ -4738,7 +4738,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
	}
 
	update_permission_bitmask(vcpu, g_context, false);
-	update_pkru_bitmask(vcpu, g_context, false);
+	update_pkr_bitmask(vcpu, g_context, false);
	update_last_nonleaf_level(vcpu, g_context);
}
-- 
2.17.1