From: Chenyi Qiang
To: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Xiaoyao Li
Cc: Chenyi Qiang, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v6 4/7] KVM: MMU: Rename the pkru to pkr
Date: Mon, 21 Feb 2022 16:08:37 +0800
Message-Id: <20220221080840.7369-5-chenyi.qiang@intel.com>
In-Reply-To: <20220221080840.7369-1-chenyi.qiang@intel.com>
References: <20220221080840.7369-1-chenyi.qiang@intel.com>

PKRU represents the PKU register used in the protection key rights
check for user pages. Protection Keys for Supervisor Pages (PKS)
extends the protection key architecture to cover supervisor pages.

Rename the *pkru* related variables and functions to *pkr*, which
stands for both PKRU and PKRS. The rename makes sense because PKS and
PKU each have:
- a single control register (PKRS and PKRU, respectively)
- the same number of keys (16 in total)
- the same control register format (Access Disable and Write Disable
  bits per key)

PKS and PKU can also share the same bitmap, pkr_mask, to cache the
conditions under which protection key checks are needed, because the
requirements for a protection key restriction to cause a fault are
almost identical; they differ only in which pages they apply to
(supervisor vs. user pages).
Reviewed-by: Paolo Bonzini
Signed-off-by: Chenyi Qiang
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu.h              | 12 ++++++------
 arch/x86/kvm/mmu/mmu.c          | 10 +++++-----
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 75940aeb5f67..c37cd23b6764 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -455,7 +455,7 @@ struct kvm_mmu {
 	 * with PFEC.RSVD replaced by ACC_USER_MASK from the page tables.
 	 * Each domain has 2 bits which are ANDed with AD and WD from PKRU.
 	 */
-	u32 pkru_mask;
+	u32 pkr_mask;
 
 	u64 *pae_root;
 	u64 *pml4_root;
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index e9fbb2c8bbe2..9e216d205c8d 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -275,8 +275,8 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	u32 errcode = PFERR_PRESENT_MASK;
 
 	WARN_ON(pfec & (PFERR_PK_MASK | PFERR_RSVD_MASK));
-	if (unlikely(mmu->pkru_mask)) {
-		u32 pkru_bits, offset;
+	if (unlikely(mmu->pkr_mask)) {
+		u32 pkr_bits, offset;
 
 		/*
 		 * PKRU defines 32 bits, there are 16 domains and 2
@@ -284,15 +284,15 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 		 * index of the protection domain, so pte_pkey * 2 is
 		 * is the index of the first bit for the domain.
 		 */
-		pkru_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;
+		pkr_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;
 
 		/* clear present bit, replace PFEC.RSVD with ACC_USER_MASK. */
 		offset = (pfec & ~1) +
 			((pte_access & PT_USER_MASK) << (PFERR_RSVD_BIT - PT_USER_SHIFT));
 
-		pkru_bits &= mmu->pkru_mask >> offset;
-		errcode |= -pkru_bits & PFERR_PK_MASK;
-		fault |= (pkru_bits != 0);
+		pkr_bits &= mmu->pkr_mask >> offset;
+		errcode |= -pkr_bits & PFERR_PK_MASK;
+		fault |= (pkr_bits != 0);
 	}
 
 	return -(u32)fault & errcode;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 593093b52395..83597161b5f9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4620,12 +4620,12 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
  * away both AD and WD. For all reads or if the last condition holds, WD
  * only will be masked away.
  */
-static void update_pkru_bitmask(struct kvm_mmu *mmu)
+static void update_pkr_bitmask(struct kvm_mmu *mmu)
 {
 	unsigned bit;
 	bool wp;
 
-	mmu->pkru_mask = 0;
+	mmu->pkr_mask = 0;
 
 	if (!is_cr4_pke(mmu))
 		return;
@@ -4660,7 +4660,7 @@ static void update_pkru_bitmask(struct kvm_mmu *mmu)
 		/* PKRU.WD stops write access. */
 		pkey_bits |= (!!check_write) << 1;
 
-		mmu->pkru_mask |= (pkey_bits & 3) << pfec;
+		mmu->pkr_mask |= (pkey_bits & 3) << pfec;
 	}
 }
 
@@ -4672,7 +4672,7 @@ static void reset_guest_paging_metadata(struct kvm_vcpu *vcpu,
 
 	reset_rsvds_bits_mask(vcpu, mmu);
 	update_permission_bitmask(mmu, false);
-	update_pkru_bitmask(mmu);
+	update_pkr_bitmask(mmu);
 }
 
 static void paging64_init_context(struct kvm_mmu *context)
@@ -4946,7 +4946,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 	context->direct_map = false;
 
 	update_permission_bitmask(context, true);
-	context->pkru_mask = 0;
+	context->pkr_mask = 0;
 	reset_rsvds_bits_mask_ept(vcpu, context, execonly, huge_page_level);
 	reset_ept_shadow_zero_bits_mask(vcpu, context, execonly);
 }
-- 
2.17.1