Subject: Re: [PATCH v4 4/7] KVM: arm64: Support PUD hugepage in stage2_is_exec()
From: Suzuki K Poulose
To: Punit Agrawal, kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org, marc.zyngier@arm.com,
 christoffer.dall@arm.com, linux-kernel@vger.kernel.org,
 Russell King, Catalin Marinas, Will Deacon
Date: Thu, 5 Jul 2018 17:48:37 +0100
Message-ID: <442d0f4b-cb23-9788-1ebd-b14c89c52c45@arm.com>
In-Reply-To: <20180705140850.5801-5-punit.agrawal@arm.com>
References: <20180705140850.5801-1-punit.agrawal@arm.com>
 <20180705140850.5801-5-punit.agrawal@arm.com>
Hi Punit,

On 05/07/18 15:08, Punit Agrawal wrote:
> In preparation for creating PUD hugepages at stage 2, add support for
> detecting execute permissions on PUD page table entries. Faults due to
> lack of execute permissions on page table entries is used to perform
> i-cache invalidation on first execute.
>
> Provide trivial implementations of arm32 helpers to allow sharing of
> code.
>
> Signed-off-by: Punit Agrawal
> Cc: Christoffer Dall
> Cc: Marc Zyngier
> Cc: Russell King
> Cc: Catalin Marinas
> Cc: Will Deacon
> ---
>  arch/arm/include/asm/kvm_mmu.h         |  6 ++++++
>  arch/arm64/include/asm/kvm_mmu.h       |  5 +++++
>  arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
>  virt/kvm/arm/mmu.c                     | 10 +++++++++-
>  4 files changed, 22 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index c23722f75d5c..d05c8986e495 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -96,6 +96,12 @@ static inline bool kvm_s2pud_readonly(pud_t *pud)
>  }
>
>
> +static inline bool kvm_s2pud_exec(pud_t *pud)
> +{
> +	BUG();
> +	return false;
> +}
> +
>  static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
>  {
>  	*pmd = new_pmd;
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 84051930ddfe..15bc1be8f82f 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -249,6 +249,11 @@ static inline bool kvm_s2pud_readonly(pud_t *pudp)
>  	return kvm_s2pte_readonly((pte_t *)pudp);
>  }
>
> +static inline bool kvm_s2pud_exec(pud_t *pudp)
> +{
> +	return !(READ_ONCE(pud_val(*pudp)) & PUD_S2_XN);
> +}
> +
>  static inline bool kvm_page_empty(void *ptr)
>  {
>  	struct page *ptr_page = virt_to_page(ptr);
> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> index fd208eac9f2a..10ae592b78b8 100644
> --- a/arch/arm64/include/asm/pgtable-hwdef.h
> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> @@ -193,6 +193,8 @@
>  #define PMD_S2_RDWR	(_AT(pmdval_t, 3) << 6)		/* HAP[2:1] */
>  #define PMD_S2_XN	(_AT(pmdval_t, 2) << 53)	/* XN[1:0] */
>
> +#define PUD_S2_XN	(_AT(pudval_t, 2) << 53)	/* XN[1:0] */
> +

The changes above look good to me. Please see below.

>  /*
>   * Memory Attribute override for Stage-2 (MemAttr[3:0])
>   */
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index db04b18218c1..ccdea0edabb3 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1040,10 +1040,18 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
>
>  static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
>  {
> +	pud_t *pudp;
>  	pmd_t *pmdp;
>  	pte_t *ptep;
>
> -	pmdp = stage2_get_pmd(kvm, NULL, addr);
> +	pudp = stage2_get_pud(kvm, NULL, addr);
> +	if (!pudp || pud_none(*pudp) || !pud_present(*pudp))
> +		return false;
> +
> +	if (pud_huge(*pudp))
> +		return kvm_s2pud_exec(pudp);
> +
> +	pmdp = stage2_pmd_offset(pudp, addr);
>  	if (!pmdp || pmd_none(*pmdp) || !pmd_present(*pmdp))
>  		return false;

I am wondering if we need a slightly better way to deal with this kind
of operation. We seem to duplicate the above operation (here and in the
following patches), i.e., finding the "leaf entry" for a given address
and following the checks one level at a time.

So instead of doing stage2_get_pud() and walking down everywhere this is
needed, how about adding:

  /* Returns true if the leaf entry is found and updates the relevant pointer */
  found = stage2_get_leaf_entry(kvm, NULL, addr, &pudp, &pmdp, &ptep);

which could set the appropriate entry, and we could check the result
here.

Cheers
Suzuki
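For illustration only, the walk such a helper would perform can be
sketched as a standalone C model. The types and field names below are
simplified stand-ins, not the kernel API: the real helper would take
(kvm, cache, addr), start from stage2_get_pud(), and descend via
stage2_pmd_offset() and the pte offset helpers exactly as
stage2_is_exec() does in the patch above.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel's pte_t/pmd_t/pud_t. */
typedef struct { bool valid; } pte_t;
typedef struct { bool valid; bool huge; pte_t *table; } pmd_t;
typedef struct { bool valid; bool huge; pmd_t *table; } pud_t;

/*
 * Find the leaf entry mapping an address, starting from the PUD entry.
 * Exactly one of *pudpp/*pmdpp/*ptepp is set on success; returns false
 * when no valid mapping exists at any level.
 */
static bool stage2_get_leaf_entry(pud_t *pudp,
				  pud_t **pudpp, pmd_t **pmdpp,
				  pte_t **ptepp)
{
	*pudpp = NULL;
	*pmdpp = NULL;
	*ptepp = NULL;

	if (!pudp || !pudp->valid)	/* pud_none()/!pud_present() */
		return false;
	if (pudp->huge) {		/* PUD hugepage: pud is the leaf */
		*pudpp = pudp;
		return true;
	}

	pmd_t *pmdp = pudp->table;	/* stage2_pmd_offset(pudp, addr) */
	if (!pmdp || !pmdp->valid)
		return false;
	if (pmdp->huge) {		/* PMD hugepage: pmd is the leaf */
		*pmdpp = pmdp;
		return true;
	}

	pte_t *ptep = pmdp->table;	/* pte offset for addr */
	if (!ptep || !ptep->valid)
		return false;
	*ptepp = ptep;
	return true;
}
```

With something along these lines, stage2_is_exec() reduces to one call
plus a per-level permission check (kvm_s2pud_exec()/kvm_s2pmd_exec()/
kvm_s2pte_exec()) on whichever pointer comes back non-NULL, and the
other walkers added later in the series could share the same lookup.
The helper name and exact signature here are only illustrative.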