Date: Fri, 4 May 2018 13:39:04 +0200
From: Christoffer Dall
To: Punit Agrawal
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    marc.zyngier@arm.com, linux-kernel@vger.kernel.org,
    suzuki.poulose@arm.com, Russell King, Catalin Marinas, Will Deacon
Subject: Re: [PATCH v2 4/4] KVM: arm64: Add support for PUD hugepages at stage 2
Message-ID: <20180504113904.GE10191@C02W217FHV2R.local>
References: <20180501102659.13188-1-punit.agrawal@arm.com>
 <20180501102659.13188-5-punit.agrawal@arm.com>
In-Reply-To: <20180501102659.13188-5-punit.agrawal@arm.com>

On Tue, May 01, 2018 at 11:26:59AM +0100, Punit Agrawal wrote:
> KVM currently supports PMD hugepages at stage 2. Extend the stage 2
> fault handling to add support for PUD hugepages.
>
> Addition of PUD hugepage support enables additional hugepage
> sizes (e.g., 1G with a 4K granule), which can be useful on cores that
> support mapping larger block sizes in the TLB entries.
>
> Signed-off-by: Punit Agrawal
> Cc: Christoffer Dall
> Cc: Marc Zyngier
> Cc: Russell King
> Cc: Catalin Marinas
> Cc: Will Deacon

Reviewed-by: Christoffer Dall
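A note for anyone wanting to exercise the new path: stage 2 only
installs a PUD mapping when userspace backs the memslot with a 1G
hugetlb VMA. Below is a minimal, hypothetical userspace sketch (not
part of this patch) showing one way to obtain such a VMA; it assumes
1G hugepages have been reserved on the host, e.g. via hugepagesz=1G
hugepages=1 on the kernel command line.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB	(30 << 26)	/* page-size shift << MAP_HUGE_SHIFT */
#endif

int main(void)
{
	size_t len = 1UL << 30;		/* one 1G hugepage */
	void *mem;

	/* Anonymous hugetlb mapping; vma_kernel_pagesize() is then 1G */
	mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
		   -1, 0);
	if (mem == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * A VMM would register this range as guest memory with
	 * KVM_SET_USER_MEMORY_REGION; a stage 2 fault in the range can
	 * then be satisfied by a single PUD entry.
	 */
	munmap(mem, len);
	return 0;
}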
> ---
>  arch/arm/include/asm/kvm_mmu.h         | 19 ++++++++++++
>  arch/arm64/include/asm/kvm_mmu.h       | 15 ++++++++++
>  arch/arm64/include/asm/pgtable-hwdef.h |  4 +++
>  arch/arm64/include/asm/pgtable.h       |  2 ++
>  virt/kvm/arm/mmu.c                     | 40 ++++++++++++++++++++++++--
>  5 files changed, 77 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index 224c22c0a69c..155916dbdd7e 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -77,8 +77,11 @@ void kvm_clear_hyp_idmap(void);
>
>  #define kvm_pfn_pte(pfn, prot)	pfn_pte(pfn, prot)
>  #define kvm_pfn_pmd(pfn, prot)	pfn_pmd(pfn, prot)
> +#define kvm_pfn_pud(pfn, prot)	(__pud(0))
>
>  #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
> +/* No support for pud hugepages */
> +#define kvm_pud_mkhuge(pud)	(pud)
>
>  /*
>   * The following kvm_*pud*() functionas are provided strictly to allow
> @@ -95,6 +98,22 @@ static inline bool kvm_s2pud_readonly(pud_t *pud)
>  	return false;
>  }
>
> +static inline void kvm_set_pud(pud_t *pud, pud_t new_pud)
> +{
> +	BUG();
> +}
> +
> +static inline pud_t kvm_s2pud_mkwrite(pud_t pud)
> +{
> +	BUG();
> +	return pud;
> +}
> +
> +static inline pud_t kvm_s2pud_mkexec(pud_t pud)
> +{
> +	BUG();
> +	return pud;
> +}
>
>  static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
>  {
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index f440cf216a23..f49a68fcbf26 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -172,11 +172,14 @@ void kvm_clear_hyp_idmap(void);
>
>  #define kvm_set_pte(ptep, pte)		set_pte(ptep, pte)
>  #define kvm_set_pmd(pmdp, pmd)		set_pmd(pmdp, pmd)
> +#define kvm_set_pud(pudp, pud)		set_pud(pudp, pud)
>
>  #define kvm_pfn_pte(pfn, prot)		pfn_pte(pfn, prot)
>  #define kvm_pfn_pmd(pfn, prot)		pfn_pmd(pfn, prot)
> +#define kvm_pfn_pud(pfn, prot)		pfn_pud(pfn, prot)
>
>  #define kvm_pmd_mkhuge(pmd)		pmd_mkhuge(pmd)
> +#define kvm_pud_mkhuge(pud)		pud_mkhuge(pud)
>
>  static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
>  {
> @@ -190,6 +193,12 @@ static inline pmd_t kvm_s2pmd_mkwrite(pmd_t pmd)
>  	return pmd;
>  }
>
> +static inline pud_t kvm_s2pud_mkwrite(pud_t pud)
> +{
> +	pud_val(pud) |= PUD_S2_RDWR;
> +	return pud;
> +}
> +
>  static inline pte_t kvm_s2pte_mkexec(pte_t pte)
>  {
>  	pte_val(pte) &= ~PTE_S2_XN;
> @@ -202,6 +211,12 @@ static inline pmd_t kvm_s2pmd_mkexec(pmd_t pmd)
>  	return pmd;
>  }
>
> +static inline pud_t kvm_s2pud_mkexec(pud_t pud)
> +{
> +	pud_val(pud) &= ~PUD_S2_XN;
> +	return pud;
> +}
> +
>  static inline void kvm_set_s2pte_readonly(pte_t *ptep)
>  {
>  	pteval_t old_pteval, pteval;
> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> index fd208eac9f2a..e327665e94d1 100644
> --- a/arch/arm64/include/asm/pgtable-hwdef.h
> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> @@ -193,6 +193,10 @@
>  #define PMD_S2_RDWR		(_AT(pmdval_t, 3) << 6)   /* HAP[2:1] */
>  #define PMD_S2_XN		(_AT(pmdval_t, 2) << 53)  /* XN[1:0] */
>
> +#define PUD_S2_RDONLY		(_AT(pudval_t, 1) << 6)   /* HAP[2:1] */
> +#define PUD_S2_RDWR		(_AT(pudval_t, 3) << 6)   /* HAP[2:1] */
> +#define PUD_S2_XN		(_AT(pudval_t, 2) << 53)  /* XN[1:0] */
> +
>  /*
>   * Memory Attribute override for Stage-2 (MemAttr[3:0])
>   */
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 7c4c8f318ba9..31ea9fda07e3 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -386,6 +386,8 @@ static inline int pmd_protnone(pmd_t pmd)
>
>  #define pud_write(pud)		pte_write(pud_pte(pud))
>
> +#define pud_mkhuge(pud)		(__pud(pud_val(pud) & ~PUD_TABLE_BIT))
> +
>  #define __pud_to_phys(pud)	__pte_to_phys(pud_pte(pud))
>  #define __phys_to_pud_val(phys)	__phys_to_pte_val(phys)
>  #define pud_pfn(pud)		((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT)
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 3afbf693e045..1fb108d47dbd 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1036,6 +1036,26 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
>  	return 0;
>  }
>
> +static int stage2_set_pud_huge(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> +			       phys_addr_t addr, const pud_t *new_pud)
> +{
> +	pud_t *pud, old_pud;
> +
> +	pud = stage2_get_pud(kvm, cache, addr);
> +	VM_BUG_ON(!pud);
> +
> +	old_pud = *pud;
> +	if (pud_present(old_pud)) {
> +		pud_clear(pud);
> +		kvm_tlb_flush_vmid_ipa(kvm, addr);
> +	} else {
> +		get_page(virt_to_page(pud));
> +	}
> +
> +	kvm_set_pud(pud, *new_pud);
> +	return 0;
> +}
> +
>  static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
>  {
>  	pmd_t *pmdp;
> @@ -1467,9 +1487,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	}
>
>  	vma_pagesize = vma_kernel_pagesize(vma);
> -	if (vma_pagesize == PMD_SIZE && !logging_active) {
> +	if ((vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) &&
> +	    !logging_active) {
> +		struct hstate *h = hstate_vma(vma);
> +
>  		hugetlb = true;
> -		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
> +		gfn = (fault_ipa & huge_page_mask(h)) >> PAGE_SHIFT;
>  	} else {
>  		/*
>  		 * Pages belonging to memslots that don't have the same
> @@ -1556,7 +1579,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	if (exec_fault)
>  		invalidate_icache_guest_page(pfn, vma_pagesize);
>
> -	if (hugetlb) {
> +	if (vma_pagesize == PUD_SIZE) {
> +		pud_t new_pud = kvm_pfn_pud(pfn, mem_type);
> +
> +		new_pud = kvm_pud_mkhuge(new_pud);
> +		if (writable)
> +			new_pud = kvm_s2pud_mkwrite(new_pud);
> +
> +		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
> +			new_pud = kvm_s2pud_mkexec(new_pud);
> +
> +		ret = stage2_set_pud_huge(kvm, memcache, fault_ipa, &new_pud);
> +	} else if (vma_pagesize == PMD_SIZE) {
>  		pmd_t new_pmd = kvm_pfn_pmd(pfn, mem_type);
>
>  		new_pmd = kvm_pmd_mkhuge(new_pmd);
> --
> 2.17.0
>
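As a quick sanity check of the new gfn arithmetic (made-up values, not
from the patch): for a 1G hstate, huge_page_mask() rounds the fault
IPA exactly the way the old PMD_MASK expression did, just at PUD
granularity. A standalone sketch, assuming a 4K granule where
PUD_SHIFT is 30:

#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PUD_SHIFT	30			/* 4K granule */
#define PUD_SIZE	(1UL << PUD_SHIFT)
#define PUD_MASK	(~(PUD_SIZE - 1))

int main(void)
{
	uint64_t fault_ipa = 0x40123456UL;	/* arbitrary guest IPA */

	/* huge_page_mask(h) for a 1G hstate is PUD_MASK */
	uint64_t gfn = (fault_ipa & PUD_MASK) >> PAGE_SHIFT;

	/* 0x40123456 rounds down to 0x40000000, i.e. gfn 0x40000 */
	assert(gfn == 0x40000UL);
	return 0;
}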