From: Punit Agrawal <punit.agrawal@arm.com>
To: kvmarm@lists.cs.columbia.edu
Cc: Punit Agrawal, linux-arm-kernel@lists.infradead.org, marc.zyngier@arm.com,
	christoffer.dall@arm.com, linux-kernel@vger.kernel.org,
	suzuki.poulose@arm.com, Russell King, Catalin Marinas, Will Deacon
Subject: [PATCH 4/4] KVM: arm64: Add support for PUD hugepages at stage 2
Date: Fri, 20 Apr 2018 15:54:09 +0100
Message-Id: <20180420145409.24485-5-punit.agrawal@arm.com>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180420145409.24485-1-punit.agrawal@arm.com>
References: <20180420145409.24485-1-punit.agrawal@arm.com>

KVM only supports PMD hugepages at stage 2. Extend the stage 2 fault
handling to add support for PUD hugepages.

Adding PUD hugepage support enables additional hugepage sizes (e.g.,
1G with a 4K granule), which can be useful on cores that support
mapping larger block sizes in the TLB entries.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King
Cc: Catalin Marinas
Cc: Will Deacon
---
 arch/arm/include/asm/kvm_mmu.h         | 19 +++++++++
 arch/arm64/include/asm/kvm_mmu.h       | 15 +++++++
 arch/arm64/include/asm/pgtable-hwdef.h |  4 ++
 arch/arm64/include/asm/pgtable.h       |  2 +
 virt/kvm/arm/mmu.c                     | 54 ++++++++++++++++++++------
 5 files changed, 83 insertions(+), 11 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 224c22c0a69c..155916dbdd7e 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -77,8 +77,11 @@ void kvm_clear_hyp_idmap(void);
 
 #define kvm_pfn_pte(pfn, prot)	pfn_pte(pfn, prot)
 #define kvm_pfn_pmd(pfn, prot)	pfn_pmd(pfn, prot)
+#define kvm_pfn_pud(pfn, prot)	(__pud(0))
 
 #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
+/* No support for pud hugepages */
+#define kvm_pud_mkhuge(pud)	(pud)
 
 /*
  * The following kvm_*pud*() functions are provided strictly to allow
@@ -95,6 +98,22 @@ static inline bool kvm_s2pud_readonly(pud_t *pud)
 	return false;
 }
 
+static inline void kvm_set_pud(pud_t *pud, pud_t new_pud)
+{
+	BUG();
+}
+
+static inline pud_t kvm_s2pud_mkwrite(pud_t pud)
+{
+	BUG();
+	return pud;
+}
+
+static inline pud_t kvm_s2pud_mkexec(pud_t pud)
+{
+	BUG();
+	return pud;
+}
 
 static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
 {
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index f440cf216a23..f49a68fcbf26 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -172,11 +172,14 @@ void kvm_clear_hyp_idmap(void);
 
 #define kvm_set_pte(ptep, pte)		set_pte(ptep, pte)
 #define kvm_set_pmd(pmdp, pmd)		set_pmd(pmdp, pmd)
+#define kvm_set_pud(pudp, pud)		set_pud(pudp, pud)
 
 #define kvm_pfn_pte(pfn, prot)		pfn_pte(pfn, prot)
 #define kvm_pfn_pmd(pfn, prot)		pfn_pmd(pfn, prot)
+#define kvm_pfn_pud(pfn, prot)		pfn_pud(pfn, prot)
 
 #define kvm_pmd_mkhuge(pmd)		pmd_mkhuge(pmd)
+#define kvm_pud_mkhuge(pud)		pud_mkhuge(pud)
 
 static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
 {
@@ -190,6 +193,12 @@ static inline pmd_t kvm_s2pmd_mkwrite(pmd_t pmd)
 	return pmd;
 }
 
+static inline pud_t kvm_s2pud_mkwrite(pud_t pud)
+{
+	pud_val(pud) |= PUD_S2_RDWR;
+	return pud;
+}
+
 static inline pte_t kvm_s2pte_mkexec(pte_t pte)
 {
 	pte_val(pte) &= ~PTE_S2_XN;
@@ -202,6 +211,12 @@ static inline pmd_t kvm_s2pmd_mkexec(pmd_t pmd)
 	return pmd;
 }
 
+static inline pud_t kvm_s2pud_mkexec(pud_t pud)
+{
+	pud_val(pud) &= ~PUD_S2_XN;
+	return pud;
+}
+
 static inline void kvm_set_s2pte_readonly(pte_t *ptep)
 {
 	pteval_t old_pteval, pteval;
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index fd208eac9f2a..e327665e94d1 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -193,6 +193,10 @@
 #define PMD_S2_RDWR		(_AT(pmdval_t, 3) << 6)   /* HAP[2:1] */
 #define PMD_S2_XN		(_AT(pmdval_t, 2) << 53)  /* XN[1:0] */
 
+#define PUD_S2_RDONLY		(_AT(pudval_t, 1) << 6)   /* HAP[2:1] */
+#define PUD_S2_RDWR		(_AT(pudval_t, 3) << 6)   /* HAP[2:1] */
+#define PUD_S2_XN		(_AT(pudval_t, 2) << 53)  /* XN[1:0] */
+
 /*
  * Memory Attribute override for Stage-2 (MemAttr[3:0])
  */
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 7e2c27e63cd8..5efb4585c879 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -386,6 +386,8 @@ static inline int pmd_protnone(pmd_t pmd)
 
 #define pud_write(pud)		pte_write(pud_pte(pud))
 
+#define pud_mkhuge(pud)		(__pud(pud_val(pud) & ~PUD_TABLE_BIT))
+
 #define __pud_to_phys(pud)	__pte_to_phys(pud_pte(pud))
 #define __phys_to_pud_val(phys)	__phys_to_pte_val(phys)
 #define pud_pfn(pud)		((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 5f53909da90e..7fb58dca0a83 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1036,6 +1036,26 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
 	return 0;
 }
 
+static int stage2_set_pud_huge(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+			       phys_addr_t addr, const pud_t *new_pud)
+{
+	pud_t *pud, old_pud;
+
+	pud = stage2_get_pud(kvm, cache, addr);
+	VM_BUG_ON(!pud);
+
+	old_pud = *pud;
+	if (pud_present(old_pud)) {
+		pud_clear(pud);
+		kvm_tlb_flush_vmid_ipa(kvm, addr);
+	} else {
+		get_page(virt_to_page(pud));
+	}
+
+	kvm_set_pud(pud, *new_pud);
+	return 0;
+}
+
 static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
 {
 	pmd_t *pmdp;
@@ -1452,9 +1472,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 	vma_pagesize = vma_kernel_pagesize(vma);
-	if (vma_pagesize == PMD_SIZE && !logging_active) {
+	if ((vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) &&
+	    !logging_active) {
+		struct hstate *h = hstate_vma(vma);
+
 		hugetlb = true;
-		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
+		gfn = (fault_ipa & huge_page_mask(h)) >> PAGE_SHIFT;
 	} else {
 		/*
 		 * Pages belonging to memslots that don't have the same
@@ -1521,15 +1544,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (mmu_notifier_retry(kvm, mmu_seq))
 		goto out_unlock;
 
-	if (!hugetlb && !force_pte) {
-		/*
-		 * Only PMD_SIZE transparent hugepages(THP) are
-		 * currently supported. This code will need to be
-		 * updated if other THP sizes are supported.
-		 */
+	if (!hugetlb && !force_pte)
 		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
-		vma_pagesize = PMD_SIZE;
-	}
 
 	if (writable)
 		kvm_set_pfn_dirty(pfn);
@@ -1540,7 +1556,23 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (exec_fault)
 		invalidate_icache_guest_page(pfn, vma_pagesize);
 
-	if (hugetlb) {
+	if (vma_pagesize == PUD_SIZE) {
+		pud_t new_pud = kvm_pfn_pud(pfn, mem_type);
+
+		new_pud = kvm_pud_mkhuge(new_pud);
+		if (writable)
+			new_pud = kvm_s2pud_mkwrite(new_pud);
+
+		if (exec_fault) {
+			new_pud = kvm_s2pud_mkexec(new_pud);
+		} else if (fault_status == FSC_PERM) {
+			/* Preserve execute if XN was already cleared */
+			if (stage2_is_exec(kvm, fault_ipa))
+				new_pud = kvm_s2pud_mkexec(new_pud);
+		}
+
+		ret = stage2_set_pud_huge(kvm, memcache, fault_ipa, &new_pud);
+	} else if (vma_pagesize == PMD_SIZE) {
 		pmd_t new_pmd = kvm_pfn_pmd(pfn, mem_type);
 
 		new_pmd = kvm_pmd_mkhuge(new_pmd);
-- 
2.17.0
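
For context, the "1G with a 4K granule" in the commit message follows from the
arm64 page-table geometry: with 4K pages, each table level resolves 9 bits of
address, so a PMD block maps 2^21 bytes (2M) and a PUD block maps 2^30 bytes
(1G). Below is a minimal standalone sketch of that arithmetic; it is
illustrative only (not part of the patch), and the shift constants are
hardcoded to the 4K-granule values rather than taken from the kernel headers.

/*
 * Illustrative sketch: stage 2 mapping sizes with a 4K granule.
 * PAGE_SHIFT/PMD_SHIFT/PUD_SHIFT mirror the arm64 4K-granule layout;
 * they are hardcoded here purely for this example.
 */
#include <stdio.h>

#define PAGE_SHIFT	12			/* 4K pages */
#define PMD_SHIFT	(PAGE_SHIFT + 9)	/* 21 -> 2M PMD hugepage */
#define PUD_SHIFT	(PMD_SHIFT + 9)		/* 30 -> 1G PUD hugepage */

int main(void)
{
	printf("PTE maps %lu KB\n", (1UL << PAGE_SHIFT) >> 10);
	printf("PMD hugepage maps %lu MB\n", (1UL << PMD_SHIFT) >> 20);
	printf("PUD hugepage maps %lu GB\n", (1UL << PUD_SHIFT) >> 30);
	return 0;
}

Compiled and run, this prints 4 KB, 2 MB and 1 GB: the three mapping sizes
user_mem_abort() can install at stage 2 once the patch above is applied.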