Subject: Re: [PATCH v7 9/9] KVM: arm64: Add support for creating PUD hugepages at stage 2
To: Punit Agrawal, kvmarm@lists.cs.columbia.edu
Cc: marc.zyngier@arm.com, will.deacon@arm.com, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, Christoffer Dall, Russell King,
    Catalin Marinas
From: Suzuki K Poulose
Message-ID: <4df146fc-f9b0-882e-becb-056111a444d4@arm.com>
Date: Mon, 24 Sep 2018 22:21:43 +0100
References: <20180924174552.8387-1-punit.agrawal@arm.com> <20180924174552.8387-10-punit.agrawal@arm.com>
In-Reply-To: <20180924174552.8387-10-punit.agrawal@arm.com>
Hi Punit,

On 09/24/2018 06:45 PM, Punit Agrawal wrote:
> KVM only supports PMD hugepages at stage 2. Now that the various page
> handling routines are updated, extend the stage 2 fault handling to
> map in PUD hugepages.
>
> Addition of PUD hugepage support enables additional page sizes (e.g.,
> 1G with 4K granule) which can be useful on cores that support mapping
> larger block sizes in the TLB entries.
>
> Signed-off-by: Punit Agrawal
> Cc: Christoffer Dall
> Cc: Marc Zyngier
> Cc: Russell King
> Cc: Catalin Marinas
> Cc: Will Deacon
>
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index a42b9505c9a7..a8e86b926ee0 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -84,11 +84,14 @@ void kvm_clear_hyp_idmap(void);
>
>  #define kvm_pfn_pte(pfn, prot)	pfn_pte(pfn, prot)
>  #define kvm_pfn_pmd(pfn, prot)	pfn_pmd(pfn, prot)
> +#define kvm_pfn_pud(pfn, prot)	(__pud(0))
>
>  #define kvm_pud_pfn(pud)	({ BUG(); 0; })
>
>  #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
> +/* No support for pud hugepages */
> +#define kvm_pud_mkhuge(pud)	(pud)
>

Shouldn't this be BUG() like the other PUD huge helpers for arm32?

>  /*
>   * The following kvm_*pud*() functions are provided strictly to allow
> @@ -105,6 +108,23 @@ static inline bool kvm_s2pud_readonly(pud_t *pud)
>  	return false;
>  }
>
> +static inline void kvm_set_pud(pud_t *pud, pud_t new_pud)
> +{
> +	BUG();
> +}
> +
> +static inline pud_t kvm_s2pud_mkwrite(pud_t pud)
> +{
> +	BUG();
> +	return pud;
> +}
> +
> +static inline pud_t kvm_s2pud_mkexec(pud_t pud)
> +{
> +	BUG();
> +	return pud;
> +}
> +
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 3ff7ebb262d2..5b8163537bc2 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c

...
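To make the arm32 comment above concrete, this is roughly the change I had in mind, matching the `({ BUG(); ... })` pattern already used for kvm_pud_pfn() in the same hunk (untested sketch, just to illustrate):

```diff
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
 /* No support for pud hugepages */
-#define kvm_pud_mkhuge(pud)	(pud)
+#define kvm_pud_mkhuge(pud)	({ BUG(); pud; })
```

That way a stray call on arm32 traps loudly instead of silently returning the pud unchanged.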
> @@ -1669,7 +1746,28 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	needs_exec = exec_fault ||
>  		(fault_status == FSC_PERM && stage2_is_exec(kvm, fault_ipa));
>
> -	if (hugetlb && vma_pagesize == PMD_SIZE) {
> +	if (hugetlb && vma_pagesize == PUD_SIZE) {
> +		/*
> +		 * Assuming that PUD level always exists at Stage 2 -
> +		 * this is true for 4k pages with 40 bits IPA
> +		 * currently supported.
> +		 *
> +		 * When using 64k pages, 40bits of IPA results in
> +		 * using only 2-levels at Stage 2. Overlooking this
> +		 * problem for now as a PUD hugepage with 64k pages is
> +		 * too big (4TB) to be practical.
> +		 */
> +		pud_t new_pud = kvm_pfn_pud(pfn, mem_type);

Is this based on the Dynamic IPA series? The cover letter seems to suggest that it is, but I don't see a check here to make sure we have a stage 2 PUD level before we go ahead and try a PUD hugepage at stage 2. Also, the comment above seems outdated in that case.

> +
> +		new_pud = kvm_pud_mkhuge(new_pud);
> +		if (writable)
> +			new_pud = kvm_s2pud_mkwrite(new_pud);
> +
> +		if (needs_exec)
> +			new_pud = kvm_s2pud_mkexec(new_pud);
> +
> +		ret = stage2_set_pud_huge(kvm, memcache, fault_ipa, &new_pud);
> +	} else if (hugetlb && vma_pagesize == PMD_SIZE) {
>  		pmd_t new_pmd = kvm_pfn_pmd(pfn, mem_type);
>
>  		new_pmd = kvm_pmd_mkhuge(new_pmd);
>

Suzuki
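P.S. To spell out the kind of check I mean above (sketch only; kvm_stage2_has_pud() stands in for whatever helper the Dynamic IPA series provides to tell whether stage 2 actually has a PUD level for this VM):

```diff
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
-	if (hugetlb && vma_pagesize == PUD_SIZE) {
+	/* Only attempt a PUD hugepage if this VM's stage 2 has a PUD level */
+	if (hugetlb && vma_pagesize == PUD_SIZE &&
+	    kvm_stage2_has_pud(kvm)) {
```

With something like that in place, a configuration without a stage 2 PUD level (e.g. fewer translation levels with a larger granule) would simply fall back to the smaller mappings below, and the comment in the hunk could be dropped.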