Subject: Re: [PATCH v2 2/4] KVM: arm/arm64: Introduce helpers to manupulate page table entries
To: Punit Agrawal, kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org, marc.zyngier@arm.com, christoffer.dall@arm.com, linux-kernel@vger.kernel.org, Russell King, Catalin Marinas, Will Deacon
From: Suzuki K Poulose
Message-ID: <3eab5997-30b2-c51a-ca8e-5545bbadffc0@arm.com>
Date: Tue, 1 May 2018 11:36:26 +0100
References: <20180501102659.13188-1-punit.agrawal@arm.com> <20180501102659.13188-3-punit.agrawal@arm.com>
In-Reply-To: <20180501102659.13188-3-punit.agrawal@arm.com>
On 01/05/18 11:26, Punit Agrawal wrote:
> Introduce helpers to abstract architectural handling of the conversion
> of pfn to page table entries and marking a PMD page table entry as a
> block entry.
>
> The helpers are introduced in preparation for supporting PUD hugepages
> at stage 2 - which are supported on arm64 but do not exist on arm.

Punit,

The changes are fine by me. However, we usually do not define kvm_*
accessors for something which we know matches the host variant, i.e.
the PMD and PTE helpers, which are always present, and we make use of
them directly (see unmap_stage2_pmds for an example).

Cheers
Suzuki

>
> Signed-off-by: Punit Agrawal
> Acked-by: Christoffer Dall
> Cc: Marc Zyngier
> Cc: Russell King
> Cc: Catalin Marinas
> Cc: Will Deacon
> ---
>  arch/arm/include/asm/kvm_mmu.h   | 5 +++++
>  arch/arm64/include/asm/kvm_mmu.h | 5 +++++
>  virt/kvm/arm/mmu.c               | 7 ++++---
>  3 files changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index 707a1f06dc5d..5907a81ad5c1 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -75,6 +75,11 @@ phys_addr_t kvm_get_idmap_vector(void);
>  int kvm_mmu_init(void);
>  void kvm_clear_hyp_idmap(void);
>
> +#define kvm_pfn_pte(pfn, prot)	pfn_pte(pfn, prot)
> +#define kvm_pfn_pmd(pfn, prot)	pfn_pmd(pfn, prot)
> +
> +#define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
> +
>  static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd)
>  {
>  	*pmd = new_pmd;
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 082110993647..d962508ce4b3 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -173,6 +173,11 @@ void kvm_clear_hyp_idmap(void);
>  #define kvm_set_pte(ptep, pte)	set_pte(ptep, pte)
>  #define kvm_set_pmd(pmdp, pmd)	set_pmd(pmdp, pmd)
>
> +#define kvm_pfn_pte(pfn, prot)	pfn_pte(pfn, prot)
> +#define kvm_pfn_pmd(pfn, prot)	pfn_pmd(pfn, prot)
> +
> +#define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
> +
>  static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
>  {
>  	pte_val(pte) |= PTE_S2_RDWR;
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 686fc6a4b866..74750236f445 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1554,8 +1554,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		invalidate_icache_guest_page(pfn, vma_pagesize);
>
>  	if (hugetlb) {
> -		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
> -		new_pmd = pmd_mkhuge(new_pmd);
> +		pmd_t new_pmd = kvm_pfn_pmd(pfn, mem_type);
> +
> +		new_pmd = kvm_pmd_mkhuge(new_pmd);
>  		if (writable)
>  			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
>
> @@ -1564,7 +1565,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>
>  		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
>  	} else {
> -		pte_t new_pte = pfn_pte(pfn, mem_type);
> +		pte_t new_pte = kvm_pfn_pte(pfn, mem_type);
>
>  		if (writable) {
>  			new_pte = kvm_s2pte_mkwrite(new_pte);
>