Subject: Re: [PATCH v5 7/7] KVM: arm64: Add support for creating PUD hugepages at stage 2
To: Punit Agrawal
Cc: kvmarm@lists.cs.columbia.edu, marc.zyngier@arm.com, linux-kernel@vger.kernel.org,
    will.deacon@arm.com, Russell King, Catalin Marinas,
    linux-arm-kernel@lists.infradead.org
References: <20180709143835.28971-1-punit.agrawal@arm.com>
    <20180709144124.29164-1-punit.agrawal@arm.com>
    <20180709144124.29164-7-punit.agrawal@arm.com>
    <87zhyxoize.fsf@e105922-lin.cambridge.arm.com>
From: Suzuki K Poulose
Date: Wed, 11 Jul 2018 17:13:54 +0100
In-Reply-To: <87zhyxoize.fsf@e105922-lin.cambridge.arm.com>

On 11/07/18 17:05, Punit Agrawal wrote:
> Suzuki K Poulose writes:
>
>> On 09/07/18 15:41, Punit Agrawal wrote:
>>> KVM only supports PMD hugepages at stage 2. Now that the various page
>>> handling routines are updated, extend the stage 2 fault handling to
>>> map in PUD hugepages.
>>>
>>> Addition of PUD hugepage support enables additional page sizes (e.g.,
>>> 1G with 4K granule) which can be useful on cores that support mapping
>>> larger block sizes in the TLB entries.
>>>
>>> Signed-off-by: Punit Agrawal
>>> Cc: Christoffer Dall
>>> Cc: Marc Zyngier
>>> Cc: Russell King
>>> Cc: Catalin Marinas
>>> Cc: Will Deacon
>>> ---
>>>   arch/arm/include/asm/kvm_mmu.h         | 19 +++++++
>>>   arch/arm64/include/asm/kvm_mmu.h       | 15 +++++
>>>   arch/arm64/include/asm/pgtable-hwdef.h |  2 +
>>>   arch/arm64/include/asm/pgtable.h       |  2 +
>>>   virt/kvm/arm/mmu.c                     | 78 ++++++++++++++++++++++++--
>>>   5 files changed, 112 insertions(+), 4 deletions(-)
>>>
>
> [...]
>
>>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>>> index a6d3ac9d7c7a..d8e2497e5353 100644
>>> --- a/virt/kvm/arm/mmu.c
>>> +++ b/virt/kvm/arm/mmu.c
>
> [...]
>
>>> @@ -1100,6 +1139,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
>>>   			  phys_addr_t addr, const pte_t *new_pte,
>>>   			  unsigned long flags)
>>>   {
>>> +	pud_t *pud;
>>>   	pmd_t *pmd;
>>>   	pte_t *pte, old_pte;
>>>   	bool iomap = flags & KVM_S2PTE_FLAG_IS_IOMAP;
>>> @@ -1108,6 +1148,22 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
>>>   	VM_BUG_ON(logging_active && !cache);
>>>
>>>   	/* Create stage-2 page table mapping - Levels 0 and 1 */
>>> +	pud = stage2_get_pud(kvm, cache, addr);
>>> +	if (!pud) {
>>> +		/*
>>> +		 * Ignore calls from kvm_set_spte_hva for unallocated
>>> +		 * address ranges.
>>> +		 */
>>> +		return 0;
>>> +	}
>>> +
>>> +	/*
>>> +	 * While dirty page logging - dissolve huge PUD, then continue
>>> +	 * on to allocate page.
>>
>> Punit,
>>
>> We don't seem to allocate a page here for the PUD entry, in case it is
>> dissolved or empty (i.e., stage2_pud_none(*pud) is true).
>
> I was trying to avoid duplicating the PUD allocation by reusing the
> functionality in stage2_get_pmd().
>
> Does the below updated comment help?
>
>          /*
>           * While dirty page logging - dissolve huge PUD, it'll be
>           * allocated in stage2_get_pmd().
>           */
>
> The other option is to duplicate the stage2_pud_none() case from
> stage2_get_pmd() here.

I think the explicit check for stage2_pud_none() suits better here. That
would make it explicit that we are tearing down the entries from top to
bottom. Also, we may be able to short-cut the case where we know we have
just allocated a PUD page and hence need another PMD-level page.

Also, you are missing the comment about the assumption that the stage2
PUD level always exists with the 4K fixed IPA.

Cheers
Suzuki
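
For illustration, the explicit handling suggested above could look roughly
like the fragment below inside stage2_set_pte(), mirroring what
stage2_get_pmd() already does. This is only a sketch of the idea, not a
tested change: it assumes the helpers the surrounding virt/kvm/arm/mmu.c
code already uses (stage2_get_pud(), stage2_pud_none(), stage2_pud_populate(),
mmu_memory_cache_alloc()), and the exact placement and return values may
differ in the final patch.

	pud = stage2_get_pud(kvm, cache, addr);
	if (!pud) {
		/* Unallocated address range, e.g. calls from kvm_set_spte_hva */
		return 0;
	}

	if (stage2_pud_none(*pud)) {
		/* Explicitly allocate and install a PMD table for this PUD entry */
		if (!cache)
			return 0;
		pmd = mmu_memory_cache_alloc(cache);
		stage2_pud_populate(pud, pmd);
		get_page(virt_to_page(pud));
	}

Handling the empty PUD entry here (rather than relying on stage2_get_pmd()
to do it) keeps the top-down tear-down/rebuild order visible at the call
site, which is the point of the suggestion.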