From: Punit Agrawal
To: Suzuki K Poulose
Cc: marc.zyngier@arm.com, Catalin Marinas, will.deacon@arm.com,
	Russell King, linux-kernel@vger.kernel.org,
	kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v5 7/7] KVM: arm64: Add support for creating PUD hugepages at stage 2
References: <20180709143835.28971-1-punit.agrawal@arm.com>
	<20180709144124.29164-1-punit.agrawal@arm.com>
	<20180709144124.29164-7-punit.agrawal@arm.com>
	<87zhyxoize.fsf@e105922-lin.cambridge.arm.com>
Date: Wed, 11 Jul 2018 17:19:06 +0100
In-Reply-To: (Suzuki K. Poulose's message of "Wed, 11 Jul 2018 17:13:54 +0100")
Message-ID: <87lgahoidh.fsf@e105922-lin.cambridge.arm.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain
X-Mailing-List: linux-kernel@vger.kernel.org

Suzuki K Poulose writes:

> On 11/07/18 17:05, Punit Agrawal wrote:
>> Suzuki K Poulose writes:
>>
>>> On 09/07/18 15:41, Punit Agrawal wrote:
>>>> KVM only supports PMD hugepages at stage 2. Now that the various page
>>>> handling routines are updated, extend the stage 2 fault handling to
>>>> map in PUD hugepages.
>>>>
>>>> Addition of PUD hugepage support enables additional page sizes (e.g.,
>>>> 1G with 4K granule) which can be useful on cores that support mapping
>>>> larger block sizes in the TLB entries.
>>>>
>>>> Signed-off-by: Punit Agrawal
>>>> Cc: Christoffer Dall
>>>> Cc: Marc Zyngier
>>>> Cc: Russell King
>>>> Cc: Catalin Marinas
>>>> Cc: Will Deacon
>>>> ---
>>>>  arch/arm/include/asm/kvm_mmu.h         | 19 +++++++
>>>>  arch/arm64/include/asm/kvm_mmu.h       | 15 +++++
>>>>  arch/arm64/include/asm/pgtable-hwdef.h |  2 +
>>>>  arch/arm64/include/asm/pgtable.h       |  2 +
>>>>  virt/kvm/arm/mmu.c                     | 78 ++++++++++++++++++++++++--
>>>>  5 files changed, 112 insertions(+), 4 deletions(-)
>>>>
>>
>> [...]
>>
>>>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>>>> index a6d3ac9d7c7a..d8e2497e5353 100644
>>>> --- a/virt/kvm/arm/mmu.c
>>>> +++ b/virt/kvm/arm/mmu.c
>>
>> [...]
>>
>>>> @@ -1100,6 +1139,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
>>>>  			  phys_addr_t addr, const pte_t *new_pte,
>>>>  			  unsigned long flags)
>>>>  {
>>>> +	pud_t *pud;
>>>>  	pmd_t *pmd;
>>>>  	pte_t *pte, old_pte;
>>>>  	bool iomap = flags & KVM_S2PTE_FLAG_IS_IOMAP;
>>>> @@ -1108,6 +1148,22 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
>>>>  	VM_BUG_ON(logging_active && !cache);
>>>>
>>>>  	/* Create stage-2 page table mapping - Levels 0 and 1 */
>>>> +	pud = stage2_get_pud(kvm, cache, addr);
>>>> +	if (!pud) {
>>>> +		/*
>>>> +		 * Ignore calls from kvm_set_spte_hva for unallocated
>>>> +		 * address ranges.
>>>> +		 */
>>>> +		return 0;
>>>> +	}
>>>> +
>>>> +	/*
>>>> +	 * While dirty page logging - dissolve huge PUD, then continue
>>>> +	 * on to allocate page.
>>>
>>> Punit,
>>>
>>> We don't seem to allocate a page here for the PUD entry, in case if it
>>> is dissolved or empty (i.e, stage2_pud_none(*pud) is true.).
>>
>> I was trying to avoid duplicating the PUD allocation by reusing the
>> functionality in stage2_get_pmd().
>>
>> Does the below updated comment help?
>>
>>         /*
>>          * While dirty page logging - dissolve huge PUD, it'll be
>>          * allocated in stage2_get_pmd().
>>          */
>>
>> The other option is to duplicate the stage2_pud_none() case from
>> stage2_get_pmd() here.
>
> I think the explicit check for stage2_pud_none() suits better here.
> That would make it explicit that we are tearing down the entries
> from top to bottom. Also, we may be able to short cut for case
> where we know we just allocated a PUD page and hence we need another
> PMD level page.

Ok, I'll add the PUD allocation code here.

> Also, you are missing the comment about the assumption that stage2 PUD
> level always exist with 4k fixed IPA.

Hmm... I'm quite sure I wrote a comment to that effect but can't find it
now. I'll include it in the next version.
Thanks,
Punit

> Cheers
> Suzuki
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm