From: Punit Agrawal
To: Suzuki K Poulose
Cc: kvmarm@lists.cs.columbia.edu, marc.zyngier@arm.com,
	linux-kernel@vger.kernel.org, will.deacon@arm.com,
	Russell King, Catalin Marinas,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v5 7/7] KVM: arm64: Add support for creating PUD hugepages at stage 2
References: <20180709143835.28971-1-punit.agrawal@arm.com>
	<20180709144124.29164-1-punit.agrawal@arm.com>
	<20180709144124.29164-7-punit.agrawal@arm.com>
Date: Wed, 11 Jul 2018 17:05:57 +0100
In-Reply-To: (Suzuki K. Poulose's message of "Wed, 11 Jul 2018 14:38:38 +0100")
Message-ID: <87zhyxoize.fsf@e105922-lin.cambridge.arm.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux)

Suzuki K Poulose writes:

> On 09/07/18 15:41, Punit Agrawal wrote:
>> KVM only supports PMD hugepages at stage 2. Now that the various page
>> handling routines are updated, extend the stage 2 fault handling to
>> map in PUD hugepages.
>>
>> Addition of PUD hugepage support enables additional page sizes (e.g.,
>> 1G with 4K granule) which can be useful on cores that support mapping
>> larger block sizes in the TLB entries.
>>
>> Signed-off-by: Punit Agrawal
>> Cc: Christoffer Dall
>> Cc: Marc Zyngier
>> Cc: Russell King
>> Cc: Catalin Marinas
>> Cc: Will Deacon
>> ---
>>  arch/arm/include/asm/kvm_mmu.h         | 19 +++++++
>>  arch/arm64/include/asm/kvm_mmu.h       | 15 +++++
>>  arch/arm64/include/asm/pgtable-hwdef.h |  2 +
>>  arch/arm64/include/asm/pgtable.h       |  2 +
>>  virt/kvm/arm/mmu.c                     | 78 ++++++++++++++++++++++++--
>>  5 files changed, 112 insertions(+), 4 deletions(-)
>>

[...]

>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index a6d3ac9d7c7a..d8e2497e5353 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c

[...]

>> @@ -1100,6 +1139,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
>>  			  phys_addr_t addr, const pte_t *new_pte,
>>  			  unsigned long flags)
>>  {
>> +	pud_t *pud;
>>  	pmd_t *pmd;
>>  	pte_t *pte, old_pte;
>>  	bool iomap = flags & KVM_S2PTE_FLAG_IS_IOMAP;
>> @@ -1108,6 +1148,22 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
>>  	VM_BUG_ON(logging_active && !cache);
>>
>>  	/* Create stage-2 page table mapping - Levels 0 and 1 */
>> +	pud = stage2_get_pud(kvm, cache, addr);
>> +	if (!pud) {
>> +		/*
>> +		 * Ignore calls from kvm_set_spte_hva for unallocated
>> +		 * address ranges.
>> +		 */
>> +		return 0;
>> +	}
>> +
>> +	/*
>> +	 * While dirty page logging - dissolve huge PUD, then continue
>> +	 * on to allocate page.
>
> Punit,
>
> We don't seem to allocate a page here for the PUD entry, in case it is
> dissolved or empty (i.e., stage2_pud_none(*pud) is true).

I was trying to avoid duplicating the PUD allocation by reusing the
functionality in stage2_get_pmd().

Does the below updated comment help?

	/*
	 * While dirty page logging - dissolve huge PUD, it'll be
	 * allocated in stage2_get_pmd().
	 */

The other option is to duplicate the stage2_pud_none() case from
stage2_get_pmd() here.

What do you think?

Thanks,
Punit

>> +	 */
>> +	if (logging_active)
>> +		stage2_dissolve_pud(kvm, addr, pud);
>> +
>>  	pmd = stage2_get_pmd(kvm, cache, addr);
>>  	if (!pmd) {
>
> And once you add an entry, the pmd is just a matter of getting
> stage2_pmd_offset() from your pud. No need to start again from the
> top level with stage2_get_pmd().
>
> Cheers
> Suzuki
>
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
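
For reference, the "duplicate the stage2_pud_none() case from
stage2_get_pmd()" option mentioned above, combined with Suzuki's point
about deriving the pmd via stage2_pmd_offset(), could look roughly like
the sketch below inside stage2_set_pte(). This is only an untested
illustration, not part of the posted patch, and it assumes the
4.18-era stage 2 helpers (stage2_pud_none(), stage2_pud_populate(),
stage2_pmd_offset(), mmu_memory_cache_alloc()) with their signatures
from that time:

	pud = stage2_get_pud(kvm, cache, addr);
	if (!pud) {
		/*
		 * Ignore calls from kvm_set_spte_hva for unallocated
		 * address ranges.
		 */
		return 0;
	}

	/* While dirty page logging - dissolve huge PUD. */
	if (logging_active)
		stage2_dissolve_pud(kvm, addr, pud);

	/*
	 * Allocate the PMD table here when the PUD entry is empty
	 * (dissolved above or never populated), mirroring the
	 * stage2_pud_none() case in stage2_get_pmd().
	 */
	if (stage2_pud_none(*pud)) {
		if (!cache)
			return 0;
		pmd = mmu_memory_cache_alloc(cache);
		stage2_pud_populate(pud, pmd);
		get_page(virt_to_page(pud));
	}

	/*
	 * The pmd can now be derived directly from the pud instead of
	 * re-walking from the top level with stage2_get_pmd(), and the
	 * subsequent !pmd check becomes unnecessary.
	 */
	pmd = stage2_pmd_offset(pud, addr);

Whether this is preferable to the updated comment quoted above is a
trade-off between avoiding a second walk from the top level and
duplicating the allocation logic in two places.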