From: Punit Agrawal
To: Christoffer Dall
Cc: marc.zyngier@arm.com, Catalin Marinas, Will Deacon, Russell King,
	linux-kernel@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 4/4] KVM: arm64: Add support for PUD hugepages at stage 2
References: <20180420145409.24485-1-punit.agrawal@arm.com>
	<20180420145409.24485-5-punit.agrawal@arm.com>
	<20180427111422.GK13249@C02W217FHV2R.local>
Date: Fri, 27 Apr 2018 15:50:44 +0100
In-Reply-To: <20180427111422.GK13249@C02W217FHV2R.local> (Christoffer Dall's
	message of "Fri, 27 Apr 2018 13:14:22 +0200")
Message-ID: <877eoszosb.fsf@e105922-lin.cambridge.arm.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain
Christoffer Dall writes:

> On Fri, Apr 20, 2018 at 03:54:09PM +0100, Punit Agrawal wrote:
>> KVM only supports PMD hugepages at stage 2. Extend the stage 2 fault
>> handling to add support for PUD hugepages.
>>
>> Addition of pud hugepage support enables additional hugepage
>> sizes (e.g., 1G with 4K granule) which can be useful on cores that
>> support mapping larger block sizes in the TLB entries.
>>
>> Signed-off-by: Punit Agrawal
>> Cc: Christoffer Dall
>> Cc: Marc Zyngier
>> Cc: Russell King
>> Cc: Catalin Marinas
>> Cc: Will Deacon
>> ---
>>  arch/arm/include/asm/kvm_mmu.h         | 19 +++++++++
>>  arch/arm64/include/asm/kvm_mmu.h       | 15 +++++++
>>  arch/arm64/include/asm/pgtable-hwdef.h |  4 ++
>>  arch/arm64/include/asm/pgtable.h       |  2 +
>>  virt/kvm/arm/mmu.c                     | 54 ++++++++++++++++++++------
>>  5 files changed, 83 insertions(+), 11 deletions(-)
>>

[...]

>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c

[...]

>> @@ -1452,9 +1472,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  	}
>>
>>  	vma_pagesize = vma_kernel_pagesize(vma);
>> -	if (vma_pagesize == PMD_SIZE && !logging_active) {
>> +	if ((vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) &&
>> +	    !logging_active) {
>> +		struct hstate *h = hstate_vma(vma);
>> +
>>  		hugetlb = true;
>> -		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
>> +		gfn = (fault_ipa & huge_page_mask(h)) >> PAGE_SHIFT;
>>  	} else {
>>  		/*
>>  		 * Pages belonging to memslots that don't have the same
>> @@ -1521,15 +1544,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  	if (mmu_notifier_retry(kvm, mmu_seq))
>>  		goto out_unlock;
>>
>> -	if (!hugetlb && !force_pte) {
>> -		/*
>> -		 * Only PMD_SIZE transparent hugepages(THP) are
>> -		 * currently supported. This code will need to be
>> -		 * updated if other THP sizes are supported.
>> -		 */
>> +	if (!hugetlb && !force_pte)
>>  		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
>> -		vma_pagesize = PMD_SIZE;
>
> Why this change? Won't you end up trying to map THPs as individual
> pages now?

Argh - that's a rebase gone awry. Thanks for spotting.

There's another issue with this hunk - hugetlb can be false after the
call to transparent_hugepage_adjust(). I've fixed that up for the next
update.

>
>> -	}
>>
>>  	if (writable)
>>  		kvm_set_pfn_dirty(pfn);
>> @@ -1540,7 +1556,23 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  	if (exec_fault)
>>  		invalidate_icache_guest_page(pfn, vma_pagesize);
>>
>> -	if (hugetlb) {
>> +	if (vma_pagesize == PUD_SIZE) {
>> +		pud_t new_pud = kvm_pfn_pud(pfn, mem_type);
>> +
>> +		new_pud = kvm_pud_mkhuge(new_pud);
>> +		if (writable)
>> +			new_pud = kvm_s2pud_mkwrite(new_pud);
>> +
>> +		if (exec_fault) {
>> +			new_pud = kvm_s2pud_mkexec(new_pud);
>> +		} else if (fault_status == FSC_PERM) {
>> +			/* Preserve execute if XN was already cleared */
>> +			if (stage2_is_exec(kvm, fault_ipa))
>> +				new_pud = kvm_s2pud_mkexec(new_pud);
>> +		}
>
> aha, another reason for my suggestion in the other patch.

Ack! Already fixed locally.

>
>> +
>> +		ret = stage2_set_pud_huge(kvm, memcache, fault_ipa, &new_pud);
>> +	} else if (vma_pagesize == PMD_SIZE) {
>>  		pmd_t new_pmd = kvm_pfn_pmd(pfn, mem_type);
>>
>>  		new_pmd = kvm_pmd_mkhuge(new_pmd);
>> --
>> 2.17.0
>>
>
> Otherwise, this patch looks fine.

Thanks a lot for reviewing the patches. I'll send out an update
incorporating your suggestions.

Punit