From: Suzuki K Poulose
To: linux-arm-kernel@lists.infradead.org
Cc: suzuki.poulose@arm.com, linux-kernel@vger.kernel.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, marc.zyngier@arm.com, christoffer.dall@arm.com, will.deacon@arm.com, catalin.marinas@arm.com
Subject: [PATCH v10 1/8] KVM: arm/arm64: Share common code in user_mem_abort()
Date: Tue, 11 Dec 2018 17:10:34 +0000
Message-Id: <1544548241-6417-2-git-send-email-suzuki.poulose@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1544548241-6417-1-git-send-email-suzuki.poulose@arm.com>
References: <1544548241-6417-1-git-send-email-suzuki.poulose@arm.com>
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List:
linux-kernel@vger.kernel.org

From: Punit Agrawal

The code for operations such as marking the pfn as dirty, and
dcache/icache maintenance during stage 2 fault handling is duplicated
between normal pages and PMD hugepages.

Instead of creating another copy of the operations when we introduce
PUD hugepages, let's share them across the different pagesizes.

Signed-off-by: Punit Agrawal
Reviewed-by: Suzuki K Poulose
Cc: Christoffer Dall
Cc: Marc Zyngier
Signed-off-by: Suzuki K Poulose
---
 virt/kvm/arm/mmu.c | 49 ++++++++++++++++++++++++++++++-------------------
 1 file changed, 30 insertions(+), 19 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 5eca48b..5959520 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1475,7 +1475,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  unsigned long fault_status)
 {
 	int ret;
-	bool write_fault, exec_fault, writable, hugetlb = false, force_pte = false;
+	bool write_fault, exec_fault, writable, force_pte = false;
 	unsigned long mmu_seq;
 	gfn_t gfn = fault_ipa >> PAGE_SHIFT;
 	struct kvm *kvm = vcpu->kvm;
@@ -1484,7 +1484,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	kvm_pfn_t pfn;
 	pgprot_t mem_type = PAGE_S2;
 	bool logging_active = memslot_is_logging(memslot);
-	unsigned long flags = 0;
+	unsigned long vma_pagesize, flags = 0;
 
 	write_fault = kvm_is_write_fault(vcpu);
 	exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
@@ -1504,11 +1504,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
-		hugetlb = true;
+	vma_pagesize = vma_kernel_pagesize(vma);
+	if (vma_pagesize == PMD_SIZE && !logging_active) {
 		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
 	} else {
 		/*
+		 * Fallback to PTE if it's not one of the Stage 2
+		 * supported hugepage sizes
+		 */
+		vma_pagesize = PAGE_SIZE;
+
+		/*
 		 * Pages belonging to memslots that don't have the same
 		 * alignment for userspace and IPA cannot be mapped using
 		 * block descriptors even if the pages belong to a THP for
@@ -1573,23 +1579,33 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (mmu_notifier_retry(kvm, mmu_seq))
 		goto out_unlock;
 
-	if (!hugetlb && !force_pte)
-		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
+	if (vma_pagesize == PAGE_SIZE && !force_pte) {
+		/*
+		 * Only PMD_SIZE transparent hugepages(THP) are
+		 * currently supported. This code will need to be
+		 * updated to support other THP sizes.
+		 */
+		if (transparent_hugepage_adjust(&pfn, &fault_ipa))
+			vma_pagesize = PMD_SIZE;
+	}
 
-	if (hugetlb) {
+	if (writable)
+		kvm_set_pfn_dirty(pfn);
+
+	if (fault_status != FSC_PERM)
+		clean_dcache_guest_page(pfn, vma_pagesize);
+
+	if (exec_fault)
+		invalidate_icache_guest_page(pfn, vma_pagesize);
+
+	if (vma_pagesize == PMD_SIZE) {
 		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
 		new_pmd = pmd_mkhuge(new_pmd);
-		if (writable) {
+		if (writable)
 			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
-			kvm_set_pfn_dirty(pfn);
-		}
-
-		if (fault_status != FSC_PERM)
-			clean_dcache_guest_page(pfn, PMD_SIZE);
 
 		if (exec_fault) {
 			new_pmd = kvm_s2pmd_mkexec(new_pmd);
-			invalidate_icache_guest_page(pfn, PMD_SIZE);
 		} else if (fault_status == FSC_PERM) {
 			/* Preserve execute if XN was already cleared */
 			if (stage2_is_exec(kvm, fault_ipa))
@@ -1602,16 +1618,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 		if (writable) {
 			new_pte = kvm_s2pte_mkwrite(new_pte);
-			kvm_set_pfn_dirty(pfn);
 			mark_page_dirty(kvm, gfn);
 		}
 
-		if (fault_status != FSC_PERM)
-			clean_dcache_guest_page(pfn, PAGE_SIZE);
-
 		if (exec_fault) {
 			new_pte = kvm_s2pte_mkexec(new_pte);
-			invalidate_icache_guest_page(pfn, PAGE_SIZE);
 		} else if (fault_status == FSC_PERM) {
 			/* Preserve execute if XN was already cleared */
 			if (stage2_is_exec(kvm, fault_ipa))
-- 
2.7.4