From: Punit Agrawal <punit.agrawal@arm.com>
To: kvmarm@lists.cs.columbia.edu
Cc: Punit Agrawal <punit.agrawal@arm.com>, linux-arm-kernel@lists.infradead.org,
    marc.zyngier@arm.com, christoffer.dall@arm.com,
    linux-kernel@vger.kernel.org, suzuki.poulose@arm.com
Subject: [PATCH v2 1/4] KVM: arm/arm64: Share common code in user_mem_abort()
Date: Tue, 1 May 2018 11:26:56 +0100
Message-Id: <20180501102659.13188-2-punit.agrawal@arm.com>
In-Reply-To: <20180501102659.13188-1-punit.agrawal@arm.com>
References: <20180501102659.13188-1-punit.agrawal@arm.com>

The code for operations such as marking the pfn as dirty, and
dcache/icache maintenance during stage 2
fault handling is duplicated between normal pages and PMD hugepages.

Instead of creating another copy of the operations when we introduce
PUD hugepages, let's share them across the different pagesizes.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
---
 virt/kvm/arm/mmu.c | 66 +++++++++++++++++++++++++++-------------------
 1 file changed, 39 insertions(+), 27 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 7f6a944db23d..686fc6a4b866 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1396,6 +1396,21 @@ static void invalidate_icache_guest_page(kvm_pfn_t pfn, unsigned long size)
 	__invalidate_icache_guest_page(pfn, size);
 }
 
+static bool stage2_should_exec(struct kvm *kvm, phys_addr_t addr,
+			       bool exec_fault, unsigned long fault_status)
+{
+	/*
+	 * If we took an execution fault we will have made the
+	 * icache/dcache coherent and should now let the s2 mapping be
+	 * executable.
+	 *
+	 * Write faults (!exec_fault && FSC_PERM) are orthogonal to
+	 * execute permissions, and we preserve whatever we have.
+	 */
+	return exec_fault ||
+		(fault_status == FSC_PERM && stage2_is_exec(kvm, addr));
+}
+
 static void kvm_send_hwpoison_signal(unsigned long address,
 				     struct vm_area_struct *vma)
 {
@@ -1428,7 +1443,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	kvm_pfn_t pfn;
 	pgprot_t mem_type = PAGE_S2;
 	bool logging_active = memslot_is_logging(memslot);
-	unsigned long flags = 0;
+	unsigned long vma_pagesize, flags = 0;
 
 	write_fault = kvm_is_write_fault(vcpu);
 	exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
@@ -1448,7 +1463,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
+	vma_pagesize = vma_kernel_pagesize(vma);
+	if (vma_pagesize == PMD_SIZE && !logging_active) {
 		hugetlb = true;
 		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
 	} else {
@@ -1517,28 +1533,34 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (mmu_notifier_retry(kvm, mmu_seq))
 		goto out_unlock;
 
-	if (!hugetlb && !force_pte)
+	if (!hugetlb && !force_pte) {
 		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
+		/*
+		 * Only PMD_SIZE transparent hugepages(THP) are
+		 * currently supported. This code will need to be
+		 * updated to support other THP sizes.
+		 */
+		if (hugetlb)
+			vma_pagesize = PMD_SIZE;
+	}
+
+	if (writable)
+		kvm_set_pfn_dirty(pfn);
+
+	if (fault_status != FSC_PERM)
+		clean_dcache_guest_page(pfn, vma_pagesize);
+
+	if (exec_fault)
+		invalidate_icache_guest_page(pfn, vma_pagesize);
 
 	if (hugetlb) {
 		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
 		new_pmd = pmd_mkhuge(new_pmd);
-		if (writable) {
+		if (writable)
 			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
-			kvm_set_pfn_dirty(pfn);
-		}
-
-		if (fault_status != FSC_PERM)
-			clean_dcache_guest_page(pfn, PMD_SIZE);
-
-		if (exec_fault) {
+		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
 			new_pmd = kvm_s2pmd_mkexec(new_pmd);
-			invalidate_icache_guest_page(pfn, PMD_SIZE);
-		} else if (fault_status == FSC_PERM) {
-			/* Preserve execute if XN was already cleared */
-			if (stage2_is_exec(kvm, fault_ipa))
-				new_pmd = kvm_s2pmd_mkexec(new_pmd);
-		}
 
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
 	} else {
@@ -1546,21 +1568,11 @@
 
 		if (writable) {
 			new_pte = kvm_s2pte_mkwrite(new_pte);
-			kvm_set_pfn_dirty(pfn);
 			mark_page_dirty(kvm, gfn);
 		}
 
-		if (fault_status != FSC_PERM)
-			clean_dcache_guest_page(pfn, PAGE_SIZE);
-
-		if (exec_fault) {
+		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
 			new_pte = kvm_s2pte_mkexec(new_pte);
-			invalidate_icache_guest_page(pfn, PAGE_SIZE);
-		} else if (fault_status == FSC_PERM) {
-			/* Preserve execute if XN was already cleared */
-			if (stage2_is_exec(kvm, fault_ipa))
-				new_pte = kvm_s2pte_mkexec(new_pte);
-		}
 
 		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
 	}
-- 
2.17.0