From: Punit Agrawal <punit.agrawal@arm.com>
To: kvmarm@lists.cs.columbia.edu
Cc: Punit Agrawal <punit.agrawal@arm.com>, linux-arm-kernel@lists.infradead.org,
	marc.zyngier@arm.com, christoffer.dall@arm.com,
	linux-kernel@vger.kernel.org, suzuki.poulose@arm.com
Subject: [PATCH v4 1/7] KVM: arm/arm64: Share common code in user_mem_abort()
Date: Thu, 5 Jul 2018 15:08:44 +0100
Message-Id: <20180705140850.5801-2-punit.agrawal@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180705140850.5801-1-punit.agrawal@arm.com>
References: <20180705140850.5801-1-punit.agrawal@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

The code for operations such as marking the pfn as dirty, and
dcache/icache maintenance during stage 2 fault handling is duplicated
between normal pages and PMD hugepages.

Instead of creating another copy of the operations when we introduce
PUD hugepages, let's share them across the different pagesizes.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
---
 virt/kvm/arm/mmu.c | 68 +++++++++++++++++++++++++++-------------------
 1 file changed, 40 insertions(+), 28 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 1d90d79706bd..dd14cc36c51c 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1398,6 +1398,21 @@ static void invalidate_icache_guest_page(kvm_pfn_t pfn, unsigned long size)
 	__invalidate_icache_guest_page(pfn, size);
 }
 
+static bool stage2_should_exec(struct kvm *kvm, phys_addr_t addr,
+			       bool exec_fault, unsigned long fault_status)
+{
+	/*
+	 * If we took an execution fault we will have made the
+	 * icache/dcache coherent and should now let the s2 mapping be
+	 * executable.
+	 *
+	 * Write faults (!exec_fault && FSC_PERM) are orthogonal to
+	 * execute permissions, and we preserve whatever we have.
+	 */
+	return exec_fault ||
+		(fault_status == FSC_PERM && stage2_is_exec(kvm, addr));
+}
+
 static void kvm_send_hwpoison_signal(unsigned long address,
 				     struct vm_area_struct *vma)
 {
@@ -1431,7 +1446,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	kvm_pfn_t pfn;
 	pgprot_t mem_type = PAGE_S2;
 	bool logging_active = memslot_is_logging(memslot);
-	unsigned long flags = 0;
+	unsigned long vma_pagesize, flags = 0;
 
 	write_fault = kvm_is_write_fault(vcpu);
 	exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
@@ -1451,7 +1466,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
+	vma_pagesize = vma_kernel_pagesize(vma);
+	if (vma_pagesize == PMD_SIZE && !logging_active) {
 		hugetlb = true;
 		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
 	} else {
@@ -1520,28 +1536,34 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (mmu_notifier_retry(kvm, mmu_seq))
 		goto out_unlock;
 
-	if (!hugetlb && !force_pte)
+	if (!hugetlb && !force_pte) {
+		/*
+		 * Only PMD_SIZE transparent hugepages(THP) are
+		 * currently supported. This code will need to be
+		 * updated to support other THP sizes.
+		 */
 		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
+		if (hugetlb)
+			vma_pagesize = PMD_SIZE;
+	}
+
+	if (writable)
+		kvm_set_pfn_dirty(pfn);
 
-	if (hugetlb) {
+	if (fault_status != FSC_PERM)
+		clean_dcache_guest_page(pfn, vma_pagesize);
+
+	if (exec_fault)
+		invalidate_icache_guest_page(pfn, vma_pagesize);
+
+	if (hugetlb && vma_pagesize == PMD_SIZE) {
 		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
 		new_pmd = pmd_mkhuge(new_pmd);
-		if (writable) {
+		if (writable)
 			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
-			kvm_set_pfn_dirty(pfn);
-		}
 
-		if (fault_status != FSC_PERM)
-			clean_dcache_guest_page(pfn, PMD_SIZE);
-
-		if (exec_fault) {
+		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
 			new_pmd = kvm_s2pmd_mkexec(new_pmd);
-			invalidate_icache_guest_page(pfn, PMD_SIZE);
-		} else if (fault_status == FSC_PERM) {
-			/* Preserve execute if XN was already cleared */
-			if (stage2_is_exec(kvm, fault_ipa))
-				new_pmd = kvm_s2pmd_mkexec(new_pmd);
-		}
 
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
 	} else {
@@ -1549,21 +1571,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 		if (writable) {
 			new_pte = kvm_s2pte_mkwrite(new_pte);
-			kvm_set_pfn_dirty(pfn);
 			mark_page_dirty(kvm, gfn);
 		}
 
-		if (fault_status != FSC_PERM)
-			clean_dcache_guest_page(pfn, PAGE_SIZE);
-
-		if (exec_fault) {
+		if (stage2_should_exec(kvm, fault_ipa, exec_fault, fault_status))
 			new_pte = kvm_s2pte_mkexec(new_pte);
-			invalidate_icache_guest_page(pfn, PAGE_SIZE);
-		} else if (fault_status == FSC_PERM) {
-			/* Preserve execute if XN was already cleared */
-			if (stage2_is_exec(kvm, fault_ipa))
-				new_pte = kvm_s2pte_mkexec(new_pte);
-		}
 
 		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
 	}
-- 
2.17.1
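
For reference, below is a minimal, standalone sketch of the decision that
the new stage2_should_exec() helper centralises for both the PTE and PMD
paths. It is not kernel code: the FSC_PERM value, the mapping_already_exec
flag (a stand-in for stage2_is_exec()), and the *_sketch naming are
illustrative assumptions, kept only so the predicate can be tried out in
isolation with a plain C compiler.

/*
 * Standalone sketch (not kernel code) of the execute-permission decision:
 * a stage 2 mapping becomes executable either because the fault itself was
 * an execution fault (the caches have just been made coherent), or because
 * this is a permission fault and the existing mapping was already
 * executable. Types and helpers below are simplified stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

#define FSC_PERM 0x0c	/* illustrative value for a permission fault */

static bool mapping_already_exec;	/* stands in for stage2_is_exec(kvm, addr) */

static bool stage2_should_exec_sketch(bool exec_fault, unsigned long fault_status)
{
	/* Execution fault: icache/dcache are coherent, allow execute. */
	if (exec_fault)
		return true;

	/* Permission (e.g. write) fault: preserve whatever we already had. */
	return fault_status == FSC_PERM && mapping_already_exec;
}

int main(void)
{
	/* The same helper drives both the PTE and the PMD (hugepage) path. */
	mapping_already_exec = true;
	printf("write perm fault, XN already clear -> exec = %d\n",
	       stage2_should_exec_sketch(false, FSC_PERM));
	printf("exec fault                         -> exec = %d\n",
	       stage2_should_exec_sketch(true, 0));
	return 0;
}

Both calls above print 1: an execution fault always grants execute, and a
permission fault preserves an existing executable mapping. Hoisting this
predicate (together with the dirty marking and cache maintenance) out of the
per-page-size branches is what lets the PTE and PMD paths, and later the PUD
hugepage path, share the same logic.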