From: Punit Agrawal
To: kvmarm@lists.cs.columbia.edu
Cc: Punit Agrawal, marc.zyngier@arm.com, will.deacon@arm.com,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	Suzuki K Poulose, Christoffer Dall
Subject: [PATCH v7 2/9] KVM: arm/arm64: Share common code in user_mem_abort()
Date: Mon, 24 Sep 2018 18:45:45 +0100
Message-Id: <20180924174552.8387-3-punit.agrawal@arm.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180924174552.8387-1-punit.agrawal@arm.com>
References: <20180924174552.8387-1-punit.agrawal@arm.com>

The code for operations such as marking the pfn as dirty, and
dcache/icache maintenance during stage 2
fault handling is duplicated between normal pages and PMD hugepages.
Instead of creating another copy of the operations when we introduce
PUD hugepages, let's share them across the different page sizes.

Signed-off-by: Punit Agrawal
Cc: Suzuki K Poulose
Cc: Christoffer Dall
Cc: Marc Zyngier
---
 virt/kvm/arm/mmu.c | 45 +++++++++++++++++++++++++++++----------------
 1 file changed, 29 insertions(+), 16 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index c23a1b323aad..5b76ee204000 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1490,7 +1490,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	kvm_pfn_t pfn;
 	pgprot_t mem_type = PAGE_S2;
 	bool logging_active = memslot_is_logging(memslot);
-	unsigned long flags = 0;
+	unsigned long vma_pagesize, flags = 0;
 
 	write_fault = kvm_is_write_fault(vcpu);
 	exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
@@ -1510,10 +1510,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
+	vma_pagesize = vma_kernel_pagesize(vma);
+	if (vma_pagesize == PMD_SIZE && !logging_active) {
 		hugetlb = true;
 		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
 	} else {
+		/*
+		 * Fallback to PTE if it's not one of the Stage 2
+		 * supported hugepage sizes
+		 */
+		vma_pagesize = PAGE_SIZE;
+
 		/*
 		 * Pages belonging to memslots that don't have the same
 		 * alignment for userspace and IPA cannot be mapped using
@@ -1579,23 +1586,34 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (mmu_notifier_retry(kvm, mmu_seq))
 		goto out_unlock;
 
-	if (!hugetlb && !force_pte)
+	if (!hugetlb && !force_pte) {
+		/*
+		 * Only PMD_SIZE transparent hugepages(THP) are
+		 * currently supported. This code will need to be
+		 * updated to support other THP sizes.
+		 */
 		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
+		if (hugetlb)
+			vma_pagesize = PMD_SIZE;
+	}
+
+	if (writable)
+		kvm_set_pfn_dirty(pfn);
 
-	if (hugetlb) {
+	if (fault_status != FSC_PERM)
+		clean_dcache_guest_page(pfn, vma_pagesize);
+
+	if (exec_fault)
+		invalidate_icache_guest_page(pfn, vma_pagesize);
+
+	if (hugetlb && vma_pagesize == PMD_SIZE) {
 		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
 
 		new_pmd = pmd_mkhuge(new_pmd);
-		if (writable) {
+		if (writable)
 			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
-			kvm_set_pfn_dirty(pfn);
-		}
-
-		if (fault_status != FSC_PERM)
-			clean_dcache_guest_page(pfn, PMD_SIZE);
 
 		if (exec_fault) {
 			new_pmd = kvm_s2pmd_mkexec(new_pmd);
-			invalidate_icache_guest_page(pfn, PMD_SIZE);
 		} else if (fault_status == FSC_PERM) {
 			/* Preserve execute if XN was already cleared */
 			if (stage2_is_exec(kvm, fault_ipa))
@@ -1608,16 +1626,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 		if (writable) {
 			new_pte = kvm_s2pte_mkwrite(new_pte);
-			kvm_set_pfn_dirty(pfn);
 			mark_page_dirty(kvm, gfn);
 		}
 
-		if (fault_status != FSC_PERM)
-			clean_dcache_guest_page(pfn, PAGE_SIZE);
-
 		if (exec_fault) {
 			new_pte = kvm_s2pte_mkexec(new_pte);
-			invalidate_icache_guest_page(pfn, PAGE_SIZE);
 		} else if (fault_status == FSC_PERM) {
 			/* Preserve execute if XN was already cleared */
 			if (stage2_is_exec(kvm, fault_ipa))
-- 
2.18.0