Subject: Re: [PATCH v8 2/9] KVM: arm/arm64: Share common code in user_mem_abort()
From: Marc Zyngier
Organization: ARM Ltd
To: Punit Agrawal, kvmarm@lists.cs.columbia.edu
Cc: will.deacon@arm.com, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, suzuki.poulose@arm.com, Christoffer Dall
Date: Wed, 3 Oct 2018 16:20:39 +0100
Message-ID: <1eb4df93-4610-ac83-fec3-6a7efeddfc4f@arm.com>
References: <20181001155443.23032-1-punit.agrawal@arm.com> <20181001155443.23032-3-punit.agrawal@arm.com>
In-Reply-To: <20181001155443.23032-3-punit.agrawal@arm.com>

Hi Punit,

On 01/10/18 16:54, Punit Agrawal wrote:
> The code for operations such as marking the pfn as dirty, and
> dcache/icache maintenance during stage 2 fault handling is duplicated
> between normal pages and PMD hugepages.
>
> Instead of creating another copy of the operations when we introduce
> PUD hugepages, let's share them across the different pagesizes.
>
> Signed-off-by: Punit Agrawal
> Cc: Suzuki K Poulose
> Cc: Christoffer Dall
> Cc: Marc Zyngier
> ---
>  virt/kvm/arm/mmu.c | 45 +++++++++++++++++++++++++++++----------------
>  1 file changed, 29 insertions(+), 16 deletions(-)
>
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index c23a1b323aad..5b76ee204000 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1490,7 +1490,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	kvm_pfn_t pfn;
>  	pgprot_t mem_type = PAGE_S2;
>  	bool logging_active = memslot_is_logging(memslot);
> -	unsigned long flags = 0;
> +	unsigned long vma_pagesize, flags = 0;
>
>  	write_fault = kvm_is_write_fault(vcpu);
>  	exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
> @@ -1510,10 +1510,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		return -EFAULT;
>  	}
>
> -	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
> +	vma_pagesize = vma_kernel_pagesize(vma);
> +	if (vma_pagesize == PMD_SIZE && !logging_active) {
>  		hugetlb = true;
>  		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
>  	} else {
> +		/*
> +		 * Fallback to PTE if it's not one of the Stage 2
> +		 * supported hugepage sizes
> +		 */
> +		vma_pagesize = PAGE_SIZE;
> +
>  		/*
>  		 * Pages belonging to memslots that don't have the same
>  		 * alignment for userspace and IPA cannot be mapped using
> @@ -1579,23 +1586,34 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	if (mmu_notifier_retry(kvm, mmu_seq))
>  		goto out_unlock;
>
> -	if (!hugetlb && !force_pte)
> +	if (!hugetlb && !force_pte) {
> +		/*
> +		 * Only PMD_SIZE transparent hugepages(THP) are
> +		 * currently supported. This code will need to be
> +		 * updated to support other THP sizes.
> +		 */
>  		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
> +		if (hugetlb)
> +			vma_pagesize = PMD_SIZE;
> +	}
> +
> +	if (writable)
> +		kvm_set_pfn_dirty(pfn);
>
> -	if (hugetlb) {
> +	if (fault_status != FSC_PERM)
> +		clean_dcache_guest_page(pfn, vma_pagesize);
> +
> +	if (exec_fault)
> +		invalidate_icache_guest_page(pfn, vma_pagesize);
> +
> +	if (hugetlb && vma_pagesize == PMD_SIZE) {

Can you end up in a situation where hugetlb==false and
vma_pagesize == PMD_SIZE? If that's the case, then the above CMOs are
not done at the same granularity as they were before this patch.

If that cannot happen, then the above condition can be simplified.

Which one is it?

Thanks,

        M.
-- 
Jazz is not dead. It just smells funny...
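
For illustration, here is a minimal userspace-only sketch of the
vma_pagesize/hugetlb flow quoted above. Everything in it (flow(), the
thp_backed flag, the exhaustive loop in main(), the MODEL_* constants)
is a simplified stand-in rather than the real user_mem_abort() code,
and it only models the assignments visible in the quoted hunks:

/*
 * Minimal userspace model of the vma_pagesize/hugetlb assignments in
 * the quoted hunks. flow(), thp_backed and the loop in main() are
 * simplified stand-ins, not the real KVM code.
 */
#include <stdbool.h>
#include <stdio.h>

#define MODEL_PAGE_SIZE 4096UL
#define MODEL_PMD_SIZE  (2UL * 1024 * 1024)

static void flow(unsigned long vma_size, bool logging_active,
                 bool force_pte, bool thp_backed)
{
        bool hugetlb = false;
        unsigned long vma_pagesize = vma_size;

        if (vma_pagesize == MODEL_PMD_SIZE && !logging_active) {
                hugetlb = true;
        } else {
                /* Fall back to PTE-sized mappings, as in the patch. */
                vma_pagesize = MODEL_PAGE_SIZE;
        }

        if (!hugetlb && !force_pte) {
                /* Stand-in for transparent_hugepage_adjust() succeeding. */
                hugetlb = thp_backed;
                if (hugetlb)
                        vma_pagesize = MODEL_PMD_SIZE;
        }

        printf("vma=%7lu logging=%d force_pte=%d thp=%d -> hugetlb=%d pagesize=%lu\n",
               vma_size, logging_active, force_pte, thp_backed,
               hugetlb, vma_pagesize);
}

int main(void)
{
        /* Enumerate every input combination the model knows about. */
        unsigned long sizes[] = { MODEL_PAGE_SIZE, MODEL_PMD_SIZE };

        for (int s = 0; s < 2; s++)
                for (int l = 0; l < 2; l++)
                        for (int f = 0; f < 2; f++)
                                for (int t = 0; t < 2; t++)
                                        flow(sizes[s], l, f, t);
        return 0;
}

In this reduced model, every combination that ends with
vma_pagesize == MODEL_PMD_SIZE also ends with hugetlb == true; whether
the same holds for the full function, including the paths not quoted
here, is exactly the question raised above.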