Subject: Re: [PATCH v4 2/2] kvm/arm64: Try stage2 block mapping for host device MMIO
From: Keqian Zhu <zhukeqian1@huawei.com>
To: Marc Zyngier
Date: Thu, 15 Apr 2021 22:08:09 +0800
Message-ID: <8f55b64f-b4dd-700e-c997-8de9c5ea282f@huawei.com>
In-Reply-To: <20210415140328.24200-3-zhukeqian1@huawei.com>
References: <20210415140328.24200-1-zhukeqian1@huawei.com>
 <20210415140328.24200-3-zhukeqian1@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Marc,

On 2021/4/15 22:03, Keqian Zhu wrote:
> The MMIO region of a device may be huge (GB level), so try to use
> block mapping in stage2 to speed up both map and unmap.
>
> Compared to normal memory mapping, we should consider two more
> points when trying block mapping for an MMIO region:
>
> 1. For normal memory mapping, the PA (host physical address) and
> HVA have the same alignment within PUD_SIZE or PMD_SIZE when we use
> the HVA to request a hugepage, so we don't need to consider PA
> alignment when verifying block mapping. But for device memory
> mapping, the PA and HVA may have different alignment.
>
> 2. For normal memory mapping, we are sure the hugepage size properly
> fits into the vma, so we don't check whether the mapping size exceeds
> the boundary of the vma. But for device memory mapping, we should pay
> attention to this.
>
> This adds get_vma_page_shift() to get the page shift for both normal
> memory and device MMIO regions, and checks these two points when
> selecting the block mapping size for an MMIO region.
>
> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
> ---
>  arch/arm64/kvm/mmu.c | 61 ++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 51 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index c59af5ca01b0..5a1cc7751e6d 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -738,6 +738,35 @@ transparent_hugepage_adjust(struct kvm_memory_slot *memslot,
>  	return PAGE_SIZE;
>  }
>
> +static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
> +{
> +	unsigned long pa;
> +
> +	if (is_vm_hugetlb_page(vma) && !(vma->vm_flags & VM_PFNMAP))
> +		return huge_page_shift(hstate_vma(vma));
> +
> +	if (!(vma->vm_flags & VM_PFNMAP))
> +		return PAGE_SHIFT;
> +
> +	VM_BUG_ON(is_vm_hugetlb_page(vma));
> +
> +	pa = (vma->vm_pgoff << PAGE_SHIFT) + (hva - vma->vm_start);
> +
> +#ifndef __PAGETABLE_PMD_FOLDED
> +	if ((hva & (PUD_SIZE - 1)) == (pa & (PUD_SIZE - 1)) &&
> +	    ALIGN_DOWN(hva, PUD_SIZE) >= vma->vm_start &&
> +	    ALIGN(hva, PUD_SIZE) <= vma->vm_end)
> +		return PUD_SHIFT;
> +#endif
> +
> +	if ((hva & (PMD_SIZE - 1)) == (pa & (PMD_SIZE - 1)) &&
> +	    ALIGN_DOWN(hva, PMD_SIZE) >= vma->vm_start &&
> +	    ALIGN(hva, PMD_SIZE) <= vma->vm_end)
> +		return PMD_SHIFT;
> +
> +	return PAGE_SHIFT;
> +}
> +
>  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  			  struct kvm_memory_slot *memslot, unsigned long hva,
>  			  unsigned long fault_status)
> @@ -769,7 +798,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		return -EFAULT;
>  	}
>
> -	/* Let's check if we will get back a huge page backed by hugetlbfs */
> +	/*
> +	 * Let's check if we will get back a huge page backed by hugetlbfs, or
> +	 * get block mapping for device MMIO region.
> +	 */
>  	mmap_read_lock(current->mm);
>  	vma = find_vma_intersection(current->mm, hva, hva + 1);
>  	if (unlikely(!vma)) {
> @@ -778,15 +810,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		return -EFAULT;
>  	}
>
> -	if (is_vm_hugetlb_page(vma))
> -		vma_shift = huge_page_shift(hstate_vma(vma));
> -	else
> -		vma_shift = PAGE_SHIFT;
> -
> -	if (logging_active ||
> -	    (vma->vm_flags & VM_PFNMAP)) {
> +	/*
> +	 * logging_active is guaranteed to never be true for VM_PFNMAP
> +	 * memslots.
> +	 */
> +	if (logging_active) {
>  		force_pte = true;
>  		vma_shift = PAGE_SHIFT;
> +	} else {
> +		vma_shift = get_vma_page_shift(vma, hva);
>  	}

I use an if/else structure in v4, please check it.

Thanks very much!

BRs,
Keqian

>
>  	switch (vma_shift) {
> @@ -854,8 +886,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		return -EFAULT;
>
>  	if (kvm_is_device_pfn(pfn)) {
> +		/*
> +		 * If the page was identified as device early by looking at
> +		 * the VMA flags, vma_pagesize is already representing the
> +		 * largest quantity we can map.  If instead it was mapped
> +		 * via gfn_to_pfn_prot(), vma_pagesize is set to PAGE_SIZE
> +		 * and must not be upgraded.
> +		 *
> +		 * In both cases, we don't let transparent_hugepage_adjust()
> +		 * change things at the last minute.
> +		 */
>  		device = true;
> -		force_pte = true;
>  	} else if (logging_active && !write_fault) {
>  		/*
>  		 * Only actually map the page as writable if this was a write
> @@ -876,7 +917,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	 * If we are not forced to use page mapping, check if we are
>  	 * backed by a THP and thus use block mapping if possible.
>  	 */
> -	if (vma_pagesize == PAGE_SIZE && !force_pte)
> +	if (vma_pagesize == PAGE_SIZE && !(force_pte || device))
>  		vma_pagesize = transparent_hugepage_adjust(memslot, hva,
>  							   &pfn, &fault_ipa);
>  	if (writable)
>
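
To make point 1 of the commit message concrete, here is a minimal, self-contained
userspace sketch of the alignment rule that get_vma_page_shift() applies. It is not
part of the patch: the helper name block_mapping_possible() and all addresses and
sizes are hypothetical, and PMD_SIZE/PUD_SIZE assume 4K pages on a 64-bit host
(2 MiB and 1 GiB blocks).

/*
 * Sketch of the rule: a block mapping of a given size is possible only
 * when HVA and PA share the same offset within that block size, and the
 * block rounded out from the HVA still fits inside the VMA.
 */
#include <stdio.h>

#define PMD_SIZE	(2UL << 20)	/* 2 MiB, assuming 4K pages */
#define PUD_SIZE	(1UL << 30)	/* 1 GiB */
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))

static int block_mapping_possible(unsigned long hva, unsigned long pa,
				  unsigned long vm_start, unsigned long vm_end,
				  unsigned long size)
{
	return (hva & (size - 1)) == (pa & (size - 1)) &&
	       ALIGN_DOWN(hva, size) >= vm_start &&
	       ALIGN(hva, size) <= vm_end;
}

int main(void)
{
	/* Hypothetical 4 GiB BAR mapped at these made-up addresses. */
	unsigned long vm_start = 0x7f0040000000UL;	/* 1 GiB aligned HVA */
	unsigned long vm_end   = vm_start + (4UL << 30);
	unsigned long pa       = 0xe0000000UL;	/* 512 MiB aligned, not 1 GiB */
	unsigned long hva      = vm_start;

	/* HVA is 1 GiB aligned but PA is not, so a PUD block is rejected... */
	printf("PUD block: %s\n",
	       block_mapping_possible(hva, pa, vm_start, vm_end, PUD_SIZE) ?
	       "ok" : "rejected");
	/* ...while both share the same 2 MiB offset, so PMD blocks work. */
	printf("PMD block: %s\n",
	       block_mapping_possible(hva, pa, vm_start, vm_end, PMD_SIZE) ?
	       "ok" : "rejected");
	return 0;
}

Note that the check only requires the HVA and PA to share the same offset within
the block size, not that both be block aligned: the faulting address is rounded
down to the block boundary later, and that rounding only lands on the right PA
when the two offsets match.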