From: Keqian Zhu
To: Marc Zyngier
CC: Will Deacon, Catalin Marinas, Mark Rutland, James Morse, Suzuki K Poulose, Julien Thierry
Subject: Re: [RFC PATCH v2 2/2] kvm/arm64: Try stage2 block mapping for host device MMIO
Date: Wed, 14 Apr 2021 10:48:50 +0800
In-Reply-To: <878s5up71v.wl-maz@kernel.org>
References: <20210316134338.18052-1-zhukeqian1@huawei.com> <20210316134338.18052-3-zhukeqian1@huawei.com> <878s5up71v.wl-maz@kernel.org>
Hi Marc,

I think I have fully tested this patch. The next step is to add some
restriction on the HVA in the vfio module, so that we can build block
mappings for it with a higher probability; a rough sketch of that idea
follows below.
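Roughly, the idea is to give the vfio device fd a get_unmapped_area
hook, so that the mmap'd HVA of a BAR shares the PA's offset within
PMD_SIZE. Something like this (untested sketch only; vfio_bar_pa() is
just a placeholder for however we would look up the BAR's physical
address, not an existing vfio helper):

static unsigned long vfio_pci_get_unmapped_area(struct file *file,
		unsigned long addr, unsigned long len,
		unsigned long pgoff, unsigned long flags)
{
	/* PA backing this mapping; placeholder helper, not real vfio API */
	phys_addr_t pa = vfio_bar_pa(file, pgoff);
	unsigned long hva;

	/* over-allocate by one block so the start can slide forward */
	hva = current->mm->get_unmapped_area(file, addr, len + PMD_SIZE,
					     pgoff, flags);
	if (IS_ERR_VALUE(hva))
		return hva;

	/* make (hva & (PMD_SIZE - 1)) == (pa & (PMD_SIZE - 1)) */
	return hva + (((pa & (PMD_SIZE - 1)) - hva) & (PMD_SIZE - 1));
}

This is similar in spirit to the trick thp_get_unmapped_area() uses for
THP-aligned file mappings, just keyed on the BAR's PA instead of the
file offset.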
Is there anything to improve? If not, could you apply it? ^_^

Thanks,
Keqian

On 2021/4/7 21:18, Marc Zyngier wrote:
> On Tue, 16 Mar 2021 13:43:38 +0000,
> Keqian Zhu wrote:
>>
>> The MMIO region of a device may be huge (GB level); try to use
>> block mapping in stage 2 to speed up both map and unmap.
>>
>> Compared to normal memory mapping, we should consider two more
>> points when trying block mapping for an MMIO region:
>>
>> 1. For normal memory mapping, the PA (host physical address) and
>> HVA have the same alignment within PUD_SIZE or PMD_SIZE when we use
>> the HVA to request a hugepage, so we don't need to consider PA
>> alignment when verifying block mapping. But for device memory
>> mapping, the PA and HVA may have different alignment.
>>
>> 2. For normal memory mapping, we are sure the hugepage size properly
>> fits into the vma, so we don't check whether the mapping size exceeds
>> the boundary of the vma. But for device memory mapping, we should pay
>> attention to this.
>>
>> This adds device_rough_page_shift() to check these two points when
>> selecting the block mapping size.
>>
>> Signed-off-by: Keqian Zhu
>> ---
>>
>> Mainly for RFC, not fully tested. I will fully test it when the
>> code logic is well accepted.
>>
>> ---
>>  arch/arm64/kvm/mmu.c | 42 ++++++++++++++++++++++++++++++++++++++----
>>  1 file changed, 38 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index c59af5ca01b0..224aa15eb4d9 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -624,6 +624,36 @@ static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
>>  	send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, lsb, current);
>>  }
>>
>> +/*
>> + * Find a mapping size that properly fits inside the intersection of the
>> + * vma and the memslot, and such that hva and pa have the same alignment
>> + * to this mapping size. It's rough because there are still other
>> + * restrictions, checked by fault_supports_stage2_huge_mapping() below.
>
> I don't think these restrictions make complete sense to me. If this is
> a PFNMAP VMA, we should use the biggest mapping size that covers the
> VMA, and not more than the VMA.
>
>> + */
>> +static short device_rough_page_shift(struct kvm_memory_slot *memslot,
>> +				     struct vm_area_struct *vma,
>> +				     unsigned long hva)
>> +{
>> +	size_t size = memslot->npages * PAGE_SIZE;
>> +	hva_t sec_start = max(memslot->userspace_addr, vma->vm_start);
>> +	hva_t sec_end = min(memslot->userspace_addr + size, vma->vm_end);
>> +	phys_addr_t pa = (vma->vm_pgoff << PAGE_SHIFT) + (hva - vma->vm_start);
>> +
>> +#ifndef __PAGETABLE_PMD_FOLDED
>> +	if ((hva & (PUD_SIZE - 1)) == (pa & (PUD_SIZE - 1)) &&
>> +	    ALIGN_DOWN(hva, PUD_SIZE) >= sec_start &&
>> +	    ALIGN(hva, PUD_SIZE) <= sec_end)
>> +		return PUD_SHIFT;
>> +#endif
>> +
>> +	if ((hva & (PMD_SIZE - 1)) == (pa & (PMD_SIZE - 1)) &&
>> +	    ALIGN_DOWN(hva, PMD_SIZE) >= sec_start &&
>> +	    ALIGN(hva, PMD_SIZE) <= sec_end)
>> +		return PMD_SHIFT;
>> +
>> +	return PAGE_SHIFT;
>> +}
>> +
>>  static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
>>  					       unsigned long hva,
>>  					       unsigned long map_size)
>> @@ -769,7 +799,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  		return -EFAULT;
>>  	}
>>
>> -	/* Let's check if we will get back a huge page backed by hugetlbfs */
>> +	/*
>> +	 * Let's check if we will get back a huge page backed by hugetlbfs, or
>> +	 * get block mapping for a device MMIO region.
>> +	 */
>>  	mmap_read_lock(current->mm);
>>  	vma = find_vma_intersection(current->mm, hva, hva + 1);
>>  	if (unlikely(!vma)) {
>> @@ -780,11 +813,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>
>>  	if (is_vm_hugetlb_page(vma))
>>  		vma_shift = huge_page_shift(hstate_vma(vma));
>> +	else if (vma->vm_flags & VM_PFNMAP)
>> +		vma_shift = device_rough_page_shift(memslot, vma, hva);
>>  	else
>>  		vma_shift = PAGE_SHIFT;
>>
>> -	if (logging_active ||
>> -	    (vma->vm_flags & VM_PFNMAP)) {
>> +	if (logging_active) {
>>  		force_pte = true;
>>  		vma_shift = PAGE_SHIFT;
>
> But why should we downgrade to page-size mappings if logging? This is
> a device, and you aren't moving the device around, are you? Or is your
> device actually memory with a device mapping that you are trying to
> migrate?
>
>>  	}
>> @@ -855,7 +889,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>
>>  	if (kvm_is_device_pfn(pfn)) {
>>  		device = true;
>> -		force_pte = true;
>> +		force_pte = (vma_pagesize == PAGE_SIZE);
>>  	} else if (logging_active && !write_fault) {
>>  		/*
>>  		 * Only actually map the page as writable if this was a write
>> --
>> 2.19.1
>>
>
> Thanks,
>
> 	M.
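PS: for reference, a worked example of how device_rough_page_shift()
selects the mapping size, assuming 4K pages (so PMD_SIZE = 2MiB and
PUD_SIZE = 1GiB) and made-up addresses:

  sec_start = 0x8000000000, sec_end = 0x8010000000  (a 256MiB section)
  hva       = 0x8000200000, pa = 0xe000200000       (both 2MiB aligned)

  PUD: hva and pa share their offset within PUD_SIZE (0x200000), but
       ALIGN(hva, PUD_SIZE) = 0x8040000000 > sec_end, so a 1GiB block
       would spill past the section -> rejected.
  PMD: (hva & (PMD_SIZE - 1)) == (pa & (PMD_SIZE - 1)) == 0, and
       [0x8000200000, 0x8000400000) lies inside the section
       -> return PMD_SHIFT.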