Subject: Re: [RFC PATCH] kvm: arm64: Try stage2 block mapping for host device MMIO
From: Keqian Zhu
To: Marc Zyngier
CC: Will Deacon, Catalin Marinas, Mark Rutland, James Morse, Robin Murphy,
    Joerg Roedel, Daniel Lezcano, Thomas Gleixner, "Suzuki K Poulose",
    Julien Thierry, Andrew Morton, Alexios Zavras
Date: Mon, 25 Jan 2021 19:25:03 +0800
References: <20210122083650.21812-1-zhukeqian1@huawei.com> <09d89355cdbbd19c456699774a9a980a@kernel.org>
In-Reply-To: <09d89355cdbbd19c456699774a9a980a@kernel.org>
Hi Marc,

On 2021/1/22 17:45, Marc Zyngier wrote:
> On 2021-01-22 08:36, Keqian Zhu wrote:
>> The MMIO region of a device may be huge (GB level), so try to use block
>> mapping in stage2 to speed up both map and unmap.
>>
>> Especially for unmap, it performs a TLBI right after each invalidation
>> of a PTE. If all mappings are of PAGE_SIZE, it takes a long time to
>> handle a GB-level range.
>
> This is only on VM teardown, right? Or do you unmap the device more often?
> Can you please quantify the speedup and the conditions this occurs in?
Yes, and there are some other paths (including what your patch series handles)
that also do the unmap action:

1. Guest reboot without S2FWB: stage2_unmap_vm(), which only unmaps guest regular RAM.
2. Userspace deletes a memslot: kvm_arch_flush_shadow_memslot().
3. Rollback of device MMIO mapping: kvm_arch_prepare_memory_region().
4. Rollback of dirty log tracking: if we enable hugepages for guest RAM, after dirty
   logging is stopped, the newly created block mappings will unmap all page mappings.
5. MMU notifier: kvm_unmap_hva_range(). AFAICS, we take this path on VM teardown or
   when the guest resets pass-through devices. The bugfix[1] gives the reason for
   unmapping the MMIO region when the guest resets pass-through devices.

For unmap of the MMIO region, which this patch addresses:
- Point 1 does not apply.
- Point 2 occurs when userspace unplugs pass-through devices.
- Point 3 can occur, but rarely.
- Point 4 does not apply.
- Point 5 occurs on VM teardown or when the guest resets pass-through devices.

And I had a look at your patch series; it can solve:
- For VM teardown, elide CMOs and perform a VMALL instead of individual invalidations
  (but the current kernel does not go through this path on VM teardown).
- For rollback of dirty log tracking, elide CMOs.
- For kvm_unmap_hva_range(), if the event is MMU_NOTIFY_UNMAP, elide CMOs.
  (But I doubt the CMOs in unmap: as we perform CMOs in user_mem_abort() when
  installing a new stage2 mapping for the VM, maybe the CMOs in unmap are
  unnecessary under all conditions :-) ?)

So we are solving different parts of unmap, and the two do not conflict. At least
this patch can still speed up map of the device MMIO region, and speed up unmap of
it even if we do not need to perform CMOs and TLBIs ;-).

Speedup: unmap 8GB MMIO on FPGA.

              before          after opt
    cost      30+ minutes     949ms
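As a back-of-the-envelope illustration of why the page-level path is so slow here
(this is only a sketch: the 8GB region size and the 4KB granule with 2MB/1GB block
sizes below are assumptions for the example, not extra measurements):

	#include <stdio.h>

	int main(void)
	{
		/* Assumption for illustration: 4KB base granule, so blocks are 2MB and 1GB. */
		const unsigned long long region = 8ULL << 30;	/* example 8GB MMIO region */
		const unsigned long long granule[] = { 4ULL << 10, 2ULL << 20, 1ULL << 30 };
		const char *name[] = { "4KB pages ", "2MB blocks", "1GB blocks" };

		/* One stage-2 entry means one map call, and one per-PTE TLBI on unmap. */
		for (int i = 0; i < 3; i++)
			printf("%s: %llu stage-2 entries\n", name[i], region / granule[i]);
		return 0;
	}

With 4KB pages that is roughly 2 million entries for 8GB, versus 4096 with 2MB
blocks or 8 with 1GB blocks, which is where the map/unmap time goes.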
Thanks,
Keqian

>
> I have the feeling that we are just circling around another problem,
> which is that we could rely on a VM-wide TLBI when tearing down the
> guest. I worked on something like that[1] a long while ago, and parked
> it for some reason. Maybe it is worth reviving.
>
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/log/?h=kvm-arm64/elide-cmo-tlbi
>
>>
>> Signed-off-by: Keqian Zhu
>> ---
>>  arch/arm64/include/asm/kvm_pgtable.h | 11 +++++++++++
>>  arch/arm64/kvm/hyp/pgtable.c         | 15 +++++++++++++++
>>  arch/arm64/kvm/mmu.c                 | 12 ++++++++----
>>  3 files changed, 34 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
>> index 52ab38db04c7..2266ac45f10c 100644
>> --- a/arch/arm64/include/asm/kvm_pgtable.h
>> +++ b/arch/arm64/include/asm/kvm_pgtable.h
>> @@ -82,6 +82,17 @@ struct kvm_pgtable_walker {
>>  	const enum kvm_pgtable_walk_flags	flags;
>>  };
>>
>> +/**
>> + * kvm_supported_pgsize() - Get the max supported page size of a mapping.
>> + * @pgt:	Initialised page-table structure.
>> + * @addr:	Virtual address at which to place the mapping.
>> + * @end:	End virtual address of the mapping.
>> + * @phys:	Physical address of the memory to map.
>> + *
>> + * The smallest return value is PAGE_SIZE.
>> + */
>> +u64 kvm_supported_pgsize(struct kvm_pgtable *pgt, u64 addr, u64 end, u64 phys);
>> +
>>  /**
>>   * kvm_pgtable_hyp_init() - Initialise a hypervisor stage-1 page-table.
>>   * @pgt:	Uninitialised page-table structure to initialise.
>> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
>> index bdf8e55ed308..ab11609b9b13 100644
>> --- a/arch/arm64/kvm/hyp/pgtable.c
>> +++ b/arch/arm64/kvm/hyp/pgtable.c
>> @@ -81,6 +81,21 @@ static bool kvm_block_mapping_supported(u64 addr, u64 end, u64 phys, u32 level)
>>  	return IS_ALIGNED(addr, granule) && IS_ALIGNED(phys, granule);
>>  }
>>
>> +u64 kvm_supported_pgsize(struct kvm_pgtable *pgt, u64 addr, u64 end, u64 phys)
>> +{
>> +	u32 lvl;
>> +	u64 pgsize = PAGE_SIZE;
>> +
>> +	for (lvl = pgt->start_level; lvl < KVM_PGTABLE_MAX_LEVELS; lvl++) {
>> +		if (kvm_block_mapping_supported(addr, end, phys, lvl)) {
>> +			pgsize = kvm_granule_size(lvl);
>> +			break;
>> +		}
>> +	}
>> +
>> +	return pgsize;
>> +}
>> +
>>  static u32 kvm_pgtable_idx(struct kvm_pgtable_walk_data *data, u32 level)
>>  {
>>  	u64 shift = kvm_granule_shift(level);
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 7d2257cc5438..80b403fc8e64 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -499,7 +499,8 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
>>  int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>>  			  phys_addr_t pa, unsigned long size, bool writable)
>>  {
>> -	phys_addr_t addr;
>> +	phys_addr_t addr, end;
>> +	unsigned long pgsize;
>>  	int ret = 0;
>>  	struct kvm_mmu_memory_cache cache = { 0, __GFP_ZERO, NULL, };
>>  	struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
>> @@ -509,21 +510,24 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>>
>>  	size += offset_in_page(guest_ipa);
>>  	guest_ipa &= PAGE_MASK;
>> +	end = guest_ipa + size;
>>
>> -	for (addr = guest_ipa; addr < guest_ipa + size; addr += PAGE_SIZE) {
>> +	for (addr = guest_ipa; addr < end; addr += pgsize) {
>>  		ret = kvm_mmu_topup_memory_cache(&cache,
>>  						 kvm_mmu_cache_min_pages(kvm));
>>  		if (ret)
>>  			break;
>>
>> +		pgsize = kvm_supported_pgsize(pgt, addr, end, pa);
>> +
>>  		spin_lock(&kvm->mmu_lock);
>> -		ret = kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot,
>> +		ret = kvm_pgtable_stage2_map(pgt, addr, pgsize, pa, prot,
>>  					     &cache);
>>  		spin_unlock(&kvm->mmu_lock);
>>  		if (ret)
>>  			break;
>>
>> -		pa += PAGE_SIZE;
>> +		pa += pgsize;
>>  	}
>>
>>  	kvm_mmu_free_memory_cache(&cache);
>
> This otherwise looks neat enough.
>
> Thanks,
>
> M.
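For readers who want to try the block-size selection outside the kernel tree, here is a
minimal userspace sketch of the walk kvm_supported_pgsize() performs in the patch above.
It assumes a 4KB granule with 1GB and 2MB block sizes; the addresses and region size in
main() are made-up examples, and the real code derives the candidate levels from
pgt->start_level rather than a fixed list.

	#include <stdio.h>

	typedef unsigned long long u64;

	#define SZ_4K	(4ULL << 10)
	#define SZ_2M	(2ULL << 20)
	#define SZ_1G	(1ULL << 30)

	/* Stand-in for kvm_block_mapping_supported(): a block of this size must
	 * fit in [addr, end) and both addr and phys must be aligned to it. */
	static int block_ok(u64 addr, u64 end, u64 phys, u64 granule)
	{
		return granule <= (end - addr) &&
		       !(addr & (granule - 1)) && !(phys & (granule - 1));
	}

	/* Stand-in for kvm_supported_pgsize(): largest usable granule, else 4KB. */
	static u64 supported_pgsize(u64 addr, u64 end, u64 phys)
	{
		const u64 granules[] = { SZ_1G, SZ_2M, SZ_4K };

		for (int i = 0; i < 3; i++)
			if (block_ok(addr, end, phys, granules[i]))
				return granules[i];
		return SZ_4K;
	}

	int main(void)
	{
		u64 ipa = SZ_1G, pa = 4 * SZ_1G, end = ipa + 8 * SZ_1G; /* example 8GB region */
		int calls = 0;

		/* Mirrors the loop in kvm_phys_addr_ioremap() after the patch. */
		for (u64 addr = ipa; addr < end; calls++) {
			u64 pgsize = supported_pgsize(addr, end, pa);

			addr += pgsize;
			pa += pgsize;
		}
		printf("%d map calls for the whole region\n", calls); /* 8, via 1GB blocks */
		return 0;
	}

With both the IPA and the physical address 1GB-aligned, the example region is covered in
8 map calls instead of ~2 million page-level ones; misaligned inputs simply fall back to
smaller granules.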