From: Yanan Wang <wangyanan55@huawei.com>
To: Marc Zyngier, Will Deacon, Quentin Perret, Alexandru Elisei
Cc: Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose, Gavin Shan, Yanan Wang
Subject: [PATCH v6 0/4] KVM: arm64: Improve efficiency of stage2 page table
Date: Wed, 16 Jun 2021 17:51:56 +0800
Message-ID: <20210616095200.38008-1-wangyanan55@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hello,

This series makes some efficiency improvements to the guest stage-2 page
table code, with test results to quantify the benefit.

Description of this series:

We currently perform CMOs on the D-cache and I-cache uniformly in
user_mem_abort() before calling the fault handlers. If we get concurrent
guest faults (e.g. translation faults, permission faults) or some
unnecessary guest faults caused by BBM (break-before-make), the CMOs for
the first vCPU are necessary while the later ones are not. By moving the
CMOs into the fault handlers, we can easily identify the conditions where
they are really needed and avoid the unnecessary ones. Since performing
CMOs is a time-consuming process, especially when flushing a block range,
this solution reduces KVM's load and improves the efficiency of the
stage-2 page table code.

We can imagine two specific scenarios that will gain much benefit:
1) During normal VM startup, this solution improves the efficiency of
handling the guest page faults incurred by vCPUs when initially
populating the stage-2 page tables.
2) After live migration, the heavy workload is resumed on the destination
VM, but all the stage-2 page tables need to be rebuilt at that moment, so
this solution eases the performance drop during the resume stage.

The following test results, originally from v3 [1], show how much benefit
the movement of CMOs introduces. The KVM selftest simulates a scenario of
concurrent guest memory accesses and measures the execution time KVM uses
to create new stage-2 mappings, update existing mappings, and
split/rebuild huge mappings during/after dirty logging.
hardware platform: HiSilicon Kunpeng920 Server
host kernel: Linux mainline v5.12-rc2
test tools: KVM selftest [2]

[1] https://lore.kernel.org/lkml/20210326031654.3716-1-wangyanan55@huawei.com/
[2] https://lore.kernel.org/lkml/20210302125751.19080-1-wangyanan55@huawei.com/

cmdline: ./kvm_page_table_test -m 4 -s anonymous -b 1G -v 80
(80 vcpus, 1G memory, page mappings (normal 4K))
KVM_CREATE_MAPPINGS: before 104.35s -> after 90.42s  +13.35%
KVM_UPDATE_MAPPINGS: before 78.64s  -> after 75.45s  +4.06%

cmdline: ./kvm_page_table_test -m 4 -s anonymous_thp -b 20G -v 40
(40 vcpus, 20G memory, block mappings (THP 2M))
KVM_CREATE_MAPPINGS: before 15.66s  -> after 6.92s   +55.80%
KVM_UPDATE_MAPPINGS: before 178.80s -> after 123.35s +31.00%
KVM_REBUILD_BLOCKS:  before 187.34s -> after 131.76s +30.65%

cmdline: ./kvm_page_table_test -m 4 -s anonymous_hugetlb_1gb -b 20G -v 40
(40 vcpus, 20G memory, block mappings (HUGETLB 1G))
KVM_CREATE_MAPPINGS: before 104.54s -> after 3.70s   +96.46%
KVM_UPDATE_MAPPINGS: before 174.20s -> after 115.94s +33.44%
KVM_REBUILD_BLOCKS:  before 103.95s -> after 2.96s   +97.15%

---

Changelogs:
v5->v6:
- convert the guest CMO functions into callbacks in kvm_pgtable_mm_ops (Marc)
- drop patch #6 in v5 since we are stuffing topup into the mmu_lock section (Quentin)
- rebased on latest kvmarm/tree
- v5: https://lore.kernel.org/lkml/20210415115032.35760-1-wangyanan55@huawei.com/

v4->v5:
- rebased on the latest kvmarm/tree to adapt to the new stage-2 page-table code
- v4: https://lore.kernel.org/lkml/20210409033652.28316-1-wangyanan55@huawei.com

---

Yanan Wang (4):
  KVM: arm64: Introduce cache maintenance callbacks for guest stage-2
  KVM: arm64: Introduce mm_ops member for structure stage2_attr_data
  KVM: arm64: Tweak parameters of guest cache maintenance functions
  KVM: arm64: Move guest CMOs to the fault handlers

 arch/arm64/include/asm/kvm_mmu.h     |  9 ++----
 arch/arm64/include/asm/kvm_pgtable.h |  7 +++++
 arch/arm64/kvm/hyp/pgtable.c         | 47 +++++++++++++++++++++-------
 arch/arm64/kvm/mmu.c                 | 39 ++++++++++-------------
 4 files changed, 62 insertions(+), 40 deletions(-)

-- 
2.23.0