From: Yanan Wang <wangyanan55@huawei.com>
To: Marc Zyngier, Will Deacon, Quentin Perret, Alexandru Elisei
Cc: Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose,
    Gavin Shan, Yanan Wang
Subject: [PATCH v7 3/4] KVM: arm64: Tweak parameters of guest cache maintenance functions
Date: Thu, 17 Jun 2021 18:58:23 +0800
Message-ID: <20210617105824.31752-4-wangyanan55@huawei.com>
In-Reply-To: <20210617105824.31752-1-wangyanan55@huawei.com>
References: <20210617105824.31752-1-wangyanan55@huawei.com>

Adjust the "kvm_pfn_t pfn" parameter of __clean_dcache_guest_page()
and __invalidate_icache_guest_page() to "void *va", which paves the
way for converting these two guest CMO functions into callbacks in
struct kvm_pgtable_mm_ops. No functional change.

Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
---
 arch/arm64/include/asm/kvm_mmu.h |  9 ++-------
 arch/arm64/kvm/mmu.c             | 28 +++++++++++++++-------------
 2 files changed, 17 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 25ed956f9af1..6844a7550392 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -187,10 +187,8 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 	return (vcpu_read_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
 }
 
-static inline void __clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
+static inline void __clean_dcache_guest_page(void *va, size_t size)
 {
-	void *va = page_address(pfn_to_page(pfn));
-
 	/*
 	 * With FWB, we ensure that the guest always accesses memory using
 	 * cacheable attributes, and we don't have to clean to PoC when
@@ -203,16 +201,13 @@ static inline void __clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
 	kvm_flush_dcache_to_poc(va, size);
 }
 
-static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn,
-						  unsigned long size)
+static inline void __invalidate_icache_guest_page(void *va, size_t size)
 {
 	if (icache_is_aliasing()) {
 		/* any kind of VIPT cache */
 		__flush_icache_all();
 	} else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
 		/* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
-		void *va = page_address(pfn_to_page(pfn));
-
 		invalidate_icache_range((unsigned long)va,
 					(unsigned long)va + size);
 	}
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 5742ba765ff9..b980f8a47cbb 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -126,6 +126,16 @@ static void *kvm_host_va(phys_addr_t phys)
 	return __va(phys);
 }
 
+static void clean_dcache_guest_page(void *va, size_t size)
+{
+	__clean_dcache_guest_page(va, size);
+}
+
+static void invalidate_icache_guest_page(void *va, size_t size)
+{
+	__invalidate_icache_guest_page(va, size);
+}
+
 /*
  * Unmapping vs dcache management:
  *
@@ -693,16 +703,6 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
 }
 
-static void clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
-{
-	__clean_dcache_guest_page(pfn, size);
-}
-
-static void invalidate_icache_guest_page(kvm_pfn_t pfn, unsigned long size)
-{
-	__invalidate_icache_guest_page(pfn, size);
-}
-
 static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
 {
 	send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, lsb, current);
@@ -1013,11 +1013,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		prot |= KVM_PGTABLE_PROT_W;
 
 	if (fault_status != FSC_PERM && !device)
-		clean_dcache_guest_page(pfn, vma_pagesize);
+		clean_dcache_guest_page(page_address(pfn_to_page(pfn)),
+					vma_pagesize);
 
 	if (exec_fault) {
 		prot |= KVM_PGTABLE_PROT_X;
-		invalidate_icache_guest_page(pfn, vma_pagesize);
+		invalidate_icache_guest_page(page_address(pfn_to_page(pfn)),
+					     vma_pagesize);
 	}
 
 	if (device)
@@ -1219,7 +1221,7 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	 * We've moved a page around, probably through CoW, so let's treat it
 	 * just like a translation fault and clean the cache to the PoC.
 	 */
-	clean_dcache_guest_page(pfn, PAGE_SIZE);
+	clean_dcache_guest_page(page_address(pfn_to_page(pfn)), PAGE_SIZE);
 
 	/*
 	 * The MMU notifiers will have unmapped a huge PMD before calling
--
2.23.0
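As background for the conversion above: page_address(pfn_to_page(pfn))
is the standard kernel idiom for turning a page frame number into its
linear-map virtual address, and hoisting it into the callers leaves the
CMO helpers with a plain (void *va, size_t size) signature that can be
stored in a function-pointer table such as kvm_pgtable_mm_ops. Below is
a minimal, self-contained user-space sketch of that design point; it is
not kernel code, and aside from clean_dcache_guest_page() and the
pfn-to-VA idiom it mirrors, every name (struct mm_ops, pfn_to_va,
fake_ram) is invented for illustration.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define NR_PAGES	4UL

typedef uint64_t pfn_t;

/* Toy "physical memory": pfns 0..NR_PAGES-1 map into this array. */
static unsigned char fake_ram[NR_PAGES * PAGE_SIZE];

/* Stand-in for page_address(pfn_to_page(pfn)): pfn -> virtual address. */
static void *pfn_to_va(pfn_t pfn)
{
	return &fake_ram[pfn << PAGE_SHIFT];
}

/*
 * With a (va, size) signature the helper needs no knowledge of pages
 * or pfns, so it can be referenced directly as a callback.
 */
static void clean_dcache_guest_page(void *va, size_t size)
{
	printf("clean dcache: va=%p size=%zu\n", va, size);
}

/* Rough model of the callback table the commit message refers to. */
struct mm_ops {
	void (*dcache_clean)(void *va, size_t size);
};

int main(void)
{
	struct mm_ops ops = { .dcache_clean = clean_dcache_guest_page };
	pfn_t pfn = 2;

	/* Callers do the pfn -> VA conversion, as in the patch above. */
	ops.dcache_clean(pfn_to_va(pfn), PAGE_SIZE);
	return 0;
}

The point of the sketch is that the callback consumer only ever sees a
VA and a size: the callee no longer depends on struct page, which is
what makes the helpers suitable as kvm_pgtable_mm_ops callbacks in the
next patch of the series.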