From: Keqian Zhu <zhukeqian1@huawei.com>
To: Will Deacon, Marc Zyngier
Cc: Catalin Marinas, Mark Rutland, James Morse, Suzuki K Poulose, Julien Thierry
Subject: [RFC PATCH v2 1/2] kvm/arm64: Remove the creation time's mapping of MMIO regions
Date: Tue, 16 Mar 2021 21:43:37 +0800
Message-ID: <20210316134338.18052-2-zhukeqian1@huawei.com>
In-Reply-To: <20210316134338.18052-1-zhukeqian1@huawei.com>
References: <20210316134338.18052-1-zhukeqian1@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

MMIO regions may be unmapped for many reasons and can be remapped by the
stage2 fault path. Mapping MMIO regions at memslot creation time is
therefore only a minor optimization, and it makes the two mapping paths
hard to keep in sync. Remove the mapping code while keeping the useful
sanity check.

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
 arch/arm64/kvm/mmu.c | 38 +++-----------------------------------
 1 file changed, 3 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 8711894db8c2..c59af5ca01b0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1301,7 +1301,6 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 {
         hva_t hva = mem->userspace_addr;
         hva_t reg_end = hva + mem->memory_size;
-        bool writable = !(mem->flags & KVM_MEM_READONLY);
         int ret = 0;
 
         if (change != KVM_MR_CREATE && change != KVM_MR_MOVE &&
@@ -1318,8 +1317,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
         mmap_read_lock(current->mm);
         /*
          * A memory region could potentially cover multiple VMAs, and any holes
-         * between them, so iterate over all of them to find out if we can map
-         * any of them right now.
+         * between them, so iterate over all of them.
          *
          *     +--------------------------------------------+
          * +---------------+----------------+   +----------------+
@@ -1330,50 +1328,20 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
          */
         do {
                 struct vm_area_struct *vma = find_vma(current->mm, hva);
-                hva_t vm_start, vm_end;
 
                 if (!vma || vma->vm_start >= reg_end)
                         break;
 
-                /*
-                 * Take the intersection of this VMA with the memory region
-                 */
-                vm_start = max(hva, vma->vm_start);
-                vm_end = min(reg_end, vma->vm_end);
-
                 if (vma->vm_flags & VM_PFNMAP) {
-                        gpa_t gpa = mem->guest_phys_addr +
-                                    (vm_start - mem->userspace_addr);
-                        phys_addr_t pa;
-
-                        pa = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
-                        pa += vm_start - vma->vm_start;
-
                         /* IO region dirty page logging not allowed */
                         if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES) {
                                 ret = -EINVAL;
-                                goto out;
-                        }
-
-                        ret = kvm_phys_addr_ioremap(kvm, gpa, pa,
-                                                    vm_end - vm_start,
-                                                    writable);
-                        if (ret)
                                 break;
+                        }
                 }
-                hva = vm_end;
+                hva = min(reg_end, vma->vm_end);
         } while (hva < reg_end);
 
-        if (change == KVM_MR_FLAGS_ONLY)
-                goto out;
-
-        spin_lock(&kvm->mmu_lock);
-        if (ret)
-                unmap_stage2_range(&kvm->arch.mmu, mem->guest_phys_addr, mem->memory_size);
-        else if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
-                stage2_flush_memslot(kvm, memslot);
-        spin_unlock(&kvm->mmu_lock);
-out:
         mmap_read_unlock(current->mm);
         return ret;
 }
-- 
2.19.1
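
As an illustration of what the patch leaves behind, here is a small standalone
userspace model (not kernel code) of the remaining VMA walk: after this change
the loop establishes no stage2 mapping at all, it only rejects dirty page
logging on PFNMAP-backed regions and otherwise leaves mapping to the stage2
fault path. The toy_vma, toy_find_vma and toy_check_memslot names, the flag
values and the sample layout in main() are invented for this sketch; only the
loop structure and the PFNMAP/dirty-logging check mirror the hunks above.

/*
 * Toy userspace model of the sanity-check loop kept by this patch in
 * kvm_arch_prepare_memory_region(): nothing is mapped here any more,
 * the walk only rejects dirty page logging on PFNMAP VMAs.  All names,
 * flag values and the sample layout are invented for illustration.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define TOY_VM_PFNMAP                0x1u  /* stand-in for VM_PFNMAP */
#define TOY_KVM_MEM_LOG_DIRTY_PAGES  0x2u  /* stand-in for KVM_MEM_LOG_DIRTY_PAGES */

typedef uint64_t hva_t;

struct toy_vma {
        hva_t vm_start;
        hva_t vm_end;
        unsigned int vm_flags;
};

/* Stand-in for find_vma(): first VMA whose end lies above @addr, or NULL. */
static struct toy_vma *toy_find_vma(struct toy_vma *vmas, int n, hva_t addr)
{
        for (int i = 0; i < n; i++)
                if (addr < vmas[i].vm_end)
                        return &vmas[i];
        return NULL;
}

/* Mirrors the post-patch walk over the userspace range [hva, hva + size). */
static int toy_check_memslot(struct toy_vma *vmas, int n, hva_t hva,
                             hva_t size, unsigned int slot_flags)
{
        hva_t reg_end = hva + size;
        int ret = 0;

        do {
                struct toy_vma *vma = toy_find_vma(vmas, n, hva);

                if (!vma || vma->vm_start >= reg_end)
                        break;

                if (vma->vm_flags & TOY_VM_PFNMAP) {
                        /* IO region dirty page logging not allowed */
                        if (slot_flags & TOY_KVM_MEM_LOG_DIRTY_PAGES) {
                                ret = -EINVAL;
                                break;
                        }
                }
                /* Advance past this VMA, capped at the end of the region. */
                hva = reg_end < vma->vm_end ? reg_end : vma->vm_end;
        } while (hva < reg_end);

        return ret;
}

int main(void)
{
        struct toy_vma vmas[] = {
                { 0x1000, 0x3000, 0 },              /* ordinary memory */
                { 0x3000, 0x6000, TOY_VM_PFNMAP },  /* MMIO-backed mapping */
        };

        /* A plain memslot spanning both VMAs is accepted (prints 0). */
        printf("plain slot:          %d\n",
               toy_check_memslot(vmas, 2, 0x1000, 0x4000, 0));
        /* Dirty logging over the PFNMAP VMA is rejected (prints -EINVAL). */
        printf("dirty log over MMIO: %d\n",
               toy_check_memslot(vmas, 2, 0x1000, 0x4000,
                                 TOY_KVM_MEM_LOG_DIRTY_PAGES));
        return 0;
}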