From: Keqian Zhu
Cc: Catalin Marinas, Marc Zyngier, James Morse, Will Deacon,
    Suzuki K Poulose, Sean Christopherson, Julien Thierry, Mark Brown,
    Thomas Gleixner, Andrew Morton, Alexios Zavras, Keqian Zhu
Subject: [PATCH 10/12] KVM: arm64: Save stage2 PTE dirty status if it is covered
Date: Tue, 16 Jun 2020 17:35:51 +0800
Message-ID: <20200616093553.27512-11-zhukeqian1@huawei.com>
In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com>
References: <20200616093553.27512-1-zhukeqian1@huawei.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

There are two types of operations that will
change the PTE and may cover a dirty status set by hardware:

1. Stage2 PTE unmapping: page table merging (the revert of huge page
   table dissolving), kvm_unmap_hva_range() and so on.
2. Stage2 PTE changing: including user_mem_abort(),
   kvm_mmu_notifier_change_pte() and so on.

All of the operations above eventually invoke kvm_set_pte(), so that is
where we should save the dirty status into the memslot dirty bitmap.

Question: should we acquire kvm_slots_lock when invoking
mark_page_dirty()? It seems that user_mem_abort() does not acquire this
lock when invoking it.

Signed-off-by: Keqian Zhu
---
 arch/arm64/kvm/mmu.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 898e272a2c07..a230fbcf3889 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -294,15 +294,23 @@ static void unmap_stage2_ptes(struct kvm *kvm, pmd_t *pmd,
 {
 	phys_addr_t start_addr = addr;
 	pte_t *pte, *start_pte;
+	bool dirty_covered;
+	int idx;
 
 	start_pte = pte = pte_offset_kernel(pmd, addr);
 	do {
 		if (!pte_none(*pte)) {
 			pte_t old_pte = *pte;
 
-			kvm_set_pte(pte, __pte(0));
+			dirty_covered = kvm_set_pte(pte, __pte(0));
 			kvm_tlb_flush_vmid_ipa(kvm, addr);
 
+			if (dirty_covered) {
+				idx = srcu_read_lock(&kvm->srcu);
+				mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+				srcu_read_unlock(&kvm->srcu, idx);
+			}
+
			/* No need to invalidate the cache for device mappings */
 			if (!kvm_is_device_pfn(pte_pfn(old_pte)))
 				kvm_flush_dcache_pte(old_pte);
@@ -1388,6 +1396,8 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 	pte_t *pte, old_pte;
 	bool iomap = flags & KVM_S2PTE_FLAG_IS_IOMAP;
 	bool logging_active = flags & KVM_S2_FLAG_LOGGING_ACTIVE;
+	bool dirty_covered;
+	int idx;
 
 	VM_BUG_ON(logging_active && !cache);
 
@@ -1453,8 +1463,14 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 		if (pte_val(old_pte) == pte_val(*new_pte))
 			return 0;
 
-		kvm_set_pte(pte, __pte(0));
+		dirty_covered = kvm_set_pte(pte, __pte(0));
 		kvm_tlb_flush_vmid_ipa(kvm, addr);
+
+		if (dirty_covered) {
+			idx = srcu_read_lock(&kvm->srcu);
+			mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+			srcu_read_unlock(&kvm->srcu, idx);
+		}
 	} else {
 		get_page(virt_to_page(pte));
 	}
-- 
2.19.1