From: Keqian Zhu
CC: Catalin Marinas, Marc Zyngier, James Morse, Will Deacon,
	Suzuki K Poulose, Sean Christopherson, Julien Thierry, Mark Brown,
	Thomas Gleixner, Andrew Morton, Alexios Zavras, Keqian Zhu
Subject: [PATCH 02/12] KVM: arm64: Modify stage2 young mechanism to support hw DBM
Date: Tue, 16 Jun 2020 17:35:43 +0800
Message-ID: <20200616093553.27512-3-zhukeqian1@huawei.com>
In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com>
References: <20200616093553.27512-1-zhukeqian1@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Marking PTs young (setting the AF bit) should be done atomically, to
avoid overwriting the dirty status set by hardware.

Signed-off-by: Keqian Zhu
---
 arch/arm64/include/asm/kvm_mmu.h | 32 ++++++++++++++++++++++----------
 arch/arm64/kvm/mmu.c             | 15 ++++++++-------
 2 files changed, 30 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index e0ee6e23d626..51af71505fbc 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -215,6 +215,18 @@ static inline void kvm_set_s2pte_readonly(pte_t *ptep)
 	} while (pteval != old_pteval);
 }
 
+static inline void kvm_set_s2pte_young(pte_t *ptep)
+{
+	pteval_t old_pteval, pteval;
+
+	pteval = READ_ONCE(pte_val(*ptep));
+	do {
+		old_pteval = pteval;
+		pteval |= PTE_AF;
+		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
+	} while (pteval != old_pteval);
+}
+
 static inline bool kvm_s2pte_readonly(pte_t *ptep)
 {
 	return (READ_ONCE(pte_val(*ptep)) & PTE_S2_RDWR) == PTE_S2_RDONLY;
@@ -230,6 +242,11 @@ static inline void kvm_set_s2pmd_readonly(pmd_t *pmdp)
 	kvm_set_s2pte_readonly((pte_t *)pmdp);
 }
 
+static inline void kvm_set_s2pmd_young(pmd_t *pmdp)
+{
+	kvm_set_s2pte_young((pte_t *)pmdp);
+}
+
 static inline bool kvm_s2pmd_readonly(pmd_t *pmdp)
 {
 	return kvm_s2pte_readonly((pte_t *)pmdp);
@@ -245,6 +262,11 @@ static inline void kvm_set_s2pud_readonly(pud_t *pudp)
 	kvm_set_s2pte_readonly((pte_t *)pudp);
 }
 
+static inline void kvm_set_s2pud_young(pud_t *pudp)
+{
+	kvm_set_s2pte_young((pte_t *)pudp);
+}
+
 static inline bool kvm_s2pud_readonly(pud_t *pudp)
 {
 	return kvm_s2pte_readonly((pte_t *)pudp);
@@ -255,16 +277,6 @@ static inline bool kvm_s2pud_exec(pud_t *pudp)
 	return !(READ_ONCE(pud_val(*pudp)) & PUD_S2_XN);
 }
 
-static inline pud_t kvm_s2pud_mkyoung(pud_t pud)
-{
-	return pud_mkyoung(pud);
-}
-
-static inline bool kvm_s2pud_young(pud_t pud)
-{
-	return pud_young(pud);
-}
-
 #ifdef CONFIG_ARM64_HW_AFDBM
 static inline bool kvm_hw_dbm_enabled(void)
 {
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 8c0035cab6b6..5ad87bce23c0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -2008,8 +2008,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
  * Resolve the access fault by making the page young again.
  * Note that because the faulting entry is guaranteed not to be
  * cached in the TLB, we don't need to invalidate anything.
- * Only the HW Access Flag updates are supported for Stage 2 (no DBM),
- * so there is no need for atomic (pte|pmd)_mkyoung operations.
+ *
+ * Note: Both DBM and HW AF updates are supported for Stage2, so
+ * young operations should be atomic.
  */
 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
@@ -2027,15 +2028,15 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 		goto out;
 
 	if (pud) {		/* HugeTLB */
-		*pud = kvm_s2pud_mkyoung(*pud);
+		kvm_set_s2pud_young(pud);
 		pfn = kvm_pud_pfn(*pud);
 		pfn_valid = true;
 	} else if (pmd) {	/* THP, HugeTLB */
-		*pmd = pmd_mkyoung(*pmd);
+		kvm_set_s2pmd_young(pmd);
 		pfn = pmd_pfn(*pmd);
 		pfn_valid = true;
-	} else {
-		*pte = pte_mkyoung(*pte);	/* Just a page... */
+	} else {		/* Just a page... */
+		kvm_set_s2pte_young(pte);
 		pfn = pte_pfn(*pte);
 		pfn_valid = true;
 	}
@@ -2280,7 +2281,7 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *
 		return 0;
 
 	if (pud)
-		return kvm_s2pud_young(*pud);
+		return pud_young(*pud);
 	else if (pmd)
 		return pmd_young(*pmd);
 	else
-- 
2.19.1
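
The helpers added above follow the same retry idiom as the existing
kvm_set_s2pte_readonly(): read the entry, OR in the flag, and cmpxchg
until the update lands on top of whatever value the hardware may have
written in the meantime. A plain "*pte = pte_mkyoung(*pte)" is a
non-atomic read-modify-write, so a DBM-driven dirty update arriving
between the read and the write-back could be lost. Below is a minimal,
stand-alone sketch of that idiom using C11 atomics; the bit positions
(FAKE_AF, FAKE_DIRTY) and helper names are made up for illustration and
do not reflect the real arm64 stage-2 descriptor layout.

/*
 * Illustration only: mimic the cmpxchg retry loop of
 * kvm_set_s2pte_young() with C11 atomics and fake bit definitions.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define FAKE_AF		(1ULL << 10)	/* hypothetical "accessed" bit */
#define FAKE_DIRTY	(1ULL << 51)	/* hypothetical HW-managed dirty bit */

static void set_young_atomic(_Atomic uint64_t *pte)
{
	uint64_t old = atomic_load_explicit(pte, memory_order_relaxed);

	/* Retry until no concurrent update slipped in between load and CAS. */
	while (!atomic_compare_exchange_weak_explicit(pte, &old, old | FAKE_AF,
						      memory_order_relaxed,
						      memory_order_relaxed))
		;
}

int main(void)
{
	_Atomic uint64_t pte = 0;

	/* Hardware marks the page dirty... */
	atomic_fetch_or_explicit(&pte, FAKE_DIRTY, memory_order_relaxed);
	/* ...and software marks it young without losing that information. */
	set_young_atomic(&pte);

	printf("pte = %#llx\n", (unsigned long long)atomic_load(&pte));
	return 0;
}

Built with e.g. "cc -std=c11 sketch.c", this prints a value with both
bits set, which is the property the cmpxchg loop in kvm_set_s2pte_young()
is meant to preserve when a hardware write races with the software update.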