From: Keqian Zhu
To: Marc Zyngier, Will Deacon, Catalin Marinas
CC: Alex Williamson, Kirti Wankhede, Cornelia Huck, Mark Rutland,
	James Morse, Robin Murphy, Suzuki K Poulose
Subject: [RFC PATCH 2/7] kvm: arm64: Use atomic operations when updating PTEs
Date: Tue, 26 Jan 2021 20:44:39 +0800
Message-ID: <20210126124444.27136-3-zhukeqian1@huawei.com>
In-Reply-To: <20210126124444.27136-1-zhukeqian1@huawei.com>
References: <20210126124444.27136-1-zhukeqian1@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain
X-Mailing-List: linux-kernel@vger.kernel.org

We are about to add HW_DBM support for the stage2 dirty log, so software
updates of PTEs may race with the MMU
trying to set the access flag or dirty state. Use atomic operations to
avoid reverting these bits set by the MMU.

Signed-off-by: Keqian Zhu
---
 arch/arm64/kvm/hyp/pgtable.c | 41 ++++++++++++++++++++++++------------
 1 file changed, 27 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index bdf8e55ed308..4915ba35f93b 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -153,10 +153,34 @@ static kvm_pte_t *kvm_pte_follow(kvm_pte_t pte)
 	return __va(kvm_pte_to_phys(pte));
 }
 
+/*
+ * We may race with the MMU trying to set the access flag or dirty state;
+ * use atomic operations to avoid reverting these bits.
+ *
+ * Return the original PTE.
+ */
+static kvm_pte_t kvm_update_pte(kvm_pte_t *ptep, kvm_pte_t bit_set,
+				kvm_pte_t bit_clr)
+{
+	kvm_pte_t old_pte, pte = *ptep;
+
+	do {
+		old_pte = pte;
+		pte &= ~bit_clr;
+		pte |= bit_set;
+
+		if (old_pte == pte)
+			break;
+
+		pte = cmpxchg_relaxed(ptep, old_pte, pte);
+	} while (pte != old_pte);
+
+	return old_pte;
+}
+
 static void kvm_set_invalid_pte(kvm_pte_t *ptep)
 {
-	kvm_pte_t pte = *ptep;
-	WRITE_ONCE(*ptep, pte & ~KVM_PTE_VALID);
+	kvm_update_pte(ptep, 0, KVM_PTE_VALID);
 }
 
 static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp)
@@ -723,18 +747,7 @@ static int stage2_attr_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 		return 0;
 
 	data->level = level;
-	data->pte = pte;
-	pte &= ~data->attr_clr;
-	pte |= data->attr_set;
-
-	/*
-	 * We may race with the CPU trying to set the access flag here,
-	 * but worst-case the access flag update gets lost and will be
-	 * set on the next access instead.
-	 */
-	if (data->pte != pte)
-		WRITE_ONCE(*ptep, pte);
-
+	data->pte = kvm_update_pte(ptep, data->attr_set, data->attr_clr);
 	return 0;
 }
 
-- 
2.19.1
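
For readers following along outside the kernel tree: the retry loop in the
patch is the standard compare-and-swap update pattern. Below is a minimal
standalone userspace sketch of the same idea using C11 atomics instead of
the kernel's cmpxchg_relaxed; the names (update_pte_sim) are made up for
illustration and are not part of the patch.

```c
#include <stdatomic.h>
#include <stdint.h>

/*
 * Atomically set/clear bits in *ptep, retrying if a concurrent updater
 * (the MMU, in the kernel patch) changed the entry in the meantime.
 * Returns the value observed immediately before our update, mirroring
 * the return-original-PTE contract of kvm_update_pte().
 */
uint64_t update_pte_sim(_Atomic uint64_t *ptep, uint64_t bit_set,
			uint64_t bit_clr)
{
	uint64_t old_pte = atomic_load_explicit(ptep, memory_order_relaxed);
	uint64_t new_pte;

	do {
		new_pte = (old_pte & ~bit_clr) | bit_set;
		if (new_pte == old_pte)
			break;	/* nothing to change; avoid a useless CAS */
		/* On failure, old_pte is reloaded with the current value. */
	} while (!atomic_compare_exchange_strong_explicit(ptep, &old_pte,
							  new_pte,
							  memory_order_relaxed,
							  memory_order_relaxed));

	return old_pte;
}
```

The key point the commit message makes is visible here: a plain
read-modify-write (`*ptep = (*ptep & ~clr) | set;`) could overwrite an
access-flag or dirty bit the MMU set between the read and the write,
whereas the CAS loop detects the intervening change and retries with the
fresh value.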