From: Lai Jiangshan
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Paolo Bonzini
Cc: Lai Jiangshan, Junaid Shahid, Sean Christopherson, Vitaly Kuznetsov,
    Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, x86@kernel.org, "H. Peter Anvin"
Peter Anvin" Subject: [PATCH 3/4] KVM: X86: Use smp_rmb() to pair with smp_wmb() in mmu_try_to_unsync_pages() Date: Tue, 19 Oct 2021 19:01:53 +0800 Message-Id: <20211019110154.4091-4-jiangshanlai@gmail.com> X-Mailer: git-send-email 2.19.1.6.gb485710b In-Reply-To: <20211019110154.4091-1-jiangshanlai@gmail.com> References: <20211019110154.4091-1-jiangshanlai@gmail.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: Lai Jiangshan The commit 578e1c4db2213 ("kvm: x86: Avoid taking MMU lock in kvm_mmu_sync_roots if no sync is needed") added smp_wmb() in mmu_try_to_unsync_pages(), but the corresponding smp_load_acquire() isn't used on the load of SPTE.W which is impossible since the load of SPTE.W is performed in the CPU's pagetable walking. This patch changes to use smp_rmb() instead. This patch fixes nothing but just comments since smp_rmb() is NOP and compiler barrier() is not required since the load of SPTE.W is before VMEXIT. Cc: Junaid Shahid Signed-off-by: Lai Jiangshan --- arch/x86/kvm/mmu/mmu.c | 47 +++++++++++++++++++++++++++++------------- 1 file changed, 33 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index c6ddb042b281..900c7a157c99 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2665,8 +2665,9 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, * (sp->unsync = true) * * The write barrier below ensures that 1.1 happens before 1.2 and thus - * the situation in 2.4 does not arise. The implicit barrier in 2.2 - * pairs with this write barrier. + * the situation in 2.4 does not arise. The implicit read barrier + * between 2.1's load of SPTE.W and 2.3 (as in is_unsync_root()) pairs + * with this write barrier. */ smp_wmb(); @@ -3629,6 +3630,35 @@ static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu) #endif } +static bool is_unsync_root(hpa_t root) +{ + struct kvm_mmu_page *sp; + + /* + * Even if another CPU was marking the SP as unsync-ed simultaneously, + * any guest page table changes are not guaranteed to be visible anyway + * until this VCPU issues a TLB flush strictly after those changes are + * made. We only need to ensure that the other CPU sets these flags + * before any actual changes to the page tables are made. The comments + * in mmu_try_to_unsync_pages() describe what could go wrong if this + * requirement isn't satisfied. + * + * To pair with the smp_wmb() in mmu_try_to_unsync_pages() between the + * write to sp->unsync[_children] and the write to SPTE.W, a read + * barrier is needed after the CPU reads SPTE.W (or the read itself is + * an acquire operation) while doing page table walk and before the + * checks of sp->unsync[_children] here. The CPU has already provided + * the needed semantic, but an NOP smp_rmb() here can provide symmetric + * pairing and richer information. + */ + smp_rmb(); + sp = to_shadow_page(root); + if (sp->unsync || sp->unsync_children) + return true; + + return false; +} + void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu) { int i; @@ -3646,18 +3676,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu) hpa_t root = vcpu->arch.mmu->root_hpa; sp = to_shadow_page(root); - /* - * Even if another CPU was marking the SP as unsync-ed - * simultaneously, any guest page table changes are not - * guaranteed to be visible anyway until this VCPU issues a TLB - * flush strictly after those changes are made. 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c6ddb042b281..900c7a157c99 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2665,8 +2665,9 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	 * (sp->unsync = true)
 	 *
 	 * The write barrier below ensures that 1.1 happens before 1.2 and thus
-	 * the situation in 2.4 does not arise. The implicit barrier in 2.2
-	 * pairs with this write barrier.
+	 * the situation in 2.4 does not arise. The implicit read barrier
+	 * between 2.1's load of SPTE.W and 2.3 (as in is_unsync_root()) pairs
+	 * with this write barrier.
 	 */
 	smp_wmb();
 
@@ -3629,6 +3630,35 @@ static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu)
 #endif
 }
 
+static bool is_unsync_root(hpa_t root)
+{
+	struct kvm_mmu_page *sp;
+
+	/*
+	 * Even if another CPU was marking the SP as unsync-ed simultaneously,
+	 * any guest page table changes are not guaranteed to be visible anyway
+	 * until this VCPU issues a TLB flush strictly after those changes are
+	 * made. We only need to ensure that the other CPU sets these flags
+	 * before any actual changes to the page tables are made. The comments
+	 * in mmu_try_to_unsync_pages() describe what could go wrong if this
+	 * requirement isn't satisfied.
+	 *
+	 * To pair with the smp_wmb() in mmu_try_to_unsync_pages() between the
+	 * write to sp->unsync[_children] and the write to SPTE.W, a read
+	 * barrier is needed after the CPU reads SPTE.W (or the read itself is
+	 * an acquire operation) while doing page table walk and before the
+	 * checks of sp->unsync[_children] here. The CPU has already provided
+	 * the needed semantic, but an NOP smp_rmb() here can provide symmetric
+	 * pairing and richer information.
+	 */
+	smp_rmb();
+	sp = to_shadow_page(root);
+	if (sp->unsync || sp->unsync_children)
+		return true;
+
+	return false;
+}
+
 void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 {
 	int i;
@@ -3646,18 +3676,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 		hpa_t root = vcpu->arch.mmu->root_hpa;
 
 		sp = to_shadow_page(root);
-		/*
-		 * Even if another CPU was marking the SP as unsync-ed
-		 * simultaneously, any guest page table changes are not
-		 * guaranteed to be visible anyway until this VCPU issues a TLB
-		 * flush strictly after those changes are made. We only need to
-		 * ensure that the other CPU sets these flags before any actual
-		 * changes to the page tables are made. The comments in
-		 * mmu_try_to_unsync_pages() describe what could go wrong if
-		 * this requirement isn't satisfied.
-		 */
-		if (!smp_load_acquire(&sp->unsync) &&
-		    !smp_load_acquire(&sp->unsync_children))
+		if (!is_unsync_root(root))
 			return;
 
 		write_lock(&vcpu->kvm->mmu_lock);
-- 
2.19.1.6.gb485710b
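[ For completeness, on why "smp_rmb() is a NOP" above: on x86 the
  barrier reduces to a compiler-only barrier, roughly as follows
  (paraphrasing arch/x86/include/asm/barrier.h and
  include/linux/compiler.h; exact spellings vary by kernel version, so
  treat this as a sketch):

	#define barrier()	__asm__ __volatile__("" : : : "memory")
	#define dma_rmb()	barrier()
	#define __smp_rmb()	dma_rmb()	/* x86 does not reorder
						   loads with loads */

  So is_unsync_root() emits no extra instruction for smp_rmb(); the
  call survives as documentation of the pairing with the smp_wmb() in
  mmu_try_to_unsync_pages() and as a compiler-reordering constraint. ]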