Reply-To: Sean Christopherson
Date: Tue, 21 Mar 2023 15:00:09 -0700
In-Reply-To: <20230321220021.2119033-1-seanjc@google.com>
References: <20230321220021.2119033-1-seanjc@google.com>
Message-ID: <20230321220021.2119033-2-seanjc@google.com>
Subject: [PATCH v4 01/13] KVM: x86/mmu: Add a helper function to check if an SPTE needs atomic write
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, Vipin Sharma, David Matlack, Ben Gardon

From: Vipin Sharma

Move the conditions in kvm_tdp_mmu_write_spte() that check whether an
SPTE should be written atomically into a separate function.  This new
function, kvm_tdp_mmu_spte_need_atomic_write(), will be used in future
commits to optimize clearing bits in SPTEs.

Signed-off-by: Vipin Sharma
Reviewed-by: David Matlack
Reviewed-by: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_iter.h | 34 ++++++++++++++++++++--------------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index f0af385c56e0..c11c5d00b2c1 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -29,23 +29,29 @@ static inline void __kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte)
 	WRITE_ONCE(*rcu_dereference(sptep), new_spte);
 }
 
+/*
+ * SPTEs must be modified atomically if they are shadow-present, leaf
+ * SPTEs, and have volatile bits, i.e. has bits that can be set outside
+ * of mmu_lock.  The Writable bit can be set by KVM's fast page fault
+ * handler, and Accessed and Dirty bits can be set by the CPU.
+ *
+ * Note, non-leaf SPTEs do have Accessed bits and those bits are
+ * technically volatile, but KVM doesn't consume the Accessed bit of
+ * non-leaf SPTEs, i.e. KVM doesn't care if it clobbers the bit.  This
+ * logic needs to be reassessed if KVM were to use non-leaf Accessed
+ * bits, e.g. to skip stepping down into child SPTEs when aging SPTEs.
+ */
+static inline bool kvm_tdp_mmu_spte_need_atomic_write(u64 old_spte, int level)
+{
+	return is_shadow_present_pte(old_spte) &&
+	       is_last_spte(old_spte, level) &&
+	       spte_has_volatile_bits(old_spte);
+}
+
 static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte,
 					 u64 new_spte, int level)
 {
-	/*
-	 * Atomically write the SPTE if it is a shadow-present, leaf SPTE with
-	 * volatile bits, i.e. has bits that can be set outside of mmu_lock.
-	 * The Writable bit can be set by KVM's fast page fault handler, and
-	 * Accessed and Dirty bits can be set by the CPU.
-	 *
-	 * Note, non-leaf SPTEs do have Accessed bits and those bits are
-	 * technically volatile, but KVM doesn't consume the Accessed bit of
-	 * non-leaf SPTEs, i.e. KVM doesn't care if it clobbers the bit.  This
-	 * logic needs to be reassessed if KVM were to use non-leaf Accessed
-	 * bits, e.g. to skip stepping down into child SPTEs when aging SPTEs.
-	 */
-	if (is_shadow_present_pte(old_spte) && is_last_spte(old_spte, level) &&
-	    spte_has_volatile_bits(old_spte))
+	if (kvm_tdp_mmu_spte_need_atomic_write(old_spte, level))
 		return kvm_tdp_mmu_write_spte_atomic(sptep, new_spte);
 
 	__kvm_tdp_mmu_write_spte(sptep, new_spte);

-- 
2.40.0.rc2.332.ga46443480c-goog
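
The commit message notes that the new helper will be used by later patches in
the series to optimize clearing bits in SPTEs.  As a rough, hypothetical sketch
of that usage (not the actual follow-up commit), a caller could use
kvm_tdp_mmu_spte_need_atomic_write() to choose between an atomic AND and a
plain write when clearing bits; the name tdp_mmu_clear_spte_bits(), the @mask
parameter, and the atomic64_fetch_and() approach below are assumptions made
for illustration:

	/*
	 * Illustrative sketch only: clear the bits in @mask from the SPTE.
	 * If the SPTE has volatile bits that hardware or other threads can
	 * set concurrently, clear the bits with an atomic AND so those
	 * concurrent updates aren't lost; otherwise a plain write suffices.
	 */
	static inline u64 tdp_mmu_clear_spte_bits(tdp_ptep_t sptep, u64 old_spte,
						  u64 mask, int level)
	{
		atomic64_t *sptep_atomic;

		if (kvm_tdp_mmu_spte_need_atomic_write(old_spte, level)) {
			sptep_atomic = (atomic64_t *)rcu_dereference(sptep);
			return (u64)atomic64_fetch_and(~mask, sptep_atomic);
		}

		__kvm_tdp_mmu_write_spte(sptep, old_spte & ~mask);
		return old_spte;
	}

The appeal of an atomic AND over a full-value exchange such as
kvm_tdp_mmu_write_spte_atomic() is that only the requested bits are cleared,
so Accessed/Dirty bits set by the CPU between reading old_spte and performing
the write remain set in the SPTE instead of being clobbered.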