Reply-To: Sean Christopherson <seanjc@google.com>
Date: Tue, 4 May 2021 17:27:35 -0700
In-Reply-To: <20210505002735.1684165-1-seanjc@google.com>
Message-Id: <20210505002735.1684165-9-seanjc@google.com>
Mime-Version: 1.0
References: <20210505002735.1684165-1-seanjc@google.com>
X-Mailer: git-send-email 2.31.1.527.g47e6f16901-goog
Subject: [PATCH v4 8/8] KVM: x86: Consolidate guest enter/exit logic to common helpers
From: Sean Christopherson <seanjc@google.com>
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Thomas Gleixner, Michael Tokarev, Christian Borntraeger,
	Frederic Weisbecker
Content-Type: text/plain; charset="UTF-8"
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

Move the enter/exit logic in {svm,vmx}_vcpu_enter_exit() to common
helpers.  Opportunistically update the somewhat stale comment about the
updates needing to occur immediately after VM-Exit.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/svm.c | 39 ++----------------------------------
 arch/x86/kvm/vmx/vmx.c | 39 ++----------------------------------
 arch/x86/kvm/x86.h     | 45 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 49 insertions(+), 74 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7dd63545526b..8abaf4ec4f22 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3710,25 +3710,7 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 	unsigned long vmcb_pa = svm->current_vmcb->pa;
 
-	/*
-	 * VMENTER enables interrupts (host state), but the kernel state is
-	 * interrupts disabled when this is invoked. Also tell RCU about
-	 * it. This is the same logic as for exit_to_user_mode().
-	 *
-	 * This ensures that e.g. latency analysis on the host observes
-	 * guest mode as interrupt enabled.
-	 *
-	 * guest_enter_irqoff() informs context tracking about the
-	 * transition to guest mode and if enabled adjusts RCU state
-	 * accordingly.
-	 */
-	instrumentation_begin();
-	trace_hardirqs_on_prepare();
-	lockdep_hardirqs_on_prepare(CALLER_ADDR0);
-	instrumentation_end();
-
-	guest_enter_irqoff();
-	lockdep_hardirqs_on(CALLER_ADDR0);
+	kvm_guest_enter_irqoff();
 
 	if (sev_es_guest(vcpu->kvm)) {
 		__svm_sev_es_vcpu_run(vmcb_pa);
@@ -3748,24 +3730,7 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu)
 		vmload(__sme_page_pa(sd->save_area));
 	}
 
-	/*
-	 * VMEXIT disables interrupts (host state), but tracing and lockdep
-	 * have them in state 'on' as recorded before entering guest mode.
-	 * Same as enter_from_user_mode().
-	 *
-	 * context_tracking_guest_exit() restores host context and reinstates
-	 * RCU if enabled and required.
-	 *
-	 * This needs to be done before the below as native_read_msr()
-	 * contains a tracepoint and x86_spec_ctrl_restore_host() calls
-	 * into world and some more.
-	 */
-	lockdep_hardirqs_off(CALLER_ADDR0);
-	context_tracking_guest_exit();
-
-	instrumentation_begin();
-	trace_hardirqs_off_finish();
-	instrumentation_end();
+	kvm_guest_exit_irqoff();
 }
 
 static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 8425827068c3..dd6fae37b139 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6662,25 +6662,7 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 					struct vcpu_vmx *vmx)
 {
-	/*
-	 * VMENTER enables interrupts (host state), but the kernel state is
-	 * interrupts disabled when this is invoked. Also tell RCU about
-	 * it. This is the same logic as for exit_to_user_mode().
-	 *
-	 * This ensures that e.g. latency analysis on the host observes
-	 * guest mode as interrupt enabled.
-	 *
-	 * guest_enter_irqoff() informs context tracking about the
-	 * transition to guest mode and if enabled adjusts RCU state
-	 * accordingly.
-	 */
-	instrumentation_begin();
-	trace_hardirqs_on_prepare();
-	lockdep_hardirqs_on_prepare(CALLER_ADDR0);
-	instrumentation_end();
-
-	guest_enter_irqoff();
-	lockdep_hardirqs_on(CALLER_ADDR0);
+	kvm_guest_enter_irqoff();
 
 	/* L1D Flush includes CPU buffer clear to mitigate MDS */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
@@ -6696,24 +6678,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 
 	vcpu->arch.cr2 = native_read_cr2();
 
-	/*
-	 * VMEXIT disables interrupts (host state), but tracing and lockdep
-	 * have them in state 'on' as recorded before entering guest mode.
-	 * Same as enter_from_user_mode().
-	 *
-	 * context_tracking_guest_exit() restores host context and reinstates
-	 * RCU if enabled and required.
-	 *
-	 * This needs to be done before the below as native_read_msr()
-	 * contains a tracepoint and x86_spec_ctrl_restore_host() calls
-	 * into world and some more.
-	 */
-	lockdep_hardirqs_off(CALLER_ADDR0);
-	context_tracking_guest_exit();
-
-	instrumentation_begin();
-	trace_hardirqs_off_finish();
-	instrumentation_end();
+	kvm_guest_exit_irqoff();
 }
 
 static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 8ddd38146525..521f74e5bbf2 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -8,6 +8,51 @@
 #include "kvm_cache_regs.h"
 #include "kvm_emulate.h"
 
+static __always_inline void kvm_guest_enter_irqoff(void)
+{
+	/*
+	 * VMENTER enables interrupts (host state), but the kernel state is
+	 * interrupts disabled when this is invoked. Also tell RCU about
+	 * it. This is the same logic as for exit_to_user_mode().
+	 *
+	 * This ensures that e.g. latency analysis on the host observes
+	 * guest mode as interrupt enabled.
+	 *
+	 * guest_enter_irqoff() informs context tracking about the
+	 * transition to guest mode and if enabled adjusts RCU state
+	 * accordingly.
+	 */
+	instrumentation_begin();
+	trace_hardirqs_on_prepare();
+	lockdep_hardirqs_on_prepare(CALLER_ADDR0);
+	instrumentation_end();
+
+	guest_enter_irqoff();
+	lockdep_hardirqs_on(CALLER_ADDR0);
+}
+
+static __always_inline void kvm_guest_exit_irqoff(void)
+{
+	/*
+	 * VMEXIT disables interrupts (host state), but tracing and lockdep
+	 * have them in state 'on' as recorded before entering guest mode.
+	 * Same as enter_from_user_mode().
+	 *
+	 * context_tracking_guest_exit() restores host context and reinstates
+	 * RCU if enabled and required.
+	 *
+	 * This needs to be done immediately after VM-Exit, before any code
+	 * that might contain tracepoints or call out to the greater world,
+	 * e.g. before x86_spec_ctrl_restore_host().
+	 */
+	lockdep_hardirqs_off(CALLER_ADDR0);
+	context_tracking_guest_exit();
+
+	instrumentation_begin();
+	trace_hardirqs_off_finish();
+	instrumentation_end();
+}
+
 #define KVM_NESTED_VMENTER_CONSISTENCY_CHECK(consistency_check)	\
 ({									\
 	bool failed = (consistency_check);				\
-- 
2.31.1.527.g47e6f16901-goog