From: Sean Christopherson <seanjc@google.com>
To: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Paul Mackerras,
	Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Christian Borntraeger, Janosch Frank, Paolo Bonzini
Cc: James Morse, Alexandru Elisei, Suzuki K Poulose, Atish Patra,
	David Hildenbrand, Cornelia Huck, Claudio Imbrenda,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org,
	kvm@vger.kernel.org, kvm-ppc@vger.kernel.org,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, David Matlack, Oliver Upton, Jing Zhang
Subject: [PATCH v2 32/43] KVM: VMX: Move preemption timer <=> hrtimer dance to common x86
Date: Fri, 8 Oct 2021 19:12:25 -0700
Message-Id: <20211009021236.4122790-33-seanjc@google.com>
In-Reply-To: <20211009021236.4122790-1-seanjc@google.com>
References: <20211009021236.4122790-1-seanjc@google.com>

Handle the switch to/from the hypervisor/software timer when a vCPU is
blocking in common x86 instead of in VMX.  Even though VMX is the only
user of a hypervisor timer, the logic and all functions involved are
generic x86 (unless future CPUs do something completely different and
implement a hypervisor timer that runs regardless of mode).

Handling the switch in common x86 will allow for the elimination of the
pre/post_blocks hooks, and also lets KVM switch back to the hypervisor
timer if and only if it was in use (without additional params).  Add a
comment explaining why the switch cannot be deferred to kvm_sched_out()
or kvm_vcpu_block().

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/vmx.c |  6 +-----
 arch/x86/kvm/x86.c     | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b3bb2031a7ac..a24f19874716 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7464,16 +7464,12 @@ void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
 
 static int vmx_pre_block(struct kvm_vcpu *vcpu)
 {
-	if (kvm_lapic_hv_timer_in_use(vcpu))
-		kvm_lapic_switch_to_sw_timer(vcpu);
-
 	return 0;
 }
 
 static void vmx_post_block(struct kvm_vcpu *vcpu)
 {
-	if (kvm_x86_ops.set_hv_timer)
-		kvm_lapic_switch_to_hv_timer(vcpu);
-
+
 }
 
 static void vmx_setup_mce(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e0219acfd9cf..909e932a7ae7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9896,8 +9896,21 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu)
 {
+	bool hv_timer;
+
 	if (!kvm_arch_vcpu_runnable(vcpu) &&
 	    (!kvm_x86_ops.pre_block || static_call(kvm_x86_pre_block)(vcpu) == 0)) {
+		/*
+		 * Switch to the software timer before halt-polling/blocking as
+		 * the guest's timer may be a break event for the vCPU, and the
+		 * hypervisor timer runs only when the CPU is in guest mode.
+		 * Switch before halt-polling so that KVM recognizes an expired
+		 * timer before blocking.
+		 */
+		hv_timer = kvm_lapic_hv_timer_in_use(vcpu);
+		if (hv_timer)
+			kvm_lapic_switch_to_sw_timer(vcpu);
+
 		srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
 		if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED)
 			kvm_vcpu_halt(vcpu);
@@ -9905,6 +9918,9 @@ static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu)
 			kvm_vcpu_block(vcpu);
 		vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
 
+		if (hv_timer)
+			kvm_lapic_switch_to_hv_timer(vcpu);
+
 		if (kvm_x86_ops.post_block)
 			static_call(kvm_x86_post_block)(vcpu);
 
@@ -10136,6 +10152,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 			r = -EINTR;
 			goto out;
 		}
+		/*
+		 * It should be impossible for the hypervisor timer to be in
+		 * use before KVM has ever run the vCPU.
+		 */
+		WARN_ON_ONCE(kvm_lapic_hv_timer_in_use(vcpu));
 		kvm_vcpu_block(vcpu);
 		if (kvm_apic_accept_events(vcpu) < 0) {
 			r = 0;
-- 
2.33.0.882.g93a45727a2-goog
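
For reference, the timer dance this patch moves into vcpu_block() can be
modeled outside the kernel. The sketch below is a self-contained userspace
approximation, not kernel code: struct vcpu_model and the
switch_to_*_timer()/halt_poll_and_block() helpers are invented stand-ins for
kvm_lapic_hv_timer_in_use(), kvm_lapic_switch_to_sw_timer(),
kvm_lapic_switch_to_hv_timer() and the halt-polling path; only the ordering
and the "switch back if and only if it was in use" logic mirror the diff
above.

#include <stdbool.h>
#include <stdio.h>

struct vcpu_model {
	bool hv_timer_in_use;	/* stand-in for kvm_lapic_hv_timer_in_use() */
};

static void switch_to_sw_timer(struct vcpu_model *v)
{
	v->hv_timer_in_use = false;
	puts("armed hrtimer (software timer)");
}

static void switch_to_hv_timer(struct vcpu_model *v)
{
	v->hv_timer_in_use = true;
	puts("armed VMX preemption timer (hypervisor timer)");
}

static void halt_poll_and_block(struct vcpu_model *v)
{
	(void)v;
	puts("halt-polling / blocking ...");
}

/* Models the ordering the patch adds to vcpu_block() in arch/x86/kvm/x86.c. */
static void vcpu_block_model(struct vcpu_model *v)
{
	/*
	 * Remember whether the hypervisor timer was in use and switch to the
	 * software timer *before* halt-polling/blocking: the hypervisor timer
	 * only runs while the CPU is in guest mode, so an expired guest timer
	 * would otherwise never wake the blocked vCPU.
	 */
	bool hv_timer = v->hv_timer_in_use;

	if (hv_timer)
		switch_to_sw_timer(v);

	halt_poll_and_block(v);

	/* Switch back if and only if the hypervisor timer was in use. */
	if (hv_timer)
		switch_to_hv_timer(v);
}

int main(void)
{
	struct vcpu_model vcpu = { .hv_timer_in_use = true };

	vcpu_block_model(&vcpu);
	return 0;
}

The model builds with any C compiler and prints the sequence of timer
switches around the block, which is the behavior the common-x86 code now
provides without the vmx_pre_block()/vmx_post_block() hooks.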