From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Sasha Levin, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, kvm@vger.kernel.org
Subject: [PATCH MANUALSEL 5.15 2/4] KVM: x86: do not set st->preempted when going back to user space
Date: Mon, 13 Jun 2022 22:11:38 -0400
Message-Id: <20220614021141.1101486-2-sashal@kernel.org>
In-Reply-To: <20220614021141.1101486-1-sashal@kernel.org>
References: <20220614021141.1101486-1-sashal@kernel.org>

From: Paolo Bonzini

[ Upstream commit 54aa83c90198e68eee8b0850c749bc70efb548da ]

Similar to the Xen path, only change the vCPU's reported state if the
vCPU was actually preempted. The reason for KVM's behavior is that,
for example, optimistic spinning might not be a good idea if the guest
is doing repeated exits to userspace; however, it is confusing and
unlikely to make a difference, because well-tuned guests will hardly
ever exit KVM_RUN in the first place.
Suggested-by: Sean Christopherson
Signed-off-by: Paolo Bonzini
Signed-off-by: Sasha Levin
---
 arch/x86/kvm/x86.c | 26 ++++++++++++++------------
 arch/x86/kvm/xen.h |  6 ++++--
 2 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fbf10fa99507..16c1c5403b52 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4412,19 +4412,21 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
 	int idx;
 
-	if (vcpu->preempted && !vcpu->arch.guest_state_protected)
-		vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl)(vcpu);
+	if (vcpu->preempted) {
+		if (!vcpu->arch.guest_state_protected)
+			vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl)(vcpu);
 
-	/*
-	 * Take the srcu lock as memslots will be accessed to check the gfn
-	 * cache generation against the memslots generation.
-	 */
-	idx = srcu_read_lock(&vcpu->kvm->srcu);
-	if (kvm_xen_msr_enabled(vcpu->kvm))
-		kvm_xen_runstate_set_preempted(vcpu);
-	else
-		kvm_steal_time_set_preempted(vcpu);
-	srcu_read_unlock(&vcpu->kvm->srcu, idx);
+		/*
+		 * Take the srcu lock as memslots will be accessed to check the gfn
+		 * cache generation against the memslots generation.
+		 */
+		idx = srcu_read_lock(&vcpu->kvm->srcu);
+		if (kvm_xen_msr_enabled(vcpu->kvm))
+			kvm_xen_runstate_set_preempted(vcpu);
+		else
+			kvm_steal_time_set_preempted(vcpu);
+		srcu_read_unlock(&vcpu->kvm->srcu, idx);
+	}
 
 	static_call(kvm_x86_vcpu_put)(vcpu);
 	vcpu->arch.last_host_tsc = rdtsc();
diff --git a/arch/x86/kvm/xen.h b/arch/x86/kvm/xen.h
index cc0cf5f37450..a7693a286e40 100644
--- a/arch/x86/kvm/xen.h
+++ b/arch/x86/kvm/xen.h
@@ -97,8 +97,10 @@ static inline void kvm_xen_runstate_set_preempted(struct kvm_vcpu *vcpu)
 	 * behalf of the vCPU. Only if the VMM does actually block
 	 * does it need to enter RUNSTATE_blocked.
 	 */
-	if (vcpu->preempted)
-		kvm_xen_update_runstate_guest(vcpu, RUNSTATE_runnable);
+	if (WARN_ON_ONCE(!vcpu->preempted))
+		return;
+
+	kvm_xen_update_runstate_guest(vcpu, RUNSTATE_runnable);
 }
 
 /* 32-bit compatibility definitions, also used natively in 32-bit build */
-- 
2.35.1