Subject: Re: [PATCH v2 07/11] KVM: x86: add a delayed hardware NMI injection interface
From: Sean Christopherson
To: Santosh Shukla
Cc: Maxim Levitsky, kvm@vger.kernel.org, Sandipan Das, Paolo Bonzini,
 Jim Mattson, Peter Zijlstra, Dave Hansen, Borislav Petkov, Pawan Gupta,
 Thomas Gleixner, Ingo Molnar, Josh Poimboeuf, Daniel Sneddon, Jiaxi Chen,
 Babu Moger, linux-kernel@vger.kernel.org, Jing Liu, Wyes Karny,
 x86@kernel.org, "H. Peter Anvin"
Date: Wed, 15 Feb 2023 16:22:15 -0800
References: <20221129193717.513824-1-mlevitsk@redhat.com>
 <20221129193717.513824-8-mlevitsk@redhat.com>
 <2b5994e2-15ba-dd57-285c-fb33827a5275@amd.com>

On Wed, Feb 15, 2023, Sean Christopherson wrote:
> On Tue, Feb 14, 2023, Santosh Shukla wrote:
> > "
> > V_NMI_MASK: Indicates whether virtual NMIs are masked. The processor
> > will set V_NMI_MASK once it takes the virtual NMI. V_NMI_MASK is
> > cleared when the guest successfully completes an IRET instruction or
> > #VMEXIT occurs while delivering the virtual NMI.
> > "
> >
> > In my initial implementation I had changed V_NMI_MASK for the SMM
> > scenario [1], but this is also not required, as HW will save
> > V_NMI/V_NMI_MASK on SMM entry and restore them on RSM.
> >
> > That said, svm_{get,set}_nmi_mask will look something like:

...

> > static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
> > {
> > 	struct vcpu_svm *svm = to_svm(vcpu);
> >
> > +	if (is_vnmi_enabled(svm))
> > +		return;
> > +
> > 	if (masked) {
> > 		svm->nmi_masked = true;
> > 		svm_set_iret_intercept(svm);
> >
> > Are there any inputs on the above approach?
>
> What happens if software clears the "NMIs are blocked" flag?  If KVM
> can't clear the flag, then we've got problems.  E.g. if KVM emulates
> IRET or SMI+RSM.  And I believe there are use cases that use KVM to
> snapshot and reload vCPU state, e.g. record+replay?, in which case
> KVM_SET_VCPU_EVENTS needs to be able to adjust NMI blocking too.
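A minimal sketch of what that could look like (untested; it assumes the
is_vnmi_enabled() helper, the V_NMI_MASK bit in vmcb->control.int_ctl,
and the svm_{set,clr}_iret_intercept() helpers from this series): when
vNMI is enabled, read and write the hardware bit instead of returning
early, so that emulated IRET/RSM and KVM_SET_VCPU_EVENTS can still
adjust NMI blocking:

static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu)
{
	struct vcpu_svm *svm = to_svm(vcpu);

	/* With vNMI, the blocking state lives in the VMCB. */
	if (is_vnmi_enabled(svm))
		return !!(svm->vmcb->control.int_ctl & V_NMI_MASK);

	return svm->nmi_masked;
}

static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
{
	struct vcpu_svm *svm = to_svm(vcpu);

	if (is_vnmi_enabled(svm)) {
		/* Keep the flag writable; don't just bail out. */
		if (masked)
			svm->vmcb->control.int_ctl |= V_NMI_MASK;
		else
			svm->vmcb->control.int_ctl &= ~V_NMI_MASK;
		return;
	}

	svm->nmi_masked = masked;
	if (masked)
		svm_set_iret_intercept(svm);
	else
		svm_clr_iret_intercept(svm);
}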
Actually, what am I thinking.  Any type of state save/restore will need
to stuff NMI blocking.  E.g. live migration of a VM that is handling an
NMI (V_NMI_MASK=1) _and_ has a pending NMI (V_NMI=1) absolutely needs to
set V_NMI_MASK=1 on the dest, otherwise the pending NMI will get
serviced when the guest expects NMIs to be blocked.
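On the userspace side, a minimal sketch of that flow (hypothetical,
error handling trimmed; assumes vcpu_fd is an open vCPU file
descriptor): capture the source vCPU's nmi.pending/nmi.masked with
KVM_GET_VCPU_EVENTS, then stuff them on the dest with
KVM_SET_VCPU_EVENTS, setting KVM_VCPUEVENT_VALID_NMI_PENDING so the
pending state is actually written:

#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Restore NMI state on the destination vCPU of a migration. */
static int restore_nmi_state(int vcpu_fd, bool pending, bool masked)
{
	struct kvm_vcpu_events events;

	/*
	 * Start from the current in-kernel state to avoid clobbering
	 * unrelated exception/interrupt state.
	 */
	if (ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events) < 0)
		return -1;

	events.flags |= KVM_VCPUEVENT_VALID_NMI_PENDING;
	events.nmi.pending = pending;	/* V_NMI on the source */
	events.nmi.masked = masked;	/* V_NMI_MASK on the source */

	return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events);
}

KVM applies events.nmi.masked through the vendor set_nmi_mask() hook,
which is exactly why an early return in svm_set_nmi_mask() would break
this flow: the dest would run with V_NMI_MASK=0 and service the pending
NMI while the guest expects NMIs to be blocked.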