From: Paolo Bonzini
Date: Thu, 21 May 2015 23:21:18 +0200
To: Radim Krčmář
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, bsd@redhat.com
Subject: Re: [PATCH 08/12] KVM: x86: save/load state on SMM switch
Message-ID: <555E4C4E.1010603@redhat.com>

On 21/05/2015 19:00, Radim Krčmář wrote:
>> Potentially, an NMI could be latched (while in SMM or upon exit) and
>> serviced upon exit [...]
>
> This "Potentially" could be in the sense that the whole 3rd paragraph is
> only applicable to some ancient SMM design :)

It could also be in the sense that you cannot exclude an NMI coming at
exactly the wrong time. If you want to go full language lawyer, it does
mention it whenever behavior is specific to a processor family.

> The 1st paragraph has quite a clear sentence:
>
>     If NMIs were blocked before the SMI occurred, they are blocked after
>     execution of RSM.
>
> so I'd just ignore the 3rd paragraph ...
> And the APM 2:10.3.3 "Exceptions and Interrupts":
>
>     NMI—If an NMI occurs while the processor is in SMM, it is latched by
>     the processor, but the NMI handler is not invoked until the processor
>     leaves SMM with the execution of an RSM instruction. A pending NMI
>     causes the handler to be invoked immediately after the RSM completes
>     and before the first instruction in the interrupted program is
>     executed.
>
>     An SMM handler can unmask NMI interrupts by simply executing an IRET.
>     Upon completion of the IRET instruction, the processor recognizes the
>     pending NMI, and transfers control to the NMI handler. Once an NMI is
>     recognized within SMM using this technique, subsequent NMIs are
>     recognized until SMM is exited. Later SMIs cause NMIs to be masked,
>     until the SMM handler unmasks them.
>
> makes me think that we should unmask them unconditionally or that SMM
> doesn't do anything with NMI masking.

Actually I hadn't noticed this paragraph. But I read it the same as the
Intel manual (i.e. what I implemented): it doesn't say anywhere that RSM
may cause the processor to *set* the "NMIs masked" flag.

It makes no sense; as you said it's 1 bit of state! But it seems that
it's the architectural behavior. :(

> If we can choose, less NMI nesting seems like a good idea.

It would---I'm just preempting future patches from Nadav. :)

That said, even if OVMF does do IRETs in SMM (in 64-bit mode it fills in
page tables lazily for memory above 4GB), we do not care about
asynchronous SMIs such as those for power management. So we should never
enter SMM with NMIs masked, to begin with.

Paolo
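For what it's worth, the semantics both manuals seem to agree on can be
written down as a tiny state machine. This is a hypothetical model, not
the KVM code from the patch; the names (smi, rsm, iret, nmi, recognize)
are invented for illustration. It encodes: NMIs latched while masked, the
mask set on SMI entry, IRET clearing it, and RSM restoring the pre-SMI
value per the Intel "1st paragraph" quoted above.

```c
#include <stdbool.h>
#include <assert.h>

/* Hypothetical one-bit model of NMI masking across SMM entry/exit. */
struct vcpu {
	bool in_smm;            /* currently executing the SMM handler */
	bool nmi_masked;        /* the single bit of NMI-blocking state */
	bool saved_nmi_masked;  /* mask as it was when the SMI arrived */
	bool nmi_pending;       /* an NMI was latched while masked */
	int  nmis_taken;        /* how many NMI handlers actually ran */
};

/* Deliver a latched NMI once nothing blocks it. */
static void recognize(struct vcpu *v)
{
	if (v->nmi_pending && !v->nmi_masked) {
		v->nmi_pending = false;
		v->nmi_masked = true;   /* taking an NMI blocks further NMIs */
		v->nmis_taken++;
	}
}

/* SMI entry: remember the old mask, block NMIs while in SMM. */
static void smi(struct vcpu *v)
{
	v->saved_nmi_masked = v->nmi_masked;
	v->in_smm = true;
	v->nmi_masked = true;
}

/* An NMI is latched; it is taken only once unmasked. */
static void nmi(struct vcpu *v)
{
	v->nmi_pending = true;
	recognize(v);
}

/* IRET unmasks NMIs, even inside SMM (the APM's "simply executing an IRET"). */
static void iret(struct vcpu *v)
{
	v->nmi_masked = false;
	recognize(v);
}

/* RSM: leave SMM and restore the pre-SMI mask ("if NMIs were blocked
 * before the SMI occurred, they are blocked after execution of RSM"). */
static void rsm(struct vcpu *v)
{
	v->in_smm = false;
	v->nmi_masked = v->saved_nmi_masked;
	recognize(v);
}
```

In this model RSM only ever *restores* the saved bit; it has no path that
sets the mask on its own, which is exactly the reading argued for above.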