From: Mingwei Zhang
Date: Wed, 27 Sep 2023 11:23:39 -0700
Subject: Re: [PATCH] KVM: x86: Move kvm_check_request(KVM_REQ_NMI) after kvm_check_request(KVM_REQ_NMI)
To: Sean Christopherson
Cc: Xin Li, Paolo Bonzini, "H. Peter Anvin", kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jim Mattson, Like Xu, Kan Liang, Dapeng1 Mi

On Wed, Sep 27, 2023 at 9:10 AM Sean Christopherson wrote:
>
> On Tue, Sep 26, 2023, Xin Li wrote:
> > On 9/26/2023 9:15 PM, Mingwei Zhang wrote:
> > > ah, typo in the subject: The 2nd KVM_REQ_NMI should be KVM_REQ_PMI.
> > > Sorry about that.
> > >
> > > On Tue, Sep 26, 2023 at 9:09 PM Mingwei Zhang wrote:
> > > >
> > > > Move kvm_check_request(KVM_REQ_NMI) after kvm_check_request(KVM_REQ_NMI).
> >
> > Please remove it, no need to repeat the subject.
>
> Heh, from Documentation/process/maintainer-kvm-x86.rst:
>
>   Changelog
>   ~~~~~~~~~
>   Most importantly, write changelogs using imperative mood and avoid pronouns.
>
>   See :ref:`describe_changes` for more information, with one amendment: lead with
>   a short blurb on the actual changes, and then follow up with the context and
>   background.  Note!  This order directly conflicts with the tip tree's preferred
>   approach!  Please follow the tip tree's preferred style when sending patches
>   that primarily target arch/x86 code that is _NOT_ KVM code.
>
> That said, I do prefer that the changelog intro isn't just a copy+paste of the
> shortlog, and the shortlog and changelog should use conversational language instead
> of describing the literal code movement.
>
> > > > When vPMU is active use, processing each KVM_REQ_PMI will generate a
> >
> > This is not guaranteed.
> >
> > > > KVM_REQ_NMI. Existing control flow after KVM_REQ_PMI finished will fail the
> > > > guest enter, jump to kvm_x86_cancel_injection(), and re-enter
> > > > vcpu_enter_guest(), this wasted lot of cycles and increase the overhead for
> > > > vPMU as well as the virtualization.
>
> As above, use conversational language, the changelog isn't meant to be a play-by-play.
>
> E.g.
>
>   KVM: x86: Service NMI requests *after* PMI requests in VM-Enter path
>
>   Move the handling of NMI requests after PMI requests in the VM-Enter path
>   so that KVM doesn't need to cancel and redo VM-Enter in the likely
>   scenario that the vCPU has configured its LVTPC entry to generate an NMI.
>
>   Because APIC emulation "injects" NMIs via KVM_REQ_NMI, handling PMI
>   requests after NMI requests means KVM won't detect the pending NMI request
>   until the final check for outstanding requests.  Detecting requests at the
>   final stage is costly as KVM has already loaded guest state, potentially
>   queued events for injection, disabled IRQs, dropped SRCU, etc., most of
>   which needs to be unwound.
>
> > Optimization is after correctness, so please explain if this is correct
> > first!
>
> Not first.  Leading with an in-depth description of KVM requests and NMI handling
> is not going to help understand *why* this change is being made.  But I do agree
> that this should provide an analysis of why it's ok to swap the order, specifically
> why it's architecturally ok if KVM drops an NMI due to the swapped ordering, e.g.
> if the PMI is coincident with two other NMIs (or one other NMI and NMIs are blocked).
>
> > > > So move the code snippet of kvm_check_request(KVM_REQ_NMI) to make KVM
> > > > runloop more efficient with vPMU.
> > > >
> > > > To evaluate the effectiveness of this change, we launch a 8-vcpu QEMU VM on
>
> Avoid pronouns.  There's no need for all the "fluff", just state the setup, the
> test, and the results.
>
> Really getting into the nits, but the whole "8-vcpu QEMU VM" versus
> "the setup of using single core, single thread" is confusing IMO.  If there were
> potential performance downsides and/or tradeoffs, then getting the gory details
> might be necessary, but that's not the case here, and if it were really necessary
> to drill down that deep, then I would want to better quantify the impact, e.g. in
> terms of latency.
>
>   E.g. on Intel SPR running SPEC2017 benchmark and Intel vtune in the guest,
>   handling PMI requests before NMI requests reduces the number of canceled
>   runs by ~1500 per second, per vCPU (counted by probing calls to
>   vmx_cancel_injection()).
>
> > > > an Intel SPR CPU. In the VM, we run perf with all 48 events Intel vtune
> > > > uses. In addition, we use SPEC2017 benchmark programs as the workload with
> > > > the setup of using single core, single thread.
> > > >
> > > > At the host level, we probe the invocations to vmx_cancel_injection() with
> > > > the following command:
> > > >
> > > >   $ perf probe -a vmx_cancel_injection
> > > >   $ perf stat -a -e probe:vmx_cancel_injection -I 10000   # per 10 seconds
> > > >
> > > > The following is the result that we collected at beginning of the spec2017
> > > > benchmark run (so mostly for 500.perlbench_r in spec2017). Kindly forgive
> > > > the incompleteness.
> > > >
> > > > On kernel without the change:
> > > >   10.010018010    14254    probe:vmx_cancel_injection
> > > >   20.037646388    15207    probe:vmx_cancel_injection
> > > >   30.078739816    15261    probe:vmx_cancel_injection
> > > >   40.114033258    15085    probe:vmx_cancel_injection
> > > >   50.149297460    15112    probe:vmx_cancel_injection
> > > >   60.185103088    15104    probe:vmx_cancel_injection
> > > >
> > > > On kernel with the change:
> > > >   10.003595390       40    probe:vmx_cancel_injection
> > > >   20.017855682       31    probe:vmx_cancel_injection
> > > >   30.028355883       34    probe:vmx_cancel_injection
> > > >   40.038686298       31    probe:vmx_cancel_injection
> > > >   50.048795162       20    probe:vmx_cancel_injection
> > > >   60.069057747       19    probe:vmx_cancel_injection
> > > >
> > > > From the above, it is clear that we save 1500 invocations per vcpu per
> > > > second to vmx_cancel_injection() for workloads like perlbench.
>
> Nit, this really should have:
>
>   Suggested-by: Sean Christopherson
>
> I personally don't care about the attribution, but (a) others often do care and
> (b) the added context is helpful.  E.g. for bad/questionable suggestions/ideas,
> knowing that person X was also involved helps direct and/or curate questions/comments
> accordingly.

For sure! I will also pay more attention to that in the future.

Thanks.
-Mingwei