Message-ID: <5C259CBA.4030805@intel.com>
Date: Fri, 28 Dec 2018 11:47:06 +0800
From: Wei Wang
To: Andi Kleen
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, pbonzini@redhat.com,
 peterz@infradead.org, kan.liang@intel.com, mingo@redhat.com,
 rkrcmar@redhat.com, like.xu@intel.com, jannh@google.com,
 arei.gonglei@huawei.com
Subject: Re: [PATCH v4 10/10] KVM/x86/lbr: lazy save the guest lbr stack
References: <1545816338-1171-1-git-send-email-wei.w.wang@intel.com>
 <1545816338-1171-11-git-send-email-wei.w.wang@intel.com>
 <20181227205104.GG25620@tassilo.jf.intel.com>
In-Reply-To: <20181227205104.GG25620@tassilo.jf.intel.com>

On 12/28/2018 04:51 AM, Andi Kleen wrote:
> Thanks. This looks a lot better than the earlier versions.
>
> Some more comments.
>
> On Wed, Dec 26, 2018 at 05:25:38PM +0800, Wei Wang wrote:
>> When the vCPU is scheduled in:
>> - if the lbr feature was used in the last vCPU time slice, set the lbr
>>   stack to be interceptible, so that the host can capture whether the
>>   lbr feature will be used in this time slice;
>> - if the lbr feature wasn't used in the last vCPU time slice, disable
>>   the vCPU support of the guest lbr switching.
>
> time slice is the time from exit to exit?

It's the vCPU thread time slice (e.g. 100ms).

> This might be rather short in some cases if the workload does a lot of
> exits (which I would expect PMU workloads to do). It would be better to
> use some explicit time check, or at least N exits.

Did you mean further increasing the lazy period to multiple host thread
scheduling time slices? What would be a good value for "N"?

>> Upon the first access to one of the lbr related MSRs (since the vCPU was
>> scheduled in):
>> - record that the guest has used the lbr;
>> - create a host perf event to help save/restore the guest lbr stack if
>>   the guest uses the user callstack mode lbr stack;
>
> This is a bit risky. It would be safer (but also more expensive)
> to always save, even for any guest LBR use independent of callstack.
>
> Otherwise we might get into a situation where a vCPU context switch
> inside the guest PMI will clear the LBRs before they can be read in the
> PMI, so some LBR samples will be fully or partially cleared. This would
> be user visible.
>
> In theory we could try to detect if the guest is inside a PMI and
> save/restore then, but that would likely be complicated. I would
> save/restore for all cases.

Yes, it is easier to save in all cases. But I'm curious about the
non-callstack mode: it is just point sampling of functions (somewhat
speculative to some degree). Would occasionally losing a few records
matter in that case?

>> +static void
>> +__always_inline vmx_set_intercept_for_msr(unsigned long *msr_bitmap, u32 msr,
>> +                                           int type, bool value);
>
> __always_inline should only be used if it's needed for functionality,
> or in a header.

Thanks, will fix it.

Best,
Wei
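
---

For reference, the lazy toggling described in the commit message above can be
sketched as a small per-vCPU state machine. The stand-alone C model below uses
invented names (vcpu_lbr, lbr_sched_in, lbr_msr_access) and plain booleans in
place of the real VMX MSR intercept bitmap and host perf event; it is a minimal
illustration of the idea discussed in this thread, not code from the patch.

/*
 * Sketch of the lazy LBR-state toggling, with hypothetical names.
 * The real implementation operates on the VMX MSR intercept bitmap
 * and creates a host perf event instead of setting booleans.
 */
#include <stdbool.h>
#include <stdio.h>

struct vcpu_lbr {
	bool used_in_last_slice;  /* guest touched LBR MSRs in the last time slice */
	bool msrs_intercepted;    /* LBR MSR accesses currently cause a VM exit    */
	bool switch_supported;    /* host saves/restores the guest LBR stack       */
};

/* Called when the vCPU thread is scheduled in on the host. */
static void lbr_sched_in(struct vcpu_lbr *v)
{
	if (v->used_in_last_slice) {
		/*
		 * LBR was used recently: intercept the MSRs again so the
		 * host can observe whether it is still used in this slice.
		 */
		v->msrs_intercepted = true;
	} else {
		/* Not used recently: stop saving/restoring the LBR stack. */
		v->switch_supported = false;
	}
	v->used_in_last_slice = false;
}

/* Called on the first intercepted guest access to an LBR-related MSR. */
static void lbr_msr_access(struct vcpu_lbr *v)
{
	v->used_in_last_slice = true;
	v->switch_supported = true;   /* e.g. create the host perf event */
	v->msrs_intercepted = false;  /* pass the MSRs through again     */
}

int main(void)
{
	struct vcpu_lbr v = { .msrs_intercepted = true };

	lbr_msr_access(&v);           /* guest starts using LBR            */
	lbr_sched_in(&v);             /* next slice: intercept to probe    */
	lbr_sched_in(&v);             /* no access seen: drop LBR support  */

	printf("intercepted=%d switch_supported=%d\n",
	       v.msrs_intercepted, v.switch_supported);
	return 0;
}

The __always_inline point at the end of the thread is independent of the
scheme above: the fix Wei acknowledges is simply to drop the qualifier from
the vmx_set_intercept_for_msr() declaration in the .c file.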