From: Like Xu
To: Paolo Bonzini, kvm@vger.kernel.org, peterz@infradead.org, Jim Mattson
Cc: rkrcmar@redhat.com, sean.j.christopherson@intel.com, vkuznets@redhat.com,
    Ingo Molnar, Arnaldo Carvalho de Melo, ak@linux.intel.com,
    wei.w.wang@intel.com, kan.liang@intel.com, like.xu@intel.com,
    ehankland@google.com, arbel.moshe@oracle.com, linux-kernel@vger.kernel.org
Subject: [PATCH v2 0/4] KVM: x86/vPMU: Efficiency optimization by reusing last created
 perf_event
Date: Sun, 13 Oct 2019 17:15:29 +0800
Message-Id: <20191013091533.12971-1-like.xu@linux.intel.com>

The Performance Monitoring Unit (PMU) is designed to monitor
microarchitectural events, which helps in analyzing how applications or
operating systems perform on the processor. In KVM/x86, the version 2
Architectural PMU has been enabled on both Intel and AMD hosts.

This patch series improves vPMU efficiency for guest perf users, measured
mainly by guest NMI handler latency for basic perf usages [1][2][3][4]
with a hardware PMU. It is not a pass-through solution; it builds on the
legacy vPMU implementation (in place since 2011), which keeps it
backport-friendly.

The general idea (implemented in patch 3/4) is to reuse the last created
perf_event for the same vPMC when the newly requested config is exactly
the same as the last programmed config (as used by
pmc_reprogram_counter()) AND the new event period is appropriate and
accepted (via perf_event_period(), added in patch 1/4). Before being
reused, the perf_event stays disabled until it is suitable for reuse, at
which point a hardware counter is assigned again to serve the vPMC.

If a disabled perf_event is no longer reused, a lazy release mechanism
(implemented in patch 4/4) takes over: in short, the disabled perf_events
are released on the first call to vcpu_enter_guest after the vcpu is next
scheduled in, provided their MSRs were not accessed during the last
scheduling time slice. The bitmap pmu->lazy_release_ctrl is added to
track this. kvm_pmu_cleanup() runs only on the first vcpu_enter_guest
after the vcpu is scheduled in, so the overhead of the check is very
limited.

With this optimization, the average latency of the guest NMI handler is
reduced from 99450 ns to 56195 ns (a 1.76x speedup on CLX-AP with v5.3).
If the host disables the watchdog (echo 0 > /proc/sys/kernel/watchdog),
the minimum latency of the guest NMI handler improves by 2994x and the
average by 685x. The run time of a workload with perf attached inside
the guest can be reduced significantly with this optimization.

Please check each commit for more details and share your comments with us.
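To make the above a bit more concrete, below is a rough, self-contained
userspace sketch of the two ideas (config-match reuse and bitmap-driven
lazy release). The structures and helpers here (struct vpmc,
fake_perf_event, pmc_try_reuse_event(), pmu_lazy_cleanup(), pmc_in_use)
are simplified stand-ins for illustration only, not the actual KVM/perf
code added by this series:

/*
 * Rough sketch only: simplified stand-ins for the real KVM structures.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct fake_perf_event {            /* stand-in for struct perf_event */
    uint64_t config;
    uint64_t period;
    bool enabled;
};

struct vpmc {                       /* stand-in for struct kvm_pmc */
    int idx;
    uint64_t last_config;           /* config used by the last (re)program */
    struct fake_perf_event *event;
};

/* Pause the event; the series adds perf_event_pause() for the real case. */
static void event_pause(struct fake_perf_event *e)
{
    e->enabled = false;
}

/* Adjust the period and re-enable; real code uses perf_event_period(). */
static void event_resume(struct fake_perf_event *e, uint64_t period)
{
    e->period = period;
    e->enabled = true;
}

/*
 * Reuse path: only when the newly requested config exactly matches the
 * last programmed config do we skip the expensive release + re-create
 * path and simply adjust the period and re-enable the existing event.
 */
static bool pmc_try_reuse_event(struct vpmc *pmc, uint64_t config,
                                uint64_t period)
{
    if (!pmc->event || pmc->last_config != config)
        return false;               /* caller falls back to full reprogramming */

    event_pause(pmc->event);
    event_resume(pmc->event, period);
    return true;
}

/*
 * Lazy release: at the first guest entry after the vCPU is scheduled in,
 * free events whose vPMC MSRs were not touched during the previous time
 * slice. A per-PMU bitmap (pmc_in_use here) records which counters were
 * accessed.
 */
static void pmu_lazy_cleanup(struct vpmc *pmcs, int nr, uint64_t pmc_in_use)
{
    for (int i = 0; i < nr; i++) {
        if (pmcs[i].event && !(pmc_in_use & (1ULL << i))) {
            printf("releasing idle event on vPMC %d\n", i);
            pmcs[i].event = NULL;   /* real code: perf_event_release_kernel() */
        }
    }
}

int main(void)
{
    struct fake_perf_event ev = { .config = 0xc4, .period = 1000, .enabled = true };
    struct vpmc pmc = { .idx = 0, .last_config = 0xc4, .event = &ev };

    /* Same config requested again: the event is reused, not re-created. */
    printf("reused: %d\n", pmc_try_reuse_event(&pmc, 0xc4, 2000));

    /* No MSR access recorded for vPMC 0 in the last slice: release it. */
    pmu_lazy_cleanup(&pmc, 1, 0 /* empty pmc_in_use bitmap */);
    return 0;
}

The actual series wires this logic into the existing vPMU reprogramming
and vcpu_enter_guest paths; the sketch only shows the decision logic.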
Thanks,
Like Xu

---
[1] multiplexing sampling usage:
    time perf record -e \
      `perf list | grep Hardware | grep event | \
       awk '{print $1}' | head -n 10 | tr '\n' ',' | sed 's/,$//'` ./ftest
[2] one gp counter sampling usage: perf record -e branch-misses ./ftest
[3] one fixed counter sampling usage: perf record -e instructions ./ftest
[4] event count usage: perf stat -e branch-misses ./ftest

---
Changes in v2:
- use perf_event_pause() to disable, read and reset with only one lock;
- use __perf_event_read_value() after _perf_event_disable();
- replace bitfields with 'u8 event_count; bool need_cleanup;';
- refine comments and commit messages;
- fix two issues reported by the kbuild test robot for ARCH=[nds32|sh]

v1: https://lore.kernel.org/kvm/20190930072257.43352-1-like.xu@linux.intel.com/

Like Xu (4):
  perf/core: Provide a kernel-internal interface to recalibrate event period
  perf/core: Provide a kernel-internal interface to pause perf_event
  KVM: x86/vPMU: Reuse perf_event to avoid unnecessary pmc_reprogram_counter
  KVM: x86/vPMU: Add lazy mechanism to release perf_event per vPMC

 arch/x86/include/asm/kvm_host.h | 17 +++++++
 arch/x86/kvm/pmu.c              | 88 ++++++++++++++++++++++++++++++++-
 arch/x86/kvm/pmu.h              | 15 +++++-
 arch/x86/kvm/pmu_amd.c          | 14 ++++++
 arch/x86/kvm/vmx/pmu_intel.c    | 27 ++++++++++
 arch/x86/kvm/x86.c              | 12 +++++
 include/linux/perf_event.h      | 10 ++++
 kernel/events/core.c            | 44 ++++++++++++++---
 8 files changed, 216 insertions(+), 11 deletions(-)

-- 
2.21.0