From: Zhu Lingshan
To: peterz@infradead.org, pbonzini@redhat.com
Cc: bp@alien8.de, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com,
    jmattson@google.com, joro@8bytes.org, kan.liang@linux.intel.com,
    ak@linux.intel.com, wei.w.wang@intel.com, eranian@google.com,
    liuxiangdong5@huawei.com, linux-kernel@vger.kernel.org, x86@kernel.org,
    kvm@vger.kernel.org, like.xu.linux@gmail.com, boris.ostrvsky@oracle.com,
    Zhu Lingshan
Subject: [PATCH V9 07/18] x86/perf/core: Add pebs_capable to store valid PEBS_COUNTER_MASK value
Date: Thu, 22 Jul 2021 13:41:48 +0800
Message-Id: <20210722054159.4459-8-lingshan.zhu@intel.com>
In-Reply-To: <20210722054159.4459-1-lingshan.zhu@intel.com>
References: <20210722054159.4459-1-lingshan.zhu@intel.com>

From: "Peter Zijlstra (Intel)"

The value of pebs_counter_mask will be accessed frequently for repeated
use in intel_guest_get_msrs(), so cache the valid mask in the new
x86_pmu.pebs_capable field instead of endlessly mucking about with
branches.

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Zhu Lingshan
---
 arch/x86/events/intel/core.c | 14 ++++++--------
 arch/x86/events/perf_event.h |  1 +
 2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index e52d73348343..c97e00083d10 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2867,10 +2867,7 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
 	 * counters from the GLOBAL_STATUS mask and we always process PEBS
 	 * events via drain_pebs().
 	 */
-	if (x86_pmu.flags & PMU_FL_PEBS_ALL)
-		status &= ~cpuc->pebs_enabled;
-	else
-		status &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
+	status &= ~(cpuc->pebs_enabled & x86_pmu.pebs_capable);
 
 	/*
 	 * PEBS overflow sets bit 62 in the global status register
@@ -3908,10 +3905,7 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
 	arr[0].msr = MSR_CORE_PERF_GLOBAL_CTRL;
 	arr[0].host = intel_ctrl & ~cpuc->intel_ctrl_guest_mask;
 	arr[0].guest = intel_ctrl & ~cpuc->intel_ctrl_host_mask;
-	if (x86_pmu.flags & PMU_FL_PEBS_ALL)
-		arr[0].guest &= ~cpuc->pebs_enabled;
-	else
-		arr[0].guest &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
+	arr[0].guest &= ~(cpuc->pebs_enabled & x86_pmu.pebs_capable);
 	*nr = 1;
 
 	if (x86_pmu.pebs && x86_pmu.pebs_no_isolation) {
@@ -5594,6 +5588,7 @@ __init int intel_pmu_init(void)
 	x86_pmu.events_mask_len	= eax.split.mask_length;
 
 	x86_pmu.max_pebs_events		= min_t(unsigned, MAX_PEBS_EVENTS, x86_pmu.num_counters);
+	x86_pmu.pebs_capable		= PEBS_COUNTER_MASK;
 
 	/*
 	 * Quirk: v2 perfmon does not report fixed-purpose events, so
@@ -5778,6 +5773,7 @@ __init int intel_pmu_init(void)
 		x86_pmu.pebs_aliases = NULL;
 		x86_pmu.pebs_prec_dist = true;
 		x86_pmu.lbr_pt_coexist = true;
+		x86_pmu.pebs_capable = ~0ULL;
 		x86_pmu.flags |= PMU_FL_HAS_RSP_1;
 		x86_pmu.flags |= PMU_FL_PEBS_ALL;
 		x86_pmu.get_event_constraints = glp_get_event_constraints;
@@ -6135,6 +6131,7 @@ __init int intel_pmu_init(void)
 		x86_pmu.pebs_aliases = NULL;
 		x86_pmu.pebs_prec_dist = true;
 		x86_pmu.pebs_block = true;
+		x86_pmu.pebs_capable = ~0ULL;
 		x86_pmu.flags |= PMU_FL_HAS_RSP_1;
 		x86_pmu.flags |= PMU_FL_NO_HT_SHARING;
 		x86_pmu.flags |= PMU_FL_PEBS_ALL;
@@ -6178,6 +6175,7 @@ __init int intel_pmu_init(void)
 		x86_pmu.pebs_aliases = NULL;
 		x86_pmu.pebs_prec_dist = true;
 		x86_pmu.pebs_block = true;
+		x86_pmu.pebs_capable = ~0ULL;
 		x86_pmu.flags |= PMU_FL_HAS_RSP_1;
 		x86_pmu.flags |= PMU_FL_NO_HT_SHARING;
 		x86_pmu.flags |= PMU_FL_PEBS_ALL;
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 1518f2754842..35d0a7ec5f20 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -807,6 +807,7 @@ struct x86_pmu {
 	void		(*pebs_aliases)(struct perf_event *event);
 	unsigned long	large_pebs_flags;
 	u64		rtm_abort_event;
+	u64		pebs_capable;
 
 	/*
 	 * Intel LBR
-- 
2.27.0
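
For reference, a minimal standalone sketch of the branch-elimination pattern
used above, runnable outside the kernel. The names below (struct pmu,
FLAG_PEBS_ALL, COUNTER_MASK, guest_mask_*) are illustrative stand-ins, not the
real x86_pmu definitions:

/*
 * Illustrative sketch only, not part of the patch: precompute a
 * capability mask once at init time so hot paths do a plain AND
 * instead of re-testing a feature flag on every call.
 */
#include <stdint.h>
#include <stdio.h>

#define FLAG_PEBS_ALL	0x1ULL
#define COUNTER_MASK	0xFFULL	/* stand-in for PEBS_COUNTER_MASK */

struct pmu {
	uint64_t flags;
	uint64_t pebs_capable;	/* precomputed once at init time */
};

/* Before: every caller re-evaluates the capability flag. */
static uint64_t guest_mask_branchy(const struct pmu *pmu, uint64_t pebs_enabled)
{
	if (pmu->flags & FLAG_PEBS_ALL)
		return pebs_enabled;
	return pebs_enabled & COUNTER_MASK;
}

/* After: the valid-counter mask is cached, so hot paths just AND. */
static uint64_t guest_mask_cached(const struct pmu *pmu, uint64_t pebs_enabled)
{
	return pebs_enabled & pmu->pebs_capable;
}

int main(void)
{
	struct pmu pmu = { .flags = 0, .pebs_capable = COUNTER_MASK };

	/* Both forms yield the same result for a non-PEBS_ALL PMU. */
	printf("%llx\n", (unsigned long long)guest_mask_branchy(&pmu, 0x1F0ULL));
	printf("%llx\n", (unsigned long long)guest_mask_cached(&pmu, 0x1F0ULL));
	return 0;
}

Setting pebs_capable to COUNTER_MASK (or to ~0ULL when all counters support
PEBS) at init time plays the same role as the PMU_FL_PEBS_ALL branch it
replaces in the patch.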