From: kan.liang@linux.intel.com
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org
Cc: acme@kernel.org, tglx@linutronix.de, bp@alien8.de, namhyung@kernel.org,
    jolsa@redhat.com, ak@linux.intel.com, yao.jin@linux.intel.com,
    alexander.shishkin@linux.intel.com, adrian.hunter@intel.com,
    ricardo.neri-calderon@linux.intel.com, Kan Liang
Subject: [PATCH V5 08/25] perf/x86: Hybrid PMU support for hardware cache
event
Date: Mon, 5 Apr 2021 08:10:50 -0700
Message-Id: <1617635467-181510-9-git-send-email-kan.liang@linux.intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1617635467-181510-1-git-send-email-kan.liang@linux.intel.com>
References: <1617635467-181510-1-git-send-email-kan.liang@linux.intel.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

From: Kan Liang

The hardware cache events differ among hybrid PMUs, so each hybrid PMU
must have its own hw cache event table. Because hw_cache_event_ids and
hw_cache_extra_regs are not part of struct x86_pmu, the hybrid() helper
cannot be applied here; instead, per-PMU copies are added to struct
x86_hybrid_pmu and set_ext_hw_attr() selects the appropriate table at
event setup time.

Reviewed-by: Andi Kleen
Signed-off-by: Kan Liang
---
 arch/x86/events/core.c       | 11 +++++++++--
 arch/x86/events/perf_event.h |  9 +++++++++
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 0bd9554..d71ca69 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -356,6 +356,7 @@ set_ext_hw_attr(struct hw_perf_event *hwc, struct perf_event *event)
 {
 	struct perf_event_attr *attr = &event->attr;
 	unsigned int cache_type, cache_op, cache_result;
+	struct x86_hybrid_pmu *pmu = is_hybrid() ?
		hybrid_pmu(event->pmu) : NULL;
 	u64 config, val;
 
 	config = attr->config;
@@ -375,7 +376,10 @@ set_ext_hw_attr(struct hw_perf_event *hwc, struct perf_event *event)
 		return -EINVAL;
 	cache_result = array_index_nospec(cache_result, PERF_COUNT_HW_CACHE_RESULT_MAX);
 
-	val = hw_cache_event_ids[cache_type][cache_op][cache_result];
+	if (pmu)
+		val = pmu->hw_cache_event_ids[cache_type][cache_op][cache_result];
+	else
+		val = hw_cache_event_ids[cache_type][cache_op][cache_result];
 
 	if (val == 0)
 		return -ENOENT;
@@ -384,7 +388,10 @@ set_ext_hw_attr(struct hw_perf_event *hwc, struct perf_event *event)
 		return -EINVAL;
 
 	hwc->config |= val;
-	attr->config1 = hw_cache_extra_regs[cache_type][cache_op][cache_result];
+	if (pmu)
+		attr->config1 = pmu->hw_cache_extra_regs[cache_type][cache_op][cache_result];
+	else
+		attr->config1 = hw_cache_extra_regs[cache_type][cache_op][cache_result];
 
 	return x86_pmu_extra_regs(val, event);
 }
 
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index cfb2da0..203c165 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -640,6 +640,15 @@ struct x86_hybrid_pmu {
 	int				num_counters;
 	int				num_counters_fixed;
 	struct event_constraint		unconstrained;
+
+	u64				hw_cache_event_ids
+					[PERF_COUNT_HW_CACHE_MAX]
+					[PERF_COUNT_HW_CACHE_OP_MAX]
+					[PERF_COUNT_HW_CACHE_RESULT_MAX];
+	u64				hw_cache_extra_regs
+					[PERF_COUNT_HW_CACHE_MAX]
+					[PERF_COUNT_HW_CACHE_OP_MAX]
+					[PERF_COUNT_HW_CACHE_RESULT_MAX];
 };
 
 static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
-- 
2.7.4