Subject: Re: [PATCH v4 1/2] perf: riscv: preliminary RISC-V support
To: Alan Kao, Palmer Dabbelt, Albert Ou, Peter Zijlstra, Ingo Molnar,
    Arnaldo Carvalho de Melo, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
    Alex Solomatnikov, Jonathan Corbet, linux-riscv@lists.infradead.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Nick Hu, Greentime Hu
From: Atish Patra
Date: Thu, 19 Apr 2018 12:46:24 -0700

On 4/17/18 7:13 PM, Alan Kao wrote:
> This patch provides a basic PMU, riscv_base_pmu, which supports two
> general hardware events: instructions and cycles. Furthermore, this
> PMU serves as a reference implementation to ease future ports.
>
> riscv_base_pmu should be able to run on any RISC-V machine that
> conforms to the Priv-Spec. Note that the latest qemu model does not
> fully support the proper Priv-Spec 1.10 behavior yet, but a workaround
> should be easy with very small fixes. Please check
> https://github.com/riscv/riscv-qemu/pull/115 for future updates.
>
> Cc: Nick Hu
> Cc: Greentime Hu
> Signed-off-by: Alan Kao
> ---
>  arch/riscv/Kconfig                  |  13 +
>  arch/riscv/include/asm/perf_event.h |  79 ++++-
>  arch/riscv/kernel/Makefile          |   1 +
>  arch/riscv/kernel/perf_event.c      | 482 ++++++++++++++++++++++++++++
>  4 files changed, 571 insertions(+), 4 deletions(-)
>  create mode 100644 arch/riscv/kernel/perf_event.c
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index c22ebe08e902..90d9c8e50377 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -203,6 +203,19 @@ config RISCV_ISA_C
>  config RISCV_ISA_A
>  	def_bool y
>  
> +menu "supported PMU type"
> +	depends on PERF_EVENTS
> +
> +config RISCV_BASE_PMU
> +	bool "Base Performance Monitoring Unit"
> +	def_bool y
> +	help
> +	  A base PMU that serves as a reference implementation and has limited
> +	  feature of perf. It can run on any RISC-V machines so serves as the
> +	  fallback, but this option can also be disable to reduce kernel size.
> +
> +endmenu
> +
>  endmenu
>  
>  menu "Kernel type"
> diff --git a/arch/riscv/include/asm/perf_event.h b/arch/riscv/include/asm/perf_event.h
> index e13d2ff29e83..0e638a0c3feb 100644
> --- a/arch/riscv/include/asm/perf_event.h
> +++ b/arch/riscv/include/asm/perf_event.h
> @@ -1,13 +1,84 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
>  /*
>   * Copyright (C) 2018 SiFive
> + * Copyright (C) 2018 Andes Technology Corporation
>   *
> - * This program is free software; you can redistribute it and/or
> - * modify it under the terms of the GNU General Public Licence
> - * as published by the Free Software Foundation; either version
> - * 2 of the Licence, or (at your option) any later version.
>   */
>  
>  #ifndef _ASM_RISCV_PERF_EVENT_H
>  #define _ASM_RISCV_PERF_EVENT_H
>  
> +#include
> +#include
> +
> +#define RISCV_BASE_COUNTERS	2
> +
> +/*
> + * The RISCV_MAX_COUNTERS parameter should be specified.
> + */
> +
> +#ifdef CONFIG_RISCV_BASE_PMU
> +#define RISCV_MAX_COUNTERS	2
> +#endif
> +
> +#ifndef RISCV_MAX_COUNTERS
> +#error "Please provide a valid RISCV_MAX_COUNTERS for the PMU."
> +#endif
> +
> +/*
> + * These are the indexes of bits in counteren register *minus* 1,
> + * except for cycle. It would be coherent if it can directly mapped
> + * to counteren bit definition, but there is a *time* register at
> + * counteren[1]. Per-cpu structure is scarce resource here.
> + *
> + * According to the spec, an implementation can support counter up to
> + * mhpmcounter31, but many high-end processors has at most 6 general
> + * PMCs, we give the definition to MHPMCOUNTER8 here.
> + */
> +#define RISCV_PMU_CYCLE		0
> +#define RISCV_PMU_INSTRET	1
> +#define RISCV_PMU_MHPMCOUNTER3	2
> +#define RISCV_PMU_MHPMCOUNTER4	3
> +#define RISCV_PMU_MHPMCOUNTER5	4
> +#define RISCV_PMU_MHPMCOUNTER6	5
> +#define RISCV_PMU_MHPMCOUNTER7	6
> +#define RISCV_PMU_MHPMCOUNTER8	7
> +
> +#define RISCV_OP_UNSUPP		(-EOPNOTSUPP)
> +
> +struct cpu_hw_events {
> +	/* # currently enabled events*/
> +	int			n_events;
> +	/* currently enabled events */
> +	struct perf_event	*events[RISCV_MAX_COUNTERS];
> +	/* vendor-defined PMU data */
> +	void			*platform;
> +};
> +
> +struct riscv_pmu {
> +	struct pmu	*pmu;
> +
> +	/* generic hw/cache events table */
> +	const int	*hw_events;
> +	const int	(*cache_events)[PERF_COUNT_HW_CACHE_MAX]
> +				       [PERF_COUNT_HW_CACHE_OP_MAX]
> +				       [PERF_COUNT_HW_CACHE_RESULT_MAX];
> +	/* method used to map hw/cache events */
> +	int		(*map_hw_event)(u64 config);
> +	int		(*map_cache_event)(u64 config);
> +
> +	/* max generic hw events in map */
> +	int		max_events;
> +	/* number total counters, 2(base) + x(general) */
> +	int		num_counters;
> +	/* the width of the counter */
> +	int		counter_width;
> +
> +	/* vendor-defined PMU features */
> +	void		*platform;
> +
> +	irqreturn_t	(*handle_irq)(int irq_num, void *dev);
> +	int		irq;
> +};
> +
>  #endif /* _ASM_RISCV_PERF_EVENT_H */
> diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
> index ffa439d4a364..f50d19816757 100644
> --- a/arch/riscv/kernel/Makefile
> +++ b/arch/riscv/kernel/Makefile
> @@ -39,5 +39,6 @@ obj-$(CONFIG_MODULE_SECTIONS)	+= module-sections.o
>  obj-$(CONFIG_FUNCTION_TRACER)	+= mcount.o
>  obj-$(CONFIG_DYNAMIC_FTRACE)	+= mcount-dyn.o
>  obj-$(CONFIG_FUNCTION_GRAPH_TRACER)	+= ftrace.o
> +obj-$(CONFIG_PERF_EVENTS)	+= perf_event.o
>  
>  clean:
> diff --git a/arch/riscv/kernel/perf_event.c b/arch/riscv/kernel/perf_event.c
> new file mode 100644
> index 000000000000..ba3192afc470
> --- /dev/null
> +++ b/arch/riscv/kernel/perf_event.c
> @@ -0,0 +1,482 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2008 Thomas Gleixner
> + * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
> + * Copyright (C) 2009 Jaswinder Singh Rajput
> + * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
> + * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra
> + * Copyright (C) 2009 Intel Corporation,
> + * Copyright (C) 2009 Google, Inc., Stephane Eranian
> + * Copyright 2014 Tilera Corporation. All Rights Reserved.
> + * Copyright (C) 2018 Andes Technology Corporation
> + *
> + * Perf_events support for RISC-V platforms.
> + *
> + * Since the spec. (as of now, Priv-Spec 1.10) does not provide enough
> + * functionality for perf event to fully work, this file provides
> + * the very basic framework only.
> + *
> + * For platform portings, please check Documentations/riscv/pmu.txt.
> + *
> + * The Copyright line includes x86 and tile ones.
> + */
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +static const struct riscv_pmu *riscv_pmu __read_mostly;
> +static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events);
> +
> +/*
> + * Hardware & cache maps and their methods
> + */
> +
> +static const int riscv_hw_event_map[] = {
> +	[PERF_COUNT_HW_CPU_CYCLES]		= RISCV_PMU_CYCLE,
> +	[PERF_COUNT_HW_INSTRUCTIONS]		= RISCV_PMU_INSTRET,
> +	[PERF_COUNT_HW_CACHE_REFERENCES]	= RISCV_OP_UNSUPP,
> +	[PERF_COUNT_HW_CACHE_MISSES]		= RISCV_OP_UNSUPP,
> +	[PERF_COUNT_HW_BRANCH_INSTRUCTIONS]	= RISCV_OP_UNSUPP,
> +	[PERF_COUNT_HW_BRANCH_MISSES]		= RISCV_OP_UNSUPP,
> +	[PERF_COUNT_HW_BUS_CYCLES]		= RISCV_OP_UNSUPP,
> +};
> +
> +#define C(x) PERF_COUNT_HW_CACHE_##x
> +static const int riscv_cache_event_map[PERF_COUNT_HW_CACHE_MAX]
> +[PERF_COUNT_HW_CACHE_OP_MAX]
> +[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
> +	[C(L1D)] = {
> +		[C(OP_READ)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +		[C(OP_WRITE)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +		[C(OP_PREFETCH)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +	},
> +	[C(L1I)] = {
> +		[C(OP_READ)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +		[C(OP_WRITE)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +		[C(OP_PREFETCH)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +	},
> +	[C(LL)] = {
> +		[C(OP_READ)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +		[C(OP_WRITE)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +		[C(OP_PREFETCH)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +	},
> +	[C(DTLB)] = {
> +		[C(OP_READ)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +		[C(OP_WRITE)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +		[C(OP_PREFETCH)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +	},
> +	[C(ITLB)] = {
> +		[C(OP_READ)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +		[C(OP_WRITE)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +		[C(OP_PREFETCH)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +	},
> +	[C(BPU)] = {
> +		[C(OP_READ)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +		[C(OP_WRITE)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +		[C(OP_PREFETCH)] = {
> +			[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> +			[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> +		},
> +	},
> +};
> +
> +static int riscv_map_hw_event(u64 config)
> +{
> +	if (config >= riscv_pmu->max_events)
> +		return -EINVAL;
> +
> +	return riscv_pmu->hw_events[config];
> +}
> +
> +int riscv_map_cache_decode(u64 config, unsigned int *type,
> +			   unsigned int *op, unsigned int *result)
> +{
> +	return -ENOENT;
> +}
> +
> +static int riscv_map_cache_event(u64 config)
> +{
> +	unsigned int type, op, result;
> +	int err = -ENOENT;
> +	int code;
> +
> +	err = riscv_map_cache_decode(config, &type, &op, &result);
> +	if (!riscv_pmu->cache_events || err)
> +		return err;
> +
> +	if (type >= PERF_COUNT_HW_CACHE_MAX ||
> +	    op >= PERF_COUNT_HW_CACHE_OP_MAX ||
> +	    result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
> +		return -EINVAL;
> +
> +	code = (*riscv_pmu->cache_events)[type][op][result];
> +	if (code == RISCV_OP_UNSUPP)
> +		return -EINVAL;
> +
> +	return code;
> +}
> +
> +/*
> + * Low-level functions: reading/writing counters
> + */
> +
> +static inline u64 read_counter(int idx)
> +{
> +	u64 val = 0;
> +
> +	switch (idx) {
> +	case RISCV_PMU_CYCLE:
> +		val = csr_read(cycle);
> +		break;
> +	case RISCV_PMU_INSTRET:
> +		val = csr_read(instret);
> +		break;
> +	default:
> +		WARN_ON_ONCE(idx < 0 || idx > RISCV_MAX_COUNTERS);
> +		return -EINVAL;
> +	}
> +
> +	return val;
> +}
> +
> +static inline void write_counter(int idx, u64 value)
> +{
> +	/* currently not supported */
> +	WARN_ON_ONCE(1);
> +}
> +
> +/*
> + * pmu->read: read and update the counter
> + *
> + * Other architectures' implementation often have a xxx_perf_event_update
> + * routine, which can return counter values when called in the IRQ, but
> + * return void when being called by the pmu->read method.
> + */
> +static void riscv_pmu_read(struct perf_event *event)
> +{
> +	struct hw_perf_event *hwc = &event->hw;
> +	u64 prev_raw_count, new_raw_count;
> +	u64 oldval;
> +	int idx = hwc->idx;
> +	u64 delta;
> +
> +	do {
> +		prev_raw_count = local64_read(&hwc->prev_count);
> +		new_raw_count = read_counter(idx);
> +
> +		oldval = local64_cmpxchg(&hwc->prev_count, prev_raw_count,
> +					 new_raw_count);
> +	} while (oldval != prev_raw_count);
> +
> +	/*
> +	 * delta is the value to update the counter we maintain in the kernel.
> +	 */
> +	delta = (new_raw_count - prev_raw_count) &
> +		((1ULL << riscv_pmu->counter_width) - 1);
> +	local64_add(delta, &event->count);
> +	/*
> +	 * Something like local64_sub(delta, &hwc->period_left) here is
> +	 * needed if there is an interrupt for perf.
> +	 */
> +}
> +
> +/*
> + * State transition functions:
> + *
> + * stop()/start() & add()/del()
> + */
> +
> +/*
> + * pmu->stop: stop the counter
> + */
> +static void riscv_pmu_stop(struct perf_event *event, int flags)
> +{
> +	struct hw_perf_event *hwc = &event->hw;
> +
> +	WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
> +	hwc->state |= PERF_HES_STOPPED;
> +
> +	if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
> +		riscv_pmu->pmu->read(event);
> +		hwc->state |= PERF_HES_UPTODATE;
> +	}
> +}
> +
> +/*
> + * pmu->start: start the event.
> + */
> +static void riscv_pmu_start(struct perf_event *event, int flags)
> +{
> +	struct hw_perf_event *hwc = &event->hw;
> +
> +	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
> +		return;
> +
> +	if (flags & PERF_EF_RELOAD) {
> +		WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE));
> +
> +		/*
> +		 * Set the counter to the period to the next interrupt here,
> +		 * if you have any.
> +		 */
> +	}
> +
> +	hwc->state = 0;
> +	perf_event_update_userpage(event);
> +
> +	/*
> +	 * Since we cannot write to counters, this serves as an initialization
> +	 * to the delta-mechanism in pmu->read(); otherwise, the delta would be
> +	 * wrong when pmu->read is called for the first time.
> +	 */
> +	local64_set(&hwc->prev_count, read_counter(hwc->idx));
> +}
> +
> +/*
> + * pmu->add: add the event to PMU.
> + */
> +static int riscv_pmu_add(struct perf_event *event, int flags)
> +{
> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> +	struct hw_perf_event *hwc = &event->hw;
> +
> +	if (cpuc->n_events == riscv_pmu->num_counters)
> +		return -ENOSPC;
> +
> +	/*
> +	 * We don't have general conunters, so no binding-event-to-counter
> +	 * process here.
> +	 *
> +	 * Indexing using hwc->config generally not works, since config may
> +	 * contain extra information, but here the only info we have in
> +	 * hwc->config is the event index.
> +	 */
> +	hwc->idx = hwc->config;
> +	cpuc->events[hwc->idx] = event;
> +	cpuc->n_events++;
> +
> +	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
> +
> +	if (flags & PERF_EF_START)
> +		riscv_pmu->pmu->start(event, PERF_EF_RELOAD);
> +
> +	return 0;
> +}
> +
> +/*
> + * pmu->del: delete the event from PMU.
> + */
> +static void riscv_pmu_del(struct perf_event *event, int flags)
> +{
> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> +	struct hw_perf_event *hwc = &event->hw;
> +
> +	cpuc->events[hwc->idx] = NULL;
> +	cpuc->n_events--;
> +	riscv_pmu->pmu->stop(event, PERF_EF_UPDATE);
> +	perf_event_update_userpage(event);
> +}
> +
> +/*
> + * Interrupt: a skeletion for reference.
> + */
> +
> +static DEFINE_MUTEX(pmc_reserve_mutex);
> +
> +irqreturn_t riscv_base_pmu_handle_irq(int irq_num, void *dev)
> +{
> +	return IRQ_NONE;
> +}
> +
> +static int reserve_pmc_hardware(void)
> +{
> +	int err = 0;
> +
> +	mutex_lock(&pmc_reserve_mutex);
> +	if (riscv_pmu->irq >=0 && riscv_pmu->handle_irq) {
> +		err = request_irq(riscv_pmu->irq, riscv_pmu->handle_irq,
> +			IRQF_PERCPU, "riscv-base-perf", NULL);
> +	}
> +	mutex_unlock(&pmc_reserve_mutex);
> +
> +	return err;
> +}
> +
> +void release_pmc_hardware(void)
> +{
> +	mutex_lock(&pmc_reserve_mutex);
> +	if (riscv_pmu->irq >=0) {
> +		free_irq(riscv_pmu->irq, NULL);
> +	}
> +	mutex_unlock(&pmc_reserve_mutex);
> +}
> +
> +/*
> + * Event Initialization/Finalization
> + */
> +
> +static atomic_t riscv_active_events = ATOMIC_INIT(0);
> +
> +static void riscv_event_destroy(struct perf_event *event)
> +{
> +	if (atomic_dec_return(&riscv_active_events) == 0)
> +		release_pmc_hardware();
> +}
> +
> +static int riscv_event_init(struct perf_event *event)
> +{
> +	struct perf_event_attr *attr = &event->attr;
> +	struct hw_perf_event *hwc = &event->hw;
> +	int err;
> +	int code;
> +
> +	if (atomic_inc_return(&riscv_active_events) == 1) {
> +		err = reserve_pmc_hardware();
> +
> +		if (err) {
> +			pr_warn("PMC hardware not available\n");
> +			atomic_dec(&riscv_active_events);
> +			return -EBUSY;
> +		}
> +	}
> +
> +	switch (event->attr.type) {
> +	case PERF_TYPE_HARDWARE:
> +		code = riscv_pmu->map_hw_event(attr->config);
> +		break;
> +	case PERF_TYPE_HW_CACHE:
> +		code = riscv_pmu->map_cache_event(attr->config);
> +		break;
> +	case PERF_TYPE_RAW:
> +		return -EOPNOTSUPP;
> +	default:
> +		return -ENOENT;
> +	}
> +
> +	event->destroy = riscv_event_destroy;
> +	if (code < 0) {
> +		event->destroy(event);
> +		return code;
> +	}
> +
> +	/*
> +	 * idx is set to -1 because the index of a general event should not be
> +	 * decided until binding to some counter in pmu->add().
> +	 *
> +	 * But since we don't have such support, later in pmu->add(), we just
> +	 * use hwc->config as the index instead.
> +	 */
> +	hwc->config = code;
> +	hwc->idx = -1;
> +
> +	return 0;
> +}
> +
> +/*
> + * Initialization
> + */
> +
> +static struct pmu min_pmu = {
> +	.name		= "riscv-base",
> +	.event_init	= riscv_event_init,
> +	.add		= riscv_pmu_add,
> +	.del		= riscv_pmu_del,
> +	.start		= riscv_pmu_start,
> +	.stop		= riscv_pmu_stop,
> +	.read		= riscv_pmu_read,
> +};
> +
> +static const struct riscv_pmu riscv_base_pmu = {
> +	.pmu = &min_pmu,
> +	.max_events = ARRAY_SIZE(riscv_hw_event_map),
> +	.map_hw_event = riscv_map_hw_event,
> +	.hw_events = riscv_hw_event_map,
> +	.map_cache_event = riscv_map_cache_event,
> +	.cache_events = &riscv_cache_event_map,
> +	.counter_width = 63,
> +	.num_counters = RISCV_BASE_COUNTERS + 0,
> +	.handle_irq = &riscv_base_pmu_handle_irq,
> +
> +	/* This means this PMU has no IRQ. */
> +	.irq = -1,
> +};
> +
> +static const struct of_device_id riscv_pmu_of_ids[] = {
> +	{.compatible = "riscv,base-pmu",	.data = &riscv_base_pmu},
> +	{ /* sentinel value */ }
> +};
> +
> +int __init init_hw_perf_events(void)
> +{
> +	struct device_node *node = of_find_node_by_type(NULL, "pmu");
> +	const struct of_device_id *of_id;
> +
> +	if (node && (of_id = of_match_node(riscv_pmu_of_ids, node)))
> +		riscv_pmu = of_id->data;
> +	else
> +		riscv_pmu = &riscv_base_pmu;
> +
> +	perf_pmu_register(riscv_pmu->pmu, "cpu", PERF_TYPE_RAW);
> +	return 0;
> +}
> +arch_initcall(init_hw_perf_events);
>

Some checkpatch errors:

ERROR: spaces required around that '>=' (ctx:WxV)
#517: FILE: arch/riscv/kernel/perf_event.c:356:
+	if (riscv_pmu->irq >=0 && riscv_pmu->handle_irq) {
 	                   ^

ERROR: spaces required around that '>=' (ctx:WxV)
#529: FILE: arch/riscv/kernel/perf_event.c:368:
+	if (riscv_pmu->irq >=0) {
 	                   ^

WARNING: braces {} are not necessary for single statement blocks
#529: FILE: arch/riscv/kernel/perf_event.c:368:
+	if (riscv_pmu->irq >=0) {
+		free_irq(riscv_pmu->irq, NULL);
+	}

WARNING: DT compatible string "riscv,base-pmu" appears un-documented -- check ./Documentation/devicetree/bindings/
#626: FILE: arch/riscv/kernel/perf_event.c:465:
+	{.compatible = "riscv,base-pmu", .data = &riscv_base_pmu},

ERROR: trailing whitespace
#634: FILE: arch/riscv/kernel/perf_event.c:473:
+^I$

ERROR: do not use assignment in if condition
#635: FILE: arch/riscv/kernel/perf_event.c:474:
+	if (node && (of_id = of_match_node(riscv_pmu_of_ids, node)))

total: 4 errors, 3 warnings, 595 lines checked

Regards,
Atish