2018-03-26 08:01:48

by Alan Kao

[permalink] [raw]
Subject: [PATCH 0/2] perf: riscv: Preliminary Perf Event Support on RISC-V

This implements the baseline PMU for RISC-V platforms.

To ease the future PMU portings, a guide is also written, containing
perf concepts, arch porting practices and some hints.

Alan Kao (2):
perf: riscv: preliminary RISC-V support
perf: riscv: Add Document for Future Porting Guide

Documentation/riscv/pmu.txt | 250 +++++++++++++++++++
arch/riscv/Kconfig | 12 +
arch/riscv/include/asm/perf_event.h | 76 +++++-
arch/riscv/kernel/Makefile | 1 +
arch/riscv/kernel/perf_event.c | 469 ++++++++++++++++++++++++++++++++++++
5 files changed, 804 insertions(+), 4 deletions(-)
create mode 100644 Documentation/riscv/pmu.txt
create mode 100644 arch/riscv/kernel/perf_event.c

--
2.16.2



2018-03-26 07:59:58

by Alan Kao

[permalink] [raw]
Subject: [PATCH 1/2] perf: riscv: preliminary RISC-V support

This patch provide a basic PMU, riscv_base_pmu, which supports two
general hardware event, instructions and cycles. Furthermore, this
PMU serves as a reference implementation to ease the portings in
the future.

riscv_base_pmu should be able to run on any RISC-V machine that
conforms to the Priv-Spec. Note that the latest qemu model hasn't
fully support a proper behavior of Priv-Spec 1.10 yet, but work
around should be easy with very small fixes. Please check
https://github.com/riscv/riscv-qemu/pull/115 for future updates.

Cc: Nick Hu <[email protected]>
Cc: Greentime Hu <[email protected]>
Signed-off-by: Alan Kao <[email protected]>
---
arch/riscv/Kconfig | 12 +
arch/riscv/include/asm/perf_event.h | 76 +++++-
arch/riscv/kernel/Makefile | 1 +
arch/riscv/kernel/perf_event.c | 469 ++++++++++++++++++++++++++++++++++++
4 files changed, 554 insertions(+), 4 deletions(-)
create mode 100644 arch/riscv/kernel/perf_event.c

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 310b9a5d6737..dd4aecfb5265 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -195,6 +195,18 @@ config RISCV_ISA_C
config RISCV_ISA_A
def_bool y

+menu "PMU type"
+ depends on PERF_EVENTS
+
+config RISCV_BASE_PMU
+ bool "Base Performance Monitoring Unit"
+ def_bool y
+ help
+ A base PMU that serves as a reference implementation and has limited
+ feature of perf.
+
+endmenu
+
endmenu

menu "Kernel type"
diff --git a/arch/riscv/include/asm/perf_event.h b/arch/riscv/include/asm/perf_event.h
index e13d2ff29e83..98e2efb02d25 100644
--- a/arch/riscv/include/asm/perf_event.h
+++ b/arch/riscv/include/asm/perf_event.h
@@ -1,13 +1,81 @@
+/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2018 SiFive
+ * Copyright (C) 2018 Andes Technology Corporation
*
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public Licence
- * as published by the Free Software Foundation; either version
- * 2 of the Licence, or (at your option) any later version.
*/

#ifndef _ASM_RISCV_PERF_EVENT_H
#define _ASM_RISCV_PERF_EVENT_H

+#include <linux/perf_event.h>
+#include <linux/ptrace.h>
+
+#define RISCV_BASE_COUNTERS 2
+
+/*
+ * The RISCV_MAX_COUNTERS parameter should be specified.
+ */
+
+#ifdef CONFIG_RISCV_BASE_PMU
+#define RISCV_MAX_COUNTERS 2
+#endif
+
+#ifndef RISCV_MAX_COUNTERS
+#error "Please provide a valid RISCV_MAX_COUNTERS for the PMU."
+#endif
+
+/*
+ * These are the indexes of bits in counteren register *minus* 1,
+ * except for cycle. It would be coherent if it can directly mapped
+ * to counteren bit definition, but there is a *time* register at
+ * counteren[1]. Per-cpu structure is scarce resource here.
+ *
+ * According to the spec, an implementation can support counter up to
+ * mhpmcounter31, but many high-end processors has at most 6 general
+ * PMCs, we give the definition to MHPMCOUNTER8 here.
+ */
+#define RISCV_PMU_CYCLE 0
+#define RISCV_PMU_INSTRET 1
+#define RISCV_PMU_MHPMCOUNTER3 2
+#define RISCV_PMU_MHPMCOUNTER4 3
+#define RISCV_PMU_MHPMCOUNTER5 4
+#define RISCV_PMU_MHPMCOUNTER6 5
+#define RISCV_PMU_MHPMCOUNTER7 6
+#define RISCV_PMU_MHPMCOUNTER8 7
+
+#define RISCV_OP_UNSUPP (-EOPNOTSUPP)
+
+struct cpu_hw_events {
+ /* # currently enabled events*/
+ int n_events;
+ /* currently enabled events */
+ struct perf_event *events[RISCV_MAX_COUNTERS];
+ /* vendor-defined PMU data */
+ void *platform;
+};
+
+struct riscv_pmu {
+ struct pmu *pmu;
+
+ /* generic hw/cache events table */
+ const int *hw_events;
+ const int (*cache_events)[PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX];
+ /* method used to map hw/cache events */
+ int (*map_hw_event)(u64 config);
+ int (*map_cache_event)(u64 config);
+
+ /* max generic hw events in map */
+ int max_events;
+ /* number total counters, 2(base) + x(general) */
+ int num_counters;
+ /* the width of the counter */
+ int counter_width;
+
+ /* vendor-defined PMU features */
+ void *platform;
+};
+
#endif /* _ASM_RISCV_PERF_EVENT_H */
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 196f62ffc428..849c38d9105f 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -36,5 +36,6 @@ obj-$(CONFIG_SMP) += smp.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_FUNCTION_TRACER) += mcount.o
obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
+obj-$(CONFIG_PERF_EVENTS) += perf_event.o

clean:
diff --git a/arch/riscv/kernel/perf_event.c b/arch/riscv/kernel/perf_event.c
new file mode 100644
index 000000000000..b78cb486683b
--- /dev/null
+++ b/arch/riscv/kernel/perf_event.c
@@ -0,0 +1,469 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2008 Thomas Gleixner <[email protected]>
+ * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
+ * Copyright (C) 2009 Jaswinder Singh Rajput
+ * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
+ * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra
+ * Copyright (C) 2009 Intel Corporation, <[email protected]>
+ * Copyright (C) 2009 Google, Inc., Stephane Eranian
+ * Copyright 2014 Tilera Corporation. All Rights Reserved.
+ * Copyright (C) 2018 Andes Technology Corporation
+ *
+ * Perf_events support for RISC-V platforms.
+ *
+ * Since the spec. (as of now, Priv-Spec 1.10) does not provide enough
+ * functionality for perf event to fully work, this file provides
+ * the very basic framework only.
+ *
+ * For platform portings, please check Documentations/riscv/pmu.txt.
+ *
+ * The Copyright line includes x86 and tile ones.
+ */
+
+#include <linux/kprobes.h>
+#include <linux/kernel.h>
+#include <linux/kdebug.h>
+#include <linux/mutex.h>
+#include <linux/bitmap.h>
+#include <linux/irq.h>
+#include <linux/interrupt.h>
+#include <linux/perf_event.h>
+#include <linux/atomic.h>
+#include <asm/perf_event.h>
+
+static const struct riscv_pmu *riscv_pmu __read_mostly;
+static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events);
+
+/*
+ * Hardware & cache maps and their methods
+ */
+
+static const int riscv_hw_event_map[] = {
+ [PERF_COUNT_HW_CPU_CYCLES] = RISCV_PMU_CYCLE,
+ [PERF_COUNT_HW_INSTRUCTIONS] = RISCV_PMU_INSTRET,
+ [PERF_COUNT_HW_CACHE_REFERENCES] = RISCV_OP_UNSUPP,
+ [PERF_COUNT_HW_CACHE_MISSES] = RISCV_OP_UNSUPP,
+ [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = RISCV_OP_UNSUPP,
+ [PERF_COUNT_HW_BRANCH_MISSES] = RISCV_OP_UNSUPP,
+ [PERF_COUNT_HW_BUS_CYCLES] = RISCV_OP_UNSUPP,
+};
+
+#define C(x) PERF_COUNT_HW_CACHE_##x
+static const int riscv_cache_event_map[PERF_COUNT_HW_CACHE_MAX]
+[PERF_COUNT_HW_CACHE_OP_MAX]
+[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
+ [C(L1D)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ },
+ [C(L1I)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ },
+ [C(LL)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ },
+ [C(DTLB)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ },
+ [C(ITLB)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ },
+ [C(BPU)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ },
+};
+
+static int riscv_map_hw_event(u64 config)
+{
+ if (config >= riscv_pmu->max_events)
+ return -EINVAL;
+
+ return riscv_pmu->hw_events[config];
+}
+
+int riscv_map_cache_decode(u64 config, unsigned int *type,
+ unsigned int *op, unsigned int *result)
+{
+ return -ENOENT;
+}
+
+static int riscv_map_cache_event(u64 config)
+{
+ unsigned int type, op, result;
+ int err = -ENOENT;
+ int code;
+
+ err = riscv_map_cache_decode(config, &type, &op, &result);
+ if (!riscv_pmu->cache_events || err)
+ return err;
+
+ if (type >= PERF_COUNT_HW_CACHE_MAX ||
+ op >= PERF_COUNT_HW_CACHE_OP_MAX ||
+ result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
+ return -EINVAL;
+
+ code = (*riscv_pmu->cache_events)[type][op][result];
+ if (code == RISCV_OP_UNSUPP)
+ return -EINVAL;
+
+ return code;
+}
+
+/*
+ * Low-level functions: reading/writing counters
+ */
+
+static inline u64 read_counter(int idx)
+{
+ u64 val = 0;
+
+ switch (idx) {
+ case RISCV_PMU_CYCLE:
+ val = csr_read(cycle);
+ break;
+ case RISCV_PMU_INSTRET:
+ val = csr_read(instret);
+ break;
+ default:
+ WARN_ON_ONCE(idx < 0 || idx > RISCV_MAX_COUNTERS);
+ return -EINVAL;
+ }
+
+ return val;
+}
+
+static inline void write_counter(int idx, u64 value)
+{
+ /* currently not supported */
+}
+
+/*
+ * pmu->read: read and update the counter
+ *
+ * Other architectures' implementation often have a xxx_perf_event_update
+ * routine, which can return counter values when called in the IRQ, but
+ * return void when being called by the pmu->read method.
+ */
+static void riscv_pmu_read(struct perf_event *event)
+{
+ struct hw_perf_event *hwc = &event->hw;
+ u64 prev_raw_count, new_raw_count;
+ u64 oldval;
+ int idx = hwc->idx;
+ u64 delta;
+
+ do {
+ prev_raw_count = local64_read(&hwc->prev_count);
+ new_raw_count = read_counter(idx);
+
+ oldval = local64_cmpxchg(&hwc->prev_count, prev_raw_count,
+ new_raw_count);
+ } while (oldval != prev_raw_count);
+
+ /*
+ * delta is the value to update the counter we maintain in the kernel.
+ */
+ delta = (new_raw_count - prev_raw_count) &
+ ((1ULL << riscv_pmu->counter_width) - 1);
+ local64_add(delta, &event->count);
+ /*
+ * Something like local64_sub(delta, &hwc->period_left) here is
+ * needed if there is an interrupt for perf.
+ */
+}
+
+/*
+ * State transition functions:
+ *
+ * stop()/start() & add()/del()
+ */
+
+/*
+ * pmu->stop: stop the counter
+ */
+static void riscv_pmu_stop(struct perf_event *event, int flags)
+{
+ struct hw_perf_event *hwc = &event->hw;
+
+ WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
+ hwc->state |= PERF_HES_STOPPED;
+
+ if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
+ riscv_pmu_read(event);
+ hwc->state |= PERF_HES_UPTODATE;
+ }
+}
+
+/*
+ * pmu->start: start the event.
+ */
+static void riscv_pmu_start(struct perf_event *event, int flags)
+{
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+ return;
+
+ if (flags & PERF_EF_RELOAD) {
+ WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE));
+
+ /*
+ * Set the counter to the period to the next interrupt here,
+ * if you have any.
+ */
+ }
+
+ hwc->state = 0;
+ perf_event_update_userpage(event);
+
+ /*
+ * Since we cannot write to counters, this serves as an initialization
+ * to the delta-mechanism in pmu->read(); otherwise, the delta would be
+ * wrong when pmu->read is called for the first time.
+ */
+ if (local64_read(&hwc->prev_count) == 0)
+ local64_set(&hwc->prev_count, read_counter(hwc->idx));
+}
+
+/*
+ * pmu->add: add the event to PMU.
+ */
+static int riscv_pmu_add(struct perf_event *event, int flags)
+{
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (cpuc->n_events == riscv_pmu->num_counters)
+ return -ENOSPC;
+
+ /*
+ * We don't have general conunters, so no binding-event-to-counter
+ * process here.
+ *
+ * Indexing using hwc->config generally not works, since config may
+ * contain extra information, but here the only info we have in
+ * hwc->config is the event index.
+ */
+ hwc->idx = hwc->config;
+ cpuc->events[hwc->idx] = event;
+ cpuc->n_events++;
+
+ hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+
+ if (flags & PERF_EF_START)
+ riscv_pmu_start(event, PERF_EF_RELOAD);
+
+ return 0;
+}
+
+/*
+ * pmu->del: delete the event from PMU.
+ */
+static void riscv_pmu_del(struct perf_event *event, int flags)
+{
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ struct hw_perf_event *hwc = &event->hw;
+
+ cpuc->events[hwc->idx] = NULL;
+ cpuc->n_events--;
+ riscv_pmu_stop(event, PERF_EF_UPDATE);
+ perf_event_update_userpage(event);
+}
+
+/*
+ * Interrupt
+ */
+
+static DEFINE_MUTEX(pmc_reserve_mutex);
+typedef void (*perf_irq_t)(void *riscv_perf_irq);
+perf_irq_t perf_irq;
+
+void riscv_pmu_handle_irq(void *riscv_perf_irq)
+{
+}
+
+static perf_irq_t reserve_pmc_hardware(void)
+{
+ perf_irq_t old;
+
+ mutex_lock(&pmc_reserve_mutex);
+ old = perf_irq;
+ perf_irq = &riscv_pmu_handle_irq;
+ mutex_unlock(&pmc_reserve_mutex);
+
+ return old;
+}
+
+void release_pmc_hardware(void)
+{
+ mutex_lock(&pmc_reserve_mutex);
+ perf_irq = NULL;
+ mutex_unlock(&pmc_reserve_mutex);
+}
+
+/*
+ * Event Initialization
+ */
+
+static atomic_t riscv_active_events;
+
+static void riscv_event_destroy(struct perf_event *event)
+{
+ if (atomic_dec_return(&riscv_active_events) == 0)
+ release_pmc_hardware();
+}
+
+static int riscv_event_init(struct perf_event *event)
+{
+ struct perf_event_attr *attr = &event->attr;
+ struct hw_perf_event *hwc = &event->hw;
+ perf_irq_t old_irq_handler = NULL;
+ int code;
+
+ if (atomic_inc_return(&riscv_active_events) == 1)
+ old_irq_handler = reserve_pmc_hardware();
+
+ if (old_irq_handler) {
+ pr_warn("PMC hardware busy (reserved by oprofile)\n");
+ atomic_dec(&riscv_active_events);
+ return -EBUSY;
+ }
+
+ switch (event->attr.type) {
+ case PERF_TYPE_HARDWARE:
+ code = riscv_pmu->map_hw_event(attr->config);
+ break;
+ case PERF_TYPE_HW_CACHE:
+ code = riscv_pmu->map_cache_event(attr->config);
+ break;
+ case PERF_TYPE_RAW:
+ return -EOPNOTSUPP;
+ default:
+ return -ENOENT;
+ }
+
+ event->destroy = riscv_event_destroy;
+ if (code < 0) {
+ event->destroy(event);
+ return code;
+ }
+
+ /*
+ * idx is set to -1 because the index of a general event should not be
+ * decided until binding to some counter in pmu->add().
+ *
+ * But since we don't have such support, later in pmu->add(), we just
+ * use hwc->config as the index instead.
+ */
+ hwc->config = code;
+ hwc->idx = -1;
+
+ return 0;
+}
+
+/*
+ * Initialization
+ */
+
+static struct pmu min_pmu = {
+ .name = "riscv-base",
+ .event_init = riscv_event_init,
+ .add = riscv_pmu_add,
+ .del = riscv_pmu_del,
+ .start = riscv_pmu_start,
+ .stop = riscv_pmu_stop,
+ .read = riscv_pmu_read,
+};
+
+static const struct riscv_pmu riscv_base_pmu = {
+ .pmu = &min_pmu,
+ .max_events = ARRAY_SIZE(riscv_hw_event_map),
+ .map_hw_event = riscv_map_hw_event,
+ .hw_events = riscv_hw_event_map,
+ .map_cache_event = riscv_map_cache_event,
+ .cache_events = &riscv_cache_event_map,
+ .counter_width = 63,
+ .num_counters = RISCV_BASE_COUNTERS + 0,
+};
+
+struct pmu * __weak __init riscv_init_platform_pmu(void)
+{
+ riscv_pmu = &riscv_base_pmu;
+ return riscv_pmu->pmu;
+}
+
+int __init init_hw_perf_events(void)
+{
+ struct pmu *pmu = riscv_init_platform_pmu();
+
+ perf_irq = NULL;
+ perf_pmu_register(pmu, "cpu", PERF_TYPE_RAW);
+ return 0;
+}
+arch_initcall(init_hw_perf_events);
--
2.16.2


2018-03-26 08:01:15

by Alan Kao

[permalink] [raw]
Subject: [PATCH 2/2] perf: riscv: Add Document for Future Porting Guide

Cc: Nick Hu <[email protected]>
Cc: Greentime Hu <[email protected]>
Signed-off-by: Alan Kao <[email protected]>
---
Documentation/riscv/pmu.txt | 250 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 250 insertions(+)
create mode 100644 Documentation/riscv/pmu.txt

diff --git a/Documentation/riscv/pmu.txt b/Documentation/riscv/pmu.txt
new file mode 100644
index 000000000000..496f00476560
--- /dev/null
+++ b/Documentation/riscv/pmu.txt
@@ -0,0 +1,250 @@
+Supporting PMUs on RISC-V platforms
+==========================================
+Alan Kao <[email protected]>, Mar 2018
+
+Introduction
+------------
+
+As of this writing, perf_event-related features mentioned in The RISC-V ISA
+Privileged Version 1.10 are as follows:
+(please check the manual for more details)
+
+* [m|s]counteren
+* mcycle[h], cycle[h]
+* minstret[h], instret[h]
+* mhpeventx, mhpcounterx[h]
+
+With such function set only, porting perf would require a lot of work, due to
+the lack of the following general architectural performance monitoring features:
+
+* Enabling/Disabling counters
+ Counters are just free-running all the time in our case.
+* Interrupt caused by counter overflow
+ No such design in the spec.
+* Interrupt indicator
+ It is not possible to have many interrupt ports for all counters, so an
+ interrupt indicator is required for software to tell which counter has
+ just overflowed.
+* Writing to counters
+ There will be an SBI to support this since the kernel cannot modify the
+ counters [1]. Alternatively, some vendor considers to implement
+ hardware-extension for M-S-U model machines to write counters directly.
+
+This document aims to provide developers a quick guide on supporting their
+PMUs in the kernel. The following sections briefly explain perf' mechanism
+and todos.
+
+You may check previous discussions here [1][2]. Also, it might be helpful
+to check the appendix for related kernel structures.
+
+
+1. Initialization
+-----------------
+
+*riscv_pmu* is a global pointer of type *struct riscv_pmu*, which contains
+various methods according to perf's internal convention and PMU-specific
+parameters. One should declare such instance to represent the PMU. By default,
+*riscv_pmu* points to a constant structure *riscv_base_pmu*, which has very
+basic support to a baseline QEMU model.
+
+Then he/she can either assign the instance's pointer to *riscv_pmu* so that
+the minimal and already-implemented logic can be leveraged, or invent his/her
+own *riscv_init_platform_pmu* implementation.
+
+In other words, existing sources of *riscv_base_pmu* merely provide a
+reference implementation. Developers can flexibly decide how many parts they
+can leverage, and in the most extreme case, they can customize every function
+according to their needs.
+
+
+2. Event Initialization
+-----------------------
+
+When a user launches a perf command to monitor some events, it is first
+interpreted by the userspace perf tool into multiple *perf_event_open*
+system calls, and then each of them calls to the body of *event_init*
+member function that was assigned in the previous step. In *riscv_base_pmu*'s
+case, it is *riscv_event_init*.
+
+The main purpose of this function is to translate the event provided by user
+into bitmap, so that HW-related control registers or counters can directly be
+manipulated. The translation is based on the mappings and methods provided in
+*riscv_pmu*.
+
+Note that some features can be done in this stage as well:
+
+(1) interrupt setting, which is stated in the next section;
+(2) privilege level setting (user space only, kernel space only, both);
+(3) destructor setting. Normally it is sufficient to apply *riscv_destroy_event*;
+(4) tweaks for non-sampling events, which will be utilized by functions such as
+*perf_adjust_period*, usually something like the follows:
+
+if (!is_sampling_event(event)) {
+ hwc->sample_period = x86_pmu.max_period;
+ hwc->last_period = hwc->sample_period;
+ local64_set(&hwc->period_left, hwc->sample_period);
+}
+
+In the case of *riscv_base_pmu*, only (3) is provided for now.
+
+
+3. Interrupt
+------------
+
+3.1. Interrupt Initialization
+
+This often occurs at the beginning of the *event_init* method. In common
+practice, this should be a code segment like
+
+int x86_reserve_hardware(void)
+{
+ int err = 0;
+
+ if (!atomic_inc_not_zero(&pmc_refcount)) {
+ mutex_lock(&pmc_reserve_mutex);
+ if (atomic_read(&pmc_refcount) == 0) {
+ if (!reserve_pmc_hardware())
+ err = -EBUSY;
+ else
+ reserve_ds_buffers();
+ }
+ if (!err)
+ atomic_inc(&pmc_refcount);
+ mutex_unlock(&pmc_reserve_mutex);
+ }
+
+ return err;
+}
+
+And the magic is in *reserve_pmc_hardware*, which usually does atomic
+operations to make implemented IRQ accessible from some global function pointer.
+*release_pmc_hardware* serves the opposite purpose, and it is used in event
+destructors mentioned in previous section.
+
+(Note: From the implementations in all the architectures, the *reserve/release*
+pair are always IRQ settings, so the *pmc_hardware* seems somehow misleading.
+It does NOT deal with the binding between an event and a physical counter,
+which will be introduced in the next section.)
+
+3.2. IRQ Structure
+
+Basically, a IRQ runs the following pseudo code:
+
+for each hardware counter that triggered this overflow
+
+ get the event of this counter
+
+ // following two steps are defined as *read()*,
+ // check the section Reading/Writing Counters for details.
+ count the delta value since previous interrupt
+ update the event->count (# event occurs) by adding delta, and
+ event->hw.period_left by subtracting delta
+
+ if the event overflows
+ sample data
+ set the counter appropriately for the next overflow
+
+ if the event overflows again
+ too frequently, throttle this event
+ fi
+ fi
+
+end for
+
+However as of this writing, none of the RISC-V implementations have designed an
+interrupt for perf, so the details are to be completed in the future.
+
+4. Reading/Writing Counters
+---------------------------
+
+They seem symmetric but perf treats them quite differently. For reading, there
+is a *read* interface in *struct pmu*, but it serves more than just reading.
+According to the context, the *read* function not only read the content of the
+counter (event->count), but also update the left period to the next interrupt
+(event->hw.period_left).
+
+But the core of perf does not need direct write to counters. Writing counters
+hides behind the abstraction of 1) *pmu->start*, literally start counting so one
+has to set the counter to a good value for the next interrupt; 2) inside the IRQ
+it should set the counter with the same reason.
+
+Reading is not a problem in RISC-V but writing would need some effort, since
+counters are not allowed to be written by S-mode.
+
+
+5. add()/del()/start()/stop()
+-----------------------------
+
+Basic idea: add()/del() adds/deletes events to/from a PMU, and start()/stop()
+starts/stop the counter of some event in the PMU. All of them take the same
+arguments: *struct perf_event *event* and *int flag*.
+
+Consider perf as a state machine, then you will find that these functions serve
+as the state transition process between those states.
+Three states (event->hw.state) are defined:
+
+* PERF_HES_STOPPED: the counter is stopped
+* PERF_HES_UPTODATE: the event->count is up-to-date
+* PERF_HES_ARCH: arch-dependent usage ... we don't need this for now
+
+A normal flow of these state transitions are as follows:
+
+* A user launches a perf event, resulting in calling to *event_init*.
+* When being context-switched in, *add* is called by the perf core, with flag
+ PERF_EF_START, which mean that the event should be started after it is added.
+ In this stage, an general event is binded to a physical counter, if any.
+ The state changes to PERF_HES_STOPPED and PERF_HES_UPTODATE, because it is now
+ stopped, and the (software) event count does not need updating.
+** *start* is then called, and the counter is enabled.
+ With flag PERF_EF_RELOAD, it write the counter to an appropriate value (check
+ previous section for detail).
+ No writing is made if the flag does not contain PERF_EF_RELOAD.
+ The state now is reset to none, because it is neither stopped nor update
+ (the counting already starts)
+* When being context-switched out, *del* is called. It then checkout all the
+ events in the PMU and call *stop* to update their counts.
+** *stop* is called by *del*
+ and the perf core with flag PERF_EF_UPDATE, and it often shares the same
+ subroutine as *read* with the same logic.
+ The state changes to PERF_HES_STOPPED and PERF_HES_UPTODATE, again.
+
+** Life cycles of these two pairs: *add* and *del* are called repeatedly as
+ tasks switch in-and-out; *start* and *stop* is also called when the perf core
+ needs a quick stop-and-start, for instance, when the interrupt period is being
+ adjusted.
+
+Current implementation is sufficient for now and can be easily extend to
+features in the future.
+
+A. Related Structures
+---------------------
+
+* struct pmu: include/linux/perf_events.h
+* struct riscv_pmu: arch/riscv/include/asm/perf_events.h
+
+ Both structures are designed to be read-only.
+
+ *struct pmu* defines some function pointer interfaces, and most of them take
+*struct perf_event* as a main argument, dealing with perf events according to
+perf's internal state machine (check kernel/events/core.c for details).
+
+ *struct riscv_pmu* defines PMU-specific parameters. The naming follows the
+convention of all other architectures.
+
+* struct perf_event: include/linux/perf_events.h
+* struct hw_perf_event
+
+ The generic structure that represents perf events, and the hardware-related
+details.
+
+* struct riscv_hw_events: arch/riscv/include/asm/perf_events.h
+
+ The structure that holds the status of events, has two fixed members:
+the number of events and the array of the events.
+
+References
+----------
+
+[1] https://github.com/riscv/riscv-linux/pull/124
+[2] https://groups.google.com/a/groups.riscv.org/forum/#!topic/sw-dev/f19TmCNP6yA
+
--
2.16.2


2018-03-29 00:42:21

by Palmer Dabbelt

[permalink] [raw]
Subject: Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

On Wed, 28 Mar 2018 15:31:30 PDT (-0700), [email protected] wrote:
> Also, core IDs and socket IDs are wrong in perf report:

It looks like for this we need the stuff in topology.h. I've cobbled something
together quickly for Alex to use for now to see if that fixes other problems,
I'll submit a sane version when we can.

2018-03-29 02:31:45

by Alan Kao

[permalink] [raw]
Subject: Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

Hi Alex,

I'm appreciated for your reply and tests.

On Wed, Mar 28, 2018 at 03:58:41PM -0700, Alex Solomatnikov wrote:
> Did you test this code?

I did test this patch on QEMU's virt model with multi-hart, which is the only
RISC-V machine I have for now. But as I mentioned in
https://github.com/riscv/riscv-qemu/pull/115 , the hardware counter support
in QEMU is not fully conformed to the 1.10 Priv-Spec, so I had to slightly
tweak the code to make reading work.

Specifically, the read to cycle and instret in QEMU looks like this:
...
case CSR_INSTRET:
case CSR_CYCLE:
// if (ctr_ok) {
return cpu_get_host_ticks();
// }
break;
...
and the two lines of comment was the tweak.

On such environment, I did not get anything unexpected. No matter which of them
is requested, QEMU returns the host's tick.

>
> I got funny numbers when I tried to run it on HiFive Unleashed:
>
> perf stat mem-latency
> ...
>
> Performance counter stats for 'mem-latency':
>
> 157.907000 task-clock (msec) # 0.940 CPUs utilized
>
> 1 context-switches # 0.006 K/sec
>
> 1 cpu-migrations # 0.006 K/sec
>
> 4102 page-faults # 0.026 M/sec
>
> 157923752 cycles # 1.000 GHz
>
> 9223372034948899840 instructions # 58403957087.78 insn
> per cycle
> <not supported> branches
>
> <not supported> branch-misses
>
>
> 0.168046000 seconds time elapsed
>
>
> Tracing read_counter(), I see this:
>
> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.058809] CPU 3:
> read_counter idx=0 val=2528358954912
> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.063339] CPU 3:
> read_counter idx=1 val=53892244920
> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.118160] CPU 3:
> read_counter idx=0 val=2528418303035
> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.122694] CPU 3:
> read_counter idx=1 val=53906699665
> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.216736] CPU 1:
> read_counter idx=0 val=2528516878664
> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.221270] CPU 1:
> read_counter idx=1 val=51986369142
>
> It looks like the counter values from different cores are subtracted and
> wraparound occurs.
>

Thanks for the hint. It makes sense. 9223372034948899840 is 7fffffff8e66a400,
which should be a wraparound with the mask I set (63-bit) in the code.

I will try this direction. Ideally, we can solve it by explicitly syncing the
hwc->prev_count when a cpu migration event happens.

>
> Also, core IDs and socket IDs are wrong in perf report:
>

As Palmer has replied to this, I have no comment here.

> perf report --header -I
> Error:
> The perf.data file has no samples!
> # ========
> # captured on: Thu Jan 1 02:52:07 1970
> # hostname : buildroot
> # os release : 4.15.0-00045-g0d7c030-dirty
> # perf version : 4.15.0
> # arch : riscv64
> # nrcpus online : 4
> # nrcpus avail : 5
> # total memory : 8188340 kB
> # cmdline : /usr/bin/perf record -F 1000 lat_mem_rd -P 1 -W 1 -N 1 -t 10
> # event : name = cycles:ppp, , size = 112, { sample_period, sample_freq } =
> 1000, sample_type = IP|TID|TIME|PERIOD, disabled = 1, inherit = 1, mmap =
> 1, comm = 1, freq = 1, enable_on_exec = 1, task = 1, precise_ip = 3,
> sample_id_all = 1, exclude_guest = 1, mmap2 = 1, comm_exec = 1
> # sibling cores : 1
> # sibling cores : 2
> # sibling cores : 3
> # sibling cores : 4
> # sibling threads : 1
> # sibling threads : 2
> # sibling threads : 3
> # sibling threads : 4
> # CPU 0: Core ID -1, Socket ID -1
> # CPU 1: Core ID 0, Socket ID -1
> # CPU 2: Core ID 0, Socket ID -1
> # CPU 3: Core ID 0, Socket ID -1
> # CPU 4: Core ID 0, Socket ID -1
> # pmu mappings: cpu = 4, software = 1
> # CPU cache info:
> # L1 Instruction 32K [1]
> # L1 Data 32K [1]
> # L1 Instruction 32K [2]
> # L1 Data 32K [2]
> # L1 Instruction 32K [3]
> # L1 Data 32K [3]
> # missing features: TRACING_DATA BUILD_ID CPUDESC CPUID NUMA_TOPOLOGY
> BRANCH_STACK GROUP_DESC AUXTRACE STAT
> # ========
>
>
> Alex
>

Many thanks,
Alan

> On Mon, Mar 26, 2018 at 12:57 AM, Alan Kao <[email protected]> wrote:
>
> > This patch provide a basic PMU, riscv_base_pmu, which supports two
> > general hardware event, instructions and cycles. Furthermore, this
> > PMU serves as a reference implementation to ease the portings in
> > the future.
> >
> > riscv_base_pmu should be able to run on any RISC-V machine that
> > conforms to the Priv-Spec. Note that the latest qemu model hasn't
> > fully support a proper behavior of Priv-Spec 1.10 yet, but work
> > around should be easy with very small fixes. Please check
> > https://github.com/riscv/riscv-qemu/pull/115 for future updates.
> >
> > Cc: Nick Hu <[email protected]>
> > Cc: Greentime Hu <[email protected]>
> > Signed-off-by: Alan Kao <[email protected]>
> > ---
> > arch/riscv/Kconfig | 12 +
> > arch/riscv/include/asm/perf_event.h | 76 +++++-
> > arch/riscv/kernel/Makefile | 1 +
> > arch/riscv/kernel/perf_event.c | 469 ++++++++++++++++++++++++++++++
> > ++++++
> > 4 files changed, 554 insertions(+), 4 deletions(-)
> > create mode 100644 arch/riscv/kernel/perf_event.c
> >
> > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> > index 310b9a5d6737..dd4aecfb5265 100644
> > --- a/arch/riscv/Kconfig
> > +++ b/arch/riscv/Kconfig
> > @@ -195,6 +195,18 @@ config RISCV_ISA_C
> > config RISCV_ISA_A
> > def_bool y
> >
> > +menu "PMU type"
> > + depends on PERF_EVENTS
> > +
> > +config RISCV_BASE_PMU
> > + bool "Base Performance Monitoring Unit"
> > + def_bool y
> > + help
> > + A base PMU that serves as a reference implementation and has
> > limited
> > + feature of perf.
> > +
> > +endmenu
> > +
> > endmenu
> >
> > menu "Kernel type"
> > diff --git a/arch/riscv/include/asm/perf_event.h
> > b/arch/riscv/include/asm/perf_event.h
> > index e13d2ff29e83..98e2efb02d25 100644
> > --- a/arch/riscv/include/asm/perf_event.h
> > +++ b/arch/riscv/include/asm/perf_event.h
> > @@ -1,13 +1,81 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > /*
> > * Copyright (C) 2018 SiFive
> > + * Copyright (C) 2018 Andes Technology Corporation
> > *
> > - * This program is free software; you can redistribute it and/or
> > - * modify it under the terms of the GNU General Public Licence
> > - * as published by the Free Software Foundation; either version
> > - * 2 of the Licence, or (at your option) any later version.
> > */
> >
> > #ifndef _ASM_RISCV_PERF_EVENT_H
> > #define _ASM_RISCV_PERF_EVENT_H
> >
> > +#include <linux/perf_event.h>
> > +#include <linux/ptrace.h>
> > +
> > +#define RISCV_BASE_COUNTERS 2
> > +
> > +/*
> > + * The RISCV_MAX_COUNTERS parameter should be specified.
> > + */
> > +
> > +#ifdef CONFIG_RISCV_BASE_PMU
> > +#define RISCV_MAX_COUNTERS 2
> > +#endif
> > +
> > +#ifndef RISCV_MAX_COUNTERS
> > +#error "Please provide a valid RISCV_MAX_COUNTERS for the PMU."
> > +#endif
> > +
> > +/*
> > + * These are the indexes of bits in counteren register *minus* 1,
> > + * except for cycle. It would be coherent if it can directly mapped
> > + * to counteren bit definition, but there is a *time* register at
> > + * counteren[1]. Per-cpu structure is scarce resource here.
> > + *
> > + * According to the spec, an implementation can support counter up to
> > + * mhpmcounter31, but many high-end processors has at most 6 general
> > + * PMCs, we give the definition to MHPMCOUNTER8 here.
> > + */
> > +#define RISCV_PMU_CYCLE 0
> > +#define RISCV_PMU_INSTRET 1
> > +#define RISCV_PMU_MHPMCOUNTER3 2
> > +#define RISCV_PMU_MHPMCOUNTER4 3
> > +#define RISCV_PMU_MHPMCOUNTER5 4
> > +#define RISCV_PMU_MHPMCOUNTER6 5
> > +#define RISCV_PMU_MHPMCOUNTER7 6
> > +#define RISCV_PMU_MHPMCOUNTER8 7
> > +
> > +#define RISCV_OP_UNSUPP (-EOPNOTSUPP)
> > +
> > +struct cpu_hw_events {
> > + /* # currently enabled events*/
> > + int n_events;
> > + /* currently enabled events */
> > + struct perf_event *events[RISCV_MAX_COUNTERS];
> > + /* vendor-defined PMU data */
> > + void *platform;
> > +};
> > +
> > +struct riscv_pmu {
> > + struct pmu *pmu;
> > +
> > + /* generic hw/cache events table */
> > + const int *hw_events;
> > + const int (*cache_events)[PERF_COUNT_HW_CACHE_MAX]
> > + [PERF_COUNT_HW_CACHE_OP_MAX]
> > + [PERF_COUNT_HW_CACHE_RESULT_MAX];
> > + /* method used to map hw/cache events */
> > + int (*map_hw_event)(u64 config);
> > + int (*map_cache_event)(u64 config);
> > +
> > + /* max generic hw events in map */
> > + int max_events;
> > + /* number total counters, 2(base) + x(general) */
> > + int num_counters;
> > + /* the width of the counter */
> > + int counter_width;
> > +
> > + /* vendor-defined PMU features */
> > + void *platform;
> > +};
> > +
> > #endif /* _ASM_RISCV_PERF_EVENT_H */
> > diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
> > index 196f62ffc428..849c38d9105f 100644
> > --- a/arch/riscv/kernel/Makefile
> > +++ b/arch/riscv/kernel/Makefile
> > @@ -36,5 +36,6 @@ obj-$(CONFIG_SMP) += smp.o
> > obj-$(CONFIG_MODULES) += module.o
> > obj-$(CONFIG_FUNCTION_TRACER) += mcount.o
> > obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
> > +obj-$(CONFIG_PERF_EVENTS) += perf_event.o
> >
> > clean:
> > diff --git a/arch/riscv/kernel/perf_event.c b/arch/riscv/kernel/perf_
> > event.c
> > new file mode 100644
> > index 000000000000..b78cb486683b
> > --- /dev/null
> > +++ b/arch/riscv/kernel/perf_event.c
> > @@ -0,0 +1,469 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright (C) 2008 Thomas Gleixner <[email protected]>
> > + * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
> > + * Copyright (C) 2009 Jaswinder Singh Rajput
> > + * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
> > + * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra
> > + * Copyright (C) 2009 Intel Corporation, <[email protected]>
> > + * Copyright (C) 2009 Google, Inc., Stephane Eranian
> > + * Copyright 2014 Tilera Corporation. All Rights Reserved.
> > + * Copyright (C) 2018 Andes Technology Corporation
> > + *
> > + * Perf_events support for RISC-V platforms.
> > + *
> > + * Since the spec. (as of now, Priv-Spec 1.10) does not provide enough
> > + * functionality for perf event to fully work, this file provides
> > + * the very basic framework only.
> > + *
> > + * For platform portings, please check Documentations/riscv/pmu.txt.
> > + *
> > + * The Copyright line includes x86 and tile ones.
> > + */
> > +
> > +#include <linux/kprobes.h>
> > +#include <linux/kernel.h>
> > +#include <linux/kdebug.h>
> > +#include <linux/mutex.h>
> > +#include <linux/bitmap.h>
> > +#include <linux/irq.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/perf_event.h>
> > +#include <linux/atomic.h>
> > +#include <asm/perf_event.h>
> > +
> > +static const struct riscv_pmu *riscv_pmu __read_mostly;
> > +static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events);
> > +
> > +/*
> > + * Hardware & cache maps and their methods
> > + */
> > +
> > +static const int riscv_hw_event_map[] = {
> > + [PERF_COUNT_HW_CPU_CYCLES] = RISCV_PMU_CYCLE,
> > + [PERF_COUNT_HW_INSTRUCTIONS] = RISCV_PMU_INSTRET,
> > + [PERF_COUNT_HW_CACHE_REFERENCES] = RISCV_OP_UNSUPP,
> > + [PERF_COUNT_HW_CACHE_MISSES] = RISCV_OP_UNSUPP,
> > + [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = RISCV_OP_UNSUPP,
> > + [PERF_COUNT_HW_BRANCH_MISSES] = RISCV_OP_UNSUPP,
> > + [PERF_COUNT_HW_BUS_CYCLES] = RISCV_OP_UNSUPP,
> > +};
> > +
> > +#define C(x) PERF_COUNT_HW_CACHE_##x
> > +static const int riscv_cache_event_map[PERF_COUNT_HW_CACHE_MAX]
> > +[PERF_COUNT_HW_CACHE_OP_MAX]
> > +[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
> > + [C(L1D)] = {
> > + [C(OP_READ)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + [C(OP_WRITE)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + [C(OP_PREFETCH)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + },
> > + [C(L1I)] = {
> > + [C(OP_READ)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + [C(OP_WRITE)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + [C(OP_PREFETCH)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + },
> > + [C(LL)] = {
> > + [C(OP_READ)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + [C(OP_WRITE)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + [C(OP_PREFETCH)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + },
> > + [C(DTLB)] = {
> > + [C(OP_READ)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + [C(OP_WRITE)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + [C(OP_PREFETCH)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + },
> > + [C(ITLB)] = {
> > + [C(OP_READ)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + [C(OP_WRITE)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + [C(OP_PREFETCH)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + },
> > + [C(BPU)] = {
> > + [C(OP_READ)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + [C(OP_WRITE)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + [C(OP_PREFETCH)] = {
> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> > + },
> > + },
> > +};
> > +
> > +static int riscv_map_hw_event(u64 config)
> > +{
> > + if (config >= riscv_pmu->max_events)
> > + return -EINVAL;
> > +
> > + return riscv_pmu->hw_events[config];
> > +}
> > +
> > +int riscv_map_cache_decode(u64 config, unsigned int *type,
> > + unsigned int *op, unsigned int *result)
> > +{
> > + return -ENOENT;
> > +}
> > +
> > +static int riscv_map_cache_event(u64 config)
> > +{
> > + unsigned int type, op, result;
> > + int err = -ENOENT;
> > + int code;
> > +
> > + err = riscv_map_cache_decode(config, &type, &op, &result);
> > + if (!riscv_pmu->cache_events || err)
> > + return err;
> > +
> > + if (type >= PERF_COUNT_HW_CACHE_MAX ||
> > + op >= PERF_COUNT_HW_CACHE_OP_MAX ||
> > + result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
> > + return -EINVAL;
> > +
> > + code = (*riscv_pmu->cache_events)[type][op][result];
> > + if (code == RISCV_OP_UNSUPP)
> > + return -EINVAL;
> > +
> > + return code;
> > +}
> > +
> > +/*
> > + * Low-level functions: reading/writing counters
> > + */
> > +
> > +static inline u64 read_counter(int idx)
> > +{
> > + u64 val = 0;
> > +
> > + switch (idx) {
> > + case RISCV_PMU_CYCLE:
> > + val = csr_read(cycle);
> > + break;
> > + case RISCV_PMU_INSTRET:
> > + val = csr_read(instret);
> > + break;
> > + default:
> > + WARN_ON_ONCE(idx < 0 || idx > RISCV_MAX_COUNTERS);
> > + return -EINVAL;
> > + }
> > +
> > + return val;
> > +}
> > +
> > +static inline void write_counter(int idx, u64 value)
> > +{
> > + /* currently not supported */
> > +}
> > +
> > +/*
> > + * pmu->read: read and update the counter
> > + *
> > + * Other architectures' implementation often have a xxx_perf_event_update
> > + * routine, which can return counter values when called in the IRQ, but
> > + * return void when being called by the pmu->read method.
> > + */
> > +static void riscv_pmu_read(struct perf_event *event)
> > +{
> > + struct hw_perf_event *hwc = &event->hw;
> > + u64 prev_raw_count, new_raw_count;
> > + u64 oldval;
> > + int idx = hwc->idx;
> > + u64 delta;
> > +
> > + do {
> > + prev_raw_count = local64_read(&hwc->prev_count);
> > + new_raw_count = read_counter(idx);
> > +
> > + oldval = local64_cmpxchg(&hwc->prev_count, prev_raw_count,
> > + new_raw_count);
> > + } while (oldval != prev_raw_count);
> > +
> > + /*
> > + * delta is the value to update the counter we maintain in the
> > kernel.
> > + */
> > + delta = (new_raw_count - prev_raw_count) &
> > + ((1ULL << riscv_pmu->counter_width) - 1);
> > + local64_add(delta, &event->count);
> > + /*
> > + * Something like local64_sub(delta, &hwc->period_left) here is
> > + * needed if there is an interrupt for perf.
> > + */
> > +}
> > +
> > +/*
> > + * State transition functions:
> > + *
> > + * stop()/start() & add()/del()
> > + */
> > +
> > +/*
> > + * pmu->stop: stop the counter
> > + */
> > +static void riscv_pmu_stop(struct perf_event *event, int flags)
> > +{
> > + struct hw_perf_event *hwc = &event->hw;
> > +
> > + WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
> > + hwc->state |= PERF_HES_STOPPED;
> > +
> > + if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE))
> > {
> > + riscv_pmu_read(event);
> > + hwc->state |= PERF_HES_UPTODATE;
> > + }
> > +}
> > +
> > +/*
> > + * pmu->start: start the event.
> > + */
> > +static void riscv_pmu_start(struct perf_event *event, int flags)
> > +{
> > + struct hw_perf_event *hwc = &event->hw;
> > +
> > + if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
> > + return;
> > +
> > + if (flags & PERF_EF_RELOAD) {
> > + WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE));
> > +
> > + /*
> > + * Set the counter to the period to the next interrupt
> > here,
> > + * if you have any.
> > + */
> > + }
> > +
> > + hwc->state = 0;
> > + perf_event_update_userpage(event);
> > +
> > + /*
> > + * Since we cannot write to counters, this serves as an
> > initialization
> > + * to the delta-mechanism in pmu->read(); otherwise, the delta
> > would be
> > + * wrong when pmu->read is called for the first time.
> > + */
> > + if (local64_read(&hwc->prev_count) == 0)
> > + local64_set(&hwc->prev_count, read_counter(hwc->idx));
> > +}
> > +
> > +/*
> > + * pmu->add: add the event to PMU.
> > + */
> > +static int riscv_pmu_add(struct perf_event *event, int flags)
> > +{
> > + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> > + struct hw_perf_event *hwc = &event->hw;
> > +
> > + if (cpuc->n_events == riscv_pmu->num_counters)
> > + return -ENOSPC;
> > +
> > + /*
> > + * We don't have general conunters, so no binding-event-to-counter
> > + * process here.
> > + *
> > + * Indexing using hwc->config generally not works, since config may
> > + * contain extra information, but here the only info we have in
> > + * hwc->config is the event index.
> > + */
> > + hwc->idx = hwc->config;
> > + cpuc->events[hwc->idx] = event;
> > + cpuc->n_events++;
> > +
> > + hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
> > +
> > + if (flags & PERF_EF_START)
> > + riscv_pmu_start(event, PERF_EF_RELOAD);
> > +
> > + return 0;
> > +}
> > +
> > +/*
> > + * pmu->del: delete the event from PMU.
> > + */
> > +static void riscv_pmu_del(struct perf_event *event, int flags)
> > +{
> > + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> > + struct hw_perf_event *hwc = &event->hw;
> > +
> > + cpuc->events[hwc->idx] = NULL;
> > + cpuc->n_events--;
> > + riscv_pmu_stop(event, PERF_EF_UPDATE);
> > + perf_event_update_userpage(event);
> > +}
> > +
> > +/*
> > + * Interrupt
> > + */
> > +
> > +static DEFINE_MUTEX(pmc_reserve_mutex);
> > +typedef void (*perf_irq_t)(void *riscv_perf_irq);
> > +perf_irq_t perf_irq;
> > +
> > +void riscv_pmu_handle_irq(void *riscv_perf_irq)
> > +{
> > +}
> > +
> > +static perf_irq_t reserve_pmc_hardware(void)
> > +{
> > + perf_irq_t old;
> > +
> > + mutex_lock(&pmc_reserve_mutex);
> > + old = perf_irq;
> > + perf_irq = &riscv_pmu_handle_irq;
> > + mutex_unlock(&pmc_reserve_mutex);
> > +
> > + return old;
> > +}
> > +
> > +void release_pmc_hardware(void)
> > +{
> > + mutex_lock(&pmc_reserve_mutex);
> > + perf_irq = NULL;
> > + mutex_unlock(&pmc_reserve_mutex);
> > +}
> > +
> > +/*
> > + * Event Initialization
> > + */
> > +
> > +static atomic_t riscv_active_events;
> > +
> > +static void riscv_event_destroy(struct perf_event *event)
> > +{
> > + if (atomic_dec_return(&riscv_active_events) == 0)
> > + release_pmc_hardware();
> > +}
> > +
> > +static int riscv_event_init(struct perf_event *event)
> > +{
> > + struct perf_event_attr *attr = &event->attr;
> > + struct hw_perf_event *hwc = &event->hw;
> > + perf_irq_t old_irq_handler = NULL;
> > + int code;
> > +
> > + if (atomic_inc_return(&riscv_active_events) == 1)
> > + old_irq_handler = reserve_pmc_hardware();
> > +
> > + if (old_irq_handler) {
> > + pr_warn("PMC hardware busy (reserved by oprofile)\n");
> > + atomic_dec(&riscv_active_events);
> > + return -EBUSY;
> > + }
> > +
> > + switch (event->attr.type) {
> > + case PERF_TYPE_HARDWARE:
> > + code = riscv_pmu->map_hw_event(attr->config);
> > + break;
> > + case PERF_TYPE_HW_CACHE:
> > + code = riscv_pmu->map_cache_event(attr->config);
> > + break;
> > + case PERF_TYPE_RAW:
> > + return -EOPNOTSUPP;
> > + default:
> > + return -ENOENT;
> > + }
> > +
> > + event->destroy = riscv_event_destroy;
> > + if (code < 0) {
> > + event->destroy(event);
> > + return code;
> > + }
> > +
> > + /*
> > + * idx is set to -1 because the index of a general event should
> > not be
> > + * decided until binding to some counter in pmu->add().
> > + *
> > + * But since we don't have such support, later in pmu->add(), we
> > just
> > + * use hwc->config as the index instead.
> > + */
> > + hwc->config = code;
> > + hwc->idx = -1;
> > +
> > + return 0;
> > +}
> > +
> > +/*
> > + * Initialization
> > + */
> > +
> > +static struct pmu min_pmu = {
> > + .name = "riscv-base",
> > + .event_init = riscv_event_init,
> > + .add = riscv_pmu_add,
> > + .del = riscv_pmu_del,
> > + .start = riscv_pmu_start,
> > + .stop = riscv_pmu_stop,
> > + .read = riscv_pmu_read,
> > +};
> > +
> > +static const struct riscv_pmu riscv_base_pmu = {
> > + .pmu = &min_pmu,
> > + .max_events = ARRAY_SIZE(riscv_hw_event_map),
> > + .map_hw_event = riscv_map_hw_event,
> > + .hw_events = riscv_hw_event_map,
> > + .map_cache_event = riscv_map_cache_event,
> > + .cache_events = &riscv_cache_event_map,
> > + .counter_width = 63,
> > + .num_counters = RISCV_BASE_COUNTERS + 0,
> > +};
> > +
> > +struct pmu * __weak __init riscv_init_platform_pmu(void)
> > +{
> > + riscv_pmu = &riscv_base_pmu;
> > + return riscv_pmu->pmu;
> > +}
> > +
> > +int __init init_hw_perf_events(void)
> > +{
> > + struct pmu *pmu = riscv_init_platform_pmu();
> > +
> > + perf_irq = NULL;
> > + perf_pmu_register(pmu, "cpu", PERF_TYPE_RAW);
> > + return 0;
> > +}
> > +arch_initcall(init_hw_perf_events);
> > --
> > 2.16.2
> >
> >

2018-03-31 22:49:32

by Alex Solomatnikov

[permalink] [raw]
Subject: Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

You can add a skew between cores in qemu, something like this:

case CSR_INSTRET:
core_id()*return cpu_get_host_ticks()/10;
break;
case CSR_CYCLE:
return cpu_get_host_ticks();
break;

Alex

On Wed, Mar 28, 2018 at 7:30 PM, Alan Kao <[email protected]> wrote:
> Hi Alex,
>
> I'm appreciated for your reply and tests.
>
> On Wed, Mar 28, 2018 at 03:58:41PM -0700, Alex Solomatnikov wrote:
>> Did you test this code?
>
> I did test this patch on QEMU's virt model with multi-hart, which is the only
> RISC-V machine I have for now. But as I mentioned in
> https://github.com/riscv/riscv-qemu/pull/115 , the hardware counter support
> in QEMU is not fully conformed to the 1.10 Priv-Spec, so I had to slightly
> tweak the code to make reading work.
>
> Specifically, the read to cycle and instret in QEMU looks like this:
> ...
> case CSR_INSTRET:
> case CSR_CYCLE:
> // if (ctr_ok) {
> return cpu_get_host_ticks();
> // }
> break;
> ...
> and the two lines of comment was the tweak.
>
> On such environment, I did not get anything unexpected. No matter which of them
> is requested, QEMU returns the host's tick.
>
>>
>> I got funny numbers when I tried to run it on HiFive Unleashed:
>>
>> perf stat mem-latency
>> ...
>>
>> Performance counter stats for 'mem-latency':
>>
>> 157.907000 task-clock (msec) # 0.940 CPUs utilized
>>
>> 1 context-switches # 0.006 K/sec
>>
>> 1 cpu-migrations # 0.006 K/sec
>>
>> 4102 page-faults # 0.026 M/sec
>>
>> 157923752 cycles # 1.000 GHz
>>
>> 9223372034948899840 instructions # 58403957087.78 insn
>> per cycle
>> <not supported> branches
>>
>> <not supported> branch-misses
>>
>>
>> 0.168046000 seconds time elapsed
>>
>>
>> Tracing read_counter(), I see this:
>>
>> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.058809] CPU 3:
>> read_counter idx=0 val=2528358954912
>> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.063339] CPU 3:
>> read_counter idx=1 val=53892244920
>> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.118160] CPU 3:
>> read_counter idx=0 val=2528418303035
>> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.122694] CPU 3:
>> read_counter idx=1 val=53906699665
>> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.216736] CPU 1:
>> read_counter idx=0 val=2528516878664
>> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.221270] CPU 1:
>> read_counter idx=1 val=51986369142
>>
>> It looks like the counter values from different cores are subtracted and
>> wraparound occurs.
>>
>
> Thanks for the hint. It makes sense. 9223372034948899840 is 7fffffff8e66a400,
> which should be a wraparound with the mask I set (63-bit) in the code.
>
> I will try this direction. Ideally, we can solve it by explicitly syncing the
> hwc->prev_count when a cpu migration event happens.
>
>>
>> Also, core IDs and socket IDs are wrong in perf report:
>>
>
> As Palmer has replied to this, I have no comment here.
>
>> perf report --header -I
>> Error:
>> The perf.data file has no samples!
>> # ========
>> # captured on: Thu Jan 1 02:52:07 1970
>> # hostname : buildroot
>> # os release : 4.15.0-00045-g0d7c030-dirty
>> # perf version : 4.15.0
>> # arch : riscv64
>> # nrcpus online : 4
>> # nrcpus avail : 5
>> # total memory : 8188340 kB
>> # cmdline : /usr/bin/perf record -F 1000 lat_mem_rd -P 1 -W 1 -N 1 -t 10
>> # event : name = cycles:ppp, , size = 112, { sample_period, sample_freq } =
>> 1000, sample_type = IP|TID|TIME|PERIOD, disabled = 1, inherit = 1, mmap =
>> 1, comm = 1, freq = 1, enable_on_exec = 1, task = 1, precise_ip = 3,
>> sample_id_all = 1, exclude_guest = 1, mmap2 = 1, comm_exec = 1
>> # sibling cores : 1
>> # sibling cores : 2
>> # sibling cores : 3
>> # sibling cores : 4
>> # sibling threads : 1
>> # sibling threads : 2
>> # sibling threads : 3
>> # sibling threads : 4
>> # CPU 0: Core ID -1, Socket ID -1
>> # CPU 1: Core ID 0, Socket ID -1
>> # CPU 2: Core ID 0, Socket ID -1
>> # CPU 3: Core ID 0, Socket ID -1
>> # CPU 4: Core ID 0, Socket ID -1
>> # pmu mappings: cpu = 4, software = 1
>> # CPU cache info:
>> # L1 Instruction 32K [1]
>> # L1 Data 32K [1]
>> # L1 Instruction 32K [2]
>> # L1 Data 32K [2]
>> # L1 Instruction 32K [3]
>> # L1 Data 32K [3]
>> # missing features: TRACING_DATA BUILD_ID CPUDESC CPUID NUMA_TOPOLOGY
>> BRANCH_STACK GROUP_DESC AUXTRACE STAT
>> # ========
>>
>>
>> Alex
>>
>
> Many thanks,
> Alan
>
>> On Mon, Mar 26, 2018 at 12:57 AM, Alan Kao <[email protected]> wrote:
>>
>> > This patch provide a basic PMU, riscv_base_pmu, which supports two
>> > general hardware event, instructions and cycles. Furthermore, this
>> > PMU serves as a reference implementation to ease the portings in
>> > the future.
>> >
>> > riscv_base_pmu should be able to run on any RISC-V machine that
>> > conforms to the Priv-Spec. Note that the latest qemu model hasn't
>> > fully support a proper behavior of Priv-Spec 1.10 yet, but work
>> > around should be easy with very small fixes. Please check
>> > https://github.com/riscv/riscv-qemu/pull/115 for future updates.
>> >
>> > Cc: Nick Hu <[email protected]>
>> > Cc: Greentime Hu <[email protected]>
>> > Signed-off-by: Alan Kao <[email protected]>
>> > ---
>> > arch/riscv/Kconfig | 12 +
>> > arch/riscv/include/asm/perf_event.h | 76 +++++-
>> > arch/riscv/kernel/Makefile | 1 +
>> > arch/riscv/kernel/perf_event.c | 469 ++++++++++++++++++++++++++++++
>> > ++++++
>> > 4 files changed, 554 insertions(+), 4 deletions(-)
>> > create mode 100644 arch/riscv/kernel/perf_event.c
>> >
>> > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>> > index 310b9a5d6737..dd4aecfb5265 100644
>> > --- a/arch/riscv/Kconfig
>> > +++ b/arch/riscv/Kconfig
>> > @@ -195,6 +195,18 @@ config RISCV_ISA_C
>> > config RISCV_ISA_A
>> > def_bool y
>> >
>> > +menu "PMU type"
>> > + depends on PERF_EVENTS
>> > +
>> > +config RISCV_BASE_PMU
>> > + bool "Base Performance Monitoring Unit"
>> > + def_bool y
>> > + help
>> > + A base PMU that serves as a reference implementation and has
>> > limited
>> > + feature of perf.
>> > +
>> > +endmenu
>> > +
>> > endmenu
>> >
>> > menu "Kernel type"
>> > diff --git a/arch/riscv/include/asm/perf_event.h
>> > b/arch/riscv/include/asm/perf_event.h
>> > index e13d2ff29e83..98e2efb02d25 100644
>> > --- a/arch/riscv/include/asm/perf_event.h
>> > +++ b/arch/riscv/include/asm/perf_event.h
>> > @@ -1,13 +1,81 @@
>> > +/* SPDX-License-Identifier: GPL-2.0 */
>> > /*
>> > * Copyright (C) 2018 SiFive
>> > + * Copyright (C) 2018 Andes Technology Corporation
>> > *
>> > - * This program is free software; you can redistribute it and/or
>> > - * modify it under the terms of the GNU General Public Licence
>> > - * as published by the Free Software Foundation; either version
>> > - * 2 of the Licence, or (at your option) any later version.
>> > */
>> >
>> > #ifndef _ASM_RISCV_PERF_EVENT_H
>> > #define _ASM_RISCV_PERF_EVENT_H
>> >
>> > +#include <linux/perf_event.h>
>> > +#include <linux/ptrace.h>
>> > +
>> > +#define RISCV_BASE_COUNTERS 2
>> > +
>> > +/*
>> > + * The RISCV_MAX_COUNTERS parameter should be specified.
>> > + */
>> > +
>> > +#ifdef CONFIG_RISCV_BASE_PMU
>> > +#define RISCV_MAX_COUNTERS 2
>> > +#endif
>> > +
>> > +#ifndef RISCV_MAX_COUNTERS
>> > +#error "Please provide a valid RISCV_MAX_COUNTERS for the PMU."
>> > +#endif
>> > +
>> > +/*
>> > + * These are the indexes of bits in counteren register *minus* 1,
>> > + * except for cycle. It would be coherent if it can directly mapped
>> > + * to counteren bit definition, but there is a *time* register at
>> > + * counteren[1]. Per-cpu structure is scarce resource here.
>> > + *
>> > + * According to the spec, an implementation can support counter up to
>> > + * mhpmcounter31, but many high-end processors has at most 6 general
>> > + * PMCs, we give the definition to MHPMCOUNTER8 here.
>> > + */
>> > +#define RISCV_PMU_CYCLE 0
>> > +#define RISCV_PMU_INSTRET 1
>> > +#define RISCV_PMU_MHPMCOUNTER3 2
>> > +#define RISCV_PMU_MHPMCOUNTER4 3
>> > +#define RISCV_PMU_MHPMCOUNTER5 4
>> > +#define RISCV_PMU_MHPMCOUNTER6 5
>> > +#define RISCV_PMU_MHPMCOUNTER7 6
>> > +#define RISCV_PMU_MHPMCOUNTER8 7
>> > +
>> > +#define RISCV_OP_UNSUPP (-EOPNOTSUPP)
>> > +
>> > +struct cpu_hw_events {
>> > + /* # currently enabled events*/
>> > + int n_events;
>> > + /* currently enabled events */
>> > + struct perf_event *events[RISCV_MAX_COUNTERS];
>> > + /* vendor-defined PMU data */
>> > + void *platform;
>> > +};
>> > +
>> > +struct riscv_pmu {
>> > + struct pmu *pmu;
>> > +
>> > + /* generic hw/cache events table */
>> > + const int *hw_events;
>> > + const int (*cache_events)[PERF_COUNT_HW_CACHE_MAX]
>> > + [PERF_COUNT_HW_CACHE_OP_MAX]
>> > + [PERF_COUNT_HW_CACHE_RESULT_MAX];
>> > + /* method used to map hw/cache events */
>> > + int (*map_hw_event)(u64 config);
>> > + int (*map_cache_event)(u64 config);
>> > +
>> > + /* max generic hw events in map */
>> > + int max_events;
>> > + /* number total counters, 2(base) + x(general) */
>> > + int num_counters;
>> > + /* the width of the counter */
>> > + int counter_width;
>> > +
>> > + /* vendor-defined PMU features */
>> > + void *platform;
>> > +};
>> > +
>> > #endif /* _ASM_RISCV_PERF_EVENT_H */
>> > diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
>> > index 196f62ffc428..849c38d9105f 100644
>> > --- a/arch/riscv/kernel/Makefile
>> > +++ b/arch/riscv/kernel/Makefile
>> > @@ -36,5 +36,6 @@ obj-$(CONFIG_SMP) += smp.o
>> > obj-$(CONFIG_MODULES) += module.o
>> > obj-$(CONFIG_FUNCTION_TRACER) += mcount.o
>> > obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
>> > +obj-$(CONFIG_PERF_EVENTS) += perf_event.o
>> >
>> > clean:
>> > diff --git a/arch/riscv/kernel/perf_event.c b/arch/riscv/kernel/perf_
>> > event.c
>> > new file mode 100644
>> > index 000000000000..b78cb486683b
>> > --- /dev/null
>> > +++ b/arch/riscv/kernel/perf_event.c
>> > @@ -0,0 +1,469 @@
>> > +/* SPDX-License-Identifier: GPL-2.0 */
>> > +/*
>> > + * Copyright (C) 2008 Thomas Gleixner <[email protected]>
>> > + * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
>> > + * Copyright (C) 2009 Jaswinder Singh Rajput
>> > + * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
>> > + * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra
>> > + * Copyright (C) 2009 Intel Corporation, <[email protected]>
>> > + * Copyright (C) 2009 Google, Inc., Stephane Eranian
>> > + * Copyright 2014 Tilera Corporation. All Rights Reserved.
>> > + * Copyright (C) 2018 Andes Technology Corporation
>> > + *
>> > + * Perf_events support for RISC-V platforms.
>> > + *
>> > + * Since the spec. (as of now, Priv-Spec 1.10) does not provide enough
>> > + * functionality for perf event to fully work, this file provides
>> > + * the very basic framework only.
>> > + *
>> > + * For platform portings, please check Documentations/riscv/pmu.txt.
>> > + *
>> > + * The Copyright line includes x86 and tile ones.
>> > + */
>> > +
>> > +#include <linux/kprobes.h>
>> > +#include <linux/kernel.h>
>> > +#include <linux/kdebug.h>
>> > +#include <linux/mutex.h>
>> > +#include <linux/bitmap.h>
>> > +#include <linux/irq.h>
>> > +#include <linux/interrupt.h>
>> > +#include <linux/perf_event.h>
>> > +#include <linux/atomic.h>
>> > +#include <asm/perf_event.h>
>> > +
>> > +static const struct riscv_pmu *riscv_pmu __read_mostly;
>> > +static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events);
>> > +
>> > +/*
>> > + * Hardware & cache maps and their methods
>> > + */
>> > +
>> > +static const int riscv_hw_event_map[] = {
>> > + [PERF_COUNT_HW_CPU_CYCLES] = RISCV_PMU_CYCLE,
>> > + [PERF_COUNT_HW_INSTRUCTIONS] = RISCV_PMU_INSTRET,
>> > + [PERF_COUNT_HW_CACHE_REFERENCES] = RISCV_OP_UNSUPP,
>> > + [PERF_COUNT_HW_CACHE_MISSES] = RISCV_OP_UNSUPP,
>> > + [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = RISCV_OP_UNSUPP,
>> > + [PERF_COUNT_HW_BRANCH_MISSES] = RISCV_OP_UNSUPP,
>> > + [PERF_COUNT_HW_BUS_CYCLES] = RISCV_OP_UNSUPP,
>> > +};
>> > +
>> > +#define C(x) PERF_COUNT_HW_CACHE_##x
>> > +static const int riscv_cache_event_map[PERF_COUNT_HW_CACHE_MAX]
>> > +[PERF_COUNT_HW_CACHE_OP_MAX]
>> > +[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
>> > + [C(L1D)] = {
>> > + [C(OP_READ)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + [C(OP_WRITE)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + [C(OP_PREFETCH)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + },
>> > + [C(L1I)] = {
>> > + [C(OP_READ)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + [C(OP_WRITE)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + [C(OP_PREFETCH)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + },
>> > + [C(LL)] = {
>> > + [C(OP_READ)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + [C(OP_WRITE)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + [C(OP_PREFETCH)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + },
>> > + [C(DTLB)] = {
>> > + [C(OP_READ)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + [C(OP_WRITE)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + [C(OP_PREFETCH)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + },
>> > + [C(ITLB)] = {
>> > + [C(OP_READ)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + [C(OP_WRITE)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + [C(OP_PREFETCH)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + },
>> > + [C(BPU)] = {
>> > + [C(OP_READ)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + [C(OP_WRITE)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + [C(OP_PREFETCH)] = {
>> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
>> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
>> > + },
>> > + },
>> > +};
>> > +
>> > +static int riscv_map_hw_event(u64 config)
>> > +{
>> > + if (config >= riscv_pmu->max_events)
>> > + return -EINVAL;
>> > +
>> > + return riscv_pmu->hw_events[config];
>> > +}
>> > +
>> > +int riscv_map_cache_decode(u64 config, unsigned int *type,
>> > + unsigned int *op, unsigned int *result)
>> > +{
>> > + return -ENOENT;
>> > +}
>> > +
>> > +static int riscv_map_cache_event(u64 config)
>> > +{
>> > + unsigned int type, op, result;
>> > + int err = -ENOENT;
>> > + int code;
>> > +
>> > + err = riscv_map_cache_decode(config, &type, &op, &result);
>> > + if (!riscv_pmu->cache_events || err)
>> > + return err;
>> > +
>> > + if (type >= PERF_COUNT_HW_CACHE_MAX ||
>> > + op >= PERF_COUNT_HW_CACHE_OP_MAX ||
>> > + result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
>> > + return -EINVAL;
>> > +
>> > + code = (*riscv_pmu->cache_events)[type][op][result];
>> > + if (code == RISCV_OP_UNSUPP)
>> > + return -EINVAL;
>> > +
>> > + return code;
>> > +}
>> > +
>> > +/*
>> > + * Low-level functions: reading/writing counters
>> > + */
>> > +
>> > +static inline u64 read_counter(int idx)
>> > +{
>> > + u64 val = 0;
>> > +
>> > + switch (idx) {
>> > + case RISCV_PMU_CYCLE:
>> > + val = csr_read(cycle);
>> > + break;
>> > + case RISCV_PMU_INSTRET:
>> > + val = csr_read(instret);
>> > + break;
>> > + default:
>> > + WARN_ON_ONCE(idx < 0 || idx > RISCV_MAX_COUNTERS);
>> > + return -EINVAL;
>> > + }
>> > +
>> > + return val;
>> > +}
>> > +
>> > +static inline void write_counter(int idx, u64 value)
>> > +{
>> > + /* currently not supported */
>> > +}
>> > +
>> > +/*
>> > + * pmu->read: read and update the counter
>> > + *
>> > + * Other architectures' implementation often have a xxx_perf_event_update
>> > + * routine, which can return counter values when called in the IRQ, but
>> > + * return void when being called by the pmu->read method.
>> > + */
>> > +static void riscv_pmu_read(struct perf_event *event)
>> > +{
>> > + struct hw_perf_event *hwc = &event->hw;
>> > + u64 prev_raw_count, new_raw_count;
>> > + u64 oldval;
>> > + int idx = hwc->idx;
>> > + u64 delta;
>> > +
>> > + do {
>> > + prev_raw_count = local64_read(&hwc->prev_count);
>> > + new_raw_count = read_counter(idx);
>> > +
>> > + oldval = local64_cmpxchg(&hwc->prev_count, prev_raw_count,
>> > + new_raw_count);
>> > + } while (oldval != prev_raw_count);
>> > +
>> > + /*
>> > + * delta is the value to update the counter we maintain in the
>> > kernel.
>> > + */
>> > + delta = (new_raw_count - prev_raw_count) &
>> > + ((1ULL << riscv_pmu->counter_width) - 1);
>> > + local64_add(delta, &event->count);
>> > + /*
>> > + * Something like local64_sub(delta, &hwc->period_left) here is
>> > + * needed if there is an interrupt for perf.
>> > + */
>> > +}
>> > +
>> > +/*
>> > + * State transition functions:
>> > + *
>> > + * stop()/start() & add()/del()
>> > + */
>> > +
>> > +/*
>> > + * pmu->stop: stop the counter
>> > + */
>> > +static void riscv_pmu_stop(struct perf_event *event, int flags)
>> > +{
>> > + struct hw_perf_event *hwc = &event->hw;
>> > +
>> > + WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
>> > + hwc->state |= PERF_HES_STOPPED;
>> > +
>> > + if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE))
>> > {
>> > + riscv_pmu_read(event);
>> > + hwc->state |= PERF_HES_UPTODATE;
>> > + }
>> > +}
>> > +
>> > +/*
>> > + * pmu->start: start the event.
>> > + */
>> > +static void riscv_pmu_start(struct perf_event *event, int flags)
>> > +{
>> > + struct hw_perf_event *hwc = &event->hw;
>> > +
>> > + if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
>> > + return;
>> > +
>> > + if (flags & PERF_EF_RELOAD) {
>> > + WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE));
>> > +
>> > + /*
>> > + * Set the counter to the period to the next interrupt
>> > here,
>> > + * if you have any.
>> > + */
>> > + }
>> > +
>> > + hwc->state = 0;
>> > + perf_event_update_userpage(event);
>> > +
>> > + /*
>> > + * Since we cannot write to counters, this serves as an
>> > initialization
>> > + * to the delta-mechanism in pmu->read(); otherwise, the delta
>> > would be
>> > + * wrong when pmu->read is called for the first time.
>> > + */
>> > + if (local64_read(&hwc->prev_count) == 0)
>> > + local64_set(&hwc->prev_count, read_counter(hwc->idx));
>> > +}
>> > +
>> > +/*
>> > + * pmu->add: add the event to PMU.
>> > + */
>> > +static int riscv_pmu_add(struct perf_event *event, int flags)
>> > +{
>> > + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>> > + struct hw_perf_event *hwc = &event->hw;
>> > +
>> > + if (cpuc->n_events == riscv_pmu->num_counters)
>> > + return -ENOSPC;
>> > +
>> > + /*
>> > + * We don't have general conunters, so no binding-event-to-counter
>> > + * process here.
>> > + *
>> > + * Indexing using hwc->config generally not works, since config may
>> > + * contain extra information, but here the only info we have in
>> > + * hwc->config is the event index.
>> > + */
>> > + hwc->idx = hwc->config;
>> > + cpuc->events[hwc->idx] = event;
>> > + cpuc->n_events++;
>> > +
>> > + hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
>> > +
>> > + if (flags & PERF_EF_START)
>> > + riscv_pmu_start(event, PERF_EF_RELOAD);
>> > +
>> > + return 0;
>> > +}
>> > +
>> > +/*
>> > + * pmu->del: delete the event from PMU.
>> > + */
>> > +static void riscv_pmu_del(struct perf_event *event, int flags)
>> > +{
>> > + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>> > + struct hw_perf_event *hwc = &event->hw;
>> > +
>> > + cpuc->events[hwc->idx] = NULL;
>> > + cpuc->n_events--;
>> > + riscv_pmu_stop(event, PERF_EF_UPDATE);
>> > + perf_event_update_userpage(event);
>> > +}
>> > +
>> > +/*
>> > + * Interrupt
>> > + */
>> > +
>> > +static DEFINE_MUTEX(pmc_reserve_mutex);
>> > +typedef void (*perf_irq_t)(void *riscv_perf_irq);
>> > +perf_irq_t perf_irq;
>> > +
>> > +void riscv_pmu_handle_irq(void *riscv_perf_irq)
>> > +{
>> > +}
>> > +
>> > +static perf_irq_t reserve_pmc_hardware(void)
>> > +{
>> > + perf_irq_t old;
>> > +
>> > + mutex_lock(&pmc_reserve_mutex);
>> > + old = perf_irq;
>> > + perf_irq = &riscv_pmu_handle_irq;
>> > + mutex_unlock(&pmc_reserve_mutex);
>> > +
>> > + return old;
>> > +}
>> > +
>> > +void release_pmc_hardware(void)
>> > +{
>> > + mutex_lock(&pmc_reserve_mutex);
>> > + perf_irq = NULL;
>> > + mutex_unlock(&pmc_reserve_mutex);
>> > +}
>> > +
>> > +/*
>> > + * Event Initialization
>> > + */
>> > +
>> > +static atomic_t riscv_active_events;
>> > +
>> > +static void riscv_event_destroy(struct perf_event *event)
>> > +{
>> > + if (atomic_dec_return(&riscv_active_events) == 0)
>> > + release_pmc_hardware();
>> > +}
>> > +
>> > +static int riscv_event_init(struct perf_event *event)
>> > +{
>> > + struct perf_event_attr *attr = &event->attr;
>> > + struct hw_perf_event *hwc = &event->hw;
>> > + perf_irq_t old_irq_handler = NULL;
>> > + int code;
>> > +
>> > + if (atomic_inc_return(&riscv_active_events) == 1)
>> > + old_irq_handler = reserve_pmc_hardware();
>> > +
>> > + if (old_irq_handler) {
>> > + pr_warn("PMC hardware busy (reserved by oprofile)\n");
>> > + atomic_dec(&riscv_active_events);
>> > + return -EBUSY;
>> > + }
>> > +
>> > + switch (event->attr.type) {
>> > + case PERF_TYPE_HARDWARE:
>> > + code = riscv_pmu->map_hw_event(attr->config);
>> > + break;
>> > + case PERF_TYPE_HW_CACHE:
>> > + code = riscv_pmu->map_cache_event(attr->config);
>> > + break;
>> > + case PERF_TYPE_RAW:
>> > + return -EOPNOTSUPP;
>> > + default:
>> > + return -ENOENT;
>> > + }
>> > +
>> > + event->destroy = riscv_event_destroy;
>> > + if (code < 0) {
>> > + event->destroy(event);
>> > + return code;
>> > + }
>> > +
>> > + /*
>> > + * idx is set to -1 because the index of a general event should
>> > not be
>> > + * decided until binding to some counter in pmu->add().
>> > + *
>> > + * But since we don't have such support, later in pmu->add(), we
>> > just
>> > + * use hwc->config as the index instead.
>> > + */
>> > + hwc->config = code;
>> > + hwc->idx = -1;
>> > +
>> > + return 0;
>> > +}
>> > +
>> > +/*
>> > + * Initialization
>> > + */
>> > +
>> > +static struct pmu min_pmu = {
>> > + .name = "riscv-base",
>> > + .event_init = riscv_event_init,
>> > + .add = riscv_pmu_add,
>> > + .del = riscv_pmu_del,
>> > + .start = riscv_pmu_start,
>> > + .stop = riscv_pmu_stop,
>> > + .read = riscv_pmu_read,
>> > +};
>> > +
>> > +static const struct riscv_pmu riscv_base_pmu = {
>> > + .pmu = &min_pmu,
>> > + .max_events = ARRAY_SIZE(riscv_hw_event_map),
>> > + .map_hw_event = riscv_map_hw_event,
>> > + .hw_events = riscv_hw_event_map,
>> > + .map_cache_event = riscv_map_cache_event,
>> > + .cache_events = &riscv_cache_event_map,
>> > + .counter_width = 63,
>> > + .num_counters = RISCV_BASE_COUNTERS + 0,
>> > +};
>> > +
>> > +struct pmu * __weak __init riscv_init_platform_pmu(void)
>> > +{
>> > + riscv_pmu = &riscv_base_pmu;
>> > + return riscv_pmu->pmu;
>> > +}
>> > +
>> > +int __init init_hw_perf_events(void)
>> > +{
>> > + struct pmu *pmu = riscv_init_platform_pmu();
>> > +
>> > + perf_irq = NULL;
>> > + perf_pmu_register(pmu, "cpu", PERF_TYPE_RAW);
>> > + return 0;
>> > +}
>> > +arch_initcall(init_hw_perf_events);
>> > --
>> > 2.16.2
>> >
>> >

2018-04-02 07:37:55

by Alan Kao

[permalink] [raw]
Subject: Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

On Sat, Mar 31, 2018 at 03:47:10PM -0700, Alex Solomatnikov wrote:

The original guess was that maybe, an counter value on a hart is picked
as the minusend, and an old counter value on another hart was recorded
as the subtrahend but numerically larger. Then, the overflow causes
by that subtraction. Please let me name this guess as
"cross-hart subtraction."

> You can add a skew between cores in qemu, something like this:
>
> case CSR_INSTRET:
> core_id()*return cpu_get_host_ticks()/10;
> break;
> case CSR_CYCLE:
> return cpu_get_host_ticks();
> break;
>

However, I tried similar stuff to reproduce the phenomenon but in vain.
It seems that the cross-hart subtration doesn't even happen, because generic
code handles them. While I am still looking for the proof to it, I would
like to have more information on this first:

* What is the frequency of that "funny number" event? Was that often?

* If you monitor only one hart, will the event disappear?

* What will happen if you change the counter_width to fit U54's counter width?

* Is the test program you used open-sourced?

> Alex
>

Many thanks,
Alan

> On Wed, Mar 28, 2018 at 7:30 PM, Alan Kao <[email protected]> wrote:
> > Hi Alex,
> >
> > I'm appreciated for your reply and tests.
> >
> > On Wed, Mar 28, 2018 at 03:58:41PM -0700, Alex Solomatnikov wrote:
> >> Did you test this code?
> >
> > I did test this patch on QEMU's virt model with multi-hart, which is the only
> > RISC-V machine I have for now. But as I mentioned in
> > https://github.com/riscv/riscv-qemu/pull/115 , the hardware counter support
> > in QEMU is not fully conformed to the 1.10 Priv-Spec, so I had to slightly
> > tweak the code to make reading work.
> >
> > Specifically, the read to cycle and instret in QEMU looks like this:
> > ...
> > case CSR_INSTRET:
> > case CSR_CYCLE:
> > // if (ctr_ok) {
> > return cpu_get_host_ticks();
> > // }
> > break;
> > ...
> > and the two lines of comment was the tweak.
> >
> > On such environment, I did not get anything unexpected. No matter which of them
> > is requested, QEMU returns the host's tick.
> >
> >>
> >> I got funny numbers when I tried to run it on HiFive Unleashed:
> >>
> >> perf stat mem-latency
> >> ...
> >>
> >> Performance counter stats for 'mem-latency':
> >>
> >> 157.907000 task-clock (msec) # 0.940 CPUs utilized
> >>
> >> 1 context-switches # 0.006 K/sec
> >>
> >> 1 cpu-migrations # 0.006 K/sec
> >>
> >> 4102 page-faults # 0.026 M/sec
> >>
> >> 157923752 cycles # 1.000 GHz
> >>
> >> 9223372034948899840 instructions # 58403957087.78 insn
> >> per cycle
> >> <not supported> branches
> >>
> >> <not supported> branch-misses
> >>
> >>
> >> 0.168046000 seconds time elapsed
> >>
> >>
> >> Tracing read_counter(), I see this:
> >>
> >> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.058809] CPU 3:
> >> read_counter idx=0 val=2528358954912
> >> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.063339] CPU 3:
> >> read_counter idx=1 val=53892244920
> >> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.118160] CPU 3:
> >> read_counter idx=0 val=2528418303035
> >> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.122694] CPU 3:
> >> read_counter idx=1 val=53906699665
> >> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.216736] CPU 1:
> >> read_counter idx=0 val=2528516878664
> >> Jan 1 00:41:50 buildroot user.info kernel: [ 2510.221270] CPU 1:
> >> read_counter idx=1 val=51986369142
> >>
> >> It looks like the counter values from different cores are subtracted and
> >> wraparound occurs.
> >>
> >
> > Thanks for the hint. It makes sense. 9223372034948899840 is 7fffffff8e66a400,
> > which should be a wraparound with the mask I set (63-bit) in the code.
> >
> > I will try this direction. Ideally, we can solve it by explicitly syncing the
> > hwc->prev_count when a cpu migration event happens.
> >
> >>
> >> Also, core IDs and socket IDs are wrong in perf report:
> >>
> >
> > As Palmer has replied to this, I have no comment here.
> >
> >> perf report --header -I
> >> Error:
> >> The perf.data file has no samples!
> >> # ========
> >> # captured on: Thu Jan 1 02:52:07 1970
> >> # hostname : buildroot
> >> # os release : 4.15.0-00045-g0d7c030-dirty
> >> # perf version : 4.15.0
> >> # arch : riscv64
> >> # nrcpus online : 4
> >> # nrcpus avail : 5
> >> # total memory : 8188340 kB
> >> # cmdline : /usr/bin/perf record -F 1000 lat_mem_rd -P 1 -W 1 -N 1 -t 10
> >> # event : name = cycles:ppp, , size = 112, { sample_period, sample_freq } =
> >> 1000, sample_type = IP|TID|TIME|PERIOD, disabled = 1, inherit = 1, mmap =
> >> 1, comm = 1, freq = 1, enable_on_exec = 1, task = 1, precise_ip = 3,
> >> sample_id_all = 1, exclude_guest = 1, mmap2 = 1, comm_exec = 1
> >> # sibling cores : 1
> >> # sibling cores : 2
> >> # sibling cores : 3
> >> # sibling cores : 4
> >> # sibling threads : 1
> >> # sibling threads : 2
> >> # sibling threads : 3
> >> # sibling threads : 4
> >> # CPU 0: Core ID -1, Socket ID -1
> >> # CPU 1: Core ID 0, Socket ID -1
> >> # CPU 2: Core ID 0, Socket ID -1
> >> # CPU 3: Core ID 0, Socket ID -1
> >> # CPU 4: Core ID 0, Socket ID -1
> >> # pmu mappings: cpu = 4, software = 1
> >> # CPU cache info:
> >> # L1 Instruction 32K [1]
> >> # L1 Data 32K [1]
> >> # L1 Instruction 32K [2]
> >> # L1 Data 32K [2]
> >> # L1 Instruction 32K [3]
> >> # L1 Data 32K [3]
> >> # missing features: TRACING_DATA BUILD_ID CPUDESC CPUID NUMA_TOPOLOGY
> >> BRANCH_STACK GROUP_DESC AUXTRACE STAT
> >> # ========
> >>
> >>
> >> Alex
> >>
> >
> > Many thanks,
> > Alan
> >
> >> On Mon, Mar 26, 2018 at 12:57 AM, Alan Kao <[email protected]> wrote:
> >>
> >> > This patch provide a basic PMU, riscv_base_pmu, which supports two
> >> > general hardware event, instructions and cycles. Furthermore, this
> >> > PMU serves as a reference implementation to ease the portings in
> >> > the future.
> >> >
> >> > riscv_base_pmu should be able to run on any RISC-V machine that
> >> > conforms to the Priv-Spec. Note that the latest qemu model hasn't
> >> > fully support a proper behavior of Priv-Spec 1.10 yet, but work
> >> > around should be easy with very small fixes. Please check
> >> > https://github.com/riscv/riscv-qemu/pull/115 for future updates.
> >> >
> >> > Cc: Nick Hu <[email protected]>
> >> > Cc: Greentime Hu <[email protected]>
> >> > Signed-off-by: Alan Kao <[email protected]>
> >> > ---
> >> > arch/riscv/Kconfig | 12 +
> >> > arch/riscv/include/asm/perf_event.h | 76 +++++-
> >> > arch/riscv/kernel/Makefile | 1 +
> >> > arch/riscv/kernel/perf_event.c | 469 ++++++++++++++++++++++++++++++
> >> > ++++++
> >> > 4 files changed, 554 insertions(+), 4 deletions(-)
> >> > create mode 100644 arch/riscv/kernel/perf_event.c
> >> >
> >> > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> >> > index 310b9a5d6737..dd4aecfb5265 100644
> >> > --- a/arch/riscv/Kconfig
> >> > +++ b/arch/riscv/Kconfig
> >> > @@ -195,6 +195,18 @@ config RISCV_ISA_C
> >> > config RISCV_ISA_A
> >> > def_bool y
> >> >
> >> > +menu "PMU type"
> >> > + depends on PERF_EVENTS
> >> > +
> >> > +config RISCV_BASE_PMU
> >> > + bool "Base Performance Monitoring Unit"
> >> > + def_bool y
> >> > + help
> >> > + A base PMU that serves as a reference implementation and has
> >> > limited
> >> > + feature of perf.
> >> > +
> >> > +endmenu
> >> > +
> >> > endmenu
> >> >
> >> > menu "Kernel type"
> >> > diff --git a/arch/riscv/include/asm/perf_event.h
> >> > b/arch/riscv/include/asm/perf_event.h
> >> > index e13d2ff29e83..98e2efb02d25 100644
> >> > --- a/arch/riscv/include/asm/perf_event.h
> >> > +++ b/arch/riscv/include/asm/perf_event.h
> >> > @@ -1,13 +1,81 @@
> >> > +/* SPDX-License-Identifier: GPL-2.0 */
> >> > /*
> >> > * Copyright (C) 2018 SiFive
> >> > + * Copyright (C) 2018 Andes Technology Corporation
> >> > *
> >> > - * This program is free software; you can redistribute it and/or
> >> > - * modify it under the terms of the GNU General Public Licence
> >> > - * as published by the Free Software Foundation; either version
> >> > - * 2 of the Licence, or (at your option) any later version.
> >> > */
> >> >
> >> > #ifndef _ASM_RISCV_PERF_EVENT_H
> >> > #define _ASM_RISCV_PERF_EVENT_H
> >> >
> >> > +#include <linux/perf_event.h>
> >> > +#include <linux/ptrace.h>
> >> > +
> >> > +#define RISCV_BASE_COUNTERS 2
> >> > +
> >> > +/*
> >> > + * The RISCV_MAX_COUNTERS parameter should be specified.
> >> > + */
> >> > +
> >> > +#ifdef CONFIG_RISCV_BASE_PMU
> >> > +#define RISCV_MAX_COUNTERS 2
> >> > +#endif
> >> > +
> >> > +#ifndef RISCV_MAX_COUNTERS
> >> > +#error "Please provide a valid RISCV_MAX_COUNTERS for the PMU."
> >> > +#endif
> >> > +
> >> > +/*
> >> > + * These are the indexes of bits in counteren register *minus* 1,
> >> > + * except for cycle. It would be coherent if it can directly mapped
> >> > + * to counteren bit definition, but there is a *time* register at
> >> > + * counteren[1]. Per-cpu structure is scarce resource here.
> >> > + *
> >> > + * According to the spec, an implementation can support counter up to
> >> > + * mhpmcounter31, but many high-end processors has at most 6 general
> >> > + * PMCs, we give the definition to MHPMCOUNTER8 here.
> >> > + */
> >> > +#define RISCV_PMU_CYCLE 0
> >> > +#define RISCV_PMU_INSTRET 1
> >> > +#define RISCV_PMU_MHPMCOUNTER3 2
> >> > +#define RISCV_PMU_MHPMCOUNTER4 3
> >> > +#define RISCV_PMU_MHPMCOUNTER5 4
> >> > +#define RISCV_PMU_MHPMCOUNTER6 5
> >> > +#define RISCV_PMU_MHPMCOUNTER7 6
> >> > +#define RISCV_PMU_MHPMCOUNTER8 7
> >> > +
> >> > +#define RISCV_OP_UNSUPP (-EOPNOTSUPP)
> >> > +
> >> > +struct cpu_hw_events {
> >> > + /* # currently enabled events*/
> >> > + int n_events;
> >> > + /* currently enabled events */
> >> > + struct perf_event *events[RISCV_MAX_COUNTERS];
> >> > + /* vendor-defined PMU data */
> >> > + void *platform;
> >> > +};
> >> > +
> >> > +struct riscv_pmu {
> >> > + struct pmu *pmu;
> >> > +
> >> > + /* generic hw/cache events table */
> >> > + const int *hw_events;
> >> > + const int (*cache_events)[PERF_COUNT_HW_CACHE_MAX]
> >> > + [PERF_COUNT_HW_CACHE_OP_MAX]
> >> > + [PERF_COUNT_HW_CACHE_RESULT_MAX];
> >> > + /* method used to map hw/cache events */
> >> > + int (*map_hw_event)(u64 config);
> >> > + int (*map_cache_event)(u64 config);
> >> > +
> >> > + /* max generic hw events in map */
> >> > + int max_events;
> >> > + /* number total counters, 2(base) + x(general) */
> >> > + int num_counters;
> >> > + /* the width of the counter */
> >> > + int counter_width;
> >> > +
> >> > + /* vendor-defined PMU features */
> >> > + void *platform;
> >> > +};
> >> > +
> >> > #endif /* _ASM_RISCV_PERF_EVENT_H */
> >> > diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
> >> > index 196f62ffc428..849c38d9105f 100644
> >> > --- a/arch/riscv/kernel/Makefile
> >> > +++ b/arch/riscv/kernel/Makefile
> >> > @@ -36,5 +36,6 @@ obj-$(CONFIG_SMP) += smp.o
> >> > obj-$(CONFIG_MODULES) += module.o
> >> > obj-$(CONFIG_FUNCTION_TRACER) += mcount.o
> >> > obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
> >> > +obj-$(CONFIG_PERF_EVENTS) += perf_event.o
> >> >
> >> > clean:
> >> > diff --git a/arch/riscv/kernel/perf_event.c b/arch/riscv/kernel/perf_
> >> > event.c
> >> > new file mode 100644
> >> > index 000000000000..b78cb486683b
> >> > --- /dev/null
> >> > +++ b/arch/riscv/kernel/perf_event.c
> >> > @@ -0,0 +1,469 @@
> >> > +/* SPDX-License-Identifier: GPL-2.0 */
> >> > +/*
> >> > + * Copyright (C) 2008 Thomas Gleixner <[email protected]>
> >> > + * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
> >> > + * Copyright (C) 2009 Jaswinder Singh Rajput
> >> > + * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
> >> > + * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra
> >> > + * Copyright (C) 2009 Intel Corporation, <[email protected]>
> >> > + * Copyright (C) 2009 Google, Inc., Stephane Eranian
> >> > + * Copyright 2014 Tilera Corporation. All Rights Reserved.
> >> > + * Copyright (C) 2018 Andes Technology Corporation
> >> > + *
> >> > + * Perf_events support for RISC-V platforms.
> >> > + *
> >> > + * Since the spec. (as of now, Priv-Spec 1.10) does not provide enough
> >> > + * functionality for perf event to fully work, this file provides
> >> > + * the very basic framework only.
> >> > + *
> >> > + * For platform portings, please check Documentations/riscv/pmu.txt.
> >> > + *
> >> > + * The Copyright line includes x86 and tile ones.
> >> > + */
> >> > +
> >> > +#include <linux/kprobes.h>
> >> > +#include <linux/kernel.h>
> >> > +#include <linux/kdebug.h>
> >> > +#include <linux/mutex.h>
> >> > +#include <linux/bitmap.h>
> >> > +#include <linux/irq.h>
> >> > +#include <linux/interrupt.h>
> >> > +#include <linux/perf_event.h>
> >> > +#include <linux/atomic.h>
> >> > +#include <asm/perf_event.h>
> >> > +
> >> > +static const struct riscv_pmu *riscv_pmu __read_mostly;
> >> > +static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events);
> >> > +
> >> > +/*
> >> > + * Hardware & cache maps and their methods
> >> > + */
> >> > +
> >> > +static const int riscv_hw_event_map[] = {
> >> > + [PERF_COUNT_HW_CPU_CYCLES] = RISCV_PMU_CYCLE,
> >> > + [PERF_COUNT_HW_INSTRUCTIONS] = RISCV_PMU_INSTRET,
> >> > + [PERF_COUNT_HW_CACHE_REFERENCES] = RISCV_OP_UNSUPP,
> >> > + [PERF_COUNT_HW_CACHE_MISSES] = RISCV_OP_UNSUPP,
> >> > + [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = RISCV_OP_UNSUPP,
> >> > + [PERF_COUNT_HW_BRANCH_MISSES] = RISCV_OP_UNSUPP,
> >> > + [PERF_COUNT_HW_BUS_CYCLES] = RISCV_OP_UNSUPP,
> >> > +};
> >> > +
> >> > +#define C(x) PERF_COUNT_HW_CACHE_##x
> >> > +static const int riscv_cache_event_map[PERF_COUNT_HW_CACHE_MAX]
> >> > +[PERF_COUNT_HW_CACHE_OP_MAX]
> >> > +[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
> >> > + [C(L1D)] = {
> >> > + [C(OP_READ)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + [C(OP_WRITE)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + [C(OP_PREFETCH)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + },
> >> > + [C(L1I)] = {
> >> > + [C(OP_READ)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + [C(OP_WRITE)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + [C(OP_PREFETCH)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + },
> >> > + [C(LL)] = {
> >> > + [C(OP_READ)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + [C(OP_WRITE)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + [C(OP_PREFETCH)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + },
> >> > + [C(DTLB)] = {
> >> > + [C(OP_READ)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + [C(OP_WRITE)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + [C(OP_PREFETCH)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + },
> >> > + [C(ITLB)] = {
> >> > + [C(OP_READ)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + [C(OP_WRITE)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + [C(OP_PREFETCH)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + },
> >> > + [C(BPU)] = {
> >> > + [C(OP_READ)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + [C(OP_WRITE)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + [C(OP_PREFETCH)] = {
> >> > + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> >> > + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> >> > + },
> >> > + },
> >> > +};
> >> > +
> >> > +static int riscv_map_hw_event(u64 config)
> >> > +{
> >> > + if (config >= riscv_pmu->max_events)
> >> > + return -EINVAL;
> >> > +
> >> > + return riscv_pmu->hw_events[config];
> >> > +}
> >> > +
> >> > +int riscv_map_cache_decode(u64 config, unsigned int *type,
> >> > + unsigned int *op, unsigned int *result)
> >> > +{
> >> > + return -ENOENT;
> >> > +}
> >> > +
> >> > +static int riscv_map_cache_event(u64 config)
> >> > +{
> >> > + unsigned int type, op, result;
> >> > + int err = -ENOENT;
> >> > + int code;
> >> > +
> >> > + err = riscv_map_cache_decode(config, &type, &op, &result);
> >> > + if (!riscv_pmu->cache_events || err)
> >> > + return err;
> >> > +
> >> > + if (type >= PERF_COUNT_HW_CACHE_MAX ||
> >> > + op >= PERF_COUNT_HW_CACHE_OP_MAX ||
> >> > + result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
> >> > + return -EINVAL;
> >> > +
> >> > + code = (*riscv_pmu->cache_events)[type][op][result];
> >> > + if (code == RISCV_OP_UNSUPP)
> >> > + return -EINVAL;
> >> > +
> >> > + return code;
> >> > +}
> >> > +
> >> > +/*
> >> > + * Low-level functions: reading/writing counters
> >> > + */
> >> > +
> >> > +static inline u64 read_counter(int idx)
> >> > +{
> >> > + u64 val = 0;
> >> > +
> >> > + switch (idx) {
> >> > + case RISCV_PMU_CYCLE:
> >> > + val = csr_read(cycle);
> >> > + break;
> >> > + case RISCV_PMU_INSTRET:
> >> > + val = csr_read(instret);
> >> > + break;
> >> > + default:
> >> > + WARN_ON_ONCE(idx < 0 || idx > RISCV_MAX_COUNTERS);
> >> > + return -EINVAL;
> >> > + }
> >> > +
> >> > + return val;
> >> > +}
> >> > +
> >> > +static inline void write_counter(int idx, u64 value)
> >> > +{
> >> > + /* currently not supported */
> >> > +}
> >> > +
> >> > +/*
> >> > + * pmu->read: read and update the counter
> >> > + *
> >> > + * Other architectures' implementation often have a xxx_perf_event_update
> >> > + * routine, which can return counter values when called in the IRQ, but
> >> > + * return void when being called by the pmu->read method.
> >> > + */
> >> > +static void riscv_pmu_read(struct perf_event *event)
> >> > +{
> >> > + struct hw_perf_event *hwc = &event->hw;
> >> > + u64 prev_raw_count, new_raw_count;
> >> > + u64 oldval;
> >> > + int idx = hwc->idx;
> >> > + u64 delta;
> >> > +
> >> > + do {
> >> > + prev_raw_count = local64_read(&hwc->prev_count);
> >> > + new_raw_count = read_counter(idx);
> >> > +
> >> > + oldval = local64_cmpxchg(&hwc->prev_count, prev_raw_count,
> >> > + new_raw_count);
> >> > + } while (oldval != prev_raw_count);
> >> > +
> >> > + /*
> >> > + * delta is the value to update the counter we maintain in the
> >> > kernel.
> >> > + */
> >> > + delta = (new_raw_count - prev_raw_count) &
> >> > + ((1ULL << riscv_pmu->counter_width) - 1);
> >> > + local64_add(delta, &event->count);
> >> > + /*
> >> > + * Something like local64_sub(delta, &hwc->period_left) here is
> >> > + * needed if there is an interrupt for perf.
> >> > + */
> >> > +}
> >> > +
> >> > +/*
> >> > + * State transition functions:
> >> > + *
> >> > + * stop()/start() & add()/del()
> >> > + */
> >> > +
> >> > +/*
> >> > + * pmu->stop: stop the counter
> >> > + */
> >> > +static void riscv_pmu_stop(struct perf_event *event, int flags)
> >> > +{
> >> > + struct hw_perf_event *hwc = &event->hw;
> >> > +
> >> > + WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
> >> > + hwc->state |= PERF_HES_STOPPED;
> >> > +
> >> > + if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE))
> >> > {
> >> > + riscv_pmu_read(event);
> >> > + hwc->state |= PERF_HES_UPTODATE;
> >> > + }
> >> > +}
> >> > +
> >> > +/*
> >> > + * pmu->start: start the event.
> >> > + */
> >> > +static void riscv_pmu_start(struct perf_event *event, int flags)
> >> > +{
> >> > + struct hw_perf_event *hwc = &event->hw;
> >> > +
> >> > + if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
> >> > + return;
> >> > +
> >> > + if (flags & PERF_EF_RELOAD) {
> >> > + WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE));
> >> > +
> >> > + /*
> >> > + * Set the counter to the period to the next interrupt
> >> > here,
> >> > + * if you have any.
> >> > + */
> >> > + }
> >> > +
> >> > + hwc->state = 0;
> >> > + perf_event_update_userpage(event);
> >> > +
> >> > + /*
> >> > + * Since we cannot write to counters, this serves as an
> >> > initialization
> >> > + * to the delta-mechanism in pmu->read(); otherwise, the delta
> >> > would be
> >> > + * wrong when pmu->read is called for the first time.
> >> > + */
> >> > + if (local64_read(&hwc->prev_count) == 0)
> >> > + local64_set(&hwc->prev_count, read_counter(hwc->idx));
> >> > +}
> >> > +
> >> > +/*
> >> > + * pmu->add: add the event to PMU.
> >> > + */
> >> > +static int riscv_pmu_add(struct perf_event *event, int flags)
> >> > +{
> >> > + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> >> > + struct hw_perf_event *hwc = &event->hw;
> >> > +
> >> > + if (cpuc->n_events == riscv_pmu->num_counters)
> >> > + return -ENOSPC;
> >> > +
> >> > + /*
> >> > + * We don't have general conunters, so no binding-event-to-counter
> >> > + * process here.
> >> > + *
> >> > + * Indexing using hwc->config generally not works, since config may
> >> > + * contain extra information, but here the only info we have in
> >> > + * hwc->config is the event index.
> >> > + */
> >> > + hwc->idx = hwc->config;
> >> > + cpuc->events[hwc->idx] = event;
> >> > + cpuc->n_events++;
> >> > +
> >> > + hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
> >> > +
> >> > + if (flags & PERF_EF_START)
> >> > + riscv_pmu_start(event, PERF_EF_RELOAD);
> >> > +
> >> > + return 0;
> >> > +}
> >> > +
> >> > +/*
> >> > + * pmu->del: delete the event from PMU.
> >> > + */
> >> > +static void riscv_pmu_del(struct perf_event *event, int flags)
> >> > +{
> >> > + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> >> > + struct hw_perf_event *hwc = &event->hw;
> >> > +
> >> > + cpuc->events[hwc->idx] = NULL;
> >> > + cpuc->n_events--;
> >> > + riscv_pmu_stop(event, PERF_EF_UPDATE);
> >> > + perf_event_update_userpage(event);
> >> > +}
> >> > +
> >> > +/*
> >> > + * Interrupt
> >> > + */
> >> > +
> >> > +static DEFINE_MUTEX(pmc_reserve_mutex);
> >> > +typedef void (*perf_irq_t)(void *riscv_perf_irq);
> >> > +perf_irq_t perf_irq;
> >> > +
> >> > +void riscv_pmu_handle_irq(void *riscv_perf_irq)
> >> > +{
> >> > +}
> >> > +
> >> > +static perf_irq_t reserve_pmc_hardware(void)
> >> > +{
> >> > + perf_irq_t old;
> >> > +
> >> > + mutex_lock(&pmc_reserve_mutex);
> >> > + old = perf_irq;
> >> > + perf_irq = &riscv_pmu_handle_irq;
> >> > + mutex_unlock(&pmc_reserve_mutex);
> >> > +
> >> > + return old;
> >> > +}
> >> > +
> >> > +void release_pmc_hardware(void)
> >> > +{
> >> > + mutex_lock(&pmc_reserve_mutex);
> >> > + perf_irq = NULL;
> >> > + mutex_unlock(&pmc_reserve_mutex);
> >> > +}
> >> > +
> >> > +/*
> >> > + * Event Initialization
> >> > + */
> >> > +
> >> > +static atomic_t riscv_active_events;
> >> > +
> >> > +static void riscv_event_destroy(struct perf_event *event)
> >> > +{
> >> > + if (atomic_dec_return(&riscv_active_events) == 0)
> >> > + release_pmc_hardware();
> >> > +}
> >> > +
> >> > +static int riscv_event_init(struct perf_event *event)
> >> > +{
> >> > + struct perf_event_attr *attr = &event->attr;
> >> > + struct hw_perf_event *hwc = &event->hw;
> >> > + perf_irq_t old_irq_handler = NULL;
> >> > + int code;
> >> > +
> >> > + if (atomic_inc_return(&riscv_active_events) == 1)
> >> > + old_irq_handler = reserve_pmc_hardware();
> >> > +
> >> > + if (old_irq_handler) {
> >> > + pr_warn("PMC hardware busy (reserved by oprofile)\n");
> >> > + atomic_dec(&riscv_active_events);
> >> > + return -EBUSY;
> >> > + }
> >> > +
> >> > + switch (event->attr.type) {
> >> > + case PERF_TYPE_HARDWARE:
> >> > + code = riscv_pmu->map_hw_event(attr->config);
> >> > + break;
> >> > + case PERF_TYPE_HW_CACHE:
> >> > + code = riscv_pmu->map_cache_event(attr->config);
> >> > + break;
> >> > + case PERF_TYPE_RAW:
> >> > + return -EOPNOTSUPP;
> >> > + default:
> >> > + return -ENOENT;
> >> > + }
> >> > +
> >> > + event->destroy = riscv_event_destroy;
> >> > + if (code < 0) {
> >> > + event->destroy(event);
> >> > + return code;
> >> > + }
> >> > +
> >> > + /*
> >> > + * idx is set to -1 because the index of a general event should
> >> > not be
> >> > + * decided until binding to some counter in pmu->add().
> >> > + *
> >> > + * But since we don't have such support, later in pmu->add(), we
> >> > just
> >> > + * use hwc->config as the index instead.
> >> > + */
> >> > + hwc->config = code;
> >> > + hwc->idx = -1;
> >> > +
> >> > + return 0;
> >> > +}
> >> > +
> >> > +/*
> >> > + * Initialization
> >> > + */
> >> > +
> >> > +static struct pmu min_pmu = {
> >> > + .name = "riscv-base",
> >> > + .event_init = riscv_event_init,
> >> > + .add = riscv_pmu_add,
> >> > + .del = riscv_pmu_del,
> >> > + .start = riscv_pmu_start,
> >> > + .stop = riscv_pmu_stop,
> >> > + .read = riscv_pmu_read,
> >> > +};
> >> > +
> >> > +static const struct riscv_pmu riscv_base_pmu = {
> >> > + .pmu = &min_pmu,
> >> > + .max_events = ARRAY_SIZE(riscv_hw_event_map),
> >> > + .map_hw_event = riscv_map_hw_event,
> >> > + .hw_events = riscv_hw_event_map,
> >> > + .map_cache_event = riscv_map_cache_event,
> >> > + .cache_events = &riscv_cache_event_map,
> >> > + .counter_width = 63,
> >> > + .num_counters = RISCV_BASE_COUNTERS + 0,
> >> > +};
> >> > +
> >> > +struct pmu * __weak __init riscv_init_platform_pmu(void)
> >> > +{
> >> > + riscv_pmu = &riscv_base_pmu;
> >> > + return riscv_pmu->pmu;
> >> > +}
> >> > +
> >> > +int __init init_hw_perf_events(void)
> >> > +{
> >> > + struct pmu *pmu = riscv_init_platform_pmu();
> >> > +
> >> > + perf_irq = NULL;
> >> > + perf_pmu_register(pmu, "cpu", PERF_TYPE_RAW);
> >> > + return 0;
> >> > +}
> >> > +arch_initcall(init_hw_perf_events);
> >> > --
> >> > 2.16.2
> >> >
> >> >

2018-04-02 08:19:34

by Alan Kao

[permalink] [raw]
Subject: Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support


Hi Alex,

On Mon, Apr 02, 2018 at 03:36:12PM +0800, Alan Kao wrote:
> On Sat, Mar 31, 2018 at 03:47:10PM -0700, Alex Solomatnikov wrote:
>
> The original guess was that maybe, an counter value on a hart is picked
> as the minusend, and an old counter value on another hart was recorded
> as the subtrahend but numerically larger. Then, the overflow causes
> by that subtraction. Please let me name this guess as
> "cross-hart subtraction."
>
> > You can add a skew between cores in qemu, something like this:
> >
> > case CSR_INSTRET:
> > core_id()*return cpu_get_host_ticks()/10;
> > break;
> > case CSR_CYCLE:
> > return cpu_get_host_ticks();
> > break;
> >
>
> However, I tried similar stuff to reproduce the phenomenon but in vain.
> It seems that the
>
> ***cross-hart subtration doesn't even happen, because generic
> code handles them. ...

I am sorry that this observation is wrong. With appropriate tweak, we
successfully reproduce the behavior and locate the the bug.

This will be fix in v2.


Thanks for the helps.
Alan

2018-04-05 16:49:18

by Palmer Dabbelt

[permalink] [raw]
Subject: Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

On Mon, 26 Mar 2018 00:57:54 PDT (-0700), [email protected] wrote:
> This patch provide a basic PMU, riscv_base_pmu, which supports two
> general hardware event, instructions and cycles. Furthermore, this
> PMU serves as a reference implementation to ease the portings in
> the future.
>
> riscv_base_pmu should be able to run on any RISC-V machine that
> conforms to the Priv-Spec. Note that the latest qemu model hasn't
> fully support a proper behavior of Priv-Spec 1.10 yet, but work
> around should be easy with very small fixes. Please check
> https://github.com/riscv/riscv-qemu/pull/115 for future updates.
>
> Cc: Nick Hu <[email protected]>
> Cc: Greentime Hu <[email protected]>
> Signed-off-by: Alan Kao <[email protected]>

We should really be able to detect PMU types at runtime (via a device tree
entry) rather than requiring that a single PMU is built in to the kernel. This
will require a handful of modifications to how this patch works, which I'll try
to list below.

> ---
> arch/riscv/Kconfig | 12 +
> arch/riscv/include/asm/perf_event.h | 76 +++++-
> arch/riscv/kernel/Makefile | 1 +
> arch/riscv/kernel/perf_event.c | 469 ++++++++++++++++++++++++++++++++++++
> 4 files changed, 554 insertions(+), 4 deletions(-)
> create mode 100644 arch/riscv/kernel/perf_event.c
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 310b9a5d6737..dd4aecfb5265 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -195,6 +195,18 @@ config RISCV_ISA_C
> config RISCV_ISA_A
> def_bool y
>
> +menu "PMU type"
> + depends on PERF_EVENTS
> +
> +config RISCV_BASE_PMU
> + bool "Base Performance Monitoring Unit"
> + def_bool y
> + help
> + A base PMU that serves as a reference implementation and has limited
> + feature of perf.
> +
> +endmenu
> +
> endmenu

Rather than a menu where a single PMU can be selected, there should be options
to enable or disable support for each PMU type -- this is just like how all our
other drivers work.

> menu "Kernel type"
> diff --git a/arch/riscv/include/asm/perf_event.h b/arch/riscv/include/asm/perf_event.h
> index e13d2ff29e83..98e2efb02d25 100644
> --- a/arch/riscv/include/asm/perf_event.h
> +++ b/arch/riscv/include/asm/perf_event.h
> @@ -1,13 +1,81 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> /*
> * Copyright (C) 2018 SiFive
> + * Copyright (C) 2018 Andes Technology Corporation
> *
> - * This program is free software; you can redistribute it and/or
> - * modify it under the terms of the GNU General Public Licence
> - * as published by the Free Software Foundation; either version
> - * 2 of the Licence, or (at your option) any later version.
> */
>
> #ifndef _ASM_RISCV_PERF_EVENT_H
> #define _ASM_RISCV_PERF_EVENT_H
>
> +#include <linux/perf_event.h>
> +#include <linux/ptrace.h>
> +
> +#define RISCV_BASE_COUNTERS 2
> +
> +/*
> + * The RISCV_MAX_COUNTERS parameter should be specified.
> + */
> +
> +#ifdef CONFIG_RISCV_BASE_PMU
> +#define RISCV_MAX_COUNTERS 2
> +#endif
> +
> +#ifndef RISCV_MAX_COUNTERS
> +#error "Please provide a valid RISCV_MAX_COUNTERS for the PMU."
> +#endif
> +
> +/*
> + * These are the indexes of bits in counteren register *minus* 1,
> + * except for cycle. It would be coherent if it can directly mapped
> + * to counteren bit definition, but there is a *time* register at
> + * counteren[1]. Per-cpu structure is scarce resource here.
> + *
> + * According to the spec, an implementation can support counter up to
> + * mhpmcounter31, but many high-end processors has at most 6 general
> + * PMCs, we give the definition to MHPMCOUNTER8 here.
> + */
> +#define RISCV_PMU_CYCLE 0
> +#define RISCV_PMU_INSTRET 1
> +#define RISCV_PMU_MHPMCOUNTER3 2
> +#define RISCV_PMU_MHPMCOUNTER4 3
> +#define RISCV_PMU_MHPMCOUNTER5 4
> +#define RISCV_PMU_MHPMCOUNTER6 5
> +#define RISCV_PMU_MHPMCOUNTER7 6
> +#define RISCV_PMU_MHPMCOUNTER8 7
> +
> +#define RISCV_OP_UNSUPP (-EOPNOTSUPP)
> +
> +struct cpu_hw_events {
> + /* # currently enabled events*/
> + int n_events;
> + /* currently enabled events */
> + struct perf_event *events[RISCV_MAX_COUNTERS];
> + /* vendor-defined PMU data */
> + void *platform;
> +};
> +
> +struct riscv_pmu {
> + struct pmu *pmu;
> +
> + /* generic hw/cache events table */
> + const int *hw_events;
> + const int (*cache_events)[PERF_COUNT_HW_CACHE_MAX]
> + [PERF_COUNT_HW_CACHE_OP_MAX]
> + [PERF_COUNT_HW_CACHE_RESULT_MAX];
> + /* method used to map hw/cache events */
> + int (*map_hw_event)(u64 config);
> + int (*map_cache_event)(u64 config);
> +
> + /* max generic hw events in map */
> + int max_events;
> + /* number total counters, 2(base) + x(general) */
> + int num_counters;
> + /* the width of the counter */
> + int counter_width;
> +
> + /* vendor-defined PMU features */
> + void *platform;
> +};
> +
> #endif /* _ASM_RISCV_PERF_EVENT_H */
> diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
> index 196f62ffc428..849c38d9105f 100644
> --- a/arch/riscv/kernel/Makefile
> +++ b/arch/riscv/kernel/Makefile
> @@ -36,5 +36,6 @@ obj-$(CONFIG_SMP) += smp.o
> obj-$(CONFIG_MODULES) += module.o
> obj-$(CONFIG_FUNCTION_TRACER) += mcount.o
> obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
> +obj-$(CONFIG_PERF_EVENTS) += perf_event.o
>
> clean:
> diff --git a/arch/riscv/kernel/perf_event.c b/arch/riscv/kernel/perf_event.c
> new file mode 100644
> index 000000000000..b78cb486683b
> --- /dev/null
> +++ b/arch/riscv/kernel/perf_event.c
> @@ -0,0 +1,469 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2008 Thomas Gleixner <[email protected]>
> + * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
> + * Copyright (C) 2009 Jaswinder Singh Rajput
> + * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
> + * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra
> + * Copyright (C) 2009 Intel Corporation, <[email protected]>
> + * Copyright (C) 2009 Google, Inc., Stephane Eranian
> + * Copyright 2014 Tilera Corporation. All Rights Reserved.
> + * Copyright (C) 2018 Andes Technology Corporation
> + *
> + * Perf_events support for RISC-V platforms.
> + *
> + * Since the spec. (as of now, Priv-Spec 1.10) does not provide enough
> + * functionality for perf event to fully work, this file provides
> + * the very basic framework only.
> + *
> + * For platform portings, please check Documentations/riscv/pmu.txt.
> + *
> + * The Copyright line includes x86 and tile ones.
> + */
> +
> +#include <linux/kprobes.h>
> +#include <linux/kernel.h>
> +#include <linux/kdebug.h>
> +#include <linux/mutex.h>
> +#include <linux/bitmap.h>
> +#include <linux/irq.h>
> +#include <linux/interrupt.h>
> +#include <linux/perf_event.h>
> +#include <linux/atomic.h>
> +#include <asm/perf_event.h>
> +
> +static const struct riscv_pmu *riscv_pmu __read_mostly;
> +static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events);
> +
> +/*
> + * Hardware & cache maps and their methods
> + */
> +
> +static const int riscv_hw_event_map[] = {
> + [PERF_COUNT_HW_CPU_CYCLES] = RISCV_PMU_CYCLE,
> + [PERF_COUNT_HW_INSTRUCTIONS] = RISCV_PMU_INSTRET,
> + [PERF_COUNT_HW_CACHE_REFERENCES] = RISCV_OP_UNSUPP,
> + [PERF_COUNT_HW_CACHE_MISSES] = RISCV_OP_UNSUPP,
> + [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = RISCV_OP_UNSUPP,
> + [PERF_COUNT_HW_BRANCH_MISSES] = RISCV_OP_UNSUPP,
> + [PERF_COUNT_HW_BUS_CYCLES] = RISCV_OP_UNSUPP,
> +};
> +
> +#define C(x) PERF_COUNT_HW_CACHE_##x
> +static const int riscv_cache_event_map[PERF_COUNT_HW_CACHE_MAX]
> +[PERF_COUNT_HW_CACHE_OP_MAX]
> +[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
> + [C(L1D)] = {
> + [C(OP_READ)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + [C(OP_WRITE)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + [C(OP_PREFETCH)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + },
> + [C(L1I)] = {
> + [C(OP_READ)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + [C(OP_WRITE)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + [C(OP_PREFETCH)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + },
> + [C(LL)] = {
> + [C(OP_READ)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + [C(OP_WRITE)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + [C(OP_PREFETCH)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + },
> + [C(DTLB)] = {
> + [C(OP_READ)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + [C(OP_WRITE)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + [C(OP_PREFETCH)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + },
> + [C(ITLB)] = {
> + [C(OP_READ)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + [C(OP_WRITE)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + [C(OP_PREFETCH)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + },
> + [C(BPU)] = {
> + [C(OP_READ)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + [C(OP_WRITE)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + [C(OP_PREFETCH)] = {
> + [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
> + [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
> + },
> + },
> +};
> +
> +static int riscv_map_hw_event(u64 config)
> +{
> + if (config >= riscv_pmu->max_events)
> + return -EINVAL;
> +
> + return riscv_pmu->hw_events[config];
> +}
> +
> +int riscv_map_cache_decode(u64 config, unsigned int *type,
> + unsigned int *op, unsigned int *result)
> +{
> + return -ENOENT;
> +}
> +
> +static int riscv_map_cache_event(u64 config)
> +{
> + unsigned int type, op, result;
> + int err = -ENOENT;
> + int code;
> +
> + err = riscv_map_cache_decode(config, &type, &op, &result);
> + if (!riscv_pmu->cache_events || err)
> + return err;
> +
> + if (type >= PERF_COUNT_HW_CACHE_MAX ||
> + op >= PERF_COUNT_HW_CACHE_OP_MAX ||
> + result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
> + return -EINVAL;
> +
> + code = (*riscv_pmu->cache_events)[type][op][result];
> + if (code == RISCV_OP_UNSUPP)
> + return -EINVAL;
> +
> + return code;
> +}
> +
> +/*
> + * Low-level functions: reading/writing counters
> + */
> +
> +static inline u64 read_counter(int idx)
> +{
> + u64 val = 0;
> +
> + switch (idx) {
> + case RISCV_PMU_CYCLE:
> + val = csr_read(cycle);
> + break;
> + case RISCV_PMU_INSTRET:
> + val = csr_read(instret);
> + break;
> + default:
> + WARN_ON_ONCE(idx < 0 || idx > RISCV_MAX_COUNTERS);
> + return -EINVAL;
> + }
> +
> + return val;
> +}
> +
> +static inline void write_counter(int idx, u64 value)
> +{
> + /* currently not supported */
> +}
> +
> +/*
> + * pmu->read: read and update the counter
> + *
> + * Other architectures' implementation often have a xxx_perf_event_update
> + * routine, which can return counter values when called in the IRQ, but
> + * return void when being called by the pmu->read method.
> + */
> +static void riscv_pmu_read(struct perf_event *event)
> +{
> + struct hw_perf_event *hwc = &event->hw;
> + u64 prev_raw_count, new_raw_count;
> + u64 oldval;
> + int idx = hwc->idx;
> + u64 delta;
> +
> + do {
> + prev_raw_count = local64_read(&hwc->prev_count);
> + new_raw_count = read_counter(idx);
> +
> + oldval = local64_cmpxchg(&hwc->prev_count, prev_raw_count,
> + new_raw_count);
> + } while (oldval != prev_raw_count);
> +
> + /*
> + * delta is the value to update the counter we maintain in the kernel.
> + */
> + delta = (new_raw_count - prev_raw_count) &
> + ((1ULL << riscv_pmu->counter_width) - 1);
> + local64_add(delta, &event->count);
> + /*
> + * Something like local64_sub(delta, &hwc->period_left) here is
> + * needed if there is an interrupt for perf.
> + */
> +}
> +
> +/*
> + * State transition functions:
> + *
> + * stop()/start() & add()/del()
> + */
> +
> +/*
> + * pmu->stop: stop the counter
> + */
> +static void riscv_pmu_stop(struct perf_event *event, int flags)
> +{
> + struct hw_perf_event *hwc = &event->hw;
> +
> + WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
> + hwc->state |= PERF_HES_STOPPED;
> +
> + if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
> + riscv_pmu_read(event);
> + hwc->state |= PERF_HES_UPTODATE;
> + }
> +}
> +
> +/*
> + * pmu->start: start the event.
> + */
> +static void riscv_pmu_start(struct perf_event *event, int flags)
> +{
> + struct hw_perf_event *hwc = &event->hw;
> +
> + if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
> + return;
> +
> + if (flags & PERF_EF_RELOAD) {
> + WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE));
> +
> + /*
> + * Set the counter to the period to the next interrupt here,
> + * if you have any.
> + */
> + }
> +
> + hwc->state = 0;
> + perf_event_update_userpage(event);
> +
> + /*
> + * Since we cannot write to counters, this serves as an initialization
> + * to the delta-mechanism in pmu->read(); otherwise, the delta would be
> + * wrong when pmu->read is called for the first time.
> + */
> + if (local64_read(&hwc->prev_count) == 0)
> + local64_set(&hwc->prev_count, read_counter(hwc->idx));
> +}
> +
> +/*
> + * pmu->add: add the event to PMU.
> + */
> +static int riscv_pmu_add(struct perf_event *event, int flags)
> +{
> + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> + struct hw_perf_event *hwc = &event->hw;
> +
> + if (cpuc->n_events == riscv_pmu->num_counters)
> + return -ENOSPC;
> +
> + /*
> + * We don't have general conunters, so no binding-event-to-counter
> + * process here.
> + *
> + * Indexing using hwc->config generally not works, since config may
> + * contain extra information, but here the only info we have in
> + * hwc->config is the event index.
> + */
> + hwc->idx = hwc->config;
> + cpuc->events[hwc->idx] = event;
> + cpuc->n_events++;
> +
> + hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
> +
> + if (flags & PERF_EF_START)
> + riscv_pmu_start(event, PERF_EF_RELOAD);
> +
> + return 0;
> +}
> +
> +/*
> + * pmu->del: delete the event from PMU.
> + */
> +static void riscv_pmu_del(struct perf_event *event, int flags)
> +{
> + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> + struct hw_perf_event *hwc = &event->hw;
> +
> + cpuc->events[hwc->idx] = NULL;
> + cpuc->n_events--;
> + riscv_pmu_stop(event, PERF_EF_UPDATE);
> + perf_event_update_userpage(event);
> +}
> +
> +/*
> + * Interrupt
> + */
> +
> +static DEFINE_MUTEX(pmc_reserve_mutex);
> +typedef void (*perf_irq_t)(void *riscv_perf_irq);
> +perf_irq_t perf_irq;
> +
> +void riscv_pmu_handle_irq(void *riscv_perf_irq)
> +{
> +}
> +
> +static perf_irq_t reserve_pmc_hardware(void)
> +{
> + perf_irq_t old;
> +
> + mutex_lock(&pmc_reserve_mutex);
> + old = perf_irq;
> + perf_irq = &riscv_pmu_handle_irq;
> + mutex_unlock(&pmc_reserve_mutex);
> +
> + return old;
> +}
> +
> +void release_pmc_hardware(void)
> +{
> + mutex_lock(&pmc_reserve_mutex);
> + perf_irq = NULL;
> + mutex_unlock(&pmc_reserve_mutex);
> +}
> +
> +/*
> + * Event Initialization
> + */
> +
> +static atomic_t riscv_active_events;
> +
> +static void riscv_event_destroy(struct perf_event *event)
> +{
> + if (atomic_dec_return(&riscv_active_events) == 0)
> + release_pmc_hardware();
> +}
> +
> +static int riscv_event_init(struct perf_event *event)
> +{
> + struct perf_event_attr *attr = &event->attr;
> + struct hw_perf_event *hwc = &event->hw;
> + perf_irq_t old_irq_handler = NULL;
> + int code;
> +
> + if (atomic_inc_return(&riscv_active_events) == 1)
> + old_irq_handler = reserve_pmc_hardware();
> +
> + if (old_irq_handler) {
> + pr_warn("PMC hardware busy (reserved by oprofile)\n");
> + atomic_dec(&riscv_active_events);
> + return -EBUSY;
> + }
> +
> + switch (event->attr.type) {
> + case PERF_TYPE_HARDWARE:
> + code = riscv_pmu->map_hw_event(attr->config);
> + break;
> + case PERF_TYPE_HW_CACHE:
> + code = riscv_pmu->map_cache_event(attr->config);
> + break;
> + case PERF_TYPE_RAW:
> + return -EOPNOTSUPP;
> + default:
> + return -ENOENT;
> + }
> +
> + event->destroy = riscv_event_destroy;
> + if (code < 0) {
> + event->destroy(event);
> + return code;
> + }
> +
> + /*
> + * idx is set to -1 because the index of a general event should not be
> + * decided until binding to some counter in pmu->add().
> + *
> + * But since we don't have such support, later in pmu->add(), we just
> + * use hwc->config as the index instead.
> + */
> + hwc->config = code;
> + hwc->idx = -1;
> +
> + return 0;
> +}
> +
> +/*
> + * Initialization
> + */
> +
> +static struct pmu min_pmu = {
> + .name = "riscv-base",
> + .event_init = riscv_event_init,
> + .add = riscv_pmu_add,
> + .del = riscv_pmu_del,
> + .start = riscv_pmu_start,
> + .stop = riscv_pmu_stop,
> + .read = riscv_pmu_read,
> +};
> +
> +static const struct riscv_pmu riscv_base_pmu = {
> + .pmu = &min_pmu,
> + .max_events = ARRAY_SIZE(riscv_hw_event_map),
> + .map_hw_event = riscv_map_hw_event,
> + .hw_events = riscv_hw_event_map,
> + .map_cache_event = riscv_map_cache_event,
> + .cache_events = &riscv_cache_event_map,
> + .counter_width = 63,
> + .num_counters = RISCV_BASE_COUNTERS + 0,
> +};
> +
> +struct pmu * __weak __init riscv_init_platform_pmu(void)
> +{
> + riscv_pmu = &riscv_base_pmu;
> + return riscv_pmu->pmu;
> +}

Rather than relying on a weak symbol that gets overridden by other PMU types,
this should look through the device tree for a compatible PMU (in the case of
just the base PMU it could be any RISC-V hart) and install a PMU handler for
it. There'd probably be some sort of priority scheme here, like there are for
other driver subsystems, where we'd pick the best PMU driver that's compatible
with the PMUs on every hart.

> +
> +int __init init_hw_perf_events(void)
> +{
> + struct pmu *pmu = riscv_init_platform_pmu();
> +
> + perf_irq = NULL;
> + perf_pmu_register(pmu, "cpu", PERF_TYPE_RAW);
> + return 0;
> +}
> +arch_initcall(init_hw_perf_events);

Since we only have a single PMU type right now this isn't critical to handle
right away, but we will have to refactor this before adding another PMU.

2018-04-09 07:11:09

by Alan Kao

[permalink] [raw]
Subject: Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

On Thu, Apr 05, 2018 at 09:47:50AM -0700, Palmer Dabbelt wrote:
> On Mon, 26 Mar 2018 00:57:54 PDT (-0700), [email protected] wrote:
> >This patch provide a basic PMU, riscv_base_pmu, which supports two
> >general hardware event, instructions and cycles. Furthermore, this
> >PMU serves as a reference implementation to ease the portings in
> >the future.
> >
> >riscv_base_pmu should be able to run on any RISC-V machine that
> >conforms to the Priv-Spec. Note that the latest qemu model hasn't
> >fully support a proper behavior of Priv-Spec 1.10 yet, but work
> >around should be easy with very small fixes. Please check
> >https://github.com/riscv/riscv-qemu/pull/115 for future updates.
> >
> >Cc: Nick Hu <[email protected]>
> >Cc: Greentime Hu <[email protected]>
> >Signed-off-by: Alan Kao <[email protected]>
>
> We should really be able to detect PMU types at runtime (via a device tree
> entry) rather than requiring that a single PMU is built in to the kernel.
> This will require a handful of modifications to how this patch works, which
> I'll try to list below.

> >+menu "PMU type"
> >+ depends on PERF_EVENTS
> >+
> >+config RISCV_BASE_PMU
> >+ bool "Base Performance Monitoring Unit"
> >+ def_bool y
> >+ help
> >+ A base PMU that serves as a reference implementation and has limited
> >+ feature of perf.
> >+
> >+endmenu
> >+
>
> Rather than a menu where a single PMU can be selected, there should be
> options to enable or disable support for each PMU type -- this is just like
> how all our other drivers work.
>

I see. Sure. The descriptions and implementation will be refined in v3.

> >+struct pmu * __weak __init riscv_init_platform_pmu(void)
> >+{
> >+ riscv_pmu = &riscv_base_pmu;
> >+ return riscv_pmu->pmu;
> >+}
>
> Rather than relying on a weak symbol that gets overridden by other PMU
> types, this should look through the device tree for a compatible PMU (in the
> case of just the base PMU it could be any RISC-V hart) and install a PMU
> handler for it. There'd probably be some sort of priority scheme here, like
> there are for other driver subsystems, where we'd pick the best PMU driver
> that's compatible with the PMUs on every hart.
>
> >+
> >+int __init init_hw_perf_events(void)
> >+{
> >+ struct pmu *pmu = riscv_init_platform_pmu();
> >+
> >+ perf_irq = NULL;
> >+ perf_pmu_register(pmu, "cpu", PERF_TYPE_RAW);
> >+ return 0;
> >+}
> >+arch_initcall(init_hw_perf_events);
>
> Since we only have a single PMU type right now this isn't critical to handle
> right away, but we will have to refactor this before adding another PMU.

I see. My rough plan is to do the device tree parsing here, and if no specific
PMU string is found then just register the base PMU proposed in this patch.
How about this idea?

Thanks.

2018-04-09 21:07:28

by Palmer Dabbelt

[permalink] [raw]
Subject: Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

On Mon, 09 Apr 2018 00:07:11 PDT (-0700), [email protected] wrote:
> On Thu, Apr 05, 2018 at 09:47:50AM -0700, Palmer Dabbelt wrote:
>> On Mon, 26 Mar 2018 00:57:54 PDT (-0700), [email protected] wrote:
>> >This patch provide a basic PMU, riscv_base_pmu, which supports two
>> >general hardware event, instructions and cycles. Furthermore, this
>> >PMU serves as a reference implementation to ease the portings in
>> >the future.
>> >
>> >riscv_base_pmu should be able to run on any RISC-V machine that
>> >conforms to the Priv-Spec. Note that the latest qemu model hasn't
>> >fully support a proper behavior of Priv-Spec 1.10 yet, but work
>> >around should be easy with very small fixes. Please check
>> >https://github.com/riscv/riscv-qemu/pull/115 for future updates.
>> >
>> >Cc: Nick Hu <[email protected]>
>> >Cc: Greentime Hu <[email protected]>
>> >Signed-off-by: Alan Kao <[email protected]>
>>
>> We should really be able to detect PMU types at runtime (via a device tree
>> entry) rather than requiring that a single PMU is built in to the kernel.
>> This will require a handful of modifications to how this patch works, which
>> I'll try to list below.
>
>> >+menu "PMU type"
>> >+ depends on PERF_EVENTS
>> >+
>> >+config RISCV_BASE_PMU
>> >+ bool "Base Performance Monitoring Unit"
>> >+ def_bool y
>> >+ help
>> >+ A base PMU that serves as a reference implementation and has limited
>> >+ feature of perf.
>> >+
>> >+endmenu
>> >+
>>
>> Rather than a menu where a single PMU can be selected, there should be
>> options to enable or disable support for each PMU type -- this is just like
>> how all our other drivers work.
>>
>
> I see. Sure. The descriptions and implementation will be refined in v3.
>
>> >+struct pmu * __weak __init riscv_init_platform_pmu(void)
>> >+{
>> >+ riscv_pmu = &riscv_base_pmu;
>> >+ return riscv_pmu->pmu;
>> >+}
>>
>> Rather than relying on a weak symbol that gets overridden by other PMU
>> types, this should look through the device tree for a compatible PMU (in the
>> case of just the base PMU it could be any RISC-V hart) and install a PMU
>> handler for it. There'd probably be some sort of priority scheme here, like
>> there are for other driver subsystems, where we'd pick the best PMU driver
>> that's compatible with the PMUs on every hart.
>>
>> >+
>> >+int __init init_hw_perf_events(void)
>> >+{
>> >+ struct pmu *pmu = riscv_init_platform_pmu();
>> >+
>> >+ perf_irq = NULL;
>> >+ perf_pmu_register(pmu, "cpu", PERF_TYPE_RAW);
>> >+ return 0;
>> >+}
>> >+arch_initcall(init_hw_perf_events);
>>
>> Since we only have a single PMU type right now this isn't critical to handle
>> right away, but we will have to refactor this before adding another PMU.
>
> I see. My rough plan is to do the device tree parsing here, and if no specific
> PMU string is found then just register the base PMU proposed in this patch.
> How about this idea?

Sounds good. We know the generic PMU will work on all RISC-V harts, so there's
no need to add an explicit device tree entry for it. Then we can figure out
how to add device tree entries for custom performance monitors later :)

2018-04-10 18:19:41

by Alex Solomatnikov

[permalink] [raw]
Subject: Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

Alan,

I merged SBI emulation for perf counters and config:
https://github.com/riscv/riscv-pk/pull/98

You should be able to write these CSRs.

Thanks,
Alex

On Mon, Apr 9, 2018 at 12:07 AM, Alan Kao <[email protected]> wrote:
> On Thu, Apr 05, 2018 at 09:47:50AM -0700, Palmer Dabbelt wrote:
>> On Mon, 26 Mar 2018 00:57:54 PDT (-0700), [email protected] wrote:
>> >This patch provide a basic PMU, riscv_base_pmu, which supports two
>> >general hardware event, instructions and cycles. Furthermore, this
>> >PMU serves as a reference implementation to ease the portings in
>> >the future.
>> >
>> >riscv_base_pmu should be able to run on any RISC-V machine that
>> >conforms to the Priv-Spec. Note that the latest qemu model hasn't
>> >fully support a proper behavior of Priv-Spec 1.10 yet, but work
>> >around should be easy with very small fixes. Please check
>> >https://github.com/riscv/riscv-qemu/pull/115 for future updates.
>> >
>> >Cc: Nick Hu <[email protected]>
>> >Cc: Greentime Hu <[email protected]>
>> >Signed-off-by: Alan Kao <[email protected]>
>>
>> We should really be able to detect PMU types at runtime (via a device tree
>> entry) rather than requiring that a single PMU is built in to the kernel.
>> This will require a handful of modifications to how this patch works, which
>> I'll try to list below.
>
>> >+menu "PMU type"
>> >+ depends on PERF_EVENTS
>> >+
>> >+config RISCV_BASE_PMU
>> >+ bool "Base Performance Monitoring Unit"
>> >+ def_bool y
>> >+ help
>> >+ A base PMU that serves as a reference implementation and has limited
>> >+ feature of perf.
>> >+
>> >+endmenu
>> >+
>>
>> Rather than a menu where a single PMU can be selected, there should be
>> options to enable or disable support for each PMU type -- this is just like
>> how all our other drivers work.
>>
>
> I see. Sure. The descriptions and implementation will be refined in v3.
>
>> >+struct pmu * __weak __init riscv_init_platform_pmu(void)
>> >+{
>> >+ riscv_pmu = &riscv_base_pmu;
>> >+ return riscv_pmu->pmu;
>> >+}
>>
>> Rather than relying on a weak symbol that gets overridden by other PMU
>> types, this should look through the device tree for a compatible PMU (in the
>> case of just the base PMU it could be any RISC-V hart) and install a PMU
>> handler for it. There'd probably be some sort of priority scheme here, like
>> there are for other driver subsystems, where we'd pick the best PMU driver
>> that's compatible with the PMUs on every hart.
>>
>> >+
>> >+int __init init_hw_perf_events(void)
>> >+{
>> >+ struct pmu *pmu = riscv_init_platform_pmu();
>> >+
>> >+ perf_irq = NULL;
>> >+ perf_pmu_register(pmu, "cpu", PERF_TYPE_RAW);
>> >+ return 0;
>> >+}
>> >+arch_initcall(init_hw_perf_events);
>>
>> Since we only have a single PMU type right now this isn't critical to handle
>> right away, but we will have to refactor this before adding another PMU.
>
> I see. My rough plan is to do the device tree parsing here, and if no specific
> PMU string is found then just register the base PMU proposed in this patch.
> How about this idea?
>
> Thanks.