2018-04-19 23:30:32

by Alan Kao

Subject: [PATCH v5 0/2] perf: riscv: Preliminary Perf Event Support on RISC-V

This implements the baseline PMU for RISC-V platforms.

To ease future PMU porting efforts, a guide is also written, containing
perf concepts, arch porting practices and some hints.

Changes in v5:
- Fix patch errors from checkpatch.pl.

Changes in v4:
- Fix several compilation errors. Sorry for that.
- Raise a warning in the write_counter body.

Changes in v3:
- Fix typos in the document.
- Change the initialization routine from statically assigning the PMU to
a device-tree-based method, and set the default to the PMU proposed in
this patch.

Changes in v2:
- Fix the bug reported by Alex, which was caused by insufficient
initialization. Check https://lkml.org/lkml/2018/3/31/251 for the
discussion.

Alan Kao (2):
perf: riscv: preliminary RISC-V support
perf: riscv: Add Document for Future Porting Guide

Documentation/riscv/pmu.txt | 249 ++++++++++++++
arch/riscv/Kconfig | 13 +
arch/riscv/include/asm/perf_event.h | 79 ++++-
arch/riscv/kernel/Makefile | 1 +
arch/riscv/kernel/perf_event.c | 485 ++++++++++++++++++++++++++++
5 files changed, 823 insertions(+), 4 deletions(-)
create mode 100644 Documentation/riscv/pmu.txt
create mode 100644 arch/riscv/kernel/perf_event.c

--
2.17.0



2018-04-19 23:30:05

by Alan Kao

Subject: [PATCH v5 2/2] perf: riscv: Add Document for Future Porting Guide

Reviewed-by: Alex Solomatnikov <[email protected]>
Cc: Nick Hu <[email protected]>
Cc: Greentime Hu <[email protected]>
Signed-off-by: Alan Kao <[email protected]>
---
Documentation/riscv/pmu.txt | 249 ++++++++++++++++++++++++++++++++++++
1 file changed, 249 insertions(+)
create mode 100644 Documentation/riscv/pmu.txt

diff --git a/Documentation/riscv/pmu.txt b/Documentation/riscv/pmu.txt
new file mode 100644
index 000000000000..b29f03a6d82f
--- /dev/null
+++ b/Documentation/riscv/pmu.txt
@@ -0,0 +1,249 @@
+Supporting PMUs on RISC-V platforms
+==========================================
+Alan Kao <[email protected]>, Mar 2018
+
+Introduction
+------------
+
+As of this writing, perf_event-related features mentioned in The RISC-V ISA
+Privileged Version 1.10 are as follows:
+(please check the manual for more details)
+
+* [m|s]counteren
+* mcycle[h], cycle[h]
+* minstret[h], instret[h]
+* mhpmeventx, mhpmcounterx[h]
+
+With only this function set, porting perf would require a lot of work, due to
+the lack of the following general architectural performance monitoring features:
+
+* Enabling/Disabling counters
+ Counters are just free-running all the time in our case.
+* Interrupt caused by counter overflow
+ No such feature in the spec.
+* Interrupt indicator
+ It is not possible to have many interrupt ports for all counters, so an
+ interrupt indicator is required for software to tell which counter has
+ just overflowed.
+* Writing to counters
+ There will be an SBI to support this since the kernel cannot modify the
+ counters [1]. Alternatively, some vendors are considering implementing a
+ hardware extension for M-S-U model machines to write counters directly.
+
+This document aims to provide developers a quick guide on supporting their
+PMUs in the kernel. The following sections briefly explain perf's mechanism
+and the work that needs to be done.
+
+You may check previous discussions here [1][2]. Also, it might be helpful
+to check the appendix for related kernel structures.
+
+
+1. Initialization
+-----------------
+
+*riscv_pmu* is a global pointer of type *struct riscv_pmu*, which contains
+various methods according to perf's internal convention and PMU-specific
+parameters. One should declare such an instance to represent the PMU. By
+default, *riscv_pmu* points to a constant structure *riscv_base_pmu*, which
+has very basic support for a baseline QEMU model.
+
+Developers can then either assign that instance's pointer to *riscv_pmu* so
+that the minimal and already-implemented logic can be leveraged, or provide
+their own *riscv_init_platform_pmu* implementation.
+
+In other words, existing sources of *riscv_base_pmu* merely provide a
+reference implementation. Developers can flexibly decide how many parts they
+can leverage, and in the most extreme case, they can customize every function
+according to their needs.
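+
+With the device-tree-based lookup in the accompanying patch, hooking up a
+vendor PMU could look roughly like the following sketch (the vendor_* names
+and the compatible string are hypothetical):
+
+static const struct riscv_pmu vendor_pmu = {
+ .pmu = &vendor_min_pmu, /* a struct pmu filled with the vendor's callbacks */
+ .max_events = ARRAY_SIZE(vendor_hw_event_map),
+ .map_hw_event = vendor_map_hw_event,
+ .hw_events = vendor_hw_event_map,
+ .num_counters = RISCV_BASE_COUNTERS + 4,
+ .counter_width = 63,
+ .irq = -1, /* no overflow interrupt available yet */
+};
+
+/* ... plus an additional entry in riscv_pmu_of_ids: */
+ {.compatible = "vendor,custom-pmu", .data = &vendor_pmu},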
+
+
+2. Event Initialization
+-----------------------
+
+When a user launches a perf command to monitor some events, it is first
+interpreted by the userspace perf tool into multiple *perf_event_open*
+system calls, and then each of them ends up calling the *event_init* member
+function that was assigned in the previous step. In *riscv_base_pmu*'s
+case, it is *riscv_event_init*.
+
+The main purpose of this function is to translate the event provided by the
+user into a bitmap, so that HW-related control registers or counters can be
+manipulated directly. The translation is based on the mappings and methods
+provided in *riscv_pmu*.
+
+Note that some features can be configured at this stage as well:
+
+(1) interrupt setting, which is stated in the next section;
+(2) privilege level setting (user space only, kernel space only, both);
+(3) destructor setting. Normally it is sufficient to apply *riscv_destroy_event*;
+(4) tweaks for non-sampling events, which will be utilized by functions such as
+*perf_adjust_period*, usually something like the following:
+
+if (!is_sampling_event(event)) {
+ hwc->sample_period = x86_pmu.max_period;
+ hwc->last_period = hwc->sample_period;
+ local64_set(&hwc->period_left, hwc->sample_period);
+}
+
+In the case of *riscv_base_pmu*, only (3) is provided for now.
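+
+A condensed sketch of *riscv_event_init* from the accompanying patch (with
+error handling and PMC reservation omitted) ties these pieces together:
+
+switch (event->attr.type) {
+case PERF_TYPE_HARDWARE:
+ code = riscv_pmu->map_hw_event(attr->config);
+ break;
+case PERF_TYPE_HW_CACHE:
+ code = riscv_pmu->map_cache_event(attr->config);
+ break;
+default:
+ return -ENOENT;
+}
+
+event->destroy = riscv_event_destroy; /* feature (3) above */
+hwc->config = code; /* consumed later by pmu->add() */
+hwc->idx = -1;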
+
+
+3. Interrupt
+------------
+
+3.1. Interrupt Initialization
+
+This often occurs at the beginning of the *event_init* method. In common
+practice, this should be a code segment like
+
+int x86_reserve_hardware(void)
+{
+ int err = 0;
+
+ if (!atomic_inc_not_zero(&pmc_refcount)) {
+ mutex_lock(&pmc_reserve_mutex);
+ if (atomic_read(&pmc_refcount) == 0) {
+ if (!reserve_pmc_hardware())
+ err = -EBUSY;
+ else
+ reserve_ds_buffers();
+ }
+ if (!err)
+ atomic_inc(&pmc_refcount);
+ mutex_unlock(&pmc_reserve_mutex);
+ }
+
+ return err;
+}
+
+The magic is in *reserve_pmc_hardware*, which usually does atomic
+operations to make the implemented IRQ accessible from some global function
+pointer. *release_pmc_hardware* serves the opposite purpose, and it is used
+in the event destructors mentioned in the previous section.
+
+(Note: Judging from the implementations in all the architectures, the
+*reserve/release* pair always deals with IRQ settings, so the name
+*pmc_hardware* is somewhat misleading. It does NOT deal with the binding
+between an event and a physical counter, which will be introduced in the
+next section.)
+
+3.2. IRQ Structure
+
+Basically, an IRQ handler runs the following pseudo code:
+
+for each hardware counter that triggered this overflow
+
+ get the event of this counter
+
+ // following two steps are defined as *read()*,
+ // check the section Reading/Writing Counters for details.
+ count the delta value since previous interrupt
+ update the event->count (# event occurs) by adding delta, and
+ event->hw.period_left by subtracting delta
+
+ if the event overflows
+ sample data
+ set the counter appropriately for the next overflow
+
+ if the event overflows again
+ too frequently, throttle this event
+ fi
+ fi
+
+end for
+
+However, as of this writing, none of the RISC-V implementations have designed
+an interrupt for perf, so the details are left to be completed in the future.
+
+4. Reading/Writing Counters
+---------------------------
+
+They seem symmetric, but perf treats them quite differently. For reading,
+there is a *read* interface in *struct pmu*, but it serves more than just
+reading. Depending on the context, the *read* function not only reads the
+content of the counter (event->count), but also updates the period left until
+the next interrupt (event->hw.period_left).
+
+The perf core, however, never needs to write counters directly. Writing
+counters is hidden behind two abstractions: 1) *pmu->start*, which literally
+starts counting, so one has to set the counter to a suitable value for the
+next interrupt; 2) the IRQ handler, which should set the counter to the same
+reasonable value.
+
+Reading is not a problem on RISC-V, but writing would need some effort, since
+counters cannot be written from S-mode.
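+
+For the reading side, the delta update in *riscv_pmu_read* from the
+accompanying patch (declarations omitted) looks like this:
+
+do {
+ prev_raw_count = local64_read(&hwc->prev_count);
+ new_raw_count = read_counter(idx);
+ oldval = local64_cmpxchg(&hwc->prev_count, prev_raw_count,
+ new_raw_count);
+} while (oldval != prev_raw_count);
+
+delta = (new_raw_count - prev_raw_count) &
+ ((1ULL << riscv_pmu->counter_width) - 1);
+local64_add(delta, &event->count);
+/* local64_sub(delta, &hwc->period_left) would go here once an
+ * overflow interrupt exists. */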
+
+
+5. add()/del()/start()/stop()
+-----------------------------
+
+Basic idea: add()/del() adds/deletes events to/from a PMU, and start()/stop()
+starts/stops the counter of some event in the PMU. All of them take the same
+arguments: *struct perf_event *event* and *int flag*.
+
+Consider perf as a state machine; these functions serve as the transitions
+between those states.
+Three states (event->hw.state) are defined:
+
+* PERF_HES_STOPPED: the counter is stopped
+* PERF_HES_UPTODATE: the event->count is up-to-date
+* PERF_HES_ARCH: arch-dependent usage ... we don't need this for now
+
+A normal flow of these state transitions is as follows:
+
+* A user launches a perf event, resulting in calling to *event_init*.
+* When being context-switched in, *add* is called by the perf core, with a flag
+ PERF_EF_START, which means that the event should be started after it is added.
+ At this stage, a general event is bound to a physical counter, if any.
+ The state changes to PERF_HES_STOPPED and PERF_HES_UPTODATE, because it is now
+ stopped, and the (software) event count does not need updating.
+** *start* is then called, and the counter is enabled.
+ With flag PERF_EF_RELOAD, it writes an appropriate value to the counter (check
+ the previous section for details).
+ Nothing is written if the flag does not contain PERF_EF_RELOAD.
+ The state is now reset to none, because it is neither stopped nor updated
+ (counting has already started).
+* When being context-switched out, *del* is called. It then checks out all the
+ events in the PMU and calls *stop* to update their counts.
+** *stop* is called by *del*
+ and the perf core with flag PERF_EF_UPDATE, and it often shares the same
+ subroutine as *read* with the same logic.
+ The state changes to PERF_HES_STOPPED and PERF_HES_UPTODATE, again.
+
+** Life cycle of these two pairs: *add* and *del* are called repeatedly as
+ tasks switch in and out; *start* and *stop* are also called when the perf core
+ needs a quick stop-and-start, for instance, when the interrupt period is being
+ adjusted.
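+
+A condensed sketch of *riscv_pmu_add* from the accompanying patch shows how
+the PERF_EF_START flag and the initial state fit together (declarations
+omitted):
+
+if (cpuc->n_events == riscv_pmu->num_counters)
+ return -ENOSPC;
+
+hwc->idx = hwc->config; /* no general counters: config doubles as index */
+cpuc->events[hwc->idx] = event;
+cpuc->n_events++;
+
+hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+
+if (flags & PERF_EF_START)
+ riscv_pmu->pmu->start(event, PERF_EF_RELOAD);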
+
+The current implementation is sufficient for now and can easily be extended
+with more features in the future.
+
+A. Related Structures
+---------------------
+
+* struct pmu: include/linux/perf_event.h
+* struct riscv_pmu: arch/riscv/include/asm/perf_event.h
+
+ Both structures are designed to be read-only.
+
+ *struct pmu* defines some function pointer interfaces, and most of them take
+*struct perf_event* as a main argument, dealing with perf events according to
+perf's internal state machine (check kernel/events/core.c for details).
+
+ *struct riscv_pmu* defines PMU-specific parameters. The naming follows the
+convention of all other architectures.
+
+* struct perf_event: include/linux/perf_event.h
+* struct hw_perf_event
+
+ The generic structure that represents perf events, and the hardware-related
+details.
+
+* struct riscv_hw_events: arch/riscv/include/asm/perf_event.h
+
+ The structure that holds the status of events. It has two fixed members:
+the number of events and the array of the events.
+
+References
+----------
+
+[1] https://github.com/riscv/riscv-linux/pull/124
+[2] https://groups.google.com/a/groups.riscv.org/forum/#!topic/sw-dev/f19TmCNP6yA
--
2.17.0


2018-04-19 23:30:31

by Alan Kao

Subject: [PATCH v5 1/2] perf: riscv: preliminary RISC-V support

This patch provides a basic PMU, riscv_base_pmu, which supports two
general hardware events, instructions and cycles. Furthermore, this
PMU serves as a reference implementation to ease future porting
efforts.

riscv_base_pmu should be able to run on any RISC-V machine that
conforms to the Priv-Spec. Note that the latest QEMU model doesn't
fully support the proper Priv-Spec 1.10 behavior yet, but working
around that should be easy with very small fixes. Please check
https://github.com/riscv/riscv-qemu/pull/115 for future updates.

Cc: Nick Hu <[email protected]>
Cc: Greentime Hu <[email protected]>
Signed-off-by: Alan Kao <[email protected]>
---
arch/riscv/Kconfig | 13 +
arch/riscv/include/asm/perf_event.h | 79 ++++-
arch/riscv/kernel/Makefile | 1 +
arch/riscv/kernel/perf_event.c | 485 ++++++++++++++++++++++++++++
4 files changed, 574 insertions(+), 4 deletions(-)
create mode 100644 arch/riscv/kernel/perf_event.c

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index c22ebe08e902..90d9c8e50377 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -203,6 +203,19 @@ config RISCV_ISA_C
config RISCV_ISA_A
def_bool y

+menu "supported PMU type"
+ depends on PERF_EVENTS
+
+config RISCV_BASE_PMU
+ bool "Base Performance Monitoring Unit"
+ default y
+ help
+ A base PMU that serves as a reference implementation and has limited
+ perf features. It can run on any RISC-V machine, so it serves as the
+ fallback, but this option can also be disabled to reduce kernel size.
+
+endmenu
+
endmenu

menu "Kernel type"
diff --git a/arch/riscv/include/asm/perf_event.h b/arch/riscv/include/asm/perf_event.h
index e13d2ff29e83..0e638a0c3feb 100644
--- a/arch/riscv/include/asm/perf_event.h
+++ b/arch/riscv/include/asm/perf_event.h
@@ -1,13 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2018 SiFive
+ * Copyright (C) 2018 Andes Technology Corporation
*
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public Licence
- * as published by the Free Software Foundation; either version
- * 2 of the Licence, or (at your option) any later version.
*/

#ifndef _ASM_RISCV_PERF_EVENT_H
#define _ASM_RISCV_PERF_EVENT_H

+#include <linux/perf_event.h>
+#include <linux/ptrace.h>
+
+#define RISCV_BASE_COUNTERS 2
+
+/*
+ * The RISCV_MAX_COUNTERS parameter should be specified.
+ */
+
+#ifdef CONFIG_RISCV_BASE_PMU
+#define RISCV_MAX_COUNTERS 2
+#endif
+
+#ifndef RISCV_MAX_COUNTERS
+#error "Please provide a valid RISCV_MAX_COUNTERS for the PMU."
+#endif
+
+/*
+ * These are the indexes of bits in the counteren register *minus* 1,
+ * except for cycle. It would be more coherent if they could be mapped
+ * directly to the counteren bit definitions, but there is a *time*
+ * register at counteren[1]. Per-cpu structures are a scarce resource here.
+ *
+ * According to the spec, an implementation can support counters up to
+ * mhpmcounter31, but many high-end processors have at most 6 general
+ * PMCs, so we only give definitions up to MHPMCOUNTER8 here.
+ */
+#define RISCV_PMU_CYCLE 0
+#define RISCV_PMU_INSTRET 1
+#define RISCV_PMU_MHPMCOUNTER3 2
+#define RISCV_PMU_MHPMCOUNTER4 3
+#define RISCV_PMU_MHPMCOUNTER5 4
+#define RISCV_PMU_MHPMCOUNTER6 5
+#define RISCV_PMU_MHPMCOUNTER7 6
+#define RISCV_PMU_MHPMCOUNTER8 7
+
+#define RISCV_OP_UNSUPP (-EOPNOTSUPP)
+
+struct cpu_hw_events {
+ /* # of currently enabled events */
+ int n_events;
+ /* currently enabled events */
+ struct perf_event *events[RISCV_MAX_COUNTERS];
+ /* vendor-defined PMU data */
+ void *platform;
+};
+
+struct riscv_pmu {
+ struct pmu *pmu;
+
+ /* generic hw/cache events table */
+ const int *hw_events;
+ const int (*cache_events)[PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX];
+ /* method used to map hw/cache events */
+ int (*map_hw_event)(u64 config);
+ int (*map_cache_event)(u64 config);
+
+ /* max generic hw events in map */
+ int max_events;
+ /* total number of counters: 2 (base) + x (general) */
+ int num_counters;
+ /* the width of the counter */
+ int counter_width;
+
+ /* vendor-defined PMU features */
+ void *platform;
+
+ irqreturn_t (*handle_irq)(int irq_num, void *dev);
+ int irq;
+};
+
#endif /* _ASM_RISCV_PERF_EVENT_H */
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index ffa439d4a364..f50d19816757 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -39,5 +39,6 @@ obj-$(CONFIG_MODULE_SECTIONS) += module-sections.o
obj-$(CONFIG_FUNCTION_TRACER) += mcount.o
obj-$(CONFIG_DYNAMIC_FTRACE) += mcount-dyn.o
obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
+obj-$(CONFIG_PERF_EVENTS) += perf_event.o

clean:
diff --git a/arch/riscv/kernel/perf_event.c b/arch/riscv/kernel/perf_event.c
new file mode 100644
index 000000000000..b0e10c4e9f77
--- /dev/null
+++ b/arch/riscv/kernel/perf_event.c
@@ -0,0 +1,485 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2008 Thomas Gleixner <[email protected]>
+ * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
+ * Copyright (C) 2009 Jaswinder Singh Rajput
+ * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
+ * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra
+ * Copyright (C) 2009 Intel Corporation, <[email protected]>
+ * Copyright (C) 2009 Google, Inc., Stephane Eranian
+ * Copyright 2014 Tilera Corporation. All Rights Reserved.
+ * Copyright (C) 2018 Andes Technology Corporation
+ *
+ * Perf_events support for RISC-V platforms.
+ *
+ * Since the spec (as of now, Priv-Spec 1.10) does not provide enough
+ * functionality for perf events to fully work, this file provides
+ * the very basic framework only.
+ *
+ * For platform porting, please check Documentation/riscv/pmu.txt.
+ *
+ * The copyright lines above include those of x86 and tile.
+ */
+
+#include <linux/kprobes.h>
+#include <linux/kernel.h>
+#include <linux/kdebug.h>
+#include <linux/mutex.h>
+#include <linux/bitmap.h>
+#include <linux/irq.h>
+#include <linux/interrupt.h>
+#include <linux/perf_event.h>
+#include <linux/atomic.h>
+#include <linux/of.h>
+#include <asm/perf_event.h>
+
+static const struct riscv_pmu *riscv_pmu __read_mostly;
+static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events);
+
+/*
+ * Hardware & cache maps and their methods
+ */
+
+static const int riscv_hw_event_map[] = {
+ [PERF_COUNT_HW_CPU_CYCLES] = RISCV_PMU_CYCLE,
+ [PERF_COUNT_HW_INSTRUCTIONS] = RISCV_PMU_INSTRET,
+ [PERF_COUNT_HW_CACHE_REFERENCES] = RISCV_OP_UNSUPP,
+ [PERF_COUNT_HW_CACHE_MISSES] = RISCV_OP_UNSUPP,
+ [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = RISCV_OP_UNSUPP,
+ [PERF_COUNT_HW_BRANCH_MISSES] = RISCV_OP_UNSUPP,
+ [PERF_COUNT_HW_BUS_CYCLES] = RISCV_OP_UNSUPP,
+};
+
+#define C(x) PERF_COUNT_HW_CACHE_##x
+static const int riscv_cache_event_map[PERF_COUNT_HW_CACHE_MAX]
+[PERF_COUNT_HW_CACHE_OP_MAX]
+[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
+ [C(L1D)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ },
+ [C(L1I)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ },
+ [C(LL)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ },
+ [C(DTLB)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ },
+ [C(ITLB)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ },
+ [C(BPU)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
+ [C(RESULT_MISS)] = RISCV_OP_UNSUPP,
+ },
+ },
+};
+
+static int riscv_map_hw_event(u64 config)
+{
+ if (config >= riscv_pmu->max_events)
+ return -EINVAL;
+
+ return riscv_pmu->hw_events[config];
+}
+
+int riscv_map_cache_decode(u64 config, unsigned int *type,
+ unsigned int *op, unsigned int *result)
+{
+ return -ENOENT;
+}
+
+static int riscv_map_cache_event(u64 config)
+{
+ unsigned int type, op, result;
+ int err = -ENOENT;
+ int code;
+
+ err = riscv_map_cache_decode(config, &type, &op, &result);
+ if (!riscv_pmu->cache_events || err)
+ return err;
+
+ if (type >= PERF_COUNT_HW_CACHE_MAX ||
+ op >= PERF_COUNT_HW_CACHE_OP_MAX ||
+ result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
+ return -EINVAL;
+
+ code = (*riscv_pmu->cache_events)[type][op][result];
+ if (code == RISCV_OP_UNSUPP)
+ return -EINVAL;
+
+ return code;
+}
+
+/*
+ * Low-level functions: reading/writing counters
+ */
+
+static inline u64 read_counter(int idx)
+{
+ u64 val = 0;
+
+ switch (idx) {
+ case RISCV_PMU_CYCLE:
+ val = csr_read(cycle);
+ break;
+ case RISCV_PMU_INSTRET:
+ val = csr_read(instret);
+ break;
+ default:
+ WARN_ON_ONCE(idx < 0 || idx >= RISCV_MAX_COUNTERS);
+ return -EINVAL;
+ }
+
+ return val;
+}
+
+static inline void write_counter(int idx, u64 value)
+{
+ /* currently not supported */
+ WARN_ON_ONCE(1);
+}
+
+/*
+ * pmu->read: read and update the counter
+ *
+ * Other architectures' implementations often have an xxx_perf_event_update
+ * routine, which can return counter values when called in the IRQ, but
+ * returns void when called by the pmu->read method.
+ */
+static void riscv_pmu_read(struct perf_event *event)
+{
+ struct hw_perf_event *hwc = &event->hw;
+ u64 prev_raw_count, new_raw_count;
+ u64 oldval;
+ int idx = hwc->idx;
+ u64 delta;
+
+ do {
+ prev_raw_count = local64_read(&hwc->prev_count);
+ new_raw_count = read_counter(idx);
+
+ oldval = local64_cmpxchg(&hwc->prev_count, prev_raw_count,
+ new_raw_count);
+ } while (oldval != prev_raw_count);
+
+ /*
+ * delta is the value to update the counter we maintain in the kernel.
+ */
+ delta = (new_raw_count - prev_raw_count) &
+ ((1ULL << riscv_pmu->counter_width) - 1);
+ local64_add(delta, &event->count);
+ /*
+ * Something like local64_sub(delta, &hwc->period_left) here is
+ * needed if there is an interrupt for perf.
+ */
+}
+
+/*
+ * State transition functions:
+ *
+ * stop()/start() & add()/del()
+ */
+
+/*
+ * pmu->stop: stop the counter
+ */
+static void riscv_pmu_stop(struct perf_event *event, int flags)
+{
+ struct hw_perf_event *hwc = &event->hw;
+
+ WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
+ hwc->state |= PERF_HES_STOPPED;
+
+ if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
+ riscv_pmu->pmu->read(event);
+ hwc->state |= PERF_HES_UPTODATE;
+ }
+}
+
+/*
+ * pmu->start: start the event.
+ */
+static void riscv_pmu_start(struct perf_event *event, int flags)
+{
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+ return;
+
+ if (flags & PERF_EF_RELOAD) {
+ WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE));
+
+ /*
+ * Set the counter to the period of the next interrupt here,
+ * if there is one.
+ */
+ }
+
+ hwc->state = 0;
+ perf_event_update_userpage(event);
+
+ /*
+ * Since we cannot write to counters, this serves as an initialization
+ * to the delta-mechanism in pmu->read(); otherwise, the delta would be
+ * wrong when pmu->read is called for the first time.
+ */
+ local64_set(&hwc->prev_count, read_counter(hwc->idx));
+}
+
+/*
+ * pmu->add: add the event to PMU.
+ */
+static int riscv_pmu_add(struct perf_event *event, int flags)
+{
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (cpuc->n_events == riscv_pmu->num_counters)
+ return -ENOSPC;
+
+ /*
+ * We don't have general counters, so there is no binding-event-to-counter
+ * process here.
+ *
+ * Indexing by hwc->config generally does not work, since config may
+ * contain extra information. Here, however, the only info we have in
+ * hwc->config is the event index.
+ */
+ hwc->idx = hwc->config;
+ cpuc->events[hwc->idx] = event;
+ cpuc->n_events++;
+
+ hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+
+ if (flags & PERF_EF_START)
+ riscv_pmu->pmu->start(event, PERF_EF_RELOAD);
+
+ return 0;
+}
+
+/*
+ * pmu->del: delete the event from PMU.
+ */
+static void riscv_pmu_del(struct perf_event *event, int flags)
+{
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ struct hw_perf_event *hwc = &event->hw;
+
+ cpuc->events[hwc->idx] = NULL;
+ cpuc->n_events--;
+ riscv_pmu->pmu->stop(event, PERF_EF_UPDATE);
+ perf_event_update_userpage(event);
+}
+
+/*
+ * Interrupt: a skeleton for reference.
+ */
+
+static DEFINE_MUTEX(pmc_reserve_mutex);
+
+irqreturn_t riscv_base_pmu_handle_irq(int irq_num, void *dev)
+{
+ return IRQ_NONE;
+}
+
+static int reserve_pmc_hardware(void)
+{
+ int err = 0;
+
+ mutex_lock(&pmc_reserve_mutex);
+ if (riscv_pmu->irq >= 0 && riscv_pmu->handle_irq) {
+ err = request_irq(riscv_pmu->irq, riscv_pmu->handle_irq,
+ IRQF_PERCPU, "riscv-base-perf", NULL);
+ }
+ mutex_unlock(&pmc_reserve_mutex);
+
+ return err;
+}
+
+void release_pmc_hardware(void)
+{
+ mutex_lock(&pmc_reserve_mutex);
+ if (riscv_pmu->irq >= 0)
+ free_irq(riscv_pmu->irq, NULL);
+ mutex_unlock(&pmc_reserve_mutex);
+}
+
+/*
+ * Event Initialization/Finalization
+ */
+
+static atomic_t riscv_active_events = ATOMIC_INIT(0);
+
+static void riscv_event_destroy(struct perf_event *event)
+{
+ if (atomic_dec_return(&riscv_active_events) == 0)
+ release_pmc_hardware();
+}
+
+static int riscv_event_init(struct perf_event *event)
+{
+ struct perf_event_attr *attr = &event->attr;
+ struct hw_perf_event *hwc = &event->hw;
+ int err;
+ int code;
+
+ if (atomic_inc_return(&riscv_active_events) == 1) {
+ err = reserve_pmc_hardware();
+
+ if (err) {
+ pr_warn("PMC hardware not available\n");
+ atomic_dec(&riscv_active_events);
+ return -EBUSY;
+ }
+ }
+
+ switch (event->attr.type) {
+ case PERF_TYPE_HARDWARE:
+ code = riscv_pmu->map_hw_event(attr->config);
+ break;
+ case PERF_TYPE_HW_CACHE:
+ code = riscv_pmu->map_cache_event(attr->config);
+ break;
+ case PERF_TYPE_RAW:
+ return -EOPNOTSUPP;
+ default:
+ return -ENOENT;
+ }
+
+ event->destroy = riscv_event_destroy;
+ if (code < 0) {
+ event->destroy(event);
+ return code;
+ }
+
+ /*
+ * idx is set to -1 because the index of a general event should not be
+ * decided until binding to some counter in pmu->add().
+ *
+ * But since we don't have such support, later in pmu->add(), we just
+ * use hwc->config as the index instead.
+ */
+ hwc->config = code;
+ hwc->idx = -1;
+
+ return 0;
+}
+
+/*
+ * Initialization
+ */
+
+static struct pmu min_pmu = {
+ .name = "riscv-base",
+ .event_init = riscv_event_init,
+ .add = riscv_pmu_add,
+ .del = riscv_pmu_del,
+ .start = riscv_pmu_start,
+ .stop = riscv_pmu_stop,
+ .read = riscv_pmu_read,
+};
+
+static const struct riscv_pmu riscv_base_pmu = {
+ .pmu = &min_pmu,
+ .max_events = ARRAY_SIZE(riscv_hw_event_map),
+ .map_hw_event = riscv_map_hw_event,
+ .hw_events = riscv_hw_event_map,
+ .map_cache_event = riscv_map_cache_event,
+ .cache_events = &riscv_cache_event_map,
+ .counter_width = 63,
+ .num_counters = RISCV_BASE_COUNTERS + 0,
+ .handle_irq = &riscv_base_pmu_handle_irq,
+
+ /* This means this PMU has no IRQ. */
+ .irq = -1,
+};
+
+static const struct of_device_id riscv_pmu_of_ids[] = {
+ {.compatible = "riscv,base-pmu", .data = &riscv_base_pmu},
+ { /* sentinel value */ }
+};
+
+int __init init_hw_perf_events(void)
+{
+ struct device_node *node = of_find_node_by_type(NULL, "pmu");
+ const struct of_device_id *of_id;
+
+ riscv_pmu = &riscv_base_pmu;
+
+ if (node) {
+ of_id = of_match_node(riscv_pmu_of_ids, node);
+
+ if (of_id)
+ riscv_pmu = of_id->data;
+ }
+
+ perf_pmu_register(riscv_pmu->pmu, "cpu", PERF_TYPE_RAW);
+ return 0;
+}
+arch_initcall(init_hw_perf_events);
--
2.17.0


2018-04-24 18:08:52

by Atish Patra

Subject: Re: [PATCH v5 0/2] perf: riscv: Preliminary Perf Event Support on RISC-V

On 4/19/18 4:28 PM, Alan Kao wrote:
> This implements the baseline PMU for RISC-V platforms.
>
> To ease future PMU portings, a guide is also written, containing
> perf concepts, arch porting practices and some hints.
>
> Changes in v5:
> - Fix patch errors from checkpatch.pl.
>
> Changes in v4:
> - Fix several compilation errors. Sorry for that.
> - Raise a warning in the write_counter body.
>
> Changes in v3:
> - Fix typos in the document.
> - Change the initialization routine from statically assigning PMU to
> device-tree-based methods, and set default to the PMU proposed in
> this patch.
>
> Changes in v2:
> - Fix the bug reported by Alex, which was caused by not sufficient
> initialization. Check https://lkml.org/lkml/2018/3/31/251 for the
> discussion.
>
> Alan Kao (2):
> perf: riscv: preliminary RISC-V support
> perf: riscv: Add Document for Future Porting Guide
>
> Documentation/riscv/pmu.txt | 249 ++++++++++++++
> arch/riscv/Kconfig | 13 +
> arch/riscv/include/asm/perf_event.h | 79 ++++-
> arch/riscv/kernel/Makefile | 1 +
> arch/riscv/kernel/perf_event.c | 485 ++++++++++++++++++++++++++++
> 5 files changed, 823 insertions(+), 4 deletions(-)
> create mode 100644 Documentation/riscv/pmu.txt
> create mode 100644 arch/riscv/kernel/perf_event.c
>
Most of the perf tests either pass or fail because of unsupported
event/trace point which is fine.

However, I got an rcu-stall for the test "47: Event times".
# ./perf test -v 47
47: Event times :
--- start ---
test child forked, pid 2774
attaching to spawned child, enable on exec
OK : ena 2243000, run 2243000
attaching to current thread as enabled
OK : ena 19000, run 19000
attaching to current thread as disabled
OK : ena 5000, run 5000
attaching to CPU 0 as enabled
[ 1001.466578] INFO: rcu_sched self-detected stall on CPU
[ 1001.470947] 4-....: (29999 ticks this GP) idle=5fa/140000000000001/0
softirq=19762/19762 fqs=14602
[ 1001.480053] (t=30001 jiffies g=3471 c=3470 q=125)
[ 1001.484917] Task dump for CPU 4:
[ 1001.488129] perf R running task 0 2774 2773
0x00000008
[ 1001.495161] Call Trace:
[ 1001.497606] [<000000006a3d4f87>] walk_stackframe+0x0/0xc0
[ 1001.502980] [<000000004b4b0780>] show_stack+0x3c/0x46
[ 1001.508024] [<0000000060c96ab8>] sched_show_task+0xd0/0x122
[ 1001.513573] [<000000007d8bd54e>] dump_cpu_task+0x50/0x5a
[ 1001.518870] [<0000000053990e11>] rcu_dump_cpu_stacks+0x98/0xd2
[ 1001.524685] [<00000000fe94c593>] rcu_check_callbacks+0x614/0x822
[ 1001.530680] [<0000000057688dd3>] update_process_times+0x38/0x6a
[ 1001.536585] [<0000000063a96de0>] tick_periodic+0x58/0xd8
[ 1001.541876] [<0000000013d712f1>] tick_handle_periodic+0x2e/0x7c
[ 1001.547780] [<000000009e2ef428>] riscv_timer_interrupt+0x34/0x3c
[ 1001.553774] [<00000000ff6b1f18>] riscv_intc_irq+0xbc/0xe0
[ 1001.559153] [<00000000c8614c3b>] ret_from_exception+0x0/0xc

It is quite possible that we don't support some dependency
infrastructure. I am looking into it.

Regards,
Atish





2018-04-24 19:29:01

by Atish Patra

Subject: Re: [PATCH v5 0/2] perf: riscv: Preliminary Perf Event Support on RISC-V

On 4/24/18 11:07 AM, Atish Patra wrote:
> On 4/19/18 4:28 PM, Alan Kao wrote:
>> This implements the baseline PMU for RISC-V platforms.
>>
>> To ease future PMU portings, a guide is also written, containing
>> perf concepts, arch porting practices and some hints.
>>
>> Changes in v5:
>> - Fix patch errors from checkpatch.pl.
>>
>> Changes in v4:
>> - Fix several compilation errors. Sorry for that.
>> - Raise a warning in the write_counter body.
>>
>> Changes in v3:
>> - Fix typos in the document.
>> - Change the initialization routine from statically assigning PMU to
>> device-tree-based methods, and set default to the PMU proposed in
>> this patch.
>>
>> Changes in v2:
>> - Fix the bug reported by Alex, which was caused by not sufficient
>> initialization. Check https://lkml.org/lkml/2018/3/31/251 for the
>> discussion.
>>
>> Alan Kao (2):
>> perf: riscv: preliminary RISC-V support
>> perf: riscv: Add Document for Future Porting Guide
>>
>> Documentation/riscv/pmu.txt | 249 ++++++++++++++
>> arch/riscv/Kconfig | 13 +
>> arch/riscv/include/asm/perf_event.h | 79 ++++-
>> arch/riscv/kernel/Makefile | 1 +
>> arch/riscv/kernel/perf_event.c | 485 ++++++++++++++++++++++++++++
>> 5 files changed, 823 insertions(+), 4 deletions(-)
>> create mode 100644 Documentation/riscv/pmu.txt
>> create mode 100644 arch/riscv/kernel/perf_event.c
>>
> Most of the perf tests either pass or fail because of unsupported
> event/trace point which is fine.
>
> However, I got an rcu-stall for the test "47: Event times".
> # ./perf test -v 47
> 47: Event times :
> --- start ---
> test child forked, pid 2774
> attaching to spawned child, enable on exec
> OK : ena 2243000, run 2243000
> attaching to current thread as enabled
> OK : ena 19000, run 19000
> attaching to current thread as disabled
> OK : ena 5000, run 5000
> attaching to CPU 0 as enabled
> [ 1001.466578] INFO: rcu_sched self-detected stall on CPU
> [ 1001.470947] 4-....: (29999 ticks this GP) idle=5fa/140000000000001/0
> softirq=19762/19762 fqs=14602
> [ 1001.480053] (t=30001 jiffies g=3471 c=3470 q=125)
> [ 1001.484917] Task dump for CPU 4:
> [ 1001.488129] perf R running task 0 2774 2773
> 0x00000008
> [ 1001.495161] Call Trace:
> [ 1001.497606] [<000000006a3d4f87>] walk_stackframe+0x0/0xc0
> [ 1001.502980] [<000000004b4b0780>] show_stack+0x3c/0x46
> [ 1001.508024] [<0000000060c96ab8>] sched_show_task+0xd0/0x122
> [ 1001.513573] [<000000007d8bd54e>] dump_cpu_task+0x50/0x5a
> [ 1001.518870] [<0000000053990e11>] rcu_dump_cpu_stacks+0x98/0xd2
> [ 1001.524685] [<00000000fe94c593>] rcu_check_callbacks+0x614/0x822
> [ 1001.530680] [<0000000057688dd3>] update_process_times+0x38/0x6a
> [ 1001.536585] [<0000000063a96de0>] tick_periodic+0x58/0xd8
> [ 1001.541876] [<0000000013d712f1>] tick_handle_periodic+0x2e/0x7c
> [ 1001.547780] [<000000009e2ef428>] riscv_timer_interrupt+0x34/0x3c
> [ 1001.553774] [<00000000ff6b1f18>] riscv_intc_irq+0xbc/0xe0
> [ 1001.559153] [<00000000c8614c3b>] ret_from_exception+0x0/0xc
>
> It is quite possible that we don't support some dependency
> infrastructure. I am looking into it.
>
> Regards,
> Atish
>
>
>
>
>

Got it working. The test tries to attach the event to CPU0 which doesn't
exist in HighFive Unleashed. Changing it to cpu1 works.

diff --git a/tools/perf/tests/event-times.c b/tools/perf/tests/event-times.c
index 1a2686f..eb11632f 100644
--- a/tools/perf/tests/event-times.c
+++ b/tools/perf/tests/event-times.c
@@ -113,9 +113,9 @@ static int attach__cpu_disabled(struct perf_evlist
*evlist)
struct cpu_map *cpus;
int err;

- pr_debug("attaching to CPU 0 as enabled\n");
+ pr_debug("attaching to CPU 1 as disabled\n");

- cpus = cpu_map__new("0");
+ cpus = cpu_map__new("1");
if (cpus == NULL) {
pr_debug("failed to call cpu_map__new\n");
return -1;
@@ -142,9 +142,9 @@ static int attach__cpu_enabled(struct perf_evlist
*evlist)
struct cpu_map *cpus;
int err;

- pr_debug("attaching to CPU 0 as enabled\n");
+ pr_debug("attaching to CPU 1 as enabled\n");

- cpus = cpu_map__new("0");
+ cpus = cpu_map__new("1");
if (cpus == NULL) {
pr_debug("failed to call cpu_map__new\n");
return -1;


Palmer,
Would it be better to officially document somewhere that CPU0 doesn't
exist on the HighFive Unleashed board?
I fear that there will be other standard tests/code paths that may fail
because of the inherent assumption of cpu0's presence.


Regards,
Atish

2018-04-24 19:45:59

by Palmer Dabbelt

Subject: Re: [PATCH v5 0/2] perf: riscv: Preliminary Perf Event Support on RISC-V

On Tue, 24 Apr 2018 12:27:26 PDT (-0700), [email protected] wrote:
> On 4/24/18 11:07 AM, Atish Patra wrote:
>> On 4/19/18 4:28 PM, Alan Kao wrote:
>>> This implements the baseline PMU for RISC-V platforms.
>>>
>>> To ease future PMU portings, a guide is also written, containing
>>> perf concepts, arch porting practices and some hints.
>>>
>>> Changes in v5:
>>> - Fix patch errors from checkpatch.pl.
>>>
>>> Changes in v4:
>>> - Fix several compilation errors. Sorry for that.
>>> - Raise a warning in the write_counter body.
>>>
>>> Changes in v3:
>>> - Fix typos in the document.
>>> - Change the initialization routine from statically assigning PMU to
>>> device-tree-based methods, and set default to the PMU proposed in
>>> this patch.
>>>
>>> Changes in v2:
>>> - Fix the bug reported by Alex, which was caused by not sufficient
>>> initialization. Check https://lkml.org/lkml/2018/3/31/251 for the
>>> discussion.
>>>
>>> Alan Kao (2):
>>> perf: riscv: preliminary RISC-V support
>>> perf: riscv: Add Document for Future Porting Guide
>>>
>>> Documentation/riscv/pmu.txt | 249 ++++++++++++++
>>> arch/riscv/Kconfig | 13 +
>>> arch/riscv/include/asm/perf_event.h | 79 ++++-
>>> arch/riscv/kernel/Makefile | 1 +
>>> arch/riscv/kernel/perf_event.c | 485 ++++++++++++++++++++++++++++
>>> 5 files changed, 823 insertions(+), 4 deletions(-)
>>> create mode 100644 Documentation/riscv/pmu.txt
>>> create mode 100644 arch/riscv/kernel/perf_event.c
>>>
>> Most of the perf tests either pass or fail because of unsupported
>> event/trace point which is fine.
>>
>> However, I got an rcu-stall for the test "47: Event times".
>> # ./perf test -v 47
>> 47: Event times :
>> --- start ---
>> test child forked, pid 2774
>> attaching to spawned child, enable on exec
>> OK : ena 2243000, run 2243000
>> attaching to current thread as enabled
>> OK : ena 19000, run 19000
>> attaching to current thread as disabled
>> OK : ena 5000, run 5000
>> attaching to CPU 0 as enabled
>> [ 1001.466578] INFO: rcu_sched self-detected stall on CPU
>> [ 1001.470947] 4-....: (29999 ticks this GP) idle=5fa/140000000000001/0
>> softirq=19762/19762 fqs=14602
>> [ 1001.480053] (t=30001 jiffies g=3471 c=3470 q=125)
>> [ 1001.484917] Task dump for CPU 4:
>> [ 1001.488129] perf R running task 0 2774 2773
>> 0x00000008
>> [ 1001.495161] Call Trace:
>> [ 1001.497606] [<000000006a3d4f87>] walk_stackframe+0x0/0xc0
>> [ 1001.502980] [<000000004b4b0780>] show_stack+0x3c/0x46
>> [ 1001.508024] [<0000000060c96ab8>] sched_show_task+0xd0/0x122
>> [ 1001.513573] [<000000007d8bd54e>] dump_cpu_task+0x50/0x5a
>> [ 1001.518870] [<0000000053990e11>] rcu_dump_cpu_stacks+0x98/0xd2
>> [ 1001.524685] [<00000000fe94c593>] rcu_check_callbacks+0x614/0x822
>> [ 1001.530680] [<0000000057688dd3>] update_process_times+0x38/0x6a
>> [ 1001.536585] [<0000000063a96de0>] tick_periodic+0x58/0xd8
>> [ 1001.541876] [<0000000013d712f1>] tick_handle_periodic+0x2e/0x7c
>> [ 1001.547780] [<000000009e2ef428>] riscv_timer_interrupt+0x34/0x3c
>> [ 1001.553774] [<00000000ff6b1f18>] riscv_intc_irq+0xbc/0xe0
>> [ 1001.559153] [<00000000c8614c3b>] ret_from_exception+0x0/0xc
>>
>> It is quite possible that we don't support some dependency
>> infrastructure. I am looking into it.
>>
>> Regards,
>> Atish
>>
>>
>>
>>
>>
>
> Got it working. The test tries to attach the event to CPU0 which doesn't
> exist in HighFive Unleashed. Changing it to cpu1 works.
>
> diff --git a/tools/perf/tests/event-times.c b/tools/perf/tests/event-times.c
> index 1a2686f..eb11632f 100644
> --- a/tools/perf/tests/event-times.c
> +++ b/tools/perf/tests/event-times.c
> @@ -113,9 +113,9 @@ static int attach__cpu_disabled(struct perf_evlist
> *evlist)
> struct cpu_map *cpus;
> int err;
>
> - pr_debug("attaching to CPU 0 as enabled\n");
> + pr_debug("attaching to CPU 1 as disabled\n");
>
> - cpus = cpu_map__new("0");
> + cpus = cpu_map__new("1");
> if (cpus == NULL) {
> pr_debug("failed to call cpu_map__new\n");
> return -1;
> @@ -142,9 +142,9 @@ static int attach__cpu_enabled(struct perf_evlist
> *evlist)
> struct cpu_map *cpus;
> int err;
>
> - pr_debug("attaching to CPU 0 as enabled\n");
> + pr_debug("attaching to CPU 1 as enabled\n");
>
> - cpus = cpu_map__new("0");
> + cpus = cpu_map__new("1");
> if (cpus == NULL) {
> pr_debug("failed to call cpu_map__new\n");
> return -1;
>
>
> Palmer,
> Would it be better to officially document it somewhere that CPU0 doesn't
> exist in the HighFive Unleashed board ?
> I fear that there will be other standard tests/code path that may fail
> because of inherent assumption of cpu0 presence.

I think the best way to fix this is to just have BBL (or whatever the
bootloader is) renumber the CPUs so they're contiguous and begin with 0.
Documenting it is just a way to tell people their code needs to be changed;
it'll still break.

2018-04-24 22:18:08

by Atish Patra

Subject: Re: [PATCH v5 0/2] perf: riscv: Preliminary Perf Event Support on RISC-V

On 4/24/18 12:44 PM, Palmer Dabbelt wrote:
> On Tue, 24 Apr 2018 12:27:26 PDT (-0700), [email protected] wrote:
>> On 4/24/18 11:07 AM, Atish Patra wrote:
>>> On 4/19/18 4:28 PM, Alan Kao wrote:
>>>> This implements the baseline PMU for RISC-V platforms.
>>>>
>>>> To ease future PMU portings, a guide is also written, containing
>>>> perf concepts, arch porting practices and some hints.
>>>>
>>>> Changes in v5:
>>>> - Fix patch errors from checkpatch.pl.
>>>>
>>>> Changes in v4:
>>>> - Fix several compilation errors. Sorry for that.
>>>> - Raise a warning in the write_counter body.
>>>>
>>>> Changes in v3:
>>>> - Fix typos in the document.
>>>> - Change the initialization routine from statically assigning PMU to
>>>> device-tree-based methods, and set default to the PMU proposed in
>>>> this patch.
>>>>
>>>> Changes in v2:
>>>> - Fix the bug reported by Alex, which was caused by not sufficient
>>>> initialization. Check https://lkml.org/lkml/2018/3/31/251 for the
>>>> discussion.
>>>>
>>>> Alan Kao (2):
>>>> perf: riscv: preliminary RISC-V support
>>>> perf: riscv: Add Document for Future Porting Guide
>>>>
>>>> Documentation/riscv/pmu.txt | 249 ++++++++++++++
>>>> arch/riscv/Kconfig | 13 +
>>>> arch/riscv/include/asm/perf_event.h | 79 ++++-
>>>> arch/riscv/kernel/Makefile | 1 +
>>>> arch/riscv/kernel/perf_event.c | 485 ++++++++++++++++++++++++++++
>>>> 5 files changed, 823 insertions(+), 4 deletions(-)
>>>> create mode 100644 Documentation/riscv/pmu.txt
>>>> create mode 100644 arch/riscv/kernel/perf_event.c
>>>>
>>> Most of the perf tests either pass or fail because of unsupported
>>> event/trace point which is fine.
>>>
>>> However, I got an rcu-stall for the test "47: Event times".
>>> # ./perf test -v 47
>>> 47: Event times :
>>> --- start ---
>>> test child forked, pid 2774
>>> attaching to spawned child, enable on exec
>>> OK : ena 2243000, run 2243000
>>> attaching to current thread as enabled
>>> OK : ena 19000, run 19000
>>> attaching to current thread as disabled
>>> OK : ena 5000, run 5000
>>> attaching to CPU 0 as enabled
>>> [ 1001.466578] INFO: rcu_sched self-detected stall on CPU
>>> [ 1001.470947] 4-....: (29999 ticks this GP) idle=5fa/140000000000001/0
>>> softirq=19762/19762 fqs=14602
>>> [ 1001.480053] (t=30001 jiffies g=3471 c=3470 q=125)
>>> [ 1001.484917] Task dump for CPU 4:
>>> [ 1001.488129] perf R running task 0 2774 2773
>>> 0x00000008
>>> [ 1001.495161] Call Trace:
>>> [ 1001.497606] [<000000006a3d4f87>] walk_stackframe+0x0/0xc0
>>> [ 1001.502980] [<000000004b4b0780>] show_stack+0x3c/0x46
>>> [ 1001.508024] [<0000000060c96ab8>] sched_show_task+0xd0/0x122
>>> [ 1001.513573] [<000000007d8bd54e>] dump_cpu_task+0x50/0x5a
>>> [ 1001.518870] [<0000000053990e11>] rcu_dump_cpu_stacks+0x98/0xd2
>>> [ 1001.524685] [<00000000fe94c593>] rcu_check_callbacks+0x614/0x822
>>> [ 1001.530680] [<0000000057688dd3>] update_process_times+0x38/0x6a
>>> [ 1001.536585] [<0000000063a96de0>] tick_periodic+0x58/0xd8
>>> [ 1001.541876] [<0000000013d712f1>] tick_handle_periodic+0x2e/0x7c
>>> [ 1001.547780] [<000000009e2ef428>] riscv_timer_interrupt+0x34/0x3c
>>> [ 1001.553774] [<00000000ff6b1f18>] riscv_intc_irq+0xbc/0xe0
>>> [ 1001.559153] [<00000000c8614c3b>] ret_from_exception+0x0/0xc
>>>
>>> It is quite possible that we don't support some dependency
>>> infrastructure. I am looking into it.
>>>
>>> Regards,
>>> Atish
>>>
>>>
>>>
>>>
>>>
>>
>> Got it working. The test tries to attach the event to CPU0 which doesn't
>> exist in HighFive Unleashed. Changing it to cpu1 works.
>>
>> diff --git a/tools/perf/tests/event-times.c b/tools/perf/tests/event-times.c
>> index 1a2686f..eb11632f 100644
>> --- a/tools/perf/tests/event-times.c
>> +++ b/tools/perf/tests/event-times.c
>> @@ -113,9 +113,9 @@ static int attach__cpu_disabled(struct perf_evlist
>> *evlist)
>> struct cpu_map *cpus;
>> int err;
>>
>> - pr_debug("attaching to CPU 0 as enabled\n");
>> + pr_debug("attaching to CPU 1 as disabled\n");
>>
>> - cpus = cpu_map__new("0");
>> + cpus = cpu_map__new("1");
>> if (cpus == NULL) {
>> pr_debug("failed to call cpu_map__new\n");
>> return -1;
>> @@ -142,9 +142,9 @@ static int attach__cpu_enabled(struct perf_evlist
>> *evlist)
>> struct cpu_map *cpus;
>> int err;
>>
>> - pr_debug("attaching to CPU 0 as enabled\n");
>> + pr_debug("attaching to CPU 1 as enabled\n");
>>
>> - cpus = cpu_map__new("0");
>> + cpus = cpu_map__new("1");
>> if (cpus == NULL) {
>> pr_debug("failed to call cpu_map__new\n");
>> return -1;
>>
>>
>> Palmer,
>> Would it be better to officially document it somewhere that CPU0 doesn't
>> exist in the HighFive Unleashed board ?
>> I fear that there will be other standard tests/code path that may fail
>> because of inherent assumption of cpu0 presence.
>
> I think the best way to fix this is to just have BBL (or whatever the
> bootloader is) renumber the CPUs so they're contiguous and begin with 0.

Do you mean BBL will update the device tree that the kernel eventually
parses to set the hart IDs?
Sounds good to me, unless it acts as a big hack in future boot loaders.

> Documenting it it just a way to tell people their code needs to be changed,
> it'll still break.
>
Agreed.


2018-04-25 00:32:00

by Palmer Dabbelt

Subject: Re: [PATCH v5 0/2] perf: riscv: Preliminary Perf Event Support on RISC-V

On Tue, 24 Apr 2018 15:16:16 PDT (-0700), [email protected] wrote:
> On 4/24/18 12:44 PM, Palmer Dabbelt wrote:
>> On Tue, 24 Apr 2018 12:27:26 PDT (-0700), [email protected] wrote:
>>> On 4/24/18 11:07 AM, Atish Patra wrote:
>>>> On 4/19/18 4:28 PM, Alan Kao wrote:
>>>> However, I got an rcu-stall for the test "47: Event times".
>>>> # ./perf test -v 47
>>> Got it working. The test tries to attach the event to CPU0 which doesn't
>>> exist in HighFive Unleashed. Changing it to cpu1 works.
>>>
>>> diff --git a/tools/perf/tests/event-times.c b/tools/perf/tests/event-times.c
>>> index 1a2686f..eb11632f 100644
>>> --- a/tools/perf/tests/event-times.c
>>> +++ b/tools/perf/tests/event-times.c
>>> @@ -113,9 +113,9 @@ static int attach__cpu_disabled(struct perf_evlist
>>> *evlist)
>>> struct cpu_map *cpus;
>>> int err;
>>>
>>> - pr_debug("attaching to CPU 0 as enabled\n");
>>> + pr_debug("attaching to CPU 1 as disabled\n");
>>>
>>> - cpus = cpu_map__new("0");
>>> + cpus = cpu_map__new("1");
>>> if (cpus == NULL) {
>>> pr_debug("failed to call cpu_map__new\n");
>>> return -1;
>>> @@ -142,9 +142,9 @@ static int attach__cpu_enabled(struct perf_evlist
>>> *evlist)
>>> struct cpu_map *cpus;
>>> int err;
>>>
>>> - pr_debug("attaching to CPU 0 as enabled\n");
>>> + pr_debug("attaching to CPU 1 as enabled\n");
>>>
>>> - cpus = cpu_map__new("0");
>>> + cpus = cpu_map__new("1");
>>> if (cpus == NULL) {
>>> pr_debug("failed to call cpu_map__new\n");
>>> return -1;
>>>
>>>
>>> Palmer,
>>> Would it be better to officially document it somewhere that CPU0 doesn't
>>> exist in the HighFive Unleashed board ?
>>> I fear that there will be other standard tests/code path that may fail
>>> because of inherent assumption of cpu0 presence.
>>
>> I think the best way to fix this is to just have BBL (or whatever the
>> bootloader is) renumber the CPUs so they're contiguous and begin with 0.
>
> Do you mean BBL will update the device tree that kernel eventually parse
> and set the hart id?
> Sounds good to me unless it acts as a big hack in future boot loaders.

Right now the machine-mode and supervisor-mode hart IDs are logically separate:
the bootloader just provides the hart ID as a register argument when starting
the kernel. BBL already needs to enumerate the harts by looking through the
device tree for various other reasons (at least to mask off the harts that
Linux doesn't support), so it's not that much effort to just maintain a mapping
from supervisor-mode hart IDs to machine-mode hart IDs.

I have some patches floating around that do this, but appear to do it
incorrectly enough that nothing boots so maybe I'm missing something that makes
this complicated :).

2018-04-25 01:17:16

by Atish Patra

Subject: Re: [PATCH v5 0/2] perf: riscv: Preliminary Perf Event Support on RISC-V

On 4/24/18 5:29 PM, Palmer Dabbelt wrote:
> On Tue, 24 Apr 2018 15:16:16 PDT (-0700), [email protected] wrote:
>> On 4/24/18 12:44 PM, Palmer Dabbelt wrote:
>>> On Tue, 24 Apr 2018 12:27:26 PDT (-0700), [email protected] wrote:
>>>> On 4/24/18 11:07 AM, Atish Patra wrote:
>>>>> On 4/19/18 4:28 PM, Alan Kao wrote:
>>>>> However, I got an rcu-stall for the test "47: Event times".
>>>>> # ./perf test -v 47
>>>> Got it working. The test tries to attach the event to CPU0 which doesn't
>>>> exist in HighFive Unleashed. Changing it to cpu1 works.
>>>>
>>>> diff --git a/tools/perf/tests/event-times.c b/tools/perf/tests/event-times.c
>>>> index 1a2686f..eb11632f 100644
>>>> --- a/tools/perf/tests/event-times.c
>>>> +++ b/tools/perf/tests/event-times.c
>>>> @@ -113,9 +113,9 @@ static int attach__cpu_disabled(struct perf_evlist
>>>> *evlist)
>>>> struct cpu_map *cpus;
>>>> int err;
>>>>
>>>> - pr_debug("attaching to CPU 0 as enabled\n");
>>>> + pr_debug("attaching to CPU 1 as disabled\n");
>>>>
>>>> - cpus = cpu_map__new("0");
>>>> + cpus = cpu_map__new("1");
>>>> if (cpus == NULL) {
>>>> pr_debug("failed to call cpu_map__new\n");
>>>> return -1;
>>>> @@ -142,9 +142,9 @@ static int attach__cpu_enabled(struct perf_evlist
>>>> *evlist)
>>>> struct cpu_map *cpus;
>>>> int err;
>>>>
>>>> - pr_debug("attaching to CPU 0 as enabled\n");
>>>> + pr_debug("attaching to CPU 1 as enabled\n");
>>>>
>>>> - cpus = cpu_map__new("0");
>>>> + cpus = cpu_map__new("1");
>>>> if (cpus == NULL) {
>>>> pr_debug("failed to call cpu_map__new\n");
>>>> return -1;
>>>>
>>>>
>>>> Palmer,
>>>> Would it be better to officially document it somewhere that CPU0 doesn't
>>>> exist in the HighFive Unleashed board ?
>>>> I fear that there will be other standard tests/code path that may fail
>>>> because of inherent assumption of cpu0 presence.
>>>
>>> I think the best way to fix this is to just have BBL (or whatever the
>>> bootloader is) renumber the CPUs so they're contiguous and begin with 0.
>>
>> Do you mean BBL will update the device tree that kernel eventually parse
>> and set the hart id?
>> Sounds good to me unless it acts as a big hack in future boot loaders.
>
> Right now the machine-mode and supervisor-mode hart IDs are logically separate:
> the bootloader just provides the hart ID as a register argument when starting
> the kernel.

Yes.

> BBL already needs to enumerate the harts by looking through the
> device tree for various other reasons (at least to mask off the harts that
> Linux doesn't support), so it's not that much effort to just maintain a mapping
> from supervisor-mode hart IDs to machine-mode hart IDs.
>

But Linux also parses the device tree to get the hart ID in
riscv_of_processor_hart(). This is used to set up the possible/present
cpu maps in setup_smp().

Thus, Linux also needs to see a device tree with cpu0-3 instead of
cpu1-4. Otherwise, the present cpu map will be incorrect, won't it?

> I have some patches floating around that do this, but appear to do it
> incorrectly enough that nothing boots so maybe I'm missing something that makes
> this complicated :).
>

Just a wild guess: May be the because of the above reason ;)


2018-04-25 03:23:07

by Alan Kao

Subject: Re: [PATCH v5 0/2] perf: riscv: Preliminary Perf Event Support on RISC-V

Hi Atish, Palmer,

On Tue, Apr 24, 2018 at 06:15:49PM -0700, Atish Patra wrote:
> On 4/24/18 5:29 PM, Palmer Dabbelt wrote:
> >On Tue, 24 Apr 2018 15:16:16 PDT (-0700), [email protected] wrote:
> >>On 4/24/18 12:44 PM, Palmer Dabbelt wrote:
> >>>On Tue, 24 Apr 2018 12:27:26 PDT (-0700), [email protected] wrote:
> >>>>On 4/24/18 11:07 AM, Atish Patra wrote:
> >>>>>On 4/19/18 4:28 PM, Alan Kao wrote:
> >>>>>However, I got an rcu-stall for the test "47: Event times".
> >>>>># ./perf test -v 47
> >>>>Got it working. The test tries to attach the event to CPU0 which doesn't
> >>>>exist in HighFive Unleashed. Changing it to cpu1 works.
> >>>>
> >>>>diff --git a/tools/perf/tests/event-times.c b/tools/perf/tests/event-times.c
> >>>>index 1a2686f..eb11632f 100644
> >>>>--- a/tools/perf/tests/event-times.c
> >>>>+++ b/tools/perf/tests/event-times.c
> >>>>@@ -113,9 +113,9 @@ static int attach__cpu_disabled(struct perf_evlist
> >>>>*evlist)
> >>>> struct cpu_map *cpus;
> >>>> int err;
> >>>>
> >>>>- pr_debug("attaching to CPU 0 as enabled\n");
> >>>>+ pr_debug("attaching to CPU 1 as disabled\n");
> >>>>
> >>>>- cpus = cpu_map__new("0");
> >>>>+ cpus = cpu_map__new("1");
> >>>> if (cpus == NULL) {
> >>>> pr_debug("failed to call cpu_map__new\n");
> >>>> return -1;
> >>>>@@ -142,9 +142,9 @@ static int attach__cpu_enabled(struct perf_evlist
> >>>>*evlist)
> >>>> struct cpu_map *cpus;
> >>>> int err;
> >>>>
> >>>>- pr_debug("attaching to CPU 0 as enabled\n");
> >>>>+ pr_debug("attaching to CPU 1 as enabled\n");
> >>>>
> >>>>- cpus = cpu_map__new("0");
> >>>>+ cpus = cpu_map__new("1");
> >>>> if (cpus == NULL) {
> >>>> pr_debug("failed to call cpu_map__new\n");
> >>>> return -1;
> >>>>
> >>>>
> >>>>Palmer,
> >>>>Would it be better to officially document it somewhere that CPU0 doesn't
> >>>>exist in the HighFive Unleashed board ?
> >>>>I fear that there will be other standard tests/code path that may fail
> >>>>because of inherent assumption of cpu0 presence.
> >>>
> >>>I think the best way to fix this is to just have BBL (or whatever the
> >>>bootloader is) renumber the CPUs so they're contiguous and begin with 0.
> >>
> >>Do you mean BBL will update the device tree that kernel eventually parse
> >>and set the hart id?
> >>Sounds good to me unless it acts as a big hack in future boot loaders.
> >
> >Right now the machine-mode and supervisor-mode hart IDs are logically separate:
> >the bootloader just provides the hart ID as a register argument when starting
> >the kernel.
>
> Yes.
>
> BBL already needs to enumerate the harts by looking through the
> >device tree for various other reasons (at least to mask off the harts that
> >Linux doesn't support), so it's not that much effort to just maintain a mapping
> >from supervisor-mode hart IDs to machine-mode hart IDs.
> >
>
> But Linux also parses the device tree to get hart ID in
> riscv_of_processor_hart(). This is used to setup the possible/present cpu
> map in setup_smp().
>
> Thus, Linux also need to see a device tree with cpu0-3 instead of cpu1-4.
> Otherwise, present cpu map will be incorrect. Isn't it ?
>
> >I have some patches floating around that do this, but appear to do it
> >incorrectly enough that nothing boots so maybe I'm missing something that makes
> >this complicated :).
> >
>
> Just a wild guess: May be the because of the above reason ;)
>

Thanks for the test and discussion. It looks like an implementation issue in
the Unleashed, so ... is there anything I should fix and provide in a further patch?

2018-04-25 08:12:51

by Atish Patra

Subject: Re: [PATCH v5 0/2] perf: riscv: Preliminary Perf Event Support on RISC-V

On 4/24/18 8:19 PM, Alan Kao wrote:
> Hi Atish, Palmer,
>
> On Tue, Apr 24, 2018 at 06:15:49PM -0700, Atish Patra wrote:
>> On 4/24/18 5:29 PM, Palmer Dabbelt wrote:
>>> On Tue, 24 Apr 2018 15:16:16 PDT (-0700), [email protected] wrote:
>>>> On 4/24/18 12:44 PM, Palmer Dabbelt wrote:
>>>>> On Tue, 24 Apr 2018 12:27:26 PDT (-0700), [email protected] wrote:
>>>>>> On 4/24/18 11:07 AM, Atish Patra wrote:
>>>>>>> On 4/19/18 4:28 PM, Alan Kao wrote:
>>>>>>> However, I got an rcu-stall for the test "47: Event times".
>>>>>>> # ./perf test -v 47
>>>>>> Got it working. The test tries to attach the event to CPU0 which doesn't
>>>>>> exist in HighFive Unleashed. Changing it to cpu1 works.
>>>>>>
>>>>>> diff --git a/tools/perf/tests/event-times.c b/tools/perf/tests/event-times.c
>>>>>> index 1a2686f..eb11632f 100644
>>>>>> --- a/tools/perf/tests/event-times.c
>>>>>> +++ b/tools/perf/tests/event-times.c
>>>>>> @@ -113,9 +113,9 @@ static int attach__cpu_disabled(struct perf_evlist
>>>>>> *evlist)
>>>>>> struct cpu_map *cpus;
>>>>>> int err;
>>>>>>
>>>>>> - pr_debug("attaching to CPU 0 as enabled\n");
>>>>>> + pr_debug("attaching to CPU 1 as disabled\n");
>>>>>>
>>>>>> - cpus = cpu_map__new("0");
>>>>>> + cpus = cpu_map__new("1");
>>>>>> if (cpus == NULL) {
>>>>>> pr_debug("failed to call cpu_map__new\n");
>>>>>> return -1;
>>>>>> @@ -142,9 +142,9 @@ static int attach__cpu_enabled(struct perf_evlist
>>>>>> *evlist)
>>>>>> struct cpu_map *cpus;
>>>>>> int err;
>>>>>>
>>>>>> - pr_debug("attaching to CPU 0 as enabled\n");
>>>>>> + pr_debug("attaching to CPU 1 as enabled\n");
>>>>>>
>>>>>> - cpus = cpu_map__new("0");
>>>>>> + cpus = cpu_map__new("1");
>>>>>> if (cpus == NULL) {
>>>>>> pr_debug("failed to call cpu_map__new\n");
>>>>>> return -1;
>>>>>>
>>>>>>
>>>>>> Palmer,
>>>>>> Would it be better to officially document it somewhere that CPU0 doesn't
>>>>>> exist in the HighFive Unleashed board ?
>>>>>> I fear that there will be other standard tests/code path that may fail
>>>>>> because of inherent assumption of cpu0 presence.
>>>>>
>>>>> I think the best way to fix this is to just have BBL (or whatever the
>>>>> bootloader is) renumber the CPUs so they're contiguous and begin with 0.
>>>>
>>>> Do you mean BBL will update the device tree that the kernel eventually
>>>> parses to set the hart IDs?
>>>> Sounds good to me unless it acts as a big hack in future boot loaders.
>>>
>>> Right now the machine-mode and supervisor-mode hart IDs are logically separate:
>>> the bootloader just provides the hart ID as a register argument when starting
>>> the kernel.
>>
>> Yes.
>>
>>> BBL already needs to enumerate the harts by looking through the
>>> device tree for various other reasons (at least to mask off the harts that
>>> Linux doesn't support), so it's not that much effort to just maintain a mapping
>>> from supervisor-mode hart IDs to machine-mode hart IDs.
>>>
>>
>> But Linux also parses the device tree to get the hart ID in
>> riscv_of_processor_hart(). This is used to set up the possible/present cpu
>> map in setup_smp().
>>
>> Thus, Linux also needs to see a device tree with cpu0-3 instead of cpu1-4.
>> Otherwise, the present cpu map will be incorrect, won't it?
>>
>>> I have some patches floating around that do this, but appear to do it
>>> incorrectly enough that nothing boots so maybe I'm missing something that makes
>>> this complicated :).
>>>
>>
>> Just a wild guess: maybe because of the above reason ;)
>>
>
> Thanks for the test and discussion. It looks like an implementation issue in
> the Unleashed board, so ... is there anything I should fix and send as a
> further patch?
>
Nope. Nothing for this issue.
The fix should go either in BBL or in the kernel as a separate
patch.
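
For illustration, a kernel-side fix along those lines could look roughly like
the sketch below: setup_smp() hands out contiguous logical CPU numbers and
keeps a small table mapping them back to the hart IDs found in the device
tree, instead of using the hart ID as the CPU number directly. This is only a
sketch; the cpuid_to_hartid[] table and the exact includes are assumptions,
not code from this series.

/*
 * Illustrative sketch only, not part of this series: hand out contiguous
 * logical CPU ids in setup_smp() and remember the real hart id for each,
 * so a board whose first hart is 1 still exposes a logical cpu0.
 */
#include <linux/init.h>
#include <linux/cpumask.h>
#include <linux/of.h>
#include <asm/processor.h>

static unsigned long cpuid_to_hartid[NR_CPUS];	/* assumed mapping table */

void __init setup_smp(void)
{
	struct device_node *dn = NULL;
	int hart, cpuid = 0;

	while ((dn = of_find_node_by_type(dn, "cpu"))) {
		hart = riscv_of_processor_hart(dn);
		if (hart < 0)
			continue;

		/* cpuid is the logical number; hart is what the DT says. */
		cpuid_to_hartid[cpuid] = hart;
		set_cpu_possible(cpuid, true);
		set_cpu_present(cpuid, true);
		cpuid++;
	}
}

A real version would also have to make sure the boot hart ends up as logical
cpu 0 and translate logical CPU numbers back to hart IDs wherever the kernel
starts secondary harts or sends IPIs.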

2018-04-25 16:18:37

by Palmer Dabbelt

[permalink] [raw]
Subject: Re: [PATCH v5 0/2] perf: riscv: Preliminary Perf Event Support on RISC-V

On Tue, 24 Apr 2018 18:15:49 PDT (-0700), [email protected] wrote:
> On 4/24/18 5:29 PM, Palmer Dabbelt wrote:
>> On Tue, 24 Apr 2018 15:16:16 PDT (-0700), [email protected] wrote:
>>> On 4/24/18 12:44 PM, Palmer Dabbelt wrote:
>>>> On Tue, 24 Apr 2018 12:27:26 PDT (-0700), [email protected] wrote:
>>>>> On 4/24/18 11:07 AM, Atish Patra wrote:
>>>>>> On 4/19/18 4:28 PM, Alan Kao wrote:
>>>>>> However, I got an rcu-stall for the test "47: Event times".
>>>>>> # ./perf test -v 47
>>>>> Got it working. The test tries to attach the event to CPU0 which doesn't
>>>>> exist in HighFive Unleashed. Changing it to cpu1 works.
>>>>>
>>>>> diff --git a/tools/perf/tests/event-times.c b/tools/perf/tests/event-times.c
>>>>> index 1a2686f..eb11632f 100644
>>>>> --- a/tools/perf/tests/event-times.c
>>>>> +++ b/tools/perf/tests/event-times.c
>>>>> @@ -113,9 +113,9 @@ static int attach__cpu_disabled(struct perf_evlist
>>>>> *evlist)
>>>>> struct cpu_map *cpus;
>>>>> int err;
>>>>>
>>>>> - pr_debug("attaching to CPU 0 as enabled\n");
>>>>> + pr_debug("attaching to CPU 1 as disabled\n");
>>>>>
>>>>> - cpus = cpu_map__new("0");
>>>>> + cpus = cpu_map__new("1");
>>>>> if (cpus == NULL) {
>>>>> pr_debug("failed to call cpu_map__new\n");
>>>>> return -1;
>>>>> @@ -142,9 +142,9 @@ static int attach__cpu_enabled(struct perf_evlist
>>>>> *evlist)
>>>>> struct cpu_map *cpus;
>>>>> int err;
>>>>>
>>>>> - pr_debug("attaching to CPU 0 as enabled\n");
>>>>> + pr_debug("attaching to CPU 1 as enabled\n");
>>>>>
>>>>> - cpus = cpu_map__new("0");
>>>>> + cpus = cpu_map__new("1");
>>>>> if (cpus == NULL) {
>>>>> pr_debug("failed to call cpu_map__new\n");
>>>>> return -1;
>>>>>
>>>>>
>>>>> Palmer,
>>>>> Would it be better to officially document it somewhere that CPU0 doesn't
>>>>> exist in the HighFive Unleashed board ?
>>>>> I fear that there will be other standard tests/code path that may fail
>>>>> because of inherent assumption of cpu0 presence.
>>>>
>>>> I think the best way to fix this is to just have BBL (or whatever the
>>>> bootloader is) renumber the CPUs so they're contiguous and begin with 0.
>>>
>>> Do you mean BBL will update the device tree that the kernel eventually
>>> parses to set the hart IDs?
>>> Sounds good to me unless it acts as a big hack in future boot loaders.
>>
>> Right now the machine-mode and supervisor-mode hart IDs are logically separate:
>> the bootloader just provides the hart ID as a register argument when starting
>> the kernel.
>
> Yes.
>
>> BBL already needs to enumerate the harts by looking through the
>> device tree for various other reasons (at least to mask off the harts that
>> Linux doesn't support), so it's not that much effort to just maintain a mapping
>> from supervisor-mode hart IDs to machine-mode hart IDs.
>>
>
> But Linux also parses the device tree to get the hart ID in
> riscv_of_processor_hart(). This is used to set up the possible/present
> cpu map in setup_smp().
>
> Thus, Linux also needs to see a device tree with cpu0-3 instead of
> cpu1-4. Otherwise, the present cpu map will be incorrect, won't it?
>
>> I have some patches floating around that do this, but appear to do it
>> incorrectly enough that nothing boots so maybe I'm missing something that makes
>> this complicated :).
>>
>
> Just a wild guess: maybe because of the above reason ;)

I was already changing all the IDs in the device tree, so that wasn't it.

2018-04-25 16:40:36

by Palmer Dabbelt

[permalink] [raw]
Subject: Re: [PATCH v5 0/2] perf: riscv: Preliminary Perf Event Support on RISC-V

On Tue, 24 Apr 2018 20:19:36 PDT (-0700), [email protected] wrote:
> Hi Atish, Palmer,
>
> On Tue, Apr 24, 2018 at 06:15:49PM -0700, Atish Patra wrote:
>> On 4/24/18 5:29 PM, Palmer Dabbelt wrote:
>> >On Tue, 24 Apr 2018 15:16:16 PDT (-0700), [email protected] wrote:
>> >>On 4/24/18 12:44 PM, Palmer Dabbelt wrote:
>> >>>On Tue, 24 Apr 2018 12:27:26 PDT (-0700), [email protected] wrote:
>> >>>>On 4/24/18 11:07 AM, Atish Patra wrote:
>> >>>>>On 4/19/18 4:28 PM, Alan Kao wrote:
>> >>>>>However, I got an rcu-stall for the test "47: Event times".
>> >>>>># ./perf test -v 47
>> >>>>Got it working. The test tries to attach the event to CPU0 which doesn't
>> >>>>exist in HighFive Unleashed. Changing it to cpu1 works.
>> >>>>
>> >>>>diff --git a/tools/perf/tests/event-times.c b/tools/perf/tests/event-times.c
>> >>>>index 1a2686f..eb11632f 100644
>> >>>>--- a/tools/perf/tests/event-times.c
>> >>>>+++ b/tools/perf/tests/event-times.c
>> >>>>@@ -113,9 +113,9 @@ static int attach__cpu_disabled(struct perf_evlist
>> >>>>*evlist)
>> >>>> struct cpu_map *cpus;
>> >>>> int err;
>> >>>>
>> >>>>- pr_debug("attaching to CPU 0 as enabled\n");
>> >>>>+ pr_debug("attaching to CPU 1 as disabled\n");
>> >>>>
>> >>>>- cpus = cpu_map__new("0");
>> >>>>+ cpus = cpu_map__new("1");
>> >>>> if (cpus == NULL) {
>> >>>> pr_debug("failed to call cpu_map__new\n");
>> >>>> return -1;
>> >>>>@@ -142,9 +142,9 @@ static int attach__cpu_enabled(struct perf_evlist
>> >>>>*evlist)
>> >>>> struct cpu_map *cpus;
>> >>>> int err;
>> >>>>
>> >>>>- pr_debug("attaching to CPU 0 as enabled\n");
>> >>>>+ pr_debug("attaching to CPU 1 as enabled\n");
>> >>>>
>> >>>>- cpus = cpu_map__new("0");
>> >>>>+ cpus = cpu_map__new("1");
>> >>>> if (cpus == NULL) {
>> >>>> pr_debug("failed to call cpu_map__new\n");
>> >>>> return -1;
>> >>>>
>> >>>>
>> >>>>Palmer,
>> >>>>Would it be better to officially document it somewhere that CPU0 doesn't
>> >>>>exist in the HighFive Unleashed board ?
>> >>>>I fear that there will be other standard tests/code path that may fail
>> >>>>because of inherent assumption of cpu0 presence.
>> >>>
>> >>>I think the best way to fix this is to just have BBL (or whatever the
>> >>>bootloader is) renumber the CPUs so they're contiguous and begin with 0.
>> >>
>> >>Do you mean BBL will update the device tree that the kernel eventually
>> >>parses to set the hart IDs?
>> >>Sounds good to me unless it acts as a big hack in future boot loaders.
>> >
>> >Right now the machine-mode and supervisor-mode hart IDs are logically separate:
>> >the bootloader just provides the hart ID as a register argument when starting
>> >the kernel.
>>
>> Yes.
>>
>> >BBL already needs to enumerate the harts by looking through the
>> >device tree for various other reasons (at least to mask off the harts that
>> >Linux doesn't support), so it's not that much effort to just maintain a mapping
>> >from supervisor-mode hart IDs to machine-mode hart IDs.
>> >
>>
>> But Linux also parses the device tree to get the hart ID in
>> riscv_of_processor_hart(). This is used to set up the possible/present cpu
>> map in setup_smp().
>>
>> Thus, Linux also needs to see a device tree with cpu0-3 instead of cpu1-4.
>> Otherwise, the present cpu map will be incorrect, won't it?
>>
>> >I have some patches floating around that do this, but appear to do it
>> >incorrectly enough that nothing boots so maybe I'm missing something that makes
>> >this complicated :).
>> >
>>
>> Just a wild guess: maybe because of the above reason ;)
>>
>
> Thanks for the test and discussion. It looks like an implementation issue in
> the Unleashed board, so ... is there anything I should fix and send as a
> further patch?

You're welcome to fix BBL if you want, but that's unrelated to this patch set.
I'm going to look over the code again as soon as I get a chance. Thanks for
submitting the patches!
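
For anyone who does pick that up, the device-tree half of a BBL-side
renumbering can be fairly small if the bootloader already links against
libfdt; a minimal sketch is below (renumber_harts() is a made-up name, not
existing BBL code). The hart ID that the bootloader passes to the kernel as a
register argument at entry would presumably need the same translation.

/*
 * Hypothetical bootloader-side sketch, not existing BBL code: rewrite the
 * "reg" property of every cpu node in the FDT handed to the kernel so that
 * the supervisor-mode hart IDs are contiguous and start at 0.
 */
#include <stdint.h>
#include <string.h>
#include <libfdt.h>

static void renumber_harts(void *fdt)
{
	int cpus = fdt_path_offset(fdt, "/cpus");
	int node;
	uint32_t next_id = 0;

	if (cpus < 0)
		return;

	for (node = fdt_first_subnode(fdt, cpus);
	     node >= 0;
	     node = fdt_next_subnode(fdt, node)) {
		const char *type = fdt_getprop(fdt, node, "device_type", NULL);

		if (!type || strcmp(type, "cpu"))
			continue;

		/* The kernel reads this property as the hart ID. */
		fdt_setprop_u32(fdt, node, "reg", next_id++);
	}
}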

2018-06-06 03:19:01

by Alan Kao

[permalink] [raw]
Subject: Re: [PATCH v5 0/2] perf: riscv: Preliminary Perf Event Support on RISC-V

On Wed, Apr 25, 2018 at 09:39:04AM -0700, Palmer Dabbelt wrote:
> On Tue, 24 Apr 2018 20:19:36 PDT (-0700), [email protected] wrote:
> >Hi Atish, Palmer,
> >
> >On Tue, Apr 24, 2018 at 06:15:49PM -0700, Atish Patra wrote:
> >>On 4/24/18 5:29 PM, Palmer Dabbelt wrote:
> >>>On Tue, 24 Apr 2018 15:16:16 PDT (-0700), [email protected] wrote:
> >>>>On 4/24/18 12:44 PM, Palmer Dabbelt wrote:
> >>>>>On Tue, 24 Apr 2018 12:27:26 PDT (-0700), [email protected] wrote:
> >>>>>>On 4/24/18 11:07 AM, Atish Patra wrote:
> >>>>>>>On 4/19/18 4:28 PM, Alan Kao wrote:
> >>>>>>>However, I got an rcu-stall for the test "47: Event times".
> >>>>>>># ./perf test -v 47
> >>>>>>Got it working. The test tries to attach the event to CPU0 which doesn't
> >>>>>>exist in HighFive Unleashed. Changing it to cpu1 works.
> >>>>>>
> >>>>>>diff --git a/tools/perf/tests/event-times.c b/tools/perf/tests/event-times.c
> >>>>>>index 1a2686f..eb11632f 100644
> >>>>>>--- a/tools/perf/tests/event-times.c
> >>>>>>+++ b/tools/perf/tests/event-times.c
> >>>>>>@@ -113,9 +113,9 @@ static int attach__cpu_disabled(struct perf_evlist
> >>>>>>*evlist)
> >>>>>> struct cpu_map *cpus;
> >>>>>> int err;
> >>>>>>
> >>>>>>- pr_debug("attaching to CPU 0 as enabled\n");
> >>>>>>+ pr_debug("attaching to CPU 1 as disabled\n");
> >>>>>>
> >>>>>>- cpus = cpu_map__new("0");
> >>>>>>+ cpus = cpu_map__new("1");
> >>>>>> if (cpus == NULL) {
> >>>>>> pr_debug("failed to call cpu_map__new\n");
> >>>>>> return -1;
> >>>>>>@@ -142,9 +142,9 @@ static int attach__cpu_enabled(struct perf_evlist
> >>>>>>*evlist)
> >>>>>> struct cpu_map *cpus;
> >>>>>> int err;
> >>>>>>
> >>>>>>- pr_debug("attaching to CPU 0 as enabled\n");
> >>>>>>+ pr_debug("attaching to CPU 1 as enabled\n");
> >>>>>>
> >>>>>>- cpus = cpu_map__new("0");
> >>>>>>+ cpus = cpu_map__new("1");
> >>>>>> if (cpus == NULL) {
> >>>>>> pr_debug("failed to call cpu_map__new\n");
> >>>>>> return -1;
> >>>>>>
> >>>>>>
> >>>>>>Palmer,
> >>>>>>Would it be better to officially document it somewhere that CPU0 doesn't
> >>>>>>exist in the HighFive Unleashed board ?
> >>>>>>I fear that there will be other standard tests/code path that may fail
> >>>>>>because of inherent assumption of cpu0 presence.
> >>>>>
> >>>>>I think the best way to fix this is to just have BBL (or whatever the
> >>>>>bootloader is) renumber the CPUs so they're contiguous and begin with 0.
> >>>>
> >>>>Do you mean BBL will update the device tree that the kernel eventually
> >>>>parses to set the hart IDs?
> >>>>Sounds good to me unless it acts as a big hack in future boot loaders.
> >>>
> >>>Right now the machine-mode and supervisor-mode hart IDs are logically separate:
> >>>the bootloader just provides the hart ID as a register argument when starting
> >>>the kernel.
> >>
> >>Yes.
> >>
> >>>BBL already needs to enumerate the harts by looking through the
> >>>device tree for various other reasons (at least to mask off the harts that
> >>>Linux doesn't support), so it's not that much effort to just maintain a mapping
> >>>from supervisor-mode hart IDs to machine-mode hart IDs.
> >>>
> >>
> >>But Linux also parses the device tree to get the hart ID in
> >>riscv_of_processor_hart(). This is used to set up the possible/present cpu
> >>map in setup_smp().
> >>
> >>Thus, Linux also needs to see a device tree with cpu0-3 instead of cpu1-4.
> >>Otherwise, the present cpu map will be incorrect, won't it?
> >>
> >>>I have some patches floating around that do this, but appear to do it
> >>>incorrectly enough that nothing boots so maybe I'm missing something that makes
> >>>this complicated :).
> >>>
> >>
> >>Just a wild guess: maybe because of the above reason ;)
> >>
> >
> >Thanks for the test and discussion. It looks like an implementation issue in
> >the Unleashed board, so ... is there anything I should fix and send as a
> >further patch?
>
> You're welcome to fix BBL if you want, but that's unrelated to this patch
> set. I'm going to look over the code again as soon as I get a chance to,
> thanks for submitting the patches!

Any updates?