2012-11-15 21:32:15

by Jacob Shin

Subject: [PATCH V2 0/4] perf, amd: Enable AMD family 15h northbridge counters

The following patchset enables 4 additional performance counters in
AMD family 15h processors that count northbridge events -- such as
the number of DRAM accesses.
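
For illustration only (not part of the patchset), here is a minimal
user-space sketch of counting one of these northbridge events through the
standard raw event interface; the 0x07e0 config value (NB event select
0xE0, DRAM accesses, with an assumed unit mask) is illustrative, not taken
from these patches, and the call needs sufficient perf privileges:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	uint64_t count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_RAW;
	attr.config = 0x07e0;	/* assumed NB event: DRAM accesses */

	/* NB counters are shared per northbridge: count system-wide on cpu 0 */
	fd = perf_event_open(&attr, -1, 0, -1, 0);
	if (fd < 0)
		return 1;

	sleep(1);
	if (read(fd, &count, sizeof(count)) != sizeof(count))
		return 1;
	printf("NB event count: %llu\n", (unsigned long long)count);
	close(fd);
	return 0;
}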

This patchset is based on top of previous work done by Robert Richter
<[email protected]> :

https://lkml.org/lkml/2012/6/19/324

The main differences are:

- The northbridge counters are indexed contiguously right above the
core performance counters.

- MSR address offset calculations are moved to architecture specific
files.

- Interrupts are set up to be delivered only to a single core.

V2:
Separate out Robert's patches, and add properly ordered certificates
of origin.

Jacob Shin (2):
perf, x86: Move MSR address offset calculation to architecture
specific files
perf, amd: Enable northbridge performance counters on AMD family 15h

Robert Richter (2):
perf, amd: Rework northbridge event constraints handler
perf, amd: Generalize northbridge constraints code for family 15h

arch/x86/include/asm/cpufeature.h | 2 +
arch/x86/include/asm/msr-index.h | 2 +
arch/x86/include/asm/perf_event.h | 6 +
arch/x86/kernel/cpu/perf_event.h | 21 +--
arch/x86/kernel/cpu/perf_event_amd.c | 246 ++++++++++++++++++++++++----------
5 files changed, 187 insertions(+), 90 deletions(-)

--
1.7.9.5


2012-11-15 21:32:13

by Jacob Shin

Subject: [PATCH 1/4] perf, amd: Rework northbridge event constraints handler

From: Robert Richter <[email protected]>

Code simplification. No functional changes.

Signed-off-by: Robert Richter <[email protected]>
Signed-off-by: Jacob Shin <[email protected]>
---
arch/x86/kernel/cpu/perf_event_amd.c | 68 +++++++++++++---------------------
1 file changed, 26 insertions(+), 42 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_amd.c b/arch/x86/kernel/cpu/perf_event_amd.c
index 4528ae7..d60c5c7 100644
--- a/arch/x86/kernel/cpu/perf_event_amd.c
+++ b/arch/x86/kernel/cpu/perf_event_amd.c
@@ -256,9 +256,8 @@ amd_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event)
{
struct hw_perf_event *hwc = &event->hw;
struct amd_nb *nb = cpuc->amd_nb;
- struct perf_event *old = NULL;
- int max = x86_pmu.num_counters;
- int i, j, k = -1;
+ struct perf_event *old;
+ int idx, new = -1;

/*
* if not NB event or no NB, then no constraints
@@ -276,48 +275,33 @@ amd_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event)
* because of successive calls to x86_schedule_events() from
* hw_perf_group_sched_in() without hw_perf_enable()
*/
- for (i = 0; i < max; i++) {
- /*
- * keep track of first free slot
- */
- if (k == -1 && !nb->owners[i])
- k = i;
+ for (idx = 0; idx < x86_pmu.num_counters; idx++) {
+ if (new == -1 || hwc->idx == idx)
+ /* assign free slot, prefer hwc->idx */
+ old = cmpxchg(nb->owners + idx, NULL, event);
+ else if (nb->owners[idx] == event)
+ /* event already present */
+ old = event;
+ else
+ continue;
+
+ if (old && old != event)
+ continue;
+
+ /* reassign to this slot */
+ if (new != -1)
+ cmpxchg(nb->owners + new, event, NULL);
+ new = idx;

/* already present, reuse */
- if (nb->owners[i] == event)
- goto done;
- }
- /*
- * not present, so grab a new slot
- * starting either at:
- */
- if (hwc->idx != -1) {
- /* previous assignment */
- i = hwc->idx;
- } else if (k != -1) {
- /* start from free slot found */
- i = k;
- } else {
- /*
- * event not found, no slot found in
- * first pass, try again from the
- * beginning
- */
- i = 0;
- }
- j = i;
- do {
- old = cmpxchg(nb->owners+i, NULL, event);
- if (!old)
+ if (old == event)
break;
- if (++i == max)
- i = 0;
- } while (i != j);
-done:
- if (!old)
- return &nb->event_constraints[i];
-
- return &emptyconstraint;
+ }
+
+ if (new == -1)
+ return &emptyconstraint;
+
+ return &nb->event_constraints[new];
}

static struct amd_nb *amd_alloc_nb(int cpu)
--
1.7.9.5

2012-11-15 21:32:19

by Jacob Shin

Subject: [PATCH 3/4] perf, x86: Move MSR address offset calculation to architecture specific files

Move the calculation of the MSR address offset for a given counter
index into architecture specific files. This prepares the way for
perf_event_amd to enable
counter addresses that are not contiguous -- for example AMD Family
15h processors have 6 core performance counters starting at 0xc0010200
and 4 northbridge performance counters starting at 0xc0010240.
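
As an illustration of the resulting mapping, here is an editorial sketch
assuming a family 15h part with both the core and northbridge counter
extensions, and with x86_pmu.eventsel/.perfctr pointing at
MSR_F15H_PERF_CTL/MSR_F15H_PERF_CTR:

/*
 * index 0..5 (core counters):
 *	offset = index * 2
 *	EVNTSEL = 0xc0010200 + offset, PERFCTR = 0xc0010201 + offset
 * index 6..9 (northbridge counters):
 *	offset = (0xc0010240 - 0xc0010200) + (index - 6) * 2
 *	EVNTSEL = 0xc0010200 + offset = 0xc0010240 + (index - 6) * 2
 *	PERFCTR = 0xc0010201 + offset = 0xc0010241 + (index - 6) * 2
 */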

Signed-off-by: Jacob Shin <[email protected]>
---
arch/x86/kernel/cpu/perf_event.h | 21 +++++---------------
arch/x86/kernel/cpu/perf_event_amd.c | 35 ++++++++++++++++++++++++++++++++++
2 files changed, 40 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index 271d257..aacf025 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -325,6 +325,7 @@ struct x86_pmu {
int (*schedule_events)(struct cpu_hw_events *cpuc, int n, int *assign);
unsigned eventsel;
unsigned perfctr;
+ int (*addr_offset)(int index);
u64 (*event_map)(int);
int max_events;
int num_counters;
@@ -444,28 +445,16 @@ extern u64 __read_mostly hw_cache_extra_regs

u64 x86_perf_event_update(struct perf_event *event);

-static inline int x86_pmu_addr_offset(int index)
-{
- int offset;
-
- /* offset = X86_FEATURE_PERFCTR_CORE ? index << 1 : index */
- alternative_io(ASM_NOP2,
- "shll $1, %%eax",
- X86_FEATURE_PERFCTR_CORE,
- "=a" (offset),
- "a" (index));
-
- return offset;
-}
-
static inline unsigned int x86_pmu_config_addr(int index)
{
- return x86_pmu.eventsel + x86_pmu_addr_offset(index);
+ return x86_pmu.eventsel +
+ (x86_pmu.addr_offset ? x86_pmu.addr_offset(index) : index);
}

static inline unsigned int x86_pmu_event_addr(int index)
{
- return x86_pmu.perfctr + x86_pmu_addr_offset(index);
+ return x86_pmu.perfctr +
+ (x86_pmu.addr_offset ? x86_pmu.addr_offset(index) : index);
}

int x86_setup_perfctr(struct perf_event *event);
diff --git a/arch/x86/kernel/cpu/perf_event_amd.c b/arch/x86/kernel/cpu/perf_event_amd.c
index 04ef43f..d6e3337 100644
--- a/arch/x86/kernel/cpu/perf_event_amd.c
+++ b/arch/x86/kernel/cpu/perf_event_amd.c
@@ -132,6 +132,40 @@ static u64 amd_pmu_event_map(int hw_event)
return amd_perfmon_event_map[hw_event];
}

+/*
+ * Previously calculated offsets
+ */
+static unsigned int addr_offsets[X86_PMC_IDX_MAX] __read_mostly;
+
+/*
+ * Legacy CPUs:
+ * 4 counters starting at 0xc0010000 each offset by 1
+ *
+ * CPUs with core performance counter extensions:
+ * 6 counters starting at 0xc0010200 each offset by 2
+ */
+static inline int amd_pmu_addr_offset(int index)
+{
+ int offset;
+
+ if (!index)
+ return index;
+
+ offset = addr_offsets[index];
+
+ if (offset)
+ return offset;
+
+ if (!cpu_has_perfctr_core)
+ offset = index;
+ else
+ offset = index << 1;
+
+ addr_offsets[index] = offset;
+
+ return offset;
+}
+
static int amd_pmu_hw_config(struct perf_event *event)
{
int ret;
@@ -570,6 +604,7 @@ static __initconst const struct x86_pmu amd_pmu = {
.schedule_events = x86_schedule_events,
.eventsel = MSR_K7_EVNTSEL0,
.perfctr = MSR_K7_PERFCTR0,
+ .addr_offset = amd_pmu_addr_offset,
.event_map = amd_pmu_event_map,
.max_events = ARRAY_SIZE(amd_perfmon_event_map),
.num_counters = AMD64_NUM_COUNTERS,
--
1.7.9.5

2012-11-15 21:32:17

by Jacob Shin

Subject: [PATCH 2/4] perf, amd: Generalize northbridge constraints code for family 15h

From: Robert Richter <[email protected]>

Generalize northbridge constraints code for family 10h so that later
we can reuse the same code path with other AMD processor families that
have the same northbridge event constraints.

Signed-off-by: Robert Richter <[email protected]>
Signed-off-by: Jacob Shin <[email protected]>
---
arch/x86/kernel/cpu/perf_event_amd.c | 43 ++++++++++++++++++++--------------
1 file changed, 25 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_amd.c b/arch/x86/kernel/cpu/perf_event_amd.c
index d60c5c7..04ef43f 100644
--- a/arch/x86/kernel/cpu/perf_event_amd.c
+++ b/arch/x86/kernel/cpu/perf_event_amd.c
@@ -188,20 +188,13 @@ static inline int amd_has_nb(struct cpu_hw_events *cpuc)
return nb && nb->nb_id != -1;
}

-static void amd_put_event_constraints(struct cpu_hw_events *cpuc,
- struct perf_event *event)
+static void __amd_put_nb_event_constraints(struct cpu_hw_events *cpuc,
+ struct perf_event *event)
{
- struct hw_perf_event *hwc = &event->hw;
struct amd_nb *nb = cpuc->amd_nb;
int i;

/*
- * only care about NB events
- */
- if (!(amd_has_nb(cpuc) && amd_is_nb_event(hwc)))
- return;
-
- /*
* need to scan whole list because event may not have
* been assigned during scheduling
*
@@ -247,12 +240,13 @@ static void amd_put_event_constraints(struct cpu_hw_events *cpuc,
*
* Given that resources are allocated (cmpxchg), they must be
* eventually freed for others to use. This is accomplished by
- * calling amd_put_event_constraints().
+ * calling __amd_put_nb_event_constraints()
*
* Non NB events are not impacted by this restriction.
*/
static struct event_constraint *
-amd_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event)
+__amd_get_nb_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event,
+ struct event_constraint *c)
{
struct hw_perf_event *hwc = &event->hw;
struct amd_nb *nb = cpuc->amd_nb;
@@ -260,12 +254,6 @@ amd_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event)
int idx, new = -1;

/*
- * if not NB event or no NB, then no constraints
- */
- if (!(amd_has_nb(cpuc) && amd_is_nb_event(hwc)))
- return &unconstrained;
-
- /*
* detect if already present, if so reuse
*
* cannot merge with actual allocation
@@ -275,7 +263,7 @@ amd_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event)
* because of successive calls to x86_schedule_events() from
* hw_perf_group_sched_in() without hw_perf_enable()
*/
- for (idx = 0; idx < x86_pmu.num_counters; idx++) {
+ for_each_set_bit(idx, c->idxmsk, X86_PMC_IDX_MAX) {
if (new == -1 || hwc->idx == idx)
/* assign free slot, prefer hwc->idx */
old = cmpxchg(nb->owners + idx, NULL, event);
@@ -391,6 +379,25 @@ static void amd_pmu_cpu_dead(int cpu)
}
}

+static struct event_constraint *
+amd_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event)
+{
+ /*
+ * if not NB event or no NB, then no constraints
+ */
+ if (!(amd_has_nb(cpuc) && amd_is_nb_event(&event->hw)))
+ return &unconstrained;
+
+ return __amd_get_nb_event_constraints(cpuc, event, &unconstrained);
+}
+
+static void amd_put_event_constraints(struct cpu_hw_events *cpuc,
+ struct perf_event *event)
+{
+ if (amd_has_nb(cpuc) && amd_is_nb_event(&event->hw))
+ __amd_put_nb_event_constraints(cpuc, event);
+}
+
PMU_FORMAT_ATTR(event, "config:0-7,32-35");
PMU_FORMAT_ATTR(umask, "config:8-15" );
PMU_FORMAT_ATTR(edge, "config:18" );
--
1.7.9.5

2012-11-15 21:33:27

by Jacob Shin

Subject: [PATCH 4/4] perf, amd: Enable northbridge performance counters on AMD family 15h

On AMD family 15h processors, there are 4 new performance counters
(in addition to 6 core performance counters) that can be used for
counting northbridge events (e.g. DRAM accesses). Their bit fields are
almost identical to the core performance counters. However, unlike the
core performance counters, these MSRs are shared between multiple
cores (that share the same northbridge). We will reuse the same code
path as the existing family 10h northbridge event constraints handler
to enforce this sharing.

These new counters are indexed contiguously right above the existing
core performance counters, and their indexes correspond to RDPMC ECX
values.
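
As a side note on the RDPMC correspondence mentioned above, a minimal
sketch (illustrative only; it assumes 6 core counters so the NB counters
sit at RDPMC indices 6-9, and that RDPMC is permitted in the caller's
context):

#include <stdint.h>

static inline uint64_t read_pmc(uint32_t idx)
{
	uint32_t lo, hi;

	/* idx 0-5: core counters, idx 6-9: northbridge counters */
	asm volatile("rdpmc" : "=a" (lo), "=d" (hi) : "c" (idx));
	return ((uint64_t)hi << 32) | lo;
}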

Signed-off-by: Jacob Shin <[email protected]>
---
arch/x86/include/asm/cpufeature.h | 2 +
arch/x86/include/asm/msr-index.h | 2 +
arch/x86/include/asm/perf_event.h | 6 ++
arch/x86/kernel/cpu/perf_event_amd.c | 116 +++++++++++++++++++++++++++-------
4 files changed, 104 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 8c297aa..b05c722 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -167,6 +167,7 @@
#define X86_FEATURE_TBM (6*32+21) /* trailing bit manipulations */
#define X86_FEATURE_TOPOEXT (6*32+22) /* topology extensions CPUID leafs */
#define X86_FEATURE_PERFCTR_CORE (6*32+23) /* core performance counter extensions */
+#define X86_FEATURE_PERFCTR_NB (6*32+24) /* NB performance counter extensions */

/*
* Auxiliary flags: Linux defined - For features scattered in various
@@ -308,6 +309,7 @@ extern const char * const x86_power_flags[32];
#define cpu_has_hypervisor boot_cpu_has(X86_FEATURE_HYPERVISOR)
#define cpu_has_pclmulqdq boot_cpu_has(X86_FEATURE_PCLMULQDQ)
#define cpu_has_perfctr_core boot_cpu_has(X86_FEATURE_PERFCTR_CORE)
+#define cpu_has_perfctr_nb boot_cpu_has(X86_FEATURE_PERFCTR_NB)
#define cpu_has_cx8 boot_cpu_has(X86_FEATURE_CX8)
#define cpu_has_cx16 boot_cpu_has(X86_FEATURE_CX16)
#define cpu_has_eager_fpu boot_cpu_has(X86_FEATURE_EAGER_FPU)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 7f0edce..e67ff1e 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -157,6 +157,8 @@
/* Fam 15h MSRs */
#define MSR_F15H_PERF_CTL 0xc0010200
#define MSR_F15H_PERF_CTR 0xc0010201
+#define MSR_F15H_NB_PERF_CTL 0xc0010240
+#define MSR_F15H_NB_PERF_CTR 0xc0010241

/* Fam 10h MSRs */
#define MSR_FAM10H_MMIO_CONF_BASE 0xc0010058
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 4fabcdf..a610ddb 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -29,9 +29,14 @@
#define ARCH_PERFMON_EVENTSEL_INV (1ULL << 23)
#define ARCH_PERFMON_EVENTSEL_CMASK 0xFF000000ULL

+#define AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE (1ULL << 36)
#define AMD_PERFMON_EVENTSEL_GUESTONLY (1ULL << 40)
#define AMD_PERFMON_EVENTSEL_HOSTONLY (1ULL << 41)

+#define AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT 37
+#define AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK \
+ (0xFULL << AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT)
+
#define AMD64_EVENTSEL_EVENT \
(ARCH_PERFMON_EVENTSEL_EVENT | (0x0FULL << 32))
#define INTEL_ARCH_EVENT_MASK \
@@ -48,6 +53,7 @@
AMD64_EVENTSEL_EVENT)
#define AMD64_NUM_COUNTERS 4
#define AMD64_NUM_COUNTERS_CORE 6
+#define AMD64_NUM_COUNTERS_NB 4

#define ARCH_PERFMON_UNHALTED_CORE_CYCLES_SEL 0x3c
#define ARCH_PERFMON_UNHALTED_CORE_CYCLES_UMASK (0x00 << 8)
diff --git a/arch/x86/kernel/cpu/perf_event_amd.c b/arch/x86/kernel/cpu/perf_event_amd.c
index d6e3337..2fb7b8c 100644
--- a/arch/x86/kernel/cpu/perf_event_amd.c
+++ b/arch/x86/kernel/cpu/perf_event_amd.c
@@ -143,10 +143,15 @@ static unsigned int addr_offsets[X86_PMC_IDX_MAX] __read_mostly;
*
* CPUs with core performance counter extensions:
* 6 counters starting at 0xc0010200 each offset by 2
+ *
+ * CPUs with north bridge performance counter extensions:
+ * 4 additional counters starting at 0xc0010240 each offset by 2
+ * (indexed right above either one of the above core counters)
*/
static inline int amd_pmu_addr_offset(int index)
{
int offset;
+ int ncore;

if (!index)
return index;
@@ -156,31 +161,28 @@ static inline int amd_pmu_addr_offset(int index)
if (offset)
return offset;

- if (!cpu_has_perfctr_core)
+ if (!cpu_has_perfctr_core) {
offset = index;
- else
+ ncore = AMD64_NUM_COUNTERS;
+ } else {
offset = index << 1;
+ ncore = AMD64_NUM_COUNTERS_CORE;
+ }
+
+ /* find offset of NB counters with respect to x86_pmu.eventsel */
+ if (cpu_has_perfctr_nb) {
+ if (index >= ncore && index < (ncore + AMD64_NUM_COUNTERS_NB))
+ offset = (MSR_F15H_NB_PERF_CTL - x86_pmu.eventsel) +
+ ((index - ncore) << 1);
+ }

addr_offsets[index] = offset;

return offset;
}

-static int amd_pmu_hw_config(struct perf_event *event)
+static int __amd_core_hw_config(struct perf_event *event)
{
- int ret;
-
- /* pass precise event sampling to ibs: */
- if (event->attr.precise_ip && get_ibs_caps())
- return -ENOENT;
-
- ret = x86_pmu_hw_config(event);
- if (ret)
- return ret;
-
- if (has_branch_stack(event))
- return -EOPNOTSUPP;
-
if (event->attr.exclude_host && event->attr.exclude_guest)
/*
* When HO == GO == 1 the hardware treats that as GO == HO == 0
@@ -194,10 +196,24 @@ static int amd_pmu_hw_config(struct perf_event *event)
else if (event->attr.exclude_guest)
event->hw.config |= AMD_PERFMON_EVENTSEL_HOSTONLY;

- if (event->attr.type != PERF_TYPE_RAW)
- return 0;
+ return 0;
+}
+
+static int __amd_nb_hw_config(struct perf_event *event)
+{
+ if (event->attr.exclude_user || event->attr.exclude_kernel ||
+ event->attr.exclude_host || event->attr.exclude_guest)
+ return -EINVAL;

- event->hw.config |= event->attr.config & AMD64_RAW_EVENT_MASK;
+ event->hw.config &= ~ARCH_PERFMON_EVENTSEL_USR;
+ event->hw.config &= ~ARCH_PERFMON_EVENTSEL_OS;
+
+ if (event->hw.config & ~(AMD64_EVENTSEL_EVENT |
+ ARCH_PERFMON_EVENTSEL_UMASK |
+ ARCH_PERFMON_EVENTSEL_INT |
+ AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE |
+ AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK))
+ return -EINVAL;

return 0;
}
@@ -215,6 +231,11 @@ static inline int amd_is_nb_event(struct hw_perf_event *hwc)
return (hwc->config & 0xe0) == 0xe0;
}

+static inline int amd_is_perfctr_nb_event(struct hw_perf_event *hwc)
+{
+ return cpu_has_perfctr_nb && amd_is_nb_event(hwc);
+}
+
static inline int amd_has_nb(struct cpu_hw_events *cpuc)
{
struct amd_nb *nb = cpuc->amd_nb;
@@ -222,6 +243,30 @@ static inline int amd_has_nb(struct cpu_hw_events *cpuc)
return nb && nb->nb_id != -1;
}

+static int amd_pmu_hw_config(struct perf_event *event)
+{
+ int ret;
+
+ /* pass precise event sampling to ibs: */
+ if (event->attr.precise_ip && get_ibs_caps())
+ return -ENOENT;
+
+ if (has_branch_stack(event))
+ return -EOPNOTSUPP;
+
+ ret = x86_pmu_hw_config(event);
+ if (ret)
+ return ret;
+
+ if (event->attr.type == PERF_TYPE_RAW)
+ event->hw.config |= event->attr.config & AMD64_RAW_EVENT_MASK;
+
+ if (amd_is_perfctr_nb_event(&event->hw))
+ return __amd_nb_hw_config(event);
+
+ return __amd_core_hw_config(event);
+}
+
static void __amd_put_nb_event_constraints(struct cpu_hw_events *cpuc,
struct perf_event *event)
{
@@ -323,6 +368,16 @@ __amd_get_nb_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *ev
if (new == -1)
return &emptyconstraint;

+ /* set up interrupts to be delivered only to this core */
+ if (cpu_has_perfctr_nb) {
+ struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
+
+ hwc->config |= AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE;
+ hwc->config &= ~AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK;
+ hwc->config |= (0ULL | (c->cpu_core_id)) <<
+ AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT;
+ }
+
return &nb->event_constraints[new];
}

@@ -520,6 +575,7 @@ static struct event_constraint amd_f15_PMC3 = EVENT_CONSTRAINT(0, 0x08, 0);
static struct event_constraint amd_f15_PMC30 = EVENT_CONSTRAINT_OVERLAP(0, 0x09, 0);
static struct event_constraint amd_f15_PMC50 = EVENT_CONSTRAINT(0, 0x3F, 0);
static struct event_constraint amd_f15_PMC53 = EVENT_CONSTRAINT(0, 0x38, 0);
+static struct event_constraint amd_f15_NBPMC30 = EVENT_CONSTRAINT(0, 0x3C0, 0);

static struct event_constraint *
amd_get_event_constraints_f15h(struct cpu_hw_events *cpuc, struct perf_event *event)
@@ -586,8 +642,11 @@ amd_get_event_constraints_f15h(struct cpu_hw_events *cpuc, struct perf_event *ev
return &amd_f15_PMC20;
}
case AMD_EVENT_NB:
- /* not yet implemented */
- return &emptyconstraint;
+ if (cpuc->is_fake)
+ return &amd_f15_NBPMC30;
+
+ return __amd_get_nb_event_constraints(cpuc, event,
+ &amd_f15_NBPMC30);
default:
return &emptyconstraint;
}
@@ -625,7 +684,7 @@ static __initconst const struct x86_pmu amd_pmu = {

static int setup_event_constraints(void)
{
- if (boot_cpu_data.x86 >= 0x15)
+ if (boot_cpu_data.x86 == 0x15)
x86_pmu.get_event_constraints = amd_get_event_constraints_f15h;
return 0;
}
@@ -655,6 +714,18 @@ static int setup_perfctr_core(void)
return 0;
}

+static int setup_perfctr_nb(void)
+{
+ if (!cpu_has_perfctr_nb)
+ return -ENODEV;
+
+ x86_pmu.num_counters += AMD64_NUM_COUNTERS_NB;
+
+ printk(KERN_INFO "perf: AMD northbridge performance counters detected\n");
+
+ return 0;
+}
+
__init int amd_pmu_init(void)
{
/* Performance-monitoring supported from K7 and later: */
@@ -665,6 +736,7 @@ __init int amd_pmu_init(void)

setup_event_constraints();
setup_perfctr_core();
+ setup_perfctr_nb();

/* Events are common for all AMDs */
memcpy(hw_cache_event_ids, amd_hw_cache_event_ids,
--
1.7.9.5

2012-11-15 21:40:26

by Jacob Shin

Subject: Re: [PATCH V2 0/4] perf, amd: Enable AMD family 15h northbridge counters

On Thu, Nov 15, 2012 at 03:31:49PM -0600, Jacob Shin wrote:
> The following patchset enables 4 additional performance counters in
> AMD family 15h processors that count northbridge events -- such as
> the number of DRAM accesses.
>
> This patchset is based on top of previous work done by Robert Richter
> <[email protected]> :
>
> https://lkml.org/lkml/2012/6/19/324

Sorry, this is a bit unclear; let me clarify: this patchset takes 2 of
Robert's patches from above and adds 2 more of my own. So these 4
patches are all that's needed to enable AMD family 15h northbridge
performance counters.

If things look okay, please apply to perf/core.

Thanks!

>
> The main differences are:
>
> - The northbridge counters are indexed contiguously right above the
> core performance counters.
>
> - MSR address offset calculations are moved to architecture specific
> files.
>
> - Interrupts are set up to be delivered only to a single core.
>
> V2:
> Separate out Robert's patches, and add properly ordered certificates
> of origin.
>
> Jacob Shin (2):
> perf, x86: Move MSR address offset calculation to architecture
> specific files
> perf, amd: Enable northbridge performance counters on AMD family 15h
>
> Robert Richter (2):
> perf, amd: Rework northbridge event constraints handler
> perf, amd: Generalize northbridge constraints code for family 15h
>
> arch/x86/include/asm/cpufeature.h | 2 +
> arch/x86/include/asm/msr-index.h | 2 +
> arch/x86/include/asm/perf_event.h | 6 +
> arch/x86/kernel/cpu/perf_event.h | 21 +--
> arch/x86/kernel/cpu/perf_event_amd.c | 246 ++++++++++++++++++++++++----------
> 5 files changed, 187 insertions(+), 90 deletions(-)
>
> --
> 1.7.9.5
>

2012-11-16 18:43:55

by Robert Richter

Subject: Re: [PATCH 4/4] perf, amd: Enable northbridge performance counters on AMD family 15h

Jacob,

On 15.11.12 15:31:53, Jacob Shin wrote:
> On AMD family 15h processors, there are 4 new performance counters
> (in addition to 6 core performance counters) that can be used for
> counting northbridge events (i.e. DRAM accesses). Their bit fields are
> almost identical to the core performance counters. However, unlike the
> core performance counters, these MSRs are shared between multiple
> cores (that share the same northbridge). We will reuse the same code
> path as existing family 10h northbridge event constraints handler
> logic to enforce this sharing.
>
> These new counters are indexed contiguously right above the existing
> core performance counters, and their indexes correspond to RDPMC ECX
> values.
>
> Signed-off-by: Jacob Shin <[email protected]>

your approach looks ok to me in general, but see my comments inline.

> @@ -156,31 +161,28 @@ static inline int amd_pmu_addr_offset(int index)
> if (offset)
> return offset;
>
> - if (!cpu_has_perfctr_core)
> + if (!cpu_has_perfctr_core) {
> offset = index;
> - else
> + ncore = AMD64_NUM_COUNTERS;
> + } else {
> offset = index << 1;
> + ncore = AMD64_NUM_COUNTERS_CORE;
> + }
> +
> + /* find offset of NB counters with respect to x86_pmu.eventsel */
> + if (cpu_has_perfctr_nb) {
> + if (index >= ncore && index < (ncore + AMD64_NUM_COUNTERS_NB))
> + offset = (MSR_F15H_NB_PERF_CTL - x86_pmu.eventsel) +
> + ((index - ncore) << 1);
> + }

There is duplicate calculation of the offset in some cases. Better to
avoid this.

> +static int __amd_nb_hw_config(struct perf_event *event)
> +{
> + if (event->attr.exclude_user || event->attr.exclude_kernel ||
> + event->attr.exclude_host || event->attr.exclude_guest)
> + return -EINVAL;
>
> - event->hw.config |= event->attr.config & AMD64_RAW_EVENT_MASK;
> + event->hw.config &= ~ARCH_PERFMON_EVENTSEL_USR;
> + event->hw.config &= ~ARCH_PERFMON_EVENTSEL_OS;
> +
> + if (event->hw.config & ~(AMD64_EVENTSEL_EVENT |
> + ARCH_PERFMON_EVENTSEL_UMASK |
> + ARCH_PERFMON_EVENTSEL_INT |
> + AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE |
> + AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK))
> + return -EINVAL;
>
> return 0;
> }

Comments are missing and an AMD64_NB_EVENT_MASK macro should be
defined for the above. See my previous version for reference:

/*
* AMD NB counters (MSRs 0xc0010240 etc.) do not support the following
* flags:
*
* Host/Guest Only
* Counter Mask
* Invert Comparison
* Edge Detect
* Operating-System Mode
* User Mode
*
* Try to fix the config for default settings, otherwise fail.
*/
static int amd_nb_event_config(struct perf_event *event)
{
if (!amd_is_nb_perfctr_event(&event->hw))
return 0;

if (event->attr.exclude_host || event->attr.exclude_guest
|| event->attr.exclude_user || event->attr.exclude_kernel)
goto fail;

event->hw.config &= ~(ARCH_PERFMON_EVENTSEL_USR | ARCH_PERFMON_EVENTSEL_OS);

if (event->hw.config & ~(AMD64_NB_EVENT_MASK | ARCH_PERFMON_EVENTSEL_INT))
goto fail;

return 0;
fail:
pr_debug("Invalid nb counter config value: %016Lx\n", event->hw.config);
return -EINVAL;
}


> @@ -323,6 +368,16 @@ __amd_get_nb_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *ev
> if (new == -1)
> return &emptyconstraint;
>
> + /* set up interrupts to be delivered only to this core */
> + if (cpu_has_perfctr_nb) {
> + struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
> +
> + hwc->config |= AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE;
> + hwc->config &= ~AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK;
> + hwc->config |= (0ULL | (c->cpu_core_id)) <<
> + AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT;
> + }

Looks like a hack to me. The constraints handler is only supposed to
determine constraints and not to touch anything in the event's
structure. This should be done later when setting up hwc->config in
amd_nb_event_config() or so.

I also do not think that smp_processor_id() is the right thing to do
here. Since cpu_hw_events is per-cpu the cpu is already selected.

> +
> return &nb->event_constraints[new];
> }
>
> @@ -520,6 +575,7 @@ static struct event_constraint amd_f15_PMC3 = EVENT_CONSTRAINT(0, 0x08, 0);
> static struct event_constraint amd_f15_PMC30 = EVENT_CONSTRAINT_OVERLAP(0, 0x09, 0);
> static struct event_constraint amd_f15_PMC50 = EVENT_CONSTRAINT(0, 0x3F, 0);
> static struct event_constraint amd_f15_PMC53 = EVENT_CONSTRAINT(0, 0x38, 0);
> +static struct event_constraint amd_f15_NBPMC30 = EVENT_CONSTRAINT(0, 0x3C0, 0);

The counter index mask depends on the number of core counters which is
either 4 or 6 (depending on cpu_has_perfctr_core).

> static int setup_event_constraints(void)
> {
> - if (boot_cpu_data.x86 >= 0x15)
> + if (boot_cpu_data.x86 == 0x15)
> x86_pmu.get_event_constraints = amd_get_event_constraints_f15h;

Since this does not cover family 16h anymore, you also need to extend
amd_get_event_constraints() to set up nb counters with
__amd_get_nb_event_constraints() if cpu_has_perfctr_nb is set.

> return 0;
> }
> @@ -655,6 +714,18 @@ static int setup_perfctr_core(void)
> return 0;
> }
>
> +static int setup_perfctr_nb(void)
> +{
> + if (!cpu_has_perfctr_nb)
> + return -ENODEV;
> +
> + x86_pmu.num_counters += AMD64_NUM_COUNTERS_NB;

You should add a nb counter mask here which is used for nb counters.

The mask can be either 0x3c0 or 0x0f0 depending on cpu_has_perfctr_core;
you will need it later at various locations.

In general I would also try to write the code in a way that makes
further cpu_has_perfctr_nb lookups unnecessary. There are many such
tests in your code.

-Robert

2012-11-16 19:00:42

by Jacob Shin

Subject: Re: [PATCH 4/4] perf, amd: Enable northbridge performance counters on AMD family 15h

On Fri, Nov 16, 2012 at 07:43:44PM +0100, Robert Richter wrote:
> Jacob,
>
> On 15.11.12 15:31:53, Jacob Shin wrote:
> > On AMD family 15h processors, there are 4 new performance counters
> > (in addition to 6 core performance counters) that can be used for
> > counting northbridge events (i.e. DRAM accesses). Their bit fields are
> > almost identical to the core performance counters. However, unlike the
> > core performance counters, these MSRs are shared between multiple
> > cores (that share the same northbridge). We will reuse the same code
> > path as existing family 10h northbridge event constraints handler
> > logic to enforce this sharing.
> >
> > These new counters are indexed contiguously right above the existing
> > core performance counters, and their indexes correspond to RDPMC ECX
> > values.
> >
> > Signed-off-by: Jacob Shin <[email protected]>
>
> your approach looks ok to me in general, but see my comments inline.
>
> > @@ -156,31 +161,28 @@ static inline int amd_pmu_addr_offset(int index)
> > if (offset)
> > return offset;
> >
> > - if (!cpu_has_perfctr_core)
> > + if (!cpu_has_perfctr_core) {
> > offset = index;
> > - else
> > + ncore = AMD64_NUM_COUNTERS;
> > + } else {
> > offset = index << 1;
> > + ncore = AMD64_NUM_COUNTERS_CORE;
> > + }
> > +
> > + /* find offset of NB counters with respect to x86_pmu.eventsel */
> > + if (cpu_has_perfctr_nb) {
> > + if (index >= ncore && index < (ncore + AMD64_NUM_COUNTERS_NB))
> > + offset = (MSR_F15H_NB_PERF_CTL - x86_pmu.eventsel) +
> > + ((index - ncore) << 1);
> > + }
>
> There is duplicate calculation of the offset in some cases. Better to
> avoid this.

Which cases? The code calculates the offset for a given index the very
first time it is called, stores it, and uses that stored offset from
then on. My [PATCH 3/4] sets that up.

>
> > +static int __amd_nb_hw_config(struct perf_event *event)
> > +{
> > + if (event->attr.exclude_user || event->attr.exclude_kernel ||
> > + event->attr.exclude_host || event->attr.exclude_guest)
> > + return -EINVAL;
> >
> > - event->hw.config |= event->attr.config & AMD64_RAW_EVENT_MASK;
> > + event->hw.config &= ~ARCH_PERFMON_EVENTSEL_USR;
> > + event->hw.config &= ~ARCH_PERFMON_EVENTSEL_OS;
> > +
> > + if (event->hw.config & ~(AMD64_EVENTSEL_EVENT |
> > + ARCH_PERFMON_EVENTSEL_UMASK |
> > + ARCH_PERFMON_EVENTSEL_INT |
> > + AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE |
> > + AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK))
> > + return -EINVAL;
> >
> > return 0;
> > }
>
> Comments are missing and an AMD64_NB_EVENT_MASK macro should be
> defined for the above. See my previous version for reference:
>
> /*
> * AMD NB counters (MSRs 0xc0010240 etc.) do not support the following
> * flags:
> *
> * Host/Guest Only
> * Counter Mask
> * Invert Comparison
> * Edge Detect
> * Operating-System Mode
> * User Mode
> *
> * Try to fix the config for default settings, otherwise fail.
> */
> static int amd_nb_event_config(struct perf_event *event)
> {
> if (!amd_is_nb_perfctr_event(&event->hw))
> return 0;
>
> if (event->attr.exclude_host || event->attr.exclude_guest
> || event->attr.exclude_user || event->attr.exclude_kernel)
> goto fail;
>
> event->hw.config &= ~(ARCH_PERFMON_EVENTSEL_USR | ARCH_PERFMON_EVENTSEL_OS);
>
> if (event->hw.config & ~(AMD64_NB_EVENT_MASK | ARCH_PERFMON_EVENTSEL_INT))
> goto fail;
>
> return 0;
> fail:
> pr_debug("Invalid nb counter config value: %016Lx\n", event->hw.config);
> return -EINVAL;
> }
>
>
> > @@ -323,6 +368,16 @@ __amd_get_nb_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *ev
> > if (new == -1)
> > return &emptyconstraint;
> >
> > + /* set up interrupts to be delivered only to this core */
> > + if (cpu_has_perfctr_nb) {
> > + struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
> > +
> > + hwc->config |= AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE;
> > + hwc->config &= ~AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK;
> > + hwc->config |= (0ULL | (c->cpu_core_id)) <<
> > + AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT;
> > + }
>
> Looks like a hack to me. The constraints handler is only supposed to
> determine constraints and not to touch anything in the event's
> structure. This should be done later when setting up hwc->config in
> amd_nb_event_config() or so.

Hm.. is the hwc->config called after constraints have been set up
already? If so, I'll change it ..

>
> I also do not think that smp_processor_id() is the right thing to do
> here. Since cpu_hw_events is per-cpu the cpu is already selected.

Yeah, I could not figure out how to get the cpu number from cpuc. Is
there a container_of kind of thing that I can do to get the cpu number?

>
> > +
> > return &nb->event_constraints[new];
> > }
> >
> > @@ -520,6 +575,7 @@ static struct event_constraint amd_f15_PMC3 = EVENT_CONSTRAINT(0, 0x08, 0);
> > static struct event_constraint amd_f15_PMC30 = EVENT_CONSTRAINT_OVERLAP(0, 0x09, 0);
> > static struct event_constraint amd_f15_PMC50 = EVENT_CONSTRAINT(0, 0x3F, 0);
> > static struct event_constraint amd_f15_PMC53 = EVENT_CONSTRAINT(0, 0x38, 0);
> > +static struct event_constraint amd_f15_NBPMC30 = EVENT_CONSTRAINT(0, 0x3C0, 0);
>
> The counter index mask depends on the number of core counters which is
> either 4 or 6 (depending on cpu_has_perfctr_core).
>
> > static int setup_event_constraints(void)
> > {
> > - if (boot_cpu_data.x86 >= 0x15)
> > + if (boot_cpu_data.x86 == 0x15)
> > x86_pmu.get_event_constraints = amd_get_event_constraints_f15h;
>
> Since this does not cover family 16h anymore, you also need to extend
> amd_get_event_constraints() to set up nb counters with
> __amd_get_nb_event_constraints() if cpu_has_perfctr_nb is set.

Yes family 16h will be covered by a separate patch at a later date.

>
> > return 0;
> > }
> > @@ -655,6 +714,18 @@ static int setup_perfctr_core(void)
> > return 0;
> > }
> >
> > +static int setup_perfctr_nb(void)
> > +{
> > + if (!cpu_has_perfctr_nb)
> > + return -ENODEV;
> > +
> > + x86_pmu.num_counters += AMD64_NUM_COUNTERS_NB;
>
> You should add a nb counter mask here which is used for nb counters.
>
> The mask can be either 0x3c0 or 0x0f0 depending on cpu_has_perfctr_core;
> you will need it later at various locations.
>
> In general I would also try to write the code in a way that makes
> further cpu_has_perfctr_nb lookups unnecessary. There are many such
> tests in your code.

Okay will spin V3 soon.

>
> -Robert
>

2012-11-16 19:32:32

by Robert Richter

Subject: Re: [PATCH 4/4] perf, amd: Enable northbridge performance counters on AMD family 15h

On 16.11.12 13:00:30, Jacob Shin wrote:
> On Fri, Nov 16, 2012 at 07:43:44PM +0100, Robert Richter wrote:
> > On 15.11.12 15:31:53, Jacob Shin wrote:
> > > @@ -156,31 +161,28 @@ static inline int amd_pmu_addr_offset(int index)
> > > if (offset)
> > > return offset;
> > >
> > > - if (!cpu_has_perfctr_core)
> > > + if (!cpu_has_perfctr_core) {
> > > offset = index;
> > > - else
> > > + ncore = AMD64_NUM_COUNTERS;
> > > + } else {

First calculation:

> > > offset = index << 1;
> > > + ncore = AMD64_NUM_COUNTERS_CORE;
> > > + }
> > > +
> > > + /* find offset of NB counters with respect to x86_pmu.eventsel */
> > > + if (cpu_has_perfctr_nb) {
> > > + if (index >= ncore && index < (ncore + AMD64_NUM_COUNTERS_NB))

Second calculation:

> > > + offset = (MSR_F15H_NB_PERF_CTL - x86_pmu.eventsel) +
> > > + ((index - ncore) << 1);
> > > + }
> >
> > There is duplicate calculation of the offset in some cases. Better to
> > avoid this.
>
> Which cases? The code calculates the offset for a given index the very
> first time it is called, stores it, and uses that stored offset from
> then on. My [PATCH 3/4] sets that up.

One case above.

It looks like the paths should be defined more clearly.

> > > @@ -323,6 +368,16 @@ __amd_get_nb_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *ev
> > > if (new == -1)
> > > return &emptyconstraint;
> > >
> > > + /* set up interrupts to be delivered only to this core */
> > > + if (cpu_has_perfctr_nb) {
> > > + struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
> > > +
> > > + hwc->config |= AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE;
> > > + hwc->config &= ~AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK;
> > > + hwc->config |= (0ULL | (c->cpu_core_id)) <<
> > > + AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT;
> > > + }
> >
> > Looks like a hack to me. The constraints handler is only supposed to
> > determine constraints and not to touch anything in the event's
> > structure. This should be done later when setting up hwc->config in
> > amd_nb_event_config() or so.
>
> Hm.. is the hwc->config called after constraints have been set up
> already? If so, I'll change it ..

Should be, since the hw register can be setup only after the counter
is selected.

>
> >
> > I also do not think that smp_processor_id() is the right thing to do
> > here. Since cpu_hw_events is per-cpu the cpu is already selected.
>
> Yeah, I could not figure out how to get the cpu number from cpuc. Is
> there a container_of kind of thing that I can do to get the cpu number
> ?

At some point event->cpu is assigned, I think.

-Robert

2012-11-16 22:12:58

by Jacob Shin

Subject: Re: [PATCH 4/4] perf, amd: Enable northbridge performance counters on AMD family 15h

On Fri, Nov 16, 2012 at 08:32:24PM +0100, Robert Richter wrote:
> On 16.11.12 13:00:30, Jacob Shin wrote:
> > On Fri, Nov 16, 2012 at 07:43:44PM +0100, Robert Richter wrote:
> > > On 15.11.12 15:31:53, Jacob Shin wrote:
> > > > @@ -156,31 +161,28 @@ static inline int amd_pmu_addr_offset(int index)
> > > > if (offset)
> > > > return offset;
> > > >
> > > > - if (!cpu_has_perfctr_core)
> > > > + if (!cpu_has_perfctr_core) {
> > > > offset = index;
> > > > - else
> > > > + ncore = AMD64_NUM_COUNTERS;
> > > > + } else {
>
> First calculation:
>
> > > > offset = index << 1;
> > > > + ncore = AMD64_NUM_COUNTERS_CORE;
> > > > + }
> > > > +
> > > > + /* find offset of NB counters with respect to x86_pmu.eventsel */
> > > > + if (cpu_has_perfctr_nb) {
> > > > + if (index >= ncore && index < (ncore + AMD64_NUM_COUNTERS_NB))
>
> Second calculation:
>
> > > > + offset = (MSR_F15H_NB_PERF_CTL - x86_pmu.eventsel) +
> > > > + ((index - ncore) << 1);
> > > > + }
> > >
> > > There is duplicate calculation of the offset in some cases. Better to
> > > avoid this.
> >
> > Which cases? The code calculates the offset for a given index the very
> > first time it is called, stores it, and uses that stored offset from
> > then on. My [PATCH 3/4] sets that up.
>
> One case above.
>
> It looks like the paths should be defined more clearly.

Per comments above the function, I was logically going down the cases,
1. is this a legacy counter?
2. is this a perfctr_core counter?
3. is this a perfctr_nb counter?

To me it seems clear ..

>
> > > > @@ -323,6 +368,16 @@ __amd_get_nb_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *ev
> > > > if (new == -1)
> > > > return &emptyconstraint;
> > > >
> > > > + /* set up interrupts to be delivered only to this core */
> > > > + if (cpu_has_perfctr_nb) {
> > > > + struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
> > > > +
> > > > + hwc->config |= AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE;
> > > > + hwc->config &= ~AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK;
> > > > + hwc->config |= (0ULL | (c->cpu_core_id)) <<
> > > > + AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT;
> > > > + }
> > >
> > > Looks like a hack to me. The constraints handler is only supposed to
> > > determine constraints and not to touch anything in the event's
> > > structure. This should be done later when setting up hwc->config in
> > > amd_nb_event_config() or so.
> >
> > Hm.. is the hwc->config called after constraints have been set up
> > already? If so, I'll change it ..
>
> Should be, since the hw register can be setup only after the counter
> is selected.
>
> >
> > >
> > > I also do not think that smp_processor_id() is the right thing to do
> > > here. Since cpu_hw_events is per-cpu the cpu is already selected.
> >
> > Yeah, I could not figure out how to get the cpu number from cpuc. Is
> > there a container_of kind of thing that I can do to get the cpu number
> > ?
>
> At some point event->cpu is assigned, I think.

Great, thanks for this hint!

So here is v3, how does this look? If okay, could you add Reviewed-by or
Acked-by ? After that, I'll send out the patchbomb again with review/ack
on patch [3/4] and [4/4]

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 8c297aa..b05c722 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -167,6 +167,7 @@
#define X86_FEATURE_TBM (6*32+21) /* trailing bit manipulations */
#define X86_FEATURE_TOPOEXT (6*32+22) /* topology extensions CPUID leafs */
#define X86_FEATURE_PERFCTR_CORE (6*32+23) /* core performance counter extensions */
+#define X86_FEATURE_PERFCTR_NB (6*32+24) /* NB performance counter extensions */

/*
* Auxiliary flags: Linux defined - For features scattered in various
@@ -308,6 +309,7 @@ extern const char * const x86_power_flags[32];
#define cpu_has_hypervisor boot_cpu_has(X86_FEATURE_HYPERVISOR)
#define cpu_has_pclmulqdq boot_cpu_has(X86_FEATURE_PCLMULQDQ)
#define cpu_has_perfctr_core boot_cpu_has(X86_FEATURE_PERFCTR_CORE)
+#define cpu_has_perfctr_nb boot_cpu_has(X86_FEATURE_PERFCTR_NB)
#define cpu_has_cx8 boot_cpu_has(X86_FEATURE_CX8)
#define cpu_has_cx16 boot_cpu_has(X86_FEATURE_CX16)
#define cpu_has_eager_fpu boot_cpu_has(X86_FEATURE_EAGER_FPU)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 7f0edce..e67ff1e 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -157,6 +157,8 @@
/* Fam 15h MSRs */
#define MSR_F15H_PERF_CTL 0xc0010200
#define MSR_F15H_PERF_CTR 0xc0010201
+#define MSR_F15H_NB_PERF_CTL 0xc0010240
+#define MSR_F15H_NB_PERF_CTR 0xc0010241

/* Fam 10h MSRs */
#define MSR_FAM10H_MMIO_CONF_BASE 0xc0010058
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 4fabcdf..df97186 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -29,9 +29,14 @@
#define ARCH_PERFMON_EVENTSEL_INV (1ULL << 23)
#define ARCH_PERFMON_EVENTSEL_CMASK 0xFF000000ULL

+#define AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE (1ULL << 36)
#define AMD_PERFMON_EVENTSEL_GUESTONLY (1ULL << 40)
#define AMD_PERFMON_EVENTSEL_HOSTONLY (1ULL << 41)

+#define AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT 37
+#define AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK \
+ (0xFULL << AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT)
+
#define AMD64_EVENTSEL_EVENT \
(ARCH_PERFMON_EVENTSEL_EVENT | (0x0FULL << 32))
#define INTEL_ARCH_EVENT_MASK \
@@ -46,8 +51,12 @@
#define AMD64_RAW_EVENT_MASK \
(X86_RAW_EVENT_MASK | \
AMD64_EVENTSEL_EVENT)
+#define AMD64_NB_EVENT_MASK \
+ (AMD64_EVENTSEL_EVENT | \
+ ARCH_PERFMON_EVENTSEL_UMASK)
#define AMD64_NUM_COUNTERS 4
#define AMD64_NUM_COUNTERS_CORE 6
+#define AMD64_NUM_COUNTERS_NB 4

#define ARCH_PERFMON_UNHALTED_CORE_CYCLES_SEL 0x3c
#define ARCH_PERFMON_UNHALTED_CORE_CYCLES_UMASK (0x00 << 8)
diff --git a/arch/x86/kernel/cpu/perf_event_amd.c b/arch/x86/kernel/cpu/perf_event_amd.c
index d6e3337..80ad803 100644
--- a/arch/x86/kernel/cpu/perf_event_amd.c
+++ b/arch/x86/kernel/cpu/perf_event_amd.c
@@ -132,6 +132,8 @@ static u64 amd_pmu_event_map(int hw_event)
return amd_perfmon_event_map[hw_event];
}

+static struct event_constraint *amd_nb_event_constraint;
+
/*
* Previously calculated offsets
*/
@@ -143,10 +145,15 @@ static unsigned int addr_offsets[X86_PMC_IDX_MAX] __read_mostly;
*
* CPUs with core performance counter extensions:
* 6 counters starting at 0xc0010200 each offset by 2
+ *
+ * CPUs with north bridge performance counter extensions:
+ * 4 additional counters starting at 0xc0010240 each offset by 2
+ * (indexed right above either one of the above core counters)
*/
static inline int amd_pmu_addr_offset(int index)
{
int offset;
+ int ncore;

if (!index)
return index;
@@ -156,31 +163,27 @@ static inline int amd_pmu_addr_offset(int index)
if (offset)
return offset;

- if (!cpu_has_perfctr_core)
+ if (!cpu_has_perfctr_core) {
offset = index;
- else
+ ncore = AMD64_NUM_COUNTERS;
+ } else {
offset = index << 1;
+ ncore = AMD64_NUM_COUNTERS_CORE;
+ }
+
+ /* find offset of NB counters with respect to x86_pmu.eventsel */
+ if (amd_nb_event_constraint &&
+ test_bit(index, amd_nb_event_constraint->idxmsk))
+ offset = (MSR_F15H_NB_PERF_CTL - x86_pmu.eventsel) +
+ ((index - ncore) << 1);

addr_offsets[index] = offset;

return offset;
}

-static int amd_pmu_hw_config(struct perf_event *event)
+static int __amd_core_hw_config(struct perf_event *event)
{
- int ret;
-
- /* pass precise event sampling to ibs: */
- if (event->attr.precise_ip && get_ibs_caps())
- return -ENOENT;
-
- ret = x86_pmu_hw_config(event);
- if (ret)
- return ret;
-
- if (has_branch_stack(event))
- return -EOPNOTSUPP;
-
if (event->attr.exclude_host && event->attr.exclude_guest)
/*
* When HO == GO == 1 the hardware treats that as GO == HO == 0
@@ -194,10 +197,41 @@ static int amd_pmu_hw_config(struct perf_event *event)
else if (event->attr.exclude_guest)
event->hw.config |= AMD_PERFMON_EVENTSEL_HOSTONLY;

- if (event->attr.type != PERF_TYPE_RAW)
- return 0;
+ return 0;
+}
+
+/*
+ * NB counters do not support the following event select bits:
+ * Host/Guest only
+ * Counter mask
+ * Invert counter mask
+ * Edge detect
+ * OS/User mode
+ */
+static int __amd_nb_hw_config(struct perf_event *event)
+{
+ if (event->attr.exclude_user || event->attr.exclude_kernel ||
+ event->attr.exclude_host || event->attr.exclude_guest)
+ return -EINVAL;
+
+ /* set up interrupts to be delivered only to this core */
+ if (cpu_has_perfctr_nb) {
+ struct cpuinfo_x86 *c = &cpu_data(event->cpu);
+
+ event->hw.config |= AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE;
+ event->hw.config &= ~AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK;
+ event->hw.config |= (0ULL | (c->cpu_core_id)) <<
+ AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT;
+ }
+
+ event->hw.config &= ~(ARCH_PERFMON_EVENTSEL_USR |
+ ARCH_PERFMON_EVENTSEL_OS);

- event->hw.config |= event->attr.config & AMD64_RAW_EVENT_MASK;
+ if (event->hw.config & ~(AMD64_NB_EVENT_MASK |
+ ARCH_PERFMON_EVENTSEL_INT |
+ AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE |
+ AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK))
+ return -EINVAL;

return 0;
}
@@ -215,6 +249,11 @@ static inline int amd_is_nb_event(struct hw_perf_event *hwc)
return (hwc->config & 0xe0) == 0xe0;
}

+static inline int amd_is_perfctr_nb_event(struct hw_perf_event *hwc)
+{
+ return amd_nb_event_constraint && amd_is_nb_event(hwc);
+}
+
static inline int amd_has_nb(struct cpu_hw_events *cpuc)
{
struct amd_nb *nb = cpuc->amd_nb;
@@ -222,6 +261,30 @@ static inline int amd_has_nb(struct cpu_hw_events *cpuc)
return nb && nb->nb_id != -1;
}

+static int amd_pmu_hw_config(struct perf_event *event)
+{
+ int ret;
+
+ /* pass precise event sampling to ibs: */
+ if (event->attr.precise_ip && get_ibs_caps())
+ return -ENOENT;
+
+ if (has_branch_stack(event))
+ return -EOPNOTSUPP;
+
+ ret = x86_pmu_hw_config(event);
+ if (ret)
+ return ret;
+
+ if (event->attr.type == PERF_TYPE_RAW)
+ event->hw.config |= event->attr.config & AMD64_RAW_EVENT_MASK;
+
+ if (amd_is_perfctr_nb_event(&event->hw))
+ return __amd_nb_hw_config(event);
+
+ return __amd_core_hw_config(event);
+}
+
static void __amd_put_nb_event_constraints(struct cpu_hw_events *cpuc,
struct perf_event *event)
{
@@ -422,7 +485,10 @@ amd_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event)
if (!(amd_has_nb(cpuc) && amd_is_nb_event(&event->hw)))
return &unconstrained;

- return __amd_get_nb_event_constraints(cpuc, event, &unconstrained);
+ return __amd_get_nb_event_constraints(cpuc, event,
+ amd_nb_event_constraint ?
+ amd_nb_event_constraint :
+ &unconstrained);
}

static void amd_put_event_constraints(struct cpu_hw_events *cpuc,
@@ -521,6 +587,9 @@ static struct event_constraint amd_f15_PMC30 = EVENT_CONSTRAINT_OVERLAP(0, 0x09,
static struct event_constraint amd_f15_PMC50 = EVENT_CONSTRAINT(0, 0x3F, 0);
static struct event_constraint amd_f15_PMC53 = EVENT_CONSTRAINT(0, 0x38, 0);

+static struct event_constraint amd_NBPMC96 = EVENT_CONSTRAINT(0, 0x3C0, 0);
+static struct event_constraint amd_NBPMC74 = EVENT_CONSTRAINT(0, 0xF0, 0);
+
static struct event_constraint *
amd_get_event_constraints_f15h(struct cpu_hw_events *cpuc, struct perf_event *event)
{
@@ -586,8 +655,11 @@ amd_get_event_constraints_f15h(struct cpu_hw_events *cpuc, struct perf_event *ev
return &amd_f15_PMC20;
}
case AMD_EVENT_NB:
- /* not yet implemented */
- return &emptyconstraint;
+ if (cpuc->is_fake)
+ return amd_nb_event_constraint;
+
+ return __amd_get_nb_event_constraints(cpuc, event,
+ amd_nb_event_constraint);
default:
return &emptyconstraint;
}
@@ -625,7 +697,7 @@ static __initconst const struct x86_pmu amd_pmu = {

static int setup_event_constraints(void)
{
- if (boot_cpu_data.x86 >= 0x15)
+ if (boot_cpu_data.x86 == 0x15)
x86_pmu.get_event_constraints = amd_get_event_constraints_f15h;
return 0;
}
@@ -655,6 +727,23 @@ static int setup_perfctr_core(void)
return 0;
}

+static int setup_perfctr_nb(void)
+{
+ if (!cpu_has_perfctr_nb)
+ return -ENODEV;
+
+ x86_pmu.num_counters += AMD64_NUM_COUNTERS_NB;
+
+ if (cpu_has_perfctr_core)
+ amd_nb_event_constraint = &amd_NBPMC96;
+ else
+ amd_nb_event_constraint = &amd_NBPMC74;
+
+ printk(KERN_INFO "perf: AMD northbridge performance counters detected\n");
+
+ return 0;
+}
+
__init int amd_pmu_init(void)
{
/* Performance-monitoring supported from K7 and later: */
@@ -665,6 +754,7 @@ __init int amd_pmu_init(void)

setup_event_constraints();
setup_perfctr_core();
+ setup_perfctr_nb();

/* Events are common for all AMDs */
memcpy(hw_cache_event_ids, amd_hw_cache_event_ids,

2012-11-18 16:35:31

by Robert Richter

Subject: Re: [PATCH 4/4] perf, amd: Enable northbridge performance counters on AMD family 15h

On 16.11.12 13:00:30, Jacob Shin wrote:
> > > static int setup_event_constraints(void)
> > > {
> > > - if (boot_cpu_data.x86 >= 0x15)
> > > + if (boot_cpu_data.x86 == 0x15)
> > > x86_pmu.get_event_constraints = amd_get_event_constraints_f15h;
> >
> > Since this does not cover family 16h anymore, you also need to extend
> > amd_get_event_constraints() to set up nb counters with
> > __amd_get_nb_event_constraints() if cpu_has_perfctr_nb is set.
>
> Yes family 16h will be covered by a separate patch at a later date.

This is fine if functionality is added later. But if you run the
patch as it is on family 16h, you need to make sure nb counters are
not enabled at all. This is not the case, since you have several checks
for cpu_has_perfctr_nb. This code must also run on family 16h as it
is, so you need to either strictly disable nb counters in this case or
enable them.

-Robert

2012-11-26 12:15:34

by Robert Richter

Subject: Re: [PATCH 4/4] perf, amd: Enable northbridge performance counters on AMD family 15h

Jacob,

On 16.11.12 15:57:18, Jacob Shin wrote:
> > It looks like the paths should be defined more clearly.
>
> Per comments above the function, I was logically going down the cases,
> 1. is this a legacy counter?
> 2. is this a perfctr_core counter?
> 3. is this a perfctr_nb counter?
>
> To me it seems clear ..

See below...

> So here is v3, how does this look? If okay, could you add Reviewed-by or
> Acked-by ? After that, I'll send out the patchbomb again with review/ack
> on patch [3/4] and [4/4]

I will ack your resent patches if they look good to me, and we will
leave it to the maintainers to add my Acked-by after your
Signed-off-by.

The direction looks good to me now, but see my comments below.

> diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> index 4fabcdf..df97186 100644
> --- a/arch/x86/include/asm/perf_event.h
> +++ b/arch/x86/include/asm/perf_event.h
> @@ -29,9 +29,14 @@
> #define ARCH_PERFMON_EVENTSEL_INV (1ULL << 23)
> #define ARCH_PERFMON_EVENTSEL_CMASK 0xFF000000ULL
>
> +#define AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE (1ULL << 36)
> #define AMD_PERFMON_EVENTSEL_GUESTONLY (1ULL << 40)
> #define AMD_PERFMON_EVENTSEL_HOSTONLY (1ULL << 41)
>
> +#define AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT 37
> +#define AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK \
> + (0xFULL << AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT)
> +

This is a bit shorter:

AMD64_EVENTSEL_INT_CORE_SEL_*

AMD64_* refers to AMD architectural definitions in the AMD64 manuals.
AMD_PERFMON_* is actually not consistent here. Arghh, Joerg.

> #define AMD64_EVENTSEL_EVENT \
> (ARCH_PERFMON_EVENTSEL_EVENT | (0x0FULL << 32))
> #define INTEL_ARCH_EVENT_MASK \
> @@ -46,8 +51,12 @@
> #define AMD64_RAW_EVENT_MASK \
> (X86_RAW_EVENT_MASK | \
> AMD64_EVENTSEL_EVENT)
> +#define AMD64_NB_EVENT_MASK \
> + (AMD64_EVENTSEL_EVENT | \
> + ARCH_PERFMON_EVENTSEL_UMASK)

Since this is equivalent to AMD64_RAW_EVENT_MASK a better name would
be AMD64_RAW_EVENT_MASK_NB.

> #define AMD64_NUM_COUNTERS 4
> #define AMD64_NUM_COUNTERS_CORE 6
> +#define AMD64_NUM_COUNTERS_NB 4
>
> #define ARCH_PERFMON_UNHALTED_CORE_CYCLES_SEL 0x3c
> #define ARCH_PERFMON_UNHALTED_CORE_CYCLES_UMASK (0x00 << 8)

> static inline int amd_pmu_addr_offset(int index)
> {
> int offset;
> + int ncore;
>
> if (!index)
> return index;
> @@ -156,31 +163,27 @@ static inline int amd_pmu_addr_offset(int index)
> if (offset)
> return offset;
>
> - if (!cpu_has_perfctr_core)
> + if (!cpu_has_perfctr_core) {
> offset = index;
> - else
> + ncore = AMD64_NUM_COUNTERS;
> + } else {
> offset = index << 1;
> + ncore = AMD64_NUM_COUNTERS_CORE;
> + }

We still go through one of the blocks above even if we have a nb
counter. See the suggested solution below.

> +
> + /* find offset of NB counters with respect to x86_pmu.eventsel */
> + if (amd_nb_event_constraint &&
> + test_bit(index, amd_nb_event_constraint->idxmsk))
> + offset = (MSR_F15H_NB_PERF_CTL - x86_pmu.eventsel) +
> + ((index - ncore) << 1);

I prefer the following paths:

if (amd_nb_event_constraint && ...) {
/* nb counter */
...
} else if (!cpu_has_perfctr_core) {
/* core counter */
...
} else {
/* legacy counter */
...
}

>
> addr_offsets[index] = offset;
>
> return offset;
> }
>
> -static int amd_pmu_hw_config(struct perf_event *event)
> +static int __amd_core_hw_config(struct perf_event *event)

No need for underscores...

> +/*
> + * NB counters do not support the following event select bits:
> + * Host/Guest only
> + * Counter mask
> + * Invert counter mask
> + * Edge detect
> + * OS/User mode
> + */
> +static int __amd_nb_hw_config(struct perf_event *event)

No need for underscores...

> +{
> + if (event->attr.exclude_user || event->attr.exclude_kernel ||
> + event->attr.exclude_host || event->attr.exclude_guest)
> + return -EINVAL;
> +
> + /* set up interrupts to be delivered only to this core */
> + if (cpu_has_perfctr_nb) {
> + struct cpuinfo_x86 *c = &cpu_data(event->cpu);
> +
> + event->hw.config |= AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE;
> + event->hw.config &= ~AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK;
> + event->hw.config |= (0ULL | (c->cpu_core_id)) <<

Better make the cast visible:

(u64)(c->cpu_core_id) << AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT

> + AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT;
> + }
> +
> + event->hw.config &= ~(ARCH_PERFMON_EVENTSEL_USR |
> + ARCH_PERFMON_EVENTSEL_OS);
>
> - event->hw.config |= event->attr.config & AMD64_RAW_EVENT_MASK;

Since this is calculated before ...

> + if (event->hw.config & ~(AMD64_NB_EVENT_MASK |
> + ARCH_PERFMON_EVENTSEL_INT |
> + AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE |
> + AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK))

... we can check:

if (event->hw.config & ~AMD64_NB_EVENT_MASK)

if we move core_sel setup after the check.

Move comment from above here too.

> + return -EINVAL;
>
> return 0;

> @@ -422,7 +485,10 @@ amd_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event)
> if (!(amd_has_nb(cpuc) && amd_is_nb_event(&event->hw)))
> return &unconstrained;
>
> - return __amd_get_nb_event_constraints(cpuc, event, &unconstrained);
> + return __amd_get_nb_event_constraints(cpuc, event,
> + amd_nb_event_constraint ?
> + amd_nb_event_constraint :
> + &unconstrained);

An option would be to always set amd_nb_event_constraint to
&unconstrained by default. Or move the check into
__amd_get_nb_event_constraints().

-Robert

2012-11-28 17:42:37

by Jacob Shin

Subject: Re: [PATCH 4/4] perf, amd: Enable northbridge performance counters on AMD family 15h

Robert,

On Fri, Nov 16, 2012 at 08:32:24PM +0100, Robert Richter wrote:
> On 16.11.12 13:00:30, Jacob Shin wrote:
> > On Fri, Nov 16, 2012 at 07:43:44PM +0100, Robert Richter wrote:
> > > On 15.11.12 15:31:53, Jacob Shin wrote:
> > > > @@ -323,6 +368,16 @@ __amd_get_nb_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *ev
> > > > if (new == -1)
> > > > return &emptyconstraint;
> > > >
> > > > + /* set up interrupts to be delivered only to this core */
> > > > + if (cpu_has_perfctr_nb) {
> > > > + struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
> > > > +
> > > > + hwc->config |= AMD_PERFMON_EVENTSEL_INT_CORE_ENABLE;
> > > > + hwc->config &= ~AMD_PERFMON_EVENTSEL_INT_CORE_SEL_MASK;
> > > > + hwc->config |= (0ULL | (c->cpu_core_id)) <<
> > > > + AMD_PERFMON_EVENTSEL_INT_CORE_SEL_SHIFT;
> > > > + }
> > >
> > > Looks like a hack to me. The constraints handler is only supposed to
> > > determine constraints and not to touch anything in the event's
> > > structure. This should be done later when setting up hwc->config in
> > > amd_nb_event_config() or so.
> >
> > Hm.. is the hwc->config called after constraints have been set up
> > already? If so, I'll change it ..
>
> Should be, since the hw register can be setup only after the counter
> is selected.

Ahh .. looking at this further, it looks like ->config is called
before constraints are set up (before we know what cpu we are going to
run on).

Sorry for not seeing this sooner, but it really looks like the event
constraints function is the right time to set up the INT_CORE_SEL
bits. Are you okay with this?

> > > I also do not think that smp_processor_id() is the right thing to do
> > > here. Since cpu_hw_events is per-cpu the cpu is already selected.
> >
> > Yeah, I could not figure out how to get the cpu number from cpuc. Is
> > there a container_of kind of thing that I can do to get the cpu number
> > ?
>
> At some point event->cpu is assigned, I think.

Furthermore, event->cpu can only be used if the --cpu flag is specified
from userland; otherwise event->cpu is 0xffff. And we do not know what
cpu we are going to be running on until the schedule happens.

I tried to figure out if there was a way to get from cpu_hw_events to
a cpu number, but I didn't see any obvious way. The cpu_hw_events is
derived via __get_cpu_var in the schedule function that calls the
constraints handler, so smp_processor_id seems okay to use here.

..

So I'll have to change things back, unless do you have any other
ideas ?

Thanks,

-Jacob

>
> -Robert
>