Subject: [PATCH 1/2] perf/core: Adding capability to disable PMUs event multiplexing

When PMUs are registered, perf core enables event multiplexing
support by default. There is no provision for a PMU to disable
event multiplexing if it needs to do so due to unavoidable
circumstances such as hardware errata.

Add a PMU capability flag, PERF_PMU_CAP_NO_MUX_EVENTS, and support
to allow PMUs to explicitly disable event multiplexing.

Signed-off-by: Ganapatrao Prabhakerrao Kulkarni <[email protected]>
---
include/linux/perf_event.h | 1 +
kernel/events/core.c | 8 ++++++++
2 files changed, 9 insertions(+)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 61448c19a132..9e18d841daf7 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -247,6 +247,7 @@ struct perf_event;
#define PERF_PMU_CAP_HETEROGENEOUS_CPUS 0x40
#define PERF_PMU_CAP_NO_EXCLUDE 0x80
#define PERF_PMU_CAP_AUX_OUTPUT 0x100
+#define PERF_PMU_CAP_NO_MUX_EVENTS 0x200

/**
* struct pmu - generic performance monitoring unit
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 4655adbbae10..65452784f81c 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1092,6 +1092,10 @@ static void __perf_mux_hrtimer_init(struct perf_cpu_context *cpuctx, int cpu)
if (pmu->task_ctx_nr == perf_sw_context)
return;

+ /* No PMU support */
+ if (pmu->capabilities & PERF_PMU_CAP_NO_MUX_EVENTS)
+ return 0;
+
/*
* check default is sane, if not set then force to
* default interval (1/tick)
@@ -1117,6 +1121,10 @@ static int perf_mux_hrtimer_restart(struct perf_cpu_context *cpuctx)
if (pmu->task_ctx_nr == perf_sw_context)
return 0;

+ /* No PMU support */
+ if (pmu->capabilities & PERF_PMU_CAP_NO_MUX_EVENTS)
+ return 0;
+
raw_spin_lock_irqsave(&cpuctx->hrtimer_lock, flags);
if (!cpuctx->hrtimer_active) {
cpuctx->hrtimer_active = 1;
--
2.17.1
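
For illustration, a back-end driver affected by such an erratum would opt
out of event multiplexing by setting the new capability flag before
registering its PMU. A minimal, hypothetical sketch (the example_* names
are placeholders and not taken from any existing driver):

	static int example_uncore_pmu_register(struct example_uncore_pmu *epmu)
	{
		epmu->pmu = (struct pmu) {
			.task_ctx_nr	= perf_invalid_context,
			/* opt out of event multiplexing (this patch) */
			.capabilities	= PERF_PMU_CAP_NO_MUX_EVENTS,
			.event_init	= example_uncore_event_init,
			.add		= example_uncore_event_add,
			.del		= example_uncore_event_del,
			.start		= example_uncore_event_start,
			.stop		= example_uncore_event_stop,
			.read		= example_uncore_event_read,
		};

		/* -1: let perf core assign a dynamic PMU type id */
		return perf_pmu_register(&epmu->pmu, "example_uncore", -1);
	}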


2019-11-06 09:42:18

by Peter Zijlstra

Subject: Re: [PATCH 1/2] perf/core: Adding capability to disable PMUs event multiplexing

On Wed, Nov 06, 2019 at 01:01:40AM +0000, Ganapatrao Prabhakerrao Kulkarni wrote:
> When PMUs are registered, perf core enables event multiplexing
> support by default. There is no provision for a PMU to disable
> event multiplexing if it needs to do so due to unavoidable
> circumstances such as hardware errata.
>
> Add a PMU capability flag, PERF_PMU_CAP_NO_MUX_EVENTS, and support
> to allow PMUs to explicitly disable event multiplexing.

This doesn't make sense, multiplexing relies on nothing that normal
event scheduling doesn't also rely on.

Either you can schedule different sets of events, or you cannot.

NAK.

2019-11-06 10:00:27

by Peter Zijlstra

Subject: Re: [PATCH 1/2] perf/core: Adding capability to disable PMUs event multiplexing

On Wed, Nov 06, 2019 at 10:40:32AM +0100, Peter Zijlstra wrote:
> On Wed, Nov 06, 2019 at 01:01:40AM +0000, Ganapatrao Prabhakerrao Kulkarni wrote:
> > When PMUs are registered, perf core enables event multiplexing
> > support by default. There is no provision for a PMU to disable
> > event multiplexing if it needs to do so due to unavoidable
> > circumstances such as hardware errata.
> >
> > Add a PMU capability flag, PERF_PMU_CAP_NO_MUX_EVENTS, and support
> > to allow PMUs to explicitly disable event multiplexing.
>
> This doesn't make sense, multiplexing relies on nothing that normal
> event scheduling doesn't also rely on.
>
> Either you can schedule different sets of events, or you cannot.

More specifically, how is a reschedule due to rotation any different
than a reschedule due to context switch?

In both cases we do a full reprogram of the PMU.

2019-11-06 11:31:52

by Mark Rutland

Subject: Re: [PATCH 1/2] perf/core: Adding capability to disable PMUs event multiplexing

On Wed, Nov 06, 2019 at 01:01:40AM +0000, Ganapatrao Prabhakerrao Kulkarni wrote:
> When PMUs are registered, perf core enables event multiplexing
> support by default. There is no provision for a PMU to disable
> event multiplexing if it needs to do so due to unavoidable
> circumstances such as hardware errata.
>
> Add a PMU capability flag, PERF_PMU_CAP_NO_MUX_EVENTS, and support
> to allow PMUs to explicitly disable event multiplexing.

Even without multiplexing, this PMU activity can happen when switching
tasks, or when creating/destroying events, so as-is I don't think this
makes much sense.

If there's an erratum whereby heavy access to the PMU can lock up the
core, and it's possible to work around that by minimizing accesses, that
should be done in the back-end PMU driver.

Either way, this minimizes the utility of the PMU.

Thanks,
Mark.


2019-11-06 23:30:31

by Ganapatrao Kulkarni

Subject: Re: [PATCH 1/2] perf/core: Adding capability to disable PMUs event multiplexing

Hi Peter, Mark,

On Wed, Nov 6, 2019 at 3:28 AM Mark Rutland <[email protected]> wrote:
>
> On Wed, Nov 06, 2019 at 01:01:40AM +0000, Ganapatrao Prabhakerrao Kulkarni wrote:
> > When PMUs are registered, perf core enables event multiplexing
> > support by default. There is no provision for a PMU to disable
> > event multiplexing if it needs to do so due to unavoidable
> > circumstances such as hardware errata.
> >
> > Add a PMU capability flag, PERF_PMU_CAP_NO_MUX_EVENTS, and support
> > to allow PMUs to explicitly disable event multiplexing.
>
> Even without multiplexing, this PMU activity can happen when switching
> tasks, or when creating/destroying events, so as-is I don't think this
> makes much sense.
>
> If there's an erratum whereby heavy access to the PMU can lock up the
> core, and it's possible to work around that by minimizing accesses, that
> should be done in the back-end PMU driver.

As described in the erratum, if there is heavy access to memory (like a
stream application running) and, along with that, the PMU control
registers are also accessed frequently, then a CPU lockup is seen.

I ran perf stat with 4 events of the thunderx2 PMU as well as with 6
events for a stream application.
For the 4-event run there is no event multiplexing, whereas for the
6-event run the events are multiplexed.

For 4 event run:
No of times pmu->add is called: 10
No of times pmu->del is called: 10
No of times pmu->read is called: 310

For 6 events run:
No of times pmu->add is called: 5216
No of times pmu->del is called: 5216
No of times pmu->read is called: 5216

The issue happens when add and del are called too many times, as seen
in the 6-event case.
The PMU hardware control registers are programmed when the add and del
functions are called.
For pmu->read there are no issues, since there is no h/w issue with the
data path.

This is an uncore driver; I am not sure context switching has any influence on this?
Please suggest how we can fix this in the back-end PMU driver without
any perf core help.

>
> Either way, this minimizes the utility of the PMU.
>
> Thanks,
> Mark.
>
> >
> > Signed-off-by: Ganapatrao Prabhakerrao Kulkarni <[email protected]>
> > ---
> > include/linux/perf_event.h | 1 +
> > kernel/events/core.c | 8 ++++++++
> > 2 files changed, 9 insertions(+)
> >
> > diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> > index 61448c19a132..9e18d841daf7 100644
> > --- a/include/linux/perf_event.h
> > +++ b/include/linux/perf_event.h
> > @@ -247,6 +247,7 @@ struct perf_event;
> > #define PERF_PMU_CAP_HETEROGENEOUS_CPUS 0x40
> > #define PERF_PMU_CAP_NO_EXCLUDE 0x80
> > #define PERF_PMU_CAP_AUX_OUTPUT 0x100
> > +#define PERF_PMU_CAP_NO_MUX_EVENTS 0x200
> >
> > /**
> > * struct pmu - generic performance monitoring unit
> > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > index 4655adbbae10..65452784f81c 100644
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -1092,6 +1092,10 @@ static void __perf_mux_hrtimer_init(struct perf_cpu_context *cpuctx, int cpu)
> > if (pmu->task_ctx_nr == perf_sw_context)
> > return;
> >
> > + /* No PMU support */
> > + if (pmu->capabilities & PERF_PMU_CAP_NO_MUX_EVENTS)
> > + return 0;
> > +
> > /*
> > * check default is sane, if not set then force to
> > * default interval (1/tick)
> > @@ -1117,6 +1121,10 @@ static int perf_mux_hrtimer_restart(struct perf_cpu_context *cpuctx)
> > if (pmu->task_ctx_nr == perf_sw_context)
> > return 0;
> >
> > + /* No PMU support */
> > + if (pmu->capabilities & PERF_PMU_CAP_NO_MUX_EVENTS)
> > + return 0;
> > +
> > raw_spin_lock_irqsave(&cpuctx->hrtimer_lock, flags);
> > if (!cpuctx->hrtimer_active) {
> > cpuctx->hrtimer_active = 1;
> > --
> > 2.17.1
> >

Thanks,
Ganapat

2019-11-07 14:38:20

by Ganapatrao Kulkarni

Subject: Re: [PATCH 1/2] perf/core: Adding capability to disable PMUs event multiplexing

Hi Mark,

On Wed, Nov 6, 2019 at 3:28 PM Ganapatrao Kulkarni <[email protected]> wrote:
> [...]

The diff below implements the workaround without any perf core support.
Please review and let me know your thoughts.

root@SBR-26>perf>> git diff
diff --git a/drivers/perf/thunderx2_pmu.c b/drivers/perf/thunderx2_pmu.c
index 43d76c85da56..d5c90a93e96b 100644
--- a/drivers/perf/thunderx2_pmu.c
+++ b/drivers/perf/thunderx2_pmu.c
@@ -69,6 +69,7 @@ struct tx2_uncore_pmu {
int node;
int cpu;
u32 max_counters;
+ bool events_mux_disable;
u32 prorate_factor;
u32 max_events;
u64 hrtimer_interval;
@@ -442,6 +443,8 @@ static int tx2_uncore_event_init(struct perf_event *event)
if (!tx2_uncore_validate_event_group(event))
return -EINVAL;

+ /* reset flag */
+ tx2_pmu->events_mux_disable = false;
return 0;
}

@@ -490,10 +493,19 @@ static int tx2_uncore_event_add(struct perf_event *event, int flags)

tx2_pmu = pmu_to_tx2_pmu(event->pmu);

+ /* ThunderX2 erratum 221:
+ * disable support for event multiplexing by limiting
+ * the number of events to the available hardware counters.
+ */
+ if (tx2_pmu->events_mux_disable)
+ return -EOPNOTSUPP;
+
/* Allocate a free counter */
hwc->idx = alloc_counter(tx2_pmu);
- if (hwc->idx < 0)
+ if (hwc->idx < 0) {
+ tx2_pmu->events_mux_disable = true;
return -EAGAIN;
+ }

tx2_pmu->events[hwc->idx] = event;
/* set counter control and data registers base address */
@@ -648,6 +660,7 @@ static struct tx2_uncore_pmu *tx2_uncore_pmu_init_dev(struct device *dev,
tx2_pmu->dev = dev;
tx2_pmu->type = type;
tx2_pmu->base = base;
+ tx2_pmu->events_mux_disable = false;
tx2_pmu->node = dev_to_node(dev);
INIT_LIST_HEAD(&tx2_pmu->entry);

2019-11-07 15:03:11

by Mark Rutland

Subject: Re: [PATCH 1/2] perf/core: Adding capability to disable PMUs event multiplexing

On Wed, Nov 06, 2019 at 03:28:46PM -0800, Ganapatrao Kulkarni wrote:
> Hi Peter, Mark,
>
> On Wed, Nov 6, 2019 at 3:28 AM Mark Rutland <[email protected]> wrote:
> > [...]
>
> As described in the erratum, if there is heavy access to memory (like a
> stream application running) and, along with that, the PMU control
> registers are also accessed frequently, then a CPU lockup is seen.

Ok. So the issue is the frequency of access to those registers.

Which registers does that apply to?

Is this the case for only reads, only writes, or both?

Does the frequency of access actually matter, or is it just more likely
that we see the issue with a greater number of accesses? i.e. the
increased frequency increases the probability of hitting the issue.

I'd really like a better description of the HW issue here.

> I ran perf stat with 4 events of the thunderx2 PMU as well as with 6
> events for a stream application.
> For the 4-event run there is no event multiplexing, whereas for the
> 6-event run the events are multiplexed.
>
> For 4 event run:
> No of times pmu->add is called: 10
> No of times pmu->del is called: 10
> No of times pmu->read is called: 310
>
> For 6 events run:
> No of times pmu->add is called: 5216
> No of times pmu->del is called: 5216
> No of times pmu->read is called: 5216
>
> The issue happens when add and del are called too many times, as seen
> in the 6-event case.

Sure, but I can achieve similar by creating/destroying events in a loop.
Multiplexing is _one_ way to cause this behaviour, but it's not the
_only_ way.

> The PMU hardware control registers are programmed when the add and del
> functions are called.
> For pmu->read there are no issues, since there is no h/w issue with the data path.

As above, can you please describe the hardware conditions more
thoroughly?

> This is an uncore driver; I am not sure context switching has any influence on this?

I believe that today it's possible for this to happen for cgroup events,
as nonsensical as it may be to have a cgroup-bound uncore PMU event.

> Please suggest how we can fix this in the back-end PMU driver without
> any perf core help.

In order to do so, I need a better explanation of the underlying
hardware issue.

Thanks,
Mark.

2019-11-07 15:08:34

by Peter Zijlstra

Subject: Re: [PATCH 1/2] perf/core: Adding capability to disable PMUs event multiplexing

On Wed, Nov 06, 2019 at 03:28:46PM -0800, Ganapatrao Kulkarni wrote:
> The issue happens when add and del are called too many times, as seen
> in the 6-event case.
> The PMU hardware control registers are programmed when the add and del
> functions are called.
> For pmu->read there are no issues, since there is no h/w issue with the data path.
>
> Please suggest how we can fix this in the back-end PMU driver without
> any perf core help.

As Mark already said, a (much) better description of the actual hardware
fail is required, but one possible solution would be to add a busy spin
delay when writing to the hardware registers.

Something like:

u64 now, ts = this_cpu_read(tx2_throttle);

while ((now = cycle_counter()) <= ts)
	cpu_relax();

write_register(...);

this_cpu_write(tx2_throttle, now + delay_ns);

Other known tricks include reading the register back until it contains
what you just wrote to it.
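
A minimal sketch of that read-back trick, assuming the control registers
read back the value just written and that the driver's accesses go through
writel()/readl() on tx2_pmu->base (the helper below is hypothetical, not
existing code):

	static void tx2_uncore_write_sync(struct tx2_uncore_pmu *tx2_pmu,
					  u32 val, unsigned long offset)
	{
		writel(val, tx2_pmu->base + offset);

		/* spin until the write is visible before the next access */
		while (readl(tx2_pmu->base + offset) != val)
			cpu_relax();
	}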

But really, first properly describe how your hardware is buggered.

2019-11-07 15:47:08

by Ganapatrao Kulkarni

Subject: Re: [PATCH 1/2] perf/core: Adding capability to disable PMUs event multiplexing

On Thu, Nov 7, 2019 at 6:52 AM Mark Rutland <[email protected]> wrote:
>
> On Wed, Nov 06, 2019 at 03:28:46PM -0800, Ganapatrao Kulkarni wrote:
> > Hi Peter, Mark,
> >
> > [...]
> >
> > As described in the erratum, if there is heavy access to memory (like a
> > stream application running) and, along with that, the PMU control
> > registers are also accessed frequently, then a CPU lockup is seen.
>
> Ok. So the issue is the frequency of access to those registers.
>
> Which registers does that apply to?

The control registers which are used to start and stop the counters, and the
register which is used to set the event type.
>
> Is this the case for only reads, only writes, or both?

It is a write issue: the h/w block has limited write buffers, and
overflowing them gets the hardware into a weird state when memory
transactions are high.

>
> Does the frequency of access actually matter, or is it just more likely
> that we see the issue with a greater number of accesses? i.e. the
> increased frequency increases the probability of hitting the issue.

This is one scenario.
Any higher access to PMU register, when memory is busy, needs to be controlled.

>
> I'd really like a better description of the HW issue here.
>
> > I ran perf stat with 4 events of the thunderx2 PMU as well as with 6
> > events for a stream application.
> > For the 4-event run there is no event multiplexing, whereas for the
> > 6-event run the events are multiplexed.
> >
> > For 4 event run:
> > No of times pmu->add is called: 10
> > No of times pmu->del is called: 10
> > No of times pmu->read is called: 310
> >
> > For 6 events run:
> > No of times pmu->add is called: 5216
> > No of times pmu->del is called: 5216
> > No of times pmu->read is called: 5216
> >
> > The issue happens when add and del are called too many times, as seen
> > in the 6-event case.
>
> Sure, but I can achieve similar by creating/destroying events in a loop.
> Multiplexing is _one_ way to cause this behaviour, but it's not the
> _only_ way.

I agree, there may be other use cases also; however, I am trying to fix
the common use case.

>
> > The PMU hardware control registers are programmed when the add and del
> > functions are called.
> > For pmu->read there are no issues, since there is no h/w issue with the data path.
>
> As above, can you please describe the hardware conditions more
> thoroughly?
>
> > This is an uncore driver; I am not sure context switching has any influence on this?
>
> I believe that today it's possible for this to happen for cgroup events,
> as nonsensical as it may be to have a cgroup-bound uncore PMU event.
>
> > Please suggest how we can fix this in the back-end PMU driver without
> > any perf core help.
>
> In order to do so, I need a better explanation of the underlying
> hardware issue.

I will try to add more explanation to the erratum; however, please let me
know if you have any specific questions.

>
> Thanks,
> Mark.

Thanks,
Ganapat

2019-11-07 15:56:59

by Mark Rutland

Subject: Re: [PATCH 1/2] perf/core: Adding capability to disable PMUs event multiplexing

On Thu, Nov 07, 2019 at 07:45:07AM -0800, Ganapatrao Kulkarni wrote:
> On Thu, Nov 7, 2019 at 6:52 AM Mark Rutland <[email protected]> wrote:
> >
> > On Wed, Nov 06, 2019 at 03:28:46PM -0800, Ganapatrao Kulkarni wrote:
> > > [...]
> > >
> > > As described in the erratum, if there is heavy access to memory (like a
> > > stream application running) and, along with that, the PMU control
> > > registers are also accessed frequently, then a CPU lockup is seen.
> >
> > Ok. So the issue is the frequency of access to those registers.
> >
> > Which registers does that apply to?
>
> The control registers which are used to start and stop the counters, and the
> register which is used to set the event type.

Ok. Thanks for confirming those details.

> > Is this the case for only reads, only writes, or both?
>
> It is a write issue: the h/w block has limited write buffers, and
> overflowing them gets the hardware into a weird state when memory
> transactions are high.

Just to confirm -- is it writes to the control registers that are
buffered, or is it that buffering of normal memory accesses goes wrong
when the control registers are under heavy load?

> > Does the frequency of access actually matter, or is it just more likely
> > that we see the issue with a greater number of accesses? i.e. the
> > increased frequency increases the probability of hitting the issue.
>
> This is one scenario.
> Any higher access to PMU register, when memory is busy, needs to be controlled.

Could you explain what you mean by "higher access to PMU register"?

Is there some threshold under which this is guaranteed to be ok? Or is
it probabilistic, and we need to minimize accesses at all times?

> > I'd really like a better description of the HW issue here.
> >
> > > I ran perf stat with 4 events of the thunderx2 PMU as well as with 6
> > > events for a stream application.
> > > For the 4-event run there is no event multiplexing, whereas for the
> > > 6-event run the events are multiplexed.
> > >
> > > For 4 event run:
> > > No of times pmu->add is called: 10
> > > No of times pmu->del is called: 10
> > > No of times pmu->read is called: 310
> > >
> > > For 6 events run:
> > > No of times pmu->add is called: 5216
> > > No of times pmu->del is called: 5216
> > > No of times pmu->read is called: 5216
> > >
> > > The issue happens when add and del are called too many times, as seen
> > > in the 6-event case.
> >
> > Sure, but I can achieve similar by creating/destroying events in a loop.
> > Multiplexing is _one_ way to cause this behaviour, but it's not the
> > _only_ way.
>
> I agree, there may be other use cases also; however, I am trying to fix
> the common use case.

I appreciate what you're trying to do, but I think it's the wrong
approach.

Depending on the precise conditions under which this happens, I think
that we may be able to solve this entirely within the TX2 PMU driver,
handling all cases and also not breaking multiplexing.

Thanks,
Mark.

2019-11-07 23:18:55

by kernel test robot

Subject: Re: [PATCH 1/2] perf/core: Adding capability to disable PMUs event multiplexing

Hi Ganapatrao,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on arm-soc/for-next]
[also build test WARNING on v5.4-rc6 next-20191107]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url: https://github.com/0day-ci/linux/commits/Ganapatrao-Prabhakerrao-Kulkarni/perf-core-Adding-capability-to-disable-PMUs-event-multiplexing/20191108-054345
base: https://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc.git for-next
config: i386-tinyconfig (attached as .config)
compiler: gcc-7 (Debian 7.4.0-14) 7.4.0
reproduce:
# save the attached .config to linux build tree
make ARCH=i386

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <[email protected]>

All warnings (new ones prefixed by >>):

kernel/events/core.c: In function '__perf_mux_hrtimer_init':
>> kernel/events/core.c:1097:10: warning: 'return' with a value, in function returning void
return 0;
^
kernel/events/core.c:1085:13: note: declared here
static void __perf_mux_hrtimer_init(struct perf_cpu_context *cpuctx, int cpu)
^~~~~~~~~~~~~~~~~~~~~~~

vim +/return +1097 kernel/events/core.c

1084
1085 static void __perf_mux_hrtimer_init(struct perf_cpu_context *cpuctx, int cpu)
1086 {
1087 struct hrtimer *timer = &cpuctx->hrtimer;
1088 struct pmu *pmu = cpuctx->ctx.pmu;
1089 u64 interval;
1090
1091 /* no multiplexing needed for SW PMU */
1092 if (pmu->task_ctx_nr == perf_sw_context)
1093 return;
1094
1095 /* No PMU support */
1096 if (pmu->capabilities & PERF_PMU_CAP_NO_MUX_EVENTS)
> 1097 return 0;
1098
1099 /*
1100 * check default is sane, if not set then force to
1101 * default interval (1/tick)
1102 */
1103 interval = pmu->hrtimer_interval_ms;
1104 if (interval < 1)
1105 interval = pmu->hrtimer_interval_ms = PERF_CPU_HRTIMER;
1106
1107 cpuctx->hrtimer_interval = ns_to_ktime(NSEC_PER_MSEC * interval);
1108
1109 raw_spin_lock_init(&cpuctx->hrtimer_lock);
1110 hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED_HARD);
1111 timer->function = perf_mux_hrtimer_handler;
1112 }
1113

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation

