2016-04-06 14:03:09

by Waiman Long

Subject: [PATCH] x86/hpet: Reduce HPET counter read contention

On a large system with many CPUs, using HPET as the clock source can
have a significant impact on the overall system performance because
of the following reasons:
1) There is a single HPET counter shared by all the CPUs.
2) HPET counter reading is a very slow operation.

Using HPET as the default clock source may happen when, for example,
the TSC clock calibration exceeds the allowable tolerance. The
performance slowdown can be so severe that the system may even crash
because of an NMI watchdog soft lockup.

This patch attempts to reduce HPET read contention by using the fact
that if more than one task is trying to access HPET at the same time,
it is more efficient for one task in the group to read the HPET
counter and share it with the rest of the group than for each group
member to read the HPET counter individually.

This is done by using a combination word with a sequence number and
a bit lock. The task that gets the bit lock is responsible for
reading the HPET counter and updating the sequence number. The others
monitor the sequence number for a change and then pick up the saved
HPET counter value accordingly.
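
In rough pseudo-C, the read path looks like this (the helper name is
made up for illustration, and the interrupt disabling and the
timeout/reset handling in the actual patch below are omitted):

static cycle_t read_hpet_shared(void)
{
	int seq = READ_ONCE(hpet_save.seq);

	if (!HPET_SEQ_LOCKED(seq) &&
	    (cmpxchg(&hpet_save.seq, seq, seq + 1) == seq)) {
		/* Lock bit acquired: do the real (slow) HPET read. */
		cycle_t time = (cycle_t)hpet_readl(HPET_COUNTER);

		WRITE_ONCE(hpet_save.hpet, time);
		/* Drop the lock bit and advance the sequence number. */
		smp_store_release(&hpet_save.seq, seq + 2);
		return time;
	}

	/* Another CPU is reading HPET; wait for it to publish its value. */
	while (HPET_SEQ_LOCKED(READ_ONCE(hpet_save.seq)))
		cpu_relax();

	return READ_ONCE(hpet_save.hpet);
}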

On a 4-socket Haswell-EX box with 72 cores (HT off), running the
AIM7 compute workload (1500 users) on a 4.6-rc1 kernel (HZ=1000)
with and without the patch gave the following performance numbers
(with HPET or TSC as the clock source):

TSC = 646515 jobs/min
HPET w/o patch = 566708 jobs/min
HPET with patch = 638791 jobs/min

The perf profile showed a reduction of the %CPU time consumed by
read_hpet from 4.99% without patch to 1.41% with patch.

On a 16-socket IvyBridge-EX system with 240 cores (HT on), on the
other hand, the performance numbers of the same benchmark were:

TSC = 3145329 jobs/min
HPET w/o patch = 1108537 jobs/min
HPET with patch = 3019934 jobs/min

The corresponding perf profile showed a drop in the CPU consumption
of the read_hpet function from more than 34% to just 2.96%.

Signed-off-by: Waiman Long <[email protected]>
---
arch/x86/kernel/hpet.c | 110 +++++++++++++++++++++++++++++++++++++++++++++++-
1 files changed, 109 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
index a1f0e4a..9e3de73 100644
--- a/arch/x86/kernel/hpet.c
+++ b/arch/x86/kernel/hpet.c
@@ -759,11 +759,112 @@ static int hpet_cpuhp_notify(struct notifier_block *n,
#endif

/*
+ * Reading the HPET counter is a very slow operation. If a large number of
+ * CPUs are trying to access the HPET counter simultaneously, it can cause
+ * massive delay and slow down system performance dramatically. This may
+ * happen when HPET is the default clock source instead of TSC. For a
+ * really large system with hundreds of CPUs, the slowdown may be so
+ * severe that it may actually crash the system because of an NMI watchdog
+ * soft lockup, for example.
+ *
+ * If multiple CPUs are trying to access the HPET counter at the same time,
+ * we don't actually need to read the counter multiple times. Instead, the
+ * other CPUs can use the counter value read by the first CPU in the group.
+ *
+ * A sequence number whose lsb is a lock bit is used to control which CPU
+ * has the right to read the HPET counter directly and which CPUs are going
+ * to get the indirect value read by the lock holder. For the latter group,
+ * if the sequence number differs from the expected locked value, they
+ * can assume that the saved HPET value is up-to-date and return it.
+ *
+ * This mechanism is only activated on systems with a large number of CPUs.
+ * Currently, it is enabled when nr_cpus > 64.
+ */
+static bool use_hpet_save __read_mostly;
+static struct {
+ /* Sequence number + bit lock */
+ int seq ____cacheline_aligned_in_smp;
+
+ /* Current HPET value */
+ cycle_t hpet ____cacheline_aligned_in_smp;
+} hpet_save;
+#define HPET_SEQ_LOCKED(seq) ((seq) & 1) /* Odd == locked */
+#define HPET_RESET_THRESHOLD (1 << 14)
+#define HPET_REUSE_THRESHOLD 64
+
+/*
* Clock source related code
*/
static cycle_t read_hpet(struct clocksource *cs)
{
- return (cycle_t)hpet_readl(HPET_COUNTER);
+ int seq, cnt = 0;
+ cycle_t time;
+
+ if (!use_hpet_save)
+ return (cycle_t)hpet_readl(HPET_COUNTER);
+
+ seq = READ_ONCE(hpet_save.seq);
+ if (!HPET_SEQ_LOCKED(seq)) {
+ int old, new = seq + 1;
+ unsigned long flags;
+
+ local_irq_save(flags);
+ /*
+ * Set the lock bit (lsb) to get the right to read HPET
+ * counter directly. If successful, read the counter, save
+ * its value, and increment the sequence number. Otherwise,
+ * increment the sequence number to the expected locked value
+ * for comparison later on.
+ */
+ old = cmpxchg(&hpet_save.seq, seq, new);
+ if (old == seq) {
+ time = (cycle_t)hpet_readl(HPET_COUNTER);
+ WRITE_ONCE(hpet_save.hpet, time);
+
+ /* Unlock */
+ smp_store_release(&hpet_save.seq, new + 1);
+ local_irq_restore(flags);
+ return time;
+ }
+ local_irq_restore(flags);
+ seq = new;
+ }
+
+ /*
+ * Wait until the locked sequence number changes which indicates
+ * that the saved HPET value is up-to-date.
+ */
+ while (READ_ONCE(hpet_save.seq) == seq) {
+ /*
+ * Since reading the HPET is much slower than a single
+ * cpu_relax() instruction, we use two here in an attempt
+ * to reduce the amount of cacheline contention in the
+ * hpet_save.seq cacheline.
+ */
+ cpu_relax();
+ cpu_relax();
+
+ if (likely(++cnt <= HPET_RESET_THRESHOLD))
+ continue;
+
+ /*
+ * In the unlikely event that it takes too long for the lock
+ * holder to read the HPET, we do it ourselves and try to
+ * reset the lock. This will also break a deadlock if it
+ * happens, for example, when the process context lock holder
+ * gets killed in the middle of reading the HPET counter.
+ */
+ time = (cycle_t)hpet_readl(HPET_COUNTER);
+ WRITE_ONCE(hpet_save.hpet, time);
+ if (READ_ONCE(hpet_save.seq) == seq) {
+ if (cmpxchg(&hpet_save.seq, seq, seq + 1) == seq)
+ pr_warn("read_hpet: reset hpet seq to 0x%x\n",
+ seq + 1);
+ }
+ return time;
+ }
+
+ return READ_ONCE(hpet_save.hpet);
}

static struct clocksource clocksource_hpet = {
@@ -956,6 +1057,13 @@ static __init int hpet_late_init(void)
hpet_reserve_platform_timers(hpet_readl(HPET_ID));
hpet_print_config();

+ /*
+ * Reuse HPET value read by another CPU if there are more than
+ * HPET_REUSE_THRESHOLD CPUs in the system.
+ */
+ if (num_possible_cpus() > HPET_REUSE_THRESHOLD)
+ use_hpet_save = true;
+
if (hpet_msi_disable)
return 0;

--
1.7.1


2016-04-07 04:58:29

by Andy Lutomirski

Subject: Re: [PATCH] x86/hpet: Reduce HPET counter read contention

On Wed, Apr 6, 2016 at 7:02 AM, Waiman Long <[email protected]> wrote:
> /*
> + * Reading the HPET counter is a very slow operation. If a large number of
> + * CPUs are trying to access the HPET counter simultaneously, it can cause
> + * massive delay and slow down system performance dramatically. This may
> + * happen when HPET is the default clock source instead of TSC. For a
> + * really large system with hundreds of CPUs, the slowdown may be so
> + * severe that it may actually crash the system because of a NMI watchdog
> + * soft lockup, for example.
> + *
> + * If multiple CPUs are trying to access the HPET counter at the same time,
> + * we don't actually need to read the counter multiple times. Instead, the
> + * other CPUs can use the counter value read by the first CPU in the group.
> + *
> + * A sequence number whose lsb is a lock bit is used to control which CPU
> + * has the right to read the HPET counter directly and which CPUs are going
> + * to get the indirect value read by the lock holder. For the latter group,
> + * if the sequence number differs from the expected locked value, they
> + * can assume that the saved HPET value is up-to-date and return it.
> + *
> + * This mechanism is only activated on systems with a large number of CPUs.
> + * Currently, it is enabled when nr_cpus > 64.
> + */

Reading the HPET is so slow that all the atomic ops in the world won't
make a dent. Why not just turn this optimization on unconditionally?

--Andy

2016-04-07 15:07:53

by Waiman Long

Subject: Re: [PATCH] x86/hpet: Reduce HPET counter read contention

On 04/07/2016 12:58 AM, Andy Lutomirski wrote:
> Reading the HPET is so slow that all the atomic ops in the world won't
> make a dent. Why not just turn this optimization on unconditionally?
>
> --Andy

I am constantly on the alert that we should not introduce regressions on
lesser systems like a single-socket machine with a few cores. That is
why I put in the check to conditionally enable this optimization. I have
no issue with taking that out and letting it be the default as long as
no one objects.

Cheers,
Longman

2016-04-08 00:14:02

by Andy Lutomirski

Subject: Re: [PATCH] x86/hpet: Reduce HPET counter read contention

On Thu, Apr 7, 2016 at 8:07 AM, Waiman Long <[email protected]> wrote:
> On 04/07/2016 12:58 AM, Andy Lutomirski wrote:
>> Reading the HPET is so slow that all the atomic ops in the world won't
>> make a dent. Why not just turn this optimization on unconditionally?
>>
>> --Andy
>
>
> I am constantly on the alert that we should not introduce regressions on
> lesser systems like a single-socket machine with a few cores. That is why I
> put in the check to conditionally enable this optimization. I have no issue
> with taking that out and letting it be the default as long as no one objects.
>

Agreed. I just suspect it's actually faster on all systems.

This reminds me -- I need to send out my patch to disable the vdso
HPET code, which will make your change more effective. I'll cc you.

> Cheers,
> Longman



--
Andy Lutomirski
AMA Capital Management, LLC

2016-04-08 20:08:11

by Waiman Long

Subject: Re: [PATCH] x86/hpet: Reduce HPET counter read contention

On 04/07/2016 08:13 PM, Andy Lutomirski wrote:
> On Thu, Apr 7, 2016 at 8:07 AM, Waiman Long<[email protected]> wrote:
>> On 04/07/2016 12:58 AM, Andy Lutomirski wrote:
>>> Reading the HPET is so slow that all the atomic ops in the world won't
>>> make a dent. Why not just turn this optimization on unconditionally?
>>>
>>> --Andy
>>
>> I am constantly on the alert that we should not introduce regressions on
>> lesser systems like a single-socket machine with a few cores. That is why I
>> put in the check to conditionally enable this optimization. I have no issue
>> with taking that out and letting it be the default as long as no one objects.
>>
> Agreed. I just suspect it's actually faster on all systems.
>
> This reminds me -- I need to send out my patch to disable the vdso
> HPET code, which will make your change more effective. I'll cc you.
>

I am going to send an updated patch which reduces the CPU threshold to
32 as well as adding a kernel parameter to explicitly enable or disable
this optimization. In this way, you can enable it on systems with fewer
CPUs, if necessary.
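
Roughly, the boot parameter could be wired up like this (the
"hpet_save=" name and the two flags below are only placeholders for
illustration, not necessarily what the updated patch will use):

static bool hpet_save_forced __initdata;
static bool hpet_save_disabled __initdata;

/* Placeholder parameter name: hpet_save=on|off */
static int __init parse_hpet_save(char *str)
{
	if (!str)
		return -EINVAL;
	if (!strcmp(str, "on"))
		hpet_save_forced = true;
	else if (!strcmp(str, "off"))
		hpet_save_disabled = true;
	return 0;
}
early_param("hpet_save", parse_hpet_save);

and hpet_late_init() could then do something like:

	if (hpet_save_forced ||
	    (!hpet_save_disabled &&
	     num_possible_cpus() > HPET_REUSE_THRESHOLD))
		use_hpet_save = true;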

Cheers,
Longman