When testing on a 1024 thread ppc64 box I noticed a large amount of
CPU time in ext4 code.
ext4_has_free_blocks has a fast path to avoid summing every free and
dirty block per cpu counter, but only if the global count shows more
free blocks than the maximum amount that could be stored in all the
per cpu counters.
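The fast path in question looks roughly like this (a simplified sketch
of ext4_has_free_blocks() from fs/ext4/balloc.c of this era; root
reservation handling omitted):

static int ext4_has_free_blocks(struct ext4_sb_info *sbi, s64 nblocks)
{
	s64 free_blocks, dirty_blocks;
	struct percpu_counter *fbc = &sbi->s_freeblocks_counter;
	struct percpu_counter *dbc = &sbi->s_dirtyblocks_counter;

	/* cheap, approximate reads of the global counts */
	free_blocks  = percpu_counter_read_positive(fbc);
	dirty_blocks = percpu_counter_read_positive(dbc);

	/*
	 * Only when the approximate result is too close to call do we
	 * pay for an exact sum across every CPU's local counter.
	 */
	if (free_blocks - (nblocks + dirty_blocks) < EXT4_FREEBLOCKS_WATERMARK) {
		free_blocks  = percpu_counter_sum_positive(fbc);
		dirty_blocks = percpu_counter_sum_positive(dbc);
	}

	return free_blocks >= nblocks + dirty_blocks;
}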
Although we only sum two per cpu counters, the breakpoint is set at 4
times the maximum amount that could be stored in the per cpu counters.
Reduce that factor to 2.
Since we fold the per cpu count of CPUs going offline into the global
count, we can use num_online_cpus() instead of nr_cpu_ids here too.
Both these changes match percpu_counter_compare() which is used to
optimise a comparison against one per cpu counter.
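For reference, that helper reads (lib/percpu_counter.c, lightly
abbreviated):

int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
{
	s64 count;

	count = percpu_counter_read(fbc);
	/* check whether the rough count suffices for the comparison */
	if (abs(count - rhs) > (percpu_counter_batch * num_online_cpus()))
		return count > rhs ? 1 : -1;

	/* too close to call: fall back to the precise count */
	count = percpu_counter_sum(fbc);
	if (count > rhs)
		return 1;
	else if (count < rhs)
		return -1;
	else
		return 0;
}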
Signed-off-by: Anton Blanchard <[email protected]>
---
Index: linux-2.6-work/fs/ext4/ext4.h
===================================================================
--- linux-2.6-work.orig/fs/ext4/ext4.h 2011-08-25 11:44:02.978785464 +1000
+++ linux-2.6-work/fs/ext4/ext4.h 2011-08-25 14:17:37.461904013 +1000
@@ -2051,10 +2051,11 @@ do { \
#ifdef CONFIG_SMP
/* Each CPU can accumulate percpu_counter_batch blocks in their local
- * counters. So we need to make sure we have free blocks more
- * than percpu_counter_batch * nr_cpu_ids. Also add a window of 4 times.
+ * counters. Since we sum two percpu counters (s_freeblocks_counter and
+ * s_dirtyblocks_counter), as a worst case we need to check for 2x this.
*/
-#define EXT4_FREEBLOCKS_WATERMARK (4 * (percpu_counter_batch * nr_cpu_ids))
+#define EXT4_FREEBLOCKS_WATERMARK \
+ (2 * (percpu_counter_batch * num_online_cpus()))
#else
#define EXT4_FREEBLOCKS_WATERMARK 0
#endif
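For scale (assuming 4 KB blocks and nr_cpu_ids == num_online_cpus() ==
1024, so percpu_counter_batch == 2048): the old watermark works out to
4 * 2048 * 1024 = 8M blocks, or 32 GB of free space; the new one is
2 * 2048 * 1024 = 4M blocks, or 16 GB.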
When testing on a 1024 thread ppc64 box I noticed a large amount of
CPU time in ext4 code.
ext4_has_free_blocks has a fast path to avoid summing every free and
dirty block per cpu counter, but only if the global count shows more
free blocks than the maximum amount that could be stored in all the
per cpu counters.
Since percpu_counter_batch scales with num_online_cpus() and the maximum
amount in all per cpu counters is percpu_counter_batch * num_online_cpus(),
this breakpoint grows at O(n^2).
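Concretely, with 1024 online CPUs: percpu_counter_batch =
max(32, 2 * 1024) = 2048, so the cutoff is 2048 * 1024 = 2,097,152
blocks, or 8 GB of free space (assuming 4 KB blocks) before the fast
path is ever taken.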
This issue will also hit users of percpu_counter_compare(), which
does a similar thing for a single percpu counter.
I chose to cap percpu_counter_batch at 1024 as a conservative first
step, but we may want to reduce it further based on further benchmarking.
Signed-off-by: Anton Blanchard <[email protected]>
---
Index: linux-2.6-work/lib/percpu_counter.c
===================================================================
--- linux-2.6-work.orig/lib/percpu_counter.c 2011-07-31 20:37:12.580765739 +1000
+++ linux-2.6-work/lib/percpu_counter.c 2011-08-25 11:43:57.828695957 +1000
@@ -149,11 +149,15 @@ EXPORT_SYMBOL(percpu_counter_destroy);
int percpu_counter_batch __read_mostly = 32;
EXPORT_SYMBOL(percpu_counter_batch);
+/*
+ * We set the batch at 2 * num_online_cpus(), with a minimum of 32 and
+ * a maximum of 1024.
+ */
static void compute_batch_value(void)
{
int nr = num_online_cpus();
- percpu_counter_batch = max(32, nr*2);
+ percpu_counter_batch = min(1024, max(32, nr*2));
}
static int __cpuinit percpu_counter_hotcpu_callback(struct notifier_block *nb,
On Friday, 26 August 2011 at 07:29 +1000, Anton Blanchard wrote:
> +/*
> + * We set the batch at 2 * num_online_cpus(), with a minimum of 32 and
> + * a maximum of 1024.
> + */
> static void compute_batch_value(void)
> {
> int nr = num_online_cpus();
>
> - percpu_counter_batch = max(32, nr*2);
> + percpu_counter_batch = min(1024, max(32, nr*2));
> }
>
> static int __cpuinit percpu_counter_hotcpu_callback(struct notifier_block *nb,
Or use the following :
percpu_counter_batch = clamp(nr*2, 32, 1024);
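(clamp() is the helper from include/linux/kernel.h: clamp(val, lo, hi)
is min(max(val, lo), hi) with strict type checking, so the two forms
are equivalent here.)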
On Fri, Aug 26, 2011 at 07:29:27AM +1000, Anton Blanchard wrote:
>
> When testing on a 1024 thread ppc64 box I noticed a large amount of
> CPU time in ext4 code.
>
> ext4_has_free_blocks has a fast path to avoid summing every free and
> dirty block per cpu counter, but only if the global count shows more
> free blocks than the maximum amount that could be stored in all the
> per cpu counters.
>
> Since percpu_counter_batch scales with num_online_cpus() and the maximum
> amount in all per cpu counters is percpu_counter_batch * num_online_cpus(),
> this breakpoint grows at O(n^2).
>
> This issue will also hit users of percpu_counter_compare(), which
> does a similar thing for a single percpu counter.
>
> I chose to cap percpu_counter_batch at 1024 as a conservative first
> step, but we may want to reduce it further based on further benchmarking.
>
> Signed-off-by: Anton Blanchard <[email protected]>
Yeah, capping the upper bound seems reasonable but can you please add
some comment explaining why the upper bound is necessary there?
Thank you.
--
tejun
On Aug 25, 2011, at 5:29 PM, Anton Blanchard wrote:
>
> When testing on a 1024 thread ppc64 box I noticed a large amount of
> CPU time in ext4 code.
>
> ext4_has_free_blocks has a fast path to avoid summing every free and
> dirty block per cpu counter, but only if the global count shows more
> free blocks than the maximum amount that could be stored in all the
> per cpu counters.
>
> Since percpu_counter_batch scales with num_online_cpus() and the maximum
> amount in all per cpu counters is percpu_counter_batch * num_online_cpus(),
> this breakpoint grows at O(n^2).
I understand why we would want to reduce this number. Unfortunately, the
question is what do we do if all 1024 threads try to do buffered writes into
the file system at the same instant, when we have less than 4 megabytes
of space left?
The problem is that we can then do more writes than we have space, and
we will only find out about it at write back time, when the process may have
exited already -- at which point data loss is almost inevitable. (We could
keep the data in cache and frantically page the system administrator to
delete some files to make room for dirty data, but that's probably not going
to end well….)
What we can do if we must clamp this threshold is to also increase the
threshold at which we shift away from delayed allocation. We'll then
allocate each block at write time, which does mean more CPU and
less efficient allocation of blocks, but if we're down to our last 4 megabytes,
there's probably not much we can do that will be efficient as far as
block layout anyway….
-- Ted
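For context, a simplified sketch of the check Ted is referring to,
modeled on ext4_nonda_switch() in fs/ext4/inode.c (details
abbreviated):

/*
 * Fall back from delayed to non-delayed allocation when free space
 * runs low, so ENOSPC is reported at write() time rather than being
 * discovered at writeback, when the writer may be long gone.
 */
static int ext4_nonda_switch(struct super_block *sb)
{
	struct ext4_sb_info *sbi = EXT4_SB(sb);
	s64 free_blocks, dirty_blocks;

	free_blocks  = percpu_counter_read_positive(&sbi->s_freeblocks_counter);
	dirty_blocks = percpu_counter_read_positive(&sbi->s_dirtyblocks_counter);

	/* switch if free space drops below 150% of dirty blocks,
	 * or below the watermark */
	if (2 * free_blocks < 3 * dirty_blocks ||
	    free_blocks < (dirty_blocks + EXT4_FREEBLOCKS_WATERMARK))
		return 1;
	return 0;
}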
When testing on a 1024 thread ppc64 box I noticed a large amount of
CPU time in ext4 code.
ext4_has_free_blocks has a fast path to avoid summing every free and
dirty block per cpu counter, but only if the global count shows more
free blocks than the maximum amount that could be stored in all the
per cpu counters.
Since percpu_counter_batch scales with num_online_cpus() and the maximum
amount in all per cpu counters is percpu_counter_batch * num_online_cpus(),
this breakpoint grows at O(n^2).
This issue will also hit users of percpu_counter_compare(), which
does a similar thing for a single percpu counter.
I chose to cap percpu_counter_batch at 1024 as a conservative first
step, but we may want to reduce it further based on further benchmarking.
Signed-off-by: Anton Blanchard <[email protected]>
---
Index: linux-2.6-work/lib/percpu_counter.c
===================================================================
--- linux-2.6-work.orig/lib/percpu_counter.c 2011-08-29 19:50:44.482008591 +1000
+++ linux-2.6-work/lib/percpu_counter.c 2011-08-29 21:21:10.026779139 +1000
@@ -153,7 +153,14 @@ static void compute_batch_value(void)
{
int nr = num_online_cpus();
- percpu_counter_batch = max(32, nr*2);
+ /*
+ * The cutoff point for the percpu_counter_compare() fast path grows
+ * at num_online_cpus^2 and on a big enough machine it will be
+ * unlikely to hit.
+ * We clamp the batch value to 1024 so the cutoff point only grows
+ * linearly past 512 CPUs.
+ */
+ percpu_counter_batch = clamp(nr*2, 32, 1024);
}
static int __cpuinit percpu_counter_hotcpu_callback(struct notifier_block *nb,
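With this change the batch value works out as, for example: 8 CPUs ->
32 (the minimum), 64 CPUs -> 128, 512 CPUs -> 1024, 1024 CPUs -> 1024
(capped).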
Hi Ted,
> I understand why we would want to reduce this number.
> Unfortunately, the question is what do we do if all 1024 threads try
> to do buffered writes into the file system at the same instant, when
> we have less than 4 megabytes of space left?
>
> The problem is that we can then do more writes than we have space, and
> we will only find out about it at write back time, when the process
> may have exited already -- at which point data loss is almost
> inevitable. (We could keep the data in cache and frantically page
> the system administrator to delete some files to make room for dirty
> data, but that's probably not going to end well….)
>
> What we can do if we must clamp this threshold is to also increase the
> threshold at which we shift away from delayed allocation. We'll then
> allocate each block at write time, which does mean more CPU and
> less efficient allocation of blocks, but if we're down to our last 4
> megabytes, there's probably not much we can do that will be efficient
> as far as block layout anyway….
Thanks for the explanation, I'll go back and take another look.
Anton
On Fri, Aug 26, 2011 at 07:48:52AM -0400, Theodore Tso wrote:
>
> I understand why we would want to reduce this number. Unfortunately, the
> question is what do we do if all 1024 threads try to do buffered writes into
> the file system at the same instant, when we have less than 4 megabytes
> of space left?
Oops, sorry, that should be 4 GB of space left (i.e., what do we do if
all 1024 CPUs try to write 1024 4k blocks at the same time).
Imagine a simulation application where all threads finish at more or
less the same time and then try to write out their data files at the
same time....
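(1024 CPUs * 1024 blocks * 4 KB = 4 GB of dirty data that could slip
past the approximate free-space check.)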
- Ted
On Mon, Aug 29, 2011 at 09:46:09PM +1000, Anton Blanchard wrote:
>
> When testing on a 1024 thread ppc64 box I noticed a large amount of
> CPU time in ext4 code.
>
> ext4_has_free_blocks has a fast path to avoid summing every free and
> dirty block per cpu counter, but only if the global count shows more
> free blocks than the maximum amount that could be stored in all the
> per cpu counters.
>
> Since percpu_counter_batch scales with num_online_cpus() and the maximum
> amount in all per cpu counters is percpu_counter_batch * num_online_cpus(),
> this breakpoint grows at O(n^2).
>
> This issue will also hit users of percpu_counter_compare(), which
> does a similar thing for a single percpu counter.
>
> I chose to cap percpu_counter_batch at 1024 as a conservative first
> step, but we may want to reduce it further based on further benchmarking.
>
> Signed-off-by: Anton Blanchard <[email protected]>
Applied to percpu/for-3.2.
Thanks.
--
tejun
On Sep 5, 2011, at 11:48 PM, Tejun Heo wrote:
> On Mon, Aug 29, 2011 at 09:46:09PM +1000, Anton Blanchard wrote:
>>
>> When testing on a 1024 thread ppc64 box I noticed a large amount of
>> CPU time in ext4 code.
>>
>> ext4_has_free_blocks has a fast path to avoid summing every free and
>> dirty block per cpu counter, but only if the global count shows more
>> free blocks than the maximum amount that could be stored in all the
>> per cpu counters.
>>
>> Since percpu_counter_batch scales with num_online_cpus() and the maximum
>> amount in all per cpu counters is percpu_counter_batch * num_online_cpus(),
>> this breakpoint grows at O(n^2).
>>
>> This issue will also hit users of percpu_counter_compare(), which
>> does a similar thing for a single percpu counter.
>>
>> I chose to cap percpu_counter_batch at 1024 as a conservative first
>> step, but we may want to reduce it further based on further benchmarking.
>>
>> Signed-off-by: Anton Blanchard <[email protected]>
>
> Applied to percpu/for-3.2.
Um, this was an ext4 patch and I pointed out it could cause problems. (Specifically, data loss…)
- Ted
On Tue, Sep 06, 2011 at 09:30:50AM -0400, Theodore Tso wrote:
> >> I chose to cap percpu_counter_batch at 1024 as a conservative first
> >> step, but we may want to reduce it further based on further benchmarking.
> >>
> >> Signed-off-by: Anton Blanchard <[email protected]>
> >
> > Applied to percpu/for-3.2.
>
> Um, this was an ext4 patch and I pointed out it could cause problems. (Specifically, data loss…)
Ah okay, I thought you were talking about the first patch only.
Reverting for now.
Thanks.
--
tejun
Hi Ted,
> Um, this was an ext4 patch and I pointed out it could cause
> problems. (Specifically, data loss…)
I'm a bit confused. While the comment mentions ext4, the patch is just
putting an upper bound on the size of percpu_counter_batch and it is
useful for percpu_counter_compare() too:
static void compute_batch_value(void)
{
int nr = num_online_cpus();
- percpu_counter_batch = max(32, nr*2);
+ /*
+ * The cutoff point for the percpu_counter_compare() fast path grows
+ * at num_online_cpus^2 and on a big enough machine it will be
+ * unlikely to hit.
+ * We clamp the batch value to 1024 so the cutoff point only grows
+ * linearly past 512 CPUs.
+ */
+ percpu_counter_batch = clamp(nr*2, 32, 1024);
}
The batch value should be opaque to the rest of the kernel. If ext4
requires a specific batch value we can use the functions that take
an explicit one (eg __percpu_counter_add).
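A minimal sketch of that approach (EXT4_LOCAL_BATCH is hypothetical,
purely for illustration; __percpu_counter_add() takes the batch as its
third argument):

/* hypothetical: an ext4-private batch, decoupled from the global one */
#define EXT4_LOCAL_BATCH	2048

static void ext4_account_free_blocks(struct ext4_sb_info *sbi, s64 nblocks)
{
	/* local deltas fold into the global count every EXT4_LOCAL_BATCH blocks */
	__percpu_counter_add(&sbi->s_freeblocks_counter, nblocks,
			     EXT4_LOCAL_BATCH);
}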
Anton