2020-11-03 12:45:57

by Dongdong Tao

Subject: [PATCH] bcache: consider the fragmentation when updating the writeback rate

From: dongdong tao <[email protected]>

The current way of calculating the writeback rate only considers
the dirty sectors. This usually works fine when the fragmentation
is not high, but it gives us an unreasonably small rate when very
few dirty sectors consume a lot of dirty buckets. In some cases
the dirty buckets can reach CUTOFF_WRITEBACK_SYNC while the dirty
data (sectors) has not even reached the writeback_percent; the
writeback rate will still be the minimum value (8 sectors, i.e.
4KB), so all writes get stuck in non-writeback mode because of
the slow writeback.

This patch tries to accelerate the writeback rate when the
fragmentation is high. It calculates the proportional_scaled
value as follows:
(dirty_sectors / writeback_rate_p_term_inverse) * fragment
As we can see, higher fragmentation results in a larger
proportional_scaled value, and thus in a larger writeback rate.
The fragment value is calculated as follows:
(dirty_buckets * bucket_size) / dirty_sectors
Since each dirty bucket holds between one dirty sector and
bucket_size dirty sectors, the value of fragment always falls
inside [1, bucket_size].
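As a worked example with hypothetical numbers: for a bucket size
of 1024 sectors, 100 dirty buckets that each hold only 8 dirty
sectors give dirty_sectors = 800 and
fragment = (100 * 1024) / 800 = 128, so the proportional term is
scaled up 128x compared to the unfragmented case.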

This patch only considers the fragmentation once the number of
dirty buckets reaches a dirty threshold (configurable via
writeback_fragment_percent, default 50), so bcache retains its
original behaviour until the dirty buckets reach that threshold.

Signed-off-by: dongdong tao <[email protected]>
---
drivers/md/bcache/bcache.h | 1 +
drivers/md/bcache/sysfs.c | 6 ++++++
drivers/md/bcache/writeback.c | 21 +++++++++++++++++++++
3 files changed, 28 insertions(+)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 1d57f48307e6..87632f7032b6 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -374,6 +374,7 @@ struct cached_dev {
unsigned int writeback_metadata:1;
unsigned int writeback_running:1;
unsigned char writeback_percent;
+ unsigned char writeback_fragment_percent;
unsigned int writeback_delay;

uint64_t writeback_rate_target;
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index 554e3afc9b68..69499113aef8 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -115,6 +115,7 @@ rw_attribute(stop_when_cache_set_failed);
rw_attribute(writeback_metadata);
rw_attribute(writeback_running);
rw_attribute(writeback_percent);
+rw_attribute(writeback_fragment_percent);
rw_attribute(writeback_delay);
rw_attribute(writeback_rate);

@@ -197,6 +198,7 @@ SHOW(__bch_cached_dev)
var_printf(writeback_running, "%i");
var_print(writeback_delay);
var_print(writeback_percent);
+ var_print(writeback_fragment_percent);
sysfs_hprint(writeback_rate,
wb ? atomic_long_read(&dc->writeback_rate.rate) << 9 : 0);
sysfs_printf(io_errors, "%i", atomic_read(&dc->io_errors));
@@ -308,6 +310,9 @@ STORE(__cached_dev)
sysfs_strtoul_clamp(writeback_percent, dc->writeback_percent,
0, bch_cutoff_writeback);

+ sysfs_strtoul_clamp(writeback_fragment_percent, dc->writeback_fragment_percent,
+ 0, bch_cutoff_writeback_sync);
+
if (attr == &sysfs_writeback_rate) {
ssize_t ret;
long int v = atomic_long_read(&dc->writeback_rate.rate);
@@ -498,6 +503,7 @@ static struct attribute *bch_cached_dev_files[] = {
&sysfs_writeback_running,
&sysfs_writeback_delay,
&sysfs_writeback_percent,
+ &sysfs_writeback_fragment_percent,
&sysfs_writeback_rate,
&sysfs_writeback_rate_update_seconds,
&sysfs_writeback_rate_i_term_inverse,
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 3c74996978da..34babc89fdf3 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -88,6 +88,26 @@ static void __update_writeback_rate(struct cached_dev *dc)
int64_t integral_scaled;
uint32_t new_rate;

+ /*
+ * We need to consider the number of dirty buckets as well
+ * when calculating proportional_scaled; otherwise we might
+ * get an unreasonably small writeback rate in a highly
+ * fragmented situation where very few dirty sectors consume
+ * a lot of dirty buckets. The worst case is when dirty_data
+ * reached writeback_percent and dirty buckets reached
+ * cutoff_writeback_sync, but the rate still sits at the
+ * minimum value, leaving writes stuck in non-writeback mode.
+ */
+ struct cache_set *c = dc->disk.c;
+
+ if (c->gc_stats.in_use > dc->writeback_fragment_percent && dirty > 0) {
+ int64_t dirty_buckets = (c->gc_stats.in_use * c->nbuckets) / 100;
+ int64_t fragment = (dirty_buckets * c->cache->sb.bucket_size) / dirty;
+
+ proportional_scaled =
+ div_s64(dirty, dc->writeback_rate_p_term_inverse) * (fragment);
+ }
+
if ((error < 0 && dc->writeback_rate_integral > 0) ||
(error > 0 && time_before64(local_clock(),
dc->writeback_rate.next + NSEC_PER_MSEC))) {
@@ -969,6 +989,7 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
dc->writeback_metadata = true;
dc->writeback_running = false;
dc->writeback_percent = 10;
+ dc->writeback_fragment_percent = 50;
dc->writeback_delay = 30;
atomic_long_set(&dc->writeback_rate.rate, 1024);
dc->writeback_rate_minimum = 8;
--
2.17.1


2020-11-04 05:09:33

by kernel test robot

Subject: Re: [PATCH] bcache: consider the fragmentation when updating the writeback rate

Hi Dongdong,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v5.10-rc2 next-20201103]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/0day-ci/linux/commits/Dongdong-Tao/bcache-consider-the-fragmentation-when-update-the-writeback-rate/20201103-204517
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git b7cbaf59f62f8ab8f157698f9e31642bff525bd0
config: nds32-randconfig-p002-20201103 (attached as .config)
compiler: nds32le-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/4e57aac94bb0d0822473657d7ef0f3a0455e81f6
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Dongdong-Tao/bcache-consider-the-fragmentation-when-update-the-writeback-rate/20201103-204517
git checkout 4e57aac94bb0d0822473657d7ef0f3a0455e81f6
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=nds32

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>

All errors (new ones prefixed by >>):

nds32le-linux-ld: arch/nds32/kernel/ex-entry.o: in function `common_exception_handler':
(.text+0xfe): undefined reference to `__trace_hardirqs_off'
(.text+0xfe): relocation truncated to fit: R_NDS32_25_PCREL_RELA against undefined symbol `__trace_hardirqs_off'
nds32le-linux-ld: arch/nds32/kernel/ex-exit.o: in function `no_work_pending':
(.text+0xce): undefined reference to `__trace_hardirqs_off'
nds32le-linux-ld: (.text+0xd2): undefined reference to `__trace_hardirqs_off'
nds32le-linux-ld: (.text+0xd6): undefined reference to `__trace_hardirqs_on'
nds32le-linux-ld: (.text+0xda): undefined reference to `__trace_hardirqs_on'
nds32le-linux-ld: drivers/md/bcache/writeback.o: in function `update_writeback_rate':
>> writeback.c:(.text+0x1ffa): undefined reference to `__divdi3'
>> nds32le-linux-ld: writeback.c:(.text+0x1ffe): undefined reference to `__divdi3'
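The two __divdi3 references come from the plain / divisions on
int64_t values added in __update_writeback_rate(): on 32-bit
architectures such as nds32, gcc compiles a 64-bit division into a
call to the libgcc helper __divdi3, which the kernel does not link
against. One possible fix (a sketch only, not the posted patch) is
to route those divisions through the math64.h helpers, as the patch
already does for the p-term division:

	struct cache_set *c = dc->disk.c;

	if (c->gc_stats.in_use > dc->writeback_fragment_percent && dirty > 0) {
		/* div_s64() takes a 32-bit divisor, fine for the constant 100 */
		int64_t dirty_buckets = div_s64(c->gc_stats.in_use * c->nbuckets, 100);
		/* div64_s64() handles the full 64-bit divisor `dirty` */
		int64_t fragment =
			div64_s64(dirty_buckets * c->cache->sb.bucket_size, dirty);

		proportional_scaled =
			div_s64(dirty, dc->writeback_rate_p_term_inverse) * fragment;
	}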

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]



2020-11-05 16:34:16

by Coly Li

Subject: Re: [PATCH] bcache: consider the fragmentation when updating the writeback rate

On 2020/11/3 20:42, Dongdong Tao wrote:
> From: dongdong tao <[email protected]>
>
> [snipped commit message]

Hi Dongdong,

Changing the writeback rate does not affect the real throughput,
indeed; your change just increases the upper limit hint of the
writeback throughput, while the bottleneck is the spinning drive
handling random I/O.

A better direction should be the moving gc. If the moving gc could
work faster, the situation you mentioned above would be relaxed a
lot.

I will NACK this patch unless you can show observable and
reproducible performance numbers.

Thanks.

Coly Li


> [snipped patch]

2020-11-10 04:21:39

by Dongdong Tao

Subject: Re: [PATCH] bcache: consider the fragmentation when updating the writeback rate

[Sorry again for the SPAM detection]

Thank you for the reply, Coly!

I agree that this patch is not a final solution for the
fragmentation issue, but more of a workaround to alleviate it.
So, part of my intention is to find out how upstream would like to
fix this issue.

I've looked into the code of the moving_gc part and did some
debugging/testing; unfortunately, I don't think it is the solution
for this issue either.
moving_gc does not just move the dirty cache but also the clean
cache, so the purpose of moving_gc seems to be moving the data
(dirty and clean) out of relatively empty buckets into some new
buckets, in order to reclaim the original buckets.
For this purpose, I guess moving gc was more useful back when we
usually didn't have large nvme devices.

Let's get back to the problem I have. The problem I'm trying to
fix is that you might have lots of buckets (say 70 percent) that
are all fully consumed, while those buckets contain only very
little dirty data (say 10 percent). Since gc can't reclaim a
bucket that contains any dirty data, the worst situation is that
cache_available_percent drops under 30 percent, which prevents all
write ops from being performed in writeback mode and thus kills
the performance of writes.

So, unlike moving_gc, I only want to move dirty data around (as
you've suggested :)), but I don't think it's a good idea to change
the behavior of moving_gc.
My current idea is to implement a compaction thread that triggers
dirty data compaction only under certain circumstances (like when
the fragmentation and the dirty bucket count are both high), and
this thread can be turned on/off via an extra option, so that
people can keep the original behaviour if they want.

This is a rough idea for now; please let me know whether the above
makes sense to you, and any other suggestions will be appreciated!
I also understand that the hardest part is making sure the general
bcache performance and functionality still look sane, so it might
require much more time to do, and at the moment it's more of a
feature.

How to reproduce and observe this issue:
This issue is very easy to reproduce by running the below fio
command against a writeback mode bcache device:

fio --name=random-writers --filename=/dev/bcache0 --ioengine=libaio
--iodepth=4 --rw=randrw --rate_iops=95,5 --bs=4k --direct=1
--numjobs=4

Note that the key option to reproduce this issue is
"rate_iops=95,5": 95 percent reads and only 5 percent writes, which
makes sure each bucket ends up containing very little dirty data.
Also, it's faster to reproduce this with a small cache device; I
use a 1GB cache, but it's the same for a bigger cache device, just
a matter of time.

We can observe this issue by monitoring the bcache stats
"data_dirty" and "cache_available_percent"; after
cache_available_percent has dropped to 30 percent, we can observe
that write performance is hugely degraded, using the below bpftrace
script:
---
#!/usr/bin/env bpftrace

#include <linux/bio.h>

// record the submission time of every bio entering the bcache device
kprobe:cached_dev_make_request
{
	@start[arg1] = nsecs;
}

// on completion, add the latency to a write or read histogram
// (bit 0 of bi_opf distinguishes writes from reads)
kprobe:bio_endio /@start[arg0]/
{
	if (((struct bio *)arg0)->bi_opf & 1) {
		@write = hist(nsecs - @start[arg0]);
	} else {
		@read = hist(nsecs - @start[arg0]);
	}
	delete(@start[arg0]);
}
---

To run this script:
Save the above to bcache_io_lat.bt, then run it with
chmod +x bcache_io_lat.bt && sudo ./bcache_io_lat.bt

By the way, we mainly hit this issue on Ceph; the fio reproducer is
just an easy way to reproduce it.
Regards,
Dongdong


On Fri, Nov 6, 2020 at 12:32 AM Coly Li <[email protected]> wrote:
>
> [snipped]

2020-11-11 08:35:48

by Coly Li

Subject: Re: [PATCH] bcache: consider the fragmentation when updating the writeback rate

On 2020/11/10 12:19, Dongdong Tao wrote:
> [snipped]

Hi Dongdong,

I know this situation; it is not the first time it has been
mentioned.

What performance gain do your patch's numbers show? I want to see
"observable and reproducible performance numbers", especially the
latency and IOPS of regular I/O requests.

Thanks.

Coly Li


> [snipped]

2020-12-09 02:44:40

by Dongsheng Yang

Subject: Re: [PATCH] bcache: consider the fragmentation when updating the writeback rate


On 2020/11/3 Tuesday 8:42 PM, Dongdong Tao wrote:
> [snipped commit message]
> diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
> index 554e3afc9b68..69499113aef8 100644
> --- a/drivers/md/bcache/sysfs.c
> +++ b/drivers/md/bcache/sysfs.c
> @@ -115,6 +115,7 @@ rw_attribute(stop_when_cache_set_failed);
> rw_attribute(writeback_metadata);
> rw_attribute(writeback_running);
> rw_attribute(writeback_percent);
> +rw_attribute(writeback_fragment_percent);


Hi Dongdong and Coly,

What is the status of this patch? In my opinion, this is a problem
we need to solve,

but can we just reuse the writeback_percent parameter, rather than
introduce a new writeback_fragment_percent?

That would mean the semantics of writeback_percent act on both the
dirty data percent and the dirty bucket percent.

When we find more dirty buckets than writeback_percent of
c->nbuckets, we start the writeback, as sketched below.
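A minimal sketch of that check (a hypothetical illustration only,
not tested code; note that gc_stats.in_use is already a percentage
of c->nbuckets):

	/* Hypothetical variant: reuse writeback_percent as the
	 * dirty-bucket threshold instead of adding a new knob. */
	if (c->gc_stats.in_use > dc->writeback_percent && dirty > 0) {
		/* ... apply the fragment scaling from the patch ... */
	}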


Thanx

Yang

> [snipped remainder of patch]

2020-12-09 04:52:08

by Dongdong Tao

Subject: Re: [PATCH] bcache: consider the fragmentation when updating the writeback rate

Hi Dongsheng,

I'm working on it, next step I'm gathering some testing data and
upload (very sorry for the delay...)
Thanks for the comment.
One of the main concerns to alleviate this issue with the writeback
process is that we need to minimize the impact on the client IO
performance.
writeback_percent by default is 10, start writeback when dirty buckets
reached 10 percent might be a bit too aggressive, as the
writeback_cutoff_sync is 70 percent.
So i chose to start the writeback when dirty buckets reached 50
percent so that this patch will only take effect after dirty buckets
percent is above that
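With the defaults this leaves a window of 20 percentage points: the
fragment-based scaling kicks in at 50 percent dirty buckets
(writeback_fragment_percent) and has until 70 percent
(CUTOFF_WRITEBACK_SYNC) to bring the dirty bucket count back down
before writes can no longer be performed in writeback mode.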

Thanks,
Dongdong




On Wed, Dec 9, 2020 at 10:27 AM Dongsheng Yang
<[email protected]> wrote:
>
>
> [snipped]
>
>
> Hi Dongdong and Coly,
>
> What is the status of this patch? In my opinion, this is a problem
> we need to solve,
>
> but can we just reuse the writeback_percent parameter, rather than
> introduce a new writeback_fragment_percent?
>
> That would mean the semantics of writeback_percent act on both the
> dirty data percent and the dirty bucket percent.
>
> When we find more dirty buckets than writeback_percent of
> c->nbuckets, we start the writeback.
>
>
> Thanx
>
> Yang
>
> [snipped remainder of patch]
2020-12-09 07:53:06

by Dongsheng Yang

Subject: Re: [PATCH] bcache: consider the fragmentation when updating the writeback rate


On 2020/12/9 Wednesday 12:48 PM, Dongdong Tao wrote:
> Hi Dongsheng,
>
> I'm working on it; as the next step I'm gathering some testing
> data to upload (very sorry for the delay...).
> Thanks for the comment.
> One of the main concerns when alleviating this issue via the
> writeback process is that we need to minimize the impact on client
> IO performance.
> writeback_percent is 10 by default; starting writeback when dirty
> buckets reach 10 percent might be a bit too aggressive, as the
> writeback cutoff (CUTOFF_WRITEBACK_SYNC) is 70 percent.
> So I chose to start accelerating the writeback when dirty buckets
> reach 50 percent, so that this patch only takes effect after the
> dirty bucket percentage is above that threshold.

I agree that reusing writeback_percent would be too aggressive, and
it would also be less flexible.

Okay, let's wait for your testing results.


Thanx

> [snipped]

2020-12-15 01:30:24

by Coly Li

Subject: Re: [PATCH] bcache: consider the fragmentation when updating the writeback rate

On 12/14/20 11:30 PM, Dongdong Tao wrote:
> Hi Coly and Dongsheng,
>
> I've got the testing results and confirmed that they are
> reproducible by repeating the test many times.
> I ran fio to get the write latency log, parsed the log, and then
> generated the latency graphs below with a visualization tool.
>

Hi Dongdong,

Thank you so much for the performance numbers!

[snipped]
> So, my code accelerates the writeback process when the dirty
> buckets exceed 50% (tunable); as we can see,
> cache_available_percent does increase once it hits 50, so we won't
> hit the writeback cutoff issue.
>
> Below are the steps that I used to do the experiment:
> 1. make-bcache -B <hdd> -C <nvme> --writeback -- I made the nvme
> size 1GB, so the issue can be reproduced faster
>
> 2. sudo fio --name=random-writers --filename=/dev/bcache0
> --ioengine=libaio --iodepth=1 --rw=randrw --bs=16k --direct=1
> --rate_iops=90,10 --numjobs=1 --write_lat_log=16k
>
> 3. For a 1GB nvme, running for about 20 minutes is enough to get the data.

1GB of cache and 20 minutes are quite limited for the performance
evaluation. Could you please do similar testing with a 1TB SSD and
1 hour for each run of the benchmark?

>
> Using randrw with rate_iops=90,10 is just one easy way to
> reproduce this; it can be reproduced as long as we create a
> fragmented situation where quite little dirty data consumes a lot
> of dirty buckets, killing the write performance.
>

Yes, this is a good method to generate dirty data segments.

> This bug is becoming very critical nowadays, as Ceph is hitting
> it; Ceph mostly submits random small IO.
> Please let me know what you need in order to move forward in this
> direction; I'm sure this patch can be improved as well.

The performance numbers are quite convincing, and the idea in your
patch is promising.

I will provide my comments on your patch after we see the
performance numbers for a larger cache device and a longer run time.

Thanks again for the detailed performance numbers, which are really
desired for performance optimization changes :-)

Coly Li

2020-12-21 08:11:59

by Coly Li

Subject: Re: [PATCH] bcache: consider the fragmentation when updating the writeback rate

On 12/21/20 12:06 PM, Dongdong Tao wrote:
> Hi Coly,
>
> Thank you so much for your prompt reply!
>
> So, I've performed the same fio testing with a 1TB NVMe cache and
> a 10TB HDD as the backing device.
> I ran both for about 4 hours; since it's a 1TB nvme device, it
> will roughly take about 10 days to consume 50 percent of the
> buckets as dirty.
> I did increase the iops to 500,100, but the dirty buckets only
> increased to about 30 percent after a 2-day run, and because reads
> are much limited by the backing hdd device, the actual maximum
> read iops I can constantly get is only about 200.

Hi Dongdong,

There are two methods to make the buckets fill up faster:
1) use a larger non-spinning backing device (e.g. an md raid0 with
multiple SSDs);
2) specify a larger read block size and a smaller write block size
in fio.

Or you may combine them to fill the cache device faster.
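For instance (hypothetical parameters), fio accepts separate
read/write block sizes, so something like "--bs=64k,4k" would issue
64k reads and 4k writes, dirtying each bucket with small fragments
more quickly.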


>
> Though the 4-hour run on the 1TB nvme bcache didn't make us hit
> the 50 percent dirty bucket threshold, I think it's still valuable
> to show that bcache with my patch behaves as expected in terms of
> latency while the dirty buckets are under 50 percent. I guess this
> might be what you wanted to confirm with this run.
>

The previous testing on the tiny SSD was much more telling than
this one; I cannot give my opinion before a test on a non-optimized
configuration is finished.

If the latency distribution shows no (recognizable) difference, it
would mean your patch does not improve I/O latency. I believe this
is only because the testing is not finished yet.

The idea of heuristically estimating the bucket fragmentation
condition is cool IMHO, but we need solid performance numbers to
prove this optimization. Please continue and finish the benchmark
with a real hardware configuration, and I do hope we can see a
recognizable positive result for your goal (improving I/O latency
and throughput under high bucket dirty fragmentation).


Thanks.


Coly Li


> Here is the result:
> Master:
>
> [attachment: fio-master.png]
>
> Master + my patch:
> [attachment: fio-patch.png]
> As we can see, the latency distributions, for the outliers as well
> as for the majority, are the same between these two runs.
> Let's combine the two together to make it clearer:
> [attachment: fio-full.png]
> The test steps are exactly the same for those two runs:
>
>
> 1. make-bcache -B <hdd> -C <nvme> --writeback
>
> 2. sudo fio --name=random-writers --filename=/dev/bcache0
> --ioengine=libaio --iodepth=1 --rw=randrw --bs=16k --direct=1
> --rate_iops=90,10 --numjobs=1 --write_lat_log=16k --runtime=14000
>
> Thank you so much!
> Regards,
> Dongdong
>
> On Tue, Dec 15, 2020 at 1:07 AM Coly Li <[email protected]> wrote:
> > [snipped]