2022-02-22 04:30:49

by NeilBrown

Subject: [PATCH 04/11] fuse: remove reliance on bdi congestion

The bdi congestion tracking is not widely used and will be removed.

Fuse is one of a small number of filesystems that uses it, setting both
the sync (read) and async (write) congestion flags at what it determines
are appropriate times.

The only remaining effect of the sync flag is to cause read-ahead to be
skipped.
The only remaining effect of the async flag is to cause (some)
WB_SYNC_NONE writes to be skipped.

So instead of setting the flags, change:
- .readahead to stop when it has submitted all non-async pages
for read.
- .writepages to do nothing if WB_SYNC_NONE and the flag would be set
- .writepage to return AOP_WRITEPAGE_ACTIVATE if WB_SYNC_NONE
and the flag would be set.

The writepages change causes a behavioural change in that pageout() can
now return PAGE_ACTIVATE instead of PAGE_KEEP, so SetPageActive() will
be called on the page which (I think) will further delay the next attempt
at writeout. This might be a good thing.

Signed-off-by: NeilBrown <[email protected]>
---
 fs/fuse/control.c | 17 -----------------
 fs/fuse/dev.c     |  8 --------
 fs/fuse/file.c    | 17 +++++++++++++++++
 3 files changed, 17 insertions(+), 25 deletions(-)

diff --git a/fs/fuse/control.c b/fs/fuse/control.c
index 000d2e5627e9..7cede9a3bc96 100644
--- a/fs/fuse/control.c
+++ b/fs/fuse/control.c
@@ -164,7 +164,6 @@ static ssize_t fuse_conn_congestion_threshold_write(struct file *file,
{
unsigned val;
struct fuse_conn *fc;
- struct fuse_mount *fm;
ssize_t ret;

ret = fuse_conn_limit_write(file, buf, count, ppos, &val,
@@ -178,22 +177,6 @@ static ssize_t fuse_conn_congestion_threshold_write(struct file *file,
down_read(&fc->killsb);
spin_lock(&fc->bg_lock);
fc->congestion_threshold = val;
-
- /*
- * Get any fuse_mount belonging to this fuse_conn; s_bdi is
- * shared between all of them
- */
-
- if (!list_empty(&fc->mounts)) {
- fm = list_first_entry(&fc->mounts, struct fuse_mount, fc_entry);
- if (fc->num_background < fc->congestion_threshold) {
- clear_bdi_congested(fm->sb->s_bdi, BLK_RW_SYNC);
- clear_bdi_congested(fm->sb->s_bdi, BLK_RW_ASYNC);
- } else {
- set_bdi_congested(fm->sb->s_bdi, BLK_RW_SYNC);
- set_bdi_congested(fm->sb->s_bdi, BLK_RW_ASYNC);
- }
- }
spin_unlock(&fc->bg_lock);
up_read(&fc->killsb);
fuse_conn_put(fc);
diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
index cd54a529460d..e1b4a846c90d 100644
--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -315,10 +315,6 @@ void fuse_request_end(struct fuse_req *req)
wake_up(&fc->blocked_waitq);
}

- if (fc->num_background == fc->congestion_threshold && fm->sb) {
- clear_bdi_congested(fm->sb->s_bdi, BLK_RW_SYNC);
- clear_bdi_congested(fm->sb->s_bdi, BLK_RW_ASYNC);
- }
fc->num_background--;
fc->active_background--;
flush_bg_queue(fc);
@@ -540,10 +536,6 @@ static bool fuse_request_queue_background(struct fuse_req *req)
fc->num_background++;
if (fc->num_background == fc->max_background)
fc->blocked = 1;
- if (fc->num_background == fc->congestion_threshold && fm->sb) {
- set_bdi_congested(fm->sb->s_bdi, BLK_RW_SYNC);
- set_bdi_congested(fm->sb->s_bdi, BLK_RW_ASYNC);
- }
list_add_tail(&req->list, &fc->bg_queue);
flush_bg_queue(fc);
queued = true;
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 829094451774..94747bac3489 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -966,6 +966,14 @@ static void fuse_readahead(struct readahead_control *rac)
struct fuse_io_args *ia;
struct fuse_args_pages *ap;

+ if (fc->num_background >= fc->congestion_threshold &&
+ rac->ra->async_size >= readahead_count(rac))
+ /*
+ * Congested and only async pages left, so skip the
+ * rest.
+ */
+ break;
+
nr_pages = readahead_count(rac) - nr_pages;
if (nr_pages > max_pages)
nr_pages = max_pages;
@@ -1958,6 +1966,7 @@ static int fuse_writepage_locked(struct page *page)

static int fuse_writepage(struct page *page, struct writeback_control *wbc)
{
+ struct fuse_conn *fc = get_fuse_conn(page->mapping->host);
int err;

if (fuse_page_is_writeback(page->mapping->host, page->index)) {
@@ -1973,6 +1982,10 @@ static int fuse_writepage(struct page *page, struct writeback_control *wbc)
return 0;
}

+ if (wbc->sync_mode == WB_SYNC_NONE &&
+ fc->num_background >= fc->congestion_threshold)
+ return AOP_WRITEPAGE_ACTIVATE;
+
err = fuse_writepage_locked(page);
unlock_page(page);

@@ -2226,6 +2239,10 @@ static int fuse_writepages(struct address_space *mapping,
if (fuse_is_bad(inode))
goto out;

+ if (wbc->sync_mode == WB_SYNC_NONE &&
+ fc->num_background >= fc->congestion_threshold)
+ return 0;
+
data.inode = inode;
data.wpa = NULL;
data.ff = NULL;



2022-03-01 13:44:31

by Miklos Szeredi

Subject: Re: [PATCH 04/11] fuse: remove reliance on bdi congestion

On Tue, 22 Feb 2022 at 04:18, NeilBrown <[email protected]> wrote:
>
> [...]
>
> diff --git a/fs/fuse/file.c b/fs/fuse/file.c
> index 829094451774..94747bac3489 100644
> --- a/fs/fuse/file.c
> +++ b/fs/fuse/file.c
> @@ -966,6 +966,14 @@ static void fuse_readahead(struct readahead_control *rac)
> struct fuse_io_args *ia;
> struct fuse_args_pages *ap;
>
> + if (fc->num_background >= fc->congestion_threshold &&
> + rac->ra->async_size >= readahead_count(rac))
> + /*
> + * Congested and only async pages left, so skip the
> + * rest.
> + */
> + break;

Ah, you are taking care of it here...

Regarding the async part: a potential (corner?) case is if task A is
reading region X and triggering readahead for region Y and at the same
time task B is reading region Y. In the congestion case it can happen
that non-uptodate pages in Y are truncated off the pagecache while B
is waiting for them to become uptodate.

This shouldn't be too hard to trigger: it just needs two sequential
readers of the same file, one slightly ahead of the other. I'll try
to write a test program for this case specifically.

Thanks,
Miklos