2021-09-14 06:02:38

by Kiwoong Kim

Subject: Question about ufs_bsg

Hi,

ufs_bsg was introduced nearly three years ago and it allocates its own request queue.
I faced a symptom related to this and want to ask something about it.

That is, the queue depth for UFS is sometimes limited to half of its maximum value,
even in a situation with many IO requests coming from the filesystem.
It turned out that this only occurs while a query is being processed at the same time.
According to my tracing, when the query starts, the number of users of the hctx that
represents the UFS host increases to two, and with that, some paths calling the
'hctx_may_queue' function in blk-mq seem to throttle dispatches, specifically to 16,
because the number of UFS slots (32 in my case) is divided by two (users).
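
For reference, here is a rough standalone paraphrase of the fair-share calculation
I am describing. The function name and the userspace form are my own sketch, not
the actual code in block/blk-mq.h:

#include <stdio.h>

/*
 * Sketch of the fair-share limit applied when a shared tag set has more
 * than one active user: each user gets roughly total_tags / users tags,
 * rounded up, with a small floor. This mirrors the effect I see via
 * hctx_may_queue(), not its exact implementation.
 */
static unsigned int fair_share_depth(unsigned int total_tags, unsigned int users)
{
	unsigned int depth;

	if (users <= 1)
		return total_tags;

	depth = (total_tags + users - 1) / users;	/* round up */
	return depth < 4 ? 4 : depth;			/* keep a minimum share */
}

int main(void)
{
	/* 32 UFS slots, 2 active users (filesystem IO + the query path) */
	printf("effective depth: %u\n", fair_share_depth(32, 2));	/* prints 16 */
	return 0;
}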

I found that it happened when a query for write booster was being processed,
because write booster only turns on under certain conditions in my base, which
differs from mainline. But when an exceptional event or anything else that could
lead to a query occurs, it can happen even in mainline.

I think the throttling is a bit excessive, so the question:
is there any way to assign queue depth per user on an asymmetric basis?

Thanks.
Kiwoong Kim



2021-09-14 06:40:05

by Avri Altman

Subject: RE: Question about ufs_bsg

Hi,

> Hi,
>
> ufs_bsg was introduced nearly three years ago and it allocates its own request
> queue.
> I faced a symptom related to this and want to ask something about it.
>
> That is, the queue depth for UFS is sometimes limited to half of its
> maximum value, even in a situation with many IO requests coming from
> the filesystem.
This is interesting indeed. Before going further with investigating this,
could you share some more details on your setup:
The bsg node it creates was originally meant to convey a single query request
via the SG_IO ioctl, which is blocking (see the sketch below).
- How do you create many IO requests queueing on that request queue?
- command upiu is not implemented, so are all those IOs query requests?
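
For reference, a minimal userspace sketch of that blocking path as I understand it,
assuming the usual /dev/bsg/ufs-bsg node name (it can differ by kernel version) and
leaving the query UPIU fields unfilled; UPIU_TRANSACTION_QUERY_REQ is defined locally
here with its value from the UFS spec:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/bsg.h>		/* struct sg_io_v4, BSG_PROTOCOL_SCSI */
#include <scsi/sg.h>		/* SG_IO */
#include <scsi/scsi_bsg_ufs.h>	/* struct ufs_bsg_request / ufs_bsg_reply */

#define UPIU_TRANSACTION_QUERY_REQ	0x16	/* query request transaction code */

int main(void)
{
	struct ufs_bsg_request req;
	struct ufs_bsg_reply rsp;
	struct sg_io_v4 io;
	int fd;

	/* Node name may differ depending on the kernel version. */
	fd = open("/dev/bsg/ufs-bsg", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&req, 0, sizeof(req));
	req.msgcode = UPIU_TRANSACTION_QUERY_REQ;
	/*
	 * Fill req.upiu_req.header and req.upiu_req.qr with the actual
	 * query (opcode, idn, index, selector); omitted here.
	 */

	memset(&rsp, 0, sizeof(rsp));
	memset(&io, 0, sizeof(io));
	io.guard = 'Q';
	io.protocol = BSG_PROTOCOL_SCSI;
	io.subprotocol = BSG_SUB_PROTOCOL_SCSI_TRANSPORT;
	io.request = (__u64)(uintptr_t)&req;
	io.request_len = sizeof(req);
	io.response = (__u64)(uintptr_t)&rsp;
	io.max_response_len = sizeof(rsp);

	/* Blocking: returns only once the single query UPIU has completed. */
	if (ioctl(fd, SG_IO, &io) < 0)
		perror("SG_IO");
	else
		printf("query result: %d\n", rsp.result);

	close(fd);
	return 0;
}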

Thanks,
Avri

> It turned out that this only occurs while a query is being processed at
> the same time.
> According to my tracing, when the query starts, the number of users of
> the hctx that represents the UFS host increases to two, and with that,
> some paths calling the 'hctx_may_queue' function in blk-mq seem to
> throttle dispatches, specifically to 16, because the number of UFS slots
> (32 in my case) is divided by two (users).
>
> I found that it happened when a query for write booster was being
> processed, because write booster only turns on under certain conditions
> in my base, which differs from mainline. But when an exceptional event or
> anything else that could lead to a query occurs, it can happen even in
> mainline.
>
> I think the throttling is a bit excessive, so the question:
> is there any way to assign queue depth per user on an asymmetric basis?
>
> Thanks.
> Kiwoong Kim
>

2021-09-14 06:46:26

by Kiwoong Kim

Subject: RE: Question about ufs_bsg

> Hi,
>
> > Hi,
> >
> > ufs_bsg was introduced nearly three years ago and it allocates its own
> > request queue.
> > I faced a symptom related to this and want to ask something about it.
> >
> > That is, the queue depth for UFS is sometimes limited to half of its
> > maximum value, even in a situation with many IO requests coming from
> > the filesystem.
> This is interesting indeed. Before going further with investigating this,

Hi. What I was originally referring to is not ufs_bsg itself, but, as you might already know, it also allocates its own request queue.
From that point of view, we can imagine it could be the same situation.

> Could you share some more details on your setup:
> The bsg node it creates was originally meant to convey a single query
> request via the SG_IO ioctl, which is blocking.
> - How do you create many IO requests queueing on that request queue?

I used some benchmarks, such as tiobench or Androbench, that can generate heavy IO scenarios.

> - command upiu is not implemented, so are all those IOs query requests?

What I've seen is just one query and many scsi commands.

>
> > It turned out that this only occurs while a query is being processed
> > at the same time.
> > According to my tracing, when the query starts, the number of users of
> > the hctx that represents the UFS host increases to two, and with that,
> > some paths calling the 'hctx_may_queue' function in blk-mq seem to
> > throttle dispatches, specifically to 16, because the number of UFS
> > slots (32 in my case) is divided by two (users).
> >
> > I found that it happened when a query for write booster was being
> > processed, because write booster only turns on under certain conditions
> > in my base, which differs from mainline. But when an exceptional event
> > or anything else that could lead to a query occurs, it can happen even
> > in mainline.
> >
> > I think the throttling is a bit excessive, so the question: is there
> > any way to assign queue depth per user on an asymmetric basis?
> >
> > Thanks.
> > Kiwoong Kim
> >