2018-02-28 17:12:59

by Vratislav Bendel

Subject: [PATCH] xfs: Correctly invert xfs_buftarg LRU isolation logic

The function xfs_buftarg_isolate(), used by the xfs buffer shrinkers
to determine whether a buffer should be isolated from the LRU list
and disposed of, has inverted logic.

Excerpt from xfs_buftarg_isolate():
/*
* Decrement the b_lru_ref count unless the value is already
* zero. If the value is already zero, we need to reclaim the
* buffer, otherwise it gets another trip through the LRU.
*/
if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
spin_unlock(&bp->b_lock);
return LRU_ROTATE;
}

However, as per documentation, atomic_add_unless() returns _zero_
if the atomic value was originally equal to the specified *unsless* value.

Ultimately, this causes an xfs_buffer with ->b_lru_ref == 0 to take
another trip around the LRU, while buffers with non-zero b_lru_ref
are isolated.
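
To make the return-value semantics concrete, here is a minimal
userspace sketch (plain ints stand in for atomic_t, and add_unless()
below merely mimics atomic_add_unless(); it is an illustration, not
kernel code):

#include <stdio.h>

/* Stand-in for atomic_add_unless(): add 'a' to '*v' unless '*v'
 * equals 'u'; return nonzero only if the addition was performed. */
static int add_unless(int *v, int a, int u)
{
	if (*v == u)
		return 0;
	*v += a;
	return 1;
}

int main(void)
{
	int lru_ref = 0;

	/* The pre-patch check: lru_ref is already zero, add_unless()
	 * returns 0, the '!' turns that into true, and the buffer is
	 * rotated even though the comment says it must be reclaimed. */
	if (!add_unless(&lru_ref, -1, 0))
		printf("lru_ref == 0: rotated (should be reclaimed)\n");

	lru_ref = 2;

	/* With a non-zero lru_ref the decrement succeeds, the condition
	 * is false, and the buffer falls through to the isolate/reclaim
	 * path instead of being rotated. */
	if (!add_unless(&lru_ref, -1, 0))
		printf("never reached for non-zero lru_ref\n");
	else
		printf("lru_ref was 2: fell through to reclaim (should be rotated)\n");

	return 0;
}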

Signed-off-by: Vratislav Bendel <[email protected]>
CC: Brian Foster <[email protected]>
---
fs/xfs/xfs_buf.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index d1da2ee9e6db..ac669a10c62f 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -1708,7 +1708,7 @@ xfs_buftarg_isolate(
* zero. If the value is already zero, we need to reclaim the
* buffer, otherwise it gets another trip through the LRU.
*/
- if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
+ if (atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
spin_unlock(&bp->b_lock);
return LRU_ROTATE;
}
--
2.14.3



2018-02-28 19:09:58

by Luis Chamberlain

Subject: Re: [PATCH] xfs: Correctly invert xfs_buftarg LRU isolation logic

On Wed, Feb 28, 2018 at 04:49:51PM +0100, Vratislav Bendel wrote:
> The function xfs_buftarg_isolate() used by xfs buffer schrinkers
> to determine whether a buffer should be isolated and disposed
> from LRU list, has inverted logic.
>
> Excerpt from xfs_buftarg_isolate():
> /*
> * Decrement the b_lru_ref count unless the value is already
> * zero. If the value is already zero, we need to reclaim the
> * buffer, otherwise it gets another trip through the LRU.
> */
> if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
> spin_unlock(&bp->b_lock);
> return LRU_ROTATE;
> }
>
> However, as per documentation, atomic_add_unless() returns _zero_
> if the atomic value was originally equal to the specified *unsless* value.
>
> Ultimately causing a xfs_buffer with ->b_lru_ref == 0, to take another
> trip around LRU, while isolating buffers with non-zero b_lru_ref.
>
> Signed-off-by: Vratislav Bendel <[email protected]>
> CC: Brian Foster <[email protected]>

Can you add a respective Fixes: tag? Also, what effects are observed by
the user, or in the kernel log, when this happens?

Luis

2018-03-01 17:40:39

by Brian Foster

Subject: Re: [PATCH] xfs: Correctly invert xfs_buftarg LRU isolation logic

On Wed, Feb 28, 2018 at 04:49:51PM +0100, Vratislav Bendel wrote:
> The function xfs_buftarg_isolate() used by xfs buffer schrinkers
> to determine whether a buffer should be isolated and disposed
> from LRU list, has inverted logic.
>
> Excerpt from xfs_buftarg_isolate():
> /*
> * Decrement the b_lru_ref count unless the value is already
> * zero. If the value is already zero, we need to reclaim the
> * buffer, otherwise it gets another trip through the LRU.
> */
> if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
> spin_unlock(&bp->b_lock);
> return LRU_ROTATE;
> }
>
> However, as per documentation, atomic_add_unless() returns _zero_
> if the atomic value was originally equal to the specified *unsless* value.
>

Nit: unless

> Ultimately causing a xfs_buffer with ->b_lru_ref == 0, to take another
> trip around LRU, while isolating buffers with non-zero b_lru_ref.
>
> Signed-off-by: Vratislav Bendel <[email protected]>
> CC: Brian Foster <[email protected]>
> ---

It might be worth pointing out in the commit log that, currently, isolated
buffers end up right back on the LRU once they are released, because
->b_lru_ref remains elevated. Therefore, this patch essentially fixes
that circuitous route by leaving them on the LRU as originally intended.
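
Roughly speaking (paraphrased from memory, with the surrounding locking
and state flags omitted -- not the literal xfs_buf_rele() code), the
release path does something like:

	if (atomic_read(&bp->b_lru_ref)) {
		/* still has LRU references: put it back on the LRU */
		list_lru_add(&bp->b_target->bt_lru, &bp->b_lru);
	} else {
		/* no LRU references left: tear the buffer down */
		xfs_buf_free(bp);
	}

so a buffer that was wrongly isolated with a non-zero ->b_lru_ref just
lands back on the LRU the next time it is released.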
Otherwise this looks Ok to me:

Reviewed-by: Brian Foster <[email protected]>

Thanks for sending the patch.

Brian

> fs/xfs/xfs_buf.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index d1da2ee9e6db..ac669a10c62f 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -1708,7 +1708,7 @@ xfs_buftarg_isolate(
> * zero. If the value is already zero, we need to reclaim the
> * buffer, otherwise it gets another trip through the LRU.
> */
> - if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
> + if (atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
> spin_unlock(&bp->b_lock);
> return LRU_ROTATE;
> }
> --
> 2.14.3
>

2018-03-01 22:50:15

by Darrick J. Wong

Subject: Re: [PATCH] xfs: Correctly invert xfs_buftarg LRU isolation logic

On Wed, Feb 28, 2018 at 04:49:51PM +0100, Vratislav Bendel wrote:
> The function xfs_buftarg_isolate() used by xfs buffer schrinkers
> to determine whether a buffer should be isolated and disposed
> from LRU list, has inverted logic.
>
> Excerpt from xfs_buftarg_isolate():
> /*
> * Decrement the b_lru_ref count unless the value is already
> * zero. If the value is already zero, we need to reclaim the
> * buffer, otherwise it gets another trip through the LRU.
> */
> if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
> spin_unlock(&bp->b_lock);
> return LRU_ROTATE;
> }
>
> However, as per documentation, atomic_add_unless() returns _zero_
> if the atomic value was originally equal to the specified *unsless* value.
>
> Ultimately causing a xfs_buffer with ->b_lru_ref == 0, to take another
> trip around LRU, while isolating buffers with non-zero b_lru_ref.
>
> Signed-off-by: Vratislav Bendel <[email protected]>
> CC: Brian Foster <[email protected]>

Looks ok, will test...
Reviewed-by: Darrick J. Wong <[email protected]>

--D

> ---
> fs/xfs/xfs_buf.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index d1da2ee9e6db..ac669a10c62f 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -1708,7 +1708,7 @@ xfs_buftarg_isolate(
> * zero. If the value is already zero, we need to reclaim the
> * buffer, otherwise it gets another trip through the LRU.
> */
> - if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
> + if (atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
> spin_unlock(&bp->b_lock);
> return LRU_ROTATE;
> }
> --
> 2.14.3
>

2018-03-02 16:41:48

by Darrick J. Wong

Subject: Re: [PATCH] xfs: Correctly invert xfs_buftarg LRU isolation logic

On Thu, Mar 01, 2018 at 02:48:00PM -0800, Darrick J. Wong wrote:
> On Wed, Feb 28, 2018 at 04:49:51PM +0100, Vratislav Bendel wrote:
> > The function xfs_buftarg_isolate() used by xfs buffer schrinkers
> > to determine whether a buffer should be isolated and disposed
> > from LRU list, has inverted logic.
> >
> > Excerpt from xfs_buftarg_isolate():
> > /*
> > * Decrement the b_lru_ref count unless the value is already
> > * zero. If the value is already zero, we need to reclaim the
> > * buffer, otherwise it gets another trip through the LRU.
> > */
> > if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
> > spin_unlock(&bp->b_lock);
> > return LRU_ROTATE;
> > }
> >
> > However, as per documentation, atomic_add_unless() returns _zero_
> > if the atomic value was originally equal to the specified *unsless* value.
> >
> > Ultimately causing a xfs_buffer with ->b_lru_ref == 0, to take another
> > trip around LRU, while isolating buffers with non-zero b_lru_ref.
> >
> > Signed-off-by: Vratislav Bendel <[email protected]>
> > CC: Brian Foster <[email protected]>
>
> Looks ok, will test...
> Reviewed-by: Darrick J. Wong <[email protected]>

This tests ok, but please address Brian's and Luis' comments before I put
this in the upstream tree.

--D

> --D
>
> > ---
> > fs/xfs/xfs_buf.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> > index d1da2ee9e6db..ac669a10c62f 100644
> > --- a/fs/xfs/xfs_buf.c
> > +++ b/fs/xfs/xfs_buf.c
> > @@ -1708,7 +1708,7 @@ xfs_buftarg_isolate(
> > * zero. If the value is already zero, we need to reclaim the
> > * buffer, otherwise it gets another trip through the LRU.
> > */
> > - if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
> > + if (atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
> > spin_unlock(&bp->b_lock);
> > return LRU_ROTATE;
> > }
> > --
> > 2.14.3
> >

2018-03-05 10:29:14

by Vratislav Bendel

Subject: Re: [PATCH] xfs: Correctly invert xfs_buftarg LRU isolation logic

(In response to Luis' comment:)
> Can you add a respective Fixes: tag?

It has apparently been present since the LRU was added to the xfs buffer cache via:
commit 430cbeb86fdcbbdabea7d4aa65307de8de425350
[xfs: add a lru to the XFS buffer cache]

But I wouldn't say this patch "fixes" that commit.
What do you think? Should a Fixes: tag be added in this case?


> Also what effects are observed by
> the user when this happens on the kernel log?

I haven't spotted any differences visible to the user, nor in the kernel log.

(In response to Brian's comment:)
>> However, as per documentation, atomic_add_unless() returns _zero_
>> if the atomic value was originally equal to the specified *unsless* value.
>>
> Nit: unless

Thanks very much for the feedback. Since this is my very first upstream
commit proposal, I expected that some polish would be needed.


> It might be worth pointing out in the commit log that currently isolated
> buffers end up right back on the LRU once they are released, because
> ->b_lru_ref remains elevated. Therefore, this patch essentially fixes
> that circuitous route by leaving them on the LRU as originally intended.
> Otherwise this looks Ok to me:

So the final commit message could be:
~~~
Currently the xfs_buftarg_isolate() is causing an xfs_buffer
with zero b_lru_ref, to take another trip around LRU, while
isolating buffers with non-zero b_lru_ref.

Additionally those isolated buffers end up right back on the LRU
once they are released, because ->b_lru_ref remains elevated.

Fix that circuitous route by leaving them on the LRU
as originally intended.

>> Signed-off-by: Vratislav Bendel <[email protected]>
> Reviewed-by: Brian Foster <[email protected]>
> ---

2018-03-05 18:55:59

by Darrick J. Wong

Subject: Re: [PATCH] xfs: Correctly invert xfs_buftarg LRU isolation logic

On Mon, Mar 05, 2018 at 11:19:46AM +0100, Vratislav Bendel wrote:
> (In response to Luis' comment:)
> > Can you add a respective Fixes: tag?
>
> It was apparently present since LRU was added to xfs buffer cache via:
> commit 430cbeb86fdcbbdabea7d4aa65307de8de425350
> [xfs: add a lru to the XFS buffer cache]
>
> But I wouldn't say this patch "fixes" that commit.
> What do you think? Should a fixes tag be added in this case?
>
>
> > Also what effects are observed by
> > the user when this happens on the kernel log?
>
> I haven't spotted any differences visible to user, nor in the kernel log.
>
> (In response to Brian's comment:)
> >> However, as per documentation, atomic_add_unless() returns _zero_
> >> if the atomic value was originally equal to the specified *unsless* value.
> >>
> > Nit: unless
>
> Thanks very much for feedback. Since it's my very first upstream
> commit-proposal,
> I expected that some polish would be needed.
>
>
> > It might be worth pointing out in the commit log that currently isolated
> > buffers end up right back on the LRU once they are released, because
> > ->b_lru_ref remains elevated. Therefore, this patch essentially fixes
> > that circuitous route by leaving them on the LRU as originally intended.
> > Otherwise this looks Ok to me:
>
> So the final commit message could be:
> ~~~
> Currently the xfs_buftarg_isolate() is causing an xfs_buffer

"Due to an inverted logic mistake in xfs_buftarg_isolate()..."?

> with zero b_lru_ref, to take another trip around LRU, while

^^^^ no need for this comma

> isolating buffers with non-zero b_lru_ref.
>
> Additionally those isolated buffers end up right back on the LRU
> once they are released, because ->b_lru_ref remains elevated.
>
> Fix that circuitous route by leaving them on the LRU
> as originally intended.

Otherwise this seems fine to me; can you please resend the patch w/
updated change log and reviewed-by tags?

Reviewed-by: Darrick J. Wong <[email protected]>

--D

>
> >> Signed-off-by: Vratislav Bendel <[email protected]>
> > Reviewed-by: Brian Foster <[email protected]>
> > ---