Hello!
A relatively common misuse of these barriers is to apply these to
operations which are not read-modify-write operations, such as
atomic_set() and atomic_read(); examples were discussed in [1].
This series attempts to fix those uses by (conservatively) replacing
the smp_mb__{before,after}_atomic() barriers with full memory barriers.
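For illustration, the pattern being fixed looks roughly like this (a made-up
sketch, not taken from any one of the patched files; "s", "NEW" and "refs"
are placeholders):

	/* misuse: the barrier below only orders RMW atomics, not atomic_set() */
	smp_mb__before_atomic();
	atomic_set(&s->state, NEW);

	/* conservative replacement applied by this series */
	smp_mb();
	atomic_set(&s->state, NEW);

	/* what the barrier is actually meant for: a value-less RMW operation */
	smp_mb__before_atomic();
	atomic_inc(&s->refs);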
Applies on 5.1-rc7.
Thanks,
Andrea
Cc: "Paul E. McKenney" <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rob Clark <[email protected]>
Cc: Sean Paul <[email protected]>
Cc: David Airlie <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: Jordan Crouse <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Omar Sandoval <[email protected]>
Cc: "Yan, Zheng" <[email protected]>
Cc: Sage Weil <[email protected]>
Cc: Ilya Dryomov <[email protected]>
Cc: Dennis Dalessandro <[email protected]>
Cc: Mike Marciniszyn <[email protected]>
Cc: Doug Ledford <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
[1] http://lkml.kernel.org/r/[email protected]
Andrea Parri (5):
drm/msm: Fix improper uses of smp_mb__{before,after}_atomic()
bio: fix improper use of smp_mb__before_atomic()
sbitmap: fix improper use of smp_mb__before_atomic()
ceph: fix improper use of smp_mb__before_atomic()
IB/hfi1: Fix improper uses of smp_mb__before_atomic()
drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 4 ++--
drivers/infiniband/sw/rdmavt/qp.c | 6 +++---
fs/ceph/super.h | 2 +-
include/linux/bio.h | 2 +-
lib/sbitmap.c | 2 +-
5 files changed, 8 insertions(+), 8 deletions(-)
--
2.7.4
These barriers only apply to the read-modify-write operations; in
particular, they do not apply to the atomic_set() primitive.
Replace the barriers with smp_mb()s.
Fixes: b1fc2839d2f92 ("drm/msm: Implement preemption for A5XX targets")
Cc: [email protected]
Reported-by: "Paul E. McKenney" <[email protected]>
Reported-by: Peter Zijlstra <[email protected]>
Signed-off-by: Andrea Parri <[email protected]>
Cc: Rob Clark <[email protected]>
Cc: Sean Paul <[email protected]>
Cc: David Airlie <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: Jordan Crouse <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
---
drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
index 3d62310a535fb..ee0820ee0c664 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
@@ -39,10 +39,10 @@ static inline void set_preempt_state(struct a5xx_gpu *gpu,
* preemption or in the interrupt handler so barriers are needed
* before...
*/
- smp_mb__before_atomic();
+ smp_mb();
atomic_set(&gpu->preempt_state, new);
/* ... and after*/
- smp_mb__after_atomic();
+ smp_mb();
}
/* Write the most recent wptr for the given ring into the hardware */
--
2.7.4
This barrier only applies to the read-modify-write operations; in
particular, it does not apply to the atomic_set() primitive.
Replace the barrier with an smp_mb().
Fixes: dac56212e8127 ("bio: skip atomic inc/dec of ->bi_cnt for most use cases")
Cc: [email protected]
Reported-by: "Paul E. McKenney" <[email protected]>
Reported-by: Peter Zijlstra <[email protected]>
Signed-off-by: Andrea Parri <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: [email protected]
---
include/linux/bio.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index e584673c18814..5becbafb84e8a 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -224,7 +224,7 @@ static inline void bio_cnt_set(struct bio *bio, unsigned int count)
{
if (count != 1) {
bio->bi_flags |= (1 << BIO_REFFED);
- smp_mb__before_atomic();
+ smp_mb();
}
atomic_set(&bio->__bi_cnt, count);
}
--
2.7.4
This barrier only applies to the read-modify-write operations; in
particular, it does not apply to the atomic_read() primitive.
Replace the barrier with an smp_mb().
Fixes: 856cc4c237add ("IB/hfi1: Add the capability for reserved operations")
Cc: [email protected]
Reported-by: "Paul E. McKenney" <[email protected]>
Reported-by: Peter Zijlstra <[email protected]>
Signed-off-by: Andrea Parri <[email protected]>
Cc: Dennis Dalessandro <[email protected]>
Cc: Mike Marciniszyn <[email protected]>
Cc: Doug Ledford <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: [email protected]
---
drivers/infiniband/sw/rdmavt/qp.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
index a34b9a2a32b60..b64fd151d31fb 100644
--- a/drivers/infiniband/sw/rdmavt/qp.c
+++ b/drivers/infiniband/sw/rdmavt/qp.c
@@ -1863,11 +1863,11 @@ static inline int rvt_qp_is_avail(
u32 reserved_used;
/* see rvt_qp_wqe_unreserve() */
- smp_mb__before_atomic();
+ smp_mb();
reserved_used = atomic_read(&qp->s_reserved_used);
if (unlikely(reserved_op)) {
/* see rvt_qp_wqe_unreserve() */
- smp_mb__before_atomic();
+ smp_mb();
if (reserved_used >= rdi->dparms.reserved_operations)
return -ENOMEM;
return 0;
@@ -1882,7 +1882,7 @@ static inline int rvt_qp_is_avail(
avail = slast - qp->s_head;
/* see rvt_qp_wqe_unreserve() */
- smp_mb__before_atomic();
+ smp_mb();
reserved_used = atomic_read(&qp->s_reserved_used);
avail = avail - 1 -
(rdi->dparms.reserved_operations - reserved_used);
--
2.7.4
This barrier only applies to the read-modify-write operations; in
particular, it does not apply to the atomic64_set() primitive.
Replace the barrier with an smp_mb().
Fixes: fdd4e15838e59 ("ceph: rework dcache readdir")
Cc: [email protected]
Reported-by: "Paul E. McKenney" <[email protected]>
Reported-by: Peter Zijlstra <[email protected]>
Signed-off-by: Andrea Parri <[email protected]>
Cc: "Yan, Zheng" <[email protected]>
Cc: Sage Weil <[email protected]>
Cc: Ilya Dryomov <[email protected]>
Cc: [email protected]
---
fs/ceph/super.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/ceph/super.h b/fs/ceph/super.h
index 16c03188578ea..b5c782e6d62f1 100644
--- a/fs/ceph/super.h
+++ b/fs/ceph/super.h
@@ -541,7 +541,7 @@ static inline void __ceph_dir_set_complete(struct ceph_inode_info *ci,
long long release_count,
long long ordered_count)
{
- smp_mb__before_atomic();
+ smp_mb();
atomic64_set(&ci->i_complete_seq[0], release_count);
atomic64_set(&ci->i_complete_seq[1], ordered_count);
}
--
2.7.4
This barrier only applies to the read-modify-write operations; in
particular, it does not apply to the atomic_set() primitive.
Replace the barrier with an smp_mb().
Fixes: 6c0ca7ae292ad ("sbitmap: fix wakeup hang after sbq resize")
Cc: [email protected]
Reported-by: "Paul E. McKenney" <[email protected]>
Reported-by: Peter Zijlstra <[email protected]>
Signed-off-by: Andrea Parri <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Omar Sandoval <[email protected]>
Cc: [email protected]
---
lib/sbitmap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 155fe38756ecf..4a7fc4915dfc6 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -435,7 +435,7 @@ static void sbitmap_queue_update_wake_batch(struct sbitmap_queue *sbq,
* to ensure that the batch size is updated before the wait
* counts.
*/
- smp_mb__before_atomic();
+ smp_mb();
for (i = 0; i < SBQ_WAIT_QUEUES; i++)
atomic_set(&sbq->ws[i].wait_cnt, 1);
}
--
2.7.4
>-----Original Message-----
>From: [email protected] [mailto:linux-rdma-
>[email protected]] On Behalf Of Andrea Parri
>Sent: Monday, April 29, 2019 4:15 PM
>To: [email protected]
>Cc: Andrea Parri <[email protected]>;
>[email protected]; Dalessandro, Dennis
><[email protected]>; Marciniszyn, Mike
><[email protected]>; Doug Ledford <[email protected]>;
>Jason Gunthorpe <[email protected]>; [email protected]
>Subject: [PATCH 5/5] IB/hfi1: Fix improper uses of smp_mb__before_atomic()
>
>This barrier only applies to the read-modify-write operations; in
>particular, it does not apply to the atomic_read() primitive.
>
>Replace the barrier with an smp_mb().
This is one of a couple of barrier issues that we are currently looking into.
See:
[PATCH for-next 6/9] IB/rdmavt: Add new completion inline
We will take a look at this one as well.
Thanks,
Mike
>Fixes: 856cc4c237add ("IB/hfi1: Add the capability for reserved operations")
>Cc: [email protected]
>Reported-by: "Paul E. McKenney" <[email protected]>
>Reported-by: Peter Zijlstra <[email protected]>
>Signed-off-by: Andrea Parri <[email protected]>
>Cc: Dennis Dalessandro <[email protected]>
>Cc: Mike Marciniszyn <[email protected]>
>Cc: Doug Ledford <[email protected]>
>Cc: Jason Gunthorpe <[email protected]>
>Cc: [email protected]
>---
> drivers/infiniband/sw/rdmavt/qp.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
>diff --git a/drivers/infiniband/sw/rdmavt/qp.c
>b/drivers/infiniband/sw/rdmavt/qp.c
>index a34b9a2a32b60..b64fd151d31fb 100644
>--- a/drivers/infiniband/sw/rdmavt/qp.c
>+++ b/drivers/infiniband/sw/rdmavt/qp.c
>@@ -1863,11 +1863,11 @@ static inline int rvt_qp_is_avail(
> u32 reserved_used;
>
> /* see rvt_qp_wqe_unreserve() */
>- smp_mb__before_atomic();
>+ smp_mb();
> reserved_used = atomic_read(&qp->s_reserved_used);
> if (unlikely(reserved_op)) {
> /* see rvt_qp_wqe_unreserve() */
>- smp_mb__before_atomic();
>+ smp_mb();
> if (reserved_used >= rdi->dparms.reserved_operations)
> return -ENOMEM;
> return 0;
>@@ -1882,7 +1882,7 @@ static inline int rvt_qp_is_avail(
> avail = slast - qp->s_head;
>
> /* see rvt_qp_wqe_unreserve() */
>- smp_mb__before_atomic();
>+ smp_mb();
> reserved_used = atomic_read(&qp->s_reserved_used);
> avail = avail - 1 -
> (rdi->dparms.reserved_operations - reserved_used);
>--
>2.7.4
Hi Mike,
> >This barrier only applies to the read-modify-write operations; in
> >particular, it does not apply to the atomic_read() primitive.
> >
> >Replace the barrier with an smp_mb().
>
> This is one of a couple of barrier issues that we are currently looking into.
>
> See:
>
> [PATCH for-next 6/9] IB/rdmavt: Add new completion inline
>
> We will take a look at this one as well.
Thank you for the reference and for looking into this,
Andrea
On Mon, Apr 29, 2019 at 10:14:58PM +0200, Andrea Parri wrote:
> This barrier only applies to the read-modify-write operations; in
> particular, it does not apply to the atomic_set() primitive.
>
> Replace the barrier with an smp_mb().
> @@ -224,7 +224,7 @@ static inline void bio_cnt_set(struct bio *bio, unsigned int count)
> {
> if (count != 1) {
> bio->bi_flags |= (1 << BIO_REFFED);
> - smp_mb__before_atomic();
Maybe also add:
/*
* XXX the comment that explains this barrier goes here
*/
> + smp_mb();
> }
> atomic_set(&bio->__bi_cnt, count);
> }
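For what it's worth, one guess at the shape of such a comment (the intent is
not confirmed here, so the wording below is only a placeholder):

	if (count != 1) {
		bio->bi_flags |= (1 << BIO_REFFED);
		/*
		 * Presumably: make the BIO_REFFED flag store visible before
		 * the reference-count store below; which reader this pairs
		 * with (bio_put()?) would need to be spelled out here.
		 */
		smp_mb();
	}
	atomic_set(&bio->__bi_cnt, count);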
On Mon, Apr 29, 2019 at 10:15:00PM +0200, Andrea Parri wrote:
> This barrier only applies to the read-modify-write operations; in
> particular, it does not apply to the atomic64_set() primitive.
>
> Replace the barrier with an smp_mb().
>
> @@ -541,7 +541,7 @@ static inline void __ceph_dir_set_complete(struct ceph_inode_info *ci,
> long long release_count,
> long long ordered_count)
> {
> - smp_mb__before_atomic();
same
/*
* XXX: the comment that explains this barrier goes here.
*/
> + smp_mb();
> atomic64_set(&ci->i_complete_seq[0], release_count);
> atomic64_set(&ci->i_complete_seq[1], ordered_count);
> }
> --
> 2.7.4
>
On Mon, Apr 29, 2019 at 10:15:01PM +0200, Andrea Parri wrote:
> This barrier only applies to the read-modify-write operations; in
> particular, it does not apply to the atomic_read() primitive.
>
> Replace the barrier with an smp_mb().
>
> Fixes: 856cc4c237add ("IB/hfi1: Add the capability for reserved operations")
> Cc: [email protected]
> Reported-by: "Paul E. McKenney" <[email protected]>
> Reported-by: Peter Zijlstra <[email protected]>
> Signed-off-by: Andrea Parri <[email protected]>
> Cc: Dennis Dalessandro <[email protected]>
> Cc: Mike Marciniszyn <[email protected]>
> Cc: Doug Ledford <[email protected]>
> Cc: Jason Gunthorpe <[email protected]>
> Cc: [email protected]
> ---
> drivers/infiniband/sw/rdmavt/qp.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
> index a34b9a2a32b60..b64fd151d31fb 100644
> --- a/drivers/infiniband/sw/rdmavt/qp.c
> +++ b/drivers/infiniband/sw/rdmavt/qp.c
> @@ -1863,11 +1863,11 @@ static inline int rvt_qp_is_avail(
> u32 reserved_used;
>
> /* see rvt_qp_wqe_unreserve() */
I see a completely bogus comment in rvt_qp_wqe_unreserve(); referring to a
bogus comment makes this barrier bogus too.
> - smp_mb__before_atomic();
> + smp_mb();
> reserved_used = atomic_read(&qp->s_reserved_used);
> if (unlikely(reserved_op)) {
> /* see rvt_qp_wqe_unreserve() */
> - smp_mb__before_atomic();
This was a 'before' barrier, but there is nothing _after_ it, which means
this barrier was complete garbage anyway.
> + smp_mb();
> if (reserved_used >= rdi->dparms.reserved_operations)
> return -ENOMEM;
> return 0;
> @@ -1882,7 +1882,7 @@ static inline int rvt_qp_is_avail(
> avail = slast - qp->s_head;
>
> /* see rvt_qp_wqe_unreserve() */
> - smp_mb__before_atomic();
> + smp_mb();
Same as the first.
> reserved_used = atomic_read(&qp->s_reserved_used);
> avail = avail - 1 -
> (rdi->dparms.reserved_operations - reserved_used);
On Mon, Apr 29, 2019 at 10:14:56PM +0200, Andrea Parri wrote:
> Hello!
>
> A relatively common misuse of these barriers is to apply these to
> operations which are not read-modify-write operations, such as
> atomic_set() and atomic_read(); examples were discussed in [1].
>
> This series attempts to fix those uses by (conservatively) replacing
> the smp_mb__{before,after}_atomic() barriers with full memory barriers.
I don't think blindly doing this replacement makes the code any better;
much of the code you found is just straight up dodgy to begin with.
I think the people should mostly just consider this a bug report.
Also, remember a memory barrier without a coherent comment is most
likely a bug anyway.
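Concretely, the kind of comment being asked for would look something like
this (the accesses and the pairing below are purely illustrative):

	/*
	 * Order the STORE to foo->flags above against the STORE to
	 * foo->state below; pairs with the smp_rmb() in foo_reader().
	 */
	smp_mb();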
On Tue, Apr 30, 2019 at 4:26 PM Peter Zijlstra <[email protected]> wrote:
>
> On Mon, Apr 29, 2019 at 10:15:00PM +0200, Andrea Parri wrote:
> > This barrier only applies to the read-modify-write operations; in
> > particular, it does not apply to the atomic64_set() primitive.
> >
> > Replace the barrier with an smp_mb().
> >
>
> > @@ -541,7 +541,7 @@ static inline void __ceph_dir_set_complete(struct ceph_inode_info *ci,
> > long long release_count,
> > long long ordered_count)
> > {
> > - smp_mb__before_atomic();
>
> same
> /*
> * XXX: the comment that explains this barrier goes here.
> */
>
It makes sure operations that set up the readdir cache (updates to the page
cache and i_size) are strongly ordered with the following atomic64_set().
> > + smp_mb();
>
> > atomic64_set(&ci->i_complete_seq[0], release_count);
> > atomic64_set(&ci->i_complete_seq[1], ordered_count);
> > }
> > --
> > 2.7.4
> >
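A sketch of how that explanation could be folded into the helper as a
comment (the wording below merely restates the description above; an actual
v2 may differ):

	static inline void __ceph_dir_set_complete(struct ceph_inode_info *ci,
						   long long release_count,
						   long long ordered_count)
	{
		/*
		 * Make sure the operations that set up the readdir cache
		 * (updates to the page cache and i_size) are strongly
		 * ordered with respect to the atomic64_set()s below.
		 */
		smp_mb();
		atomic64_set(&ci->i_complete_seq[0], release_count);
		atomic64_set(&ci->i_complete_seq[1], ordered_count);
	}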
On Tue, Apr 30, 2019 at 10:34:09AM +0200, Peter Zijlstra wrote:
> On Mon, Apr 29, 2019 at 10:14:56PM +0200, Andrea Parri wrote:
> > Hello!
> >
> > A relatively common misuse of these barriers is to apply these to
> > operations which are not read-modify-write operations, such as
> > atomic_set() and atomic_read(); examples were discussed in [1].
> >
> > This series attempts to fix those uses by (conservatively) replacing
> > the smp_mb__{before,after}_atomic() barriers with full memory barriers.
>
> I don't think blindly doing this replacement makes the code any better;
> much of the code you found is just straight up dodgy to begin with.
>
> I think the people should mostly just consider this a bug report.
Bug, misuse, patch, and RFC all seem appropriate to me in this context.
> Also, remember a memory barrier without a coherent comment is most
> likely a bug anyway.
Right. Hopefully, the people in Cc: will want to shed some light on this:
I know what these smp_mb__{before,after}_atomic() cannot do, but I can only
guess (I won't!) what they are supposed to accomplish (e.g., which memory
accesses are being ordered, and what the matching barriers are); maybe this
can also justify the "conservative" approach presented here.
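For example, if the intent is the usual message-passing pattern, the pairing
that would have to be documented looks like this (purely illustrative;
whether this is what the original authors meant is exactly the open
question):

	/* writer */				/* reader */
	WRITE_ONCE(data, 1);			r1 = atomic_read(&flag);
	smp_mb();				smp_mb();
	atomic_set(&flag, 1);			r2 = READ_ONCE(data);

	/* if the reader observes r1 == 1, then r2 == 1 is guaranteed */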
Andrea
On Mon, Apr 29, 2019 at 10:14:57PM +0200, Andrea Parri wrote:
> These barriers only apply to the read-modify-write operations; in
> particular, they do not apply to the atomic_set() primitive.
>
> Replace the barriers with smp_mb()s.
>
> Fixes: b1fc2839d2f92 ("drm/msm: Implement preemption for A5XX targets")
> Cc: [email protected]
> Reported-by: "Paul E. McKenney" <[email protected]>
> Reported-by: Peter Zijlstra <[email protected]>
> Signed-off-by: Andrea Parri <[email protected]>
> Cc: Rob Clark <[email protected]>
> Cc: Sean Paul <[email protected]>
> Cc: David Airlie <[email protected]>
> Cc: Daniel Vetter <[email protected]>
> Cc: Jordan Crouse <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
Rob, Sean, Jordan: any suggestions to move this patch forward?
Thanx,
Andrea
> ---
> drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
> index 3d62310a535fb..ee0820ee0c664 100644
> --- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
> +++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
> @@ -39,10 +39,10 @@ static inline void set_preempt_state(struct a5xx_gpu *gpu,
> * preemption or in the interrupt handler so barriers are needed
> * before...
> */
> - smp_mb__before_atomic();
> + smp_mb();
> atomic_set(&gpu->preempt_state, new);
> /* ... and after*/
> - smp_mb__after_atomic();
> + smp_mb();
> }
>
> /* Write the most recent wptr for the given ring into the hardware */
> --
> 2.7.4
>
On Mon, Apr 29, 2019 at 10:14:58PM +0200, Andrea Parri wrote:
> This barrier only applies to the read-modify-write operations; in
> particular, it does not apply to the atomic_set() primitive.
>
> Replace the barrier with an smp_mb().
>
> Fixes: dac56212e8127 ("bio: skip atomic inc/dec of ->bi_cnt for most use cases")
> Cc: [email protected]
> Reported-by: "Paul E. McKenney" <[email protected]>
> Reported-by: Peter Zijlstra <[email protected]>
> Signed-off-by: Andrea Parri <[email protected]>
> Cc: Jens Axboe <[email protected]>
> Cc: [email protected]
Jens: any suggestions to move this patch forward?
Thanx,
Andrea
> ---
> include/linux/bio.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/bio.h b/include/linux/bio.h
> index e584673c18814..5becbafb84e8a 100644
> --- a/include/linux/bio.h
> +++ b/include/linux/bio.h
> @@ -224,7 +224,7 @@ static inline void bio_cnt_set(struct bio *bio, unsigned int count)
> {
> if (count != 1) {
> bio->bi_flags |= (1 << BIO_REFFED);
> - smp_mb__before_atomic();
> + smp_mb();
> }
> atomic_set(&bio->__bi_cnt, count);
> }
> --
> 2.7.4
>
On Mon, Apr 29, 2019 at 10:14:59PM +0200, Andrea Parri wrote:
> This barrier only applies to the read-modify-write operations; in
> particular, it does not apply to the atomic_set() primitive.
>
> Replace the barrier with an smp_mb().
>
> Fixes: 6c0ca7ae292ad ("sbitmap: fix wakeup hang after sbq resize")
> Cc: [email protected]
> Reported-by: "Paul E. McKenney" <[email protected]>
> Reported-by: Peter Zijlstra <[email protected]>
> Signed-off-by: Andrea Parri <[email protected]>
> Cc: Jens Axboe <[email protected]>
> Cc: Omar Sandoval <[email protected]>
> Cc: [email protected]
Jens, Omar: any suggestions to move this patch forward?
Thanx,
Andrea
> ---
> lib/sbitmap.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
> index 155fe38756ecf..4a7fc4915dfc6 100644
> --- a/lib/sbitmap.c
> +++ b/lib/sbitmap.c
> @@ -435,7 +435,7 @@ static void sbitmap_queue_update_wake_batch(struct sbitmap_queue *sbq,
> * to ensure that the batch size is updated before the wait
> * counts.
> */
> - smp_mb__before_atomic();
> + smp_mb();
> for (i = 0; i < SBQ_WAIT_QUEUES; i++)
> atomic_set(&sbq->ws[i].wait_cnt, 1);
> }
> --
> 2.7.4
>
On Tue, Apr 30, 2019 at 05:08:43PM +0800, Yan, Zheng wrote:
> On Tue, Apr 30, 2019 at 4:26 PM Peter Zijlstra <[email protected]> wrote:
> >
> > On Mon, Apr 29, 2019 at 10:15:00PM +0200, Andrea Parri wrote:
> > > This barrier only applies to the read-modify-write operations; in
> > > particular, it does not apply to the atomic64_set() primitive.
> > >
> > > Replace the barrier with an smp_mb().
> > >
> >
> > > @@ -541,7 +541,7 @@ static inline void __ceph_dir_set_complete(struct ceph_inode_info *ci,
> > > long long release_count,
> > > long long ordered_count)
> > > {
> > > - smp_mb__before_atomic();
> >
> > same
> > /*
> > * XXX: the comment that explains this barrier goes here.
> > */
> >
>
> It makes sure operations that set up the readdir cache (updates to the page
> cache and i_size) are strongly ordered with the following atomic64_set().
Thanks for the suggestion, Yan.
To be clear: would you like me to integrate your comment and resend?
any other suggestions?
Thanx,
Andrea
>
> > > + smp_mb();
> >
> > > atomic64_set(&ci->i_complete_seq[0], release_count);
> > > atomic64_set(&ci->i_complete_seq[1], ordered_count);
> > > }
> > > --
> > > 2.7.4
> > >
On Tue, Apr 30, 2019 at 01:16:57AM +0200, Andrea Parri wrote:
> Hi Mike,
>
> > >This barrier only applies to the read-modify-write operations; in
> > >particular, it does not apply to the atomic_read() primitive.
> > >
> > >Replace the barrier with an smp_mb().
> >
> > This is one of a couple of barrier issues that we are currently looking into.
> >
> > See:
> >
> > [PATCH for-next 6/9] IB/rdmavt: Add new completion inline
> >
> > We will take a look at this one as well.
>
> Thank you for the reference and for looking into this,
So, I'm planning to just drop this patch; or can I do something to help?
Please let me know.
Thanx,
Andrea
>
> Andrea
On Mon, Apr 29, 2019 at 10:14:59PM +0200, Andrea Parri wrote:
> This barrier only applies to the read-modify-write operations; in
> particular, it does not apply to the atomic_set() primitive.
>
> Replace the barrier with an smp_mb().
>
> Fixes: 6c0ca7ae292ad ("sbitmap: fix wakeup hang after sbq resize")
> Cc: [email protected]
> Reported-by: "Paul E. McKenney" <[email protected]>
> Reported-by: Peter Zijlstra <[email protected]>
> Signed-off-by: Andrea Parri <[email protected]>
> Cc: Jens Axboe <[email protected]>
> Cc: Omar Sandoval <[email protected]>
> Cc: [email protected]
> ---
> lib/sbitmap.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
> index 155fe38756ecf..4a7fc4915dfc6 100644
> --- a/lib/sbitmap.c
> +++ b/lib/sbitmap.c
> @@ -435,7 +435,7 @@ static void sbitmap_queue_update_wake_batch(struct sbitmap_queue *sbq,
> * to ensure that the batch size is updated before the wait
> * counts.
> */
> - smp_mb__before_atomic();
> + smp_mb();
> for (i = 0; i < SBQ_WAIT_QUEUES; i++)
> atomic_set(&sbq->ws[i].wait_cnt, 1);
> }
> --
> 2.7.4
>
sbitmap_queue_update_wake_batch() won't be called in the fast path, and
the fix is correct too, so:
Reviewed-by: Ming Lei <[email protected]>
thanks,
Ming
On Mon, Apr 29, 2019 at 10:14:58PM +0200, Andrea Parri wrote:
> This barrier only applies to the read-modify-write operations; in
> particular, it does not apply to the atomic_set() primitive.
>
> Replace the barrier with an smp_mb().
>
> Fixes: dac56212e8127 ("bio: skip atomic inc/dec of ->bi_cnt for most use cases")
> Cc: [email protected]
> Reported-by: "Paul E. McKenney" <[email protected]>
> Reported-by: Peter Zijlstra <[email protected]>
> Signed-off-by: Andrea Parri <[email protected]>
> Cc: Jens Axboe <[email protected]>
> Cc: [email protected]
> ---
> include/linux/bio.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/bio.h b/include/linux/bio.h
> index e584673c18814..5becbafb84e8a 100644
> --- a/include/linux/bio.h
> +++ b/include/linux/bio.h
> @@ -224,7 +224,7 @@ static inline void bio_cnt_set(struct bio *bio, unsigned int count)
> {
> if (count != 1) {
> bio->bi_flags |= (1 << BIO_REFFED);
> - smp_mb__before_atomic();
> + smp_mb();
> }
> atomic_set(&bio->__bi_cnt, count);
> }
> --
> 2.7.4
>
Looks fine,
Reviewed-by: Ming Lei <[email protected]>
Thanks,
Ming
Hi Ming,
On Fri, May 10, 2019 at 11:41:02AM +0800, Ming Lei wrote:
> On Mon, Apr 29, 2019 at 10:14:59PM +0200, Andrea Parri wrote:
> > This barrier only applies to the read-modify-write operations; in
> > particular, it does not apply to the atomic_set() primitive.
> >
> > Replace the barrier with an smp_mb().
> >
> > Fixes: 6c0ca7ae292ad ("sbitmap: fix wakeup hang after sbq resize")
> > Cc: [email protected]
> > Reported-by: "Paul E. McKenney" <[email protected]>
> > Reported-by: Peter Zijlstra <[email protected]>
> > Signed-off-by: Andrea Parri <[email protected]>
> > Cc: Jens Axboe <[email protected]>
> > Cc: Omar Sandoval <[email protected]>
> > Cc: [email protected]
> > ---
> > lib/sbitmap.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/lib/sbitmap.c b/lib/sbitmap.c
> > index 155fe38756ecf..4a7fc4915dfc6 100644
> > --- a/lib/sbitmap.c
> > +++ b/lib/sbitmap.c
> > @@ -435,7 +435,7 @@ static void sbitmap_queue_update_wake_batch(struct sbitmap_queue *sbq,
> > * to ensure that the batch size is updated before the wait
> > * counts.
> > */
> > - smp_mb__before_atomic();
> > + smp_mb();
> > for (i = 0; i < SBQ_WAIT_QUEUES; i++)
> > atomic_set(&sbq->ws[i].wait_cnt, 1);
> > }
> > --
> > 2.7.4
> >
>
> sbitmap_queue_update_wake_batch() won't be called in the fast path, and
> the fix is correct too, so:
>
> Reviewed-by: Ming Lei <[email protected]>
Thank you for the review(s),
Andrea
> thanks,
> Ming
On 5/10/19 4:55 AM, Andrea Parri wrote:
> On Tue, Apr 30, 2019 at 05:08:43PM +0800, Yan, Zheng wrote:
>> On Tue, Apr 30, 2019 at 4:26 PM Peter Zijlstra <[email protected]> wrote:
>>>
>>> On Mon, Apr 29, 2019 at 10:15:00PM +0200, Andrea Parri wrote:
>>>> This barrier only applies to the read-modify-write operations; in
>>>> particular, it does not apply to the atomic64_set() primitive.
>>>>
>>>> Replace the barrier with an smp_mb().
>>>>
>>>
>>>> @@ -541,7 +541,7 @@ static inline void __ceph_dir_set_complete(struct ceph_inode_info *ci,
>>>> long long release_count,
>>>> long long ordered_count)
>>>> {
>>>> - smp_mb__before_atomic();
>>>
>>> same
>>> /*
>>> * XXX: the comment that explains this barrier goes here.
>>> */
>>>
>>
>> It makes sure operations that set up the readdir cache (updates to the page
>> cache and i_size) are strongly ordered with the following atomic64_set().
>
> Thanks for the suggestion, Yan.
>
> To be clear: would you like me to integrate your comment and resend?
> any other suggestions?
>
Yes, please
Regards
Yan, Zheng
> Thanx,
> Andrea
>
>
>>
>>>> + smp_mb();
>>>
>>>> atomic64_set(&ci->i_complete_seq[0], release_count);
>>>> atomic64_set(&ci->i_complete_seq[1], ordered_count);
>>>> }
>>>> --
>>>> 2.7.4
>>>>
> >>> /*
> >>> * XXX: the comment that explains this barrier goes here.
> >>> */
> >>>
> >>
> >>It makes sure operations that set up the readdir cache (updates to the page
> >>cache and i_size) are strongly ordered with the following atomic64_set().
> >
> >Thanks for the suggestion, Yan.
> >
> >To be clear: would you like me to integrate your comment and resend?
> >any other suggestions?
> >
>
> Yes, please
Will do: I'll let the merge window close and send v2 on top of -rc1.
Thanks,
Andrea
On 5/9/2019 5:12 PM, Andrea Parri wrote:
> On Tue, Apr 30, 2019 at 01:16:57AM +0200, Andrea Parri wrote:
>> Hi Mike,
>>
>>>> This barrier only applies to the read-modify-write operations; in
>>>> particular, it does not apply to the atomic_read() primitive.
>>>>
>>>> Replace the barrier with an smp_mb().
>>>
>>> This is one of a couple of barrier issues that we are currently looking into.
>>>
>>> See:
>>>
>>> [PATCH for-next 6/9] IB/rdmavt: Add new completion inline
>>>
>>> We will take a look at this one as well.
>>
>> Thank you for the reference and for looking into this,
>
> So, I'm planning to just drop this patch; or can I do something to help?
>
> Please let me know.
Mike was looking into this, and I've got a handful of patches from him
to review. He's unavailable for a while but if it's not included in the
patches I've got we'll get something out shortly. So yes I think we can
hold off on this patch for now. Thanks.
-Denny
On Tue, May 14, 2019 at 08:32:52AM -0400, Dennis Dalessandro wrote:
> On 5/9/2019 5:12 PM, Andrea Parri wrote:
> >On Tue, Apr 30, 2019 at 01:16:57AM +0200, Andrea Parri wrote:
> >>Hi Mike,
> >>
> >>>>This barrier only applies to the read-modify-write operations; in
> >>>>particular, it does not apply to the atomic_read() primitive.
> >>>>
> >>>>Replace the barrier with an smp_mb().
> >>>
> >>>This is one of a couple of barrier issues that we are currently looking into.
> >>>
> >>>See:
> >>>
> >>>[PATCH for-next 6/9] IB/rdmavt: Add new completion inline
> >>>
> >>>We will take a look at this one as well.
> >>
> >>Thank you for the reference and for looking into this,
> >
> >So, I'm planning to just drop this patch; or can I do something to help?
> >
> >Please let me know.
>
> Mike was looking into this, and I've got a handful of patches from him to
> review. He's unavailable for a while but if it's not included in the patches
> I've got we'll get something out shortly. So yes I think we can hold off on
> this patch for now. Thanks.
Thank you for the confirmation, Dennis. I'll hold off on this one.
Andrea