It's desirable to be able to rely on the following property: All stores
preceding (in program order) a successful call to queue_work() will be
visible to the CPU which will execute the queued work by the time such
work executes, e.g.,
{ x is initially 0 }
CPU0 CPU1
WRITE_ONCE(x, 1); [ "work" is being executed ]
r0 = queue_work(wq, work); r1 = READ_ONCE(x);
Forbids: r0 == true && r1 == 0
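
For illustration (this sketch is not part of the patch; the struct and
function names below are made up for the example), a caller relying on this
guarantee can publish its data with plain stores before queueing the work:

    /*
     * Hypothetical example, not from the patch: my_req, my_work_fn and
     * my_submit are made-up names.
     */
    #include <linux/kernel.h>
    #include <linux/printk.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct my_req {
            struct work_struct work;
            int payload;                    /* written before queueing */
    };

    static void my_work_fn(struct work_struct *work)
    {
            struct my_req *req = container_of(work, struct my_req, work);

            /* Per the property above, payload == 42 is visible here. */
            pr_info("payload = %d\n", req->payload);
            kfree(req);
    }

    static bool my_submit(struct workqueue_struct *wq)
    {
            struct my_req *req = kzalloc(sizeof(*req), GFP_KERNEL);

            if (!req)
                    return false;

            INIT_WORK(&req->work, my_work_fn);
            req->payload = 42;              /* plain store preceding queue_work() */

            return queue_work(wq, &req->work);
    }

No explicit barriers are needed in this pattern; the guarantee above orders
the plain stores against the execution of the work.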
The current implementation of queue_work() provides this memory-ordering
property:
- In __queue_work(), the ->lock spinlock is acquired.
- On the other side, in worker_thread(), this same ->lock is held
when dequeueing work.
So the release-acquire ordering provided by ->lock makes things work out.
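
A simplified sketch of that lock pairing (an illustration only, not the
actual workqueue code; pool_lock and pending stand in for pool->lock and
the pool's worklist):

    #include <linux/list.h>
    #include <linux/spinlock.h>
    #include <linux/workqueue.h>

    static DEFINE_SPINLOCK(pool_lock);      /* stand-in for pool->lock */
    static LIST_HEAD(pending);              /* stand-in for the worklist */

    /* Enqueue side, roughly as in the __queue_work() step above: */
    static void enqueue_side(struct work_struct *work)
    {
            /* ...stores preceding queue_work() happen before this point... */
            spin_lock(&pool_lock);
            list_add_tail(&work->entry, &pending);
            spin_unlock(&pool_lock);        /* release: orders the prior stores */
    }

    /* Dequeue side, roughly as in the worker_thread() step above: */
    static struct work_struct *dequeue_side(void)
    {
            struct work_struct *work;

            spin_lock(&pool_lock);          /* acquire: pairs with the unlock above */
            work = list_first_entry_or_null(&pending, struct work_struct, entry);
            if (work)
                    list_del_init(&work->entry);
            spin_unlock(&pool_lock);

            return work;                    /* caller then invokes work->func(work) */
    }

The producer's stores are ordered before the unlock (a release), and the
consumer's acquisition of the same lock (an acquire) orders them before the
work is dequeued and run, which is the chain the litmus test above relies on.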
Add this property to the DocBook headers of {queue,schedule}_work().
Suggested-by: Paul E. McKenney <[email protected]>
Signed-off-by: Andrea Parri <[email protected]>
---
include/linux/workqueue.h | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 4261d1c6e87b1..4fef6c38b0536 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -487,6 +487,19 @@ extern void wq_worker_comm(char *buf, size_t size, struct task_struct *task);
*
* We queue the work to the CPU on which it was submitted, but if the CPU dies
* it can be processed by another CPU.
+ *
+ * Memory-ordering properties: If it returns %true, guarantees that all stores
+ * preceding the call to queue_work() in program order will be visible to
+ * the CPU which will execute @work by the time such work executes, e.g.,
+ *
+ * { x is initially 0 }
+ *
+ * CPU0 CPU1
+ *
+ * WRITE_ONCE(x, 1); [ @work is being executed ]
+ * r0 = queue_work(wq, work); r1 = READ_ONCE(x);
+ *
+ * Forbids: r0 == true && r1 == 0
*/
static inline bool queue_work(struct workqueue_struct *wq,
struct work_struct *work)
@@ -546,6 +559,9 @@ static inline bool schedule_work_on(int cpu, struct work_struct *work)
* This puts a job in the kernel-global workqueue if it was not already
* queued and leaves it in the same position on the kernel-global
* workqueue otherwise.
+ *
+ * Shares the same memory-ordering properties of queue_work(), c.f., the
+ * DocBook header of queue_work().
*/
static inline bool schedule_work(struct work_struct *work)
{
--
2.24.0
Hi,
On 1/18/20 1:58 PM, Andrea Parri wrote:
[...]
> + * Shares the same memory-ordering properties of queue_work(), c.f., the
nit: cf. the
> + * DocBook header of queue_work().
--
~Randy
On Sat, Jan 18, 2020 at 10:58:20PM +0100, Andrea Parri wrote:
[...]
> Signed-off-by: Andrea Parri <[email protected]>
Acked-by: Paul E. McKenney <[email protected]>
An alternative to Randy's suggestion of dropping the comma following
the "cf." is to just drop that whole phrase. I will let you and Randy
work that one out, though. ;-)
On Sun, Jan 19, 2020 at 06:02:35PM -0800, Paul E. McKenney wrote:
> [...]
>
> Acked-by: Paul E. McKenney <[email protected]>
Thanks!
>
> An alternative to Randy's suggestion of dropping the comma following
> the "cf." is to just drop that whole phrase. I will let you and Randy
> work that one out, though. ;-)
Either way works for me.
I'd like to give Tejun and Lai some more time to review this, and I'll send a
non-RFC with your Ack and the nit fixed later this week (unless I hear any
objections).
Thanks,
Andrea