2024-03-27 16:05:24

by Allen Pais

Subject: [PATCH 0/9] Convert Tasklets to BH Workqueues

This patch series changes how asynchronous execution in the bottom
half (BH) context is handled within the kernel. Traditionally,
tasklets have been the go-to mechanism for such deferred work. This
series converts existing tasklet users in a number of subsystems to
the newly added BH workqueues.

Background and Motivation:
Tasklets have long served as the kernel's lightweight mechanism for
scheduling bottom-half processing, providing a simple interface for
deferring work from interrupt context. They are, however, marked
deprecated, and there has been a growing push to remove them in
favor of more modern and flexible mechanisms.

Introduction of BH Workqueues:
BH workqueues are designed to behave similarly to regular workqueues
with the added benefit of execution in the BH context.
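
For reference, here is a minimal sketch of how a driver defines,
queues and tears down a BH work item. It is not taken from any of
the patches below; the foo_* names are invented, while the
workqueue calls (INIT_WORK, queue_work, from_work, cancel_work_sync)
and system_bh_wq are the ones used throughout this series.

#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct foo_dev {
        struct work_struct rx_work;     /* was: struct tasklet_struct */
};

/* Hypothetical helper standing in for the driver's real rx path. */
static void foo_process_rx(struct foo_dev *fd) { }

static void foo_rx_work(struct work_struct *t)
{
        struct foo_dev *fd = from_work(fd, t, rx_work);

        /* Runs in BH (softirq) context, like a tasklet handler. */
        foo_process_rx(fd);
}

static irqreturn_t foo_irq(int irq, void *data)
{
        struct foo_dev *fd = data;

        /* Defer the heavy lifting out of hard-IRQ context. */
        queue_work(system_bh_wq, &fd->rx_work);
        return IRQ_HANDLED;
}

static void foo_init(struct foo_dev *fd)
{
        INIT_WORK(&fd->rx_work, foo_rx_work);   /* was: tasklet_setup() */
}

static void foo_exit(struct foo_dev *fd)
{
        cancel_work_sync(&fd->rx_work);         /* was: tasklet_kill() */
}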

Conversion Details:
Each tasklet user in the subsystems touched by this series is
replaced with a BH workqueue implementation: the tasklet_struct
becomes a work_struct and the tasklet API calls are replaced with
their workqueue equivalents (see the mapping below).
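
As a rough guide, the replacements used throughout this series map
the tasklet API onto the BH workqueue API as follows (see the hunks
in the individual patches for the exact call sites):

        struct tasklet_struct        ->  struct work_struct
        tasklet_setup(&t, fn)        ->  INIT_WORK(&w, fn)
        tasklet_schedule(&t)         ->  queue_work(system_bh_wq, &w)
        tasklet_hi_schedule(&t)      ->  queue_work(system_bh_highpri_wq, &w)
        tasklet_disable(&t)          ->  disable_work_sync(&w)
        tasklet_enable(&t)           ->  enable_and_queue_work(system_bh_wq, &w)
        tasklet_kill(&t)             ->  cancel_work_sync(&w)
        from_tasklet(var, t, field)  ->  from_work(var, t, field)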

This patch series is a first step toward broader adoption of BH workqueues
across the kernel, and soon other subsystems using tasklets will undergo
a similar transition. The groundwork laid here could serve as a
blueprint for such future conversions.

Testing Request:
In addition to a thorough review of these changes,
I kindly request that reviewers perform both functional and
performance testing of this patch series, in particular benchmarks
that measure interrupt handling efficiency, latency, and throughput.

I welcome your feedback, suggestions, and any further discussion on this
patch series.


Additional Info:
Based on the work done by Tejun Heo <[email protected]>
Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10

Allen Pais (9):
hyperv: Convert from tasklet to BH workqueue
dma: Convert from tasklet to BH workqueue
IB: Convert from tasklet to BH workqueue
USB: Convert from tasklet to BH workqueue
mailbox: Convert from tasklet to BH workqueue
ipmi: Convert from tasklet to BH workqueue
s390: Convert from tasklet to BH workqueue
drivers/media/*: Convert from tasklet to BH workqueue
mmc: Convert from tasklet to BH workqueue

drivers/char/ipmi/ipmi_msghandler.c | 30 ++++----
drivers/dma/altera-msgdma.c | 15 ++--
drivers/dma/apple-admac.c | 15 ++--
drivers/dma/at_hdmac.c | 2 +-
drivers/dma/at_xdmac.c | 15 ++--
drivers/dma/bcm2835-dma.c | 2 +-
drivers/dma/dma-axi-dmac.c | 2 +-
drivers/dma/dma-jz4780.c | 2 +-
.../dma/dw-axi-dmac/dw-axi-dmac-platform.c | 2 +-
drivers/dma/dw-edma/dw-edma-core.c | 2 +-
drivers/dma/dw/core.c | 13 ++--
drivers/dma/dw/regs.h | 3 +-
drivers/dma/ep93xx_dma.c | 15 ++--
drivers/dma/fsl-edma-common.c | 2 +-
drivers/dma/fsl-qdma.c | 2 +-
drivers/dma/fsl_raid.c | 11 +--
drivers/dma/fsl_raid.h | 2 +-
drivers/dma/fsldma.c | 15 ++--
drivers/dma/fsldma.h | 3 +-
drivers/dma/hisi_dma.c | 2 +-
drivers/dma/hsu/hsu.c | 2 +-
drivers/dma/idma64.c | 4 +-
drivers/dma/img-mdc-dma.c | 2 +-
drivers/dma/imx-dma.c | 27 +++----
drivers/dma/imx-sdma.c | 6 +-
drivers/dma/ioat/dma.c | 17 +++--
drivers/dma/ioat/dma.h | 5 +-
drivers/dma/ioat/init.c | 2 +-
drivers/dma/k3dma.c | 19 ++---
drivers/dma/mediatek/mtk-cqdma.c | 35 ++++-----
drivers/dma/mediatek/mtk-hsdma.c | 2 +-
drivers/dma/mediatek/mtk-uart-apdma.c | 4 +-
drivers/dma/mmp_pdma.c | 13 ++--
drivers/dma/mmp_tdma.c | 11 +--
drivers/dma/mpc512x_dma.c | 17 +++--
drivers/dma/mv_xor.c | 13 ++--
drivers/dma/mv_xor.h | 5 +-
drivers/dma/mv_xor_v2.c | 23 +++---
drivers/dma/mxs-dma.c | 13 ++--
drivers/dma/nbpfaxi.c | 15 ++--
drivers/dma/owl-dma.c | 2 +-
drivers/dma/pch_dma.c | 17 +++--
drivers/dma/pl330.c | 31 ++++----
drivers/dma/plx_dma.c | 13 ++--
drivers/dma/ppc4xx/adma.c | 17 +++--
drivers/dma/ppc4xx/adma.h | 5 +-
drivers/dma/pxa_dma.c | 2 +-
drivers/dma/qcom/bam_dma.c | 35 ++++-----
drivers/dma/qcom/gpi.c | 18 ++---
drivers/dma/qcom/hidma.c | 11 +--
drivers/dma/qcom/hidma.h | 5 +-
drivers/dma/qcom/hidma_ll.c | 11 +--
drivers/dma/qcom/qcom_adm.c | 2 +-
drivers/dma/sa11x0-dma.c | 27 +++----
drivers/dma/sf-pdma/sf-pdma.c | 23 +++---
drivers/dma/sf-pdma/sf-pdma.h | 5 +-
drivers/dma/sprd-dma.c | 2 +-
drivers/dma/st_fdma.c | 2 +-
drivers/dma/ste_dma40.c | 17 +++--
drivers/dma/sun6i-dma.c | 33 ++++----
drivers/dma/tegra186-gpc-dma.c | 2 +-
drivers/dma/tegra20-apb-dma.c | 19 ++---
drivers/dma/tegra210-adma.c | 2 +-
drivers/dma/ti/edma.c | 2 +-
drivers/dma/ti/k3-udma.c | 11 +--
drivers/dma/ti/omap-dma.c | 2 +-
drivers/dma/timb_dma.c | 23 +++---
drivers/dma/txx9dmac.c | 29 +++----
drivers/dma/txx9dmac.h | 5 +-
drivers/dma/virt-dma.c | 9 ++-
drivers/dma/virt-dma.h | 9 ++-
drivers/dma/xgene-dma.c | 21 +++---
drivers/dma/xilinx/xilinx_dma.c | 23 +++---
drivers/dma/xilinx/xilinx_dpdma.c | 21 +++---
drivers/dma/xilinx/zynqmp_dma.c | 21 +++---
drivers/hv/channel.c | 8 +-
drivers/hv/channel_mgmt.c | 5 +-
drivers/hv/connection.c | 9 ++-
drivers/hv/hv.c | 3 +-
drivers/hv/hv_balloon.c | 4 +-
drivers/hv/hv_fcopy.c | 8 +-
drivers/hv/hv_kvp.c | 8 +-
drivers/hv/hv_snapshot.c | 8 +-
drivers/hv/hyperv_vmbus.h | 9 ++-
drivers/hv/vmbus_drv.c | 19 ++---
drivers/infiniband/hw/bnxt_re/bnxt_re.h | 3 +-
drivers/infiniband/hw/bnxt_re/qplib_fp.c | 21 +++---
drivers/infiniband/hw/bnxt_re/qplib_fp.h | 2 +-
drivers/infiniband/hw/bnxt_re/qplib_rcfw.c | 25 ++++---
drivers/infiniband/hw/bnxt_re/qplib_rcfw.h | 2 +-
drivers/infiniband/hw/erdma/erdma.h | 3 +-
drivers/infiniband/hw/erdma/erdma_eq.c | 11 +--
drivers/infiniband/hw/hfi1/rc.c | 2 +-
drivers/infiniband/hw/hfi1/sdma.c | 37 ++++-----
drivers/infiniband/hw/hfi1/sdma.h | 9 ++-
drivers/infiniband/hw/hfi1/tid_rdma.c | 6 +-
drivers/infiniband/hw/irdma/ctrl.c | 2 +-
drivers/infiniband/hw/irdma/hw.c | 24 +++---
drivers/infiniband/hw/irdma/main.h | 5 +-
drivers/infiniband/hw/qib/qib.h | 7 +-
drivers/infiniband/hw/qib/qib_iba7322.c | 9 ++-
drivers/infiniband/hw/qib/qib_rc.c | 16 ++--
drivers/infiniband/hw/qib/qib_ruc.c | 4 +-
drivers/infiniband/hw/qib/qib_sdma.c | 11 +--
drivers/infiniband/sw/rdmavt/qp.c | 2 +-
drivers/mailbox/bcm-pdc-mailbox.c | 21 +++---
drivers/mailbox/imx-mailbox.c | 16 ++--
drivers/media/pci/bt8xx/bt878.c | 8 +-
drivers/media/pci/bt8xx/bt878.h | 3 +-
drivers/media/pci/bt8xx/dvb-bt8xx.c | 9 ++-
drivers/media/pci/ddbridge/ddbridge.h | 3 +-
drivers/media/pci/mantis/hopper_cards.c | 2 +-
drivers/media/pci/mantis/mantis_cards.c | 2 +-
drivers/media/pci/mantis/mantis_common.h | 3 +-
drivers/media/pci/mantis/mantis_dma.c | 5 +-
drivers/media/pci/mantis/mantis_dma.h | 2 +-
drivers/media/pci/mantis/mantis_dvb.c | 12 +--
drivers/media/pci/ngene/ngene-core.c | 23 +++---
drivers/media/pci/ngene/ngene.h | 5 +-
drivers/media/pci/smipcie/smipcie-main.c | 18 ++---
drivers/media/pci/smipcie/smipcie.h | 3 +-
drivers/media/pci/ttpci/budget-av.c | 3 +-
drivers/media/pci/ttpci/budget-ci.c | 27 +++----
drivers/media/pci/ttpci/budget-core.c | 10 +--
drivers/media/pci/ttpci/budget.h | 5 +-
drivers/media/pci/tw5864/tw5864-core.c | 2 +-
drivers/media/pci/tw5864/tw5864-video.c | 13 ++--
drivers/media/pci/tw5864/tw5864.h | 7 +-
drivers/media/platform/intel/pxa_camera.c | 15 ++--
drivers/media/platform/marvell/mcam-core.c | 11 +--
drivers/media/platform/marvell/mcam-core.h | 3 +-
.../st/sti/c8sectpfe/c8sectpfe-core.c | 15 ++--
.../st/sti/c8sectpfe/c8sectpfe-core.h | 2 +-
drivers/media/radio/wl128x/fmdrv.h | 7 +-
drivers/media/radio/wl128x/fmdrv_common.c | 41 +++++-----
drivers/media/rc/mceusb.c | 2 +-
drivers/media/usb/ttusb-dec/ttusb_dec.c | 21 +++---
drivers/mmc/host/atmel-mci.c | 35 ++++-----
drivers/mmc/host/au1xmmc.c | 37 ++++-----
drivers/mmc/host/cb710-mmc.c | 15 ++--
drivers/mmc/host/cb710-mmc.h | 3 +-
drivers/mmc/host/dw_mmc.c | 25 ++++---
drivers/mmc/host/dw_mmc.h | 9 ++-
drivers/mmc/host/omap.c | 17 +++--
drivers/mmc/host/renesas_sdhi.h | 3 +-
drivers/mmc/host/renesas_sdhi_internal_dmac.c | 24 +++---
drivers/mmc/host/renesas_sdhi_sys_dmac.c | 9 +--
drivers/mmc/host/sdhci-bcm-kona.c | 2 +-
drivers/mmc/host/tifm_sd.c | 15 ++--
drivers/mmc/host/tmio_mmc.h | 3 +-
drivers/mmc/host/tmio_mmc_core.c | 4 +-
drivers/mmc/host/uniphier-sd.c | 13 ++--
drivers/mmc/host/via-sdmmc.c | 25 ++++---
drivers/mmc/host/wbsd.c | 75 ++++++++++---------
drivers/mmc/host/wbsd.h | 10 +--
drivers/s390/block/dasd.c | 42 +++++------
drivers/s390/block/dasd_int.h | 10 +--
drivers/s390/char/con3270.c | 27 ++++---
drivers/s390/crypto/ap_bus.c | 24 +++---
drivers/s390/crypto/ap_bus.h | 2 +-
drivers/s390/crypto/zcrypt_msgtype50.c | 2 +-
drivers/s390/crypto/zcrypt_msgtype6.c | 4 +-
drivers/s390/net/ctcm_fsms.c | 4 +-
drivers/s390/net/ctcm_main.c | 15 ++--
drivers/s390/net/ctcm_main.h | 5 +-
drivers/s390/net/ctcm_mpc.c | 12 +--
drivers/s390/net/ctcm_mpc.h | 7 +-
drivers/s390/net/lcs.c | 26 +++----
drivers/s390/net/lcs.h | 2 +-
drivers/s390/net/qeth_core_main.c | 2 +-
drivers/s390/scsi/zfcp_qdio.c | 45 +++++------
drivers/s390/scsi/zfcp_qdio.h | 9 ++-
drivers/usb/atm/usbatm.c | 55 +++++++-------
drivers/usb/atm/usbatm.h | 3 +-
drivers/usb/core/hcd.c | 22 +++---
drivers/usb/gadget/udc/fsl_qe_udc.c | 21 +++---
drivers/usb/gadget/udc/fsl_qe_udc.h | 4 +-
drivers/usb/host/ehci-sched.c | 2 +-
drivers/usb/host/fhci-hcd.c | 3 +-
drivers/usb/host/fhci-sched.c | 10 +--
drivers/usb/host/fhci.h | 5 +-
drivers/usb/host/xhci-dbgcap.h | 3 +-
drivers/usb/host/xhci-dbgtty.c | 15 ++--
include/linux/hyperv.h | 2 +-
include/linux/usb/cdc_ncm.h | 2 +-
include/linux/usb/usbnet.h | 2 +-
186 files changed, 1135 insertions(+), 1044 deletions(-)

--
2.17.1



2024-03-27 16:05:49

by Allen Pais

Subject: [PATCH 1/9] hyperv: Convert from tasklet to BH workqueue

The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added. A BH workqueue
behaves similarly to regular workqueues except that the queued work items
are executed in the BH context.

This patch converts drivers/hv/* from tasklet to BH workqueue.

Based on the work done by Tejun Heo <[email protected]>
Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10

Signed-off-by: Allen Pais <[email protected]>
---
drivers/hv/channel.c | 8 ++++----
drivers/hv/channel_mgmt.c | 5 ++---
drivers/hv/connection.c | 9 +++++----
drivers/hv/hv.c | 3 +--
drivers/hv/hv_balloon.c | 4 ++--
drivers/hv/hv_fcopy.c | 8 ++++----
drivers/hv/hv_kvp.c | 8 ++++----
drivers/hv/hv_snapshot.c | 8 ++++----
drivers/hv/hyperv_vmbus.h | 9 +++++----
drivers/hv/vmbus_drv.c | 19 ++++++++++---------
include/linux/hyperv.h | 2 +-
11 files changed, 42 insertions(+), 41 deletions(-)

diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index adbf674355b2..876d78eb4dce 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -859,7 +859,7 @@ void vmbus_reset_channel_cb(struct vmbus_channel *channel)
unsigned long flags;

/*
- * vmbus_on_event(), running in the per-channel tasklet, can race
+ * vmbus_on_event(), running in the per-channel work item, can race
* with vmbus_close_internal() in the case of SMP guest, e.g., when
* the former is accessing channel->inbound.ring_buffer, the latter
* could be freeing the ring_buffer pages, so here we must stop it
@@ -871,7 +871,7 @@ void vmbus_reset_channel_cb(struct vmbus_channel *channel)
* and that the channel ring buffer is no longer being accessed, cf.
* the calls to napi_disable() in netvsc_device_remove().
*/
- tasklet_disable(&channel->callback_event);
+ disable_work_sync(&channel->callback_event);

/* See the inline comments in vmbus_chan_sched(). */
spin_lock_irqsave(&channel->sched_lock, flags);
@@ -880,8 +880,8 @@ void vmbus_reset_channel_cb(struct vmbus_channel *channel)

channel->sc_creation_callback = NULL;

- /* Re-enable tasklet for use on re-open */
- tasklet_enable(&channel->callback_event);
+ /* Re-enable work for use on re-open */
+ enable_and_queue_work(system_bh_wq, &channel->callback_event);
}

static int vmbus_close_internal(struct vmbus_channel *channel)
diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 2f4d09ce027a..58397071a0de 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -353,8 +353,7 @@ static struct vmbus_channel *alloc_channel(void)

INIT_LIST_HEAD(&channel->sc_list);

- tasklet_init(&channel->callback_event,
- vmbus_on_event, (unsigned long)channel);
+ INIT_WORK(&channel->callback_event, vmbus_on_event);

hv_ringbuffer_pre_init(channel);

@@ -366,7 +365,7 @@ static struct vmbus_channel *alloc_channel(void)
*/
static void free_channel(struct vmbus_channel *channel)
{
- tasklet_kill(&channel->callback_event);
+ cancel_work_sync(&channel->callback_event);
vmbus_remove_channel_attr_group(channel);

kobject_put(&channel->kobj);
diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
index 3cabeeabb1ca..f2a3394a8303 100644
--- a/drivers/hv/connection.c
+++ b/drivers/hv/connection.c
@@ -372,12 +372,13 @@ struct vmbus_channel *relid2channel(u32 relid)
* 3. Once we return, enable signaling from the host. Once this
* state is set we check to see if additional packets are
* available to read. In this case we repeat the process.
- * If this tasklet has been running for a long time
+ * If this work has been running for a long time
* then reschedule ourselves.
*/
-void vmbus_on_event(unsigned long data)
+void vmbus_on_event(struct work_struct *t)
{
- struct vmbus_channel *channel = (void *) data;
+ struct vmbus_channel *channel = from_work(channel, t,
+ callback_event);
void (*callback_fn)(void *context);

trace_vmbus_on_event(channel);
@@ -401,7 +402,7 @@ void vmbus_on_event(unsigned long data)
return;

hv_begin_read(&channel->inbound);
- tasklet_schedule(&channel->callback_event);
+ queue_work(system_bh_wq, &channel->callback_event);
}

/*
diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
index a8ad728354cb..2af92f08f9ce 100644
--- a/drivers/hv/hv.c
+++ b/drivers/hv/hv.c
@@ -119,8 +119,7 @@ int hv_synic_alloc(void)
for_each_present_cpu(cpu) {
hv_cpu = per_cpu_ptr(hv_context.cpu_context, cpu);

- tasklet_init(&hv_cpu->msg_dpc,
- vmbus_on_msg_dpc, (unsigned long) hv_cpu);
+ INIT_WORK(&hv_cpu->msg_dpc, vmbus_on_msg_dpc);

if (ms_hyperv.paravisor_present && hv_isolation_type_tdx()) {
hv_cpu->post_msg_page = (void *)get_zeroed_page(GFP_ATOMIC);
diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index e000fa3b9f97..c7efa2ff4cdf 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -2083,7 +2083,7 @@ static int balloon_suspend(struct hv_device *hv_dev)
{
struct hv_dynmem_device *dm = hv_get_drvdata(hv_dev);

- tasklet_disable(&hv_dev->channel->callback_event);
+ disable_work_sync(&hv_dev->channel->callback_event);

cancel_work_sync(&dm->balloon_wrk.wrk);
cancel_work_sync(&dm->ha_wrk.wrk);
@@ -2094,7 +2094,7 @@ static int balloon_suspend(struct hv_device *hv_dev)
vmbus_close(hv_dev->channel);
}

- tasklet_enable(&hv_dev->channel->callback_event);
+ enable_and_queue_work(system_bh_wq, &hv_dev->channel->callback_event);

return 0;

diff --git a/drivers/hv/hv_fcopy.c b/drivers/hv/hv_fcopy.c
index 922d83eb7ddf..fd6799293c17 100644
--- a/drivers/hv/hv_fcopy.c
+++ b/drivers/hv/hv_fcopy.c
@@ -71,7 +71,7 @@ static void fcopy_poll_wrapper(void *channel)
{
/* Transaction is finished, reset the state here to avoid races. */
fcopy_transaction.state = HVUTIL_READY;
- tasklet_schedule(&((struct vmbus_channel *)channel)->callback_event);
+ queue_work(system_bh_wq, &((struct vmbus_channel *)channel)->callback_event);
}

static void fcopy_timeout_func(struct work_struct *dummy)
@@ -391,7 +391,7 @@ int hv_fcopy_pre_suspend(void)
if (!fcopy_msg)
return -ENOMEM;

- tasklet_disable(&channel->callback_event);
+ disable_work_sync(&channel->callback_event);

fcopy_msg->operation = CANCEL_FCOPY;

@@ -404,7 +404,7 @@ int hv_fcopy_pre_suspend(void)

fcopy_transaction.state = HVUTIL_READY;

- /* tasklet_enable() will be called in hv_fcopy_pre_resume(). */
+ /* enable_and_queue_work() will be called in hv_fcopy_pre_resume(). */
return 0;
}

@@ -412,7 +412,7 @@ int hv_fcopy_pre_resume(void)
{
struct vmbus_channel *channel = fcopy_transaction.recv_channel;

- tasklet_enable(&channel->callback_event);
+ enable_and_queue_work(system_bh_wq, &channel->callback_event);

return 0;
}
diff --git a/drivers/hv/hv_kvp.c b/drivers/hv/hv_kvp.c
index d35b60c06114..85b8fb4a3d2e 100644
--- a/drivers/hv/hv_kvp.c
+++ b/drivers/hv/hv_kvp.c
@@ -113,7 +113,7 @@ static void kvp_poll_wrapper(void *channel)
{
/* Transaction is finished, reset the state here to avoid races. */
kvp_transaction.state = HVUTIL_READY;
- tasklet_schedule(&((struct vmbus_channel *)channel)->callback_event);
+ queue_work(system_bh_wq, &((struct vmbus_channel *)channel)->callback_event);
}

static void kvp_register_done(void)
@@ -160,7 +160,7 @@ static void kvp_timeout_func(struct work_struct *dummy)

static void kvp_host_handshake_func(struct work_struct *dummy)
{
- tasklet_schedule(&kvp_transaction.recv_channel->callback_event);
+ queue_work(system_bh_wq, &kvp_transaction.recv_channel->callback_event);
}

static int kvp_handle_handshake(struct hv_kvp_msg *msg)
@@ -786,7 +786,7 @@ int hv_kvp_pre_suspend(void)
{
struct vmbus_channel *channel = kvp_transaction.recv_channel;

- tasklet_disable(&channel->callback_event);
+ disable_work_sync(&channel->callback_event);

/*
* If there is a pending transtion, it's unnecessary to tell the host
@@ -809,7 +809,7 @@ int hv_kvp_pre_resume(void)
{
struct vmbus_channel *channel = kvp_transaction.recv_channel;

- tasklet_enable(&channel->callback_event);
+ enable_and_queue_work(system_bh_wq, &channel->callback_event);

return 0;
}
diff --git a/drivers/hv/hv_snapshot.c b/drivers/hv/hv_snapshot.c
index 0d2184be1691..46c2263d2591 100644
--- a/drivers/hv/hv_snapshot.c
+++ b/drivers/hv/hv_snapshot.c
@@ -83,7 +83,7 @@ static void vss_poll_wrapper(void *channel)
{
/* Transaction is finished, reset the state here to avoid races. */
vss_transaction.state = HVUTIL_READY;
- tasklet_schedule(&((struct vmbus_channel *)channel)->callback_event);
+ queue_work(system_bh_wq, &((struct vmbus_channel *)channel)->callback_event);
}

/*
@@ -421,7 +421,7 @@ int hv_vss_pre_suspend(void)
if (!vss_msg)
return -ENOMEM;

- tasklet_disable(&channel->callback_event);
+ disable_work_sync(&channel->callback_event);

vss_msg->vss_hdr.operation = VSS_OP_THAW;

@@ -435,7 +435,7 @@ int hv_vss_pre_suspend(void)

vss_transaction.state = HVUTIL_READY;

- /* tasklet_enable() will be called in hv_vss_pre_resume(). */
+ /* enable_and_queue_work() will be called in hv_vss_pre_resume(). */
return 0;
}

@@ -443,7 +443,7 @@ int hv_vss_pre_resume(void)
{
struct vmbus_channel *channel = vss_transaction.recv_channel;

- tasklet_enable(&channel->callback_event);
+ enable_and_queue_work(system_bh_wq, &channel->callback_event);

return 0;
}
diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
index f6b1e710f805..95ca570ac7af 100644
--- a/drivers/hv/hyperv_vmbus.h
+++ b/drivers/hv/hyperv_vmbus.h
@@ -19,6 +19,7 @@
#include <linux/atomic.h>
#include <linux/hyperv.h>
#include <linux/interrupt.h>
+#include <linux/workqueue.h>

#include "hv_trace.h"

@@ -136,10 +137,10 @@ struct hv_per_cpu_context {

/*
* Starting with win8, we can take channel interrupts on any CPU;
- * we will manage the tasklet that handles events messages on a per CPU
+ * we will manage the work that handles events messages on a per CPU
* basis.
*/
- struct tasklet_struct msg_dpc;
+ struct work_struct msg_dpc;
};

struct hv_context {
@@ -366,8 +367,8 @@ void vmbus_disconnect(void);

int vmbus_post_msg(void *buffer, size_t buflen, bool can_sleep);

-void vmbus_on_event(unsigned long data);
-void vmbus_on_msg_dpc(unsigned long data);
+void vmbus_on_event(struct work_struct *t);
+void vmbus_on_msg_dpc(struct work_struct *t);

int hv_kvp_init(struct hv_util_service *srv);
void hv_kvp_deinit(void);
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 4cb17603a828..d9755054a881 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -1025,9 +1025,9 @@ static void vmbus_onmessage_work(struct work_struct *work)
kfree(ctx);
}

-void vmbus_on_msg_dpc(unsigned long data)
+void vmbus_on_msg_dpc(struct work_struct *t)
{
- struct hv_per_cpu_context *hv_cpu = (void *)data;
+ struct hv_per_cpu_context *hv_cpu = from_work(hv_cpu, t, msg_dpc);
void *page_addr = hv_cpu->synic_message_page;
struct hv_message msg_copy, *msg = (struct hv_message *)page_addr +
VMBUS_MESSAGE_SINT;
@@ -1131,7 +1131,7 @@ void vmbus_on_msg_dpc(unsigned long data)
* before sending the rescind message of the same
* channel. These messages are sent to the guest's
* connect CPU; the guest then starts processing them
- * in the tasklet handler on this CPU:
+ * in the work handler on this CPU:
*
* VMBUS_CONNECT_CPU
*
@@ -1276,7 +1276,7 @@ static void vmbus_chan_sched(struct hv_per_cpu_context *hv_cpu)
hv_begin_read(&channel->inbound);
fallthrough;
case HV_CALL_DIRECT:
- tasklet_schedule(&channel->callback_event);
+ queue_work(system_bh_wq, &channel->callback_event);
}

sched_unlock:
@@ -1304,7 +1304,7 @@ static void vmbus_isr(void)
hv_stimer0_isr();
vmbus_signal_eom(msg, HVMSG_TIMER_EXPIRED);
} else
- tasklet_schedule(&hv_cpu->msg_dpc);
+ queue_work(system_bh_wq, &hv_cpu->msg_dpc);
}

add_interrupt_randomness(vmbus_interrupt);
@@ -2371,10 +2371,11 @@ static int vmbus_bus_suspend(struct device *dev)
hv_context.cpu_context, VMBUS_CONNECT_CPU);
struct vmbus_channel *channel, *sc;

- tasklet_disable(&hv_cpu->msg_dpc);
+ disable_work_sync(&hv_cpu->msg_dpc);
vmbus_connection.ignore_any_offer_msg = true;
- /* The tasklet_enable() takes care of providing a memory barrier */
- tasklet_enable(&hv_cpu->msg_dpc);
+ /* The enable_and_queue_work() takes care of
+ * providing a memory barrier */
+ enable_and_queue_work(system_bh_wq, &hv_cpu->msg_dpc);

/* Drain all the workqueues as we are in suspend */
drain_workqueue(vmbus_connection.rescind_work_queue);
@@ -2692,7 +2693,7 @@ static void __exit vmbus_exit(void)
struct hv_per_cpu_context *hv_cpu
= per_cpu_ptr(hv_context.cpu_context, cpu);

- tasklet_kill(&hv_cpu->msg_dpc);
+ cancel_work_sync(&hv_cpu->msg_dpc);
}
hv_debug_rm_all_dir();

diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 6ef0557b4bff..db3d85ea5ce6 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -882,7 +882,7 @@ struct vmbus_channel {
bool out_full_flag;

/* Channel callback's invoked in softirq context */
- struct tasklet_struct callback_event;
+ struct work_struct callback_event;
void (*onchannel_callback)(void *context);
void *channel_callback_context;

--
2.17.1


2024-03-27 16:07:06

by Allen Pais

Subject: [PATCH 5/9] mailbox: Convert from tasklet to BH workqueue

The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added. A BH workqueue
behaves similarly to regular workqueues except that the queued work items
are executed in the BH context.

This patch converts drivers/mailbox/* from tasklet to BH workqueue.

Based on the work done by Tejun Heo <[email protected]>
Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10

Signed-off-by: Allen Pais <[email protected]>
---
drivers/mailbox/bcm-pdc-mailbox.c | 21 +++++++++++----------
drivers/mailbox/imx-mailbox.c | 16 ++++++++--------
2 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/drivers/mailbox/bcm-pdc-mailbox.c b/drivers/mailbox/bcm-pdc-mailbox.c
index 1768d3d5aaa0..242e7504a628 100644
--- a/drivers/mailbox/bcm-pdc-mailbox.c
+++ b/drivers/mailbox/bcm-pdc-mailbox.c
@@ -43,6 +43,7 @@
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
#include <linux/dmapool.h>
+#include <linux/workqueue.h>

#define PDC_SUCCESS 0

@@ -293,8 +294,8 @@ struct pdc_state {

unsigned int pdc_irq;

- /* tasklet for deferred processing after DMA rx interrupt */
- struct tasklet_struct rx_tasklet;
+ /* work for deferred processing after DMA rx interrupt */
+ struct work_struct rx_work;

/* Number of bytes of receive status prior to each rx frame */
u32 rx_status_len;
@@ -952,18 +953,18 @@ static irqreturn_t pdc_irq_handler(int irq, void *data)
iowrite32(intstatus, pdcs->pdc_reg_vbase + PDC_INTSTATUS_OFFSET);

/* Wakeup IRQ thread */
- tasklet_schedule(&pdcs->rx_tasklet);
+ queue_work(system_bh_wq, &pdcs->rx_work);
return IRQ_HANDLED;
}

/**
- * pdc_tasklet_cb() - Tasklet callback that runs the deferred processing after
+ * pdc_work_cb() - Work callback that runs the deferred processing after
* a DMA receive interrupt. Reenables the receive interrupt.
* @t: Pointer to the Altera sSGDMA channel structure
*/
-static void pdc_tasklet_cb(struct tasklet_struct *t)
+static void pdc_work_cb(struct work_struct *t)
{
- struct pdc_state *pdcs = from_tasklet(pdcs, t, rx_tasklet);
+ struct pdc_state *pdcs = from_work(pdcs, t, rx_work);

pdc_receive(pdcs);

@@ -1577,8 +1578,8 @@ static int pdc_probe(struct platform_device *pdev)

pdc_hw_init(pdcs);

- /* Init tasklet for deferred DMA rx processing */
- tasklet_setup(&pdcs->rx_tasklet, pdc_tasklet_cb);
+ /* Init work for deferred DMA rx processing */
+ INIT_WORK(&pdcs->rx_work, pdc_work_cb);

err = pdc_interrupts_init(pdcs);
if (err)
@@ -1595,7 +1596,7 @@ static int pdc_probe(struct platform_device *pdev)
return PDC_SUCCESS;

cleanup_buf_pool:
- tasklet_kill(&pdcs->rx_tasklet);
+ cancel_work_sync(&pdcs->rx_work);
dma_pool_destroy(pdcs->rx_buf_pool);

cleanup_ring_pool:
@@ -1611,7 +1612,7 @@ static void pdc_remove(struct platform_device *pdev)

pdc_free_debugfs();

- tasklet_kill(&pdcs->rx_tasklet);
+ cancel_work_sync(&pdcs->rx_work);

pdc_hw_disable(pdcs);

diff --git a/drivers/mailbox/imx-mailbox.c b/drivers/mailbox/imx-mailbox.c
index 5c1d09cad761..933727f89431 100644
--- a/drivers/mailbox/imx-mailbox.c
+++ b/drivers/mailbox/imx-mailbox.c
@@ -21,6 +21,7 @@
#include <linux/pm_runtime.h>
#include <linux/suspend.h>
#include <linux/slab.h>
+#include <linux/workqueue.h>

#include "mailbox.h"

@@ -80,7 +81,7 @@ struct imx_mu_con_priv {
char irq_desc[IMX_MU_CHAN_NAME_SIZE];
enum imx_mu_chan_type type;
struct mbox_chan *chan;
- struct tasklet_struct txdb_tasklet;
+ struct work_struct txdb_work;
};

struct imx_mu_priv {
@@ -232,7 +233,7 @@ static int imx_mu_generic_tx(struct imx_mu_priv *priv,
break;
case IMX_MU_TYPE_TXDB:
imx_mu_xcr_rmw(priv, IMX_MU_GCR, IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx), 0);
- tasklet_schedule(&cp->txdb_tasklet);
+ queue_work(system_bh_wq, &cp->txdb_work);
break;
case IMX_MU_TYPE_TXDB_V2:
imx_mu_xcr_rmw(priv, IMX_MU_GCR, IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx), 0);
@@ -420,7 +421,7 @@ static int imx_mu_seco_tx(struct imx_mu_priv *priv, struct imx_mu_con_priv *cp,
}

/* Simulate hack for mbox framework */
- tasklet_schedule(&cp->txdb_tasklet);
+ queue_work(system_bh_wq, &cp->txdb_work);

break;
default:
@@ -484,9 +485,9 @@ static int imx_mu_seco_rxdb(struct imx_mu_priv *priv, struct imx_mu_con_priv *cp
return err;
}

-static void imx_mu_txdb_tasklet(unsigned long data)
+static void imx_mu_txdb_work(struct work_struct *t)
{
- struct imx_mu_con_priv *cp = (struct imx_mu_con_priv *)data;
+ struct imx_mu_con_priv *cp = from_work(cp, t, txdb_work);

mbox_chan_txdone(cp->chan, 0);
}
@@ -570,8 +571,7 @@ static int imx_mu_startup(struct mbox_chan *chan)

if (cp->type == IMX_MU_TYPE_TXDB) {
/* Tx doorbell don't have ACK support */
- tasklet_init(&cp->txdb_tasklet, imx_mu_txdb_tasklet,
- (unsigned long)cp);
+ INIT_WORK(&cp->txdb_work, imx_mu_txdb_work);
return 0;
}

@@ -615,7 +615,7 @@ static void imx_mu_shutdown(struct mbox_chan *chan)
}

if (cp->type == IMX_MU_TYPE_TXDB) {
- tasklet_kill(&cp->txdb_tasklet);
+ cancel_work_sync(&cp->txdb_work);
pm_runtime_put_sync(priv->dev);
return;
}
--
2.17.1


2024-03-27 16:07:51

by Allen Pais

Subject: [PATCH 3/9] IB: Convert from tasklet to BH workqueue

The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added. A BH workqueue
behaves similarly to regular workqueues except that the queued work items
are executed in the BH context.

This patch converts drivers/infiniband/* from tasklet to BH workqueue.

Based on the work done by Tejun Heo <[email protected]>
Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10

Signed-off-by: Allen Pais <[email protected]>
---
drivers/infiniband/hw/bnxt_re/bnxt_re.h | 3 +-
drivers/infiniband/hw/bnxt_re/qplib_fp.c | 21 ++++++------
drivers/infiniband/hw/bnxt_re/qplib_fp.h | 2 +-
drivers/infiniband/hw/bnxt_re/qplib_rcfw.c | 25 ++++++++-------
drivers/infiniband/hw/bnxt_re/qplib_rcfw.h | 2 +-
drivers/infiniband/hw/erdma/erdma.h | 3 +-
drivers/infiniband/hw/erdma/erdma_eq.c | 11 ++++---
drivers/infiniband/hw/hfi1/rc.c | 2 +-
drivers/infiniband/hw/hfi1/sdma.c | 37 +++++++++++-----------
drivers/infiniband/hw/hfi1/sdma.h | 9 +++---
drivers/infiniband/hw/hfi1/tid_rdma.c | 6 ++--
drivers/infiniband/hw/irdma/ctrl.c | 2 +-
drivers/infiniband/hw/irdma/hw.c | 24 +++++++-------
drivers/infiniband/hw/irdma/main.h | 5 +--
drivers/infiniband/hw/qib/qib.h | 7 ++--
drivers/infiniband/hw/qib/qib_iba7322.c | 9 +++---
drivers/infiniband/hw/qib/qib_rc.c | 16 +++++-----
drivers/infiniband/hw/qib/qib_ruc.c | 4 +--
drivers/infiniband/hw/qib/qib_sdma.c | 11 ++++---
drivers/infiniband/sw/rdmavt/qp.c | 2 +-
20 files changed, 106 insertions(+), 95 deletions(-)

diff --git a/drivers/infiniband/hw/bnxt_re/bnxt_re.h b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
index 9dca451ed522..f511c8415806 100644
--- a/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+++ b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
@@ -42,6 +42,7 @@
#include <rdma/uverbs_ioctl.h>
#include "hw_counters.h"
#include <linux/hashtable.h>
+#include <linux/workqueue.h>
#define ROCE_DRV_MODULE_NAME "bnxt_re"

#define BNXT_RE_DESC "Broadcom NetXtreme-C/E RoCE Driver"
@@ -162,7 +163,7 @@ struct bnxt_re_dev {
u8 cur_prio_map;

/* FP Notification Queue (CQ & SRQ) */
- struct tasklet_struct nq_task;
+ struct work_struct nq_work;

/* RCFW Channel */
struct bnxt_qplib_rcfw rcfw;
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
index 439d0c7c5d0c..052906982cdf 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
@@ -46,6 +46,7 @@
#include <linux/delay.h>
#include <linux/prefetch.h>
#include <linux/if_ether.h>
+#include <linux/workqueue.h>
#include <rdma/ib_mad.h>

#include "roce_hsi.h"
@@ -294,9 +295,9 @@ static void __wait_for_all_nqes(struct bnxt_qplib_cq *cq, u16 cnq_events)
}
}

-static void bnxt_qplib_service_nq(struct tasklet_struct *t)
+static void bnxt_qplib_service_nq(struct work_struct *t)
{
- struct bnxt_qplib_nq *nq = from_tasklet(nq, t, nq_tasklet);
+ struct bnxt_qplib_nq *nq = from_work(nq, t, nq_work);
struct bnxt_qplib_hwq *hwq = &nq->hwq;
struct bnxt_qplib_cq *cq;
int budget = nq->budget;
@@ -394,7 +395,7 @@ void bnxt_re_synchronize_nq(struct bnxt_qplib_nq *nq)
int budget = nq->budget;

nq->budget = nq->hwq.max_elements;
- bnxt_qplib_service_nq(&nq->nq_tasklet);
+ bnxt_qplib_service_nq(&nq->nq_work);
nq->budget = budget;
}

@@ -409,7 +410,7 @@ static irqreturn_t bnxt_qplib_nq_irq(int irq, void *dev_instance)
prefetch(bnxt_qplib_get_qe(hwq, sw_cons, NULL));

/* Fan out to CPU affinitized kthreads? */
- tasklet_schedule(&nq->nq_tasklet);
+ queue_work(system_bh_wq, &nq->nq_work);

return IRQ_HANDLED;
}
@@ -430,8 +431,8 @@ void bnxt_qplib_nq_stop_irq(struct bnxt_qplib_nq *nq, bool kill)
nq->name = NULL;

if (kill)
- tasklet_kill(&nq->nq_tasklet);
- tasklet_disable(&nq->nq_tasklet);
+ cancel_work_sync(&nq->nq_work);
+ disable_work_sync(&nq->nq_work);
}

void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
@@ -465,9 +466,9 @@ int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,

nq->msix_vec = msix_vector;
if (need_init)
- tasklet_setup(&nq->nq_tasklet, bnxt_qplib_service_nq);
+ INIT_WORK(&nq->nq_work, bnxt_qplib_service_nq);
else
- tasklet_enable(&nq->nq_tasklet);
+ enable_and_queue_work(system_bh_wq, &nq->nq_work);

nq->name = kasprintf(GFP_KERNEL, "bnxt_re-nq-%d@pci:%s",
nq_indx, pci_name(res->pdev));
@@ -477,7 +478,7 @@ int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,
if (rc) {
kfree(nq->name);
nq->name = NULL;
- tasklet_disable(&nq->nq_tasklet);
+ disable_work_sync(&nq->nq_work);
return rc;
}

@@ -541,7 +542,7 @@ int bnxt_qplib_enable_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq,
nq->cqn_handler = cqn_handler;
nq->srqn_handler = srqn_handler;

- /* Have a task to schedule CQ notifiers in post send case */
+ /* Have a work item to schedule CQ notifiers in post send case */
nq->cqn_wq = create_singlethread_workqueue("bnxt_qplib_nq");
if (!nq->cqn_wq)
return -ENOMEM;
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
index 7fd4506b3584..6ee3e501d136 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
@@ -494,7 +494,7 @@ struct bnxt_qplib_nq {
u16 ring_id;
int msix_vec;
cpumask_t mask;
- struct tasklet_struct nq_tasklet;
+ struct work_struct nq_work;
bool requested;
int budget;

diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
index 3ffaef0c2651..2fba712d88db 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
@@ -43,6 +43,7 @@
#include <linux/pci.h>
#include <linux/prefetch.h>
#include <linux/delay.h>
+#include <linux/workqueue.h>

#include "roce_hsi.h"
#include "qplib_res.h"
@@ -51,7 +52,7 @@
#include "qplib_fp.h"
#include "qplib_tlv.h"

-static void bnxt_qplib_service_creq(struct tasklet_struct *t);
+static void bnxt_qplib_service_creq(struct work_struct *t);

/**
* bnxt_qplib_map_rc - map return type based on opcode
@@ -165,7 +166,7 @@ static int __wait_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie)
if (!crsqe->is_in_used)
return 0;

- bnxt_qplib_service_creq(&rcfw->creq.creq_tasklet);
+ bnxt_qplib_service_creq(&rcfw->creq.creq_work);

if (!crsqe->is_in_used)
return 0;
@@ -206,7 +207,7 @@ static int __block_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie)

udelay(1);

- bnxt_qplib_service_creq(&rcfw->creq.creq_tasklet);
+ bnxt_qplib_service_creq(&rcfw->creq.creq_work);
if (!crsqe->is_in_used)
return 0;

@@ -403,7 +404,7 @@ static int __poll_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie)

usleep_range(1000, 1001);

- bnxt_qplib_service_creq(&rcfw->creq.creq_tasklet);
+ bnxt_qplib_service_creq(&rcfw->creq.creq_work);
if (!crsqe->is_in_used)
return 0;
if (jiffies_to_msecs(jiffies - issue_time) >
@@ -727,9 +728,9 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
}

/* SP - CREQ Completion handlers */
-static void bnxt_qplib_service_creq(struct tasklet_struct *t)
+static void bnxt_qplib_service_creq(struct work_struct *t)
{
- struct bnxt_qplib_rcfw *rcfw = from_tasklet(rcfw, t, creq.creq_tasklet);
+ struct bnxt_qplib_rcfw *rcfw = from_work(rcfw, t, creq.creq_work);
struct bnxt_qplib_creq_ctx *creq = &rcfw->creq;
u32 type, budget = CREQ_ENTRY_POLL_BUDGET;
struct bnxt_qplib_hwq *hwq = &creq->hwq;
@@ -800,7 +801,7 @@ static irqreturn_t bnxt_qplib_creq_irq(int irq, void *dev_instance)
sw_cons = HWQ_CMP(hwq->cons, hwq);
prefetch(bnxt_qplib_get_qe(hwq, sw_cons, NULL));

- tasklet_schedule(&creq->creq_tasklet);
+ queue_work(system_bh_wq, &creq->creq_work);

return IRQ_HANDLED;
}
@@ -1007,8 +1008,8 @@ void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw, bool kill)
creq->irq_name = NULL;
atomic_set(&rcfw->rcfw_intr_enabled, 0);
if (kill)
- tasklet_kill(&creq->creq_tasklet);
- tasklet_disable(&creq->creq_tasklet);
+ cancel_work_sync(&creq->creq_work);
+ disable_work_sync(&creq->creq_work);
}

void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw)
@@ -1045,9 +1046,9 @@ int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector,

creq->msix_vec = msix_vector;
if (need_init)
- tasklet_setup(&creq->creq_tasklet, bnxt_qplib_service_creq);
+ INIT_WORK(&creq->creq_work, bnxt_qplib_service_creq);
else
- tasklet_enable(&creq->creq_tasklet);
+ enable_and_queue_work(system_bh_wq, &creq->creq_work);

creq->irq_name = kasprintf(GFP_KERNEL, "bnxt_re-creq@pci:%s",
pci_name(res->pdev));
@@ -1058,7 +1059,7 @@ int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector,
if (rc) {
kfree(creq->irq_name);
creq->irq_name = NULL;
- tasklet_disable(&creq->creq_tasklet);
+ disable_work_sync(&creq->creq_work);
return rc;
}
creq->requested = true;
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
index 45996e60a0d0..8efa474fcf3f 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
@@ -207,7 +207,7 @@ struct bnxt_qplib_creq_ctx {
struct bnxt_qplib_hwq hwq;
struct bnxt_qplib_creq_db creq_db;
struct bnxt_qplib_creq_stat stats;
- struct tasklet_struct creq_tasklet;
+ struct work_struct creq_work;
aeq_handler_t aeq_handler;
u16 ring_id;
int msix_vec;
diff --git a/drivers/infiniband/hw/erdma/erdma.h b/drivers/infiniband/hw/erdma/erdma.h
index 5df401a30cb9..9a47c1432c27 100644
--- a/drivers/infiniband/hw/erdma/erdma.h
+++ b/drivers/infiniband/hw/erdma/erdma.h
@@ -11,6 +11,7 @@
#include <linux/netdevice.h>
#include <linux/pci.h>
#include <linux/xarray.h>
+#include <linux/workqueue.h>
#include <rdma/ib_verbs.h>

#include "erdma_hw.h"
@@ -161,7 +162,7 @@ struct erdma_eq_cb {
void *dev; /* All EQs use this fields to get erdma_dev struct */
struct erdma_irq irq;
struct erdma_eq eq;
- struct tasklet_struct tasklet;
+ struct work_struct work;
};

struct erdma_resource_cb {
diff --git a/drivers/infiniband/hw/erdma/erdma_eq.c b/drivers/infiniband/hw/erdma/erdma_eq.c
index ea47cb21fdb8..252906fd73b0 100644
--- a/drivers/infiniband/hw/erdma/erdma_eq.c
+++ b/drivers/infiniband/hw/erdma/erdma_eq.c
@@ -160,14 +160,16 @@ static irqreturn_t erdma_intr_ceq_handler(int irq, void *data)
{
struct erdma_eq_cb *ceq_cb = data;

- tasklet_schedule(&ceq_cb->tasklet);
+ queue_work(system_bh_wq, &ceq_cb->work);

return IRQ_HANDLED;
}

-static void erdma_intr_ceq_task(unsigned long data)
+static void erdma_intr_ceq_task(struct work_struct *t)
{
- erdma_ceq_completion_handler((struct erdma_eq_cb *)data);
+ struct erdma_eq_cb *ceq_cb = from_work(ceq_cb, t, work);
+
+ erdma_ceq_completion_handler(ceq_cb);
}

static int erdma_set_ceq_irq(struct erdma_dev *dev, u16 ceqn)
@@ -179,8 +181,7 @@ static int erdma_set_ceq_irq(struct erdma_dev *dev, u16 ceqn)
pci_name(dev->pdev));
eqc->irq.msix_vector = pci_irq_vector(dev->pdev, ceqn + 1);

- tasklet_init(&dev->ceqs[ceqn].tasklet, erdma_intr_ceq_task,
- (unsigned long)&dev->ceqs[ceqn]);
+ INIT_WORK(&dev->ceqs[ceqn].work, erdma_intr_ceq_task);

cpumask_set_cpu(cpumask_local_spread(ceqn + 1, dev->attrs.numa_node),
&eqc->irq.affinity_hint_mask);
diff --git a/drivers/infiniband/hw/hfi1/rc.c b/drivers/infiniband/hw/hfi1/rc.c
index b36242c9d42c..ec19ddbfdacb 100644
--- a/drivers/infiniband/hw/hfi1/rc.c
+++ b/drivers/infiniband/hw/hfi1/rc.c
@@ -1210,7 +1210,7 @@ static inline void hfi1_queue_rc_ack(struct hfi1_packet *packet, bool is_fecn)
if (is_fecn)
qp->s_flags |= RVT_S_ECN;

- /* Schedule the send tasklet. */
+ /* Schedule the send work. */
hfi1_schedule_send(qp);
unlock:
spin_unlock_irqrestore(&qp->s_lock, flags);
diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
index b67d23b1f286..5e1a1dd45511 100644
--- a/drivers/infiniband/hw/hfi1/sdma.c
+++ b/drivers/infiniband/hw/hfi1/sdma.c
@@ -11,6 +11,7 @@
#include <linux/timer.h>
#include <linux/vmalloc.h>
#include <linux/highmem.h>
+#include <linux/workqueue.h>

#include "hfi.h"
#include "common.h"
@@ -190,11 +191,11 @@ static const struct sdma_set_state_action sdma_action_table[] = {
static void sdma_complete(struct kref *);
static void sdma_finalput(struct sdma_state *);
static void sdma_get(struct sdma_state *);
-static void sdma_hw_clean_up_task(struct tasklet_struct *);
+static void sdma_hw_clean_up_task(struct work_struct *);
static void sdma_put(struct sdma_state *);
static void sdma_set_state(struct sdma_engine *, enum sdma_states);
static void sdma_start_hw_clean_up(struct sdma_engine *);
-static void sdma_sw_clean_up_task(struct tasklet_struct *);
+static void sdma_sw_clean_up_task(struct work_struct *);
static void sdma_sendctrl(struct sdma_engine *, unsigned);
static void init_sdma_regs(struct sdma_engine *, u32, uint);
static void sdma_process_event(
@@ -503,9 +504,9 @@ static void sdma_err_progress_check(struct timer_list *t)
schedule_work(&sde->err_halt_worker);
}

-static void sdma_hw_clean_up_task(struct tasklet_struct *t)
+static void sdma_hw_clean_up_task(struct work_struct *t)
{
- struct sdma_engine *sde = from_tasklet(sde, t,
+ struct sdma_engine *sde = from_work(sde, t,
sdma_hw_clean_up_task);
u64 statuscsr;

@@ -563,9 +564,9 @@ static void sdma_flush_descq(struct sdma_engine *sde)
sdma_desc_avail(sde, sdma_descq_freecnt(sde));
}

-static void sdma_sw_clean_up_task(struct tasklet_struct *t)
+static void sdma_sw_clean_up_task(struct work_struct *t)
{
- struct sdma_engine *sde = from_tasklet(sde, t, sdma_sw_clean_up_task);
+ struct sdma_engine *sde = from_work(sde, t, sdma_sw_clean_up_task);
unsigned long flags;

spin_lock_irqsave(&sde->tail_lock, flags);
@@ -624,7 +625,7 @@ static void sdma_sw_tear_down(struct sdma_engine *sde)

static void sdma_start_hw_clean_up(struct sdma_engine *sde)
{
- tasklet_hi_schedule(&sde->sdma_hw_clean_up_task);
+ queue_work(system_bh_highpri_wq, &sde->sdma_hw_clean_up_task);
}

static void sdma_set_state(struct sdma_engine *sde,
@@ -1415,9 +1416,9 @@ int sdma_init(struct hfi1_devdata *dd, u8 port)
sde->tail_csr =
get_kctxt_csr_addr(dd, this_idx, SD(TAIL));

- tasklet_setup(&sde->sdma_hw_clean_up_task,
+ INIT_WORK(&sde->sdma_hw_clean_up_task,
sdma_hw_clean_up_task);
- tasklet_setup(&sde->sdma_sw_clean_up_task,
+ INIT_WORK(&sde->sdma_sw_clean_up_task,
sdma_sw_clean_up_task);
INIT_WORK(&sde->err_halt_worker, sdma_err_halt_wait);
INIT_WORK(&sde->flush_worker, sdma_field_flush);
@@ -2741,7 +2742,7 @@ static void __sdma_process_event(struct sdma_engine *sde,
switch (event) {
case sdma_event_e00_go_hw_down:
sdma_set_state(sde, sdma_state_s00_hw_down);
- tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
+ queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
break;
case sdma_event_e10_go_hw_start:
break;
@@ -2783,13 +2784,13 @@ static void __sdma_process_event(struct sdma_engine *sde,
switch (event) {
case sdma_event_e00_go_hw_down:
sdma_set_state(sde, sdma_state_s00_hw_down);
- tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
+ queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
break;
case sdma_event_e10_go_hw_start:
break;
case sdma_event_e15_hw_halt_done:
sdma_set_state(sde, sdma_state_s30_sw_clean_up_wait);
- tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
+ queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
break;
case sdma_event_e25_hw_clean_up_done:
break;
@@ -2824,13 +2825,13 @@ static void __sdma_process_event(struct sdma_engine *sde,
switch (event) {
case sdma_event_e00_go_hw_down:
sdma_set_state(sde, sdma_state_s00_hw_down);
- tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
+ queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
break;
case sdma_event_e10_go_hw_start:
break;
case sdma_event_e15_hw_halt_done:
sdma_set_state(sde, sdma_state_s30_sw_clean_up_wait);
- tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
+ queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
break;
case sdma_event_e25_hw_clean_up_done:
break;
@@ -2864,7 +2865,7 @@ static void __sdma_process_event(struct sdma_engine *sde,
switch (event) {
case sdma_event_e00_go_hw_down:
sdma_set_state(sde, sdma_state_s00_hw_down);
- tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
+ queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
break;
case sdma_event_e10_go_hw_start:
break;
@@ -2888,7 +2889,7 @@ static void __sdma_process_event(struct sdma_engine *sde,
break;
case sdma_event_e81_hw_frozen:
sdma_set_state(sde, sdma_state_s82_freeze_sw_clean);
- tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
+ queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
break;
case sdma_event_e82_hw_unfreeze:
break;
@@ -2903,7 +2904,7 @@ static void __sdma_process_event(struct sdma_engine *sde,
switch (event) {
case sdma_event_e00_go_hw_down:
sdma_set_state(sde, sdma_state_s00_hw_down);
- tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
+ queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
break;
case sdma_event_e10_go_hw_start:
break;
@@ -2947,7 +2948,7 @@ static void __sdma_process_event(struct sdma_engine *sde,
switch (event) {
case sdma_event_e00_go_hw_down:
sdma_set_state(sde, sdma_state_s00_hw_down);
- tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
+ queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
break;
case sdma_event_e10_go_hw_start:
break;
diff --git a/drivers/infiniband/hw/hfi1/sdma.h b/drivers/infiniband/hw/hfi1/sdma.h
index d77246b48434..3f047260cebe 100644
--- a/drivers/infiniband/hw/hfi1/sdma.h
+++ b/drivers/infiniband/hw/hfi1/sdma.h
@@ -11,6 +11,7 @@
#include <asm/byteorder.h>
#include <linux/workqueue.h>
#include <linux/rculist.h>
+#include <linux/workqueue.h>

#include "hfi.h"
#include "verbs.h"
@@ -346,11 +347,11 @@ struct sdma_engine {

/* CONFIG SDMA for now, just blindly duplicate */
/* private: */
- struct tasklet_struct sdma_hw_clean_up_task
+ struct work_struct sdma_hw_clean_up_task
____cacheline_aligned_in_smp;

/* private: */
- struct tasklet_struct sdma_sw_clean_up_task
+ struct work_struct sdma_sw_clean_up_task
____cacheline_aligned_in_smp;
/* private: */
struct work_struct err_halt_worker;
@@ -471,7 +472,7 @@ void _sdma_txreq_ahgadd(
* Completions of submitted requests can be gotten on selected
* txreqs by giving a completion routine callback to sdma_txinit() or
* sdma_txinit_ahg(). The environment in which the callback runs
- * can be from an ISR, a tasklet, or a thread, so no sleeping
+ * can be from an ISR, a work item, or a thread, so no sleeping
* kernel routines can be used. Aspects of the sdma ring may
* be locked so care should be taken with locking.
*
@@ -551,7 +552,7 @@ static inline int sdma_txinit_ahg(
* Completions of submitted requests can be gotten on selected
* txreqs by giving a completion routine callback to sdma_txinit() or
* sdma_txinit_ahg(). The environment in which the callback runs
- * can be from an ISR, a tasklet, or a thread, so no sleeping
+ * can be from an ISR, a work item, or a thread, so no sleeping
* kernel routines can be used. The head size of the sdma ring may
* be locked so care should be taken with locking.
*
diff --git a/drivers/infiniband/hw/hfi1/tid_rdma.c b/drivers/infiniband/hw/hfi1/tid_rdma.c
index c465966a1d9c..31cb5a092f42 100644
--- a/drivers/infiniband/hw/hfi1/tid_rdma.c
+++ b/drivers/infiniband/hw/hfi1/tid_rdma.c
@@ -2316,7 +2316,7 @@ void hfi1_rc_rcv_tid_rdma_read_req(struct hfi1_packet *packet)
*/
qpriv->r_tid_alloc = qp->r_head_ack_queue;

- /* Schedule the send tasklet. */
+ /* Schedule the send work. */
qp->s_flags |= RVT_S_RESP_PENDING;
if (fecn)
qp->s_flags |= RVT_S_ECN;
@@ -3807,7 +3807,7 @@ void hfi1_rc_rcv_tid_rdma_write_req(struct hfi1_packet *packet)
hfi1_tid_write_alloc_resources(qp, true);
trace_hfi1_tid_write_rsp_rcv_req(qp);

- /* Schedule the send tasklet. */
+ /* Schedule the send work. */
qp->s_flags |= RVT_S_RESP_PENDING;
if (fecn)
qp->s_flags |= RVT_S_ECN;
@@ -5389,7 +5389,7 @@ static void hfi1_do_tid_send(struct rvt_qp *qp)

/*
* If the packet cannot be sent now, return and
- * the send tasklet will be woken up later.
+ * the send work will be woken up later.
*/
if (hfi1_verbs_send(qp, &ps))
return;
diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c
index 6aed6169c07d..e9644f2b774d 100644
--- a/drivers/infiniband/hw/irdma/ctrl.c
+++ b/drivers/infiniband/hw/irdma/ctrl.c
@@ -5271,7 +5271,7 @@ int irdma_process_cqp_cmd(struct irdma_sc_dev *dev,
}

/**
- * irdma_process_bh - called from tasklet for cqp list
+ * irdma_process_bh - called from work for cqp list
* @dev: sc device struct
*/
int irdma_process_bh(struct irdma_sc_dev *dev)
diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c
index ad50b77282f8..18d552919c28 100644
--- a/drivers/infiniband/hw/irdma/hw.c
+++ b/drivers/infiniband/hw/irdma/hw.c
@@ -440,12 +440,12 @@ static void irdma_ena_intr(struct irdma_sc_dev *dev, u32 msix_id)
}

/**
- * irdma_dpc - tasklet for aeq and ceq 0
- * @t: tasklet_struct ptr
+ * irdma_dpc - work for aeq and ceq 0
+ * @t: work_struct ptr
*/
-static void irdma_dpc(struct tasklet_struct *t)
+static void irdma_dpc(struct work_struct *t)
{
- struct irdma_pci_f *rf = from_tasklet(rf, t, dpc_tasklet);
+ struct irdma_pci_f *rf = from_work(rf, t, dpc_work);

if (rf->msix_shared)
irdma_process_ceq(rf, rf->ceqlist);
@@ -455,11 +455,11 @@ static void irdma_dpc(struct tasklet_struct *t)

/**
* irdma_ceq_dpc - dpc handler for CEQ
- * @t: tasklet_struct ptr
+ * @t: work_struct ptr
*/
-static void irdma_ceq_dpc(struct tasklet_struct *t)
+static void irdma_ceq_dpc(struct work_struct *t)
{
- struct irdma_ceq *iwceq = from_tasklet(iwceq, t, dpc_tasklet);
+ struct irdma_ceq *iwceq = from_work(iwceq, t, dpc_work);
struct irdma_pci_f *rf = iwceq->rf;

irdma_process_ceq(rf, iwceq);
@@ -533,7 +533,7 @@ static irqreturn_t irdma_irq_handler(int irq, void *data)
{
struct irdma_pci_f *rf = data;

- tasklet_schedule(&rf->dpc_tasklet);
+ queue_work(system_bh_wq, &rf->dpc_work);

return IRQ_HANDLED;
}
@@ -550,7 +550,7 @@ static irqreturn_t irdma_ceq_handler(int irq, void *data)
if (iwceq->irq != irq)
ibdev_err(to_ibdev(&iwceq->rf->sc_dev), "expected irq = %d received irq = %d\n",
iwceq->irq, irq);
- tasklet_schedule(&iwceq->dpc_tasklet);
+ queue_work(system_bh_wq, &iwceq->dpc_work);

return IRQ_HANDLED;
}
@@ -1121,14 +1121,14 @@ static int irdma_cfg_ceq_vector(struct irdma_pci_f *rf, struct irdma_ceq *iwceq,
if (rf->msix_shared && !ceq_id) {
snprintf(msix_vec->name, sizeof(msix_vec->name) - 1,
"irdma-%s-AEQCEQ-0", dev_name(&rf->pcidev->dev));
- tasklet_setup(&rf->dpc_tasklet, irdma_dpc);
+ INIT_WORK(&rf->dpc_work, irdma_dpc);
status = request_irq(msix_vec->irq, irdma_irq_handler, 0,
msix_vec->name, rf);
} else {
snprintf(msix_vec->name, sizeof(msix_vec->name) - 1,
"irdma-%s-CEQ-%d",
dev_name(&rf->pcidev->dev), ceq_id);
- tasklet_setup(&iwceq->dpc_tasklet, irdma_ceq_dpc);
+ INIT_WORK(&iwceq->dpc_work, irdma_ceq_dpc);

status = request_irq(msix_vec->irq, irdma_ceq_handler, 0,
msix_vec->name, iwceq);
@@ -1162,7 +1162,7 @@ static int irdma_cfg_aeq_vector(struct irdma_pci_f *rf)
if (!rf->msix_shared) {
snprintf(msix_vec->name, sizeof(msix_vec->name) - 1,
"irdma-%s-AEQ", dev_name(&rf->pcidev->dev));
- tasklet_setup(&rf->dpc_tasklet, irdma_dpc);
+ INIT_WORK(&rf->dpc_work, irdma_dpc);
ret = request_irq(msix_vec->irq, irdma_irq_handler, 0,
msix_vec->name, rf);
}
diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h
index b65bc2ea542f..54301093b746 100644
--- a/drivers/infiniband/hw/irdma/main.h
+++ b/drivers/infiniband/hw/irdma/main.h
@@ -30,6 +30,7 @@
#endif
#include <linux/auxiliary_bus.h>
#include <linux/net/intel/iidc.h>
+#include <linux/workqueue.h>
#include <crypto/hash.h>
#include <rdma/ib_smi.h>
#include <rdma/ib_verbs.h>
@@ -192,7 +193,7 @@ struct irdma_ceq {
u32 irq;
u32 msix_idx;
struct irdma_pci_f *rf;
- struct tasklet_struct dpc_tasklet;
+ struct work_struct dpc_work;
spinlock_t ce_lock; /* sync cq destroy with cq completion event notification */
};

@@ -316,7 +317,7 @@ struct irdma_pci_f {
struct mc_table_list mc_qht_list;
struct irdma_msix_vector *iw_msixtbl;
struct irdma_qvlist_info *iw_qvlist;
- struct tasklet_struct dpc_tasklet;
+ struct work_struct dpc_work;
struct msix_entry *msix_entries;
struct irdma_dma_mem obj_mem;
struct irdma_dma_mem obj_next;
diff --git a/drivers/infiniband/hw/qib/qib.h b/drivers/infiniband/hw/qib/qib.h
index 26c615772be3..d2ebaf31ce5a 100644
--- a/drivers/infiniband/hw/qib/qib.h
+++ b/drivers/infiniband/hw/qib/qib.h
@@ -53,6 +53,7 @@
#include <linux/sched.h>
#include <linux/kthread.h>
#include <linux/xarray.h>
+#include <linux/workqueue.h>
#include <rdma/ib_hdrs.h>
#include <rdma/rdma_vt.h>

@@ -562,7 +563,7 @@ struct qib_pportdata {
u8 sdma_generation;
u8 sdma_intrequest;

- struct tasklet_struct sdma_sw_clean_up_task
+ struct work_struct sdma_sw_clean_up_task
____cacheline_aligned_in_smp;

wait_queue_head_t state_wait; /* for state_wanted */
@@ -1068,8 +1069,8 @@ struct qib_devdata {
u8 psxmitwait_supported;
/* cycle length of PS* counters in HW (in picoseconds) */
u16 psxmitwait_check_rate;
- /* high volume overflow errors defered to tasklet */
- struct tasklet_struct error_tasklet;
+ /* high volume overflow errors deferred to work */
+ struct work_struct error_work;

int assigned_node_id; /* NUMA node closest to HCA */
};
diff --git a/drivers/infiniband/hw/qib/qib_iba7322.c b/drivers/infiniband/hw/qib/qib_iba7322.c
index f93906d8fc09..c3325071f2b3 100644
--- a/drivers/infiniband/hw/qib/qib_iba7322.c
+++ b/drivers/infiniband/hw/qib/qib_iba7322.c
@@ -46,6 +46,7 @@
#include <rdma/ib_smi.h>
#ifdef CONFIG_INFINIBAND_QIB_DCA
#include <linux/dca.h>
+#include <linux/workqueue.h>
#endif

#include "qib.h"
@@ -1711,9 +1712,9 @@ static noinline void handle_7322_errors(struct qib_devdata *dd)
return;
}

-static void qib_error_tasklet(struct tasklet_struct *t)
+static void qib_error_work(struct work_struct *t)
{
- struct qib_devdata *dd = from_tasklet(dd, t, error_tasklet);
+ struct qib_devdata *dd = from_work(dd, t, error_work);

handle_7322_errors(dd);
qib_write_kreg(dd, kr_errmask, dd->cspec->errormask);
@@ -3001,7 +3002,7 @@ static noinline void unlikely_7322_intr(struct qib_devdata *dd, u64 istat)
unknown_7322_gpio_intr(dd);
if (istat & QIB_I_C_ERROR) {
qib_write_kreg(dd, kr_errmask, 0ULL);
- tasklet_schedule(&dd->error_tasklet);
+ queue_work(system_bh_wq, &dd->error_work);
}
if (istat & INT_MASK_P(Err, 0) && dd->rcd[0])
handle_7322_p_errors(dd->rcd[0]->ppd);
@@ -3515,7 +3516,7 @@ static void qib_setup_7322_interrupt(struct qib_devdata *dd, int clearpend)
for (i = 0; i < ARRAY_SIZE(redirect); i++)
qib_write_kreg(dd, kr_intredirect + i, redirect[i]);
dd->cspec->main_int_mask = mask;
- tasklet_setup(&dd->error_tasklet, qib_error_tasklet);
+ INIT_WORK(&dd->error_work, qib_error_work);
}

/**
diff --git a/drivers/infiniband/hw/qib/qib_rc.c b/drivers/infiniband/hw/qib/qib_rc.c
index a1c20ffb4490..79e31921e384 100644
--- a/drivers/infiniband/hw/qib/qib_rc.c
+++ b/drivers/infiniband/hw/qib/qib_rc.c
@@ -593,7 +593,7 @@ int qib_make_rc_req(struct rvt_qp *qp, unsigned long *flags)
*
* This is called from qib_rc_rcv() and qib_kreceive().
* Note that RDMA reads and atomics are handled in the
- * send side QP state and tasklet.
+ * send side QP state and work.
*/
void qib_send_rc_ack(struct rvt_qp *qp)
{
@@ -670,7 +670,7 @@ void qib_send_rc_ack(struct rvt_qp *qp)
/*
* We are out of PIO buffers at the moment.
* Pass responsibility for sending the ACK to the
- * send tasklet so that when a PIO buffer becomes
+ * send work so that when a PIO buffer becomes
* available, the ACK is sent ahead of other outgoing
* packets.
*/
@@ -715,7 +715,7 @@ void qib_send_rc_ack(struct rvt_qp *qp)
qp->s_nak_state = qp->r_nak_state;
qp->s_ack_psn = qp->r_ack_psn;

- /* Schedule the send tasklet. */
+ /* Schedule the send work. */
qib_schedule_send(qp);
}
unlock:
@@ -806,7 +806,7 @@ static void reset_psn(struct rvt_qp *qp, u32 psn)
qp->s_psn = psn;
/*
* Set RVT_S_WAIT_PSN as qib_rc_complete() may start the timer
- * asynchronously before the send tasklet can get scheduled.
+ * asynchronously before the send work can get scheduled.
* Doing it in qib_make_rc_req() is too late.
*/
if ((qib_cmp24(qp->s_psn, qp->s_sending_hpsn) <= 0) &&
@@ -1292,7 +1292,7 @@ static void qib_rc_rcv_resp(struct qib_ibport *ibp,
(qib_cmp24(qp->s_sending_psn, qp->s_sending_hpsn) <= 0)) {

/*
- * If send tasklet not running attempt to progress
+ * If send work not running attempt to progress
* SDMA queue.
*/
if (!(qp->s_flags & RVT_S_BUSY)) {
@@ -1629,7 +1629,7 @@ static int qib_rc_rcv_error(struct ib_other_headers *ohdr,
case OP(FETCH_ADD): {
/*
* If we didn't find the atomic request in the ack queue
- * or the send tasklet is already backed up to send an
+ * or the send work is already backed up to send an
* earlier entry, we can ignore this request.
*/
if (!e || e->opcode != (u8) opcode || old_req)
@@ -1996,7 +1996,7 @@ void qib_rc_rcv(struct qib_ctxtdata *rcd, struct ib_header *hdr,
qp->r_nak_state = 0;
qp->r_head_ack_queue = next;

- /* Schedule the send tasklet. */
+ /* Schedule the send work. */
qp->s_flags |= RVT_S_RESP_PENDING;
qib_schedule_send(qp);

@@ -2059,7 +2059,7 @@ void qib_rc_rcv(struct qib_ctxtdata *rcd, struct ib_header *hdr,
qp->r_nak_state = 0;
qp->r_head_ack_queue = next;

- /* Schedule the send tasklet. */
+ /* Schedule the send work. */
qp->s_flags |= RVT_S_RESP_PENDING;
qib_schedule_send(qp);

diff --git a/drivers/infiniband/hw/qib/qib_ruc.c b/drivers/infiniband/hw/qib/qib_ruc.c
index 1fa21938f310..f44a2a8b4b1e 100644
--- a/drivers/infiniband/hw/qib/qib_ruc.c
+++ b/drivers/infiniband/hw/qib/qib_ruc.c
@@ -257,7 +257,7 @@ void _qib_do_send(struct work_struct *work)
* @qp: pointer to the QP
*
* Process entries in the send work queue until credit or queue is
- * exhausted. Only allow one CPU to send a packet per QP (tasklet).
+ * exhausted. Only allow one CPU to send a packet per QP (work).
* Otherwise, two threads could send packets out of order.
*/
void qib_do_send(struct rvt_qp *qp)
@@ -299,7 +299,7 @@ void qib_do_send(struct rvt_qp *qp)
spin_unlock_irqrestore(&qp->s_lock, flags);
/*
* If the packet cannot be sent now, return and
- * the send tasklet will be woken up later.
+ * the send work will be woken up later.
*/
if (qib_verbs_send(qp, priv->s_hdr, qp->s_hdrwords,
qp->s_cur_sge, qp->s_cur_size))
diff --git a/drivers/infiniband/hw/qib/qib_sdma.c b/drivers/infiniband/hw/qib/qib_sdma.c
index 5e86cbf7d70e..facb3964d2ec 100644
--- a/drivers/infiniband/hw/qib/qib_sdma.c
+++ b/drivers/infiniband/hw/qib/qib_sdma.c
@@ -34,6 +34,7 @@
#include <linux/spinlock.h>
#include <linux/netdevice.h>
#include <linux/moduleparam.h>
+#include <linux/workqueue.h>

#include "qib.h"
#include "qib_common.h"
@@ -62,7 +63,7 @@ static void sdma_get(struct qib_sdma_state *);
static void sdma_put(struct qib_sdma_state *);
static void sdma_set_state(struct qib_pportdata *, enum qib_sdma_states);
static void sdma_start_sw_clean_up(struct qib_pportdata *);
-static void sdma_sw_clean_up_task(struct tasklet_struct *);
+static void sdma_sw_clean_up_task(struct work_struct *);
static void unmap_desc(struct qib_pportdata *, unsigned);

static void sdma_get(struct qib_sdma_state *ss)
@@ -119,9 +120,9 @@ static void clear_sdma_activelist(struct qib_pportdata *ppd)
}
}

-static void sdma_sw_clean_up_task(struct tasklet_struct *t)
+static void sdma_sw_clean_up_task(struct work_struct *t)
{
- struct qib_pportdata *ppd = from_tasklet(ppd, t,
+ struct qib_pportdata *ppd = from_work(ppd, t,
sdma_sw_clean_up_task);
unsigned long flags;

@@ -188,7 +189,7 @@ static void sdma_sw_tear_down(struct qib_pportdata *ppd)

static void sdma_start_sw_clean_up(struct qib_pportdata *ppd)
{
- tasklet_hi_schedule(&ppd->sdma_sw_clean_up_task);
+ queue_work(system_bh_highpri_wq, &ppd->sdma_sw_clean_up_task);
}

static void sdma_set_state(struct qib_pportdata *ppd,
@@ -437,7 +438,7 @@ int qib_setup_sdma(struct qib_pportdata *ppd)

INIT_LIST_HEAD(&ppd->sdma_activelist);

- tasklet_setup(&ppd->sdma_sw_clean_up_task, sdma_sw_clean_up_task);
+ INIT_WORK(&ppd->sdma_sw_clean_up_task, sdma_sw_clean_up_task);

ret = dd->f_init_sdma_regs(ppd);
if (ret)
diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
index e6203e26cc06..efe4689151c2 100644
--- a/drivers/infiniband/sw/rdmavt/qp.c
+++ b/drivers/infiniband/sw/rdmavt/qp.c
@@ -1306,7 +1306,7 @@ int rvt_error_qp(struct rvt_qp *qp, enum ib_wc_status err)

rdi->driver_f.notify_error_qp(qp);

- /* Schedule the sending tasklet to drain the send work queue. */
+ /* Schedule the sending work to drain the send work queue. */
if (READ_ONCE(qp->s_last) != qp->s_head)
rdi->driver_f.schedule_send(qp);

--
2.17.1


2024-03-27 16:07:58

by Allen Pais

[permalink] [raw]
Subject: [PATCH 4/9] USB: Convert from tasklet to BH workqueue

The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added. A BH workqueue
behaves similarly to regular workqueues except that the queued work items
are executed in the BH context.

This patch converts drivers/usb/* from tasklet to BH workqueue.
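
For reviewers who have not looked at the new API yet, the mechanical
pattern applied throughout this patch is roughly the following. The
struct and function names below are hypothetical, not taken from any of
the converted drivers; from_work(), system_bh_wq, disable_work_sync()
and enable_and_queue_work() all come from the for-6.10 branch referenced
below:

  /* hypothetical driver state, illustrating the 1:1 API mapping */
  struct foo {
          struct work_struct work;        /* was: struct tasklet_struct tasklet */
  };

  /* was: static void foo_run(struct tasklet_struct *t) */
  static void foo_run(struct work_struct *t)
  {
          /* was: from_tasklet(foo, t, tasklet) */
          struct foo *foo = from_work(foo, t, work);

          /* still runs in BH (softirq) context when queued on system_bh_wq */
  }

  static void foo_init(struct foo *foo)
  {
          INIT_WORK(&foo->work, foo_run);         /* was: tasklet_setup()    */
          queue_work(system_bh_wq, &foo->work);   /* was: tasklet_schedule() */
  }

  static void foo_quiesce_and_resume(struct foo *foo)
  {
          disable_work_sync(&foo->work);  /* was: tasklet_disable() */
          /* ... touch state the handler also uses ... */
          /* was: tasklet_enable(); also kicks one run, as done in this patch */
          enable_and_queue_work(system_bh_wq, &foo->work);
  }

  static void foo_teardown(struct foo *foo)
  {
          cancel_work_sync(&foo->work);   /* was: tasklet_kill() */
  }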

Based on the work done by Tejun Heo <[email protected]>
Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10

Signed-off-by: Allen Pais <[email protected]>
---
drivers/usb/atm/usbatm.c | 55 +++++++++++++++--------------
drivers/usb/atm/usbatm.h | 3 +-
drivers/usb/core/hcd.c | 22 ++++++------
drivers/usb/gadget/udc/fsl_qe_udc.c | 21 +++++------
drivers/usb/gadget/udc/fsl_qe_udc.h | 4 +--
drivers/usb/host/ehci-sched.c | 2 +-
drivers/usb/host/fhci-hcd.c | 3 +-
drivers/usb/host/fhci-sched.c | 10 +++---
drivers/usb/host/fhci.h | 5 +--
drivers/usb/host/xhci-dbgcap.h | 3 +-
drivers/usb/host/xhci-dbgtty.c | 15 ++++----
include/linux/usb/cdc_ncm.h | 2 +-
include/linux/usb/usbnet.h | 2 +-
13 files changed, 76 insertions(+), 71 deletions(-)

diff --git a/drivers/usb/atm/usbatm.c b/drivers/usb/atm/usbatm.c
index 2da6615fbb6f..74849f24e52e 100644
--- a/drivers/usb/atm/usbatm.c
+++ b/drivers/usb/atm/usbatm.c
@@ -17,7 +17,7 @@
* - Removed the limit on the number of devices
* - Module now autoloads on device plugin
* - Merged relevant parts of sarlib
- * - Replaced the kernel thread with a tasklet
+ * - Replaced the kernel thread with a work item
* - New packet transmission code
* - Changed proc file contents
* - Fixed all known SMP races
@@ -68,6 +68,7 @@
#include <linux/wait.h>
#include <linux/kthread.h>
#include <linux/ratelimit.h>
+#include <linux/workqueue.h>

#ifdef VERBOSE_DEBUG
static int usbatm_print_packet(struct usbatm_data *instance, const unsigned char *data, int len);
@@ -249,7 +250,7 @@ static void usbatm_complete(struct urb *urb)
/* vdbg("%s: urb 0x%p, status %d, actual_length %d",
__func__, urb, status, urb->actual_length); */

- /* Can be invoked from task context, protect against interrupts */
+ /* Can be invoked from work context, protect against interrupts */
spin_lock_irqsave(&channel->lock, flags);

/* must add to the back when receiving; doesn't matter when sending */
@@ -269,7 +270,7 @@ static void usbatm_complete(struct urb *urb)
/* throttle processing in case of an error */
mod_timer(&channel->delay, jiffies + msecs_to_jiffies(THROTTLE_MSECS));
} else
- tasklet_schedule(&channel->tasklet);
+ queue_work(system_bh_wq, &channel->work);
}


@@ -511,10 +512,10 @@ static unsigned int usbatm_write_cells(struct usbatm_data *instance,
** receive **
**************/

-static void usbatm_rx_process(struct tasklet_struct *t)
+static void usbatm_rx_process(struct work_struct *t)
{
- struct usbatm_data *instance = from_tasklet(instance, t,
- rx_channel.tasklet);
+ struct usbatm_data *instance = from_work(instance, t,
+ rx_channel.work);
struct urb *urb;

while ((urb = usbatm_pop_urb(&instance->rx_channel))) {
@@ -565,10 +566,10 @@ static void usbatm_rx_process(struct tasklet_struct *t)
** send **
***********/

-static void usbatm_tx_process(struct tasklet_struct *t)
+static void usbatm_tx_process(struct work_struct *t)
{
- struct usbatm_data *instance = from_tasklet(instance, t,
- tx_channel.tasklet);
+ struct usbatm_data *instance = from_work(instance, t,
+ tx_channel.work);
struct sk_buff *skb = instance->current_skb;
struct urb *urb = NULL;
const unsigned int buf_size = instance->tx_channel.buf_size;
@@ -632,13 +633,13 @@ static void usbatm_cancel_send(struct usbatm_data *instance,
}
spin_unlock_irq(&instance->sndqueue.lock);

- tasklet_disable(&instance->tx_channel.tasklet);
+ disable_work_sync(&instance->tx_channel.work);
if ((skb = instance->current_skb) && (UDSL_SKB(skb)->atm.vcc == vcc)) {
atm_dbg(instance, "%s: popping current skb (0x%p)\n", __func__, skb);
instance->current_skb = NULL;
usbatm_pop(vcc, skb);
}
- tasklet_enable(&instance->tx_channel.tasklet);
+ enable_and_queue_work(system_bh_wq, &instance->tx_channel.work);
}

static int usbatm_atm_send(struct atm_vcc *vcc, struct sk_buff *skb)
@@ -677,7 +678,7 @@ static int usbatm_atm_send(struct atm_vcc *vcc, struct sk_buff *skb)
ctrl->crc = crc32_be(~0, skb->data, skb->len);

skb_queue_tail(&instance->sndqueue, skb);
- tasklet_schedule(&instance->tx_channel.tasklet);
+ queue_work(system_bh_wq, &instance->tx_channel.work);

return 0;

@@ -695,8 +696,8 @@ static void usbatm_destroy_instance(struct kref *kref)
{
struct usbatm_data *instance = container_of(kref, struct usbatm_data, refcount);

- tasklet_kill(&instance->rx_channel.tasklet);
- tasklet_kill(&instance->tx_channel.tasklet);
+ cancel_work_sync(&instance->rx_channel.work);
+ cancel_work_sync(&instance->tx_channel.work);
usb_put_dev(instance->usb_dev);
kfree(instance);
}
@@ -823,12 +824,12 @@ static int usbatm_atm_open(struct atm_vcc *vcc)

vcc->dev_data = new;

- tasklet_disable(&instance->rx_channel.tasklet);
+ disable_work_sync(&instance->rx_channel.work);
instance->cached_vcc = new;
instance->cached_vpi = vpi;
instance->cached_vci = vci;
list_add(&new->list, &instance->vcc_list);
- tasklet_enable(&instance->rx_channel.tasklet);
+ enable_and_queue_work(system_bh_wq, &instance->rx_channel.work);

set_bit(ATM_VF_ADDR, &vcc->flags);
set_bit(ATM_VF_PARTIAL, &vcc->flags);
@@ -858,14 +859,14 @@ static void usbatm_atm_close(struct atm_vcc *vcc)

mutex_lock(&instance->serialize); /* vs self, usbatm_atm_open, usbatm_usb_disconnect */

- tasklet_disable(&instance->rx_channel.tasklet);
+ disable_work_sync(&instance->rx_channel.work);
if (instance->cached_vcc == vcc_data) {
instance->cached_vcc = NULL;
instance->cached_vpi = ATM_VPI_UNSPEC;
instance->cached_vci = ATM_VCI_UNSPEC;
}
list_del(&vcc_data->list);
- tasklet_enable(&instance->rx_channel.tasklet);
+ enable_and_queue_work(system_bh_wq, &instance->rx_channel.work);

kfree_skb(vcc_data->sarb);
vcc_data->sarb = NULL;
@@ -991,18 +992,18 @@ static int usbatm_heavy_init(struct usbatm_data *instance)
return 0;
}

-static void usbatm_tasklet_schedule(struct timer_list *t)
+static void usbatm_queue_work(struct timer_list *t)
{
struct usbatm_channel *channel = from_timer(channel, t, delay);

- tasklet_schedule(&channel->tasklet);
+ queue_work(system_bh_wq, &channel->work);
}

static void usbatm_init_channel(struct usbatm_channel *channel)
{
spin_lock_init(&channel->lock);
INIT_LIST_HEAD(&channel->list);
- timer_setup(&channel->delay, usbatm_tasklet_schedule, 0);
+ timer_setup(&channel->delay, usbatm_queue_work, 0);
}

int usbatm_usb_probe(struct usb_interface *intf, const struct usb_device_id *id,
@@ -1074,8 +1075,8 @@ int usbatm_usb_probe(struct usb_interface *intf, const struct usb_device_id *id,

usbatm_init_channel(&instance->rx_channel);
usbatm_init_channel(&instance->tx_channel);
- tasklet_setup(&instance->rx_channel.tasklet, usbatm_rx_process);
- tasklet_setup(&instance->tx_channel.tasklet, usbatm_tx_process);
+ INIT_WORK(&instance->rx_channel.work, usbatm_rx_process);
+ INIT_WORK(&instance->tx_channel.work, usbatm_tx_process);
instance->rx_channel.stride = ATM_CELL_SIZE + driver->rx_padding;
instance->tx_channel.stride = ATM_CELL_SIZE + driver->tx_padding;
instance->rx_channel.usbatm = instance->tx_channel.usbatm = instance;
@@ -1231,8 +1232,8 @@ void usbatm_usb_disconnect(struct usb_interface *intf)
vcc_release_async(vcc_data->vcc, -EPIPE);
mutex_unlock(&instance->serialize);

- tasklet_disable(&instance->rx_channel.tasklet);
- tasklet_disable(&instance->tx_channel.tasklet);
+ disable_work_sync(&instance->rx_channel.work);
+ disable_work_sync(&instance->tx_channel.work);

for (i = 0; i < num_rcv_urbs + num_snd_urbs; i++)
usb_kill_urb(instance->urbs[i]);
@@ -1245,8 +1246,8 @@ void usbatm_usb_disconnect(struct usb_interface *intf)
INIT_LIST_HEAD(&instance->rx_channel.list);
INIT_LIST_HEAD(&instance->tx_channel.list);

- tasklet_enable(&instance->rx_channel.tasklet);
- tasklet_enable(&instance->tx_channel.tasklet);
+ enable_and_queue_work(system_bh_wq, &instance->rx_channel.work);
+ enable_and_queue_work(system_bh_wq, &instance->tx_channel.work);

if (instance->atm_dev && instance->driver->atm_stop)
instance->driver->atm_stop(instance, instance->atm_dev);
diff --git a/drivers/usb/atm/usbatm.h b/drivers/usb/atm/usbatm.h
index d96658e2e209..3452f8c2e1e5 100644
--- a/drivers/usb/atm/usbatm.h
+++ b/drivers/usb/atm/usbatm.h
@@ -21,6 +21,7 @@
#include <linux/usb.h>
#include <linux/mutex.h>
#include <linux/ratelimit.h>
+#include <linux/workqueue.h>

/*
#define VERBOSE_DEBUG
@@ -109,7 +110,7 @@ struct usbatm_channel {
unsigned int packet_size; /* endpoint maxpacket */
spinlock_t lock;
struct list_head list;
- struct tasklet_struct tasklet;
+ struct work_struct work;
struct timer_list delay;
struct usbatm_data *usbatm;
};
diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
index c0e005670d67..88d8e1c366cd 100644
--- a/drivers/usb/core/hcd.c
+++ b/drivers/usb/core/hcd.c
@@ -37,6 +37,7 @@
#include <linux/usb.h>
#include <linux/usb/hcd.h>
#include <linux/usb/otg.h>
+#include <linux/workqueue.h>

#include "usb.h"
#include "phy.h"
@@ -884,7 +885,7 @@ static void usb_bus_init (struct usb_bus *bus)
* usb_register_bus - registers the USB host controller with the usb core
* @bus: pointer to the bus to register
*
- * Context: task context, might sleep.
+ * Context: process context, might sleep.
*
* Assigns a bus number, and links the controller into usbcore data
* structures so that it can be seen by scanning the bus list.
@@ -920,7 +921,7 @@ static int usb_register_bus(struct usb_bus *bus)
* usb_deregister_bus - deregisters the USB host controller
* @bus: pointer to the bus to deregister
*
- * Context: task context, might sleep.
+ * Context: process context, might sleep.
*
* Recycles the bus number, and unlinks the controller from usbcore data
* structures so that it won't be seen by scanning the bus list.
@@ -1640,7 +1641,7 @@ static void __usb_hcd_giveback_urb(struct urb *urb)
/* pass ownership to the completion handler */
urb->status = status;
/*
- * This function can be called in task context inside another remote
+ * This function can be called in process context inside another remote
* coverage collection section, but kcov doesn't support that kind of
* recursion yet. Only collect coverage in softirq context for now.
*/
@@ -1662,10 +1663,9 @@ static void __usb_hcd_giveback_urb(struct urb *urb)
usb_put_urb(urb);
}

-static void usb_giveback_urb_bh(struct work_struct *work)
+static void usb_giveback_urb_bh(struct work_struct *t)
{
- struct giveback_urb_bh *bh =
- container_of(work, struct giveback_urb_bh, bh);
+ struct giveback_urb_bh *bh = from_work(bh, t, bh);
struct list_head local_list;

spin_lock_irq(&bh->lock);
@@ -1705,7 +1705,7 @@ static void usb_giveback_urb_bh(struct work_struct *work)
* @status: completion status code for the URB.
*
* Context: atomic. The completion callback is invoked in caller's context.
- * For HCDs with HCD_BH flag set, the completion callback is invoked in BH
+ * For HCDs with HCD_BH flag set, the completion callback is invoked in BH work
* context (except for URBs submitted to the root hub which always complete in
* caller's context).
*
@@ -1724,7 +1724,7 @@ void usb_hcd_giveback_urb(struct usb_hcd *hcd, struct urb *urb, int status)
struct giveback_urb_bh *bh;
bool running;

- /* pass status to BH via unlinked */
+ /* pass status to the BH work via unlinked */
if (likely(!urb->unlinked))
urb->unlinked = status;

@@ -2611,7 +2611,7 @@ EXPORT_SYMBOL_GPL(__usb_create_hcd);
* @primary_hcd: a pointer to the usb_hcd structure that is sharing the
* PCI device. Only allocate certain resources for the primary HCD
*
- * Context: task context, might sleep.
+ * Context: process context, might sleep.
*
* Allocate a struct usb_hcd, with extra space at the end for the
* HC driver's private data. Initialize the generic members of the
@@ -2634,7 +2634,7 @@ EXPORT_SYMBOL_GPL(usb_create_shared_hcd);
* @dev: device for this HC, stored in hcd->self.controller
* @bus_name: value to store in hcd->self.bus_name
*
- * Context: task context, might sleep.
+ * Context: process context, might sleep.
*
* Allocate a struct usb_hcd, with extra space at the end for the
* HC driver's private data. Initialize the generic members of the
@@ -3001,7 +3001,7 @@ EXPORT_SYMBOL_GPL(usb_add_hcd);
* usb_remove_hcd - shutdown processing for generic HCDs
* @hcd: the usb_hcd structure to remove
*
- * Context: task context, might sleep.
+ * Context: process context, might sleep.
*
* Disconnects the root hub, then reverses the effects of usb_add_hcd(),
* invoking the HCD's stop() method.
diff --git a/drivers/usb/gadget/udc/fsl_qe_udc.c b/drivers/usb/gadget/udc/fsl_qe_udc.c
index 4e88681a79b6..ae29d946a972 100644
--- a/drivers/usb/gadget/udc/fsl_qe_udc.c
+++ b/drivers/usb/gadget/udc/fsl_qe_udc.c
@@ -35,6 +35,7 @@
#include <linux/usb/ch9.h>
#include <linux/usb/gadget.h>
#include <linux/usb/otg.h>
+#include <linux/workqueue.h>
#include <soc/fsl/qe/qe.h>
#include <asm/cpm.h>
#include <asm/dma.h>
@@ -930,9 +931,9 @@ static int qe_ep_rxframe_handle(struct qe_ep *ep)
return 0;
}

-static void ep_rx_tasklet(struct tasklet_struct *t)
+static void ep_rx_work(struct work_struct *t)
{
- struct qe_udc *udc = from_tasklet(udc, t, rx_tasklet);
+ struct qe_udc *udc = from_work(udc, t, rx_work);
struct qe_ep *ep;
struct qe_frame *pframe;
struct qe_bd __iomem *bd;
@@ -945,9 +946,9 @@ static void ep_rx_tasklet(struct tasklet_struct *t)
for (i = 1; i < USB_MAX_ENDPOINTS; i++) {
ep = &udc->eps[i];

- if (ep->dir == USB_DIR_IN || ep->enable_tasklet == 0) {
+ if (ep->dir == USB_DIR_IN || ep->enable_work == 0) {
dev_dbg(udc->dev,
- "This is a transmit ep or disable tasklet!\n");
+ "This is a transmit ep or disable work!\n");
continue;
}

@@ -1012,7 +1013,7 @@ static void ep_rx_tasklet(struct tasklet_struct *t)
if (ep->localnack)
ep_recycle_rxbds(ep);

- ep->enable_tasklet = 0;
+ ep->enable_work = 0;
} /* for i=1 */

spin_unlock_irqrestore(&udc->lock, flags);
@@ -1057,8 +1058,8 @@ static int qe_ep_rx(struct qe_ep *ep)
return 0;
}

- tasklet_schedule(&udc->rx_tasklet);
- ep->enable_tasklet = 1;
+ queue_work(system_bh_wq, &udc->rx_work);
+ ep->enable_work = 1;

return 0;
}
@@ -2559,7 +2560,7 @@ static int qe_udc_probe(struct platform_device *ofdev)
DMA_TO_DEVICE);
}

- tasklet_setup(&udc->rx_tasklet, ep_rx_tasklet);
+ INIT_WORK(&udc->rx_work, ep_rx_work);
/* request irq and disable DR */
udc->usb_irq = irq_of_parse_and_map(np, 0);
if (!udc->usb_irq) {
@@ -2636,7 +2637,7 @@ static void qe_udc_remove(struct platform_device *ofdev)
usb_del_gadget_udc(&udc->gadget);

udc->done = &done;
- tasklet_disable(&udc->rx_tasklet);
+ disable_work_sync(&udc->rx_work);

if (udc->nullmap) {
dma_unmap_single(udc->gadget.dev.parent,
@@ -2671,7 +2672,7 @@ static void qe_udc_remove(struct platform_device *ofdev)
free_irq(udc->usb_irq, udc);
irq_dispose_mapping(udc->usb_irq);

- tasklet_kill(&udc->rx_tasklet);
+ cancel_work_sync(&udc->rx_work);

iounmap(udc->usb_regs);

diff --git a/drivers/usb/gadget/udc/fsl_qe_udc.h b/drivers/usb/gadget/udc/fsl_qe_udc.h
index 53ca0ff7c2cb..1de87c318460 100644
--- a/drivers/usb/gadget/udc/fsl_qe_udc.h
+++ b/drivers/usb/gadget/udc/fsl_qe_udc.h
@@ -293,7 +293,7 @@ struct qe_ep {
u8 init;

u8 already_seen;
- u8 enable_tasklet;
+ u8 enable_work;
u8 setup_stage;
u32 last_io; /* timestamp */

@@ -353,7 +353,7 @@ struct qe_udc {
unsigned int usb_irq;
struct usb_ctlr __iomem *usb_regs;

- struct tasklet_struct rx_tasklet;
+ struct work_struct rx_work;

struct completion *done; /* to make sure release() is done */
};
diff --git a/drivers/usb/host/ehci-sched.c b/drivers/usb/host/ehci-sched.c
index 7e834587e7de..98823cf9dd0a 100644
--- a/drivers/usb/host/ehci-sched.c
+++ b/drivers/usb/host/ehci-sched.c
@@ -682,7 +682,7 @@ static void start_unlink_intr(struct ehci_hcd *ehci, struct ehci_qh *qh)

/*
* It is common only one intr URB is scheduled on one qh, and
- * given complete() is run in tasklet context, introduce a bit
+ * given complete() is run in work context, introduce a bit
* delay to avoid unlink qh too early.
*/
static void start_unlink_intr_wait(struct ehci_hcd *ehci,
diff --git a/drivers/usb/host/fhci-hcd.c b/drivers/usb/host/fhci-hcd.c
index 9a1b5224f239..5358bb688acb 100644
--- a/drivers/usb/host/fhci-hcd.c
+++ b/drivers/usb/host/fhci-hcd.c
@@ -211,8 +211,7 @@ static int fhci_mem_init(struct fhci_hcd *fhci)
INIT_LIST_HEAD(&fhci->empty_tds);

/* initialize work queue to handle done list */
- fhci_tasklet.data = (unsigned long)fhci;
- fhci->process_done_task = &fhci_tasklet;
+ INIT_WORK(&fhci->process_done_task, process_done_list);

for (i = 0; i < MAX_TDS; i++) {
struct td *td;
diff --git a/drivers/usb/host/fhci-sched.c b/drivers/usb/host/fhci-sched.c
index a45ede80edfc..9033cce28014 100644
--- a/drivers/usb/host/fhci-sched.c
+++ b/drivers/usb/host/fhci-sched.c
@@ -628,13 +628,13 @@ irqreturn_t fhci_irq(struct usb_hcd *hcd)
* is process_del_list(),which unlinks URBs by scanning EDs,instead of scanning
* the (re-reversed) done list as this does.
*/
-static void process_done_list(unsigned long data)
+static void process_done_list(struct work_struct *t)
{
struct urb *urb;
struct ed *ed;
struct td *td;
struct urb_priv *urb_priv;
- struct fhci_hcd *fhci = (struct fhci_hcd *)data;
+ struct fhci_hcd *fhci = from_work(fhci, t, process_done_task);

disable_irq(fhci->timer->irq);
disable_irq(fhci_to_hcd(fhci)->irq);
@@ -677,13 +677,13 @@ static void process_done_list(unsigned long data)
enable_irq(fhci_to_hcd(fhci)->irq);
}

-DECLARE_TASKLET_OLD(fhci_tasklet, process_done_list);
+DECLARE_WORK(fhci_work, process_done_list);

/* transfer complted callback */
u32 fhci_transfer_confirm_callback(struct fhci_hcd *fhci)
{
- if (!fhci->process_done_task->state)
- tasklet_schedule(fhci->process_done_task);
+ /* queue_work() ignores the request if the work is already pending */
+ queue_work(system_bh_wq, &fhci->process_done_task);
return 0;
}

diff --git a/drivers/usb/host/fhci.h b/drivers/usb/host/fhci.h
index 1f57b0989485..7cd613762249 100644
--- a/drivers/usb/host/fhci.h
+++ b/drivers/usb/host/fhci.h
@@ -24,6 +24,7 @@
#include <linux/usb.h>
#include <linux/usb/hcd.h>
#include <linux/gpio/consumer.h>
+#include <linux/workqueue.h>
#include <soc/fsl/qe/qe.h>
#include <soc/fsl/qe/immap_qe.h>

@@ -254,7 +255,7 @@ struct fhci_hcd {
struct virtual_root_hub *vroot_hub; /* the virtual root hub */
int active_urbs;
struct fhci_controller_list *hc_list;
- struct tasklet_struct *process_done_task; /* tasklet for done list */
+ struct work_struct process_done_task; /* work for done list */

struct list_head empty_eds;
struct list_head empty_tds;
@@ -549,7 +550,7 @@ void fhci_init_ep_registers(struct fhci_usb *usb,
void fhci_ep0_free(struct fhci_usb *usb);

/* fhci-sched.c */
-extern struct tasklet_struct fhci_tasklet;
+extern struct work_struct fhci_work;
void fhci_transaction_confirm(struct fhci_usb *usb, struct packet *pkt);
void fhci_flush_all_transmissions(struct fhci_usb *usb);
void fhci_schedule_transactions(struct fhci_usb *usb);
diff --git a/drivers/usb/host/xhci-dbgcap.h b/drivers/usb/host/xhci-dbgcap.h
index 92661b555c2a..5660f0cd6d73 100644
--- a/drivers/usb/host/xhci-dbgcap.h
+++ b/drivers/usb/host/xhci-dbgcap.h
@@ -11,6 +11,7 @@

#include <linux/tty.h>
#include <linux/kfifo.h>
+#include <linux/workqueue.h>

struct dbc_regs {
__le32 capability;
@@ -107,7 +108,7 @@ struct dbc_port {
struct list_head read_pool;
struct list_head read_queue;
unsigned int n_read;
- struct tasklet_struct push;
+ struct work_struct push;

struct list_head write_pool;
struct kfifo write_fifo;
diff --git a/drivers/usb/host/xhci-dbgtty.c b/drivers/usb/host/xhci-dbgtty.c
index b74e98e94393..dec2280b4ae9 100644
--- a/drivers/usb/host/xhci-dbgtty.c
+++ b/drivers/usb/host/xhci-dbgtty.c
@@ -11,6 +11,7 @@
#include <linux/tty.h>
#include <linux/tty_flip.h>
#include <linux/idr.h>
+#include <linux/workqueue.h>

#include "xhci.h"
#include "xhci-dbgcap.h"
@@ -108,7 +109,7 @@ dbc_read_complete(struct xhci_dbc *dbc, struct dbc_request *req)

spin_lock_irqsave(&port->port_lock, flags);
list_add_tail(&req->list_pool, &port->read_queue);
- tasklet_schedule(&port->push);
+ queue_work(system_bh_wq, &port->push);
spin_unlock_irqrestore(&port->port_lock, flags);
}

@@ -278,7 +279,7 @@ static void dbc_tty_unthrottle(struct tty_struct *tty)
unsigned long flags;

spin_lock_irqsave(&port->port_lock, flags);
- tasklet_schedule(&port->push);
+ queue_work(system_bh_wq, &port->push);
spin_unlock_irqrestore(&port->port_lock, flags);
}

@@ -294,14 +295,14 @@ static const struct tty_operations dbc_tty_ops = {
.unthrottle = dbc_tty_unthrottle,
};

-static void dbc_rx_push(struct tasklet_struct *t)
+static void dbc_rx_push(struct work_struct *t)
{
struct dbc_request *req;
struct tty_struct *tty;
unsigned long flags;
bool do_push = false;
bool disconnect = false;
- struct dbc_port *port = from_tasklet(port, t, push);
+ struct dbc_port *port = from_work(port, t, push);
struct list_head *queue = &port->read_queue;

spin_lock_irqsave(&port->port_lock, flags);
@@ -355,7 +356,7 @@ static void dbc_rx_push(struct tasklet_struct *t)
if (!list_empty(queue) && tty) {
if (!tty_throttled(tty)) {
if (do_push)
- tasklet_schedule(&port->push);
+ queue_work(system_bh_wq, &port->push);
else
pr_warn("ttyDBC0: RX not scheduled?\n");
}
@@ -388,7 +389,7 @@ xhci_dbc_tty_init_port(struct xhci_dbc *dbc, struct dbc_port *port)
{
tty_port_init(&port->port);
spin_lock_init(&port->port_lock);
- tasklet_setup(&port->push, dbc_rx_push);
+ INIT_WORK(&port->push, dbc_rx_push);
INIT_LIST_HEAD(&port->read_pool);
INIT_LIST_HEAD(&port->read_queue);
INIT_LIST_HEAD(&port->write_pool);
@@ -400,7 +401,7 @@ xhci_dbc_tty_init_port(struct xhci_dbc *dbc, struct dbc_port *port)
static void
xhci_dbc_tty_exit_port(struct dbc_port *port)
{
- tasklet_kill(&port->push);
+ cancel_work_sync(&port->push);
tty_port_destroy(&port->port);
}

diff --git a/include/linux/usb/cdc_ncm.h b/include/linux/usb/cdc_ncm.h
index 2d207cb4837d..8775580852f9 100644
--- a/include/linux/usb/cdc_ncm.h
+++ b/include/linux/usb/cdc_ncm.h
@@ -96,7 +96,7 @@
struct cdc_ncm_ctx {
struct usb_cdc_ncm_ntb_parameters ncm_parm;
struct hrtimer tx_timer;
- struct tasklet_struct bh;
+ struct work_struct bh;

struct usbnet *dev;

diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h
index 9f08a584d707..522c533a966b 100644
--- a/include/linux/usb/usbnet.h
+++ b/include/linux/usb/usbnet.h
@@ -58,7 +58,7 @@ struct usbnet {
unsigned interrupt_count;
struct mutex interrupt_mutex;
struct usb_anchor deferred;
- struct tasklet_struct bh;
+ struct work_struct bh;

struct work_struct kevent;
unsigned long flags;
--
2.17.1


2024-03-27 16:08:07

by Allen Pais

[permalink] [raw]
Subject: [PATCH 6/9] ipmi: Convert from tasklet to BH workqueue

The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added. A BH workqueue
behaves similarly to regular workqueues except that the queued work items
are executed in the BH context.

This patch converts drivers/char/ipmi/* from tasklet to BH workqueue.
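
The one spot that is not a straight tasklet_schedule() replacement is
the run_to_completion path, where the handler is now called directly
with its own work_struct, mirroring the old direct tasklet-function
call. A simplified sketch of that pattern (message queueing and locking
elided, see the hunks below for the real thing):

  static void smi_recv_work(struct work_struct *t)
  {
          struct ipmi_smi *intf = from_work(intf, t, recv_work);

          /* drain intf->waiting_rcv_msgs and deliver to users ... */
  }

  void ipmi_smi_msg_received(struct ipmi_smi *intf, struct ipmi_smi_msg *msg)
  {
          /* queueing of msg elided */
          if (intf->run_to_completion)
                  smi_recv_work(&intf->recv_work);        /* direct, synchronous call */
          else
                  queue_work(system_bh_wq, &intf->recv_work);
  }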

Based on the work done by Tejun Heo <[email protected]>
Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10

Signed-off-by: Allen Pais <[email protected]>
---
drivers/char/ipmi/ipmi_msghandler.c | 30 ++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
index b0eedc4595b3..fce2a2dbdc82 100644
--- a/drivers/char/ipmi/ipmi_msghandler.c
+++ b/drivers/char/ipmi/ipmi_msghandler.c
@@ -36,12 +36,13 @@
#include <linux/nospec.h>
#include <linux/vmalloc.h>
#include <linux/delay.h>
+#include <linux/workqueue.h>

#define IPMI_DRIVER_VERSION "39.2"

static struct ipmi_recv_msg *ipmi_alloc_recv_msg(void);
static int ipmi_init_msghandler(void);
-static void smi_recv_tasklet(struct tasklet_struct *t);
+static void smi_recv_work(struct work_struct *t);
static void handle_new_recv_msgs(struct ipmi_smi *intf);
static void need_waiter(struct ipmi_smi *intf);
static int handle_one_recv_msg(struct ipmi_smi *intf,
@@ -498,13 +499,13 @@ struct ipmi_smi {
/*
* Messages queued for delivery. If delivery fails (out of memory
* for instance), They will stay in here to be processed later in a
- * periodic timer interrupt. The tasklet is for handling received
+ * periodic timer interrupt. The work is for handling received
* messages directly from the handler.
*/
spinlock_t waiting_rcv_msgs_lock;
struct list_head waiting_rcv_msgs;
atomic_t watchdog_pretimeouts_to_deliver;
- struct tasklet_struct recv_tasklet;
+ struct work_struct recv_work;

spinlock_t xmit_msgs_lock;
struct list_head xmit_msgs;
@@ -704,7 +705,7 @@ static void clean_up_interface_data(struct ipmi_smi *intf)
struct cmd_rcvr *rcvr, *rcvr2;
struct list_head list;

- tasklet_kill(&intf->recv_tasklet);
+ cancel_work_sync(&intf->recv_work);

free_smi_msg_list(&intf->waiting_rcv_msgs);
free_recv_msg_list(&intf->waiting_events);
@@ -1319,7 +1320,7 @@ static void free_user(struct kref *ref)
{
struct ipmi_user *user = container_of(ref, struct ipmi_user, refcount);

- /* SRCU cleanup must happen in task context. */
+ /* SRCU cleanup must happen in process context. */
queue_work(remove_work_wq, &user->remove_work);
}

@@ -3605,8 +3606,7 @@ int ipmi_add_smi(struct module *owner,
intf->curr_seq = 0;
spin_lock_init(&intf->waiting_rcv_msgs_lock);
INIT_LIST_HEAD(&intf->waiting_rcv_msgs);
- tasklet_setup(&intf->recv_tasklet,
- smi_recv_tasklet);
+ INIT_WORK(&intf->recv_work, smi_recv_work);
atomic_set(&intf->watchdog_pretimeouts_to_deliver, 0);
spin_lock_init(&intf->xmit_msgs_lock);
INIT_LIST_HEAD(&intf->xmit_msgs);
@@ -4779,7 +4779,7 @@ static void handle_new_recv_msgs(struct ipmi_smi *intf)
* To preserve message order, quit if we
* can't handle a message. Add the message
* back at the head, this is safe because this
- * tasklet is the only thing that pulls the
+ * work is the only thing that pulls the
* messages.
*/
list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
@@ -4812,10 +4812,10 @@ static void handle_new_recv_msgs(struct ipmi_smi *intf)
}
}

-static void smi_recv_tasklet(struct tasklet_struct *t)
+static void smi_recv_work(struct work_struct *t)
{
unsigned long flags = 0; /* keep us warning-free. */
- struct ipmi_smi *intf = from_tasklet(intf, t, recv_tasklet);
+ struct ipmi_smi *intf = from_work(intf, t, recv_work);
int run_to_completion = intf->run_to_completion;
struct ipmi_smi_msg *newmsg = NULL;

@@ -4866,7 +4866,7 @@ void ipmi_smi_msg_received(struct ipmi_smi *intf,

/*
* To preserve message order, we keep a queue and deliver from
- * a tasklet.
+ * a work item.
*/
if (!run_to_completion)
spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);
@@ -4887,9 +4887,9 @@ void ipmi_smi_msg_received(struct ipmi_smi *intf,
spin_unlock_irqrestore(&intf->xmit_msgs_lock, flags);

if (run_to_completion)
- smi_recv_tasklet(&intf->recv_tasklet);
+ smi_recv_work(&intf->recv_work);
else
- tasklet_schedule(&intf->recv_tasklet);
+ queue_work(system_bh_wq, &intf->recv_work);
}
EXPORT_SYMBOL(ipmi_smi_msg_received);

@@ -4899,7 +4899,7 @@ void ipmi_smi_watchdog_pretimeout(struct ipmi_smi *intf)
return;

atomic_set(&intf->watchdog_pretimeouts_to_deliver, 1);
- tasklet_schedule(&intf->recv_tasklet);
+ queue_work(system_bh_wq, &intf->recv_work);
}
EXPORT_SYMBOL(ipmi_smi_watchdog_pretimeout);

@@ -5068,7 +5068,7 @@ static bool ipmi_timeout_handler(struct ipmi_smi *intf,
flags);
}

- tasklet_schedule(&intf->recv_tasklet);
+ queue_work(system_bh_wq, &intf->recv_work);

return need_timer;
}
--
2.17.1


2024-03-27 16:09:33

by Allen Pais

[permalink] [raw]
Subject: [PATCH 7/9] s390: Convert from tasklet to BH workqueue

The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added. A BH workqueue
behaves similarly to regular workqueues except that the queued work items
are executed in the BH context.

This patch converts drivers/s390/* from tasklet to BH workqueue.
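
Two conversion details worth a closer look during review:
tasklet_hi_schedule() callers are mapped onto system_bh_highpri_wq, and
the dasd "schedule once" guard keeps its atomic flag, only renamed.
Simplified from the dasd hunks below:

  void dasd_schedule_device_bh(struct dasd_device *device)
  {
          /* protect against rescheduling while a run is already pending */
          if (atomic_cmpxchg(&device->work_scheduled, 0, 1) != 0)
                  return;
          dasd_get_device(device);
          /* was: tasklet_hi_schedule(&device->tasklet) */
          queue_work(system_bh_highpri_wq, &device->bh);
  }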

Based on the work done by Tejun Heo <[email protected]>
Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10

Note: Not tested. Please test/review.

Signed-off-by: Allen Pais <[email protected]>
---
drivers/s390/block/dasd.c | 42 ++++++++++++------------
drivers/s390/block/dasd_int.h | 10 +++---
drivers/s390/char/con3270.c | 27 ++++++++--------
drivers/s390/crypto/ap_bus.c | 24 +++++++-------
drivers/s390/crypto/ap_bus.h | 2 +-
drivers/s390/crypto/zcrypt_msgtype50.c | 2 +-
drivers/s390/crypto/zcrypt_msgtype6.c | 4 +--
drivers/s390/net/ctcm_fsms.c | 4 +--
drivers/s390/net/ctcm_main.c | 15 ++++-----
drivers/s390/net/ctcm_main.h | 5 +--
drivers/s390/net/ctcm_mpc.c | 12 +++----
drivers/s390/net/ctcm_mpc.h | 7 ++--
drivers/s390/net/lcs.c | 26 +++++++--------
drivers/s390/net/lcs.h | 2 +-
drivers/s390/net/qeth_core_main.c | 2 +-
drivers/s390/scsi/zfcp_qdio.c | 45 +++++++++++++-------------
drivers/s390/scsi/zfcp_qdio.h | 9 +++---
17 files changed, 117 insertions(+), 121 deletions(-)

diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
index 0a97cfedd706..c6f9910f0a98 100644
--- a/drivers/s390/block/dasd.c
+++ b/drivers/s390/block/dasd.c
@@ -54,8 +54,8 @@ MODULE_LICENSE("GPL");
* SECTION: prototypes for static functions of dasd.c
*/
static int dasd_flush_block_queue(struct dasd_block *);
-static void dasd_device_tasklet(unsigned long);
-static void dasd_block_tasklet(unsigned long);
+static void dasd_device_work(struct work_struct *);
+static void dasd_block_work(struct work_struct *);
static void do_kick_device(struct work_struct *);
static void do_reload_device(struct work_struct *);
static void do_requeue_requests(struct work_struct *);
@@ -114,9 +114,8 @@ struct dasd_device *dasd_alloc_device(void)
dasd_init_chunklist(&device->erp_chunks, device->erp_mem, PAGE_SIZE);
dasd_init_chunklist(&device->ese_chunks, device->ese_mem, PAGE_SIZE * 2);
spin_lock_init(&device->mem_lock);
- atomic_set(&device->tasklet_scheduled, 0);
- tasklet_init(&device->tasklet, dasd_device_tasklet,
- (unsigned long) device);
+ atomic_set(&device->work_scheduled, 0);
+ INIT_WORK(&device->bh, dasd_device_work);
INIT_LIST_HEAD(&device->ccw_queue);
timer_setup(&device->timer, dasd_device_timeout, 0);
INIT_WORK(&device->kick_work, do_kick_device);
@@ -154,9 +153,8 @@ struct dasd_block *dasd_alloc_block(void)
/* open_count = 0 means device online but not in use */
atomic_set(&block->open_count, -1);

- atomic_set(&block->tasklet_scheduled, 0);
- tasklet_init(&block->tasklet, dasd_block_tasklet,
- (unsigned long) block);
+ atomic_set(&block->work_scheduled, 0);
+ INIT_WORK(&block->bh, dasd_block_work);
INIT_LIST_HEAD(&block->ccw_queue);
spin_lock_init(&block->queue_lock);
INIT_LIST_HEAD(&block->format_list);
@@ -2148,12 +2146,12 @@ EXPORT_SYMBOL_GPL(dasd_flush_device_queue);
/*
* Acquire the device lock and process queues for the device.
*/
-static void dasd_device_tasklet(unsigned long data)
+static void dasd_device_work(struct work_struct *t)
{
- struct dasd_device *device = (struct dasd_device *) data;
+ struct dasd_device *device = from_work(device, t, bh);
struct list_head final_queue;

- atomic_set (&device->tasklet_scheduled, 0);
+ atomic_set (&device->work_scheduled, 0);
INIT_LIST_HEAD(&final_queue);
spin_lock_irq(get_ccwdev_lock(device->cdev));
/* Check expire time of first request on the ccw queue. */
@@ -2174,15 +2172,15 @@ static void dasd_device_tasklet(unsigned long data)
}

/*
- * Schedules a call to dasd_tasklet over the device tasklet.
+ * Schedules a run of dasd_device_work() on the BH workqueue.
*/
void dasd_schedule_device_bh(struct dasd_device *device)
{
/* Protect against rescheduling. */
- if (atomic_cmpxchg (&device->tasklet_scheduled, 0, 1) != 0)
+ if (atomic_cmpxchg (&device->work_scheduled, 0, 1) != 0)
return;
dasd_get_device(device);
- tasklet_hi_schedule(&device->tasklet);
+ queue_work(system_bh_highpri_wq, &device->bh);
}
EXPORT_SYMBOL(dasd_schedule_device_bh);

@@ -2595,7 +2593,7 @@ int dasd_sleep_on_immediatly(struct dasd_ccw_req *cqr)
else
rc = -EIO;

- /* kick tasklets */
+ /* kick the BH work items */
dasd_schedule_device_bh(device);
if (device->block)
dasd_schedule_block_bh(device->block);
@@ -2891,15 +2889,15 @@ static void __dasd_block_start_head(struct dasd_block *block)
* block layer request queue, creates ccw requests, enqueues them on
* a dasd_device and processes ccw requests that have been returned.
*/
-static void dasd_block_tasklet(unsigned long data)
+static void dasd_block_work(struct work_struct *t)
{
- struct dasd_block *block = (struct dasd_block *) data;
+ struct dasd_block *block = from_work(block, t, bh);
struct list_head final_queue;
struct list_head *l, *n;
struct dasd_ccw_req *cqr;
struct dasd_queue *dq;

- atomic_set(&block->tasklet_scheduled, 0);
+ atomic_set(&block->work_scheduled, 0);
INIT_LIST_HEAD(&final_queue);
spin_lock_irq(&block->queue_lock);
/* Finish off requests on ccw queue */
@@ -2970,7 +2968,7 @@ static int _dasd_requests_to_flushqueue(struct dasd_block *block,
if (rc < 0)
break;
/* Rechain request (including erp chain) so it won't be
- * touched by the dasd_block_tasklet anymore.
+ * touched by the dasd_block_work anymore.
* Replace the callback so we notice when the request
* is returned from the dasd_device layer.
*/
@@ -3025,16 +3023,16 @@ static int dasd_flush_block_queue(struct dasd_block *block)
}

/*
- * Schedules a call to dasd_tasklet over the device tasklet.
+ * Schedules a run of dasd_block_work() on the BH workqueue.
*/
void dasd_schedule_block_bh(struct dasd_block *block)
{
/* Protect against rescheduling. */
- if (atomic_cmpxchg(&block->tasklet_scheduled, 0, 1) != 0)
+ if (atomic_cmpxchg(&block->work_scheduled, 0, 1) != 0)
return;
/* life cycle of block is bound to it's base device */
dasd_get_device(block->base);
- tasklet_hi_schedule(&block->tasklet);
+ queue_work(system_bh_highpri_wq, &block->bh);
}
EXPORT_SYMBOL(dasd_schedule_block_bh);

diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h
index e5f40536b425..abe4d43f474e 100644
--- a/drivers/s390/block/dasd_int.h
+++ b/drivers/s390/block/dasd_int.h
@@ -28,7 +28,7 @@
* known -> basic: request irq line for the device.
* basic -> ready: do the initial analysis, e.g. format detection,
* do block device setup and detect partitions.
- * ready -> online: schedule the device tasklet.
+ * ready -> online: schedule the device work.
* Things to do for shutdown state transitions:
* online -> ready: just set the new device state.
* ready -> basic: flush requests from the block device layer, clear
@@ -579,8 +579,8 @@ struct dasd_device {
struct list_head erp_chunks;
struct list_head ese_chunks;

- atomic_t tasklet_scheduled;
- struct tasklet_struct tasklet;
+ atomic_t work_scheduled;
+ struct work_struct bh;
struct work_struct kick_work;
struct work_struct reload_device;
struct work_struct kick_validate;
@@ -630,8 +630,8 @@ struct dasd_block {
struct list_head ccw_queue;
spinlock_t queue_lock;

- atomic_t tasklet_scheduled;
- struct tasklet_struct tasklet;
+ atomic_t work_scheduled;
+ struct work_struct bh;
struct timer_list timer;

struct dentry *debugfs_dentry;
diff --git a/drivers/s390/char/con3270.c b/drivers/s390/char/con3270.c
index 251d2a1c3eef..993275e9b2f4 100644
--- a/drivers/s390/char/con3270.c
+++ b/drivers/s390/char/con3270.c
@@ -28,6 +28,7 @@
#include <asm/ebcdic.h>
#include <asm/cpcmd.h>
#include <linux/uaccess.h>
+#include <linux/workqueue.h>

#include "raw3270.h"
#include "keyboard.h"
@@ -107,8 +108,8 @@ struct tty3270 {
struct raw3270_request *readpartreq;
unsigned char inattr; /* Visible/invisible input. */
int throttle, attn; /* tty throttle/unthrottle. */
- struct tasklet_struct readlet; /* Tasklet to issue read request. */
- struct tasklet_struct hanglet; /* Tasklet to hang up the tty. */
+ struct work_struct read_work; /* Work to issue read request. */
+ struct work_struct hang_work; /* Work to hang up the tty. */
struct kbd_data *kbd; /* key_maps stuff. */

/* Escape sequence parsing. */
@@ -667,9 +668,9 @@ static void tty3270_scroll_backward(struct kbd_data *kbd)
/*
* Pass input line to tty.
*/
-static void tty3270_read_tasklet(unsigned long data)
+static void tty3270_read_work(struct work_struct *t)
{
- struct raw3270_request *rrq = (struct raw3270_request *)data;
+ struct raw3270_request *rrq = container_of(t, struct tty3270, read_work)->read;
static char kreset_data = TW_KR;
struct tty3270 *tp = container_of(rrq->view, struct tty3270, view);
char *input;
@@ -734,8 +735,8 @@ static void tty3270_read_callback(struct raw3270_request *rq, void *data)
struct tty3270 *tp = container_of(rq->view, struct tty3270, view);

raw3270_get_view(rq->view);
- /* Schedule tasklet to pass input to tty. */
- tasklet_schedule(&tp->readlet);
+ /* Schedule work to pass input to tty. */
+ queue_work(system_bh_wq, &tp->read_work);
}

/*
@@ -768,9 +769,9 @@ static void tty3270_issue_read(struct tty3270 *tp, int lock)
/*
* Hang up the tty
*/
-static void tty3270_hangup_tasklet(unsigned long data)
+static void tty3270_hangup_work(struct work_struct *t)
{
- struct tty3270 *tp = (struct tty3270 *)data;
+ struct tty3270 *tp = from_work(tp, t, hang_work);

tty_port_tty_hangup(&tp->port, true);
raw3270_put_view(&tp->view);
@@ -797,7 +798,7 @@ static void tty3270_deactivate(struct raw3270_view *view)

static void tty3270_irq(struct tty3270 *tp, struct raw3270_request *rq, struct irb *irb)
{
- /* Handle ATTN. Schedule tasklet to read aid. */
+ /* Handle ATTN. Schedule work to read aid. */
if (irb->scsw.cmd.dstat & DEV_STAT_ATTENTION) {
if (!tp->throttle)
tty3270_issue_read(tp, 0);
@@ -809,7 +810,7 @@ static void tty3270_irq(struct tty3270 *tp, struct raw3270_request *rq, struct i
if (irb->scsw.cmd.dstat & DEV_STAT_UNIT_CHECK) {
rq->rc = -EIO;
raw3270_get_view(&tp->view);
- tasklet_schedule(&tp->hanglet);
+ queue_work(system_bh_wq, &tp->hang_work);
} else {
/* Normal end. Copy residual count. */
rq->rescnt = irb->scsw.cmd.count;
@@ -850,10 +851,8 @@ static struct tty3270 *tty3270_alloc_view(void)

tty_port_init(&tp->port);
timer_setup(&tp->timer, tty3270_update, 0);
- tasklet_init(&tp->readlet, tty3270_read_tasklet,
- (unsigned long)tp->read);
- tasklet_init(&tp->hanglet, tty3270_hangup_tasklet,
- (unsigned long)tp);
+ INIT_WORK(&tp->read_work, tty3270_read_work);
+ INIT_WORK(&tp->hang_work, tty3270_hangup_work);
return tp;

out_readpartreq:
diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c
index cce0bafd4c92..5136ecd965ae 100644
--- a/drivers/s390/crypto/ap_bus.c
+++ b/drivers/s390/crypto/ap_bus.c
@@ -111,10 +111,10 @@ static void ap_scan_bus_wq_callback(struct work_struct *);
static DECLARE_WORK(ap_scan_bus_work, ap_scan_bus_wq_callback);

/*
- * Tasklet & timer for AP request polling and interrupts
+ * Work & timer for AP request polling and interrupts
*/
-static void ap_tasklet_fn(unsigned long);
-static DECLARE_TASKLET_OLD(ap_tasklet, ap_tasklet_fn);
+static void ap_work_fn(struct work_struct *);
+static DECLARE_WORK(ap_work, ap_work_fn);
static DECLARE_WAIT_QUEUE_HEAD(ap_poll_wait);
static struct task_struct *ap_poll_kthread;
static DEFINE_MUTEX(ap_poll_thread_mutex);
@@ -450,16 +450,16 @@ void ap_request_timeout(struct timer_list *t)
* ap_poll_timeout(): AP receive polling for finished AP requests.
* @unused: Unused pointer.
*
- * Schedules the AP tasklet using a high resolution timer.
+ * Schedules the AP work using a high resolution timer.
*/
static enum hrtimer_restart ap_poll_timeout(struct hrtimer *unused)
{
- tasklet_schedule(&ap_tasklet);
+ queue_work(system_bh_wq, &ap_work);
return HRTIMER_NORESTART;
}

/**
- * ap_interrupt_handler() - Schedule ap_tasklet on interrupt
+ * ap_interrupt_handler() - Schedule ap_work on interrupt
* @airq: pointer to adapter interrupt descriptor
* @tpi_info: ignored
*/
@@ -467,23 +467,23 @@ static void ap_interrupt_handler(struct airq_struct *airq,
struct tpi_info *tpi_info)
{
inc_irq_stat(IRQIO_APB);
- tasklet_schedule(&ap_tasklet);
+ queue_work(system_bh_wq, &ap_work);
}

/**
- * ap_tasklet_fn(): Tasklet to poll all AP devices.
- * @dummy: Unused variable
+ * ap_work_fn(): Work to poll all AP devices.
+ * @t: pointer to work_struct
*
* Poll all AP devices on the bus.
*/
-static void ap_tasklet_fn(unsigned long dummy)
+static void ap_work_fn(struct work_struct *t)
{
int bkt;
struct ap_queue *aq;
enum ap_sm_wait wait = AP_SM_WAIT_NONE;

/* Reset the indicator if interrupts are used. Thus new interrupts can
- * be received. Doing it in the beginning of the tasklet is therefore
+ * be received. Doing it in the beginning of the work is therefore
* important that no requests on any AP get lost.
*/
if (ap_irq_flag)
@@ -546,7 +546,7 @@ static int ap_poll_thread(void *data)
try_to_freeze();
continue;
}
- ap_tasklet_fn(0);
+ ap_work_fn(&ap_work);
}

return 0;
diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h
index 59c7ed49aa02..7daea5c536c9 100644
--- a/drivers/s390/crypto/ap_bus.h
+++ b/drivers/s390/crypto/ap_bus.h
@@ -223,7 +223,7 @@ struct ap_message {
u16 flags; /* Flags, see AP_MSG_FLAG_xxx */
int rc; /* Return code for this message */
void *private; /* ap driver private pointer. */
- /* receive is called from tasklet context */
+ /* receive is called from work context */
void (*receive)(struct ap_queue *, struct ap_message *,
struct ap_message *);
};
diff --git a/drivers/s390/crypto/zcrypt_msgtype50.c b/drivers/s390/crypto/zcrypt_msgtype50.c
index 3b39cb8f926d..c7bf389f2938 100644
--- a/drivers/s390/crypto/zcrypt_msgtype50.c
+++ b/drivers/s390/crypto/zcrypt_msgtype50.c
@@ -403,7 +403,7 @@ static int convert_response(struct zcrypt_queue *zq,
/*
* This function is called from the AP bus code after a crypto request
* "msg" has finished with the reply message "reply".
- * It is called from tasklet context.
+ * It is called from work context.
* @aq: pointer to the AP device
* @msg: pointer to the AP message
* @reply: pointer to the AP reply message
diff --git a/drivers/s390/crypto/zcrypt_msgtype6.c b/drivers/s390/crypto/zcrypt_msgtype6.c
index 215f257d2360..b62e2c9cee58 100644
--- a/drivers/s390/crypto/zcrypt_msgtype6.c
+++ b/drivers/s390/crypto/zcrypt_msgtype6.c
@@ -847,7 +847,7 @@ static int convert_response_rng(struct zcrypt_queue *zq,
/*
* This function is called from the AP bus code after a crypto request
* "msg" has finished with the reply message "reply".
- * It is called from tasklet context.
+ * It is called from work context.
* @aq: pointer to the AP queue
* @msg: pointer to the AP message
* @reply: pointer to the AP reply message
@@ -913,7 +913,7 @@ static void zcrypt_msgtype6_receive(struct ap_queue *aq,
/*
* This function is called from the AP bus code after a crypto request
* "msg" has finished with the reply message "reply".
- * It is called from tasklet context.
+ * It is called from work context.
* @aq: pointer to the AP queue
* @msg: pointer to the AP message
* @reply: pointer to the AP reply message
diff --git a/drivers/s390/net/ctcm_fsms.c b/drivers/s390/net/ctcm_fsms.c
index 9678c6a2cda7..3b02a41c4386 100644
--- a/drivers/s390/net/ctcm_fsms.c
+++ b/drivers/s390/net/ctcm_fsms.c
@@ -1420,12 +1420,12 @@ static void ctcmpc_chx_rx(fsm_instance *fi, int event, void *arg)
case MPCG_STATE_READY:
skb_put_data(new_skb, skb->data, block_len);
skb_queue_tail(&ch->io_queue, new_skb);
- tasklet_schedule(&ch->ch_tasklet);
+ queue_work(system_bh_wq, &ch->ch_work);
break;
default:
skb_put_data(new_skb, skb->data, len);
skb_queue_tail(&ch->io_queue, new_skb);
- tasklet_hi_schedule(&ch->ch_tasklet);
+ queue_work(system_bh_highpri_wq, &ch->ch_work);
break;
}
}
diff --git a/drivers/s390/net/ctcm_main.c b/drivers/s390/net/ctcm_main.c
index 878fe3ce53ad..c504db179982 100644
--- a/drivers/s390/net/ctcm_main.c
+++ b/drivers/s390/net/ctcm_main.c
@@ -223,8 +223,8 @@ static void channel_remove(struct channel *ch)
dev_kfree_skb_any(ch->trans_skb);
}
if (IS_MPC(ch)) {
- tasklet_kill(&ch->ch_tasklet);
- tasklet_kill(&ch->ch_disc_tasklet);
+ cancel_work_sync(&ch->ch_work);
+ cancel_work_sync(&ch->ch_disc_work);
kfree(ch->discontact_th);
}
kfree(ch->ccw);
@@ -1027,7 +1027,7 @@ static void ctcm_free_netdevice(struct net_device *dev)
kfree_fsm(grp->fsm);
dev_kfree_skb(grp->xid_skb);
dev_kfree_skb(grp->rcvd_xid_skb);
- tasklet_kill(&grp->mpc_tasklet2);
+ cancel_work_sync(&grp->mpc_work2);
kfree(grp);
priv->mpcg = NULL;
}
@@ -1118,8 +1118,7 @@ static struct net_device *ctcm_init_netdevice(struct ctcm_priv *priv)
free_netdev(dev);
return NULL;
}
- tasklet_init(&grp->mpc_tasklet2,
- mpc_group_ready, (unsigned long)dev);
+ INIT_WORK(&grp->mpc_work2, mpc_group_ready);
dev->mtu = MPC_BUFSIZE_DEFAULT -
TH_HEADER_LENGTH - PDU_HEADER_LENGTH;

@@ -1319,10 +1318,8 @@ static int add_channel(struct ccw_device *cdev, enum ctcm_channel_types type,
goto nomem_return;

ch->discontact_th->th_blk_flag = TH_DISCONTACT;
- tasklet_init(&ch->ch_disc_tasklet,
- mpc_action_send_discontact, (unsigned long)ch);
-
- tasklet_init(&ch->ch_tasklet, ctcmpc_bh, (unsigned long)ch);
+ INIT_WORK(&ch->ch_disc_work, mpc_action_send_discontact);
+ INIT_WORK(&ch->ch_work, ctcmpc_bh);
ch->max_bufsize = (MPC_BUFSIZE_DEFAULT - 35);
ccw_num = 17;
} else
diff --git a/drivers/s390/net/ctcm_main.h b/drivers/s390/net/ctcm_main.h
index 25164e8bf13d..1143c037a7f7 100644
--- a/drivers/s390/net/ctcm_main.h
+++ b/drivers/s390/net/ctcm_main.h
@@ -13,6 +13,7 @@

#include <linux/skbuff.h>
#include <linux/netdevice.h>
+#include <linux/workqueue.h>

#include "fsm.h"
#include "ctcm_dbug.h"
@@ -154,7 +155,7 @@ struct channel {
int max_bufsize;
struct sk_buff *trans_skb; /* transmit/receive buffer */
struct sk_buff_head io_queue; /* universal I/O queue */
- struct tasklet_struct ch_tasklet; /* MPC ONLY */
+ struct work_struct ch_work; /* MPC ONLY */
/*
* TX queue for collecting skb's during busy.
*/
@@ -188,7 +189,7 @@ struct channel {
fsm_timer sweep_timer;
struct sk_buff_head sweep_queue;
struct th_header *discontact_th;
- struct tasklet_struct ch_disc_tasklet;
+ struct work_struct ch_disc_work;
/* MPC ONLY section end */

int retry; /* retry counter for misc. operations */
diff --git a/drivers/s390/net/ctcm_mpc.c b/drivers/s390/net/ctcm_mpc.c
index 9e580ef69bda..0b7ed15ce29d 100644
--- a/drivers/s390/net/ctcm_mpc.c
+++ b/drivers/s390/net/ctcm_mpc.c
@@ -588,7 +588,7 @@ void ctc_mpc_flow_control(int port_num, int flowc)
fsm_newstate(grp->fsm, MPCG_STATE_READY);
/* ensure any data that has accumulated */
/* on the io_queue will now be sen t */
- tasklet_schedule(&rch->ch_tasklet);
+ queue_work(system_bh_wq, &rch->ch_work);
}
/* possible race condition */
if (mpcg_state == MPCG_STATE_READY) {
@@ -847,7 +847,7 @@ static void mpc_action_go_ready(fsm_instance *fsm, int event, void *arg)
grp->out_of_sequence = 0;
grp->estconn_called = 0;

- tasklet_hi_schedule(&grp->mpc_tasklet2);
+ queue_work(system_bh_highpri_wq, &grp->mpc_work2);

return;
}
@@ -1213,16 +1213,16 @@ static void ctcmpc_unpack_skb(struct channel *ch, struct sk_buff *pskb)
}

/*
- * tasklet helper for mpc's skb unpacking.
+ * work helper for mpc's skb unpacking.
*
* ch The channel to work on.
* Allow flow control back pressure to occur here.
* Throttling back channel can result in excessive
* channel inactivity and system deact of channel
*/
-void ctcmpc_bh(unsigned long thischan)
+void ctcmpc_bh(struct work_struct *t)
{
- struct channel *ch = (struct channel *)thischan;
+ struct channel *ch = from_work(ch, t, ch_work);
struct sk_buff *skb;
struct net_device *dev = ch->netdev;
struct ctcm_priv *priv = dev->ml_priv;
@@ -1380,7 +1380,7 @@ static void mpc_action_go_inop(fsm_instance *fi, int event, void *arg)
case MPCG_STATE_FLOWC:
case MPCG_STATE_READY:
default:
- tasklet_hi_schedule(&wch->ch_disc_tasklet);
+ queue_work(system_bh_highpri_wq, &wch->ch_disc_work);
}

grp->xid2_tgnum = 0;
diff --git a/drivers/s390/net/ctcm_mpc.h b/drivers/s390/net/ctcm_mpc.h
index da41b26f76d1..735bea5d565a 100644
--- a/drivers/s390/net/ctcm_mpc.h
+++ b/drivers/s390/net/ctcm_mpc.h
@@ -13,6 +13,7 @@

#include <linux/interrupt.h>
#include <linux/skbuff.h>
+#include <linux/workqueue.h>
#include "fsm.h"

/*
@@ -156,8 +157,8 @@ struct mpcg_info {
};

struct mpc_group {
- struct tasklet_struct mpc_tasklet;
- struct tasklet_struct mpc_tasklet2;
+ struct work_struct mpc_work;
+ struct work_struct mpc_work2;
int changed_side;
int saved_state;
int channels_terminating;
@@ -233,6 +234,6 @@ void mpc_group_ready(unsigned long adev);
void mpc_channel_action(struct channel *ch, int direction, int action);
void mpc_action_send_discontact(unsigned long thischan);
void mpc_action_discontact(fsm_instance *fi, int event, void *arg);
-void ctcmpc_bh(unsigned long thischan);
+void ctcmpc_bh(struct work_struct *t);
#endif
/* --- This is the END my friend --- */
diff --git a/drivers/s390/net/lcs.c b/drivers/s390/net/lcs.c
index 25d4e6376591..751b7b212c91 100644
--- a/drivers/s390/net/lcs.c
+++ b/drivers/s390/net/lcs.c
@@ -49,7 +49,7 @@ static struct device *lcs_root_dev;
/*
* Some prototypes.
*/
-static void lcs_tasklet(unsigned long);
+static void lcs_work(struct work_struct *);
static void lcs_start_kernel_thread(struct work_struct *);
static void lcs_get_frames_cb(struct lcs_channel *, struct lcs_buffer *);
#ifdef CONFIG_IP_MULTICAST
@@ -140,8 +140,8 @@ static void
lcs_cleanup_channel(struct lcs_channel *channel)
{
LCS_DBF_TEXT(3, setup, "cleanch");
- /* Kill write channel tasklets. */
- tasklet_kill(&channel->irq_tasklet);
+ /* Cancel the channel's IRQ work. */
+ cancel_work_sync(&channel->irq_work);
/* Free channel buffers. */
lcs_free_channel(channel);
}
@@ -244,9 +244,8 @@ lcs_setup_read(struct lcs_card *card)
LCS_DBF_TEXT(3, setup, "initread");

lcs_setup_read_ccws(card);
- /* Initialize read channel tasklet. */
- card->read.irq_tasklet.data = (unsigned long) &card->read;
- card->read.irq_tasklet.func = lcs_tasklet;
+ /* Initialize read channel work. */
+ INIT_WORK(&card->read.irq_work, lcs_work);
/* Initialize waitqueue. */
init_waitqueue_head(&card->read.wait_q);
}
@@ -290,9 +289,8 @@ lcs_setup_write(struct lcs_card *card)
LCS_DBF_TEXT(3, setup, "initwrit");

lcs_setup_write_ccws(card);
- /* Initialize write channel tasklet. */
- card->write.irq_tasklet.data = (unsigned long) &card->write;
- card->write.irq_tasklet.func = lcs_tasklet;
+ /* Initialize write channel work. */
+ INIT_WORK(&card->write.irq_work, lcs_work);
/* Initialize waitqueue. */
init_waitqueue_head(&card->write.wait_q);
}
@@ -1429,22 +1427,22 @@ lcs_irq(struct ccw_device *cdev, unsigned long intparm, struct irb *irb)
}
if (irb->scsw.cmd.fctl & SCSW_FCTL_CLEAR_FUNC)
channel->state = LCS_CH_STATE_CLEARED;
- /* Do the rest in the tasklet. */
- tasklet_schedule(&channel->irq_tasklet);
+ /* Do the rest in the work handler. */
+ queue_work(system_bh_wq, &channel->irq_work);
}

/*
- * Tasklet for IRQ handler
+ * Work for IRQ handler
*/
static void
-lcs_tasklet(unsigned long data)
+lcs_work(struct work_struct *t)
{
unsigned long flags;
struct lcs_channel *channel;
struct lcs_buffer *iob;
int buf_idx;

- channel = (struct lcs_channel *) data;
+ channel = from_work(channel, t, irq_work);
LCS_DBF_TEXT_(5, trace, "tlet%s", dev_name(&channel->ccwdev->dev));

/* Check for processed buffers. */
diff --git a/drivers/s390/net/lcs.h b/drivers/s390/net/lcs.h
index a2699b70b050..66bc02e1d7e5 100644
--- a/drivers/s390/net/lcs.h
+++ b/drivers/s390/net/lcs.h
@@ -290,7 +290,7 @@ struct lcs_channel {
struct ccw_device *ccwdev;
struct ccw1 ccws[LCS_NUM_BUFFS + 1];
wait_queue_head_t wait_q;
- struct tasklet_struct irq_tasklet;
+ struct work_struct irq_work;
struct lcs_buffer iob[LCS_NUM_BUFFS];
int io_idx;
int buf_idx;
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index a0cce6872075..10ea95abc753 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -2911,7 +2911,7 @@ static int qeth_init_input_buffer(struct qeth_card *card,
}

/*
- * since the buffer is accessed only from the input_tasklet
+ * since the buffer is accessed only from the input_work
* there shouldn't be a need to synchronize; also, since we use
* the QETH_IN_BUF_REQUEUE_THRESHOLD we should never run out off
* buffers
diff --git a/drivers/s390/scsi/zfcp_qdio.c b/drivers/s390/scsi/zfcp_qdio.c
index 8cbc5e1711af..407590697c66 100644
--- a/drivers/s390/scsi/zfcp_qdio.c
+++ b/drivers/s390/scsi/zfcp_qdio.c
@@ -13,6 +13,7 @@
#include <linux/lockdep.h>
#include <linux/slab.h>
#include <linux/module.h>
+#include <linux/workqueue.h>
#include "zfcp_ext.h"
#include "zfcp_qdio.h"

@@ -72,9 +73,9 @@ static void zfcp_qdio_int_req(struct ccw_device *cdev, unsigned int qdio_err,
zfcp_qdio_handler_error(qdio, "qdireq1", qdio_err);
}

-static void zfcp_qdio_request_tasklet(struct tasklet_struct *tasklet)
+static void zfcp_qdio_request_work(struct work_struct *work)
{
- struct zfcp_qdio *qdio = from_tasklet(qdio, tasklet, request_tasklet);
+ struct zfcp_qdio *qdio = from_work(qdio, work, request_work);
struct ccw_device *cdev = qdio->adapter->ccw_device;
unsigned int start, error;
int completed;
@@ -104,7 +105,7 @@ static void zfcp_qdio_request_timer(struct timer_list *timer)
{
struct zfcp_qdio *qdio = from_timer(qdio, timer, request_timer);

- tasklet_schedule(&qdio->request_tasklet);
+ queue_work(system_bh_wq, &qdio->request_work);
}

static void zfcp_qdio_int_resp(struct ccw_device *cdev, unsigned int qdio_err,
@@ -158,15 +159,15 @@ static void zfcp_qdio_int_resp(struct ccw_device *cdev, unsigned int qdio_err,
zfcp_erp_adapter_reopen(qdio->adapter, 0, "qdires2");
}

-static void zfcp_qdio_irq_tasklet(struct tasklet_struct *tasklet)
+static void zfcp_qdio_irq_work(struct work_struct *work)
{
- struct zfcp_qdio *qdio = from_tasklet(qdio, tasklet, irq_tasklet);
+ struct zfcp_qdio *qdio = from_work(qdio, work, irq_work);
struct ccw_device *cdev = qdio->adapter->ccw_device;
unsigned int start, error;
int completed;

if (atomic_read(&qdio->req_q_free) < QDIO_MAX_BUFFERS_PER_Q)
- tasklet_schedule(&qdio->request_tasklet);
+ queue_work(system_bh_wq, &qdio->request_work);

/* Check the Response Queue: */
completed = qdio_inspect_input_queue(cdev, 0, &start, &error);
@@ -178,14 +179,14 @@ static void zfcp_qdio_irq_tasklet(struct tasklet_struct *tasklet)

if (qdio_start_irq(cdev))
/* More work pending: */
- tasklet_schedule(&qdio->irq_tasklet);
+ queue_work(system_bh_wq, &qdio->irq_work);
}

static void zfcp_qdio_poll(struct ccw_device *cdev, unsigned long data)
{
struct zfcp_qdio *qdio = (struct zfcp_qdio *) data;

- tasklet_schedule(&qdio->irq_tasklet);
+ queue_work(system_bh_wq, &qdio->irq_work);
}

static struct qdio_buffer_element *
@@ -315,7 +316,7 @@ int zfcp_qdio_send(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)

/*
* This should actually be a spin_lock_bh(stat_lock), to protect against
- * Request Queue completion processing in tasklet context.
+ * Request Queue completion processing in work context.
* But we can't do so (and are safe), as we always get called with IRQs
* disabled by spin_lock_irq[save](req_q_lock).
*/
@@ -339,7 +340,7 @@ int zfcp_qdio_send(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
}

if (atomic_read(&qdio->req_q_free) <= 2 * ZFCP_QDIO_MAX_SBALS_PER_REQ)
- tasklet_schedule(&qdio->request_tasklet);
+ queue_work(system_bh_wq, &qdio->request_work);
else
timer_reduce(&qdio->request_timer,
jiffies + msecs_to_jiffies(ZFCP_QDIO_REQUEST_SCAN_MSECS));
@@ -406,8 +407,8 @@ void zfcp_qdio_close(struct zfcp_qdio *qdio)

wake_up(&qdio->req_q_wq);

- tasklet_disable(&qdio->irq_tasklet);
- tasklet_disable(&qdio->request_tasklet);
+ disable_work_sync(&qdio->irq_work);
+ disable_work_sync(&qdio->request_work);
del_timer_sync(&qdio->request_timer);
qdio_stop_irq(adapter->ccw_device);
qdio_shutdown(adapter->ccw_device, QDIO_FLAG_CLEANUP_USING_CLEAR);
@@ -511,11 +512,11 @@ int zfcp_qdio_open(struct zfcp_qdio *qdio)
atomic_or(ZFCP_STATUS_ADAPTER_QDIOUP, &qdio->adapter->status);

/* Enable processing for Request Queue completions: */
- tasklet_enable(&qdio->request_tasklet);
+ enable_and_queue_work(system_bh_wq, &qdio->request_work);
/* Enable processing for QDIO interrupts: */
- tasklet_enable(&qdio->irq_tasklet);
+ enable_and_queue_work(system_bh_wq, &qdio->irq_work);
/* This results in a qdio_start_irq(): */
- tasklet_schedule(&qdio->irq_tasklet);
+ queue_work(system_bh_wq, &qdio->irq_work);

zfcp_qdio_shost_update(adapter, qdio);

@@ -534,8 +535,8 @@ void zfcp_qdio_destroy(struct zfcp_qdio *qdio)
if (!qdio)
return;

- tasklet_kill(&qdio->irq_tasklet);
- tasklet_kill(&qdio->request_tasklet);
+ cancel_work_sync(&qdio->irq_work);
+ cancel_work_sync(&qdio->request_work);

if (qdio->adapter->ccw_device)
qdio_free(qdio->adapter->ccw_device);
@@ -563,10 +564,10 @@ int zfcp_qdio_setup(struct zfcp_adapter *adapter)
spin_lock_init(&qdio->req_q_lock);
spin_lock_init(&qdio->stat_lock);
timer_setup(&qdio->request_timer, zfcp_qdio_request_timer, 0);
- tasklet_setup(&qdio->irq_tasklet, zfcp_qdio_irq_tasklet);
- tasklet_setup(&qdio->request_tasklet, zfcp_qdio_request_tasklet);
- tasklet_disable(&qdio->irq_tasklet);
- tasklet_disable(&qdio->request_tasklet);
+ INIT_WORK(&qdio->irq_work, zfcp_qdio_irq_work);
+ INIT_WORK(&qdio->request_work, zfcp_qdio_request_work);
+ disable_work_sync(&qdio->irq_work);
+ disable_work_sync(&qdio->request_work);

adapter->qdio = qdio;
return 0;
@@ -580,7 +581,7 @@ int zfcp_qdio_setup(struct zfcp_adapter *adapter)
* wrapper function sets a flag to ensure hardware logging is only
* triggered once before going through qdio shutdown.
*
- * The triggers are always run from qdio tasklet context, so no
+ * The triggers are always run from qdio work context, so no
* additional synchronization is necessary.
*/
void zfcp_qdio_siosl(struct zfcp_adapter *adapter)
diff --git a/drivers/s390/scsi/zfcp_qdio.h b/drivers/s390/scsi/zfcp_qdio.h
index 8f7d2ae94441..ce92d2378b98 100644
--- a/drivers/s390/scsi/zfcp_qdio.h
+++ b/drivers/s390/scsi/zfcp_qdio.h
@@ -11,6 +11,7 @@
#define ZFCP_QDIO_H

#include <linux/interrupt.h>
+#include <linux/workqueue.h>
#include <asm/qdio.h>

#define ZFCP_QDIO_SBALE_LEN PAGE_SIZE
@@ -30,8 +31,8 @@
* @req_q_util: used for accounting
* @req_q_full: queue full incidents
* @req_q_wq: used to wait for SBAL availability
- * @irq_tasklet: used for QDIO interrupt processing
- * @request_tasklet: used for Request Queue completion processing
+ * @irq_work: used for QDIO interrupt processing
+ * @request_work: used for Request Queue completion processing
* @request_timer: used to trigger the Request Queue completion processing
* @adapter: adapter used in conjunction with this qdio structure
* @max_sbale_per_sbal: qdio limit per sbal
@@ -48,8 +49,8 @@ struct zfcp_qdio {
u64 req_q_util;
atomic_t req_q_full;
wait_queue_head_t req_q_wq;
- struct tasklet_struct irq_tasklet;
- struct tasklet_struct request_tasklet;
+ struct work_struct irq_work;
+ struct work_struct request_work;
struct timer_list request_timer;
struct zfcp_adapter *adapter;
u16 max_sbale_per_sbal;
--
2.17.1


2024-03-27 16:09:50

by Allen Pais

[permalink] [raw]
Subject: [PATCH 9/9] mmc: Convert from tasklet to BH workqueue

The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added. A BH workqueue
behaves similarly to regular workqueues except that the queued work items
are executed in the BH context.

This patch converts drivers/mmc/* from tasklet to BH workqueue.
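
As an illustration only (not part of the diff), the conversion follows this
general pattern. INIT_WORK(), queue_work(), cancel_work_sync() and the
system_bh_wq workqueue come from the BH workqueue support in the branch
referenced below; "foo" is a placeholder structure used purely for the
sketch:

	/* before: tasklet */
	tasklet_setup(&foo->tasklet, foo_tasklet_fn);
	tasklet_schedule(&foo->tasklet);
	tasklet_kill(&foo->tasklet);

	/* after: BH workqueue */
	INIT_WORK(&foo->work, foo_work_fn);
	queue_work(system_bh_wq, &foo->work);
	cancel_work_sync(&foo->work);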

Based on the work done by Tejun Heo <[email protected]>
Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10

Signed-off-by: Allen Pais <[email protected]>
---
drivers/mmc/host/atmel-mci.c | 35 ++++-----
drivers/mmc/host/au1xmmc.c | 37 ++++-----
drivers/mmc/host/cb710-mmc.c | 15 ++--
drivers/mmc/host/cb710-mmc.h | 3 +-
drivers/mmc/host/dw_mmc.c | 25 ++++---
drivers/mmc/host/dw_mmc.h | 9 ++-
drivers/mmc/host/omap.c | 17 +++--
drivers/mmc/host/renesas_sdhi.h | 3 +-
drivers/mmc/host/renesas_sdhi_internal_dmac.c | 24 +++---
drivers/mmc/host/renesas_sdhi_sys_dmac.c | 9 +--
drivers/mmc/host/sdhci-bcm-kona.c | 2 +-
drivers/mmc/host/tifm_sd.c | 15 ++--
drivers/mmc/host/tmio_mmc.h | 3 +-
drivers/mmc/host/tmio_mmc_core.c | 4 +-
drivers/mmc/host/uniphier-sd.c | 13 ++--
drivers/mmc/host/via-sdmmc.c | 25 ++++---
drivers/mmc/host/wbsd.c | 75 ++++++++++---------
drivers/mmc/host/wbsd.h | 10 +--
18 files changed, 167 insertions(+), 157 deletions(-)

diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
index dba826db739a..0a92a7fd020f 100644
--- a/drivers/mmc/host/atmel-mci.c
+++ b/drivers/mmc/host/atmel-mci.c
@@ -33,6 +33,7 @@
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/pinctrl/consumer.h>
+#include <linux/workqueue.h>

#include <asm/cacheflush.h>
#include <asm/io.h>
@@ -284,12 +285,12 @@ struct atmel_mci_dma {
* EVENT_DATA_ERROR is pending.
* @stop_cmdr: Value to be loaded into CMDR when the stop command is
* to be sent.
- * @tasklet: Tasklet running the request state machine.
+ * @work: Work running the request state machine.
* @pending_events: Bitmask of events flagged by the interrupt handler
- * to be processed by the tasklet.
+ * to be processed by the work item.
* @completed_events: Bitmask of events which the state machine has
* processed.
- * @state: Tasklet state.
+ * @state: Work state.
* @queue: List of slots waiting for access to the controller.
* @need_clock_update: Update the clock rate before the next request.
* @need_reset: Reset controller before next request.
@@ -363,7 +364,7 @@ struct atmel_mci {
u32 data_status;
u32 stop_cmdr;

- struct tasklet_struct tasklet;
+ struct work_struct work;
unsigned long pending_events;
unsigned long completed_events;
enum atmel_mci_state state;
@@ -761,7 +762,7 @@ static void atmci_timeout_timer(struct timer_list *t)
host->need_reset = 1;
host->state = STATE_END_REQUEST;
smp_wmb();
- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);
}

static inline unsigned int atmci_ns_to_clocks(struct atmel_mci *host,
@@ -983,7 +984,7 @@ static void atmci_pdc_complete(struct atmel_mci *host)

dev_dbg(&host->pdev->dev, "(%s) set pending xfer complete\n", __func__);
atmci_set_pending(host, EVENT_XFER_COMPLETE);
- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);
}

static void atmci_dma_cleanup(struct atmel_mci *host)
@@ -997,7 +998,7 @@ static void atmci_dma_cleanup(struct atmel_mci *host)
}

/*
- * This function is called by the DMA driver from tasklet context.
+ * This function is called by the DMA driver from work context.
*/
static void atmci_dma_complete(void *arg)
{
@@ -1020,7 +1021,7 @@ static void atmci_dma_complete(void *arg)
dev_dbg(&host->pdev->dev,
"(%s) set pending xfer complete\n", __func__);
atmci_set_pending(host, EVENT_XFER_COMPLETE);
- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);

/*
* Regardless of what the documentation says, we have
@@ -1033,7 +1034,7 @@ static void atmci_dma_complete(void *arg)
* haven't seen all the potential error bits yet.
*
* The interrupt handler will schedule a different
- * tasklet to finish things up when the data transfer
+ * work item to finish things up when the data transfer
* is completely done.
*
* We may not complete the mmc request here anyway
@@ -1765,9 +1766,9 @@ static void atmci_detect_change(struct timer_list *t)
}
}

-static void atmci_tasklet_func(struct tasklet_struct *t)
+static void atmci_work_func(struct work_struct *t)
{
- struct atmel_mci *host = from_tasklet(host, t, tasklet);
+ struct atmel_mci *host = from_work(host, t, work);
struct mmc_request *mrq = host->mrq;
struct mmc_data *data = host->data;
enum atmel_mci_state state = host->state;
@@ -1779,7 +1780,7 @@ static void atmci_tasklet_func(struct tasklet_struct *t)
state = host->state;

dev_vdbg(&host->pdev->dev,
- "tasklet: state %u pending/completed/mask %lx/%lx/%x\n",
+ "work: state %u pending/completed/mask %lx/%lx/%x\n",
state, host->pending_events, host->completed_events,
atmci_readl(host, ATMCI_IMR));

@@ -2141,7 +2142,7 @@ static irqreturn_t atmci_interrupt(int irq, void *dev_id)
dev_dbg(&host->pdev->dev, "set pending data error\n");
smp_wmb();
atmci_set_pending(host, EVENT_DATA_ERROR);
- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);
}

if (pending & ATMCI_TXBUFE) {
@@ -2210,7 +2211,7 @@ static irqreturn_t atmci_interrupt(int irq, void *dev_id)
smp_wmb();
dev_dbg(&host->pdev->dev, "set pending notbusy\n");
atmci_set_pending(host, EVENT_NOTBUSY);
- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);
}

if (pending & ATMCI_NOTBUSY) {
@@ -2219,7 +2220,7 @@ static irqreturn_t atmci_interrupt(int irq, void *dev_id)
smp_wmb();
dev_dbg(&host->pdev->dev, "set pending notbusy\n");
atmci_set_pending(host, EVENT_NOTBUSY);
- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);
}

if (pending & ATMCI_RXRDY)
@@ -2234,7 +2235,7 @@ static irqreturn_t atmci_interrupt(int irq, void *dev_id)
smp_wmb();
dev_dbg(&host->pdev->dev, "set pending cmd rdy\n");
atmci_set_pending(host, EVENT_CMD_RDY);
- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);
}

if (pending & (ATMCI_SDIOIRQA | ATMCI_SDIOIRQB))
@@ -2530,7 +2531,7 @@ static int atmci_probe(struct platform_device *pdev)

host->mapbase = regs->start;

- tasklet_setup(&host->tasklet, atmci_tasklet_func);
+ INIT_WORK(&host->work, atmci_work_func);

ret = request_irq(irq, atmci_interrupt, 0, dev_name(&pdev->dev), host);
if (ret) {
diff --git a/drivers/mmc/host/au1xmmc.c b/drivers/mmc/host/au1xmmc.c
index b5a5c6a2fe8b..c86fa7d2ebb7 100644
--- a/drivers/mmc/host/au1xmmc.c
+++ b/drivers/mmc/host/au1xmmc.c
@@ -42,6 +42,7 @@
#include <linux/leds.h>
#include <linux/mmc/host.h>
#include <linux/slab.h>
+#include <linux/workqueue.h>

#include <asm/io.h>
#include <asm/mach-au1x00/au1000.h>
@@ -113,8 +114,8 @@ struct au1xmmc_host {

int irq;

- struct tasklet_struct finish_task;
- struct tasklet_struct data_task;
+ struct work_struct finish_task;
+ struct work_struct data_task;
struct au1xmmc_platform_data *platdata;
struct platform_device *pdev;
struct resource *ioarea;
@@ -253,9 +254,9 @@ static void au1xmmc_finish_request(struct au1xmmc_host *host)
mmc_request_done(host->mmc, mrq);
}

-static void au1xmmc_tasklet_finish(struct tasklet_struct *t)
+static void au1xmmc_work_finish(struct work_struct *t)
{
- struct au1xmmc_host *host = from_tasklet(host, t, finish_task);
+ struct au1xmmc_host *host = from_work(host, t, finish_task);
au1xmmc_finish_request(host);
}

@@ -363,9 +364,9 @@ static void au1xmmc_data_complete(struct au1xmmc_host *host, u32 status)
au1xmmc_finish_request(host);
}

-static void au1xmmc_tasklet_data(struct tasklet_struct *t)
+static void au1xmmc_work_data(struct work_struct *t)
{
- struct au1xmmc_host *host = from_tasklet(host, t, data_task);
+ struct au1xmmc_host *host = from_work(host, t, data_task);

u32 status = __raw_readl(HOST_STATUS(host));
au1xmmc_data_complete(host, status);
@@ -425,7 +426,7 @@ static void au1xmmc_send_pio(struct au1xmmc_host *host)
if (host->flags & HOST_F_STOP)
SEND_STOP(host);

- tasklet_schedule(&host->data_task);
+ queue_work(system_bh_wq, &host->data_task);
}
}

@@ -505,7 +506,7 @@ static void au1xmmc_receive_pio(struct au1xmmc_host *host)
if (host->flags & HOST_F_STOP)
SEND_STOP(host);

- tasklet_schedule(&host->data_task);
+ queue_work(system_bh_wq, &host->data_task);
}
}

@@ -561,7 +562,7 @@ static void au1xmmc_cmd_complete(struct au1xmmc_host *host, u32 status)

if (!trans || cmd->error) {
IRQ_OFF(host, SD_CONFIG_TH | SD_CONFIG_RA | SD_CONFIG_RF);
- tasklet_schedule(&host->finish_task);
+ queue_work(system_bh_wq, &host->finish_task);
return;
}

@@ -797,7 +798,7 @@ static irqreturn_t au1xmmc_irq(int irq, void *dev_id)
IRQ_OFF(host, SD_CONFIG_NE | SD_CONFIG_TH);

/* IRQ_OFF(host, SD_CONFIG_TH | SD_CONFIG_RA | SD_CONFIG_RF); */
- tasklet_schedule(&host->finish_task);
+ queue_work(system_bh_wq, &host->finish_task);
}
#if 0
else if (status & SD_STATUS_DD) {
@@ -806,7 +807,7 @@ static irqreturn_t au1xmmc_irq(int irq, void *dev_id)
au1xmmc_receive_pio(host);
else {
au1xmmc_data_complete(host, status);
- /* tasklet_schedule(&host->data_task); */
+ /* queue_work(system_bh_wq, &host->data_task); */
}
}
#endif
@@ -854,7 +855,7 @@ static void au1xmmc_dbdma_callback(int irq, void *dev_id)
if (host->flags & HOST_F_STOP)
SEND_STOP(host);

- tasklet_schedule(&host->data_task);
+ queue_work(system_bh_wq, &host->data_task);
}

static int au1xmmc_dbdma_init(struct au1xmmc_host *host)
@@ -1039,9 +1040,9 @@ static int au1xmmc_probe(struct platform_device *pdev)
if (host->platdata)
mmc->caps &= ~(host->platdata->mask_host_caps);

- tasklet_setup(&host->data_task, au1xmmc_tasklet_data);
+ INIT_WORK(&host->data_task, au1xmmc_work_data);

- tasklet_setup(&host->finish_task, au1xmmc_tasklet_finish);
+ INIT_WORK(&host->finish_task, au1xmmc_work_finish);

if (has_dbdma()) {
ret = au1xmmc_dbdma_init(host);
@@ -1091,8 +1092,8 @@ static int au1xmmc_probe(struct platform_device *pdev)
if (host->flags & HOST_F_DBDMA)
au1xmmc_dbdma_shutdown(host);

- tasklet_kill(&host->data_task);
- tasklet_kill(&host->finish_task);
+ cancel_work_sync(&host->data_task);
+ cancel_work_sync(&host->finish_task);

if (host->platdata && host->platdata->cd_setup &&
!(mmc->caps & MMC_CAP_NEEDS_POLL))
@@ -1135,8 +1136,8 @@ static void au1xmmc_remove(struct platform_device *pdev)
__raw_writel(0, HOST_CONFIG2(host));
wmb(); /* drain writebuffer */

- tasklet_kill(&host->data_task);
- tasklet_kill(&host->finish_task);
+ cancel_work_sync(&host->data_task);
+ cancel_work_sync(&host->finish_task);

if (host->flags & HOST_F_DBDMA)
au1xmmc_dbdma_shutdown(host);
diff --git a/drivers/mmc/host/cb710-mmc.c b/drivers/mmc/host/cb710-mmc.c
index 0aec33b88bef..eebb6797e785 100644
--- a/drivers/mmc/host/cb710-mmc.c
+++ b/drivers/mmc/host/cb710-mmc.c
@@ -8,6 +8,7 @@
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/delay.h>
+#include <linux/workqueue.h>
#include "cb710-mmc.h"

#define CB710_MMC_REQ_TIMEOUT_MS 2000
@@ -493,7 +494,7 @@ static void cb710_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
if (!cb710_mmc_command(mmc, mrq->cmd) && mrq->stop)
cb710_mmc_command(mmc, mrq->stop);

- tasklet_schedule(&reader->finish_req_tasklet);
+ queue_work(system_bh_wq, &reader->finish_req_work);
}

static int cb710_mmc_powerup(struct cb710_slot *slot)
@@ -646,10 +647,10 @@ static int cb710_mmc_irq_handler(struct cb710_slot *slot)
return 1;
}

-static void cb710_mmc_finish_request_tasklet(struct tasklet_struct *t)
+static void cb710_mmc_finish_request_work(struct work_struct *t)
{
- struct cb710_mmc_reader *reader = from_tasklet(reader, t,
- finish_req_tasklet);
+ struct cb710_mmc_reader *reader = from_work(reader, t,
+ finish_req_work);
struct mmc_request *mrq = reader->mrq;

reader->mrq = NULL;
@@ -718,8 +719,8 @@ static int cb710_mmc_init(struct platform_device *pdev)

reader = mmc_priv(mmc);

- tasklet_setup(&reader->finish_req_tasklet,
- cb710_mmc_finish_request_tasklet);
+ INIT_WORK(&reader->finish_req_work,
+ cb710_mmc_finish_request_work);
spin_lock_init(&reader->irq_lock);
cb710_dump_regs(chip, CB710_DUMP_REGS_MMC);

@@ -763,7 +764,7 @@ static void cb710_mmc_exit(struct platform_device *pdev)
cb710_write_port_32(slot, CB710_MMC_CONFIG_PORT, 0);
cb710_write_port_16(slot, CB710_MMC_CONFIGB_PORT, 0);

- tasklet_kill(&reader->finish_req_tasklet);
+ cancel_work_sync(&reader->finish_req_work);

mmc_free_host(mmc);
}
diff --git a/drivers/mmc/host/cb710-mmc.h b/drivers/mmc/host/cb710-mmc.h
index 5e053077dbed..b35ab8736374 100644
--- a/drivers/mmc/host/cb710-mmc.h
+++ b/drivers/mmc/host/cb710-mmc.h
@@ -8,10 +8,11 @@
#define LINUX_CB710_MMC_H

#include <linux/cb710.h>
+#include <linux/workqueue.h>

/* per-MMC-reader structure */
struct cb710_mmc_reader {
- struct tasklet_struct finish_req_tasklet;
+ struct work_struct finish_req_work;
struct mmc_request *mrq;
spinlock_t irq_lock;
unsigned char last_power_mode;
diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
index 8e2d676b9239..ee6f892bc0d8 100644
--- a/drivers/mmc/host/dw_mmc.c
+++ b/drivers/mmc/host/dw_mmc.c
@@ -36,6 +36,7 @@
#include <linux/regulator/consumer.h>
#include <linux/of.h>
#include <linux/mmc/slot-gpio.h>
+#include <linux/workqueue.h>

#include "dw_mmc.h"

@@ -493,7 +494,7 @@ static void dw_mci_dmac_complete_dma(void *arg)
*/
if (data) {
set_bit(EVENT_XFER_COMPLETE, &host->pending_events);
- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);
}
}

@@ -1834,7 +1835,7 @@ static enum hrtimer_restart dw_mci_fault_timer(struct hrtimer *t)
if (!host->data_status) {
host->data_status = SDMMC_INT_DCRC;
set_bit(EVENT_DATA_ERROR, &host->pending_events);
- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);
}

spin_unlock_irqrestore(&host->irq_lock, flags);
@@ -2056,9 +2057,9 @@ static bool dw_mci_clear_pending_data_complete(struct dw_mci *host)
return true;
}

-static void dw_mci_tasklet_func(struct tasklet_struct *t)
+static void dw_mci_work_func(struct work_struct *t)
{
- struct dw_mci *host = from_tasklet(host, t, tasklet);
+ struct dw_mci *host = from_work(host, t, work);
struct mmc_data *data;
struct mmc_command *cmd;
struct mmc_request *mrq;
@@ -2113,7 +2114,7 @@ static void dw_mci_tasklet_func(struct tasklet_struct *t)
* will waste a bit of time (we already know
* the command was bad), it can't cause any
* errors since it's possible it would have
- * taken place anyway if this tasklet got
+ * taken place anyway if this work got
* delayed. Allowing the transfer to take place
* avoids races and keeps things simple.
*/
@@ -2706,7 +2707,7 @@ static void dw_mci_cmd_interrupt(struct dw_mci *host, u32 status)
smp_wmb(); /* drain writebuffer */

set_bit(EVENT_CMD_COMPLETE, &host->pending_events);
- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);

dw_mci_start_fault_timer(host);
}
@@ -2774,7 +2775,7 @@ static irqreturn_t dw_mci_interrupt(int irq, void *dev_id)
set_bit(EVENT_DATA_COMPLETE,
&host->pending_events);

- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);

spin_unlock(&host->irq_lock);
}
@@ -2793,7 +2794,7 @@ static irqreturn_t dw_mci_interrupt(int irq, void *dev_id)
dw_mci_read_data_pio(host, true);
}
set_bit(EVENT_DATA_COMPLETE, &host->pending_events);
- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);

spin_unlock(&host->irq_lock);
}
@@ -3098,7 +3099,7 @@ static void dw_mci_cmd11_timer(struct timer_list *t)

host->cmd_status = SDMMC_INT_RTO;
set_bit(EVENT_CMD_COMPLETE, &host->pending_events);
- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);
}

static void dw_mci_cto_timer(struct timer_list *t)
@@ -3144,7 +3145,7 @@ static void dw_mci_cto_timer(struct timer_list *t)
*/
host->cmd_status = SDMMC_INT_RTO;
set_bit(EVENT_CMD_COMPLETE, &host->pending_events);
- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);
break;
default:
dev_warn(host->dev, "Unexpected command timeout, state %d\n",
@@ -3195,7 +3196,7 @@ static void dw_mci_dto_timer(struct timer_list *t)
host->data_status = SDMMC_INT_DRTO;
set_bit(EVENT_DATA_ERROR, &host->pending_events);
set_bit(EVENT_DATA_COMPLETE, &host->pending_events);
- tasklet_schedule(&host->tasklet);
+ queue_work(system_bh_wq, &host->work);
break;
default:
dev_warn(host->dev, "Unexpected data timeout, state %d\n",
@@ -3435,7 +3436,7 @@ int dw_mci_probe(struct dw_mci *host)
else
host->fifo_reg = host->regs + DATA_240A_OFFSET;

- tasklet_setup(&host->tasklet, dw_mci_tasklet_func);
+ INIT_WORK(&host->work, dw_mci_work_func);
ret = devm_request_irq(host->dev, host->irq, dw_mci_interrupt,
host->irq_flags, "dw-mci", host);
if (ret)
diff --git a/drivers/mmc/host/dw_mmc.h b/drivers/mmc/host/dw_mmc.h
index 4ed81f94f7ca..d17f398a0432 100644
--- a/drivers/mmc/host/dw_mmc.h
+++ b/drivers/mmc/host/dw_mmc.h
@@ -17,6 +17,7 @@
#include <linux/fault-inject.h>
#include <linux/hrtimer.h>
#include <linux/interrupt.h>
+#include <linux/workqueue.h>

enum dw_mci_state {
STATE_IDLE = 0,
@@ -89,12 +90,12 @@ struct dw_mci_dma_slave {
* @stop_cmdr: Value to be loaded into CMDR when the stop command is
* to be sent.
* @dir_status: Direction of current transfer.
- * @tasklet: Tasklet running the request state machine.
+ * @work: Work running the request state machine.
* @pending_events: Bitmask of events flagged by the interrupt handler
- * to be processed by the tasklet.
+ * to be processed by the work item.
* @completed_events: Bitmask of events which the state machine has
* processed.
- * @state: Tasklet state.
+ * @state: Work state.
* @queue: List of slots waiting for access to the controller.
* @bus_hz: The rate of @mck in Hz. This forms the basis for MMC bus
* rate and timeout calculations.
@@ -194,7 +195,7 @@ struct dw_mci {
u32 data_status;
u32 stop_cmdr;
u32 dir_status;
- struct tasklet_struct tasklet;
+ struct work_struct work;
unsigned long pending_events;
unsigned long completed_events;
enum dw_mci_state state;
diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c
index 088f8ed4fdc4..d85bae7b9cba 100644
--- a/drivers/mmc/host/omap.c
+++ b/drivers/mmc/host/omap.c
@@ -28,6 +28,7 @@
#include <linux/slab.h>
#include <linux/gpio/consumer.h>
#include <linux/platform_data/mmc-omap.h>
+#include <linux/workqueue.h>


#define OMAP_MMC_REG_CMD 0x00
@@ -105,7 +106,7 @@ struct mmc_omap_slot {
u16 power_mode;
unsigned int fclk_freq;

- struct tasklet_struct cover_tasklet;
+ struct work_struct cover_work;
struct timer_list cover_timer;
unsigned cover_open;

@@ -873,18 +874,18 @@ void omap_mmc_notify_cover_event(struct device *dev, int num, int is_closed)
sysfs_notify(&slot->mmc->class_dev.kobj, NULL, "cover_switch");
}

- tasklet_hi_schedule(&slot->cover_tasklet);
+ queue_work(system_bh_highpri_wq, &slot->cover_work);
}

static void mmc_omap_cover_timer(struct timer_list *t)
{
struct mmc_omap_slot *slot = from_timer(slot, t, cover_timer);
- tasklet_schedule(&slot->cover_tasklet);
+ queue_work(system_bh_wq, &slot->cover_work);
}

-static void mmc_omap_cover_handler(struct tasklet_struct *t)
+static void mmc_omap_cover_handler(struct work_struct *t)
{
- struct mmc_omap_slot *slot = from_tasklet(slot, t, cover_tasklet);
+ struct mmc_omap_slot *slot = from_work(slot, t, cover_work);
int cover_open = mmc_omap_cover_is_open(slot);

mmc_detect_change(slot->mmc, 0);
@@ -1299,7 +1300,7 @@ static int mmc_omap_new_slot(struct mmc_omap_host *host, int id)

if (slot->pdata->get_cover_state != NULL) {
timer_setup(&slot->cover_timer, mmc_omap_cover_timer, 0);
- tasklet_setup(&slot->cover_tasklet, mmc_omap_cover_handler);
+ INIT_WORK(&slot->cover_work, mmc_omap_cover_handler);
}

r = mmc_add_host(mmc);
@@ -1318,7 +1319,7 @@ static int mmc_omap_new_slot(struct mmc_omap_host *host, int id)
&dev_attr_cover_switch);
if (r < 0)
goto err_remove_slot_name;
- tasklet_schedule(&slot->cover_tasklet);
+ queue_work(system_bh_wq, &slot->cover_work);
}

return 0;
@@ -1341,7 +1342,7 @@ static void mmc_omap_remove_slot(struct mmc_omap_slot *slot)
if (slot->pdata->get_cover_state != NULL)
device_remove_file(&mmc->class_dev, &dev_attr_cover_switch);

- tasklet_kill(&slot->cover_tasklet);
+ cancel_work_sync(&slot->cover_work);
del_timer_sync(&slot->cover_timer);
flush_workqueue(slot->host->mmc_omap_wq);

diff --git a/drivers/mmc/host/renesas_sdhi.h b/drivers/mmc/host/renesas_sdhi.h
index 586f94d4dbfd..4fd2bfcacd76 100644
--- a/drivers/mmc/host/renesas_sdhi.h
+++ b/drivers/mmc/host/renesas_sdhi.h
@@ -11,6 +11,7 @@

#include <linux/dmaengine.h>
#include <linux/platform_device.h>
+#include <linux/workqueue.h>
#include "tmio_mmc.h"

struct renesas_sdhi_scc {
@@ -67,7 +68,7 @@ struct renesas_sdhi_dma {
dma_filter_fn filter;
void (*enable)(struct tmio_mmc_host *host, bool enable);
struct completion dma_dataend;
- struct tasklet_struct dma_complete;
+ struct work_struct dma_complete;
};

struct renesas_sdhi {
diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
index 53d34c3eddce..f175f8898516 100644
--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
@@ -336,7 +336,7 @@ static bool renesas_sdhi_internal_dmac_dma_irq(struct tmio_mmc_host *host)
writel(status ^ dma_irqs, host->ctl + DM_CM_INFO1);
set_bit(SDHI_DMA_END_FLAG_DMA, &dma_priv->end_flags);
if (test_bit(SDHI_DMA_END_FLAG_ACCESS, &dma_priv->end_flags))
- tasklet_schedule(&dma_priv->dma_complete);
+ queue_work(system_bh_wq, &dma_priv->dma_complete);
}

return status & dma_irqs;
@@ -351,7 +351,7 @@ renesas_sdhi_internal_dmac_dataend_dma(struct tmio_mmc_host *host)
set_bit(SDHI_DMA_END_FLAG_ACCESS, &dma_priv->end_flags);
if (test_bit(SDHI_DMA_END_FLAG_DMA, &dma_priv->end_flags) ||
host->data->error)
- tasklet_schedule(&dma_priv->dma_complete);
+ queue_work(system_bh_wq, &dma_priv->dma_complete);
}

/*
@@ -439,9 +439,9 @@ renesas_sdhi_internal_dmac_start_dma(struct tmio_mmc_host *host,
renesas_sdhi_internal_dmac_enable_dma(host, false);
}

-static void renesas_sdhi_internal_dmac_issue_tasklet_fn(unsigned long arg)
+static void renesas_sdhi_internal_dmac_issue_work_fn(struct work_struct *t)
{
- struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;
+ struct tmio_mmc_host *host = from_work(host, t, dma_issue);
struct renesas_sdhi *priv = host_to_priv(host);

tmio_mmc_enable_mmc_irqs(host, TMIO_STAT_DATAEND);
@@ -453,7 +453,7 @@ static void renesas_sdhi_internal_dmac_issue_tasklet_fn(unsigned long arg)
/* on CMD errors, simulate DMA end immediately */
set_bit(SDHI_DMA_END_FLAG_DMA, &priv->dma_priv.end_flags);
if (test_bit(SDHI_DMA_END_FLAG_ACCESS, &priv->dma_priv.end_flags))
- tasklet_schedule(&priv->dma_priv.dma_complete);
+ queue_work(system_bh_wq, &priv->dma_priv.dma_complete);
}
}

@@ -483,9 +483,9 @@ static bool renesas_sdhi_internal_dmac_complete(struct tmio_mmc_host *host)
return true;
}

-static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg)
+static void renesas_sdhi_internal_dmac_complete_work_fn(struct work_struct *t)
{
- struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;
+ struct tmio_mmc_host *host = from_work(host, t, dma_complete);

spin_lock_irq(&host->lock);
if (!renesas_sdhi_internal_dmac_complete(host))
@@ -543,12 +543,10 @@ renesas_sdhi_internal_dmac_request_dma(struct tmio_mmc_host *host,
/* Each value is set to non-zero to assume "enabling" each DMA */
host->chan_rx = host->chan_tx = (void *)0xdeadbeaf;

- tasklet_init(&priv->dma_priv.dma_complete,
- renesas_sdhi_internal_dmac_complete_tasklet_fn,
- (unsigned long)host);
- tasklet_init(&host->dma_issue,
- renesas_sdhi_internal_dmac_issue_tasklet_fn,
- (unsigned long)host);
+ INIT_WORK(&priv->dma_priv.dma_complete,
+ renesas_sdhi_internal_dmac_complete_work_fn);
+ INIT_WORK(&host->dma_issue,
+ renesas_sdhi_internal_dmac_issue_work_fn);

/* Add pre_req and post_req */
host->ops.pre_req = renesas_sdhi_internal_dmac_pre_req;
diff --git a/drivers/mmc/host/renesas_sdhi_sys_dmac.c b/drivers/mmc/host/renesas_sdhi_sys_dmac.c
index 9cf7f9feab72..793595ad6d02 100644
--- a/drivers/mmc/host/renesas_sdhi_sys_dmac.c
+++ b/drivers/mmc/host/renesas_sdhi_sys_dmac.c
@@ -312,9 +312,9 @@ static void renesas_sdhi_sys_dmac_start_dma(struct tmio_mmc_host *host,
}
}

-static void renesas_sdhi_sys_dmac_issue_tasklet_fn(unsigned long priv)
+static void renesas_sdhi_sys_dmac_issue_work_fn(struct work_struct *t)
{
- struct tmio_mmc_host *host = (struct tmio_mmc_host *)priv;
+ struct tmio_mmc_host *host = from_work(host, t, dma_issue);
struct dma_chan *chan = NULL;

spin_lock_irq(&host->lock);
@@ -401,9 +401,8 @@ static void renesas_sdhi_sys_dmac_request_dma(struct tmio_mmc_host *host,
goto ebouncebuf;

init_completion(&priv->dma_priv.dma_dataend);
- tasklet_init(&host->dma_issue,
- renesas_sdhi_sys_dmac_issue_tasklet_fn,
- (unsigned long)host);
+ INIT_WORK(&host->dma_issue,
+ renesas_sdhi_sys_dmac_issue_work_fn);
}

renesas_sdhi_sys_dmac_enable_dma(host, true);
diff --git a/drivers/mmc/host/sdhci-bcm-kona.c b/drivers/mmc/host/sdhci-bcm-kona.c
index cb9152c6a65d..974f205d479b 100644
--- a/drivers/mmc/host/sdhci-bcm-kona.c
+++ b/drivers/mmc/host/sdhci-bcm-kona.c
@@ -107,7 +107,7 @@ static void sdhci_bcm_kona_sd_init(struct sdhci_host *host)
* Software emulation of the SD card insertion/removal. Set insert=1 for insert
* and insert=0 for removal. The card detection is done by GPIO. For Broadcom
* IP to function properly the bit 0 of CORESTAT register needs to be set/reset
- * to generate the CD IRQ handled in sdhci.c which schedules card_tasklet.
+ * to generate the CD IRQ handled in sdhci.c which schedules card_work.
*/
static int sdhci_bcm_kona_sd_card_emulate(struct sdhci_host *host, int insert)
{
diff --git a/drivers/mmc/host/tifm_sd.c b/drivers/mmc/host/tifm_sd.c
index b5a2f2f25ad9..c6285c577db0 100644
--- a/drivers/mmc/host/tifm_sd.c
+++ b/drivers/mmc/host/tifm_sd.c
@@ -13,6 +13,7 @@
#include <linux/highmem.h>
#include <linux/scatterlist.h>
#include <linux/module.h>
+#include <linux/workqueue.h>
#include <asm/io.h>

#define DRIVER_NAME "tifm_sd"
@@ -97,7 +98,7 @@ struct tifm_sd {
unsigned int clk_div;
unsigned long timeout_jiffies;

- struct tasklet_struct finish_tasklet;
+ struct work_struct finish_work;
struct timer_list timer;
struct mmc_request *req;

@@ -463,7 +464,7 @@ static void tifm_sd_check_status(struct tifm_sd *host)
}
}
finish_request:
- tasklet_schedule(&host->finish_tasklet);
+ queue_work(system_bh_wq, &host->finish_work);
}

/* Called from interrupt handler */
@@ -723,9 +724,9 @@ static void tifm_sd_request(struct mmc_host *mmc, struct mmc_request *mrq)
mmc_request_done(mmc, mrq);
}

-static void tifm_sd_end_cmd(struct tasklet_struct *t)
+static void tifm_sd_end_cmd(struct work_struct *t)
{
- struct tifm_sd *host = from_tasklet(host, t, finish_tasklet);
+ struct tifm_sd *host = from_work(host, t, finish_work);
struct tifm_dev *sock = host->dev;
struct mmc_host *mmc = tifm_get_drvdata(sock);
struct mmc_request *mrq;
@@ -960,7 +961,7 @@ static int tifm_sd_probe(struct tifm_dev *sock)
*/
mmc->max_busy_timeout = TIFM_MMCSD_REQ_TIMEOUT_MS;

- tasklet_setup(&host->finish_tasklet, tifm_sd_end_cmd);
+ INIT_WORK(&host->finish_work, tifm_sd_end_cmd);
timer_setup(&host->timer, tifm_sd_abort, 0);

mmc->ops = &tifm_sd_ops;
@@ -999,7 +1000,7 @@ static void tifm_sd_remove(struct tifm_dev *sock)
writel(0, sock->addr + SOCK_MMCSD_INT_ENABLE);
spin_unlock_irqrestore(&sock->lock, flags);

- tasklet_kill(&host->finish_tasklet);
+ cancel_work_sync(&host->finish_work);

spin_lock_irqsave(&sock->lock, flags);
if (host->req) {
@@ -1009,7 +1010,7 @@ static void tifm_sd_remove(struct tifm_dev *sock)
host->req->cmd->error = -ENOMEDIUM;
if (host->req->stop)
host->req->stop->error = -ENOMEDIUM;
- tasklet_schedule(&host->finish_tasklet);
+ queue_work(system_bh_wq, &host->finish_work);
}
spin_unlock_irqrestore(&sock->lock, flags);
mmc_remove_host(mmc);
diff --git a/drivers/mmc/host/tmio_mmc.h b/drivers/mmc/host/tmio_mmc.h
index de56e6534aea..bee13acaa80f 100644
--- a/drivers/mmc/host/tmio_mmc.h
+++ b/drivers/mmc/host/tmio_mmc.h
@@ -21,6 +21,7 @@
#include <linux/scatterlist.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>
+#include <linux/workqueue.h>

#define CTL_SD_CMD 0x00
#define CTL_ARG_REG 0x04
@@ -156,7 +157,7 @@ struct tmio_mmc_host {
bool dma_on;
struct dma_chan *chan_rx;
struct dma_chan *chan_tx;
- struct tasklet_struct dma_issue;
+ struct work_struct dma_issue;
struct scatterlist bounce_sg;
u8 *bounce_buf;

diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c
index 93e912afd3ae..51bd2365795b 100644
--- a/drivers/mmc/host/tmio_mmc_core.c
+++ b/drivers/mmc/host/tmio_mmc_core.c
@@ -608,7 +608,7 @@ static void tmio_mmc_cmd_irq(struct tmio_mmc_host *host, unsigned int stat)
} else {
tmio_mmc_disable_mmc_irqs(host,
TMIO_MASK_READOP);
- tasklet_schedule(&host->dma_issue);
+ queue_work(system_bh_wq, &host->dma_issue);
}
} else {
if (!host->dma_on) {
@@ -616,7 +616,7 @@ static void tmio_mmc_cmd_irq(struct tmio_mmc_host *host, unsigned int stat)
} else {
tmio_mmc_disable_mmc_irqs(host,
TMIO_MASK_WRITEOP);
- tasklet_schedule(&host->dma_issue);
+ queue_work(system_bh_wq, &host->dma_issue);
}
}
} else {
diff --git a/drivers/mmc/host/uniphier-sd.c b/drivers/mmc/host/uniphier-sd.c
index 1404989e6151..d1964111c393 100644
--- a/drivers/mmc/host/uniphier-sd.c
+++ b/drivers/mmc/host/uniphier-sd.c
@@ -17,6 +17,7 @@
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/reset.h>
+#include <linux/workqueue.h>

#include "tmio_mmc.h"

@@ -90,9 +91,9 @@ static void uniphier_sd_dma_endisable(struct tmio_mmc_host *host, int enable)
}

/* external DMA engine */
-static void uniphier_sd_external_dma_issue(struct tasklet_struct *t)
+static void uniphier_sd_external_dma_issue(struct work_struct *t)
{
- struct tmio_mmc_host *host = from_tasklet(host, t, dma_issue);
+ struct tmio_mmc_host *host = from_work(host, t, dma_issue);
struct uniphier_sd_priv *priv = uniphier_sd_priv(host);

uniphier_sd_dma_endisable(host, 1);
@@ -199,7 +200,7 @@ static void uniphier_sd_external_dma_request(struct tmio_mmc_host *host,
host->chan_rx = chan;
host->chan_tx = chan;

- tasklet_setup(&host->dma_issue, uniphier_sd_external_dma_issue);
+ INIT_WORK(&host->dma_issue, uniphier_sd_external_dma_issue);
}

static void uniphier_sd_external_dma_release(struct tmio_mmc_host *host)
@@ -236,9 +237,9 @@ static const struct tmio_mmc_dma_ops uniphier_sd_external_dma_ops = {
.dataend = uniphier_sd_external_dma_dataend,
};

-static void uniphier_sd_internal_dma_issue(struct tasklet_struct *t)
+static void uniphier_sd_internal_dma_issue(struct work_struct *t)
{
- struct tmio_mmc_host *host = from_tasklet(host, t, dma_issue);
+ struct tmio_mmc_host *host = from_work(host, t, dma_issue);
unsigned long flags;

spin_lock_irqsave(&host->lock, flags);
@@ -317,7 +318,7 @@ static void uniphier_sd_internal_dma_request(struct tmio_mmc_host *host,

host->chan_tx = (void *)0xdeadbeaf;

- tasklet_setup(&host->dma_issue, uniphier_sd_internal_dma_issue);
+ INIT_WORK(&host->dma_issue, uniphier_sd_internal_dma_issue);
}

static void uniphier_sd_internal_dma_release(struct tmio_mmc_host *host)
diff --git a/drivers/mmc/host/via-sdmmc.c b/drivers/mmc/host/via-sdmmc.c
index ba6044b16e07..2777b773086b 100644
--- a/drivers/mmc/host/via-sdmmc.c
+++ b/drivers/mmc/host/via-sdmmc.c
@@ -12,6 +12,7 @@
#include <linux/interrupt.h>

#include <linux/mmc/host.h>
+#include <linux/workqueue.h>

#define DRV_NAME "via_sdmmc"

@@ -307,7 +308,7 @@ struct via_crdr_mmc_host {
struct sdhcreg pm_sdhc_reg;

struct work_struct carddet_work;
- struct tasklet_struct finish_tasklet;
+ struct work_struct finish_work;

struct timer_list timer;
spinlock_t lock;
@@ -643,7 +644,7 @@ static void via_sdc_finish_data(struct via_crdr_mmc_host *host)
if (data->stop)
via_sdc_send_command(host, data->stop);
else
- tasklet_schedule(&host->finish_tasklet);
+ queue_work(system_bh_wq, &host->finish_work);
}

static void via_sdc_finish_command(struct via_crdr_mmc_host *host)
@@ -653,7 +654,7 @@ static void via_sdc_finish_command(struct via_crdr_mmc_host *host)
host->cmd->error = 0;

if (!host->cmd->data)
- tasklet_schedule(&host->finish_tasklet);
+ queue_work(system_bh_wq, &host->finish_work);

host->cmd = NULL;
}
@@ -682,7 +683,7 @@ static void via_sdc_request(struct mmc_host *mmc, struct mmc_request *mrq)
status = readw(host->sdhc_mmiobase + VIA_CRDR_SDSTATUS);
if (!(status & VIA_CRDR_SDSTS_SLOTG) || host->reject) {
host->mrq->cmd->error = -ENOMEDIUM;
- tasklet_schedule(&host->finish_tasklet);
+ queue_work(system_bh_wq, &host->finish_work);
} else {
via_sdc_send_command(host, mrq->cmd);
}
@@ -848,7 +849,7 @@ static void via_sdc_cmd_isr(struct via_crdr_mmc_host *host, u16 intmask)
host->cmd->error = -EILSEQ;

if (host->cmd->error)
- tasklet_schedule(&host->finish_tasklet);
+ queue_work(system_bh_wq, &host->finish_work);
else if (intmask & VIA_CRDR_SDSTS_CRD)
via_sdc_finish_command(host);
}
@@ -955,16 +956,16 @@ static void via_sdc_timeout(struct timer_list *t)
sdhost->cmd->error = -ETIMEDOUT;
else
sdhost->mrq->cmd->error = -ETIMEDOUT;
- tasklet_schedule(&sdhost->finish_tasklet);
+ queue_work(system_bh_wq, &sdhost->finish_work);
}
}

spin_unlock_irqrestore(&sdhost->lock, flags);
}

-static void via_sdc_tasklet_finish(struct tasklet_struct *t)
+static void via_sdc_work_finish(struct work_struct *t)
{
- struct via_crdr_mmc_host *host = from_tasklet(host, t, finish_tasklet);
+ struct via_crdr_mmc_host *host = from_work(host, t, finish_work);
unsigned long flags;
struct mmc_request *mrq;

@@ -1005,7 +1006,7 @@ static void via_sdc_card_detect(struct work_struct *work)
pr_err("%s: Card removed during transfer!\n",
mmc_hostname(host->mmc));
host->mrq->cmd->error = -ENOMEDIUM;
- tasklet_schedule(&host->finish_tasklet);
+ queue_work(system_bh_wq, &host->finish_work);
}

spin_unlock_irqrestore(&host->lock, flags);
@@ -1051,7 +1052,7 @@ static void via_init_mmc_host(struct via_crdr_mmc_host *host)

INIT_WORK(&host->carddet_work, via_sdc_card_detect);

- tasklet_setup(&host->finish_tasklet, via_sdc_tasklet_finish);
+ INIT_WORK(&host->finish_work, via_sdc_work_finish);

addrbase = host->sdhc_mmiobase;
writel(0x0, addrbase + VIA_CRDR_SDINTMASK);
@@ -1193,7 +1194,7 @@ static void via_sd_remove(struct pci_dev *pcidev)
sdhost->mrq->cmd->error = -ENOMEDIUM;
if (sdhost->mrq->stop)
sdhost->mrq->stop->error = -ENOMEDIUM;
- tasklet_schedule(&sdhost->finish_tasklet);
+ queue_work(system_bh_wq, &sdhost->finish_work);
}
spin_unlock_irqrestore(&sdhost->lock, flags);

@@ -1203,7 +1204,7 @@ static void via_sd_remove(struct pci_dev *pcidev)

del_timer_sync(&sdhost->timer);

- tasklet_kill(&sdhost->finish_tasklet);
+ cancel_work_sync(&sdhost->finish_work);

/* switch off power */
gatt = readb(sdhost->pcictrl_mmiobase + VIA_CRDR_PCICLKGATT);
diff --git a/drivers/mmc/host/wbsd.c b/drivers/mmc/host/wbsd.c
index f0562f712d98..984e380abc71 100644
--- a/drivers/mmc/host/wbsd.c
+++ b/drivers/mmc/host/wbsd.c
@@ -32,6 +32,7 @@
#include <linux/mmc/sd.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
+#include <linux/workqueue.h>

#include <asm/io.h>
#include <asm/dma.h>
@@ -459,7 +460,7 @@ static void wbsd_empty_fifo(struct wbsd_host *host)
* FIFO threshold interrupts properly.
*/
if ((data->blocks * data->blksz - data->bytes_xfered) < 16)
- tasklet_schedule(&host->fifo_tasklet);
+ queue_work(system_bh_wq, &host->fifo_work);
}

static void wbsd_fill_fifo(struct wbsd_host *host)
@@ -524,7 +525,7 @@ static void wbsd_fill_fifo(struct wbsd_host *host)
* 'FIFO empty' under certain conditions. So we
* need to be a bit more pro-active.
*/
- tasklet_schedule(&host->fifo_tasklet);
+ queue_work(system_bh_wq, &host->fifo_work);
}

static void wbsd_prepare_data(struct wbsd_host *host, struct mmc_data *data)
@@ -746,7 +747,7 @@ static void wbsd_request(struct mmc_host *mmc, struct mmc_request *mrq)
struct mmc_command *cmd;

/*
- * Disable tasklets to avoid a deadlock.
+ * Disable bottom halves to avoid a deadlock.
*/
spin_lock_bh(&host->lock);

@@ -821,7 +822,7 @@ static void wbsd_request(struct mmc_host *mmc, struct mmc_request *mrq)
* Dirty fix for hardware bug.
*/
if (host->dma == -1)
- tasklet_schedule(&host->fifo_tasklet);
+ queue_work(system_bh_wq, &host->fifo_work);

spin_unlock_bh(&host->lock);

@@ -961,13 +962,13 @@ static void wbsd_reset_ignore(struct timer_list *t)
* Card status might have changed during the
* blackout.
*/
- tasklet_schedule(&host->card_tasklet);
+ queue_work(system_bh_wq, &host->card_work);

spin_unlock_bh(&host->lock);
}

/*
- * Tasklets
+ * Work handlers
*/

static inline struct mmc_data *wbsd_get_data(struct wbsd_host *host)
@@ -987,9 +988,9 @@ static inline struct mmc_data *wbsd_get_data(struct wbsd_host *host)
return host->mrq->cmd->data;
}

-static void wbsd_tasklet_card(struct tasklet_struct *t)
+static void wbsd_work_card(struct work_struct *t)
{
- struct wbsd_host *host = from_tasklet(host, t, card_tasklet);
+ struct wbsd_host *host = from_work(host, t, card_work);
u8 csr;
int delay = -1;

@@ -1020,7 +1021,7 @@ static void wbsd_tasklet_card(struct tasklet_struct *t)
wbsd_reset(host);

host->mrq->cmd->error = -ENOMEDIUM;
- tasklet_schedule(&host->finish_tasklet);
+ queue_work(system_bh_wq, &host->finish_work);
}

delay = 0;
@@ -1036,9 +1037,9 @@ static void wbsd_tasklet_card(struct tasklet_struct *t)
mmc_detect_change(host->mmc, msecs_to_jiffies(delay));
}

-static void wbsd_tasklet_fifo(struct tasklet_struct *t)
+static void wbsd_work_fifo(struct work_struct *t)
{
- struct wbsd_host *host = from_tasklet(host, t, fifo_tasklet);
+ struct wbsd_host *host = from_work(host, t, fifo_work);
struct mmc_data *data;

spin_lock(&host->lock);
@@ -1060,16 +1061,16 @@ static void wbsd_tasklet_fifo(struct tasklet_struct *t)
*/
if (host->num_sg == 0) {
wbsd_write_index(host, WBSD_IDX_FIFOEN, 0);
- tasklet_schedule(&host->finish_tasklet);
+ queue_work(system_bh_wq, &host->finish_work);
}

end:
spin_unlock(&host->lock);
}

-static void wbsd_tasklet_crc(struct tasklet_struct *t)
+static void wbsd_work_crc(struct work_struct *t)
{
- struct wbsd_host *host = from_tasklet(host, t, crc_tasklet);
+ struct wbsd_host *host = from_work(host, t, crc_work);
struct mmc_data *data;

spin_lock(&host->lock);
@@ -1085,15 +1086,15 @@ static void wbsd_tasklet_crc(struct tasklet_struct *t)

data->error = -EILSEQ;

- tasklet_schedule(&host->finish_tasklet);
+ queue_work(system_bh_wq, &host->finish_work);

end:
spin_unlock(&host->lock);
}

-static void wbsd_tasklet_timeout(struct tasklet_struct *t)
+static void wbsd_work_timeout(struct work_struct *t)
{
- struct wbsd_host *host = from_tasklet(host, t, timeout_tasklet);
+ struct wbsd_host *host = from_work(host, t, timeout_work);
struct mmc_data *data;

spin_lock(&host->lock);
@@ -1109,15 +1110,15 @@ static void wbsd_tasklet_timeout(struct tasklet_struct *t)

data->error = -ETIMEDOUT;

- tasklet_schedule(&host->finish_tasklet);
+ queue_work(system_bh_wq, &host->finish_work);

end:
spin_unlock(&host->lock);
}

-static void wbsd_tasklet_finish(struct tasklet_struct *t)
+static void wbsd_work_finish(struct work_struct *t)
{
- struct wbsd_host *host = from_tasklet(host, t, finish_tasklet);
+ struct wbsd_host *host = from_work(host, t, finish_work);
struct mmc_data *data;

spin_lock(&host->lock);
@@ -1156,18 +1157,18 @@ static irqreturn_t wbsd_irq(int irq, void *dev_id)
host->isr |= isr;

/*
- * Schedule tasklets as needed.
+ * Schedule work items as needed.
*/
if (isr & WBSD_INT_CARD)
- tasklet_schedule(&host->card_tasklet);
+ queue_work(system_bh_wq, &host->card_work);
if (isr & WBSD_INT_FIFO_THRE)
- tasklet_schedule(&host->fifo_tasklet);
+ queue_work(system_bh_wq, &host->fifo_work);
if (isr & WBSD_INT_CRC)
- tasklet_hi_schedule(&host->crc_tasklet);
+ queue_work(system_bh_highpri_wq, &host->crc_work);
if (isr & WBSD_INT_TIMEOUT)
- tasklet_hi_schedule(&host->timeout_tasklet);
+ queue_work(system_bh_highpri_wq, &host->timeout_work);
if (isr & WBSD_INT_TC)
- tasklet_schedule(&host->finish_tasklet);
+ queue_work(system_bh_wq, &host->finish_work);

return IRQ_HANDLED;
}
@@ -1443,13 +1444,13 @@ static int wbsd_request_irq(struct wbsd_host *host, int irq)
int ret;

/*
- * Set up tasklets. Must be done before requesting interrupt.
+ * Set up work items. Must be done before requesting interrupt.
*/
- tasklet_setup(&host->card_tasklet, wbsd_tasklet_card);
- tasklet_setup(&host->fifo_tasklet, wbsd_tasklet_fifo);
- tasklet_setup(&host->crc_tasklet, wbsd_tasklet_crc);
- tasklet_setup(&host->timeout_tasklet, wbsd_tasklet_timeout);
- tasklet_setup(&host->finish_tasklet, wbsd_tasklet_finish);
+ INIT_WORK(&host->card_work, wbsd_work_card);
+ INIT_WORK(&host->fifo_work, wbsd_work_fifo);
+ INIT_WORK(&host->crc_work, wbsd_work_crc);
+ INIT_WORK(&host->timeout_work, wbsd_work_timeout);
+ INIT_WORK(&host->finish_work, wbsd_work_finish);

/*
* Allocate interrupt.
@@ -1472,11 +1473,11 @@ static void wbsd_release_irq(struct wbsd_host *host)

host->irq = 0;

- tasklet_kill(&host->card_tasklet);
- tasklet_kill(&host->fifo_tasklet);
- tasklet_kill(&host->crc_tasklet);
- tasklet_kill(&host->timeout_tasklet);
- tasklet_kill(&host->finish_tasklet);
+ cancel_work_sync(&host->card_work);
+ cancel_work_sync(&host->fifo_work);
+ cancel_work_sync(&host->crc_work);
+ cancel_work_sync(&host->timeout_work);
+ cancel_work_sync(&host->finish_work);
}

/*
diff --git a/drivers/mmc/host/wbsd.h b/drivers/mmc/host/wbsd.h
index be30b4d8ce4c..942a64a724e4 100644
--- a/drivers/mmc/host/wbsd.h
+++ b/drivers/mmc/host/wbsd.h
@@ -171,11 +171,11 @@ struct wbsd_host
int irq; /* Interrupt */
int dma; /* DMA channel */

- struct tasklet_struct card_tasklet; /* Tasklet structures */
- struct tasklet_struct fifo_tasklet;
- struct tasklet_struct crc_tasklet;
- struct tasklet_struct timeout_tasklet;
- struct tasklet_struct finish_tasklet;
+ struct work_struct card_work; /* Work structures */
+ struct work_struct fifo_work;
+ struct work_struct crc_work;
+ struct work_struct timeout_work;
+ struct work_struct finish_work;

struct timer_list ignore_timer; /* Ignore detection timer */
};
--
2.17.1


2024-03-27 16:10:02

by Allen Pais

[permalink] [raw]
Subject: [PATCH 2/9] dma: Convert from tasklet to BH workqueue

The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added. A BH workqueue
behaves similarly to regular workqueues except that the queued work items
are executed in the BH context.

This patch converts drivers/dma/* from tasklet to BH workqueue.
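
For reference, the handler-side change applied throughout these drivers
follows the sketch below. from_work() is the container_of() helper added
alongside the BH workqueue support in the branch referenced below;
"foo_dev" and "irq_work" are placeholder names used only for illustration:

	/* before: tasklet callback */
	static void foo_tasklet(struct tasklet_struct *t)
	{
		struct foo_dev *fd = from_tasklet(fd, t, irq_tasklet);
		/* process completions */
	}

	/* after: BH work callback */
	static void foo_work(struct work_struct *t)
	{
		struct foo_dev *fd = from_work(fd, t, irq_work);
		/* process completions */
	}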

Based on the work done by Tejun Heo <[email protected]>
Branch: git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10

Signed-off-by: Allen Pais <[email protected]>
---
drivers/dma/altera-msgdma.c | 15 ++++----
drivers/dma/apple-admac.c | 15 ++++----
drivers/dma/at_hdmac.c | 2 +-
drivers/dma/at_xdmac.c | 15 ++++----
drivers/dma/bcm2835-dma.c | 2 +-
drivers/dma/dma-axi-dmac.c | 2 +-
drivers/dma/dma-jz4780.c | 2 +-
.../dma/dw-axi-dmac/dw-axi-dmac-platform.c | 2 +-
drivers/dma/dw-edma/dw-edma-core.c | 2 +-
drivers/dma/dw/core.c | 13 +++----
drivers/dma/dw/regs.h | 3 +-
drivers/dma/ep93xx_dma.c | 15 ++++----
drivers/dma/fsl-edma-common.c | 2 +-
drivers/dma/fsl-qdma.c | 2 +-
drivers/dma/fsl_raid.c | 11 +++---
drivers/dma/fsl_raid.h | 2 +-
drivers/dma/fsldma.c | 15 ++++----
drivers/dma/fsldma.h | 3 +-
drivers/dma/hisi_dma.c | 2 +-
drivers/dma/hsu/hsu.c | 2 +-
drivers/dma/idma64.c | 4 +--
drivers/dma/img-mdc-dma.c | 2 +-
drivers/dma/imx-dma.c | 27 +++++++-------
drivers/dma/imx-sdma.c | 6 ++--
drivers/dma/ioat/dma.c | 17 ++++-----
drivers/dma/ioat/dma.h | 5 +--
drivers/dma/ioat/init.c | 2 +-
drivers/dma/k3dma.c | 19 +++++-----
drivers/dma/mediatek/mtk-cqdma.c | 35 ++++++++++---------
drivers/dma/mediatek/mtk-hsdma.c | 2 +-
drivers/dma/mediatek/mtk-uart-apdma.c | 4 +--
drivers/dma/mmp_pdma.c | 13 +++----
drivers/dma/mmp_tdma.c | 11 +++---
drivers/dma/mpc512x_dma.c | 17 ++++-----
drivers/dma/mv_xor.c | 13 +++----
drivers/dma/mv_xor.h | 5 +--
drivers/dma/mv_xor_v2.c | 23 ++++++------
drivers/dma/mxs-dma.c | 13 +++----
drivers/dma/nbpfaxi.c | 15 ++++----
drivers/dma/owl-dma.c | 2 +-
drivers/dma/pch_dma.c | 17 ++++-----
drivers/dma/pl330.c | 31 ++++++++--------
drivers/dma/plx_dma.c | 13 +++----
drivers/dma/ppc4xx/adma.c | 17 ++++-----
drivers/dma/ppc4xx/adma.h | 5 +--
drivers/dma/pxa_dma.c | 2 +-
drivers/dma/qcom/bam_dma.c | 35 ++++++++++---------
drivers/dma/qcom/gpi.c | 18 +++++-----
drivers/dma/qcom/hidma.c | 11 +++---
drivers/dma/qcom/hidma.h | 5 +--
drivers/dma/qcom/hidma_ll.c | 11 +++---
drivers/dma/qcom/qcom_adm.c | 2 +-
drivers/dma/sa11x0-dma.c | 27 +++++++-------
drivers/dma/sf-pdma/sf-pdma.c | 23 ++++++------
drivers/dma/sf-pdma/sf-pdma.h | 5 +--
drivers/dma/sprd-dma.c | 2 +-
drivers/dma/st_fdma.c | 2 +-
drivers/dma/ste_dma40.c | 17 ++++-----
drivers/dma/sun6i-dma.c | 33 ++++++++---------
drivers/dma/tegra186-gpc-dma.c | 2 +-
drivers/dma/tegra20-apb-dma.c | 19 +++++-----
drivers/dma/tegra210-adma.c | 2 +-
drivers/dma/ti/edma.c | 2 +-
drivers/dma/ti/k3-udma.c | 11 +++---
drivers/dma/ti/omap-dma.c | 2 +-
drivers/dma/timb_dma.c | 23 ++++++------
drivers/dma/txx9dmac.c | 29 +++++++--------
drivers/dma/txx9dmac.h | 5 +--
drivers/dma/virt-dma.c | 9 ++---
drivers/dma/virt-dma.h | 9 ++---
drivers/dma/xgene-dma.c | 21 +++++------
drivers/dma/xilinx/xilinx_dma.c | 23 ++++++------
drivers/dma/xilinx/xilinx_dpdma.c | 21 +++++------
drivers/dma/xilinx/zynqmp_dma.c | 21 +++++------
74 files changed, 442 insertions(+), 395 deletions(-)

diff --git a/drivers/dma/altera-msgdma.c b/drivers/dma/altera-msgdma.c
index a8e3615235b8..611b5290324b 100644
--- a/drivers/dma/altera-msgdma.c
+++ b/drivers/dma/altera-msgdma.c
@@ -20,6 +20,7 @@
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/of_dma.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"

@@ -170,7 +171,7 @@ struct msgdma_sw_desc {
struct msgdma_device {
spinlock_t lock;
struct device *dev;
- struct tasklet_struct irq_tasklet;
+ struct work_struct irq_work;
struct list_head pending_list;
struct list_head free_list;
struct list_head active_list;
@@ -676,12 +677,12 @@ static int msgdma_alloc_chan_resources(struct dma_chan *dchan)
}

/**
- * msgdma_tasklet - Schedule completion tasklet
+ * msgdma_work - Schedule completion work
* @t: Pointer to the Altera sSGDMA channel structure
*/
-static void msgdma_tasklet(struct tasklet_struct *t)
+static void msgdma_work(struct work_struct *t)
{
- struct msgdma_device *mdev = from_tasklet(mdev, t, irq_tasklet);
+ struct msgdma_device *mdev = from_work(mdev, t, irq_work);
u32 count;
u32 __maybe_unused size;
u32 __maybe_unused status;
@@ -740,7 +741,7 @@ static irqreturn_t msgdma_irq_handler(int irq, void *data)
spin_unlock(&mdev->lock);
}

- tasklet_schedule(&mdev->irq_tasklet);
+ queue_work(system_bh_wq, &mdev->irq_work);

/* Clear interrupt in mSGDMA controller */
iowrite32(MSGDMA_CSR_STAT_IRQ, mdev->csr + MSGDMA_CSR_STATUS);
@@ -758,7 +759,7 @@ static void msgdma_dev_remove(struct msgdma_device *mdev)
return;

devm_free_irq(mdev->dev, mdev->irq, mdev);
- tasklet_kill(&mdev->irq_tasklet);
+ cancel_work_sync(&mdev->irq_work);
list_del(&mdev->dmachan.device_node);
}

@@ -844,7 +845,7 @@ static int msgdma_probe(struct platform_device *pdev)
if (ret)
return ret;

- tasklet_setup(&mdev->irq_tasklet, msgdma_tasklet);
+ INIT_WORK(&mdev->irq_work, msgdma_work);

dma_cookie_init(&mdev->dmachan);

diff --git a/drivers/dma/apple-admac.c b/drivers/dma/apple-admac.c
index 9588773dd2eb..7cdb4b6b5f81 100644
--- a/drivers/dma/apple-admac.c
+++ b/drivers/dma/apple-admac.c
@@ -16,6 +16,7 @@
#include <linux/reset.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"

@@ -89,7 +90,7 @@ struct admac_chan {
unsigned int no;
struct admac_data *host;
struct dma_chan chan;
- struct tasklet_struct tasklet;
+ struct work_struct work;

u32 carveout;

@@ -520,7 +521,7 @@ static int admac_terminate_all(struct dma_chan *chan)
adchan->current_tx = NULL;
}
/*
- * Descriptors can only be freed after the tasklet
+ * Descriptors can only be freed after the work
* has been killed (in admac_synchronize).
*/
list_splice_tail_init(&adchan->submitted, &adchan->to_free);
@@ -541,7 +542,7 @@ static void admac_synchronize(struct dma_chan *chan)
list_splice_tail_init(&adchan->to_free, &head);
spin_unlock_irqrestore(&adchan->lock, flags);

- tasklet_kill(&adchan->tasklet);
+ cancel_work_sync(&adchan->work);

list_for_each_entry_safe(adtx, _adtx, &head, node) {
list_del(&adtx->node);
@@ -660,7 +661,7 @@ static void admac_handle_status_desc_done(struct admac_data *ad, int channo)
tx->reclaimed_pos %= 2 * tx->buf_len;

admac_cyclic_write_desc(ad, channo, tx);
- tasklet_schedule(&adchan->tasklet);
+ queue_work(system_bh_wq, &adchan->work);
}
spin_unlock_irqrestore(&adchan->lock, flags);
}
@@ -710,9 +711,9 @@ static irqreturn_t admac_interrupt(int irq, void *devid)
return IRQ_HANDLED;
}

-static void admac_chan_tasklet(struct tasklet_struct *t)
+static void admac_chan_work(struct work_struct *t)
{
- struct admac_chan *adchan = from_tasklet(adchan, t, tasklet);
+ struct admac_chan *adchan = from_work(adchan, t, work);
struct admac_tx *adtx;
struct dmaengine_desc_callback cb;
struct dmaengine_result tx_result;
@@ -884,7 +885,7 @@ static int admac_probe(struct platform_device *pdev)
INIT_LIST_HEAD(&adchan->issued);
INIT_LIST_HEAD(&adchan->to_free);
list_add_tail(&adchan->chan.device_node, &dma->channels);
- tasklet_setup(&adchan->tasklet, admac_chan_tasklet);
+ INIT_WORK(&adchan->work, admac_chan_work);
}

err = reset_control_reset(ad->rstc);
diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
index 40052d1bd0b5..335816473a61 100644
--- a/drivers/dma/at_hdmac.c
+++ b/drivers/dma/at_hdmac.c
@@ -272,7 +272,7 @@ enum atc_status {
* @per_if: peripheral interface
* @mem_if: memory interface
* @status: transmit status information from irq/prep* functions
- * to tasklet (use atomic operations)
+ * to the work handler (use atomic operations)
* @save_cfg: configuration register that is saved on suspend/resume cycle
* @save_dscr: for cyclic operations, preserve next descriptor address in
* the cyclic list on suspend/resume cycle
diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
index 299396121e6d..a73eab2881f9 100644
--- a/drivers/dma/at_xdmac.c
+++ b/drivers/dma/at_xdmac.c
@@ -22,6 +22,7 @@
#include <linux/platform_device.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"

@@ -228,7 +229,7 @@ struct at_xdmac_chan {
u32 save_cndc;
u32 irq_status;
unsigned long status;
- struct tasklet_struct tasklet;
+ struct work_struct work;
struct dma_slave_config sconfig;

spinlock_t lock;
@@ -1762,9 +1763,9 @@ static void at_xdmac_handle_error(struct at_xdmac_chan *atchan)
/* Then continue with usual descriptor management */
}

-static void at_xdmac_tasklet(struct tasklet_struct *t)
+static void at_xdmac_work(struct work_struct *t)
{
- struct at_xdmac_chan *atchan = from_tasklet(atchan, t, tasklet);
+ struct at_xdmac_chan *atchan = from_work(atchan, t, work);
struct at_xdmac *atxdmac = to_at_xdmac(atchan->chan.device);
struct at_xdmac_desc *desc;
struct dma_async_tx_descriptor *txd;
@@ -1869,7 +1870,7 @@ static irqreturn_t at_xdmac_interrupt(int irq, void *dev_id)
if (atchan->irq_status & (AT_XDMAC_CIS_RBEIS | AT_XDMAC_CIS_WBEIS))
at_xdmac_write(atxdmac, AT_XDMAC_GD, atchan->mask);

- tasklet_schedule(&atchan->tasklet);
+ queue_work(system_bh_wq, &atchan->work);
ret = IRQ_HANDLED;
}

@@ -2307,7 +2308,7 @@ static int at_xdmac_probe(struct platform_device *pdev)
return PTR_ERR(atxdmac->clk);
}

- /* Do not use dev res to prevent races with tasklet */
+ /* Do not use dev res to prevent races with work */
ret = request_irq(atxdmac->irq, at_xdmac_interrupt, 0, "at_xdmac", atxdmac);
if (ret) {
dev_err(&pdev->dev, "can't request irq\n");
@@ -2387,7 +2388,7 @@ static int at_xdmac_probe(struct platform_device *pdev)
spin_lock_init(&atchan->lock);
INIT_LIST_HEAD(&atchan->xfers_list);
INIT_LIST_HEAD(&atchan->free_descs_list);
- tasklet_setup(&atchan->tasklet, at_xdmac_tasklet);
+ INIT_WORK(&atchan->work, at_xdmac_work);

/* Clear pending interrupts. */
while (at_xdmac_chan_read(atchan, AT_XDMAC_CIS))
@@ -2449,7 +2450,7 @@ static void at_xdmac_remove(struct platform_device *pdev)
for (i = 0; i < atxdmac->dma.chancnt; i++) {
struct at_xdmac_chan *atchan = &atxdmac->chan[i];

- tasklet_kill(&atchan->tasklet);
+ cancel_work_sync(&atchan->work);
at_xdmac_free_chan_resources(&atchan->chan);
}
}
diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c
index 9d74fe97452e..079d04986b73 100644
--- a/drivers/dma/bcm2835-dma.c
+++ b/drivers/dma/bcm2835-dma.c
@@ -846,7 +846,7 @@ static void bcm2835_dma_free(struct bcm2835_dmadev *od)
list_for_each_entry_safe(c, next, &od->ddev.channels,
vc.chan.device_node) {
list_del(&c->vc.chan.device_node);
- tasklet_kill(&c->vc.task);
+ cancel_work_sync(&c->vc.work);
}

dma_unmap_page_attrs(od->ddev.dev, od->zero_page, PAGE_SIZE,
diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
index 4e339c04fc1e..95df109d3161 100644
--- a/drivers/dma/dma-axi-dmac.c
+++ b/drivers/dma/dma-axi-dmac.c
@@ -1136,7 +1136,7 @@ static void axi_dmac_remove(struct platform_device *pdev)

of_dma_controller_free(pdev->dev.of_node);
free_irq(dmac->irq, dmac);
- tasklet_kill(&dmac->chan.vchan.task);
+ cancel_work_sync(&dmac->chan.vchan.work);
dma_async_device_unregister(&dmac->dma_dev);
clk_disable_unprepare(dmac->clk);
}
diff --git a/drivers/dma/dma-jz4780.c b/drivers/dma/dma-jz4780.c
index c9cfa341db51..d8ce91369176 100644
--- a/drivers/dma/dma-jz4780.c
+++ b/drivers/dma/dma-jz4780.c
@@ -1019,7 +1019,7 @@ static void jz4780_dma_remove(struct platform_device *pdev)
free_irq(jzdma->irq, jzdma);

for (i = 0; i < jzdma->soc_data->nb_channels; i++)
- tasklet_kill(&jzdma->chan[i].vchan.task);
+ cancel_work_sync(&jzdma->chan[i].vchan.work);
}

static const struct jz4780_dma_soc_data jz4740_dma_soc_data = {
diff --git a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
index a86a81ff0caa..1c3c58496885 100644
--- a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+++ b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
@@ -1636,7 +1636,7 @@ static void dw_remove(struct platform_device *pdev)
list_for_each_entry_safe(chan, _chan, &dw->dma.channels,
vc.chan.device_node) {
list_del(&chan->vc.chan.device_node);
- tasklet_kill(&chan->vc.task);
+ cancel_work_sync(&chan->vc.work);
}
}

diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index 68236247059d..34e9d2dcc00b 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -1003,7 +1003,7 @@ int dw_edma_remove(struct dw_edma_chip *chip)
dma_async_device_unregister(&dw->dma);
list_for_each_entry_safe(chan, _chan, &dw->dma.channels,
vc.chan.device_node) {
- tasklet_kill(&chan->vc.task);
+ cancel_work_sync(&chan->vc.work);
list_del(&chan->vc.chan.device_node);
}

diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
index 5f7d690e3dba..0e13750e9a19 100644
--- a/drivers/dma/dw/core.c
+++ b/drivers/dma/dw/core.c
@@ -20,6 +20,7 @@
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/pm_runtime.h>
+#include <linux/workqueue.h>

#include "../dmaengine.h"
#include "internal.h"
@@ -181,7 +182,7 @@ static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc *first)
__func__);
dwc_dump_chan_regs(dwc);

- /* The tasklet will hopefully advance the queue... */
+ /* The work will hopefully advance the queue... */
return;
}

@@ -460,9 +461,9 @@ static void dwc_handle_error(struct dw_dma *dw, struct dw_dma_chan *dwc)
dwc_descriptor_complete(dwc, bad_desc, true);
}

-static void dw_dma_tasklet(struct tasklet_struct *t)
+static void dw_dma_work(struct work_struct *t)
{
- struct dw_dma *dw = from_tasklet(dw, t, tasklet);
+ struct dw_dma *dw = from_work(dw, t, work);
struct dw_dma_chan *dwc;
u32 status_xfer;
u32 status_err;
@@ -526,7 +527,7 @@ static irqreturn_t dw_dma_interrupt(int irq, void *dev_id)
channel_clear_bit(dw, MASK.ERROR, (1 << 8) - 1);
}

- tasklet_schedule(&dw->tasklet);
+ queue_work(system_bh_wq, &dw->work);

return IRQ_HANDLED;
}
@@ -1138,7 +1139,7 @@ int do_dma_probe(struct dw_dma_chip *chip)
goto err_pdata;
}

- tasklet_setup(&dw->tasklet, dw_dma_tasklet);
+ INIT_WORK(&dw->work, dw_dma_work);

err = request_irq(chip->irq, dw_dma_interrupt, IRQF_SHARED,
dw->name, dw);
@@ -1283,7 +1284,7 @@ int do_dma_remove(struct dw_dma_chip *chip)
dma_async_device_unregister(&dw->dma);

free_irq(chip->irq, dw);
- tasklet_kill(&dw->tasklet);
+ cancel_work_sync(&dw->work);

list_for_each_entry_safe(dwc, _dwc, &dw->dma.channels,
chan.device_node) {
diff --git a/drivers/dma/dw/regs.h b/drivers/dma/dw/regs.h
index 76654bd13c1a..332a4eba5e9f 100644
--- a/drivers/dma/dw/regs.h
+++ b/drivers/dma/dw/regs.h
@@ -12,6 +12,7 @@
#include <linux/dmaengine.h>

#include <linux/io-64-nonatomic-hi-lo.h>
+#include <linux/workqueue.h>

#include "internal.h"

@@ -315,7 +316,7 @@ struct dw_dma {
char name[20];
void __iomem *regs;
struct dma_pool *desc_pool;
- struct tasklet_struct tasklet;
+ struct work_struct work;

/* channels */
struct dw_dma_chan *chan;
diff --git a/drivers/dma/ep93xx_dma.c b/drivers/dma/ep93xx_dma.c
index d6c60635e90d..acd9bde36e1b 100644
--- a/drivers/dma/ep93xx_dma.c
+++ b/drivers/dma/ep93xx_dma.c
@@ -24,6 +24,7 @@
#include <linux/slab.h>

#include <linux/platform_data/dma-ep93xx.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"

@@ -136,7 +137,7 @@ struct ep93xx_dma_desc {
* @regs: memory mapped registers
* @irq: interrupt number of the channel
* @clk: clock used by this channel
- * @tasklet: channel specific tasklet used for callbacks
+ * @work: channel specific work used for callbacks
* @lock: lock protecting the fields following
* @flags: flags for the channel
* @buffer: which buffer to use next (0/1)
@@ -167,7 +168,7 @@ struct ep93xx_dma_chan {
void __iomem *regs;
int irq;
struct clk *clk;
- struct tasklet_struct tasklet;
+ struct work_struct work;
/* protects the fields following */
spinlock_t lock;
unsigned long flags;
@@ -745,9 +746,9 @@ static void ep93xx_dma_advance_work(struct ep93xx_dma_chan *edmac)
spin_unlock_irqrestore(&edmac->lock, flags);
}

-static void ep93xx_dma_tasklet(struct tasklet_struct *t)
+static void ep93xx_dma_work(struct work_struct *t)
{
- struct ep93xx_dma_chan *edmac = from_tasklet(edmac, t, tasklet);
+ struct ep93xx_dma_chan *edmac = from_work(edmac, t, work);
struct ep93xx_dma_desc *desc, *d;
struct dmaengine_desc_callback cb;
LIST_HEAD(list);
@@ -802,12 +803,12 @@ static irqreturn_t ep93xx_dma_interrupt(int irq, void *dev_id)
switch (edmac->edma->hw_interrupt(edmac)) {
case INTERRUPT_DONE:
desc->complete = true;
- tasklet_schedule(&edmac->tasklet);
+ queue_work(system_bh_wq, &edmac->work);
break;

case INTERRUPT_NEXT_BUFFER:
if (test_bit(EP93XX_DMA_IS_CYCLIC, &edmac->flags))
- tasklet_schedule(&edmac->tasklet);
+ queue_work(system_bh_wq, &edmac->work);
break;

default:
@@ -1351,7 +1352,7 @@ static int __init ep93xx_dma_probe(struct platform_device *pdev)
INIT_LIST_HEAD(&edmac->active);
INIT_LIST_HEAD(&edmac->queue);
INIT_LIST_HEAD(&edmac->free_list);
- tasklet_setup(&edmac->tasklet, ep93xx_dma_tasklet);
+ INIT_WORK(&edmac->work, ep93xx_dma_work);

list_add_tail(&edmac->chan.device_node,
&dma_dev->channels);
diff --git a/drivers/dma/fsl-edma-common.c b/drivers/dma/fsl-edma-common.c
index b18faa7cfedb..5f568ca11b67 100644
--- a/drivers/dma/fsl-edma-common.c
+++ b/drivers/dma/fsl-edma-common.c
@@ -847,7 +847,7 @@ void fsl_edma_cleanup_vchan(struct dma_device *dmadev)
list_for_each_entry_safe(chan, _chan,
&dmadev->channels, vchan.chan.device_node) {
list_del(&chan->vchan.chan.device_node);
- tasklet_kill(&chan->vchan.task);
+ cancel_work_sync(&chan->vchan.work);
}
}

diff --git a/drivers/dma/fsl-qdma.c b/drivers/dma/fsl-qdma.c
index 5005e138fc23..e5bceec1a396 100644
--- a/drivers/dma/fsl-qdma.c
+++ b/drivers/dma/fsl-qdma.c
@@ -1261,7 +1261,7 @@ static void fsl_qdma_cleanup_vchan(struct dma_device *dmadev)
list_for_each_entry_safe(chan, _chan,
&dmadev->channels, vchan.chan.device_node) {
list_del(&chan->vchan.chan.device_node);
- tasklet_kill(&chan->vchan.task);
+		cancel_work_sync(&chan->vchan.work);
}
}

diff --git a/drivers/dma/fsl_raid.c b/drivers/dma/fsl_raid.c
index 014ff523d5ec..72c495e20f52 100644
--- a/drivers/dma/fsl_raid.c
+++ b/drivers/dma/fsl_raid.c
@@ -70,6 +70,7 @@
#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/slab.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"
#include "fsl_raid.h"
@@ -155,9 +156,9 @@ static void fsl_re_cleanup_descs(struct fsl_re_chan *re_chan)
fsl_re_issue_pending(&re_chan->chan);
}

-static void fsl_re_dequeue(struct tasklet_struct *t)
+static void fsl_re_dequeue(struct work_struct *t)
{
- struct fsl_re_chan *re_chan = from_tasklet(re_chan, t, irqtask);
+ struct fsl_re_chan *re_chan = from_work(re_chan, t, irqtask);
struct fsl_re_desc *desc, *_desc;
struct fsl_re_hw_desc *hwdesc;
unsigned long flags;
@@ -224,7 +225,7 @@ static irqreturn_t fsl_re_isr(int irq, void *data)
/* Clear interrupt */
out_be32(&re_chan->jrregs->jr_interrupt_status, FSL_RE_CLR_INTR);

- tasklet_schedule(&re_chan->irqtask);
+ queue_work(system_bh_wq, &re_chan->irqtask);

return IRQ_HANDLED;
}
@@ -670,7 +671,7 @@ static int fsl_re_chan_probe(struct platform_device *ofdev,
snprintf(chan->name, sizeof(chan->name), "re_jr%02d", q);

chandev = &chan_ofdev->dev;
- tasklet_setup(&chan->irqtask, fsl_re_dequeue);
+ INIT_WORK(&chan->irqtask, fsl_re_dequeue);

ret = request_irq(chan->irq, fsl_re_isr, 0, chan->name, chandev);
if (ret) {
@@ -848,7 +849,7 @@ static int fsl_re_probe(struct platform_device *ofdev)

static void fsl_re_remove_chan(struct fsl_re_chan *chan)
{
- tasklet_kill(&chan->irqtask);
+ cancel_work_sync(&chan->irqtask);

dma_pool_free(chan->re_dev->hw_desc_pool, chan->inb_ring_virt_addr,
chan->inb_phys_addr);
diff --git a/drivers/dma/fsl_raid.h b/drivers/dma/fsl_raid.h
index 69d743c04973..d4dfbb7c9984 100644
--- a/drivers/dma/fsl_raid.h
+++ b/drivers/dma/fsl_raid.h
@@ -275,7 +275,7 @@ struct fsl_re_chan {
struct dma_chan chan;
struct fsl_re_chan_cfg *jrregs;
int irq;
- struct tasklet_struct irqtask;
+ struct work_struct irqtask;
u32 alloc_count;

/* hw descriptor ring for inbound queue*/
diff --git a/drivers/dma/fsldma.c b/drivers/dma/fsldma.c
index 18a6c4bf6275..9f9af6869f2e 100644
--- a/drivers/dma/fsldma.c
+++ b/drivers/dma/fsldma.c
@@ -33,6 +33,7 @@
#include <linux/of_irq.h>
#include <linux/platform_device.h>
#include <linux/fsldma.h>
+#include <linux/workqueue.h>
#include "dmaengine.h"
#include "fsldma.h"

@@ -968,20 +969,20 @@ static irqreturn_t fsldma_chan_irq(int irq, void *data)
chan_err(chan, "irq: unhandled sr 0x%08x\n", stat);

/*
- * Schedule the tasklet to handle all cleanup of the current
+ * Schedule the work to handle all cleanup of the current
* transaction. It will start a new transaction if there is
* one pending.
*/
- tasklet_schedule(&chan->tasklet);
+ queue_work(system_bh_wq, &chan->work);
chan_dbg(chan, "irq: Exit\n");
return IRQ_HANDLED;
}

-static void dma_do_tasklet(struct tasklet_struct *t)
+static void dma_do_work(struct work_struct *t)
{
- struct fsldma_chan *chan = from_tasklet(chan, t, tasklet);
+ struct fsldma_chan *chan = from_work(chan, t, work);

- chan_dbg(chan, "tasklet entry\n");
+ chan_dbg(chan, "work entry\n");

spin_lock(&chan->desc_lock);

@@ -993,7 +994,7 @@ static void dma_do_tasklet(struct tasklet_struct *t)

spin_unlock(&chan->desc_lock);

- chan_dbg(chan, "tasklet exit\n");
+ chan_dbg(chan, "work exit\n");
}

static irqreturn_t fsldma_ctrl_irq(int irq, void *data)
@@ -1152,7 +1153,7 @@ static int fsl_dma_chan_probe(struct fsldma_device *fdev,
}

fdev->chan[chan->id] = chan;
- tasklet_setup(&chan->tasklet, dma_do_tasklet);
+ INIT_WORK(&chan->work, dma_do_work);
snprintf(chan->name, sizeof(chan->name), "chan%d", chan->id);

/* Initialize the channel */
diff --git a/drivers/dma/fsldma.h b/drivers/dma/fsldma.h
index 308bed0a560a..c165091e5c6a 100644
--- a/drivers/dma/fsldma.h
+++ b/drivers/dma/fsldma.h
@@ -12,6 +12,7 @@
#include <linux/device.h>
#include <linux/dmapool.h>
#include <linux/dmaengine.h>
+#include <linux/workqueue.h>

/* Define data structures needed by Freescale
* MPC8540 and MPC8349 DMA controller.
@@ -172,7 +173,7 @@ struct fsldma_chan {
struct device *dev; /* Channel device */
int irq; /* Channel IRQ */
int id; /* Raw id of this channel */
- struct tasklet_struct tasklet;
+ struct work_struct work;
u32 feature;
bool idle; /* DMA controller is idle */
#ifdef CONFIG_PM
diff --git a/drivers/dma/hisi_dma.c b/drivers/dma/hisi_dma.c
index 4c47bff81064..5bf7d8b3e959 100644
--- a/drivers/dma/hisi_dma.c
+++ b/drivers/dma/hisi_dma.c
@@ -720,7 +720,7 @@ static void hisi_dma_disable_qps(struct hisi_dma_dev *hdma_dev)

for (i = 0; i < hdma_dev->chan_num; i++) {
hisi_dma_disable_qp(hdma_dev, i);
- tasklet_kill(&hdma_dev->chan[i].vc.task);
+ cancel_work_sync(&hdma_dev->chan[i].vc.work);
}
}

diff --git a/drivers/dma/hsu/hsu.c b/drivers/dma/hsu/hsu.c
index af5a2e252c25..4ea3f18a20ac 100644
--- a/drivers/dma/hsu/hsu.c
+++ b/drivers/dma/hsu/hsu.c
@@ -500,7 +500,7 @@ int hsu_dma_remove(struct hsu_dma_chip *chip)
for (i = 0; i < hsu->nr_channels; i++) {
struct hsu_dma_chan *hsuc = &hsu->chan[i];

- tasklet_kill(&hsuc->vchan.task);
+ cancel_work_sync(&hsuc->vchan.work);
}

return 0;
diff --git a/drivers/dma/idma64.c b/drivers/dma/idma64.c
index 78a938969d7d..7715be6457e8 100644
--- a/drivers/dma/idma64.c
+++ b/drivers/dma/idma64.c
@@ -613,14 +613,14 @@ static void idma64_remove(struct idma64_chip *chip)

/*
* Explicitly call devm_request_irq() to avoid the side effects with
- * the scheduled tasklets.
+	 * the scheduled work items.
*/
devm_free_irq(chip->dev, chip->irq, idma64);

for (i = 0; i < idma64->dma.chancnt; i++) {
struct idma64_chan *idma64c = &idma64->chan[i];

- tasklet_kill(&idma64c->vchan.task);
+ cancel_work_sync(&idma64c->vchan.work);
}
}

diff --git a/drivers/dma/img-mdc-dma.c b/drivers/dma/img-mdc-dma.c
index 0532dd2640dc..71245f1b7e28 100644
--- a/drivers/dma/img-mdc-dma.c
+++ b/drivers/dma/img-mdc-dma.c
@@ -1031,7 +1031,7 @@ static void mdc_dma_remove(struct platform_device *pdev)

devm_free_irq(&pdev->dev, mchan->irq, mchan);

- tasklet_kill(&mchan->vc.task);
+ cancel_work_sync(&mchan->vc.work);
}

pm_runtime_disable(&pdev->dev);
diff --git a/drivers/dma/imx-dma.c b/drivers/dma/imx-dma.c
index ebf7c115d553..1c2baa81c6a7 100644
--- a/drivers/dma/imx-dma.c
+++ b/drivers/dma/imx-dma.c
@@ -26,6 +26,7 @@

#include <asm/irq.h>
#include <linux/dma/imx-dma.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"
#define IMXDMA_MAX_CHAN_DESCRIPTORS 16
@@ -144,7 +145,7 @@ struct imxdma_channel {
struct imxdma_engine *imxdma;
unsigned int channel;

- struct tasklet_struct dma_tasklet;
+ struct work_struct dma_work;
struct list_head ld_free;
struct list_head ld_queue;
struct list_head ld_active;
@@ -345,8 +346,8 @@ static void imxdma_watchdog(struct timer_list *t)

imx_dmav1_writel(imxdma, 0, DMA_CCR(channel));

- /* Tasklet watchdog error handler */
- tasklet_schedule(&imxdmac->dma_tasklet);
+	/* Schedule work to handle the watchdog error */
+ queue_work(system_bh_wq, &imxdmac->dma_work);
dev_dbg(imxdma->dev, "channel %d: watchdog timeout!\n",
imxdmac->channel);
}
@@ -391,8 +392,8 @@ static irqreturn_t imxdma_err_handler(int irq, void *dev_id)
imx_dmav1_writel(imxdma, 1 << i, DMA_DBOSR);
errcode |= IMX_DMA_ERR_BUFFER;
}
- /* Tasklet error handler */
- tasklet_schedule(&imxdma->channel[i].dma_tasklet);
+		/* Schedule work to handle the error */
+ queue_work(system_bh_wq, &imxdma->channel[i].dma_work);

dev_warn(imxdma->dev,
"DMA timeout on channel %d -%s%s%s%s\n", i,
@@ -449,8 +450,8 @@ static void dma_irq_handle_channel(struct imxdma_channel *imxdmac)
imx_dmav1_writel(imxdma, tmp, DMA_CCR(chno));

if (imxdma_chan_is_doing_cyclic(imxdmac))
- /* Tasklet progression */
- tasklet_schedule(&imxdmac->dma_tasklet);
+			/* Schedule work to progress the cyclic transfer */
+ queue_work(system_bh_wq, &imxdmac->dma_work);

return;
}
@@ -463,8 +464,8 @@ static void dma_irq_handle_channel(struct imxdma_channel *imxdmac)

out:
imx_dmav1_writel(imxdma, 0, DMA_CCR(chno));
- /* Tasklet irq */
- tasklet_schedule(&imxdmac->dma_tasklet);
+	/* Defer irq handling to the work */
+ queue_work(system_bh_wq, &imxdmac->dma_work);
}

static irqreturn_t dma_irq_handler(int irq, void *dev_id)
@@ -593,9 +594,9 @@ static int imxdma_xfer_desc(struct imxdma_desc *d)
return 0;
}

-static void imxdma_tasklet(struct tasklet_struct *t)
+static void imxdma_work(struct work_struct *t)
{
- struct imxdma_channel *imxdmac = from_tasklet(imxdmac, t, dma_tasklet);
+ struct imxdma_channel *imxdmac = from_work(imxdmac, t, dma_work);
struct imxdma_engine *imxdma = imxdmac->imxdma;
struct imxdma_desc *desc, *next_desc;
unsigned long flags;
@@ -1143,7 +1144,7 @@ static int __init imxdma_probe(struct platform_device *pdev)
INIT_LIST_HEAD(&imxdmac->ld_free);
INIT_LIST_HEAD(&imxdmac->ld_active);

- tasklet_setup(&imxdmac->dma_tasklet, imxdma_tasklet);
+ INIT_WORK(&imxdmac->dma_work, imxdma_work);
imxdmac->chan.device = &imxdma->dma_device;
dma_cookie_init(&imxdmac->chan);
imxdmac->channel = i;
@@ -1212,7 +1213,7 @@ static void imxdma_free_irq(struct platform_device *pdev, struct imxdma_engine *
if (!is_imx1_dma(imxdma))
disable_irq(imxdmac->irq);

- tasklet_kill(&imxdmac->dma_tasklet);
+ cancel_work_sync(&imxdmac->dma_work);
}
}

diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
index 9b42f5e96b1e..853f7e80bb76 100644
--- a/drivers/dma/imx-sdma.c
+++ b/drivers/dma/imx-sdma.c
@@ -881,7 +881,7 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
/*
* The callback is called from the interrupt context in order
* to reduce latency and to avoid the risk of altering the
- * SDMA transaction status by the time the client tasklet is
+ * SDMA transaction status by the time the client work is
* executed.
*/
spin_unlock(&sdmac->vc.lock);
@@ -2364,11 +2364,11 @@ static void sdma_remove(struct platform_device *pdev)
kfree(sdma->script_addrs);
clk_unprepare(sdma->clk_ahb);
clk_unprepare(sdma->clk_ipg);
- /* Kill the tasklet */
+	/* Cancel the work */
for (i = 0; i < MAX_DMA_CHANNELS; i++) {
struct sdma_channel *sdmac = &sdma->channel[i];

- tasklet_kill(&sdmac->vc.task);
+ cancel_work_sync(&sdmac->vc.work);
sdma_free_chan_resources(&sdmac->vc.chan);
}

diff --git a/drivers/dma/ioat/dma.c b/drivers/dma/ioat/dma.c
index 79d8957f9e60..16087e2cc4ae 100644
--- a/drivers/dma/ioat/dma.c
+++ b/drivers/dma/ioat/dma.c
@@ -110,7 +111,7 @@ irqreturn_t ioat_dma_do_interrupt(int irq, void *data)
for_each_set_bit(bit, &attnstatus, BITS_PER_LONG) {
ioat_chan = ioat_chan_by_index(instance, bit);
if (test_bit(IOAT_RUN, &ioat_chan->state))
- tasklet_schedule(&ioat_chan->cleanup_task);
+ queue_work(system_bh_wq, &ioat_chan->cleanup_task);
}

writeb(intrctrl, instance->reg_base + IOAT_INTRCTRL_OFFSET);
@@ -127,7 +128,7 @@ irqreturn_t ioat_dma_do_interrupt_msix(int irq, void *data)
struct ioatdma_chan *ioat_chan = data;

if (test_bit(IOAT_RUN, &ioat_chan->state))
- tasklet_schedule(&ioat_chan->cleanup_task);
+ queue_work(system_bh_wq, &ioat_chan->cleanup_task);

return IRQ_HANDLED;
}
@@ -139,8 +140,8 @@ void ioat_stop(struct ioatdma_chan *ioat_chan)
int chan_id = chan_num(ioat_chan);
struct msix_entry *msix;

- /* 1/ stop irq from firing tasklets
- * 2/ stop the tasklet from re-arming irqs
+	/* 1/ stop irq from queueing work
+	 * 2/ stop the work from re-arming irqs
*/
clear_bit(IOAT_RUN, &ioat_chan->state);

@@ -161,8 +162,8 @@ void ioat_stop(struct ioatdma_chan *ioat_chan)
/* flush inflight timers */
del_timer_sync(&ioat_chan->timer);

- /* flush inflight tasklet runs */
- tasklet_kill(&ioat_chan->cleanup_task);
+	/* flush any in-flight work */
+ cancel_work_sync(&ioat_chan->cleanup_task);

/* final cleanup now that everything is quiesced and can't re-arm */
ioat_cleanup_event(&ioat_chan->cleanup_task);
@@ -690,9 +691,9 @@ static void ioat_cleanup(struct ioatdma_chan *ioat_chan)
spin_unlock_bh(&ioat_chan->cleanup_lock);
}

-void ioat_cleanup_event(struct tasklet_struct *t)
+void ioat_cleanup_event(struct work_struct *t)
{
- struct ioatdma_chan *ioat_chan = from_tasklet(ioat_chan, t, cleanup_task);
+ struct ioatdma_chan *ioat_chan = from_work(ioat_chan, t, cleanup_task);

ioat_cleanup(ioat_chan);
if (!test_bit(IOAT_RUN, &ioat_chan->state))
diff --git a/drivers/dma/ioat/dma.h b/drivers/dma/ioat/dma.h
index a180171087a8..d5629df04d7d 100644
--- a/drivers/dma/ioat/dma.h
+++ b/drivers/dma/ioat/dma.h
@@ -12,6 +12,7 @@
#include <linux/pci_ids.h>
#include <linux/circ_buf.h>
#include <linux/interrupt.h>
+#include <linux/workqueue.h>
#include "registers.h"
#include "hw.h"

@@ -109,7 +110,7 @@ struct ioatdma_chan {
struct ioatdma_device *ioat_dma;
dma_addr_t completion_dma;
u64 *completion;
- struct tasklet_struct cleanup_task;
+ struct work_struct cleanup_task;
struct kobject kobj;

/* ioat v2 / v3 channel attributes
@@ -392,7 +393,7 @@ int ioat_reset_hw(struct ioatdma_chan *ioat_chan);
enum dma_status
ioat_tx_status(struct dma_chan *c, dma_cookie_t cookie,
struct dma_tx_state *txstate);
-void ioat_cleanup_event(struct tasklet_struct *t);
+void ioat_cleanup_event(struct work_struct *t);
void ioat_timer_event(struct timer_list *t);
int ioat_check_space_lock(struct ioatdma_chan *ioat_chan, int num_descs);
void ioat_issue_pending(struct dma_chan *chan);
diff --git a/drivers/dma/ioat/init.c b/drivers/dma/ioat/init.c
index 9c364e92cb82..7b86ec4d0fa4 100644
--- a/drivers/dma/ioat/init.c
+++ b/drivers/dma/ioat/init.c
@@ -776,7 +776,7 @@ ioat_init_channel(struct ioatdma_device *ioat_dma,
list_add_tail(&ioat_chan->dma_chan.device_node, &dma->channels);
ioat_dma->idx[idx] = ioat_chan;
timer_setup(&ioat_chan->timer, ioat_timer_event, 0);
- tasklet_setup(&ioat_chan->cleanup_task, ioat_cleanup_event);
+ INIT_WORK(&ioat_chan->cleanup_task, ioat_cleanup_event);
}

#define IOAT_NUM_SRC_TEST 6 /* must be <= 8 */
diff --git a/drivers/dma/k3dma.c b/drivers/dma/k3dma.c
index 5de8c21d41e7..d852135e8544 100644
--- a/drivers/dma/k3dma.c
+++ b/drivers/dma/k3dma.c
@@ -18,6 +18,7 @@
#include <linux/of.h>
#include <linux/clk.h>
#include <linux/of_dma.h>
+#include <linux/workqueue.h>

#include "virt-dma.h"

@@ -98,7 +99,7 @@ struct k3_dma_phy {
struct k3_dma_dev {
struct dma_device slave;
void __iomem *base;
- struct tasklet_struct task;
+ struct work_struct work;
spinlock_t lock;
struct list_head chan_pending;
struct k3_dma_phy *phy;
@@ -252,7 +253,7 @@ static irqreturn_t k3_dma_int_handler(int irq, void *dev_id)
writel_relaxed(err2, d->base + INT_ERR2_RAW);

if (irq_chan)
- tasklet_schedule(&d->task);
+ queue_work(system_bh_wq, &d->work);

if (irq_chan || err1 || err2)
return IRQ_HANDLED;
@@ -295,9 +296,9 @@ static int k3_dma_start_txd(struct k3_dma_chan *c)
return -EAGAIN;
}

-static void k3_dma_tasklet(struct tasklet_struct *t)
+static void k3_dma_work(struct work_struct *t)
{
- struct k3_dma_dev *d = from_tasklet(d, t, task);
+ struct k3_dma_dev *d = from_work(d, t, work);
struct k3_dma_phy *p;
struct k3_dma_chan *c, *cn;
unsigned pch, pch_alloc = 0;
@@ -432,8 +433,8 @@ static void k3_dma_issue_pending(struct dma_chan *chan)
if (list_empty(&c->node)) {
/* if new channel, add chan_pending */
list_add_tail(&c->node, &d->chan_pending);
- /* check in tasklet */
- tasklet_schedule(&d->task);
+ /* check in work */
+ queue_work(system_bh_wq, &d->work);
dev_dbg(d->slave.dev, "vchan %p: issued\n", &c->vc);
}
}
@@ -956,7 +957,7 @@ static int k3_dma_probe(struct platform_device *op)

spin_lock_init(&d->lock);
INIT_LIST_HEAD(&d->chan_pending);
- tasklet_setup(&d->task, k3_dma_tasklet);
+ INIT_WORK(&d->work, k3_dma_work);
platform_set_drvdata(op, d);
dev_info(&op->dev, "initialized\n");

@@ -981,9 +982,9 @@ static void k3_dma_remove(struct platform_device *op)

list_for_each_entry_safe(c, cn, &d->slave.channels, vc.chan.device_node) {
list_del(&c->vc.chan.device_node);
- tasklet_kill(&c->vc.task);
+ cancel_work_sync(&c->vc.work);
}
- tasklet_kill(&d->task);
+ cancel_work_sync(&d->work);
clk_disable_unprepare(d->clk);
}

diff --git a/drivers/dma/mediatek/mtk-cqdma.c b/drivers/dma/mediatek/mtk-cqdma.c
index 529100c5b9f5..cac4e9f2b07b 100644
--- a/drivers/dma/mediatek/mtk-cqdma.c
+++ b/drivers/dma/mediatek/mtk-cqdma.c
@@ -23,6 +23,7 @@
#include <linux/pm_runtime.h>
#include <linux/refcount.h>
#include <linux/slab.h>
+#include <linux/workqueue.h>

#include "../virt-dma.h"

@@ -94,7 +95,7 @@ struct mtk_cqdma_vdesc {
* @base: The mapped register I/O base of this PC
* @irq: The IRQ that this PC are using
* @refcnt: Track how many VCs are using this PC
- * @tasklet: Tasklet for this PC
+ * @work: Work for this PC
* @lock: Lock protect agaisting multiple VCs access PC
*/
struct mtk_cqdma_pchan {
@@ -104,7 +105,7 @@ struct mtk_cqdma_pchan {

refcount_t refcnt;

- struct tasklet_struct tasklet;
+ struct work_struct work;

/* lock to protect PC */
spinlock_t lock;
@@ -355,9 +356,9 @@ static struct mtk_cqdma_vdesc
return ret;
}

-static void mtk_cqdma_tasklet_cb(struct tasklet_struct *t)
+static void mtk_cqdma_work_cb(struct work_struct *t)
{
- struct mtk_cqdma_pchan *pc = from_tasklet(pc, t, tasklet);
+ struct mtk_cqdma_pchan *pc = from_work(pc, t, work);
struct mtk_cqdma_vdesc *cvd = NULL;
unsigned long flags;

@@ -378,7 +379,7 @@ static void mtk_cqdma_tasklet_cb(struct tasklet_struct *t)
kfree(cvd);
}

- /* re-enable interrupt before leaving tasklet */
+ /* re-enable interrupt before leaving work */
enable_irq(pc->irq);
}

@@ -386,11 +387,11 @@ static irqreturn_t mtk_cqdma_irq(int irq, void *devid)
{
struct mtk_cqdma_device *cqdma = devid;
irqreturn_t ret = IRQ_NONE;
- bool schedule_tasklet = false;
+	bool sched_work = false;
u32 i;

/* clear interrupt flags for each PC */
- for (i = 0; i < cqdma->dma_channels; ++i, schedule_tasklet = false) {
+	for (i = 0; i < cqdma->dma_channels; ++i, sched_work = false) {
spin_lock(&cqdma->pc[i]->lock);
if (mtk_dma_read(cqdma->pc[i],
MTK_CQDMA_INT_FLAG) & MTK_CQDMA_INT_FLAG_BIT) {
@@ -398,17 +399,17 @@ static irqreturn_t mtk_cqdma_irq(int irq, void *devid)
mtk_dma_clr(cqdma->pc[i], MTK_CQDMA_INT_FLAG,
MTK_CQDMA_INT_FLAG_BIT);

- schedule_tasklet = true;
+			sched_work = true;
ret = IRQ_HANDLED;
}
spin_unlock(&cqdma->pc[i]->lock);

- if (schedule_tasklet) {
+		if (sched_work) {
/* disable interrupt */
disable_irq_nosync(cqdma->pc[i]->irq);

- /* schedule the tasklet to handle the transactions */
- tasklet_schedule(&cqdma->pc[i]->tasklet);
+ /* schedule the work to handle the transactions */
+ queue_work(system_bh_wq, &cqdma->pc[i]->work);
}
}

@@ -472,7 +473,7 @@ static void mtk_cqdma_issue_pending(struct dma_chan *c)
unsigned long pc_flags;
unsigned long vc_flags;

- /* acquire PC's lock before VS's lock for lock dependency in tasklet */
+	/* acquire PC's lock before VC's lock for lock dependency in work */
spin_lock_irqsave(&cvc->pc->lock, pc_flags);
spin_lock_irqsave(&cvc->vc.lock, vc_flags);

@@ -871,9 +872,9 @@ static int mtk_cqdma_probe(struct platform_device *pdev)

platform_set_drvdata(pdev, cqdma);

- /* initialize tasklet for each PC */
+ /* initialize work for each PC */
for (i = 0; i < cqdma->dma_channels; ++i)
- tasklet_setup(&cqdma->pc[i]->tasklet, mtk_cqdma_tasklet_cb);
+ INIT_WORK(&cqdma->pc[i]->work, mtk_cqdma_work_cb);

dev_info(&pdev->dev, "MediaTek CQDMA driver registered\n");

@@ -892,12 +893,12 @@ static void mtk_cqdma_remove(struct platform_device *pdev)
unsigned long flags;
int i;

- /* kill VC task */
+	/* cancel VC work */
for (i = 0; i < cqdma->dma_requests; i++) {
vc = &cqdma->vc[i];

list_del(&vc->vc.chan.device_node);
- tasklet_kill(&vc->vc.task);
+ cancel_work_sync(&vc->vc.work);
}

/* disable interrupt */
@@ -910,7 +911,7 @@ static void mtk_cqdma_remove(struct platform_device *pdev)
/* Waits for any pending IRQ handlers to complete */
synchronize_irq(cqdma->pc[i]->irq);

- tasklet_kill(&cqdma->pc[i]->tasklet);
+ cancel_work_sync(&cqdma->pc[i]->work);
}

/* disable hardware */
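
One ordering detail the remove path above preserves is that the interrupt
source is quiesced before the work is cancelled, so a late IRQ cannot
re-queue it behind cancel_work_sync(). A hedged sketch of that shutdown
sequence (the foo_* names and the register offset are hypothetical):

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/workqueue.h>

#define FOO_INT_ENABLE	0x10	/* hypothetical interrupt-enable register */

struct foo_dev {
	void __iomem *regs;
	int irq;
	struct work_struct work;
};

static void foo_shutdown(struct foo_dev *fd)
{
	/* mask the interrupt at the device so nothing new is raised */
	writel(0, fd->regs + FOO_INT_ENABLE);

	/* wait for a handler that may still be running on another CPU */
	synchronize_irq(fd->irq);

	/* the work can no longer be re-queued; cancel it and wait */
	cancel_work_sync(&fd->work);
}
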
diff --git a/drivers/dma/mediatek/mtk-hsdma.c b/drivers/dma/mediatek/mtk-hsdma.c
index 36ff11e909ea..a70eb9ae25ff 100644
--- a/drivers/dma/mediatek/mtk-hsdma.c
+++ b/drivers/dma/mediatek/mtk-hsdma.c
@@ -1020,7 +1020,7 @@ static void mtk_hsdma_remove(struct platform_device *pdev)
vc = &hsdma->vc[i];

list_del(&vc->vc.chan.device_node);
- tasklet_kill(&vc->vc.task);
+ cancel_work_sync(&vc->vc.work);
}

/* Disable DMA interrupt */
diff --git a/drivers/dma/mediatek/mtk-uart-apdma.c b/drivers/dma/mediatek/mtk-uart-apdma.c
index 1bdc1500be40..63123ff9d451 100644
--- a/drivers/dma/mediatek/mtk-uart-apdma.c
+++ b/drivers/dma/mediatek/mtk-uart-apdma.c
@@ -312,7 +312,7 @@ static void mtk_uart_apdma_free_chan_resources(struct dma_chan *chan)

free_irq(c->irq, chan);

- tasklet_kill(&c->vc.task);
+ cancel_work_sync(&c->vc.work);

vchan_free_chan_resources(&c->vc);

@@ -463,7 +463,7 @@ static void mtk_uart_apdma_free(struct mtk_uart_apdmadev *mtkd)
struct mtk_chan, vc.chan.device_node);

list_del(&c->vc.chan.device_node);
- tasklet_kill(&c->vc.task);
+		cancel_work_sync(&c->vc.work);
}
}

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index 136fcaeff8dd..083ca760241d 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -17,6 +17,7 @@
#include <linux/dmapool.h>
#include <linux/of_dma.h>
#include <linux/of.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"

@@ -97,7 +98,7 @@ struct mmp_pdma_chan {
* is in cyclic mode */

/* channel's basic info */
- struct tasklet_struct tasklet;
+ struct work_struct work;
u32 dcmd;
u32 drcmr;
u32 dev_addr;
@@ -204,7 +205,7 @@ static irqreturn_t mmp_pdma_chan_handler(int irq, void *dev_id)
if (clear_chan_irq(phy) != 0)
return IRQ_NONE;

- tasklet_schedule(&phy->vchan->tasklet);
+ queue_work(system_bh_wq, &phy->vchan->work);
return IRQ_HANDLED;
}

@@ -861,13 +862,13 @@ static void mmp_pdma_issue_pending(struct dma_chan *dchan)
}

/*
- * dma_do_tasklet
+ * dma_do_work
* Do call back
* Start pending list
*/
-static void dma_do_tasklet(struct tasklet_struct *t)
+static void dma_do_work(struct work_struct *t)
{
- struct mmp_pdma_chan *chan = from_tasklet(chan, t, tasklet);
+ struct mmp_pdma_chan *chan = from_work(chan, t, work);
struct mmp_pdma_desc_sw *desc, *_desc;
LIST_HEAD(chain_cleanup);
unsigned long flags;
@@ -984,7 +985,7 @@ static int mmp_pdma_chan_init(struct mmp_pdma_device *pdev, int idx, int irq)
spin_lock_init(&chan->desc_lock);
chan->dev = pdev->dev;
chan->chan.device = &pdev->device;
- tasklet_setup(&chan->tasklet, dma_do_tasklet);
+ INIT_WORK(&chan->work, dma_do_work);
INIT_LIST_HEAD(&chan->chain_pending);
INIT_LIST_HEAD(&chan->chain_running);

diff --git a/drivers/dma/mmp_tdma.c b/drivers/dma/mmp_tdma.c
index b76fe99e1151..05971f01a1ac 100644
--- a/drivers/dma/mmp_tdma.c
+++ b/drivers/dma/mmp_tdma.c
@@ -18,6 +18,7 @@
#include <linux/device.h>
#include <linux/genalloc.h>
#include <linux/of_dma.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"

@@ -102,7 +103,7 @@ struct mmp_tdma_chan {
struct device *dev;
struct dma_chan chan;
struct dma_async_tx_descriptor desc;
- struct tasklet_struct tasklet;
+ struct work_struct work;

struct mmp_tdma_desc *desc_arr;
dma_addr_t desc_arr_phys;
@@ -320,7 +321,7 @@ static irqreturn_t mmp_tdma_chan_handler(int irq, void *dev_id)
struct mmp_tdma_chan *tdmac = dev_id;

if (mmp_tdma_clear_chan_irq(tdmac) == 0) {
- tasklet_schedule(&tdmac->tasklet);
+ queue_work(system_bh_wq, &tdmac->work);
return IRQ_HANDLED;
} else
return IRQ_NONE;
@@ -346,9 +347,9 @@ static irqreturn_t mmp_tdma_int_handler(int irq, void *dev_id)
return IRQ_NONE;
}

-static void dma_do_tasklet(struct tasklet_struct *t)
+static void dma_do_work(struct work_struct *t)
{
- struct mmp_tdma_chan *tdmac = from_tasklet(tdmac, t, tasklet);
+ struct mmp_tdma_chan *tdmac = from_work(tdmac, t, work);

dmaengine_desc_get_callback_invoke(&tdmac->desc, NULL);
}
@@ -584,7 +585,7 @@ static int mmp_tdma_chan_init(struct mmp_tdma_device *tdev,
tdmac->pool = pool;
tdmac->status = DMA_COMPLETE;
tdev->tdmac[tdmac->idx] = tdmac;
- tasklet_setup(&tdmac->tasklet, dma_do_tasklet);
+ INIT_WORK(&tdmac->work, dma_do_work);

/* add the channel to tdma_chan list */
list_add_tail(&tdmac->chan.device_node,
diff --git a/drivers/dma/mpc512x_dma.c b/drivers/dma/mpc512x_dma.c
index 68c247a46321..88a1388a78e4 100644
--- a/drivers/dma/mpc512x_dma.c
+++ b/drivers/dma/mpc512x_dma.c
@@ -43,6 +43,7 @@
#include <linux/platform_device.h>

#include <linux/random.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"

@@ -214,7 +215,7 @@ struct mpc_dma_chan {

struct mpc_dma {
struct dma_device dma;
- struct tasklet_struct tasklet;
+ struct work_struct work;
struct mpc_dma_chan channels[MPC_DMA_CHANNELS];
struct mpc_dma_regs __iomem *regs;
struct mpc_dma_tcd __iomem *tcd;
@@ -366,8 +367,8 @@ static irqreturn_t mpc_dma_irq(int irq, void *data)
mpc_dma_irq_process(mdma, in_be32(&mdma->regs->dmaintl),
in_be32(&mdma->regs->dmaerrl), 0);

- /* Schedule tasklet */
- tasklet_schedule(&mdma->tasklet);
+ /* Schedule work */
+ queue_work(system_bh_wq, &mdma->work);

return IRQ_HANDLED;
}
@@ -413,10 +414,10 @@ static void mpc_dma_process_completed(struct mpc_dma *mdma)
}
}

-/* DMA Tasklet */
-static void mpc_dma_tasklet(struct tasklet_struct *t)
+/* DMA Work */
+static void mpc_dma_work(struct work_struct *t)
{
- struct mpc_dma *mdma = from_tasklet(mdma, t, tasklet);
+ struct mpc_dma *mdma = from_work(mdma, t, work);
unsigned long flags;
uint es;

@@ -1010,7 +1011,7 @@ static int mpc_dma_probe(struct platform_device *op)
list_add_tail(&mchan->chan.device_node, &dma->channels);
}

- tasklet_setup(&mdma->tasklet, mpc_dma_tasklet);
+ INIT_WORK(&mdma->work, mpc_dma_work);

/*
* Configure DMA Engine:
@@ -1098,7 +1099,7 @@ static void mpc_dma_remove(struct platform_device *op)
}
free_irq(mdma->irq, mdma);
irq_dispose_mapping(mdma->irq);
- tasklet_kill(&mdma->tasklet);
+ cancel_work_sync(&mdma->work);
}

static const struct of_device_id mpc_dma_match[] = {
diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
index bcd3b623ac6c..5b8c03a43089 100644
--- a/drivers/dma/mv_xor.c
+++ b/drivers/dma/mv_xor.c
@@ -19,6 +19,7 @@
#include <linux/irqdomain.h>
#include <linux/cpumask.h>
#include <linux/platform_data/dma-mv_xor.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"
#include "mv_xor.h"
@@ -327,7 +328,7 @@ static void mv_chan_slot_cleanup(struct mv_xor_chan *mv_chan)
* some descriptors are still waiting
* to be cleaned
*/
- tasklet_schedule(&mv_chan->irq_tasklet);
+ queue_work(system_bh_wq, &mv_chan->irq_work);
}
}
}
@@ -336,9 +337,9 @@ static void mv_chan_slot_cleanup(struct mv_xor_chan *mv_chan)
mv_chan->dmachan.completed_cookie = cookie;
}

-static void mv_xor_tasklet(struct tasklet_struct *t)
+static void mv_xor_work(struct work_struct *t)
{
- struct mv_xor_chan *chan = from_tasklet(chan, t, irq_tasklet);
+ struct mv_xor_chan *chan = from_work(chan, t, irq_work);

spin_lock(&chan->lock);
mv_chan_slot_cleanup(chan);
@@ -372,7 +373,7 @@ mv_chan_alloc_slot(struct mv_xor_chan *mv_chan)
spin_unlock_bh(&mv_chan->lock);

/* try to free some slots if the allocation fails */
- tasklet_schedule(&mv_chan->irq_tasklet);
+ queue_work(system_bh_wq, &mv_chan->irq_work);

return NULL;
}
@@ -737,7 +738,7 @@ static irqreturn_t mv_xor_interrupt_handler(int irq, void *data)
if (intr_cause & XOR_INTR_ERRORS)
mv_chan_err_interrupt_handler(chan, intr_cause);

- tasklet_schedule(&chan->irq_tasklet);
+ queue_work(system_bh_wq, &chan->irq_work);

mv_chan_clear_eoc_cause(chan);

@@ -1097,7 +1098,7 @@ mv_xor_channel_add(struct mv_xor_device *xordev,

mv_chan->mmr_base = xordev->xor_base;
mv_chan->mmr_high_base = xordev->xor_high_base;
- tasklet_setup(&mv_chan->irq_tasklet, mv_xor_tasklet);
+ INIT_WORK(&mv_chan->irq_work, mv_xor_work);

/* clear errors before enabling interrupts */
mv_chan_clear_err_status(mv_chan);
diff --git a/drivers/dma/mv_xor.h b/drivers/dma/mv_xor.h
index d86086b05b0e..80a19010e6c9 100644
--- a/drivers/dma/mv_xor.h
+++ b/drivers/dma/mv_xor.h
@@ -10,6 +10,7 @@
#include <linux/io.h>
#include <linux/dmaengine.h>
#include <linux/interrupt.h>
+#include <linux/workqueue.h>

#define MV_XOR_POOL_SIZE (MV_XOR_SLOT_SIZE * 3072)
#define MV_XOR_SLOT_SIZE 64
@@ -98,7 +99,7 @@ struct mv_xor_device {
* @device: parent device
* @common: common dmaengine channel object members
* @slots_allocated: records the actual size of the descriptor slot pool
- * @irq_tasklet: bottom half where mv_xor_slot_cleanup runs
+ * @irq_work: bottom half where mv_xor_slot_cleanup runs
* @op_in_desc: new mode of driver, each op is writen to descriptor.
*/
struct mv_xor_chan {
@@ -118,7 +119,7 @@ struct mv_xor_chan {
struct dma_device dmadev;
struct dma_chan dmachan;
int slots_allocated;
- struct tasklet_struct irq_tasklet;
+ struct work_struct irq_work;
int op_in_desc;
char dummy_src[MV_XOR_MIN_BYTE_COUNT];
char dummy_dst[MV_XOR_MIN_BYTE_COUNT];
diff --git a/drivers/dma/mv_xor_v2.c b/drivers/dma/mv_xor_v2.c
index 97ebc791a30b..bfa683f02d26 100644
--- a/drivers/dma/mv_xor_v2.c
+++ b/drivers/dma/mv_xor_v2.c
@@ -14,6 +14,7 @@
#include <linux/of_irq.h>
#include <linux/platform_device.h>
#include <linux/spinlock.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"

@@ -139,7 +140,7 @@ struct mv_xor_v2_descriptor {
* @reg_clk: reference to the 'reg' clock
* @dma_base: memory mapped DMA register base
* @glob_base: memory mapped global register base
- * @irq_tasklet: tasklet used for IRQ handling call-backs
+ * @irq_work: work used for IRQ handling call-backs
* @free_sw_desc: linked list of free SW descriptors
* @dmadev: dma device
* @dmachan: dma channel
@@ -158,7 +159,7 @@ struct mv_xor_v2_device {
void __iomem *glob_base;
struct clk *clk;
struct clk *reg_clk;
- struct tasklet_struct irq_tasklet;
+ struct work_struct irq_work;
struct list_head free_sw_desc;
struct dma_device dmadev;
struct dma_chan dmachan;
@@ -290,8 +291,8 @@ static irqreturn_t mv_xor_v2_interrupt_handler(int irq, void *data)
if (!ndescs)
return IRQ_NONE;

- /* schedule a tasklet to handle descriptors callbacks */
- tasklet_schedule(&xor_dev->irq_tasklet);
+	/* schedule work to handle descriptor callbacks */
+ queue_work(system_bh_wq, &xor_dev->irq_work);

return IRQ_HANDLED;
}
@@ -346,8 +347,8 @@ mv_xor_v2_prep_sw_desc(struct mv_xor_v2_device *xor_dev)

if (list_empty(&xor_dev->free_sw_desc)) {
spin_unlock_bh(&xor_dev->lock);
- /* schedule tasklet to free some descriptors */
- tasklet_schedule(&xor_dev->irq_tasklet);
+ /* schedule work to free some descriptors */
+ queue_work(system_bh_wq, &xor_dev->irq_work);
return NULL;
}

@@ -553,10 +554,10 @@ int mv_xor_v2_get_pending_params(struct mv_xor_v2_device *xor_dev,
/*
* handle the descriptors after HW process
*/
-static void mv_xor_v2_tasklet(struct tasklet_struct *t)
+static void mv_xor_v2_work(struct work_struct *t)
{
- struct mv_xor_v2_device *xor_dev = from_tasklet(xor_dev, t,
- irq_tasklet);
+ struct mv_xor_v2_device *xor_dev = from_work(xor_dev, t,
+ irq_work);
int pending_ptr, num_of_pending, i;
struct mv_xor_v2_sw_desc *next_pending_sw_desc = NULL;

@@ -760,7 +761,7 @@ static int mv_xor_v2_probe(struct platform_device *pdev)
if (ret)
goto free_msi_irqs;

- tasklet_setup(&xor_dev->irq_tasklet, mv_xor_v2_tasklet);
+ INIT_WORK(&xor_dev->irq_work, mv_xor_v2_work);

xor_dev->desc_size = mv_xor_v2_set_desc_size(xor_dev);

@@ -869,7 +870,7 @@ static void mv_xor_v2_remove(struct platform_device *pdev)

platform_device_msi_free_irqs_all(&pdev->dev);

- tasklet_kill(&xor_dev->irq_tasklet);
+ cancel_work_sync(&xor_dev->irq_work);
}

#ifdef CONFIG_OF
diff --git a/drivers/dma/mxs-dma.c b/drivers/dma/mxs-dma.c
index cfb9962417ef..6131e130dfc6 100644
--- a/drivers/dma/mxs-dma.c
+++ b/drivers/dma/mxs-dma.c
@@ -24,6 +24,7 @@
#include <linux/of_dma.h>
#include <linux/list.h>
#include <linux/dma/mxs-dma.h>
+#include <linux/workqueue.h>

#include <asm/irq.h>

@@ -109,7 +110,7 @@ struct mxs_dma_chan {
struct mxs_dma_engine *mxs_dma;
struct dma_chan chan;
struct dma_async_tx_descriptor desc;
- struct tasklet_struct tasklet;
+ struct work_struct work;
unsigned int chan_irq;
struct mxs_dma_ccw *ccw;
dma_addr_t ccw_phys;
@@ -300,9 +301,9 @@ static dma_cookie_t mxs_dma_tx_submit(struct dma_async_tx_descriptor *tx)
return dma_cookie_assign(tx);
}

-static void mxs_dma_tasklet(struct tasklet_struct *t)
+static void mxs_dma_work(struct work_struct *t)
{
- struct mxs_dma_chan *mxs_chan = from_tasklet(mxs_chan, t, tasklet);
+ struct mxs_dma_chan *mxs_chan = from_work(mxs_chan, t, work);

dmaengine_desc_get_callback_invoke(&mxs_chan->desc, NULL);
}
@@ -386,8 +387,8 @@ static irqreturn_t mxs_dma_int_handler(int irq, void *dev_id)
dma_cookie_complete(&mxs_chan->desc);
}

- /* schedule tasklet on this channel */
- tasklet_schedule(&mxs_chan->tasklet);
+ /* schedule work on this channel */
+ queue_work(system_bh_wq, &mxs_chan->work);

return IRQ_HANDLED;
}
@@ -782,7 +783,7 @@ static int mxs_dma_probe(struct platform_device *pdev)
mxs_chan->chan.device = &mxs_dma->dma_device;
dma_cookie_init(&mxs_chan->chan);

- tasklet_setup(&mxs_chan->tasklet, mxs_dma_tasklet);
+ INIT_WORK(&mxs_chan->work, mxs_dma_work);


/* Add the channel to mxs_chan list */
diff --git a/drivers/dma/nbpfaxi.c b/drivers/dma/nbpfaxi.c
index c08916339aa7..49bf42a41c7a 100644
--- a/drivers/dma/nbpfaxi.c
+++ b/drivers/dma/nbpfaxi.c
@@ -18,6 +18,7 @@
#include <linux/of_dma.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
+#include <linux/workqueue.h>

#include <dt-bindings/dma/nbpfaxi.h>

@@ -174,7 +175,7 @@ struct nbpf_desc_page {
/**
* struct nbpf_channel - one DMAC channel
* @dma_chan: standard dmaengine channel object
- * @tasklet: channel specific tasklet used for callbacks
+ * @work: channel specific work used for callbacks
* @base: register address base
* @nbpf: DMAC
* @name: IRQ name
@@ -200,7 +201,7 @@ struct nbpf_desc_page {
*/
struct nbpf_channel {
struct dma_chan dma_chan;
- struct tasklet_struct tasklet;
+ struct work_struct work;
void __iomem *base;
struct nbpf_device *nbpf;
char name[16];
@@ -1112,9 +1113,9 @@ static struct dma_chan *nbpf_of_xlate(struct of_phandle_args *dma_spec,
return dchan;
}

-static void nbpf_chan_tasklet(struct tasklet_struct *t)
+static void nbpf_chan_work(struct work_struct *t)
{
- struct nbpf_channel *chan = from_tasklet(chan, t, tasklet);
+ struct nbpf_channel *chan = from_work(chan, t, work);
struct nbpf_desc *desc, *tmp;
struct dmaengine_desc_callback cb;

@@ -1215,7 +1216,7 @@ static irqreturn_t nbpf_chan_irq(int irq, void *dev)
spin_unlock(&chan->lock);

if (bh)
- tasklet_schedule(&chan->tasklet);
+ queue_work(system_bh_wq, &chan->work);

return ret;
}
@@ -1259,7 +1260,7 @@ static int nbpf_chan_probe(struct nbpf_device *nbpf, int n)

snprintf(chan->name, sizeof(chan->name), "nbpf %d", n);

- tasklet_setup(&chan->tasklet, nbpf_chan_tasklet);
+ INIT_WORK(&chan->work, nbpf_chan_work);
ret = devm_request_irq(dma_dev->dev, chan->irq,
nbpf_chan_irq, IRQF_SHARED,
chan->name, chan);
@@ -1466,7 +1467,7 @@ static void nbpf_remove(struct platform_device *pdev)

devm_free_irq(&pdev->dev, chan->irq, chan);

- tasklet_kill(&chan->tasklet);
+ cancel_work_sync(&chan->work);
}

of_dma_controller_free(pdev->dev.of_node);
diff --git a/drivers/dma/owl-dma.c b/drivers/dma/owl-dma.c
index 4e76c4ec2d39..026dd32c5a16 100644
--- a/drivers/dma/owl-dma.c
+++ b/drivers/dma/owl-dma.c
@@ -1055,7 +1055,7 @@ static inline void owl_dma_free(struct owl_dma *od)
list_for_each_entry_safe(vchan,
next, &od->dma.channels, vc.chan.device_node) {
list_del(&vchan->vc.chan.device_node);
- tasklet_kill(&vchan->vc.task);
+ cancel_work_sync(&vchan->vc.work);
}
}

diff --git a/drivers/dma/pch_dma.c b/drivers/dma/pch_dma.c
index c359decc07a3..df9fd33c0740 100644
--- a/drivers/dma/pch_dma.c
+++ b/drivers/dma/pch_dma.c
@@ -13,6 +13,7 @@
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/pch_dma.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"

@@ -91,7 +92,7 @@ struct pch_dma_chan {
struct dma_chan chan;
void __iomem *membase;
enum dma_transfer_direction dir;
- struct tasklet_struct tasklet;
+ struct work_struct work;
unsigned long err_status;

spinlock_t lock;
@@ -670,14 +671,14 @@ static int pd_device_terminate_all(struct dma_chan *chan)
return 0;
}

-static void pdc_tasklet(struct tasklet_struct *t)
+static void pdc_work(struct work_struct *t)
{
- struct pch_dma_chan *pd_chan = from_tasklet(pd_chan, t, tasklet);
+ struct pch_dma_chan *pd_chan = from_work(pd_chan, t, work);
unsigned long flags;

if (!pdc_is_idle(pd_chan)) {
dev_err(chan2dev(&pd_chan->chan),
- "BUG: handle non-idle channel in tasklet\n");
+ "BUG: handle non-idle channel in work\n");
return;
}

@@ -712,7 +713,7 @@ static irqreturn_t pd_irq(int irq, void *devid)
if (sts0 & DMA_STATUS0_ERR(i))
set_bit(0, &pd_chan->err_status);

- tasklet_schedule(&pd_chan->tasklet);
+ queue_work(system_bh_wq, &pd_chan->work);
ret0 = IRQ_HANDLED;
}
} else {
@@ -720,7 +721,7 @@ static irqreturn_t pd_irq(int irq, void *devid)
if (sts2 & DMA_STATUS2_ERR(i))
set_bit(0, &pd_chan->err_status);

- tasklet_schedule(&pd_chan->tasklet);
+ queue_work(system_bh_wq, &pd_chan->work);
ret2 = IRQ_HANDLED;
}
}
@@ -882,7 +883,7 @@ static int pch_dma_probe(struct pci_dev *pdev,
INIT_LIST_HEAD(&pd_chan->queue);
INIT_LIST_HEAD(&pd_chan->free_list);

- tasklet_setup(&pd_chan->tasklet, pdc_tasklet);
+ INIT_WORK(&pd_chan->work, pdc_work);
list_add_tail(&pd_chan->chan.device_node, &pd->dma.channels);
}

@@ -935,7 +936,7 @@ static void pch_dma_remove(struct pci_dev *pdev)
device_node) {
pd_chan = to_pd_chan(chan);

- tasklet_kill(&pd_chan->tasklet);
+ cancel_work_sync(&pd_chan->work);
}

dma_pool_destroy(pd->pool);
diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
index 5f6d7f1e095f..74ee0dfb638a 100644
--- a/drivers/dma/pl330.c
+++ b/drivers/dma/pl330.c
@@ -26,6 +26,7 @@
#include <linux/pm_runtime.h>
#include <linux/bug.h>
#include <linux/reset.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"
#define PL330_MAX_CHAN 8
@@ -360,7 +361,7 @@ struct _pl330_req {
struct dma_pl330_desc *desc;
};

-/* ToBeDone for tasklet */
+/* ToBeDone for work */
struct _pl330_tbd {
bool reset_dmac;
bool reset_mngr;
@@ -418,7 +419,7 @@ enum desc_status {

struct dma_pl330_chan {
/* Schedule desc completion */
- struct tasklet_struct task;
+ struct work_struct work;

/* DMA-Engine Channel */
struct dma_chan chan;
@@ -490,7 +491,7 @@ struct pl330_dmac {
/* Pointer to the MANAGER thread */
struct pl330_thread *manager;
/* To handle bad news in interrupt */
- struct tasklet_struct tasks;
+ struct work_struct tasks;
struct _pl330_tbd dmac_tbd;
/* State of DMAC operation */
enum pl330_dmac_state state;
@@ -1577,12 +1578,12 @@ static void dma_pl330_rqcb(struct dma_pl330_desc *desc, enum pl330_op_err err)

spin_unlock_irqrestore(&pch->lock, flags);

- tasklet_schedule(&pch->task);
+ queue_work(system_bh_wq, &pch->work);
}

-static void pl330_dotask(struct tasklet_struct *t)
+static void pl330_dotask(struct work_struct *t)
{
- struct pl330_dmac *pl330 = from_tasklet(pl330, t, tasks);
+ struct pl330_dmac *pl330 = from_work(pl330, t, tasks);
unsigned long flags;
int i;

@@ -1735,7 +1736,7 @@ static int pl330_update(struct pl330_dmac *pl330)
|| pl330->dmac_tbd.reset_mngr
|| pl330->dmac_tbd.reset_chan) {
ret = 1;
- tasklet_schedule(&pl330->tasks);
+ queue_work(system_bh_wq, &pl330->tasks);
}

return ret;
@@ -1986,7 +1987,7 @@ static int pl330_add(struct pl330_dmac *pl330)
return ret;
}

- tasklet_setup(&pl330->tasks, pl330_dotask);
+ INIT_WORK(&pl330->tasks, pl330_dotask);

pl330->state = INIT;

@@ -2014,7 +2015,7 @@ static void pl330_del(struct pl330_dmac *pl330)
{
pl330->state = UNINIT;

- tasklet_kill(&pl330->tasks);
+ cancel_work_sync(&pl330->tasks);

/* Free DMAC resources */
dmac_free_threads(pl330);
@@ -2064,14 +2065,14 @@ static inline void fill_queue(struct dma_pl330_chan *pch)
desc->status = DONE;
dev_err(pch->dmac->ddma.dev, "%s:%d Bad Desc(%d)\n",
__func__, __LINE__, desc->txd.cookie);
- tasklet_schedule(&pch->task);
+ queue_work(system_bh_wq, &pch->work);
}
}
}

-static void pl330_tasklet(struct tasklet_struct *t)
+static void pl330_work(struct work_struct *t)
{
- struct dma_pl330_chan *pch = from_tasklet(pch, t, task);
+ struct dma_pl330_chan *pch = from_work(pch, t, work);
struct dma_pl330_desc *desc, *_dt;
unsigned long flags;
bool power_down = false;
@@ -2179,7 +2180,7 @@ static int pl330_alloc_chan_resources(struct dma_chan *chan)
return -ENOMEM;
}

- tasklet_setup(&pch->task, pl330_tasklet);
+ INIT_WORK(&pch->work, pl330_work);

spin_unlock_irqrestore(&pl330->lock, flags);

@@ -2362,7 +2363,7 @@ static void pl330_free_chan_resources(struct dma_chan *chan)
struct pl330_dmac *pl330 = pch->dmac;
unsigned long flags;

- tasklet_kill(&pch->task);
+ cancel_work_sync(&pch->work);

pm_runtime_get_sync(pch->dmac->ddma.dev);
spin_lock_irqsave(&pl330->lock, flags);
@@ -2499,7 +2500,7 @@ static void pl330_issue_pending(struct dma_chan *chan)
list_splice_tail_init(&pch->submitted_list, &pch->work_list);
spin_unlock_irqrestore(&pch->lock, flags);

- pl330_tasklet(&pch->task);
+ pl330_work(&pch->work);
}

/*
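
Note that pl330_issue_pending() above still invokes the handler synchronously
rather than queueing it, exactly as the old code called pl330_tasklet()
directly; since the handler takes the embedded work_struct, the same function
serves both paths. Reusing the hypothetical foo_chan sketch from earlier in
this mail:

static void foo_issue_pending(struct foo_chan *fc)
{
	/* run the completion handler immediately, in the caller's context */
	foo_chan_work(&fc->work);
}
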
diff --git a/drivers/dma/plx_dma.c b/drivers/dma/plx_dma.c
index 34b6416c3287..07e185fb8d2c 100644
--- a/drivers/dma/plx_dma.c
+++ b/drivers/dma/plx_dma.c
@@ -13,6 +13,7 @@
#include <linux/list.h>
#include <linux/module.h>
#include <linux/pci.h>
+#include <linux/workqueue.h>

MODULE_DESCRIPTION("PLX ExpressLane PEX PCI Switch DMA Engine");
MODULE_VERSION("0.1");
@@ -105,7 +106,7 @@ struct plx_dma_dev {
struct dma_chan dma_chan;
struct pci_dev __rcu *pdev;
void __iomem *bar;
- struct tasklet_struct desc_task;
+ struct work_struct desc_task;

spinlock_t ring_lock;
bool ring_active;
@@ -241,9 +242,9 @@ static void plx_dma_stop(struct plx_dma_dev *plxdev)
rcu_read_unlock();
}

-static void plx_dma_desc_task(struct tasklet_struct *t)
+static void plx_dma_desc_task(struct work_struct *t)
{
- struct plx_dma_dev *plxdev = from_tasklet(plxdev, t, desc_task);
+ struct plx_dma_dev *plxdev = from_work(plxdev, t, desc_task);

plx_dma_process_desc(plxdev);
}
@@ -366,7 +367,7 @@ static irqreturn_t plx_dma_isr(int irq, void *devid)
return IRQ_NONE;

if (status & PLX_REG_INTR_STATUS_DESC_DONE && plxdev->ring_active)
- tasklet_schedule(&plxdev->desc_task);
+ queue_work(system_bh_wq, &plxdev->desc_task);

writew(status, plxdev->bar + PLX_REG_INTR_STATUS);

@@ -472,7 +473,7 @@ static void plx_dma_free_chan_resources(struct dma_chan *chan)
if (irq > 0)
synchronize_irq(irq);

- tasklet_kill(&plxdev->desc_task);
+ cancel_work_sync(&plxdev->desc_task);

plx_dma_abort_desc(plxdev);

@@ -511,7 +512,7 @@ static int plx_dma_create(struct pci_dev *pdev)
goto free_plx;

spin_lock_init(&plxdev->ring_lock);
- tasklet_setup(&plxdev->desc_task, plx_dma_desc_task);
+ INIT_WORK(&plxdev->desc_task, plx_dma_desc_task);

RCU_INIT_POINTER(plxdev->pdev, pdev);
plxdev->bar = pcim_iomap_table(pdev)[0];
diff --git a/drivers/dma/ppc4xx/adma.c b/drivers/dma/ppc4xx/adma.c
index bbb60a970dab..a9e1c0e43fed 100644
--- a/drivers/dma/ppc4xx/adma.c
+++ b/drivers/dma/ppc4xx/adma.c
@@ -29,6 +29,7 @@
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/platform_device.h>
+#include <linux/workqueue.h>
#include <asm/dcr.h>
#include <asm/dcr-regs.h>
#include "adma.h"
@@ -1656,11 +1657,11 @@ static void __ppc440spe_adma_slot_cleanup(struct ppc440spe_adma_chan *chan)
}

/**
- * ppc440spe_adma_tasklet - clean up watch-dog initiator
+ * ppc440spe_adma_work - clean up watch-dog initiator
*/
-static void ppc440spe_adma_tasklet(struct tasklet_struct *t)
+static void ppc440spe_adma_work(struct work_struct *t)
{
- struct ppc440spe_adma_chan *chan = from_tasklet(chan, t, irq_tasklet);
+ struct ppc440spe_adma_chan *chan = from_work(chan, t, irq_work);

spin_lock_nested(&chan->lock, SINGLE_DEPTH_NESTING);
__ppc440spe_adma_slot_cleanup(chan);
@@ -1754,7 +1755,7 @@ static struct ppc440spe_adma_desc_slot *ppc440spe_adma_alloc_slots(
goto retry;

/* try to free some slots if the allocation fails */
- tasklet_schedule(&chan->irq_tasklet);
+ queue_work(system_bh_wq, &chan->irq_work);
return NULL;
}

@@ -3596,7 +3597,7 @@ static irqreturn_t ppc440spe_adma_eot_handler(int irq, void *data)
dev_dbg(chan->device->common.dev,
"ppc440spe adma%d: %s\n", chan->device->id, __func__);

- tasklet_schedule(&chan->irq_tasklet);
+ queue_work(system_bh_wq, &chan->irq_work);
ppc440spe_adma_device_clear_eot_status(chan);

return IRQ_HANDLED;
@@ -3613,7 +3614,7 @@ static irqreturn_t ppc440spe_adma_err_handler(int irq, void *data)
dev_dbg(chan->device->common.dev,
"ppc440spe adma%d: %s\n", chan->device->id, __func__);

- tasklet_schedule(&chan->irq_tasklet);
+ queue_work(system_bh_wq, &chan->irq_work);
ppc440spe_adma_device_clear_eot_status(chan);

return IRQ_HANDLED;
@@ -4138,7 +4139,7 @@ static int ppc440spe_adma_probe(struct platform_device *ofdev)
chan->common.device = &adev->common;
dma_cookie_init(&chan->common);
list_add_tail(&chan->common.device_node, &adev->common.channels);
- tasklet_setup(&chan->irq_tasklet, ppc440spe_adma_tasklet);
+ INIT_WORK(&chan->irq_work, ppc440spe_adma_work);

/* allocate and map helper pages for async validation or
* async_mult/async_sum_product operations on DMA0/1.
@@ -4248,7 +4249,7 @@ static void ppc440spe_adma_remove(struct platform_device *ofdev)
device_node) {
ppc440spe_chan = to_ppc440spe_adma_chan(chan);
ppc440spe_adma_release_irqs(adev, ppc440spe_chan);
- tasklet_kill(&ppc440spe_chan->irq_tasklet);
+ cancel_work_sync(&ppc440spe_chan->irq_work);
if (adev->id != PPC440SPE_XOR_ID) {
dma_unmap_page(&ofdev->dev, ppc440spe_chan->pdest,
PAGE_SIZE, DMA_BIDIRECTIONAL);
diff --git a/drivers/dma/ppc4xx/adma.h b/drivers/dma/ppc4xx/adma.h
index f8a5d7c1fb40..e3918cfcc5ae 100644
--- a/drivers/dma/ppc4xx/adma.h
+++ b/drivers/dma/ppc4xx/adma.h
@@ -9,6 +9,7 @@
#define _PPC440SPE_ADMA_H

#include <linux/types.h>
+#include <linux/workqueue.h>
#include "dma.h"
#include "xor.h"

@@ -80,7 +81,7 @@ struct ppc440spe_adma_device {
* @pending: allows batching of hardware operations
* @slots_allocated: records the actual size of the descriptor slot pool
* @hw_chain_inited: h/w descriptor chain initialization flag
- * @irq_tasklet: bottom half where ppc440spe_adma_slot_cleanup runs
+ * @irq_work: bottom half where ppc440spe_adma_slot_cleanup runs
* @needs_unmap: if buffers should not be unmapped upon final processing
* @pdest_page: P destination page for async validate operation
* @qdest_page: Q destination page for async validate operation
@@ -97,7 +98,7 @@ struct ppc440spe_adma_chan {
int pending;
int slots_allocated;
int hw_chain_inited;
- struct tasklet_struct irq_tasklet;
+ struct work_struct irq_work;
u8 needs_unmap;
struct page *pdest_page;
struct page *qdest_page;
diff --git a/drivers/dma/pxa_dma.c b/drivers/dma/pxa_dma.c
index 31f8da810c05..174c2fee1fcc 100644
--- a/drivers/dma/pxa_dma.c
+++ b/drivers/dma/pxa_dma.c
@@ -1218,7 +1218,7 @@ static void pxad_free_channels(struct dma_device *dmadev)
list_for_each_entry_safe(c, cn, &dmadev->channels,
vc.chan.device_node) {
list_del(&c->vc.chan.device_node);
- tasklet_kill(&c->vc.task);
+		cancel_work_sync(&c->vc.work);
}
}

diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
index 5e7d332731e0..71e43150793a 100644
--- a/drivers/dma/qcom/bam_dma.c
+++ b/drivers/dma/qcom/bam_dma.c
@@ -41,6 +41,7 @@
#include <linux/clk.h>
#include <linux/dmaengine.h>
#include <linux/pm_runtime.h>
+#include <linux/workqueue.h>

#include "../dmaengine.h"
#include "../virt-dma.h"
@@ -396,8 +397,8 @@ struct bam_device {
struct clk *bamclk;
int irq;

- /* dma start transaction tasklet */
- struct tasklet_struct task;
+ /* dma start transaction work */
+ struct work_struct work;
};

/**
@@ -875,7 +876,7 @@ static u32 process_channel_irqs(struct bam_device *bdev)
/*
* if complete, process cookie. Otherwise
* push back to front of desc_issued so that
- * it gets restarted by the tasklet
+ * it gets restarted by the work
*/
if (!async_desc->num_desc) {
vchan_cookie_complete(&async_desc->vd);
@@ -907,9 +908,9 @@ static irqreturn_t bam_dma_irq(int irq, void *data)

srcs |= process_channel_irqs(bdev);

- /* kick off tasklet to start next dma transfer */
+ /* kick off work to start next dma transfer */
if (srcs & P_IRQ)
- tasklet_schedule(&bdev->task);
+ queue_work(system_bh_wq, &bdev->work);

ret = pm_runtime_get_sync(bdev->dev);
if (ret < 0)
@@ -1107,14 +1108,14 @@ static void bam_start_dma(struct bam_chan *bchan)
}

/**
- * dma_tasklet - DMA IRQ tasklet
- * @t: tasklet argument (bam controller structure)
+ * dma_work - DMA IRQ work
+ * @t: work argument (bam controller structure)
*
* Sets up next DMA operation and then processes all completed transactions
*/
-static void dma_tasklet(struct tasklet_struct *t)
+static void dma_work(struct work_struct *t)
{
- struct bam_device *bdev = from_tasklet(bdev, t, task);
+ struct bam_device *bdev = from_work(bdev, t, work);
struct bam_chan *bchan;
unsigned long flags;
unsigned int i;
@@ -1135,7 +1136,7 @@ static void dma_tasklet(struct tasklet_struct *t)
* bam_issue_pending - starts pending transactions
* @chan: dma channel
*
- * Calls tasklet directly which in turn starts any pending transactions
+ * Calls work directly which in turn starts any pending transactions
*/
static void bam_issue_pending(struct dma_chan *chan)
{
@@ -1302,14 +1303,14 @@ static int bam_dma_probe(struct platform_device *pdev)
if (ret)
goto err_disable_clk;

- tasklet_setup(&bdev->task, dma_tasklet);
+ INIT_WORK(&bdev->work, dma_work);

bdev->channels = devm_kcalloc(bdev->dev, bdev->num_channels,
sizeof(*bdev->channels), GFP_KERNEL);

if (!bdev->channels) {
ret = -ENOMEM;
- goto err_tasklet_kill;
+ goto err_work_kill;
}

/* allocate and initialize channels */
@@ -1377,9 +1378,9 @@ static int bam_dma_probe(struct platform_device *pdev)
dma_async_device_unregister(&bdev->common);
err_bam_channel_exit:
for (i = 0; i < bdev->num_channels; i++)
- tasklet_kill(&bdev->channels[i].vc.task);
-err_tasklet_kill:
- tasklet_kill(&bdev->task);
+ cancel_work_sync(&bdev->channels[i].vc.work);
+err_work_kill:
+ cancel_work_sync(&bdev->work);
err_disable_clk:
clk_disable_unprepare(bdev->bamclk);

@@ -1403,7 +1404,7 @@ static void bam_dma_remove(struct platform_device *pdev)

for (i = 0; i < bdev->num_channels; i++) {
bam_dma_terminate_all(&bdev->channels[i].vc.chan);
- tasklet_kill(&bdev->channels[i].vc.task);
+ cancel_work_sync(&bdev->channels[i].vc.work);

if (!bdev->channels[i].fifo_virt)
continue;
@@ -1413,7 +1414,7 @@ static void bam_dma_remove(struct platform_device *pdev)
bdev->channels[i].fifo_phys);
}

- tasklet_kill(&bdev->task);
+ cancel_work_sync(&bdev->work);

clk_disable_unprepare(bdev->bamclk);
}
diff --git a/drivers/dma/qcom/gpi.c b/drivers/dma/qcom/gpi.c
index 1c93864e0e4d..a777cfc799fd 100644
--- a/drivers/dma/qcom/gpi.c
+++ b/drivers/dma/qcom/gpi.c
@@ -14,6 +14,7 @@
#include <linux/dma/qcom-gpi-dma.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
+#include <linux/workqueue.h>
#include "../dmaengine.h"
#include "../virt-dma.h"

@@ -515,7 +516,7 @@ struct gpii {
enum gpi_pm_state pm_state;
rwlock_t pm_lock;
struct gpi_ring ev_ring;
- struct tasklet_struct ev_task; /* event processing tasklet */
+ struct work_struct ev_task; /* event processing work */
struct completion cmd_completion;
enum gpi_cmd gpi_cmd;
u32 cntxt_type_irq_msk;
@@ -755,7 +756,7 @@ static void gpi_process_ieob(struct gpii *gpii)
gpi_write_reg(gpii, gpii->ieob_clr_reg, BIT(0));

gpi_config_interrupts(gpii, MASK_IEOB_SETTINGS, 0);
- tasklet_hi_schedule(&gpii->ev_task);
+ queue_work(system_bh_highpri_wq, &gpii->ev_task);
}

/* process channel control interrupt */
@@ -1145,10 +1146,10 @@ static void gpi_process_events(struct gpii *gpii)
} while (rp != ev_ring->rp);
}

-/* processing events using tasklet */
-static void gpi_ev_tasklet(unsigned long data)
+/* processing events using work */
+static void gpi_ev_work(struct work_struct *t)
{
- struct gpii *gpii = (struct gpii *)data;
+ struct gpii *gpii = from_work(gpii, t, ev_task);

read_lock(&gpii->pm_lock);
if (!REG_ACCESS_VALID(gpii->pm_state)) {
@@ -1565,7 +1566,7 @@ static int gpi_pause(struct dma_chan *chan)
disable_irq(gpii->irq);

/* Wait for threads to complete out */
- tasklet_kill(&gpii->ev_task);
+ cancel_work_sync(&gpii->ev_task);

write_lock_irq(&gpii->pm_lock);
gpii->pm_state = PAUSE_STATE;
@@ -2018,7 +2019,7 @@ static void gpi_free_chan_resources(struct dma_chan *chan)
write_unlock_irq(&gpii->pm_lock);

/* wait for threads to complete out */
- tasklet_kill(&gpii->ev_task);
+ cancel_work_sync(&gpii->ev_task);

/* send command to de allocate event ring */
if (cur_state == ACTIVE_STATE)
@@ -2237,8 +2238,7 @@ static int gpi_probe(struct platform_device *pdev)
}
mutex_init(&gpii->ctrl_lock);
rwlock_init(&gpii->pm_lock);
- tasklet_init(&gpii->ev_task, gpi_ev_tasklet,
- (unsigned long)gpii);
+ INIT_WORK(&gpii->ev_task, gpi_ev_work);
init_completion(&gpii->cmd_completion);
gpii->gpii_id = i;
gpii->regs = gpi_dev->ee_base;
diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
index 202ac95227cb..cf540ffbb7b7 100644
--- a/drivers/dma/qcom/hidma.c
+++ b/drivers/dma/qcom/hidma.c
@@ -58,6 +58,7 @@
#include <linux/atomic.h>
#include <linux/pm_runtime.h>
#include <linux/msi.h>
+#include <linux/workqueue.h>

#include "../dmaengine.h"
#include "hidma.h"
@@ -217,9 +218,9 @@ static int hidma_chan_init(struct hidma_dev *dmadev, u32 dma_sig)
return 0;
}

-static void hidma_issue_task(struct tasklet_struct *t)
+static void hidma_issue_task(struct work_struct *t)
{
- struct hidma_dev *dmadev = from_tasklet(dmadev, t, task);
+ struct hidma_dev *dmadev = from_work(dmadev, t, work);

pm_runtime_get_sync(dmadev->ddev.dev);
hidma_ll_start(dmadev->lldev);
@@ -250,7 +251,7 @@ static void hidma_issue_pending(struct dma_chan *dmach)
/* PM will be released in hidma_callback function. */
status = pm_runtime_get(dmadev->ddev.dev);
if (status < 0)
- tasklet_schedule(&dmadev->task);
+ queue_work(system_bh_wq, &dmadev->work);
else
hidma_ll_start(dmadev->lldev);
}
@@ -879,7 +880,7 @@ static int hidma_probe(struct platform_device *pdev)
goto uninit;

dmadev->irq = chirq;
- tasklet_setup(&dmadev->task, hidma_issue_task);
+ INIT_WORK(&dmadev->work, hidma_issue_task);
hidma_debug_init(dmadev);
hidma_sysfs_init(dmadev);
dev_info(&pdev->dev, "HI-DMA engine driver registration complete\n");
@@ -926,7 +927,7 @@ static void hidma_remove(struct platform_device *pdev)
else
hidma_free_msis(dmadev);

- tasklet_kill(&dmadev->task);
+ cancel_work_sync(&dmadev->work);
hidma_sysfs_uninit(dmadev);
hidma_debug_uninit(dmadev);
hidma_ll_uninit(dmadev->lldev);
diff --git a/drivers/dma/qcom/hidma.h b/drivers/dma/qcom/hidma.h
index f212466744f3..a6140c9bef76 100644
--- a/drivers/dma/qcom/hidma.h
+++ b/drivers/dma/qcom/hidma.h
@@ -11,6 +11,7 @@
#include <linux/kfifo.h>
#include <linux/interrupt.h>
#include <linux/dmaengine.h>
+#include <linux/workqueue.h>

#define HIDMA_TRE_SIZE 32 /* each TRE is 32 bytes */
#define HIDMA_TRE_CFG_IDX 0
@@ -69,7 +70,7 @@ struct hidma_lldev {
u32 evre_processed_off; /* last processed EVRE */

u32 tre_write_offset; /* TRE write location */
- struct tasklet_struct task; /* task delivering notifications */
+ struct work_struct work; /* work delivering notifications */
DECLARE_KFIFO_PTR(handoff_fifo,
struct hidma_tre *); /* pending TREs FIFO */
};
@@ -129,7 +130,7 @@ struct hidma_dev {
struct device_attribute *chid_attrs;

/* Task delivering issue_pending */
- struct tasklet_struct task;
+ struct work_struct work;
};

int hidma_ll_request(struct hidma_lldev *llhndl, u32 dev_id,
diff --git a/drivers/dma/qcom/hidma_ll.c b/drivers/dma/qcom/hidma_ll.c
index 53244e0e34a3..ebe037bb85e5 100644
--- a/drivers/dma/qcom/hidma_ll.c
+++ b/drivers/dma/qcom/hidma_ll.c
@@ -16,6 +16,7 @@
#include <linux/iopoll.h>
#include <linux/kfifo.h>
#include <linux/bitops.h>
+#include <linux/workqueue.h>

#include "hidma.h"

@@ -173,9 +174,9 @@ int hidma_ll_request(struct hidma_lldev *lldev, u32 sig, const char *dev_name,
/*
* Multiple TREs may be queued and waiting in the pending queue.
*/
-static void hidma_ll_tre_complete(struct tasklet_struct *t)
+static void hidma_ll_tre_complete(struct work_struct *t)
{
- struct hidma_lldev *lldev = from_tasklet(lldev, t, task);
+ struct hidma_lldev *lldev = from_work(lldev, t, work);
struct hidma_tre *tre;

while (kfifo_out(&lldev->handoff_fifo, &tre, 1)) {
@@ -223,7 +224,7 @@ static int hidma_post_completed(struct hidma_lldev *lldev, u8 err_info,
tre->queued = 0;

kfifo_put(&lldev->handoff_fifo, tre);
- tasklet_schedule(&lldev->task);
+ queue_work(system_bh_wq, &lldev->work);

return 0;
}
@@ -792,7 +793,7 @@ struct hidma_lldev *hidma_ll_init(struct device *dev, u32 nr_tres,
return NULL;

spin_lock_init(&lldev->lock);
- tasklet_setup(&lldev->task, hidma_ll_tre_complete);
+ INIT_WORK(&lldev->work, hidma_ll_tre_complete);
lldev->initialized = 1;
writel(ENABLE_IRQS, lldev->evca + HIDMA_EVCA_IRQ_EN_REG);
return lldev;
@@ -813,7 +814,7 @@ int hidma_ll_uninit(struct hidma_lldev *lldev)
lldev->initialized = 0;

required_bytes = sizeof(struct hidma_tre) * lldev->nr_tres;
- tasklet_kill(&lldev->task);
+ cancel_work_sync(&lldev->work);
memset(lldev->trepool, 0, required_bytes);
lldev->trepool = NULL;
atomic_set(&lldev->pending_tre_count, 0);
diff --git a/drivers/dma/qcom/qcom_adm.c b/drivers/dma/qcom/qcom_adm.c
index 53f4273b657c..0cc3b77899d2 100644
--- a/drivers/dma/qcom/qcom_adm.c
+++ b/drivers/dma/qcom/qcom_adm.c
@@ -919,7 +919,7 @@ static void adm_dma_remove(struct platform_device *pdev)
/* mask IRQs for this channel/EE pair */
writel(0, adev->regs + ADM_CH_RSLT_CONF(achan->id, adev->ee));

- tasklet_kill(&adev->channels[i].vc.task);
+ cancel_work_sync(&adev->channels[i].vc.work);
adm_terminate_all(&adev->channels[i].vc.chan);
}

diff --git a/drivers/dma/sa11x0-dma.c b/drivers/dma/sa11x0-dma.c
index 01e656c69e6c..888fe2311572 100644
--- a/drivers/dma/sa11x0-dma.c
+++ b/drivers/dma/sa11x0-dma.c
@@ -16,6 +16,7 @@
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
+#include <linux/workqueue.h>

#include "virt-dma.h"

@@ -118,7 +119,7 @@ struct sa11x0_dma_dev {
struct dma_device slave;
void __iomem *base;
spinlock_t lock;
- struct tasklet_struct task;
+ struct work_struct work;
struct list_head chan_pending;
struct sa11x0_dma_phy phy[NR_PHY_CHAN];
};
@@ -232,7 +233,7 @@ static void noinline sa11x0_dma_complete(struct sa11x0_dma_phy *p,
p->txd_done = p->txd_load;

if (!p->txd_done)
- tasklet_schedule(&p->dev->task);
+ queue_work(system_bh_wq, &p->dev->work);
} else {
if ((p->sg_done % txd->period) == 0)
vchan_cyclic_callback(&txd->vd);
@@ -323,14 +324,14 @@ static void sa11x0_dma_start_txd(struct sa11x0_dma_chan *c)
}
}

-static void sa11x0_dma_tasklet(struct tasklet_struct *t)
+static void sa11x0_dma_work(struct work_struct *t)
{
- struct sa11x0_dma_dev *d = from_tasklet(d, t, task);
+ struct sa11x0_dma_dev *d = from_work(d, t, work);
struct sa11x0_dma_phy *p;
struct sa11x0_dma_chan *c;
unsigned pch, pch_alloc = 0;

- dev_dbg(d->slave.dev, "tasklet enter\n");
+ dev_dbg(d->slave.dev, "work enter\n");

list_for_each_entry(c, &d->slave.channels, vc.chan.device_node) {
spin_lock_irq(&c->vc.lock);
@@ -381,7 +382,7 @@ static void sa11x0_dma_tasklet(struct tasklet_struct *t)
}
}

- dev_dbg(d->slave.dev, "tasklet exit\n");
+ dev_dbg(d->slave.dev, "work exit\n");
}


@@ -495,7 +496,7 @@ static enum dma_status sa11x0_dma_tx_status(struct dma_chan *chan,
/*
* Move pending txds to the issued list, and re-init pending list.
* If not already pending, add this channel to the list of pending
- * channels and trigger the tasklet to run.
+ * channels and trigger the work to run.
*/
static void sa11x0_dma_issue_pending(struct dma_chan *chan)
{
@@ -509,7 +510,7 @@ static void sa11x0_dma_issue_pending(struct dma_chan *chan)
spin_lock(&d->lock);
if (list_empty(&c->node)) {
list_add_tail(&c->node, &d->chan_pending);
- tasklet_schedule(&d->task);
+ queue_work(system_bh_wq, &d->work);
dev_dbg(d->slave.dev, "vchan %p: issued\n", &c->vc);
}
spin_unlock(&d->lock);
@@ -784,7 +785,7 @@ static int sa11x0_dma_device_terminate_all(struct dma_chan *chan)
spin_lock(&d->lock);
p->vchan = NULL;
spin_unlock(&d->lock);
- tasklet_schedule(&d->task);
+ queue_work(system_bh_wq, &d->work);
}
spin_unlock_irqrestore(&c->vc.lock, flags);
vchan_dma_desc_free_list(&c->vc, &head);
@@ -893,7 +894,7 @@ static void sa11x0_dma_free_channels(struct dma_device *dmadev)

list_for_each_entry_safe(c, cn, &dmadev->channels, vc.chan.device_node) {
list_del(&c->vc.chan.device_node);
- tasklet_kill(&c->vc.task);
+ cancel_work_sync(&c->vc.work);
kfree(c);
}
}
@@ -928,7 +929,7 @@ static int sa11x0_dma_probe(struct platform_device *pdev)
goto err_ioremap;
}

- tasklet_setup(&d->task, sa11x0_dma_tasklet);
+ INIT_WORK(&d->work, sa11x0_dma_work);

for (i = 0; i < NR_PHY_CHAN; i++) {
struct sa11x0_dma_phy *p = &d->phy[i];
@@ -976,7 +977,7 @@ static int sa11x0_dma_probe(struct platform_device *pdev)
for (i = 0; i < NR_PHY_CHAN; i++)
sa11x0_dma_free_irq(pdev, i, &d->phy[i]);
err_irq:
- tasklet_kill(&d->task);
+ cancel_work_sync(&d->work);
iounmap(d->base);
err_ioremap:
kfree(d);
@@ -994,7 +995,7 @@ static void sa11x0_dma_remove(struct platform_device *pdev)
sa11x0_dma_free_channels(&d->slave);
for (pch = 0; pch < NR_PHY_CHAN; pch++)
sa11x0_dma_free_irq(pdev, pch, &d->phy[pch]);
- tasklet_kill(&d->task);
+ cancel_work_sync(&d->work);
iounmap(d->base);
kfree(d);
}
diff --git a/drivers/dma/sf-pdma/sf-pdma.c b/drivers/dma/sf-pdma/sf-pdma.c
index 428473611115..eb865f326f1c 100644
--- a/drivers/dma/sf-pdma/sf-pdma.c
+++ b/drivers/dma/sf-pdma/sf-pdma.c
@@ -22,6 +22,7 @@
#include <linux/of.h>
#include <linux/of_dma.h>
#include <linux/slab.h>
+#include <linux/workqueue.h>

#include "sf-pdma.h"

@@ -295,9 +296,9 @@ static void sf_pdma_free_desc(struct virt_dma_desc *vdesc)
kfree(desc);
}

-static void sf_pdma_donebh_tasklet(struct tasklet_struct *t)
+static void sf_pdma_donebh_work(struct work_struct *t)
{
- struct sf_pdma_chan *chan = from_tasklet(chan, t, done_tasklet);
+ struct sf_pdma_chan *chan = from_work(chan, t, done_work);
unsigned long flags;

spin_lock_irqsave(&chan->lock, flags);
@@ -319,9 +320,9 @@ static void sf_pdma_donebh_tasklet(struct tasklet_struct *t)
spin_unlock_irqrestore(&chan->vchan.lock, flags);
}

-static void sf_pdma_errbh_tasklet(struct tasklet_struct *t)
+static void sf_pdma_errbh_work(struct work_struct *t)
{
- struct sf_pdma_chan *chan = from_tasklet(chan, t, err_tasklet);
+ struct sf_pdma_chan *chan = from_work(chan, t, err_work);
struct sf_pdma_desc *desc = chan->desc;
unsigned long flags;

@@ -352,7 +353,7 @@ static irqreturn_t sf_pdma_done_isr(int irq, void *dev_id)
residue = readq(regs->residue);

if (!residue) {
- tasklet_hi_schedule(&chan->done_tasklet);
+ queue_work(system_bh_highpri_wq, &chan->done_work);
} else {
/* submit next trascatioin if possible */
struct sf_pdma_desc *desc = chan->desc;
@@ -378,7 +379,7 @@ static irqreturn_t sf_pdma_err_isr(int irq, void *dev_id)
writel((readl(regs->ctrl)) & ~PDMA_ERR_STATUS_MASK, regs->ctrl);
spin_unlock(&chan->lock);

- tasklet_schedule(&chan->err_tasklet);
+ queue_work(system_bh_wq, &chan->err_work);

return IRQ_HANDLED;
}
@@ -488,8 +489,8 @@ static void sf_pdma_setup_chans(struct sf_pdma *pdma)

writel(PDMA_CLEAR_CTRL, chan->regs.ctrl);

- tasklet_setup(&chan->done_tasklet, sf_pdma_donebh_tasklet);
- tasklet_setup(&chan->err_tasklet, sf_pdma_errbh_tasklet);
+ INIT_WORK(&chan->done_work, sf_pdma_donebh_work);
+ INIT_WORK(&chan->err_work, sf_pdma_errbh_work);
}
}

@@ -603,9 +604,9 @@ static void sf_pdma_remove(struct platform_device *pdev)
devm_free_irq(&pdev->dev, ch->txirq, ch);
devm_free_irq(&pdev->dev, ch->errirq, ch);
list_del(&ch->vchan.chan.device_node);
- tasklet_kill(&ch->vchan.task);
- tasklet_kill(&ch->done_tasklet);
- tasklet_kill(&ch->err_tasklet);
+ cancel_work_sync(&ch->vchan.work);
+ cancel_work_sync(&ch->done_work);
+ cancel_work_sync(&ch->err_work);
}

if (pdev->dev.of_node)
diff --git a/drivers/dma/sf-pdma/sf-pdma.h b/drivers/dma/sf-pdma/sf-pdma.h
index 215e07183d7e..87c6dd06800a 100644
--- a/drivers/dma/sf-pdma/sf-pdma.h
+++ b/drivers/dma/sf-pdma/sf-pdma.h
@@ -18,6 +18,7 @@

#include <linux/dmaengine.h>
#include <linux/dma-direction.h>
+#include <linux/workqueue.h>

#include "../dmaengine.h"
#include "../virt-dma.h"
@@ -99,8 +100,8 @@ struct sf_pdma_chan {
u32 attr;
dma_addr_t dma_dev_addr;
u32 dma_dev_size;
- struct tasklet_struct done_tasklet;
- struct tasklet_struct err_tasklet;
+ struct work_struct done_work;
+ struct work_struct err_work;
struct pdma_regs regs;
spinlock_t lock; /* protect chan data */
bool xfer_err;
diff --git a/drivers/dma/sprd-dma.c b/drivers/dma/sprd-dma.c
index 3f54ff37c5e0..fb4b6b31cc22 100644
--- a/drivers/dma/sprd-dma.c
+++ b/drivers/dma/sprd-dma.c
@@ -1253,7 +1253,7 @@ static void sprd_dma_remove(struct platform_device *pdev)
list_for_each_entry_safe(c, cn, &sdev->dma_dev.channels,
vc.chan.device_node) {
list_del(&c->vc.chan.device_node);
- tasklet_kill(&c->vc.task);
+ cancel_work_sync(&c->vc.work);
}

of_dma_controller_free(pdev->dev.of_node);
diff --git a/drivers/dma/st_fdma.c b/drivers/dma/st_fdma.c
index 8880b5e336f8..5892632654e7 100644
--- a/drivers/dma/st_fdma.c
+++ b/drivers/dma/st_fdma.c
@@ -733,7 +733,7 @@ static void st_fdma_free(struct st_fdma_dev *fdev)
for (i = 0; i < fdev->nr_channels; i++) {
fchan = &fdev->chans[i];
list_del(&fchan->vchan.chan.device_node);
- tasklet_kill(&fchan->vchan.task);
+ cancel_work_sync(&fchan->vchan.work);
}
}

diff --git a/drivers/dma/ste_dma40.c b/drivers/dma/ste_dma40.c
index 2c489299148e..c2b33351c1c9 100644
--- a/drivers/dma/ste_dma40.c
+++ b/drivers/dma/ste_dma40.c
@@ -23,6 +23,7 @@
#include <linux/of_dma.h>
#include <linux/amba/bus.h>
#include <linux/regulator/consumer.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"
#include "ste_dma40.h"
@@ -456,12 +457,12 @@ struct d40_base;
* @lock: A spinlock to protect this struct.
* @log_num: The logical number, if any of this channel.
* @pending_tx: The number of pending transfers. Used between interrupt handler
- * and tasklet.
+ * and work.
* @busy: Set to true when transfer is ongoing on this channel.
* @phy_chan: Pointer to physical channel which this instance runs on. If this
* point is NULL, then the channel is not allocated.
* @chan: DMA engine handle.
- * @tasklet: Tasklet that gets scheduled from interrupt context to complete a
+ * @work: Work that gets scheduled from interrupt context to complete a
* transfer and call client callback.
* @client: Cliented owned descriptor list.
* @pending_queue: Submitted jobs, to be issued by issue_pending()
@@ -489,7 +490,7 @@ struct d40_chan {
bool busy;
struct d40_phy_res *phy_chan;
struct dma_chan chan;
- struct tasklet_struct tasklet;
+ struct work_struct work;
struct list_head client;
struct list_head pending_queue;
struct list_head active;
@@ -1590,13 +1591,13 @@ static void dma_tc_handle(struct d40_chan *d40c)
}

d40c->pending_tx++;
- tasklet_schedule(&d40c->tasklet);
+ queue_work(system_bh_wq, &d40c->work);

}

-static void dma_tasklet(struct tasklet_struct *t)
+static void dma_work(struct work_struct *t)
{
- struct d40_chan *d40c = from_tasklet(d40c, t, tasklet);
+ struct d40_chan *d40c = from_work(d40c, t, work);
struct d40_desc *d40d;
unsigned long flags;
bool callback_active;
@@ -1644,7 +1645,7 @@ static void dma_tasklet(struct tasklet_struct *t)
d40c->pending_tx--;

if (d40c->pending_tx)
- tasklet_schedule(&d40c->tasklet);
+ queue_work(system_bh_wq, &d40c->work);

spin_unlock_irqrestore(&d40c->lock, flags);

@@ -2825,7 +2826,7 @@ static void __init d40_chan_init(struct d40_base *base, struct dma_device *dma,
INIT_LIST_HEAD(&d40c->client);
INIT_LIST_HEAD(&d40c->prepare_queue);

- tasklet_setup(&d40c->tasklet, dma_tasklet);
+ INIT_WORK(&d40c->work, dma_work);

list_add_tail(&d40c->chan.device_node,
&dma->channels);
diff --git a/drivers/dma/sun6i-dma.c b/drivers/dma/sun6i-dma.c
index 583bf49031cf..5afe43f92342 100644
--- a/drivers/dma/sun6i-dma.c
+++ b/drivers/dma/sun6i-dma.c
@@ -20,6 +20,7 @@
#include <linux/reset.h>
#include <linux/slab.h>
#include <linux/types.h>
+#include <linux/workqueue.h>

#include "virt-dma.h"

@@ -200,8 +201,8 @@ struct sun6i_dma_dev {
int irq;
spinlock_t lock;
struct reset_control *rstc;
- struct tasklet_struct task;
- atomic_t tasklet_shutdown;
+ struct work_struct work;
+ atomic_t work_shutdown;
struct list_head pending;
struct dma_pool *pool;
struct sun6i_pchan *pchans;
@@ -474,9 +475,9 @@ static int sun6i_dma_start_desc(struct sun6i_vchan *vchan)
return 0;
}

-static void sun6i_dma_tasklet(struct tasklet_struct *t)
+static void sun6i_dma_work(struct work_struct *t)
{
- struct sun6i_dma_dev *sdev = from_tasklet(sdev, t, task);
+ struct sun6i_dma_dev *sdev = from_work(sdev, t, work);
struct sun6i_vchan *vchan;
struct sun6i_pchan *pchan;
unsigned int pchan_alloc = 0;
@@ -574,8 +575,8 @@ static irqreturn_t sun6i_dma_interrupt(int irq, void *dev_id)
status = status >> DMA_IRQ_CHAN_WIDTH;
}

- if (!atomic_read(&sdev->tasklet_shutdown))
- tasklet_schedule(&sdev->task);
+ if (!atomic_read(&sdev->work_shutdown))
+ queue_work(system_bh_wq, &sdev->work);
ret = IRQ_HANDLED;
}

@@ -1000,7 +1001,7 @@ static void sun6i_dma_issue_pending(struct dma_chan *chan)

if (!vchan->phy && list_empty(&vchan->node)) {
list_add_tail(&vchan->node, &sdev->pending);
- tasklet_schedule(&sdev->task);
+ queue_work(system_bh_wq, &sdev->work);
dev_dbg(chan2dev(chan), "vchan %p: issued\n",
&vchan->vc);
}
@@ -1048,20 +1049,20 @@ static struct dma_chan *sun6i_dma_of_xlate(struct of_phandle_args *dma_spec,
return chan;
}

-static inline void sun6i_kill_tasklet(struct sun6i_dma_dev *sdev)
+static inline void sun6i_kill_work(struct sun6i_dma_dev *sdev)
{
/* Disable all interrupts from DMA */
writel(0, sdev->base + DMA_IRQ_EN(0));
writel(0, sdev->base + DMA_IRQ_EN(1));

- /* Prevent spurious interrupts from scheduling the tasklet */
- atomic_inc(&sdev->tasklet_shutdown);
+ /* Prevent spurious interrupts from scheduling the work */
+ atomic_inc(&sdev->work_shutdown);

/* Make sure we won't have any further interrupts */
devm_free_irq(sdev->slave.dev, sdev->irq, sdev);

- /* Actually prevent the tasklet from being scheduled */
- tasklet_kill(&sdev->task);
+ /* Actually prevent the work from being scheduled */
+ cancel_work_sync(&sdev->work);
}

static inline void sun6i_dma_free(struct sun6i_dma_dev *sdev)
@@ -1072,7 +1073,7 @@ static inline void sun6i_dma_free(struct sun6i_dma_dev *sdev)
struct sun6i_vchan *vchan = &sdev->vchans[i];

list_del(&vchan->vc.chan.device_node);
- tasklet_kill(&vchan->vc.task);
+ cancel_work_sync(&vchan->vc.work);
}
}

@@ -1393,7 +1394,7 @@ static int sun6i_dma_probe(struct platform_device *pdev)
if (!sdc->vchans)
return -ENOMEM;

- tasklet_setup(&sdc->task, sun6i_dma_tasklet);
+ INIT_WORK(&sdc->work, sun6i_dma_work);

for (i = 0; i < sdc->num_pchans; i++) {
struct sun6i_pchan *pchan = &sdc->pchans[i];
@@ -1458,7 +1459,7 @@ static int sun6i_dma_probe(struct platform_device *pdev)
err_dma_unregister:
dma_async_device_unregister(&sdc->slave);
err_irq_disable:
- sun6i_kill_tasklet(sdc);
+ sun6i_kill_work(sdc);
err_mbus_clk_disable:
clk_disable_unprepare(sdc->clk_mbus);
err_clk_disable:
@@ -1477,7 +1478,7 @@ static void sun6i_dma_remove(struct platform_device *pdev)
of_dma_controller_free(pdev->dev.of_node);
dma_async_device_unregister(&sdc->slave);

- sun6i_kill_tasklet(sdc);
+ sun6i_kill_work(sdc);

clk_disable_unprepare(sdc->clk_mbus);
clk_disable_unprepare(sdc->clk);
diff --git a/drivers/dma/tegra186-gpc-dma.c b/drivers/dma/tegra186-gpc-dma.c
index 88547a23825b..5078cb410401 100644
--- a/drivers/dma/tegra186-gpc-dma.c
+++ b/drivers/dma/tegra186-gpc-dma.c
@@ -1266,7 +1266,7 @@ static void tegra_dma_free_chan_resources(struct dma_chan *dc)
tegra_dma_terminate_all(dc);
synchronize_irq(tdc->irq);

- tasklet_kill(&tdc->vc.task);
+ cancel_work_sync(&tdc->vc.work);
tdc->config_init = false;
tdc->slave_id = -1;
tdc->sid_dir = DMA_TRANS_NONE;
diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
index ac69778827f2..51c462840d47 100644
--- a/drivers/dma/tegra20-apb-dma.c
+++ b/drivers/dma/tegra20-apb-dma.c
@@ -24,6 +24,7 @@
#include <linux/reset.h>
#include <linux/slab.h>
#include <linux/wait.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"

@@ -194,9 +195,9 @@ struct tegra_dma_channel {
struct list_head free_dma_desc;
struct list_head cb_desc;

- /* ISR handler and tasklet for bottom half of isr handling */
+ /* ISR handler and work for bottom half of isr handling */
dma_isr_handler isr_handler;
- struct tasklet_struct tasklet;
+ struct work_struct work;

/* Channel-slave specific configuration */
unsigned int slave_id;
@@ -632,9 +633,9 @@ static void handle_cont_sngl_cycle_dma_done(struct tegra_dma_channel *tdc,
}
}

-static void tegra_dma_tasklet(struct tasklet_struct *t)
+static void tegra_dma_work(struct work_struct *t)
{
- struct tegra_dma_channel *tdc = from_tasklet(tdc, t, tasklet);
+ struct tegra_dma_channel *tdc = from_work(tdc, t, work);
struct dmaengine_desc_callback cb;
struct tegra_dma_desc *dma_desc;
unsigned int cb_count;
@@ -670,7 +671,7 @@ static irqreturn_t tegra_dma_isr(int irq, void *dev_id)
if (status & TEGRA_APBDMA_STATUS_ISE_EOC) {
tdc_write(tdc, TEGRA_APBDMA_CHAN_STATUS, status);
tdc->isr_handler(tdc, false);
- tasklet_schedule(&tdc->tasklet);
+ queue_work(system_bh_wq, &tdc->work);
wake_up_all(&tdc->wq);
spin_unlock(&tdc->lock);
return IRQ_HANDLED;
@@ -819,7 +820,7 @@ static void tegra_dma_synchronize(struct dma_chan *dc)
*/
wait_event(tdc->wq, tegra_dma_eoc_interrupt_deasserted(tdc));

- tasklet_kill(&tdc->tasklet);
+ cancel_work_sync(&tdc->work);

pm_runtime_put(tdc->tdma->dev);
}
@@ -1317,7 +1318,7 @@ static void tegra_dma_free_chan_resources(struct dma_chan *dc)
dev_dbg(tdc2dev(tdc), "Freeing channel %d\n", tdc->id);

tegra_dma_terminate_all(dc);
- tasklet_kill(&tdc->tasklet);
+ cancel_work_sync(&tdc->work);

list_splice_init(&tdc->pending_sg_req, &sg_req_list);
list_splice_init(&tdc->free_sg_req, &sg_req_list);
@@ -1511,7 +1512,7 @@ static int tegra_dma_probe(struct platform_device *pdev)
tdc->id = i;
tdc->slave_id = TEGRA_APBDMA_SLAVE_ID_INVALID;

- tasklet_setup(&tdc->tasklet, tegra_dma_tasklet);
+ INIT_WORK(&tdc->work, tegra_dma_work);
spin_lock_init(&tdc->lock);
init_waitqueue_head(&tdc->wq);

@@ -1617,7 +1618,7 @@ static int __maybe_unused tegra_dma_dev_suspend(struct device *dev)
for (i = 0; i < tdma->chip_data->nr_channels; i++) {
struct tegra_dma_channel *tdc = &tdma->channels[i];

- tasklet_kill(&tdc->tasklet);
+ cancel_work_sync(&tdc->work);

spin_lock_irqsave(&tdc->lock, flags);
busy = tdc->busy;
diff --git a/drivers/dma/tegra210-adma.c b/drivers/dma/tegra210-adma.c
index 24ad7077c53b..36d6bb3e9e1d 100644
--- a/drivers/dma/tegra210-adma.c
+++ b/drivers/dma/tegra210-adma.c
@@ -693,7 +693,7 @@ static void tegra_adma_free_chan_resources(struct dma_chan *dc)

tegra_adma_terminate_all(dc);
vchan_free_chan_resources(&tdc->vc);
- tasklet_kill(&tdc->vc.task);
+ cancel_work_sync(&tdc->vc.work);
free_irq(tdc->irq, tdc);
pm_runtime_put(tdc2dev(tdc));

diff --git a/drivers/dma/ti/edma.c b/drivers/dma/ti/edma.c
index 5f8d2e93ff3f..c9f78b462a70 100644
--- a/drivers/dma/ti/edma.c
+++ b/drivers/dma/ti/edma.c
@@ -2556,7 +2556,7 @@ static void edma_cleanupp_vchan(struct dma_device *dmadev)
list_for_each_entry_safe(echan, _echan,
&dmadev->channels, vchan.chan.device_node) {
list_del(&echan->vchan.chan.device_node);
- tasklet_kill(&echan->vchan.task);
+ cancel_work_sync(&echan->vchan.work);
}
}

diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index 6400d06588a2..ab5780efb4db 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -28,6 +28,7 @@
#include <linux/soc/ti/ti_sci_inta_msi.h>
#include <linux/dma/k3-event-router.h>
#include <linux/dma/ti-cppi5.h>
+#include <linux/workqueue.h>

#include "../virt-dma.h"
#include "k3-udma.h"
@@ -4003,12 +4004,12 @@ static void udma_desc_pre_callback(struct virt_dma_chan *vc,
}

/*
- * This tasklet handles the completion of a DMA descriptor by
+ * This work handles the completion of a DMA descriptor by
* calling its callback and freeing it.
*/
-static void udma_vchan_complete(struct tasklet_struct *t)
+static void udma_vchan_complete(struct work_struct *t)
{
- struct virt_dma_chan *vc = from_tasklet(vc, t, task);
+ struct virt_dma_chan *vc = from_work(vc, t, work);
struct virt_dma_desc *vd, *_vd;
struct dmaengine_desc_callback cb;
LIST_HEAD(head);
@@ -4073,7 +4074,7 @@ static void udma_free_chan_resources(struct dma_chan *chan)
}

vchan_free_chan_resources(&uc->vc);
- tasklet_kill(&uc->vc.task);
+ cancel_work_sync(&uc->vc.work);

bcdma_free_bchan_resources(uc);
udma_free_tx_resources(uc);
@@ -5534,7 +5535,7 @@ static int udma_probe(struct platform_device *pdev)

vchan_init(&uc->vc, &ud->ddev);
/* Use custom vchan completion handling */
- tasklet_setup(&uc->vc.task, udma_vchan_complete);
+ INIT_WORK(&uc->vc.work, udma_vchan_complete);
init_completion(&uc->teardown_completed);
INIT_DELAYED_WORK(&uc->tx_drain.work, udma_check_tx_completion);
}
diff --git a/drivers/dma/ti/omap-dma.c b/drivers/dma/ti/omap-dma.c
index b9e0e22383b7..7b0c4f571a94 100644
--- a/drivers/dma/ti/omap-dma.c
+++ b/drivers/dma/ti/omap-dma.c
@@ -1521,7 +1521,7 @@ static void omap_dma_free(struct omap_dmadev *od)
struct omap_chan, vc.chan.device_node);

list_del(&c->vc.chan.device_node);
- tasklet_kill(&c->vc.task);
+ cancel_work_sync(&c->vc.work);
kfree(c);
}
}
diff --git a/drivers/dma/timb_dma.c b/drivers/dma/timb_dma.c
index 7410025605e0..c74f38a634ae 100644
--- a/drivers/dma/timb_dma.c
+++ b/drivers/dma/timb_dma.c
@@ -18,6 +18,7 @@
#include <linux/slab.h>

#include <linux/timb_dma.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"

@@ -72,7 +73,7 @@ struct timb_dma_chan {
void __iomem *membase;
spinlock_t lock; /* Used to protect data structures,
especially the lists and descriptors,
- from races between the tasklet and calls
+ from races between the work and calls
from above */
bool ongoing;
struct list_head active_list;
@@ -87,7 +88,7 @@ struct timb_dma_chan {
struct timb_dma {
struct dma_device dma;
void __iomem *membase;
- struct tasklet_struct tasklet;
+ struct work_struct work;
struct timb_dma_chan channels[];
};

@@ -563,9 +564,9 @@ static int td_terminate_all(struct dma_chan *chan)
return 0;
}

-static void td_tasklet(struct tasklet_struct *t)
+static void td_work(struct work_struct *t)
{
- struct timb_dma *td = from_tasklet(td, t, tasklet);
+ struct timb_dma *td = from_work(td, t, work);
u32 isr;
u32 ipr;
u32 ier;
@@ -598,10 +599,10 @@ static irqreturn_t td_irq(int irq, void *devid)
u32 ipr = ioread32(td->membase + TIMBDMA_IPR);

if (ipr) {
- /* disable interrupts, will be re-enabled in tasklet */
+ /* disable interrupts, will be re-enabled in work */
iowrite32(0, td->membase + TIMBDMA_IER);

- tasklet_schedule(&td->tasklet);
+ queue_work(system_bh_wq, &td->work);

return IRQ_HANDLED;
} else
@@ -658,12 +659,12 @@ static int td_probe(struct platform_device *pdev)
iowrite32(0x0, td->membase + TIMBDMA_IER);
iowrite32(0xFFFFFFFF, td->membase + TIMBDMA_ISR);

- tasklet_setup(&td->tasklet, td_tasklet);
+ INIT_WORK(&td->work, td_work);

err = request_irq(irq, td_irq, IRQF_SHARED, DRIVER_NAME, td);
if (err) {
dev_err(&pdev->dev, "Failed to request IRQ\n");
- goto err_tasklet_kill;
+ goto err_work_kill;
}

td->dma.device_alloc_chan_resources = td_alloc_chan_resources;
@@ -728,8 +729,8 @@ static int td_probe(struct platform_device *pdev)

err_free_irq:
free_irq(irq, td);
-err_tasklet_kill:
- tasklet_kill(&td->tasklet);
+err_work_kill:
+ cancel_work_sync(&td->work);
iounmap(td->membase);
err_free_mem:
kfree(td);
@@ -748,7 +749,7 @@ static void td_remove(struct platform_device *pdev)

dma_async_device_unregister(&td->dma);
free_irq(irq, td);
- tasklet_kill(&td->tasklet);
+ cancel_work_sync(&td->work);
iounmap(td->membase);
kfree(td);
release_mem_region(iomem->start, resource_size(iomem));
diff --git a/drivers/dma/txx9dmac.c b/drivers/dma/txx9dmac.c
index 44ba377b4b5a..04916859a7fb 100644
--- a/drivers/dma/txx9dmac.c
+++ b/drivers/dma/txx9dmac.c
@@ -12,6 +12,7 @@
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/scatterlist.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"
#include "txx9dmac.h"
@@ -340,7 +341,7 @@ static void txx9dmac_dostart(struct txx9dmac_chan *dc,
dev_err(chan2dev(&dc->chan),
"BUG: Attempted to start non-idle channel\n");
txx9dmac_dump_regs(dc);
- /* The tasklet will hopefully advance the queue... */
+ /* The work will hopefully advance the queue... */
return;
}

@@ -601,15 +602,15 @@ static void txx9dmac_scan_descriptors(struct txx9dmac_chan *dc)
}
}

-static void txx9dmac_chan_tasklet(struct tasklet_struct *t)
+static void txx9dmac_chan_work(struct work_struct *t)
{
int irq;
u32 csr;
struct txx9dmac_chan *dc;

- dc = from_tasklet(dc, t, tasklet);
+ dc = from_work(dc, t, work);
csr = channel_readl(dc, CSR);
- dev_vdbg(chan2dev(&dc->chan), "tasklet: status=%x\n", csr);
+ dev_vdbg(chan2dev(&dc->chan), "work: status=%x\n", csr);

spin_lock(&dc->lock);
if (csr & (TXX9_DMA_CSR_ABCHC | TXX9_DMA_CSR_NCHNC |
@@ -628,7 +629,7 @@ static irqreturn_t txx9dmac_chan_interrupt(int irq, void *dev_id)
dev_vdbg(chan2dev(&dc->chan), "interrupt: status=%#x\n",
channel_readl(dc, CSR));

- tasklet_schedule(&dc->tasklet);
+ queue_work(system_bh_wq, &dc->work);
/*
* Just disable the interrupts. We'll turn them back on in the
* softirq handler.
@@ -638,23 +639,23 @@ static irqreturn_t txx9dmac_chan_interrupt(int irq, void *dev_id)
return IRQ_HANDLED;
}

-static void txx9dmac_tasklet(struct tasklet_struct *t)
+static void txx9dmac_work(struct work_struct *t)
{
int irq;
u32 csr;
struct txx9dmac_chan *dc;

- struct txx9dmac_dev *ddev = from_tasklet(ddev, t, tasklet);
+ struct txx9dmac_dev *ddev = from_work(ddev, t, work);
u32 mcr;
int i;

mcr = dma_readl(ddev, MCR);
- dev_vdbg(ddev->chan[0]->dma.dev, "tasklet: mcr=%x\n", mcr);
+ dev_vdbg(ddev->chan[0]->dma.dev, "work: mcr=%x\n", mcr);
for (i = 0; i < TXX9_DMA_MAX_NR_CHANNELS; i++) {
if ((mcr >> (24 + i)) & 0x11) {
dc = ddev->chan[i];
csr = channel_readl(dc, CSR);
- dev_vdbg(chan2dev(&dc->chan), "tasklet: status=%x\n",
+ dev_vdbg(chan2dev(&dc->chan), "work: status=%x\n",
csr);
spin_lock(&dc->lock);
if (csr & (TXX9_DMA_CSR_ABCHC | TXX9_DMA_CSR_NCHNC |
@@ -675,7 +676,7 @@ static irqreturn_t txx9dmac_interrupt(int irq, void *dev_id)
dev_vdbg(ddev->chan[0]->dma.dev, "interrupt: status=%#x\n",
dma_readl(ddev, MCR));

- tasklet_schedule(&ddev->tasklet);
+ queue_work(system_bh_wq, &ddev->work);
/*
* Just disable the interrupts. We'll turn them back on in the
* softirq handler.
@@ -1113,7 +1114,7 @@ static int __init txx9dmac_chan_probe(struct platform_device *pdev)
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return irq;
- tasklet_setup(&dc->tasklet, txx9dmac_chan_tasklet);
+ INIT_WORK(&dc->work, txx9dmac_chan_work);
dc->irq = irq;
err = devm_request_irq(&pdev->dev, dc->irq,
txx9dmac_chan_interrupt, 0, dev_name(&pdev->dev), dc);
@@ -1159,7 +1160,7 @@ static void txx9dmac_chan_remove(struct platform_device *pdev)
dma_async_device_unregister(&dc->dma);
if (dc->irq >= 0) {
devm_free_irq(&pdev->dev, dc->irq, dc);
- tasklet_kill(&dc->tasklet);
+ cancel_work_sync(&dc->work);
}
dc->ddev->chan[pdev->id % TXX9_DMA_MAX_NR_CHANNELS] = NULL;
}
@@ -1198,7 +1199,7 @@ static int __init txx9dmac_probe(struct platform_device *pdev)

ddev->irq = platform_get_irq(pdev, 0);
if (ddev->irq >= 0) {
- tasklet_setup(&ddev->tasklet, txx9dmac_tasklet);
+ INIT_WORK(&ddev->work, txx9dmac_work);
err = devm_request_irq(&pdev->dev, ddev->irq,
txx9dmac_interrupt, 0, dev_name(&pdev->dev), ddev);
if (err)
@@ -1221,7 +1222,7 @@ static void txx9dmac_remove(struct platform_device *pdev)
txx9dmac_off(ddev);
if (ddev->irq >= 0) {
devm_free_irq(&pdev->dev, ddev->irq, ddev);
- tasklet_kill(&ddev->tasklet);
+ cancel_work_sync(&ddev->work);
}
}

diff --git a/drivers/dma/txx9dmac.h b/drivers/dma/txx9dmac.h
index aa53eafb1519..e72457b0a509 100644
--- a/drivers/dma/txx9dmac.h
+++ b/drivers/dma/txx9dmac.h
@@ -8,6 +8,7 @@
#define TXX9DMAC_H

#include <linux/dmaengine.h>
+#include <linux/workqueue.h>
#include <asm/txx9/dmac.h>

/*
@@ -162,7 +163,7 @@ struct txx9dmac_chan {
struct dma_device dma;
struct txx9dmac_dev *ddev;
void __iomem *ch_regs;
- struct tasklet_struct tasklet;
+ struct work_struct work;
int irq;
u32 ccr;

@@ -178,7 +179,7 @@ struct txx9dmac_chan {

struct txx9dmac_dev {
void __iomem *regs;
- struct tasklet_struct tasklet;
+ struct work_struct work;
int irq;
struct txx9dmac_chan *chan[TXX9_DMA_MAX_NR_CHANNELS];
bool have_64bit_regs;
diff --git a/drivers/dma/virt-dma.c b/drivers/dma/virt-dma.c
index a6f4265be0c9..600084be36a7 100644
--- a/drivers/dma/virt-dma.c
+++ b/drivers/dma/virt-dma.c
@@ -8,6 +8,7 @@
#include <linux/dmaengine.h>
#include <linux/module.h>
#include <linux/spinlock.h>
+#include <linux/workqueue.h>

#include "virt-dma.h"

@@ -77,12 +78,12 @@ struct virt_dma_desc *vchan_find_desc(struct virt_dma_chan *vc,
EXPORT_SYMBOL_GPL(vchan_find_desc);

/*
- * This tasklet handles the completion of a DMA descriptor by
+ * This work handles the completion of a DMA descriptor by
* calling its callback and freeing it.
*/
-static void vchan_complete(struct tasklet_struct *t)
+static void vchan_complete(struct work_struct *t)
{
- struct virt_dma_chan *vc = from_tasklet(vc, t, task);
+ struct virt_dma_chan *vc = from_work(vc, t, work);
struct virt_dma_desc *vd, *_vd;
struct dmaengine_desc_callback cb;
LIST_HEAD(head);
@@ -131,7 +132,7 @@ void vchan_init(struct virt_dma_chan *vc, struct dma_device *dmadev)
INIT_LIST_HEAD(&vc->desc_completed);
INIT_LIST_HEAD(&vc->desc_terminated);

- tasklet_setup(&vc->task, vchan_complete);
+ INIT_WORK(&vc->work, vchan_complete);

vc->chan.device = dmadev;
list_add_tail(&vc->chan.device_node, &dmadev->channels);
diff --git a/drivers/dma/virt-dma.h b/drivers/dma/virt-dma.h
index e9f5250fbe4d..5b6e01508177 100644
--- a/drivers/dma/virt-dma.h
+++ b/drivers/dma/virt-dma.h
@@ -9,6 +9,7 @@

#include <linux/dmaengine.h>
#include <linux/interrupt.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"

@@ -21,7 +22,7 @@ struct virt_dma_desc {

struct virt_dma_chan {
struct dma_chan chan;
- struct tasklet_struct task;
+ struct work_struct work;
void (*desc_free)(struct virt_dma_desc *);

spinlock_t lock;
@@ -102,7 +103,7 @@ static inline void vchan_cookie_complete(struct virt_dma_desc *vd)
vd, cookie);
list_add_tail(&vd->node, &vc->desc_completed);

- tasklet_schedule(&vc->task);
+ queue_work(system_bh_wq, &vc->work);
}

/**
@@ -133,7 +134,7 @@ static inline void vchan_cyclic_callback(struct virt_dma_desc *vd)
struct virt_dma_chan *vc = to_virt_chan(vd->tx.chan);

vc->cyclic = vd;
- tasklet_schedule(&vc->task);
+ queue_work(system_bh_wq, &vc->work);
}

/**
@@ -213,7 +214,7 @@ static inline void vchan_synchronize(struct virt_dma_chan *vc)
LIST_HEAD(head);
unsigned long flags;

- tasklet_kill(&vc->task);
+ cancel_work_sync(&vc->work);

spin_lock_irqsave(&vc->lock, flags);

diff --git a/drivers/dma/xgene-dma.c b/drivers/dma/xgene-dma.c
index fd4397adeb79..c7b7fc0c7fcc 100644
--- a/drivers/dma/xgene-dma.c
+++ b/drivers/dma/xgene-dma.c
@@ -21,6 +21,7 @@
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/platform_device.h>
+#include <linux/workqueue.h>

#include "dmaengine.h"

@@ -261,7 +262,7 @@ struct xgene_dma_desc_sw {
* These descriptors have already had their cleanup actions run. They
* are waiting for the ACK bit to be set by the async tx API.
* @desc_pool: descriptor pool for DMA operations
- * @tasklet: bottom half where all completed descriptors cleans
+ * @work: bottom half where all completed descriptors cleans
* @tx_ring: transmit ring descriptor that we use to prepare actual
* descriptors for further executions
* @rx_ring: receive ring descriptor that we use to get completed DMA
@@ -281,7 +282,7 @@ struct xgene_dma_chan {
struct list_head ld_running;
struct list_head ld_completed;
struct dma_pool *desc_pool;
- struct tasklet_struct tasklet;
+ struct work_struct work;
struct xgene_dma_ring tx_ring;
struct xgene_dma_ring rx_ring;
};
@@ -976,9 +977,9 @@ static enum dma_status xgene_dma_tx_status(struct dma_chan *dchan,
return dma_cookie_status(dchan, cookie, txstate);
}

-static void xgene_dma_tasklet_cb(struct tasklet_struct *t)
+static void xgene_dma_work_cb(struct work_struct *t)
{
- struct xgene_dma_chan *chan = from_tasklet(chan, t, tasklet);
+ struct xgene_dma_chan *chan = from_work(chan, t, work);

/* Run all cleanup for descriptors which have been completed */
xgene_dma_cleanup_descriptors(chan);
@@ -1000,11 +1001,11 @@ static irqreturn_t xgene_dma_chan_ring_isr(int irq, void *id)
disable_irq_nosync(chan->rx_irq);

/*
- * Schedule the tasklet to handle all cleanup of the current
+ * Schedule the work to handle all cleanup of the current
* transaction. It will start a new transaction if there is
* one pending.
*/
- tasklet_schedule(&chan->tasklet);
+ queue_work(system_bh_wq, &chan->work);

return IRQ_HANDLED;
}
@@ -1540,7 +1541,7 @@ static int xgene_dma_async_register(struct xgene_dma *pdma, int id)
INIT_LIST_HEAD(&chan->ld_pending);
INIT_LIST_HEAD(&chan->ld_running);
INIT_LIST_HEAD(&chan->ld_completed);
- tasklet_setup(&chan->tasklet, xgene_dma_tasklet_cb);
+ INIT_WORK(&chan->work, xgene_dma_work_cb);

chan->pending = 0;
chan->desc_pool = NULL;
@@ -1557,7 +1558,7 @@ static int xgene_dma_async_register(struct xgene_dma *pdma, int id)
ret = dma_async_device_register(dma_dev);
if (ret) {
chan_err(chan, "Failed to register async device %d", ret);
- tasklet_kill(&chan->tasklet);
+ cancel_work_sync(&chan->work);

return ret;
}
@@ -1580,7 +1581,7 @@ static int xgene_dma_init_async(struct xgene_dma *pdma)
if (ret) {
for (j = 0; j < i; j++) {
dma_async_device_unregister(&pdma->dma_dev[j]);
- tasklet_kill(&pdma->chan[j].tasklet);
+ cancel_work_sync(&pdma->chan[j].work);
}

return ret;
@@ -1791,7 +1792,7 @@ static void xgene_dma_remove(struct platform_device *pdev)

for (i = 0; i < XGENE_DMA_MAX_CHANNEL; i++) {
chan = &pdma->chan[i];
- tasklet_kill(&chan->tasklet);
+ cancel_work_sync(&chan->work);
xgene_dma_delete_chan_rings(chan);
}

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 5eb51ae93e89..d3192eedbee6 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -48,6 +48,7 @@
#include <linux/slab.h>
#include <linux/clk.h>
#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/workqueue.h>

#include "../dmaengine.h"

@@ -400,7 +401,7 @@ struct xilinx_dma_tx_descriptor {
* @err: Channel has errors
* @idle: Check for channel idle
* @terminating: Check for channel being synchronized by user
- * @tasklet: Cleanup work after irq
+ * @work: Cleanup work after irq
* @config: Device configuration info
* @flush_on_fsync: Flush on Frame sync
* @desc_pendingcount: Descriptor pending count
@@ -439,7 +440,7 @@ struct xilinx_dma_chan {
bool err;
bool idle;
bool terminating;
- struct tasklet_struct tasklet;
+ struct work_struct work;
struct xilinx_vdma_config config;
bool flush_on_fsync;
u32 desc_pendingcount;
@@ -1094,12 +1095,12 @@ static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
}

/**
- * xilinx_dma_do_tasklet - Schedule completion tasklet
+ * xilinx_dma_do_work - Schedule completion work
* @t: Pointer to the Xilinx DMA channel structure
*/
-static void xilinx_dma_do_tasklet(struct tasklet_struct *t)
+static void xilinx_dma_do_work(struct work_struct *t)
{
- struct xilinx_dma_chan *chan = from_tasklet(chan, t, tasklet);
+ struct xilinx_dma_chan *chan = from_work(chan, t, work);

xilinx_dma_chan_desc_cleanup(chan);
}
@@ -1859,7 +1860,7 @@ static irqreturn_t xilinx_mcdma_irq_handler(int irq, void *data)
spin_unlock(&chan->lock);
}

- tasklet_hi_schedule(&chan->tasklet);
+ queue_work(system_bh_highpri_wq, &chan->work);
return IRQ_HANDLED;
}

@@ -1916,7 +1917,7 @@ static irqreturn_t xilinx_dma_irq_handler(int irq, void *data)
spin_unlock(&chan->lock);
}

- tasklet_schedule(&chan->tasklet);
+ queue_work(system_bh_wq, &chan->work);
return IRQ_HANDLED;
}

@@ -2522,7 +2523,7 @@ static void xilinx_dma_synchronize(struct dma_chan *dchan)
{
struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);

- tasklet_kill(&chan->tasklet);
+ cancel_work_sync(&chan->work);
}

/**
@@ -2613,7 +2614,7 @@ static void xilinx_dma_chan_remove(struct xilinx_dma_chan *chan)
if (chan->irq > 0)
free_irq(chan->irq, chan);

- tasklet_kill(&chan->tasklet);
+ cancel_work_sync(&chan->work);

list_del(&chan->common.device_node);
}
@@ -2941,8 +2942,8 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
chan->has_sg ? "enabled" : "disabled");
}

- /* Initialize the tasklet */
- tasklet_setup(&chan->tasklet, xilinx_dma_do_tasklet);
+ /* Initialize the work */
+ INIT_WORK(&chan->work, xilinx_dma_do_work);

/*
* Initialize the DMA channel and add it to the DMA engine channels
diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
index b82815e64d24..a099ddffeba0 100644
--- a/drivers/dma/xilinx/xilinx_dpdma.c
+++ b/drivers/dma/xilinx/xilinx_dpdma.c
@@ -24,6 +24,7 @@
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/wait.h>
+#include <linux/workqueue.h>

#include <dt-bindings/dma/xlnx-zynqmp-dpdma.h>

@@ -234,7 +235,7 @@ struct xilinx_dpdma_chan {

spinlock_t lock; /* lock to access struct xilinx_dpdma_chan */
struct dma_pool *desc_pool;
- struct tasklet_struct err_task;
+ struct work_struct err_task;

struct {
struct xilinx_dpdma_tx_desc *pending;
@@ -1396,7 +1397,7 @@ static void xilinx_dpdma_synchronize(struct dma_chan *dchan)
}

/* -----------------------------------------------------------------------------
- * Interrupt and Tasklet Handling
+ * Interrupt and Work Handling
*/

/**
@@ -1443,7 +1444,7 @@ static void xilinx_dpdma_handle_err_irq(struct xilinx_dpdma_device *xdev,

for (i = 0; i < ARRAY_SIZE(xdev->chan); i++)
if (err || xilinx_dpdma_chan_err(xdev->chan[i], isr, eisr))
- tasklet_schedule(&xdev->chan[i]->err_task);
+ queue_work(system_bh_wq, &xdev->chan[i]->err_task);
}

/**
@@ -1471,16 +1472,16 @@ static void xilinx_dpdma_disable_irq(struct xilinx_dpdma_device *xdev)
}

/**
- * xilinx_dpdma_chan_err_task - Per channel tasklet for error handling
- * @t: pointer to the tasklet associated with this handler
+ * xilinx_dpdma_chan_err_task - Per channel work for error handling
+ * @t: pointer to the work associated with this handler
*
- * Per channel error handling tasklet. This function waits for the outstanding
+ * Per channel error handling work. This function waits for the outstanding
* transaction to complete and triggers error handling. After error handling,
* re-enable channel error interrupts, and restart the channel if needed.
*/
-static void xilinx_dpdma_chan_err_task(struct tasklet_struct *t)
+static void xilinx_dpdma_chan_err_task(struct work_struct *t)
{
- struct xilinx_dpdma_chan *chan = from_tasklet(chan, t, err_task);
+ struct xilinx_dpdma_chan *chan = from_work(chan, t, err_task);
struct xilinx_dpdma_device *xdev = chan->xdev;
unsigned long flags;

@@ -1569,7 +1570,7 @@ static int xilinx_dpdma_chan_init(struct xilinx_dpdma_device *xdev,
spin_lock_init(&chan->lock);
init_waitqueue_head(&chan->wait_to_stop);

- tasklet_setup(&chan->err_task, xilinx_dpdma_chan_err_task);
+ INIT_WORK(&chan->err_task, xilinx_dpdma_chan_err_task);

chan->vchan.desc_free = xilinx_dpdma_chan_free_tx_desc;
vchan_init(&chan->vchan, &xdev->common);
@@ -1584,7 +1585,7 @@ static void xilinx_dpdma_chan_remove(struct xilinx_dpdma_chan *chan)
if (!chan)
return;

- tasklet_kill(&chan->err_task);
+ cancel_work_sync(&chan->err_task);
list_del(&chan->vchan.chan.device_node);
}

diff --git a/drivers/dma/xilinx/zynqmp_dma.c b/drivers/dma/xilinx/zynqmp_dma.c
index f31631bef961..a4b55a45fd3d 100644
--- a/drivers/dma/xilinx/zynqmp_dma.c
+++ b/drivers/dma/xilinx/zynqmp_dma.c
@@ -18,6 +18,7 @@
#include <linux/clk.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/pm_runtime.h>
+#include <linux/workqueue.h>

#include "../dmaengine.h"

@@ -204,7 +205,7 @@ struct zynqmp_dma_desc_sw {
* @dev: The dma device
* @irq: Channel IRQ
* @is_dmacoherent: Tells whether dma operations are coherent or not
- * @tasklet: Cleanup work after irq
+ * @work: Cleanup work after irq
* @idle : Channel status;
* @desc_size: Size of the low level descriptor
* @err: Channel has errors
@@ -228,7 +229,7 @@ struct zynqmp_dma_chan {
struct device *dev;
int irq;
bool is_dmacoherent;
- struct tasklet_struct tasklet;
+ struct work_struct work;
bool idle;
size_t desc_size;
bool err;
@@ -724,7 +725,7 @@ static irqreturn_t zynqmp_dma_irq_handler(int irq, void *data)

writel(isr, chan->regs + ZYNQMP_DMA_ISR);
if (status & ZYNQMP_DMA_INT_DONE) {
- tasklet_schedule(&chan->tasklet);
+ queue_work(system_bh_wq, &chan->work);
ret = IRQ_HANDLED;
}

@@ -733,7 +734,7 @@ static irqreturn_t zynqmp_dma_irq_handler(int irq, void *data)

if (status & ZYNQMP_DMA_INT_ERR) {
chan->err = true;
- tasklet_schedule(&chan->tasklet);
+ queue_work(system_bh_wq, &chan->work);
dev_err(chan->dev, "Channel %p has errors\n", chan);
ret = IRQ_HANDLED;
}
@@ -748,12 +749,12 @@ static irqreturn_t zynqmp_dma_irq_handler(int irq, void *data)
}

/**
- * zynqmp_dma_do_tasklet - Schedule completion tasklet
+ * zynqmp_dma_do_work - Schedule completion work
* @t: Pointer to the ZynqMP DMA channel structure
*/
-static void zynqmp_dma_do_tasklet(struct tasklet_struct *t)
+static void zynqmp_dma_do_work(struct work_struct *t)
{
- struct zynqmp_dma_chan *chan = from_tasklet(chan, t, tasklet);
+ struct zynqmp_dma_chan *chan = from_work(chan, t, work);
u32 count;
unsigned long irqflags;

@@ -804,7 +805,7 @@ static void zynqmp_dma_synchronize(struct dma_chan *dchan)
{
struct zynqmp_dma_chan *chan = to_chan(dchan);

- tasklet_kill(&chan->tasklet);
+ cancel_work_sync(&chan->work);
}

/**
@@ -876,7 +877,7 @@ static void zynqmp_dma_chan_remove(struct zynqmp_dma_chan *chan)

if (chan->irq)
devm_free_irq(chan->zdev->dev, chan->irq, chan);
- tasklet_kill(&chan->tasklet);
+ cancel_work_sync(&chan->work);
list_del(&chan->common.device_node);
}

@@ -921,7 +922,7 @@ static int zynqmp_dma_chan_probe(struct zynqmp_dma_device *zdev,

chan->is_dmacoherent = of_property_read_bool(node, "dma-coherent");
zdev->chan = chan;
- tasklet_setup(&chan->tasklet, zynqmp_dma_do_tasklet);
+ INIT_WORK(&chan->work, zynqmp_dma_do_work);
spin_lock_init(&chan->lock);
INIT_LIST_HEAD(&chan->active_list);
INIT_LIST_HEAD(&chan->pending_list);
--
2.17.1


2024-03-27 16:22:49

by Duncan Sands

[permalink] [raw]
Subject: Re: [PATCH 4/9] USB: Convert from tasklet to BH workqueue

Hi Allen, the usbatm bits look very reasonable to me. Unfortunately I don't
have the hardware to test any more. Still, for what it's worth:

Signed-off-by: Duncan Sands <[email protected]>

2024-03-27 16:56:01

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH 4/9] USB: Convert from tasklet to BH workqueue

On Wed, Mar 27, 2024 at 04:03:09PM +0000, Allen Pais wrote:
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/infiniband/* from tasklet to BH workqueue.

No it does not, I think your changelog is wrong :(

>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>
> Signed-off-by: Allen Pais <[email protected]>
> ---
> drivers/usb/atm/usbatm.c | 55 +++++++++++++++--------------
> drivers/usb/atm/usbatm.h | 3 +-
> drivers/usb/core/hcd.c | 22 ++++++------
> drivers/usb/gadget/udc/fsl_qe_udc.c | 21 +++++------
> drivers/usb/gadget/udc/fsl_qe_udc.h | 4 +--
> drivers/usb/host/ehci-sched.c | 2 +-
> drivers/usb/host/fhci-hcd.c | 3 +-
> drivers/usb/host/fhci-sched.c | 10 +++---
> drivers/usb/host/fhci.h | 5 +--
> drivers/usb/host/xhci-dbgcap.h | 3 +-
> drivers/usb/host/xhci-dbgtty.c | 15 ++++----
> include/linux/usb/cdc_ncm.h | 2 +-
> include/linux/usb/usbnet.h | 2 +-
> 13 files changed, 76 insertions(+), 71 deletions(-)
>
> diff --git a/drivers/usb/atm/usbatm.c b/drivers/usb/atm/usbatm.c
> index 2da6615fbb6f..74849f24e52e 100644
> --- a/drivers/usb/atm/usbatm.c
> +++ b/drivers/usb/atm/usbatm.c
> @@ -17,7 +17,7 @@
> * - Removed the limit on the number of devices
> * - Module now autoloads on device plugin
> * - Merged relevant parts of sarlib
> - * - Replaced the kernel thread with a tasklet
> + * - Replaced the kernel thread with a work

a "work"?

> * - New packet transmission code
> * - Changed proc file contents
> * - Fixed all known SMP races
> @@ -68,6 +68,7 @@
> #include <linux/wait.h>
> #include <linux/kthread.h>
> #include <linux/ratelimit.h>
> +#include <linux/workqueue.h>
>
> #ifdef VERBOSE_DEBUG
> static int usbatm_print_packet(struct usbatm_data *instance, const unsigned char *data, int len);
> @@ -249,7 +250,7 @@ static void usbatm_complete(struct urb *urb)
> /* vdbg("%s: urb 0x%p, status %d, actual_length %d",
> __func__, urb, status, urb->actual_length); */
>
> - /* Can be invoked from task context, protect against interrupts */
> + /* Can be invoked from work context, protect against interrupts */

"workqueue"? This too seems wrong.

Same for other comment changes in this patch.

thanks,

greg k-h

2024-03-27 17:14:38

by Allen Pais

[permalink] [raw]
Subject: [PATCH 8/9] drivers/media/*: Convert from tasklet to BH workqueue

The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added. A BH workqueue
behaves similarly to regular workqueues except that the queued work items
are executed in the BH context.

This patch converts drivers/media/* from tasklet to BH workqueue.
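
The conversion is mechanical and follows the same pattern in every driver
touched by this series; a minimal sketch of that pattern is below (the
foo_* names are hypothetical and used only for illustration):

  #include <linux/interrupt.h>
  #include <linux/workqueue.h>

  struct foo_chan {
  	struct work_struct work;	/* was: struct tasklet_struct tasklet */
  	/* ... driver state ... */
  };

  /* The tasklet callback becomes a work handler; it still runs in BH context. */
  static void foo_work_fn(struct work_struct *t)
  {
  	struct foo_chan *chan = from_work(chan, t, work);	/* was: from_tasklet() */

  	/* bottom-half processing */
  }

  static irqreturn_t foo_irq(int irq, void *data)
  {
  	struct foo_chan *chan = data;

  	queue_work(system_bh_wq, &chan->work);	/* was: tasklet_schedule() */
  	return IRQ_HANDLED;
  }

  static int foo_probe(struct foo_chan *chan)
  {
  	INIT_WORK(&chan->work, foo_work_fn);	/* was: tasklet_setup() */
  	return 0;
  }

  static void foo_remove(struct foo_chan *chan)
  {
  	cancel_work_sync(&chan->work);		/* was: tasklet_kill() */
  }

Tasklets scheduled with tasklet_hi_schedule() map to
queue_work(system_bh_highpri_wq, ...) in the same way.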

Based on the work done by Tejun Heo <[email protected]>
Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10

Signed-off-by: Allen Pais <[email protected]>
---
drivers/media/pci/bt8xx/bt878.c | 8 ++--
drivers/media/pci/bt8xx/bt878.h | 3 +-
drivers/media/pci/bt8xx/dvb-bt8xx.c | 9 ++--
drivers/media/pci/ddbridge/ddbridge.h | 3 +-
drivers/media/pci/mantis/hopper_cards.c | 2 +-
drivers/media/pci/mantis/mantis_cards.c | 2 +-
drivers/media/pci/mantis/mantis_common.h | 3 +-
drivers/media/pci/mantis/mantis_dma.c | 5 ++-
drivers/media/pci/mantis/mantis_dma.h | 2 +-
drivers/media/pci/mantis/mantis_dvb.c | 12 +++---
drivers/media/pci/ngene/ngene-core.c | 23 ++++++-----
drivers/media/pci/ngene/ngene.h | 5 ++-
drivers/media/pci/smipcie/smipcie-main.c | 18 ++++----
drivers/media/pci/smipcie/smipcie.h | 3 +-
drivers/media/pci/ttpci/budget-av.c | 3 +-
drivers/media/pci/ttpci/budget-ci.c | 27 ++++++------
drivers/media/pci/ttpci/budget-core.c | 10 ++---
drivers/media/pci/ttpci/budget.h | 5 ++-
drivers/media/pci/tw5864/tw5864-core.c | 2 +-
drivers/media/pci/tw5864/tw5864-video.c | 13 +++---
drivers/media/pci/tw5864/tw5864.h | 7 ++--
drivers/media/platform/intel/pxa_camera.c | 15 +++----
drivers/media/platform/marvell/mcam-core.c | 11 ++---
drivers/media/platform/marvell/mcam-core.h | 3 +-
.../st/sti/c8sectpfe/c8sectpfe-core.c | 15 +++----
.../st/sti/c8sectpfe/c8sectpfe-core.h | 2 +-
drivers/media/radio/wl128x/fmdrv.h | 7 ++--
drivers/media/radio/wl128x/fmdrv_common.c | 41 ++++++++++---------
drivers/media/rc/mceusb.c | 2 +-
drivers/media/usb/ttusb-dec/ttusb_dec.c | 21 +++++-----
30 files changed, 151 insertions(+), 131 deletions(-)

diff --git a/drivers/media/pci/bt8xx/bt878.c b/drivers/media/pci/bt8xx/bt878.c
index 90972d6952f1..983ec29108f0 100644
--- a/drivers/media/pci/bt8xx/bt878.c
+++ b/drivers/media/pci/bt8xx/bt878.c
@@ -300,8 +300,8 @@ static irqreturn_t bt878_irq(int irq, void *dev_id)
}
if (astat & BT878_ARISCI) {
bt->finished_block = (stat & BT878_ARISCS) >> 28;
- if (bt->tasklet.callback)
- tasklet_schedule(&bt->tasklet);
+ if (bt->work.func)
+ queue_work(system_bh_wq, &bt->work);
break;
}
count++;
@@ -478,8 +478,8 @@ static int bt878_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
btwrite(0, BT878_AINT_MASK);
bt878_num++;

- if (!bt->tasklet.func)
- tasklet_disable(&bt->tasklet);
+ if (!bt->work.func)
+ disable_work_sync(&bt->work);

return 0;

diff --git a/drivers/media/pci/bt8xx/bt878.h b/drivers/media/pci/bt8xx/bt878.h
index fde8db293c54..b9ce78e5116b 100644
--- a/drivers/media/pci/bt8xx/bt878.h
+++ b/drivers/media/pci/bt8xx/bt878.h
@@ -14,6 +14,7 @@
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/mutex.h>
+#include <linux/workqueue.h>

#include "bt848.h"
#include "bttv.h"
@@ -120,7 +121,7 @@ struct bt878 {
dma_addr_t risc_dma;
u32 risc_pos;

- struct tasklet_struct tasklet;
+ struct work_struct work;
int shutdown;
};

diff --git a/drivers/media/pci/bt8xx/dvb-bt8xx.c b/drivers/media/pci/bt8xx/dvb-bt8xx.c
index 390cbba6c065..8c0e1fa764a4 100644
--- a/drivers/media/pci/bt8xx/dvb-bt8xx.c
+++ b/drivers/media/pci/bt8xx/dvb-bt8xx.c
@@ -15,6 +15,7 @@
#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/i2c.h>
+#include <linux/workqueue.h>

#include <media/dmxdev.h>
#include <media/dvbdev.h>
@@ -39,9 +40,9 @@ DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);

#define IF_FREQUENCYx6 217 /* 6 * 36.16666666667MHz */

-static void dvb_bt8xx_task(struct tasklet_struct *t)
+static void dvb_bt8xx_task(struct work_struct *t)
{
- struct bt878 *bt = from_tasklet(bt, t, tasklet);
+ struct bt878 *bt = from_work(bt, t, work);
struct dvb_bt8xx_card *card = dev_get_drvdata(&bt->adapter->dev);

dprintk("%d\n", card->bt->finished_block);
@@ -782,7 +783,7 @@ static int dvb_bt8xx_load_card(struct dvb_bt8xx_card *card, u32 type)
goto err_disconnect_frontend;
}

- tasklet_setup(&card->bt->tasklet, dvb_bt8xx_task);
+ INIT_WORK(&card->bt->work, dvb_bt8xx_task);

frontend_init(card, type);

@@ -922,7 +923,7 @@ static void dvb_bt8xx_remove(struct bttv_sub_device *sub)
dprintk("dvb_bt8xx: unloading card%d\n", card->bttv_nr);

bt878_stop(card->bt);
- tasklet_kill(&card->bt->tasklet);
+ cancel_work_sync(&card->bt->work);
dvb_net_release(&card->dvbnet);
card->demux.dmx.remove_frontend(&card->demux.dmx, &card->fe_mem);
card->demux.dmx.remove_frontend(&card->demux.dmx, &card->fe_hw);
diff --git a/drivers/media/pci/ddbridge/ddbridge.h b/drivers/media/pci/ddbridge/ddbridge.h
index f3699dbd193f..037d1d13ef0f 100644
--- a/drivers/media/pci/ddbridge/ddbridge.h
+++ b/drivers/media/pci/ddbridge/ddbridge.h
@@ -35,6 +35,7 @@
#include <linux/uaccess.h>
#include <linux/vmalloc.h>
#include <linux/workqueue.h>
+#include <linux/workqueue.h>

#include <asm/dma.h>
#include <asm/irq.h>
@@ -298,7 +299,7 @@ struct ddb_link {
spinlock_t lock; /* lock link access */
struct mutex flash_mutex; /* lock flash access */
struct ddb_lnb lnb;
- struct tasklet_struct tasklet;
+ struct work_struct work;
struct ddb_ids ids;

spinlock_t temp_lock; /* lock temp chip access */
diff --git a/drivers/media/pci/mantis/hopper_cards.c b/drivers/media/pci/mantis/hopper_cards.c
index c0bd5d7e148b..869ea88c4893 100644
--- a/drivers/media/pci/mantis/hopper_cards.c
+++ b/drivers/media/pci/mantis/hopper_cards.c
@@ -116,7 +116,7 @@ static irqreturn_t hopper_irq_handler(int irq, void *dev_id)
if (stat & MANTIS_INT_RISCI) {
dprintk(MANTIS_DEBUG, 0, "<%s>", label[8]);
mantis->busy_block = (stat & MANTIS_INT_RISCSTAT) >> 28;
- tasklet_schedule(&mantis->tasklet);
+ queue_work(system_bh_wq, &mantis->work);
}
if (stat & MANTIS_INT_I2CDONE) {
dprintk(MANTIS_DEBUG, 0, "<%s>", label[9]);
diff --git a/drivers/media/pci/mantis/mantis_cards.c b/drivers/media/pci/mantis/mantis_cards.c
index 906e4500d87d..cb124b19e36e 100644
--- a/drivers/media/pci/mantis/mantis_cards.c
+++ b/drivers/media/pci/mantis/mantis_cards.c
@@ -125,7 +125,7 @@ static irqreturn_t mantis_irq_handler(int irq, void *dev_id)
if (stat & MANTIS_INT_RISCI) {
dprintk(MANTIS_DEBUG, 0, "<%s>", label[8]);
mantis->busy_block = (stat & MANTIS_INT_RISCSTAT) >> 28;
- tasklet_schedule(&mantis->tasklet);
+ queue_work(system_bh_wq, &mantis->work);
}
if (stat & MANTIS_INT_I2CDONE) {
dprintk(MANTIS_DEBUG, 0, "<%s>", label[9]);
diff --git a/drivers/media/pci/mantis/mantis_common.h b/drivers/media/pci/mantis/mantis_common.h
index d88ac280226c..f2247148f268 100644
--- a/drivers/media/pci/mantis/mantis_common.h
+++ b/drivers/media/pci/mantis/mantis_common.h
@@ -12,6 +12,7 @@
#include <linux/interrupt.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>
+#include <linux/workqueue.h>

#include "mantis_reg.h"
#include "mantis_uart.h"
@@ -125,7 +126,7 @@ struct mantis_pci {
__le32 *risc_cpu;
dma_addr_t risc_dma;

- struct tasklet_struct tasklet;
+ struct work_struct work;
spinlock_t intmask_lock;

struct i2c_adapter adapter;
diff --git a/drivers/media/pci/mantis/mantis_dma.c b/drivers/media/pci/mantis/mantis_dma.c
index 80c843936493..c85f9b84a2c6 100644
--- a/drivers/media/pci/mantis/mantis_dma.c
+++ b/drivers/media/pci/mantis/mantis_dma.c
@@ -15,6 +15,7 @@
#include <linux/signal.h>
#include <linux/sched.h>
#include <linux/interrupt.h>
+#include <linux/workqueue.h>

#include <media/dmxdev.h>
#include <media/dvbdev.h>
@@ -200,9 +201,9 @@ void mantis_dma_stop(struct mantis_pci *mantis)
}


-void mantis_dma_xfer(struct tasklet_struct *t)
+void mantis_dma_xfer(struct work_struct *t)
{
- struct mantis_pci *mantis = from_tasklet(mantis, t, tasklet);
+ struct mantis_pci *mantis = from_work(mantis, t, work);
struct mantis_hwconfig *config = mantis->hwconfig;

while (mantis->last_block != mantis->busy_block) {
diff --git a/drivers/media/pci/mantis/mantis_dma.h b/drivers/media/pci/mantis/mantis_dma.h
index 37da982c9c29..5db0d3728f15 100644
--- a/drivers/media/pci/mantis/mantis_dma.h
+++ b/drivers/media/pci/mantis/mantis_dma.h
@@ -13,6 +13,6 @@ extern int mantis_dma_init(struct mantis_pci *mantis);
extern int mantis_dma_exit(struct mantis_pci *mantis);
extern void mantis_dma_start(struct mantis_pci *mantis);
extern void mantis_dma_stop(struct mantis_pci *mantis);
-extern void mantis_dma_xfer(struct tasklet_struct *t);
+extern void mantis_dma_xfer(struct work_struct *t);

#endif /* __MANTIS_DMA_H */
diff --git a/drivers/media/pci/mantis/mantis_dvb.c b/drivers/media/pci/mantis/mantis_dvb.c
index c7ba4a76e608..f640635de170 100644
--- a/drivers/media/pci/mantis/mantis_dvb.c
+++ b/drivers/media/pci/mantis/mantis_dvb.c
@@ -105,7 +105,7 @@ static int mantis_dvb_start_feed(struct dvb_demux_feed *dvbdmxfeed)
if (mantis->feeds == 1) {
dprintk(MANTIS_DEBUG, 1, "mantis start feed & dma");
mantis_dma_start(mantis);
- tasklet_enable(&mantis->tasklet);
+ enable_and_queue_work(system_bh_wq, &mantis->work);
}

return mantis->feeds;
@@ -125,7 +125,7 @@ static int mantis_dvb_stop_feed(struct dvb_demux_feed *dvbdmxfeed)
mantis->feeds--;
if (mantis->feeds == 0) {
dprintk(MANTIS_DEBUG, 1, "mantis stop feed and dma");
- tasklet_disable(&mantis->tasklet);
+ disable_work_sync(&mantis->work);
mantis_dma_stop(mantis);
}

@@ -205,8 +205,8 @@ int mantis_dvb_init(struct mantis_pci *mantis)
}

dvb_net_init(&mantis->dvb_adapter, &mantis->dvbnet, &mantis->demux.dmx);
- tasklet_setup(&mantis->tasklet, mantis_dma_xfer);
- tasklet_disable(&mantis->tasklet);
+ INIT_WORK(&mantis->work, mantis_dma_xfer);
+ disable_work_sync(&mantis->work);
if (mantis->hwconfig) {
result = config->frontend_init(mantis, mantis->fe);
if (result < 0) {
@@ -235,7 +235,7 @@ int mantis_dvb_init(struct mantis_pci *mantis)

/* Error conditions .. */
err5:
- tasklet_kill(&mantis->tasklet);
+ cancel_work_sync(&mantis->work);
dvb_net_release(&mantis->dvbnet);
if (mantis->fe) {
dvb_unregister_frontend(mantis->fe);
@@ -273,7 +273,7 @@ int mantis_dvb_exit(struct mantis_pci *mantis)
dvb_frontend_detach(mantis->fe);
}

- tasklet_kill(&mantis->tasklet);
+ cancel_work_sync(&mantis->work);
dvb_net_release(&mantis->dvbnet);

mantis->demux.dmx.remove_frontend(&mantis->demux.dmx, &mantis->fe_mem);
diff --git a/drivers/media/pci/ngene/ngene-core.c b/drivers/media/pci/ngene/ngene-core.c
index 7481f553f959..5211d6796748 100644
--- a/drivers/media/pci/ngene/ngene-core.c
+++ b/drivers/media/pci/ngene/ngene-core.c
@@ -21,6 +21,7 @@
#include <linux/byteorder/generic.h>
#include <linux/firmware.h>
#include <linux/vmalloc.h>
+#include <linux/workqueue.h>

#include "ngene.h"

@@ -50,9 +51,9 @@ DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);
/* nGene interrupt handler **************************************************/
/****************************************************************************/

-static void event_tasklet(struct tasklet_struct *t)
+static void event_work(struct work_struct *t)
{
- struct ngene *dev = from_tasklet(dev, t, event_tasklet);
+ struct ngene *dev = from_work(dev, t, event_work);

while (dev->EventQueueReadIndex != dev->EventQueueWriteIndex) {
struct EVENT_BUFFER Event =
@@ -68,9 +69,9 @@ static void event_tasklet(struct tasklet_struct *t)
}
}

-static void demux_tasklet(struct tasklet_struct *t)
+static void demux_work(struct work_struct *t)
{
- struct ngene_channel *chan = from_tasklet(chan, t, demux_tasklet);
+ struct ngene_channel *chan = from_work(chan, t, demux_work);
struct device *pdev = &chan->dev->pci_dev->dev;
struct SBufferHeader *Cur = chan->nextBuffer;

@@ -204,7 +205,7 @@ static irqreturn_t irq_handler(int irq, void *dev_id)
dev->EventQueueOverflowFlag = 1;
}
dev->EventBuffer->EventStatus &= ~0x80;
- tasklet_schedule(&dev->event_tasklet);
+ queue_work(system_bh_wq, &dev->event_work);
rc = IRQ_HANDLED;
}

@@ -217,8 +218,8 @@ static irqreturn_t irq_handler(int irq, void *dev_id)
ngeneBuffer.SR.Flags & 0xC0) == 0x80) {
dev->channel[i].nextBuffer->
ngeneBuffer.SR.Flags |= 0x40;
- tasklet_schedule(
- &dev->channel[i].demux_tasklet);
+ queue_work(system_bh_wq,
+ &dev->channel[i].demux_work);
rc = IRQ_HANDLED;
}
}
@@ -1181,7 +1182,7 @@ static void ngene_init(struct ngene *dev)
struct device *pdev = &dev->pci_dev->dev;
int i;

- tasklet_setup(&dev->event_tasklet, event_tasklet);
+ INIT_WORK(&dev->event_work, event_work);

memset_io(dev->iomem + 0xc000, 0x00, 0x220);
memset_io(dev->iomem + 0xc400, 0x00, 0x100);
@@ -1395,7 +1396,7 @@ static void release_channel(struct ngene_channel *chan)
if (chan->running)
set_transfer(chan, 0);

- tasklet_kill(&chan->demux_tasklet);
+ cancel_work_sync(&chan->demux_work);

if (chan->ci_dev) {
dvb_unregister_device(chan->ci_dev);
@@ -1445,7 +1446,7 @@ static int init_channel(struct ngene_channel *chan)
struct ngene_info *ni = dev->card_info;
int io = ni->io_type[nr];

- tasklet_setup(&chan->demux_tasklet, demux_tasklet);
+ INIT_WORK(&chan->demux_work, demux_work);
chan->users = 0;
chan->type = io;
chan->mode = chan->type; /* for now only one mode */
@@ -1647,7 +1648,7 @@ void ngene_remove(struct pci_dev *pdev)
struct ngene *dev = pci_get_drvdata(pdev);
int i;

- tasklet_kill(&dev->event_tasklet);
+ cancel_work_sync(&dev->event_work);
for (i = MAX_STREAM - 1; i >= 0; i--)
release_channel(&dev->channel[i]);
if (dev->ci.en)
diff --git a/drivers/media/pci/ngene/ngene.h b/drivers/media/pci/ngene/ngene.h
index d1d7da84cd9d..c2a23f6dbe09 100644
--- a/drivers/media/pci/ngene/ngene.h
+++ b/drivers/media/pci/ngene/ngene.h
@@ -16,6 +16,7 @@
#include <linux/scatterlist.h>

#include <linux/dvb/frontend.h>
+#include <linux/workqueue.h>

#include <media/dmxdev.h>
#include <media/dvbdev.h>
@@ -621,7 +622,7 @@ struct ngene_channel {
int users;
struct video_device *v4l_dev;
struct dvb_device *ci_dev;
- struct tasklet_struct demux_tasklet;
+ struct work_struct demux_work;

struct SBufferHeader *nextBuffer;
enum KSSTATE State;
@@ -717,7 +718,7 @@ struct ngene {
struct EVENT_BUFFER EventQueue[EVENT_QUEUE_SIZE];
int EventQueueOverflowCount;
int EventQueueOverflowFlag;
- struct tasklet_struct event_tasklet;
+ struct work_struct event_work;
struct EVENT_BUFFER *EventBuffer;
int EventQueueWriteIndex;
int EventQueueReadIndex;
diff --git a/drivers/media/pci/smipcie/smipcie-main.c b/drivers/media/pci/smipcie/smipcie-main.c
index 0c300d019d9c..7da6bb55660b 100644
--- a/drivers/media/pci/smipcie/smipcie-main.c
+++ b/drivers/media/pci/smipcie/smipcie-main.c
@@ -279,10 +279,10 @@ static void smi_port_clearInterrupt(struct smi_port *port)
(port->_dmaInterruptCH0 | port->_dmaInterruptCH1));
}

-/* tasklet handler: DMA data to dmx.*/
-static void smi_dma_xfer(struct tasklet_struct *t)
+/* work handler: DMA data to dmx.*/
+static void smi_dma_xfer(struct work_struct *t)
{
- struct smi_port *port = from_tasklet(port, t, tasklet);
+ struct smi_port *port = from_work(port, t, work);
struct smi_dev *dev = port->dev;
u32 intr_status, finishedData, dmaManagement;
u8 dmaChan0State, dmaChan1State;
@@ -426,8 +426,8 @@ static int smi_port_init(struct smi_port *port, int dmaChanUsed)
}

smi_port_disableInterrupt(port);
- tasklet_setup(&port->tasklet, smi_dma_xfer);
- tasklet_disable(&port->tasklet);
+ INIT_WORK(&port->work, smi_dma_xfer);
+ disable_work_sync(&port->work);
port->enable = 1;
return 0;
err:
@@ -438,7 +438,7 @@ static int smi_port_init(struct smi_port *port, int dmaChanUsed)
static void smi_port_exit(struct smi_port *port)
{
smi_port_disableInterrupt(port);
- tasklet_kill(&port->tasklet);
+ cancel_work_sync(&port->work);
smi_port_dma_free(port);
port->enable = 0;
}
@@ -452,7 +452,7 @@ static int smi_port_irq(struct smi_port *port, u32 int_status)
smi_port_disableInterrupt(port);
port->_int_status = int_status;
smi_port_clearInterrupt(port);
- tasklet_schedule(&port->tasklet);
+ queue_work(system_bh_wq, &port->work);
handled = 1;
}
return handled;
@@ -823,7 +823,7 @@ static int smi_start_feed(struct dvb_demux_feed *dvbdmxfeed)
smi_port_clearInterrupt(port);
smi_port_enableInterrupt(port);
smi_write(port->DMA_MANAGEMENT, dmaManagement);
- tasklet_enable(&port->tasklet);
+ enable_and_queue_work(system_bh_wq, &port->work);
}
return port->users;
}
@@ -837,7 +837,7 @@ static int smi_stop_feed(struct dvb_demux_feed *dvbdmxfeed)
if (--port->users)
return port->users;

- tasklet_disable(&port->tasklet);
+ disable_work_sync(&port->work);
smi_port_disableInterrupt(port);
smi_clear(port->DMA_MANAGEMENT, 0x30003);
return 0;
diff --git a/drivers/media/pci/smipcie/smipcie.h b/drivers/media/pci/smipcie/smipcie.h
index 2b5e0154814c..f124d2cdead6 100644
--- a/drivers/media/pci/smipcie/smipcie.h
+++ b/drivers/media/pci/smipcie/smipcie.h
@@ -17,6 +17,7 @@
#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>
+#include <linux/workqueue.h>
#include <media/rc-core.h>

#include <media/demux.h>
@@ -257,7 +258,7 @@ struct smi_port {
u32 _dmaInterruptCH0;
u32 _dmaInterruptCH1;
u32 _int_status;
- struct tasklet_struct tasklet;
+ struct work_struct work;
/* dvb */
struct dmx_frontend hw_frontend;
struct dmx_frontend mem_frontend;
diff --git a/drivers/media/pci/ttpci/budget-av.c b/drivers/media/pci/ttpci/budget-av.c
index a47c5850ef87..6e43b1a01191 100644
--- a/drivers/media/pci/ttpci/budget-av.c
+++ b/drivers/media/pci/ttpci/budget-av.c
@@ -37,6 +37,7 @@
#include <linux/interrupt.h>
#include <linux/input.h>
#include <linux/spinlock.h>
+#include <linux/workqueue.h>

#include <media/dvb_ca_en50221.h>

@@ -55,7 +56,7 @@ struct budget_av {
struct video_device vd;
int cur_input;
int has_saa7113;
- struct tasklet_struct ciintf_irq_tasklet;
+ struct work_struct ciintf_irq_work;
int slot_status;
struct dvb_ca_en50221 ca;
u8 reinitialise_demod:1;
diff --git a/drivers/media/pci/ttpci/budget-ci.c b/drivers/media/pci/ttpci/budget-ci.c
index 66e1a004ee43..11e0ed62707e 100644
--- a/drivers/media/pci/ttpci/budget-ci.c
+++ b/drivers/media/pci/ttpci/budget-ci.c
@@ -17,6 +17,7 @@
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/spinlock.h>
+#include <linux/workqueue.h>
#include <media/rc-core.h>

#include "budget.h"
@@ -80,7 +81,7 @@ DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);

struct budget_ci_ir {
struct rc_dev *dev;
- struct tasklet_struct msp430_irq_tasklet;
+ struct work_struct msp430_irq_work;
char name[72]; /* 40 + 32 for (struct saa7146_dev).name */
char phys[32];
int rc5_device;
@@ -91,7 +92,7 @@ struct budget_ci_ir {

struct budget_ci {
struct budget budget;
- struct tasklet_struct ciintf_irq_tasklet;
+ struct work_struct ciintf_irq_work;
int slot_status;
int ci_irq;
struct dvb_ca_en50221 ca;
@@ -99,9 +100,9 @@ struct budget_ci {
u8 tuner_pll_address; /* used for philips_tdm1316l configs */
};

-static void msp430_ir_interrupt(struct tasklet_struct *t)
+static void msp430_ir_interrupt(struct work_struct *t)
{
- struct budget_ci_ir *ir = from_tasklet(ir, t, msp430_irq_tasklet);
+ struct budget_ci_ir *ir = from_work(ir, t, msp430_irq_work);
struct budget_ci *budget_ci = container_of(ir, typeof(*budget_ci), ir);
struct rc_dev *dev = budget_ci->ir.dev;
u32 command = ttpci_budget_debiread(&budget_ci->budget, DEBINOSWAP, DEBIADDR_IR, 2, 1, 0) >> 8;
@@ -230,7 +231,7 @@ static int msp430_ir_init(struct budget_ci *budget_ci)

budget_ci->ir.dev = dev;

- tasklet_setup(&budget_ci->ir.msp430_irq_tasklet, msp430_ir_interrupt);
+ INIT_WORK(&budget_ci->ir.msp430_irq_work, msp430_ir_interrupt);

SAA7146_IER_ENABLE(saa, MASK_06);
saa7146_setgpio(saa, 3, SAA7146_GPIO_IRQHI);
@@ -244,7 +245,7 @@ static void msp430_ir_deinit(struct budget_ci *budget_ci)

SAA7146_IER_DISABLE(saa, MASK_06);
saa7146_setgpio(saa, 3, SAA7146_GPIO_INPUT);
- tasklet_kill(&budget_ci->ir.msp430_irq_tasklet);
+ cancel_work_sync(&budget_ci->ir.msp430_irq_work);

rc_unregister_device(budget_ci->ir.dev);
}
@@ -348,10 +349,10 @@ static int ciintf_slot_ts_enable(struct dvb_ca_en50221 *ca, int slot)
return 0;
}

-static void ciintf_interrupt(struct tasklet_struct *t)
+static void ciintf_interrupt(struct work_struct *t)
{
- struct budget_ci *budget_ci = from_tasklet(budget_ci, t,
- ciintf_irq_tasklet);
+ struct budget_ci *budget_ci = from_work(budget_ci, t,
+ ciintf_irq_work);
struct saa7146_dev *saa = budget_ci->budget.dev;
unsigned int flags;

@@ -492,7 +493,7 @@ static int ciintf_init(struct budget_ci *budget_ci)

// Setup CI slot IRQ
if (budget_ci->ci_irq) {
- tasklet_setup(&budget_ci->ciintf_irq_tasklet, ciintf_interrupt);
+ INIT_WORK(&budget_ci->ciintf_irq_work, ciintf_interrupt);
if (budget_ci->slot_status != SLOTSTATUS_NONE) {
saa7146_setgpio(saa, 0, SAA7146_GPIO_IRQLO);
} else {
@@ -532,7 +533,7 @@ static void ciintf_deinit(struct budget_ci *budget_ci)
if (budget_ci->ci_irq) {
SAA7146_IER_DISABLE(saa, MASK_03);
saa7146_setgpio(saa, 0, SAA7146_GPIO_INPUT);
- tasklet_kill(&budget_ci->ciintf_irq_tasklet);
+ cancel_work_sync(&budget_ci->ciintf_irq_work);
}

// reset interface
@@ -558,13 +559,13 @@ static void budget_ci_irq(struct saa7146_dev *dev, u32 * isr)
dprintk(8, "dev: %p, budget_ci: %p\n", dev, budget_ci);

if (*isr & MASK_06)
- tasklet_schedule(&budget_ci->ir.msp430_irq_tasklet);
+ queue_work(system_bh_wq, &budget_ci->ir.msp430_irq_work);

if (*isr & MASK_10)
ttpci_budget_irq10_handler(dev, isr);

if ((*isr & MASK_03) && (budget_ci->budget.ci_present) && (budget_ci->ci_irq))
- tasklet_schedule(&budget_ci->ciintf_irq_tasklet);
+ queue_work(system_bh_wq, &budget_ci->ciintf_irq_work);
}

static u8 philips_su1278_tt_inittab[] = {
diff --git a/drivers/media/pci/ttpci/budget-core.c b/drivers/media/pci/ttpci/budget-core.c
index 25f44c3eebf3..3443c12dc9f2 100644
--- a/drivers/media/pci/ttpci/budget-core.c
+++ b/drivers/media/pci/ttpci/budget-core.c
@@ -171,9 +171,9 @@ static int budget_read_fe_status(struct dvb_frontend *fe,
return ret;
}

-static void vpeirq(struct tasklet_struct *t)
+static void vpeirq(struct work_struct *t)
{
- struct budget *budget = from_tasklet(budget, t, vpe_tasklet);
+ struct budget *budget = from_work(budget, t, vpe_work);
u8 *mem = (u8 *) (budget->grabbing);
u32 olddma = budget->ttbp;
u32 newdma = saa7146_read(budget->dev, PCI_VDP3);
@@ -520,7 +520,7 @@ int ttpci_budget_init(struct budget *budget, struct saa7146_dev *dev,
/* upload all */
saa7146_write(dev, GPIO_CTRL, 0x000000);

- tasklet_setup(&budget->vpe_tasklet, vpeirq);
+ INIT_WORK(&budget->vpe_work, vpeirq);

/* frontend power on */
if (bi->type != BUDGET_FS_ACTIVY)
@@ -557,7 +557,7 @@ int ttpci_budget_deinit(struct budget *budget)

budget_unregister(budget);

- tasklet_kill(&budget->vpe_tasklet);
+ cancel_work_sync(&budget->vpe_work);

saa7146_vfree_destroy_pgtable(dev->pci, budget->grabbing, &budget->pt);

@@ -575,7 +575,7 @@ void ttpci_budget_irq10_handler(struct saa7146_dev *dev, u32 * isr)
dprintk(8, "dev: %p, budget: %p\n", dev, budget);

if (*isr & MASK_10)
- tasklet_schedule(&budget->vpe_tasklet);
+ queue_work(system_bh_wq, &budget->vpe_work);
}

void ttpci_budget_set_video_port(struct saa7146_dev *dev, int video_port)
diff --git a/drivers/media/pci/ttpci/budget.h b/drivers/media/pci/ttpci/budget.h
index bd87432e6cde..a3ee75e326b4 100644
--- a/drivers/media/pci/ttpci/budget.h
+++ b/drivers/media/pci/ttpci/budget.h
@@ -12,6 +12,7 @@

#include <linux/module.h>
#include <linux/mutex.h>
+#include <linux/workqueue.h>

#include <media/drv-intf/saa7146.h>

@@ -49,8 +50,8 @@ struct budget {
unsigned char *grabbing;
struct saa7146_pgtable pt;

- struct tasklet_struct fidb_tasklet;
- struct tasklet_struct vpe_tasklet;
+ struct work_struct fidb_work;
+ struct work_struct vpe_work;

struct dmxdev dmxdev;
struct dvb_demux demux;
diff --git a/drivers/media/pci/tw5864/tw5864-core.c b/drivers/media/pci/tw5864/tw5864-core.c
index 560ff1ddcc83..a58c268e94a8 100644
--- a/drivers/media/pci/tw5864/tw5864-core.c
+++ b/drivers/media/pci/tw5864/tw5864-core.c
@@ -144,7 +144,7 @@ static void tw5864_h264_isr(struct tw5864_dev *dev)
cur_frame->gop_seqno = input->frame_gop_seqno;

dev->h264_buf_w_index = next_frame_index;
- tasklet_schedule(&dev->tasklet);
+ queue_work(system_bh_wq, &dev->work);

cur_frame = next_frame;

diff --git a/drivers/media/pci/tw5864/tw5864-video.c b/drivers/media/pci/tw5864/tw5864-video.c
index 8b1aae4b6319..ac2249626506 100644
--- a/drivers/media/pci/tw5864/tw5864-video.c
+++ b/drivers/media/pci/tw5864/tw5864-video.c
@@ -6,6 +6,7 @@
*/

#include <linux/module.h>
+#include <linux/workqueue.h>
#include <media/v4l2-common.h>
#include <media/v4l2-event.h>
#include <media/videobuf2-dma-contig.h>
@@ -175,7 +176,7 @@ static const unsigned int intra4x4_lambda3[] = {
static v4l2_std_id tw5864_get_v4l2_std(enum tw5864_vid_std std);
static enum tw5864_vid_std tw5864_from_v4l2_std(v4l2_std_id v4l2_std);

-static void tw5864_handle_frame_task(struct tasklet_struct *t);
+static void tw5864_handle_frame_task(struct work_struct *t);
static void tw5864_handle_frame(struct tw5864_h264_frame *frame);
static void tw5864_frame_interval_set(struct tw5864_input *input);

@@ -1062,7 +1063,7 @@ int tw5864_video_init(struct tw5864_dev *dev, int *video_nr)
dev->irqmask |= TW5864_INTR_VLC_DONE | TW5864_INTR_TIMER;
tw5864_irqmask_apply(dev);

- tasklet_setup(&dev->tasklet, tw5864_handle_frame_task);
+ INIT_WORK(&dev->work, tw5864_handle_frame_task);

for (i = 0; i < TW5864_INPUTS; i++) {
dev->inputs[i].root = dev;
@@ -1079,7 +1080,7 @@ int tw5864_video_init(struct tw5864_dev *dev, int *video_nr)
for (i = last_input_nr_registered; i >= 0; i--)
tw5864_video_input_fini(&dev->inputs[i]);

- tasklet_kill(&dev->tasklet);
+ cancel_work_sync(&dev->work);

free_dma:
for (i = last_dma_allocated; i >= 0; i--) {
@@ -1198,7 +1199,7 @@ void tw5864_video_fini(struct tw5864_dev *dev)
{
int i;

- tasklet_kill(&dev->tasklet);
+ cancel_work_sync(&dev->work);

for (i = 0; i < TW5864_INPUTS; i++)
tw5864_video_input_fini(&dev->inputs[i]);
@@ -1315,9 +1316,9 @@ static int tw5864_is_motion_triggered(struct tw5864_h264_frame *frame)
return detected;
}

-static void tw5864_handle_frame_task(struct tasklet_struct *t)
+static void tw5864_handle_frame_task(struct work_struct *t)
{
- struct tw5864_dev *dev = from_tasklet(dev, t, tasklet);
+ struct tw5864_dev *dev = from_work(dev, t, work);
unsigned long flags;
int batch_size = H264_BUF_CNT;

diff --git a/drivers/media/pci/tw5864/tw5864.h b/drivers/media/pci/tw5864/tw5864.h
index a8b6fbd5b710..278373859098 100644
--- a/drivers/media/pci/tw5864/tw5864.h
+++ b/drivers/media/pci/tw5864/tw5864.h
@@ -12,6 +12,7 @@
#include <linux/mutex.h>
#include <linux/io.h>
#include <linux/interrupt.h>
+#include <linux/workqueue.h>

#include <media/v4l2-common.h>
#include <media/v4l2-ioctl.h>
@@ -85,7 +86,7 @@ struct tw5864_input {
int nr; /* input number */
struct tw5864_dev *root;
struct mutex lock; /* used for vidq and vdev */
- spinlock_t slock; /* used for sync between ISR, tasklet & V4L2 API */
+ spinlock_t slock; /* used for sync between ISR, work & V4L2 API */
struct video_device vdev;
struct v4l2_ctrl_handler hdl;
struct vb2_queue vidq;
@@ -142,7 +143,7 @@ struct tw5864_h264_frame {

/* global device status */
struct tw5864_dev {
- spinlock_t slock; /* used for sync between ISR, tasklet & V4L2 API */
+ spinlock_t slock; /* used for sync between ISR, work & V4L2 API */
struct v4l2_device v4l2_dev;
struct tw5864_input inputs[TW5864_INPUTS];
#define H264_BUF_CNT 4
@@ -150,7 +151,7 @@ struct tw5864_dev {
int h264_buf_r_index;
int h264_buf_w_index;

- struct tasklet_struct tasklet;
+ struct work_struct work;

int encoder_busy;
/* Input number to check next for ready raw picture (in RR fashion) */
diff --git a/drivers/media/platform/intel/pxa_camera.c b/drivers/media/platform/intel/pxa_camera.c
index d904952bf00e..df0a3c559287 100644
--- a/drivers/media/platform/intel/pxa_camera.c
+++ b/drivers/media/platform/intel/pxa_camera.c
@@ -43,6 +43,7 @@
#include <linux/videodev2.h>

#include <linux/platform_data/media/camera-pxa.h>
+#include <linux/workqueue.h>

#define PXA_CAM_VERSION "0.0.6"
#define PXA_CAM_DRV_NAME "pxa27x-camera"
@@ -683,7 +684,7 @@ struct pxa_camera_dev {
unsigned int buf_sequence;

struct pxa_buffer *active;
- struct tasklet_struct task_eof;
+ struct work_struct task_eof;

u32 save_cicr[5];
};
@@ -1146,9 +1147,9 @@ static void pxa_camera_deactivate(struct pxa_camera_dev *pcdev)
clk_disable_unprepare(pcdev->clk);
}

-static void pxa_camera_eof(struct tasklet_struct *t)
+static void pxa_camera_eof(struct work_struct *t)
{
- struct pxa_camera_dev *pcdev = from_tasklet(pcdev, t, task_eof);
+ struct pxa_camera_dev *pcdev = from_work(pcdev, t, task_eof);
unsigned long cifr;
struct pxa_buffer *buf;

@@ -1185,7 +1186,7 @@ static irqreturn_t pxa_camera_irq(int irq, void *data)
if (status & CISR_EOF) {
cicr0 = __raw_readl(pcdev->base + CICR0) | CICR0_EOFM;
__raw_writel(cicr0, pcdev->base + CICR0);
- tasklet_schedule(&pcdev->task_eof);
+ queue_work(system_bh_wq, &pcdev->task_eof);
}

return IRQ_HANDLED;
@@ -2383,7 +2384,7 @@ static int pxa_camera_probe(struct platform_device *pdev)
}
}

- tasklet_setup(&pcdev->task_eof, pxa_camera_eof);
+ INIT_WORK(&pcdev->task_eof, pxa_camera_eof);

pxa_camera_activate(pcdev);

@@ -2409,7 +2410,7 @@ static int pxa_camera_probe(struct platform_device *pdev)
return 0;
exit_deactivate:
pxa_camera_deactivate(pcdev);
- tasklet_kill(&pcdev->task_eof);
+ cancel_work_sync(&pcdev->task_eof);
exit_free_dma:
dma_release_channel(pcdev->dma_chans[2]);
exit_free_dma_u:
@@ -2428,7 +2429,7 @@ static void pxa_camera_remove(struct platform_device *pdev)
struct pxa_camera_dev *pcdev = platform_get_drvdata(pdev);

pxa_camera_deactivate(pcdev);
- tasklet_kill(&pcdev->task_eof);
+ cancel_work_sync(&pcdev->task_eof);
dma_release_channel(pcdev->dma_chans[0]);
dma_release_channel(pcdev->dma_chans[1]);
dma_release_channel(pcdev->dma_chans[2]);
diff --git a/drivers/media/platform/marvell/mcam-core.c b/drivers/media/platform/marvell/mcam-core.c
index 66688b4aece5..d6b96a7039be 100644
--- a/drivers/media/platform/marvell/mcam-core.c
+++ b/drivers/media/platform/marvell/mcam-core.c
@@ -25,6 +25,7 @@
#include <linux/clk-provider.h>
#include <linux/videodev2.h>
#include <linux/pm_runtime.h>
+#include <linux/workqueue.h>
#include <media/v4l2-device.h>
#include <media/v4l2-ioctl.h>
#include <media/v4l2-ctrls.h>
@@ -439,9 +440,9 @@ static void mcam_ctlr_dma_vmalloc(struct mcam_camera *cam)
/*
* Copy data out to user space in the vmalloc case
*/
-static void mcam_frame_tasklet(struct tasklet_struct *t)
+static void mcam_frame_work(struct work_struct *t)
{
- struct mcam_camera *cam = from_tasklet(cam, t, s_tasklet);
+ struct mcam_camera *cam = from_work(cam, t, s_work);
int i;
unsigned long flags;
struct mcam_vb_buffer *buf;
@@ -480,7 +481,7 @@ static void mcam_frame_tasklet(struct tasklet_struct *t)


/*
- * Make sure our allocated buffers are up to the task.
+ * Make sure our allocated buffers are up to the work.
*/
static int mcam_check_dma_buffers(struct mcam_camera *cam)
{
@@ -493,7 +494,7 @@ static int mcam_check_dma_buffers(struct mcam_camera *cam)

static void mcam_vmalloc_done(struct mcam_camera *cam, int frame)
{
- tasklet_schedule(&cam->s_tasklet);
+ queue_work(system_bh_wq, &cam->s_work);
}

#else /* MCAM_MODE_VMALLOC */
@@ -1305,7 +1306,7 @@ static int mcam_setup_vb2(struct mcam_camera *cam)
break;
case B_vmalloc:
#ifdef MCAM_MODE_VMALLOC
- tasklet_setup(&cam->s_tasklet, mcam_frame_tasklet);
+ INIT_WORK(&cam->s_work, mcam_frame_work);
vq->ops = &mcam_vb2_ops;
vq->mem_ops = &vb2_vmalloc_memops;
cam->dma_setup = mcam_ctlr_dma_vmalloc;
diff --git a/drivers/media/platform/marvell/mcam-core.h b/drivers/media/platform/marvell/mcam-core.h
index 51e66db45af6..0d4b953dbb23 100644
--- a/drivers/media/platform/marvell/mcam-core.h
+++ b/drivers/media/platform/marvell/mcam-core.h
@@ -9,6 +9,7 @@

#include <linux/list.h>
#include <linux/clk-provider.h>
+#include <linux/workqueue.h>
#include <media/v4l2-common.h>
#include <media/v4l2-ctrls.h>
#include <media/v4l2-dev.h>
@@ -167,7 +168,7 @@ struct mcam_camera {
unsigned int dma_buf_size; /* allocated size */
void *dma_bufs[MAX_DMA_BUFS]; /* Internal buffer addresses */
dma_addr_t dma_handles[MAX_DMA_BUFS]; /* Buffer bus addresses */
- struct tasklet_struct s_tasklet;
+ struct work_struct s_work;
#endif
unsigned int sequence; /* Frame sequence number */
unsigned int buf_seq[MAX_DMA_BUFS]; /* Sequence for individual bufs */
diff --git a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
index e4cf27b5a072..22b359569a10 100644
--- a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
+++ b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
@@ -33,6 +33,7 @@
#include <linux/time.h>
#include <linux/usb.h>
#include <linux/wait.h>
+#include <linux/workqueue.h>

#include "c8sectpfe-common.h"
#include "c8sectpfe-core.h"
@@ -73,16 +74,16 @@ static void c8sectpfe_timer_interrupt(struct timer_list *t)

/* is this descriptor initialised and TP enabled */
if (channel->irec && readl(channel->irec + DMA_PRDS_TPENABLE))
- tasklet_schedule(&channel->tsklet);
+ queue_work(system_bh_wq, &channel->tsklet);
}

fei->timer.expires = jiffies + msecs_to_jiffies(POLL_MSECS);
add_timer(&fei->timer);
}

-static void channel_swdemux_tsklet(struct tasklet_struct *t)
+static void channel_swdemux_tsklet(struct work_struct *t)
{
- struct channel_info *channel = from_tasklet(channel, t, tsklet);
+ struct channel_info *channel = from_work(channel, t, tsklet);
struct c8sectpfei *fei;
unsigned long wp, rp;
int pos, num_packets, n, size;
@@ -211,7 +212,7 @@ static int c8sectpfe_start_feed(struct dvb_demux_feed *dvbdmxfeed)

dev_dbg(fei->dev, "Starting channel=%p\n", channel);

- tasklet_setup(&channel->tsklet, channel_swdemux_tsklet);
+ INIT_WORK(&channel->tsklet, channel_swdemux_tsklet);

/* Reset the internal inputblock sram pointers */
writel(channel->fifo,
@@ -304,7 +305,7 @@ static int c8sectpfe_stop_feed(struct dvb_demux_feed *dvbdmxfeed)
/* disable this channels descriptor */
writel(0, channel->irec + DMA_PRDS_TPENABLE);

- tasklet_disable(&channel->tsklet);
+ disable_work_sync(&channel->tsklet);

/* now request memdma channel goes idle */
idlereq = (1 << channel->tsin_id) | IDLEREQ;
@@ -631,8 +632,8 @@ static int configure_memdma_and_inputblock(struct c8sectpfei *fei,
writel(tsin->back_buffer_busaddr, tsin->irec + DMA_PRDS_BUSWP_TP(0));
writel(tsin->back_buffer_busaddr, tsin->irec + DMA_PRDS_BUSRP_TP(0));

- /* initialize tasklet */
- tasklet_setup(&tsin->tsklet, channel_swdemux_tsklet);
+ /* initialize work */
+ INIT_WORK(&tsin->tsklet, channel_swdemux_tsklet);

return 0;

diff --git a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.h b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.h
index bf377cc82225..d63f0ee83615 100644
--- a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.h
+++ b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.h
@@ -51,7 +51,7 @@ struct channel_info {
unsigned long fifo;

struct completion idle_completion;
- struct tasklet_struct tsklet;
+ struct work_struct tsklet;

struct c8sectpfei *fei;
void __iomem *irec;
diff --git a/drivers/media/radio/wl128x/fmdrv.h b/drivers/media/radio/wl128x/fmdrv.h
index da8920169df8..85282f638c4a 100644
--- a/drivers/media/radio/wl128x/fmdrv.h
+++ b/drivers/media/radio/wl128x/fmdrv.h
@@ -15,6 +15,7 @@
#include <sound/core.h>
#include <sound/initval.h>
#include <linux/timer.h>
+#include <linux/workqueue.h>
#include <media/v4l2-ioctl.h>
#include <media/v4l2-common.h>
#include <media/v4l2-device.h>
@@ -200,15 +201,15 @@ struct fmdev {
int streg_cbdata; /* status of ST registration */

struct sk_buff_head rx_q; /* RX queue */
- struct tasklet_struct rx_task; /* RX Tasklet */
+ struct work_struct rx_task; /* RX Work */

struct sk_buff_head tx_q; /* TX queue */
- struct tasklet_struct tx_task; /* TX Tasklet */
+ struct work_struct tx_task; /* TX Work */
unsigned long last_tx_jiffies; /* Timestamp of last pkt sent */
atomic_t tx_cnt; /* Number of packets can send at a time */

struct sk_buff *resp_skb; /* Response from the chip */
- /* Main task completion handler */
+ /* Main work completion handler */
struct completion maintask_comp;
/* Opcode of last command sent to the chip */
u8 pre_op;
diff --git a/drivers/media/radio/wl128x/fmdrv_common.c b/drivers/media/radio/wl128x/fmdrv_common.c
index 3da8e5102bec..52290bb4a4ad 100644
--- a/drivers/media/radio/wl128x/fmdrv_common.c
+++ b/drivers/media/radio/wl128x/fmdrv_common.c
@@ -9,7 +9,7 @@
* one Channel-8 command to be sent to the chip).
* 2) Sending each Channel-8 command to the chip and reading
* response back over Shared Transport.
- * 3) Managing TX and RX Queues and Tasklets.
+ * 3) Managing TX and RX Queues and Works.
* 4) Handling FM Interrupt packet and taking appropriate action.
* 5) Loading FM firmware to the chip (common, FM TX, and FM RX
* firmware files based on mode selection)
@@ -29,6 +29,7 @@
#include "fmdrv_v4l2.h"
#include "fmdrv_common.h"
#include <linux/ti_wilink_st.h>
+#include <linux/workqueue.h>
#include "fmdrv_rx.h"
#include "fmdrv_tx.h"

@@ -244,10 +245,10 @@ void fmc_update_region_info(struct fmdev *fmdev, u8 region_to_set)
}

/*
- * FM common sub-module will schedule this tasklet whenever it receives
+ * FM common sub-module will schedule this work whenever it receives
* FM packet from ST driver.
*/
-static void recv_tasklet(struct tasklet_struct *t)
+static void recv_work(struct work_struct *t)
{
struct fmdev *fmdev;
struct fm_irq *irq_info;
@@ -256,7 +257,7 @@ static void recv_tasklet(struct tasklet_struct *t)
u8 num_fm_hci_cmds;
unsigned long flags;

- fmdev = from_tasklet(fmdev, t, tx_task);
+ fmdev = from_work(fmdev, t, tx_task);
irq_info = &fmdev->irq_info;
/* Process all packets in the RX queue */
while ((skb = skb_dequeue(&fmdev->rx_q))) {
@@ -322,22 +323,22 @@ static void recv_tasklet(struct tasklet_struct *t)

/*
* Check flow control field. If Num_FM_HCI_Commands field is
- * not zero, schedule FM TX tasklet.
+ * not zero, schedule FM TX work.
*/
if (num_fm_hci_cmds && atomic_read(&fmdev->tx_cnt))
if (!skb_queue_empty(&fmdev->tx_q))
- tasklet_schedule(&fmdev->tx_task);
+ queue_work(system_bh_wq, &fmdev->tx_task);
}
}

-/* FM send tasklet: is scheduled when FM packet has to be sent to chip */
-static void send_tasklet(struct tasklet_struct *t)
+/* FM send work: is scheduled when FM packet has to be sent to chip */
+static void send_work(struct work_struct *t)
{
struct fmdev *fmdev;
struct sk_buff *skb;
int len;

- fmdev = from_tasklet(fmdev, t, tx_task);
+ fmdev = from_work(fmdev, t, tx_task);

if (!atomic_read(&fmdev->tx_cnt))
return;
@@ -366,7 +367,7 @@ static void send_tasklet(struct tasklet_struct *t)
if (len < 0) {
kfree_skb(skb);
fmdev->resp_comp = NULL;
- fmerr("TX tasklet failed to send skb(%p)\n", skb);
+ fmerr("TX work failed to send skb(%p)\n", skb);
atomic_set(&fmdev->tx_cnt, 1);
} else {
fmdev->last_tx_jiffies = jiffies;
@@ -374,7 +375,7 @@ static void send_tasklet(struct tasklet_struct *t)
}

/*
- * Queues FM Channel-8 packet to FM TX queue and schedules FM TX tasklet for
+ * Queues FM Channel-8 packet to FM TX queue and schedules FM TX work for
* transmission
*/
static int fm_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,
@@ -440,7 +441,7 @@ static int fm_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,

fm_cb(skb)->completion = wait_completion;
skb_queue_tail(&fmdev->tx_q, skb);
- tasklet_schedule(&fmdev->tx_task);
+ queue_work(system_bh_wq, &fmdev->tx_task);

return 0;
}
@@ -462,7 +463,7 @@ int fmc_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,

if (!wait_for_completion_timeout(&fmdev->maintask_comp,
FM_DRV_TX_TIMEOUT)) {
- fmerr("Timeout(%d sec),didn't get regcompletion signal from RX tasklet\n",
+ fmerr("Timeout(%d sec),didn't get regcompletion signal from RX work\n",
jiffies_to_msecs(FM_DRV_TX_TIMEOUT) / 1000);
return -ETIMEDOUT;
}
@@ -1455,7 +1456,7 @@ static long fm_st_receive(void *arg, struct sk_buff *skb)

memcpy(skb_push(skb, 1), &skb->cb[0], 1);
skb_queue_tail(&fmdev->rx_q, skb);
- tasklet_schedule(&fmdev->rx_task);
+ queue_work(system_bh_wq, &fmdev->rx_task);

return 0;
}
@@ -1537,13 +1538,13 @@ int fmc_prepare(struct fmdev *fmdev)
spin_lock_init(&fmdev->rds_buff_lock);
spin_lock_init(&fmdev->resp_skb_lock);

- /* Initialize TX queue and TX tasklet */
+ /* Initialize TX queue and TX work */
skb_queue_head_init(&fmdev->tx_q);
- tasklet_setup(&fmdev->tx_task, send_tasklet);
+ INIT_WORK(&fmdev->tx_task, send_work);

- /* Initialize RX Queue and RX tasklet */
+ /* Initialize RX Queue and RX work */
skb_queue_head_init(&fmdev->rx_q);
- tasklet_setup(&fmdev->rx_task, recv_tasklet);
+ INIT_WORK(&fmdev->rx_task, recv_work);

fmdev->irq_info.stage = 0;
atomic_set(&fmdev->tx_cnt, 1);
@@ -1589,8 +1590,8 @@ int fmc_release(struct fmdev *fmdev)
/* Service pending read */
wake_up_interruptible(&fmdev->rx.rds.read_queue);

- tasklet_kill(&fmdev->tx_task);
- tasklet_kill(&fmdev->rx_task);
+ cancel_work_sync(&fmdev->tx_task);
+ cancel_work_sync(&fmdev->rx_task);

skb_queue_purge(&fmdev->tx_q);
skb_queue_purge(&fmdev->rx_q);
diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
index c76ba24c1f55..a2e2e58b7506 100644
--- a/drivers/media/rc/mceusb.c
+++ b/drivers/media/rc/mceusb.c
@@ -774,7 +774,7 @@ static void mceusb_dev_printdata(struct mceusb_dev *ir, u8 *buf, int buf_len,

/*
* Schedule work that can't be done in interrupt handlers
- * (mceusb_dev_recv() and mce_write_callback()) nor tasklets.
+ * (mceusb_dev_recv() and mce_write_callback()) nor works.
* Invokes mceusb_deferred_kevent() for recovering from
* error events specified by the kevent bit field.
*/
diff --git a/drivers/media/usb/ttusb-dec/ttusb_dec.c b/drivers/media/usb/ttusb-dec/ttusb_dec.c
index 79faa2560613..55eeb00f1126 100644
--- a/drivers/media/usb/ttusb-dec/ttusb_dec.c
+++ b/drivers/media/usb/ttusb-dec/ttusb_dec.c
@@ -19,6 +19,7 @@
#include <linux/input.h>

#include <linux/mutex.h>
+#include <linux/workqueue.h>

#include <media/dmxdev.h>
#include <media/dvb_demux.h>
@@ -139,7 +140,7 @@ struct ttusb_dec {
int v_pes_postbytes;

struct list_head urb_frame_list;
- struct tasklet_struct urb_tasklet;
+ struct work_struct urb_work;
spinlock_t urb_frame_list_lock;

struct dvb_demux_filter *audio_filter;
@@ -766,9 +767,9 @@ static void ttusb_dec_process_urb_frame(struct ttusb_dec *dec, u8 *b,
}
}

-static void ttusb_dec_process_urb_frame_list(struct tasklet_struct *t)
+static void ttusb_dec_process_urb_frame_list(struct work_struct *t)
{
- struct ttusb_dec *dec = from_tasklet(dec, t, urb_tasklet);
+ struct ttusb_dec *dec = from_work(dec, t, urb_work);
struct list_head *item;
struct urb_frame *frame;
unsigned long flags;
@@ -822,7 +823,7 @@ static void ttusb_dec_process_urb(struct urb *urb)
spin_unlock_irqrestore(&dec->urb_frame_list_lock,
flags);

- tasklet_schedule(&dec->urb_tasklet);
+ queue_work(system_bh_wq, &dec->urb_work);
}
}
} else {
@@ -1198,11 +1199,11 @@ static int ttusb_dec_alloc_iso_urbs(struct ttusb_dec *dec)
return 0;
}

-static void ttusb_dec_init_tasklet(struct ttusb_dec *dec)
+static void ttusb_dec_init_work(struct ttusb_dec *dec)
{
spin_lock_init(&dec->urb_frame_list_lock);
INIT_LIST_HEAD(&dec->urb_frame_list);
- tasklet_setup(&dec->urb_tasklet, ttusb_dec_process_urb_frame_list);
+ INIT_WORK(&dec->urb_work, ttusb_dec_process_urb_frame_list);
}

static int ttusb_init_rc( struct ttusb_dec *dec)
@@ -1588,12 +1589,12 @@ static void ttusb_dec_exit_usb(struct ttusb_dec *dec)
ttusb_dec_free_iso_urbs(dec);
}

-static void ttusb_dec_exit_tasklet(struct ttusb_dec *dec)
+static void ttusb_dec_exit_work(struct ttusb_dec *dec)
{
struct list_head *item;
struct urb_frame *frame;

- tasklet_kill(&dec->urb_tasklet);
+ cancel_work_sync(&dec->urb_work);

while ((item = dec->urb_frame_list.next) != &dec->urb_frame_list) {
frame = list_entry(item, struct urb_frame, urb_frame_list);
@@ -1703,7 +1704,7 @@ static int ttusb_dec_probe(struct usb_interface *intf,

ttusb_dec_init_v_pes(dec);
ttusb_dec_init_filters(dec);
- ttusb_dec_init_tasklet(dec);
+ ttusb_dec_init_work(dec);

dec->active = 1;

@@ -1729,7 +1730,7 @@ static void ttusb_dec_disconnect(struct usb_interface *intf)
dprintk("%s\n", __func__);

if (dec->active) {
- ttusb_dec_exit_tasklet(dec);
+ ttusb_dec_exit_work(dec);
ttusb_dec_exit_filters(dec);
if(enable_rc)
ttusb_dec_exit_rc(dec);
--
2.17.1


2024-03-27 17:25:19

by Allen

[permalink] [raw]
Subject: Re: [PATCH 4/9] USB: Convert from tasklet to BH workqueue

> > The only generic interface to execute asynchronously in the BH context is
> > tasklet; however, it's marked deprecated and has some design flaws. To
> > replace tasklets, BH workqueue support was recently added. A BH workqueue
> > behaves similarly to regular workqueues except that the queued work items
> > are executed in the BH context.
> >
> > This patch converts drivers/infiniband/* from tasklet to BH workqueue.
>
> No it does not, I think your changelog is wrong :(

Whoops, sorry about that. I messed up the commit messages. I will fix it in v2.
>
> >
> > Based on the work done by Tejun Heo <[email protected]>
> > Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
> >
> > Signed-off-by: Allen Pais <[email protected]>
> > ---
> > drivers/usb/atm/usbatm.c | 55 +++++++++++++++--------------
> > drivers/usb/atm/usbatm.h | 3 +-
> > drivers/usb/core/hcd.c | 22 ++++++------
> > drivers/usb/gadget/udc/fsl_qe_udc.c | 21 +++++------
> > drivers/usb/gadget/udc/fsl_qe_udc.h | 4 +--
> > drivers/usb/host/ehci-sched.c | 2 +-
> > drivers/usb/host/fhci-hcd.c | 3 +-
> > drivers/usb/host/fhci-sched.c | 10 +++---
> > drivers/usb/host/fhci.h | 5 +--
> > drivers/usb/host/xhci-dbgcap.h | 3 +-
> > drivers/usb/host/xhci-dbgtty.c | 15 ++++----
> > include/linux/usb/cdc_ncm.h | 2 +-
> > include/linux/usb/usbnet.h | 2 +-
> > 13 files changed, 76 insertions(+), 71 deletions(-)
> >
> > diff --git a/drivers/usb/atm/usbatm.c b/drivers/usb/atm/usbatm.c
> > index 2da6615fbb6f..74849f24e52e 100644
> > --- a/drivers/usb/atm/usbatm.c
> > +++ b/drivers/usb/atm/usbatm.c
> > @@ -17,7 +17,7 @@
> > * - Removed the limit on the number of devices
> > * - Module now autoloads on device plugin
> > * - Merged relevant parts of sarlib
> > - * - Replaced the kernel thread with a tasklet
> > + * - Replaced the kernel thread with a work
>
> a "work"?
will fix the comments.

>
> > * - New packet transmission code
> > * - Changed proc file contents
> > * - Fixed all known SMP races
> > @@ -68,6 +68,7 @@
> > #include <linux/wait.h>
> > #include <linux/kthread.h>
> > #include <linux/ratelimit.h>
> > +#include <linux/workqueue.h>
> >
> > #ifdef VERBOSE_DEBUG
> > static int usbatm_print_packet(struct usbatm_data *instance, const unsigned char *data, int len);
> > @@ -249,7 +250,7 @@ static void usbatm_complete(struct urb *urb)
> > /* vdbg("%s: urb 0x%p, status %d, actual_length %d",
> > __func__, urb, status, urb->actual_length); */
> >
> > - /* Can be invoked from task context, protect against interrupts */
> > + /* Can be invoked from work context, protect against interrupts */
>
> "workqueue"? This too seems wrong.
>
> Same for other comment changes in this patch.

Thanks for the quick review, I will fix the comments and send out v2.

- Allen

> thanks,
>
> greg k-h
>

2024-03-27 18:05:30

by Corey Minyard

[permalink] [raw]
Subject: Re: [PATCH 6/9] ipmi: Convert from tasklet to BH workqueue

On Wed, Mar 27, 2024 at 04:03:11PM +0000, Allen Pais wrote:
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/infiniband/* from tasklet to BH workqueue.

I think you mean drivers/char/ipmi/* here.

I believe that work queue items are executed single-threaded for a work
queue, so this should be good. I need to test this, though. It may be
that an IPMI device can have its own work queue; it may not be important
to run it in bh context.
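A rough sketch of what a per-interface queue might look like if we go that
way (untested, purely illustrative, and using a regular ordered workqueue,
i.e. process context, since BH may not matter here):

	/* in struct ipmi_smi, instead of queueing on system_bh_wq */
	struct workqueue_struct *recv_wq;
	struct work_struct recv_work;

	/* at interface setup; ordered => only one recv_work runs at a time */
	intf->recv_wq = alloc_ordered_workqueue("ipmi-recv", 0);
	if (!intf->recv_wq)
		return -ENOMEM;
	INIT_WORK(&intf->recv_work, smi_recv_work);

	/* wherever tasklet_schedule(&intf->recv_tasklet) used to be */
	queue_work(intf->recv_wq, &intf->recv_work);

	/* at teardown */
	cancel_work_sync(&intf->recv_work);
	destroy_workqueue(intf->recv_wq);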

-corey

>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>
> Signed-off-by: Allen Pais <[email protected]>
> ---
> drivers/char/ipmi/ipmi_msghandler.c | 30 ++++++++++++++---------------
> 1 file changed, 15 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
> index b0eedc4595b3..fce2a2dbdc82 100644
> --- a/drivers/char/ipmi/ipmi_msghandler.c
> +++ b/drivers/char/ipmi/ipmi_msghandler.c
> @@ -36,12 +36,13 @@
> #include <linux/nospec.h>
> #include <linux/vmalloc.h>
> #include <linux/delay.h>
> +#include <linux/workqueue.h>
>
> #define IPMI_DRIVER_VERSION "39.2"
>
> static struct ipmi_recv_msg *ipmi_alloc_recv_msg(void);
> static int ipmi_init_msghandler(void);
> -static void smi_recv_tasklet(struct tasklet_struct *t);
> +static void smi_recv_work(struct work_struct *t);
> static void handle_new_recv_msgs(struct ipmi_smi *intf);
> static void need_waiter(struct ipmi_smi *intf);
> static int handle_one_recv_msg(struct ipmi_smi *intf,
> @@ -498,13 +499,13 @@ struct ipmi_smi {
> /*
> * Messages queued for delivery. If delivery fails (out of memory
> * for instance), They will stay in here to be processed later in a
> - * periodic timer interrupt. The tasklet is for handling received
> + * periodic timer interrupt. The work is for handling received
> * messages directly from the handler.
> */
> spinlock_t waiting_rcv_msgs_lock;
> struct list_head waiting_rcv_msgs;
> atomic_t watchdog_pretimeouts_to_deliver;
> - struct tasklet_struct recv_tasklet;
> + struct work_struct recv_work;
>
> spinlock_t xmit_msgs_lock;
> struct list_head xmit_msgs;
> @@ -704,7 +705,7 @@ static void clean_up_interface_data(struct ipmi_smi *intf)
> struct cmd_rcvr *rcvr, *rcvr2;
> struct list_head list;
>
> - tasklet_kill(&intf->recv_tasklet);
> + cancel_work_sync(&intf->recv_work);
>
> free_smi_msg_list(&intf->waiting_rcv_msgs);
> free_recv_msg_list(&intf->waiting_events);
> @@ -1319,7 +1320,7 @@ static void free_user(struct kref *ref)
> {
> struct ipmi_user *user = container_of(ref, struct ipmi_user, refcount);
>
> - /* SRCU cleanup must happen in task context. */
> + /* SRCU cleanup must happen in work context. */
> queue_work(remove_work_wq, &user->remove_work);
> }
>
> @@ -3605,8 +3606,7 @@ int ipmi_add_smi(struct module *owner,
> intf->curr_seq = 0;
> spin_lock_init(&intf->waiting_rcv_msgs_lock);
> INIT_LIST_HEAD(&intf->waiting_rcv_msgs);
> - tasklet_setup(&intf->recv_tasklet,
> - smi_recv_tasklet);
> + INIT_WORK(&intf->recv_work, smi_recv_work);
> atomic_set(&intf->watchdog_pretimeouts_to_deliver, 0);
> spin_lock_init(&intf->xmit_msgs_lock);
> INIT_LIST_HEAD(&intf->xmit_msgs);
> @@ -4779,7 +4779,7 @@ static void handle_new_recv_msgs(struct ipmi_smi *intf)
> * To preserve message order, quit if we
> * can't handle a message. Add the message
> * back at the head, this is safe because this
> - * tasklet is the only thing that pulls the
> + * work is the only thing that pulls the
> * messages.
> */
> list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
> @@ -4812,10 +4812,10 @@ static void handle_new_recv_msgs(struct ipmi_smi *intf)
> }
> }
>
> -static void smi_recv_tasklet(struct tasklet_struct *t)
> +static void smi_recv_work(struct work_struct *t)
> {
> unsigned long flags = 0; /* keep us warning-free. */
> - struct ipmi_smi *intf = from_tasklet(intf, t, recv_tasklet);
> + struct ipmi_smi *intf = from_work(intf, t, recv_work);
> int run_to_completion = intf->run_to_completion;
> struct ipmi_smi_msg *newmsg = NULL;
>
> @@ -4866,7 +4866,7 @@ void ipmi_smi_msg_received(struct ipmi_smi *intf,
>
> /*
> * To preserve message order, we keep a queue and deliver from
> - * a tasklet.
> + * a work.
> */
> if (!run_to_completion)
> spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);
> @@ -4887,9 +4887,9 @@ void ipmi_smi_msg_received(struct ipmi_smi *intf,
> spin_unlock_irqrestore(&intf->xmit_msgs_lock, flags);
>
> if (run_to_completion)
> - smi_recv_tasklet(&intf->recv_tasklet);
> + smi_recv_work(&intf->recv_work);
> else
> - tasklet_schedule(&intf->recv_tasklet);
> + queue_work(system_bh_wq, &intf->recv_work);
> }
> EXPORT_SYMBOL(ipmi_smi_msg_received);
>
> @@ -4899,7 +4899,7 @@ void ipmi_smi_watchdog_pretimeout(struct ipmi_smi *intf)
> return;
>
> atomic_set(&intf->watchdog_pretimeouts_to_deliver, 1);
> - tasklet_schedule(&intf->recv_tasklet);
> + queue_work(system_bh_wq, &intf->recv_work);
> }
> EXPORT_SYMBOL(ipmi_smi_watchdog_pretimeout);
>
> @@ -5068,7 +5068,7 @@ static bool ipmi_timeout_handler(struct ipmi_smi *intf,
> flags);
> }
>
> - tasklet_schedule(&intf->recv_tasklet);
> + queue_work(system_bh_wq, &intf->recv_work);
>
> return need_timer;
> }
> --
> 2.17.1
>
>

2024-03-27 18:10:27

by Alan Stern

[permalink] [raw]
Subject: Re: [PATCH 4/9] USB: Convert from tasklet to BH workqueue

On Wed, Mar 27, 2024 at 04:03:09PM +0000, Allen Pais wrote:
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/infiniband/* from tasklet to BH workqueue.
>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>
> Signed-off-by: Allen Pais <[email protected]>
> ---

> diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
> index c0e005670d67..88d8e1c366cd 100644
> --- a/drivers/usb/core/hcd.c
> +++ b/drivers/usb/core/hcd.c

> @@ -1662,10 +1663,9 @@ static void __usb_hcd_giveback_urb(struct urb *urb)
> usb_put_urb(urb);
> }
>
> -static void usb_giveback_urb_bh(struct work_struct *work)
> +static void usb_giveback_urb_bh(struct work_struct *t)
> {
> - struct giveback_urb_bh *bh =
> - container_of(work, struct giveback_urb_bh, bh);
> + struct giveback_urb_bh *bh = from_work(bh, t, bh);
> struct list_head local_list;
>
> spin_lock_irq(&bh->lock);

Is there any reason for this apparently pointless change of a local
variable's name?

Alan Stern

2024-03-27 19:50:13

by Jernej Škrabec

[permalink] [raw]
Subject: Re: [PATCH 9/9] mmc: Convert from tasklet to BH workqueue

On Wednesday, 27 March 2024 at 17:03:14 CET, Allen Pais wrote:
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/infiniband/* from tasklet to BH workqueue.

infiniband -> mmc

Best regards,
Jernej

>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>
> Signed-off-by: Allen Pais <[email protected]>




2024-03-28 05:56:11

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH 2/9] dma: Convert from tasklet to BH workqueue

Hi Allen,

Subsystem is dmaengine, can you rename this to dmaengine: ...

On 27-03-24, 16:03, Allen Pais wrote:
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.

Thanks for the conversion, I am happy with the BH alternative as it helps
in dmaengine where we need the shortest possible time between tasklet and
interrupt handling to maximize DMA performance.

>
> This patch converts drivers/dma/* from tasklet to BH workqueue.

>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>
> Signed-off-by: Allen Pais <[email protected]>
> ---
> drivers/dma/altera-msgdma.c | 15 ++++----
> drivers/dma/apple-admac.c | 15 ++++----
> drivers/dma/at_hdmac.c | 2 +-
> drivers/dma/at_xdmac.c | 15 ++++----
> drivers/dma/bcm2835-dma.c | 2 +-
> drivers/dma/dma-axi-dmac.c | 2 +-
> drivers/dma/dma-jz4780.c | 2 +-
> .../dma/dw-axi-dmac/dw-axi-dmac-platform.c | 2 +-
> drivers/dma/dw-edma/dw-edma-core.c | 2 +-
> drivers/dma/dw/core.c | 13 +++----
> drivers/dma/dw/regs.h | 3 +-
> drivers/dma/ep93xx_dma.c | 15 ++++----
> drivers/dma/fsl-edma-common.c | 2 +-
> drivers/dma/fsl-qdma.c | 2 +-
> drivers/dma/fsl_raid.c | 11 +++---
> drivers/dma/fsl_raid.h | 2 +-
> drivers/dma/fsldma.c | 15 ++++----
> drivers/dma/fsldma.h | 3 +-
> drivers/dma/hisi_dma.c | 2 +-
> drivers/dma/hsu/hsu.c | 2 +-
> drivers/dma/idma64.c | 4 +--
> drivers/dma/img-mdc-dma.c | 2 +-
> drivers/dma/imx-dma.c | 27 +++++++-------
> drivers/dma/imx-sdma.c | 6 ++--
> drivers/dma/ioat/dma.c | 17 ++++-----
> drivers/dma/ioat/dma.h | 5 +--
> drivers/dma/ioat/init.c | 2 +-
> drivers/dma/k3dma.c | 19 +++++-----
> drivers/dma/mediatek/mtk-cqdma.c | 35 ++++++++++---------
> drivers/dma/mediatek/mtk-hsdma.c | 2 +-
> drivers/dma/mediatek/mtk-uart-apdma.c | 4 +--
> drivers/dma/mmp_pdma.c | 13 +++----
> drivers/dma/mmp_tdma.c | 11 +++---
> drivers/dma/mpc512x_dma.c | 17 ++++-----
> drivers/dma/mv_xor.c | 13 +++----
> drivers/dma/mv_xor.h | 5 +--
> drivers/dma/mv_xor_v2.c | 23 ++++++------
> drivers/dma/mxs-dma.c | 13 +++----
> drivers/dma/nbpfaxi.c | 15 ++++----
> drivers/dma/owl-dma.c | 2 +-
> drivers/dma/pch_dma.c | 17 ++++-----
> drivers/dma/pl330.c | 31 ++++++++--------
> drivers/dma/plx_dma.c | 13 +++----
> drivers/dma/ppc4xx/adma.c | 17 ++++-----
> drivers/dma/ppc4xx/adma.h | 5 +--
> drivers/dma/pxa_dma.c | 2 +-
> drivers/dma/qcom/bam_dma.c | 35 ++++++++++---------
> drivers/dma/qcom/gpi.c | 18 +++++-----
> drivers/dma/qcom/hidma.c | 11 +++---
> drivers/dma/qcom/hidma.h | 5 +--
> drivers/dma/qcom/hidma_ll.c | 11 +++---
> drivers/dma/qcom/qcom_adm.c | 2 +-
> drivers/dma/sa11x0-dma.c | 27 +++++++-------
> drivers/dma/sf-pdma/sf-pdma.c | 23 ++++++------
> drivers/dma/sf-pdma/sf-pdma.h | 5 +--
> drivers/dma/sprd-dma.c | 2 +-
> drivers/dma/st_fdma.c | 2 +-
> drivers/dma/ste_dma40.c | 17 ++++-----
> drivers/dma/sun6i-dma.c | 33 ++++++++---------
> drivers/dma/tegra186-gpc-dma.c | 2 +-
> drivers/dma/tegra20-apb-dma.c | 19 +++++-----
> drivers/dma/tegra210-adma.c | 2 +-
> drivers/dma/ti/edma.c | 2 +-
> drivers/dma/ti/k3-udma.c | 11 +++---
> drivers/dma/ti/omap-dma.c | 2 +-
> drivers/dma/timb_dma.c | 23 ++++++------
> drivers/dma/txx9dmac.c | 29 +++++++--------
> drivers/dma/txx9dmac.h | 5 +--
> drivers/dma/virt-dma.c | 9 ++---
> drivers/dma/virt-dma.h | 9 ++---
> drivers/dma/xgene-dma.c | 21 +++++------
> drivers/dma/xilinx/xilinx_dma.c | 23 ++++++------
> drivers/dma/xilinx/xilinx_dpdma.c | 21 +++++------
> drivers/dma/xilinx/zynqmp_dma.c | 21 +++++------
> 74 files changed, 442 insertions(+), 395 deletions(-)
>
> diff --git a/drivers/dma/altera-msgdma.c b/drivers/dma/altera-msgdma.c
> index a8e3615235b8..611b5290324b 100644
> --- a/drivers/dma/altera-msgdma.c
> +++ b/drivers/dma/altera-msgdma.c
> @@ -20,6 +20,7 @@
> #include <linux/platform_device.h>
> #include <linux/slab.h>
> #include <linux/of_dma.h>
> +#include <linux/workqueue.h>
>
> #include "dmaengine.h"
>
> @@ -170,7 +171,7 @@ struct msgdma_sw_desc {
> struct msgdma_device {
> spinlock_t lock;
> struct device *dev;
> - struct tasklet_struct irq_tasklet;
> + struct work_struct irq_work;

Can we name these bh_work to signify that we are always in BH
context? Here and everywhere, please.
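
A minimal sketch of the requested naming, using altera-msgdma as the
example (the bh_work names below are illustrative only, not from the
posted patch):

/* field in struct msgdma_device, replacing irq_tasklet */
struct work_struct irq_bh_work;

static void msgdma_irq_bh_work(struct work_struct *t)
{
	struct msgdma_device *mdev = from_work(mdev, t, irq_bh_work);

	/* same body as the old msgdma_tasklet() */
}

/* probe: */
INIT_WORK(&mdev->irq_bh_work, msgdma_irq_bh_work);
/* interrupt handler: */
queue_work(system_bh_wq, &mdev->irq_bh_work);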


> struct list_head pending_list;
> struct list_head free_list;
> struct list_head active_list;
> @@ -676,12 +677,12 @@ static int msgdma_alloc_chan_resources(struct dma_chan *dchan)
> }
>
> /**
> - * msgdma_tasklet - Schedule completion tasklet
> + * msgdma_work - Schedule completion work

..

> @@ -515,7 +516,7 @@ struct gpii {
> enum gpi_pm_state pm_state;
> rwlock_t pm_lock;
> struct gpi_ring ev_ring;
> - struct tasklet_struct ev_task; /* event processing tasklet */
> + struct work_struct ev_task; /* event processing work */
> struct completion cmd_completion;
> enum gpi_cmd gpi_cmd;
> u32 cntxt_type_irq_msk;
> @@ -755,7 +756,7 @@ static void gpi_process_ieob(struct gpii *gpii)
> gpi_write_reg(gpii, gpii->ieob_clr_reg, BIT(0));
>
> gpi_config_interrupts(gpii, MASK_IEOB_SETTINGS, 0);
> - tasklet_hi_schedule(&gpii->ev_task);
> + queue_work(system_bh_highpri_wq, &gpii->ev_task);

This is a good conversion; thanks for ensuring system_bh_highpri_wq is
used here.
--
~Vinod

2024-03-28 10:09:37

by Arnd Bergmann

[permalink] [raw]
Subject: Re: [PATCH 2/9] dma: Convert from tasklet to BH workqueue

On Thu, Mar 28, 2024, at 06:55, Vinod Koul wrote:
> On 27-03-24, 16:03, Allen Pais wrote:
>> The only generic interface to execute asynchronously in the BH context is
>> tasklet; however, it's marked deprecated and has some design flaws. To
>> replace tasklets, BH workqueue support was recently added. A BH workqueue
>> behaves similarly to regular workqueues except that the queued work items
>> are executed in the BH context.
>
> Thanks for conversion, am happy with BH alternative as it helps in
> dmaengine where we need shortest possible time between tasklet and
> interrupt handling to maximize dma performance

I still feel that we want something different for dmaengine,
at least in the long run. As we have discussed in the past,
the tasklet context in these drivers is what the callbacks
from the dma client device are run in, and a lot of these probably
want something other than tasklet context, e.g. just calling
complete() on a client-provided completion structure.

Instead of open-coding the use of system_bh_wq in each dmaengine
driver, how about we start with a custom WQ_BH workqueue specifically
for the dmaengine subsystem and wrap it inside another interface?

Since almost every driver associates the tasklet with the
dma_chan, we could go one step further and add the
work_queue structure directly into struct dma_chan,
with the wrapper operating on the dma_chan rather than
the work_queue.
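
A minimal sketch of what that could look like, assuming a
dmaengine-private WQ_BH workqueue and a hypothetical complete_work
member added to struct dma_chan (neither exists in this series; all
names below are illustrative):

#include <linux/dmaengine.h>
#include <linux/workqueue.h>

static struct workqueue_struct *dmaengine_bh_wq;

/* called once from dmaengine core init */
static int dmaengine_bh_wq_init(void)
{
	dmaengine_bh_wq = alloc_workqueue("dmaengine_bh", WQ_BH, 0);
	return dmaengine_bh_wq ? 0 : -ENOMEM;
}

/* assumed new member in struct dma_chan:  struct work_struct complete_work; */

static inline void dma_chan_init_complete_work(struct dma_chan *chan,
					       work_func_t fn)
{
	INIT_WORK(&chan->complete_work, fn);
}

static inline void dma_chan_schedule_complete_work(struct dma_chan *chan)
{
	queue_work(dmaengine_bh_wq, &chan->complete_work);
}

Drivers would then call dma_chan_schedule_complete_work() from their
interrupt handlers instead of open-coding queue_work(system_bh_wq, ...).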

Arnd

2024-03-28 10:17:15

by Christian Loehle

[permalink] [raw]
Subject: Re: [PATCH 9/9] mmc: Convert from tasklet to BH workqueue

On 27/03/2024 16:03, Allen Pais wrote:
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/infiniband/* from tasklet to BH workqueue.
s/infiniband/mmc
>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>
> Signed-off-by: Allen Pais <[email protected]>
> ---
> drivers/mmc/host/atmel-mci.c | 35 ++++-----
> drivers/mmc/host/au1xmmc.c | 37 ++++-----
> drivers/mmc/host/cb710-mmc.c | 15 ++--
> drivers/mmc/host/cb710-mmc.h | 3 +-
> drivers/mmc/host/dw_mmc.c | 25 ++++---
> drivers/mmc/host/dw_mmc.h | 9 ++-
For dw_mmc:
Performance numbers look good FWIW.
for i in $(seq 0 5); do echo performance > /sys/devices/system/cpu/cpu$i/cpufreq/scaling_governor; done
for i in $(seq 0 4); do fio --name=test --rw=randread --bs=4k --runtime=30 --time_based --filename=/dev/mmcblk1 --minimal --numjobs=6 --iodepth=32 --group_reporting | awk -F ";" '{print $8}'; sleep 30; done
Baseline:
1758
1773
1619
1835
1639
to:
1743
1643
1860
1638
1872
(I'd call that equivalent).
This is on a RK3399.
I would prefer most of the naming to change from "work" to "workqueue" in the driver
code.
Apart from that:
Reviewed-by: Christian Loehle <[email protected]>
Tested-by: Christian Loehle <[email protected]>
> drivers/mmc/host/omap.c | 17 +++--
> drivers/mmc/host/renesas_sdhi.h | 3 +-
> drivers/mmc/host/renesas_sdhi_internal_dmac.c | 24 +++---
See inline
> drivers/mmc/host/renesas_sdhi_sys_dmac.c | 9 +--
> drivers/mmc/host/sdhci-bcm-kona.c | 2 +-
> drivers/mmc/host/tifm_sd.c | 15 ++--
> drivers/mmc/host/tmio_mmc.h | 3 +-
> drivers/mmc/host/tmio_mmc_core.c | 4 +-
> drivers/mmc/host/uniphier-sd.c | 13 ++--
> drivers/mmc/host/via-sdmmc.c | 25 ++++---
> drivers/mmc/host/wbsd.c | 75 ++++++++++---------
> drivers/mmc/host/wbsd.h | 10 +--
> 18 files changed, 167 insertions(+), 157 deletions(-)
>
> diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
> index dba826db739a..0a92a7fd020f 100644
> --- a/drivers/mmc/host/atmel-mci.c
> +++ b/drivers/mmc/host/atmel-mci.c
> @@ -33,6 +33,7 @@
> #include <linux/pm.h>
> #include <linux/pm_runtime.h>
> #include <linux/pinctrl/consumer.h>
> +#include <linux/workqueue.h>
>
> #include <asm/cacheflush.h>
> #include <asm/io.h>
> @@ -284,12 +285,12 @@ struct atmel_mci_dma {
> * EVENT_DATA_ERROR is pending.
> * @stop_cmdr: Value to be loaded into CMDR when the stop command is
> * to be sent.
> - * @tasklet: Tasklet running the request state machine.
> + * @work: Work running the request state machine.
> * @pending_events: Bitmask of events flagged by the interrupt handler
> - * to be processed by the tasklet.
> + * to be processed by the work.
> * @completed_events: Bitmask of events which the state machine has
> * processed.
> - * @state: Tasklet state.
> + * @state: Work state.
> * @queue: List of slots waiting for access to the controller.
> * @need_clock_update: Update the clock rate before the next request.
> * @need_reset: Reset controller before next request.
> @@ -363,7 +364,7 @@ struct atmel_mci {
> u32 data_status;
> u32 stop_cmdr;
>
> - struct tasklet_struct tasklet;
> + struct work_struct work;
> unsigned long pending_events;
> unsigned long completed_events;
> enum atmel_mci_state state;
> @@ -761,7 +762,7 @@ static void atmci_timeout_timer(struct timer_list *t)
> host->need_reset = 1;
> host->state = STATE_END_REQUEST;
> smp_wmb();
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> static inline unsigned int atmci_ns_to_clocks(struct atmel_mci *host,
> @@ -983,7 +984,7 @@ static void atmci_pdc_complete(struct atmel_mci *host)
>
> dev_dbg(&host->pdev->dev, "(%s) set pending xfer complete\n", __func__);
> atmci_set_pending(host, EVENT_XFER_COMPLETE);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> static void atmci_dma_cleanup(struct atmel_mci *host)
> @@ -997,7 +998,7 @@ static void atmci_dma_cleanup(struct atmel_mci *host)
> }
>
> /*
> - * This function is called by the DMA driver from tasklet context.
> + * This function is called by the DMA driver from work context.
> */
> static void atmci_dma_complete(void *arg)
> {
> @@ -1020,7 +1021,7 @@ static void atmci_dma_complete(void *arg)
> dev_dbg(&host->pdev->dev,
> "(%s) set pending xfer complete\n", __func__);
> atmci_set_pending(host, EVENT_XFER_COMPLETE);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
>
> /*
> * Regardless of what the documentation says, we have
> @@ -1033,7 +1034,7 @@ static void atmci_dma_complete(void *arg)
> * haven't seen all the potential error bits yet.
> *
> * The interrupt handler will schedule a different
> - * tasklet to finish things up when the data transfer
> + * work to finish things up when the data transfer
> * is completely done.
> *
> * We may not complete the mmc request here anyway
> @@ -1765,9 +1766,9 @@ static void atmci_detect_change(struct timer_list *t)
> }
> }
>
> -static void atmci_tasklet_func(struct tasklet_struct *t)
> +static void atmci_work_func(struct work_struct *t)
> {
> - struct atmel_mci *host = from_tasklet(host, t, tasklet);
> + struct atmel_mci *host = from_work(host, t, work);
> struct mmc_request *mrq = host->mrq;
> struct mmc_data *data = host->data;
> enum atmel_mci_state state = host->state;
> @@ -1779,7 +1780,7 @@ static void atmci_tasklet_func(struct tasklet_struct *t)
> state = host->state;
>
> dev_vdbg(&host->pdev->dev,
> - "tasklet: state %u pending/completed/mask %lx/%lx/%x\n",
> + "work: state %u pending/completed/mask %lx/%lx/%x\n",
> state, host->pending_events, host->completed_events,
> atmci_readl(host, ATMCI_IMR));
>
> @@ -2141,7 +2142,7 @@ static irqreturn_t atmci_interrupt(int irq, void *dev_id)
> dev_dbg(&host->pdev->dev, "set pending data error\n");
> smp_wmb();
> atmci_set_pending(host, EVENT_DATA_ERROR);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> if (pending & ATMCI_TXBUFE) {
> @@ -2210,7 +2211,7 @@ static irqreturn_t atmci_interrupt(int irq, void *dev_id)
> smp_wmb();
> dev_dbg(&host->pdev->dev, "set pending notbusy\n");
> atmci_set_pending(host, EVENT_NOTBUSY);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> if (pending & ATMCI_NOTBUSY) {
> @@ -2219,7 +2220,7 @@ static irqreturn_t atmci_interrupt(int irq, void *dev_id)
> smp_wmb();
> dev_dbg(&host->pdev->dev, "set pending notbusy\n");
> atmci_set_pending(host, EVENT_NOTBUSY);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> if (pending & ATMCI_RXRDY)
> @@ -2234,7 +2235,7 @@ static irqreturn_t atmci_interrupt(int irq, void *dev_id)
> smp_wmb();
> dev_dbg(&host->pdev->dev, "set pending cmd rdy\n");
> atmci_set_pending(host, EVENT_CMD_RDY);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> if (pending & (ATMCI_SDIOIRQA | ATMCI_SDIOIRQB))
> @@ -2530,7 +2531,7 @@ static int atmci_probe(struct platform_device *pdev)
>
> host->mapbase = regs->start;
>
> - tasklet_setup(&host->tasklet, atmci_tasklet_func);
> + INIT_WORK(&host->work, atmci_work_func);
>
> ret = request_irq(irq, atmci_interrupt, 0, dev_name(&pdev->dev), host);
> if (ret) {
> diff --git a/drivers/mmc/host/au1xmmc.c b/drivers/mmc/host/au1xmmc.c
> index b5a5c6a2fe8b..c86fa7d2ebb7 100644
> --- a/drivers/mmc/host/au1xmmc.c
> +++ b/drivers/mmc/host/au1xmmc.c
> @@ -42,6 +42,7 @@
> #include <linux/leds.h>
> #include <linux/mmc/host.h>
> #include <linux/slab.h>
> +#include <linux/workqueue.h>
>
> #include <asm/io.h>
> #include <asm/mach-au1x00/au1000.h>
> @@ -113,8 +114,8 @@ struct au1xmmc_host {
>
> int irq;
>
> - struct tasklet_struct finish_task;
> - struct tasklet_struct data_task;
> + struct work_struct finish_task;
> + struct work_struct data_task;
> struct au1xmmc_platform_data *platdata;
> struct platform_device *pdev;
> struct resource *ioarea;
> @@ -253,9 +254,9 @@ static void au1xmmc_finish_request(struct au1xmmc_host *host)
> mmc_request_done(host->mmc, mrq);
> }
>
> -static void au1xmmc_tasklet_finish(struct tasklet_struct *t)
> +static void au1xmmc_work_finish(struct work_struct *t)
> {
> - struct au1xmmc_host *host = from_tasklet(host, t, finish_task);
> + struct au1xmmc_host *host = from_work(host, t, finish_task);
> au1xmmc_finish_request(host);
> }
>
> @@ -363,9 +364,9 @@ static void au1xmmc_data_complete(struct au1xmmc_host *host, u32 status)
> au1xmmc_finish_request(host);
> }
>
> -static void au1xmmc_tasklet_data(struct tasklet_struct *t)
> +static void au1xmmc_work_data(struct work_struct *t)
> {
> - struct au1xmmc_host *host = from_tasklet(host, t, data_task);
> + struct au1xmmc_host *host = from_work(host, t, data_task);
>
> u32 status = __raw_readl(HOST_STATUS(host));
> au1xmmc_data_complete(host, status);
> @@ -425,7 +426,7 @@ static void au1xmmc_send_pio(struct au1xmmc_host *host)
> if (host->flags & HOST_F_STOP)
> SEND_STOP(host);
>
> - tasklet_schedule(&host->data_task);
> + queue_work(system_bh_wq, &host->data_task);
> }
> }
>
> @@ -505,7 +506,7 @@ static void au1xmmc_receive_pio(struct au1xmmc_host *host)
> if (host->flags & HOST_F_STOP)
> SEND_STOP(host);
>
> - tasklet_schedule(&host->data_task);
> + queue_work(system_bh_wq, &host->data_task);
> }
> }
>
> @@ -561,7 +562,7 @@ static void au1xmmc_cmd_complete(struct au1xmmc_host *host, u32 status)
>
> if (!trans || cmd->error) {
> IRQ_OFF(host, SD_CONFIG_TH | SD_CONFIG_RA | SD_CONFIG_RF);
> - tasklet_schedule(&host->finish_task);
> + queue_work(system_bh_wq, &host->finish_task);
> return;
> }
>
> @@ -797,7 +798,7 @@ static irqreturn_t au1xmmc_irq(int irq, void *dev_id)
> IRQ_OFF(host, SD_CONFIG_NE | SD_CONFIG_TH);
>
> /* IRQ_OFF(host, SD_CONFIG_TH | SD_CONFIG_RA | SD_CONFIG_RF); */
> - tasklet_schedule(&host->finish_task);
> + queue_work(system_bh_wq, &host->finish_task);
> }
> #if 0
> else if (status & SD_STATUS_DD) {
> @@ -806,7 +807,7 @@ static irqreturn_t au1xmmc_irq(int irq, void *dev_id)
> au1xmmc_receive_pio(host);
> else {
> au1xmmc_data_complete(host, status);
> - /* tasklet_schedule(&host->data_task); */
> + /* queue_work(system_bh_wq, &host->data_task); */
> }
> }
> #endif
> @@ -854,7 +855,7 @@ static void au1xmmc_dbdma_callback(int irq, void *dev_id)
> if (host->flags & HOST_F_STOP)
> SEND_STOP(host);
>
> - tasklet_schedule(&host->data_task);
> + queue_work(system_bh_wq, &host->data_task);
> }
>
> static int au1xmmc_dbdma_init(struct au1xmmc_host *host)
> @@ -1039,9 +1040,9 @@ static int au1xmmc_probe(struct platform_device *pdev)
> if (host->platdata)
> mmc->caps &= ~(host->platdata->mask_host_caps);
>
> - tasklet_setup(&host->data_task, au1xmmc_tasklet_data);
> + INIT_WORK(&host->data_task, au1xmmc_work_data);
>
> - tasklet_setup(&host->finish_task, au1xmmc_tasklet_finish);
> + INIT_WORK(&host->finish_task, au1xmmc_work_finish);
>
> if (has_dbdma()) {
> ret = au1xmmc_dbdma_init(host);
> @@ -1091,8 +1092,8 @@ static int au1xmmc_probe(struct platform_device *pdev)
> if (host->flags & HOST_F_DBDMA)
> au1xmmc_dbdma_shutdown(host);
>
> - tasklet_kill(&host->data_task);
> - tasklet_kill(&host->finish_task);
> + cancel_work_sync(&host->data_task);
> + cancel_work_sync(&host->finish_task);
>
> if (host->platdata && host->platdata->cd_setup &&
> !(mmc->caps & MMC_CAP_NEEDS_POLL))
> @@ -1135,8 +1136,8 @@ static void au1xmmc_remove(struct platform_device *pdev)
> __raw_writel(0, HOST_CONFIG2(host));
> wmb(); /* drain writebuffer */
>
> - tasklet_kill(&host->data_task);
> - tasklet_kill(&host->finish_task);
> + cancel_work_sync(&host->data_task);
> + cancel_work_sync(&host->finish_task);
>
> if (host->flags & HOST_F_DBDMA)
> au1xmmc_dbdma_shutdown(host);
> diff --git a/drivers/mmc/host/cb710-mmc.c b/drivers/mmc/host/cb710-mmc.c
> index 0aec33b88bef..eebb6797e785 100644
> --- a/drivers/mmc/host/cb710-mmc.c
> +++ b/drivers/mmc/host/cb710-mmc.c
> @@ -8,6 +8,7 @@
> #include <linux/module.h>
> #include <linux/pci.h>
> #include <linux/delay.h>
> +#include <linux/workqueue.h>
> #include "cb710-mmc.h"
>
> #define CB710_MMC_REQ_TIMEOUT_MS 2000
> @@ -493,7 +494,7 @@ static void cb710_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
> if (!cb710_mmc_command(mmc, mrq->cmd) && mrq->stop)
> cb710_mmc_command(mmc, mrq->stop);
>
> - tasklet_schedule(&reader->finish_req_tasklet);
> + queue_work(system_bh_wq, &reader->finish_req_work);
> }
>
> static int cb710_mmc_powerup(struct cb710_slot *slot)
> @@ -646,10 +647,10 @@ static int cb710_mmc_irq_handler(struct cb710_slot *slot)
> return 1;
> }
>
> -static void cb710_mmc_finish_request_tasklet(struct tasklet_struct *t)
> +static void cb710_mmc_finish_request_work(struct work_struct *t)
> {
> - struct cb710_mmc_reader *reader = from_tasklet(reader, t,
> - finish_req_tasklet);
> + struct cb710_mmc_reader *reader = from_work(reader, t,
> + finish_req_work);
> struct mmc_request *mrq = reader->mrq;
>
> reader->mrq = NULL;
> @@ -718,8 +719,8 @@ static int cb710_mmc_init(struct platform_device *pdev)
>
> reader = mmc_priv(mmc);
>
> - tasklet_setup(&reader->finish_req_tasklet,
> - cb710_mmc_finish_request_tasklet);
> + INIT_WORK(&reader->finish_req_work,
> + cb710_mmc_finish_request_work);
> spin_lock_init(&reader->irq_lock);
> cb710_dump_regs(chip, CB710_DUMP_REGS_MMC);
>
> @@ -763,7 +764,7 @@ static void cb710_mmc_exit(struct platform_device *pdev)
> cb710_write_port_32(slot, CB710_MMC_CONFIG_PORT, 0);
> cb710_write_port_16(slot, CB710_MMC_CONFIGB_PORT, 0);
>
> - tasklet_kill(&reader->finish_req_tasklet);
> + cancel_work_sync(&reader->finish_req_work);
>
> mmc_free_host(mmc);
> }
> diff --git a/drivers/mmc/host/cb710-mmc.h b/drivers/mmc/host/cb710-mmc.h
> index 5e053077dbed..b35ab8736374 100644
> --- a/drivers/mmc/host/cb710-mmc.h
> +++ b/drivers/mmc/host/cb710-mmc.h
> @@ -8,10 +8,11 @@
> #define LINUX_CB710_MMC_H
>
> #include <linux/cb710.h>
> +#include <linux/workqueue.h>
>
> /* per-MMC-reader structure */
> struct cb710_mmc_reader {
> - struct tasklet_struct finish_req_tasklet;
> + struct work_struct finish_req_work;
> struct mmc_request *mrq;
> spinlock_t irq_lock;
> unsigned char last_power_mode;
> diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
> index 8e2d676b9239..ee6f892bc0d8 100644
> --- a/drivers/mmc/host/dw_mmc.c
> +++ b/drivers/mmc/host/dw_mmc.c
> @@ -36,6 +36,7 @@
> #include <linux/regulator/consumer.h>
> #include <linux/of.h>
> #include <linux/mmc/slot-gpio.h>
> +#include <linux/workqueue.h>
>
> #include "dw_mmc.h"
>
> @@ -493,7 +494,7 @@ static void dw_mci_dmac_complete_dma(void *arg)
> */
> if (data) {
> set_bit(EVENT_XFER_COMPLETE, &host->pending_events);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
> }
>
> @@ -1834,7 +1835,7 @@ static enum hrtimer_restart dw_mci_fault_timer(struct hrtimer *t)
> if (!host->data_status) {
> host->data_status = SDMMC_INT_DCRC;
> set_bit(EVENT_DATA_ERROR, &host->pending_events);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> spin_unlock_irqrestore(&host->irq_lock, flags);
> @@ -2056,9 +2057,9 @@ static bool dw_mci_clear_pending_data_complete(struct dw_mci *host)
> return true;
> }
>
> -static void dw_mci_tasklet_func(struct tasklet_struct *t)
> +static void dw_mci_work_func(struct work_struct *t)
> {
> - struct dw_mci *host = from_tasklet(host, t, tasklet);
> + struct dw_mci *host = from_work(host, t, work);
> struct mmc_data *data;
> struct mmc_command *cmd;
> struct mmc_request *mrq;
> @@ -2113,7 +2114,7 @@ static void dw_mci_tasklet_func(struct tasklet_struct *t)
> * will waste a bit of time (we already know
> * the command was bad), it can't cause any
> * errors since it's possible it would have
> - * taken place anyway if this tasklet got
> + * taken place anyway if this work got
> * delayed. Allowing the transfer to take place
> * avoids races and keeps things simple.
> */
> @@ -2706,7 +2707,7 @@ static void dw_mci_cmd_interrupt(struct dw_mci *host, u32 status)
> smp_wmb(); /* drain writebuffer */
>
> set_bit(EVENT_CMD_COMPLETE, &host->pending_events);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
>
> dw_mci_start_fault_timer(host);
> }
> @@ -2774,7 +2775,7 @@ static irqreturn_t dw_mci_interrupt(int irq, void *dev_id)
> set_bit(EVENT_DATA_COMPLETE,
> &host->pending_events);
>
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
>
> spin_unlock(&host->irq_lock);
> }
> @@ -2793,7 +2794,7 @@ static irqreturn_t dw_mci_interrupt(int irq, void *dev_id)
> dw_mci_read_data_pio(host, true);
> }
> set_bit(EVENT_DATA_COMPLETE, &host->pending_events);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
>
> spin_unlock(&host->irq_lock);
> }
> @@ -3098,7 +3099,7 @@ static void dw_mci_cmd11_timer(struct timer_list *t)
>
> host->cmd_status = SDMMC_INT_RTO;
> set_bit(EVENT_CMD_COMPLETE, &host->pending_events);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> static void dw_mci_cto_timer(struct timer_list *t)
> @@ -3144,7 +3145,7 @@ static void dw_mci_cto_timer(struct timer_list *t)
> */
> host->cmd_status = SDMMC_INT_RTO;
> set_bit(EVENT_CMD_COMPLETE, &host->pending_events);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> break;
> default:
> dev_warn(host->dev, "Unexpected command timeout, state %d\n",
> @@ -3195,7 +3196,7 @@ static void dw_mci_dto_timer(struct timer_list *t)
> host->data_status = SDMMC_INT_DRTO;
> set_bit(EVENT_DATA_ERROR, &host->pending_events);
> set_bit(EVENT_DATA_COMPLETE, &host->pending_events);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> break;
> default:
> dev_warn(host->dev, "Unexpected data timeout, state %d\n",
> @@ -3435,7 +3436,7 @@ int dw_mci_probe(struct dw_mci *host)
> else
> host->fifo_reg = host->regs + DATA_240A_OFFSET;
>
> - tasklet_setup(&host->tasklet, dw_mci_tasklet_func);
> + INIT_WORK(&host->work, dw_mci_work_func);
> ret = devm_request_irq(host->dev, host->irq, dw_mci_interrupt,
> host->irq_flags, "dw-mci", host);
> if (ret)
> diff --git a/drivers/mmc/host/dw_mmc.h b/drivers/mmc/host/dw_mmc.h
> index 4ed81f94f7ca..d17f398a0432 100644
> --- a/drivers/mmc/host/dw_mmc.h
> +++ b/drivers/mmc/host/dw_mmc.h
> @@ -17,6 +17,7 @@
> #include <linux/fault-inject.h>
> #include <linux/hrtimer.h>
> #include <linux/interrupt.h>
> +#include <linux/workqueue.h>
>
> enum dw_mci_state {
> STATE_IDLE = 0,
> @@ -89,12 +90,12 @@ struct dw_mci_dma_slave {
> * @stop_cmdr: Value to be loaded into CMDR when the stop command is
> * to be sent.
> * @dir_status: Direction of current transfer.
> - * @tasklet: Tasklet running the request state machine.
> + * @work: Work running the request state machine.
> * @pending_events: Bitmask of events flagged by the interrupt handler
> - * to be processed by the tasklet.
> + * to be processed by the work.
> * @completed_events: Bitmask of events which the state machine has
> * processed.
> - * @state: Tasklet state.
> + * @state: Work state.
> * @queue: List of slots waiting for access to the controller.
> * @bus_hz: The rate of @mck in Hz. This forms the basis for MMC bus
> * rate and timeout calculations.
> @@ -194,7 +195,7 @@ struct dw_mci {
> u32 data_status;
> u32 stop_cmdr;
> u32 dir_status;
> - struct tasklet_struct tasklet;
> + struct work_struct work;
> unsigned long pending_events;
> unsigned long completed_events;
> enum dw_mci_state state;
> diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c
> index 088f8ed4fdc4..d85bae7b9cba 100644
> --- a/drivers/mmc/host/omap.c
> +++ b/drivers/mmc/host/omap.c
> @@ -28,6 +28,7 @@
> #include <linux/slab.h>
> #include <linux/gpio/consumer.h>
> #include <linux/platform_data/mmc-omap.h>
> +#include <linux/workqueue.h>
>
>
> #define OMAP_MMC_REG_CMD 0x00
> @@ -105,7 +106,7 @@ struct mmc_omap_slot {
> u16 power_mode;
> unsigned int fclk_freq;
>
> - struct tasklet_struct cover_tasklet;
> + struct work_struct cover_work;
> struct timer_list cover_timer;
> unsigned cover_open;
>
> @@ -873,18 +874,18 @@ void omap_mmc_notify_cover_event(struct device *dev, int num, int is_closed)
> sysfs_notify(&slot->mmc->class_dev.kobj, NULL, "cover_switch");
> }
>
> - tasklet_hi_schedule(&slot->cover_tasklet);
> + queue_work(system_bh_highpri_wq, &slot->cover_work);
> }
>
> static void mmc_omap_cover_timer(struct timer_list *t)
> {
> struct mmc_omap_slot *slot = from_timer(slot, t, cover_timer);
> - tasklet_schedule(&slot->cover_tasklet);
> + queue_work(system_bh_wq, &slot->cover_work);
> }
>
> -static void mmc_omap_cover_handler(struct tasklet_struct *t)
> +static void mmc_omap_cover_handler(struct work_struct *t)
> {
> - struct mmc_omap_slot *slot = from_tasklet(slot, t, cover_tasklet);
> + struct mmc_omap_slot *slot = from_work(slot, t, cover_work);
> int cover_open = mmc_omap_cover_is_open(slot);
>
> mmc_detect_change(slot->mmc, 0);
> @@ -1299,7 +1300,7 @@ static int mmc_omap_new_slot(struct mmc_omap_host *host, int id)
>
> if (slot->pdata->get_cover_state != NULL) {
> timer_setup(&slot->cover_timer, mmc_omap_cover_timer, 0);
> - tasklet_setup(&slot->cover_tasklet, mmc_omap_cover_handler);
> + INIT_WORK(&slot->cover_work, mmc_omap_cover_handler);
> }
>
> r = mmc_add_host(mmc);
> @@ -1318,7 +1319,7 @@ static int mmc_omap_new_slot(struct mmc_omap_host *host, int id)
> &dev_attr_cover_switch);
> if (r < 0)
> goto err_remove_slot_name;
> - tasklet_schedule(&slot->cover_tasklet);
> + queue_work(system_bh_wq, &slot->cover_work);
> }
>
> return 0;
> @@ -1341,7 +1342,7 @@ static void mmc_omap_remove_slot(struct mmc_omap_slot *slot)
> if (slot->pdata->get_cover_state != NULL)
> device_remove_file(&mmc->class_dev, &dev_attr_cover_switch);
>
> - tasklet_kill(&slot->cover_tasklet);
> + cancel_work_sync(&slot->cover_work);
> del_timer_sync(&slot->cover_timer);
> flush_workqueue(slot->host->mmc_omap_wq);
>
> diff --git a/drivers/mmc/host/renesas_sdhi.h b/drivers/mmc/host/renesas_sdhi.h
> index 586f94d4dbfd..4fd2bfcacd76 100644
> --- a/drivers/mmc/host/renesas_sdhi.h
> +++ b/drivers/mmc/host/renesas_sdhi.h
> @@ -11,6 +11,7 @@
>
> #include <linux/dmaengine.h>
> #include <linux/platform_device.h>
> +#include <linux/workqueue.h>
> #include "tmio_mmc.h"
>
> struct renesas_sdhi_scc {
> @@ -67,7 +68,7 @@ struct renesas_sdhi_dma {
> dma_filter_fn filter;
> void (*enable)(struct tmio_mmc_host *host, bool enable);
> struct completion dma_dataend;
> - struct tasklet_struct dma_complete;
> + struct work_struct dma_complete;
> };
>
> struct renesas_sdhi {
> diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
> index 53d34c3eddce..f175f8898516 100644
> --- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
> +++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
> @@ -336,7 +336,7 @@ static bool renesas_sdhi_internal_dmac_dma_irq(struct tmio_mmc_host *host)
> writel(status ^ dma_irqs, host->ctl + DM_CM_INFO1);
> set_bit(SDHI_DMA_END_FLAG_DMA, &dma_priv->end_flags);
> if (test_bit(SDHI_DMA_END_FLAG_ACCESS, &dma_priv->end_flags))
> - tasklet_schedule(&dma_priv->dma_complete);
> + queue_work(system_bh_wq, &dma_priv->dma_complete);
> }
>
> return status & dma_irqs;
> @@ -351,7 +351,7 @@ renesas_sdhi_internal_dmac_dataend_dma(struct tmio_mmc_host *host)
> set_bit(SDHI_DMA_END_FLAG_ACCESS, &dma_priv->end_flags);
> if (test_bit(SDHI_DMA_END_FLAG_DMA, &dma_priv->end_flags) ||
> host->data->error)
> - tasklet_schedule(&dma_priv->dma_complete);
> + queue_work(system_bh_wq, &dma_priv->dma_complete);
> }
>
> /*
> @@ -439,9 +439,9 @@ renesas_sdhi_internal_dmac_start_dma(struct tmio_mmc_host *host,
> renesas_sdhi_internal_dmac_enable_dma(host, false);
> }
>
> -static void renesas_sdhi_internal_dmac_issue_tasklet_fn(unsigned long arg)
> +static void renesas_sdhi_internal_dmac_issue_work_fn(struct work_struct *t)
> {
> - struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;
> + struct tmio_mmc_host *host = from_work(host, t, dma_issue);
> struct renesas_sdhi *priv = host_to_priv(host);
>
> tmio_mmc_enable_mmc_irqs(host, TMIO_STAT_DATAEND);
> @@ -453,7 +453,7 @@ static void renesas_sdhi_internal_dmac_issue_tasklet_fn(unsigned long arg)
> /* on CMD errors, simulate DMA end immediately */
> set_bit(SDHI_DMA_END_FLAG_DMA, &priv->dma_priv.end_flags);
> if (test_bit(SDHI_DMA_END_FLAG_ACCESS, &priv->dma_priv.end_flags))
> - tasklet_schedule(&priv->dma_priv.dma_complete);
> + queue_work(system_bh_wq, &priv->dma_priv.dma_complete);
> }
> }
>
> @@ -483,9 +483,9 @@ static bool renesas_sdhi_internal_dmac_complete(struct tmio_mmc_host *host)
> return true;
> }
>
> -static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg)
> +static void renesas_sdhi_internal_dmac_complete_work_fn(struct work_struct *t)
> {
> - struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;
> + struct tmio_mmc_host *host = from_work(host, t, dam_complete);
That doesn't compile, even if I fix the typo to dma_complete.
I have something that does compile, but don't have the platform to test it.
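
One way this could compile (an untested sketch; priv->host is an assumed
back-pointer that renesas_sdhi_internal_dmac_request_dma() would need to
set, it is not part of the posted patch):

static void renesas_sdhi_internal_dmac_complete_work_fn(struct work_struct *t)
{
	struct renesas_sdhi_dma *dma_priv = from_work(dma_priv, t, dma_complete);
	struct renesas_sdhi *priv = container_of(dma_priv, struct renesas_sdhi,
						 dma_priv);
	struct tmio_mmc_host *host = priv->host;	/* assumed back-pointer */

	spin_lock_irq(&host->lock);
	if (!renesas_sdhi_internal_dmac_complete(host))
		goto out;
	/* ... remainder of the original completion handling ... */
out:
	spin_unlock_irq(&host->lock);
}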

>
> spin_lock_irq(&host->lock);
> if (!renesas_sdhi_internal_dmac_complete(host))
> @@ -543,12 +543,10 @@ renesas_sdhi_internal_dmac_request_dma(struct tmio_mmc_host *host,
> /* Each value is set to non-zero to assume "enabling" each DMA */
> host->chan_rx = host->chan_tx = (void *)0xdeadbeaf;
>
> - tasklet_init(&priv->dma_priv.dma_complete,
> - renesas_sdhi_internal_dmac_complete_tasklet_fn,
> - (unsigned long)host);
> - tasklet_init(&host->dma_issue,
> - renesas_sdhi_internal_dmac_issue_tasklet_fn,
> - (unsigned long)host);
> + INIT_WORK(&priv->dma_priv.dma_complete,
> + renesas_sdhi_internal_dmac_complete_work_fn);
> + INIT_WORK(&host->dma_issue,
> + renesas_sdhi_internal_dmac_issue_work_fn);
>
> /* Add pre_req and post_req */
> host->ops.pre_req = renesas_sdhi_internal_dmac_pre_req;
> diff --git a/drivers/mmc/host/renesas_sdhi_sys_dmac.c b/drivers/mmc/host/renesas_sdhi_sys_dmac.c
> index 9cf7f9feab72..793595ad6d02 100644
> --- a/drivers/mmc/host/renesas_sdhi_sys_dmac.c
> +++ b/drivers/mmc/host/renesas_sdhi_sys_dmac.c
> @@ -312,9 +312,9 @@ static void renesas_sdhi_sys_dmac_start_dma(struct tmio_mmc_host *host,
> }
> }
>
> -static void renesas_sdhi_sys_dmac_issue_tasklet_fn(unsigned long priv)
> +static void renesas_sdhi_sys_dmac_issue_work_fn(struct work_struct *t)
> {
> - struct tmio_mmc_host *host = (struct tmio_mmc_host *)priv;
> + struct tmio_mmc_host *host = from_work(host, t, dma_issue);
> struct dma_chan *chan = NULL;
>
> spin_lock_irq(&host->lock);
> @@ -401,9 +401,8 @@ static void renesas_sdhi_sys_dmac_request_dma(struct tmio_mmc_host *host,
> goto ebouncebuf;
>
> init_completion(&priv->dma_priv.dma_dataend);
> - tasklet_init(&host->dma_issue,
> - renesas_sdhi_sys_dmac_issue_tasklet_fn,
> - (unsigned long)host);
> + INIT_WORK(&host->dma_issue,
> + renesas_sdhi_sys_dmac_issue_work_fn);
> }
>
> renesas_sdhi_sys_dmac_enable_dma(host, true);
> diff --git a/drivers/mmc/host/sdhci-bcm-kona.c b/drivers/mmc/host/sdhci-bcm-kona.c
> index cb9152c6a65d..974f205d479b 100644
> --- a/drivers/mmc/host/sdhci-bcm-kona.c
> +++ b/drivers/mmc/host/sdhci-bcm-kona.c
> @@ -107,7 +107,7 @@ static void sdhci_bcm_kona_sd_init(struct sdhci_host *host)
> * Software emulation of the SD card insertion/removal. Set insert=1 for insert
> * and insert=0 for removal. The card detection is done by GPIO. For Broadcom
> * IP to function properly the bit 0 of CORESTAT register needs to be set/reset
> - * to generate the CD IRQ handled in sdhci.c which schedules card_tasklet.
> + * to generate the CD IRQ handled in sdhci.c which schedules card_work.
> */
> static int sdhci_bcm_kona_sd_card_emulate(struct sdhci_host *host, int insert)
> {
> diff --git a/drivers/mmc/host/tifm_sd.c b/drivers/mmc/host/tifm_sd.c
> index b5a2f2f25ad9..c6285c577db0 100644
> --- a/drivers/mmc/host/tifm_sd.c
> +++ b/drivers/mmc/host/tifm_sd.c
> @@ -13,6 +13,7 @@
> #include <linux/highmem.h>
> #include <linux/scatterlist.h>
> #include <linux/module.h>
> +#include <linux/workqueue.h>
> #include <asm/io.h>
>
> #define DRIVER_NAME "tifm_sd"
> @@ -97,7 +98,7 @@ struct tifm_sd {
> unsigned int clk_div;
> unsigned long timeout_jiffies;
>
> - struct tasklet_struct finish_tasklet;
> + struct work_struct finish_work;
> struct timer_list timer;
> struct mmc_request *req;
>
> @@ -463,7 +464,7 @@ static void tifm_sd_check_status(struct tifm_sd *host)
> }
> }
> finish_request:
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> }
>
> /* Called from interrupt handler */
> @@ -723,9 +724,9 @@ static void tifm_sd_request(struct mmc_host *mmc, struct mmc_request *mrq)
> mmc_request_done(mmc, mrq);
> }
>
> -static void tifm_sd_end_cmd(struct tasklet_struct *t)
> +static void tifm_sd_end_cmd(struct work_struct *t)
> {
> - struct tifm_sd *host = from_tasklet(host, t, finish_tasklet);
> + struct tifm_sd *host = from_work(host, t, finish_work);
> struct tifm_dev *sock = host->dev;
> struct mmc_host *mmc = tifm_get_drvdata(sock);
> struct mmc_request *mrq;
> @@ -960,7 +961,7 @@ static int tifm_sd_probe(struct tifm_dev *sock)
> */
> mmc->max_busy_timeout = TIFM_MMCSD_REQ_TIMEOUT_MS;
>
> - tasklet_setup(&host->finish_tasklet, tifm_sd_end_cmd);
> + INIT_WORK(&host->finish_work, tifm_sd_end_cmd);
> timer_setup(&host->timer, tifm_sd_abort, 0);
>
> mmc->ops = &tifm_sd_ops;
> @@ -999,7 +1000,7 @@ static void tifm_sd_remove(struct tifm_dev *sock)
> writel(0, sock->addr + SOCK_MMCSD_INT_ENABLE);
> spin_unlock_irqrestore(&sock->lock, flags);
>
> - tasklet_kill(&host->finish_tasklet);
> + cancel_work_sync(&host->finish_work);
>
> spin_lock_irqsave(&sock->lock, flags);
> if (host->req) {
> @@ -1009,7 +1010,7 @@ static void tifm_sd_remove(struct tifm_dev *sock)
> host->req->cmd->error = -ENOMEDIUM;
> if (host->req->stop)
> host->req->stop->error = -ENOMEDIUM;
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> }
> spin_unlock_irqrestore(&sock->lock, flags);
> mmc_remove_host(mmc);
> diff --git a/drivers/mmc/host/tmio_mmc.h b/drivers/mmc/host/tmio_mmc.h
> index de56e6534aea..bee13acaa80f 100644
> --- a/drivers/mmc/host/tmio_mmc.h
> +++ b/drivers/mmc/host/tmio_mmc.h
> @@ -21,6 +21,7 @@
> #include <linux/scatterlist.h>
> #include <linux/spinlock.h>
> #include <linux/interrupt.h>
> +#include <linux/workqueue.h>
>
> #define CTL_SD_CMD 0x00
> #define CTL_ARG_REG 0x04
> @@ -156,7 +157,7 @@ struct tmio_mmc_host {
> bool dma_on;
> struct dma_chan *chan_rx;
> struct dma_chan *chan_tx;
> - struct tasklet_struct dma_issue;
> + struct work_struct dma_issue;
> struct scatterlist bounce_sg;
> u8 *bounce_buf;
>
> diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c
> index 93e912afd3ae..51bd2365795b 100644
> --- a/drivers/mmc/host/tmio_mmc_core.c
> +++ b/drivers/mmc/host/tmio_mmc_core.c
> @@ -608,7 +608,7 @@ static void tmio_mmc_cmd_irq(struct tmio_mmc_host *host, unsigned int stat)
> } else {
> tmio_mmc_disable_mmc_irqs(host,
> TMIO_MASK_READOP);
> - tasklet_schedule(&host->dma_issue);
> + queue_work(system_bh_wq, &host->dma_issue);
> }
> } else {
> if (!host->dma_on) {
> @@ -616,7 +616,7 @@ static void tmio_mmc_cmd_irq(struct tmio_mmc_host *host, unsigned int stat)
> } else {
> tmio_mmc_disable_mmc_irqs(host,
> TMIO_MASK_WRITEOP);
> - tasklet_schedule(&host->dma_issue);
> + queue_work(system_bh_wq, &host->dma_issue);
> }
> }
> } else {
> diff --git a/drivers/mmc/host/uniphier-sd.c b/drivers/mmc/host/uniphier-sd.c
> index 1404989e6151..d1964111c393 100644
> --- a/drivers/mmc/host/uniphier-sd.c
> +++ b/drivers/mmc/host/uniphier-sd.c
> @@ -17,6 +17,7 @@
> #include <linux/platform_device.h>
> #include <linux/regmap.h>
> #include <linux/reset.h>
> +#include <linux/workqueue.h>
>
> #include "tmio_mmc.h"
>
> @@ -90,9 +91,9 @@ static void uniphier_sd_dma_endisable(struct tmio_mmc_host *host, int enable)
> }
>
> /* external DMA engine */
> -static void uniphier_sd_external_dma_issue(struct tasklet_struct *t)
> +static void uniphier_sd_external_dma_issue(struct work_struct *t)
> {
> - struct tmio_mmc_host *host = from_tasklet(host, t, dma_issue);
> + struct tmio_mmc_host *host = from_work(host, t, dma_issue);
> struct uniphier_sd_priv *priv = uniphier_sd_priv(host);
>
> uniphier_sd_dma_endisable(host, 1);
> @@ -199,7 +200,7 @@ static void uniphier_sd_external_dma_request(struct tmio_mmc_host *host,
> host->chan_rx = chan;
> host->chan_tx = chan;
>
> - tasklet_setup(&host->dma_issue, uniphier_sd_external_dma_issue);
> + INIT_WORK(&host->dma_issue, uniphier_sd_external_dma_issue);
> }
>
> static void uniphier_sd_external_dma_release(struct tmio_mmc_host *host)
> @@ -236,9 +237,9 @@ static const struct tmio_mmc_dma_ops uniphier_sd_external_dma_ops = {
> .dataend = uniphier_sd_external_dma_dataend,
> };
>
> -static void uniphier_sd_internal_dma_issue(struct tasklet_struct *t)
> +static void uniphier_sd_internal_dma_issue(struct work_struct *t)
> {
> - struct tmio_mmc_host *host = from_tasklet(host, t, dma_issue);
> + struct tmio_mmc_host *host = from_work(host, t, dma_issue);
> unsigned long flags;
>
> spin_lock_irqsave(&host->lock, flags);
> @@ -317,7 +318,7 @@ static void uniphier_sd_internal_dma_request(struct tmio_mmc_host *host,
>
> host->chan_tx = (void *)0xdeadbeaf;
>
> - tasklet_setup(&host->dma_issue, uniphier_sd_internal_dma_issue);
> + INIT_WORK(&host->dma_issue, uniphier_sd_internal_dma_issue);
> }
>
> static void uniphier_sd_internal_dma_release(struct tmio_mmc_host *host)
> diff --git a/drivers/mmc/host/via-sdmmc.c b/drivers/mmc/host/via-sdmmc.c
> index ba6044b16e07..2777b773086b 100644
> --- a/drivers/mmc/host/via-sdmmc.c
> +++ b/drivers/mmc/host/via-sdmmc.c
> @@ -12,6 +12,7 @@
> #include <linux/interrupt.h>
>
> #include <linux/mmc/host.h>
> +#include <linux/workqueue.h>
>
> #define DRV_NAME "via_sdmmc"
>
> @@ -307,7 +308,7 @@ struct via_crdr_mmc_host {
> struct sdhcreg pm_sdhc_reg;
>
> struct work_struct carddet_work;
> - struct tasklet_struct finish_tasklet;
> + struct work_struct finish_work;
>
> struct timer_list timer;
> spinlock_t lock;
> @@ -643,7 +644,7 @@ static void via_sdc_finish_data(struct via_crdr_mmc_host *host)
> if (data->stop)
> via_sdc_send_command(host, data->stop);
> else
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> }
>
> static void via_sdc_finish_command(struct via_crdr_mmc_host *host)
> @@ -653,7 +654,7 @@ static void via_sdc_finish_command(struct via_crdr_mmc_host *host)
> host->cmd->error = 0;
>
> if (!host->cmd->data)
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
>
> host->cmd = NULL;
> }
> @@ -682,7 +683,7 @@ static void via_sdc_request(struct mmc_host *mmc, struct mmc_request *mrq)
> status = readw(host->sdhc_mmiobase + VIA_CRDR_SDSTATUS);
> if (!(status & VIA_CRDR_SDSTS_SLOTG) || host->reject) {
> host->mrq->cmd->error = -ENOMEDIUM;
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> } else {
> via_sdc_send_command(host, mrq->cmd);
> }
> @@ -848,7 +849,7 @@ static void via_sdc_cmd_isr(struct via_crdr_mmc_host *host, u16 intmask)
> host->cmd->error = -EILSEQ;
>
> if (host->cmd->error)
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> else if (intmask & VIA_CRDR_SDSTS_CRD)
> via_sdc_finish_command(host);
> }
> @@ -955,16 +956,16 @@ static void via_sdc_timeout(struct timer_list *t)
> sdhost->cmd->error = -ETIMEDOUT;
> else
> sdhost->mrq->cmd->error = -ETIMEDOUT;
> - tasklet_schedule(&sdhost->finish_tasklet);
> + queue_work(system_bh_wq, &sdhost->finish_work);
> }
> }
>
> spin_unlock_irqrestore(&sdhost->lock, flags);
> }
>
> -static void via_sdc_tasklet_finish(struct tasklet_struct *t)
> +static void via_sdc_work_finish(struct work_struct *t)
> {
> - struct via_crdr_mmc_host *host = from_tasklet(host, t, finish_tasklet);
> + struct via_crdr_mmc_host *host = from_work(host, t, finish_work);
> unsigned long flags;
> struct mmc_request *mrq;
>
> @@ -1005,7 +1006,7 @@ static void via_sdc_card_detect(struct work_struct *work)
> pr_err("%s: Card removed during transfer!\n",
> mmc_hostname(host->mmc));
> host->mrq->cmd->error = -ENOMEDIUM;
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> }
>
> spin_unlock_irqrestore(&host->lock, flags);
> @@ -1051,7 +1052,7 @@ static void via_init_mmc_host(struct via_crdr_mmc_host *host)
>
> INIT_WORK(&host->carddet_work, via_sdc_card_detect);
>
> - tasklet_setup(&host->finish_tasklet, via_sdc_tasklet_finish);
> + INIT_WORK(&host->finish_work, via_sdc_work_finish);
>
> addrbase = host->sdhc_mmiobase;
> writel(0x0, addrbase + VIA_CRDR_SDINTMASK);
> @@ -1193,7 +1194,7 @@ static void via_sd_remove(struct pci_dev *pcidev)
> sdhost->mrq->cmd->error = -ENOMEDIUM;
> if (sdhost->mrq->stop)
> sdhost->mrq->stop->error = -ENOMEDIUM;
> - tasklet_schedule(&sdhost->finish_tasklet);
> + queue_work(system_bh_wq, &sdhost->finish_work);
> }
> spin_unlock_irqrestore(&sdhost->lock, flags);
>
> @@ -1203,7 +1204,7 @@ static void via_sd_remove(struct pci_dev *pcidev)
>
> del_timer_sync(&sdhost->timer);
>
> - tasklet_kill(&sdhost->finish_tasklet);
> + cancel_work_sync(&sdhost->finish_work);
>
> /* switch off power */
> gatt = readb(sdhost->pcictrl_mmiobase + VIA_CRDR_PCICLKGATT);
> diff --git a/drivers/mmc/host/wbsd.c b/drivers/mmc/host/wbsd.c
> index f0562f712d98..984e380abc71 100644
> --- a/drivers/mmc/host/wbsd.c
> +++ b/drivers/mmc/host/wbsd.c
> @@ -32,6 +32,7 @@
> #include <linux/mmc/sd.h>
> #include <linux/scatterlist.h>
> #include <linux/slab.h>
> +#include <linux/workqueue.h>
>
> #include <asm/io.h>
> #include <asm/dma.h>
> @@ -459,7 +460,7 @@ static void wbsd_empty_fifo(struct wbsd_host *host)
> * FIFO threshold interrupts properly.
> */
> if ((data->blocks * data->blksz - data->bytes_xfered) < 16)
> - tasklet_schedule(&host->fifo_tasklet);
> + queue_work(system_bh_wq, &host->fifo_work);
> }
>
> static void wbsd_fill_fifo(struct wbsd_host *host)
> @@ -524,7 +525,7 @@ static void wbsd_fill_fifo(struct wbsd_host *host)
> * 'FIFO empty' under certain conditions. So we
> * need to be a bit more pro-active.
> */
> - tasklet_schedule(&host->fifo_tasklet);
> + queue_work(system_bh_wq, &host->fifo_work);
> }
>
> static void wbsd_prepare_data(struct wbsd_host *host, struct mmc_data *data)
> @@ -746,7 +747,7 @@ static void wbsd_request(struct mmc_host *mmc, struct mmc_request *mrq)
> struct mmc_command *cmd;
>
> /*
> - * Disable tasklets to avoid a deadlock.
> + * Disable works to avoid a deadlock.
> */
> spin_lock_bh(&host->lock);
>
> @@ -821,7 +822,7 @@ static void wbsd_request(struct mmc_host *mmc, struct mmc_request *mrq)
> * Dirty fix for hardware bug.
> */
> if (host->dma == -1)
> - tasklet_schedule(&host->fifo_tasklet);
> + queue_work(system_bh_wq, &host->fifo_work);
>
> spin_unlock_bh(&host->lock);
>
> @@ -961,13 +962,13 @@ static void wbsd_reset_ignore(struct timer_list *t)
> * Card status might have changed during the
> * blackout.
> */
> - tasklet_schedule(&host->card_tasklet);
> + queue_work(system_bh_wq, &host->card_work);
>
> spin_unlock_bh(&host->lock);
> }
>
> /*
> - * Tasklets
> + * Works
> */
>
> static inline struct mmc_data *wbsd_get_data(struct wbsd_host *host)
> @@ -987,9 +988,9 @@ static inline struct mmc_data *wbsd_get_data(struct wbsd_host *host)
> return host->mrq->cmd->data;
> }
>
> -static void wbsd_tasklet_card(struct tasklet_struct *t)
> +static void wbsd_work_card(struct work_struct *t)
> {
> - struct wbsd_host *host = from_tasklet(host, t, card_tasklet);
> + struct wbsd_host *host = from_work(host, t, card_work);
> u8 csr;
> int delay = -1;
>
> @@ -1020,7 +1021,7 @@ static void wbsd_tasklet_card(struct tasklet_struct *t)
> wbsd_reset(host);
>
> host->mrq->cmd->error = -ENOMEDIUM;
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> }
>
> delay = 0;
> @@ -1036,9 +1037,9 @@ static void wbsd_tasklet_card(struct tasklet_struct *t)
> mmc_detect_change(host->mmc, msecs_to_jiffies(delay));
> }
>
> -static void wbsd_tasklet_fifo(struct tasklet_struct *t)
> +static void wbsd_work_fifo(struct work_struct *t)
> {
> - struct wbsd_host *host = from_tasklet(host, t, fifo_tasklet);
> + struct wbsd_host *host = from_work(host, t, fifo_work);
> struct mmc_data *data;
>
> spin_lock(&host->lock);
> @@ -1060,16 +1061,16 @@ static void wbsd_tasklet_fifo(struct tasklet_struct *t)
> */
> if (host->num_sg == 0) {
> wbsd_write_index(host, WBSD_IDX_FIFOEN, 0);
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> }
>
> end:
> spin_unlock(&host->lock);
> }
>
> -static void wbsd_tasklet_crc(struct tasklet_struct *t)
> +static void wbsd_work_crc(struct work_struct *t)
> {
> - struct wbsd_host *host = from_tasklet(host, t, crc_tasklet);
> + struct wbsd_host *host = from_work(host, t, crc_work);
> struct mmc_data *data;
>
> spin_lock(&host->lock);
> @@ -1085,15 +1086,15 @@ static void wbsd_tasklet_crc(struct tasklet_struct *t)
>
> data->error = -EILSEQ;
>
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
>
> end:
> spin_unlock(&host->lock);
> }
>
> -static void wbsd_tasklet_timeout(struct tasklet_struct *t)
> +static void wbsd_work_timeout(struct work_struct *t)
> {
> - struct wbsd_host *host = from_tasklet(host, t, timeout_tasklet);
> + struct wbsd_host *host = from_work(host, t, timeout_work);
> struct mmc_data *data;
>
> spin_lock(&host->lock);
> @@ -1109,15 +1110,15 @@ static void wbsd_tasklet_timeout(struct tasklet_struct *t)
>
> data->error = -ETIMEDOUT;
>
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
>
> end:
> spin_unlock(&host->lock);
> }
>
> -static void wbsd_tasklet_finish(struct tasklet_struct *t)
> +static void wbsd_work_finish(struct work_struct *t)
> {
> - struct wbsd_host *host = from_tasklet(host, t, finish_tasklet);
> + struct wbsd_host *host = from_work(host, t, finish_work);
> struct mmc_data *data;
>
> spin_lock(&host->lock);
> @@ -1156,18 +1157,18 @@ static irqreturn_t wbsd_irq(int irq, void *dev_id)
> host->isr |= isr;
>
> /*
> - * Schedule tasklets as needed.
> + * Schedule works as needed.
> */
> if (isr & WBSD_INT_CARD)
> - tasklet_schedule(&host->card_tasklet);
> + queue_work(system_bh_wq, &host->card_work);
> if (isr & WBSD_INT_FIFO_THRE)
> - tasklet_schedule(&host->fifo_tasklet);
> + queue_work(system_bh_wq, &host->fifo_work);
> if (isr & WBSD_INT_CRC)
> - tasklet_hi_schedule(&host->crc_tasklet);
> + queue_work(system_bh_highpri_wq, &host->crc_work);
> if (isr & WBSD_INT_TIMEOUT)
> - tasklet_hi_schedule(&host->timeout_tasklet);
> + queue_work(system_bh_highpri_wq, &host->timeout_work);
> if (isr & WBSD_INT_TC)
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
>
> return IRQ_HANDLED;
> }
> @@ -1443,13 +1444,13 @@ static int wbsd_request_irq(struct wbsd_host *host, int irq)
> int ret;
>
> /*
> - * Set up tasklets. Must be done before requesting interrupt.
> + * Set up works. Must be done before requesting interrupt.
> */
> - tasklet_setup(&host->card_tasklet, wbsd_tasklet_card);
> - tasklet_setup(&host->fifo_tasklet, wbsd_tasklet_fifo);
> - tasklet_setup(&host->crc_tasklet, wbsd_tasklet_crc);
> - tasklet_setup(&host->timeout_tasklet, wbsd_tasklet_timeout);
> - tasklet_setup(&host->finish_tasklet, wbsd_tasklet_finish);
> + INIT_WORK(&host->card_work, wbsd_work_card);
> + INIT_WORK(&host->fifo_work, wbsd_work_fifo);
> + INIT_WORK(&host->crc_work, wbsd_work_crc);
> + INIT_WORK(&host->timeout_work, wbsd_work_timeout);
> + INIT_WORK(&host->finish_work, wbsd_work_finish);
>
> /*
> * Allocate interrupt.
> @@ -1472,11 +1473,11 @@ static void wbsd_release_irq(struct wbsd_host *host)
>
> host->irq = 0;
>
> - tasklet_kill(&host->card_tasklet);
> - tasklet_kill(&host->fifo_tasklet);
> - tasklet_kill(&host->crc_tasklet);
> - tasklet_kill(&host->timeout_tasklet);
> - tasklet_kill(&host->finish_tasklet);
> + cancel_work_sync(&host->card_work);
> + cancel_work_sync(&host->fifo_work);
> + cancel_work_sync(&host->crc_work);
> + cancel_work_sync(&host->timeout_work);
> + cancel_work_sync(&host->finish_work);
> }
>
> /*
> diff --git a/drivers/mmc/host/wbsd.h b/drivers/mmc/host/wbsd.h
> index be30b4d8ce4c..942a64a724e4 100644
> --- a/drivers/mmc/host/wbsd.h
> +++ b/drivers/mmc/host/wbsd.h
> @@ -171,11 +171,11 @@ struct wbsd_host
> int irq; /* Interrupt */
> int dma; /* DMA channel */
>
> - struct tasklet_struct card_tasklet; /* Tasklet structures */
> - struct tasklet_struct fifo_tasklet;
> - struct tasklet_struct crc_tasklet;
> - struct tasklet_struct timeout_tasklet;
> - struct tasklet_struct finish_tasklet;
> + struct work_struct card_work; /* Work structures */
> + struct work_struct fifo_work;
> + struct work_struct crc_work;
> + struct work_struct timeout_work;
> + struct work_struct finish_work;
>
> struct timer_list ignore_timer; /* Ignore detection timer */
> };


2024-03-28 12:54:41

by Ulf Hansson

[permalink] [raw]
Subject: Re: [PATCH 9/9] mmc: Convert from tasklet to BH workqueue

On Wed, 27 Mar 2024 at 17:03, Allen Pais <[email protected]> wrote:
>
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/infiniband/* from tasklet to BH workqueue.
>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>
> Signed-off-by: Allen Pais <[email protected]>

Overall, this makes sense to me. However, just to make things clear,
an mmc host driver shouldn't really need the tasklet. As I understand
it, the few remaining users are there for legacy reasons.

At this point we have suggested that drivers switch to using threaded
irq handlers (and regular workqueues too, if needed). That said,
what's the benefit of using the BH workqueue?
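
For reference, the threaded-irq pattern being suggested looks roughly
like this; every foo_* name below is hypothetical driver-specific code:

static irqreturn_t foo_mmc_irq(int irq, void *dev_id)
{
	struct foo_mmc_host *host = dev_id;

	/* read and ack the hardware status in hard-IRQ context */
	host->pending = foo_mmc_read_and_ack_status(host);
	return host->pending ? IRQ_WAKE_THREAD : IRQ_NONE;
}

static irqreturn_t foo_mmc_irq_thread(int irq, void *dev_id)
{
	struct foo_mmc_host *host = dev_id;

	foo_mmc_handle_events(host, host->pending);	/* may sleep */
	return IRQ_HANDLED;
}

/* at probe time */
ret = devm_request_threaded_irq(&pdev->dev, irq, foo_mmc_irq,
				foo_mmc_irq_thread, IRQF_ONESHOT,
				dev_name(&pdev->dev), host);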

Kind regards
Uffe

> ---
> drivers/mmc/host/atmel-mci.c | 35 ++++-----
> drivers/mmc/host/au1xmmc.c | 37 ++++-----
> drivers/mmc/host/cb710-mmc.c | 15 ++--
> drivers/mmc/host/cb710-mmc.h | 3 +-
> drivers/mmc/host/dw_mmc.c | 25 ++++---
> drivers/mmc/host/dw_mmc.h | 9 ++-
> drivers/mmc/host/omap.c | 17 +++--
> drivers/mmc/host/renesas_sdhi.h | 3 +-
> drivers/mmc/host/renesas_sdhi_internal_dmac.c | 24 +++---
> drivers/mmc/host/renesas_sdhi_sys_dmac.c | 9 +--
> drivers/mmc/host/sdhci-bcm-kona.c | 2 +-
> drivers/mmc/host/tifm_sd.c | 15 ++--
> drivers/mmc/host/tmio_mmc.h | 3 +-
> drivers/mmc/host/tmio_mmc_core.c | 4 +-
> drivers/mmc/host/uniphier-sd.c | 13 ++--
> drivers/mmc/host/via-sdmmc.c | 25 ++++---
> drivers/mmc/host/wbsd.c | 75 ++++++++++---------
> drivers/mmc/host/wbsd.h | 10 +--
> 18 files changed, 167 insertions(+), 157 deletions(-)
>
> diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
> index dba826db739a..0a92a7fd020f 100644
> --- a/drivers/mmc/host/atmel-mci.c
> +++ b/drivers/mmc/host/atmel-mci.c
> @@ -33,6 +33,7 @@
> #include <linux/pm.h>
> #include <linux/pm_runtime.h>
> #include <linux/pinctrl/consumer.h>
> +#include <linux/workqueue.h>
>
> #include <asm/cacheflush.h>
> #include <asm/io.h>
> @@ -284,12 +285,12 @@ struct atmel_mci_dma {
> * EVENT_DATA_ERROR is pending.
> * @stop_cmdr: Value to be loaded into CMDR when the stop command is
> * to be sent.
> - * @tasklet: Tasklet running the request state machine.
> + * @work: Work running the request state machine.
> * @pending_events: Bitmask of events flagged by the interrupt handler
> - * to be processed by the tasklet.
> + * to be processed by the work.
> * @completed_events: Bitmask of events which the state machine has
> * processed.
> - * @state: Tasklet state.
> + * @state: Work state.
> * @queue: List of slots waiting for access to the controller.
> * @need_clock_update: Update the clock rate before the next request.
> * @need_reset: Reset controller before next request.
> @@ -363,7 +364,7 @@ struct atmel_mci {
> u32 data_status;
> u32 stop_cmdr;
>
> - struct tasklet_struct tasklet;
> + struct work_struct work;
> unsigned long pending_events;
> unsigned long completed_events;
> enum atmel_mci_state state;
> @@ -761,7 +762,7 @@ static void atmci_timeout_timer(struct timer_list *t)
> host->need_reset = 1;
> host->state = STATE_END_REQUEST;
> smp_wmb();
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> static inline unsigned int atmci_ns_to_clocks(struct atmel_mci *host,
> @@ -983,7 +984,7 @@ static void atmci_pdc_complete(struct atmel_mci *host)
>
> dev_dbg(&host->pdev->dev, "(%s) set pending xfer complete\n", __func__);
> atmci_set_pending(host, EVENT_XFER_COMPLETE);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> static void atmci_dma_cleanup(struct atmel_mci *host)
> @@ -997,7 +998,7 @@ static void atmci_dma_cleanup(struct atmel_mci *host)
> }
>
> /*
> - * This function is called by the DMA driver from tasklet context.
> + * This function is called by the DMA driver from work context.
> */
> static void atmci_dma_complete(void *arg)
> {
> @@ -1020,7 +1021,7 @@ static void atmci_dma_complete(void *arg)
> dev_dbg(&host->pdev->dev,
> "(%s) set pending xfer complete\n", __func__);
> atmci_set_pending(host, EVENT_XFER_COMPLETE);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
>
> /*
> * Regardless of what the documentation says, we have
> @@ -1033,7 +1034,7 @@ static void atmci_dma_complete(void *arg)
> * haven't seen all the potential error bits yet.
> *
> * The interrupt handler will schedule a different
> - * tasklet to finish things up when the data transfer
> + * work to finish things up when the data transfer
> * is completely done.
> *
> * We may not complete the mmc request here anyway
> @@ -1765,9 +1766,9 @@ static void atmci_detect_change(struct timer_list *t)
> }
> }
>
> -static void atmci_tasklet_func(struct tasklet_struct *t)
> +static void atmci_work_func(struct work_struct *t)
> {
> - struct atmel_mci *host = from_tasklet(host, t, tasklet);
> + struct atmel_mci *host = from_work(host, t, work);
> struct mmc_request *mrq = host->mrq;
> struct mmc_data *data = host->data;
> enum atmel_mci_state state = host->state;
> @@ -1779,7 +1780,7 @@ static void atmci_tasklet_func(struct tasklet_struct *t)
> state = host->state;
>
> dev_vdbg(&host->pdev->dev,
> - "tasklet: state %u pending/completed/mask %lx/%lx/%x\n",
> + "work: state %u pending/completed/mask %lx/%lx/%x\n",
> state, host->pending_events, host->completed_events,
> atmci_readl(host, ATMCI_IMR));
>
> @@ -2141,7 +2142,7 @@ static irqreturn_t atmci_interrupt(int irq, void *dev_id)
> dev_dbg(&host->pdev->dev, "set pending data error\n");
> smp_wmb();
> atmci_set_pending(host, EVENT_DATA_ERROR);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> if (pending & ATMCI_TXBUFE) {
> @@ -2210,7 +2211,7 @@ static irqreturn_t atmci_interrupt(int irq, void *dev_id)
> smp_wmb();
> dev_dbg(&host->pdev->dev, "set pending notbusy\n");
> atmci_set_pending(host, EVENT_NOTBUSY);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> if (pending & ATMCI_NOTBUSY) {
> @@ -2219,7 +2220,7 @@ static irqreturn_t atmci_interrupt(int irq, void *dev_id)
> smp_wmb();
> dev_dbg(&host->pdev->dev, "set pending notbusy\n");
> atmci_set_pending(host, EVENT_NOTBUSY);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> if (pending & ATMCI_RXRDY)
> @@ -2234,7 +2235,7 @@ static irqreturn_t atmci_interrupt(int irq, void *dev_id)
> smp_wmb();
> dev_dbg(&host->pdev->dev, "set pending cmd rdy\n");
> atmci_set_pending(host, EVENT_CMD_RDY);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> if (pending & (ATMCI_SDIOIRQA | ATMCI_SDIOIRQB))
> @@ -2530,7 +2531,7 @@ static int atmci_probe(struct platform_device *pdev)
>
> host->mapbase = regs->start;
>
> - tasklet_setup(&host->tasklet, atmci_tasklet_func);
> + INIT_WORK(&host->work, atmci_work_func);
>
> ret = request_irq(irq, atmci_interrupt, 0, dev_name(&pdev->dev), host);
> if (ret) {
> diff --git a/drivers/mmc/host/au1xmmc.c b/drivers/mmc/host/au1xmmc.c
> index b5a5c6a2fe8b..c86fa7d2ebb7 100644
> --- a/drivers/mmc/host/au1xmmc.c
> +++ b/drivers/mmc/host/au1xmmc.c
> @@ -42,6 +42,7 @@
> #include <linux/leds.h>
> #include <linux/mmc/host.h>
> #include <linux/slab.h>
> +#include <linux/workqueue.h>
>
> #include <asm/io.h>
> #include <asm/mach-au1x00/au1000.h>
> @@ -113,8 +114,8 @@ struct au1xmmc_host {
>
> int irq;
>
> - struct tasklet_struct finish_task;
> - struct tasklet_struct data_task;
> + struct work_struct finish_task;
> + struct work_struct data_task;
> struct au1xmmc_platform_data *platdata;
> struct platform_device *pdev;
> struct resource *ioarea;
> @@ -253,9 +254,9 @@ static void au1xmmc_finish_request(struct au1xmmc_host *host)
> mmc_request_done(host->mmc, mrq);
> }
>
> -static void au1xmmc_tasklet_finish(struct tasklet_struct *t)
> +static void au1xmmc_work_finish(struct work_struct *t)
> {
> - struct au1xmmc_host *host = from_tasklet(host, t, finish_task);
> + struct au1xmmc_host *host = from_work(host, t, finish_task);
> au1xmmc_finish_request(host);
> }
>
> @@ -363,9 +364,9 @@ static void au1xmmc_data_complete(struct au1xmmc_host *host, u32 status)
> au1xmmc_finish_request(host);
> }
>
> -static void au1xmmc_tasklet_data(struct tasklet_struct *t)
> +static void au1xmmc_work_data(struct work_struct *t)
> {
> - struct au1xmmc_host *host = from_tasklet(host, t, data_task);
> + struct au1xmmc_host *host = from_work(host, t, data_task);
>
> u32 status = __raw_readl(HOST_STATUS(host));
> au1xmmc_data_complete(host, status);
> @@ -425,7 +426,7 @@ static void au1xmmc_send_pio(struct au1xmmc_host *host)
> if (host->flags & HOST_F_STOP)
> SEND_STOP(host);
>
> - tasklet_schedule(&host->data_task);
> + queue_work(system_bh_wq, &host->data_task);
> }
> }
>
> @@ -505,7 +506,7 @@ static void au1xmmc_receive_pio(struct au1xmmc_host *host)
> if (host->flags & HOST_F_STOP)
> SEND_STOP(host);
>
> - tasklet_schedule(&host->data_task);
> + queue_work(system_bh_wq, &host->data_task);
> }
> }
>
> @@ -561,7 +562,7 @@ static void au1xmmc_cmd_complete(struct au1xmmc_host *host, u32 status)
>
> if (!trans || cmd->error) {
> IRQ_OFF(host, SD_CONFIG_TH | SD_CONFIG_RA | SD_CONFIG_RF);
> - tasklet_schedule(&host->finish_task);
> + queue_work(system_bh_wq, &host->finish_task);
> return;
> }
>
> @@ -797,7 +798,7 @@ static irqreturn_t au1xmmc_irq(int irq, void *dev_id)
> IRQ_OFF(host, SD_CONFIG_NE | SD_CONFIG_TH);
>
> /* IRQ_OFF(host, SD_CONFIG_TH | SD_CONFIG_RA | SD_CONFIG_RF); */
> - tasklet_schedule(&host->finish_task);
> + queue_work(system_bh_wq, &host->finish_task);
> }
> #if 0
> else if (status & SD_STATUS_DD) {
> @@ -806,7 +807,7 @@ static irqreturn_t au1xmmc_irq(int irq, void *dev_id)
> au1xmmc_receive_pio(host);
> else {
> au1xmmc_data_complete(host, status);
> - /* tasklet_schedule(&host->data_task); */
> + /* queue_work(system_bh_wq, &host->data_task); */
> }
> }
> #endif
> @@ -854,7 +855,7 @@ static void au1xmmc_dbdma_callback(int irq, void *dev_id)
> if (host->flags & HOST_F_STOP)
> SEND_STOP(host);
>
> - tasklet_schedule(&host->data_task);
> + queue_work(system_bh_wq, &host->data_task);
> }
>
> static int au1xmmc_dbdma_init(struct au1xmmc_host *host)
> @@ -1039,9 +1040,9 @@ static int au1xmmc_probe(struct platform_device *pdev)
> if (host->platdata)
> mmc->caps &= ~(host->platdata->mask_host_caps);
>
> - tasklet_setup(&host->data_task, au1xmmc_tasklet_data);
> + INIT_WORK(&host->data_task, au1xmmc_work_data);
>
> - tasklet_setup(&host->finish_task, au1xmmc_tasklet_finish);
> + INIT_WORK(&host->finish_task, au1xmmc_work_finish);
>
> if (has_dbdma()) {
> ret = au1xmmc_dbdma_init(host);
> @@ -1091,8 +1092,8 @@ static int au1xmmc_probe(struct platform_device *pdev)
> if (host->flags & HOST_F_DBDMA)
> au1xmmc_dbdma_shutdown(host);
>
> - tasklet_kill(&host->data_task);
> - tasklet_kill(&host->finish_task);
> + cancel_work_sync(&host->data_task);
> + cancel_work_sync(&host->finish_task);
>
> if (host->platdata && host->platdata->cd_setup &&
> !(mmc->caps & MMC_CAP_NEEDS_POLL))
> @@ -1135,8 +1136,8 @@ static void au1xmmc_remove(struct platform_device *pdev)
> __raw_writel(0, HOST_CONFIG2(host));
> wmb(); /* drain writebuffer */
>
> - tasklet_kill(&host->data_task);
> - tasklet_kill(&host->finish_task);
> + cancel_work_sync(&host->data_task);
> + cancel_work_sync(&host->finish_task);
>
> if (host->flags & HOST_F_DBDMA)
> au1xmmc_dbdma_shutdown(host);
> diff --git a/drivers/mmc/host/cb710-mmc.c b/drivers/mmc/host/cb710-mmc.c
> index 0aec33b88bef..eebb6797e785 100644
> --- a/drivers/mmc/host/cb710-mmc.c
> +++ b/drivers/mmc/host/cb710-mmc.c
> @@ -8,6 +8,7 @@
> #include <linux/module.h>
> #include <linux/pci.h>
> #include <linux/delay.h>
> +#include <linux/workqueue.h>
> #include "cb710-mmc.h"
>
> #define CB710_MMC_REQ_TIMEOUT_MS 2000
> @@ -493,7 +494,7 @@ static void cb710_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
> if (!cb710_mmc_command(mmc, mrq->cmd) && mrq->stop)
> cb710_mmc_command(mmc, mrq->stop);
>
> - tasklet_schedule(&reader->finish_req_tasklet);
> + queue_work(system_bh_wq, &reader->finish_req_work);
> }
>
> static int cb710_mmc_powerup(struct cb710_slot *slot)
> @@ -646,10 +647,10 @@ static int cb710_mmc_irq_handler(struct cb710_slot *slot)
> return 1;
> }
>
> -static void cb710_mmc_finish_request_tasklet(struct tasklet_struct *t)
> +static void cb710_mmc_finish_request_work(struct work_struct *t)
> {
> - struct cb710_mmc_reader *reader = from_tasklet(reader, t,
> - finish_req_tasklet);
> + struct cb710_mmc_reader *reader = from_work(reader, t,
> + finish_req_work);
> struct mmc_request *mrq = reader->mrq;
>
> reader->mrq = NULL;
> @@ -718,8 +719,8 @@ static int cb710_mmc_init(struct platform_device *pdev)
>
> reader = mmc_priv(mmc);
>
> - tasklet_setup(&reader->finish_req_tasklet,
> - cb710_mmc_finish_request_tasklet);
> + INIT_WORK(&reader->finish_req_work,
> + cb710_mmc_finish_request_work);
> spin_lock_init(&reader->irq_lock);
> cb710_dump_regs(chip, CB710_DUMP_REGS_MMC);
>
> @@ -763,7 +764,7 @@ static void cb710_mmc_exit(struct platform_device *pdev)
> cb710_write_port_32(slot, CB710_MMC_CONFIG_PORT, 0);
> cb710_write_port_16(slot, CB710_MMC_CONFIGB_PORT, 0);
>
> - tasklet_kill(&reader->finish_req_tasklet);
> + cancel_work_sync(&reader->finish_req_work);
>
> mmc_free_host(mmc);
> }
> diff --git a/drivers/mmc/host/cb710-mmc.h b/drivers/mmc/host/cb710-mmc.h
> index 5e053077dbed..b35ab8736374 100644
> --- a/drivers/mmc/host/cb710-mmc.h
> +++ b/drivers/mmc/host/cb710-mmc.h
> @@ -8,10 +8,11 @@
> #define LINUX_CB710_MMC_H
>
> #include <linux/cb710.h>
> +#include <linux/workqueue.h>
>
> /* per-MMC-reader structure */
> struct cb710_mmc_reader {
> - struct tasklet_struct finish_req_tasklet;
> + struct work_struct finish_req_work;
> struct mmc_request *mrq;
> spinlock_t irq_lock;
> unsigned char last_power_mode;
> diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
> index 8e2d676b9239..ee6f892bc0d8 100644
> --- a/drivers/mmc/host/dw_mmc.c
> +++ b/drivers/mmc/host/dw_mmc.c
> @@ -36,6 +36,7 @@
> #include <linux/regulator/consumer.h>
> #include <linux/of.h>
> #include <linux/mmc/slot-gpio.h>
> +#include <linux/workqueue.h>
>
> #include "dw_mmc.h"
>
> @@ -493,7 +494,7 @@ static void dw_mci_dmac_complete_dma(void *arg)
> */
> if (data) {
> set_bit(EVENT_XFER_COMPLETE, &host->pending_events);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
> }
>
> @@ -1834,7 +1835,7 @@ static enum hrtimer_restart dw_mci_fault_timer(struct hrtimer *t)
> if (!host->data_status) {
> host->data_status = SDMMC_INT_DCRC;
> set_bit(EVENT_DATA_ERROR, &host->pending_events);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> spin_unlock_irqrestore(&host->irq_lock, flags);
> @@ -2056,9 +2057,9 @@ static bool dw_mci_clear_pending_data_complete(struct dw_mci *host)
> return true;
> }
>
> -static void dw_mci_tasklet_func(struct tasklet_struct *t)
> +static void dw_mci_work_func(struct work_struct *t)
> {
> - struct dw_mci *host = from_tasklet(host, t, tasklet);
> + struct dw_mci *host = from_work(host, t, work);
> struct mmc_data *data;
> struct mmc_command *cmd;
> struct mmc_request *mrq;
> @@ -2113,7 +2114,7 @@ static void dw_mci_tasklet_func(struct tasklet_struct *t)
> * will waste a bit of time (we already know
> * the command was bad), it can't cause any
> * errors since it's possible it would have
> - * taken place anyway if this tasklet got
> + * taken place anyway if this work got
> * delayed. Allowing the transfer to take place
> * avoids races and keeps things simple.
> */
> @@ -2706,7 +2707,7 @@ static void dw_mci_cmd_interrupt(struct dw_mci *host, u32 status)
> smp_wmb(); /* drain writebuffer */
>
> set_bit(EVENT_CMD_COMPLETE, &host->pending_events);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
>
> dw_mci_start_fault_timer(host);
> }
> @@ -2774,7 +2775,7 @@ static irqreturn_t dw_mci_interrupt(int irq, void *dev_id)
> set_bit(EVENT_DATA_COMPLETE,
> &host->pending_events);
>
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
>
> spin_unlock(&host->irq_lock);
> }
> @@ -2793,7 +2794,7 @@ static irqreturn_t dw_mci_interrupt(int irq, void *dev_id)
> dw_mci_read_data_pio(host, true);
> }
> set_bit(EVENT_DATA_COMPLETE, &host->pending_events);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
>
> spin_unlock(&host->irq_lock);
> }
> @@ -3098,7 +3099,7 @@ static void dw_mci_cmd11_timer(struct timer_list *t)
>
> host->cmd_status = SDMMC_INT_RTO;
> set_bit(EVENT_CMD_COMPLETE, &host->pending_events);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> }
>
> static void dw_mci_cto_timer(struct timer_list *t)
> @@ -3144,7 +3145,7 @@ static void dw_mci_cto_timer(struct timer_list *t)
> */
> host->cmd_status = SDMMC_INT_RTO;
> set_bit(EVENT_CMD_COMPLETE, &host->pending_events);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> break;
> default:
> dev_warn(host->dev, "Unexpected command timeout, state %d\n",
> @@ -3195,7 +3196,7 @@ static void dw_mci_dto_timer(struct timer_list *t)
> host->data_status = SDMMC_INT_DRTO;
> set_bit(EVENT_DATA_ERROR, &host->pending_events);
> set_bit(EVENT_DATA_COMPLETE, &host->pending_events);
> - tasklet_schedule(&host->tasklet);
> + queue_work(system_bh_wq, &host->work);
> break;
> default:
> dev_warn(host->dev, "Unexpected data timeout, state %d\n",
> @@ -3435,7 +3436,7 @@ int dw_mci_probe(struct dw_mci *host)
> else
> host->fifo_reg = host->regs + DATA_240A_OFFSET;
>
> - tasklet_setup(&host->tasklet, dw_mci_tasklet_func);
> + INIT_WORK(&host->work, dw_mci_work_func);
> ret = devm_request_irq(host->dev, host->irq, dw_mci_interrupt,
> host->irq_flags, "dw-mci", host);
> if (ret)
> diff --git a/drivers/mmc/host/dw_mmc.h b/drivers/mmc/host/dw_mmc.h
> index 4ed81f94f7ca..d17f398a0432 100644
> --- a/drivers/mmc/host/dw_mmc.h
> +++ b/drivers/mmc/host/dw_mmc.h
> @@ -17,6 +17,7 @@
> #include <linux/fault-inject.h>
> #include <linux/hrtimer.h>
> #include <linux/interrupt.h>
> +#include <linux/workqueue.h>
>
> enum dw_mci_state {
> STATE_IDLE = 0,
> @@ -89,12 +90,12 @@ struct dw_mci_dma_slave {
> * @stop_cmdr: Value to be loaded into CMDR when the stop command is
> * to be sent.
> * @dir_status: Direction of current transfer.
> - * @tasklet: Tasklet running the request state machine.
> + * @work: Work running the request state machine.
> * @pending_events: Bitmask of events flagged by the interrupt handler
> - * to be processed by the tasklet.
> + * to be processed by the work.
> * @completed_events: Bitmask of events which the state machine has
> * processed.
> - * @state: Tasklet state.
> + * @state: Work state.
> * @queue: List of slots waiting for access to the controller.
> * @bus_hz: The rate of @mck in Hz. This forms the basis for MMC bus
> * rate and timeout calculations.
> @@ -194,7 +195,7 @@ struct dw_mci {
> u32 data_status;
> u32 stop_cmdr;
> u32 dir_status;
> - struct tasklet_struct tasklet;
> + struct work_struct work;
> unsigned long pending_events;
> unsigned long completed_events;
> enum dw_mci_state state;
> diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c
> index 088f8ed4fdc4..d85bae7b9cba 100644
> --- a/drivers/mmc/host/omap.c
> +++ b/drivers/mmc/host/omap.c
> @@ -28,6 +28,7 @@
> #include <linux/slab.h>
> #include <linux/gpio/consumer.h>
> #include <linux/platform_data/mmc-omap.h>
> +#include <linux/workqueue.h>
>
>
> #define OMAP_MMC_REG_CMD 0x00
> @@ -105,7 +106,7 @@ struct mmc_omap_slot {
> u16 power_mode;
> unsigned int fclk_freq;
>
> - struct tasklet_struct cover_tasklet;
> + struct work_struct cover_work;
> struct timer_list cover_timer;
> unsigned cover_open;
>
> @@ -873,18 +874,18 @@ void omap_mmc_notify_cover_event(struct device *dev, int num, int is_closed)
> sysfs_notify(&slot->mmc->class_dev.kobj, NULL, "cover_switch");
> }
>
> - tasklet_hi_schedule(&slot->cover_tasklet);
> + queue_work(system_bh_highpri_wq, &slot->cover_work);
> }
>
> static void mmc_omap_cover_timer(struct timer_list *t)
> {
> struct mmc_omap_slot *slot = from_timer(slot, t, cover_timer);
> - tasklet_schedule(&slot->cover_tasklet);
> + queue_work(system_bh_wq, &slot->cover_work);
> }
>
> -static void mmc_omap_cover_handler(struct tasklet_struct *t)
> +static void mmc_omap_cover_handler(struct work_struct *t)
> {
> - struct mmc_omap_slot *slot = from_tasklet(slot, t, cover_tasklet);
> + struct mmc_omap_slot *slot = from_work(slot, t, cover_work);
> int cover_open = mmc_omap_cover_is_open(slot);
>
> mmc_detect_change(slot->mmc, 0);
> @@ -1299,7 +1300,7 @@ static int mmc_omap_new_slot(struct mmc_omap_host *host, int id)
>
> if (slot->pdata->get_cover_state != NULL) {
> timer_setup(&slot->cover_timer, mmc_omap_cover_timer, 0);
> - tasklet_setup(&slot->cover_tasklet, mmc_omap_cover_handler);
> + INIT_WORK(&slot->cover_work, mmc_omap_cover_handler);
> }
>
> r = mmc_add_host(mmc);
> @@ -1318,7 +1319,7 @@ static int mmc_omap_new_slot(struct mmc_omap_host *host, int id)
> &dev_attr_cover_switch);
> if (r < 0)
> goto err_remove_slot_name;
> - tasklet_schedule(&slot->cover_tasklet);
> + queue_work(system_bh_wq, &slot->cover_work);
> }
>
> return 0;
> @@ -1341,7 +1342,7 @@ static void mmc_omap_remove_slot(struct mmc_omap_slot *slot)
> if (slot->pdata->get_cover_state != NULL)
> device_remove_file(&mmc->class_dev, &dev_attr_cover_switch);
>
> - tasklet_kill(&slot->cover_tasklet);
> + cancel_work_sync(&slot->cover_work);
> del_timer_sync(&slot->cover_timer);
> flush_workqueue(slot->host->mmc_omap_wq);
>
> diff --git a/drivers/mmc/host/renesas_sdhi.h b/drivers/mmc/host/renesas_sdhi.h
> index 586f94d4dbfd..4fd2bfcacd76 100644
> --- a/drivers/mmc/host/renesas_sdhi.h
> +++ b/drivers/mmc/host/renesas_sdhi.h
> @@ -11,6 +11,7 @@
>
> #include <linux/dmaengine.h>
> #include <linux/platform_device.h>
> +#include <linux/workqueue.h>
> #include "tmio_mmc.h"
>
> struct renesas_sdhi_scc {
> @@ -67,7 +68,7 @@ struct renesas_sdhi_dma {
> dma_filter_fn filter;
> void (*enable)(struct tmio_mmc_host *host, bool enable);
> struct completion dma_dataend;
> - struct tasklet_struct dma_complete;
> + struct work_struct dma_complete;
> };
>
> struct renesas_sdhi {
> diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
> index 53d34c3eddce..f175f8898516 100644
> --- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
> +++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
> @@ -336,7 +336,7 @@ static bool renesas_sdhi_internal_dmac_dma_irq(struct tmio_mmc_host *host)
> writel(status ^ dma_irqs, host->ctl + DM_CM_INFO1);
> set_bit(SDHI_DMA_END_FLAG_DMA, &dma_priv->end_flags);
> if (test_bit(SDHI_DMA_END_FLAG_ACCESS, &dma_priv->end_flags))
> - tasklet_schedule(&dma_priv->dma_complete);
> + queue_work(system_bh_wq, &dma_priv->dma_complete);
> }
>
> return status & dma_irqs;
> @@ -351,7 +351,7 @@ renesas_sdhi_internal_dmac_dataend_dma(struct tmio_mmc_host *host)
> set_bit(SDHI_DMA_END_FLAG_ACCESS, &dma_priv->end_flags);
> if (test_bit(SDHI_DMA_END_FLAG_DMA, &dma_priv->end_flags) ||
> host->data->error)
> - tasklet_schedule(&dma_priv->dma_complete);
> + queue_work(system_bh_wq, &dma_priv->dma_complete);
> }
>
> /*
> @@ -439,9 +439,9 @@ renesas_sdhi_internal_dmac_start_dma(struct tmio_mmc_host *host,
> renesas_sdhi_internal_dmac_enable_dma(host, false);
> }
>
> -static void renesas_sdhi_internal_dmac_issue_tasklet_fn(unsigned long arg)
> +static void renesas_sdhi_internal_dmac_issue_work_fn(struct work_struct *t)
> {
> - struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;
> + struct tmio_mmc_host *host = from_work(host, t, dma_issue);
> struct renesas_sdhi *priv = host_to_priv(host);
>
> tmio_mmc_enable_mmc_irqs(host, TMIO_STAT_DATAEND);
> @@ -453,7 +453,7 @@ static void renesas_sdhi_internal_dmac_issue_tasklet_fn(unsigned long arg)
> /* on CMD errors, simulate DMA end immediately */
> set_bit(SDHI_DMA_END_FLAG_DMA, &priv->dma_priv.end_flags);
> if (test_bit(SDHI_DMA_END_FLAG_ACCESS, &priv->dma_priv.end_flags))
> - tasklet_schedule(&priv->dma_priv.dma_complete);
> + queue_work(system_bh_wq, &priv->dma_priv.dma_complete);
> }
> }
>
> @@ -483,9 +483,9 @@ static bool renesas_sdhi_internal_dmac_complete(struct tmio_mmc_host *host)
> return true;
> }
>
> -static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg)
> +static void renesas_sdhi_internal_dmac_complete_work_fn(struct work_struct *t)
> {
> - struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;
> + struct tmio_mmc_host *host = from_work(host, t, dma_complete);
>
> spin_lock_irq(&host->lock);
> if (!renesas_sdhi_internal_dmac_complete(host))
> @@ -543,12 +543,10 @@ renesas_sdhi_internal_dmac_request_dma(struct tmio_mmc_host *host,
> /* Each value is set to non-zero to assume "enabling" each DMA */
> host->chan_rx = host->chan_tx = (void *)0xdeadbeaf;
>
> - tasklet_init(&priv->dma_priv.dma_complete,
> - renesas_sdhi_internal_dmac_complete_tasklet_fn,
> - (unsigned long)host);
> - tasklet_init(&host->dma_issue,
> - renesas_sdhi_internal_dmac_issue_tasklet_fn,
> - (unsigned long)host);
> + INIT_WORK(&priv->dma_priv.dma_complete,
> + renesas_sdhi_internal_dmac_complete_work_fn);
> + INIT_WORK(&host->dma_issue,
> + renesas_sdhi_internal_dmac_issue_work_fn);
>
> /* Add pre_req and post_req */
> host->ops.pre_req = renesas_sdhi_internal_dmac_pre_req;
> diff --git a/drivers/mmc/host/renesas_sdhi_sys_dmac.c b/drivers/mmc/host/renesas_sdhi_sys_dmac.c
> index 9cf7f9feab72..793595ad6d02 100644
> --- a/drivers/mmc/host/renesas_sdhi_sys_dmac.c
> +++ b/drivers/mmc/host/renesas_sdhi_sys_dmac.c
> @@ -312,9 +312,9 @@ static void renesas_sdhi_sys_dmac_start_dma(struct tmio_mmc_host *host,
> }
> }
>
> -static void renesas_sdhi_sys_dmac_issue_tasklet_fn(unsigned long priv)
> +static void renesas_sdhi_sys_dmac_issue_work_fn(struct work_struct *t)
> {
> - struct tmio_mmc_host *host = (struct tmio_mmc_host *)priv;
> + struct tmio_mmc_host *host = from_work(host, t, dma_issue);
> struct dma_chan *chan = NULL;
>
> spin_lock_irq(&host->lock);
> @@ -401,9 +401,8 @@ static void renesas_sdhi_sys_dmac_request_dma(struct tmio_mmc_host *host,
> goto ebouncebuf;
>
> init_completion(&priv->dma_priv.dma_dataend);
> - tasklet_init(&host->dma_issue,
> - renesas_sdhi_sys_dmac_issue_tasklet_fn,
> - (unsigned long)host);
> + INIT_WORK(&host->dma_issue,
> + renesas_sdhi_sys_dmac_issue_work_fn);
> }
>
> renesas_sdhi_sys_dmac_enable_dma(host, true);
> diff --git a/drivers/mmc/host/sdhci-bcm-kona.c b/drivers/mmc/host/sdhci-bcm-kona.c
> index cb9152c6a65d..974f205d479b 100644
> --- a/drivers/mmc/host/sdhci-bcm-kona.c
> +++ b/drivers/mmc/host/sdhci-bcm-kona.c
> @@ -107,7 +107,7 @@ static void sdhci_bcm_kona_sd_init(struct sdhci_host *host)
> * Software emulation of the SD card insertion/removal. Set insert=1 for insert
> * and insert=0 for removal. The card detection is done by GPIO. For Broadcom
> * IP to function properly the bit 0 of CORESTAT register needs to be set/reset
> - * to generate the CD IRQ handled in sdhci.c which schedules card_tasklet.
> + * to generate the CD IRQ handled in sdhci.c which schedules card_work.
> */
> static int sdhci_bcm_kona_sd_card_emulate(struct sdhci_host *host, int insert)
> {
> diff --git a/drivers/mmc/host/tifm_sd.c b/drivers/mmc/host/tifm_sd.c
> index b5a2f2f25ad9..c6285c577db0 100644
> --- a/drivers/mmc/host/tifm_sd.c
> +++ b/drivers/mmc/host/tifm_sd.c
> @@ -13,6 +13,7 @@
> #include <linux/highmem.h>
> #include <linux/scatterlist.h>
> #include <linux/module.h>
> +#include <linux/workqueue.h>
> #include <asm/io.h>
>
> #define DRIVER_NAME "tifm_sd"
> @@ -97,7 +98,7 @@ struct tifm_sd {
> unsigned int clk_div;
> unsigned long timeout_jiffies;
>
> - struct tasklet_struct finish_tasklet;
> + struct work_struct finish_work;
> struct timer_list timer;
> struct mmc_request *req;
>
> @@ -463,7 +464,7 @@ static void tifm_sd_check_status(struct tifm_sd *host)
> }
> }
> finish_request:
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> }
>
> /* Called from interrupt handler */
> @@ -723,9 +724,9 @@ static void tifm_sd_request(struct mmc_host *mmc, struct mmc_request *mrq)
> mmc_request_done(mmc, mrq);
> }
>
> -static void tifm_sd_end_cmd(struct tasklet_struct *t)
> +static void tifm_sd_end_cmd(struct work_struct *t)
> {
> - struct tifm_sd *host = from_tasklet(host, t, finish_tasklet);
> + struct tifm_sd *host = from_work(host, t, finish_work);
> struct tifm_dev *sock = host->dev;
> struct mmc_host *mmc = tifm_get_drvdata(sock);
> struct mmc_request *mrq;
> @@ -960,7 +961,7 @@ static int tifm_sd_probe(struct tifm_dev *sock)
> */
> mmc->max_busy_timeout = TIFM_MMCSD_REQ_TIMEOUT_MS;
>
> - tasklet_setup(&host->finish_tasklet, tifm_sd_end_cmd);
> + INIT_WORK(&host->finish_work, tifm_sd_end_cmd);
> timer_setup(&host->timer, tifm_sd_abort, 0);
>
> mmc->ops = &tifm_sd_ops;
> @@ -999,7 +1000,7 @@ static void tifm_sd_remove(struct tifm_dev *sock)
> writel(0, sock->addr + SOCK_MMCSD_INT_ENABLE);
> spin_unlock_irqrestore(&sock->lock, flags);
>
> - tasklet_kill(&host->finish_tasklet);
> + cancel_work_sync(&host->finish_work);
>
> spin_lock_irqsave(&sock->lock, flags);
> if (host->req) {
> @@ -1009,7 +1010,7 @@ static void tifm_sd_remove(struct tifm_dev *sock)
> host->req->cmd->error = -ENOMEDIUM;
> if (host->req->stop)
> host->req->stop->error = -ENOMEDIUM;
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> }
> spin_unlock_irqrestore(&sock->lock, flags);
> mmc_remove_host(mmc);
> diff --git a/drivers/mmc/host/tmio_mmc.h b/drivers/mmc/host/tmio_mmc.h
> index de56e6534aea..bee13acaa80f 100644
> --- a/drivers/mmc/host/tmio_mmc.h
> +++ b/drivers/mmc/host/tmio_mmc.h
> @@ -21,6 +21,7 @@
> #include <linux/scatterlist.h>
> #include <linux/spinlock.h>
> #include <linux/interrupt.h>
> +#include <linux/workqueue.h>
>
> #define CTL_SD_CMD 0x00
> #define CTL_ARG_REG 0x04
> @@ -156,7 +157,7 @@ struct tmio_mmc_host {
> bool dma_on;
> struct dma_chan *chan_rx;
> struct dma_chan *chan_tx;
> - struct tasklet_struct dma_issue;
> + struct work_struct dma_issue;
> struct scatterlist bounce_sg;
> u8 *bounce_buf;
>
> diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c
> index 93e912afd3ae..51bd2365795b 100644
> --- a/drivers/mmc/host/tmio_mmc_core.c
> +++ b/drivers/mmc/host/tmio_mmc_core.c
> @@ -608,7 +608,7 @@ static void tmio_mmc_cmd_irq(struct tmio_mmc_host *host, unsigned int stat)
> } else {
> tmio_mmc_disable_mmc_irqs(host,
> TMIO_MASK_READOP);
> - tasklet_schedule(&host->dma_issue);
> + queue_work(system_bh_wq, &host->dma_issue);
> }
> } else {
> if (!host->dma_on) {
> @@ -616,7 +616,7 @@ static void tmio_mmc_cmd_irq(struct tmio_mmc_host *host, unsigned int stat)
> } else {
> tmio_mmc_disable_mmc_irqs(host,
> TMIO_MASK_WRITEOP);
> - tasklet_schedule(&host->dma_issue);
> + queue_work(system_bh_wq, &host->dma_issue);
> }
> }
> } else {
> diff --git a/drivers/mmc/host/uniphier-sd.c b/drivers/mmc/host/uniphier-sd.c
> index 1404989e6151..d1964111c393 100644
> --- a/drivers/mmc/host/uniphier-sd.c
> +++ b/drivers/mmc/host/uniphier-sd.c
> @@ -17,6 +17,7 @@
> #include <linux/platform_device.h>
> #include <linux/regmap.h>
> #include <linux/reset.h>
> +#include <linux/workqueue.h>
>
> #include "tmio_mmc.h"
>
> @@ -90,9 +91,9 @@ static void uniphier_sd_dma_endisable(struct tmio_mmc_host *host, int enable)
> }
>
> /* external DMA engine */
> -static void uniphier_sd_external_dma_issue(struct tasklet_struct *t)
> +static void uniphier_sd_external_dma_issue(struct work_struct *t)
> {
> - struct tmio_mmc_host *host = from_tasklet(host, t, dma_issue);
> + struct tmio_mmc_host *host = from_work(host, t, dma_issue);
> struct uniphier_sd_priv *priv = uniphier_sd_priv(host);
>
> uniphier_sd_dma_endisable(host, 1);
> @@ -199,7 +200,7 @@ static void uniphier_sd_external_dma_request(struct tmio_mmc_host *host,
> host->chan_rx = chan;
> host->chan_tx = chan;
>
> - tasklet_setup(&host->dma_issue, uniphier_sd_external_dma_issue);
> + INIT_WORK(&host->dma_issue, uniphier_sd_external_dma_issue);
> }
>
> static void uniphier_sd_external_dma_release(struct tmio_mmc_host *host)
> @@ -236,9 +237,9 @@ static const struct tmio_mmc_dma_ops uniphier_sd_external_dma_ops = {
> .dataend = uniphier_sd_external_dma_dataend,
> };
>
> -static void uniphier_sd_internal_dma_issue(struct tasklet_struct *t)
> +static void uniphier_sd_internal_dma_issue(struct work_struct *t)
> {
> - struct tmio_mmc_host *host = from_tasklet(host, t, dma_issue);
> + struct tmio_mmc_host *host = from_work(host, t, dma_issue);
> unsigned long flags;
>
> spin_lock_irqsave(&host->lock, flags);
> @@ -317,7 +318,7 @@ static void uniphier_sd_internal_dma_request(struct tmio_mmc_host *host,
>
> host->chan_tx = (void *)0xdeadbeaf;
>
> - tasklet_setup(&host->dma_issue, uniphier_sd_internal_dma_issue);
> + INIT_WORK(&host->dma_issue, uniphier_sd_internal_dma_issue);
> }
>
> static void uniphier_sd_internal_dma_release(struct tmio_mmc_host *host)
> diff --git a/drivers/mmc/host/via-sdmmc.c b/drivers/mmc/host/via-sdmmc.c
> index ba6044b16e07..2777b773086b 100644
> --- a/drivers/mmc/host/via-sdmmc.c
> +++ b/drivers/mmc/host/via-sdmmc.c
> @@ -12,6 +12,7 @@
> #include <linux/interrupt.h>
>
> #include <linux/mmc/host.h>
> +#include <linux/workqueue.h>
>
> #define DRV_NAME "via_sdmmc"
>
> @@ -307,7 +308,7 @@ struct via_crdr_mmc_host {
> struct sdhcreg pm_sdhc_reg;
>
> struct work_struct carddet_work;
> - struct tasklet_struct finish_tasklet;
> + struct work_struct finish_work;
>
> struct timer_list timer;
> spinlock_t lock;
> @@ -643,7 +644,7 @@ static void via_sdc_finish_data(struct via_crdr_mmc_host *host)
> if (data->stop)
> via_sdc_send_command(host, data->stop);
> else
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> }
>
> static void via_sdc_finish_command(struct via_crdr_mmc_host *host)
> @@ -653,7 +654,7 @@ static void via_sdc_finish_command(struct via_crdr_mmc_host *host)
> host->cmd->error = 0;
>
> if (!host->cmd->data)
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
>
> host->cmd = NULL;
> }
> @@ -682,7 +683,7 @@ static void via_sdc_request(struct mmc_host *mmc, struct mmc_request *mrq)
> status = readw(host->sdhc_mmiobase + VIA_CRDR_SDSTATUS);
> if (!(status & VIA_CRDR_SDSTS_SLOTG) || host->reject) {
> host->mrq->cmd->error = -ENOMEDIUM;
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> } else {
> via_sdc_send_command(host, mrq->cmd);
> }
> @@ -848,7 +849,7 @@ static void via_sdc_cmd_isr(struct via_crdr_mmc_host *host, u16 intmask)
> host->cmd->error = -EILSEQ;
>
> if (host->cmd->error)
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> else if (intmask & VIA_CRDR_SDSTS_CRD)
> via_sdc_finish_command(host);
> }
> @@ -955,16 +956,16 @@ static void via_sdc_timeout(struct timer_list *t)
> sdhost->cmd->error = -ETIMEDOUT;
> else
> sdhost->mrq->cmd->error = -ETIMEDOUT;
> - tasklet_schedule(&sdhost->finish_tasklet);
> + queue_work(system_bh_wq, &sdhost->finish_work);
> }
> }
>
> spin_unlock_irqrestore(&sdhost->lock, flags);
> }
>
> -static void via_sdc_tasklet_finish(struct tasklet_struct *t)
> +static void via_sdc_work_finish(struct work_struct *t)
> {
> - struct via_crdr_mmc_host *host = from_tasklet(host, t, finish_tasklet);
> + struct via_crdr_mmc_host *host = from_work(host, t, finish_work);
> unsigned long flags;
> struct mmc_request *mrq;
>
> @@ -1005,7 +1006,7 @@ static void via_sdc_card_detect(struct work_struct *work)
> pr_err("%s: Card removed during transfer!\n",
> mmc_hostname(host->mmc));
> host->mrq->cmd->error = -ENOMEDIUM;
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> }
>
> spin_unlock_irqrestore(&host->lock, flags);
> @@ -1051,7 +1052,7 @@ static void via_init_mmc_host(struct via_crdr_mmc_host *host)
>
> INIT_WORK(&host->carddet_work, via_sdc_card_detect);
>
> - tasklet_setup(&host->finish_tasklet, via_sdc_tasklet_finish);
> + INIT_WORK(&host->finish_work, via_sdc_work_finish);
>
> addrbase = host->sdhc_mmiobase;
> writel(0x0, addrbase + VIA_CRDR_SDINTMASK);
> @@ -1193,7 +1194,7 @@ static void via_sd_remove(struct pci_dev *pcidev)
> sdhost->mrq->cmd->error = -ENOMEDIUM;
> if (sdhost->mrq->stop)
> sdhost->mrq->stop->error = -ENOMEDIUM;
> - tasklet_schedule(&sdhost->finish_tasklet);
> + queue_work(system_bh_wq, &sdhost->finish_work);
> }
> spin_unlock_irqrestore(&sdhost->lock, flags);
>
> @@ -1203,7 +1204,7 @@ static void via_sd_remove(struct pci_dev *pcidev)
>
> del_timer_sync(&sdhost->timer);
>
> - tasklet_kill(&sdhost->finish_tasklet);
> + cancel_work_sync(&sdhost->finish_work);
>
> /* switch off power */
> gatt = readb(sdhost->pcictrl_mmiobase + VIA_CRDR_PCICLKGATT);
> diff --git a/drivers/mmc/host/wbsd.c b/drivers/mmc/host/wbsd.c
> index f0562f712d98..984e380abc71 100644
> --- a/drivers/mmc/host/wbsd.c
> +++ b/drivers/mmc/host/wbsd.c
> @@ -32,6 +32,7 @@
> #include <linux/mmc/sd.h>
> #include <linux/scatterlist.h>
> #include <linux/slab.h>
> +#include <linux/workqueue.h>
>
> #include <asm/io.h>
> #include <asm/dma.h>
> @@ -459,7 +460,7 @@ static void wbsd_empty_fifo(struct wbsd_host *host)
> * FIFO threshold interrupts properly.
> */
> if ((data->blocks * data->blksz - data->bytes_xfered) < 16)
> - tasklet_schedule(&host->fifo_tasklet);
> + queue_work(system_bh_wq, &host->fifo_work);
> }
>
> static void wbsd_fill_fifo(struct wbsd_host *host)
> @@ -524,7 +525,7 @@ static void wbsd_fill_fifo(struct wbsd_host *host)
> * 'FIFO empty' under certain conditions. So we
> * need to be a bit more pro-active.
> */
> - tasklet_schedule(&host->fifo_tasklet);
> + queue_work(system_bh_wq, &host->fifo_work);
> }
>
> static void wbsd_prepare_data(struct wbsd_host *host, struct mmc_data *data)
> @@ -746,7 +747,7 @@ static void wbsd_request(struct mmc_host *mmc, struct mmc_request *mrq)
> struct mmc_command *cmd;
>
> /*
> - * Disable tasklets to avoid a deadlock.
> + * Disable bottom halves to avoid a deadlock.
> */
> spin_lock_bh(&host->lock);
>
> @@ -821,7 +822,7 @@ static void wbsd_request(struct mmc_host *mmc, struct mmc_request *mrq)
> * Dirty fix for hardware bug.
> */
> if (host->dma == -1)
> - tasklet_schedule(&host->fifo_tasklet);
> + queue_work(system_bh_wq, &host->fifo_work);
>
> spin_unlock_bh(&host->lock);
>
> @@ -961,13 +962,13 @@ static void wbsd_reset_ignore(struct timer_list *t)
> * Card status might have changed during the
> * blackout.
> */
> - tasklet_schedule(&host->card_tasklet);
> + queue_work(system_bh_wq, &host->card_work);
>
> spin_unlock_bh(&host->lock);
> }
>
> /*
> - * Tasklets
> + * Work items
> */
>
> static inline struct mmc_data *wbsd_get_data(struct wbsd_host *host)
> @@ -987,9 +988,9 @@ static inline struct mmc_data *wbsd_get_data(struct wbsd_host *host)
> return host->mrq->cmd->data;
> }
>
> -static void wbsd_tasklet_card(struct tasklet_struct *t)
> +static void wbsd_work_card(struct work_struct *t)
> {
> - struct wbsd_host *host = from_tasklet(host, t, card_tasklet);
> + struct wbsd_host *host = from_work(host, t, card_work);
> u8 csr;
> int delay = -1;
>
> @@ -1020,7 +1021,7 @@ static void wbsd_tasklet_card(struct tasklet_struct *t)
> wbsd_reset(host);
>
> host->mrq->cmd->error = -ENOMEDIUM;
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> }
>
> delay = 0;
> @@ -1036,9 +1037,9 @@ static void wbsd_tasklet_card(struct tasklet_struct *t)
> mmc_detect_change(host->mmc, msecs_to_jiffies(delay));
> }
>
> -static void wbsd_tasklet_fifo(struct tasklet_struct *t)
> +static void wbsd_work_fifo(struct work_struct *t)
> {
> - struct wbsd_host *host = from_tasklet(host, t, fifo_tasklet);
> + struct wbsd_host *host = from_work(host, t, fifo_work);
> struct mmc_data *data;
>
> spin_lock(&host->lock);
> @@ -1060,16 +1061,16 @@ static void wbsd_tasklet_fifo(struct tasklet_struct *t)
> */
> if (host->num_sg == 0) {
> wbsd_write_index(host, WBSD_IDX_FIFOEN, 0);
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
> }
>
> end:
> spin_unlock(&host->lock);
> }
>
> -static void wbsd_tasklet_crc(struct tasklet_struct *t)
> +static void wbsd_work_crc(struct work_struct *t)
> {
> - struct wbsd_host *host = from_tasklet(host, t, crc_tasklet);
> + struct wbsd_host *host = from_work(host, t, crc_work);
> struct mmc_data *data;
>
> spin_lock(&host->lock);
> @@ -1085,15 +1086,15 @@ static void wbsd_tasklet_crc(struct tasklet_struct *t)
>
> data->error = -EILSEQ;
>
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
>
> end:
> spin_unlock(&host->lock);
> }
>
> -static void wbsd_tasklet_timeout(struct tasklet_struct *t)
> +static void wbsd_work_timeout(struct work_struct *t)
> {
> - struct wbsd_host *host = from_tasklet(host, t, timeout_tasklet);
> + struct wbsd_host *host = from_work(host, t, timeout_work);
> struct mmc_data *data;
>
> spin_lock(&host->lock);
> @@ -1109,15 +1110,15 @@ static void wbsd_tasklet_timeout(struct tasklet_struct *t)
>
> data->error = -ETIMEDOUT;
>
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
>
> end:
> spin_unlock(&host->lock);
> }
>
> -static void wbsd_tasklet_finish(struct tasklet_struct *t)
> +static void wbsd_work_finish(struct work_struct *t)
> {
> - struct wbsd_host *host = from_tasklet(host, t, finish_tasklet);
> + struct wbsd_host *host = from_work(host, t, finish_work);
> struct mmc_data *data;
>
> spin_lock(&host->lock);
> @@ -1156,18 +1157,18 @@ static irqreturn_t wbsd_irq(int irq, void *dev_id)
> host->isr |= isr;
>
> /*
> - * Schedule tasklets as needed.
> + * Schedule work items as needed.
> */
> if (isr & WBSD_INT_CARD)
> - tasklet_schedule(&host->card_tasklet);
> + queue_work(system_bh_wq, &host->card_work);
> if (isr & WBSD_INT_FIFO_THRE)
> - tasklet_schedule(&host->fifo_tasklet);
> + queue_work(system_bh_wq, &host->fifo_work);
> if (isr & WBSD_INT_CRC)
> - tasklet_hi_schedule(&host->crc_tasklet);
> + queue_work(system_bh_highpri_wq, &host->crc_work);
> if (isr & WBSD_INT_TIMEOUT)
> - tasklet_hi_schedule(&host->timeout_tasklet);
> + queue_work(system_bh_highpri_wq, &host->timeout_work);
> if (isr & WBSD_INT_TC)
> - tasklet_schedule(&host->finish_tasklet);
> + queue_work(system_bh_wq, &host->finish_work);
>
> return IRQ_HANDLED;
> }
> @@ -1443,13 +1444,13 @@ static int wbsd_request_irq(struct wbsd_host *host, int irq)
> int ret;
>
> /*
> - * Set up tasklets. Must be done before requesting interrupt.
> + * Set up work items. Must be done before requesting interrupt.
> */
> - tasklet_setup(&host->card_tasklet, wbsd_tasklet_card);
> - tasklet_setup(&host->fifo_tasklet, wbsd_tasklet_fifo);
> - tasklet_setup(&host->crc_tasklet, wbsd_tasklet_crc);
> - tasklet_setup(&host->timeout_tasklet, wbsd_tasklet_timeout);
> - tasklet_setup(&host->finish_tasklet, wbsd_tasklet_finish);
> + INIT_WORK(&host->card_work, wbsd_work_card);
> + INIT_WORK(&host->fifo_work, wbsd_work_fifo);
> + INIT_WORK(&host->crc_work, wbsd_work_crc);
> + INIT_WORK(&host->timeout_work, wbsd_work_timeout);
> + INIT_WORK(&host->finish_work, wbsd_work_finish);
>
> /*
> * Allocate interrupt.
> @@ -1472,11 +1473,11 @@ static void wbsd_release_irq(struct wbsd_host *host)
>
> host->irq = 0;
>
> - tasklet_kill(&host->card_tasklet);
> - tasklet_kill(&host->fifo_tasklet);
> - tasklet_kill(&host->crc_tasklet);
> - tasklet_kill(&host->timeout_tasklet);
> - tasklet_kill(&host->finish_tasklet);
> + cancel_work_sync(&host->card_work);
> + cancel_work_sync(&host->fifo_work);
> + cancel_work_sync(&host->crc_work);
> + cancel_work_sync(&host->timeout_work);
> + cancel_work_sync(&host->finish_work);
> }
>
> /*
> diff --git a/drivers/mmc/host/wbsd.h b/drivers/mmc/host/wbsd.h
> index be30b4d8ce4c..942a64a724e4 100644
> --- a/drivers/mmc/host/wbsd.h
> +++ b/drivers/mmc/host/wbsd.h
> @@ -171,11 +171,11 @@ struct wbsd_host
> int irq; /* Interrupt */
> int dma; /* DMA channel */
>
> - struct tasklet_struct card_tasklet; /* Tasklet structures */
> - struct tasklet_struct fifo_tasklet;
> - struct tasklet_struct crc_tasklet;
> - struct tasklet_struct timeout_tasklet;
> - struct tasklet_struct finish_tasklet;
> + struct work_struct card_work; /* Work structures */
> + struct work_struct fifo_work;
> + struct work_struct crc_work;
> + struct work_struct timeout_work;
> + struct work_struct finish_work;
>
> struct timer_list ignore_timer; /* Ignore detection timer */
> };
> --
> 2.17.1
>

2024-03-28 13:37:49

by Linus Walleij

[permalink] [raw]
Subject: Re: [PATCH 9/9] mmc: Convert from tasklet to BH workqueue

On Thu, Mar 28, 2024 at 1:54 PM Ulf Hansson <[email protected]> wrote:

> At this point we have suggested that drivers switch to threaded
> irq handlers (and regular work queues if needed too). That said,
> what's the benefit of using the BH work queue?

Context:
https://lwn.net/Articles/960041/
"Tasklets, in particular, remain because they offer lower latency than
workqueues which, since they must go through the CPU scheduler,
can take longer to execute a deferred-work item."

The BH WQ is driven by a softirq and is quicker than an
ordinary work item.

I don't know whether this small latency difference could actually affect
any MMC device; I doubt it.

The other benefit, IIUC, is that it is easy to mechanically rewrite tasklets
to BH workqueues and be sure that the result is as fast as the tasklet; if
you want to switch to threaded IRQ handlers or proper work items, you need
to write a lot of elaborate code and test it (preferably on real hardware).
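
For reference, the mechanical pattern being described looks roughly like
this; the foo_* names are made up for illustration and are not taken from
any driver in the series:

  #include <linux/interrupt.h>
  #include <linux/workqueue.h>

  struct foo_host {
          struct work_struct done_work;   /* was: struct tasklet_struct done_tasklet */
  };

  static void foo_done_work(struct work_struct *t)
  {
          /* was: struct foo_host *host = from_tasklet(host, t, done_tasklet); */
          struct foo_host *host = from_work(host, t, done_work);

          /* ... same body as the old tasklet handler ... */
  }

  static irqreturn_t foo_irq(int irq, void *dev_id)
  {
          struct foo_host *host = dev_id;

          /* was: tasklet_schedule(&host->done_tasklet);
           * (tasklet_hi_schedule() maps to system_bh_highpri_wq instead) */
          queue_work(system_bh_wq, &host->done_work);
          return IRQ_HANDLED;
  }

  static void foo_setup(struct foo_host *host)
  {
          /* was: tasklet_setup(&host->done_tasklet, foo_done_tasklet); */
          INIT_WORK(&host->done_work, foo_done_work);
  }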

Yours,
Linus Walleij

2024-03-28 16:22:06

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 9/9] mmc: Convert from tasklet to BH workqueue

Hello,

On Thu, Mar 28, 2024 at 01:53:25PM +0100, Ulf Hansson wrote:
> At this point we have suggested that drivers switch to threaded
> irq handlers (and regular work queues if needed too). That said,
> what's the benefit of using the BH work queue?

BH workqueues should behave about the same as tasklets, which have a more
limited interface and are subtly broken in an expensive-to-fix way (around
freeing an in-flight work item), so the plan is to replace tasklets with BH
workqueues and remove tasklets from the kernel.

The [dis]advantages of BH workqueues over threaded IRQs or regular threaded
workqueues are the same as when you compare them to tasklets. No thread
switching overhead, so latencies will be a bit tighter. Whether that
actually matters really depends on the use case. Here, the biggest advantage
is that it's mostly interchangeable with tasklets and can thus be swapped
easily.
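
To make that last point concrete: the conversions pair cancel_work_sync()
with the eventual freeing of the object that embeds the work item, in the
same place tasklet_kill() used to be. A minimal sketch with made-up foo_*
names (not code from any of the patches):

  #include <linux/slab.h>
  #include <linux/workqueue.h>

  struct foo_host {
          struct work_struct done_work;   /* hypothetical embedding object */
  };

  static void foo_remove(struct foo_host *host)
  {
          /*
           * Once nothing can queue done_work any more (IRQs masked,
           * timers deleted, all device specific), wait for a running
           * instance to finish.  Only then is it safe to free the
           * structure that embeds the work_struct.
           */
          cancel_work_sync(&host->done_work);

          kfree(host);
  }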

Thanks.

--
tejun

2024-03-28 17:50:32

by Allen

[permalink] [raw]
Subject: Re: [PATCH 2/9] dma: Convert from tasklet to BH workqueue

>
> Subsystem is dmaengine, can you rename this to dmaengine: ...

My apologies, will have it fixed in v2.

>
> On 27-03-24, 16:03, Allen Pais wrote:
> > The only generic interface to execute asynchronously in the BH context is
> > tasklet; however, it's marked deprecated and has some design flaws. To
> > replace tasklets, BH workqueue support was recently added. A BH workqueue
> > behaves similarly to regular workqueues except that the queued work items
> > are executed in the BH context.
>
> Thanks for conversion, am happy with BH alternative as it helps in
> dmaengine where we need shortest possible time between tasklet and
> interrupt handling to maximize dma performance
>
> >
> > This patch converts drivers/dma/* from tasklet to BH workqueue.
>
> >
> > Based on the work done by Tejun Heo <[email protected]>
> > Branch: git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
> >
> > Signed-off-by: Allen Pais <[email protected]>
> > ---
> > drivers/dma/altera-msgdma.c | 15 ++++----
> > drivers/dma/apple-admac.c | 15 ++++----
> > drivers/dma/at_hdmac.c | 2 +-
> > drivers/dma/at_xdmac.c | 15 ++++----
> > drivers/dma/bcm2835-dma.c | 2 +-
> > drivers/dma/dma-axi-dmac.c | 2 +-
> > drivers/dma/dma-jz4780.c | 2 +-
> > .../dma/dw-axi-dmac/dw-axi-dmac-platform.c | 2 +-
> > drivers/dma/dw-edma/dw-edma-core.c | 2 +-
> > drivers/dma/dw/core.c | 13 +++----
> > drivers/dma/dw/regs.h | 3 +-
> > drivers/dma/ep93xx_dma.c | 15 ++++----
> > drivers/dma/fsl-edma-common.c | 2 +-
> > drivers/dma/fsl-qdma.c | 2 +-
> > drivers/dma/fsl_raid.c | 11 +++---
> > drivers/dma/fsl_raid.h | 2 +-
> > drivers/dma/fsldma.c | 15 ++++----
> > drivers/dma/fsldma.h | 3 +-
> > drivers/dma/hisi_dma.c | 2 +-
> > drivers/dma/hsu/hsu.c | 2 +-
> > drivers/dma/idma64.c | 4 +--
> > drivers/dma/img-mdc-dma.c | 2 +-
> > drivers/dma/imx-dma.c | 27 +++++++-------
> > drivers/dma/imx-sdma.c | 6 ++--
> > drivers/dma/ioat/dma.c | 17 ++++-----
> > drivers/dma/ioat/dma.h | 5 +--
> > drivers/dma/ioat/init.c | 2 +-
> > drivers/dma/k3dma.c | 19 +++++-----
> > drivers/dma/mediatek/mtk-cqdma.c | 35 ++++++++++---------
> > drivers/dma/mediatek/mtk-hsdma.c | 2 +-
> > drivers/dma/mediatek/mtk-uart-apdma.c | 4 +--
> > drivers/dma/mmp_pdma.c | 13 +++----
> > drivers/dma/mmp_tdma.c | 11 +++---
> > drivers/dma/mpc512x_dma.c | 17 ++++-----
> > drivers/dma/mv_xor.c | 13 +++----
> > drivers/dma/mv_xor.h | 5 +--
> > drivers/dma/mv_xor_v2.c | 23 ++++++------
> > drivers/dma/mxs-dma.c | 13 +++----
> > drivers/dma/nbpfaxi.c | 15 ++++----
> > drivers/dma/owl-dma.c | 2 +-
> > drivers/dma/pch_dma.c | 17 ++++-----
> > drivers/dma/pl330.c | 31 ++++++++--------
> > drivers/dma/plx_dma.c | 13 +++----
> > drivers/dma/ppc4xx/adma.c | 17 ++++-----
> > drivers/dma/ppc4xx/adma.h | 5 +--
> > drivers/dma/pxa_dma.c | 2 +-
> > drivers/dma/qcom/bam_dma.c | 35 ++++++++++---------
> > drivers/dma/qcom/gpi.c | 18 +++++-----
> > drivers/dma/qcom/hidma.c | 11 +++---
> > drivers/dma/qcom/hidma.h | 5 +--
> > drivers/dma/qcom/hidma_ll.c | 11 +++---
> > drivers/dma/qcom/qcom_adm.c | 2 +-
> > drivers/dma/sa11x0-dma.c | 27 +++++++-------
> > drivers/dma/sf-pdma/sf-pdma.c | 23 ++++++------
> > drivers/dma/sf-pdma/sf-pdma.h | 5 +--
> > drivers/dma/sprd-dma.c | 2 +-
> > drivers/dma/st_fdma.c | 2 +-
> > drivers/dma/ste_dma40.c | 17 ++++-----
> > drivers/dma/sun6i-dma.c | 33 ++++++++---------
> > drivers/dma/tegra186-gpc-dma.c | 2 +-
> > drivers/dma/tegra20-apb-dma.c | 19 +++++-----
> > drivers/dma/tegra210-adma.c | 2 +-
> > drivers/dma/ti/edma.c | 2 +-
> > drivers/dma/ti/k3-udma.c | 11 +++---
> > drivers/dma/ti/omap-dma.c | 2 +-
> > drivers/dma/timb_dma.c | 23 ++++++------
> > drivers/dma/txx9dmac.c | 29 +++++++--------
> > drivers/dma/txx9dmac.h | 5 +--
> > drivers/dma/virt-dma.c | 9 ++---
> > drivers/dma/virt-dma.h | 9 ++---
> > drivers/dma/xgene-dma.c | 21 +++++------
> > drivers/dma/xilinx/xilinx_dma.c | 23 ++++++------
> > drivers/dma/xilinx/xilinx_dpdma.c | 21 +++++------
> > drivers/dma/xilinx/zynqmp_dma.c | 21 +++++------
> > 74 files changed, 442 insertions(+), 395 deletions(-)
> >
> > diff --git a/drivers/dma/altera-msgdma.c b/drivers/dma/altera-msgdma.c
> > index a8e3615235b8..611b5290324b 100644
> > --- a/drivers/dma/altera-msgdma.c
> > +++ b/drivers/dma/altera-msgdma.c
> > @@ -20,6 +20,7 @@
> > #include <linux/platform_device.h>
> > #include <linux/slab.h>
> > #include <linux/of_dma.h>
> > +#include <linux/workqueue.h>
> >
> > #include "dmaengine.h"
> >
> > @@ -170,7 +171,7 @@ struct msgdma_sw_desc {
> > struct msgdma_device {
> > spinlock_t lock;
> > struct device *dev;
> > - struct tasklet_struct irq_tasklet;
> > + struct work_struct irq_work;
>
> Can we name these bh_work to signify that we are always in BH
> context? Here and everywhere, please.

Sure, will address it in v2.
>
>
> > struct list_head pending_list;
> > struct list_head free_list;
> > struct list_head active_list;
> > @@ -676,12 +677,12 @@ static int msgdma_alloc_chan_resources(struct dma_chan *dchan)
> > }
> >
> > /**
> > - * msgdma_tasklet - Schedule completion tasklet
> > + * msgdma_work - Schedule completion work
>
> ..
>
> > @@ -515,7 +516,7 @@ struct gpii {
> > enum gpi_pm_state pm_state;
> > rwlock_t pm_lock;
> > struct gpi_ring ev_ring;
> > - struct tasklet_struct ev_task; /* event processing tasklet */
> > + struct work_struct ev_task; /* event processing work */
> > struct completion cmd_completion;
> > enum gpi_cmd gpi_cmd;
> > u32 cntxt_type_irq_msk;
> > @@ -755,7 +756,7 @@ static void gpi_process_ieob(struct gpii *gpii)
> > gpi_write_reg(gpii, gpii->ieob_clr_reg, BIT(0));
> >
> > gpi_config_interrupts(gpii, MASK_IEOB_SETTINGS, 0);
> > - tasklet_hi_schedule(&gpii->ev_task);
> > + queue_work(system_bh_highpri_wq, &gpii->ev_task);
>
> This is a good conversion, thanks for ensuring system_bh_highpri_wq is
> used here

Thank you very much for the review, will have v2 sent soon.

- Allen

> --
> ~Vinod
>

2024-03-28 17:54:46

by Allen

[permalink] [raw]
Subject: Re: [PATCH 4/9] USB: Convert from tasklet to BH workqueue

> >
> > This patch converts drivers/infiniband/* from tasklet to BH workqueue.
> >
> > Based on the work done by Tejun Heo <[email protected]>
> > Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
> >
> > Signed-off-by: Allen Pais <[email protected]>
> > ---
>
> > diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
> > index c0e005670d67..88d8e1c366cd 100644
> > --- a/drivers/usb/core/hcd.c
> > +++ b/drivers/usb/core/hcd.c
>
> > @@ -1662,10 +1663,9 @@ static void __usb_hcd_giveback_urb(struct urb *urb)
> > usb_put_urb(urb);
> > }
> >
> > -static void usb_giveback_urb_bh(struct work_struct *work)
> > +static void usb_giveback_urb_bh(struct work_struct *t)
> > {
> > - struct giveback_urb_bh *bh =
> > - container_of(work, struct giveback_urb_bh, bh);
> > + struct giveback_urb_bh *bh = from_work(bh, t, bh);
> > struct list_head local_list;
> >
> > spin_lock_irq(&bh->lock);
>
> Is there any reason for this apparently pointless change of a local
> variable's name?

No, it was done just to keep things consistent across the kernel.
I can revert it back to *work if you'd prefer.

Thanks.

>
> Alan Stern
>


--
- Allen

2024-03-28 17:56:54

by Allen

[permalink] [raw]
Subject: Re: [PATCH 9/9] mmc: Convert from tasklet to BH workqueue

On Thu, Mar 28, 2024 at 3:16 AM Christian Loehle
<[email protected]> wrote:
>
> On 27/03/2024 16:03, Allen Pais wrote:
> > The only generic interface to execute asynchronously in the BH context is
> > tasklet; however, it's marked deprecated and has some design flaws. To
> > replace tasklets, BH workqueue support was recently added. A BH workqueue
> > behaves similarly to regular workqueues except that the queued work items
> > are executed in the BH context.
> >
> > This patch converts drivers/infiniband/* from tasklet to BH workqueue.
> s/infiniband/mmc

Will fix it in v2.
> >
> > Based on the work done by Tejun Heo <[email protected]>
> > Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
> >
> > Signed-off-by: Allen Pais <[email protected]>
> > ---
> > drivers/mmc/host/atmel-mci.c | 35 ++++-----
> > drivers/mmc/host/au1xmmc.c | 37 ++++-----
> > drivers/mmc/host/cb710-mmc.c | 15 ++--
> > drivers/mmc/host/cb710-mmc.h | 3 +-
> > drivers/mmc/host/dw_mmc.c | 25 ++++---
> > drivers/mmc/host/dw_mmc.h | 9 ++-
> For dw_mmc:
> Performance numbers look good FWIW.
> for i in $(seq 0 5); do echo performance > /sys/devices/system/cpu/cpu$i/cpufreq/scaling_governor; done
> for i in $(seq 0 4); do fio --name=test --rw=randread --bs=4k --runtime=30 --time_based --filename=/dev/mmcblk1 --minimal --numjobs=6 --iodepth=32 --group_reporting | awk -F ";" '{print $8}'; sleep 30; done
> Baseline:
> 1758
> 1773
> 1619
> 1835
> 1639
> to:
> 1743
> 1643
> 1860
> 1638
> 1872
> (I'd call that equivalent).
> This is on a RK3399.
> I would prefer most of the naming to change from "work" to "workqueue" in the driver
> code.
> Apart from that:
> Reviewed-by: Christian Loehle <[email protected]>
> Tested-by: Christian Loehle <[email protected]>

Thank you very much for testing and the review. Will have your
concerns addressed in v2.

- Allen

2024-03-28 18:08:17

by Allen

[permalink] [raw]
Subject: Re: [PATCH 6/9] ipmi: Convert from tasklet to BH workqueue

On Wed, Mar 27, 2024 at 11:05 AM Corey Minyard <[email protected]> wrote:
>
> On Wed, Mar 27, 2024 at 04:03:11PM +0000, Allen Pais wrote:
> > The only generic interface to execute asynchronously in the BH context is
> > tasklet; however, it's marked deprecated and has some design flaws. To
> > replace tasklets, BH workqueue support was recently added. A BH workqueue
> > behaves similarly to regular workqueues except that the queued work items
> > are executed in the BH context.
> >
> > This patch converts drivers/infiniband/* from tasklet to BH workqueue.
>
> I think you mean drivers/char/ipmi/* here.

My apologies, my scripts messed up the commit messages for this series.
Will have it fixed in v2.

>
> I believe that work items are executed single-threaded for a work
> queue, so this should be good. I need to test this, though. It may be
> that an IPMI device can have its own work queue; it may not be important
> to run it in bh context.

Fair point. Could you please let me know once you have had a chance to test
these changes? Meanwhile, I will work on an RFC wherein IPMI will have its own
workqueue.
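
As a rough, hypothetical sketch of that direction (placeholder names,
assuming the WQ_BH flag from the branch referenced in the patch), it could
be as small as:

  #include <linux/workqueue.h>

  static struct workqueue_struct *ipmi_bh_wq;    /* hypothetical, module scope */

  static int ipmi_bh_wq_setup(void)
  {
          /* WQ_BH gives the same execution context as system_bh_wq */
          ipmi_bh_wq = alloc_workqueue("ipmi_bh", WQ_BH, 0);
          return ipmi_bh_wq ? 0 : -ENOMEM;
  }

with the scheduling sites then becoming
queue_work(ipmi_bh_wq, &intf->recv_work) instead of using system_bh_wq.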

Thanks for taking time out to review.

- Allen

>
> -corey
>
> >
> > Based on the work done by Tejun Heo <[email protected]>
> > > Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
> >
> > Signed-off-by: Allen Pais <[email protected]>
> > ---
> > drivers/char/ipmi/ipmi_msghandler.c | 30 ++++++++++++++---------------
> > 1 file changed, 15 insertions(+), 15 deletions(-)
> >
> > diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
> > index b0eedc4595b3..fce2a2dbdc82 100644
> > --- a/drivers/char/ipmi/ipmi_msghandler.c
> > +++ b/drivers/char/ipmi/ipmi_msghandler.c
> > @@ -36,12 +36,13 @@
> > #include <linux/nospec.h>
> > #include <linux/vmalloc.h>
> > #include <linux/delay.h>
> > +#include <linux/workqueue.h>
> >
> > #define IPMI_DRIVER_VERSION "39.2"
> >
> > static struct ipmi_recv_msg *ipmi_alloc_recv_msg(void);
> > static int ipmi_init_msghandler(void);
> > -static void smi_recv_tasklet(struct tasklet_struct *t);
> > +static void smi_recv_work(struct work_struct *t);
> > static void handle_new_recv_msgs(struct ipmi_smi *intf);
> > static void need_waiter(struct ipmi_smi *intf);
> > static int handle_one_recv_msg(struct ipmi_smi *intf,
> > @@ -498,13 +499,13 @@ struct ipmi_smi {
> > /*
> > * Messages queued for delivery. If delivery fails (out of memory
> > * for instance), They will stay in here to be processed later in a
> > - * periodic timer interrupt. The tasklet is for handling received
> > + * periodic timer interrupt. The work is for handling received
> > * messages directly from the handler.
> > */
> > spinlock_t waiting_rcv_msgs_lock;
> > struct list_head waiting_rcv_msgs;
> > atomic_t watchdog_pretimeouts_to_deliver;
> > - struct tasklet_struct recv_tasklet;
> > + struct work_struct recv_work;
> >
> > spinlock_t xmit_msgs_lock;
> > struct list_head xmit_msgs;
> > @@ -704,7 +705,7 @@ static void clean_up_interface_data(struct ipmi_smi *intf)
> > struct cmd_rcvr *rcvr, *rcvr2;
> > struct list_head list;
> >
> > - tasklet_kill(&intf->recv_tasklet);
> > + cancel_work_sync(&intf->recv_work);
> >
> > free_smi_msg_list(&intf->waiting_rcv_msgs);
> > free_recv_msg_list(&intf->waiting_events);
> > @@ -1319,7 +1320,7 @@ static void free_user(struct kref *ref)
> > {
> > struct ipmi_user *user = container_of(ref, struct ipmi_user, refcount);
> >
> > - /* SRCU cleanup must happen in task context. */
> > + /* SRCU cleanup must happen in work context. */
> > queue_work(remove_work_wq, &user->remove_work);
> > }
> >
> > @@ -3605,8 +3606,7 @@ int ipmi_add_smi(struct module *owner,
> > intf->curr_seq = 0;
> > spin_lock_init(&intf->waiting_rcv_msgs_lock);
> > INIT_LIST_HEAD(&intf->waiting_rcv_msgs);
> > - tasklet_setup(&intf->recv_tasklet,
> > - smi_recv_tasklet);
> > + INIT_WORK(&intf->recv_work, smi_recv_work);
> > atomic_set(&intf->watchdog_pretimeouts_to_deliver, 0);
> > spin_lock_init(&intf->xmit_msgs_lock);
> > INIT_LIST_HEAD(&intf->xmit_msgs);
> > @@ -4779,7 +4779,7 @@ static void handle_new_recv_msgs(struct ipmi_smi *intf)
> > * To preserve message order, quit if we
> > * can't handle a message. Add the message
> > * back at the head, this is safe because this
> > - * tasklet is the only thing that pulls the
> > + * work is the only thing that pulls the
> > * messages.
> > */
> > list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
> > @@ -4812,10 +4812,10 @@ static void handle_new_recv_msgs(struct ipmi_smi *intf)
> > }
> > }
> >
> > -static void smi_recv_tasklet(struct tasklet_struct *t)
> > +static void smi_recv_work(struct work_struct *t)
> > {
> > unsigned long flags = 0; /* keep us warning-free. */
> > - struct ipmi_smi *intf = from_tasklet(intf, t, recv_tasklet);
> > + struct ipmi_smi *intf = from_work(intf, t, recv_work);
> > int run_to_completion = intf->run_to_completion;
> > struct ipmi_smi_msg *newmsg = NULL;
> >
> > @@ -4866,7 +4866,7 @@ void ipmi_smi_msg_received(struct ipmi_smi *intf,
> >
> > /*
> > * To preserve message order, we keep a queue and deliver from
> > - * a tasklet.
> > > + * a work item.
> > */
> > if (!run_to_completion)
> > spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);
> > @@ -4887,9 +4887,9 @@ void ipmi_smi_msg_received(struct ipmi_smi *intf,
> > spin_unlock_irqrestore(&intf->xmit_msgs_lock, flags);
> >
> > if (run_to_completion)
> > - smi_recv_tasklet(&intf->recv_tasklet);
> > + smi_recv_work(&intf->recv_work);
> > else
> > - tasklet_schedule(&intf->recv_tasklet);
> > + queue_work(system_bh_wq, &intf->recv_work);
> > }
> > EXPORT_SYMBOL(ipmi_smi_msg_received);
> >
> > @@ -4899,7 +4899,7 @@ void ipmi_smi_watchdog_pretimeout(struct ipmi_smi *intf)
> > return;
> >
> > atomic_set(&intf->watchdog_pretimeouts_to_deliver, 1);
> > - tasklet_schedule(&intf->recv_tasklet);
> > + queue_work(system_bh_wq, &intf->recv_work);
> > }
> > EXPORT_SYMBOL(ipmi_smi_watchdog_pretimeout);
> >
> > @@ -5068,7 +5068,7 @@ static bool ipmi_timeout_handler(struct ipmi_smi *intf,
> > flags);
> > }
> >
> > - tasklet_schedule(&intf->recv_tasklet);
> > + queue_work(system_bh_wq, &intf->recv_work);
> >
> > return need_timer;
> > }
> > --
> > 2.17.1
> >
> >
>


--
- Allen

2024-03-28 18:32:17

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH 2/9] dma: Convert from tasklet to BH workqueue

On 28-03-24, 11:08, Arnd Bergmann wrote:
> On Thu, Mar 28, 2024, at 06:55, Vinod Koul wrote:
> > On 27-03-24, 16:03, Allen Pais wrote:
> >> The only generic interface to execute asynchronously in the BH context is
> >> tasklet; however, it's marked deprecated and has some design flaws. To
> >> replace tasklets, BH workqueue support was recently added. A BH workqueue
> >> behaves similarly to regular workqueues except that the queued work items
> >> are executed in the BH context.
> >
> > Thanks for conversion, am happy with BH alternative as it helps in
> > dmaengine where we need shortest possible time between tasklet and
> > interrupt handling to maximize dma performance
>
> I still feel that we want something different for dmaengine,
> at least in the long run. As we have discussed in the past,
> the tasklet context in these drivers is what the callbacks
> from the dma client device are run in, and a lot of these probably
> want something other than tasklet context, e.g. just call
> complete() on a client-provided completion structure.
>
> Instead of open-coding the use of the system_bh_wq in each
> dmaengine, how about we start with a custom WQ_BH
> specifically for the dmaengine subsystem and wrap them
> inside of another interface.
>
> Since almost every driver associates the tasklet with the
> dma_chan, we could go one step further and add the
> work_queue structure directly into struct dma_chan,
> with the wrapper operating on the dma_chan rather than
> the work_queue.

I think that is a great idea. Having this wrapped in dma_chan would
be a very good way as well.
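
Very roughly, and purely as a hypothetical sketch (none of these helpers or
fields exist in dmaengine today), it could look like:

  #include <linux/workqueue.h>

  /* imagined addition to struct dma_chan */
  struct dma_chan_example {
          struct work_struct complete_work;
          /* ... existing dma_chan members ... */
  };

  /* imagined subsystem-wide BH workqueue, allocated with WQ_BH at init */
  static struct workqueue_struct *dmaengine_bh_wq;

  static inline void dmaengine_chan_init_work(struct dma_chan_example *chan,
                                              work_func_t fn)
  {
          INIT_WORK(&chan->complete_work, fn);
  }

  static inline void dmaengine_chan_schedule_work(struct dma_chan_example *chan)
  {
          queue_work(dmaengine_bh_wq, &chan->complete_work);
  }

Drivers would then call dmaengine_chan_schedule_work() from their interrupt
handlers instead of open-coding queue_work(system_bh_wq, ...).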

Am not sure if Allen is up for it :-)

--
~Vinod

2024-03-28 19:32:29

by Corey Minyard

[permalink] [raw]
Subject: Re: [PATCH 6/9] ipmi: Convert from tasklet to BH workqueue

On Thu, Mar 28, 2024 at 10:52:16AM -0700, Allen wrote:
> On Wed, Mar 27, 2024 at 11:05 AM Corey Minyard <[email protected]> wrote:
> >
> > I believe that work items are executed single-threaded for a work
> > queue, so this should be good. I need to test this, though. It may be
> > that an IPMI device can have its own work queue; it may not be important
> > to run it in bh context.
>
> Fair point. Could you please let me know once you have had a chance to test
> these changes? Meanwhile, I will work on an RFC wherein IPMI will have its own
> workqueue.
>
> Thanks for taking time out to review.

After looking and thinking about it a bit, a BH context is still
probably the best for this.

I have tested this patch under load and various scenarios and it seems
to work ok. So:

Tested-by: Corey Minyard <[email protected]>
Acked-by: Corey Minyard <[email protected]>

Or I can take this into my tree.

-corey
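
For illustration, the dedicated per-driver workqueue floated above might be
created roughly as sketched below. The names are hypothetical and the WQ_BH
flag comes from the wq tree referenced in the cover letter, so treat this as
a sketch rather than code from the posted patch:

    /* Hypothetical sketch only: a dedicated BH workqueue for IPMI. */
    #include <linux/workqueue.h>

    static struct workqueue_struct *ipmi_recv_wq;

    static int ipmi_init_recv_wq(void)
    {
            /* WQ_BH requests execution in BH (softirq) context. */
            ipmi_recv_wq = alloc_workqueue("ipmi_recv", WQ_BH, 0);
            return ipmi_recv_wq ? 0 : -ENOMEM;
    }

    /*
     * Callers would then use queue_work(ipmi_recv_wq, &intf->recv_work)
     * instead of queue_work(system_bh_wq, &intf->recv_work).
     */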

>
> - Allen
>
> >
> > -corey
> >
> > >
> > > Based on the work done by Tejun Heo <[email protected]>
> > > Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
> > >
> > > Signed-off-by: Allen Pais <[email protected]>
> > > ---
> > > drivers/char/ipmi/ipmi_msghandler.c | 30 ++++++++++++++---------------
> > > 1 file changed, 15 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
> > > index b0eedc4595b3..fce2a2dbdc82 100644
> > > --- a/drivers/char/ipmi/ipmi_msghandler.c
> > > +++ b/drivers/char/ipmi/ipmi_msghandler.c
> > > @@ -36,12 +36,13 @@
> > > #include <linux/nospec.h>
> > > #include <linux/vmalloc.h>
> > > #include <linux/delay.h>
> > > +#include <linux/workqueue.h>
> > >
> > > #define IPMI_DRIVER_VERSION "39.2"
> > >
> > > static struct ipmi_recv_msg *ipmi_alloc_recv_msg(void);
> > > static int ipmi_init_msghandler(void);
> > > -static void smi_recv_tasklet(struct tasklet_struct *t);
> > > +static void smi_recv_work(struct work_struct *t);
> > > static void handle_new_recv_msgs(struct ipmi_smi *intf);
> > > static void need_waiter(struct ipmi_smi *intf);
> > > static int handle_one_recv_msg(struct ipmi_smi *intf,
> > > @@ -498,13 +499,13 @@ struct ipmi_smi {
> > > /*
> > > * Messages queued for delivery. If delivery fails (out of memory
> > > * for instance), They will stay in here to be processed later in a
> > > - * periodic timer interrupt. The tasklet is for handling received
> > > + * periodic timer interrupt. The work is for handling received
> > > * messages directly from the handler.
> > > */
> > > spinlock_t waiting_rcv_msgs_lock;
> > > struct list_head waiting_rcv_msgs;
> > > atomic_t watchdog_pretimeouts_to_deliver;
> > > - struct tasklet_struct recv_tasklet;
> > > + struct work_struct recv_work;
> > >
> > > spinlock_t xmit_msgs_lock;
> > > struct list_head xmit_msgs;
> > > @@ -704,7 +705,7 @@ static void clean_up_interface_data(struct ipmi_smi *intf)
> > > struct cmd_rcvr *rcvr, *rcvr2;
> > > struct list_head list;
> > >
> > > - tasklet_kill(&intf->recv_tasklet);
> > > + cancel_work_sync(&intf->recv_work);
> > >
> > > free_smi_msg_list(&intf->waiting_rcv_msgs);
> > > free_recv_msg_list(&intf->waiting_events);
> > > @@ -1319,7 +1320,7 @@ static void free_user(struct kref *ref)
> > > {
> > > struct ipmi_user *user = container_of(ref, struct ipmi_user, refcount);
> > >
> > > - /* SRCU cleanup must happen in task context. */
> > > + /* SRCU cleanup must happen in work context. */
> > > queue_work(remove_work_wq, &user->remove_work);
> > > }
> > >
> > > @@ -3605,8 +3606,7 @@ int ipmi_add_smi(struct module *owner,
> > > intf->curr_seq = 0;
> > > spin_lock_init(&intf->waiting_rcv_msgs_lock);
> > > INIT_LIST_HEAD(&intf->waiting_rcv_msgs);
> > > - tasklet_setup(&intf->recv_tasklet,
> > > - smi_recv_tasklet);
> > > + INIT_WORK(&intf->recv_work, smi_recv_work);
> > > atomic_set(&intf->watchdog_pretimeouts_to_deliver, 0);
> > > spin_lock_init(&intf->xmit_msgs_lock);
> > > INIT_LIST_HEAD(&intf->xmit_msgs);
> > > @@ -4779,7 +4779,7 @@ static void handle_new_recv_msgs(struct ipmi_smi *intf)
> > > * To preserve message order, quit if we
> > > * can't handle a message. Add the message
> > > * back at the head, this is safe because this
> > > - * tasklet is the only thing that pulls the
> > > + * work is the only thing that pulls the
> > > * messages.
> > > */
> > > list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
> > > @@ -4812,10 +4812,10 @@ static void handle_new_recv_msgs(struct ipmi_smi *intf)
> > > }
> > > }
> > >
> > > -static void smi_recv_tasklet(struct tasklet_struct *t)
> > > +static void smi_recv_work(struct work_struct *t)
> > > {
> > > unsigned long flags = 0; /* keep us warning-free. */
> > > - struct ipmi_smi *intf = from_tasklet(intf, t, recv_tasklet);
> > > + struct ipmi_smi *intf = from_work(intf, t, recv_work);
> > > int run_to_completion = intf->run_to_completion;
> > > struct ipmi_smi_msg *newmsg = NULL;
> > >
> > > @@ -4866,7 +4866,7 @@ void ipmi_smi_msg_received(struct ipmi_smi *intf,
> > >
> > > /*
> > > * To preserve message order, we keep a queue and deliver from
> > > - * a tasklet.
> > > + * a work.
> > > */
> > > if (!run_to_completion)
> > > spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);
> > > @@ -4887,9 +4887,9 @@ void ipmi_smi_msg_received(struct ipmi_smi *intf,
> > > spin_unlock_irqrestore(&intf->xmit_msgs_lock, flags);
> > >
> > > if (run_to_completion)
> > > - smi_recv_tasklet(&intf->recv_tasklet);
> > > + smi_recv_work(&intf->recv_work);
> > > else
> > > - tasklet_schedule(&intf->recv_tasklet);
> > > + queue_work(system_bh_wq, &intf->recv_work);
> > > }
> > > EXPORT_SYMBOL(ipmi_smi_msg_received);
> > >
> > > @@ -4899,7 +4899,7 @@ void ipmi_smi_watchdog_pretimeout(struct ipmi_smi *intf)
> > > return;
> > >
> > > atomic_set(&intf->watchdog_pretimeouts_to_deliver, 1);
> > > - tasklet_schedule(&intf->recv_tasklet);
> > > + queue_work(system_bh_wq, &intf->recv_work);
> > > }
> > > EXPORT_SYMBOL(ipmi_smi_watchdog_pretimeout);
> > >
> > > @@ -5068,7 +5068,7 @@ static bool ipmi_timeout_handler(struct ipmi_smi *intf,
> > > flags);
> > > }
> > >
> > > - tasklet_schedule(&intf->recv_tasklet);
> > > + queue_work(system_bh_wq, &intf->recv_work);
> > >
> > > return need_timer;
> > > }
> > > --
> > > 2.17.1
> > >
> > >
> >
>
>
> --
> - Allen
>

2024-03-28 19:40:08

by Allen

[permalink] [raw]
Subject: Re: [PATCH 2/9] dma: Convert from tasklet to BH workqueue

> > >> The only generic interface to execute asynchronously in the BH context is
> > >> tasklet; however, it's marked deprecated and has some design flaws. To
> > >> replace tasklets, BH workqueue support was recently added. A BH workqueue
> > >> behaves similarly to regular workqueues except that the queued work items
> > >> are executed in the BH context.
> > >
> > > Thanks for conversion, am happy with BH alternative as it helps in
> > > dmaengine where we need shortest possible time between tasklet and
> > > interrupt handling to maximize dma performance
> >
> > I still feel that we want something different for dmaengine,
> > at least in the long run. As we have discussed in the past,
> > the tasklet context in these drivers is what the callbacks
> > from the dma client device is run in, and a lot of these probably
> > want something other than tasklet context, e.g. just call
> > complete() on a client-provided completion structure.
> >
> > Instead of open-coding the use of the system_bh_wq in each
> > dmaengine, how about we start with a custom WQ_BH
> > specifically for the dmaengine subsystem and wrap them
> > inside of another interface.
> >
> > Since almost every driver associates the tasklet with the
> > dma_chan, we could go one step further and add the
> > work_queue structure directly into struct dma_chan,
> > with the wrapper operating on the dma_chan rather than
> > the work_queue.
>
> I think that is very great idea. having this wrapped in dma_chan would
> be very good way as well
>
> Am not sure if Allen is up for it :-)

Thanks Arnd, I know we did speak about this at LPC. I did start
working on using completion. I dropped it as I thought it would
be easier to move to workqueues.

Vinod, I would like to give this a shot and put out an RFC; I would
really appreciate review and feedback.

Thanks,
Allen

>
> --
> ~Vinod
>

2024-03-28 19:41:59

by Allen

[permalink] [raw]
Subject: Re: [PATCH 6/9] ipmi: Convert from tasklet to BH workqueue

> > > I believe that work queue items are executed single-threaded for a work
> > > queue, so this should be good. I need to test this, though. It may be
> > > that an IPMI device can have its own work queue; it may not be important
> > > to run it in bh context.
> >
> > Fair point. Could you please let me know once you have had a chance to test
> > these changes. Meanwhile, I will work on RFC wherein IPMI will have its own
> > workqueue.
> >
> > Thanks for taking time out to review.
>
> After looking and thinking about it a bit, a BH context is still
> probably the best for this.
>
> I have tested this patch under load and various scenarios and it seems
> to work ok. So:
>
> Tested-by: Corey Minyard <[email protected]>
> Acked-by: Corey Minyard <[email protected]>
>
> Or I can take this into my tree.
>
> -corey

Thank you very much. I think it should be okay for you to carry it into
your tree.

- Allen

2024-03-28 19:51:39

by Arnd Bergmann

[permalink] [raw]
Subject: Re: [PATCH 2/9] dma: Convert from tasklet to BH workqueue

On Thu, Mar 28, 2024, at 20:39, Allen wrote:
>> >
>> > Since almost every driver associates the tasklet with the
>> > dma_chan, we could go one step further and add the
>> > work_queue structure directly into struct dma_chan,
>> > with the wrapper operating on the dma_chan rather than
>> > the work_queue.
>>
>> I think that is very great idea. having this wrapped in dma_chan would
>> be very good way as well
>>
>> Am not sure if Allen is up for it :-)
>
> Thanks Arnd, I know we did speak about this at LPC. I did start
> working on using completion. I dropped it as I thought it would
> be easier to move to workqueues.

It's definitely easier to do the workqueue conversion as a first
step, and I agree adding support for the completion right away is
probably too much. Moving the work_struct into the dma_chan
is probably not too hard though, if you leave your current
approach for the cases where the tasklet is part of the
dma_dev rather than the dma_chan.

Arnd
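
A minimal sketch of what such a wrapper could look like, assuming a
hypothetical completion_work member added to struct dma_chan and a
dmaengine-private BH workqueue; none of these names are from the posted
series:

    /* Hypothetical sketch; assumes struct dma_chan gains a 'completion_work' member. */
    #include <linux/dmaengine.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *dmaengine_bh_wq;

    static int __init dmaengine_bh_wq_init(void)
    {
            dmaengine_bh_wq = alloc_workqueue("dmaengine_bh", WQ_BH, 0);
            return dmaengine_bh_wq ? 0 : -ENOMEM;
    }

    /* Drivers would operate on the dma_chan, not on the work item directly. */
    static inline void dma_chan_init_work(struct dma_chan *chan, work_func_t fn)
    {
            INIT_WORK(&chan->completion_work, fn);
    }

    static inline void dma_chan_schedule_work(struct dma_chan *chan)
    {
            queue_work(dmaengine_bh_wq, &chan->completion_work);
    }

A driver's interrupt handler would then call dma_chan_schedule_work(chan)
where it used to call tasklet_schedule().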

2024-03-28 19:57:18

by Corey Minyard

[permalink] [raw]
Subject: Re: [PATCH 6/9] ipmi: Convert from tasklet to BH workqueue

On Thu, Mar 28, 2024 at 12:41:22PM -0700, Allen wrote:
> > > > I believe that work queue items are executed single-threaded for a work
> > > > queue, so this should be good. I need to test this, though. It may be
> > > > that an IPMI device can have its own work queue; it may not be important
> > > > to run it in bh context.
> > >
> > > Fair point. Could you please let me know once you have had a chance to test
> > > these changes. Meanwhile, I will work on RFC wherein IPMI will have its own
> > > workqueue.
> > >
> > > Thanks for taking time out to review.
> >
> > After looking and thinking about it a bit, a BH context is still
> > probably the best for this.
> >
> > I have tested this patch under load and various scenarios and it seems
> > to work ok. So:
> >
> > Tested-by: Corey Minyard <[email protected]>
> > Acked-by: Corey Minyard <[email protected]>
> >
> > Or I can take this into my tree.
> >
> > -corey
>
> Thank you very much. I think it should be okay for you to carry it into
> your tree.

Ok, it's in my for-next tree. I fixed the directory reference, and I
changed all the comments where you changed "tasklet" to "work" to
instead say "workqueue".

-corey

>
> - Allen
>

2024-03-28 20:01:00

by Allen

[permalink] [raw]
Subject: Re: [PATCH 6/9] ipmi: Convert from tasklet to BH workqueue

> > > >
> > > > Fair point. Could you please let me know once you have had a chance to test
> > > > these changes. Meanwhile, I will work on RFC wherein IPMI will have its own
> > > > workqueue.
> > > >
> > > > Thanks for taking time out to review.
> > >
> > > After looking and thinking about it a bit, a BH context is still
> > > probably the best for this.
> > >
> > > I have tested this patch under load and various scenarios and it seems
> > > to work ok. So:
> > >
> > > Tested-by: Corey Minyard <[email protected]>
> > > Acked-by: Corey Minyard <[email protected]>
> > >
> > > Or I can take this into my tree.
> > >
> > > -corey
> >
> > Thank you very much. I think it should be okay for you to carry it into
> > your tree.
>
> Ok, it's in my for-next tree. I fixed the directory reference, and I
> changed all the comments where you changed "tasklet" to "work" to
> instead say "workqueue".
>

Thank you very much for fixing it.

- Allen

2024-03-28 20:03:21

by Allen

[permalink] [raw]
Subject: Re: [PATCH 2/9] dma: Convert from tasklet to BH workqueue

> >> > Since almost every driver associates the tasklet with the
> >> > dma_chan, we could go one step further and add the
> >> > work_queue structure directly into struct dma_chan,
> >> > with the wrapper operating on the dma_chan rather than
> >> > the work_queue.
> >>
> >> I think that is very great idea. having this wrapped in dma_chan would
> >> be very good way as well
> >>
> >> Am not sure if Allen is up for it :-)
> >
> > Thanks Arnd, I know we did speak about this at LPC. I did start
> > working on using completion. I dropped it as I thought it would
> > be easier to move to workqueues.
>
> It's definitely easier to do the workqueue conversion as a first
> step, and I agree adding support for the completion right away is
> probably too much. Moving the work_struct into the dma_chan
> is probably not too hard though, if you leave your current
> approach for the cases where the tasklet is part of the
> dma_dev rather than the dma_chan.
>

Alright, I will work on moving the work_struct into the dma_chan,
leave the dma_dev as is (using BH workqueues), and post an RFC.
Once that is reviewed, I can move on to the next step.

Thank you.

- Allen

2024-03-29 16:52:40

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH 2/9] dma: Convert from tasklet to BH workqueue

On 28-03-24, 13:01, Allen wrote:
> > >> > Since almost every driver associates the tasklet with the
> > >> > dma_chan, we could go one step further and add the
> > >> > work_queue structure directly into struct dma_chan,
> > >> > with the wrapper operating on the dma_chan rather than
> > >> > the work_queue.
> > >>
> > >> I think that is very great idea. having this wrapped in dma_chan would
> > >> be very good way as well
> > >>
> > >> Am not sure if Allen is up for it :-)
> > >
> > > Thanks Arnd, I know we did speak about this at LPC. I did start
> > > working on using completion. I dropped it as I thought it would
> > > be easier to move to workqueues.
> >
> > It's definitely easier to do the workqueue conversion as a first
> > step, and I agree adding support for the completion right away is
> > probably too much. Moving the work_struct into the dma_chan
> > is probably not too hard though, if you leave your current
> > approach for the cases where the tasklet is part of the
> > dma_dev rather than the dma_chan.
> >
>
> Alright, I will work on moving work_struck into the dma_chan and
> leave the dma_dev as is (using bh workqueues) and post a RFC.
> Once reviewed, I could move to the next step.

That might be better from a performance PoV, but the current design is a
global tasklet and not a per-chan one... We would need to carefully
review and test this, for sure.

--
~Vinod

2024-03-29 16:53:15

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH 2/9] dma: Convert from tasklet to BH workqueue

On 28-03-24, 12:39, Allen wrote:

> > I think that is very great idea. having this wrapped in dma_chan would
> > be very good way as well
> >
> > Am not sure if Allen is up for it :-)
>
> Thanks Arnd, I know we did speak about this at LPC. I did start
> working on using completion. I dropped it as I thought it would
> be easier to move to workqueues.
>
> Vinod, I would like to give this a shot and put out a RFC, I would
> really appreciate review and feedback.

Sounds like a good plan to me

--
~Vinod

2024-04-02 10:17:26

by Ulf Hansson

[permalink] [raw]
Subject: Re: [PATCH 9/9] mmc: Convert from tasklet to BH workqueue

On Thu, 28 Mar 2024 at 17:21, Tejun Heo <[email protected]> wrote:
>
> Hello,
>
> On Thu, Mar 28, 2024 at 01:53:25PM +0100, Ulf Hansson wrote:
> > At this point we have suggested that drivers switch to using threaded
> > irq handlers (and regular work queues too, if needed). That said,
> > what's the benefit of using the BH work queue?
>
> BH workqueues should behave about the same as tasklets, which have a more
> limited interface and are subtly broken in an expensive-to-fix way (around
> freeing an in-flight work item), so the plan is to replace tasklets with BH
> workqueues and remove tasklets from the kernel.

Seems like a good approach!

>
> The [dis]advantages of BH workqueues over threaded IRQs or regular threaded
> workqueues are the same as when you compare them to tasklets. No thread
> switching overhead, so latencies will be a bit tighter. Whether that
> actually matters really depends on the use case. Here, the biggest advantage
> is that it's mostly interchangeable with tasklets and can thus be swapped
> easily.

Right, thanks for clarifying!

However, the main question then is if/when it makes sense to use the
BH workqueue for an mmc host driver. Unless there are some HW
limitations, a threaded irq handler should be sufficient, I think.

That said, moving to threaded irq handlers is a different topic and
doesn't prevent us from moving to BH workqueues as it seems like a
step in the right direction.

Kind regards
Uffe
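
For reference, the shape of the conversion applied throughout the series, as
visible in the hunks quoted elsewhere in this thread; driver and function
names below are placeholders, and this is only a sketch:

    #include <linux/interrupt.h>
    #include <linux/workqueue.h>

    struct foo_dev {
            struct work_struct irq_work;    /* was: struct tasklet_struct irq_tasklet */
    };

    static void foo_irq_work(struct work_struct *t)
    {
            struct foo_dev *dev = container_of(t, struct foo_dev, irq_work);

            /* bottom-half processing for 'dev', now executed in BH context */
            (void)dev;
    }

    static void foo_setup(struct foo_dev *dev)
    {
            INIT_WORK(&dev->irq_work, foo_irq_work);        /* was: tasklet_setup() */
    }

    static irqreturn_t foo_irq(int irq, void *data)
    {
            struct foo_dev *dev = data;

            queue_work(system_bh_wq, &dev->irq_work);       /* was: tasklet_schedule() */
            return IRQ_HANDLED;
    }

    static void foo_teardown(struct foo_dev *dev)
    {
            cancel_work_sync(&dev->irq_work);               /* was: tasklet_kill() */
    }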

2024-04-02 12:25:59

by Linus Walleij

[permalink] [raw]
Subject: Re: [PATCH 2/9] dma: Convert from tasklet to BH workqueue

Hi Allen,

thanks for your patch!

On Wed, Mar 27, 2024 at 5:03 PM Allen Pais <[email protected]> wrote:

> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/dma/* from tasklet to BH workqueue.
>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>
> Signed-off-by: Allen Pais <[email protected]>
(...)
> diff --git a/drivers/dma/ste_dma40.c b/drivers/dma/ste_dma40.c
(...)
> if (d40c->pending_tx)
> - tasklet_schedule(&d40c->tasklet);
> + queue_work(system_bh_wq, &d40c->work);

Why is "my" driver not allowed to use system_bh_highpri_wq?

I can't see the reasoning behind some drivers using system_bh_wq
and others being highpri?

Given the DMA usecase I would expect them all to be high prio.

Yours,
Linus Walleij

2024-04-02 12:50:11

by Alexandra Winter

[permalink] [raw]
Subject: Re: [PATCH 7/9] s390: Convert from tasklet to BH workqueue



On 27.03.24 17:03, Allen Pais wrote:
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/infiniband/* from tasklet to BH workqueue.
>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>
> Note: Not tested. Please test/review.
>
> Signed-off-by: Allen Pais <[email protected]>
> ---
> drivers/s390/block/dasd.c | 42 ++++++++++++------------
> drivers/s390/block/dasd_int.h | 10 +++---
> drivers/s390/char/con3270.c | 27 ++++++++--------
> drivers/s390/crypto/ap_bus.c | 24 +++++++-------
> drivers/s390/crypto/ap_bus.h | 2 +-
> drivers/s390/crypto/zcrypt_msgtype50.c | 2 +-
> drivers/s390/crypto/zcrypt_msgtype6.c | 4 +--
> drivers/s390/net/ctcm_fsms.c | 4 +--
> drivers/s390/net/ctcm_main.c | 15 ++++-----
> drivers/s390/net/ctcm_main.h | 5 +--
> drivers/s390/net/ctcm_mpc.c | 12 +++----
> drivers/s390/net/ctcm_mpc.h | 7 ++--
> drivers/s390/net/lcs.c | 26 +++++++--------
> drivers/s390/net/lcs.h | 2 +-
> drivers/s390/net/qeth_core_main.c | 2 +-
> drivers/s390/scsi/zfcp_qdio.c | 45 +++++++++++++-------------
> drivers/s390/scsi/zfcp_qdio.h | 9 +++---
> 17 files changed, 117 insertions(+), 121 deletions(-)
>


We're looking into the best way to test this.

For drivers/s390/net/ctcm* and drivers/s390/net/lcs*:
Acked-by: Alexandra Winter <[email protected]>


[...]
> diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
> index a0cce6872075..10ea95abc753 100644
> --- a/drivers/s390/net/qeth_core_main.c
> +++ b/drivers/s390/net/qeth_core_main.c
> @@ -2911,7 +2911,7 @@ static int qeth_init_input_buffer(struct qeth_card *card,
> }
>
> /*
> - * since the buffer is accessed only from the input_tasklet
> + * since the buffer is accessed only from the input_work
> * there shouldn't be a need to synchronize; also, since we use
> * the QETH_IN_BUF_REQUEUE_THRESHOLD we should never run out off
> * buffers

I propose to delete the whole comment block. There have been many changes and
I don't think it is helpful for the current qeth driver.


2024-04-02 13:17:07

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH 2/9] dma: Convert from tasklet to BH workqueue

On 02-04-24, 14:25, Linus Walleij wrote:
> Hi Allen,
>
> thanks for your patch!
>
> On Wed, Mar 27, 2024 at 5:03 PM Allen Pais <[email protected]> wrote:
>
> > The only generic interface to execute asynchronously in the BH context is
> > tasklet; however, it's marked deprecated and has some design flaws. To
> > replace tasklets, BH workqueue support was recently added. A BH workqueue
> > behaves similarly to regular workqueues except that the queued work items
> > are executed in the BH context.
> >
> > This patch converts drivers/dma/* from tasklet to BH workqueue.
> >
> > Based on the work done by Tejun Heo <[email protected]>
> > Branch: git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
> >
> > Signed-off-by: Allen Pais <[email protected]>
> (...)
> > diff --git a/drivers/dma/ste_dma40.c b/drivers/dma/ste_dma40.c
> (...)
> > if (d40c->pending_tx)
> > - tasklet_schedule(&d40c->tasklet);
> > + queue_work(system_bh_wq, &d40c->work);
>
> Why is "my" driver not allowed to use system_bh_highpri_wq?
>
> > I can't see the reasoning behind some drivers using system_bh_wq
> and others being highpri?
>
> Given the DMA usecase I would expect them all to be high prio.

It didn't use tasklet_hi_schedule(); I guess Allen has done the
conversion of tasklet_schedule -> system_bh_wq and tasklet_hi_schedule
-> system_bh_highpri_wq

Anyway, we are going to use a dma queue, so performance should be better.

--
~Vinod
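
In other words, the mapping described above appears to be the following;
foo_chan and foo_schedule are placeholder names used only for this sketch:

    #include <linux/workqueue.h>

    struct foo_chan {
            struct work_struct work;        /* placeholder, mirrors d40c->work above */
    };

    static void foo_schedule(struct foo_chan *c, bool high_prio)
    {
            /* tasklet_schedule()    -> system_bh_wq         */
            /* tasklet_hi_schedule() -> system_bh_highpri_wq */
            queue_work(high_prio ? system_bh_highpri_wq : system_bh_wq, &c->work);
    }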

2024-04-03 13:35:45

by Allen

[permalink] [raw]
Subject: Re: [PATCH 7/9] s390: Convert from tasklet to BH workqueue

> >
> > Signed-off-by: Allen Pais <[email protected]>
> > ---
> > drivers/s390/block/dasd.c | 42 ++++++++++++------------
> > drivers/s390/block/dasd_int.h | 10 +++---
> > drivers/s390/char/con3270.c | 27 ++++++++--------
> > drivers/s390/crypto/ap_bus.c | 24 +++++++-------
> > drivers/s390/crypto/ap_bus.h | 2 +-
> > drivers/s390/crypto/zcrypt_msgtype50.c | 2 +-
> > drivers/s390/crypto/zcrypt_msgtype6.c | 4 +--
> > drivers/s390/net/ctcm_fsms.c | 4 +--
> > drivers/s390/net/ctcm_main.c | 15 ++++-----
> > drivers/s390/net/ctcm_main.h | 5 +--
> > drivers/s390/net/ctcm_mpc.c | 12 +++----
> > drivers/s390/net/ctcm_mpc.h | 7 ++--
> > drivers/s390/net/lcs.c | 26 +++++++--------
> > drivers/s390/net/lcs.h | 2 +-
> > drivers/s390/net/qeth_core_main.c | 2 +-
> > drivers/s390/scsi/zfcp_qdio.c | 45 +++++++++++++-------------
> > drivers/s390/scsi/zfcp_qdio.h | 9 +++---
> > 17 files changed, 117 insertions(+), 121 deletions(-)
> >
>
>
> We're looking into the best way to test this.
>
> For drivers/s390/net/ctcm* and drivers/s390/net/lcs*:
> Acked-by: Alexandra Winter <[email protected]>

Thank you very much.

>
>
> [...]
> > diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
> > index a0cce6872075..10ea95abc753 100644
> > --- a/drivers/s390/net/qeth_core_main.c
> > +++ b/drivers/s390/net/qeth_core_main.c
> > @@ -2911,7 +2911,7 @@ static int qeth_init_input_buffer(struct qeth_card *card,
> > }
> >
> > /*
> > - * since the buffer is accessed only from the input_tasklet
> > + * since the buffer is accessed only from the input_work
> > * there shouldn't be a need to synchronize; also, since we use
> > * the QETH_IN_BUF_REQUEUE_THRESHOLD we should never run out off
> > * buffers
>
> I propose to delete the whole comment block. There have been many changes and
> I don't think it is helpful for the current qeth driver.


Sure, I will have it fixed in v2.

- Allen

2024-04-03 16:46:10

by Allen

[permalink] [raw]
Subject: Re: [PATCH 1/9] hyperv: Convert from tasklet to BH workqueue

> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/hv/* from tasklet to BH workqueue.
>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>
> Signed-off-by: Allen Pais <[email protected]>
> ---
> drivers/hv/channel.c | 8 ++++----
> drivers/hv/channel_mgmt.c | 5 ++---
> drivers/hv/connection.c | 9 +++++----
> drivers/hv/hv.c | 3 +--
> drivers/hv/hv_balloon.c | 4 ++--
> drivers/hv/hv_fcopy.c | 8 ++++----
> drivers/hv/hv_kvp.c | 8 ++++----
> drivers/hv/hv_snapshot.c | 8 ++++----
> drivers/hv/hyperv_vmbus.h | 9 +++++----
> drivers/hv/vmbus_drv.c | 19 ++++++++++---------
> include/linux/hyperv.h | 2 +-
> 11 files changed, 42 insertions(+), 41 deletions(-)

Wei,

I need to send out a v2 as I did not include the second patch that
updates drivers/pci/controller/pci-hyperv.c

Thanks.

2024-04-05 09:40:45

by Michał Mirosław

[permalink] [raw]
Subject: Re: [PATCH 9/9] mmc: Convert from tasklet to BH workqueue

On Wed, Mar 27, 2024 at 04:03:14PM +0000, Allen Pais wrote:
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/infiniband/* from tasklet to BH workqueue.
>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>
> Signed-off-by: Allen Pais <[email protected]>
> ---
[...]
> drivers/mmc/host/cb710-mmc.c | 15 ++--
> drivers/mmc/host/cb710-mmc.h | 3 +-
[...]

Acked-by: Michał Mirosław <[email protected]>

2024-04-07 18:57:18

by Zhu Yanjun

[permalink] [raw]
Subject: Re: [PATCH 3/9] IB: Convert from tasklet to BH workqueue

On 2024/3/27 17:03, Allen Pais wrote:
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/infiniband/* from tasklet to BH workqueue.
>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10

I ran some simple tests, and I cannot find any difference in latency or
throughput on RoCEv2 devices.

Has anyone else run tests with these patches on IB? Any difference in
latency or throughput?

Thanks,
Zhu Yanjun

>
> Signed-off-by: Allen Pais <[email protected]>
> ---
> drivers/infiniband/hw/bnxt_re/bnxt_re.h | 3 +-
> drivers/infiniband/hw/bnxt_re/qplib_fp.c | 21 ++++++------
> drivers/infiniband/hw/bnxt_re/qplib_fp.h | 2 +-
> drivers/infiniband/hw/bnxt_re/qplib_rcfw.c | 25 ++++++++-------
> drivers/infiniband/hw/bnxt_re/qplib_rcfw.h | 2 +-
> drivers/infiniband/hw/erdma/erdma.h | 3 +-
> drivers/infiniband/hw/erdma/erdma_eq.c | 11 ++++---
> drivers/infiniband/hw/hfi1/rc.c | 2 +-
> drivers/infiniband/hw/hfi1/sdma.c | 37 +++++++++++-----------
> drivers/infiniband/hw/hfi1/sdma.h | 9 +++---
> drivers/infiniband/hw/hfi1/tid_rdma.c | 6 ++--
> drivers/infiniband/hw/irdma/ctrl.c | 2 +-
> drivers/infiniband/hw/irdma/hw.c | 24 +++++++-------
> drivers/infiniband/hw/irdma/main.h | 5 +--
> drivers/infiniband/hw/qib/qib.h | 7 ++--
> drivers/infiniband/hw/qib/qib_iba7322.c | 9 +++---
> drivers/infiniband/hw/qib/qib_rc.c | 16 +++++-----
> drivers/infiniband/hw/qib/qib_ruc.c | 4 +--
> drivers/infiniband/hw/qib/qib_sdma.c | 11 ++++---
> drivers/infiniband/sw/rdmavt/qp.c | 2 +-
> 20 files changed, 106 insertions(+), 95 deletions(-)
>
> diff --git a/drivers/infiniband/hw/bnxt_re/bnxt_re.h b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
> index 9dca451ed522..f511c8415806 100644
> --- a/drivers/infiniband/hw/bnxt_re/bnxt_re.h
> +++ b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
> @@ -42,6 +42,7 @@
> #include <rdma/uverbs_ioctl.h>
> #include "hw_counters.h"
> #include <linux/hashtable.h>
> +#include <linux/workqueue.h>
> #define ROCE_DRV_MODULE_NAME "bnxt_re"
>
> #define BNXT_RE_DESC "Broadcom NetXtreme-C/E RoCE Driver"
> @@ -162,7 +163,7 @@ struct bnxt_re_dev {
> u8 cur_prio_map;
>
> /* FP Notification Queue (CQ & SRQ) */
> - struct tasklet_struct nq_task;
> + struct work_struct nq_work;
>
> /* RCFW Channel */
> struct bnxt_qplib_rcfw rcfw;
> diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
> index 439d0c7c5d0c..052906982cdf 100644
> --- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
> +++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
> @@ -46,6 +46,7 @@
> #include <linux/delay.h>
> #include <linux/prefetch.h>
> #include <linux/if_ether.h>
> +#include <linux/workqueue.h>
> #include <rdma/ib_mad.h>
>
> #include "roce_hsi.h"
> @@ -294,9 +295,9 @@ static void __wait_for_all_nqes(struct bnxt_qplib_cq *cq, u16 cnq_events)
> }
> }
>
> -static void bnxt_qplib_service_nq(struct tasklet_struct *t)
> +static void bnxt_qplib_service_nq(struct work_struct *t)
> {
> - struct bnxt_qplib_nq *nq = from_tasklet(nq, t, nq_tasklet);
> + struct bnxt_qplib_nq *nq = from_work(nq, t, nq_work);
> struct bnxt_qplib_hwq *hwq = &nq->hwq;
> struct bnxt_qplib_cq *cq;
> int budget = nq->budget;
> @@ -394,7 +395,7 @@ void bnxt_re_synchronize_nq(struct bnxt_qplib_nq *nq)
> int budget = nq->budget;
>
> nq->budget = nq->hwq.max_elements;
> - bnxt_qplib_service_nq(&nq->nq_tasklet);
> + bnxt_qplib_service_nq(&nq->nq_work);
> nq->budget = budget;
> }
>
> @@ -409,7 +410,7 @@ static irqreturn_t bnxt_qplib_nq_irq(int irq, void *dev_instance)
> prefetch(bnxt_qplib_get_qe(hwq, sw_cons, NULL));
>
> /* Fan out to CPU affinitized kthreads? */
> - tasklet_schedule(&nq->nq_tasklet);
> + queue_work(system_bh_wq, &nq->nq_work);
>
> return IRQ_HANDLED;
> }
> @@ -430,8 +431,8 @@ void bnxt_qplib_nq_stop_irq(struct bnxt_qplib_nq *nq, bool kill)
> nq->name = NULL;
>
> if (kill)
> - tasklet_kill(&nq->nq_tasklet);
> - tasklet_disable(&nq->nq_tasklet);
> + cancel_work_sync(&nq->nq_work);
> + disable_work_sync(&nq->nq_work);
> }
>
> void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
> @@ -465,9 +466,9 @@ int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,
>
> nq->msix_vec = msix_vector;
> if (need_init)
> - tasklet_setup(&nq->nq_tasklet, bnxt_qplib_service_nq);
> + INIT_WORK(&nq->nq_work, bnxt_qplib_service_nq);
> else
> - tasklet_enable(&nq->nq_tasklet);
> + enable_and_queue_work(system_bh_wq, &nq->nq_work);
>
> nq->name = kasprintf(GFP_KERNEL, "bnxt_re-nq-%d@pci:%s",
> nq_indx, pci_name(res->pdev));
> @@ -477,7 +478,7 @@ int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,
> if (rc) {
> kfree(nq->name);
> nq->name = NULL;
> - tasklet_disable(&nq->nq_tasklet);
> + disable_work_sync(&nq->nq_work);
> return rc;
> }
>
> @@ -541,7 +542,7 @@ int bnxt_qplib_enable_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq,
> nq->cqn_handler = cqn_handler;
> nq->srqn_handler = srqn_handler;
>
> - /* Have a task to schedule CQ notifiers in post send case */
> + /* Have a work to schedule CQ notifiers in post send case */
> nq->cqn_wq = create_singlethread_workqueue("bnxt_qplib_nq");
> if (!nq->cqn_wq)
> return -ENOMEM;
> diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
> index 7fd4506b3584..6ee3e501d136 100644
> --- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
> +++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
> @@ -494,7 +494,7 @@ struct bnxt_qplib_nq {
> u16 ring_id;
> int msix_vec;
> cpumask_t mask;
> - struct tasklet_struct nq_tasklet;
> + struct work_struct nq_work;
> bool requested;
> int budget;
>
> diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
> index 3ffaef0c2651..2fba712d88db 100644
> --- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
> +++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
> @@ -43,6 +43,7 @@
> #include <linux/pci.h>
> #include <linux/prefetch.h>
> #include <linux/delay.h>
> +#include <linux/workqueue.h>
>
> #include "roce_hsi.h"
> #include "qplib_res.h"
> @@ -51,7 +52,7 @@
> #include "qplib_fp.h"
> #include "qplib_tlv.h"
>
> -static void bnxt_qplib_service_creq(struct tasklet_struct *t);
> +static void bnxt_qplib_service_creq(struct work_struct *t);
>
> /**
> * bnxt_qplib_map_rc - map return type based on opcode
> @@ -165,7 +166,7 @@ static int __wait_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie)
> if (!crsqe->is_in_used)
> return 0;
>
> - bnxt_qplib_service_creq(&rcfw->creq.creq_tasklet);
> + bnxt_qplib_service_creq(&rcfw->creq.creq_work);
>
> if (!crsqe->is_in_used)
> return 0;
> @@ -206,7 +207,7 @@ static int __block_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie)
>
> udelay(1);
>
> - bnxt_qplib_service_creq(&rcfw->creq.creq_tasklet);
> + bnxt_qplib_service_creq(&rcfw->creq.creq_work);
> if (!crsqe->is_in_used)
> return 0;
>
> @@ -403,7 +404,7 @@ static int __poll_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie)
>
> usleep_range(1000, 1001);
>
> - bnxt_qplib_service_creq(&rcfw->creq.creq_tasklet);
> + bnxt_qplib_service_creq(&rcfw->creq.creq_work);
> if (!crsqe->is_in_used)
> return 0;
> if (jiffies_to_msecs(jiffies - issue_time) >
> @@ -727,9 +728,9 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
> }
>
> /* SP - CREQ Completion handlers */
> -static void bnxt_qplib_service_creq(struct tasklet_struct *t)
> +static void bnxt_qplib_service_creq(struct work_struct *t)
> {
> - struct bnxt_qplib_rcfw *rcfw = from_tasklet(rcfw, t, creq.creq_tasklet);
> + struct bnxt_qplib_rcfw *rcfw = from_work(rcfw, t, creq.creq_work);
> struct bnxt_qplib_creq_ctx *creq = &rcfw->creq;
> u32 type, budget = CREQ_ENTRY_POLL_BUDGET;
> struct bnxt_qplib_hwq *hwq = &creq->hwq;
> @@ -800,7 +801,7 @@ static irqreturn_t bnxt_qplib_creq_irq(int irq, void *dev_instance)
> sw_cons = HWQ_CMP(hwq->cons, hwq);
> prefetch(bnxt_qplib_get_qe(hwq, sw_cons, NULL));
>
> - tasklet_schedule(&creq->creq_tasklet);
> + queue_work(system_bh_wq, &creq->creq_work);
>
> return IRQ_HANDLED;
> }
> @@ -1007,8 +1008,8 @@ void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw, bool kill)
> creq->irq_name = NULL;
> atomic_set(&rcfw->rcfw_intr_enabled, 0);
> if (kill)
> - tasklet_kill(&creq->creq_tasklet);
> - tasklet_disable(&creq->creq_tasklet);
> + cancel_work_sync(&creq->creq_work);
> + disable_work_sync(&creq->creq_work);
> }
>
> void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw)
> @@ -1045,9 +1046,9 @@ int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector,
>
> creq->msix_vec = msix_vector;
> if (need_init)
> - tasklet_setup(&creq->creq_tasklet, bnxt_qplib_service_creq);
> + INIT_WORK(&creq->creq_work, bnxt_qplib_service_creq);
> else
> - tasklet_enable(&creq->creq_tasklet);
> + enable_and_queue_work(system_bh_wq, &creq->creq_work);
>
> creq->irq_name = kasprintf(GFP_KERNEL, "bnxt_re-creq@pci:%s",
> pci_name(res->pdev));
> @@ -1058,7 +1059,7 @@ int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector,
> if (rc) {
> kfree(creq->irq_name);
> creq->irq_name = NULL;
> - tasklet_disable(&creq->creq_tasklet);
> + disable_work_sync(&creq->creq_work);
> return rc;
> }
> creq->requested = true;
> diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
> index 45996e60a0d0..8efa474fcf3f 100644
> --- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
> +++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
> @@ -207,7 +207,7 @@ struct bnxt_qplib_creq_ctx {
> struct bnxt_qplib_hwq hwq;
> struct bnxt_qplib_creq_db creq_db;
> struct bnxt_qplib_creq_stat stats;
> - struct tasklet_struct creq_tasklet;
> + struct work_struct creq_work;
> aeq_handler_t aeq_handler;
> u16 ring_id;
> int msix_vec;
> diff --git a/drivers/infiniband/hw/erdma/erdma.h b/drivers/infiniband/hw/erdma/erdma.h
> index 5df401a30cb9..9a47c1432c27 100644
> --- a/drivers/infiniband/hw/erdma/erdma.h
> +++ b/drivers/infiniband/hw/erdma/erdma.h
> @@ -11,6 +11,7 @@
> #include <linux/netdevice.h>
> #include <linux/pci.h>
> #include <linux/xarray.h>
> +#include <linux/workqueue.h>
> #include <rdma/ib_verbs.h>
>
> #include "erdma_hw.h"
> @@ -161,7 +162,7 @@ struct erdma_eq_cb {
> void *dev; /* All EQs use this fields to get erdma_dev struct */
> struct erdma_irq irq;
> struct erdma_eq eq;
> - struct tasklet_struct tasklet;
> + struct work_struct work;
> };
>
> struct erdma_resource_cb {
> diff --git a/drivers/infiniband/hw/erdma/erdma_eq.c b/drivers/infiniband/hw/erdma/erdma_eq.c
> index ea47cb21fdb8..252906fd73b0 100644
> --- a/drivers/infiniband/hw/erdma/erdma_eq.c
> +++ b/drivers/infiniband/hw/erdma/erdma_eq.c
> @@ -160,14 +160,16 @@ static irqreturn_t erdma_intr_ceq_handler(int irq, void *data)
> {
> struct erdma_eq_cb *ceq_cb = data;
>
> - tasklet_schedule(&ceq_cb->tasklet);
> + queue_work(system_bh_wq, &ceq_cb->work);
>
> return IRQ_HANDLED;
> }
>
> -static void erdma_intr_ceq_task(unsigned long data)
> +static void erdma_intr_ceq_task(struct work_struct *t)
> {
> - erdma_ceq_completion_handler((struct erdma_eq_cb *)data);
> + struct erdma_eq_cb *ceq_cb = from_work(ceq_cb, t, work);
> +
> + erdma_ceq_completion_handler(ceq_cb);
> }
>
> static int erdma_set_ceq_irq(struct erdma_dev *dev, u16 ceqn)
> @@ -179,8 +181,7 @@ static int erdma_set_ceq_irq(struct erdma_dev *dev, u16 ceqn)
> pci_name(dev->pdev));
> eqc->irq.msix_vector = pci_irq_vector(dev->pdev, ceqn + 1);
>
> - tasklet_init(&dev->ceqs[ceqn].tasklet, erdma_intr_ceq_task,
> - (unsigned long)&dev->ceqs[ceqn]);
> + INIT_WORK(&dev->ceqs[ceqn].work, erdma_intr_ceq_task);
>
> cpumask_set_cpu(cpumask_local_spread(ceqn + 1, dev->attrs.numa_node),
> &eqc->irq.affinity_hint_mask);
> diff --git a/drivers/infiniband/hw/hfi1/rc.c b/drivers/infiniband/hw/hfi1/rc.c
> index b36242c9d42c..ec19ddbfdacb 100644
> --- a/drivers/infiniband/hw/hfi1/rc.c
> +++ b/drivers/infiniband/hw/hfi1/rc.c
> @@ -1210,7 +1210,7 @@ static inline void hfi1_queue_rc_ack(struct hfi1_packet *packet, bool is_fecn)
> if (is_fecn)
> qp->s_flags |= RVT_S_ECN;
>
> - /* Schedule the send tasklet. */
> + /* Schedule the send work. */
> hfi1_schedule_send(qp);
> unlock:
> spin_unlock_irqrestore(&qp->s_lock, flags);
> diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
> index b67d23b1f286..5e1a1dd45511 100644
> --- a/drivers/infiniband/hw/hfi1/sdma.c
> +++ b/drivers/infiniband/hw/hfi1/sdma.c
> @@ -11,6 +11,7 @@
> #include <linux/timer.h>
> #include <linux/vmalloc.h>
> #include <linux/highmem.h>
> +#include <linux/workqueue.h>
>
> #include "hfi.h"
> #include "common.h"
> @@ -190,11 +191,11 @@ static const struct sdma_set_state_action sdma_action_table[] = {
> static void sdma_complete(struct kref *);
> static void sdma_finalput(struct sdma_state *);
> static void sdma_get(struct sdma_state *);
> -static void sdma_hw_clean_up_task(struct tasklet_struct *);
> +static void sdma_hw_clean_up_task(struct work_struct *);
> static void sdma_put(struct sdma_state *);
> static void sdma_set_state(struct sdma_engine *, enum sdma_states);
> static void sdma_start_hw_clean_up(struct sdma_engine *);
> -static void sdma_sw_clean_up_task(struct tasklet_struct *);
> +static void sdma_sw_clean_up_task(struct work_struct *);
> static void sdma_sendctrl(struct sdma_engine *, unsigned);
> static void init_sdma_regs(struct sdma_engine *, u32, uint);
> static void sdma_process_event(
> @@ -503,9 +504,9 @@ static void sdma_err_progress_check(struct timer_list *t)
> schedule_work(&sde->err_halt_worker);
> }
>
> -static void sdma_hw_clean_up_task(struct tasklet_struct *t)
> +static void sdma_hw_clean_up_task(struct work_struct *t)
> {
> - struct sdma_engine *sde = from_tasklet(sde, t,
> + struct sdma_engine *sde = from_work(sde, t,
> sdma_hw_clean_up_task);
> u64 statuscsr;
>
> @@ -563,9 +564,9 @@ static void sdma_flush_descq(struct sdma_engine *sde)
> sdma_desc_avail(sde, sdma_descq_freecnt(sde));
> }
>
> -static void sdma_sw_clean_up_task(struct tasklet_struct *t)
> +static void sdma_sw_clean_up_task(struct work_struct *t)
> {
> - struct sdma_engine *sde = from_tasklet(sde, t, sdma_sw_clean_up_task);
> + struct sdma_engine *sde = from_work(sde, t, sdma_sw_clean_up_task);
> unsigned long flags;
>
> spin_lock_irqsave(&sde->tail_lock, flags);
> @@ -624,7 +625,7 @@ static void sdma_sw_tear_down(struct sdma_engine *sde)
>
> static void sdma_start_hw_clean_up(struct sdma_engine *sde)
> {
> - tasklet_hi_schedule(&sde->sdma_hw_clean_up_task);
> + queue_work(system_bh_highpri_wq, &sde->sdma_hw_clean_up_task);
> }
>
> static void sdma_set_state(struct sdma_engine *sde,
> @@ -1415,9 +1416,9 @@ int sdma_init(struct hfi1_devdata *dd, u8 port)
> sde->tail_csr =
> get_kctxt_csr_addr(dd, this_idx, SD(TAIL));
>
> - tasklet_setup(&sde->sdma_hw_clean_up_task,
> + INIT_WORK(&sde->sdma_hw_clean_up_task,
> sdma_hw_clean_up_task);
> - tasklet_setup(&sde->sdma_sw_clean_up_task,
> + INIT_WORK(&sde->sdma_sw_clean_up_task,
> sdma_sw_clean_up_task);
> INIT_WORK(&sde->err_halt_worker, sdma_err_halt_wait);
> INIT_WORK(&sde->flush_worker, sdma_field_flush);
> @@ -2741,7 +2742,7 @@ static void __sdma_process_event(struct sdma_engine *sde,
> switch (event) {
> case sdma_event_e00_go_hw_down:
> sdma_set_state(sde, sdma_state_s00_hw_down);
> - tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
> + queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
> break;
> case sdma_event_e10_go_hw_start:
> break;
> @@ -2783,13 +2784,13 @@ static void __sdma_process_event(struct sdma_engine *sde,
> switch (event) {
> case sdma_event_e00_go_hw_down:
> sdma_set_state(sde, sdma_state_s00_hw_down);
> - tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
> + queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
> break;
> case sdma_event_e10_go_hw_start:
> break;
> case sdma_event_e15_hw_halt_done:
> sdma_set_state(sde, sdma_state_s30_sw_clean_up_wait);
> - tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
> + queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
> break;
> case sdma_event_e25_hw_clean_up_done:
> break;
> @@ -2824,13 +2825,13 @@ static void __sdma_process_event(struct sdma_engine *sde,
> switch (event) {
> case sdma_event_e00_go_hw_down:
> sdma_set_state(sde, sdma_state_s00_hw_down);
> - tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
> + queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
> break;
> case sdma_event_e10_go_hw_start:
> break;
> case sdma_event_e15_hw_halt_done:
> sdma_set_state(sde, sdma_state_s30_sw_clean_up_wait);
> - tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
> + queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
> break;
> case sdma_event_e25_hw_clean_up_done:
> break;
> @@ -2864,7 +2865,7 @@ static void __sdma_process_event(struct sdma_engine *sde,
> switch (event) {
> case sdma_event_e00_go_hw_down:
> sdma_set_state(sde, sdma_state_s00_hw_down);
> - tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
> + queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
> break;
> case sdma_event_e10_go_hw_start:
> break;
> @@ -2888,7 +2889,7 @@ static void __sdma_process_event(struct sdma_engine *sde,
> break;
> case sdma_event_e81_hw_frozen:
> sdma_set_state(sde, sdma_state_s82_freeze_sw_clean);
> - tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
> + queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
> break;
> case sdma_event_e82_hw_unfreeze:
> break;
> @@ -2903,7 +2904,7 @@ static void __sdma_process_event(struct sdma_engine *sde,
> switch (event) {
> case sdma_event_e00_go_hw_down:
> sdma_set_state(sde, sdma_state_s00_hw_down);
> - tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
> + queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
> break;
> case sdma_event_e10_go_hw_start:
> break;
> @@ -2947,7 +2948,7 @@ static void __sdma_process_event(struct sdma_engine *sde,
> switch (event) {
> case sdma_event_e00_go_hw_down:
> sdma_set_state(sde, sdma_state_s00_hw_down);
> - tasklet_hi_schedule(&sde->sdma_sw_clean_up_task);
> + queue_work(system_bh_highpri_wq, &sde->sdma_sw_clean_up_task);
> break;
> case sdma_event_e10_go_hw_start:
> break;
> diff --git a/drivers/infiniband/hw/hfi1/sdma.h b/drivers/infiniband/hw/hfi1/sdma.h
> index d77246b48434..3f047260cebe 100644
> --- a/drivers/infiniband/hw/hfi1/sdma.h
> +++ b/drivers/infiniband/hw/hfi1/sdma.h
> @@ -11,6 +11,7 @@
> #include <asm/byteorder.h>
> #include <linux/workqueue.h>
> #include <linux/rculist.h>
> +#include <linux/workqueue.h>
>
> #include "hfi.h"
> #include "verbs.h"
> @@ -346,11 +347,11 @@ struct sdma_engine {
>
> /* CONFIG SDMA for now, just blindly duplicate */
> /* private: */
> - struct tasklet_struct sdma_hw_clean_up_task
> + struct work_struct sdma_hw_clean_up_task
> ____cacheline_aligned_in_smp;
>
> /* private: */
> - struct tasklet_struct sdma_sw_clean_up_task
> + struct work_struct sdma_sw_clean_up_task
> ____cacheline_aligned_in_smp;
> /* private: */
> struct work_struct err_halt_worker;
> @@ -471,7 +472,7 @@ void _sdma_txreq_ahgadd(
> * Completions of submitted requests can be gotten on selected
> * txreqs by giving a completion routine callback to sdma_txinit() or
> * sdma_txinit_ahg(). The environment in which the callback runs
> - * can be from an ISR, a tasklet, or a thread, so no sleeping
> + * can be from an ISR, a work, or a thread, so no sleeping
> * kernel routines can be used. Aspects of the sdma ring may
> * be locked so care should be taken with locking.
> *
> @@ -551,7 +552,7 @@ static inline int sdma_txinit_ahg(
> * Completions of submitted requests can be gotten on selected
> * txreqs by giving a completion routine callback to sdma_txinit() or
> * sdma_txinit_ahg(). The environment in which the callback runs
> - * can be from an ISR, a tasklet, or a thread, so no sleeping
> + * can be from an ISR, a work, or a thread, so no sleeping
> * kernel routines can be used. The head size of the sdma ring may
> * be locked so care should be taken with locking.
> *
> diff --git a/drivers/infiniband/hw/hfi1/tid_rdma.c b/drivers/infiniband/hw/hfi1/tid_rdma.c
> index c465966a1d9c..31cb5a092f42 100644
> --- a/drivers/infiniband/hw/hfi1/tid_rdma.c
> +++ b/drivers/infiniband/hw/hfi1/tid_rdma.c
> @@ -2316,7 +2316,7 @@ void hfi1_rc_rcv_tid_rdma_read_req(struct hfi1_packet *packet)
> */
> qpriv->r_tid_alloc = qp->r_head_ack_queue;
>
> - /* Schedule the send tasklet. */
> + /* Schedule the send work. */
> qp->s_flags |= RVT_S_RESP_PENDING;
> if (fecn)
> qp->s_flags |= RVT_S_ECN;
> @@ -3807,7 +3807,7 @@ void hfi1_rc_rcv_tid_rdma_write_req(struct hfi1_packet *packet)
> hfi1_tid_write_alloc_resources(qp, true);
> trace_hfi1_tid_write_rsp_rcv_req(qp);
>
> - /* Schedule the send tasklet. */
> + /* Schedule the send work. */
> qp->s_flags |= RVT_S_RESP_PENDING;
> if (fecn)
> qp->s_flags |= RVT_S_ECN;
> @@ -5389,7 +5389,7 @@ static void hfi1_do_tid_send(struct rvt_qp *qp)
>
> /*
> * If the packet cannot be sent now, return and
> - * the send tasklet will be woken up later.
> + * the send work will be woken up later.
> */
> if (hfi1_verbs_send(qp, &ps))
> return;
> diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c
> index 6aed6169c07d..e9644f2b774d 100644
> --- a/drivers/infiniband/hw/irdma/ctrl.c
> +++ b/drivers/infiniband/hw/irdma/ctrl.c
> @@ -5271,7 +5271,7 @@ int irdma_process_cqp_cmd(struct irdma_sc_dev *dev,
> }
>
> /**
> - * irdma_process_bh - called from tasklet for cqp list
> + * irdma_process_bh - called from work for cqp list
> * @dev: sc device struct
> */
> int irdma_process_bh(struct irdma_sc_dev *dev)
> diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c
> index ad50b77282f8..18d552919c28 100644
> --- a/drivers/infiniband/hw/irdma/hw.c
> +++ b/drivers/infiniband/hw/irdma/hw.c
> @@ -440,12 +440,12 @@ static void irdma_ena_intr(struct irdma_sc_dev *dev, u32 msix_id)
> }
>
> /**
> - * irdma_dpc - tasklet for aeq and ceq 0
> - * @t: tasklet_struct ptr
> + * irdma_dpc - work for aeq and ceq 0
> + * @t: work_struct ptr
> */
> -static void irdma_dpc(struct tasklet_struct *t)
> +static void irdma_dpc(struct work_struct *t)
> {
> - struct irdma_pci_f *rf = from_tasklet(rf, t, dpc_tasklet);
> + struct irdma_pci_f *rf = from_work(rf, t, dpc_work);
>
> if (rf->msix_shared)
> irdma_process_ceq(rf, rf->ceqlist);
> @@ -455,11 +455,11 @@ static void irdma_dpc(struct tasklet_struct *t)
>
> /**
> * irdma_ceq_dpc - dpc handler for CEQ
> - * @t: tasklet_struct ptr
> + * @t: work_struct ptr
> */
> -static void irdma_ceq_dpc(struct tasklet_struct *t)
> +static void irdma_ceq_dpc(struct work_struct *t)
> {
> - struct irdma_ceq *iwceq = from_tasklet(iwceq, t, dpc_tasklet);
> + struct irdma_ceq *iwceq = from_work(iwceq, t, dpc_work);
> struct irdma_pci_f *rf = iwceq->rf;
>
> irdma_process_ceq(rf, iwceq);
> @@ -533,7 +533,7 @@ static irqreturn_t irdma_irq_handler(int irq, void *data)
> {
> struct irdma_pci_f *rf = data;
>
> - tasklet_schedule(&rf->dpc_tasklet);
> + queue_work(system_bh_wq, &rf->dpc_work);
>
> return IRQ_HANDLED;
> }
> @@ -550,7 +550,7 @@ static irqreturn_t irdma_ceq_handler(int irq, void *data)
> if (iwceq->irq != irq)
> ibdev_err(to_ibdev(&iwceq->rf->sc_dev), "expected irq = %d received irq = %d\n",
> iwceq->irq, irq);
> - tasklet_schedule(&iwceq->dpc_tasklet);
> + queue_work(system_bh_wq, &iwceq->dpc_work);
>
> return IRQ_HANDLED;
> }
> @@ -1121,14 +1121,14 @@ static int irdma_cfg_ceq_vector(struct irdma_pci_f *rf, struct irdma_ceq *iwceq,
> if (rf->msix_shared && !ceq_id) {
> snprintf(msix_vec->name, sizeof(msix_vec->name) - 1,
> "irdma-%s-AEQCEQ-0", dev_name(&rf->pcidev->dev));
> - tasklet_setup(&rf->dpc_tasklet, irdma_dpc);
> + INIT_WORK(&rf->dpc_work, irdma_dpc);
> status = request_irq(msix_vec->irq, irdma_irq_handler, 0,
> msix_vec->name, rf);
> } else {
> snprintf(msix_vec->name, sizeof(msix_vec->name) - 1,
> "irdma-%s-CEQ-%d",
> dev_name(&rf->pcidev->dev), ceq_id);
> - tasklet_setup(&iwceq->dpc_tasklet, irdma_ceq_dpc);
> + INIT_WORK(&iwceq->dpc_work, irdma_ceq_dpc);
>
> status = request_irq(msix_vec->irq, irdma_ceq_handler, 0,
> msix_vec->name, iwceq);
> @@ -1162,7 +1162,7 @@ static int irdma_cfg_aeq_vector(struct irdma_pci_f *rf)
> if (!rf->msix_shared) {
> snprintf(msix_vec->name, sizeof(msix_vec->name) - 1,
> "irdma-%s-AEQ", dev_name(&rf->pcidev->dev));
> - tasklet_setup(&rf->dpc_tasklet, irdma_dpc);
> + INIT_WORK(&rf->dpc_work, irdma_dpc);
> ret = request_irq(msix_vec->irq, irdma_irq_handler, 0,
> msix_vec->name, rf);
> }
> diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h
> index b65bc2ea542f..54301093b746 100644
> --- a/drivers/infiniband/hw/irdma/main.h
> +++ b/drivers/infiniband/hw/irdma/main.h
> @@ -30,6 +30,7 @@
> #endif
> #include <linux/auxiliary_bus.h>
> #include <linux/net/intel/iidc.h>
> +#include <linux/workqueue.h>
> #include <crypto/hash.h>
> #include <rdma/ib_smi.h>
> #include <rdma/ib_verbs.h>
> @@ -192,7 +193,7 @@ struct irdma_ceq {
> u32 irq;
> u32 msix_idx;
> struct irdma_pci_f *rf;
> - struct tasklet_struct dpc_tasklet;
> + struct work_struct dpc_work;
> spinlock_t ce_lock; /* sync cq destroy with cq completion event notification */
> };
>
> @@ -316,7 +317,7 @@ struct irdma_pci_f {
> struct mc_table_list mc_qht_list;
> struct irdma_msix_vector *iw_msixtbl;
> struct irdma_qvlist_info *iw_qvlist;
> - struct tasklet_struct dpc_tasklet;
> + struct work_struct dpc_work;
> struct msix_entry *msix_entries;
> struct irdma_dma_mem obj_mem;
> struct irdma_dma_mem obj_next;
> diff --git a/drivers/infiniband/hw/qib/qib.h b/drivers/infiniband/hw/qib/qib.h
> index 26c615772be3..d2ebaf31ce5a 100644
> --- a/drivers/infiniband/hw/qib/qib.h
> +++ b/drivers/infiniband/hw/qib/qib.h
> @@ -53,6 +53,7 @@
> #include <linux/sched.h>
> #include <linux/kthread.h>
> #include <linux/xarray.h>
> +#include <linux/workqueue.h>
> #include <rdma/ib_hdrs.h>
> #include <rdma/rdma_vt.h>
>
> @@ -562,7 +563,7 @@ struct qib_pportdata {
> u8 sdma_generation;
> u8 sdma_intrequest;
>
> - struct tasklet_struct sdma_sw_clean_up_task
> + struct work_struct sdma_sw_clean_up_task
> ____cacheline_aligned_in_smp;
>
> wait_queue_head_t state_wait; /* for state_wanted */
> @@ -1068,8 +1069,8 @@ struct qib_devdata {
> u8 psxmitwait_supported;
> /* cycle length of PS* counters in HW (in picoseconds) */
> u16 psxmitwait_check_rate;
> - /* high volume overflow errors defered to tasklet */
> - struct tasklet_struct error_tasklet;
> + /* high volume overflow errors defered to work */
> + struct work_struct error_work;
>
> int assigned_node_id; /* NUMA node closest to HCA */
> };
> diff --git a/drivers/infiniband/hw/qib/qib_iba7322.c b/drivers/infiniband/hw/qib/qib_iba7322.c
> index f93906d8fc09..c3325071f2b3 100644
> --- a/drivers/infiniband/hw/qib/qib_iba7322.c
> +++ b/drivers/infiniband/hw/qib/qib_iba7322.c
> @@ -46,6 +46,7 @@
> #include <rdma/ib_smi.h>
> #ifdef CONFIG_INFINIBAND_QIB_DCA
> #include <linux/dca.h>
> +#include <linux/workqueue.h>
> #endif
>
> #include "qib.h"
> @@ -1711,9 +1712,9 @@ static noinline void handle_7322_errors(struct qib_devdata *dd)
> return;
> }
>
> -static void qib_error_tasklet(struct tasklet_struct *t)
> +static void qib_error_work(struct work_struct *t)
> {
> - struct qib_devdata *dd = from_tasklet(dd, t, error_tasklet);
> + struct qib_devdata *dd = from_work(dd, t, error_work);
>
> handle_7322_errors(dd);
> qib_write_kreg(dd, kr_errmask, dd->cspec->errormask);
> @@ -3001,7 +3002,7 @@ static noinline void unlikely_7322_intr(struct qib_devdata *dd, u64 istat)
> unknown_7322_gpio_intr(dd);
> if (istat & QIB_I_C_ERROR) {
> qib_write_kreg(dd, kr_errmask, 0ULL);
> - tasklet_schedule(&dd->error_tasklet);
> + queue_work(system_bh_wq, &dd->error_work);
> }
> if (istat & INT_MASK_P(Err, 0) && dd->rcd[0])
> handle_7322_p_errors(dd->rcd[0]->ppd);
> @@ -3515,7 +3516,7 @@ static void qib_setup_7322_interrupt(struct qib_devdata *dd, int clearpend)
> for (i = 0; i < ARRAY_SIZE(redirect); i++)
> qib_write_kreg(dd, kr_intredirect + i, redirect[i]);
> dd->cspec->main_int_mask = mask;
> - tasklet_setup(&dd->error_tasklet, qib_error_tasklet);
> + INIT_WORK(&dd->error_work, qib_error_work);
> }
>
> /**
> diff --git a/drivers/infiniband/hw/qib/qib_rc.c b/drivers/infiniband/hw/qib/qib_rc.c
> index a1c20ffb4490..79e31921e384 100644
> --- a/drivers/infiniband/hw/qib/qib_rc.c
> +++ b/drivers/infiniband/hw/qib/qib_rc.c
> @@ -593,7 +593,7 @@ int qib_make_rc_req(struct rvt_qp *qp, unsigned long *flags)
> *
> * This is called from qib_rc_rcv() and qib_kreceive().
> * Note that RDMA reads and atomics are handled in the
> - * send side QP state and tasklet.
> + * send side QP state and work.
> */
> void qib_send_rc_ack(struct rvt_qp *qp)
> {
> @@ -670,7 +670,7 @@ void qib_send_rc_ack(struct rvt_qp *qp)
> /*
> * We are out of PIO buffers at the moment.
> * Pass responsibility for sending the ACK to the
> - * send tasklet so that when a PIO buffer becomes
> + * send work so that when a PIO buffer becomes
> * available, the ACK is sent ahead of other outgoing
> * packets.
> */
> @@ -715,7 +715,7 @@ void qib_send_rc_ack(struct rvt_qp *qp)
> qp->s_nak_state = qp->r_nak_state;
> qp->s_ack_psn = qp->r_ack_psn;
>
> - /* Schedule the send tasklet. */
> + /* Schedule the send work. */
> qib_schedule_send(qp);
> }
> unlock:
> @@ -806,7 +806,7 @@ static void reset_psn(struct rvt_qp *qp, u32 psn)
> qp->s_psn = psn;
> /*
> * Set RVT_S_WAIT_PSN as qib_rc_complete() may start the timer
> - * asynchronously before the send tasklet can get scheduled.
> + * asynchronously before the send work can get scheduled.
> * Doing it in qib_make_rc_req() is too late.
> */
> if ((qib_cmp24(qp->s_psn, qp->s_sending_hpsn) <= 0) &&
> @@ -1292,7 +1292,7 @@ static void qib_rc_rcv_resp(struct qib_ibport *ibp,
> (qib_cmp24(qp->s_sending_psn, qp->s_sending_hpsn) <= 0)) {
>
> /*
> - * If send tasklet not running attempt to progress
> + * If send work not running attempt to progress
> * SDMA queue.
> */
> if (!(qp->s_flags & RVT_S_BUSY)) {
> @@ -1629,7 +1629,7 @@ static int qib_rc_rcv_error(struct ib_other_headers *ohdr,
> case OP(FETCH_ADD): {
> /*
> * If we didn't find the atomic request in the ack queue
> - * or the send tasklet is already backed up to send an
> + * or the send work is already backed up to send an
> * earlier entry, we can ignore this request.
> */
> if (!e || e->opcode != (u8) opcode || old_req)
> @@ -1996,7 +1996,7 @@ void qib_rc_rcv(struct qib_ctxtdata *rcd, struct ib_header *hdr,
> qp->r_nak_state = 0;
> qp->r_head_ack_queue = next;
>
> - /* Schedule the send tasklet. */
> + /* Schedule the send work. */
> qp->s_flags |= RVT_S_RESP_PENDING;
> qib_schedule_send(qp);
>
> @@ -2059,7 +2059,7 @@ void qib_rc_rcv(struct qib_ctxtdata *rcd, struct ib_header *hdr,
> qp->r_nak_state = 0;
> qp->r_head_ack_queue = next;
>
> - /* Schedule the send tasklet. */
> + /* Schedule the send work. */
> qp->s_flags |= RVT_S_RESP_PENDING;
> qib_schedule_send(qp);
>
> diff --git a/drivers/infiniband/hw/qib/qib_ruc.c b/drivers/infiniband/hw/qib/qib_ruc.c
> index 1fa21938f310..f44a2a8b4b1e 100644
> --- a/drivers/infiniband/hw/qib/qib_ruc.c
> +++ b/drivers/infiniband/hw/qib/qib_ruc.c
> @@ -257,7 +257,7 @@ void _qib_do_send(struct work_struct *work)
> * @qp: pointer to the QP
> *
> * Process entries in the send work queue until credit or queue is
> - * exhausted. Only allow one CPU to send a packet per QP (tasklet).
> + * exhausted. Only allow one CPU to send a packet per QP (work).
> * Otherwise, two threads could send packets out of order.
> */
> void qib_do_send(struct rvt_qp *qp)
> @@ -299,7 +299,7 @@ void qib_do_send(struct rvt_qp *qp)
> spin_unlock_irqrestore(&qp->s_lock, flags);
> /*
> * If the packet cannot be sent now, return and
> - * the send tasklet will be woken up later.
> + * the send work will be woken up later.
> */
> if (qib_verbs_send(qp, priv->s_hdr, qp->s_hdrwords,
> qp->s_cur_sge, qp->s_cur_size))
> diff --git a/drivers/infiniband/hw/qib/qib_sdma.c b/drivers/infiniband/hw/qib/qib_sdma.c
> index 5e86cbf7d70e..facb3964d2ec 100644
> --- a/drivers/infiniband/hw/qib/qib_sdma.c
> +++ b/drivers/infiniband/hw/qib/qib_sdma.c
> @@ -34,6 +34,7 @@
> #include <linux/spinlock.h>
> #include <linux/netdevice.h>
> #include <linux/moduleparam.h>
> +#include <linux/workqueue.h>
>
> #include "qib.h"
> #include "qib_common.h"
> @@ -62,7 +63,7 @@ static void sdma_get(struct qib_sdma_state *);
> static void sdma_put(struct qib_sdma_state *);
> static void sdma_set_state(struct qib_pportdata *, enum qib_sdma_states);
> static void sdma_start_sw_clean_up(struct qib_pportdata *);
> -static void sdma_sw_clean_up_task(struct tasklet_struct *);
> +static void sdma_sw_clean_up_task(struct work_struct *);
> static void unmap_desc(struct qib_pportdata *, unsigned);
>
> static void sdma_get(struct qib_sdma_state *ss)
> @@ -119,9 +120,9 @@ static void clear_sdma_activelist(struct qib_pportdata *ppd)
> }
> }
>
> -static void sdma_sw_clean_up_task(struct tasklet_struct *t)
> +static void sdma_sw_clean_up_task(struct work_struct *t)
> {
> - struct qib_pportdata *ppd = from_tasklet(ppd, t,
> + struct qib_pportdata *ppd = from_work(ppd, t,
> sdma_sw_clean_up_task);
> unsigned long flags;
>
> @@ -188,7 +189,7 @@ static void sdma_sw_tear_down(struct qib_pportdata *ppd)
>
> static void sdma_start_sw_clean_up(struct qib_pportdata *ppd)
> {
> - tasklet_hi_schedule(&ppd->sdma_sw_clean_up_task);
> + queue_work(system_bh_highpri_wq, &ppd->sdma_sw_clean_up_task);
> }
>
> static void sdma_set_state(struct qib_pportdata *ppd,
> @@ -437,7 +438,7 @@ int qib_setup_sdma(struct qib_pportdata *ppd)
>
> INIT_LIST_HEAD(&ppd->sdma_activelist);
>
> - tasklet_setup(&ppd->sdma_sw_clean_up_task, sdma_sw_clean_up_task);
> + INIT_WORK(&ppd->sdma_sw_clean_up_task, sdma_sw_clean_up_task);
>
> ret = dd->f_init_sdma_regs(ppd);
> if (ret)
> diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
> index e6203e26cc06..efe4689151c2 100644
> --- a/drivers/infiniband/sw/rdmavt/qp.c
> +++ b/drivers/infiniband/sw/rdmavt/qp.c
> @@ -1306,7 +1306,7 @@ int rvt_error_qp(struct rvt_qp *qp, enum ib_wc_status err)
>
> rdi->driver_f.notify_error_qp(qp);
>
> - /* Schedule the sending tasklet to drain the send work queue. */
> + /* Schedule the sending work to drain the send work queue. */
> if (READ_ONCE(qp->s_last) != qp->s_head)
> rdi->driver_f.schedule_send(qp);
>
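The hunks above all apply the same mechanical mapping from the tasklet API to the BH
workqueue API. As a rough before/after sketch (using a hypothetical struct foo rather
than any driver touched by this series):

#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct foo {
	struct work_struct work;		/* was: struct tasklet_struct tasklet; */
	/* ... driver state ... */
};

/* was: static void foo_task(struct tasklet_struct *t) */
static void foo_work_fn(struct work_struct *t)
{
	struct foo *foo = from_work(foo, t, work);	/* was: from_tasklet(foo, t, tasklet) */

	/* bottom-half processing; still executes in BH (softirq) context */
}

static void foo_init(struct foo *foo)
{
	INIT_WORK(&foo->work, foo_work_fn);	/* was: tasklet_setup(&foo->tasklet, foo_task) */
}

static irqreturn_t foo_irq(int irq, void *data)
{
	struct foo *foo = data;

	queue_work(system_bh_wq, &foo->work);	/* was: tasklet_schedule(&foo->tasklet) */
	/* tasklet_hi_schedule() maps to system_bh_highpri_wq instead */
	return IRQ_HANDLED;
}

static void foo_remove(struct foo *foo)
{
	cancel_work_sync(&foo->work);		/* was: tasklet_kill(&foo->tasklet) */
}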


2024-04-08 09:35:10

by Heiko Carstens

[permalink] [raw]
Subject: Re: [PATCH 7/9] s390: Convert from tasklet to BH workqueue

On Wed, Mar 27, 2024 at 04:03:12PM +0000, Allen Pais wrote:
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/s390/* from tasklet to BH workqueue.
>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10

I guess this dependency is a hard requirement due to commit 134874e2eee9
("workqueue: Allow cancel_work_sync() and disable_work() from atomic contexts
on BH work items")?

> ---
> drivers/s390/block/dasd.c | 42 ++++++++++++------------
> drivers/s390/block/dasd_int.h | 10 +++---
> drivers/s390/char/con3270.c | 27 ++++++++--------
> drivers/s390/crypto/ap_bus.c | 24 +++++++-------
> drivers/s390/crypto/ap_bus.h | 2 +-
> drivers/s390/crypto/zcrypt_msgtype50.c | 2 +-
> drivers/s390/crypto/zcrypt_msgtype6.c | 4 +--
> drivers/s390/net/ctcm_fsms.c | 4 +--
> drivers/s390/net/ctcm_main.c | 15 ++++-----
> drivers/s390/net/ctcm_main.h | 5 +--
> drivers/s390/net/ctcm_mpc.c | 12 +++----
> drivers/s390/net/ctcm_mpc.h | 7 ++--
> drivers/s390/net/lcs.c | 26 +++++++--------
> drivers/s390/net/lcs.h | 2 +-
> drivers/s390/net/qeth_core_main.c | 2 +-
> drivers/s390/scsi/zfcp_qdio.c | 45 +++++++++++++-------------
> drivers/s390/scsi/zfcp_qdio.h | 9 +++---
> 17 files changed, 117 insertions(+), 121 deletions(-)

I'm asking since this patch comes with multiple compile errors, probably due to the
lack of a cross-compiler toolchain on your side.

If the above isn't a hard dependency, I'd say we could take the parts of your patch
which are fine into the s390 tree for 6.10, fix the rest, and schedule that as well
for 6.10 via the s390 tree.
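
For context on the dependency question above: several conversions in this series
replace tasklet_disable()/tasklet_kill() calls that sit in atomic sections, and as I
read the referenced commit, doing the equivalent on a work item is only safe for BH
work items once cancel_work_sync() and disable_work() are allowed from atomic
contexts. A rough illustration with a hypothetical struct bar (not taken from the
patch):

#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct bar {
	spinlock_t lock;
	struct work_struct work;	/* queued on system_bh_wq */
};

static void bar_quiesce(struct bar *bar)
{
	unsigned long flags;

	spin_lock_irqsave(&bar->lock, flags);
	/*
	 * The pre-conversion code could call tasklet_disable() here; on a
	 * BH work item, disable_work() from atomic context is what commit
	 * 134874e2eee9 is said to permit.
	 */
	disable_work(&bar->work);
	spin_unlock_irqrestore(&bar->lock, flags);
}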

2024-04-08 10:04:48

by Harald Freudenberger

[permalink] [raw]
Subject: Re: [PATCH 7/9] s390: Convert from tasklet to BH workqueue

On 2024-03-27 17:03, Allen Pais wrote:
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/s390/* from tasklet to BH workqueue.
>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>
> Note: Not tested. Please test/review.
>
> Signed-off-by: Allen Pais <[email protected]>
> ---
> ...
> drivers/s390/crypto/ap_bus.c | 24 +++++++-------
> drivers/s390/crypto/ap_bus.h | 2 +-
> drivers/s390/crypto/zcrypt_msgtype50.c | 2 +-
> drivers/s390/crypto/zcrypt_msgtype6.c | 4 +--
> ...

Applied and tested the s390 AP bus and zcrypt part of the patch.
Works fine; a sniff test did not show any problems.
Thanks for your work.

Reviewed-by: Harald Freudenberger <[email protected]>

2024-04-24 09:17:26

by Hans Verkuil

[permalink] [raw]
Subject: Re: [PATCH 8/9] drivers/media/*: Convert from tasklet to BH workqueue

On 27/03/2024 17:03, Allen Pais wrote:
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/media/* from tasklet to BH workqueue.
>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>
> Signed-off-by: Allen Pais <[email protected]>
> ---
> drivers/media/pci/bt8xx/bt878.c | 8 ++--
> drivers/media/pci/bt8xx/bt878.h | 3 +-
> drivers/media/pci/bt8xx/dvb-bt8xx.c | 9 ++--
> drivers/media/pci/ddbridge/ddbridge.h | 3 +-
> drivers/media/pci/mantis/hopper_cards.c | 2 +-
> drivers/media/pci/mantis/mantis_cards.c | 2 +-
> drivers/media/pci/mantis/mantis_common.h | 3 +-
> drivers/media/pci/mantis/mantis_dma.c | 5 ++-
> drivers/media/pci/mantis/mantis_dma.h | 2 +-
> drivers/media/pci/mantis/mantis_dvb.c | 12 +++---
> drivers/media/pci/ngene/ngene-core.c | 23 ++++++-----
> drivers/media/pci/ngene/ngene.h | 5 ++-
> drivers/media/pci/smipcie/smipcie-main.c | 18 ++++----
> drivers/media/pci/smipcie/smipcie.h | 3 +-
> drivers/media/pci/ttpci/budget-av.c | 3 +-
> drivers/media/pci/ttpci/budget-ci.c | 27 ++++++------
> drivers/media/pci/ttpci/budget-core.c | 10 ++---
> drivers/media/pci/ttpci/budget.h | 5 ++-
> drivers/media/pci/tw5864/tw5864-core.c | 2 +-
> drivers/media/pci/tw5864/tw5864-video.c | 13 +++---
> drivers/media/pci/tw5864/tw5864.h | 7 ++--
> drivers/media/platform/intel/pxa_camera.c | 15 +++----
> drivers/media/platform/marvell/mcam-core.c | 11 ++---
> drivers/media/platform/marvell/mcam-core.h | 3 +-
> .../st/sti/c8sectpfe/c8sectpfe-core.c | 15 +++----
> .../st/sti/c8sectpfe/c8sectpfe-core.h | 2 +-
> drivers/media/radio/wl128x/fmdrv.h | 7 ++--
> drivers/media/radio/wl128x/fmdrv_common.c | 41 ++++++++++---------
> drivers/media/rc/mceusb.c | 2 +-
> drivers/media/usb/ttusb-dec/ttusb_dec.c | 21 +++++-----
> 30 files changed, 151 insertions(+), 131 deletions(-)
>
> diff --git a/drivers/media/pci/bt8xx/bt878.c b/drivers/media/pci/bt8xx/bt878.c
> index 90972d6952f1..983ec29108f0 100644
> --- a/drivers/media/pci/bt8xx/bt878.c
> +++ b/drivers/media/pci/bt8xx/bt878.c
> @@ -300,8 +300,8 @@ static irqreturn_t bt878_irq(int irq, void *dev_id)
> }
> if (astat & BT878_ARISCI) {
> bt->finished_block = (stat & BT878_ARISCS) >> 28;
> - if (bt->tasklet.callback)
> - tasklet_schedule(&bt->tasklet);
> + if (bt->work.func)
> + queue_work(system_bh_wq,

I stopped reviewing here: this clearly has not been compile tested.

Also please check the patch with 'checkpatch.pl --strict' and fix the reported issues.

Regards,

Hans

> break;
> }
> count++;
> @@ -478,8 +478,8 @@ static int bt878_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
> btwrite(0, BT878_AINT_MASK);
> bt878_num++;
>
> - if (!bt->tasklet.func)
> - tasklet_disable(&bt->tasklet);
> + if (!bt->work.func)
> + disable_work_sync(&bt->work);
>
> return 0;
>
> diff --git a/drivers/media/pci/bt8xx/bt878.h b/drivers/media/pci/bt8xx/bt878.h
> index fde8db293c54..b9ce78e5116b 100644
> --- a/drivers/media/pci/bt8xx/bt878.h
> +++ b/drivers/media/pci/bt8xx/bt878.h
> @@ -14,6 +14,7 @@
> #include <linux/sched.h>
> #include <linux/spinlock.h>
> #include <linux/mutex.h>
> +#include <linux/workqueue.h>
>
> #include "bt848.h"
> #include "bttv.h"
> @@ -120,7 +121,7 @@ struct bt878 {
> dma_addr_t risc_dma;
> u32 risc_pos;
>
> - struct tasklet_struct tasklet;
> + struct work_struct work;
> int shutdown;
> };
>
> diff --git a/drivers/media/pci/bt8xx/dvb-bt8xx.c b/drivers/media/pci/bt8xx/dvb-bt8xx.c
> index 390cbba6c065..8c0e1fa764a4 100644
> --- a/drivers/media/pci/bt8xx/dvb-bt8xx.c
> +++ b/drivers/media/pci/bt8xx/dvb-bt8xx.c
> @@ -15,6 +15,7 @@
> #include <linux/delay.h>
> #include <linux/slab.h>
> #include <linux/i2c.h>
> +#include <linux/workqueue.h>
>
> #include <media/dmxdev.h>
> #include <media/dvbdev.h>
> @@ -39,9 +40,9 @@ DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);
>
> #define IF_FREQUENCYx6 217 /* 6 * 36.16666666667MHz */
>
> -static void dvb_bt8xx_task(struct tasklet_struct *t)
> +static void dvb_bt8xx_task(struct work_struct *t)
> {
> - struct bt878 *bt = from_tasklet(bt, t, tasklet);
> + struct bt878 *bt = from_work(bt, t, work);
> struct dvb_bt8xx_card *card = dev_get_drvdata(&bt->adapter->dev);
>
> dprintk("%d\n", card->bt->finished_block);
> @@ -782,7 +783,7 @@ static int dvb_bt8xx_load_card(struct dvb_bt8xx_card *card, u32 type)
> goto err_disconnect_frontend;
> }
>
> - tasklet_setup(&card->bt->tasklet, dvb_bt8xx_task);
> + INIT_WORK(&card->bt->work, dvb_bt8xx_task);
>
> frontend_init(card, type);
>
> @@ -922,7 +923,7 @@ static void dvb_bt8xx_remove(struct bttv_sub_device *sub)
> dprintk("dvb_bt8xx: unloading card%d\n", card->bttv_nr);
>
> bt878_stop(card->bt);
> - tasklet_kill(&card->bt->tasklet);
> + cancel_work_sync(&card->bt->work);
> dvb_net_release(&card->dvbnet);
> card->demux.dmx.remove_frontend(&card->demux.dmx, &card->fe_mem);
> card->demux.dmx.remove_frontend(&card->demux.dmx, &card->fe_hw);
> diff --git a/drivers/media/pci/ddbridge/ddbridge.h b/drivers/media/pci/ddbridge/ddbridge.h
> index f3699dbd193f..037d1d13ef0f 100644
> --- a/drivers/media/pci/ddbridge/ddbridge.h
> +++ b/drivers/media/pci/ddbridge/ddbridge.h
> @@ -35,6 +35,7 @@
> #include <linux/uaccess.h>
> #include <linux/vmalloc.h>
> #include <linux/workqueue.h>
> +#include <linux/workqueue.h>
>
> #include <asm/dma.h>
> #include <asm/irq.h>
> @@ -298,7 +299,7 @@ struct ddb_link {
> spinlock_t lock; /* lock link access */
> struct mutex flash_mutex; /* lock flash access */
> struct ddb_lnb lnb;
> - struct tasklet_struct tasklet;
> + struct work_struct work;
> struct ddb_ids ids;
>
> spinlock_t temp_lock; /* lock temp chip access */
> diff --git a/drivers/media/pci/mantis/hopper_cards.c b/drivers/media/pci/mantis/hopper_cards.c
> index c0bd5d7e148b..869ea88c4893 100644
> --- a/drivers/media/pci/mantis/hopper_cards.c
> +++ b/drivers/media/pci/mantis/hopper_cards.c
> @@ -116,7 +116,7 @@ static irqreturn_t hopper_irq_handler(int irq, void *dev_id)
> if (stat & MANTIS_INT_RISCI) {
> dprintk(MANTIS_DEBUG, 0, "<%s>", label[8]);
> mantis->busy_block = (stat & MANTIS_INT_RISCSTAT) >> 28;
> - tasklet_schedule(&mantis->tasklet);
> + queue_work(system_bh_wq, &mantis->work);
> }
> if (stat & MANTIS_INT_I2CDONE) {
> dprintk(MANTIS_DEBUG, 0, "<%s>", label[9]);
> diff --git a/drivers/media/pci/mantis/mantis_cards.c b/drivers/media/pci/mantis/mantis_cards.c
> index 906e4500d87d..cb124b19e36e 100644
> --- a/drivers/media/pci/mantis/mantis_cards.c
> +++ b/drivers/media/pci/mantis/mantis_cards.c
> @@ -125,7 +125,7 @@ static irqreturn_t mantis_irq_handler(int irq, void *dev_id)
> if (stat & MANTIS_INT_RISCI) {
> dprintk(MANTIS_DEBUG, 0, "<%s>", label[8]);
> mantis->busy_block = (stat & MANTIS_INT_RISCSTAT) >> 28;
> - tasklet_schedule(&mantis->tasklet);
> + queue_work(system_bh_wq, &mantis->work);
> }
> if (stat & MANTIS_INT_I2CDONE) {
> dprintk(MANTIS_DEBUG, 0, "<%s>", label[9]);
> diff --git a/drivers/media/pci/mantis/mantis_common.h b/drivers/media/pci/mantis/mantis_common.h
> index d88ac280226c..f2247148f268 100644
> --- a/drivers/media/pci/mantis/mantis_common.h
> +++ b/drivers/media/pci/mantis/mantis_common.h
> @@ -12,6 +12,7 @@
> #include <linux/interrupt.h>
> #include <linux/mutex.h>
> #include <linux/workqueue.h>
> +#include <linux/workqueue.h>
>
> #include "mantis_reg.h"
> #include "mantis_uart.h"
> @@ -125,7 +126,7 @@ struct mantis_pci {
> __le32 *risc_cpu;
> dma_addr_t risc_dma;
>
> - struct tasklet_struct tasklet;
> + struct work_struct work;
> spinlock_t intmask_lock;
>
> struct i2c_adapter adapter;
> diff --git a/drivers/media/pci/mantis/mantis_dma.c b/drivers/media/pci/mantis/mantis_dma.c
> index 80c843936493..c85f9b84a2c6 100644
> --- a/drivers/media/pci/mantis/mantis_dma.c
> +++ b/drivers/media/pci/mantis/mantis_dma.c
> @@ -15,6 +15,7 @@
> #include <linux/signal.h>
> #include <linux/sched.h>
> #include <linux/interrupt.h>
> +#include <linux/workqueue.h>
>
> #include <media/dmxdev.h>
> #include <media/dvbdev.h>
> @@ -200,9 +201,9 @@ void mantis_dma_stop(struct mantis_pci *mantis)
> }
>
>
> -void mantis_dma_xfer(struct tasklet_struct *t)
> +void mantis_dma_xfer(struct work_struct *t)
> {
> - struct mantis_pci *mantis = from_tasklet(mantis, t, tasklet);
> + struct mantis_pci *mantis = from_work(mantis, t, work);
> struct mantis_hwconfig *config = mantis->hwconfig;
>
> while (mantis->last_block != mantis->busy_block) {
> diff --git a/drivers/media/pci/mantis/mantis_dma.h b/drivers/media/pci/mantis/mantis_dma.h
> index 37da982c9c29..5db0d3728f15 100644
> --- a/drivers/media/pci/mantis/mantis_dma.h
> +++ b/drivers/media/pci/mantis/mantis_dma.h
> @@ -13,6 +13,6 @@ extern int mantis_dma_init(struct mantis_pci *mantis);
> extern int mantis_dma_exit(struct mantis_pci *mantis);
> extern void mantis_dma_start(struct mantis_pci *mantis);
> extern void mantis_dma_stop(struct mantis_pci *mantis);
> -extern void mantis_dma_xfer(struct tasklet_struct *t);
> +extern void mantis_dma_xfer(struct work_struct *t);
>
> #endif /* __MANTIS_DMA_H */
> diff --git a/drivers/media/pci/mantis/mantis_dvb.c b/drivers/media/pci/mantis/mantis_dvb.c
> index c7ba4a76e608..f640635de170 100644
> --- a/drivers/media/pci/mantis/mantis_dvb.c
> +++ b/drivers/media/pci/mantis/mantis_dvb.c
> @@ -105,7 +105,7 @@ static int mantis_dvb_start_feed(struct dvb_demux_feed *dvbdmxfeed)
> if (mantis->feeds == 1) {
> dprintk(MANTIS_DEBUG, 1, "mantis start feed & dma");
> mantis_dma_start(mantis);
> - tasklet_enable(&mantis->tasklet);
> + enable_and_queue_work(system_bh_wq, &mantis->work);
> }
>
> return mantis->feeds;
> @@ -125,7 +125,7 @@ static int mantis_dvb_stop_feed(struct dvb_demux_feed *dvbdmxfeed)
> mantis->feeds--;
> if (mantis->feeds == 0) {
> dprintk(MANTIS_DEBUG, 1, "mantis stop feed and dma");
> - tasklet_disable(&mantis->tasklet);
> + disable_work_sync(&mantis->work);
> mantis_dma_stop(mantis);
> }
>
> @@ -205,8 +205,8 @@ int mantis_dvb_init(struct mantis_pci *mantis)
> }
>
> dvb_net_init(&mantis->dvb_adapter, &mantis->dvbnet, &mantis->demux.dmx);
> - tasklet_setup(&mantis->tasklet, mantis_dma_xfer);
> - tasklet_disable(&mantis->tasklet);
> + INIT_WORK(&mantis->bh, mantis_dma_xfer);
> + disable_work_sync(&mantis->work);
> if (mantis->hwconfig) {
> result = config->frontend_init(mantis, mantis->fe);
> if (result < 0) {
> @@ -235,7 +235,7 @@ int mantis_dvb_init(struct mantis_pci *mantis)
>
> /* Error conditions .. */
> err5:
> - tasklet_kill(&mantis->tasklet);
> + cancel_work_sync(&mantis->work);
> dvb_net_release(&mantis->dvbnet);
> if (mantis->fe) {
> dvb_unregister_frontend(mantis->fe);
> @@ -273,7 +273,7 @@ int mantis_dvb_exit(struct mantis_pci *mantis)
> dvb_frontend_detach(mantis->fe);
> }
>
> - tasklet_kill(&mantis->tasklet);
> + cancel_work_sync(&mantis->work);
> dvb_net_release(&mantis->dvbnet);
>
> mantis->demux.dmx.remove_frontend(&mantis->demux.dmx, &mantis->fe_mem);
> diff --git a/drivers/media/pci/ngene/ngene-core.c b/drivers/media/pci/ngene/ngene-core.c
> index 7481f553f959..5211d6796748 100644
> --- a/drivers/media/pci/ngene/ngene-core.c
> +++ b/drivers/media/pci/ngene/ngene-core.c
> @@ -21,6 +21,7 @@
> #include <linux/byteorder/generic.h>
> #include <linux/firmware.h>
> #include <linux/vmalloc.h>
> +#include <linux/workqueue.h>
>
> #include "ngene.h"
>
> @@ -50,9 +51,9 @@ DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);
> /* nGene interrupt handler **************************************************/
> /****************************************************************************/
>
> -static void event_tasklet(struct tasklet_struct *t)
> +static void event_work(struct work_struct *t)
> {
> - struct ngene *dev = from_tasklet(dev, t, event_tasklet);
> + struct ngene *dev = from_work(dev, t, event_work);
>
> while (dev->EventQueueReadIndex != dev->EventQueueWriteIndex) {
> struct EVENT_BUFFER Event =
> @@ -68,9 +69,9 @@ static void event_tasklet(struct tasklet_struct *t)
> }
> }
>
> -static void demux_tasklet(struct tasklet_struct *t)
> +static void demux_work(struct work_struct *t)
> {
> - struct ngene_channel *chan = from_tasklet(chan, t, demux_tasklet);
> + struct ngene_channel *chan = from_work(chan, t, demux_work);
> struct device *pdev = &chan->dev->pci_dev->dev;
> struct SBufferHeader *Cur = chan->nextBuffer;
>
> @@ -204,7 +205,7 @@ static irqreturn_t irq_handler(int irq, void *dev_id)
> dev->EventQueueOverflowFlag = 1;
> }
> dev->EventBuffer->EventStatus &= ~0x80;
> - tasklet_schedule(&dev->event_tasklet);
> + queue_work(system_bh_wq, &dev->event_work);
> rc = IRQ_HANDLED;
> }
>
> @@ -217,8 +218,8 @@ static irqreturn_t irq_handler(int irq, void *dev_id)
> ngeneBuffer.SR.Flags & 0xC0) == 0x80) {
> dev->channel[i].nextBuffer->
> ngeneBuffer.SR.Flags |= 0x40;
> - tasklet_schedule(
> - &dev->channel[i].demux_tasklet);
> + queue_work(system_bh_wq,
> + &dev->channel[i].demux_work);
> rc = IRQ_HANDLED;
> }
> }
> @@ -1181,7 +1182,7 @@ static void ngene_init(struct ngene *dev)
> struct device *pdev = &dev->pci_dev->dev;
> int i;
>
> - tasklet_setup(&dev->event_tasklet, event_tasklet);
> + INIT_WORK(&dev->event_work, event_work);
>
> memset_io(dev->iomem + 0xc000, 0x00, 0x220);
> memset_io(dev->iomem + 0xc400, 0x00, 0x100);
> @@ -1395,7 +1396,7 @@ static void release_channel(struct ngene_channel *chan)
> if (chan->running)
> set_transfer(chan, 0);
>
> - tasklet_kill(&chan->demux_tasklet);
> + cancel_work_sync(&chan->demux_work);
>
> if (chan->ci_dev) {
> dvb_unregister_device(chan->ci_dev);
> @@ -1445,7 +1446,7 @@ static int init_channel(struct ngene_channel *chan)
> struct ngene_info *ni = dev->card_info;
> int io = ni->io_type[nr];
>
> - tasklet_setup(&chan->demux_tasklet, demux_tasklet);
> + INIT_WORK(&chan->demux_work, demux_work);
> chan->users = 0;
> chan->type = io;
> chan->mode = chan->type; /* for now only one mode */
> @@ -1647,7 +1648,7 @@ void ngene_remove(struct pci_dev *pdev)
> struct ngene *dev = pci_get_drvdata(pdev);
> int i;
>
> - tasklet_kill(&dev->event_tasklet);
> + cancel_work_sync(&dev->event_work);
> for (i = MAX_STREAM - 1; i >= 0; i--)
> release_channel(&dev->channel[i]);
> if (dev->ci.en)
> diff --git a/drivers/media/pci/ngene/ngene.h b/drivers/media/pci/ngene/ngene.h
> index d1d7da84cd9d..c2a23f6dbe09 100644
> --- a/drivers/media/pci/ngene/ngene.h
> +++ b/drivers/media/pci/ngene/ngene.h
> @@ -16,6 +16,7 @@
> #include <linux/scatterlist.h>
>
> #include <linux/dvb/frontend.h>
> +#include <linux/workqueue.h>
>
> #include <media/dmxdev.h>
> #include <media/dvbdev.h>
> @@ -621,7 +622,7 @@ struct ngene_channel {
> int users;
> struct video_device *v4l_dev;
> struct dvb_device *ci_dev;
> - struct tasklet_struct demux_tasklet;
> + struct work_struct demux_work;
>
> struct SBufferHeader *nextBuffer;
> enum KSSTATE State;
> @@ -717,7 +718,7 @@ struct ngene {
> struct EVENT_BUFFER EventQueue[EVENT_QUEUE_SIZE];
> int EventQueueOverflowCount;
> int EventQueueOverflowFlag;
> - struct tasklet_struct event_tasklet;
> + struct work_struct event_work;
> struct EVENT_BUFFER *EventBuffer;
> int EventQueueWriteIndex;
> int EventQueueReadIndex;
> diff --git a/drivers/media/pci/smipcie/smipcie-main.c b/drivers/media/pci/smipcie/smipcie-main.c
> index 0c300d019d9c..7da6bb55660b 100644
> --- a/drivers/media/pci/smipcie/smipcie-main.c
> +++ b/drivers/media/pci/smipcie/smipcie-main.c
> @@ -279,10 +279,10 @@ static void smi_port_clearInterrupt(struct smi_port *port)
> (port->_dmaInterruptCH0 | port->_dmaInterruptCH1));
> }
>
> -/* tasklet handler: DMA data to dmx.*/
> -static void smi_dma_xfer(struct tasklet_struct *t)
> +/* work handler: DMA data to dmx.*/
> +static void smi_dma_xfer(struct work_struct *t)
> {
> - struct smi_port *port = from_tasklet(port, t, tasklet);
> + struct smi_port *port = from_work(port, t, work);
> struct smi_dev *dev = port->dev;
> u32 intr_status, finishedData, dmaManagement;
> u8 dmaChan0State, dmaChan1State;
> @@ -426,8 +426,8 @@ static int smi_port_init(struct smi_port *port, int dmaChanUsed)
> }
>
> smi_port_disableInterrupt(port);
> - tasklet_setup(&port->tasklet, smi_dma_xfer);
> - tasklet_disable(&port->tasklet);
> + INIT_WORK(&port->work, smi_dma_xfer);
> + disable_work_sync(&port->work);
> port->enable = 1;
> return 0;
> err:
> @@ -438,7 +438,7 @@ static int smi_port_init(struct smi_port *port, int dmaChanUsed)
> static void smi_port_exit(struct smi_port *port)
> {
> smi_port_disableInterrupt(port);
> - tasklet_kill(&port->tasklet);
> + cancel_work_sync(&port->work);
> smi_port_dma_free(port);
> port->enable = 0;
> }
> @@ -452,7 +452,7 @@ static int smi_port_irq(struct smi_port *port, u32 int_status)
> smi_port_disableInterrupt(port);
> port->_int_status = int_status;
> smi_port_clearInterrupt(port);
> - tasklet_schedule(&port->tasklet);
> + queue_work(system_bh_wq, &port->work);
> handled = 1;
> }
> return handled;
> @@ -823,7 +823,7 @@ static int smi_start_feed(struct dvb_demux_feed *dvbdmxfeed)
> smi_port_clearInterrupt(port);
> smi_port_enableInterrupt(port);
> smi_write(port->DMA_MANAGEMENT, dmaManagement);
> - tasklet_enable(&port->tasklet);
> + enable_and_queue_work(system_bh_wq, &port->work);
> }
> return port->users;
> }
> @@ -837,7 +837,7 @@ static int smi_stop_feed(struct dvb_demux_feed *dvbdmxfeed)
> if (--port->users)
> return port->users;
>
> - tasklet_disable(&port->tasklet);
> + disable_work_sync(&port->work);
> smi_port_disableInterrupt(port);
> smi_clear(port->DMA_MANAGEMENT, 0x30003);
> return 0;
> diff --git a/drivers/media/pci/smipcie/smipcie.h b/drivers/media/pci/smipcie/smipcie.h
> index 2b5e0154814c..f124d2cdead6 100644
> --- a/drivers/media/pci/smipcie/smipcie.h
> +++ b/drivers/media/pci/smipcie/smipcie.h
> @@ -17,6 +17,7 @@
> #include <linux/pci.h>
> #include <linux/dma-mapping.h>
> #include <linux/slab.h>
> +#include <linux/workqueue.h>
> #include <media/rc-core.h>
>
> #include <media/demux.h>
> @@ -257,7 +258,7 @@ struct smi_port {
> u32 _dmaInterruptCH0;
> u32 _dmaInterruptCH1;
> u32 _int_status;
> - struct tasklet_struct tasklet;
> + struct work_struct work;
> /* dvb */
> struct dmx_frontend hw_frontend;
> struct dmx_frontend mem_frontend;
> diff --git a/drivers/media/pci/ttpci/budget-av.c b/drivers/media/pci/ttpci/budget-av.c
> index a47c5850ef87..6e43b1a01191 100644
> --- a/drivers/media/pci/ttpci/budget-av.c
> +++ b/drivers/media/pci/ttpci/budget-av.c
> @@ -37,6 +37,7 @@
> #include <linux/interrupt.h>
> #include <linux/input.h>
> #include <linux/spinlock.h>
> +#include <linux/workqueue.h>
>
> #include <media/dvb_ca_en50221.h>
>
> @@ -55,7 +56,7 @@ struct budget_av {
> struct video_device vd;
> int cur_input;
> int has_saa7113;
> - struct tasklet_struct ciintf_irq_tasklet;
> + struct work_struct ciintf_irq_work;
> int slot_status;
> struct dvb_ca_en50221 ca;
> u8 reinitialise_demod:1;
> diff --git a/drivers/media/pci/ttpci/budget-ci.c b/drivers/media/pci/ttpci/budget-ci.c
> index 66e1a004ee43..11e0ed62707e 100644
> --- a/drivers/media/pci/ttpci/budget-ci.c
> +++ b/drivers/media/pci/ttpci/budget-ci.c
> @@ -17,6 +17,7 @@
> #include <linux/slab.h>
> #include <linux/interrupt.h>
> #include <linux/spinlock.h>
> +#include <linux/workqueue.h>
> #include <media/rc-core.h>
>
> #include "budget.h"
> @@ -80,7 +81,7 @@ DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);
>
> struct budget_ci_ir {
> struct rc_dev *dev;
> - struct tasklet_struct msp430_irq_tasklet;
> + struct work_struct msp430_irq_work;
> char name[72]; /* 40 + 32 for (struct saa7146_dev).name */
> char phys[32];
> int rc5_device;
> @@ -91,7 +92,7 @@ struct budget_ci_ir {
>
> struct budget_ci {
> struct budget budget;
> - struct tasklet_struct ciintf_irq_tasklet;
> + struct work_struct ciintf_irq_work;
> int slot_status;
> int ci_irq;
> struct dvb_ca_en50221 ca;
> @@ -99,9 +100,9 @@ struct budget_ci {
> u8 tuner_pll_address; /* used for philips_tdm1316l configs */
> };
>
> -static void msp430_ir_interrupt(struct tasklet_struct *t)
> +static void msp430_ir_interrupt(struct work_struct *t)
> {
> - struct budget_ci_ir *ir = from_tasklet(ir, t, msp430_irq_tasklet);
> + struct budget_ci_ir *ir = from_work(ir, t, msp430_irq_work);
> struct budget_ci *budget_ci = container_of(ir, typeof(*budget_ci), ir);
> struct rc_dev *dev = budget_ci->ir.dev;
> u32 command = ttpci_budget_debiread(&budget_ci->budget, DEBINOSWAP, DEBIADDR_IR, 2, 1, 0) >> 8;
> @@ -230,7 +231,7 @@ static int msp430_ir_init(struct budget_ci *budget_ci)
>
> budget_ci->ir.dev = dev;
>
> - tasklet_setup(&budget_ci->ir.msp430_irq_tasklet, msp430_ir_interrupt);
> + INIT_WORK(&budget_ci->ir.msp430_irq_work, msp430_ir_interrupt);
>
> SAA7146_IER_ENABLE(saa, MASK_06);
> saa7146_setgpio(saa, 3, SAA7146_GPIO_IRQHI);
> @@ -244,7 +245,7 @@ static void msp430_ir_deinit(struct budget_ci *budget_ci)
>
> SAA7146_IER_DISABLE(saa, MASK_06);
> saa7146_setgpio(saa, 3, SAA7146_GPIO_INPUT);
> - tasklet_kill(&budget_ci->ir.msp430_irq_tasklet);
> + cancel_work_sync(&budget_ci->ir.msp430_irq_work);
>
> rc_unregister_device(budget_ci->ir.dev);
> }
> @@ -348,10 +349,10 @@ static int ciintf_slot_ts_enable(struct dvb_ca_en50221 *ca, int slot)
> return 0;
> }
>
> -static void ciintf_interrupt(struct tasklet_struct *t)
> +static void ciintf_interrupt(struct work_struct *t)
> {
> - struct budget_ci *budget_ci = from_tasklet(budget_ci, t,
> - ciintf_irq_tasklet);
> + struct budget_ci *budget_ci = from_work(budget_ci, t,
> + ciintf_irq_work);
> struct saa7146_dev *saa = budget_ci->budget.dev;
> unsigned int flags;
>
> @@ -492,7 +493,7 @@ static int ciintf_init(struct budget_ci *budget_ci)
>
> // Setup CI slot IRQ
> if (budget_ci->ci_irq) {
> - tasklet_setup(&budget_ci->ciintf_irq_tasklet, ciintf_interrupt);
> + INIT_WORK(&budget_ci->ciintf_irq_work, ciintf_interrupt);
> if (budget_ci->slot_status != SLOTSTATUS_NONE) {
> saa7146_setgpio(saa, 0, SAA7146_GPIO_IRQLO);
> } else {
> @@ -532,7 +533,7 @@ static void ciintf_deinit(struct budget_ci *budget_ci)
> if (budget_ci->ci_irq) {
> SAA7146_IER_DISABLE(saa, MASK_03);
> saa7146_setgpio(saa, 0, SAA7146_GPIO_INPUT);
> - tasklet_kill(&budget_ci->ciintf_irq_tasklet);
> + cancel_work_sync(&budget_ci->ciintf_irq_work);
> }
>
> // reset interface
> @@ -558,13 +559,13 @@ static void budget_ci_irq(struct saa7146_dev *dev, u32 * isr)
> dprintk(8, "dev: %p, budget_ci: %p\n", dev, budget_ci);
>
> if (*isr & MASK_06)
> - tasklet_schedule(&budget_ci->ir.msp430_irq_tasklet);
> + queue_work(system_bh_wq, &budget_ci->ir.msp430_irq_work);
>
> if (*isr & MASK_10)
> ttpci_budget_irq10_handler(dev, isr);
>
> if ((*isr & MASK_03) && (budget_ci->budget.ci_present) && (budget_ci->ci_irq))
> - tasklet_schedule(&budget_ci->ciintf_irq_tasklet);
> + queue_work(system_bh_wq, &budget_ci->ciintf_irq_work);
> }
>
> static u8 philips_su1278_tt_inittab[] = {
> diff --git a/drivers/media/pci/ttpci/budget-core.c b/drivers/media/pci/ttpci/budget-core.c
> index 25f44c3eebf3..3443c12dc9f2 100644
> --- a/drivers/media/pci/ttpci/budget-core.c
> +++ b/drivers/media/pci/ttpci/budget-core.c
> @@ -171,9 +171,9 @@ static int budget_read_fe_status(struct dvb_frontend *fe,
> return ret;
> }
>
> -static void vpeirq(struct tasklet_struct *t)
> +static void vpeirq(struct work_struct *t)
> {
> - struct budget *budget = from_tasklet(budget, t, vpe_tasklet);
> + struct budget *budget = from_work(budget, t, vpe_work);
> u8 *mem = (u8 *) (budget->grabbing);
> u32 olddma = budget->ttbp;
> u32 newdma = saa7146_read(budget->dev, PCI_VDP3);
> @@ -520,7 +520,7 @@ int ttpci_budget_init(struct budget *budget, struct saa7146_dev *dev,
> /* upload all */
> saa7146_write(dev, GPIO_CTRL, 0x000000);
>
> - tasklet_setup(&budget->vpe_tasklet, vpeirq);
> + INIT_WORK(&budget->vpe_work, vpeirq);
>
> /* frontend power on */
> if (bi->type != BUDGET_FS_ACTIVY)
> @@ -557,7 +557,7 @@ int ttpci_budget_deinit(struct budget *budget)
>
> budget_unregister(budget);
>
> - tasklet_kill(&budget->vpe_tasklet);
> + cancel_work_sync(&budget->vpe_work);
>
> saa7146_vfree_destroy_pgtable(dev->pci, budget->grabbing, &budget->pt);
>
> @@ -575,7 +575,7 @@ void ttpci_budget_irq10_handler(struct saa7146_dev *dev, u32 * isr)
> dprintk(8, "dev: %p, budget: %p\n", dev, budget);
>
> if (*isr & MASK_10)
> - tasklet_schedule(&budget->vpe_tasklet);
> + queue_work(system_bh_wq, &budget->vpe_work);
> }
>
> void ttpci_budget_set_video_port(struct saa7146_dev *dev, int video_port)
> diff --git a/drivers/media/pci/ttpci/budget.h b/drivers/media/pci/ttpci/budget.h
> index bd87432e6cde..a3ee75e326b4 100644
> --- a/drivers/media/pci/ttpci/budget.h
> +++ b/drivers/media/pci/ttpci/budget.h
> @@ -12,6 +12,7 @@
>
> #include <linux/module.h>
> #include <linux/mutex.h>
> +#include <linux/workqueue.h>
>
> #include <media/drv-intf/saa7146.h>
>
> @@ -49,8 +50,8 @@ struct budget {
> unsigned char *grabbing;
> struct saa7146_pgtable pt;
>
> - struct tasklet_struct fidb_tasklet;
> - struct tasklet_struct vpe_tasklet;
> + struct work_struct fidb_work;
> + struct work_struct vpe_work;
>
> struct dmxdev dmxdev;
> struct dvb_demux demux;
> diff --git a/drivers/media/pci/tw5864/tw5864-core.c b/drivers/media/pci/tw5864/tw5864-core.c
> index 560ff1ddcc83..a58c268e94a8 100644
> --- a/drivers/media/pci/tw5864/tw5864-core.c
> +++ b/drivers/media/pci/tw5864/tw5864-core.c
> @@ -144,7 +144,7 @@ static void tw5864_h264_isr(struct tw5864_dev *dev)
> cur_frame->gop_seqno = input->frame_gop_seqno;
>
> dev->h264_buf_w_index = next_frame_index;
> - tasklet_schedule(&dev->tasklet);
> + queue_work(system_bh_wq, &dev->work);
>
> cur_frame = next_frame;
>
> diff --git a/drivers/media/pci/tw5864/tw5864-video.c b/drivers/media/pci/tw5864/tw5864-video.c
> index 8b1aae4b6319..ac2249626506 100644
> --- a/drivers/media/pci/tw5864/tw5864-video.c
> +++ b/drivers/media/pci/tw5864/tw5864-video.c
> @@ -6,6 +6,7 @@
> */
>
> #include <linux/module.h>
> +#include <linux/workqueue.h>
> #include <media/v4l2-common.h>
> #include <media/v4l2-event.h>
> #include <media/videobuf2-dma-contig.h>
> @@ -175,7 +176,7 @@ static const unsigned int intra4x4_lambda3[] = {
> static v4l2_std_id tw5864_get_v4l2_std(enum tw5864_vid_std std);
> static enum tw5864_vid_std tw5864_from_v4l2_std(v4l2_std_id v4l2_std);
>
> -static void tw5864_handle_frame_task(struct tasklet_struct *t);
> +static void tw5864_handle_frame_task(struct work_struct *t);
> static void tw5864_handle_frame(struct tw5864_h264_frame *frame);
> static void tw5864_frame_interval_set(struct tw5864_input *input);
>
> @@ -1062,7 +1063,7 @@ int tw5864_video_init(struct tw5864_dev *dev, int *video_nr)
> dev->irqmask |= TW5864_INTR_VLC_DONE | TW5864_INTR_TIMER;
> tw5864_irqmask_apply(dev);
>
> - tasklet_setup(&dev->tasklet, tw5864_handle_frame_task);
> + INIT_WORK(&dev->work, tw5864_handle_frame_task);
>
> for (i = 0; i < TW5864_INPUTS; i++) {
> dev->inputs[i].root = dev;
> @@ -1079,7 +1080,7 @@ int tw5864_video_init(struct tw5864_dev *dev, int *video_nr)
> for (i = last_input_nr_registered; i >= 0; i--)
> tw5864_video_input_fini(&dev->inputs[i]);
>
> - tasklet_kill(&dev->tasklet);
> + cancel_work_sync(&dev->work);
>
> free_dma:
> for (i = last_dma_allocated; i >= 0; i--) {
> @@ -1198,7 +1199,7 @@ void tw5864_video_fini(struct tw5864_dev *dev)
> {
> int i;
>
> - tasklet_kill(&dev->tasklet);
> + cancel_work_sync(&dev->work);
>
> for (i = 0; i < TW5864_INPUTS; i++)
> tw5864_video_input_fini(&dev->inputs[i]);
> @@ -1315,9 +1316,9 @@ static int tw5864_is_motion_triggered(struct tw5864_h264_frame *frame)
> return detected;
> }
>
> -static void tw5864_handle_frame_task(struct tasklet_struct *t)
> +static void tw5864_handle_frame_task(struct work_struct *t)
> {
> - struct tw5864_dev *dev = from_tasklet(dev, t, tasklet);
> + struct tw5864_dev *dev = from_work(dev, t, work);
> unsigned long flags;
> int batch_size = H264_BUF_CNT;
>
> diff --git a/drivers/media/pci/tw5864/tw5864.h b/drivers/media/pci/tw5864/tw5864.h
> index a8b6fbd5b710..278373859098 100644
> --- a/drivers/media/pci/tw5864/tw5864.h
> +++ b/drivers/media/pci/tw5864/tw5864.h
> @@ -12,6 +12,7 @@
> #include <linux/mutex.h>
> #include <linux/io.h>
> #include <linux/interrupt.h>
> +#include <linux/workqueue.h>
>
> #include <media/v4l2-common.h>
> #include <media/v4l2-ioctl.h>
> @@ -85,7 +86,7 @@ struct tw5864_input {
> int nr; /* input number */
> struct tw5864_dev *root;
> struct mutex lock; /* used for vidq and vdev */
> - spinlock_t slock; /* used for sync between ISR, tasklet & V4L2 API */
> + spinlock_t slock; /* used for sync between ISR, work & V4L2 API */
> struct video_device vdev;
> struct v4l2_ctrl_handler hdl;
> struct vb2_queue vidq;
> @@ -142,7 +143,7 @@ struct tw5864_h264_frame {
>
> /* global device status */
> struct tw5864_dev {
> - spinlock_t slock; /* used for sync between ISR, tasklet & V4L2 API */
> + spinlock_t slock; /* used for sync between ISR, work & V4L2 API */
> struct v4l2_device v4l2_dev;
> struct tw5864_input inputs[TW5864_INPUTS];
> #define H264_BUF_CNT 4
> @@ -150,7 +151,7 @@ struct tw5864_dev {
> int h264_buf_r_index;
> int h264_buf_w_index;
>
> - struct tasklet_struct tasklet;
> + struct work_struct work;
>
> int encoder_busy;
> /* Input number to check next for ready raw picture (in RR fashion) */
> diff --git a/drivers/media/platform/intel/pxa_camera.c b/drivers/media/platform/intel/pxa_camera.c
> index d904952bf00e..df0a3c559287 100644
> --- a/drivers/media/platform/intel/pxa_camera.c
> +++ b/drivers/media/platform/intel/pxa_camera.c
> @@ -43,6 +43,7 @@
> #include <linux/videodev2.h>
>
> #include <linux/platform_data/media/camera-pxa.h>
> +#include <linux/workqueue.h>
>
> #define PXA_CAM_VERSION "0.0.6"
> #define PXA_CAM_DRV_NAME "pxa27x-camera"
> @@ -683,7 +684,7 @@ struct pxa_camera_dev {
> unsigned int buf_sequence;
>
> struct pxa_buffer *active;
> - struct tasklet_struct task_eof;
> + struct work_struct task_eof;
>
> u32 save_cicr[5];
> };
> @@ -1146,9 +1147,9 @@ static void pxa_camera_deactivate(struct pxa_camera_dev *pcdev)
> clk_disable_unprepare(pcdev->clk);
> }
>
> -static void pxa_camera_eof(struct tasklet_struct *t)
> +static void pxa_camera_eof(struct work_struct *t)
> {
> - struct pxa_camera_dev *pcdev = from_tasklet(pcdev, t, task_eof);
> + struct pxa_camera_dev *pcdev = from_work(pcdev, t, task_eof);
> unsigned long cifr;
> struct pxa_buffer *buf;
>
> @@ -1185,7 +1186,7 @@ static irqreturn_t pxa_camera_irq(int irq, void *data)
> if (status & CISR_EOF) {
> cicr0 = __raw_readl(pcdev->base + CICR0) | CICR0_EOFM;
> __raw_writel(cicr0, pcdev->base + CICR0);
> - tasklet_schedule(&pcdev->task_eof);
> + queue_work(system_bh_wq, &pcdev->task_eof);
> }
>
> return IRQ_HANDLED;
> @@ -2383,7 +2384,7 @@ static int pxa_camera_probe(struct platform_device *pdev)
> }
> }
>
> - tasklet_setup(&pcdev->task_eof, pxa_camera_eof);
> + INIT_WORK(&pcdev->task_eof, pxa_camera_eof);
>
> pxa_camera_activate(pcdev);
>
> @@ -2409,7 +2410,7 @@ static int pxa_camera_probe(struct platform_device *pdev)
> return 0;
> exit_deactivate:
> pxa_camera_deactivate(pcdev);
> - tasklet_kill(&pcdev->task_eof);
> + cancel_work_sync(&pcdev->task_eof);
> exit_free_dma:
> dma_release_channel(pcdev->dma_chans[2]);
> exit_free_dma_u:
> @@ -2428,7 +2429,7 @@ static void pxa_camera_remove(struct platform_device *pdev)
> struct pxa_camera_dev *pcdev = platform_get_drvdata(pdev);
>
> pxa_camera_deactivate(pcdev);
> - tasklet_kill(&pcdev->task_eof);
> + cancel_work_sync(&pcdev->task_eof);
> dma_release_channel(pcdev->dma_chans[0]);
> dma_release_channel(pcdev->dma_chans[1]);
> dma_release_channel(pcdev->dma_chans[2]);
> diff --git a/drivers/media/platform/marvell/mcam-core.c b/drivers/media/platform/marvell/mcam-core.c
> index 66688b4aece5..d6b96a7039be 100644
> --- a/drivers/media/platform/marvell/mcam-core.c
> +++ b/drivers/media/platform/marvell/mcam-core.c
> @@ -25,6 +25,7 @@
> #include <linux/clk-provider.h>
> #include <linux/videodev2.h>
> #include <linux/pm_runtime.h>
> +#include <linux/workqueue.h>
> #include <media/v4l2-device.h>
> #include <media/v4l2-ioctl.h>
> #include <media/v4l2-ctrls.h>
> @@ -439,9 +440,9 @@ static void mcam_ctlr_dma_vmalloc(struct mcam_camera *cam)
> /*
> * Copy data out to user space in the vmalloc case
> */
> -static void mcam_frame_tasklet(struct tasklet_struct *t)
> +static void mcam_frame_work(struct work_struct *t)
> {
> - struct mcam_camera *cam = from_tasklet(cam, t, s_tasklet);
> + struct mcam_camera *cam = from_work(cam, t, s_work);
> int i;
> unsigned long flags;
> struct mcam_vb_buffer *buf;
> @@ -480,7 +481,7 @@ static void mcam_frame_tasklet(struct tasklet_struct *t)
>
>
> /*
> - * Make sure our allocated buffers are up to the task.
> + * Make sure our allocated buffers are up to the work.
> */
> static int mcam_check_dma_buffers(struct mcam_camera *cam)
> {
> @@ -493,7 +494,7 @@ static int mcam_check_dma_buffers(struct mcam_camera *cam)
>
> static void mcam_vmalloc_done(struct mcam_camera *cam, int frame)
> {
> - tasklet_schedule(&cam->s_tasklet);
> + queue_work(system_bh_wq, &cam->s_work);
> }
>
> #else /* MCAM_MODE_VMALLOC */
> @@ -1305,7 +1306,7 @@ static int mcam_setup_vb2(struct mcam_camera *cam)
> break;
> case B_vmalloc:
> #ifdef MCAM_MODE_VMALLOC
> - tasklet_setup(&cam->s_tasklet, mcam_frame_tasklet);
> + INIT_WORK(&cam->s_work, mcam_frame_work);
> vq->ops = &mcam_vb2_ops;
> vq->mem_ops = &vb2_vmalloc_memops;
> cam->dma_setup = mcam_ctlr_dma_vmalloc;
> diff --git a/drivers/media/platform/marvell/mcam-core.h b/drivers/media/platform/marvell/mcam-core.h
> index 51e66db45af6..0d4b953dbb23 100644
> --- a/drivers/media/platform/marvell/mcam-core.h
> +++ b/drivers/media/platform/marvell/mcam-core.h
> @@ -9,6 +9,7 @@
>
> #include <linux/list.h>
> #include <linux/clk-provider.h>
> +#include <linux/workqueue.h>
> #include <media/v4l2-common.h>
> #include <media/v4l2-ctrls.h>
> #include <media/v4l2-dev.h>
> @@ -167,7 +168,7 @@ struct mcam_camera {
> unsigned int dma_buf_size; /* allocated size */
> void *dma_bufs[MAX_DMA_BUFS]; /* Internal buffer addresses */
> dma_addr_t dma_handles[MAX_DMA_BUFS]; /* Buffer bus addresses */
> - struct tasklet_struct s_tasklet;
> + struct work_struct s_work;
> #endif
> unsigned int sequence; /* Frame sequence number */
> unsigned int buf_seq[MAX_DMA_BUFS]; /* Sequence for individual bufs */
> diff --git a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
> index e4cf27b5a072..22b359569a10 100644
> --- a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
> +++ b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
> @@ -33,6 +33,7 @@
> #include <linux/time.h>
> #include <linux/usb.h>
> #include <linux/wait.h>
> +#include <linux/workqueue.h>
>
> #include "c8sectpfe-common.h"
> #include "c8sectpfe-core.h"
> @@ -73,16 +74,16 @@ static void c8sectpfe_timer_interrupt(struct timer_list *t)
>
> /* is this descriptor initialised and TP enabled */
> if (channel->irec && readl(channel->irec + DMA_PRDS_TPENABLE))
> - tasklet_schedule(&channel->tsklet);
> + queue_work(system_bh_wq, &channel->tsklet);
> }
>
> fei->timer.expires = jiffies + msecs_to_jiffies(POLL_MSECS);
> add_timer(&fei->timer);
> }
>
> -static void channel_swdemux_tsklet(struct tasklet_struct *t)
> +static void channel_swdemux_tsklet(struct work_struct *t)
> {
> - struct channel_info *channel = from_tasklet(channel, t, tsklet);
> + struct channel_info *channel = from_work(channel, t, tsklet);
> struct c8sectpfei *fei;
> unsigned long wp, rp;
> int pos, num_packets, n, size;
> @@ -211,7 +212,7 @@ static int c8sectpfe_start_feed(struct dvb_demux_feed *dvbdmxfeed)
>
> dev_dbg(fei->dev, "Starting channel=%p\n", channel);
>
> - tasklet_setup(&channel->tsklet, channel_swdemux_tsklet);
> + INIT_WORK(&channel->tsklet, channel_swdemux_tsklet);
>
> /* Reset the internal inputblock sram pointers */
> writel(channel->fifo,
> @@ -304,7 +305,7 @@ static int c8sectpfe_stop_feed(struct dvb_demux_feed *dvbdmxfeed)
> /* disable this channels descriptor */
> writel(0, channel->irec + DMA_PRDS_TPENABLE);
>
> - tasklet_disable(&channel->tsklet);
> + disable_work_sync(&channel->tsklet);
>
> /* now request memdma channel goes idle */
> idlereq = (1 << channel->tsin_id) | IDLEREQ;
> @@ -631,8 +632,8 @@ static int configure_memdma_and_inputblock(struct c8sectpfei *fei,
> writel(tsin->back_buffer_busaddr, tsin->irec + DMA_PRDS_BUSWP_TP(0));
> writel(tsin->back_buffer_busaddr, tsin->irec + DMA_PRDS_BUSRP_TP(0));
>
> - /* initialize tasklet */
> - tasklet_setup(&tsin->tsklet, channel_swdemux_tsklet);
> + /* initialize work */
> + INIT_WORK(&tsin->tsklet, channel_swdemux_tsklet);
>
> return 0;
>
> diff --git a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.h b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.h
> index bf377cc82225..d63f0ee83615 100644
> --- a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.h
> +++ b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.h
> @@ -51,7 +51,7 @@ struct channel_info {
> unsigned long fifo;
>
> struct completion idle_completion;
> - struct tasklet_struct tsklet;
> + struct work_struct tsklet;
>
> struct c8sectpfei *fei;
> void __iomem *irec;
> diff --git a/drivers/media/radio/wl128x/fmdrv.h b/drivers/media/radio/wl128x/fmdrv.h
> index da8920169df8..85282f638c4a 100644
> --- a/drivers/media/radio/wl128x/fmdrv.h
> +++ b/drivers/media/radio/wl128x/fmdrv.h
> @@ -15,6 +15,7 @@
> #include <sound/core.h>
> #include <sound/initval.h>
> #include <linux/timer.h>
> +#include <linux/workqueue.h>
> #include <media/v4l2-ioctl.h>
> #include <media/v4l2-common.h>
> #include <media/v4l2-device.h>
> @@ -200,15 +201,15 @@ struct fmdev {
> int streg_cbdata; /* status of ST registration */
>
> struct sk_buff_head rx_q; /* RX queue */
> - struct tasklet_struct rx_task; /* RX Tasklet */
> + struct work_struct rx_task; /* RX Work */
>
> struct sk_buff_head tx_q; /* TX queue */
> - struct tasklet_struct tx_task; /* TX Tasklet */
> + struct work_struct tx_task; /* TX Work */
> unsigned long last_tx_jiffies; /* Timestamp of last pkt sent */
> atomic_t tx_cnt; /* Number of packets can send at a time */
>
> struct sk_buff *resp_skb; /* Response from the chip */
> - /* Main task completion handler */
> + /* Main work completion handler */
> struct completion maintask_comp;
> /* Opcode of last command sent to the chip */
> u8 pre_op;
> diff --git a/drivers/media/radio/wl128x/fmdrv_common.c b/drivers/media/radio/wl128x/fmdrv_common.c
> index 3da8e5102bec..52290bb4a4ad 100644
> --- a/drivers/media/radio/wl128x/fmdrv_common.c
> +++ b/drivers/media/radio/wl128x/fmdrv_common.c
> @@ -9,7 +9,7 @@
> * one Channel-8 command to be sent to the chip).
> * 2) Sending each Channel-8 command to the chip and reading
> * response back over Shared Transport.
> - * 3) Managing TX and RX Queues and Tasklets.
> + * 3) Managing TX and RX Queues and Works.
> * 4) Handling FM Interrupt packet and taking appropriate action.
> * 5) Loading FM firmware to the chip (common, FM TX, and FM RX
> * firmware files based on mode selection)
> @@ -29,6 +29,7 @@
> #include "fmdrv_v4l2.h"
> #include "fmdrv_common.h"
> #include <linux/ti_wilink_st.h>
> +#include <linux/workqueue.h>
> #include "fmdrv_rx.h"
> #include "fmdrv_tx.h"
>
> @@ -244,10 +245,10 @@ void fmc_update_region_info(struct fmdev *fmdev, u8 region_to_set)
> }
>
> /*
> - * FM common sub-module will schedule this tasklet whenever it receives
> + * FM common sub-module will schedule this work whenever it receives
> * FM packet from ST driver.
> */
> -static void recv_tasklet(struct tasklet_struct *t)
> +static void recv_work(struct work_struct *t)
> {
> struct fmdev *fmdev;
> struct fm_irq *irq_info;
> @@ -256,7 +257,7 @@ static void recv_tasklet(struct tasklet_struct *t)
> u8 num_fm_hci_cmds;
> unsigned long flags;
>
> - fmdev = from_tasklet(fmdev, t, tx_task);
> + fmdev = from_work(fmdev, t, tx_task);
> irq_info = &fmdev->irq_info;
> /* Process all packets in the RX queue */
> while ((skb = skb_dequeue(&fmdev->rx_q))) {
> @@ -322,22 +323,22 @@ static void recv_tasklet(struct tasklet_struct *t)
>
> /*
> * Check flow control field. If Num_FM_HCI_Commands field is
> - * not zero, schedule FM TX tasklet.
> + * not zero, schedule FM TX work.
> */
> if (num_fm_hci_cmds && atomic_read(&fmdev->tx_cnt))
> if (!skb_queue_empty(&fmdev->tx_q))
> - tasklet_schedule(&fmdev->tx_task);
> + queue_work(system_bh_wq, &fmdev->tx_task);
> }
> }
>
> -/* FM send tasklet: is scheduled when FM packet has to be sent to chip */
> -static void send_tasklet(struct tasklet_struct *t)
> +/* FM send work: is scheduled when FM packet has to be sent to chip */
> +static void send_work(struct work_struct *t)
> {
> struct fmdev *fmdev;
> struct sk_buff *skb;
> int len;
>
> - fmdev = from_tasklet(fmdev, t, tx_task);
> + fmdev = from_work(fmdev, t, tx_task);
>
> if (!atomic_read(&fmdev->tx_cnt))
> return;
> @@ -366,7 +367,7 @@ static void send_tasklet(struct tasklet_struct *t)
> if (len < 0) {
> kfree_skb(skb);
> fmdev->resp_comp = NULL;
> - fmerr("TX tasklet failed to send skb(%p)\n", skb);
> + fmerr("TX work failed to send skb(%p)\n", skb);
> atomic_set(&fmdev->tx_cnt, 1);
> } else {
> fmdev->last_tx_jiffies = jiffies;
> @@ -374,7 +375,7 @@ static void send_tasklet(struct tasklet_struct *t)
> }
>
> /*
> - * Queues FM Channel-8 packet to FM TX queue and schedules FM TX tasklet for
> + * Queues FM Channel-8 packet to FM TX queue and schedules FM TX work for
> * transmission
> */
> static int fm_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,
> @@ -440,7 +441,7 @@ static int fm_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,
>
> fm_cb(skb)->completion = wait_completion;
> skb_queue_tail(&fmdev->tx_q, skb);
> - tasklet_schedule(&fmdev->tx_task);
> + queue_work(system_bh_wq, &fmdev->tx_task);
>
> return 0;
> }
> @@ -462,7 +463,7 @@ int fmc_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,
>
> if (!wait_for_completion_timeout(&fmdev->maintask_comp,
> FM_DRV_TX_TIMEOUT)) {
> - fmerr("Timeout(%d sec),didn't get regcompletion signal from RX tasklet\n",
> + fmerr("Timeout(%d sec),didn't get regcompletion signal from RX work\n",
> jiffies_to_msecs(FM_DRV_TX_TIMEOUT) / 1000);
> return -ETIMEDOUT;
> }
> @@ -1455,7 +1456,7 @@ static long fm_st_receive(void *arg, struct sk_buff *skb)
>
> memcpy(skb_push(skb, 1), &skb->cb[0], 1);
> skb_queue_tail(&fmdev->rx_q, skb);
> - tasklet_schedule(&fmdev->rx_task);
> + queue_work(system_bh_wq, &fmdev->rx_task);
>
> return 0;
> }
> @@ -1537,13 +1538,13 @@ int fmc_prepare(struct fmdev *fmdev)
> spin_lock_init(&fmdev->rds_buff_lock);
> spin_lock_init(&fmdev->resp_skb_lock);
>
> - /* Initialize TX queue and TX tasklet */
> + /* Initialize TX queue and TX work */
> skb_queue_head_init(&fmdev->tx_q);
> - tasklet_setup(&fmdev->tx_task, send_tasklet);
> + INIT_WORK(&fmdev->tx_task, send_work);
>
> - /* Initialize RX Queue and RX tasklet */
> + /* Initialize RX Queue and RX work */
> skb_queue_head_init(&fmdev->rx_q);
> - tasklet_setup(&fmdev->rx_task, recv_tasklet);
> + INIT_WORK(&fmdev->rx_task, recv_work);
>
> fmdev->irq_info.stage = 0;
> atomic_set(&fmdev->tx_cnt, 1);
> @@ -1589,8 +1590,8 @@ int fmc_release(struct fmdev *fmdev)
> /* Service pending read */
> wake_up_interruptible(&fmdev->rx.rds.read_queue);
>
> - tasklet_kill(&fmdev->tx_task);
> - tasklet_kill(&fmdev->rx_task);
> + cancel_work_sync(&fmdev->tx_task);
> + cancel_work_sync(&fmdev->rx_task);
>
> skb_queue_purge(&fmdev->tx_q);
> skb_queue_purge(&fmdev->rx_q);
> diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
> index c76ba24c1f55..a2e2e58b7506 100644
> --- a/drivers/media/rc/mceusb.c
> +++ b/drivers/media/rc/mceusb.c
> @@ -774,7 +774,7 @@ static void mceusb_dev_printdata(struct mceusb_dev *ir, u8 *buf, int buf_len,
>
> /*
> * Schedule work that can't be done in interrupt handlers
> - * (mceusb_dev_recv() and mce_write_callback()) nor tasklets.
> + * (mceusb_dev_recv() and mce_write_callback()) nor works.
> * Invokes mceusb_deferred_kevent() for recovering from
> * error events specified by the kevent bit field.
> */
> diff --git a/drivers/media/usb/ttusb-dec/ttusb_dec.c b/drivers/media/usb/ttusb-dec/ttusb_dec.c
> index 79faa2560613..55eeb00f1126 100644
> --- a/drivers/media/usb/ttusb-dec/ttusb_dec.c
> +++ b/drivers/media/usb/ttusb-dec/ttusb_dec.c
> @@ -19,6 +19,7 @@
> #include <linux/input.h>
>
> #include <linux/mutex.h>
> +#include <linux/workqueue.h>
>
> #include <media/dmxdev.h>
> #include <media/dvb_demux.h>
> @@ -139,7 +140,7 @@ struct ttusb_dec {
> int v_pes_postbytes;
>
> struct list_head urb_frame_list;
> - struct tasklet_struct urb_tasklet;
> + struct work_struct urb_work;
> spinlock_t urb_frame_list_lock;
>
> struct dvb_demux_filter *audio_filter;
> @@ -766,9 +767,9 @@ static void ttusb_dec_process_urb_frame(struct ttusb_dec *dec, u8 *b,
> }
> }
>
> -static void ttusb_dec_process_urb_frame_list(struct tasklet_struct *t)
> +static void ttusb_dec_process_urb_frame_list(struct work_struct *t)
> {
> - struct ttusb_dec *dec = from_tasklet(dec, t, urb_tasklet);
> + struct ttusb_dec *dec = from_work(dec, t, urb_work);
> struct list_head *item;
> struct urb_frame *frame;
> unsigned long flags;
> @@ -822,7 +823,7 @@ static void ttusb_dec_process_urb(struct urb *urb)
> spin_unlock_irqrestore(&dec->urb_frame_list_lock,
> flags);
>
> - tasklet_schedule(&dec->urb_tasklet);
> + queue_work(system_bh_wq, &dec->urb_work);
> }
> }
> } else {
> @@ -1198,11 +1199,11 @@ static int ttusb_dec_alloc_iso_urbs(struct ttusb_dec *dec)
> return 0;
> }
>
> -static void ttusb_dec_init_tasklet(struct ttusb_dec *dec)
> +static void ttusb_dec_init_work(struct ttusb_dec *dec)
> {
> spin_lock_init(&dec->urb_frame_list_lock);
> INIT_LIST_HEAD(&dec->urb_frame_list);
> - tasklet_setup(&dec->urb_tasklet, ttusb_dec_process_urb_frame_list);
> + INIT_WORK(&dec->urb_work, ttusb_dec_process_urb_frame_list);
> }
>
> static int ttusb_init_rc( struct ttusb_dec *dec)
> @@ -1588,12 +1589,12 @@ static void ttusb_dec_exit_usb(struct ttusb_dec *dec)
> ttusb_dec_free_iso_urbs(dec);
> }
>
> -static void ttusb_dec_exit_tasklet(struct ttusb_dec *dec)
> +static void ttusb_dec_exit_work(struct ttusb_dec *dec)
> {
> struct list_head *item;
> struct urb_frame *frame;
>
> - tasklet_kill(&dec->urb_tasklet);
> + cancel_work_sync(&dec->urb_work);
>
> while ((item = dec->urb_frame_list.next) != &dec->urb_frame_list) {
> frame = list_entry(item, struct urb_frame, urb_frame_list);
> @@ -1703,7 +1704,7 @@ static int ttusb_dec_probe(struct usb_interface *intf,
>
> ttusb_dec_init_v_pes(dec);
> ttusb_dec_init_filters(dec);
> - ttusb_dec_init_tasklet(dec);
> + ttusb_dec_init_work(dec);
>
> dec->active = 1;
>
> @@ -1729,7 +1730,7 @@ static void ttusb_dec_disconnect(struct usb_interface *intf)
> dprintk("%s\n", __func__);
>
> if (dec->active) {
> - ttusb_dec_exit_tasklet(dec);
> + ttusb_dec_exit_work(dec);
> ttusb_dec_exit_filters(dec);
> if(enable_rc)
> ttusb_dec_exit_rc(dec);
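
Setting aside the compile issues noted above, the mantis and smipcie hunks aim at a
common lifecycle: the work item is created disabled and is only enabled while at
least one feed is active. A minimal sketch of that pattern, using a hypothetical
struct baz rather than code from the patch:

#include <linux/workqueue.h>

struct baz {
	int feeds;
	struct work_struct work;
};

static void baz_init(struct baz *baz, work_func_t fn)
{
	INIT_WORK(&baz->work, fn);
	disable_work_sync(&baz->work);			/* was: tasklet_disable() */
}

static int baz_start_feed(struct baz *baz)
{
	if (++baz->feeds == 1)
		enable_and_queue_work(system_bh_wq, &baz->work); /* was: tasklet_enable() */
	return baz->feeds;
}

static int baz_stop_feed(struct baz *baz)
{
	if (--baz->feeds == 0)
		disable_work_sync(&baz->work);		/* was: tasklet_disable() */
	return baz->feeds;
}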


2024-04-24 16:51:03

by Allen Pais

[permalink] [raw]
Subject: Re: [PATCH 8/9] drivers/media/*: Convert from tasklet to BH workqueue



> On Apr 24, 2024, at 2:12 AM, Hans Verkuil <[email protected]> wrote:
>
> On 27/03/2024 17:03, Allen Pais wrote:
>> The only generic interface to execute asynchronously in the BH context is
>> tasklet; however, it's marked deprecated and has some design flaws. To
>> replace tasklets, BH workqueue support was recently added. A BH workqueue
>> behaves similarly to regular workqueues except that the queued work items
>> are executed in the BH context.
>>
>> This patch converts drivers/media/* from tasklet to BH workqueue.
>>
>> Based on the work done by Tejun Heo <[email protected]>
>> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>>
>> Signed-off-by: Allen Pais <[email protected]>
>> ---
>> drivers/media/pci/bt8xx/bt878.c | 8 ++--
>> drivers/media/pci/bt8xx/bt878.h | 3 +-
>> drivers/media/pci/bt8xx/dvb-bt8xx.c | 9 ++--
>> drivers/media/pci/ddbridge/ddbridge.h | 3 +-
>> drivers/media/pci/mantis/hopper_cards.c | 2 +-
>> drivers/media/pci/mantis/mantis_cards.c | 2 +-
>> drivers/media/pci/mantis/mantis_common.h | 3 +-
>> drivers/media/pci/mantis/mantis_dma.c | 5 ++-
>> drivers/media/pci/mantis/mantis_dma.h | 2 +-
>> drivers/media/pci/mantis/mantis_dvb.c | 12 +++---
>> drivers/media/pci/ngene/ngene-core.c | 23 ++++++-----
>> drivers/media/pci/ngene/ngene.h | 5 ++-
>> drivers/media/pci/smipcie/smipcie-main.c | 18 ++++----
>> drivers/media/pci/smipcie/smipcie.h | 3 +-
>> drivers/media/pci/ttpci/budget-av.c | 3 +-
>> drivers/media/pci/ttpci/budget-ci.c | 27 ++++++------
>> drivers/media/pci/ttpci/budget-core.c | 10 ++---
>> drivers/media/pci/ttpci/budget.h | 5 ++-
>> drivers/media/pci/tw5864/tw5864-core.c | 2 +-
>> drivers/media/pci/tw5864/tw5864-video.c | 13 +++---
>> drivers/media/pci/tw5864/tw5864.h | 7 ++--
>> drivers/media/platform/intel/pxa_camera.c | 15 +++----
>> drivers/media/platform/marvell/mcam-core.c | 11 ++---
>> drivers/media/platform/marvell/mcam-core.h | 3 +-
>> .../st/sti/c8sectpfe/c8sectpfe-core.c | 15 +++----
>> .../st/sti/c8sectpfe/c8sectpfe-core.h | 2 +-
>> drivers/media/radio/wl128x/fmdrv.h | 7 ++--
>> drivers/media/radio/wl128x/fmdrv_common.c | 41 ++++++++++---------
>> drivers/media/rc/mceusb.c | 2 +-
>> drivers/media/usb/ttusb-dec/ttusb_dec.c | 21 +++++-----
>> 30 files changed, 151 insertions(+), 131 deletions(-)
>>
>> diff --git a/drivers/media/pci/bt8xx/bt878.c b/drivers/media/pci/bt8xx/bt878.c
>> index 90972d6952f1..983ec29108f0 100644
>> --- a/drivers/media/pci/bt8xx/bt878.c
>> +++ b/drivers/media/pci/bt8xx/bt878.c
>> @@ -300,8 +300,8 @@ static irqreturn_t bt878_irq(int irq, void *dev_id)
>> }
>> if (astat & BT878_ARISCI) {
>> bt->finished_block = (stat & BT878_ARISCS) >> 28;
>> - if (bt->tasklet.callback)
>> - tasklet_schedule(&bt->tasklet);
>> + if (bt->work.func)
>> + queue_work(system_bh_wq,
>
> I stopped reviewing here: this clearly has not been compile tested.
>
> Also please check the patch with 'checkpatch.pl --strict' and fix the reported issues.
>
> Regards,
>
> Hans

Hans,

Thanks for taking the time to review. This was a mistake; I sent out a v2 which had
this fixed. I am working on a v3 based on some of the comments I received recently. I would
appreciate your review when it is sent out.

Allen

>
>> break;
>> }
>> count++;
>> @@ -478,8 +478,8 @@ static int bt878_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
>> btwrite(0, BT878_AINT_MASK);
>> bt878_num++;
>>
>> - if (!bt->tasklet.func)
>> - tasklet_disable(&bt->tasklet);
>> + if (!bt->work.func)
>> + disable_work_sync(&bt->work);
>>
>> return 0;
>>
>> diff --git a/drivers/media/pci/bt8xx/bt878.h b/drivers/media/pci/bt8xx/bt878.h
>> index fde8db293c54..b9ce78e5116b 100644
>> --- a/drivers/media/pci/bt8xx/bt878.h
>> +++ b/drivers/media/pci/bt8xx/bt878.h
>> @@ -14,6 +14,7 @@
>> #include <linux/sched.h>
>> #include <linux/spinlock.h>
>> #include <linux/mutex.h>
>> +#include <linux/workqueue.h>
>>
>> #include "bt848.h"
>> #include "bttv.h"
>> @@ -120,7 +121,7 @@ struct bt878 {
>> dma_addr_t risc_dma;
>> u32 risc_pos;
>>
>> - struct tasklet_struct tasklet;
>> + struct work_struct work;
>> int shutdown;
>> };
>>
>> diff --git a/drivers/media/pci/bt8xx/dvb-bt8xx.c b/drivers/media/pci/bt8xx/dvb-bt8xx.c
>> index 390cbba6c065..8c0e1fa764a4 100644
>> --- a/drivers/media/pci/bt8xx/dvb-bt8xx.c
>> +++ b/drivers/media/pci/bt8xx/dvb-bt8xx.c
>> @@ -15,6 +15,7 @@
>> #include <linux/delay.h>
>> #include <linux/slab.h>
>> #include <linux/i2c.h>
>> +#include <linux/workqueue.h>
>>
>> #include <media/dmxdev.h>
>> #include <media/dvbdev.h>
>> @@ -39,9 +40,9 @@ DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);
>>
>> #define IF_FREQUENCYx6 217 /* 6 * 36.16666666667MHz */
>>
>> -static void dvb_bt8xx_task(struct tasklet_struct *t)
>> +static void dvb_bt8xx_task(struct work_struct *t)
>> {
>> - struct bt878 *bt = from_tasklet(bt, t, tasklet);
>> + struct bt878 *bt = from_work(bt, t, work);
>> struct dvb_bt8xx_card *card = dev_get_drvdata(&bt->adapter->dev);
>>
>> dprintk("%d\n", card->bt->finished_block);
>> @@ -782,7 +783,7 @@ static int dvb_bt8xx_load_card(struct dvb_bt8xx_card *card, u32 type)
>> goto err_disconnect_frontend;
>> }
>>
>> - tasklet_setup(&card->bt->tasklet, dvb_bt8xx_task);
>> + INIT_WORK(&card->bt->work, dvb_bt8xx_task);
>>
>> frontend_init(card, type);
>>
>> @@ -922,7 +923,7 @@ static void dvb_bt8xx_remove(struct bttv_sub_device *sub)
>> dprintk("dvb_bt8xx: unloading card%d\n", card->bttv_nr);
>>
>> bt878_stop(card->bt);
>> - tasklet_kill(&card->bt->tasklet);
>> + cancel_work_sync(&card->bt->work);
>> dvb_net_release(&card->dvbnet);
>> card->demux.dmx.remove_frontend(&card->demux.dmx, &card->fe_mem);
>> card->demux.dmx.remove_frontend(&card->demux.dmx, &card->fe_hw);
>> diff --git a/drivers/media/pci/ddbridge/ddbridge.h b/drivers/media/pci/ddbridge/ddbridge.h
>> index f3699dbd193f..037d1d13ef0f 100644
>> --- a/drivers/media/pci/ddbridge/ddbridge.h
>> +++ b/drivers/media/pci/ddbridge/ddbridge.h
>> @@ -35,6 +35,7 @@
>> #include <linux/uaccess.h>
>> #include <linux/vmalloc.h>
>> #include <linux/workqueue.h>
>> +#include <linux/workqueue.h>
>>
>> #include <asm/dma.h>
>> #include <asm/irq.h>
>> @@ -298,7 +299,7 @@ struct ddb_link {
>> spinlock_t lock; /* lock link access */
>> struct mutex flash_mutex; /* lock flash access */
>> struct ddb_lnb lnb;
>> - struct tasklet_struct tasklet;
>> + struct work_struct work;
>> struct ddb_ids ids;
>>
>> spinlock_t temp_lock; /* lock temp chip access */
>> diff --git a/drivers/media/pci/mantis/hopper_cards.c b/drivers/media/pci/mantis/hopper_cards.c
>> index c0bd5d7e148b..869ea88c4893 100644
>> --- a/drivers/media/pci/mantis/hopper_cards.c
>> +++ b/drivers/media/pci/mantis/hopper_cards.c
>> @@ -116,7 +116,7 @@ static irqreturn_t hopper_irq_handler(int irq, void *dev_id)
>> if (stat & MANTIS_INT_RISCI) {
>> dprintk(MANTIS_DEBUG, 0, "<%s>", label[8]);
>> mantis->busy_block = (stat & MANTIS_INT_RISCSTAT) >> 28;
>> - tasklet_schedule(&mantis->tasklet);
>> + queue_work(system_bh_wq, &mantis->work);
>> }
>> if (stat & MANTIS_INT_I2CDONE) {
>> dprintk(MANTIS_DEBUG, 0, "<%s>", label[9]);
>> diff --git a/drivers/media/pci/mantis/mantis_cards.c b/drivers/media/pci/mantis/mantis_cards.c
>> index 906e4500d87d..cb124b19e36e 100644
>> --- a/drivers/media/pci/mantis/mantis_cards.c
>> +++ b/drivers/media/pci/mantis/mantis_cards.c
>> @@ -125,7 +125,7 @@ static irqreturn_t mantis_irq_handler(int irq, void *dev_id)
>> if (stat & MANTIS_INT_RISCI) {
>> dprintk(MANTIS_DEBUG, 0, "<%s>", label[8]);
>> mantis->busy_block = (stat & MANTIS_INT_RISCSTAT) >> 28;
>> - tasklet_schedule(&mantis->tasklet);
>> + queue_work(system_bh_wq, &mantis->work);
>> }
>> if (stat & MANTIS_INT_I2CDONE) {
>> dprintk(MANTIS_DEBUG, 0, "<%s>", label[9]);
>> diff --git a/drivers/media/pci/mantis/mantis_common.h b/drivers/media/pci/mantis/mantis_common.h
>> index d88ac280226c..f2247148f268 100644
>> --- a/drivers/media/pci/mantis/mantis_common.h
>> +++ b/drivers/media/pci/mantis/mantis_common.h
>> @@ -12,6 +12,7 @@
>> #include <linux/interrupt.h>
>> #include <linux/mutex.h>
>> #include <linux/workqueue.h>
>> +#include <linux/workqueue.h>
>>
>> #include "mantis_reg.h"
>> #include "mantis_uart.h"
>> @@ -125,7 +126,7 @@ struct mantis_pci {
>> __le32 *risc_cpu;
>> dma_addr_t risc_dma;
>>
>> - struct tasklet_struct tasklet;
>> + struct work_struct work;
>> spinlock_t intmask_lock;
>>
>> struct i2c_adapter adapter;
>> diff --git a/drivers/media/pci/mantis/mantis_dma.c b/drivers/media/pci/mantis/mantis_dma.c
>> index 80c843936493..c85f9b84a2c6 100644
>> --- a/drivers/media/pci/mantis/mantis_dma.c
>> +++ b/drivers/media/pci/mantis/mantis_dma.c
>> @@ -15,6 +15,7 @@
>> #include <linux/signal.h>
>> #include <linux/sched.h>
>> #include <linux/interrupt.h>
>> +#include <linux/workqueue.h>
>>
>> #include <media/dmxdev.h>
>> #include <media/dvbdev.h>
>> @@ -200,9 +201,9 @@ void mantis_dma_stop(struct mantis_pci *mantis)
>> }
>>
>>
>> -void mantis_dma_xfer(struct tasklet_struct *t)
>> +void mantis_dma_xfer(struct work_struct *t)
>> {
>> - struct mantis_pci *mantis = from_tasklet(mantis, t, tasklet);
>> + struct mantis_pci *mantis = from_work(mantis, t, work);
>> struct mantis_hwconfig *config = mantis->hwconfig;
>>
>> while (mantis->last_block != mantis->busy_block) {
>> diff --git a/drivers/media/pci/mantis/mantis_dma.h b/drivers/media/pci/mantis/mantis_dma.h
>> index 37da982c9c29..5db0d3728f15 100644
>> --- a/drivers/media/pci/mantis/mantis_dma.h
>> +++ b/drivers/media/pci/mantis/mantis_dma.h
>> @@ -13,6 +13,6 @@ extern int mantis_dma_init(struct mantis_pci *mantis);
>> extern int mantis_dma_exit(struct mantis_pci *mantis);
>> extern void mantis_dma_start(struct mantis_pci *mantis);
>> extern void mantis_dma_stop(struct mantis_pci *mantis);
>> -extern void mantis_dma_xfer(struct tasklet_struct *t);
>> +extern void mantis_dma_xfer(struct work_struct *t);
>>
>> #endif /* __MANTIS_DMA_H */
>> diff --git a/drivers/media/pci/mantis/mantis_dvb.c b/drivers/media/pci/mantis/mantis_dvb.c
>> index c7ba4a76e608..f640635de170 100644
>> --- a/drivers/media/pci/mantis/mantis_dvb.c
>> +++ b/drivers/media/pci/mantis/mantis_dvb.c
>> @@ -105,7 +105,7 @@ static int mantis_dvb_start_feed(struct dvb_demux_feed *dvbdmxfeed)
>> if (mantis->feeds == 1) {
>> dprintk(MANTIS_DEBUG, 1, "mantis start feed & dma");
>> mantis_dma_start(mantis);
>> - tasklet_enable(&mantis->tasklet);
>> + enable_and_queue_work(system_bh_wq, &mantis->work);
>> }
>>
>> return mantis->feeds;
>> @@ -125,7 +125,7 @@ static int mantis_dvb_stop_feed(struct dvb_demux_feed *dvbdmxfeed)
>> mantis->feeds--;
>> if (mantis->feeds == 0) {
>> dprintk(MANTIS_DEBUG, 1, "mantis stop feed and dma");
>> - tasklet_disable(&mantis->tasklet);
>> + disable_work_sync(&mantis->work);
>> mantis_dma_stop(mantis);
>> }
>>
>> @@ -205,8 +205,8 @@ int mantis_dvb_init(struct mantis_pci *mantis)
>> }
>>
>> dvb_net_init(&mantis->dvb_adapter, &mantis->dvbnet, &mantis->demux.dmx);
>> - tasklet_setup(&mantis->tasklet, mantis_dma_xfer);
>> - tasklet_disable(&mantis->tasklet);
>> + INIT_WORK(&mantis->bh, mantis_dma_xfer);
>> + disable_work_sync(&mantis->work);
>> if (mantis->hwconfig) {
>> result = config->frontend_init(mantis, mantis->fe);
>> if (result < 0) {
>> @@ -235,7 +235,7 @@ int mantis_dvb_init(struct mantis_pci *mantis)
>>
>> /* Error conditions .. */
>> err5:
>> - tasklet_kill(&mantis->tasklet);
>> + cancel_work_sync(&mantis->work);
>> dvb_net_release(&mantis->dvbnet);
>> if (mantis->fe) {
>> dvb_unregister_frontend(mantis->fe);
>> @@ -273,7 +273,7 @@ int mantis_dvb_exit(struct mantis_pci *mantis)
>> dvb_frontend_detach(mantis->fe);
>> }
>>
>> - tasklet_kill(&mantis->tasklet);
>> + cancel_work_sync(&mantis->work);
>> dvb_net_release(&mantis->dvbnet);
>>
>> mantis->demux.dmx.remove_frontend(&mantis->demux.dmx, &mantis->fe_mem);
>> diff --git a/drivers/media/pci/ngene/ngene-core.c b/drivers/media/pci/ngene/ngene-core.c
>> index 7481f553f959..5211d6796748 100644
>> --- a/drivers/media/pci/ngene/ngene-core.c
>> +++ b/drivers/media/pci/ngene/ngene-core.c
>> @@ -21,6 +21,7 @@
>> #include <linux/byteorder/generic.h>
>> #include <linux/firmware.h>
>> #include <linux/vmalloc.h>
>> +#include <linux/workqueue.h>
>>
>> #include "ngene.h"
>>
>> @@ -50,9 +51,9 @@ DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);
>> /* nGene interrupt handler **************************************************/
>> /****************************************************************************/
>>
>> -static void event_tasklet(struct tasklet_struct *t)
>> +static void event_work(struct work_struct *t)
>> {
>> - struct ngene *dev = from_tasklet(dev, t, event_tasklet);
>> + struct ngene *dev = from_work(dev, t, event_work);
>>
>> while (dev->EventQueueReadIndex != dev->EventQueueWriteIndex) {
>> struct EVENT_BUFFER Event =
>> @@ -68,9 +69,9 @@ static void event_tasklet(struct tasklet_struct *t)
>> }
>> }
>>
>> -static void demux_tasklet(struct tasklet_struct *t)
>> +static void demux_work(struct work_struct *t)
>> {
>> - struct ngene_channel *chan = from_tasklet(chan, t, demux_tasklet);
>> + struct ngene_channel *chan = from_work(chan, t, demux_work);
>> struct device *pdev = &chan->dev->pci_dev->dev;
>> struct SBufferHeader *Cur = chan->nextBuffer;
>>
>> @@ -204,7 +205,7 @@ static irqreturn_t irq_handler(int irq, void *dev_id)
>> dev->EventQueueOverflowFlag = 1;
>> }
>> dev->EventBuffer->EventStatus &= ~0x80;
>> - tasklet_schedule(&dev->event_tasklet);
>> + queue_work(system_bh_wq, &dev->event_work);
>> rc = IRQ_HANDLED;
>> }
>>
>> @@ -217,8 +218,8 @@ static irqreturn_t irq_handler(int irq, void *dev_id)
>> ngeneBuffer.SR.Flags & 0xC0) == 0x80) {
>> dev->channel[i].nextBuffer->
>> ngeneBuffer.SR.Flags |= 0x40;
>> - tasklet_schedule(
>> - &dev->channel[i].demux_tasklet);
>> + queue_work(system_bh_wq,
>> + &dev->channel[i].demux_work);
>> rc = IRQ_HANDLED;
>> }
>> }
>> @@ -1181,7 +1182,7 @@ static void ngene_init(struct ngene *dev)
>> struct device *pdev = &dev->pci_dev->dev;
>> int i;
>>
>> - tasklet_setup(&dev->event_tasklet, event_tasklet);
>> + INIT_WORK(&dev->event_work, event_work);
>>
>> memset_io(dev->iomem + 0xc000, 0x00, 0x220);
>> memset_io(dev->iomem + 0xc400, 0x00, 0x100);
>> @@ -1395,7 +1396,7 @@ static void release_channel(struct ngene_channel *chan)
>> if (chan->running)
>> set_transfer(chan, 0);
>>
>> - tasklet_kill(&chan->demux_tasklet);
>> + cancel_work_sync(&chan->demux_work);
>>
>> if (chan->ci_dev) {
>> dvb_unregister_device(chan->ci_dev);
>> @@ -1445,7 +1446,7 @@ static int init_channel(struct ngene_channel *chan)
>> struct ngene_info *ni = dev->card_info;
>> int io = ni->io_type[nr];
>>
>> - tasklet_setup(&chan->demux_tasklet, demux_tasklet);
>> + INIT_WORK(&chan->demux_work, demux_work);
>> chan->users = 0;
>> chan->type = io;
>> chan->mode = chan->type; /* for now only one mode */
>> @@ -1647,7 +1648,7 @@ void ngene_remove(struct pci_dev *pdev)
>> struct ngene *dev = pci_get_drvdata(pdev);
>> int i;
>>
>> - tasklet_kill(&dev->event_tasklet);
>> + cancel_work_sync(&dev->event_work);
>> for (i = MAX_STREAM - 1; i >= 0; i--)
>> release_channel(&dev->channel[i]);
>> if (dev->ci.en)
>> diff --git a/drivers/media/pci/ngene/ngene.h b/drivers/media/pci/ngene/ngene.h
>> index d1d7da84cd9d..c2a23f6dbe09 100644
>> --- a/drivers/media/pci/ngene/ngene.h
>> +++ b/drivers/media/pci/ngene/ngene.h
>> @@ -16,6 +16,7 @@
>> #include <linux/scatterlist.h>
>>
>> #include <linux/dvb/frontend.h>
>> +#include <linux/workqueue.h>
>>
>> #include <media/dmxdev.h>
>> #include <media/dvbdev.h>
>> @@ -621,7 +622,7 @@ struct ngene_channel {
>> int users;
>> struct video_device *v4l_dev;
>> struct dvb_device *ci_dev;
>> - struct tasklet_struct demux_tasklet;
>> + struct work_struct demux_work;
>>
>> struct SBufferHeader *nextBuffer;
>> enum KSSTATE State;
>> @@ -717,7 +718,7 @@ struct ngene {
>> struct EVENT_BUFFER EventQueue[EVENT_QUEUE_SIZE];
>> int EventQueueOverflowCount;
>> int EventQueueOverflowFlag;
>> - struct tasklet_struct event_tasklet;
>> + struct work_struct event_work;
>> struct EVENT_BUFFER *EventBuffer;
>> int EventQueueWriteIndex;
>> int EventQueueReadIndex;
>> diff --git a/drivers/media/pci/smipcie/smipcie-main.c b/drivers/media/pci/smipcie/smipcie-main.c
>> index 0c300d019d9c..7da6bb55660b 100644
>> --- a/drivers/media/pci/smipcie/smipcie-main.c
>> +++ b/drivers/media/pci/smipcie/smipcie-main.c
>> @@ -279,10 +279,10 @@ static void smi_port_clearInterrupt(struct smi_port *port)
>> (port->_dmaInterruptCH0 | port->_dmaInterruptCH1));
>> }
>>
>> -/* tasklet handler: DMA data to dmx.*/
>> -static void smi_dma_xfer(struct tasklet_struct *t)
>> +/* work handler: DMA data to dmx.*/
>> +static void smi_dma_xfer(struct work_struct *t)
>> {
>> - struct smi_port *port = from_tasklet(port, t, tasklet);
>> + struct smi_port *port = from_work(port, t, work);
>> struct smi_dev *dev = port->dev;
>> u32 intr_status, finishedData, dmaManagement;
>> u8 dmaChan0State, dmaChan1State;
>> @@ -426,8 +426,8 @@ static int smi_port_init(struct smi_port *port, int dmaChanUsed)
>> }
>>
>> smi_port_disableInterrupt(port);
>> - tasklet_setup(&port->tasklet, smi_dma_xfer);
>> - tasklet_disable(&port->tasklet);
>> + INIT_WORK(&port->work, smi_dma_xfer);
>> + disable_work_sync(&port->work);
>> port->enable = 1;
>> return 0;
>> err:
>> @@ -438,7 +438,7 @@ static int smi_port_init(struct smi_port *port, int dmaChanUsed)
>> static void smi_port_exit(struct smi_port *port)
>> {
>> smi_port_disableInterrupt(port);
>> - tasklet_kill(&port->tasklet);
>> + cancel_work_sync(&port->work);
>> smi_port_dma_free(port);
>> port->enable = 0;
>> }
>> @@ -452,7 +452,7 @@ static int smi_port_irq(struct smi_port *port, u32 int_status)
>> smi_port_disableInterrupt(port);
>> port->_int_status = int_status;
>> smi_port_clearInterrupt(port);
>> - tasklet_schedule(&port->tasklet);
>> + queue_work(system_bh_wq, &port->work);
>> handled = 1;
>> }
>> return handled;
>> @@ -823,7 +823,7 @@ static int smi_start_feed(struct dvb_demux_feed *dvbdmxfeed)
>> smi_port_clearInterrupt(port);
>> smi_port_enableInterrupt(port);
>> smi_write(port->DMA_MANAGEMENT, dmaManagement);
>> - tasklet_enable(&port->tasklet);
>> + enable_and_queue_work(system_bh_wq, &port->work);
>> }
>> return port->users;
>> }
>> @@ -837,7 +837,7 @@ static int smi_stop_feed(struct dvb_demux_feed *dvbdmxfeed)
>> if (--port->users)
>> return port->users;
>>
>> - tasklet_disable(&port->tasklet);
>> + disable_work_sync(&port->work);
>> smi_port_disableInterrupt(port);
>> smi_clear(port->DMA_MANAGEMENT, 0x30003);
>> return 0;
>> diff --git a/drivers/media/pci/smipcie/smipcie.h b/drivers/media/pci/smipcie/smipcie.h
>> index 2b5e0154814c..f124d2cdead6 100644
>> --- a/drivers/media/pci/smipcie/smipcie.h
>> +++ b/drivers/media/pci/smipcie/smipcie.h
>> @@ -17,6 +17,7 @@
>> #include <linux/pci.h>
>> #include <linux/dma-mapping.h>
>> #include <linux/slab.h>
>> +#include <linux/workqueue.h>
>> #include <media/rc-core.h>
>>
>> #include <media/demux.h>
>> @@ -257,7 +258,7 @@ struct smi_port {
>> u32 _dmaInterruptCH0;
>> u32 _dmaInterruptCH1;
>> u32 _int_status;
>> - struct tasklet_struct tasklet;
>> + struct work_struct work;
>> /* dvb */
>> struct dmx_frontend hw_frontend;
>> struct dmx_frontend mem_frontend;
>> diff --git a/drivers/media/pci/ttpci/budget-av.c b/drivers/media/pci/ttpci/budget-av.c
>> index a47c5850ef87..6e43b1a01191 100644
>> --- a/drivers/media/pci/ttpci/budget-av.c
>> +++ b/drivers/media/pci/ttpci/budget-av.c
>> @@ -37,6 +37,7 @@
>> #include <linux/interrupt.h>
>> #include <linux/input.h>
>> #include <linux/spinlock.h>
>> +#include <linux/workqueue.h>
>>
>> #include <media/dvb_ca_en50221.h>
>>
>> @@ -55,7 +56,7 @@ struct budget_av {
>> struct video_device vd;
>> int cur_input;
>> int has_saa7113;
>> - struct tasklet_struct ciintf_irq_tasklet;
>> + struct work_struct ciintf_irq_work;
>> int slot_status;
>> struct dvb_ca_en50221 ca;
>> u8 reinitialise_demod:1;
>> diff --git a/drivers/media/pci/ttpci/budget-ci.c b/drivers/media/pci/ttpci/budget-ci.c
>> index 66e1a004ee43..11e0ed62707e 100644
>> --- a/drivers/media/pci/ttpci/budget-ci.c
>> +++ b/drivers/media/pci/ttpci/budget-ci.c
>> @@ -17,6 +17,7 @@
>> #include <linux/slab.h>
>> #include <linux/interrupt.h>
>> #include <linux/spinlock.h>
>> +#include <linux/workqueue.h>
>> #include <media/rc-core.h>
>>
>> #include "budget.h"
>> @@ -80,7 +81,7 @@ DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);
>>
>> struct budget_ci_ir {
>> struct rc_dev *dev;
>> - struct tasklet_struct msp430_irq_tasklet;
>> + struct work_struct msp430_irq_work;
>> char name[72]; /* 40 + 32 for (struct saa7146_dev).name */
>> char phys[32];
>> int rc5_device;
>> @@ -91,7 +92,7 @@ struct budget_ci_ir {
>>
>> struct budget_ci {
>> struct budget budget;
>> - struct tasklet_struct ciintf_irq_tasklet;
>> + struct work_struct ciintf_irq_work;
>> int slot_status;
>> int ci_irq;
>> struct dvb_ca_en50221 ca;
>> @@ -99,9 +100,9 @@ struct budget_ci {
>> u8 tuner_pll_address; /* used for philips_tdm1316l configs */
>> };
>>
>> -static void msp430_ir_interrupt(struct tasklet_struct *t)
>> +static void msp430_ir_interrupt(struct work_struct *t)
>> {
>> - struct budget_ci_ir *ir = from_tasklet(ir, t, msp430_irq_tasklet);
>> + struct budget_ci_ir *ir = from_work(ir, t, msp430_irq_work);
>> struct budget_ci *budget_ci = container_of(ir, typeof(*budget_ci), ir);
>> struct rc_dev *dev = budget_ci->ir.dev;
>> u32 command = ttpci_budget_debiread(&budget_ci->budget, DEBINOSWAP, DEBIADDR_IR, 2, 1, 0) >> 8;
>> @@ -230,7 +231,7 @@ static int msp430_ir_init(struct budget_ci *budget_ci)
>>
>> budget_ci->ir.dev = dev;
>>
>> - tasklet_setup(&budget_ci->ir.msp430_irq_tasklet, msp430_ir_interrupt);
>> + INIT_WORK(&budget_ci->ir.msp430_irq_work, msp430_ir_interrupt);
>>
>> SAA7146_IER_ENABLE(saa, MASK_06);
>> saa7146_setgpio(saa, 3, SAA7146_GPIO_IRQHI);
>> @@ -244,7 +245,7 @@ static void msp430_ir_deinit(struct budget_ci *budget_ci)
>>
>> SAA7146_IER_DISABLE(saa, MASK_06);
>> saa7146_setgpio(saa, 3, SAA7146_GPIO_INPUT);
>> - tasklet_kill(&budget_ci->ir.msp430_irq_tasklet);
>> + cancel_work_sync(&budget_ci->ir.msp430_irq_work);
>>
>> rc_unregister_device(budget_ci->ir.dev);
>> }
>> @@ -348,10 +349,10 @@ static int ciintf_slot_ts_enable(struct dvb_ca_en50221 *ca, int slot)
>> return 0;
>> }
>>
>> -static void ciintf_interrupt(struct tasklet_struct *t)
>> +static void ciintf_interrupt(struct work_struct *t)
>> {
>> - struct budget_ci *budget_ci = from_tasklet(budget_ci, t,
>> - ciintf_irq_tasklet);
>> + struct budget_ci *budget_ci = from_work(budget_ci, t,
>> + ciintf_irq_work);
>> struct saa7146_dev *saa = budget_ci->budget.dev;
>> unsigned int flags;
>>
>> @@ -492,7 +493,7 @@ static int ciintf_init(struct budget_ci *budget_ci)
>>
>> // Setup CI slot IRQ
>> if (budget_ci->ci_irq) {
>> - tasklet_setup(&budget_ci->ciintf_irq_tasklet, ciintf_interrupt);
>> + INIT_WORK(&budget_ci->ciintf_irq_work, ciintf_interrupt);
>> if (budget_ci->slot_status != SLOTSTATUS_NONE) {
>> saa7146_setgpio(saa, 0, SAA7146_GPIO_IRQLO);
>> } else {
>> @@ -532,7 +533,7 @@ static void ciintf_deinit(struct budget_ci *budget_ci)
>> if (budget_ci->ci_irq) {
>> SAA7146_IER_DISABLE(saa, MASK_03);
>> saa7146_setgpio(saa, 0, SAA7146_GPIO_INPUT);
>> - tasklet_kill(&budget_ci->ciintf_irq_tasklet);
>> + cancel_work_sync(&budget_ci->ciintf_irq_work);
>> }
>>
>> // reset interface
>> @@ -558,13 +559,13 @@ static void budget_ci_irq(struct saa7146_dev *dev, u32 * isr)
>> dprintk(8, "dev: %p, budget_ci: %p\n", dev, budget_ci);
>>
>> if (*isr & MASK_06)
>> - tasklet_schedule(&budget_ci->ir.msp430_irq_tasklet);
>> + queue_work(system_bh_wq, &budget_ci->ir.msp430_irq_work);
>>
>> if (*isr & MASK_10)
>> ttpci_budget_irq10_handler(dev, isr);
>>
>> if ((*isr & MASK_03) && (budget_ci->budget.ci_present) && (budget_ci->ci_irq))
>> - tasklet_schedule(&budget_ci->ciintf_irq_tasklet);
>> + queue_work(system_bh_wq, &budget_ci->ciintf_irq_work);
>> }
>>
>> static u8 philips_su1278_tt_inittab[] = {
>> diff --git a/drivers/media/pci/ttpci/budget-core.c b/drivers/media/pci/ttpci/budget-core.c
>> index 25f44c3eebf3..3443c12dc9f2 100644
>> --- a/drivers/media/pci/ttpci/budget-core.c
>> +++ b/drivers/media/pci/ttpci/budget-core.c
>> @@ -171,9 +171,9 @@ static int budget_read_fe_status(struct dvb_frontend *fe,
>> return ret;
>> }
>>
>> -static void vpeirq(struct tasklet_struct *t)
>> +static void vpeirq(struct work_struct *t)
>> {
>> - struct budget *budget = from_tasklet(budget, t, vpe_tasklet);
>> + struct budget *budget = from_work(budget, t, vpe_work);
>> u8 *mem = (u8 *) (budget->grabbing);
>> u32 olddma = budget->ttbp;
>> u32 newdma = saa7146_read(budget->dev, PCI_VDP3);
>> @@ -520,7 +520,7 @@ int ttpci_budget_init(struct budget *budget, struct saa7146_dev *dev,
>> /* upload all */
>> saa7146_write(dev, GPIO_CTRL, 0x000000);
>>
>> - tasklet_setup(&budget->vpe_tasklet, vpeirq);
>> + INIT_WORK(&budget->vpe_work, vpeirq);
>>
>> /* frontend power on */
>> if (bi->type != BUDGET_FS_ACTIVY)
>> @@ -557,7 +557,7 @@ int ttpci_budget_deinit(struct budget *budget)
>>
>> budget_unregister(budget);
>>
>> - tasklet_kill(&budget->vpe_tasklet);
>> + cancel_work_sync(&budget->vpe_work);
>>
>> saa7146_vfree_destroy_pgtable(dev->pci, budget->grabbing, &budget->pt);
>>
>> @@ -575,7 +575,7 @@ void ttpci_budget_irq10_handler(struct saa7146_dev *dev, u32 * isr)
>> dprintk(8, "dev: %p, budget: %p\n", dev, budget);
>>
>> if (*isr & MASK_10)
>> - tasklet_schedule(&budget->vpe_tasklet);
>> + queue_work(system_bh_wq, &budget->vpe_work);
>> }
>>
>> void ttpci_budget_set_video_port(struct saa7146_dev *dev, int video_port)
>> diff --git a/drivers/media/pci/ttpci/budget.h b/drivers/media/pci/ttpci/budget.h
>> index bd87432e6cde..a3ee75e326b4 100644
>> --- a/drivers/media/pci/ttpci/budget.h
>> +++ b/drivers/media/pci/ttpci/budget.h
>> @@ -12,6 +12,7 @@
>>
>> #include <linux/module.h>
>> #include <linux/mutex.h>
>> +#include <linux/workqueue.h>
>>
>> #include <media/drv-intf/saa7146.h>
>>
>> @@ -49,8 +50,8 @@ struct budget {
>> unsigned char *grabbing;
>> struct saa7146_pgtable pt;
>>
>> - struct tasklet_struct fidb_tasklet;
>> - struct tasklet_struct vpe_tasklet;
>> + struct work_struct fidb_work;
>> + struct work_struct vpe_work;
>>
>> struct dmxdev dmxdev;
>> struct dvb_demux demux;
>> diff --git a/drivers/media/pci/tw5864/tw5864-core.c b/drivers/media/pci/tw5864/tw5864-core.c
>> index 560ff1ddcc83..a58c268e94a8 100644
>> --- a/drivers/media/pci/tw5864/tw5864-core.c
>> +++ b/drivers/media/pci/tw5864/tw5864-core.c
>> @@ -144,7 +144,7 @@ static void tw5864_h264_isr(struct tw5864_dev *dev)
>> cur_frame->gop_seqno = input->frame_gop_seqno;
>>
>> dev->h264_buf_w_index = next_frame_index;
>> - tasklet_schedule(&dev->tasklet);
>> + queue_work(system_bh_wq, &dev->work);
>>
>> cur_frame = next_frame;
>>
>> diff --git a/drivers/media/pci/tw5864/tw5864-video.c b/drivers/media/pci/tw5864/tw5864-video.c
>> index 8b1aae4b6319..ac2249626506 100644
>> --- a/drivers/media/pci/tw5864/tw5864-video.c
>> +++ b/drivers/media/pci/tw5864/tw5864-video.c
>> @@ -6,6 +6,7 @@
>> */
>>
>> #include <linux/module.h>
>> +#include <linux/workqueue.h>
>> #include <media/v4l2-common.h>
>> #include <media/v4l2-event.h>
>> #include <media/videobuf2-dma-contig.h>
>> @@ -175,7 +176,7 @@ static const unsigned int intra4x4_lambda3[] = {
>> static v4l2_std_id tw5864_get_v4l2_std(enum tw5864_vid_std std);
>> static enum tw5864_vid_std tw5864_from_v4l2_std(v4l2_std_id v4l2_std);
>>
>> -static void tw5864_handle_frame_task(struct tasklet_struct *t);
>> +static void tw5864_handle_frame_task(struct work_struct *t);
>> static void tw5864_handle_frame(struct tw5864_h264_frame *frame);
>> static void tw5864_frame_interval_set(struct tw5864_input *input);
>>
>> @@ -1062,7 +1063,7 @@ int tw5864_video_init(struct tw5864_dev *dev, int *video_nr)
>> dev->irqmask |= TW5864_INTR_VLC_DONE | TW5864_INTR_TIMER;
>> tw5864_irqmask_apply(dev);
>>
>> - tasklet_setup(&dev->tasklet, tw5864_handle_frame_task);
>> + INIT_WORK(&dev->work, tw5864_handle_frame_task);
>>
>> for (i = 0; i < TW5864_INPUTS; i++) {
>> dev->inputs[i].root = dev;
>> @@ -1079,7 +1080,7 @@ int tw5864_video_init(struct tw5864_dev *dev, int *video_nr)
>> for (i = last_input_nr_registered; i >= 0; i--)
>> tw5864_video_input_fini(&dev->inputs[i]);
>>
>> - tasklet_kill(&dev->tasklet);
>> + cancel_work_sync(&dev->work);
>>
>> free_dma:
>> for (i = last_dma_allocated; i >= 0; i--) {
>> @@ -1198,7 +1199,7 @@ void tw5864_video_fini(struct tw5864_dev *dev)
>> {
>> int i;
>>
>> - tasklet_kill(&dev->tasklet);
>> + cancel_work_sync(&dev->work);
>>
>> for (i = 0; i < TW5864_INPUTS; i++)
>> tw5864_video_input_fini(&dev->inputs[i]);
>> @@ -1315,9 +1316,9 @@ static int tw5864_is_motion_triggered(struct tw5864_h264_frame *frame)
>> return detected;
>> }
>>
>> -static void tw5864_handle_frame_task(struct tasklet_struct *t)
>> +static void tw5864_handle_frame_task(struct work_struct *t)
>> {
>> - struct tw5864_dev *dev = from_tasklet(dev, t, tasklet);
>> + struct tw5864_dev *dev = from_work(dev, t, work);
>> unsigned long flags;
>> int batch_size = H264_BUF_CNT;
>>
>> diff --git a/drivers/media/pci/tw5864/tw5864.h b/drivers/media/pci/tw5864/tw5864.h
>> index a8b6fbd5b710..278373859098 100644
>> --- a/drivers/media/pci/tw5864/tw5864.h
>> +++ b/drivers/media/pci/tw5864/tw5864.h
>> @@ -12,6 +12,7 @@
>> #include <linux/mutex.h>
>> #include <linux/io.h>
>> #include <linux/interrupt.h>
>> +#include <linux/workqueue.h>
>>
>> #include <media/v4l2-common.h>
>> #include <media/v4l2-ioctl.h>
>> @@ -85,7 +86,7 @@ struct tw5864_input {
>> int nr; /* input number */
>> struct tw5864_dev *root;
>> struct mutex lock; /* used for vidq and vdev */
>> - spinlock_t slock; /* used for sync between ISR, tasklet & V4L2 API */
>> + spinlock_t slock; /* used for sync between ISR, work & V4L2 API */
>> struct video_device vdev;
>> struct v4l2_ctrl_handler hdl;
>> struct vb2_queue vidq;
>> @@ -142,7 +143,7 @@ struct tw5864_h264_frame {
>>
>> /* global device status */
>> struct tw5864_dev {
>> - spinlock_t slock; /* used for sync between ISR, tasklet & V4L2 API */
>> + spinlock_t slock; /* used for sync between ISR, work & V4L2 API */
>> struct v4l2_device v4l2_dev;
>> struct tw5864_input inputs[TW5864_INPUTS];
>> #define H264_BUF_CNT 4
>> @@ -150,7 +151,7 @@ struct tw5864_dev {
>> int h264_buf_r_index;
>> int h264_buf_w_index;
>>
>> - struct tasklet_struct tasklet;
>> + struct work_struct work;
>>
>> int encoder_busy;
>> /* Input number to check next for ready raw picture (in RR fashion) */
>> diff --git a/drivers/media/platform/intel/pxa_camera.c b/drivers/media/platform/intel/pxa_camera.c
>> index d904952bf00e..df0a3c559287 100644
>> --- a/drivers/media/platform/intel/pxa_camera.c
>> +++ b/drivers/media/platform/intel/pxa_camera.c
>> @@ -43,6 +43,7 @@
>> #include <linux/videodev2.h>
>>
>> #include <linux/platform_data/media/camera-pxa.h>
>> +#include <linux/workqueue.h>
>>
>> #define PXA_CAM_VERSION "0.0.6"
>> #define PXA_CAM_DRV_NAME "pxa27x-camera"
>> @@ -683,7 +684,7 @@ struct pxa_camera_dev {
>> unsigned int buf_sequence;
>>
>> struct pxa_buffer *active;
>> - struct tasklet_struct task_eof;
>> + struct work_struct task_eof;
>>
>> u32 save_cicr[5];
>> };
>> @@ -1146,9 +1147,9 @@ static void pxa_camera_deactivate(struct pxa_camera_dev *pcdev)
>> clk_disable_unprepare(pcdev->clk);
>> }
>>
>> -static void pxa_camera_eof(struct tasklet_struct *t)
>> +static void pxa_camera_eof(struct work_struct *t)
>> {
>> - struct pxa_camera_dev *pcdev = from_tasklet(pcdev, t, task_eof);
>> + struct pxa_camera_dev *pcdev = from_work(pcdev, t, task_eof);
>> unsigned long cifr;
>> struct pxa_buffer *buf;
>>
>> @@ -1185,7 +1186,7 @@ static irqreturn_t pxa_camera_irq(int irq, void *data)
>> if (status & CISR_EOF) {
>> cicr0 = __raw_readl(pcdev->base + CICR0) | CICR0_EOFM;
>> __raw_writel(cicr0, pcdev->base + CICR0);
>> - tasklet_schedule(&pcdev->task_eof);
>> + queue_work(system_bh_wq, &pcdev->task_eof);
>> }
>>
>> return IRQ_HANDLED;
>> @@ -2383,7 +2384,7 @@ static int pxa_camera_probe(struct platform_device *pdev)
>> }
>> }
>>
>> - tasklet_setup(&pcdev->task_eof, pxa_camera_eof);
>> + INIT_WORK(&pcdev->task_eof, pxa_camera_eof);
>>
>> pxa_camera_activate(pcdev);
>>
>> @@ -2409,7 +2410,7 @@ static int pxa_camera_probe(struct platform_device *pdev)
>> return 0;
>> exit_deactivate:
>> pxa_camera_deactivate(pcdev);
>> - tasklet_kill(&pcdev->task_eof);
>> + cancel_work_sync(&pcdev->task_eof);
>> exit_free_dma:
>> dma_release_channel(pcdev->dma_chans[2]);
>> exit_free_dma_u:
>> @@ -2428,7 +2429,7 @@ static void pxa_camera_remove(struct platform_device *pdev)
>> struct pxa_camera_dev *pcdev = platform_get_drvdata(pdev);
>>
>> pxa_camera_deactivate(pcdev);
>> - tasklet_kill(&pcdev->task_eof);
>> + cancel_work_sync(&pcdev->task_eof);
>> dma_release_channel(pcdev->dma_chans[0]);
>> dma_release_channel(pcdev->dma_chans[1]);
>> dma_release_channel(pcdev->dma_chans[2]);
>> diff --git a/drivers/media/platform/marvell/mcam-core.c b/drivers/media/platform/marvell/mcam-core.c
>> index 66688b4aece5..d6b96a7039be 100644
>> --- a/drivers/media/platform/marvell/mcam-core.c
>> +++ b/drivers/media/platform/marvell/mcam-core.c
>> @@ -25,6 +25,7 @@
>> #include <linux/clk-provider.h>
>> #include <linux/videodev2.h>
>> #include <linux/pm_runtime.h>
>> +#include <linux/workqueue.h>
>> #include <media/v4l2-device.h>
>> #include <media/v4l2-ioctl.h>
>> #include <media/v4l2-ctrls.h>
>> @@ -439,9 +440,9 @@ static void mcam_ctlr_dma_vmalloc(struct mcam_camera *cam)
>> /*
>> * Copy data out to user space in the vmalloc case
>> */
>> -static void mcam_frame_tasklet(struct tasklet_struct *t)
>> +static void mcam_frame_work(struct work_struct *t)
>> {
>> - struct mcam_camera *cam = from_tasklet(cam, t, s_tasklet);
>> + struct mcam_camera *cam = from_work(cam, t, s_work);
>> int i;
>> unsigned long flags;
>> struct mcam_vb_buffer *buf;
>> @@ -480,7 +481,7 @@ static void mcam_frame_tasklet(struct tasklet_struct *t)
>>
>>
>> /*
>> - * Make sure our allocated buffers are up to the task.
>> + * Make sure our allocated buffers are up to the work.
>> */
>> static int mcam_check_dma_buffers(struct mcam_camera *cam)
>> {
>> @@ -493,7 +494,7 @@ static int mcam_check_dma_buffers(struct mcam_camera *cam)
>>
>> static void mcam_vmalloc_done(struct mcam_camera *cam, int frame)
>> {
>> - tasklet_schedule(&cam->s_tasklet);
>> + queue_work(system_bh_wq, &cam->s_work);
>> }
>>
>> #else /* MCAM_MODE_VMALLOC */
>> @@ -1305,7 +1306,7 @@ static int mcam_setup_vb2(struct mcam_camera *cam)
>> break;
>> case B_vmalloc:
>> #ifdef MCAM_MODE_VMALLOC
>> - tasklet_setup(&cam->s_tasklet, mcam_frame_tasklet);
>> + INIT_WORK(&cam->s_work, mcam_frame_work);
>> vq->ops = &mcam_vb2_ops;
>> vq->mem_ops = &vb2_vmalloc_memops;
>> cam->dma_setup = mcam_ctlr_dma_vmalloc;
>> diff --git a/drivers/media/platform/marvell/mcam-core.h b/drivers/media/platform/marvell/mcam-core.h
>> index 51e66db45af6..0d4b953dbb23 100644
>> --- a/drivers/media/platform/marvell/mcam-core.h
>> +++ b/drivers/media/platform/marvell/mcam-core.h
>> @@ -9,6 +9,7 @@
>>
>> #include <linux/list.h>
>> #include <linux/clk-provider.h>
>> +#include <linux/workqueue.h>
>> #include <media/v4l2-common.h>
>> #include <media/v4l2-ctrls.h>
>> #include <media/v4l2-dev.h>
>> @@ -167,7 +168,7 @@ struct mcam_camera {
>> unsigned int dma_buf_size; /* allocated size */
>> void *dma_bufs[MAX_DMA_BUFS]; /* Internal buffer addresses */
>> dma_addr_t dma_handles[MAX_DMA_BUFS]; /* Buffer bus addresses */
>> - struct tasklet_struct s_tasklet;
>> + struct work_struct s_work;
>> #endif
>> unsigned int sequence; /* Frame sequence number */
>> unsigned int buf_seq[MAX_DMA_BUFS]; /* Sequence for individual bufs */
>> diff --git a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
>> index e4cf27b5a072..22b359569a10 100644
>> --- a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
>> +++ b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
>> @@ -33,6 +33,7 @@
>> #include <linux/time.h>
>> #include <linux/usb.h>
>> #include <linux/wait.h>
>> +#include <linux/workqueue.h>
>>
>> #include "c8sectpfe-common.h"
>> #include "c8sectpfe-core.h"
>> @@ -73,16 +74,16 @@ static void c8sectpfe_timer_interrupt(struct timer_list *t)
>>
>> /* is this descriptor initialised and TP enabled */
>> if (channel->irec && readl(channel->irec + DMA_PRDS_TPENABLE))
>> - tasklet_schedule(&channel->tsklet);
>> + queue_work(system_bh_wq, &channel->tsklet);
>> }
>>
>> fei->timer.expires = jiffies + msecs_to_jiffies(POLL_MSECS);
>> add_timer(&fei->timer);
>> }
>>
>> -static void channel_swdemux_tsklet(struct tasklet_struct *t)
>> +static void channel_swdemux_tsklet(struct work_struct *t)
>> {
>> - struct channel_info *channel = from_tasklet(channel, t, tsklet);
>> + struct channel_info *channel = from_work(channel, t, tsklet);
>> struct c8sectpfei *fei;
>> unsigned long wp, rp;
>> int pos, num_packets, n, size;
>> @@ -211,7 +212,7 @@ static int c8sectpfe_start_feed(struct dvb_demux_feed *dvbdmxfeed)
>>
>> dev_dbg(fei->dev, "Starting channel=%p\n", channel);
>>
>> - tasklet_setup(&channel->tsklet, channel_swdemux_tsklet);
>> + INIT_WORK(&channel->tsklet, channel_swdemux_tsklet);
>>
>> /* Reset the internal inputblock sram pointers */
>> writel(channel->fifo,
>> @@ -304,7 +305,7 @@ static int c8sectpfe_stop_feed(struct dvb_demux_feed *dvbdmxfeed)
>> /* disable this channels descriptor */
>> writel(0, channel->irec + DMA_PRDS_TPENABLE);
>>
>> - tasklet_disable(&channel->tsklet);
>> + disable_work_sync(&channel->tsklet);
>>
>> /* now request memdma channel goes idle */
>> idlereq = (1 << channel->tsin_id) | IDLEREQ;
>> @@ -631,8 +632,8 @@ static int configure_memdma_and_inputblock(struct c8sectpfei *fei,
>> writel(tsin->back_buffer_busaddr, tsin->irec + DMA_PRDS_BUSWP_TP(0));
>> writel(tsin->back_buffer_busaddr, tsin->irec + DMA_PRDS_BUSRP_TP(0));
>>
>> - /* initialize tasklet */
>> - tasklet_setup(&tsin->tsklet, channel_swdemux_tsklet);
>> + /* initialize work */
>> + INIT_WORK(&tsin->tsklet, channel_swdemux_tsklet);
>>
>> return 0;
>>
>> diff --git a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.h b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.h
>> index bf377cc82225..d63f0ee83615 100644
>> --- a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.h
>> +++ b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.h
>> @@ -51,7 +51,7 @@ struct channel_info {
>> unsigned long fifo;
>>
>> struct completion idle_completion;
>> - struct tasklet_struct tsklet;
>> + struct work_struct tsklet;
>>
>> struct c8sectpfei *fei;
>> void __iomem *irec;
>> diff --git a/drivers/media/radio/wl128x/fmdrv.h b/drivers/media/radio/wl128x/fmdrv.h
>> index da8920169df8..85282f638c4a 100644
>> --- a/drivers/media/radio/wl128x/fmdrv.h
>> +++ b/drivers/media/radio/wl128x/fmdrv.h
>> @@ -15,6 +15,7 @@
>> #include <sound/core.h>
>> #include <sound/initval.h>
>> #include <linux/timer.h>
>> +#include <linux/workqueue.h>
>> #include <media/v4l2-ioctl.h>
>> #include <media/v4l2-common.h>
>> #include <media/v4l2-device.h>
>> @@ -200,15 +201,15 @@ struct fmdev {
>> int streg_cbdata; /* status of ST registration */
>>
>> struct sk_buff_head rx_q; /* RX queue */
>> - struct tasklet_struct rx_task; /* RX Tasklet */
>> + struct work_struct rx_task; /* RX Work */
>>
>> struct sk_buff_head tx_q; /* TX queue */
>> - struct tasklet_struct tx_task; /* TX Tasklet */
>> + struct work_struct tx_task; /* TX Work */
>> unsigned long last_tx_jiffies; /* Timestamp of last pkt sent */
>> atomic_t tx_cnt; /* Number of packets can send at a time */
>>
>> struct sk_buff *resp_skb; /* Response from the chip */
>> - /* Main task completion handler */
>> + /* Main work completion handler */
>> struct completion maintask_comp;
>> /* Opcode of last command sent to the chip */
>> u8 pre_op;
>> diff --git a/drivers/media/radio/wl128x/fmdrv_common.c b/drivers/media/radio/wl128x/fmdrv_common.c
>> index 3da8e5102bec..52290bb4a4ad 100644
>> --- a/drivers/media/radio/wl128x/fmdrv_common.c
>> +++ b/drivers/media/radio/wl128x/fmdrv_common.c
>> @@ -9,7 +9,7 @@
>> * one Channel-8 command to be sent to the chip).
>> * 2) Sending each Channel-8 command to the chip and reading
>> * response back over Shared Transport.
>> - * 3) Managing TX and RX Queues and Tasklets.
>> + * 3) Managing TX and RX Queues and Works.
>> * 4) Handling FM Interrupt packet and taking appropriate action.
>> * 5) Loading FM firmware to the chip (common, FM TX, and FM RX
>> * firmware files based on mode selection)
>> @@ -29,6 +29,7 @@
>> #include "fmdrv_v4l2.h"
>> #include "fmdrv_common.h"
>> #include <linux/ti_wilink_st.h>
>> +#include <linux/workqueue.h>
>> #include "fmdrv_rx.h"
>> #include "fmdrv_tx.h"
>>
>> @@ -244,10 +245,10 @@ void fmc_update_region_info(struct fmdev *fmdev, u8 region_to_set)
>> }
>>
>> /*
>> - * FM common sub-module will schedule this tasklet whenever it receives
>> + * FM common sub-module will schedule this work whenever it receives
>> * FM packet from ST driver.
>> */
>> -static void recv_tasklet(struct tasklet_struct *t)
>> +static void recv_work(struct work_struct *t)
>> {
>> struct fmdev *fmdev;
>> struct fm_irq *irq_info;
>> @@ -256,7 +257,7 @@ static void recv_tasklet(struct tasklet_struct *t)
>> u8 num_fm_hci_cmds;
>> unsigned long flags;
>>
>> - fmdev = from_tasklet(fmdev, t, tx_task);
>> + fmdev = from_work(fmdev, t, tx_task);
>> irq_info = &fmdev->irq_info;
>> /* Process all packets in the RX queue */
>> while ((skb = skb_dequeue(&fmdev->rx_q))) {
>> @@ -322,22 +323,22 @@ static void recv_tasklet(struct tasklet_struct *t)
>>
>> /*
>> * Check flow control field. If Num_FM_HCI_Commands field is
>> - * not zero, schedule FM TX tasklet.
>> + * not zero, schedule FM TX work.
>> */
>> if (num_fm_hci_cmds && atomic_read(&fmdev->tx_cnt))
>> if (!skb_queue_empty(&fmdev->tx_q))
>> - tasklet_schedule(&fmdev->tx_task);
>> + queue_work(system_bh_wq, &fmdev->tx_task);
>> }
>> }
>>
>> -/* FM send tasklet: is scheduled when FM packet has to be sent to chip */
>> -static void send_tasklet(struct tasklet_struct *t)
>> +/* FM send work: is scheduled when FM packet has to be sent to chip */
>> +static void send_work(struct work_struct *t)
>> {
>> struct fmdev *fmdev;
>> struct sk_buff *skb;
>> int len;
>>
>> - fmdev = from_tasklet(fmdev, t, tx_task);
>> + fmdev = from_work(fmdev, t, tx_task);
>>
>> if (!atomic_read(&fmdev->tx_cnt))
>> return;
>> @@ -366,7 +367,7 @@ static void send_tasklet(struct tasklet_struct *t)
>> if (len < 0) {
>> kfree_skb(skb);
>> fmdev->resp_comp = NULL;
>> - fmerr("TX tasklet failed to send skb(%p)\n", skb);
>> + fmerr("TX work failed to send skb(%p)\n", skb);
>> atomic_set(&fmdev->tx_cnt, 1);
>> } else {
>> fmdev->last_tx_jiffies = jiffies;
>> @@ -374,7 +375,7 @@ static void send_tasklet(struct tasklet_struct *t)
>> }
>>
>> /*
>> - * Queues FM Channel-8 packet to FM TX queue and schedules FM TX tasklet for
>> + * Queues FM Channel-8 packet to FM TX queue and schedules FM TX work for
>> * transmission
>> */
>> static int fm_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,
>> @@ -440,7 +441,7 @@ static int fm_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,
>>
>> fm_cb(skb)->completion = wait_completion;
>> skb_queue_tail(&fmdev->tx_q, skb);
>> - tasklet_schedule(&fmdev->tx_task);
>> + queue_work(system_bh_wq, &fmdev->tx_task);
>>
>> return 0;
>> }
>> @@ -462,7 +463,7 @@ int fmc_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,
>>
>> if (!wait_for_completion_timeout(&fmdev->maintask_comp,
>> FM_DRV_TX_TIMEOUT)) {
>> - fmerr("Timeout(%d sec),didn't get regcompletion signal from RX tasklet\n",
>> + fmerr("Timeout(%d sec),didn't get regcompletion signal from RX work\n",
>> jiffies_to_msecs(FM_DRV_TX_TIMEOUT) / 1000);
>> return -ETIMEDOUT;
>> }
>> @@ -1455,7 +1456,7 @@ static long fm_st_receive(void *arg, struct sk_buff *skb)
>>
>> memcpy(skb_push(skb, 1), &skb->cb[0], 1);
>> skb_queue_tail(&fmdev->rx_q, skb);
>> - tasklet_schedule(&fmdev->rx_task);
>> + queue_work(system_bh_wq, &fmdev->rx_task);
>>
>> return 0;
>> }
>> @@ -1537,13 +1538,13 @@ int fmc_prepare(struct fmdev *fmdev)
>> spin_lock_init(&fmdev->rds_buff_lock);
>> spin_lock_init(&fmdev->resp_skb_lock);
>>
>> - /* Initialize TX queue and TX tasklet */
>> + /* Initialize TX queue and TX work */
>> skb_queue_head_init(&fmdev->tx_q);
>> - tasklet_setup(&fmdev->tx_task, send_tasklet);
>> + INIT_WORK(&fmdev->tx_task, send_work);
>>
>> - /* Initialize RX Queue and RX tasklet */
>> + /* Initialize RX Queue and RX work */
>> skb_queue_head_init(&fmdev->rx_q);
>> - tasklet_setup(&fmdev->rx_task, recv_tasklet);
>> + INIT_WORK(&fmdev->rx_task, recv_work);
>>
>> fmdev->irq_info.stage = 0;
>> atomic_set(&fmdev->tx_cnt, 1);
>> @@ -1589,8 +1590,8 @@ int fmc_release(struct fmdev *fmdev)
>> /* Service pending read */
>> wake_up_interruptible(&fmdev->rx.rds.read_queue);
>>
>> - tasklet_kill(&fmdev->tx_task);
>> - tasklet_kill(&fmdev->rx_task);
>> + cancel_work_sync(&fmdev->tx_task);
>> + cancel_work_sync(&fmdev->rx_task);
>>
>> skb_queue_purge(&fmdev->tx_q);
>> skb_queue_purge(&fmdev->rx_q);
>> diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
>> index c76ba24c1f55..a2e2e58b7506 100644
>> --- a/drivers/media/rc/mceusb.c
>> +++ b/drivers/media/rc/mceusb.c
>> @@ -774,7 +774,7 @@ static void mceusb_dev_printdata(struct mceusb_dev *ir, u8 *buf, int buf_len,
>>
>> /*
>> * Schedule work that can't be done in interrupt handlers
>> - * (mceusb_dev_recv() and mce_write_callback()) nor tasklets.
>> + * (mceusb_dev_recv() and mce_write_callback()) nor works.
>> * Invokes mceusb_deferred_kevent() for recovering from
>> * error events specified by the kevent bit field.
>> */
>> diff --git a/drivers/media/usb/ttusb-dec/ttusb_dec.c b/drivers/media/usb/ttusb-dec/ttusb_dec.c
>> index 79faa2560613..55eeb00f1126 100644
>> --- a/drivers/media/usb/ttusb-dec/ttusb_dec.c
>> +++ b/drivers/media/usb/ttusb-dec/ttusb_dec.c
>> @@ -19,6 +19,7 @@
>> #include <linux/input.h>
>>
>> #include <linux/mutex.h>
>> +#include <linux/workqueue.h>
>>
>> #include <media/dmxdev.h>
>> #include <media/dvb_demux.h>
>> @@ -139,7 +140,7 @@ struct ttusb_dec {
>> int v_pes_postbytes;
>>
>> struct list_head urb_frame_list;
>> - struct tasklet_struct urb_tasklet;
>> + struct work_struct urb_work;
>> spinlock_t urb_frame_list_lock;
>>
>> struct dvb_demux_filter *audio_filter;
>> @@ -766,9 +767,9 @@ static void ttusb_dec_process_urb_frame(struct ttusb_dec *dec, u8 *b,
>> }
>> }
>>
>> -static void ttusb_dec_process_urb_frame_list(struct tasklet_struct *t)
>> +static void ttusb_dec_process_urb_frame_list(struct work_struct *t)
>> {
>> - struct ttusb_dec *dec = from_tasklet(dec, t, urb_tasklet);
>> + struct ttusb_dec *dec = from_work(dec, t, urb_work);
>> struct list_head *item;
>> struct urb_frame *frame;
>> unsigned long flags;
>> @@ -822,7 +823,7 @@ static void ttusb_dec_process_urb(struct urb *urb)
>> spin_unlock_irqrestore(&dec->urb_frame_list_lock,
>> flags);
>>
>> - tasklet_schedule(&dec->urb_tasklet);
>> + queue_work(system_bh_wq, &dec->urb_work);
>> }
>> }
>> } else {
>> @@ -1198,11 +1199,11 @@ static int ttusb_dec_alloc_iso_urbs(struct ttusb_dec *dec)
>> return 0;
>> }
>>
>> -static void ttusb_dec_init_tasklet(struct ttusb_dec *dec)
>> +static void ttusb_dec_init_work(struct ttusb_dec *dec)
>> {
>> spin_lock_init(&dec->urb_frame_list_lock);
>> INIT_LIST_HEAD(&dec->urb_frame_list);
>> - tasklet_setup(&dec->urb_tasklet, ttusb_dec_process_urb_frame_list);
>> + INIT_WORK(&dec->urb_work, ttusb_dec_process_urb_frame_list);
>> }
>>
>> static int ttusb_init_rc( struct ttusb_dec *dec)
>> @@ -1588,12 +1589,12 @@ static void ttusb_dec_exit_usb(struct ttusb_dec *dec)
>> ttusb_dec_free_iso_urbs(dec);
>> }
>>
>> -static void ttusb_dec_exit_tasklet(struct ttusb_dec *dec)
>> +static void ttusb_dec_exit_work(struct ttusb_dec *dec)
>> {
>> struct list_head *item;
>> struct urb_frame *frame;
>>
>> - tasklet_kill(&dec->urb_tasklet);
>> + cancel_work_sync(&dec->urb_work);
>>
>> while ((item = dec->urb_frame_list.next) != &dec->urb_frame_list) {
>> frame = list_entry(item, struct urb_frame, urb_frame_list);
>> @@ -1703,7 +1704,7 @@ static int ttusb_dec_probe(struct usb_interface *intf,
>>
>> ttusb_dec_init_v_pes(dec);
>> ttusb_dec_init_filters(dec);
>> - ttusb_dec_init_tasklet(dec);
>> + ttusb_dec_init_work(dec);
>>
>> dec->active = 1;
>>
>> @@ -1729,7 +1730,7 @@ static void ttusb_dec_disconnect(struct usb_interface *intf)
>> dprintk("%s\n", __func__);
>>
>> if (dec->active) {
>> - ttusb_dec_exit_tasklet(dec);
>> + ttusb_dec_exit_work(dec);
>> ttusb_dec_exit_filters(dec);
>> if(enable_rc)
>> ttusb_dec_exit_rc(dec);
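
A minimal sketch of the tasklet-to-BH-workqueue conversion pattern the patch above
applies, for reference. The device structure and function names below are hypothetical;
from_work(), system_bh_wq, disable_work_sync() and enable_and_queue_work() are the
helpers provided by the wq.git for-6.10 branch the series is based on.

#include <linux/interrupt.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct my_dev {
	spinlock_t		lock;
	struct work_struct	work;	/* was: struct tasklet_struct tasklet */
};

/* was: static void my_dev_task(struct tasklet_struct *t) */
static void my_dev_work_fn(struct work_struct *t)
{
	/* was: from_tasklet(dev, t, tasklet) */
	struct my_dev *dev = from_work(dev, t, work);

	/* deferred processing still runs in BH (softirq) context */
}

static irqreturn_t my_dev_irq(int irq, void *data)
{
	struct my_dev *dev = data;

	/* was: tasklet_schedule(&dev->tasklet) */
	queue_work(system_bh_wq, &dev->work);
	return IRQ_HANDLED;
}

static void my_dev_init(struct my_dev *dev)
{
	spin_lock_init(&dev->lock);
	/* was: tasklet_setup(&dev->tasklet, my_dev_task) */
	INIT_WORK(&dev->work, my_dev_work_fn);
}

static void my_dev_start(struct my_dev *dev)
{
	/* was: tasklet_enable(&dev->tasklet) */
	enable_and_queue_work(system_bh_wq, &dev->work);
}

static void my_dev_stop(struct my_dev *dev)
{
	/* was: tasklet_disable(&dev->tasklet) */
	disable_work_sync(&dev->work);
}

static void my_dev_teardown(struct my_dev *dev)
{
	/* was: tasklet_kill(&dev->tasklet) */
	cancel_work_sync(&dev->work);
}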


2024-06-03 12:39:46

by Aubin Constans

[permalink] [raw]
Subject: Re: [PATCH 9/9] mmc: Convert from tasklet to BH workqueue

On 27/03/2024 17:03, Allen Pais wrote:
>
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws. To
> replace tasklets, BH workqueue support was recently added. A BH workqueue
> behaves similarly to regular workqueues except that the queued work items
> are executed in the BH context.
>
> This patch converts drivers/mmc/* from tasklet to BH workqueue.
>
> Based on the work done by Tejun Heo <[email protected]>
> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>
> Signed-off-by: Allen Pais <[email protected]>
> ---
> drivers/mmc/host/atmel-mci.c | 35 ++++-----
[...]

For atmel-mci, judging from a few simple tests, performance is preserved.
E.g. writing to an SD card on the SAMA5D3-Xplained board:
time dd if=/dev/zero of=/opt/_del_me bs=4k count=64k

Base 6.9.0 : 0.07user 5.05system 0:18.92elapsed 27%CPU
Patched 6.9.0+: 0.12user 4.92system 0:18.76elapsed 26%CPU

However, please resolve what checkpatch is complaining about:
scripts/checkpatch.pl --strict
PATCH-9-9-mmc-Convert-from-tasklet-to-BH-workqueue.mbox

WARNING: please, no space before tabs
#72: FILE: drivers/mmc/host/atmel-mci.c:367:
+^Istruct work_struct ^Iwork;$
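
For reference, the warning points at a stray space before the tab that separates the
type from the member name (the "^I" in the checkpatch output is a tab character); a
fix along these lines, reconstructed from that output, would silence it:

	struct work_struct 	work;	/* flagged: space, then tab, before "work" */
	struct work_struct	work;	/* fixed: tab(s) only between type and member */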

As in the discussions on the USB patch[1] and others in this series, I am
also in favour of "workqueue" or similar in the comments, rather than
just "work".

Apart from that:
Tested-by: Aubin Constans <[email protected]>
Acked-by: Aubin Constans <[email protected]>

Thanks.

[1]:
https://lore.kernel.org/linux-mmc/CAOMdWSLipPfm3OZTpjZz4uF4M+E_8QAoTeMcKBXawLnkTQx6Jg@mail.gmail.com/

2024-06-03 17:35:21

by Allen Pais

[permalink] [raw]
Subject: Re: [PATCH 9/9] mmc: Convert from tasklet to BH workqueue



> On Jun 3, 2024, at 5:38 AM, Aubin Constans <[email protected]> wrote:
>
> On 27/03/2024 17:03, Allen Pais wrote:
>> The only generic interface to execute asynchronously in the BH context is
>> tasklet; however, it's marked deprecated and has some design flaws. To
>> replace tasklets, BH workqueue support was recently added. A BH workqueue
>> behaves similarly to regular workqueues except that the queued work items
>> are executed in the BH context.
>> This patch converts drivers/mmc/* from tasklet to BH workqueue.
>> Based on the work done by Tejun Heo <[email protected]>
>> Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-6.10
>> Signed-off-by: Allen Pais <[email protected]>
>> ---
>> drivers/mmc/host/atmel-mci.c | 35 ++++-----
> [...]
>
> For atmel-mci, judging from a few simple tests, performance is preserved.
> E.g. writing to an SD card on the SAMA5D3-Xplained board:
> time dd if=/dev/zero of=/opt/_del_me bs=4k count=64k
>
> Base 6.9.0 : 0.07user 5.05system 0:18.92elapsed 27%CPU
> Patched 6.9.0+: 0.12user 4.92system 0:18.76elapsed 26%CPU
>
> However, please resolve what checkpatch is complaining about:
> scripts/checkpatch.pl --strict PATCH-9-9-mmc-Convert-from-tasklet-to-BH-workqueue.mbox
>
> WARNING: please, no space before tabs
> #72: FILE: drivers/mmc/host/atmel-mci.c:367:
> +^Istruct work_struct ^Iwork;$
>
> As in the discussions on the USB patch[1] and others in this series, I am also in favour of "workqueue" or similar in the comments, rather than just "work".

Will send out a new version.

Thank you very much for testing and providing your review.

- Allen

>
> Apart from that:
> Tested-by: Aubin Constans <[email protected]>
> Acked-by: Aubin Constans <[email protected]>
>
> Thanks.
>
> [1]: https://lore.kernel.org/linux-mmc/CAOMdWSLipPfm3OZTpjZz4uF4M+E_8QAoTeMcKBXawLnkTQx6Jg@mail.gmail.com/