There have been complaints that some softirq vectors consume a lot of
CPU at the expense of the latency of others. A few solutions have been
proposed, mostly working around a fundamental design point of softirqs
in Linux: a vector cannot interrupt another while softirqs are executing.
Also, disabling softirqs is an all-or-nothing toggle; it is not possible
to disable a single vector. Therefore a section of code that must not
be interrupted by a given vector has to disable and delay all of them,
even those unrelated to the current critical code. That in turn may
induce latencies on workloads that rely on deterministic deadlines.

Following suggestions from the -rt team, this patchset proposes to solve
this with fine-grained per-vector softirq disablement.
Functions such as local_bh_disable() or spin_lock_bh() must now be
passed a mask of vectors to disable. They return the mask of the
vectors' enabled state prior to the call; that saved state is then
passed to local_bh_enable()/spin_unlock_bh() to be restored. I.e. it
follows the same logic as local_irq_save()/local_irq_restore():
    // Start with local_bh_disabled() == SOFTIRQ_ALL_MASK
    ...
    bh = local_bh_disable(BIT(NET_RX_SOFTIRQ)) {
        bh = local_bh_disabled();
        local_bh_disabled() &= ~BIT(NET_RX_SOFTIRQ);
        // First vector disabled, inc preempt count
        preempt_count += SOFTIRQ_DISABLE_OFFSET;
        return bh;
    }
    ....
    bh2 = local_bh_disable(BIT(BLOCK_SOFTIRQ)) {
        bh2 = local_bh_disabled();
        local_bh_disabled() &= ~BIT(BLOCK_SOFTIRQ);
        // No need to inc preempt count
        return bh2;
    }
    ...
    local_bh_enable(bh2) {
        local_bh_disabled() = bh2;
        // No need to dec preempt count
    }
    ...
    local_bh_enable(bh) {
        local_bh_disabled() = bh;
        preempt_count -= SOFTIRQ_DISABLE_OFFSET;
    }
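For illustration, the save/restore semantics above can be mimicked in
plain userspace C. All names and values below (bh_enabled, the mask
width, etc.) are invented for the sketch; this is not the kernel's
actual implementation:

```c
/*
 * Userspace model of the save/restore semantics above.
 * Names and constants are illustrative, not the kernel's internals.
 */
#define BIT(n)                  (1u << (n))
#define SOFTIRQ_ALL_MASK        0x3ffu
#define SOFTIRQ_DISABLE_OFFSET  0x100

enum { HI_SOFTIRQ, TIMER_SOFTIRQ, NET_TX_SOFTIRQ, NET_RX_SOFTIRQ,
       BLOCK_SOFTIRQ };

static unsigned int bh_enabled = SOFTIRQ_ALL_MASK; /* local_bh_disabled() stand-in */
static int preempt_count;

static unsigned int local_bh_disable(unsigned int mask)
{
	unsigned int old = bh_enabled;

	bh_enabled &= ~mask;
	/* First vector disabled: bump the preempt count once */
	if (old == SOFTIRQ_ALL_MASK)
		preempt_count += SOFTIRQ_DISABLE_OFFSET;
	return old;
}

static void local_bh_enable(unsigned int old)
{
	bh_enabled = old;
	/* Outermost section exits: drop the preempt count once */
	if (old == SOFTIRQ_ALL_MASK)
		preempt_count -= SOFTIRQ_DISABLE_OFFSET;
}
```

Note how nesting comes for free: only the outermost disable/enable pair
touches the preempt count, the inner ones merely juggle the mask.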
Similarly, softirq processing is now re-entrant: a vector can interrupt
another, though of course a vector cannot interrupt itself.
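A toy userspace model of that re-entrancy rule: while a vector runs,
only its own bit is masked, so another vector may interrupt it but the
same vector cannot nest. Everything here (serve_softirq, the vector
numbers) is made up for the sketch, not taken from the patches:

```c
/*
 * Toy model: serving a vector masks only that vector's own bit,
 * so other vectors may interrupt it, but it cannot nest in itself.
 */
enum { TIMER_SOFTIRQ = 1, NET_RX_SOFTIRQ = 3, NR_VECS = 8 };

static unsigned int enabled_vecs = (1u << NR_VECS) - 1;
static int runs[NR_VECS];

/* Serve @vec; optionally fire @nested_vec mid-handler. Returns 1 if it ran. */
static int serve_softirq(int vec, int nested_vec)
{
	if (!(enabled_vecs & (1u << vec)))
		return 0;			/* masked: defer, don't nest */
	enabled_vecs &= ~(1u << vec);		/* mask only the served vector */
	runs[vec]++;
	if (nested_vec >= 0)
		serve_softirq(nested_vec, -1);	/* interrupt mid-handler */
	enabled_vecs |= 1u << vec;		/* re-enable on exit */
	return 1;
}
```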
Although the diffstat is huge, some of the patches have been truncated
to fit on lkml. And I haven't yet converted every call site; there are
still a few that I need to flip. At least it's enough for my config to
boot and be happy. Also I may need to teach lockdep about the new
situation.
Other than that, it works pretty well on my box: softirqs nest like a
charm (except for NET_RX and TASKLET, as you may find out in the last
patch):
<idle>-0 [000] ..s2 119.907085: __do_softirq: run_rebalance_domains
<idle>-0 [000] ..s2 119.907090: <stack trace>
=> __do_softirq
=> irq_exit
=> scheduler_ipi
=> smp_reschedule_interrupt
=> reschedule_interrupt
=> _raw_spin_unlock_irq
=> run_timer_softirq
=> __do_softirq
=> irq_exit
=> smp_apic_timer_interrupt
=> apic_timer_interrupt
=> cpuidle_enter_state
=> cpuidle_enter
=> call_cpuidle
=> do_idle
So that's enough to start a debate.
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
irq/softirq-experimental
HEAD: 84e064f678eb06d0da3e97f04eced4cfb55866ba
Thanks,
Frederic
---
Frederic Weisbecker (30):
x86: Revert "x86/irq: Demote irq_cpustat_t::__softirq_pending to u16"
arch/softirq: Rename softirq_pending fields to softirq_data
softirq: Implement local_softirq_pending() below softirq vector definition
softirq: Normalize softirq_pending naming scheme
softirq: Convert softirq_pending_set() to softirq_pending_nand()
softirq: Introduce disabled softirq vectors bits
softirq: Rename _local_bh_enable() to local_bh_enable_no_softirq()
softirq: Move vectors bits to bottom_half.h
x86: Init softirq enabled field
softirq: Check enabled bits on the softirq loop
net: Prepare netif_tx_lock_bh/netif_tx_unlock_bh() for handling softirq mask
rcu: Prepare rcu_read_[un]lock_bh() for handling softirq mask
net: Prepare tcp_get_md5sig_pool() for handling softirq mask
softirq: Introduce local_bh_disable_all()
net: Prepare [un]lock_sock_fast() for handling softirq mask
net: Prepare nf_log_buf_open() for handling softirq mask
isdn: Prepare isdn_net_get_locked_lp() for handling softirq mask
softirq: Prepare local_bh_disable() for handling softirq mask
diva: Prepare diva_os_enter_spin_lock() for handling softirq mask
tg3: Prepare tg3_full_[un]lock() for handling softirq mask
locking: Prepare spin_lock_bh() for handling softirq mask
seqlock: Prepare write_seq[un]lock_bh() for handling softirq mask
rwlock: Prepare write_[un]lock_bh() for handling softirq mask
softirq: Introduce local_bh_enter/exit()
softirq: Push down softirq mask to __local_bh_disable_ip()
softirq: Increment the softirq offset on top of enabled bits
softirq: Swap softirq serving VS disable on preempt mask layout
softirq: Disable vector on execution
softirq: Make softirq processing softinterruptible
softirq: Tasklet/net-rx fixup
arch/arm/include/asm/hardirq.h | 2 +-
arch/arm64/include/asm/hardirq.h | 2 +-
arch/arm64/kernel/fpsimd.c | 37 +--
arch/h8300/kernel/asm-offsets.c | 2 +-
arch/ia64/include/asm/hardirq.h | 2 +-
arch/ia64/include/asm/processor.h | 2 +-
arch/m68k/include/asm/hardirq.h | 2 +-
arch/m68k/kernel/asm-offsets.c | 2 +-
arch/parisc/include/asm/hardirq.h | 2 +-
arch/powerpc/include/asm/hardirq.h | 2 +-
arch/s390/include/asm/hardirq.h | 11 +-
arch/s390/lib/delay.c | 5 +-
arch/s390/mm/pgalloc.c | 24 +-
arch/sh/include/asm/hardirq.h | 2 +-
arch/sparc/include/asm/cpudata_64.h | 2 +-
arch/sparc/include/asm/hardirq_64.h | 4 +-
arch/um/include/asm/hardirq.h | 2 +-
arch/x86/crypto/sha1-mb/sha1_mb.c | 9 +-
arch/x86/crypto/sha256-mb/sha256_mb.c | 9 +-
arch/x86/crypto/sha512-mb/sha512_mb.c | 9 +-
arch/x86/include/asm/hardirq.h | 2 +-
arch/x86/kernel/irq.c | 5 +-
arch/xtensa/platforms/iss/console.c | 10 +-
arch/xtensa/platforms/iss/network.c | 28 +-
block/genhd.c | 15 +-
crypto/ansi_cprng.c | 10 +-
crypto/cryptd.c | 25 +-
crypto/mcryptd.c | 30 ++-
crypto/pcrypt.c | 5 +-
drivers/block/drbd/drbd_receiver.c | 10 +-
drivers/block/rsxx/core.c | 5 +-
drivers/block/rsxx/cregs.c | 34 ++-
drivers/block/rsxx/dma.c | 36 +--
drivers/block/umem.c | 10 +-
drivers/connector/cn_queue.c | 15 +-
drivers/connector/connector.c | 15 +-
drivers/crypto/atmel-aes.c | 5 +-
drivers/crypto/atmel-sha.c | 5 +-
drivers/crypto/atmel-tdes.c | 5 +-
drivers/crypto/axis/artpec6_crypto.c | 10 +-
drivers/crypto/caam/jr.c | 7 +-
drivers/crypto/cavium/cpt/cptvf_reqmanager.c | 22 +-
drivers/crypto/cavium/nitrox/nitrox_reqmgr.c | 25 +-
drivers/crypto/ccree/cc_request_mgr.c | 31 ++-
drivers/crypto/chelsio/chcr_algo.c | 5 +-
drivers/crypto/chelsio/chtls/chtls_cm.c | 36 ++-
drivers/crypto/chelsio/chtls/chtls_hw.c | 10 +-
drivers/crypto/chelsio/chtls/chtls_main.c | 9 +-
drivers/crypto/inside-secure/safexcel.c | 19 +-
drivers/crypto/inside-secure/safexcel_cipher.c | 15 +-
drivers/crypto/inside-secure/safexcel_hash.c | 15 +-
drivers/crypto/marvell/cesa.c | 20 +-
drivers/crypto/marvell/tdma.c | 13 +-
drivers/crypto/mediatek/mtk-aes.c | 5 +-
drivers/crypto/mediatek/mtk-sha.c | 5 +-
drivers/crypto/mxc-scc.c | 10 +-
drivers/crypto/nx/nx-842.c | 10 +-
drivers/crypto/omap-aes.c | 15 +-
drivers/crypto/omap-des.c | 5 +-
drivers/crypto/omap-sham.c | 10 +-
drivers/crypto/qat/qat_common/adf_transport.c | 15 +-
drivers/crypto/qce/core.c | 5 +-
drivers/crypto/stm32/stm32-cryp.c | 5 +-
drivers/crypto/stm32/stm32-hash.c | 5 +-
drivers/crypto/stm32/stm32_crc32.c | 5 +-
drivers/crypto/sunxi-ss/sun4i-ss-hash.c | 5 +-
drivers/crypto/sunxi-ss/sun4i-ss-prng.c | 5 +-
drivers/dma/at_xdmac.c | 5 +-
drivers/dma/dmaengine.c | 5 +-
drivers/dma/fsldma.c | 44 ++--
drivers/dma/ioat/dma.c | 59 +++--
drivers/dma/ioat/dma.h | 1 +
drivers/dma/ioat/init.c | 28 +-
drivers/dma/iop-adma.c | 60 +++--
drivers/dma/mv_xor.c | 32 ++-
drivers/dma/mv_xor_v2.c | 24 +-
drivers/dma/ppc4xx/adma.c | 74 +++---
drivers/dma/timb_dma.c | 35 ++-
drivers/dma/txx9dmac.c | 50 ++--
drivers/dma/xgene-dma.c | 20 +-
drivers/dma/xilinx/zynqmp_dma.c | 32 ++-
drivers/gpu/drm/drm_lock.c | 35 +--
drivers/gpu/drm/i915/gvt/debugfs.c | 5 +-
drivers/gpu/drm/i915/gvt/sched_policy.c | 5 +-
drivers/gpu/drm/i915/i915_gem.c | 5 +-
drivers/gpu/drm/i915/i915_request.c | 5 +-
drivers/gpu/drm/i915/intel_breadcrumbs.c | 5 +-
drivers/gpu/drm/i915/intel_engine_cs.c | 5 +-
drivers/gpu/drm/msm/adreno/a6xx_hfi.c | 9 +-
drivers/gpu/drm/vmwgfx/vmwgfx_irq.c | 10 +-
drivers/hsi/clients/cmt_speech.c | 76 +++---
drivers/hsi/clients/ssi_protocol.c | 135 +++++-----
drivers/hsi/controllers/omap_ssi_port.c | 60 +++--
drivers/infiniband/core/addr.c | 29 ++-
drivers/infiniband/core/roce_gid_mgmt.c | 5 +-
drivers/infiniband/hw/bnxt_re/qplib_fp.c | 10 +-
drivers/infiniband/hw/cxgb4/cm.c | 5 +-
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 7 +-
drivers/infiniband/hw/mlx4/main.c | 35 ++-
drivers/infiniband/sw/rdmavt/cq.c | 5 +-
drivers/infiniband/sw/rxe/rxe_mcast.c | 33 +--
drivers/infiniband/sw/rxe/rxe_mmap.c | 19 +-
drivers/infiniband/sw/rxe/rxe_net.c | 24 +-
drivers/infiniband/sw/rxe/rxe_queue.c | 5 +-
drivers/infiniband/sw/rxe/rxe_recv.c | 15 +-
drivers/infiniband/sw/rxe/rxe_resp.c | 14 +-
drivers/infiniband/ulp/ipoib/ipoib_cm.c | 42 +--
drivers/infiniband/ulp/ipoib/ipoib_ib.c | 10 +-
drivers/infiniband/ulp/ipoib/ipoib_main.c | 14 +-
drivers/infiniband/ulp/ipoib/ipoib_multicast.c | 19 +-
drivers/infiniband/ulp/isert/ib_isert.c | 52 ++--
drivers/isdn/capi/capi.c | 46 ++--
drivers/isdn/hardware/eicon/capifunc.c | 53 ++--
drivers/isdn/hardware/eicon/dadapter.c | 39 ++-
drivers/isdn/hardware/eicon/debug.c | 129 ++++++----
drivers/isdn/hardware/eicon/debug_if.h | 6 +-
drivers/isdn/hardware/eicon/diva.c | 45 ++--
drivers/isdn/hardware/eicon/idifunc.c | 22 +-
drivers/isdn/hardware/eicon/io.c | 88 ++++---
drivers/isdn/hardware/eicon/mntfunc.c | 13 +-
drivers/isdn/hardware/eicon/platform.h | 9 +-
drivers/isdn/hardware/eicon/um_idi.c | 104 +++++---
drivers/isdn/i4l/isdn_concap.c | 5 +-
drivers/isdn/i4l/isdn_net.c | 16 +-
drivers/isdn/i4l/isdn_net.h | 5 +-
drivers/isdn/i4l/isdn_ppp.c | 6 +-
drivers/isdn/mISDN/socket.c | 17 +-
drivers/isdn/mISDN/stack.c | 10 +-
drivers/leds/trigger/ledtrig-netdev.c | 15 +-
drivers/media/pci/ttpci/av7110_av.c | 10 +-
drivers/misc/sgi-xp/xpnet.c | 9 +-
drivers/misc/vmw_vmci/vmci_doorbell.c | 15 +-
drivers/mmc/host/atmel-mci.c | 24 +-
drivers/mmc/host/dw_mmc.c | 15 +-
drivers/mmc/host/wbsd.c | 22 +-
drivers/net/appletalk/ipddp.c | 19 +-
drivers/net/bonding/bond_3ad.c | 30 ++-
drivers/net/bonding/bond_alb.c | 60 +++--
drivers/net/bonding/bond_debugfs.c | 5 +-
drivers/net/caif/caif_hsi.c | 51 ++--
drivers/net/can/slcan.c | 24 +-
drivers/net/can/softing/softing_main.c | 15 +-
drivers/net/eql.c | 25 +-
drivers/net/ethernet/3com/3c59x.c | 10 +-
drivers/net/ethernet/alacritech/slicoss.c | 30 ++-
drivers/net/ethernet/altera/altera_tse_main.c | 5 +-
drivers/net/ethernet/aurora/nb8800.c | 5 +-
drivers/net/ethernet/broadcom/bcm63xx_enet.c | 10 +-
drivers/net/ethernet/broadcom/bnx2.c | 107 ++++----
.../net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c | 5 +-
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c | 29 ++-
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c | 39 +--
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 12 +-
drivers/net/ethernet/broadcom/cnic.c | 12 +-
drivers/net/ethernet/broadcom/genet/bcmgenet.c | 5 +-
drivers/net/ethernet/broadcom/tg3.c | 221 +++++++++-------
drivers/net/ethernet/calxeda/xgmac.c | 5 +-
drivers/net/ethernet/cavium/liquidio/lio_main.c | 10 +-
drivers/net/ethernet/cavium/liquidio/lio_vf_main.c | 10 +-
.../net/ethernet/cavium/liquidio/octeon_device.c | 32 ++-
drivers/net/ethernet/cavium/liquidio/octeon_droq.c | 12 +-
drivers/net/ethernet/cavium/liquidio/octeon_nic.c | 11 +-
.../net/ethernet/cavium/liquidio/request_manager.c | 22 +-
.../ethernet/cavium/liquidio/response_manager.c | 11 +-
drivers/net/ethernet/cavium/thunder/nicvf_main.c | 5 +-
drivers/net/ethernet/chelsio/cxgb/vsc7326.c | 10 +-
drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c | 5 +-
drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c | 56 ++--
drivers/net/ethernet/chelsio/cxgb3/l2t.c | 39 +--
drivers/net/ethernet/chelsio/cxgb3/sge.c | 5 +-
drivers/net/ethernet/chelsio/cxgb4/clip_tbl.c | 42 +--
drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c | 17 +-
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c | 42 +--
.../net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c | 5 +-
drivers/net/ethernet/chelsio/cxgb4/l2t.c | 40 +--
drivers/net/ethernet/chelsio/cxgb4/sge.c | 32 ++-
drivers/net/ethernet/chelsio/cxgb4/smt.c | 10 +-
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c | 15 +-
drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.c | 23 +-
drivers/net/ethernet/cisco/enic/enic_api.c | 5 +-
drivers/net/ethernet/cisco/enic/enic_clsf.c | 25 +-
drivers/net/ethernet/cisco/enic/enic_dev.c | 75 +++---
drivers/net/ethernet/cisco/enic/enic_dev.h | 2 +-
drivers/net/ethernet/cisco/enic/enic_ethtool.c | 18 +-
drivers/net/ethernet/cisco/enic/enic_main.c | 35 ++-
drivers/net/ethernet/emulex/benet/be_cmds.c | 15 +-
drivers/net/ethernet/emulex/benet/be_main.c | 5 +-
drivers/net/ethernet/freescale/fec_main.c | 34 ++-
drivers/net/ethernet/freescale/gianfar.c | 5 +-
drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c | 30 ++-
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c | 7 +-
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c | 7 +-
drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c | 9 +-
drivers/net/ethernet/ibm/emac/core.c | 15 +-
drivers/net/ethernet/intel/i40e/i40e_main.c | 49 ++--
drivers/net/ethernet/intel/i40e/i40e_ptp.c | 17 +-
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c | 38 +--
drivers/net/ethernet/intel/i40evf/i40evf_main.c | 68 +++--
.../net/ethernet/intel/i40evf/i40evf_virtchnl.c | 36 +--
drivers/net/ethernet/intel/igbvf/ethtool.c | 5 +-
drivers/net/ethernet/intel/igbvf/netdev.c | 51 ++--
drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c | 10 +-
drivers/net/ethernet/intel/ixgbevf/ethtool.c | 5 +-
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 55 ++--
drivers/net/ethernet/jme.c | 52 ++--
drivers/net/ethernet/marvell/mv643xx_eth.c | 10 +-
drivers/net/ethernet/marvell/skge.c | 34 ++-
drivers/net/ethernet/marvell/sky2.c | 34 ++-
drivers/net/ethernet/mediatek/mtk_eth_soc.c | 10 +-
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c | 5 +-
drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 35 ++-
drivers/net/ethernet/mellanox/mlx4/en_port.c | 5 +-
drivers/net/ethernet/mellanox/mlx4/en_rx.c | 5 +-
drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c | 24 +-
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 5 +-
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 21 +-
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 10 +-
.../ethernet/mellanox/mlx5/core/ipoib/ipoib_vlan.c | 10 +-
.../net/ethernet/mellanox/mlx5/core/lib/vxlan.c | 15 +-
drivers/net/ethernet/mellanox/mlxsw/core.c | 12 +-
drivers/net/ethernet/mellanox/mlxsw/pci.c | 5 +-
.../net/ethernet/mellanox/mlxsw/spectrum_router.c | 10 +-
.../net/ethernet/mellanox/mlxsw/spectrum_span.c | 5 +-
drivers/net/ethernet/microchip/lan743x_ptp.c | 30 ++-
drivers/net/ethernet/netronome/nfp/flower/cmsg.c | 14 +-
drivers/net/ethernet/netronome/nfp/flower/main.c | 14 +-
.../net/ethernet/netronome/nfp/flower/offload.c | 5 +-
.../ethernet/netronome/nfp/flower/tunnel_conf.c | 21 +-
drivers/net/ethernet/netronome/nfp/nfp_net.h | 2 +-
.../net/ethernet/netronome/nfp/nfp_net_common.c | 35 ++-
drivers/net/ethernet/nvidia/forcedeth.c | 50 ++--
drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c | 7 +-
.../net/ethernet/qlogic/netxen/netxen_nic_init.c | 5 +-
drivers/net/ethernet/qlogic/qed/qed_dev.c | 7 +-
drivers/net/ethernet/qlogic/qed/qed_fcoe.c | 19 +-
drivers/net/ethernet/qlogic/qed/qed_hw.c | 12 +-
drivers/net/ethernet/qlogic/qed/qed_iscsi.c | 19 +-
drivers/net/ethernet/qlogic/qed/qed_iwarp.c | 91 ++++---
drivers/net/ethernet/qlogic/qed/qed_ll2.c | 10 +-
drivers/net/ethernet/qlogic/qed/qed_mcp.c | 26 +-
drivers/net/ethernet/qlogic/qed/qed_rdma.c | 64 +++--
drivers/net/ethernet/qlogic/qed/qed_roce.c | 16 +-
drivers/net/ethernet/qlogic/qed/qed_spq.c | 26 +-
drivers/net/ethernet/qlogic/qede/qede_filter.c | 25 +-
drivers/net/ethernet/qlogic/qede/qede_ptp.c | 42 +--
.../net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c | 22 +-
drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c | 22 +-
.../ethernet/qlogic/qlcnic/qlcnic_sriov_common.c | 20 +-
.../net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c | 5 +-
drivers/net/ethernet/qualcomm/qca_spi.c | 10 +-
drivers/net/ethernet/qualcomm/qca_uart.c | 14 +-
drivers/net/ethernet/realtek/8139too.c | 5 +-
drivers/net/ethernet/sfc/ef10.c | 15 +-
drivers/net/ethernet/sfc/efx.c | 15 +-
drivers/net/ethernet/sfc/ethtool.c | 5 +-
drivers/net/ethernet/sfc/falcon/efx.c | 15 +-
drivers/net/ethernet/sfc/falcon/ethtool.c | 5 +-
drivers/net/ethernet/sfc/falcon/falcon.c | 9 +-
drivers/net/ethernet/sfc/falcon/farch.c | 42 +--
drivers/net/ethernet/sfc/falcon/selftest.c | 10 +-
drivers/net/ethernet/sfc/farch.c | 5 +-
drivers/net/ethernet/sfc/mcdi.c | 34 ++-
drivers/net/ethernet/sfc/ptp.c | 32 ++-
drivers/net/ethernet/sfc/rx.c | 5 +-
drivers/net/ethernet/sfc/selftest.c | 10 +-
drivers/net/ethernet/silan/sc92031.c | 60 +++--
drivers/net/ethernet/ti/netcp_ethss.c | 10 +-
drivers/net/ethernet/toshiba/tc35815.c | 5 +-
drivers/net/ethernet/via/via-rhine.c | 25 +-
drivers/net/hamradio/6pack.c | 30 ++-
drivers/net/hamradio/mkiss.c | 45 ++--
drivers/net/hyperv/rndis_filter.c | 5 +-
drivers/net/ieee802154/fakelb.c | 20 +-
drivers/net/ipvlan/ipvlan_core.c | 10 +-
drivers/net/ipvlan/ipvlan_main.c | 22 +-
drivers/net/macsec.c | 37 +--
drivers/net/macvlan.c | 5 +-
drivers/net/ppp/ppp_async.c | 12 +-
drivers/net/ppp/ppp_generic.c | 70 ++---
drivers/net/ppp/ppp_synctty.c | 5 +-
drivers/net/ppp/pppoe.c | 24 +-
drivers/net/slip/slip.c | 53 ++--
drivers/net/tun.c | 62 +++--
drivers/net/usb/cdc_mbim.c | 5 +-
drivers/net/usb/cdc_ncm.c | 39 +--
drivers/net/usb/r8152.c | 5 +-
drivers/net/virtio_net.c | 5 +-
drivers/net/vrf.c | 19 +-
drivers/net/vxlan.c | 32 ++-
drivers/net/wan/x25_asy.c | 10 +-
drivers/net/wireless/ath/ath10k/ce.c | 49 ++--
drivers/net/wireless/ath/ath10k/coredump.c | 5 +-
drivers/net/wireless/ath/ath10k/debug.c | 47 ++--
drivers/net/wireless/ath/ath10k/debugfs_sta.c | 15 +-
drivers/net/wireless/ath/ath10k/htc.c | 23 +-
drivers/net/wireless/ath/ath10k/htt_rx.c | 79 +++---
drivers/net/wireless/ath/ath10k/htt_tx.c | 25 +-
drivers/net/wireless/ath/ath10k/hw.c | 9 +-
drivers/net/wireless/ath/ath10k/mac.c | 284 ++++++++++++---------
drivers/net/wireless/ath/ath10k/p2p.c | 5 +-
drivers/net/wireless/ath/ath10k/pci.c | 42 +--
drivers/net/wireless/ath/ath10k/sdio.c | 27 +-
drivers/net/wireless/ath/ath10k/snoc.c | 17 +-
drivers/net/wireless/ath/ath10k/testmode.c | 15 +-
drivers/net/wireless/ath/ath10k/thermal.c | 10 +-
drivers/net/wireless/ath/ath10k/txrx.c | 24 +-
drivers/net/wireless/ath/ath10k/wmi-tlv.c | 5 +-
drivers/net/wireless/ath/ath10k/wmi.c | 83 +++---
drivers/net/wireless/ath/ath5k/ani.c | 5 +-
drivers/net/wireless/ath/ath5k/base.c | 34 ++-
drivers/net/wireless/ath/ath5k/debug.c | 10 +-
drivers/net/wireless/ath/ath5k/mac80211-ops.c | 10 +-
drivers/net/wireless/ath/ath6kl/cfg80211.c | 29 ++-
drivers/net/wireless/ath/ath6kl/hif.c | 15 +-
drivers/net/wireless/ath/ath6kl/htc_mbox.c | 107 ++++----
drivers/net/wireless/ath/ath6kl/htc_pipe.c | 89 ++++---
drivers/net/wireless/ath/ath6kl/init.c | 7 +-
drivers/net/wireless/ath/ath6kl/main.c | 49 ++--
drivers/net/wireless/ath/ath6kl/sdio.c | 51 ++--
drivers/net/wireless/ath/ath6kl/txrx.c | 124 +++++----
drivers/net/wireless/ath/ath6kl/wmi.c | 56 ++--
drivers/net/wireless/ath/ath9k/ath9k.h | 2 +-
drivers/net/wireless/ath/ath9k/beacon.c | 5 +-
drivers/net/wireless/ath/ath9k/channel.c | 68 ++---
drivers/net/wireless/ath/ath9k/dynack.c | 12 +-
drivers/net/wireless/ath/ath9k/gpio.c | 10 +-
drivers/net/wireless/ath/ath9k/htc_drv_beacon.c | 33 ++-
drivers/net/wireless/ath/ath9k/htc_drv_debug.c | 10 +-
drivers/net/wireless/ath/ath9k/htc_drv_main.c | 25 +-
drivers/net/wireless/ath/ath9k/htc_drv_txrx.c | 50 ++--
drivers/net/wireless/ath/ath9k/main.c | 44 ++--
drivers/net/wireless/ath/ath9k/recv.c | 17 +-
drivers/net/wireless/ath/ath9k/wmi.c | 7 +-
drivers/net/wireless/ath/ath9k/wow.c | 10 +-
drivers/net/wireless/ath/ath9k/xmit.c | 38 +--
drivers/net/wireless/ath/carl9170/debug.c | 20 +-
drivers/net/wireless/ath/carl9170/main.c | 45 ++--
drivers/net/wireless/ath/carl9170/rx.c | 5 +-
drivers/net/wireless/ath/carl9170/tx.c | 80 +++---
drivers/net/wireless/ath/carl9170/usb.c | 12 +-
drivers/net/wireless/ath/dfs_pri_detector.c | 30 ++-
drivers/net/wireless/ath/wcn36xx/main.c | 13 +-
drivers/net/wireless/ath/wil6210/debugfs.c | 5 +-
drivers/net/wireless/ath/wil6210/main.c | 10 +-
drivers/net/wireless/ath/wil6210/rx_reorder.c | 5 +-
drivers/net/wireless/ath/wil6210/txrx.c | 28 +-
drivers/net/wireless/ath/wil6210/txrx_edma.c | 10 +-
drivers/net/wireless/ath/wil6210/wmi.c | 15 +-
drivers/net/wireless/atmel/atmel.c | 7 +-
.../wireless/broadcom/brcm80211/brcmfmac/sdio.c | 27 +-
.../wireless/broadcom/brcm80211/brcmsmac/debug.c | 5 +-
.../broadcom/brcm80211/brcmsmac/mac80211_if.c | 135 +++++-----
drivers/net/wireless/intel/iwlwifi/dvm/calib.c | 16 +-
drivers/net/wireless/intel/iwlwifi/dvm/debugfs.c | 20 +-
drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c | 5 +-
drivers/net/wireless/intel/iwlwifi/dvm/main.c | 5 +-
drivers/net/wireless/intel/iwlwifi/dvm/sta.c | 119 +++++----
drivers/net/wireless/intel/iwlwifi/dvm/tx.c | 38 +--
drivers/net/wireless/intel/iwlwifi/fw/notif-wait.c | 10 +-
drivers/net/wireless/intel/iwlwifi/mvm/d3.c | 5 +-
drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c | 10 +-
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c | 51 ++--
drivers/net/wireless/intel/iwlwifi/mvm/ops.c | 30 ++-
drivers/net/wireless/intel/iwlwifi/mvm/rs.c | 5 +-
drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c | 23 +-
drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 150 ++++++-----
.../net/wireless/intel/iwlwifi/mvm/time-event.c | 34 ++-
drivers/net/wireless/intel/iwlwifi/mvm/tx.c | 10 +-
drivers/net/wireless/intel/iwlwifi/mvm/utils.c | 46 ++--
drivers/net/wireless/intel/iwlwifi/pcie/rx.c | 19 +-
drivers/net/wireless/intel/iwlwifi/pcie/trans.c | 15 +-
drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c | 12 +-
drivers/net/wireless/intel/iwlwifi/pcie/tx.c | 29 ++-
.../net/wireless/intersil/hostap/hostap_80211_rx.c | 5 +-
drivers/net/wireless/intersil/hostap/hostap_ap.c | 145 ++++++-----
drivers/net/wireless/intersil/hostap/hostap_hw.c | 20 +-
.../net/wireless/intersil/hostap/hostap_ioctl.c | 9 +-
drivers/net/wireless/intersil/hostap/hostap_main.c | 12 +-
drivers/net/wireless/intersil/hostap/hostap_proc.c | 6 +-
.../net/wireless/intersil/orinoco/orinoco_usb.c | 9 +-
drivers/net/wireless/mac80211_hwsim.c | 67 +++--
drivers/net/wireless/marvell/mwl8k.c | 21 +-
drivers/net/wireless/mediatek/mt76/agg-rx.c | 20 +-
drivers/net/wireless/mediatek/mt76/dma.c | 15 +-
drivers/net/wireless/mediatek/mt76/mac80211.c | 5 +-
drivers/net/wireless/mediatek/mt76/mt76x0/mac.c | 10 +-
drivers/net/wireless/mediatek/mt76/mt76x0/phy.c | 5 +-
drivers/net/wireless/mediatek/mt76/mt76x2_dma.c | 5 +-
drivers/net/wireless/mediatek/mt76/mt76x2_mac.c | 5 +-
.../net/wireless/mediatek/mt76/mt76x2_mac_common.c | 10 +-
.../net/wireless/mediatek/mt76/mt76x2_phy_common.c | 5 +-
drivers/net/wireless/mediatek/mt76/mt76x2_tx.c | 5 +-
drivers/net/wireless/mediatek/mt76/tx.c | 45 ++--
drivers/net/wireless/mediatek/mt76/usb.c | 5 +-
drivers/net/wireless/mediatek/mt7601u/mac.c | 10 +-
drivers/net/wireless/mediatek/mt7601u/phy.c | 14 +-
drivers/net/wireless/ralink/rt2x00/rt2x00dev.c | 15 +-
drivers/net/wireless/ralink/rt2x00/rt2x00queue.c | 5 +-
.../realtek/rtlwifi/btcoexist/halbtcoutsrc.c | 5 +-
drivers/net/wireless/realtek/rtlwifi/core.c | 10 +-
drivers/net/wireless/realtek/rtlwifi/pci.c | 17 +-
.../net/wireless/realtek/rtlwifi/rtl8188ee/dm.c | 16 +-
.../net/wireless/realtek/rtlwifi/rtl8188ee/hw.c | 22 +-
.../net/wireless/realtek/rtlwifi/rtl8192ee/dm.c | 10 +-
.../net/wireless/realtek/rtlwifi/rtl8192ee/hw.c | 22 +-
.../net/wireless/realtek/rtlwifi/rtl8723be/dm.c | 10 +-
.../net/wireless/realtek/rtlwifi/rtl8723be/hw.c | 22 +-
.../net/wireless/realtek/rtlwifi/rtl8821ae/dm.c | 10 +-
.../net/wireless/realtek/rtlwifi/rtl8821ae/hw.c | 20 +-
drivers/net/wireless/st/cw1200/debug.c | 5 +-
drivers/net/wireless/st/cw1200/pm.c | 10 +-
drivers/net/wireless/st/cw1200/queue.c | 80 +++---
drivers/net/wireless/st/cw1200/sta.c | 34 ++-
drivers/net/wireless/st/cw1200/txrx.c | 77 +++---
drivers/net/wireless/st/cw1200/wsm.c | 5 +-
drivers/net/xen-netfront.c | 15 +-
drivers/pcmcia/bcm63xx_pcmcia.c | 10 +-
drivers/rapidio/devices/tsi721_dma.c | 32 ++-
drivers/rapidio/rio_cm.c | 92 ++++---
drivers/s390/block/dasd.c | 38 +--
drivers/s390/block/dasd_ioctl.c | 7 +-
drivers/s390/block/dasd_proc.c | 5 +-
drivers/s390/char/sclp.c | 5 +-
drivers/s390/char/tty3270.c | 40 +--
drivers/s390/char/vmlogrdr.c | 17 +-
drivers/s390/cio/cio.c | 5 +-
drivers/s390/crypto/ap_bus.c | 64 +++--
drivers/s390/crypto/ap_card.c | 25 +-
drivers/s390/crypto/ap_queue.c | 60 +++--
drivers/s390/crypto/pkey_api.c | 22 +-
drivers/s390/crypto/zcrypt_api.c | 20 +-
drivers/s390/net/netiucv.c | 36 +--
drivers/s390/net/qeth_l2_main.c | 10 +-
drivers/s390/net/qeth_l3_main.c | 65 +++--
drivers/s390/net/qeth_l3_sys.c | 25 +-
drivers/s390/net/smsgiucv.c | 10 +-
drivers/s390/net/smsgiucv_app.c | 5 +-
drivers/s390/scsi/zfcp_fc.c | 5 +-
drivers/s390/scsi/zfcp_sysfs.c | 7 +-
drivers/scsi/be2iscsi/be_main.c | 51 ++--
drivers/scsi/bnx2fc/bnx2fc_els.c | 36 +--
drivers/scsi/bnx2fc/bnx2fc_fcoe.c | 58 +++--
drivers/scsi/bnx2fc/bnx2fc_hwi.c | 20 +-
drivers/scsi/bnx2fc/bnx2fc_io.c | 67 ++---
drivers/scsi/bnx2fc/bnx2fc_tgt.c | 21 +-
drivers/scsi/bnx2i/bnx2i.h | 2 +-
drivers/scsi/bnx2i/bnx2i_hwi.c | 12 +-
drivers/scsi/bnx2i/bnx2i_init.c | 5 +-
drivers/scsi/bnx2i/bnx2i_iscsi.c | 59 +++--
drivers/scsi/cxgbi/cxgb3i/cxgb3i.c | 27 +-
drivers/scsi/cxgbi/cxgb4i/cxgb4i.c | 58 +++--
drivers/scsi/cxgbi/libcxgbi.c | 76 +++---
drivers/scsi/fcoe/fcoe.c | 10 +-
drivers/scsi/fcoe/fcoe_ctlr.c | 20 +-
drivers/scsi/fcoe/fcoe_transport.c | 14 +-
drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c | 105 ++++----
drivers/scsi/iscsi_tcp.c | 57 +++--
drivers/scsi/libfc/fc_exch.c | 123 +++++----
drivers/scsi/libfc/fc_fcp.c | 20 +-
drivers/scsi/libiscsi.c | 170 ++++++------
drivers/scsi/libiscsi_tcp.c | 10 +-
drivers/scsi/qedi/qedi_fw.c | 46 ++--
drivers/scsi/qedi/qedi_main.c | 27 +-
drivers/staging/fwserial/fwserial.c | 167 +++++++-----
drivers/staging/mt7621-dma/mtk-hsdma.c | 15 +-
drivers/staging/rtl8188eu/core/rtw_ap.c | 69 ++---
drivers/staging/rtl8188eu/core/rtw_cmd.c | 17 +-
drivers/staging/rtl8188eu/core/rtw_ioctl_set.c | 32 ++-
drivers/staging/rtl8188eu/core/rtw_mlme.c | 92 ++++---
drivers/staging/rtl8188eu/core/rtw_mlme_ext.c | 37 +--
drivers/staging/rtl8188eu/core/rtw_recv.c | 38 +--
drivers/staging/rtl8188eu/core/rtw_sta_mgt.c | 40 +--
drivers/staging/rtl8188eu/core/rtw_xmit.c | 55 ++--
drivers/staging/rtl8188eu/hal/rtl8188eu_xmit.c | 12 +-
drivers/staging/rtl8188eu/include/rtw_mlme.h | 4 +-
drivers/staging/rtl8188eu/os_dep/ioctl_linux.c | 26 +-
drivers/staging/rtl8188eu/os_dep/xmit_linux.c | 12 +-
drivers/staging/rtl8723bs/core/rtw_ap.c | 65 +++--
drivers/staging/rtl8723bs/core/rtw_cmd.c | 21 +-
drivers/staging/rtl8723bs/core/rtw_debug.c | 12 +-
drivers/staging/rtl8723bs/core/rtw_ioctl_set.c | 37 +--
drivers/staging/rtl8723bs/core/rtw_mlme.c | 101 ++++----
drivers/staging/rtl8723bs/core/rtw_mlme_ext.c | 68 ++---
drivers/staging/rtl8723bs/core/rtw_recv.c | 53 ++--
drivers/staging/rtl8723bs/core/rtw_sta_mgt.c | 61 +++--
drivers/staging/rtl8723bs/core/rtw_wlan_util.c | 50 ++--
drivers/staging/rtl8723bs/core/rtw_xmit.c | 95 ++++---
drivers/staging/rtl8723bs/hal/hal_com.c | 2 +-
drivers/staging/rtl8723bs/hal/hal_sdio.c | 2 +-
drivers/staging/rtl8723bs/hal/rtl8723bs_recv.c | 2 +-
drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c | 22 +-
drivers/staging/rtl8723bs/hal/sdio_ops.c | 2 +-
drivers/staging/rtl8723bs/include/rtw_mlme.h | 4 +-
drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c | 29 ++-
drivers/staging/rtl8723bs/os_dep/ioctl_linux.c | 44 ++--
drivers/staging/rtl8723bs/os_dep/mlme_linux.c | 5 +-
drivers/staging/rtl8723bs/os_dep/xmit_linux.c | 5 +-
drivers/staging/rtlwifi/btcoexist/halbtcoutsrc.c | 5 +-
drivers/staging/rtlwifi/core.c | 10 +-
drivers/staging/rtlwifi/pci.c | 17 +-
drivers/staging/rtlwifi/rtl8822be/hw.c | 22 +-
.../vc04_services/interface/vchiq_arm/vchiq_arm.c | 69 ++---
drivers/target/iscsi/cxgbit/cxgbit_cm.c | 41 +--
drivers/target/iscsi/cxgbit/cxgbit_main.c | 17 +-
drivers/target/iscsi/cxgbit/cxgbit_target.c | 31 ++-
drivers/target/iscsi/iscsi_target.c | 221 +++++++++-------
drivers/target/iscsi/iscsi_target_configfs.c | 19 +-
drivers/target/iscsi/iscsi_target_erl0.c | 49 ++--
drivers/target/iscsi/iscsi_target_erl1.c | 79 +++---
drivers/target/iscsi/iscsi_target_erl2.c | 18 +-
drivers/target/iscsi/iscsi_target_login.c | 68 ++---
drivers/target/iscsi/iscsi_target_nego.c | 60 +++--
drivers/target/iscsi/iscsi_target_nodeattrib.c | 5 +-
drivers/target/iscsi/iscsi_target_stat.c | 45 ++--
drivers/target/iscsi/iscsi_target_tmr.c | 30 ++-
drivers/target/iscsi/iscsi_target_util.c | 208 ++++++++-------
drivers/target/sbp/sbp_target.c | 141 +++++-----
drivers/target/target_core_tpg.c | 10 +-
drivers/target/target_core_transport.c | 5 +-
drivers/target/target_core_user.c | 12 +-
drivers/tty/hvc/hvc_iucv.c | 55 ++--
drivers/tty/moxa.c | 21 +-
drivers/usb/serial/keyspan_pda.c | 7 +-
drivers/vhost/net.c | 5 +-
drivers/vhost/vsock.c | 45 ++--
drivers/xen/pvcalls-back.c | 20 +-
fs/afs/internal.h | 4 +-
fs/afs/rxrpc.c | 5 +-
fs/dlm/lowcomms.c | 40 +--
fs/fs-writeback.c | 15 +-
fs/jffs2/README.Locking | 2 +-
fs/nfs/callback.c | 7 +-
fs/ocfs2/cluster/tcp.c | 35 ++-
include/asm-generic/hardirq.h | 2 +-
include/linux/bottom_half.h | 57 ++++-
include/linux/dmaengine.h | 16 +-
include/linux/interrupt.h | 82 +++---
include/linux/netdevice.h | 40 ++-
include/linux/preempt.h | 11 +-
include/linux/ptr_ring.h | 30 ++-
include/linux/rcupdate.h | 11 +-
include/linux/rhashtable.h | 17 +-
include/linux/rwlock.h | 8 +-
include/linux/rwlock_api_smp.h | 40 +--
include/linux/seqlock.h | 21 +-
include/linux/spinlock.h | 23 +-
include/linux/spinlock_api_smp.h | 34 ++-
include/linux/spinlock_api_up.h | 18 +-
include/linux/u64_stats_sync.h | 2 +-
include/linux/xarray.h | 2 +-
include/net/arp.h | 10 +-
include/net/gen_stats.h | 1 +
include/net/ip6_fib.h | 1 +
include/net/mac80211.h | 15 +-
include/net/ndisc.h | 10 +-
include/net/neighbour.h | 1 +
include/net/netfilter/nf_log.h | 4 +-
include/net/netrom.h | 9 +-
include/net/ping.h | 1 +
include/net/pkt_cls.h | 6 +-
include/net/request_sock.h | 5 +-
include/net/sch_generic.h | 19 +-
include/net/snmp.h | 10 +-
include/net/sock.h | 17 +-
include/net/tcp.h | 9 +-
include/net/udp.h | 1 +
include/target/iscsi/iscsi_target_core.h | 2 +-
kernel/bpf/btf.c | 5 +-
kernel/bpf/core.c | 10 +-
kernel/bpf/cpumap.c | 5 +-
kernel/bpf/local_storage.c | 34 ++-
kernel/bpf/reuseport_array.c | 32 ++-
kernel/bpf/sockmap.c | 93 ++++---
kernel/bpf/syscall.c | 30 ++-
kernel/cgroup/cgroup.c | 15 +-
kernel/irq/manage.c | 5 +-
kernel/locking/spinlock.c | 41 +--
kernel/padata.c | 20 +-
kernel/rcu/rcuperf.c | 2 +-
kernel/rcu/rcutorture.c | 19 +-
kernel/rcu/srcutiny.c | 5 +-
kernel/rcu/srcutree.c | 5 +-
kernel/rcu/tiny.c | 5 +-
kernel/rcu/tree_plugin.h | 12 +-
kernel/rcu/update.c | 5 +-
kernel/softirq.c | 164 +++++++++---
kernel/time/hrtimer.c | 5 +-
kernel/trace/ring_buffer.c | 2 +-
kernel/trace/trace.c | 2 +-
lib/locking-selftest.c | 8 +-
lib/rhashtable.c | 12 +-
mm/backing-dev.c | 22 +-
mm/page-writeback.c | 10 +-
net/6lowpan/debugfs.c | 25 +-
net/6lowpan/iphc.c | 23 +-
net/6lowpan/ndisc.c | 12 +-
net/6lowpan/nhc.c | 31 ++-
net/802/garp.c | 19 +-
net/802/mrp.c | 19 +-
net/802/psnap.c | 10 +-
net/appletalk/aarp.c | 48 ++--
net/appletalk/atalk_proc.c | 6 +-
net/appletalk/ddp.c | 65 +++--
net/atm/clip.c | 5 +-
net/atm/mpc.c | 5 +-
net/atm/mpoa_caches.c | 41 +--
net/ax25/af_ax25.c | 26 +-
net/ax25/ax25_dev.c | 24 +-
net/ax25/ax25_iface.c | 53 ++--
net/ax25/ax25_out.c | 7 +-
net/ax25/ax25_route.c | 33 ++-
net/ax25/ax25_subr.c | 5 +-
net/batman-adv/bat_iv_ogm.c | 51 ++--
net/batman-adv/bridge_loop_avoidance.c | 70 ++---
net/batman-adv/distributed-arp-table.c | 5 +-
net/batman-adv/fragmentation.c | 10 +-
net/batman-adv/gateway_client.c | 20 +-
net/batman-adv/hash.h | 4 +-
net/batman-adv/icmp_socket.c | 17 +-
net/batman-adv/log.c | 12 +-
net/batman-adv/multicast.c | 37 +--
net/batman-adv/network-coding.c | 37 +--
net/batman-adv/originator.c | 62 +++--
net/batman-adv/routing.c | 22 +-
net/batman-adv/send.c | 21 +-
net/batman-adv/soft-interface.c | 10 +-
net/batman-adv/tp_meter.c | 67 +++--
net/batman-adv/translation-table.c | 158 +++++++-----
net/batman-adv/tvlv.c | 25 +-
net/bluetooth/hci_core.c | 5 +-
net/bridge/br.c | 13 +-
net/bridge/br_device.c | 5 +-
net/bridge/br_fdb.c | 65 +++--
net/bridge/br_if.c | 20 +-
net/bridge/br_ioctl.c | 9 +-
net/bridge/br_mdb.c | 15 +-
net/bridge/br_multicast.c | 47 ++--
net/bridge/br_netlink.c | 24 +-
net/bridge/br_stp.c | 20 +-
net/bridge/br_stp_if.c | 25 +-
net/bridge/br_sysfs_br.c | 5 +-
net/bridge/br_sysfs_if.c | 9 +-
net/bridge/br_vlan.c | 5 +-
net/bridge/netfilter/ebt_limit.c | 7 +-
net/bridge/netfilter/ebt_log.c | 5 +-
net/bridge/netfilter/ebtables.c | 32 ++-
net/caif/caif_dev.c | 21 +-
net/caif/caif_socket.c | 5 +-
net/caif/cfctrl.c | 40 +--
net/caif/cfmuxl.c | 30 ++-
net/can/gw.c | 5 +-
net/core/datagram.c | 10 +-
net/core/dev.c | 51 ++--
net/core/dev_addr_lists.c | 51 ++--
net/core/gen_estimator.c | 9 +-
net/core/gen_stats.c | 8 +-
net/core/link_watch.c | 5 +-
net/core/neighbour.c | 175 +++++++------
net/core/net-procfs.c | 5 +-
net/core/net_namespace.c | 31 ++-
net/core/netpoll.c | 5 +-
net/core/pktgen.c | 23 +-
net/core/request_sock.c | 7 +-
net/core/rtnetlink.c | 15 +-
net/core/skbuff.c | 5 +-
net/core/sock.c | 52 ++--
net/core/sock_reuseport.c | 26 +-
net/dcb/dcbnl.c | 54 ++--
net/dccp/input.c | 5 +-
net/dccp/ipv4.c | 5 +-
net/dccp/minisocks.c | 10 +-
net/dccp/proto.c | 5 +-
net/decnet/af_decnet.c | 20 +-
net/decnet/dn_fib.c | 20 +-
net/decnet/dn_route.c | 56 ++--
net/decnet/dn_table.c | 27 +-
net/hsr/hsr_device.c | 7 +-
net/ieee802154/6lowpan/tx.c | 5 +-
net/ieee802154/socket.c | 25 +-
net/ipv4/af_inet.c | 10 +-
net/ipv4/arp.c | 10 +-
net/ipv4/cipso_ipv4.c | 19 +-
net/ipv4/esp4.c | 19 +-
net/ipv4/fib_frontend.c | 5 +-
net/ipv4/fib_semantics.c | 20 +-
net/ipv4/icmp.c | 10 +-
net/ipv4/igmp.c | 82 +++---
net/ipv4/inet_connection_sock.c | 28 +-
net/ipv4/inet_diag.c | 5 +-
net/ipv4/inet_fragment.c | 5 +-
net/ipv4/inet_hashtables.c | 30 ++-
net/ipv4/inet_timewait_sock.c | 5 +-
net/ipv4/inetpeer.c | 5 +-
net/ipv4/ip_output.c | 7 +-
net/ipv4/ipmr.c | 36 +--
net/ipv4/ipmr_base.c | 17 +-
net/ipv4/netfilter/arp_tables.c | 10 +-
net/ipv4/netfilter/ip_tables.c | 10 +-
net/ipv4/netfilter/ipt_CLUSTERIP.c | 21 +-
net/ipv4/netfilter/nf_defrag_ipv4.c | 5 +-
net/ipv4/netfilter/nf_log_arp.c | 5 +-
net/ipv4/netfilter/nf_log_ipv4.c | 5 +-
net/ipv4/netfilter/nf_nat_snmp_basic_main.c | 5 +-
net/ipv4/ping.c | 22 +-
net/ipv4/raw.c | 15 +-
net/ipv4/route.c | 30 ++-
net/ipv4/sysctl_net_ipv4.c | 5 +-
net/ipv4/tcp.c | 32 ++-
net/ipv4/tcp_input.c | 5 +-
net/ipv4/tcp_ipv4.c | 32 ++-
net/ipv4/tcp_metrics.c | 20 +-
net/ipv4/tcp_minisocks.c | 5 +-
net/ipv4/udp.c | 52 ++--
net/ipv4/udp_diag.c | 7 +-
net/ipv6/addrconf.c | 240 +++++++++--------
net/ipv6/af_inet6.c | 10 +-
net/ipv6/anycast.c | 38 +--
net/ipv6/calipso.c | 19 +-
net/ipv6/esp6.c | 14 +-
net/ipv6/icmp.c | 10 +-
net/ipv6/inet6_hashtables.c | 5 +-
net/ipv6/ip6_fib.c | 43 ++--
net/ipv6/ip6_flowlabel.c | 88 ++++---
net/ipv6/ip6_output.c | 12 +-
net/ipv6/ip6mr.c | 46 ++--
net/ipv6/ipv6_sockglue.c | 20 +-
net/ipv6/mcast.c | 221 +++++++++-------
net/ipv6/mip6.c | 15 +-
net/ipv6/ndisc.c | 17 +-
net/ipv6/netfilter/ip6_tables.c | 10 +-
net/ipv6/netfilter/nf_conntrack_reasm.c | 5 +-
net/ipv6/netfilter/nf_log_ipv6.c | 5 +-
net/ipv6/netfilter/nf_tproxy_ipv6.c | 5 +-
net/ipv6/raw.c | 5 +-
net/ipv6/route.c | 87 ++++---
net/ipv6/seg6_hmac.c | 5 +-
net/ipv6/tcp_ipv6.c | 14 +-
net/ipv6/xfrm6_tunnel.c | 15 +-
net/iucv/af_iucv.c | 25 +-
net/iucv/iucv.c | 70 +++--
net/kcm/kcmproc.c | 10 +-
net/kcm/kcmsock.c | 130 ++++++----
net/key/af_key.c | 5 +-
net/l2tp/l2tp_core.c | 100 +++++---
net/l2tp/l2tp_debugfs.c | 5 +-
net/l2tp/l2tp_ip.c | 34 ++-
net/l2tp/l2tp_ip6.c | 29 ++-
net/l2tp/l2tp_ppp.c | 10 +-
net/lapb/lapb_iface.c | 15 +-
net/llc/llc_conn.c | 15 +-
net/llc/llc_core.c | 15 +-
net/llc/llc_proc.c | 23 +-
net/llc/llc_sap.c | 10 +-
net/mac80211/agg-rx.c | 5 +-
net/mac80211/agg-tx.c | 50 ++--
net/mac80211/cfg.c | 36 +--
net/mac80211/debugfs.c | 5 +-
net/mac80211/debugfs_netdev.c | 5 +-
net/mac80211/debugfs_sta.c | 5 +-
net/mac80211/ht.c | 7 +-
net/mac80211/ibss.c | 14 +-
net/mac80211/iface.c | 14 +-
net/mac80211/main.c | 5 +-
net/mac80211/mesh_hwmp.c | 58 +++--
net/mac80211/mesh_pathtbl.c | 37 +--
net/mac80211/mesh_plink.c | 36 +--
net/mac80211/mesh_sync.c | 15 +-
net/mac80211/mlme.c | 5 +-
net/mac80211/ocb.c | 14 +-
net/mac80211/rate.c | 20 +-
net/mac80211/rx.c | 25 +-
net/mac80211/sta_info.c | 20 +-
net/mac80211/tdls.c | 10 +-
net/mac80211/tkip.c | 5 +-
net/mac80211/tx.c | 55 ++--
net/mac80211/util.c | 5 +-
net/mac802154/llsec.c | 43 ++--
net/mpls/internal.h | 10 +-
net/netfilter/ipset/ip_set_bitmap_gen.h | 2 +-
net/netfilter/ipset/ip_set_core.c | 79 +++---
net/netfilter/ipset/ip_set_hash_gen.h | 21 +-
net/netfilter/ipset/ip_set_list_set.c | 5 +-
net/netfilter/ipvs/ip_vs_app.c | 5 +-
net/netfilter/ipvs/ip_vs_conn.c | 22 +-
net/netfilter/ipvs/ip_vs_core.c | 20 +-
net/netfilter/ipvs/ip_vs_ctl.c | 40 +--
net/netfilter/ipvs/ip_vs_est.c | 10 +-
net/netfilter/ipvs/ip_vs_lblc.c | 10 +-
net/netfilter/ipvs/ip_vs_lblcr.c | 18 +-
net/netfilter/ipvs/ip_vs_proto_sctp.c | 5 +-
net/netfilter/ipvs/ip_vs_proto_tcp.c | 10 +-
net/netfilter/ipvs/ip_vs_rr.c | 12 +-
net/netfilter/ipvs/ip_vs_sync.c | 48 ++--
net/netfilter/ipvs/ip_vs_wrr.c | 10 +-
net/netfilter/ipvs/ip_vs_xmit.c | 18 +-
net/netfilter/nf_conncount.c | 10 +-
net/netfilter/nf_conntrack_core.c | 46 ++--
net/netfilter/nf_conntrack_ecache.c | 15 +-
net/netfilter/nf_conntrack_expect.c | 32 ++-
net/netfilter/nf_conntrack_ftp.c | 5 +-
net/netfilter/nf_conntrack_h323_main.c | 26 +-
net/netfilter/nf_conntrack_helper.c | 10 +-
net/netfilter/nf_conntrack_irc.c | 5 +-
net/netfilter/nf_conntrack_netlink.c | 38 +--
net/netfilter/nf_conntrack_pptp.c | 5 +-
net/netfilter/nf_conntrack_proto_dccp.c | 21 +-
net/netfilter/nf_conntrack_proto_gre.c | 27 +-
net/netfilter/nf_conntrack_proto_sctp.c | 19 +-
net/netfilter/nf_conntrack_proto_tcp.c | 31 ++-
net/netfilter/nf_conntrack_sane.c | 5 +-
net/netfilter/nf_conntrack_seqadj.c | 10 +-
net/netfilter/nf_conntrack_sip.c | 10 +-
net/netfilter/nf_log.c | 8 +-
net/netfilter/nf_log_common.c | 5 +-
net/netfilter/nf_nat_core.c | 10 +-
net/netfilter/nf_nat_redirect.c | 5 +-
net/netfilter/nf_queue.c | 5 +-
net/netfilter/nf_tables_core.c | 5 +-
net/netfilter/nfnetlink_log.c | 76 +++---
net/netfilter/nfnetlink_queue.c | 48 ++--
net/netfilter/nft_counter.c | 10 +-
net/netfilter/nft_limit.c | 7 +-
net/netfilter/nft_meta.c | 13 +-
net/netfilter/nft_set_rbtree.c | 32 ++-
net/netfilter/x_tables.c | 7 +-
net/netfilter/xt_RATEEST.c | 5 +-
net/netfilter/xt_dccp.c | 9 +-
net/netfilter/xt_hashlimit.c | 18 +-
net/netfilter/xt_limit.c | 7 +-
net/netfilter/xt_quota.c | 5 +-
net/netfilter/xt_recent.c | 35 +--
net/netlink/af_netlink.c | 10 +-
net/netrom/af_netrom.c | 32 ++-
net/netrom/nr_route.c | 58 +++--
net/nfc/rawsock.c | 15 +-
net/openvswitch/datapath.c | 5 +-
net/openvswitch/flow.c | 10 +-
net/openvswitch/meter.c | 15 +-
net/packet/af_packet.c | 34 ++-
net/rds/af_rds.c | 20 +-
net/rds/tcp.c | 10 +-
net/rds/tcp_connect.c | 5 +-
net/rds/tcp_listen.c | 15 +-
net/rds/tcp_recv.c | 5 +-
net/rds/tcp_send.c | 5 +-
net/rose/af_rose.c | 32 ++-
net/rose/rose_route.c | 73 +++---
net/rxrpc/af_rxrpc.c | 15 +-
net/rxrpc/ar-internal.h | 15 +-
net/rxrpc/call_accept.c | 17 +-
net/rxrpc/call_event.c | 16 +-
net/rxrpc/call_object.c | 10 +-
net/rxrpc/conn_client.c | 10 +-
net/rxrpc/conn_event.c | 12 +-
net/rxrpc/conn_object.c | 5 +-
net/rxrpc/conn_service.c | 4 +-
net/rxrpc/input.c | 15 +-
net/rxrpc/output.c | 14 +-
net/rxrpc/peer_event.c | 22 +-
net/rxrpc/peer_object.c | 10 +-
net/rxrpc/recvmsg.c | 31 ++-
net/rxrpc/sendmsg.c | 15 +-
net/sched/act_bpf.c | 12 +-
net/sched/act_csum.c | 12 +-
net/sched/act_gact.c | 12 +-
net/sched/act_ife.c | 22 +-
net/sched/act_ipt.c | 12 +-
net/sched/act_mirred.c | 19 +-
net/sched/act_nat.c | 5 +-
net/sched/act_pedit.c | 14 +-
net/sched/act_police.c | 12 +-
net/sched/act_sample.c | 12 +-
net/sched/act_simple.c | 12 +-
net/sched/act_skbmod.c | 12 +-
net/sched/act_tunnel_key.c | 12 +-
net/sched/act_vlan.c | 12 +-
net/sched/cls_route.c | 10 +-
net/sched/sch_generic.c | 24 +-
net/sched/sch_mq.c | 5 +-
net/sched/sch_mqprio.c | 14 +-
net/sched/sch_netem.c | 5 +-
net/sched/sch_teql.c | 5 +-
net/sctp/associola.c | 10 +-
net/sctp/input.c | 15 +-
net/sctp/ipv6.c | 14 +-
net/sctp/proc.c | 5 +-
net/sctp/protocol.c | 28 +-
net/sctp/sm_make_chunk.c | 9 +-
net/sctp/socket.c | 45 ++--
net/smc/af_smc.c | 10 +-
net/smc/smc_cdc.c | 10 +-
net/smc/smc_core.c | 83 +++---
net/smc/smc_tx.c | 10 +-
net/sunrpc/backchannel_rqst.c | 10 +-
net/sunrpc/sched.c | 42 +--
net/sunrpc/svc.c | 29 ++-
net/sunrpc/svc_xprt.c | 52 ++--
net/sunrpc/svcsock.c | 12 +-
net/sunrpc/xprt.c | 60 +++--
net/sunrpc/xprtrdma/backchannel.c | 17 +-
net/sunrpc/xprtrdma/svc_rdma_backchannel.c | 5 +-
net/sunrpc/xprtrdma/svc_rdma_transport.c | 10 +-
net/sunrpc/xprtrdma/transport.c | 5 +-
net/sunrpc/xprtsock.c | 70 +++--
net/switchdev/switchdev.c | 10 +-
net/tipc/bcast.h | 2 +-
net/tipc/discover.c | 20 +-
net/tipc/monitor.c | 54 ++--
net/tipc/msg.h | 10 +-
net/tipc/name_distr.c | 20 +-
net/tipc/name_table.c | 71 +++---
net/tipc/node.c | 65 +++--
net/tipc/socket.c | 14 +-
net/tipc/topsrv.c | 85 +++---
net/tls/tls_sw.c | 10 +-
net/unix/af_unix.c | 10 +-
net/vmw_vsock/af_vsock.c | 50 ++--
net/vmw_vsock/diag.c | 5 +-
net/vmw_vsock/virtio_transport.c | 36 +--
net/vmw_vsock/virtio_transport_common.c | 44 ++--
net/vmw_vsock/vmci_transport.c | 17 +-
net/wireless/mlme.c | 29 ++-
net/wireless/nl80211.c | 26 +-
net/wireless/reg.c | 19 +-
net/wireless/scan.c | 49 ++--
net/x25/af_x25.c | 45 ++--
net/x25/x25_forward.c | 25 +-
net/x25/x25_link.c | 30 ++-
net/x25/x25_proc.c | 6 +-
net/x25/x25_route.c | 25 +-
net/xdp/xsk.c | 10 +-
net/xfrm/xfrm_input.c | 10 +-
net/xfrm/xfrm_ipcomp.c | 7 +-
net/xfrm/xfrm_output.c | 7 +-
net/xfrm/xfrm_policy.c | 94 ++++---
net/xfrm/xfrm_state.c | 172 ++++++++-----
net/xfrm/xfrm_user.c | 15 +-
security/selinux/netif.c | 15 +-
security/selinux/netnode.c | 12 +-
security/selinux/netport.c | 12 +-
security/smack/smack_lsm.c | 5 +-
sound/pci/asihpi/hpios.h | 2 +-
sound/soc/intel/atom/sst/sst_ipc.c | 19 +-
sound/soc/omap/ams-delta.c | 10 +-
tools/virtio/ringtest/ptr_ring.c | 2 +-
945 files changed, 13857 insertions(+), 9767 deletions(-)
Future extensions of this API are going to depend on the vector
definitions, so order the code accordingly.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
include/linux/interrupt.h | 25 +++++++++++++------------
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 5888545..1de87ec 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -432,18 +432,6 @@ extern bool force_irqthreads;
#define force_irqthreads (0)
#endif
-#ifndef local_softirq_pending
-
-#ifndef local_softirq_data_ref
-#define local_softirq_data_ref irq_stat.__softirq_data
-#endif
-
-#define local_softirq_pending() (__this_cpu_read(local_softirq_data_ref))
-#define set_softirq_pending(x) (__this_cpu_write(local_softirq_data_ref, (x)))
-#define or_softirq_pending(x) (__this_cpu_or(local_softirq_data_ref, (x)))
-
-#endif /* local_softirq_pending */
-
/* Some architectures might implement lazy enabling/disabling of
* interrupts. In some cases, such as stop_machine, we might want
* to ensure that after a local_irq_disable(), interrupts have
@@ -479,6 +467,19 @@ enum
#define SOFTIRQ_STOP_IDLE_MASK (~(1 << RCU_SOFTIRQ))
+#ifndef local_softirq_pending
+
+#ifndef local_softirq_data_ref
+#define local_softirq_data_ref irq_stat.__softirq_data
+#endif
+
+#define local_softirq_pending() (__this_cpu_read(local_softirq_data_ref))
+#define set_softirq_pending(x) (__this_cpu_write(local_softirq_data_ref, (x)))
+#define or_softirq_pending(x) (__this_cpu_or(local_softirq_data_ref, (x)))
+
+#endif /* local_softirq_pending */
+
+
/* map softirq index to softirq name. update 'softirq_to_name' in
* kernel/softirq.c when adding a new softirq.
*/
--
2.7.4
From: Frederic Weisbecker <[email protected]>
Use the subsystem name as the prefix for the __softirq_data accessors.
They are going to be extended and want a more greppable and consistent
naming scheme.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
arch/s390/include/asm/hardirq.h | 4 ++--
include/linux/interrupt.h | 4 ++--
kernel/softirq.c | 4 ++--
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/s390/include/asm/hardirq.h b/arch/s390/include/asm/hardirq.h
index e26325f..3103680 100644
--- a/arch/s390/include/asm/hardirq.h
+++ b/arch/s390/include/asm/hardirq.h
@@ -14,8 +14,8 @@
#include <asm/lowcore.h>
#define local_softirq_pending() (S390_lowcore.softirq_data)
-#define set_softirq_pending(x) (S390_lowcore.softirq_data = (x))
-#define or_softirq_pending(x) (S390_lowcore.softirq_data |= (x))
+#define softirq_pending_set(x) (S390_lowcore.softirq_data = (x))
+#define softirq_pending_or(x) (S390_lowcore.softirq_data |= (x))
#define __ARCH_IRQ_STAT
#define __ARCH_HAS_DO_SOFTIRQ
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 1de87ec..fc88f0d 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -474,8 +474,8 @@ enum
#endif
#define local_softirq_pending() (__this_cpu_read(local_softirq_data_ref))
-#define set_softirq_pending(x) (__this_cpu_write(local_softirq_data_ref, (x)))
-#define or_softirq_pending(x) (__this_cpu_or(local_softirq_data_ref, (x)))
+#define softirq_pending_set(x) (__this_cpu_write(local_softirq_data_ref, (x)))
+#define softirq_pending_or(x) (__this_cpu_or(local_softirq_data_ref, (x)))
#endif /* local_softirq_pending */
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 6f58486..c39af4a 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -271,7 +271,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
restart:
/* Reset the pending bitmask before enabling irqs */
- set_softirq_pending(0);
+ softirq_pending_set(0);
local_irq_enable();
@@ -448,7 +448,7 @@ void raise_softirq(unsigned int nr)
void __raise_softirq_irqoff(unsigned int nr)
{
trace_softirq_raise(nr);
- or_softirq_pending(1UL << nr);
+ softirq_pending_or(1UL << nr);
}
void open_softirq(int nr, void (*action)(struct softirq_action *))
--
2.7.4
The bottom half masking APIs have become confusing with all these
flavours:
local_bh_enable()
_local_bh_enable()
local_bh_enable_ip()
__local_bh_enable_ip()
_local_bh_enable() is an exception here because it's the only version
that won't execute do_softirq() in the end.
Make this clear directly in the name. It should help reviewers who are
already familiar with functions such as preempt_enable_no_resched().
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
arch/s390/lib/delay.c | 2 +-
drivers/s390/char/sclp.c | 2 +-
drivers/s390/cio/cio.c | 2 +-
include/linux/bottom_half.h | 2 +-
kernel/softirq.c | 6 +++---
5 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/arch/s390/lib/delay.c b/arch/s390/lib/delay.c
index d4aa1079..3f83ee9 100644
--- a/arch/s390/lib/delay.c
+++ b/arch/s390/lib/delay.c
@@ -91,7 +91,7 @@ void __udelay(unsigned long long usecs)
if (raw_irqs_disabled_flags(flags)) {
local_bh_disable();
__udelay_disabled(usecs);
- _local_bh_enable();
+ local_bh_enable_no_softirq();
goto out;
}
__udelay_enabled(usecs);
diff --git a/drivers/s390/char/sclp.c b/drivers/s390/char/sclp.c
index e9aa71c..6c6b745 100644
--- a/drivers/s390/char/sclp.c
+++ b/drivers/s390/char/sclp.c
@@ -572,7 +572,7 @@ sclp_sync_wait(void)
local_irq_disable();
__ctl_load(cr0, 0, 0);
if (!irq_context)
- _local_bh_enable();
+ local_bh_enable_no_softirq();
local_tick_enable(old_tick);
local_irq_restore(flags);
}
diff --git a/drivers/s390/cio/cio.c b/drivers/s390/cio/cio.c
index de744ca..e3fb83b 100644
--- a/drivers/s390/cio/cio.c
+++ b/drivers/s390/cio/cio.c
@@ -607,7 +607,7 @@ void cio_tsch(struct subchannel *sch)
inc_irq_stat(IRQIO_CIO);
if (!irq_context) {
irq_exit();
- _local_bh_enable();
+ local_bh_enable_no_softirq();
}
}
diff --git a/include/linux/bottom_half.h b/include/linux/bottom_half.h
index a19519f..a104f81 100644
--- a/include/linux/bottom_half.h
+++ b/include/linux/bottom_half.h
@@ -19,7 +19,7 @@ static inline void local_bh_disable(void)
__local_bh_disable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET);
}
-extern void _local_bh_enable(void);
+extern void local_bh_enable_no_softirq(void);
extern void __local_bh_enable_ip(unsigned long ip, unsigned int cnt);
static inline void local_bh_enable_ip(unsigned long ip)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 288e007..fdb2574 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -156,12 +156,12 @@ static void __local_bh_enable(unsigned int cnt)
* Special-case - softirqs can safely be enabled by __do_softirq(),
* without processing still-pending softirqs:
*/
-void _local_bh_enable(void)
+void local_bh_enable_no_softirq(void)
{
WARN_ON_ONCE(in_irq());
__local_bh_enable(SOFTIRQ_DISABLE_OFFSET);
}
-EXPORT_SYMBOL(_local_bh_enable);
+EXPORT_SYMBOL(local_bh_enable_no_softirq);
void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
{
@@ -351,7 +351,7 @@ void irq_enter(void)
*/
local_bh_disable();
tick_irq_enter();
- _local_bh_enable();
+ local_bh_enable_no_softirq();
}
__irq_enter();
--
2.7.4
From: Frederic Weisbecker <[email protected]>
All softirq vectors must start out enabled on boot.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
arch/x86/kernel/irq.c | 5 ++++-
include/linux/bottom_half.h | 2 ++
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 59b5f2e..b859861 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -11,6 +11,7 @@
#include <linux/delay.h>
#include <linux/export.h>
#include <linux/irq.h>
+#include <linux/bottom_half.h>
#include <asm/apic.h>
#include <asm/io_apic.h>
@@ -22,7 +23,9 @@
#define CREATE_TRACE_POINTS
#include <asm/trace/irq_vectors.h>
-DEFINE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat);
+DEFINE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat) = {
+ .__softirq_data = SOFTIRQ_DATA_INIT,
+};
EXPORT_PER_CPU_SYMBOL(irq_stat);
DEFINE_PER_CPU(struct pt_regs *, irq_regs);
diff --git a/include/linux/bottom_half.h b/include/linux/bottom_half.h
index a9571ad..fd75d1a 100644
--- a/include/linux/bottom_half.h
+++ b/include/linux/bottom_half.h
@@ -33,6 +33,8 @@ enum
#define SOFTIRQ_ENABLED_SHIFT 16
#define SOFTIRQ_PENDING_MASK (BIT(SOFTIRQ_ENABLED_SHIFT) - 1)
+#define SOFTIRQ_DATA_INIT (SOFTIRQ_ALL_MASK << SOFTIRQ_ENABLED_SHIFT)
+
#ifdef CONFIG_TRACE_IRQFLAGS
extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
--
2.7.4
From: Frederic Weisbecker <[email protected]>
Check the enabled vector bits during softirq processing. Vectors that
are pending but disabled are ignored; they will be handled by the
interrupted code that disabled them.
No effective change yet, as the core isn't ready for softirq
re-entrancy: for now all softirqs remain enabled at all times and are
driven through preempt_count().
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
kernel/softirq.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index fdb2574..75aab25 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -263,7 +263,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
*/
current->flags &= ~PF_MEMALLOC;
- pending = local_softirq_pending();
+ pending = local_softirq_pending() & local_softirq_enabled();
account_irq_enter_time(current);
__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
@@ -271,7 +271,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
restart:
/* Reset the pending bitmask before enabling irqs */
- softirq_pending_nand(SOFTIRQ_ALL_MASK);
+ softirq_pending_nand(pending);
local_irq_enable();
@@ -304,7 +304,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
rcu_bh_qs();
local_irq_disable();
- pending = local_softirq_pending();
+ pending = local_softirq_pending() & local_softirq_enabled();
if (pending) {
if (time_before(jiffies, end) && !need_resched() &&
--max_restart)
--
2.7.4
This pair of functions is implemented on top of spin_lock_bh(), which
is going to take a softirq vector mask in order to apply finegrained
vector disablement. The lock function is going to return the
vector-enabled mask as it stood prior to the call, following a model
similar to that of local_irq_save/restore(). Subsequent calls to
local_bh_disable() and friends can then stack up:
bh = local_bh_disable(vec_mask);
bh1 = local_bh_disable(vec_mask1);
bh2 = spin_lock_bh(vec_mask2);
netif_tx_lock_bh(vec_mask3) {
bh3 = spin_lock_bh(vec_mask3)
return bh3;
}
...
netif_tx_unlock_bh(bh3, ...) {
spin_unlock_bh(bh3, ...);
}
spin_unlock_bh(bh2, ...);
local_bh_enable(bh1);
local_bh_enable(bh);
To prepare for that, make netif_tx_lock_bh() return the saved vector
enabled mask and have callers pass it back to netif_tx_unlock_bh().
We'll plug it into spin_[un]lock_bh() in a subsequent patch.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
drivers/infiniband/ulp/ipoib/ipoib_cm.c | 42 ++++++++++++----------
drivers/infiniband/ulp/ipoib/ipoib_ib.c | 5 +--
drivers/infiniband/ulp/ipoib/ipoib_main.c | 9 ++---
drivers/infiniband/ulp/ipoib/ipoib_multicast.c | 19 +++++-----
drivers/net/ethernet/aurora/nb8800.c | 5 +--
drivers/net/ethernet/chelsio/cxgb4/sge.c | 5 +--
drivers/net/ethernet/freescale/fec_main.c | 34 ++++++++++--------
drivers/net/ethernet/ibm/emac/core.c | 15 ++++----
drivers/net/ethernet/marvell/mv643xx_eth.c | 5 +--
drivers/net/ethernet/marvell/skge.c | 5 +--
drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 5 +--
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 5 +--
drivers/net/ethernet/nvidia/forcedeth.c | 40 ++++++++++++---------
drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c | 7 ++--
drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c | 7 ++--
drivers/net/ethernet/qualcomm/qca_spi.c | 10 +++---
drivers/net/ethernet/sfc/falcon/selftest.c | 10 +++---
drivers/net/ethernet/sfc/selftest.c | 10 +++---
drivers/net/hamradio/6pack.c | 10 +++---
drivers/net/hamradio/mkiss.c | 10 +++---
drivers/net/usb/cdc_ncm.c | 15 ++++----
include/linux/netdevice.h | 18 +++++++---
net/atm/clip.c | 5 +--
net/sched/sch_generic.c | 5 +--
24 files changed, 181 insertions(+), 120 deletions(-)
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
index 3d5424f..de6cb14 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
@@ -983,6 +983,7 @@ void ipoib_cm_dev_stop(struct net_device *dev)
static int ipoib_cm_rep_handler(struct ib_cm_id *cm_id,
const struct ib_cm_event *event)
{
+ unsigned int bh;
struct ipoib_cm_tx *p = cm_id->context;
struct ipoib_dev_priv *priv = ipoib_priv(p->dev);
struct ipoib_cm_data *data = event->private_data;
@@ -1027,14 +1028,14 @@ static int ipoib_cm_rep_handler(struct ib_cm_id *cm_id,
skb_queue_head_init(&skqueue);
- netif_tx_lock_bh(p->dev);
+ bh = netif_tx_lock_bh(p->dev);
spin_lock_irq(&priv->lock);
set_bit(IPOIB_FLAG_OPER_UP, &p->flags);
if (p->neigh)
while ((skb = __skb_dequeue(&p->neigh->queue)))
__skb_queue_tail(&skqueue, skb);
spin_unlock_irq(&priv->lock);
- netif_tx_unlock_bh(p->dev);
+ netif_tx_unlock_bh(p->dev, bh);
while ((skb = __skb_dequeue(&skqueue))) {
skb->dev = p->dev;
@@ -1201,6 +1202,7 @@ static int ipoib_cm_tx_init(struct ipoib_cm_tx *p, u32 qpn,
static void ipoib_cm_tx_destroy(struct ipoib_cm_tx *p)
{
+ unsigned int bh;
struct ipoib_dev_priv *priv = ipoib_priv(p->dev);
struct ipoib_tx_buf *tx_req;
unsigned long begin;
@@ -1231,14 +1233,14 @@ static void ipoib_cm_tx_destroy(struct ipoib_cm_tx *p)
tx_req = &p->tx_ring[p->tx_tail & (ipoib_sendq_size - 1)];
ipoib_dma_unmap_tx(priv, tx_req);
dev_kfree_skb_any(tx_req->skb);
- netif_tx_lock_bh(p->dev);
+ bh = netif_tx_lock_bh(p->dev);
++p->tx_tail;
++priv->tx_tail;
if (unlikely(priv->tx_head - priv->tx_tail == ipoib_sendq_size >> 1) &&
netif_queue_stopped(p->dev) &&
test_bit(IPOIB_FLAG_ADMIN_UP, &priv->flags))
netif_wake_queue(p->dev);
- netif_tx_unlock_bh(p->dev);
+ netif_tx_unlock_bh(p->dev, bh);
}
if (p->qp)
@@ -1251,6 +1253,7 @@ static void ipoib_cm_tx_destroy(struct ipoib_cm_tx *p)
static int ipoib_cm_tx_handler(struct ib_cm_id *cm_id,
const struct ib_cm_event *event)
{
+ unsigned int bh;
struct ipoib_cm_tx *tx = cm_id->context;
struct ipoib_dev_priv *priv = ipoib_priv(tx->dev);
struct net_device *dev = priv->dev;
@@ -1274,7 +1277,7 @@ static int ipoib_cm_tx_handler(struct ib_cm_id *cm_id,
case IB_CM_REJ_RECEIVED:
case IB_CM_TIMEWAIT_EXIT:
ipoib_dbg(priv, "CM error %d.\n", event->event);
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
spin_lock_irqsave(&priv->lock, flags);
neigh = tx->neigh;
@@ -1291,7 +1294,7 @@ static int ipoib_cm_tx_handler(struct ib_cm_id *cm_id,
}
spin_unlock_irqrestore(&priv->lock, flags);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
break;
default:
break;
@@ -1339,6 +1342,7 @@ void ipoib_cm_destroy_tx(struct ipoib_cm_tx *tx)
static void ipoib_cm_tx_start(struct work_struct *work)
{
+ unsigned int bh;
struct ipoib_dev_priv *priv = container_of(work, struct ipoib_dev_priv,
cm.start_task);
struct net_device *dev = priv->dev;
@@ -1351,7 +1355,7 @@ static void ipoib_cm_tx_start(struct work_struct *work)
struct sa_path_rec pathrec;
u32 qpn;
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
spin_lock_irqsave(&priv->lock, flags);
while (!list_empty(&priv->cm.start_list)) {
@@ -1374,11 +1378,11 @@ static void ipoib_cm_tx_start(struct work_struct *work)
memcpy(&pathrec, &p->path->pathrec, sizeof(pathrec));
spin_unlock_irqrestore(&priv->lock, flags);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
ret = ipoib_cm_tx_init(p, qpn, &pathrec);
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
spin_lock_irqsave(&priv->lock, flags);
if (ret) {
@@ -1394,36 +1398,38 @@ static void ipoib_cm_tx_start(struct work_struct *work)
}
spin_unlock_irqrestore(&priv->lock, flags);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
}
static void ipoib_cm_tx_reap(struct work_struct *work)
{
+ unsigned int bh;
struct ipoib_dev_priv *priv = container_of(work, struct ipoib_dev_priv,
cm.reap_task);
struct net_device *dev = priv->dev;
struct ipoib_cm_tx *p;
unsigned long flags;
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
spin_lock_irqsave(&priv->lock, flags);
while (!list_empty(&priv->cm.reap_list)) {
p = list_entry(priv->cm.reap_list.next, typeof(*p), list);
list_del_init(&p->list);
spin_unlock_irqrestore(&priv->lock, flags);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
ipoib_cm_tx_destroy(p);
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
spin_lock_irqsave(&priv->lock, flags);
}
spin_unlock_irqrestore(&priv->lock, flags);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
}
static void ipoib_cm_skb_reap(struct work_struct *work)
{
+ unsigned int bh;
struct ipoib_dev_priv *priv = container_of(work, struct ipoib_dev_priv,
cm.skb_task);
struct net_device *dev = priv->dev;
@@ -1431,12 +1437,12 @@ static void ipoib_cm_skb_reap(struct work_struct *work)
unsigned long flags;
unsigned int mtu = priv->mcast_mtu;
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
spin_lock_irqsave(&priv->lock, flags);
while ((skb = skb_dequeue(&priv->cm.skb_queue))) {
spin_unlock_irqrestore(&priv->lock, flags);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
if (skb->protocol == htons(ETH_P_IP))
icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
@@ -1446,12 +1452,12 @@ static void ipoib_cm_skb_reap(struct work_struct *work)
#endif
dev_kfree_skb_any(skb);
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
spin_lock_irqsave(&priv->lock, flags);
}
spin_unlock_irqrestore(&priv->lock, flags);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
}
void ipoib_cm_skb_too_long(struct net_device *dev, struct sk_buff *skb,
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
index 9006a13..87f2a5c 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
@@ -667,12 +667,13 @@ int ipoib_send(struct net_device *dev, struct sk_buff *skb,
static void __ipoib_reap_ah(struct net_device *dev)
{
+ unsigned int bh;
struct ipoib_dev_priv *priv = ipoib_priv(dev);
struct ipoib_ah *ah, *tah;
LIST_HEAD(remove_list);
unsigned long flags;
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
spin_lock_irqsave(&priv->lock, flags);
list_for_each_entry_safe(ah, tah, &priv->dead_ahs, list)
@@ -683,7 +684,7 @@ static void __ipoib_reap_ah(struct net_device *dev)
}
spin_unlock_irqrestore(&priv->lock, flags);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
}
void ipoib_reap_ah(struct work_struct *work)
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
index e3d28f9..eaefa43 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -709,12 +709,13 @@ static void push_pseudo_header(struct sk_buff *skb, const char *daddr)
void ipoib_flush_paths(struct net_device *dev)
{
+ unsigned int bh;
struct ipoib_dev_priv *priv = ipoib_priv(dev);
struct ipoib_path *path, *tp;
LIST_HEAD(remove_list);
unsigned long flags;
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
spin_lock_irqsave(&priv->lock, flags);
list_splice_init(&priv->path_list, &remove_list);
@@ -726,15 +727,15 @@ void ipoib_flush_paths(struct net_device *dev)
if (path->query)
ib_sa_cancel_query(path->query_id, path->query);
spin_unlock_irqrestore(&priv->lock, flags);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
wait_for_completion(&path->done);
path_free(dev, path);
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
spin_lock_irqsave(&priv->lock, flags);
}
spin_unlock_irqrestore(&priv->lock, flags);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
}
static void path_rec_completion(int status,
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
index b9e9562..26a4b01 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -111,6 +111,7 @@ static void __ipoib_mcast_schedule_join_thread(struct ipoib_dev_priv *priv,
static void ipoib_mcast_free(struct ipoib_mcast *mcast)
{
+ unsigned int bh;
struct net_device *dev = mcast->dev;
int tx_dropped = 0;
@@ -128,9 +129,9 @@ static void ipoib_mcast_free(struct ipoib_mcast *mcast)
dev_kfree_skb_any(skb_dequeue(&mcast->pkt_queue));
}
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
dev->stats.tx_dropped += tx_dropped;
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
kfree(mcast);
}
@@ -211,6 +212,7 @@ static int __ipoib_mcast_add(struct net_device *dev, struct ipoib_mcast *mcast)
static int ipoib_mcast_join_finish(struct ipoib_mcast *mcast,
struct ib_sa_mcmember_rec *mcmember)
{
+ unsigned int bh;
struct net_device *dev = mcast->dev;
struct ipoib_dev_priv *priv = ipoib_priv(dev);
struct rdma_netdev *rn = netdev_priv(dev);
@@ -304,11 +306,11 @@ static int ipoib_mcast_join_finish(struct ipoib_mcast *mcast,
mcast->mcmember.sl);
/* actually send any queued packets */
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
while (!skb_queue_empty(&mcast->pkt_queue)) {
struct sk_buff *skb = skb_dequeue(&mcast->pkt_queue);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
skb->dev = dev;
@@ -316,9 +318,9 @@ static int ipoib_mcast_join_finish(struct ipoib_mcast *mcast,
if (ret)
ipoib_warn(priv, "%s:dev_queue_xmit failed to re-queue packet, ret:%d\n",
__func__, ret);
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
}
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
return 0;
}
@@ -367,6 +369,7 @@ void ipoib_mcast_carrier_on_task(struct work_struct *work)
static int ipoib_mcast_join_complete(int status,
struct ib_sa_multicast *multicast)
{
+ unsigned int bh;
struct ipoib_mcast *mcast = multicast->context;
struct net_device *dev = mcast->dev;
struct ipoib_dev_priv *priv = ipoib_priv(dev);
@@ -435,12 +438,12 @@ static int ipoib_mcast_join_complete(int status,
* is why the join thread ignores this group.
*/
mcast->backoff = 1;
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
while (!skb_queue_empty(&mcast->pkt_queue)) {
++dev->stats.tx_dropped;
dev_kfree_skb_any(skb_dequeue(&mcast->pkt_queue));
}
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
} else {
spin_lock_irq(&priv->lock);
/* Requeue this join task with a backoff delay */
diff --git a/drivers/net/ethernet/aurora/nb8800.c b/drivers/net/ethernet/aurora/nb8800.c
index c8d1f8f..77e116a 100644
--- a/drivers/net/ethernet/aurora/nb8800.c
+++ b/drivers/net/ethernet/aurora/nb8800.c
@@ -629,6 +629,7 @@ static void nb8800_mac_config(struct net_device *dev)
static void nb8800_pause_config(struct net_device *dev)
{
+ unsigned int bh;
struct nb8800_priv *priv = netdev_priv(dev);
struct phy_device *phydev = dev->phydev;
u32 rxcr;
@@ -649,11 +650,11 @@ static void nb8800_pause_config(struct net_device *dev)
if (netif_running(dev)) {
napi_disable(&priv->napi);
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
nb8800_dma_stop(dev);
nb8800_modl(priv, NB8800_RXC_CR, RCR_FL, priv->pause_tx);
nb8800_start_rx(dev);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
napi_enable(&priv->napi);
} else {
nb8800_modl(priv, NB8800_RXC_CR, RCR_FL, priv->pause_tx);
diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
index 6807bc3..a9799ce 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
@@ -3829,6 +3829,7 @@ void t4_free_ofld_rxqs(struct adapter *adap, int n, struct sge_ofld_rxq *q)
*/
void t4_free_sge_resources(struct adapter *adap)
{
+ unsigned int bh;
int i;
struct sge_eth_rxq *eq;
struct sge_eth_txq *etq;
@@ -3855,9 +3856,9 @@ void t4_free_sge_resources(struct adapter *adap)
if (etq->q.desc) {
t4_eth_eq_free(adap, adap->mbox, adap->pf, 0,
etq->q.cntxt_id);
- __netif_tx_lock_bh(etq->txq);
+ bh = __netif_tx_lock_bh(etq->txq);
free_tx_desc(adap, &etq->q, etq->q.in_use, true);
- __netif_tx_unlock_bh(etq->txq);
+ __netif_tx_unlock_bh(etq->txq, bh);
kfree(etq->q.sdesc);
free_txq(adap, &etq->q);
}
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 2708297..17cda1d 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -1149,6 +1149,7 @@ fec_timeout(struct net_device *ndev)
static void fec_enet_timeout_work(struct work_struct *work)
{
+ unsigned int bh;
struct fec_enet_private *fep =
container_of(work, struct fec_enet_private, tx_timeout_work);
struct net_device *ndev = fep->netdev;
@@ -1156,10 +1157,10 @@ static void fec_enet_timeout_work(struct work_struct *work)
rtnl_lock();
if (netif_device_present(ndev) || netif_running(ndev)) {
napi_disable(&fep->napi);
- netif_tx_lock_bh(ndev);
+ bh = netif_tx_lock_bh(ndev);
fec_restart(ndev);
netif_wake_queue(ndev);
- netif_tx_unlock_bh(ndev);
+ netif_tx_unlock_bh(ndev, bh);
napi_enable(&fep->napi);
}
rtnl_unlock();
@@ -1708,6 +1709,7 @@ static void fec_get_mac(struct net_device *ndev)
*/
static void fec_enet_adjust_link(struct net_device *ndev)
{
+ unsigned int bh;
struct fec_enet_private *fep = netdev_priv(ndev);
struct phy_device *phy_dev = ndev->phydev;
int status_change = 0;
@@ -1744,18 +1746,18 @@ static void fec_enet_adjust_link(struct net_device *ndev)
/* if any of the above changed restart the FEC */
if (status_change) {
napi_disable(&fep->napi);
- netif_tx_lock_bh(ndev);
+ bh = netif_tx_lock_bh(ndev);
fec_restart(ndev);
netif_wake_queue(ndev);
- netif_tx_unlock_bh(ndev);
+ netif_tx_unlock_bh(ndev, bh);
napi_enable(&fep->napi);
}
} else {
if (fep->link) {
napi_disable(&fep->napi);
- netif_tx_lock_bh(ndev);
+ bh = netif_tx_lock_bh(ndev);
fec_stop(ndev);
- netif_tx_unlock_bh(ndev);
+ netif_tx_unlock_bh(ndev, bh);
napi_enable(&fep->napi);
fep->link = phy_dev->link;
status_change = 1;
@@ -2213,6 +2215,7 @@ static void fec_enet_get_pauseparam(struct net_device *ndev,
static int fec_enet_set_pauseparam(struct net_device *ndev,
struct ethtool_pauseparam *pause)
{
+ unsigned int bh;
struct fec_enet_private *fep = netdev_priv(ndev);
if (!ndev->phydev)
@@ -2245,10 +2248,10 @@ static int fec_enet_set_pauseparam(struct net_device *ndev,
}
if (netif_running(ndev)) {
napi_disable(&fep->napi);
- netif_tx_lock_bh(ndev);
+ bh = netif_tx_lock_bh(ndev);
fec_restart(ndev);
netif_wake_queue(ndev);
- netif_tx_unlock_bh(ndev);
+ netif_tx_unlock_bh(ndev, bh);
napi_enable(&fep->napi);
}
@@ -3072,17 +3075,18 @@ static inline void fec_enet_set_netdev_features(struct net_device *netdev,
static int fec_set_features(struct net_device *netdev,
netdev_features_t features)
{
+ unsigned int bh;
struct fec_enet_private *fep = netdev_priv(netdev);
netdev_features_t changed = features ^ netdev->features;
if (netif_running(netdev) && changed & NETIF_F_RXCSUM) {
napi_disable(&fep->napi);
- netif_tx_lock_bh(netdev);
+ bh = netif_tx_lock_bh(netdev);
fec_stop(netdev);
fec_enet_set_netdev_features(netdev, features);
fec_restart(netdev);
netif_tx_wake_all_queues(netdev);
- netif_tx_unlock_bh(netdev);
+ netif_tx_unlock_bh(netdev, bh);
napi_enable(&fep->napi);
} else {
fec_enet_set_netdev_features(netdev, features);
@@ -3609,6 +3613,7 @@ fec_drv_remove(struct platform_device *pdev)
static int __maybe_unused fec_suspend(struct device *dev)
{
+ unsigned int bh;
struct net_device *ndev = dev_get_drvdata(dev);
struct fec_enet_private *fep = netdev_priv(ndev);
@@ -3618,9 +3623,9 @@ static int __maybe_unused fec_suspend(struct device *dev)
fep->wol_flag |= FEC_WOL_FLAG_SLEEP_ON;
phy_stop(ndev->phydev);
napi_disable(&fep->napi);
- netif_tx_lock_bh(ndev);
+ bh = netif_tx_lock_bh(ndev);
netif_device_detach(ndev);
- netif_tx_unlock_bh(ndev);
+ netif_tx_unlock_bh(ndev, bh);
fec_stop(ndev);
fec_enet_clk_enable(ndev, false);
if (!(fep->wol_flag & FEC_WOL_FLAG_ENABLE))
@@ -3642,6 +3647,7 @@ static int __maybe_unused fec_suspend(struct device *dev)
static int __maybe_unused fec_resume(struct device *dev)
{
+ unsigned int bh;
struct net_device *ndev = dev_get_drvdata(dev);
struct fec_enet_private *fep = netdev_priv(ndev);
struct fec_platform_data *pdata = fep->pdev->dev.platform_data;
@@ -3672,9 +3678,9 @@ static int __maybe_unused fec_resume(struct device *dev)
pinctrl_pm_select_default_state(&fep->pdev->dev);
}
fec_restart(ndev);
- netif_tx_lock_bh(ndev);
+ bh = netif_tx_lock_bh(ndev);
netif_device_attach(ndev);
- netif_tx_unlock_bh(ndev);
+ netif_tx_unlock_bh(ndev, bh);
napi_enable(&fep->napi);
phy_start(ndev->phydev);
}
diff --git a/drivers/net/ethernet/ibm/emac/core.c b/drivers/net/ethernet/ibm/emac/core.c
index 3726646..3f65b2c 100644
--- a/drivers/net/ethernet/ibm/emac/core.c
+++ b/drivers/net/ethernet/ibm/emac/core.c
@@ -296,11 +296,12 @@ static void emac_rx_disable(struct emac_instance *dev)
static inline void emac_netif_stop(struct emac_instance *dev)
{
- netif_tx_lock_bh(dev->ndev);
+ unsigned int bh;
+ bh = netif_tx_lock_bh(dev->ndev);
netif_addr_lock(dev->ndev);
dev->no_mcast = 1;
netif_addr_unlock(dev->ndev);
- netif_tx_unlock_bh(dev->ndev);
+ netif_tx_unlock_bh(dev->ndev, bh);
netif_trans_update(dev->ndev); /* prevent tx timeout */
mal_poll_disable(dev->mal, &dev->commac);
netif_tx_disable(dev->ndev);
@@ -308,13 +309,14 @@ static inline void emac_netif_stop(struct emac_instance *dev)
static inline void emac_netif_start(struct emac_instance *dev)
{
- netif_tx_lock_bh(dev->ndev);
+ unsigned int bh;
+ bh = netif_tx_lock_bh(dev->ndev);
netif_addr_lock(dev->ndev);
dev->no_mcast = 0;
if (dev->mcast_pending && netif_running(dev->ndev))
__emac_set_multicast_list(dev);
netif_addr_unlock(dev->ndev);
- netif_tx_unlock_bh(dev->ndev);
+ netif_tx_unlock_bh(dev->ndev, bh);
netif_wake_queue(dev->ndev);
@@ -1607,6 +1609,7 @@ static void emac_parse_tx_error(struct emac_instance *dev, u16 ctrl)
static void emac_poll_tx(void *param)
{
+ unsigned int bh;
struct emac_instance *dev = param;
u32 bad_mask;
@@ -1617,7 +1620,7 @@ static void emac_poll_tx(void *param)
else
bad_mask = EMAC_IS_BAD_TX;
- netif_tx_lock_bh(dev->ndev);
+ bh = netif_tx_lock_bh(dev->ndev);
if (dev->tx_cnt) {
u16 ctrl;
int slot = dev->ack_slot, n = 0;
@@ -1648,7 +1651,7 @@ static void emac_poll_tx(void *param)
DBG2(dev, "tx %d pkts" NL, n);
}
}
- netif_tx_unlock_bh(dev->ndev);
+ netif_tx_unlock_bh(dev->ndev, bh);
}
static inline void emac_recycle_rx_skb(struct emac_instance *dev, int slot,
diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c
index 62f204f..56c74c2 100644
--- a/drivers/net/ethernet/marvell/mv643xx_eth.c
+++ b/drivers/net/ethernet/marvell/mv643xx_eth.c
@@ -1071,11 +1071,12 @@ static void txq_kick(struct tx_queue *txq)
static int txq_reclaim(struct tx_queue *txq, int budget, int force)
{
+ unsigned int bh;
struct mv643xx_eth_private *mp = txq_to_mp(txq);
struct netdev_queue *nq = netdev_get_tx_queue(mp->dev, txq->index);
int reclaimed;
- __netif_tx_lock_bh(nq);
+ bh = __netif_tx_lock_bh(nq);
reclaimed = 0;
while (reclaimed < budget && txq->tx_desc_count > 0) {
@@ -1131,7 +1132,7 @@ static int txq_reclaim(struct tx_queue *txq, int budget, int force)
}
- __netif_tx_unlock_bh(nq);
+ __netif_tx_unlock_bh(nq, bh);
if (reclaimed < budget)
mp->work_tx &= ~(1 << txq->index);
diff --git a/drivers/net/ethernet/marvell/skge.c b/drivers/net/ethernet/marvell/skge.c
index 9c08c36..506087a 100644
--- a/drivers/net/ethernet/marvell/skge.c
+++ b/drivers/net/ethernet/marvell/skge.c
@@ -2653,6 +2653,7 @@ static void skge_rx_stop(struct skge_hw *hw, int port)
static int skge_down(struct net_device *dev)
{
+ unsigned int bh;
struct skge_port *skge = netdev_priv(dev);
struct skge_hw *hw = skge->hw;
int port = skge->port;
@@ -2718,9 +2719,9 @@ static int skge_down(struct net_device *dev)
skge_led(skge, LED_MODE_OFF);
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
skge_tx_clean(dev);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
skge_rx_clean(skge);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index 6785661..666708a 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -1878,6 +1878,7 @@ int mlx4_en_start_port(struct net_device *dev)
void mlx4_en_stop_port(struct net_device *dev, int detach)
{
+ unsigned int bh;
struct mlx4_en_priv *priv = netdev_priv(dev);
struct mlx4_en_dev *mdev = priv->mdev;
struct mlx4_en_mc_list *mclist, *tmp;
@@ -1894,11 +1895,11 @@ void mlx4_en_stop_port(struct net_device *dev, int detach)
mlx4_CLOSE_PORT(mdev->dev, priv->port);
/* Synchronize with tx routine */
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
if (detach)
netif_device_detach(dev);
netif_tx_stop_all_queues(dev);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
netif_tx_disable(dev);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 5a7939e..6d66d1c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -1349,9 +1349,10 @@ static void mlx5e_activate_txqsq(struct mlx5e_txqsq *sq)
static inline void netif_tx_disable_queue(struct netdev_queue *txq)
{
- __netif_tx_lock_bh(txq);
+ unsigned int bh;
+ bh = __netif_tx_lock_bh(txq);
netif_tx_stop_queue(txq);
- __netif_tx_unlock_bh(txq);
+ __netif_tx_unlock_bh(txq, bh);
}
static void mlx5e_deactivate_txqsq(struct mlx5e_txqsq *sq)
diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c
index 1d9b0d4..7b5ac16 100644
--- a/drivers/net/ethernet/nvidia/forcedeth.c
+++ b/drivers/net/ethernet/nvidia/forcedeth.c
@@ -3028,6 +3028,7 @@ static void set_bufsize(struct net_device *dev)
*/
static int nv_change_mtu(struct net_device *dev, int new_mtu)
{
+ unsigned int bh;
struct fe_priv *np = netdev_priv(dev);
int old_mtu;
@@ -3049,7 +3050,7 @@ static int nv_change_mtu(struct net_device *dev, int new_mtu)
*/
nv_disable_irq(dev);
nv_napi_disable(dev);
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
netif_addr_lock(dev);
spin_lock(&np->lock);
/* stop engines */
@@ -3076,7 +3077,7 @@ static int nv_change_mtu(struct net_device *dev, int new_mtu)
nv_start_rxtx(dev);
spin_unlock(&np->lock);
netif_addr_unlock(dev);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
nv_napi_enable(dev);
nv_enable_irq(dev);
}
@@ -3102,6 +3103,7 @@ static void nv_copy_mac_to_hw(struct net_device *dev)
*/
static int nv_set_mac_address(struct net_device *dev, void *addr)
{
+ unsigned int bh;
struct fe_priv *np = netdev_priv(dev);
struct sockaddr *macaddr = (struct sockaddr *)addr;
@@ -3112,7 +3114,7 @@ static int nv_set_mac_address(struct net_device *dev, void *addr)
memcpy(dev->dev_addr, macaddr->sa_data, ETH_ALEN);
if (netif_running(dev)) {
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
netif_addr_lock(dev);
spin_lock_irq(&np->lock);
@@ -3126,7 +3128,7 @@ static int nv_set_mac_address(struct net_device *dev, void *addr)
nv_start_rx(dev);
spin_unlock_irq(&np->lock);
netif_addr_unlock(dev);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
} else {
nv_copy_mac_to_hw(dev);
}
@@ -4088,6 +4090,7 @@ static void nv_free_irq(struct net_device *dev)
static void nv_do_nic_poll(struct timer_list *t)
{
+ unsigned int bh;
struct fe_priv *np = from_timer(np, t, nic_poll);
struct net_device *dev = np->dev;
u8 __iomem *base = get_hwbase(dev);
@@ -4129,7 +4132,7 @@ static void nv_do_nic_poll(struct timer_list *t)
np->recover_error = 0;
netdev_info(dev, "MAC in recoverable error state\n");
if (netif_running(dev)) {
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
netif_addr_lock(dev);
spin_lock(&np->lock);
/* stop engines */
@@ -4163,7 +4166,7 @@ static void nv_do_nic_poll(struct timer_list *t)
nv_start_rxtx(dev);
spin_unlock(&np->lock);
netif_addr_unlock(dev);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
}
}
@@ -4346,6 +4349,7 @@ static int nv_get_link_ksettings(struct net_device *dev,
static int nv_set_link_ksettings(struct net_device *dev,
const struct ethtool_link_ksettings *cmd)
{
+ unsigned int bh;
struct fe_priv *np = netdev_priv(dev);
u32 speed = cmd->base.speed;
u32 advertising;
@@ -4389,7 +4393,7 @@ static int nv_set_link_ksettings(struct net_device *dev,
unsigned long flags;
nv_disable_irq(dev);
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
netif_addr_lock(dev);
/* with plain spinlock lockdep complains */
spin_lock_irqsave(&np->lock, flags);
@@ -4405,7 +4409,7 @@ static int nv_set_link_ksettings(struct net_device *dev,
nv_stop_rxtx(dev);
spin_unlock_irqrestore(&np->lock, flags);
netif_addr_unlock(dev);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
}
if (cmd->base.autoneg == AUTONEG_ENABLE) {
@@ -4540,6 +4544,7 @@ static void nv_get_regs(struct net_device *dev, struct ethtool_regs *regs, void
static int nv_nway_reset(struct net_device *dev)
{
+ unsigned int bh;
struct fe_priv *np = netdev_priv(dev);
int ret;
@@ -4549,14 +4554,14 @@ static int nv_nway_reset(struct net_device *dev)
netif_carrier_off(dev);
if (netif_running(dev)) {
nv_disable_irq(dev);
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
netif_addr_lock(dev);
spin_lock(&np->lock);
/* stop engines */
nv_stop_rxtx(dev);
spin_unlock(&np->lock);
netif_addr_unlock(dev);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
netdev_info(dev, "link down\n");
}
@@ -4598,6 +4603,7 @@ static void nv_get_ringparam(struct net_device *dev, struct ethtool_ringparam* r
static int nv_set_ringparam(struct net_device *dev, struct ethtool_ringparam* ring)
{
+ unsigned int bh;
struct fe_priv *np = netdev_priv(dev);
u8 __iomem *base = get_hwbase(dev);
u8 *rxtx_ring, *rx_skbuff, *tx_skbuff;
@@ -4660,7 +4666,7 @@ static int nv_set_ringparam(struct net_device *dev, struct ethtool_ringparam* ri
if (netif_running(dev)) {
nv_disable_irq(dev);
nv_napi_disable(dev);
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
netif_addr_lock(dev);
spin_lock(&np->lock);
/* stop engines */
@@ -4711,7 +4717,7 @@ static int nv_set_ringparam(struct net_device *dev, struct ethtool_ringparam* ri
nv_start_rxtx(dev);
spin_unlock(&np->lock);
netif_addr_unlock(dev);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
nv_napi_enable(dev);
nv_enable_irq(dev);
}
@@ -4731,6 +4737,7 @@ static void nv_get_pauseparam(struct net_device *dev, struct ethtool_pauseparam*
static int nv_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam* pause)
{
+ unsigned int bh;
struct fe_priv *np = netdev_priv(dev);
int adv, bmcr;
@@ -4747,14 +4754,14 @@ static int nv_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam*
netif_carrier_off(dev);
if (netif_running(dev)) {
nv_disable_irq(dev);
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
netif_addr_lock(dev);
spin_lock(&np->lock);
/* stop engines */
nv_stop_rxtx(dev);
spin_unlock(&np->lock);
netif_addr_unlock(dev);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
}
np->pause_flags &= ~(NV_PAUSEFRAME_RX_REQ|NV_PAUSEFRAME_TX_REQ);
@@ -5190,6 +5197,7 @@ static int nv_loopback_test(struct net_device *dev)
static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64 *buffer)
{
+ unsigned int bh;
struct fe_priv *np = netdev_priv(dev);
u8 __iomem *base = get_hwbase(dev);
int result, count;
@@ -5206,7 +5214,7 @@ static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64
if (netif_running(dev)) {
netif_stop_queue(dev);
nv_napi_disable(dev);
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
netif_addr_lock(dev);
spin_lock_irq(&np->lock);
nv_disable_hw_interrupts(dev, np->irqmask);
@@ -5221,7 +5229,7 @@ static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64
nv_drain_rxtx(dev);
spin_unlock_irq(&np->lock);
netif_addr_unlock(dev);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
}
if (!nv_register_test(dev)) {
diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c
index 52ad806..b39e0e81 100644
--- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c
+++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c
@@ -566,6 +566,7 @@ static int
netxen_send_cmd_descs(struct netxen_adapter *adapter,
struct cmd_desc_type0 *cmd_desc_arr, int nr_desc)
{
+ unsigned int bh;
u32 i, producer;
struct netxen_cmd_buffer *pbuf;
struct nx_host_tx_ring *tx_ring;
@@ -576,7 +577,7 @@ netxen_send_cmd_descs(struct netxen_adapter *adapter,
return -EIO;
tx_ring = adapter->tx_ring;
- __netif_tx_lock_bh(tx_ring->txq);
+ bh = __netif_tx_lock_bh(tx_ring->txq);
producer = tx_ring->producer;
@@ -587,7 +588,7 @@ netxen_send_cmd_descs(struct netxen_adapter *adapter,
if (netxen_tx_avail(tx_ring) > TX_STOP_THRESH)
netif_tx_wake_queue(tx_ring->txq);
} else {
- __netif_tx_unlock_bh(tx_ring->txq);
+ __netif_tx_unlock_bh(tx_ring->txq, bh);
return -EBUSY;
}
}
@@ -609,7 +610,7 @@ netxen_send_cmd_descs(struct netxen_adapter *adapter,
netxen_nic_update_cmd_producer(adapter, tx_ring);
- __netif_tx_unlock_bh(tx_ring->txq);
+ __netif_tx_unlock_bh(tx_ring->txq, bh);
return 0;
}
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c
index 822aa39..3991ad0 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c
@@ -382,6 +382,7 @@ static int
qlcnic_send_cmd_descs(struct qlcnic_adapter *adapter,
struct cmd_desc_type0 *cmd_desc_arr, int nr_desc)
{
+ unsigned int bh;
u32 i, producer;
struct qlcnic_cmd_buffer *pbuf;
struct cmd_desc_type0 *cmd_desc;
@@ -393,7 +394,7 @@ qlcnic_send_cmd_descs(struct qlcnic_adapter *adapter,
return -EIO;
tx_ring = &adapter->tx_ring[0];
- __netif_tx_lock_bh(tx_ring->txq);
+ bh = __netif_tx_lock_bh(tx_ring->txq);
producer = tx_ring->producer;
@@ -405,7 +406,7 @@ qlcnic_send_cmd_descs(struct qlcnic_adapter *adapter,
netif_tx_wake_queue(tx_ring->txq);
} else {
adapter->stats.xmit_off++;
- __netif_tx_unlock_bh(tx_ring->txq);
+ __netif_tx_unlock_bh(tx_ring->txq, bh);
return -EBUSY;
}
}
@@ -429,7 +430,7 @@ qlcnic_send_cmd_descs(struct qlcnic_adapter *adapter,
qlcnic_update_cmd_producer(tx_ring);
- __netif_tx_unlock_bh(tx_ring->txq);
+ __netif_tx_unlock_bh(tx_ring->txq, bh);
return 0;
}
diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
index 66b775d..31dbd19 100644
--- a/drivers/net/ethernet/qualcomm/qca_spi.c
+++ b/drivers/net/ethernet/qualcomm/qca_spi.c
@@ -272,6 +272,7 @@ qcaspi_tx_frame(struct qcaspi *qca, struct sk_buff *skb)
static int
qcaspi_transmit(struct qcaspi *qca)
{
+ unsigned int bh;
struct net_device_stats *n_stats = &qca->net_dev->stats;
u16 available = 0;
u32 pkt_len;
@@ -306,7 +307,7 @@ qcaspi_transmit(struct qcaspi *qca)
/* XXX After inconsistent lock states netif_tx_lock()
* has been replaced by netif_tx_lock_bh() and so on.
*/
- netif_tx_lock_bh(qca->net_dev);
+ bh = netif_tx_lock_bh(qca->net_dev);
dev_kfree_skb(qca->txr.skb[qca->txr.head]);
qca->txr.skb[qca->txr.head] = NULL;
qca->txr.size -= pkt_len;
@@ -316,7 +317,7 @@ qcaspi_transmit(struct qcaspi *qca)
qca->txr.head = new_head;
if (netif_queue_stopped(qca->net_dev))
netif_wake_queue(qca->net_dev);
- netif_tx_unlock_bh(qca->net_dev);
+ netif_tx_unlock_bh(qca->net_dev, bh);
}
return 0;
@@ -450,12 +451,13 @@ qcaspi_tx_ring_has_space(struct tx_ring *txr)
static void
qcaspi_flush_tx_ring(struct qcaspi *qca)
{
+ unsigned int bh;
int i;
/* XXX After inconsistent lock states netif_tx_lock()
* has been replaced by netif_tx_lock_bh() and so on.
*/
- netif_tx_lock_bh(qca->net_dev);
+ bh = netif_tx_lock_bh(qca->net_dev);
for (i = 0; i < TX_RING_MAX_LEN; i++) {
if (qca->txr.skb[i]) {
dev_kfree_skb(qca->txr.skb[i]);
@@ -466,7 +468,7 @@ qcaspi_flush_tx_ring(struct qcaspi *qca)
qca->txr.tail = 0;
qca->txr.head = 0;
qca->txr.size = 0;
- netif_tx_unlock_bh(qca->net_dev);
+ netif_tx_unlock_bh(qca->net_dev, bh);
}
static void
diff --git a/drivers/net/ethernet/sfc/falcon/selftest.c b/drivers/net/ethernet/sfc/falcon/selftest.c
index 55c0fbb..ab223e6 100644
--- a/drivers/net/ethernet/sfc/falcon/selftest.c
+++ b/drivers/net/ethernet/sfc/falcon/selftest.c
@@ -412,6 +412,7 @@ static void ef4_iterate_state(struct ef4_nic *efx)
static int ef4_begin_loopback(struct ef4_tx_queue *tx_queue)
{
+ unsigned int bh;
struct ef4_nic *efx = tx_queue->efx;
struct ef4_loopback_state *state = efx->loopback_selftest;
struct ef4_loopback_payload *payload;
@@ -439,9 +440,9 @@ static int ef4_begin_loopback(struct ef4_tx_queue *tx_queue)
* interrupt handler. */
smp_wmb();
- netif_tx_lock_bh(efx->net_dev);
+ bh = netif_tx_lock_bh(efx->net_dev);
rc = ef4_enqueue_skb(tx_queue, skb);
- netif_tx_unlock_bh(efx->net_dev);
+ netif_tx_unlock_bh(efx->net_dev, bh);
if (rc != NETDEV_TX_OK) {
netif_err(efx, drv, efx->net_dev,
@@ -469,13 +470,14 @@ static int ef4_poll_loopback(struct ef4_nic *efx)
static int ef4_end_loopback(struct ef4_tx_queue *tx_queue,
struct ef4_loopback_self_tests *lb_tests)
{
+ unsigned int bh;
struct ef4_nic *efx = tx_queue->efx;
struct ef4_loopback_state *state = efx->loopback_selftest;
struct sk_buff *skb;
int tx_done = 0, rx_good, rx_bad;
int i, rc = 0;
- netif_tx_lock_bh(efx->net_dev);
+ bh = netif_tx_lock_bh(efx->net_dev);
/* Count the number of tx completions, and decrement the refcnt. Any
* skbs not already completed will be free'd when the queue is flushed */
@@ -486,7 +488,7 @@ static int ef4_end_loopback(struct ef4_tx_queue *tx_queue,
dev_kfree_skb(skb);
}
- netif_tx_unlock_bh(efx->net_dev);
+ netif_tx_unlock_bh(efx->net_dev, bh);
/* Check TX completion and received packet counts */
rx_good = atomic_read(&state->rx_good);
diff --git a/drivers/net/ethernet/sfc/selftest.c b/drivers/net/ethernet/sfc/selftest.c
index f693694..59e4d35 100644
--- a/drivers/net/ethernet/sfc/selftest.c
+++ b/drivers/net/ethernet/sfc/selftest.c
@@ -412,6 +412,7 @@ static void efx_iterate_state(struct efx_nic *efx)
static int efx_begin_loopback(struct efx_tx_queue *tx_queue)
{
+ unsigned int bh;
struct efx_nic *efx = tx_queue->efx;
struct efx_loopback_state *state = efx->loopback_selftest;
struct efx_loopback_payload *payload;
@@ -439,9 +440,9 @@ static int efx_begin_loopback(struct efx_tx_queue *tx_queue)
* interrupt handler. */
smp_wmb();
- netif_tx_lock_bh(efx->net_dev);
+ bh = netif_tx_lock_bh(efx->net_dev);
rc = efx_enqueue_skb(tx_queue, skb);
- netif_tx_unlock_bh(efx->net_dev);
+ netif_tx_unlock_bh(efx->net_dev, bh);
if (rc != NETDEV_TX_OK) {
netif_err(efx, drv, efx->net_dev,
@@ -469,13 +470,14 @@ static int efx_poll_loopback(struct efx_nic *efx)
static int efx_end_loopback(struct efx_tx_queue *tx_queue,
struct efx_loopback_self_tests *lb_tests)
{
+ unsigned int bh;
struct efx_nic *efx = tx_queue->efx;
struct efx_loopback_state *state = efx->loopback_selftest;
struct sk_buff *skb;
int tx_done = 0, rx_good, rx_bad;
int i, rc = 0;
- netif_tx_lock_bh(efx->net_dev);
+ bh = netif_tx_lock_bh(efx->net_dev);
/* Count the number of tx completions, and decrement the refcnt. Any
* skbs not already completed will be free'd when the queue is flushed */
@@ -486,7 +488,7 @@ static int efx_end_loopback(struct efx_tx_queue *tx_queue,
dev_kfree_skb(skb);
}
- netif_tx_unlock_bh(efx->net_dev);
+ netif_tx_unlock_bh(efx->net_dev, bh);
/* Check TX completion and received packet counts */
rx_good = atomic_read(&state->rx_good);
diff --git a/drivers/net/hamradio/6pack.c b/drivers/net/hamradio/6pack.c
index d79a69d..efc5c22 100644
--- a/drivers/net/hamradio/6pack.c
+++ b/drivers/net/hamradio/6pack.c
@@ -289,13 +289,14 @@ static int sp_close(struct net_device *dev)
static int sp_set_mac_address(struct net_device *dev, void *addr)
{
+ unsigned int bh;
struct sockaddr_ax25 *sa = addr;
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
netif_addr_lock(dev);
memcpy(dev->dev_addr, &sa->sax25_call, AX25_ADDR_LEN);
netif_addr_unlock(dev);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
return 0;
}
@@ -693,6 +694,7 @@ static void sixpack_close(struct tty_struct *tty)
static int sixpack_ioctl(struct tty_struct *tty, struct file *file,
unsigned int cmd, unsigned long arg)
{
+ unsigned int bh;
struct sixpack *sp = sp_get(tty);
struct net_device *dev;
unsigned int tmp, err;
@@ -735,9 +737,9 @@ static int sixpack_ioctl(struct tty_struct *tty, struct file *file,
break;
}
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
memcpy(dev->dev_addr, &addr, AX25_ADDR_LEN);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
err = 0;
break;
diff --git a/drivers/net/hamradio/mkiss.c b/drivers/net/hamradio/mkiss.c
index 13e4c1e..3397dda 100644
--- a/drivers/net/hamradio/mkiss.c
+++ b/drivers/net/hamradio/mkiss.c
@@ -350,13 +350,14 @@ static void kiss_unesc(struct mkiss *ax, unsigned char s)
static int ax_set_mac_address(struct net_device *dev, void *addr)
{
+ unsigned int bh;
struct sockaddr_ax25 *sa = addr;
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
netif_addr_lock(dev);
memcpy(dev->dev_addr, &sa->sax25_call, AX25_ADDR_LEN);
netif_addr_unlock(dev);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
return 0;
}
@@ -816,6 +817,7 @@ static void mkiss_close(struct tty_struct *tty)
static int mkiss_ioctl(struct tty_struct *tty, struct file *file,
unsigned int cmd, unsigned long arg)
{
+ unsigned int bh;
struct mkiss *ax = mkiss_get(tty);
struct net_device *dev;
unsigned int tmp, err;
@@ -859,9 +861,9 @@ static int mkiss_ioctl(struct tty_struct *tty, struct file *file,
break;
}
- netif_tx_lock_bh(dev);
+ bh = netif_tx_lock_bh(dev);
memcpy(dev->dev_addr, addr, AX25_ADDR_LEN);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
err = 0;
break;
diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
index 1eaec64..b4b8a61 100644
--- a/drivers/net/usb/cdc_ncm.c
+++ b/drivers/net/usb/cdc_ncm.c
@@ -296,6 +296,7 @@ static ssize_t ndp_to_end_show(struct device *d, struct device_attribute *attr,
static ssize_t ndp_to_end_store(struct device *d, struct device_attribute *attr, const char *buf, size_t len)
{
+ unsigned int bh;
struct usbnet *dev = netdev_priv(to_net_dev(d));
struct cdc_ncm_ctx *ctx = (struct cdc_ncm_ctx *)dev->data[0];
bool enable;
@@ -314,7 +315,7 @@ static ssize_t ndp_to_end_store(struct device *d, struct device_attribute *attr
}
/* flush pending data before changing flag */
- netif_tx_lock_bh(dev->net);
+ bh = netif_tx_lock_bh(dev->net);
usbnet_start_xmit(NULL, dev->net);
spin_lock_bh(&ctx->mtx);
if (enable)
@@ -322,7 +323,7 @@ static ssize_t ndp_to_end_store(struct device *d, struct device_attribute *attr
else
ctx->drvflags &= ~CDC_NCM_FLAG_NDP_TO_END;
spin_unlock_bh(&ctx->mtx);
- netif_tx_unlock_bh(dev->net);
+ netif_tx_unlock_bh(dev->net, bh);
return len;
}
@@ -375,6 +376,7 @@ static const struct attribute_group cdc_ncm_sysfs_attr_group = {
/* handle rx_max and tx_max changes */
static void cdc_ncm_update_rxtx_max(struct usbnet *dev, u32 new_rx, u32 new_tx)
{
+ unsigned int bh;
struct cdc_ncm_ctx *ctx = (struct cdc_ncm_ctx *)dev->data[0];
u8 iface_no = ctx->control->cur_altsetting->desc.bInterfaceNumber;
u32 val;
@@ -421,7 +423,7 @@ static void cdc_ncm_update_rxtx_max(struct usbnet *dev, u32 new_rx, u32 new_tx)
/* we might need to flush any pending tx buffers if running */
if (netif_running(dev->net) && val > ctx->tx_max) {
- netif_tx_lock_bh(dev->net);
+ bh = netif_tx_lock_bh(dev->net);
usbnet_start_xmit(NULL, dev->net);
/* make sure tx_curr_skb is reallocated if it was empty */
if (ctx->tx_curr_skb) {
@@ -429,7 +431,7 @@ static void cdc_ncm_update_rxtx_max(struct usbnet *dev, u32 new_rx, u32 new_tx)
ctx->tx_curr_skb = NULL;
}
ctx->tx_max = val;
- netif_tx_unlock_bh(dev->net);
+ netif_tx_unlock_bh(dev->net, bh);
} else {
ctx->tx_max = val;
}
@@ -1359,6 +1361,7 @@ static enum hrtimer_restart cdc_ncm_tx_timer_cb(struct hrtimer *timer)
static void cdc_ncm_txpath_bh(unsigned long param)
{
+ unsigned int bh;
struct usbnet *dev = (struct usbnet *)param;
struct cdc_ncm_ctx *ctx = (struct cdc_ncm_ctx *)dev->data[0];
@@ -1370,9 +1373,9 @@ static void cdc_ncm_txpath_bh(unsigned long param)
} else if (dev->net != NULL) {
ctx->tx_reason_timeout++; /* count reason for transmitting */
spin_unlock_bh(&ctx->mtx);
- netif_tx_lock_bh(dev->net);
+ bh = netif_tx_lock_bh(dev->net);
usbnet_start_xmit(NULL, dev->net);
- netif_tx_unlock_bh(dev->net);
+ netif_tx_unlock_bh(dev->net, bh);
} else {
spin_unlock_bh(&ctx->mtx);
}
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index ca5ab98..b3617fe 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3794,10 +3794,14 @@ static inline void __netif_tx_release(struct netdev_queue *txq)
__release(&txq->_xmit_lock);
}
-static inline void __netif_tx_lock_bh(struct netdev_queue *txq)
+static inline unsigned int __netif_tx_lock_bh(struct netdev_queue *txq)
{
+ unsigned int bh = 0;
+
spin_lock_bh(&txq->_xmit_lock);
txq->xmit_lock_owner = smp_processor_id();
+
+ return bh;
}
static inline bool __netif_tx_trylock(struct netdev_queue *txq)
@@ -3814,7 +3818,8 @@ static inline void __netif_tx_unlock(struct netdev_queue *txq)
spin_unlock(&txq->_xmit_lock);
}
-static inline void __netif_tx_unlock_bh(struct netdev_queue *txq)
+static inline void __netif_tx_unlock_bh(struct netdev_queue *txq,
+ unsigned int bh)
{
txq->xmit_lock_owner = -1;
spin_unlock_bh(&txq->_xmit_lock);
@@ -3863,10 +3868,14 @@ static inline void netif_tx_lock(struct net_device *dev)
}
}
-static inline void netif_tx_lock_bh(struct net_device *dev)
+static inline unsigned int netif_tx_lock_bh(struct net_device *dev)
{
+ unsigned int bh = 0;
+
local_bh_disable();
netif_tx_lock(dev);
+
+ return bh;
}
static inline void netif_tx_unlock(struct net_device *dev)
@@ -3886,7 +3895,8 @@ static inline void netif_tx_unlock(struct net_device *dev)
spin_unlock(&dev->tx_global_lock);
}
-static inline void netif_tx_unlock_bh(struct net_device *dev)
+static inline void netif_tx_unlock_bh(struct net_device *dev,
+ unsigned int bh)
{
netif_tx_unlock(dev);
local_bh_enable();
diff --git a/net/atm/clip.c b/net/atm/clip.c
index d795b9c..5fddf85 100644
--- a/net/atm/clip.c
+++ b/net/atm/clip.c
@@ -84,6 +84,7 @@ static void link_vcc(struct clip_vcc *clip_vcc, struct atmarp_entry *entry)
static void unlink_clip_vcc(struct clip_vcc *clip_vcc)
{
+ unsigned int bh;
struct atmarp_entry *entry = clip_vcc->entry;
struct clip_vcc **walk;
@@ -91,7 +92,7 @@ static void unlink_clip_vcc(struct clip_vcc *clip_vcc)
pr_crit("!clip_vcc->entry (clip_vcc %p)\n", clip_vcc);
return;
}
- netif_tx_lock_bh(entry->neigh->dev); /* block clip_start_xmit() */
+ bh = netif_tx_lock_bh(entry->neigh->dev); /* block clip_start_xmit() */
entry->neigh->used = jiffies;
for (walk = &entry->vccs; *walk; walk = &(*walk)->next)
if (*walk == clip_vcc) {
@@ -113,7 +114,7 @@ static void unlink_clip_vcc(struct clip_vcc *clip_vcc)
}
pr_crit("ATMARP: failed (entry %p, vcc 0x%p)\n", entry, clip_vcc);
out:
- netif_tx_unlock_bh(entry->neigh->dev);
+ netif_tx_unlock_bh(entry->neigh->dev, bh);
}
/* The neighbour entry n->lock is held. */
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 69078c8..2266f1f 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -490,10 +490,11 @@ static void dev_watchdog_up(struct net_device *dev)
static void dev_watchdog_down(struct net_device *dev)
{
- netif_tx_lock_bh(dev);
+ unsigned int bh;
+ bh = netif_tx_lock_bh(dev);
if (del_timer(&dev->watchdog_timer))
dev_put(dev);
- netif_tx_unlock_bh(dev);
+ netif_tx_unlock_bh(dev, bh);
}
/**
--
2.7.4
As we plan to narrow down local_bh_disable() to a per-vector disablement
granularity, a shortcut can be handy for code that wants to disable all
of them without caring about carrying the bh enabled mask state saved
prior to the call.
(TODO: check that it is called while bh are ALL enabled because
local_bh_enable_all() is going to re-enable ALL of them. Pretty much like
there should be no local_irq_enable() between a pair of local_irq_save()
and local_irq_restore()).
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
include/linux/bottom_half.h | 3 +++
kernel/softirq.c | 10 ++++++++++
2 files changed, 13 insertions(+)
diff --git a/include/linux/bottom_half.h b/include/linux/bottom_half.h
index fd75d1a..192a71c 100644
--- a/include/linux/bottom_half.h
+++ b/include/linux/bottom_half.h
@@ -64,4 +64,7 @@ static inline void local_bh_enable(void)
__local_bh_enable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET);
}
+extern void local_bh_disable_all(void);
+extern void local_bh_enable_all(void);
+
#endif /* _LINUX_BH_H */
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 75aab25..730a5c9 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -197,6 +197,16 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
}
EXPORT_SYMBOL(__local_bh_enable_ip);
+void local_bh_disable_all(void)
+{
+ local_bh_disable();
+}
+
+void local_bh_enable_all(void)
+{
+ local_bh_enable();
+}
+
/*
* We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
* but break the loop if need_resched() is set or after 2 ms.
--
2.7.4
This pair of functions is implemented on top of spin_lock_bh(), which
is going to handle a softirq mask in order to apply fine-grained vector
disablement. The lock function is going to return the vector enabled
mask as it stood prior to the last call to local_bh_disable(), following a
similar model to that of local_irq_save/restore. Subsequent calls to
local_bh_disable() and friends can then stack up:
bh = local_bh_disable(vec_mask);
lock_sock_fast(&bh2) {
*bh2 = spin_lock_bh(...)
}
...
unlock_sock_fast(bh2) {
spin_unlock_bh(..., bh2);
}
local_bh_enable(bh);
To prepare for that, make lock_sock_fast() able to return a saved vector
enabled mask and pass it back to unlock_sock_fast(). We'll plug it to
spin_lock_bh() in a subsequent patch.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
include/net/sock.h | 5 +++--
net/core/datagram.c | 5 +++--
net/core/sock.c | 2 +-
net/ipv4/tcp.c | 10 ++++++----
net/ipv4/udp.c | 11 +++++++----
5 files changed, 20 insertions(+), 13 deletions(-)
diff --git a/include/net/sock.h b/include/net/sock.h
index 433f45f..7bba619 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1500,7 +1500,7 @@ void release_sock(struct sock *sk);
SINGLE_DEPTH_NESTING)
#define bh_unlock_sock(__sk) spin_unlock(&((__sk)->sk_lock.slock))
-bool lock_sock_fast(struct sock *sk);
+bool lock_sock_fast(struct sock *sk, unsigned int *bh);
/**
* unlock_sock_fast - complement of lock_sock_fast
* @sk: socket
@@ -1509,7 +1509,8 @@ bool lock_sock_fast(struct sock *sk);
* fast unlock socket for user context.
* If slow mode is on, we call regular release_sock()
*/
-static inline void unlock_sock_fast(struct sock *sk, bool slow)
+static inline void unlock_sock_fast(struct sock *sk, bool slow,
+ unsigned int bh)
{
if (slow)
release_sock(sk);
diff --git a/net/core/datagram.c b/net/core/datagram.c
index 9aac0d6..0cdee87 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -334,17 +334,18 @@ EXPORT_SYMBOL(skb_free_datagram);
void __skb_free_datagram_locked(struct sock *sk, struct sk_buff *skb, int len)
{
bool slow;
+ unsigned int bh;
if (!skb_unref(skb)) {
sk_peek_offset_bwd(sk, len);
return;
}
- slow = lock_sock_fast(sk);
+ slow = lock_sock_fast(sk, &bh);
sk_peek_offset_bwd(sk, len);
skb_orphan(skb);
sk_mem_reclaim_partial(sk);
- unlock_sock_fast(sk, slow);
+ unlock_sock_fast(sk, slow, bh);
/* skb is now orphaned, can be freed outside of locked section */
__kfree_skb(skb);
diff --git a/net/core/sock.c b/net/core/sock.c
index 3730eb8..b886b86 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2874,7 +2874,7 @@ EXPORT_SYMBOL(release_sock);
*
* sk_lock.slock unlocked, owned = 1, BH enabled
*/
-bool lock_sock_fast(struct sock *sk)
+bool lock_sock_fast(struct sock *sk, unsigned int *bh)
{
might_sleep();
spin_lock_bh(&sk->sk_lock.slock);
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index dfd9bae..31b391a 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -605,6 +605,7 @@ EXPORT_SYMBOL(tcp_poll);
int tcp_ioctl(struct sock *sk, int cmd, unsigned long arg)
{
struct tcp_sock *tp = tcp_sk(sk);
+ unsigned int bh;
int answ;
bool slow;
@@ -613,9 +614,9 @@ int tcp_ioctl(struct sock *sk, int cmd, unsigned long arg)
if (sk->sk_state == TCP_LISTEN)
return -EINVAL;
- slow = lock_sock_fast(sk);
+ slow = lock_sock_fast(sk, &bh);
answ = tcp_inq(sk);
- unlock_sock_fast(sk, slow);
+ unlock_sock_fast(sk, slow, bh);
break;
case SIOCATMARK:
answ = tp->urg_data && tp->urg_seq == tp->copied_seq;
@@ -3101,6 +3102,7 @@ void tcp_get_info(struct sock *sk, struct tcp_info *info)
{
const struct tcp_sock *tp = tcp_sk(sk); /* iff sk_type == SOCK_STREAM */
const struct inet_connection_sock *icsk = inet_csk(sk);
+ unsigned int bh;
u32 now;
u64 rate64;
bool slow;
@@ -3134,7 +3136,7 @@ void tcp_get_info(struct sock *sk, struct tcp_info *info)
return;
}
- slow = lock_sock_fast(sk);
+ slow = lock_sock_fast(sk, &bh);
info->tcpi_ca_state = icsk->icsk_ca_state;
info->tcpi_retransmits = icsk->icsk_retransmits;
@@ -3208,7 +3210,7 @@ void tcp_get_info(struct sock *sk, struct tcp_info *info)
info->tcpi_bytes_retrans = tp->bytes_retrans;
info->tcpi_dsack_dups = tp->dsack_dups;
info->tcpi_reord_seen = tp->reord_seen;
- unlock_sock_fast(sk, slow);
+ unlock_sock_fast(sk, slow, bh);
}
EXPORT_SYMBOL_GPL(tcp_get_info);
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 7d69dd6..8148896 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1452,11 +1452,13 @@ EXPORT_SYMBOL_GPL(udp_init_sock);
void skb_consume_udp(struct sock *sk, struct sk_buff *skb, int len)
{
+ unsigned int bh;
+
if (unlikely(READ_ONCE(sk->sk_peek_off) >= 0)) {
- bool slow = lock_sock_fast(sk);
+ bool slow = lock_sock_fast(sk, &bh);
sk_peek_offset_bwd(sk, len);
- unlock_sock_fast(sk, slow);
+ unlock_sock_fast(sk, slow, bh);
}
if (!skb_unref(skb))
@@ -2378,10 +2380,11 @@ int udp_rcv(struct sk_buff *skb)
void udp_destroy_sock(struct sock *sk)
{
+ unsigned int bh;
struct udp_sock *up = udp_sk(sk);
- bool slow = lock_sock_fast(sk);
+ bool slow = lock_sock_fast(sk, &bh);
udp_flush_pending_frames(sk);
- unlock_sock_fast(sk, slow);
+ unlock_sock_fast(sk, slow, bh);
if (static_branch_unlikely(&udp_encap_needed_key) && up->encap_type) {
void (*encap_destroy)(struct sock *sk);
encap_destroy = READ_ONCE(up->encap_destroy);
--
2.7.4
This reverts commit 9aee5f8a7e30330d0a8f4c626dc924ca5590aba5.
We are going to need the 16 high bits above in order to implement
a softirq enable mask. x86 is the only architecture that doesn't use
unsigned int to implement softirq_pending.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
arch/x86/include/asm/hardirq.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
index d9069bb..a8e8e12 100644
--- a/arch/x86/include/asm/hardirq.h
+++ b/arch/x86/include/asm/hardirq.h
@@ -5,7 +5,7 @@
#include <linux/threads.h>
typedef struct {
- u16 __softirq_pending;
+ unsigned int __softirq_pending;
#if IS_ENABLED(CONFIG_KVM_INTEL)
u8 kvm_cpu_l1tf_flush_l1d;
#endif
--
2.7.4
In order to be able to disable softirqs at the vector level, we'll need
to be able to:
1) Pass as parameter the vector mask we want to disable. By default it's
going to be all of them (SOFTIRQ_ALL_MASK) to keep the current
behaviour. Each call site will need to be audited in the long run
in order to narrow the mask down to the relevant vectors.
2) Return the saved vector enabled state prior to the call to
local_bh_disable(). This saved mask will then be passed to the
symmetric call to local_bh_enable() to be restored, following the model
we have with local_irq_save/restore(). This will allow us to safely
stack up the bh disable calls, which is a common situation:
bh = local_bh_disable(BIT(BLOCK_SOFTIRQ));
...
bh2 = spin_lock_bh(..., BIT(NET_RX_SOFTIRQ));
...
spin_unlock_bh(..., bh2);
...
local_bh_enable(bh);
Prepare all the callers so far; we'll care about pushing the masks down
to the softirq core in a subsequent patch.
Thanks to Coccinelle, which helped a lot with scripts such as the
following:
@bh exists@
identifier func;
@@
func(...) {
+ unsigned int bh;
...
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
...
- local_bh_enable();
+ local_bh_enable(bh);
...
}
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
arch/arm64/kernel/fpsimd.c | 37 ++++++++++--------
arch/s390/lib/delay.c | 5 ++-
arch/x86/crypto/sha1-mb/sha1_mb.c | 9 +++--
arch/x86/crypto/sha256-mb/sha256_mb.c | 9 +++--
arch/x86/crypto/sha512-mb/sha512_mb.c | 9 +++--
crypto/cryptd.c | 25 +++++++-----
crypto/mcryptd.c | 25 +++++++-----
drivers/crypto/chelsio/chcr_algo.c | 5 ++-
drivers/crypto/chelsio/chtls/chtls_cm.c | 25 +++++++-----
drivers/crypto/inside-secure/safexcel.c | 5 ++-
drivers/crypto/marvell/cesa.c | 5 ++-
drivers/gpu/drm/i915/i915_gem.c | 5 ++-
drivers/gpu/drm/i915/i915_request.c | 5 ++-
drivers/gpu/drm/i915/intel_breadcrumbs.c | 5 ++-
drivers/gpu/drm/i915/intel_engine_cs.c | 5 ++-
drivers/hsi/clients/cmt_speech.c | 15 +++++---
drivers/infiniband/sw/rdmavt/cq.c | 5 ++-
drivers/infiniband/ulp/ipoib/ipoib_ib.c | 5 ++-
drivers/isdn/i4l/isdn_net.h | 2 +-
.../net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c | 5 ++-
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c | 5 ++-
drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c | 5 ++-
drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c | 5 ++-
drivers/net/ethernet/chelsio/cxgb3/sge.c | 5 ++-
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c | 5 ++-
drivers/net/ethernet/chelsio/cxgb4/sge.c | 22 ++++++-----
drivers/net/ethernet/emulex/benet/be_cmds.c | 5 ++-
drivers/net/ethernet/emulex/benet/be_main.c | 5 ++-
drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 5 ++-
drivers/net/ethernet/mellanox/mlx4/en_rx.c | 5 ++-
drivers/net/ethernet/sfc/ptp.c | 10 +++--
drivers/net/ipvlan/ipvlan_core.c | 5 ++-
drivers/net/ppp/ppp_generic.c | 7 ++--
drivers/net/tun.c | 40 ++++++++++---------
drivers/net/virtio_net.c | 5 ++-
drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c | 5 ++-
drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 5 ++-
drivers/net/wireless/intel/iwlwifi/pcie/rx.c | 19 +++++----
drivers/net/wireless/mac80211_hwsim.c | 14 ++++---
drivers/net/wireless/mediatek/mt76/agg-rx.c | 5 ++-
.../net/wireless/mediatek/mt76/mt76x2_phy_common.c | 5 ++-
drivers/s390/char/sclp.c | 5 ++-
drivers/s390/cio/cio.c | 5 ++-
drivers/s390/crypto/zcrypt_api.c | 20 ++++++----
drivers/tty/hvc/hvc_iucv.c | 5 ++-
include/linux/bottom_half.h | 5 ++-
include/linux/netdevice.h | 11 +++---
include/linux/rcupdate.h | 8 ++--
include/net/mac80211.h | 15 +++++---
include/net/snmp.h | 10 +++--
include/net/tcp.h | 2 +-
kernel/bpf/cpumap.c | 5 ++-
kernel/irq/manage.c | 5 ++-
kernel/locking/spinlock.c | 2 +-
kernel/padata.c | 15 +++++---
kernel/rcu/rcutorture.c | 5 ++-
kernel/rcu/srcutiny.c | 5 ++-
kernel/rcu/srcutree.c | 5 ++-
kernel/rcu/tiny.c | 5 ++-
kernel/rcu/tree_plugin.h | 12 ++++--
kernel/rcu/update.c | 5 ++-
kernel/softirq.c | 6 +--
kernel/time/hrtimer.c | 5 ++-
lib/locking-selftest.c | 8 ++--
net/ax25/ax25_subr.c | 5 ++-
net/bridge/br_fdb.c | 5 ++-
net/can/gw.c | 5 ++-
net/core/dev.c | 20 ++++++----
net/core/gen_estimator.c | 5 ++-
net/core/neighbour.c | 2 +-
net/core/pktgen.c | 9 +++--
net/core/sock.c | 4 +-
net/dccp/input.c | 5 ++-
net/dccp/ipv4.c | 5 ++-
net/dccp/minisocks.c | 5 ++-
net/dccp/proto.c | 5 ++-
net/decnet/dn_route.c | 5 ++-
net/ipv4/fib_frontend.c | 5 ++-
net/ipv4/icmp.c | 10 +++--
net/ipv4/inet_connection_sock.c | 5 ++-
net/ipv4/inet_hashtables.c | 14 ++++---
net/ipv4/inet_timewait_sock.c | 5 ++-
net/ipv4/netfilter/arp_tables.c | 10 +++--
net/ipv4/netfilter/ip_tables.c | 10 +++--
net/ipv4/netfilter/ipt_CLUSTERIP.c | 7 ++--
net/ipv4/netfilter/nf_defrag_ipv4.c | 5 ++-
net/ipv4/tcp.c | 19 ++++-----
net/ipv4/tcp_input.c | 5 ++-
net/ipv4/tcp_ipv4.c | 10 +++--
net/ipv4/tcp_minisocks.c | 5 ++-
net/ipv6/icmp.c | 10 +++--
net/ipv6/inet6_hashtables.c | 5 ++-
net/ipv6/ipv6_sockglue.c | 9 +++--
net/ipv6/netfilter/ip6_tables.c | 10 +++--
net/ipv6/route.c | 5 ++-
net/ipv6/seg6_hmac.c | 5 ++-
net/iucv/iucv.c | 45 +++++++++++++---------
net/l2tp/l2tp_ppp.c | 10 +++--
net/llc/llc_conn.c | 5 ++-
net/mac80211/agg-tx.c | 5 ++-
net/mac80211/cfg.c | 5 ++-
net/mac80211/sta_info.c | 5 ++-
net/mac80211/tdls.c | 5 ++-
net/mac80211/tx.c | 10 +++--
net/mpls/internal.h | 10 +++--
net/netfilter/ipvs/ip_vs_core.c | 20 ++++++----
net/netfilter/ipvs/ip_vs_ctl.c | 5 ++-
net/netfilter/nf_conntrack_core.c | 41 ++++++++++++--------
net/netfilter/nf_conntrack_ecache.c | 5 ++-
net/netfilter/nf_conntrack_netlink.c | 5 ++-
net/netfilter/nf_log.c | 4 +-
net/netfilter/nf_queue.c | 5 ++-
net/netfilter/nf_tables_core.c | 5 ++-
net/netfilter/nft_counter.c | 10 +++--
net/netfilter/x_tables.c | 7 ++--
net/netfilter/xt_hashlimit.c | 11 +++---
net/netlink/af_netlink.c | 10 +++--
net/openvswitch/datapath.c | 5 ++-
net/sctp/input.c | 15 +++++---
net/sctp/sm_make_chunk.c | 9 +++--
net/sctp/socket.c | 20 ++++++----
net/sunrpc/svcsock.c | 7 ++--
net/unix/af_unix.c | 10 +++--
net/xdp/xsk.c | 10 +++--
net/xfrm/xfrm_ipcomp.c | 7 ++--
security/smack/smack_lsm.c | 5 ++-
126 files changed, 658 insertions(+), 459 deletions(-)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 58c53bc..fddeac4 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -90,7 +90,7 @@
* To prevent this from racing with the manipulation of the task's FPSIMD state
* from task context and thereby corrupting the state, it is necessary to
* protect any manipulation of a task's fpsimd_state or TIF_FOREIGN_FPSTATE
- * flag with local_bh_disable() unless softirqs are already masked.
+ * flag with local_bh_disable(SOFTIRQ_ALL_MASK) unless softirqs are already masked.
*
* For a certain task, the sequence may look something like this:
* - the task gets scheduled in; if both the task's fpsimd_cpu field
@@ -510,6 +510,7 @@ void sve_sync_from_fpsimd_zeropad(struct task_struct *task)
int sve_set_vector_length(struct task_struct *task,
unsigned long vl, unsigned long flags)
{
+ unsigned int bh;
if (flags & ~(unsigned long)(PR_SVE_VL_INHERIT |
PR_SVE_SET_VL_ONEXEC))
return -EINVAL;
@@ -547,7 +548,7 @@ int sve_set_vector_length(struct task_struct *task,
* non-SVE thread.
*/
if (task == current) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
fpsimd_save();
set_thread_flag(TIF_FOREIGN_FPSTATE);
@@ -558,7 +559,7 @@ int sve_set_vector_length(struct task_struct *task,
sve_to_fpsimd(task);
if (task == current)
- local_bh_enable();
+ local_bh_enable(bh);
/*
* Force reallocation of task SVE state to the correct size
@@ -805,6 +806,7 @@ void fpsimd_release_task(struct task_struct *dead_task)
*/
asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs)
{
+ unsigned int bh;
/* Even if we chose not to use SVE, the hardware could still trap: */
if (unlikely(!system_supports_sve()) || WARN_ON(is_compat_task())) {
force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
@@ -813,7 +815,7 @@ asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs)
sve_alloc(current);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
fpsimd_save();
fpsimd_to_sve(current);
@@ -825,7 +827,7 @@ asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs)
if (test_and_set_thread_flag(TIF_SVE))
WARN_ON(1); /* SVE access shouldn't have trapped */
- local_bh_enable();
+ local_bh_enable(bh);
}
/*
@@ -891,12 +893,13 @@ void fpsimd_thread_switch(struct task_struct *next)
void fpsimd_flush_thread(void)
{
+ unsigned int bh;
int vl, supported_vl;
if (!system_supports_fpsimd())
return;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
memset(&current->thread.uw.fpsimd_state, 0,
sizeof(current->thread.uw.fpsimd_state));
@@ -939,7 +942,7 @@ void fpsimd_flush_thread(void)
set_thread_flag(TIF_FOREIGN_FPSTATE);
- local_bh_enable();
+ local_bh_enable(bh);
}
/*
@@ -948,12 +951,13 @@ void fpsimd_flush_thread(void)
*/
void fpsimd_preserve_current_state(void)
{
+ unsigned int bh;
if (!system_supports_fpsimd())
return;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
fpsimd_save();
- local_bh_enable();
+ local_bh_enable(bh);
}
/*
@@ -1008,17 +1012,18 @@ void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st)
*/
void fpsimd_restore_current_state(void)
{
+ unsigned int bh;
if (!system_supports_fpsimd())
return;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) {
task_fpsimd_load();
fpsimd_bind_task_to_cpu();
}
- local_bh_enable();
+ local_bh_enable(bh);
}
/*
@@ -1028,10 +1033,11 @@ void fpsimd_restore_current_state(void)
*/
void fpsimd_update_current_state(struct user_fpsimd_state const *state)
{
+ unsigned int bh;
if (!system_supports_fpsimd())
return;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
current->thread.uw.fpsimd_state = *state;
if (system_supports_sve() && test_thread_flag(TIF_SVE))
@@ -1042,7 +1048,7 @@ void fpsimd_update_current_state(struct user_fpsimd_state const *state)
clear_thread_flag(TIF_FOREIGN_FPSTATE);
- local_bh_enable();
+ local_bh_enable(bh);
}
/*
@@ -1083,12 +1089,13 @@ EXPORT_PER_CPU_SYMBOL(kernel_neon_busy);
*/
void kernel_neon_begin(void)
{
+ unsigned int bh;
if (WARN_ON(!system_supports_fpsimd()))
return;
BUG_ON(!may_use_simd());
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
__this_cpu_write(kernel_neon_busy, true);
@@ -1100,7 +1107,7 @@ void kernel_neon_begin(void)
preempt_disable();
- local_bh_enable();
+ local_bh_enable(bh);
}
EXPORT_SYMBOL(kernel_neon_begin);
diff --git a/arch/s390/lib/delay.c b/arch/s390/lib/delay.c
index 3f83ee9..05a4fce 100644
--- a/arch/s390/lib/delay.c
+++ b/arch/s390/lib/delay.c
@@ -74,6 +74,7 @@ static void __udelay_enabled(unsigned long long usecs)
void __udelay(unsigned long long usecs)
{
unsigned long flags;
+ unsigned int bh;
preempt_disable();
local_irq_save(flags);
@@ -89,9 +90,9 @@ void __udelay(unsigned long long usecs)
goto out;
}
if (raw_irqs_disabled_flags(flags)) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
__udelay_disabled(usecs);
- local_bh_enable_no_softirq();
+ local_bh_enable_no_softirq(bh);
goto out;
}
__udelay_enabled(usecs);
diff --git a/arch/x86/crypto/sha1-mb/sha1_mb.c b/arch/x86/crypto/sha1-mb/sha1_mb.c
index b938056..d1d2d9f 100644
--- a/arch/x86/crypto/sha1-mb/sha1_mb.c
+++ b/arch/x86/crypto/sha1-mb/sha1_mb.c
@@ -435,6 +435,7 @@ static int sha_complete_job(struct mcryptd_hash_request_ctx *rctx,
struct mcryptd_alg_cstate *cstate,
int err)
{
+ unsigned int bh;
struct ahash_request *req = cast_mcryptd_ctx_to_req(rctx);
struct sha1_hash_ctx *sha_ctx;
struct mcryptd_hash_request_ctx *req_ctx;
@@ -448,9 +449,9 @@ static int sha_complete_job(struct mcryptd_hash_request_ctx *rctx,
if (irqs_disabled())
rctx->complete(&req->base, err);
else {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rctx->complete(&req->base, err);
- local_bh_enable();
+ local_bh_enable(bh);
}
/* check to see if there are other jobs that are done */
@@ -467,9 +468,9 @@ static int sha_complete_job(struct mcryptd_hash_request_ctx *rctx,
if (irqs_disabled())
req_ctx->complete(&req->base, ret);
else {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
req_ctx->complete(&req->base, ret);
- local_bh_enable();
+ local_bh_enable(bh);
}
}
sha_ctx = sha1_ctx_mgr_get_comp_ctx(cstate->mgr);
diff --git a/arch/x86/crypto/sha256-mb/sha256_mb.c b/arch/x86/crypto/sha256-mb/sha256_mb.c
index 97c5fc4..f357cfd 100644
--- a/arch/x86/crypto/sha256-mb/sha256_mb.c
+++ b/arch/x86/crypto/sha256-mb/sha256_mb.c
@@ -434,6 +434,7 @@ static int sha_complete_job(struct mcryptd_hash_request_ctx *rctx,
struct mcryptd_alg_cstate *cstate,
int err)
{
+ unsigned int bh;
struct ahash_request *req = cast_mcryptd_ctx_to_req(rctx);
struct sha256_hash_ctx *sha_ctx;
struct mcryptd_hash_request_ctx *req_ctx;
@@ -447,9 +448,9 @@ static int sha_complete_job(struct mcryptd_hash_request_ctx *rctx,
if (irqs_disabled())
rctx->complete(&req->base, err);
else {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rctx->complete(&req->base, err);
- local_bh_enable();
+ local_bh_enable(bh);
}
/* check to see if there are other jobs that are done */
@@ -466,9 +467,9 @@ static int sha_complete_job(struct mcryptd_hash_request_ctx *rctx,
if (irqs_disabled())
req_ctx->complete(&req->base, ret);
else {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
req_ctx->complete(&req->base, ret);
- local_bh_enable();
+ local_bh_enable(bh);
}
}
sha_ctx = sha256_ctx_mgr_get_comp_ctx(cstate->mgr);
diff --git a/arch/x86/crypto/sha512-mb/sha512_mb.c b/arch/x86/crypto/sha512-mb/sha512_mb.c
index 26b8567..f8ab09d 100644
--- a/arch/x86/crypto/sha512-mb/sha512_mb.c
+++ b/arch/x86/crypto/sha512-mb/sha512_mb.c
@@ -463,6 +463,7 @@ static int sha_complete_job(struct mcryptd_hash_request_ctx *rctx,
struct mcryptd_alg_cstate *cstate,
int err)
{
+ unsigned int bh;
struct ahash_request *req = cast_mcryptd_ctx_to_req(rctx);
struct sha512_hash_ctx *sha_ctx;
struct mcryptd_hash_request_ctx *req_ctx;
@@ -477,9 +478,9 @@ static int sha_complete_job(struct mcryptd_hash_request_ctx *rctx,
if (irqs_disabled())
rctx->complete(&req->base, err);
else {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rctx->complete(&req->base, err);
- local_bh_enable();
+ local_bh_enable(bh);
}
/* check to see if there are other jobs that are done */
@@ -496,9 +497,9 @@ static int sha_complete_job(struct mcryptd_hash_request_ctx *rctx,
if (irqs_disabled())
req_ctx->complete(&req->base, ret);
else {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
req_ctx->complete(&req->base, ret);
- local_bh_enable();
+ local_bh_enable(bh);
}
}
sha_ctx = sha512_ctx_mgr_get_comp_ctx(cstate);
diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index addca7b..ee245c9 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -168,6 +168,7 @@ static int cryptd_enqueue_request(struct cryptd_queue *queue,
* do. */
static void cryptd_queue_worker(struct work_struct *work)
{
+ unsigned int bh;
struct cryptd_cpu_queue *cpu_queue;
struct crypto_async_request *req, *backlog;
@@ -178,12 +179,12 @@ static void cryptd_queue_worker(struct work_struct *work)
* cryptd_enqueue_request(). local_bh_disable/enable is used to prevent
* cryptd_enqueue_request() being accessed from software interrupts.
*/
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
preempt_disable();
backlog = crypto_get_backlog(&cpu_queue->queue);
req = crypto_dequeue_request(&cpu_queue->queue);
preempt_enable();
- local_bh_enable();
+ local_bh_enable(bh);
if (!req)
return;
@@ -240,6 +241,7 @@ static void cryptd_blkcipher_crypt(struct ablkcipher_request *req,
struct scatterlist *src,
unsigned int len))
{
+ unsigned int bh;
struct cryptd_blkcipher_request_ctx *rctx;
struct cryptd_blkcipher_ctx *ctx;
struct crypto_ablkcipher *tfm;
@@ -264,9 +266,9 @@ static void cryptd_blkcipher_crypt(struct ablkcipher_request *req,
ctx = crypto_ablkcipher_ctx(tfm);
refcnt = atomic_read(&ctx->refcnt);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rctx->complete(&req->base, err);
- local_bh_enable();
+ local_bh_enable(bh);
if (err != -EINPROGRESS && refcnt && atomic_dec_and_test(&ctx->refcnt))
crypto_free_ablkcipher(tfm);
@@ -463,14 +465,15 @@ static int cryptd_skcipher_setkey(struct crypto_skcipher *parent,
static void cryptd_skcipher_complete(struct skcipher_request *req, int err)
{
+ unsigned int bh;
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
int refcnt = atomic_read(&ctx->refcnt);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rctx->complete(&req->base, err);
- local_bh_enable();
+ local_bh_enable(bh);
if (err != -EINPROGRESS && refcnt && atomic_dec_and_test(&ctx->refcnt))
crypto_free_skcipher(tfm);
@@ -713,14 +716,15 @@ static int cryptd_hash_enqueue(struct ahash_request *req,
static void cryptd_hash_complete(struct ahash_request *req, int err)
{
+ unsigned int bh;
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm);
struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
int refcnt = atomic_read(&ctx->refcnt);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rctx->complete(&req->base, err);
- local_bh_enable();
+ local_bh_enable(bh);
if (err != -EINPROGRESS && refcnt && atomic_dec_and_test(&ctx->refcnt))
crypto_free_ahash(tfm);
@@ -952,6 +956,7 @@ static void cryptd_aead_crypt(struct aead_request *req,
int err,
int (*crypt)(struct aead_request *req))
{
+ unsigned int bh;
struct cryptd_aead_request_ctx *rctx;
struct cryptd_aead_ctx *ctx;
crypto_completion_t compl;
@@ -972,9 +977,9 @@ static void cryptd_aead_crypt(struct aead_request *req,
ctx = crypto_aead_ctx(tfm);
refcnt = atomic_read(&ctx->refcnt);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
compl(&req->base, err);
- local_bh_enable();
+ local_bh_enable(bh);
if (err != -EINPROGRESS && refcnt && atomic_dec_and_test(&ctx->refcnt))
crypto_free_aead(tfm);
diff --git a/crypto/mcryptd.c b/crypto/mcryptd.c
index f141521..1c8e1b8 100644
--- a/crypto/mcryptd.c
+++ b/crypto/mcryptd.c
@@ -331,6 +331,7 @@ static int mcryptd_hash_enqueue(struct ahash_request *req,
static void mcryptd_hash_init(struct crypto_async_request *req_async, int err)
{
+ unsigned int bh;
struct mcryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
struct crypto_ahash *child = ctx->child;
struct ahash_request *req = ahash_request_cast(req_async);
@@ -348,9 +349,9 @@ static void mcryptd_hash_init(struct crypto_async_request *req_async, int err)
err = crypto_ahash_init(desc);
out:
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rctx->complete(&req->base, err);
- local_bh_enable();
+ local_bh_enable(bh);
}
static int mcryptd_hash_init_enqueue(struct ahash_request *req)
@@ -360,6 +361,7 @@ static int mcryptd_hash_init_enqueue(struct ahash_request *req)
static void mcryptd_hash_update(struct crypto_async_request *req_async, int err)
{
+ unsigned int bh;
struct ahash_request *req = ahash_request_cast(req_async);
struct mcryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
@@ -375,9 +377,9 @@ static void mcryptd_hash_update(struct crypto_async_request *req_async, int err)
return;
out:
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rctx->complete(&req->base, err);
- local_bh_enable();
+ local_bh_enable(bh);
}
static int mcryptd_hash_update_enqueue(struct ahash_request *req)
@@ -387,6 +389,7 @@ static int mcryptd_hash_update_enqueue(struct ahash_request *req)
static void mcryptd_hash_final(struct crypto_async_request *req_async, int err)
{
+ unsigned int bh;
struct ahash_request *req = ahash_request_cast(req_async);
struct mcryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
@@ -402,9 +405,9 @@ static void mcryptd_hash_final(struct crypto_async_request *req_async, int err)
return;
out:
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rctx->complete(&req->base, err);
- local_bh_enable();
+ local_bh_enable(bh);
}
static int mcryptd_hash_final_enqueue(struct ahash_request *req)
@@ -414,6 +417,7 @@ static int mcryptd_hash_final_enqueue(struct ahash_request *req)
static void mcryptd_hash_finup(struct crypto_async_request *req_async, int err)
{
+ unsigned int bh;
struct ahash_request *req = ahash_request_cast(req_async);
struct mcryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
@@ -429,9 +433,9 @@ static void mcryptd_hash_finup(struct crypto_async_request *req_async, int err)
return;
out:
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rctx->complete(&req->base, err);
- local_bh_enable();
+ local_bh_enable(bh);
}
static int mcryptd_hash_finup_enqueue(struct ahash_request *req)
@@ -441,6 +445,7 @@ static int mcryptd_hash_finup_enqueue(struct ahash_request *req)
static void mcryptd_hash_digest(struct crypto_async_request *req_async, int err)
{
+ unsigned int bh;
struct mcryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
struct crypto_ahash *child = ctx->child;
struct ahash_request *req = ahash_request_cast(req_async);
@@ -458,9 +463,9 @@ static void mcryptd_hash_digest(struct crypto_async_request *req_async, int err)
err = crypto_ahash_init(desc) ?: crypto_ahash_finup(desc);
out:
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rctx->complete(&req->base, err);
- local_bh_enable();
+ local_bh_enable(bh);
}
static int mcryptd_hash_digest_enqueue(struct ahash_request *req)
diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 5c539af..72fce32 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -559,19 +559,20 @@ static inline int get_cryptoalg_subtype(struct crypto_tfm *tfm)
static int cxgb4_is_crypto_q_full(struct net_device *dev, unsigned int idx)
{
+ unsigned int bh;
struct adapter *adap = netdev2adap(dev);
struct sge_uld_txq_info *txq_info =
adap->sge.uld_txq_info[CXGB4_TX_CRYPTO];
struct sge_uld_txq *txq;
int ret = 0;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
txq = &txq_info->uldtxq[idx];
spin_lock(&txq->sendq.lock);
if (txq->full)
ret = -1;
spin_unlock(&txq->sendq.lock);
- local_bh_enable();
+ local_bh_enable(bh);
return ret;
}
diff --git a/drivers/crypto/chelsio/chtls/chtls_cm.c b/drivers/crypto/chelsio/chtls/chtls_cm.c
index 0997e16..8af8c84 100644
--- a/drivers/crypto/chelsio/chtls/chtls_cm.c
+++ b/drivers/crypto/chelsio/chtls/chtls_cm.c
@@ -298,6 +298,7 @@ static int make_close_transition(struct sock *sk)
void chtls_close(struct sock *sk, long timeout)
{
+ unsigned int bh;
int data_lost, prev_state;
struct chtls_sock *csk;
@@ -333,7 +334,7 @@ void chtls_close(struct sock *sk, long timeout)
release_sock(sk);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
bh_lock_sock(sk);
if (prev_state != TCP_CLOSE && sk->sk_state == TCP_CLOSE)
@@ -353,7 +354,7 @@ void chtls_close(struct sock *sk, long timeout)
out:
bh_unlock_sock(sk);
- local_bh_enable();
+ local_bh_enable(bh);
sock_put(sk);
}
@@ -470,6 +471,7 @@ static void reset_listen_child(struct sock *child)
static void chtls_disconnect_acceptq(struct sock *listen_sk)
{
+ unsigned int bh;
struct request_sock **pprev;
pprev = ACCEPT_QUEUE(listen_sk);
@@ -483,12 +485,12 @@ static void chtls_disconnect_acceptq(struct sock *listen_sk)
sk_acceptq_removed(listen_sk);
reqsk_put(req);
sock_hold(child);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
bh_lock_sock(child);
release_tcp_port(child);
reset_listen_child(child);
bh_unlock_sock(child);
- local_bh_enable();
+ local_bh_enable(bh);
sock_put(child);
} else {
pprev = &req->dl_next;
@@ -577,6 +579,7 @@ static void cleanup_syn_rcv_conn(struct sock *child, struct sock *parent)
static void chtls_reset_synq(struct listen_ctx *listen_ctx)
{
+ unsigned int bh;
struct sock *listen_sk = listen_ctx->lsk;
while (!skb_queue_empty(&listen_ctx->synq)) {
@@ -587,12 +590,12 @@ static void chtls_reset_synq(struct listen_ctx *listen_ctx)
cleanup_syn_rcv_conn(child, listen_sk);
sock_hold(child);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
bh_lock_sock(child);
release_tcp_port(child);
reset_listen_child(child);
bh_unlock_sock(child);
- local_bh_enable();
+ local_bh_enable(bh);
sock_put(child);
}
}
@@ -993,9 +996,10 @@ static void chtls_pass_accept_rpl(struct sk_buff *skb,
static void inet_inherit_port(struct inet_hashinfo *hash_info,
struct sock *lsk, struct sock *newsk)
{
- local_bh_disable();
+ unsigned int bh;
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
__inet_inherit_port(lsk, newsk);
- local_bh_enable();
+ local_bh_enable(bh);
}
static int chtls_backlog_rcv(struct sock *sk, struct sk_buff *skb)
@@ -1329,9 +1333,10 @@ static DECLARE_WORK(reap_task, process_reap_list);
static void add_to_reap_list(struct sock *sk)
{
+ unsigned int bh;
struct chtls_sock *csk = sk->sk_user_data;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
bh_lock_sock(sk);
release_tcp_port(sk); /* release the port immediately */
@@ -1342,7 +1347,7 @@ static void add_to_reap_list(struct sock *sk)
schedule_work(&reap_task);
spin_unlock(&reap_list_lock);
bh_unlock_sock(sk);
- local_bh_enable();
+ local_bh_enable(bh);
}
static void add_pass_open_to_parent(struct sock *child, struct sock *lsk,
diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
index 7e71043..2532cce 100644
--- a/drivers/crypto/inside-secure/safexcel.c
+++ b/drivers/crypto/inside-secure/safexcel.c
@@ -684,6 +684,7 @@ int safexcel_invalidate_cache(struct crypto_async_request *async,
static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv *priv,
int ring)
{
+ unsigned int bh;
struct crypto_async_request *req;
struct safexcel_context *ctx;
int ret, i, nreq, ndesc, tot_descs, handled = 0;
@@ -710,9 +711,9 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
}
if (should_complete) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
req->complete(req, ret);
- local_bh_enable();
+ local_bh_enable(bh);
}
tot_descs += ndesc;
diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
index a4aa681..bee4bb8 100644
--- a/drivers/crypto/marvell/cesa.c
+++ b/drivers/crypto/marvell/cesa.c
@@ -107,10 +107,11 @@ static inline void
mv_cesa_complete_req(struct mv_cesa_ctx *ctx, struct crypto_async_request *req,
int res)
{
+ unsigned int bh;
ctx->ops->cleanup(req);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
req->complete(req, res);
- local_bh_enable();
+ local_bh_enable(bh);
}
static irqreturn_t mv_cesa_int(int irq, void *priv)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index fcc73a6..ae4dc5b 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -575,6 +575,7 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
static void __fence_set_priority(struct dma_fence *fence,
const struct i915_sched_attr *attr)
{
+ unsigned int bh;
struct i915_request *rq;
struct intel_engine_cs *engine;
@@ -584,12 +585,12 @@ static void __fence_set_priority(struct dma_fence *fence,
rq = to_request(fence);
engine = rq->engine;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rcu_read_lock(); /* RCU serialisation for set-wedged protection */
if (engine->schedule)
engine->schedule(rq, attr);
rcu_read_unlock();
- local_bh_enable(); /* kick the tasklets if queues were reprioritised */
+ local_bh_enable(bh); /* kick the tasklets if queues were reprioritised */
}
static void fence_set_priority(struct dma_fence *fence,
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 5c2c93c..4bc4a12 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1043,6 +1043,7 @@ void i915_request_skip(struct i915_request *rq, int error)
*/
void i915_request_add(struct i915_request *request)
{
+ unsigned int bh;
struct intel_engine_cs *engine = request->engine;
struct i915_timeline *timeline = request->timeline;
struct intel_ring *ring = request->ring;
@@ -1124,13 +1125,13 @@ void i915_request_add(struct i915_request *request)
* decide whether to preempt the entire chain so that it is ready to
* run at the earliest possible convenience.
*/
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rcu_read_lock(); /* RCU serialisation for set-wedged protection */
if (engine->schedule)
engine->schedule(request, &request->gem_context->sched);
rcu_read_unlock();
i915_sw_fence_commit(&request->submit);
- local_bh_enable(); /* Kick the execlists tasklet if just scheduled */
+ local_bh_enable(bh); /* Kick the execlists tasklet if just scheduled */
/*
* In typical scenarios, we do not expect the previous request on
diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index 1db6ba7..31e8d73 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -615,6 +615,7 @@ static void signaler_set_rtpriority(void)
static int intel_breadcrumbs_signaler(void *arg)
{
+ unsigned int bh;
struct intel_engine_cs *engine = arg;
struct intel_breadcrumbs *b = &engine->breadcrumbs;
struct i915_request *rq, *n;
@@ -669,13 +670,13 @@ static int intel_breadcrumbs_signaler(void *arg)
spin_unlock_irq(&b->rb_lock);
if (!list_empty(&list)) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
list_for_each_entry_safe(rq, n, &list, signaling.link) {
dma_fence_signal(&rq->fence);
GEM_BUG_ON(!i915_request_completed(rq));
i915_request_put(rq);
}
- local_bh_enable(); /* kick start the tasklets */
+ local_bh_enable(bh); /* kick start the tasklets */
/*
* If the engine is saturated we may be continually
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index 2d19528..cbf7776 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -973,6 +973,7 @@ static bool ring_is_idle(struct intel_engine_cs *engine)
*/
bool intel_engine_is_idle(struct intel_engine_cs *engine)
{
+ unsigned int bh;
struct drm_i915_private *dev_priv = engine->i915;
/* More white lies, if wedged, hw state is inconsistent */
@@ -991,14 +992,14 @@ bool intel_engine_is_idle(struct intel_engine_cs *engine)
if (READ_ONCE(engine->execlists.active)) {
struct tasklet_struct *t = &engine->execlists.tasklet;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
if (tasklet_trylock(t)) {
/* Must wait for any GPU reset in progress. */
if (__tasklet_is_enabled(t))
t->func(t->data);
tasklet_unlock(t);
}
- local_bh_enable();
+ local_bh_enable(bh);
if (READ_ONCE(engine->execlists.active))
return false;
diff --git a/drivers/hsi/clients/cmt_speech.c b/drivers/hsi/clients/cmt_speech.c
index a1d4b93..a3de8f3 100644
--- a/drivers/hsi/clients/cmt_speech.c
+++ b/drivers/hsi/clients/cmt_speech.c
@@ -752,9 +752,10 @@ static unsigned int cs_hsi_get_state(struct cs_hsi_iface *hi)
static int cs_hsi_command(struct cs_hsi_iface *hi, u32 cmd)
{
+ unsigned int bh;
int ret = 0;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
switch (cmd & TARGET_MASK) {
case TARGET_REMOTE:
ret = cs_hsi_write_on_control(hi, cmd);
@@ -769,7 +770,7 @@ static int cs_hsi_command(struct cs_hsi_iface *hi, u32 cmd)
ret = -EINVAL;
break;
}
- local_bh_enable();
+ local_bh_enable(bh);
return ret;
}
@@ -937,6 +938,7 @@ static void cs_hsi_data_disable(struct cs_hsi_iface *hi, int old_state)
static int cs_hsi_buf_config(struct cs_hsi_iface *hi,
struct cs_buffer_config *buf_cfg)
{
+ unsigned int bh;
int r = 0;
unsigned int old_state = hi->iface_state;
@@ -981,9 +983,9 @@ static int cs_hsi_buf_config(struct cs_hsi_iface *hi,
pm_qos_add_request(&hi->pm_qos_req,
PM_QOS_CPU_DMA_LATENCY,
CS_QOS_LATENCY_FOR_DATA_USEC);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
cs_hsi_read_on_data(hi);
- local_bh_enable();
+ local_bh_enable(bh);
} else if (old_state == CS_STATE_CONFIGURED) {
pm_qos_remove_request(&hi->pm_qos_req);
}
@@ -998,6 +1000,7 @@ static int cs_hsi_buf_config(struct cs_hsi_iface *hi,
static int cs_hsi_start(struct cs_hsi_iface **hi, struct hsi_client *cl,
unsigned long mmap_base, unsigned long mmap_size)
{
+ unsigned int bh;
int err = 0;
struct cs_hsi_iface *hsi_if = kzalloc(sizeof(*hsi_if), GFP_KERNEL);
@@ -1045,9 +1048,9 @@ static int cs_hsi_start(struct cs_hsi_iface **hi, struct hsi_client *cl,
}
hsi_if->iface_state = CS_STATE_OPENED;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
cs_hsi_read_on_control(hsi_if);
- local_bh_enable();
+ local_bh_enable(bh);
dev_dbg(&cl->device, "cs_hsi_start...done\n");
diff --git a/drivers/infiniband/sw/rdmavt/cq.c b/drivers/infiniband/sw/rdmavt/cq.c
index 4f1544a..7ffffd4 100644
--- a/drivers/infiniband/sw/rdmavt/cq.c
+++ b/drivers/infiniband/sw/rdmavt/cq.c
@@ -137,6 +137,7 @@ EXPORT_SYMBOL(rvt_cq_enter);
static void send_complete(struct work_struct *work)
{
+ unsigned int bh;
struct rvt_cq *cq = container_of(work, struct rvt_cq, comptask);
/*
@@ -155,9 +156,9 @@ static void send_complete(struct work_struct *work)
* See the implementation for ipoib_cm_handle_tx_wc(),
* netif_tx_lock_bh() and netif_tx_lock().
*/
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
- local_bh_enable();
+ local_bh_enable(bh);
if (cq->triggered == triggered)
return;
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
index 87f2a5c..ec3f30d 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
@@ -967,6 +967,7 @@ void ipoib_ib_dev_down(struct net_device *dev)
void ipoib_drain_cq(struct net_device *dev)
{
+ unsigned int bh;
struct ipoib_dev_priv *priv = ipoib_priv(dev);
int i, n;
@@ -975,7 +976,7 @@ void ipoib_drain_cq(struct net_device *dev)
* called from the BH-disabled NAPI poll context, so disable
* BHs here too.
*/
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
do {
n = ib_poll_cq(priv->recv_cq, IPOIB_NUM_WC, priv->ibwc);
@@ -1002,7 +1003,7 @@ void ipoib_drain_cq(struct net_device *dev)
while (poll_tx(priv))
; /* nothing */
- local_bh_enable();
+ local_bh_enable(bh);
}
/*
diff --git a/drivers/isdn/i4l/isdn_net.h b/drivers/isdn/i4l/isdn_net.h
index f4621b1..bb87788 100644
--- a/drivers/isdn/i4l/isdn_net.h
+++ b/drivers/isdn/i4l/isdn_net.h
@@ -95,7 +95,7 @@ static __inline__ isdn_net_local *isdn_net_get_locked_lp(isdn_net_dev *nd,
nd->queue = nd->queue->next;
spin_unlock_irqrestore(&nd->queue_lock, flags);
spin_lock(&lp->xmit_lock);
- local_bh_disable();
+ *bh = local_bh_disable(SOFTIRQ_ALL_MASK);
return lp;
errout:
spin_unlock_irqrestore(&nd->queue_lock, flags);
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
index a4a90b6c..c4974142 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
@@ -2460,6 +2460,7 @@ static void bnx2x_wait_for_link(struct bnx2x *bp, u8 link_up, u8 is_serdes)
static int bnx2x_run_loopback(struct bnx2x *bp, int loopback_mode)
{
+ unsigned int bh;
unsigned int pkt_size, num_pkts, i;
struct sk_buff *skb;
unsigned char *packet;
@@ -2616,9 +2617,9 @@ static int bnx2x_run_loopback(struct bnx2x *bp, int loopback_mode)
* sch_direct_xmit() and bnx2x_run_loopback() (calling
* bnx2x_tx_int()), as both are taking netif_tx_lock().
*/
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
bnx2x_tx_int(bp, txdata);
- local_bh_enable();
+ local_bh_enable(bh);
}
rx_idx = le16_to_cpu(*fp_rx->rx_cons_sb);
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 71362b7..f37973e 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -5661,6 +5661,7 @@ static void bnx2x_eq_int(struct bnx2x *bp)
static void bnx2x_sp_task(struct work_struct *work)
{
+ unsigned int bh;
struct bnx2x *bp = container_of(work, struct bnx2x, sp_task.work);
DP(BNX2X_MSG_SP, "sp task invoked\n");
@@ -5691,9 +5692,9 @@ static void bnx2x_sp_task(struct work_struct *work)
/* Prevent local bottom-halves from running as
* we are going to change the local NAPI list.
*/
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
napi_schedule(&bnx2x_fcoe(bp, napi));
- local_bh_enable();
+ local_bh_enable(bh);
}
/* Handle EQ completions */
diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
index a19172d..b736aed 100644
--- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
@@ -893,11 +893,12 @@ static const struct attribute_group offload_attr_group = {
*/
static inline int offload_tx(struct t3cdev *tdev, struct sk_buff *skb)
{
+ unsigned int bh;
int ret;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
ret = t3_offload_tx(tdev, skb);
- local_bh_enable();
+ local_bh_enable(bh);
return ret;
}
diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c
index 50cd660..462c3e4 100644
--- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c
+++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c
@@ -1042,11 +1042,12 @@ static int process_rx(struct t3cdev *dev, struct sk_buff **skbs, int n)
*/
int cxgb3_ofld_send(struct t3cdev *dev, struct sk_buff *skb)
{
+ unsigned int bh;
int r;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
r = dev->send(dev, skb);
- local_bh_enable();
+ local_bh_enable(bh);
return r;
}
diff --git a/drivers/net/ethernet/chelsio/cxgb3/sge.c b/drivers/net/ethernet/chelsio/cxgb3/sge.c
index 20b6e1b..4d1fe4b 100644
--- a/drivers/net/ethernet/chelsio/cxgb3/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb3/sge.c
@@ -1562,10 +1562,11 @@ static void restart_ctrlq(unsigned long data)
*/
int t3_mgmt_tx(struct adapter *adap, struct sk_buff *skb)
{
+ unsigned int bh;
int ret;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
ret = ctrl_xmit(adap, &adap->sge.qs[0].txq[TXQ_CTRL], skb);
- local_bh_enable();
+ local_bh_enable(bh);
return ret;
}
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index 961e3087..f6cc11d 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -441,6 +441,7 @@ static int set_rxmode(struct net_device *dev, int mtu, bool sleep_ok)
*/
static int link_start(struct net_device *dev)
{
+ unsigned int bh;
int ret;
struct port_info *pi = netdev_priv(dev);
unsigned int mb = pi->adapter->pf;
@@ -464,10 +465,10 @@ static int link_start(struct net_device *dev)
ret = t4_link_l1cfg(pi->adapter, mb, pi->tx_chan,
&pi->link_cfg);
if (ret == 0) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
ret = t4_enable_pi_params(pi->adapter, mb, pi, true,
true, CXGB4_DCB_ENABLED);
- local_bh_enable();
+ local_bh_enable(bh);
}
return ret;
diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
index a9799ce..f85d437 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
@@ -2071,11 +2071,12 @@ static void restart_ctrlq(unsigned long data)
*/
int t4_mgmt_tx(struct adapter *adap, struct sk_buff *skb)
{
+ unsigned int bh;
int ret;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
ret = ctrl_xmit(&adap->sge.ctrlq[0], skb);
- local_bh_enable();
+ local_bh_enable(bh);
return ret;
}
@@ -2385,11 +2386,12 @@ static inline int uld_send(struct adapter *adap, struct sk_buff *skb,
*/
int t4_ofld_send(struct adapter *adap, struct sk_buff *skb)
{
+ unsigned int bh;
int ret;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
ret = uld_send(adap, skb, CXGB4_TX_OFLD);
- local_bh_enable();
+ local_bh_enable(bh);
return ret;
}
@@ -2482,6 +2484,7 @@ static int ofld_xmit_direct(struct sge_uld_txq *q, const void *src,
int cxgb4_immdata_send(struct net_device *dev, unsigned int idx,
const void *src, unsigned int len)
{
+ unsigned int bh;
struct sge_uld_txq_info *txq_info;
struct sge_uld_txq *txq;
struct adapter *adap;
@@ -2489,17 +2492,17 @@ int cxgb4_immdata_send(struct net_device *dev, unsigned int idx,
adap = netdev2adap(dev);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
txq_info = adap->sge.uld_txq_info[CXGB4_TX_OFLD];
if (unlikely(!txq_info)) {
WARN_ON(true);
- local_bh_enable();
+ local_bh_enable(bh);
return NET_XMIT_DROP;
}
txq = &txq_info->uldtxq[idx];
ret = ofld_xmit_direct(txq, src, len);
- local_bh_enable();
+ local_bh_enable(bh);
return net_xmit_eval(ret);
}
EXPORT_SYMBOL(cxgb4_immdata_send);
@@ -2515,11 +2518,12 @@ EXPORT_SYMBOL(cxgb4_immdata_send);
*/
static int t4_crypto_send(struct adapter *adap, struct sk_buff *skb)
{
+ unsigned int bh;
int ret;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
ret = uld_send(adap, skb, CXGB4_TX_CRYPTO);
- local_bh_enable();
+ local_bh_enable(bh);
return ret;
}
diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
index 1e9d882..c6bdc33 100644
--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
+++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
@@ -577,6 +577,7 @@ int be_process_mcc(struct be_adapter *adapter)
/* Wait till no more pending mcc requests are present */
static int be_mcc_wait_compl(struct be_adapter *adapter)
{
+ unsigned int bh;
#define mcc_timeout 12000 /* 12s timeout */
int i, status = 0;
struct be_mcc_obj *mcc_obj = &adapter->mcc_obj;
@@ -585,9 +586,9 @@ static int be_mcc_wait_compl(struct be_adapter *adapter)
if (be_check_error(adapter, BE_ERROR_ANY))
return -EIO;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
status = be_process_mcc(adapter);
- local_bh_enable();
+ local_bh_enable(bh);
if (atomic_read(&mcc_obj->q.used) == 0)
break;
diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
index 74d1226..dc7d2ad 100644
--- a/drivers/net/ethernet/emulex/benet/be_main.c
+++ b/drivers/net/ethernet/emulex/benet/be_main.c
@@ -5616,6 +5616,7 @@ static void be_log_sfp_info(struct be_adapter *adapter)
static void be_worker(struct work_struct *work)
{
+ unsigned int bh;
struct be_adapter *adapter =
container_of(work, struct be_adapter, work.work);
struct be_rx_obj *rxo;
@@ -5629,9 +5630,9 @@ static void be_worker(struct work_struct *work)
* mcc completions
*/
if (!netif_running(adapter->netdev)) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
be_process_mcc(adapter);
- local_bh_enable();
+ local_bh_enable(bh);
goto reschedule;
}
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index 666708a..a5c7e70 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -1621,6 +1621,7 @@ static void mlx4_en_init_recycle_ring(struct mlx4_en_priv *priv,
int mlx4_en_start_port(struct net_device *dev)
{
+ unsigned int bh;
struct mlx4_en_priv *priv = netdev_priv(dev);
struct mlx4_en_dev *mdev = priv->mdev;
struct mlx4_en_cq *cq;
@@ -1835,9 +1836,9 @@ int mlx4_en_start_port(struct net_device *dev)
* the queues freezing if they are full
*/
for (i = 0; i < priv->rx_ring_num; i++) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
napi_schedule(&priv->rx_cq[i]->napi);
- local_bh_enable();
+ local_bh_enable(bh);
}
netif_tx_start_all_queues(dev);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index a1aeeb8..4616e1a 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -395,6 +395,7 @@ int mlx4_en_activate_rx_rings(struct mlx4_en_priv *priv)
*/
void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv)
{
+ unsigned int bh;
int ring;
if (!priv->port_up)
@@ -402,9 +403,9 @@ void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv)
for (ring = 0; ring < priv->rx_ring_num; ring++) {
if (mlx4_en_is_ring_empty(priv->rx_ring[ring])) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
napi_reschedule(&priv->rx_cq[ring]->napi);
- local_bh_enable();
+ local_bh_enable(bh);
}
}
}
diff --git a/drivers/net/ethernet/sfc/ptp.c b/drivers/net/ethernet/sfc/ptp.c
index f216615..5148b1f 100644
--- a/drivers/net/ethernet/sfc/ptp.c
+++ b/drivers/net/ethernet/sfc/ptp.c
@@ -805,12 +805,13 @@ static int efx_ptp_disable(struct efx_nic *efx)
static void efx_ptp_deliver_rx_queue(struct sk_buff_head *q)
{
+ unsigned int bh;
struct sk_buff *skb;
while ((skb = skb_dequeue(q))) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
netif_receive_skb(skb);
- local_bh_enable();
+ local_bh_enable(bh);
}
}
@@ -1225,9 +1226,10 @@ static void efx_ptp_process_events(struct efx_nic *efx, struct sk_buff_head *q)
/* Complete processing of a received packet */
static inline void efx_ptp_process_rx(struct efx_nic *efx, struct sk_buff *skb)
{
- local_bh_disable();
+ unsigned int bh;
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
netif_receive_skb(skb);
- local_bh_enable();
+ local_bh_enable(bh);
}
static void efx_ptp_remove_multicast_filters(struct efx_nic *efx)
diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
index 1a8132e..193c473 100644
--- a/drivers/net/ipvlan/ipvlan_core.c
+++ b/drivers/net/ipvlan/ipvlan_core.c
@@ -229,6 +229,7 @@ unsigned int ipvlan_mac_hash(const unsigned char *addr)
void ipvlan_process_multicast(struct work_struct *work)
{
+ unsigned int bh;
struct ipvl_port *port = container_of(work, struct ipvl_port, wq);
struct ethhdr *ethh;
struct ipvl_dev *ipvlan;
@@ -270,7 +271,7 @@ void ipvlan_process_multicast(struct work_struct *work)
ret = NET_RX_DROP;
len = skb->len + ETH_HLEN;
nskb = skb_clone(skb, GFP_ATOMIC);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
if (nskb) {
consumed = true;
nskb->pkt_type = pkt_type;
@@ -281,7 +282,7 @@ void ipvlan_process_multicast(struct work_struct *work)
ret = netif_rx(nskb);
}
ipvlan_count_rx(ipvlan, len, ret == NET_RX_SUCCESS, true);
- local_bh_enable();
+ local_bh_enable(bh);
}
rcu_read_unlock();
diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
index 02ad03a..c505ecb 100644
--- a/drivers/net/ppp/ppp_generic.c
+++ b/drivers/net/ppp/ppp_generic.c
@@ -1425,7 +1425,8 @@ static void __ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb)
static void ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb)
{
- local_bh_disable();
+ unsigned int bh;
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
if (unlikely(*this_cpu_ptr(ppp->xmit_recursion)))
goto err;
@@ -1434,12 +1435,12 @@ static void ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb)
__ppp_xmit_process(ppp, skb);
(*this_cpu_ptr(ppp->xmit_recursion))--;
- local_bh_enable();
+ local_bh_enable(bh);
return;
err:
- local_bh_enable();
+ local_bh_enable(bh);
kfree_skb(skb);
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index ebd07ad..172a5da 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1482,6 +1482,7 @@ static struct sk_buff *tun_napi_alloc_frags(struct tun_file *tfile,
size_t len,
const struct iov_iter *it)
{
+ unsigned int bh;
struct sk_buff *skb;
size_t linear;
int err;
@@ -1490,9 +1491,9 @@ static struct sk_buff *tun_napi_alloc_frags(struct tun_file *tfile,
if (it->nr_segs > MAX_SKB_FRAGS + 1)
return ERR_PTR(-ENOMEM);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
skb = napi_get_frags(&tfile->napi);
- local_bh_enable();
+ local_bh_enable(bh);
if (!skb)
return ERR_PTR(-ENOMEM);
@@ -1562,15 +1563,16 @@ static struct sk_buff *tun_alloc_skb(struct tun_file *tfile,
static void tun_rx_batched(struct tun_struct *tun, struct tun_file *tfile,
struct sk_buff *skb, int more)
{
+ unsigned int bh;
struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
struct sk_buff_head process_queue;
u32 rx_batched = tun->rx_batched;
bool rcv = false;
if (!rx_batched || (!more && skb_queue_empty(queue))) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
netif_receive_skb(skb);
- local_bh_enable();
+ local_bh_enable(bh);
return;
}
@@ -1587,11 +1589,11 @@ static void tun_rx_batched(struct tun_struct *tun, struct tun_file *tfile,
if (rcv) {
struct sk_buff *nskb;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
while ((nskb = __skb_dequeue(&process_queue)))
netif_receive_skb(nskb);
netif_receive_skb(skb);
- local_bh_enable();
+ local_bh_enable(bh);
}
}
@@ -1623,6 +1625,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
struct virtio_net_hdr *hdr,
int len, int *skb_xdp)
{
+ unsigned int bh;
struct page_frag *alloc_frag = &current->task_frag;
struct sk_buff *skb;
struct bpf_prog *xdp_prog;
@@ -1659,7 +1662,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
else
*skb_xdp = 0;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rcu_read_lock();
xdp_prog = rcu_dereference(tun->xdp_prog);
if (xdp_prog && !*skb_xdp) {
@@ -1684,7 +1687,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
if (err)
goto err_redirect;
rcu_read_unlock();
- local_bh_enable();
+ local_bh_enable(bh);
return NULL;
case XDP_TX:
get_page(alloc_frag->page);
@@ -1692,7 +1695,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
if (tun_xdp_tx(tun->dev, &xdp) < 0)
goto err_redirect;
rcu_read_unlock();
- local_bh_enable();
+ local_bh_enable(bh);
return NULL;
case XDP_PASS:
delta = orig_data - xdp.data;
@@ -1712,7 +1715,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
skb = build_skb(buf, buflen);
if (!skb) {
rcu_read_unlock();
- local_bh_enable();
+ local_bh_enable(bh);
return ERR_PTR(-ENOMEM);
}
@@ -1722,7 +1725,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
alloc_frag->offset += buflen;
rcu_read_unlock();
- local_bh_enable();
+ local_bh_enable(bh);
return skb;
@@ -1730,7 +1733,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
put_page(alloc_frag->page);
err_xdp:
rcu_read_unlock();
- local_bh_enable();
+ local_bh_enable(bh);
this_cpu_inc(tun->pcpu_stats->rx_dropped);
return NULL;
}
@@ -1740,6 +1743,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
void *msg_control, struct iov_iter *from,
int noblock, bool more)
{
+ unsigned int bh;
struct tun_pi pi = { 0, cpu_to_be16(ETH_P_IP) };
struct sk_buff *skb;
size_t total_len = iov_iter_count(from);
@@ -1926,19 +1930,19 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
struct bpf_prog *xdp_prog;
int ret;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rcu_read_lock();
xdp_prog = rcu_dereference(tun->xdp_prog);
if (xdp_prog) {
ret = do_xdp_generic(xdp_prog, skb);
if (ret != XDP_PASS) {
rcu_read_unlock();
- local_bh_enable();
+ local_bh_enable(bh);
return total_len;
}
}
rcu_read_unlock();
- local_bh_enable();
+ local_bh_enable(bh);
}
/* Compute the costly rx hash only if needed for flow updates.
@@ -1961,9 +1965,9 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
return -ENOMEM;
}
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
napi_gro_frags(&tfile->napi);
- local_bh_enable();
+ local_bh_enable(bh);
mutex_unlock(&tfile->napi_mutex);
} else if (tfile->napi_enabled) {
struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
@@ -1977,7 +1981,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
if (!more || queue_len > NAPI_POLL_WEIGHT)
napi_schedule(&tfile->napi);
- local_bh_enable();
+ local_bh_enable(0);
} else if (!IS_ENABLED(CONFIG_4KSTACKS)) {
tun_rx_batched(tun, tfile, skb, more);
} else {
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7659209..4a15c36 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1224,15 +1224,16 @@ static void skb_recv_done(struct virtqueue *rvq)
static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
{
+ unsigned int bh;
napi_enable(napi);
/* If all buffers were filled by other side before we napi_enabled, we
* won't get another interrupt, so process any outstanding packets now.
* Call local_bh_enable after to trigger softIRQ processing.
*/
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
virtqueue_napi_schedule(napi, vq);
- local_bh_enable();
+ local_bh_enable(bh);
}
static void virtnet_napi_tx_enable(struct virtnet_info *vi,
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
index 05b7741..061903e 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
@@ -1141,6 +1141,7 @@ static ssize_t iwl_dbgfs_inject_packet_write(struct iwl_mvm *mvm,
char *buf, size_t count,
loff_t *ppos)
{
+ unsigned int bh;
struct iwl_rx_cmd_buffer rxb = {
._rx_page_order = 0,
.truesize = 0, /* not used */
@@ -1186,9 +1187,9 @@ static ssize_t iwl_dbgfs_inject_packet_write(struct iwl_mvm *mvm,
(bin_len - mpdu_cmd_hdr_size - sizeof(*pkt)))
goto out;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
iwl_mvm_rx_mpdu_mq(mvm, NULL, &rxb, 0);
- local_bh_enable();
+ local_bh_enable(bh);
ret = 0;
out:
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
index 18db1ed..d3c43c5 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
@@ -1040,6 +1040,7 @@ static inline u8 iwl_mvm_tid_to_ac_queue(int tid)
static void iwl_mvm_tx_deferred_stream(struct iwl_mvm *mvm,
struct ieee80211_sta *sta, int tid)
{
+ unsigned int bh;
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
struct iwl_mvm_tid_data *tid_data = &mvmsta->tid_data[tid];
struct sk_buff *skb;
@@ -1075,7 +1076,7 @@ static void iwl_mvm_tx_deferred_stream(struct iwl_mvm *mvm,
__skb_queue_head_init(&deferred_tx);
/* Disable bottom-halves when entering TX path */
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
spin_lock(&mvmsta->lock);
skb_queue_splice_init(&tid_data->deferred_tx_frames, &deferred_tx);
mvmsta->deferred_traffic_tid_map &= ~BIT(tid);
@@ -1084,7 +1085,7 @@ static void iwl_mvm_tx_deferred_stream(struct iwl_mvm *mvm,
while ((skb = __skb_dequeue(&deferred_tx)))
if (no_queue || iwl_mvm_tx_skb(mvm, skb, sta))
ieee80211_free_txskb(mvm->hw, skb);
- local_bh_enable();
+ local_bh_enable(bh);
/* Wake queue */
iwl_mvm_start_mac_queues(mvm, BIT(mac_queue));
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
index d017aa2..aded2e8 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
@@ -1485,6 +1485,7 @@ static struct iwl_trans_pcie *iwl_pcie_get_trans_pcie(struct msix_entry *entry)
*/
irqreturn_t iwl_pcie_irq_rx_msix_handler(int irq, void *dev_id)
{
+ unsigned int bh;
struct msix_entry *entry = dev_id;
struct iwl_trans_pcie *trans_pcie = iwl_pcie_get_trans_pcie(entry);
struct iwl_trans *trans = trans_pcie->trans;
@@ -1496,9 +1497,9 @@ irqreturn_t iwl_pcie_irq_rx_msix_handler(int irq, void *dev_id)
lock_map_acquire(&trans->sync_cmd_lockdep_map);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
iwl_pcie_rx_handle(trans, entry->entry);
- local_bh_enable();
+ local_bh_enable(bh);
iwl_pcie_clear_irq(trans, entry);
@@ -1664,6 +1665,7 @@ void iwl_pcie_handle_rfkill_irq(struct iwl_trans *trans)
irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id)
{
+ unsigned int bh;
struct iwl_trans *trans = dev_id;
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
struct isr_statistics *isr_stats = &trans_pcie->isr_stats;
@@ -1860,9 +1862,9 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id)
isr_stats->rx++;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
iwl_pcie_rx_handle(trans, 0);
- local_bh_enable();
+ local_bh_enable(bh);
}
/* This "Tx" DMA channel is used only for loading uCode */
@@ -2014,6 +2016,7 @@ irqreturn_t iwl_pcie_msix_isr(int irq, void *data)
irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
{
+ unsigned int bh;
struct msix_entry *entry = dev_id;
struct iwl_trans_pcie *trans_pcie = iwl_pcie_get_trans_pcie(entry);
struct iwl_trans *trans = trans_pcie->trans;
@@ -2047,16 +2050,16 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
if ((trans_pcie->shared_vec_mask & IWL_SHARED_IRQ_NON_RX) &&
inta_fh & MSIX_FH_INT_CAUSES_Q0) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
iwl_pcie_rx_handle(trans, 0);
- local_bh_enable();
+ local_bh_enable(bh);
}
if ((trans_pcie->shared_vec_mask & IWL_SHARED_IRQ_FIRST_RSS) &&
inta_fh & MSIX_FH_INT_CAUSES_Q1) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
iwl_pcie_rx_handle(trans, 1);
- local_bh_enable();
+ local_bh_enable(bh);
}
/* This "Tx" DMA channel is used only for loading uCode */
diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
index 1068757..323456e 100644
--- a/drivers/net/wireless/mac80211_hwsim.c
+++ b/drivers/net/wireless/mac80211_hwsim.c
@@ -738,6 +738,7 @@ static int hwsim_fops_ps_read(void *dat, u64 *val)
static int hwsim_fops_ps_write(void *dat, u64 val)
{
+ unsigned int bh;
struct mac80211_hwsim_data *data = dat;
enum ps_mode old_ps;
@@ -748,17 +749,17 @@ static int hwsim_fops_ps_write(void *dat, u64 val)
if (val == PS_MANUAL_POLL) {
if (data->ps != PS_ENABLED)
return -EINVAL;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
ieee80211_iterate_active_interfaces_atomic(
data->hw, IEEE80211_IFACE_ITER_NORMAL,
hwsim_send_ps_poll, data);
- local_bh_enable();
+ local_bh_enable(bh);
return 0;
}
old_ps = data->ps;
data->ps = val;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
if (old_ps == PS_DISABLED && val != PS_DISABLED) {
ieee80211_iterate_active_interfaces_atomic(
data->hw, IEEE80211_IFACE_ITER_NORMAL,
@@ -768,7 +769,7 @@ static int hwsim_fops_ps_write(void *dat, u64 val)
data->hw, IEEE80211_IFACE_ITER_NORMAL,
hwsim_send_nullfunc_no_ps, data);
}
- local_bh_enable();
+ local_bh_enable(bh);
return 0;
}
@@ -2033,6 +2034,7 @@ static void mac80211_hwsim_flush(struct ieee80211_hw *hw,
static void hw_scan_work(struct work_struct *work)
{
+ unsigned int bh;
struct mac80211_hwsim_data *hwsim =
container_of(work, struct mac80211_hwsim_data, hw_scan.work);
struct cfg80211_scan_request *req = hwsim->hw_scan_request;
@@ -2083,10 +2085,10 @@ static void hw_scan_work(struct work_struct *work)
if (req->ie_len)
skb_put_data(probe, req->ie, req->ie_len);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
mac80211_hwsim_tx_frame(hwsim->hw, probe,
hwsim->tmp_chan);
- local_bh_enable();
+ local_bh_enable(bh);
}
}
ieee80211_queue_delayed_work(hwsim->hw, &hwsim->hw_scan,
diff --git a/drivers/net/wireless/mediatek/mt76/agg-rx.c b/drivers/net/wireless/mediatek/mt76/agg-rx.c
index 73c8b28..145e81f 100644
--- a/drivers/net/wireless/mediatek/mt76/agg-rx.c
+++ b/drivers/net/wireless/mediatek/mt76/agg-rx.c
@@ -94,6 +94,7 @@ mt76_rx_aggr_check_release(struct mt76_rx_tid *tid, struct sk_buff_head *frames)
static void
mt76_rx_aggr_reorder_work(struct work_struct *work)
{
+ unsigned int bh;
struct mt76_rx_tid *tid = container_of(work, struct mt76_rx_tid,
reorder_work.work);
struct mt76_dev *dev = tid->dev;
@@ -102,7 +103,7 @@ mt76_rx_aggr_reorder_work(struct work_struct *work)
__skb_queue_head_init(&frames);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rcu_read_lock();
spin_lock(&tid->lock);
@@ -116,7 +117,7 @@ mt76_rx_aggr_reorder_work(struct work_struct *work)
mt76_rx_complete(dev, &frames, NULL);
rcu_read_unlock();
- local_bh_enable();
+ local_bh_enable(bh);
}
static void
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_phy_common.c b/drivers/net/wireless/mediatek/mt76/mt76x2_phy_common.c
index 9fd6ab4..0f7c895 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x2_phy_common.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x2_phy_common.c
@@ -303,12 +303,13 @@ EXPORT_SYMBOL_GPL(mt76x2_phy_set_band);
int mt76x2_phy_get_min_avg_rssi(struct mt76x2_dev *dev)
{
+ unsigned int bh;
struct mt76x2_sta *sta;
struct mt76_wcid *wcid;
int i, j, min_rssi = 0;
s8 cur_rssi;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rcu_read_lock();
for (i = 0; i < ARRAY_SIZE(dev->wcid_mask); i++) {
@@ -339,7 +340,7 @@ int mt76x2_phy_get_min_avg_rssi(struct mt76x2_dev *dev)
}
rcu_read_unlock();
- local_bh_enable();
+ local_bh_enable(bh);
if (!min_rssi)
return -75;
diff --git a/drivers/s390/char/sclp.c b/drivers/s390/char/sclp.c
index 6c6b745..a2231b5 100644
--- a/drivers/s390/char/sclp.c
+++ b/drivers/s390/char/sclp.c
@@ -535,6 +535,7 @@ sclp_sync_wait(void)
unsigned long long old_tick;
unsigned long flags;
unsigned long cr0, cr0_sync;
+ unsigned int bh;
u64 timeout;
int irq_context;
@@ -551,7 +552,7 @@ sclp_sync_wait(void)
/* Prevent bottom half from executing once we force interrupts open */
irq_context = in_interrupt();
if (!irq_context)
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
/* Enable service-signal interruption, disable timer interrupts */
old_tick = local_tick_disable();
trace_hardirqs_on();
@@ -572,7 +573,7 @@ sclp_sync_wait(void)
local_irq_disable();
__ctl_load(cr0, 0, 0);
if (!irq_context)
- local_bh_enable_no_softirq();
+ local_bh_enable_no_softirq(bh);
local_tick_enable(old_tick);
local_irq_restore(flags);
}
diff --git a/drivers/s390/cio/cio.c b/drivers/s390/cio/cio.c
index e3fb83b..3484e95 100644
--- a/drivers/s390/cio/cio.c
+++ b/drivers/s390/cio/cio.c
@@ -587,6 +587,7 @@ void cio_tsch(struct subchannel *sch)
{
struct irb *irb;
int irq_context;
+ unsigned int bh;
irb = this_cpu_ptr(&cio_irb);
/* Store interrupt response block to lowcore. */
@@ -597,7 +598,7 @@ void cio_tsch(struct subchannel *sch)
/* Call interrupt handler with updated status. */
irq_context = in_interrupt();
if (!irq_context) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
irq_enter();
}
kstat_incr_irq_this_cpu(IO_INTERRUPT);
@@ -607,7 +608,7 @@ void cio_tsch(struct subchannel *sch)
inc_irq_stat(IRQIO_CIO);
if (!irq_context) {
irq_exit();
- local_bh_enable_no_softirq();
+ local_bh_enable_no_softirq(bh);
}
}
diff --git a/drivers/s390/crypto/zcrypt_api.c b/drivers/s390/crypto/zcrypt_api.c
index e685412..4d3757a 100644
--- a/drivers/s390/crypto/zcrypt_api.c
+++ b/drivers/s390/crypto/zcrypt_api.c
@@ -691,13 +691,14 @@ static void zcrypt_status_mask(char status[], size_t max_adapters)
static void zcrypt_qdepth_mask(char qdepth[], size_t max_adapters)
{
+ unsigned int bh;
struct zcrypt_card *zc;
struct zcrypt_queue *zq;
int card;
memset(qdepth, 0, max_adapters);
spin_lock(&zcrypt_list_lock);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
for_each_zcrypt_card(zc) {
for_each_zcrypt_queue(zq, zc) {
card = AP_QID_CARD(zq->queue->qid);
@@ -711,19 +712,20 @@ static void zcrypt_qdepth_mask(char qdepth[], size_t max_adapters)
spin_unlock(&zq->queue->lock);
}
}
- local_bh_enable();
+ local_bh_enable(bh);
spin_unlock(&zcrypt_list_lock);
}
static void zcrypt_perdev_reqcnt(int reqcnt[], size_t max_adapters)
{
+ unsigned int bh;
struct zcrypt_card *zc;
struct zcrypt_queue *zq;
int card;
memset(reqcnt, 0, sizeof(int) * max_adapters);
spin_lock(&zcrypt_list_lock);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
for_each_zcrypt_card(zc) {
for_each_zcrypt_queue(zq, zc) {
card = AP_QID_CARD(zq->queue->qid);
@@ -735,19 +737,20 @@ static void zcrypt_perdev_reqcnt(int reqcnt[], size_t max_adapters)
spin_unlock(&zq->queue->lock);
}
}
- local_bh_enable();
+ local_bh_enable(bh);
spin_unlock(&zcrypt_list_lock);
}
static int zcrypt_pendingq_count(void)
{
+ unsigned int bh;
struct zcrypt_card *zc;
struct zcrypt_queue *zq;
int pendingq_count;
pendingq_count = 0;
spin_lock(&zcrypt_list_lock);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
for_each_zcrypt_card(zc) {
for_each_zcrypt_queue(zq, zc) {
if (AP_QID_QUEUE(zq->queue->qid) != ap_domain_index)
@@ -757,20 +760,21 @@ static int zcrypt_pendingq_count(void)
spin_unlock(&zq->queue->lock);
}
}
- local_bh_enable();
+ local_bh_enable(bh);
spin_unlock(&zcrypt_list_lock);
return pendingq_count;
}
static int zcrypt_requestq_count(void)
{
+ unsigned int bh;
struct zcrypt_card *zc;
struct zcrypt_queue *zq;
int requestq_count;
requestq_count = 0;
spin_lock(&zcrypt_list_lock);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
for_each_zcrypt_card(zc) {
for_each_zcrypt_queue(zq, zc) {
if (AP_QID_QUEUE(zq->queue->qid) != ap_domain_index)
@@ -780,7 +784,7 @@ static int zcrypt_requestq_count(void)
spin_unlock(&zq->queue->lock);
}
}
- local_bh_enable();
+ local_bh_enable(bh);
spin_unlock(&zcrypt_list_lock);
return requestq_count;
}
diff --git a/drivers/tty/hvc/hvc_iucv.c b/drivers/tty/hvc/hvc_iucv.c
index 2af1e57..dccf50f 100644
--- a/drivers/tty/hvc/hvc_iucv.c
+++ b/drivers/tty/hvc/hvc_iucv.c
@@ -975,11 +975,12 @@ static void hvc_iucv_msg_complete(struct iucv_path *path,
*/
static int hvc_iucv_pm_freeze(struct device *dev)
{
+ unsigned int bh;
struct hvc_iucv_private *priv = dev_get_drvdata(dev);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
hvc_iucv_hangup(priv);
- local_bh_enable();
+ local_bh_enable(bh);
return 0;
}
diff --git a/include/linux/bottom_half.h b/include/linux/bottom_half.h
index 192a71c..31fcdae 100644
--- a/include/linux/bottom_half.h
+++ b/include/linux/bottom_half.h
@@ -46,9 +46,10 @@ static __always_inline void __local_bh_disable_ip(unsigned long ip, unsigned int
}
#endif
-static inline void local_bh_disable(void)
+static inline unsigned int local_bh_disable(unsigned int mask)
{
__local_bh_disable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET);
+ return 0;
}
extern void local_bh_enable_no_softirq(void);
@@ -59,7 +60,7 @@ static inline void local_bh_enable_ip(unsigned long ip)
__local_bh_enable_ip(ip, SOFTIRQ_DISABLE_OFFSET);
}
-static inline void local_bh_enable(void)
+static inline void local_bh_enable(unsigned int bh)
{
__local_bh_enable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET);
}
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index b3617fe..46675d6 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3870,9 +3870,9 @@ static inline void netif_tx_lock(struct net_device *dev)
static inline unsigned int netif_tx_lock_bh(struct net_device *dev)
{
- unsigned int bh = 0;
+ unsigned int bh;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
netif_tx_lock(dev);
return bh;
@@ -3899,7 +3899,7 @@ static inline void netif_tx_unlock_bh(struct net_device *dev,
unsigned int bh)
{
netif_tx_unlock(dev);
- local_bh_enable();
+ local_bh_enable(bh);
}
#define HARD_TX_LOCK(dev, txq, cpu) { \
@@ -3925,10 +3925,11 @@ static inline void netif_tx_unlock_bh(struct net_device *dev,
static inline void netif_tx_disable(struct net_device *dev)
{
+ unsigned int bh;
unsigned int i;
int cpu;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
cpu = smp_processor_id();
for (i = 0; i < dev->num_tx_queues; i++) {
struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
@@ -3937,7 +3938,7 @@ static inline void netif_tx_disable(struct net_device *dev)
netif_tx_stop_queue(txq);
__netif_tx_unlock(txq);
}
- local_bh_enable();
+ local_bh_enable(bh);
}
static inline void netif_addr_lock(struct net_device *dev)
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 60fbd15..853fb52 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -702,12 +702,14 @@ static inline void rcu_read_unlock(void)
*/
static inline unsigned int rcu_read_lock_bh(void)
{
- local_bh_disable();
+ unsigned int bh;
+
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
__acquire(RCU_BH);
rcu_lock_acquire(&rcu_bh_lock_map);
RCU_LOCKDEP_WARN(!rcu_is_watching(),
"rcu_read_lock_bh() used illegally while idle");
- return 0;
+ return bh;
}
/*
@@ -721,7 +723,7 @@ static inline void rcu_read_unlock_bh(unsigned int bh)
"rcu_read_unlock_bh() used illegally while idle");
rcu_lock_release(&rcu_bh_lock_map);
__release(RCU_BH);
- local_bh_enable();
+ local_bh_enable(bh);
}
/**
diff --git a/include/net/mac80211.h b/include/net/mac80211.h
index 5790f55..1ad54c7 100644
--- a/include/net/mac80211.h
+++ b/include/net/mac80211.h
@@ -4140,9 +4140,10 @@ void ieee80211_rx_irqsafe(struct ieee80211_hw *hw, struct sk_buff *skb);
static inline void ieee80211_rx_ni(struct ieee80211_hw *hw,
struct sk_buff *skb)
{
- local_bh_disable();
+ unsigned int bh;
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
ieee80211_rx(hw, skb);
- local_bh_enable();
+ local_bh_enable(bh);
}
/**
@@ -4180,11 +4181,12 @@ int ieee80211_sta_ps_transition(struct ieee80211_sta *sta, bool start);
static inline int ieee80211_sta_ps_transition_ni(struct ieee80211_sta *sta,
bool start)
{
+ unsigned int bh;
int ret;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
ret = ieee80211_sta_ps_transition(sta, start);
- local_bh_enable();
+ local_bh_enable(bh);
return ret;
}
@@ -4371,9 +4373,10 @@ static inline void ieee80211_tx_status_noskb(struct ieee80211_hw *hw,
static inline void ieee80211_tx_status_ni(struct ieee80211_hw *hw,
struct sk_buff *skb)
{
- local_bh_disable();
+ unsigned int bh;
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
ieee80211_tx_status(hw, skb);
- local_bh_enable();
+ local_bh_enable(bh);
}
/**
diff --git a/include/net/snmp.h b/include/net/snmp.h
index c9228ad..2ee8363 100644
--- a/include/net/snmp.h
+++ b/include/net/snmp.h
@@ -166,9 +166,10 @@ struct linux_xfrm_mib {
#define SNMP_ADD_STATS64(mib, field, addend) \
do { \
- local_bh_disable(); \
+ unsigned int bh; \
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK); \
__SNMP_ADD_STATS64(mib, field, addend); \
- local_bh_enable(); \
+ local_bh_enable(bh); \
} while (0)
#define __SNMP_INC_STATS64(mib, field) SNMP_ADD_STATS64(mib, field, 1)
@@ -184,9 +185,10 @@ struct linux_xfrm_mib {
} while (0)
#define SNMP_UPD_PO_STATS64(mib, basefield, addend) \
do { \
- local_bh_disable(); \
+ unsigned int bh; \
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK); \
__SNMP_UPD_PO_STATS64(mib, basefield, addend); \
- local_bh_enable(); \
+ local_bh_enable(bh); \
} while (0)
#else
#define __SNMP_INC_STATS64(mib, field) __SNMP_INC_STATS(mib, field)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 7fe357a..4caf43e 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1556,7 +1556,7 @@ bool tcp_alloc_md5sig_pool(void);
struct tcp_md5sig_pool *tcp_get_md5sig_pool(unsigned int *bh);
static inline void tcp_put_md5sig_pool(unsigned int bh)
{
- local_bh_enable();
+ local_bh_enable(bh);
}
int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *, const struct sk_buff *,
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 24aac0d..11d0073 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -235,6 +235,7 @@ static void put_cpu_map_entry(struct bpf_cpu_map_entry *rcpu)
static int cpu_map_kthread_run(void *data)
{
+ unsigned int bh;
struct bpf_cpu_map_entry *rcpu = data;
set_current_state(TASK_INTERRUPTIBLE);
@@ -263,7 +264,7 @@ static int cpu_map_kthread_run(void *data)
}
/* Process packets in rcpu->queue */
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
/*
* The bpf_cpu_map_entry is single consumer, with this
* kthread CPU pinned. Lockless access to ptr_ring
@@ -291,7 +292,7 @@ static int cpu_map_kthread_run(void *data)
/* Feedback loop via tracepoint */
trace_xdp_cpumap_kthread(rcpu->map_id, processed, drops, sched);
- local_bh_enable(); /* resched point, may call do_softirq() */
+ local_bh_enable(bh); /* resched point, may call do_softirq() */
}
__set_current_state(TASK_RUNNING);
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index fb86146..0a96cd4 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -923,12 +923,13 @@ irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action) { }
static irqreturn_t
irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
{
+ unsigned int bh;
irqreturn_t ret;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
ret = action->thread_fn(action->irq, action->dev_id);
irq_finalize_oneshot(desc, action);
- local_bh_enable();
+ local_bh_enable(bh);
return ret;
}
diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c
index 936f3d1..5304ff4 100644
--- a/kernel/locking/spinlock.c
+++ b/kernel/locking/spinlock.c
@@ -103,7 +103,7 @@ void __lockfunc __raw_##op##_lock_bh(locktype##_t *lock) \
/* Careful: we must exclude softirqs too, hence the \
 * irq-disabling. We use the generic preemption-aware \
 * function: \
 */ \
flags = _raw_##op##_lock_irqsave(lock); \
- local_bh_disable(); \
+ local_bh_disable(SOFTIRQ_ALL_MASK); \
local_irq_restore(flags); \
} \
diff --git a/kernel/padata.c b/kernel/padata.c
index 8a2fbd4..a111a61 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -64,10 +64,11 @@ static int padata_cpu_hash(struct parallel_data *pd)
static void padata_parallel_worker(struct work_struct *parallel_work)
{
+ unsigned int bh;
struct padata_parallel_queue *pqueue;
LIST_HEAD(local_list);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
pqueue = container_of(parallel_work,
struct padata_parallel_queue, work);
@@ -86,7 +87,7 @@ static void padata_parallel_worker(struct work_struct *parallel_work)
padata->parallel(padata);
}
- local_bh_enable();
+ local_bh_enable(bh);
}
/**
@@ -280,14 +281,15 @@ static void padata_reorder(struct parallel_data *pd)
static void invoke_padata_reorder(struct work_struct *work)
{
+ unsigned int bh;
struct padata_parallel_queue *pqueue;
struct parallel_data *pd;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
pqueue = container_of(work, struct padata_parallel_queue, reorder_work);
pd = pqueue->pd;
padata_reorder(pd);
- local_bh_enable();
+ local_bh_enable(bh);
}
static void padata_reorder_timer(struct timer_list *t)
@@ -327,11 +329,12 @@ static void padata_reorder_timer(struct timer_list *t)
static void padata_serial_worker(struct work_struct *serial_work)
{
+ unsigned int bh;
struct padata_serial_queue *squeue;
struct parallel_data *pd;
LIST_HEAD(local_list);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
squeue = container_of(serial_work, struct padata_serial_queue, work);
pd = squeue->pd;
@@ -350,7 +353,7 @@ static void padata_serial_worker(struct work_struct *serial_work)
padata->serial(padata);
atomic_dec(&pd->refcnt);
}
- local_bh_enable();
+ local_bh_enable(bh);
}
/**
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index cb3abdc..1a8d1c7 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -1201,6 +1201,7 @@ static void rcu_torture_timer_cb(struct rcu_head *rhp)
static void rcutorture_one_extend(int *readstate, int newstate,
struct torture_random_state *trsp)
{
+ unsigned int bh;
int idxnew = -1;
int idxold = *readstate;
int statesnew = ~*readstate & newstate;
@@ -1211,7 +1212,7 @@ static void rcutorture_one_extend(int *readstate, int newstate,
/* First, put new protection in place to avoid critical-section gap. */
if (statesnew & RCUTORTURE_RDR_BH)
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
if (statesnew & RCUTORTURE_RDR_IRQ)
local_irq_disable();
if (statesnew & RCUTORTURE_RDR_PREEMPT)
@@ -1223,7 +1224,7 @@ static void rcutorture_one_extend(int *readstate, int newstate,
if (statesold & RCUTORTURE_RDR_IRQ)
local_irq_enable();
if (statesold & RCUTORTURE_RDR_BH)
- local_bh_enable();
+ local_bh_enable(bh);
if (statesold & RCUTORTURE_RDR_PREEMPT)
preempt_enable();
if (statesold & RCUTORTURE_RDR_RCU)
diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index 04fc2ed..7a57103 100644
--- a/kernel/rcu/srcutiny.c
+++ b/kernel/rcu/srcutiny.c
@@ -121,6 +121,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_unlock);
*/
void srcu_drive_gp(struct work_struct *wp)
{
+ unsigned int bh;
int idx;
struct rcu_head *lh;
struct rcu_head *rhp;
@@ -147,9 +148,9 @@ void srcu_drive_gp(struct work_struct *wp)
while (lh) {
rhp = lh;
lh = lh->next;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rhp->func(rhp);
- local_bh_enable();
+ local_bh_enable(bh);
}
/*
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 6c9866a..d31ccc7c 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -1168,6 +1168,7 @@ static void srcu_advance_state(struct srcu_struct *sp)
*/
static void srcu_invoke_callbacks(struct work_struct *work)
{
+ unsigned int bh;
bool more;
struct rcu_cblist ready_cbs;
struct rcu_head *rhp;
@@ -1193,9 +1194,9 @@ static void srcu_invoke_callbacks(struct work_struct *work)
rhp = rcu_cblist_dequeue(&ready_cbs);
for (; rhp != NULL; rhp = rcu_cblist_dequeue(&ready_cbs)) {
debug_rcu_head_unqueue(rhp);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rhp->func(rhp);
- local_bh_enable();
+ local_bh_enable(bh);
}
/*
diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
index befc932..45a7fd7 100644
--- a/kernel/rcu/tiny.c
+++ b/kernel/rcu/tiny.c
@@ -132,6 +132,7 @@ void rcu_check_callbacks(int user)
*/
static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
{
+ unsigned int bh;
struct rcu_head *next, *list;
unsigned long flags;
@@ -155,9 +156,9 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
next = list->next;
prefetch(next);
debug_rcu_head_unqueue(list);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
__rcu_reclaim("", list);
- local_bh_enable();
+ local_bh_enable(bh);
list = next;
}
}
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index a97c20e..c67d87a 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1279,8 +1279,10 @@ static void rcu_cpu_kthread(unsigned int cpu)
int spincnt;
for (spincnt = 0; spincnt < 10; spincnt++) {
+ unsigned int bh;
+
trace_rcu_utilization(TPS("Start CPU kthread@rcu_wait"));
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
*statusp = RCU_KTHREAD_RUNNING;
this_cpu_inc(rcu_cpu_kthread_loops);
local_irq_disable();
@@ -1289,7 +1291,7 @@ static void rcu_cpu_kthread(unsigned int cpu)
local_irq_enable();
if (work)
rcu_kthread_do_work();
- local_bh_enable();
+ local_bh_enable(bh);
if (*workp == 0) {
trace_rcu_utilization(TPS("End CPU kthread@rcu_wait"));
*statusp = RCU_KTHREAD_WAITING;
@@ -2320,6 +2322,8 @@ static int rcu_nocb_kthread(void *arg)
atomic_long_read(&rdp->nocb_q_count), -1);
c = cl = 0;
while (list) {
+ unsigned int bh;
+
next = list->next;
/* Wait for enqueuing to complete, if needed. */
while (next == NULL && &list->next != tail) {
@@ -2331,11 +2335,11 @@ static int rcu_nocb_kthread(void *arg)
next = list->next;
}
debug_rcu_head_unqueue(list);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
if (__rcu_reclaim(rdp->rsp->name, list))
cl++;
c++;
- local_bh_enable();
+ local_bh_enable(bh);
cond_resched_tasks_rcu_qs();
list = next;
}
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 39cb23d..06e252e 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -662,6 +662,7 @@ static void check_holdout_task(struct task_struct *t,
/* RCU-tasks kthread that detects grace periods and invokes callbacks. */
static int __noreturn rcu_tasks_kthread(void *arg)
{
+ unsigned int bh;
unsigned long flags;
struct task_struct *g, *t;
unsigned long lastreport;
@@ -808,9 +809,9 @@ static int __noreturn rcu_tasks_kthread(void *arg)
/* Invoke the callbacks. */
while (list) {
next = list->next;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
list->func(list);
- local_bh_enable();
+ local_bh_enable(bh);
list = next;
cond_resched();
}
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 730a5c9..ae9e29f 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -199,12 +199,12 @@ EXPORT_SYMBOL(__local_bh_enable_ip);
void local_bh_disable_all(void)
{
- local_bh_disable();
+ local_bh_disable(SOFTIRQ_ALL_MASK);
}
void local_bh_enable_all(void)
{
- local_bh_enable();
+ local_bh_enable(SOFTIRQ_ALL_MASK);
}
/*
@@ -359,7 +359,7 @@ void irq_enter(void)
* Prevent raise_softirq from needlessly waking up ksoftirqd
* here, as softirq will be serviced on return from interrupt.
*/
- local_bh_disable();
+ local_bh_disable(SOFTIRQ_ALL_MASK);
tick_irq_enter();
local_bh_enable_no_softirq();
}
diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index e1a549c..2c7d27a 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1855,6 +1855,7 @@ static void migrate_hrtimer_list(struct hrtimer_clock_base *old_base,
int hrtimers_dead_cpu(unsigned int scpu)
{
+ unsigned int bh;
struct hrtimer_cpu_base *old_base, *new_base;
int i;
@@ -1866,7 +1867,7 @@ int hrtimers_dead_cpu(unsigned int scpu)
* not wakeup ksoftirqd (and acquire the pi-lock) while
* holding the cpu_base lock
*/
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
local_irq_disable();
old_base = &per_cpu(hrtimer_bases, scpu);
new_base = this_cpu_ptr(&hrtimer_bases);
@@ -1894,7 +1895,7 @@ int hrtimers_dead_cpu(unsigned int scpu)
/* Check, if we got expired work to do */
__hrtimer_peek_ahead_timers();
local_irq_enable();
- local_bh_enable();
+ local_bh_enable(bh);
return 0;
}
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 1e1bbf1..35aa1fb 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -191,11 +191,11 @@ static void init_shared_classes(void)
__irq_exit(); \
local_irq_enable();
-#define SOFTIRQ_DISABLE local_bh_disable
-#define SOFTIRQ_ENABLE local_bh_enable
+#define SOFTIRQ_DISABLE local_bh_disable_all
+#define SOFTIRQ_ENABLE local_bh_enable_all
#define SOFTIRQ_ENTER() \
- local_bh_disable(); \
+ local_bh_disable_all(); \
local_irq_disable(); \
lockdep_softirq_enter(); \
WARN_ON(!in_softirq());
@@ -203,7 +203,7 @@ static void init_shared_classes(void)
#define SOFTIRQ_EXIT() \
lockdep_softirq_exit(); \
local_irq_enable(); \
- local_bh_enable();
+ local_bh_enable_all();
/*
* Shortcuts for lock/unlock API variants, to keep
diff --git a/net/ax25/ax25_subr.c b/net/ax25/ax25_subr.c
index 038b109..b63d27a 100644
--- a/net/ax25/ax25_subr.c
+++ b/net/ax25/ax25_subr.c
@@ -262,6 +262,7 @@ void ax25_calculate_rtt(ax25_cb *ax25)
void ax25_disconnect(ax25_cb *ax25, int reason)
{
+ unsigned int bh;
ax25_clear_queues(ax25);
if (!ax25->sk || !sock_flag(ax25->sk, SOCK_DESTROY))
@@ -276,7 +277,7 @@ void ax25_disconnect(ax25_cb *ax25, int reason)
ax25_link_failed(ax25, reason);
if (ax25->sk != NULL) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
bh_lock_sock(ax25->sk);
ax25->sk->sk_state = TCP_CLOSE;
ax25->sk->sk_err = reason;
@@ -286,6 +287,6 @@ void ax25_disconnect(ax25_cb *ax25, int reason)
sock_set_flag(ax25->sk, SOCK_DEAD);
}
bh_unlock_sock(ax25->sk);
- local_bh_enable();
+ local_bh_enable(bh);
}
}
diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
index 502f663..38fc0ab 100644
--- a/net/bridge/br_fdb.c
+++ b/net/bridge/br_fdb.c
@@ -847,6 +847,7 @@ static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge *br,
struct net_bridge_port *p, const unsigned char *addr,
u16 nlh_flags, u16 vid)
{
+ unsigned int bh;
int err = 0;
if (ndm->ndm_flags & NTF_USE) {
@@ -855,11 +856,11 @@ static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge *br,
br->dev->name);
return -EINVAL;
}
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
rcu_read_lock();
br_fdb_update(br, p, addr, vid, true);
rcu_read_unlock();
- local_bh_enable();
+ local_bh_enable(bh);
} else if (ndm->ndm_flags & NTF_EXT_LEARNED) {
err = br_fdb_external_learn_add(br, p, addr, vid, true);
} else {
diff --git a/net/can/gw.c b/net/can/gw.c
index faa3da8..48484e2 100644
--- a/net/can/gw.c
+++ b/net/can/gw.c
@@ -810,6 +810,7 @@ static int cgw_parse_attr(struct nlmsghdr *nlh, struct cf_mod *mod,
static int cgw_create_job(struct sk_buff *skb, struct nlmsghdr *nlh,
struct netlink_ext_ack *extack)
{
+ unsigned int bh;
struct net *net = sock_net(skb->sk);
struct rtcanmsg *r;
struct cgw_job *gwj;
@@ -851,9 +852,9 @@ static int cgw_create_job(struct sk_buff *skb, struct nlmsghdr *nlh,
return -EINVAL;
/* update modifications with disabled softirq & quit */
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
memcpy(&gwj->mod, &mod, sizeof(mod));
- local_bh_enable();
+ local_bh_enable(bh);
return 0;
}
}
diff --git a/net/core/dev.c b/net/core/dev.c
index 2898fb8..5103840 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3845,6 +3845,7 @@ EXPORT_SYMBOL(dev_queue_xmit_accel);
int dev_direct_xmit(struct sk_buff *skb, u16 queue_id)
{
+ unsigned int bh;
struct net_device *dev = skb->dev;
struct sk_buff *orig_skb = skb;
struct netdev_queue *txq;
@@ -3862,14 +3863,14 @@ int dev_direct_xmit(struct sk_buff *skb, u16 queue_id)
skb_set_queue_mapping(skb, queue_id);
txq = skb_get_tx_queue(dev, skb);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
HARD_TX_LOCK(dev, txq, smp_processor_id());
if (!netif_xmit_frozen_or_drv_stopped(txq))
ret = netdev_start_xmit(skb, dev, txq, false);
HARD_TX_UNLOCK(dev, txq);
- local_bh_enable();
+ local_bh_enable(bh);
if (!dev_xmit_complete(ret))
kfree_skb(skb);
@@ -5206,10 +5207,11 @@ DEFINE_PER_CPU(struct work_struct, flush_works);
/* Network device is going away, flush any packets still pending */
static void flush_backlog(struct work_struct *work)
{
+ unsigned int bh;
struct sk_buff *skb, *tmp;
struct softnet_data *sd;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
sd = this_cpu_ptr(&softnet_data);
local_irq_disable();
@@ -5231,7 +5233,7 @@ static void flush_backlog(struct work_struct *work)
input_queue_head_incr(sd);
}
}
- local_bh_enable();
+ local_bh_enable(bh);
}
static void flush_all_backlogs(void)
@@ -5975,6 +5977,7 @@ static struct napi_struct *napi_by_id(unsigned int napi_id)
static void busy_poll_stop(struct napi_struct *napi, void *have_poll_lock)
{
+ unsigned int bh;
int rc;
/* Busy polling means there is a high chance device driver hard irq
@@ -5989,7 +5992,7 @@ static void busy_poll_stop(struct napi_struct *napi, void *have_poll_lock)
clear_bit(NAPI_STATE_MISSED, &napi->state);
clear_bit(NAPI_STATE_IN_BUSY_POLL, &napi->state);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
/* All we really want here is to re-enable device interrupts.
* Ideally, a new ndo_busy_poll_stop() could avoid another round.
@@ -5999,13 +6002,14 @@ static void busy_poll_stop(struct napi_struct *napi, void *have_poll_lock)
netpoll_poll_unlock(have_poll_lock);
if (rc == BUSY_POLL_BUDGET)
__napi_schedule(napi);
- local_bh_enable();
+ local_bh_enable(bh);
}
void napi_busy_loop(unsigned int napi_id,
bool (*loop_end)(void *, unsigned long),
void *loop_end_arg)
{
+ unsigned int bh;
unsigned long start_time = loop_end ? busy_loop_current_time() : 0;
int (*napi_poll)(struct napi_struct *napi, int budget);
void *have_poll_lock = NULL;
@@ -6024,7 +6028,7 @@ void napi_busy_loop(unsigned int napi_id,
for (;;) {
int work = 0;
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
if (!napi_poll) {
unsigned long val = READ_ONCE(napi->state);
@@ -6047,7 +6051,7 @@ void napi_busy_loop(unsigned int napi_id,
if (work > 0)
__NET_ADD_STATS(dev_net(napi->dev),
LINUX_MIB_BUSYPOLLRXPACKETS, work);
- local_bh_enable();
+ local_bh_enable(bh);
if (!loop_end || loop_end(loop_end_arg, start_time))
break;
diff --git a/net/core/gen_estimator.c b/net/core/gen_estimator.c
index e4e442d..9da3c36 100644
--- a/net/core/gen_estimator.c
+++ b/net/core/gen_estimator.c
@@ -132,6 +132,7 @@ int gen_new_estimator(struct gnet_stats_basic_packed *bstats,
seqcount_t *running,
struct nlattr *opt)
{
+ unsigned int bh;
struct gnet_estimator *parm = nla_data(opt);
struct net_rate_estimator *old, *est;
struct gnet_stats_basic_packed b;
@@ -161,10 +162,10 @@ int gen_new_estimator(struct gnet_stats_basic_packed *bstats,
est->cpu_bstats = cpu_bstats;
if (lock)
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
est_fetch_counters(est, &b);
if (lock)
- local_bh_enable();
+ local_bh_enable(bh);
est->last_bytes = b.bytes;
est->last_packets = b.packets;
diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index 98cc21c..ec55470 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -1068,7 +1068,7 @@ int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb)
neigh_probe(neigh);
else
write_unlock(&neigh->lock);
- local_bh_enable();
+ local_bh_enable(0);
return rc;
out_dead:
diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index 6e2bea0..1c0d2bd 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -3294,6 +3294,7 @@ static void pktgen_wait_for_skb(struct pktgen_dev *pkt_dev)
static void pktgen_xmit(struct pktgen_dev *pkt_dev)
{
+ unsigned int bh;
unsigned int burst = READ_ONCE(pkt_dev->burst);
struct net_device *odev = pkt_dev->odev;
struct netdev_queue *txq;
@@ -3338,7 +3339,7 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
skb = pkt_dev->skb;
skb->protocol = eth_type_trans(skb, skb->dev);
refcount_add(burst, &skb->users);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
do {
ret = netif_receive_skb(skb);
if (ret == NET_RX_DROP)
@@ -3362,7 +3363,7 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
} while (--burst > 0);
goto out; /* Skips xmit_mode M_START_XMIT */
} else if (pkt_dev->xmit_mode == M_QUEUE_XMIT) {
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
refcount_inc(&pkt_dev->skb->users);
ret = dev_queue_xmit(pkt_dev->skb);
@@ -3395,7 +3396,7 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
txq = skb_get_tx_queue(odev, pkt_dev->skb);
- local_bh_disable();
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
HARD_TX_LOCK(odev, txq, smp_processor_id());
@@ -3439,7 +3440,7 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
HARD_TX_UNLOCK(odev, txq);
out:
- local_bh_enable();
+ local_bh_enable(bh);
/* If pkt_dev->count is zero, then run forever */
if ((pkt_dev->count != 0) && (pkt_dev->sofar >= pkt_dev->count)) {
--
2.7.4
This pair of functions is implemented on top of spin_[un]lock_bh(), which
is going to handle a softirq mask in order to apply fine-grained vector
disablement. The lock function is going to return the mask of vectors
enabled prior to the last call to local_bh_disable(), following a model
similar to that of local_irq_save/restore(). Subsequent calls to
local_bh_disable() and friends can then stack up:
bh = local_bh_disable(vec_mask);
bh2 = tg3_full_lock(...) {
return spin_lock_bh(...)
}
...
tg3_full_unlock(..., bh2) {
spin_unlock_bh(..., bh2);
}
local_bh_enable(bh);
To prepare for that, make tg3_full_lock() able to return a saved
vector-enabled mask and pass it back to tg3_full_unlock(). We'll plug it
into spin_lock_bh() in a subsequent patch.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
drivers/net/ethernet/broadcom/tg3.c | 160 +++++++++++++++++++++---------------
1 file changed, 95 insertions(+), 65 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index e6f28c7..765185c 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -6156,8 +6156,8 @@ static void tg3_refclk_write(struct tg3 *tp, u64 newval)
tw32_f(TG3_EAV_REF_CLCK_CTL, clock_ctl | TG3_EAV_REF_CLCK_CTL_RESUME);
}
-static inline void tg3_full_lock(struct tg3 *tp, int irq_sync);
-static inline void tg3_full_unlock(struct tg3 *tp);
+static inline unsigned int tg3_full_lock(struct tg3 *tp, int irq_sync);
+static inline void tg3_full_unlock(struct tg3 *tp, unsigned int bh);
static int tg3_get_ts_info(struct net_device *dev, struct ethtool_ts_info *info)
{
struct tg3 *tp = netdev_priv(dev);
@@ -6189,6 +6189,7 @@ static int tg3_get_ts_info(struct net_device *dev, struct ethtool_ts_info *info)
static int tg3_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
{
struct tg3 *tp = container_of(ptp, struct tg3, ptp_info);
+ unsigned int bh;
bool neg_adj = false;
u32 correction = 0;
@@ -6208,7 +6209,7 @@ static int tg3_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
correction = div_u64((u64)ppb * (1 << 24), 1000000000ULL) &
TG3_EAV_REF_CLK_CORRECT_MASK;
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
if (correction)
tw32(TG3_EAV_REF_CLK_CORRECT_CTL,
@@ -6217,7 +6218,7 @@ static int tg3_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
else
tw32(TG3_EAV_REF_CLK_CORRECT_CTL, 0);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
return 0;
}
@@ -6225,10 +6226,11 @@ static int tg3_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
static int tg3_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
{
struct tg3 *tp = container_of(ptp, struct tg3, ptp_info);
+ unsigned int bh;
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tp->ptp_adjust += delta;
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
return 0;
}
@@ -6236,12 +6238,13 @@ static int tg3_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
static int tg3_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
{
u64 ns;
+ unsigned int bh;
struct tg3 *tp = container_of(ptp, struct tg3, ptp_info);
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
ns = tg3_refclk_read(tp);
ns += tp->ptp_adjust;
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
*ts = ns_to_timespec64(ns);
@@ -6253,13 +6256,14 @@ static int tg3_ptp_settime(struct ptp_clock_info *ptp,
{
u64 ns;
struct tg3 *tp = container_of(ptp, struct tg3, ptp_info);
+ unsigned int bh;
ns = timespec64_to_ns(ts);
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tg3_refclk_write(tp, ns);
tp->ptp_adjust = 0;
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
return 0;
}
@@ -6268,6 +6272,7 @@ static int tg3_ptp_enable(struct ptp_clock_info *ptp,
struct ptp_clock_request *rq, int on)
{
struct tg3 *tp = container_of(ptp, struct tg3, ptp_info);
+ unsigned int bh;
u32 clock_ctl;
int rval = 0;
@@ -6276,7 +6281,7 @@ static int tg3_ptp_enable(struct ptp_clock_info *ptp,
if (rq->perout.index != 0)
return -EINVAL;
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
clock_ctl = tr32(TG3_EAV_REF_CLCK_CTL);
clock_ctl &= ~TG3_EAV_CTL_TSYNC_GPIO_MASK;
@@ -6313,7 +6318,7 @@ static int tg3_ptp_enable(struct ptp_clock_info *ptp,
}
err_out:
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
return rval;
default:
@@ -7428,7 +7433,7 @@ static inline void tg3_netif_start(struct tg3 *tp)
tg3_enable_ints(tp);
}
-static void tg3_irq_quiesce(struct tg3 *tp)
+static unsigned int tg3_irq_quiesce(struct tg3 *tp, unsigned int bh)
__releases(tp->lock)
__acquires(tp->lock)
{
@@ -7445,6 +7450,8 @@ static void tg3_irq_quiesce(struct tg3 *tp)
synchronize_irq(tp->napi[i].irq_vec);
spin_lock_bh(&tp->lock);
+
+ return 0;
}
/* Fully shutdown all tg3 driver activity elsewhere in the system.
@@ -7452,14 +7459,18 @@ static void tg3_irq_quiesce(struct tg3 *tp)
* with as well. Most of the time, this is not necessary except when
* shutting down the device.
*/
-static inline void tg3_full_lock(struct tg3 *tp, int irq_sync)
+static inline unsigned int tg3_full_lock(struct tg3 *tp, int irq_sync)
{
+ unsigned int bh = 0;
+
spin_lock_bh(&tp->lock);
if (irq_sync)
- tg3_irq_quiesce(tp);
+ bh = tg3_irq_quiesce(tp, bh);
+
+ return bh;
}
-static inline void tg3_full_unlock(struct tg3 *tp)
+static inline void tg3_full_unlock(struct tg3 *tp, unsigned int bh)
{
spin_unlock_bh(&tp->lock);
}
@@ -11184,10 +11195,11 @@ static int tg3_restart_hw(struct tg3 *tp, bool reset_phy)
static void tg3_reset_task(struct work_struct *work)
{
struct tg3 *tp = container_of(work, struct tg3, reset_task);
+ unsigned int bh;
int err;
rtnl_lock();
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
if (!netif_running(tp->dev)) {
tg3_flag_clear(tp, RESET_TASK_PENDING);
@@ -11219,7 +11231,7 @@ static void tg3_reset_task(struct work_struct *work)
tg3_netif_start(tp);
out:
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
if (!err)
tg3_phy_start(tp);
@@ -11350,6 +11362,7 @@ static int tg3_test_msi(struct tg3 *tp)
{
int err;
u16 pci_cmd;
+ unsigned int bh;
if (!tg3_flag(tp, USING_MSI))
return 0;
@@ -11391,12 +11404,12 @@ static int tg3_test_msi(struct tg3 *tp)
/* Need to reset the chip because the MSI cycle may have terminated
* with Master Abort.
*/
- tg3_full_lock(tp, 1);
+ bh = tg3_full_lock(tp, 1);
tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
err = tg3_init_hw(tp, true);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
if (err)
free_irq(tp->napi[0].irq_vec, &tp->napi[0]);
@@ -11565,6 +11578,7 @@ static int tg3_start(struct tg3 *tp, bool reset_phy, bool test_irq,
bool init)
{
struct net_device *dev = tp->dev;
+ unsigned int bh;
int i, err;
/*
@@ -11598,7 +11612,7 @@ static int tg3_start(struct tg3 *tp, bool reset_phy, bool test_irq,
}
}
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
if (init)
tg3_ape_driver_state_change(tp, RESET_KIND_INIT);
@@ -11609,7 +11623,7 @@ static int tg3_start(struct tg3 *tp, bool reset_phy, bool test_irq,
tg3_free_rings(tp);
}
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
if (err)
goto out_free_irq;
@@ -11618,10 +11632,10 @@ static int tg3_start(struct tg3 *tp, bool reset_phy, bool test_irq,
err = tg3_test_msi(tp);
if (err) {
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
tg3_free_rings(tp);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
goto out_napi_fini;
}
@@ -11638,7 +11652,7 @@ static int tg3_start(struct tg3 *tp, bool reset_phy, bool test_irq,
tg3_hwmon_open(tp);
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tg3_timer_start(tp);
tg3_flag_set(tp, INIT_COMPLETE);
@@ -11646,7 +11660,7 @@ static int tg3_start(struct tg3 *tp, bool reset_phy, bool test_irq,
tg3_ptp_resume(tp);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
netif_tx_start_all_queues(dev);
@@ -11679,6 +11693,7 @@ static int tg3_start(struct tg3 *tp, bool reset_phy, bool test_irq,
static void tg3_stop(struct tg3 *tp)
{
int i;
+ unsigned int bh;
tg3_reset_task_cancel(tp);
tg3_netif_stop(tp);
@@ -11689,7 +11704,7 @@ static void tg3_stop(struct tg3 *tp)
tg3_phy_stop(tp);
- tg3_full_lock(tp, 1);
+ bh = tg3_full_lock(tp, 1);
tg3_disable_ints(tp);
@@ -11697,7 +11712,7 @@ static void tg3_stop(struct tg3 *tp)
tg3_free_rings(tp);
tg3_flag_clear(tp, INIT_COMPLETE);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
for (i = tp->irq_cnt - 1; i >= 0; i--) {
struct tg3_napi *tnapi = &tp->napi[i];
@@ -11714,6 +11729,7 @@ static void tg3_stop(struct tg3 *tp)
static int tg3_open(struct net_device *dev)
{
struct tg3 *tp = netdev_priv(dev);
+ unsigned int bh;
int err;
if (tp->pcierr_recovery) {
@@ -11750,12 +11766,12 @@ static int tg3_open(struct net_device *dev)
if (err)
return err;
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tg3_disable_ints(tp);
tg3_flag_clear(tp, INIT_COMPLETE);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
err = tg3_start(tp,
!(tp->phy_flags & TG3_PHYFLG_KEEP_LINK_ON_PWRDN),
@@ -11968,6 +11984,7 @@ static void tg3_get_regs(struct net_device *dev,
struct ethtool_regs *regs, void *_p)
{
struct tg3 *tp = netdev_priv(dev);
+ unsigned int bh;
regs->version = 0;
@@ -11976,11 +11993,11 @@ static void tg3_get_regs(struct net_device *dev,
if (tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER)
return;
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tg3_dump_legacy_regs(tp, (u32 *)_p);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
}
static int tg3_get_eeprom_len(struct net_device *dev)
@@ -12217,6 +12234,7 @@ static int tg3_set_link_ksettings(struct net_device *dev,
{
struct tg3 *tp = netdev_priv(dev);
u32 speed = cmd->base.speed;
+ unsigned int bh;
u32 advertising;
if (tg3_flag(tp, USE_PHYLIB)) {
@@ -12282,7 +12300,7 @@ static int tg3_set_link_ksettings(struct net_device *dev,
}
}
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tp->link_config.autoneg = cmd->base.autoneg;
if (cmd->base.autoneg == AUTONEG_ENABLE) {
@@ -12303,7 +12321,7 @@ static int tg3_set_link_ksettings(struct net_device *dev,
if (netif_running(dev))
tg3_setup_phy(tp, true);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
return 0;
}
@@ -12490,6 +12508,7 @@ static void tg3_get_pauseparam(struct net_device *dev, struct ethtool_pauseparam
static int tg3_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam *epause)
{
struct tg3 *tp = netdev_priv(dev);
+ unsigned int bh;
int err = 0;
if (tp->link_config.autoneg == AUTONEG_ENABLE)
@@ -12564,7 +12583,7 @@ static int tg3_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam
irq_sync = 1;
}
- tg3_full_lock(tp, irq_sync);
+ bh = tg3_full_lock(tp, irq_sync);
if (epause->autoneg)
tg3_flag_set(tp, PAUSE_AUTONEG);
@@ -12586,7 +12605,7 @@ static int tg3_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam
tg3_netif_start(tp);
}
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
}
tp->phy_flags |= TG3_PHYFLG_USER_CONFIGURED;
@@ -12662,6 +12681,7 @@ static int tg3_set_rxfh(struct net_device *dev, const u32 *indir, const u8 *key,
const u8 hfunc)
{
struct tg3 *tp = netdev_priv(dev);
+ unsigned int bh;
size_t i;
/* We require at least one supported parameter to be changed and no
@@ -12683,9 +12703,9 @@ static int tg3_set_rxfh(struct net_device *dev, const u32 *indir, const u8 *key,
/* It is legal to write the indirection
* table while the device is running.
*/
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tg3_rss_write_indir_tbl(tp);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
return 0;
}
@@ -13762,6 +13782,7 @@ static void tg3_self_test(struct net_device *dev, struct ethtool_test *etest,
u64 *data)
{
struct tg3 *tp = netdev_priv(dev);
+ unsigned int bh;
bool doextlpbk = etest->flags & ETH_TEST_FL_EXTERNAL_LB;
if (tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER) {
@@ -13792,7 +13813,7 @@ static void tg3_self_test(struct net_device *dev, struct ethtool_test *etest,
irq_sync = 1;
}
- tg3_full_lock(tp, irq_sync);
+ bh = tg3_full_lock(tp, irq_sync);
tg3_halt(tp, RESET_KIND_SUSPEND, 1);
err = tg3_nvram_lock(tp);
tg3_halt_cpu(tp, RX_CPU_BASE);
@@ -13820,14 +13841,14 @@ static void tg3_self_test(struct net_device *dev, struct ethtool_test *etest,
if (tg3_test_loopback(tp, data, doextlpbk))
etest->flags |= ETH_TEST_FL_FAILED;
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
if (tg3_test_interrupt(tp) != 0) {
etest->flags |= ETH_TEST_FL_FAILED;
data[TG3_INTERRUPT_TEST] = 1;
}
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
if (netif_running(dev)) {
@@ -13837,7 +13858,7 @@ static void tg3_self_test(struct net_device *dev, struct ethtool_test *etest,
tg3_netif_start(tp);
}
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
if (irq_sync && !err2)
tg3_phy_start(tp);
@@ -14071,6 +14092,7 @@ static int tg3_get_coalesce(struct net_device *dev, struct ethtool_coalesce *ec)
static int tg3_set_coalesce(struct net_device *dev, struct ethtool_coalesce *ec)
{
struct tg3 *tp = netdev_priv(dev);
+ unsigned int bh;
u32 max_rxcoal_tick_int = 0, max_txcoal_tick_int = 0;
u32 max_stat_coal_ticks = 0, min_stat_coal_ticks = 0;
@@ -14107,9 +14129,9 @@ static int tg3_set_coalesce(struct net_device *dev, struct ethtool_coalesce *ec)
tp->coal.stats_block_coalesce_usecs = ec->stats_block_coalesce_usecs;
if (netif_running(dev)) {
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
__tg3_set_coalesce(tp, &tp->coal);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
}
return 0;
}
@@ -14117,6 +14139,7 @@ static int tg3_set_coalesce(struct net_device *dev, struct ethtool_coalesce *ec)
static int tg3_set_eee(struct net_device *dev, struct ethtool_eee *edata)
{
struct tg3 *tp = netdev_priv(dev);
+ unsigned int bh;
if (!(tp->phy_flags & TG3_PHYFLG_EEE_CAP)) {
netdev_warn(tp->dev, "Board does not support EEE!\n");
@@ -14142,10 +14165,10 @@ static int tg3_set_eee(struct net_device *dev, struct ethtool_eee *edata)
tg3_warn_mgmt_link_flap(tp);
if (netif_running(tp->dev)) {
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tg3_setup_eee(tp);
tg3_phy_reset(tp);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
}
return 0;
@@ -14221,13 +14244,14 @@ static void tg3_get_stats64(struct net_device *dev,
static void tg3_set_rx_mode(struct net_device *dev)
{
struct tg3 *tp = netdev_priv(dev);
+ unsigned int bh;
if (!netif_running(dev))
return;
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
__tg3_set_rx_mode(dev);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
}
static inline void tg3_set_mtu(struct net_device *dev, struct tg3 *tp,
@@ -14256,6 +14280,7 @@ static int tg3_change_mtu(struct net_device *dev, int new_mtu)
struct tg3 *tp = netdev_priv(dev);
int err;
bool reset_phy = false;
+ unsigned int bh;
if (!netif_running(dev)) {
/* We'll just catch it later when the
@@ -14271,7 +14296,7 @@ static int tg3_change_mtu(struct net_device *dev, int new_mtu)
tg3_set_mtu(dev, tp, new_mtu);
- tg3_full_lock(tp, 1);
+ bh = tg3_full_lock(tp, 1);
tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
@@ -14289,7 +14314,7 @@ static int tg3_change_mtu(struct net_device *dev, int new_mtu)
if (!err)
tg3_netif_start(tp);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
if (!err)
tg3_phy_start(tp);
@@ -17663,6 +17688,7 @@ static int tg3_init_one(struct pci_dev *pdev,
char str[40];
u64 dma_mask, persist_dma_mask;
netdev_features_t features = 0;
+ unsigned int bh;
printk_once(KERN_INFO "%s\n", version);
@@ -17945,10 +17971,10 @@ static int tg3_init_one(struct pci_dev *pdev,
*/
if ((tr32(HOSTCC_MODE) & HOSTCC_MODE_ENABLE) ||
(tr32(WDMAC_MODE) & WDMAC_MODE_ENABLE)) {
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tw32(MEMARB_MODE, MEMARB_MODE_ENABLE);
tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
}
err = tg3_test_dma(tp);
@@ -18085,6 +18111,7 @@ static int tg3_suspend(struct device *device)
struct pci_dev *pdev = to_pci_dev(device);
struct net_device *dev = pci_get_drvdata(pdev);
struct tg3 *tp = netdev_priv(dev);
+ unsigned int bh;
int err = 0;
rtnl_lock();
@@ -18098,22 +18125,22 @@ static int tg3_suspend(struct device *device)
tg3_timer_stop(tp);
- tg3_full_lock(tp, 1);
+ bh = tg3_full_lock(tp, 1);
tg3_disable_ints(tp);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
netif_device_detach(dev);
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
tg3_flag_clear(tp, INIT_COMPLETE);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
err = tg3_power_down_prepare(tp);
if (err) {
int err2;
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tg3_flag_set(tp, INIT_COMPLETE);
err2 = tg3_restart_hw(tp, true);
@@ -18126,7 +18153,7 @@ static int tg3_suspend(struct device *device)
tg3_netif_start(tp);
out:
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
if (!err2)
tg3_phy_start(tp);
@@ -18142,6 +18169,7 @@ static int tg3_resume(struct device *device)
struct pci_dev *pdev = to_pci_dev(device);
struct net_device *dev = pci_get_drvdata(pdev);
struct tg3 *tp = netdev_priv(dev);
+ unsigned int bh;
int err = 0;
rtnl_lock();
@@ -18151,7 +18179,7 @@ static int tg3_resume(struct device *device)
netif_device_attach(dev);
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tg3_ape_driver_state_change(tp, RESET_KIND_INIT);
@@ -18166,7 +18194,7 @@ static int tg3_resume(struct device *device)
tg3_netif_start(tp);
out:
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
if (!err)
tg3_phy_start(tp);
@@ -18210,6 +18238,7 @@ static pci_ers_result_t tg3_io_error_detected(struct pci_dev *pdev,
struct net_device *netdev = pci_get_drvdata(pdev);
struct tg3 *tp = netdev_priv(netdev);
pci_ers_result_t err = PCI_ERS_RESULT_NEED_RESET;
+ unsigned int bh;
netdev_info(netdev, "PCI I/O error detected\n");
@@ -18235,9 +18264,9 @@ static pci_ers_result_t tg3_io_error_detected(struct pci_dev *pdev,
netif_device_detach(netdev);
/* Clean up software state, even if MMIO is blocked */
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tg3_halt(tp, RESET_KIND_SHUTDOWN, 0);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
done:
if (state == pci_channel_io_perm_failure) {
@@ -18315,6 +18344,7 @@ static void tg3_io_resume(struct pci_dev *pdev)
{
struct net_device *netdev = pci_get_drvdata(pdev);
struct tg3 *tp = netdev_priv(netdev);
+ unsigned int bh;
int err;
rtnl_lock();
@@ -18322,12 +18352,12 @@ static void tg3_io_resume(struct pci_dev *pdev)
if (!netdev || !netif_running(netdev))
goto done;
- tg3_full_lock(tp, 0);
+ bh = tg3_full_lock(tp, 0);
tg3_ape_driver_state_change(tp, RESET_KIND_INIT);
tg3_flag_set(tp, INIT_COMPLETE);
err = tg3_restart_hw(tp, true);
if (err) {
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
netdev_err(netdev, "Cannot restart hardware after reset.\n");
goto done;
}
@@ -18338,7 +18368,7 @@ static void tg3_io_resume(struct pci_dev *pdev)
tg3_netif_start(tp);
- tg3_full_unlock(tp);
+ tg3_full_unlock(tp, bh);
tg3_phy_start(tp);
--
2.7.4
The vector pending bits will soon need to be matched against the vector
enabled bits. As such, a plain reset of the whole pending mask is not
going to be suitable anymore. Instead we'll need to be able to clear
specific bits. Update the relevant API to allow that.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
arch/s390/include/asm/hardirq.h | 2 +-
include/linux/interrupt.h | 3 ++-
kernel/softirq.c | 2 +-
3 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/s390/include/asm/hardirq.h b/arch/s390/include/asm/hardirq.h
index 3103680..84ad789 100644
--- a/arch/s390/include/asm/hardirq.h
+++ b/arch/s390/include/asm/hardirq.h
@@ -14,7 +14,7 @@
#include <asm/lowcore.h>
#define local_softirq_pending() (S390_lowcore.softirq_data)
-#define softirq_pending_set(x) (S390_lowcore.softirq_data = (x))
+#define softirq_pending_nand(x) (S390_lowcore.softirq_data &= ~(x))
#define softirq_pending_or(x) (S390_lowcore.softirq_data |= (x))
#define __ARCH_IRQ_STAT
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index fc88f0d..a577a54 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -466,6 +466,7 @@ enum
};
#define SOFTIRQ_STOP_IDLE_MASK (~(1 << RCU_SOFTIRQ))
+#define SOFTIRQ_ALL_MASK (BIT(NR_SOFTIRQS) - 1)
#ifndef local_softirq_pending
@@ -474,7 +475,7 @@ enum
#endif
#define local_softirq_pending() (__this_cpu_read(local_softirq_data_ref))
-#define softirq_pending_set(x) (__this_cpu_write(local_softirq_data_ref, (x)))
+#define softirq_pending_nand(x) (__this_cpu_and(local_softirq_data_ref, ~(x)))
#define softirq_pending_or(x) (__this_cpu_or(local_softirq_data_ref, (x)))
#endif /* local_softirq_pending */
diff --git a/kernel/softirq.c b/kernel/softirq.c
index c39af4a..288e007 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -271,7 +271,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
restart:
/* Reset the pending bitmask before enabling irqs */
- softirq_pending_set(0);
+ softirq_pending_nand(SOFTIRQ_ALL_MASK);
local_irq_enable();
--
2.7.4
This pair of functions is implemented on top of __local_bh_disable_ip(),
which is going to handle a softirq mask in order to apply fine-grained
vector disablement. The lock function is going to return the mask of
vectors enabled prior to the last call to local_bh_disable(), following a
model similar to that of local_irq_save/restore(). Subsequent calls to
local_bh_disable() and friends can then stack up:
bh = local_bh_disable(vec_mask);
bh2 = spin_lock_bh(...);
...
spin_unlock_bh(..., bh2);
local_bh_enable(bh);
To prepare for that, make spin_lock_bh() able to return a saved vector
enabled mask and pass it back to spin_unlock_bh(). We'll plug it to
__local_bh_disable_ip() in a subsequent patch.
Thanks to Coccinelle, which helped a lot with scripts such as the
following:
@spin exists@
identifier func;
expression e;
@@
func(...) {
+ unsigned int bh;
...
- spin_lock_bh(e);
+ bh = spin_lock_bh(e, SOFTIRQ_ALL_MASK);
...
- spin_unlock_bh(e);
+ spin_unlock_bh(e, bh);
...
}
@raw_spin exists@
identifier func;
expression e;
@@
func(...) {
+ unsigned int bh;
...
- raw_spin_lock_bh(e);
+ bh = raw_spin_lock_bh(e, SOFTIRQ_ALL_MASK);
...
- raw_spin_unlock_bh(e);
+ raw_spin_unlock_bh(e, bh);
...
}
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
arch/s390/mm/pgalloc.c | 24 +-
arch/xtensa/platforms/iss/console.c | 10 +-
arch/xtensa/platforms/iss/network.c | 28 +-
block/genhd.c | 15 +-
crypto/ansi_cprng.c | 10 +-
crypto/mcryptd.c | 5 +-
drivers/block/rsxx/core.c | 5 +-
drivers/block/rsxx/cregs.c | 34 ++-
drivers/block/rsxx/dma.c | 36 +--
drivers/block/umem.c | 10 +-
drivers/connector/cn_queue.c | 15 +-
drivers/connector/connector.c | 15 +-
drivers/crypto/atmel-aes.c | 5 +-
drivers/crypto/atmel-sha.c | 5 +-
drivers/crypto/atmel-tdes.c | 5 +-
drivers/crypto/axis/artpec6_crypto.c | 10 +-
drivers/crypto/caam/jr.c | 7 +-
drivers/crypto/cavium/cpt/cptvf_reqmanager.c | 22 +-
drivers/crypto/cavium/nitrox/nitrox_reqmgr.c | 25 +-
drivers/crypto/ccree/cc_request_mgr.c | 31 ++-
drivers/crypto/chelsio/chtls/chtls_cm.c | 11 +-
drivers/crypto/chelsio/chtls/chtls_hw.c | 10 +-
drivers/crypto/chelsio/chtls/chtls_main.c | 9 +-
drivers/crypto/inside-secure/safexcel.c | 14 +-
drivers/crypto/inside-secure/safexcel_cipher.c | 15 +-
drivers/crypto/inside-secure/safexcel_hash.c | 15 +-
drivers/crypto/marvell/cesa.c | 15 +-
drivers/crypto/marvell/tdma.c | 13 +-
drivers/crypto/mediatek/mtk-aes.c | 5 +-
drivers/crypto/mediatek/mtk-sha.c | 5 +-
drivers/crypto/mxc-scc.c | 10 +-
drivers/crypto/nx/nx-842.c | 10 +-
drivers/crypto/omap-aes.c | 15 +-
drivers/crypto/omap-des.c | 5 +-
drivers/crypto/omap-sham.c | 10 +-
drivers/crypto/qat/qat_common/adf_transport.c | 15 +-
drivers/crypto/qce/core.c | 5 +-
drivers/crypto/stm32/stm32-cryp.c | 5 +-
drivers/crypto/stm32/stm32-hash.c | 5 +-
drivers/crypto/stm32/stm32_crc32.c | 5 +-
drivers/crypto/sunxi-ss/sun4i-ss-hash.c | 5 +-
drivers/crypto/sunxi-ss/sun4i-ss-prng.c | 5 +-
drivers/dma/at_xdmac.c | 5 +-
drivers/dma/dmaengine.c | 5 +-
drivers/dma/fsldma.c | 44 ++--
drivers/dma/ioat/dma.c | 59 +++--
drivers/dma/ioat/dma.h | 1 +
drivers/dma/ioat/init.c | 28 +-
drivers/dma/iop-adma.c | 60 +++--
drivers/dma/mv_xor.c | 32 ++-
drivers/dma/mv_xor_v2.c | 24 +-
drivers/dma/ppc4xx/adma.c | 74 +++---
drivers/dma/timb_dma.c | 35 ++-
drivers/dma/txx9dmac.c | 50 ++--
drivers/dma/xgene-dma.c | 20 +-
drivers/dma/xilinx/zynqmp_dma.c | 32 ++-
drivers/gpu/drm/drm_lock.c | 35 +--
drivers/gpu/drm/i915/gvt/debugfs.c | 5 +-
drivers/gpu/drm/i915/gvt/sched_policy.c | 5 +-
drivers/gpu/drm/msm/adreno/a6xx_hfi.c | 9 +-
drivers/gpu/drm/vmwgfx/vmwgfx_irq.c | 10 +-
drivers/hsi/clients/cmt_speech.c | 61 +++--
drivers/hsi/clients/ssi_protocol.c | 135 +++++-----
drivers/hsi/controllers/omap_ssi_port.c | 60 +++--
drivers/infiniband/core/addr.c | 29 ++-
drivers/infiniband/hw/bnxt_re/qplib_fp.c | 10 +-
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 7 +-
drivers/infiniband/hw/mlx4/main.c | 35 ++-
drivers/infiniband/sw/rxe/rxe_mcast.c | 33 +--
drivers/infiniband/sw/rxe/rxe_mmap.c | 19 +-
drivers/infiniband/sw/rxe/rxe_net.c | 24 +-
drivers/infiniband/sw/rxe/rxe_queue.c | 5 +-
drivers/infiniband/sw/rxe/rxe_recv.c | 15 +-
drivers/infiniband/sw/rxe/rxe_resp.c | 14 +-
drivers/infiniband/ulp/isert/ib_isert.c | 52 ++--
drivers/isdn/capi/capi.c | 46 ++--
drivers/isdn/hardware/eicon/platform.h | 4 +-
drivers/isdn/i4l/isdn_concap.c | 2 +-
drivers/isdn/i4l/isdn_net.c | 13 +-
drivers/isdn/i4l/isdn_ppp.c | 3 +-
drivers/leds/trigger/ledtrig-netdev.c | 15 +-
drivers/media/pci/ttpci/av7110_av.c | 10 +-
drivers/misc/sgi-xp/xpnet.c | 9 +-
drivers/misc/vmw_vmci/vmci_doorbell.c | 15 +-
drivers/mmc/host/atmel-mci.c | 24 +-
drivers/mmc/host/dw_mmc.c | 15 +-
drivers/mmc/host/wbsd.c | 22 +-
drivers/net/appletalk/ipddp.c | 19 +-
drivers/net/bonding/bond_3ad.c | 30 ++-
drivers/net/bonding/bond_alb.c | 60 +++--
drivers/net/bonding/bond_debugfs.c | 5 +-
drivers/net/caif/caif_hsi.c | 51 ++--
drivers/net/can/slcan.c | 24 +-
drivers/net/can/softing/softing_main.c | 15 +-
drivers/net/eql.c | 25 +-
drivers/net/ethernet/3com/3c59x.c | 10 +-
drivers/net/ethernet/alacritech/slicoss.c | 30 ++-
drivers/net/ethernet/altera/altera_tse_main.c | 5 +-
drivers/net/ethernet/broadcom/bcm63xx_enet.c | 10 +-
drivers/net/ethernet/broadcom/bnx2.c | 107 ++++----
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c | 24 +-
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c | 39 +--
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 12 +-
drivers/net/ethernet/broadcom/cnic.c | 12 +-
drivers/net/ethernet/broadcom/genet/bcmgenet.c | 5 +-
drivers/net/ethernet/broadcom/tg3.c | 67 ++---
drivers/net/ethernet/calxeda/xgmac.c | 5 +-
drivers/net/ethernet/cavium/liquidio/lio_main.c | 10 +-
drivers/net/ethernet/cavium/liquidio/lio_vf_main.c | 10 +-
.../net/ethernet/cavium/liquidio/octeon_device.c | 32 ++-
drivers/net/ethernet/cavium/liquidio/octeon_droq.c | 12 +-
drivers/net/ethernet/cavium/liquidio/octeon_nic.c | 11 +-
.../net/ethernet/cavium/liquidio/request_manager.c | 22 +-
.../ethernet/cavium/liquidio/response_manager.c | 11 +-
drivers/net/ethernet/cavium/thunder/nicvf_main.c | 5 +-
drivers/net/ethernet/chelsio/cxgb/vsc7326.c | 10 +-
drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c | 34 ++-
drivers/net/ethernet/chelsio/cxgb3/l2t.c | 29 ++-
drivers/net/ethernet/chelsio/cxgb4/clip_tbl.c | 5 +-
drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c | 17 +-
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c | 37 +--
.../net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c | 5 +-
drivers/net/ethernet/chelsio/cxgb4/l2t.c | 23 +-
drivers/net/ethernet/chelsio/cxgb4/sge.c | 5 +-
drivers/net/ethernet/chelsio/cxgb4/smt.c | 5 +-
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c | 15 +-
drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.c | 23 +-
drivers/net/ethernet/cisco/enic/enic_api.c | 5 +-
drivers/net/ethernet/cisco/enic/enic_clsf.c | 25 +-
drivers/net/ethernet/cisco/enic/enic_dev.c | 75 +++---
drivers/net/ethernet/cisco/enic/enic_dev.h | 2 +-
drivers/net/ethernet/cisco/enic/enic_ethtool.c | 18 +-
drivers/net/ethernet/cisco/enic/enic_main.c | 35 ++-
drivers/net/ethernet/emulex/benet/be_cmds.c | 10 +-
drivers/net/ethernet/freescale/gianfar.c | 5 +-
drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c | 30 ++-
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c | 7 +-
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c | 7 +-
drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c | 9 +-
drivers/net/ethernet/intel/i40e/i40e_main.c | 49 ++--
drivers/net/ethernet/intel/i40e/i40e_ptp.c | 17 +-
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c | 38 +--
drivers/net/ethernet/intel/i40evf/i40evf_main.c | 68 +++--
.../net/ethernet/intel/i40evf/i40evf_virtchnl.c | 36 +--
drivers/net/ethernet/intel/igbvf/ethtool.c | 5 +-
drivers/net/ethernet/intel/igbvf/netdev.c | 51 ++--
drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c | 10 +-
drivers/net/ethernet/intel/ixgbevf/ethtool.c | 5 +-
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 55 ++--
drivers/net/ethernet/jme.c | 52 ++--
drivers/net/ethernet/marvell/mv643xx_eth.c | 5 +-
drivers/net/ethernet/marvell/skge.c | 29 ++-
drivers/net/ethernet/marvell/sky2.c | 34 ++-
drivers/net/ethernet/mediatek/mtk_eth_soc.c | 10 +-
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c | 5 +-
drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 25 +-
drivers/net/ethernet/mellanox/mlx4/en_port.c | 5 +-
drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c | 24 +-
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 16 +-
.../ethernet/mellanox/mlx5/core/ipoib/ipoib_vlan.c | 10 +-
.../net/ethernet/mellanox/mlx5/core/lib/vxlan.c | 15 +-
drivers/net/ethernet/mellanox/mlxsw/core.c | 12 +-
drivers/net/ethernet/mellanox/mlxsw/pci.c | 5 +-
drivers/net/ethernet/microchip/lan743x_ptp.c | 30 ++-
drivers/net/ethernet/netronome/nfp/flower/cmsg.c | 14 +-
drivers/net/ethernet/netronome/nfp/flower/main.c | 14 +-
.../net/ethernet/netronome/nfp/flower/offload.c | 5 +-
.../ethernet/netronome/nfp/flower/tunnel_conf.c | 21 +-
drivers/net/ethernet/netronome/nfp/nfp_net.h | 2 +-
.../net/ethernet/netronome/nfp/nfp_net_common.c | 35 ++-
drivers/net/ethernet/nvidia/forcedeth.c | 10 +-
.../net/ethernet/qlogic/netxen/netxen_nic_init.c | 5 +-
drivers/net/ethernet/qlogic/qed/qed_dev.c | 7 +-
drivers/net/ethernet/qlogic/qed/qed_fcoe.c | 19 +-
drivers/net/ethernet/qlogic/qed/qed_hw.c | 12 +-
drivers/net/ethernet/qlogic/qed/qed_iscsi.c | 19 +-
drivers/net/ethernet/qlogic/qed/qed_iwarp.c | 91 ++++---
drivers/net/ethernet/qlogic/qed/qed_ll2.c | 10 +-
drivers/net/ethernet/qlogic/qed/qed_mcp.c | 26 +-
drivers/net/ethernet/qlogic/qed/qed_rdma.c | 64 +++--
drivers/net/ethernet/qlogic/qed/qed_roce.c | 16 +-
drivers/net/ethernet/qlogic/qed/qed_spq.c | 26 +-
drivers/net/ethernet/qlogic/qede/qede_filter.c | 25 +-
drivers/net/ethernet/qlogic/qede/qede_ptp.c | 42 +--
.../net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c | 22 +-
drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c | 15 +-
.../ethernet/qlogic/qlcnic/qlcnic_sriov_common.c | 20 +-
.../net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c | 5 +-
drivers/net/ethernet/qualcomm/qca_uart.c | 14 +-
drivers/net/ethernet/realtek/8139too.c | 5 +-
drivers/net/ethernet/sfc/ef10.c | 15 +-
drivers/net/ethernet/sfc/efx.c | 15 +-
drivers/net/ethernet/sfc/ethtool.c | 5 +-
drivers/net/ethernet/sfc/falcon/efx.c | 15 +-
drivers/net/ethernet/sfc/falcon/ethtool.c | 5 +-
drivers/net/ethernet/sfc/falcon/falcon.c | 9 +-
drivers/net/ethernet/sfc/falcon/farch.c | 42 +--
drivers/net/ethernet/sfc/farch.c | 5 +-
drivers/net/ethernet/sfc/mcdi.c | 34 ++-
drivers/net/ethernet/sfc/ptp.c | 22 +-
drivers/net/ethernet/sfc/rx.c | 5 +-
drivers/net/ethernet/silan/sc92031.c | 60 +++--
drivers/net/ethernet/ti/netcp_ethss.c | 10 +-
drivers/net/ethernet/toshiba/tc35815.c | 5 +-
drivers/net/ethernet/via/via-rhine.c | 25 +-
drivers/net/hamradio/6pack.c | 15 +-
drivers/net/hamradio/mkiss.c | 30 ++-
drivers/net/ipvlan/ipvlan_core.c | 5 +-
drivers/net/ipvlan/ipvlan_main.c | 22 +-
drivers/net/macsec.c | 25 +-
drivers/net/macvlan.c | 5 +-
drivers/net/ppp/ppp_async.c | 12 +-
drivers/net/ppp/ppp_generic.c | 28 +-
drivers/net/ppp/ppp_synctty.c | 5 +-
drivers/net/slip/slip.c | 53 ++--
drivers/net/tun.c | 22 +-
drivers/net/usb/cdc_mbim.c | 5 +-
drivers/net/usb/cdc_ncm.c | 24 +-
drivers/net/usb/r8152.c | 5 +-
drivers/net/vxlan.c | 32 ++-
drivers/net/wan/x25_asy.c | 10 +-
drivers/net/wireless/ath/ath10k/ce.c | 49 ++--
drivers/net/wireless/ath/ath10k/coredump.c | 5 +-
drivers/net/wireless/ath/ath10k/debug.c | 47 ++--
drivers/net/wireless/ath/ath10k/debugfs_sta.c | 15 +-
drivers/net/wireless/ath/ath10k/htc.c | 23 +-
drivers/net/wireless/ath/ath10k/htt_rx.c | 79 +++---
drivers/net/wireless/ath/ath10k/htt_tx.c | 25 +-
drivers/net/wireless/ath/ath10k/hw.c | 9 +-
drivers/net/wireless/ath/ath10k/mac.c | 284 ++++++++++++---------
drivers/net/wireless/ath/ath10k/p2p.c | 5 +-
drivers/net/wireless/ath/ath10k/pci.c | 42 +--
drivers/net/wireless/ath/ath10k/sdio.c | 27 +-
drivers/net/wireless/ath/ath10k/snoc.c | 17 +-
drivers/net/wireless/ath/ath10k/testmode.c | 15 +-
drivers/net/wireless/ath/ath10k/thermal.c | 10 +-
drivers/net/wireless/ath/ath10k/txrx.c | 24 +-
drivers/net/wireless/ath/ath10k/wmi-tlv.c | 5 +-
drivers/net/wireless/ath/ath10k/wmi.c | 83 +++---
drivers/net/wireless/ath/ath5k/ani.c | 5 +-
drivers/net/wireless/ath/ath5k/base.c | 34 ++-
drivers/net/wireless/ath/ath5k/debug.c | 10 +-
drivers/net/wireless/ath/ath5k/mac80211-ops.c | 10 +-
drivers/net/wireless/ath/ath6kl/cfg80211.c | 29 ++-
drivers/net/wireless/ath/ath6kl/hif.c | 15 +-
drivers/net/wireless/ath/ath6kl/htc_mbox.c | 107 ++++----
drivers/net/wireless/ath/ath6kl/htc_pipe.c | 89 ++++---
drivers/net/wireless/ath/ath6kl/init.c | 7 +-
drivers/net/wireless/ath/ath6kl/main.c | 49 ++--
drivers/net/wireless/ath/ath6kl/sdio.c | 51 ++--
drivers/net/wireless/ath/ath6kl/txrx.c | 124 +++++----
drivers/net/wireless/ath/ath6kl/wmi.c | 56 ++--
drivers/net/wireless/ath/ath9k/ath9k.h | 2 +-
drivers/net/wireless/ath/ath9k/beacon.c | 5 +-
drivers/net/wireless/ath/ath9k/channel.c | 68 ++---
drivers/net/wireless/ath/ath9k/dynack.c | 12 +-
drivers/net/wireless/ath/ath9k/gpio.c | 10 +-
drivers/net/wireless/ath/ath9k/htc_drv_beacon.c | 33 ++-
drivers/net/wireless/ath/ath9k/htc_drv_debug.c | 10 +-
drivers/net/wireless/ath/ath9k/htc_drv_main.c | 25 +-
drivers/net/wireless/ath/ath9k/htc_drv_txrx.c | 50 ++--
drivers/net/wireless/ath/ath9k/main.c | 44 ++--
drivers/net/wireless/ath/ath9k/recv.c | 17 +-
drivers/net/wireless/ath/ath9k/wmi.c | 7 +-
drivers/net/wireless/ath/ath9k/wow.c | 10 +-
drivers/net/wireless/ath/ath9k/xmit.c | 38 +--
drivers/net/wireless/ath/carl9170/debug.c | 20 +-
drivers/net/wireless/ath/carl9170/main.c | 45 ++--
drivers/net/wireless/ath/carl9170/rx.c | 5 +-
drivers/net/wireless/ath/carl9170/tx.c | 80 +++---
drivers/net/wireless/ath/carl9170/usb.c | 12 +-
drivers/net/wireless/ath/dfs_pri_detector.c | 30 ++-
drivers/net/wireless/ath/wcn36xx/main.c | 13 +-
drivers/net/wireless/ath/wil6210/debugfs.c | 5 +-
drivers/net/wireless/ath/wil6210/main.c | 10 +-
drivers/net/wireless/ath/wil6210/rx_reorder.c | 5 +-
drivers/net/wireless/ath/wil6210/txrx.c | 28 +-
drivers/net/wireless/ath/wil6210/txrx_edma.c | 10 +-
drivers/net/wireless/ath/wil6210/wmi.c | 15 +-
drivers/net/wireless/atmel/atmel.c | 7 +-
.../wireless/broadcom/brcm80211/brcmfmac/sdio.c | 27 +-
.../wireless/broadcom/brcm80211/brcmsmac/debug.c | 5 +-
.../broadcom/brcm80211/brcmsmac/mac80211_if.c | 135 +++++-----
drivers/net/wireless/intel/iwlwifi/dvm/calib.c | 16 +-
drivers/net/wireless/intel/iwlwifi/dvm/debugfs.c | 20 +-
drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c | 5 +-
drivers/net/wireless/intel/iwlwifi/dvm/main.c | 5 +-
drivers/net/wireless/intel/iwlwifi/dvm/sta.c | 119 +++++----
drivers/net/wireless/intel/iwlwifi/dvm/tx.c | 38 +--
drivers/net/wireless/intel/iwlwifi/fw/notif-wait.c | 10 +-
drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c | 5 +-
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c | 51 ++--
drivers/net/wireless/intel/iwlwifi/mvm/ops.c | 30 ++-
drivers/net/wireless/intel/iwlwifi/mvm/rs.c | 5 +-
drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c | 23 +-
drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 145 ++++++-----
.../net/wireless/intel/iwlwifi/mvm/time-event.c | 34 ++-
drivers/net/wireless/intel/iwlwifi/mvm/tx.c | 10 +-
drivers/net/wireless/intel/iwlwifi/mvm/utils.c | 46 ++--
drivers/net/wireless/intel/iwlwifi/pcie/trans.c | 15 +-
drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c | 12 +-
drivers/net/wireless/intel/iwlwifi/pcie/tx.c | 29 ++-
drivers/net/wireless/intersil/hostap/hostap_ap.c | 145 ++++++-----
drivers/net/wireless/intersil/hostap/hostap_hw.c | 20 +-
.../net/wireless/intersil/hostap/hostap_ioctl.c | 9 +-
drivers/net/wireless/intersil/hostap/hostap_proc.c | 4 +-
.../net/wireless/intersil/orinoco/orinoco_usb.c | 9 +-
drivers/net/wireless/mac80211_hwsim.c | 53 ++--
drivers/net/wireless/marvell/mwl8k.c | 21 +-
drivers/net/wireless/mediatek/mt76/agg-rx.c | 15 +-
drivers/net/wireless/mediatek/mt76/dma.c | 15 +-
drivers/net/wireless/mediatek/mt76/mac80211.c | 5 +-
drivers/net/wireless/mediatek/mt76/mt76x0/mac.c | 10 +-
drivers/net/wireless/mediatek/mt76/mt76x0/phy.c | 5 +-
drivers/net/wireless/mediatek/mt76/mt76x2_dma.c | 5 +-
drivers/net/wireless/mediatek/mt76/mt76x2_mac.c | 5 +-
.../net/wireless/mediatek/mt76/mt76x2_mac_common.c | 10 +-
drivers/net/wireless/mediatek/mt76/mt76x2_tx.c | 5 +-
drivers/net/wireless/mediatek/mt76/tx.c | 45 ++--
drivers/net/wireless/mediatek/mt76/usb.c | 5 +-
drivers/net/wireless/mediatek/mt7601u/mac.c | 10 +-
drivers/net/wireless/mediatek/mt7601u/phy.c | 14 +-
drivers/net/wireless/ralink/rt2x00/rt2x00dev.c | 15 +-
drivers/net/wireless/ralink/rt2x00/rt2x00queue.c | 5 +-
.../realtek/rtlwifi/btcoexist/halbtcoutsrc.c | 5 +-
drivers/net/wireless/realtek/rtlwifi/core.c | 10 +-
drivers/net/wireless/realtek/rtlwifi/pci.c | 17 +-
.../net/wireless/realtek/rtlwifi/rtl8188ee/dm.c | 16 +-
.../net/wireless/realtek/rtlwifi/rtl8188ee/hw.c | 22 +-
.../net/wireless/realtek/rtlwifi/rtl8192ee/dm.c | 10 +-
.../net/wireless/realtek/rtlwifi/rtl8192ee/hw.c | 22 +-
.../net/wireless/realtek/rtlwifi/rtl8723be/dm.c | 10 +-
.../net/wireless/realtek/rtlwifi/rtl8723be/hw.c | 22 +-
.../net/wireless/realtek/rtlwifi/rtl8821ae/dm.c | 10 +-
.../net/wireless/realtek/rtlwifi/rtl8821ae/hw.c | 20 +-
drivers/net/wireless/st/cw1200/debug.c | 5 +-
drivers/net/wireless/st/cw1200/pm.c | 10 +-
drivers/net/wireless/st/cw1200/queue.c | 80 +++---
drivers/net/wireless/st/cw1200/sta.c | 34 ++-
drivers/net/wireless/st/cw1200/txrx.c | 77 +++---
drivers/net/wireless/st/cw1200/wsm.c | 5 +-
drivers/net/xen-netfront.c | 15 +-
drivers/pcmcia/bcm63xx_pcmcia.c | 10 +-
drivers/rapidio/devices/tsi721_dma.c | 32 ++-
drivers/rapidio/rio_cm.c | 92 ++++---
drivers/s390/block/dasd.c | 38 +--
drivers/s390/block/dasd_ioctl.c | 7 +-
drivers/s390/block/dasd_proc.c | 5 +-
drivers/s390/char/tty3270.c | 40 +--
drivers/s390/char/vmlogrdr.c | 17 +-
drivers/s390/crypto/ap_bus.c | 64 +++--
drivers/s390/crypto/ap_card.c | 25 +-
drivers/s390/crypto/ap_queue.c | 60 +++--
drivers/s390/crypto/pkey_api.c | 22 +-
drivers/s390/net/qeth_l2_main.c | 10 +-
drivers/s390/net/qeth_l3_main.c | 55 ++--
drivers/s390/net/qeth_l3_sys.c | 25 +-
drivers/s390/net/smsgiucv.c | 10 +-
drivers/s390/net/smsgiucv_app.c | 5 +-
drivers/s390/scsi/zfcp_fc.c | 5 +-
drivers/s390/scsi/zfcp_sysfs.c | 7 +-
drivers/scsi/be2iscsi/be_main.c | 51 ++--
drivers/scsi/bnx2fc/bnx2fc_els.c | 36 +--
drivers/scsi/bnx2fc/bnx2fc_fcoe.c | 58 +++--
drivers/scsi/bnx2fc/bnx2fc_hwi.c | 20 +-
drivers/scsi/bnx2fc/bnx2fc_io.c | 67 ++---
drivers/scsi/bnx2fc/bnx2fc_tgt.c | 21 +-
drivers/scsi/bnx2i/bnx2i.h | 2 +-
drivers/scsi/bnx2i/bnx2i_hwi.c | 12 +-
drivers/scsi/bnx2i/bnx2i_init.c | 5 +-
drivers/scsi/bnx2i/bnx2i_iscsi.c | 14 +-
drivers/scsi/cxgbi/cxgb3i/cxgb3i.c | 27 +-
drivers/scsi/cxgbi/cxgb4i/cxgb4i.c | 58 +++--
drivers/scsi/cxgbi/libcxgbi.c | 61 +++--
drivers/scsi/fcoe/fcoe.c | 10 +-
drivers/scsi/fcoe/fcoe_ctlr.c | 20 +-
drivers/scsi/fcoe/fcoe_transport.c | 14 +-
drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c | 105 ++++----
drivers/scsi/iscsi_tcp.c | 26 +-
drivers/scsi/libfc/fc_exch.c | 123 +++++----
drivers/scsi/libfc/fc_fcp.c | 20 +-
drivers/scsi/libiscsi.c | 170 ++++++------
drivers/scsi/libiscsi_tcp.c | 10 +-
drivers/scsi/qedi/qedi_fw.c | 46 ++--
drivers/scsi/qedi/qedi_main.c | 27 +-
drivers/staging/fwserial/fwserial.c | 167 +++++++-----
drivers/staging/mt7621-dma/mtk-hsdma.c | 15 +-
drivers/staging/rtl8188eu/core/rtw_ap.c | 69 ++---
drivers/staging/rtl8188eu/core/rtw_cmd.c | 17 +-
drivers/staging/rtl8188eu/core/rtw_ioctl_set.c | 32 ++-
drivers/staging/rtl8188eu/core/rtw_mlme.c | 92 ++++---
drivers/staging/rtl8188eu/core/rtw_mlme_ext.c | 37 +--
drivers/staging/rtl8188eu/core/rtw_recv.c | 38 +--
drivers/staging/rtl8188eu/core/rtw_sta_mgt.c | 40 +--
drivers/staging/rtl8188eu/core/rtw_xmit.c | 55 ++--
drivers/staging/rtl8188eu/hal/rtl8188eu_xmit.c | 12 +-
drivers/staging/rtl8188eu/include/rtw_mlme.h | 4 +-
drivers/staging/rtl8188eu/os_dep/ioctl_linux.c | 26 +-
drivers/staging/rtl8188eu/os_dep/xmit_linux.c | 12 +-
drivers/staging/rtl8723bs/core/rtw_ap.c | 65 +++--
drivers/staging/rtl8723bs/core/rtw_cmd.c | 21 +-
drivers/staging/rtl8723bs/core/rtw_debug.c | 12 +-
drivers/staging/rtl8723bs/core/rtw_ioctl_set.c | 37 +--
drivers/staging/rtl8723bs/core/rtw_mlme.c | 101 ++++----
drivers/staging/rtl8723bs/core/rtw_mlme_ext.c | 68 ++---
drivers/staging/rtl8723bs/core/rtw_recv.c | 53 ++--
drivers/staging/rtl8723bs/core/rtw_sta_mgt.c | 61 +++--
drivers/staging/rtl8723bs/core/rtw_wlan_util.c | 50 ++--
drivers/staging/rtl8723bs/core/rtw_xmit.c | 95 ++++---
drivers/staging/rtl8723bs/hal/hal_com.c | 2 +-
drivers/staging/rtl8723bs/hal/hal_sdio.c | 2 +-
drivers/staging/rtl8723bs/hal/rtl8723bs_recv.c | 2 +-
drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c | 22 +-
drivers/staging/rtl8723bs/hal/sdio_ops.c | 2 +-
drivers/staging/rtl8723bs/include/rtw_mlme.h | 4 +-
drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c | 29 ++-
drivers/staging/rtl8723bs/os_dep/ioctl_linux.c | 44 ++--
drivers/staging/rtl8723bs/os_dep/mlme_linux.c | 5 +-
drivers/staging/rtl8723bs/os_dep/xmit_linux.c | 5 +-
drivers/staging/rtlwifi/btcoexist/halbtcoutsrc.c | 5 +-
drivers/staging/rtlwifi/core.c | 10 +-
drivers/staging/rtlwifi/pci.c | 17 +-
drivers/staging/rtlwifi/rtl8822be/hw.c | 22 +-
drivers/target/iscsi/cxgbit/cxgbit_cm.c | 41 +--
drivers/target/iscsi/cxgbit/cxgbit_main.c | 17 +-
drivers/target/iscsi/cxgbit/cxgbit_target.c | 31 ++-
drivers/target/iscsi/iscsi_target.c | 221 +++++++++-------
drivers/target/iscsi/iscsi_target_configfs.c | 19 +-
drivers/target/iscsi/iscsi_target_erl0.c | 49 ++--
drivers/target/iscsi/iscsi_target_erl1.c | 79 +++---
drivers/target/iscsi/iscsi_target_erl2.c | 18 +-
drivers/target/iscsi/iscsi_target_login.c | 68 ++---
drivers/target/iscsi/iscsi_target_nodeattrib.c | 5 +-
drivers/target/iscsi/iscsi_target_stat.c | 45 ++--
drivers/target/iscsi/iscsi_target_tmr.c | 30 ++-
drivers/target/iscsi/iscsi_target_util.c | 208 ++++++++-------
drivers/target/sbp/sbp_target.c | 141 +++++-----
drivers/target/target_core_tpg.c | 10 +-
drivers/target/target_core_transport.c | 5 +-
drivers/target/target_core_user.c | 12 +-
drivers/tty/hvc/hvc_iucv.c | 40 +--
drivers/tty/moxa.c | 21 +-
drivers/usb/serial/keyspan_pda.c | 7 +-
drivers/vhost/vsock.c | 45 ++--
fs/afs/internal.h | 4 +-
fs/afs/rxrpc.c | 5 +-
fs/fs-writeback.c | 15 +-
fs/jffs2/README.Locking | 2 +-
fs/nfs/callback.c | 7 +-
include/linux/dmaengine.h | 16 +-
include/linux/netdevice.h | 13 +-
include/linux/preempt.h | 2 +-
include/linux/ptr_ring.h | 30 ++-
include/linux/rhashtable.h | 17 +-
include/linux/seqlock.h | 8 +-
include/linux/spinlock.h | 23 +-
include/linux/spinlock_api_smp.h | 26 +-
include/linux/spinlock_api_up.h | 18 +-
include/linux/u64_stats_sync.h | 2 +-
include/linux/xarray.h | 2 +-
include/net/gen_stats.h | 1 +
include/net/netrom.h | 9 +-
include/net/pkt_cls.h | 6 +-
include/net/request_sock.h | 5 +-
include/net/sch_generic.h | 19 +-
include/net/sock.h | 2 +-
include/net/tcp.h | 3 +-
include/net/udp.h | 1 +
include/target/iscsi/iscsi_target_core.h | 2 +-
kernel/bpf/btf.c | 5 +-
kernel/bpf/core.c | 10 +-
kernel/bpf/local_storage.c | 34 ++-
kernel/bpf/reuseport_array.c | 10 +-
kernel/bpf/sockmap.c | 73 +++---
kernel/bpf/syscall.c | 30 ++-
kernel/cgroup/cgroup.c | 15 +-
kernel/locking/spinlock.c | 17 +-
kernel/rcu/rcutorture.c | 12 +-
lib/rhashtable.c | 12 +-
mm/backing-dev.c | 22 +-
mm/page-writeback.c | 10 +-
net/6lowpan/debugfs.c | 25 +-
net/6lowpan/iphc.c | 23 +-
net/6lowpan/nhc.c | 31 ++-
net/802/garp.c | 19 +-
net/802/mrp.c | 19 +-
net/802/psnap.c | 10 +-
net/ax25/af_ax25.c | 26 +-
net/ax25/ax25_dev.c | 24 +-
net/ax25/ax25_iface.c | 38 +--
net/ax25/ax25_out.c | 7 +-
net/batman-adv/bat_iv_ogm.c | 51 ++--
net/batman-adv/bridge_loop_avoidance.c | 70 ++---
net/batman-adv/distributed-arp-table.c | 5 +-
net/batman-adv/fragmentation.c | 10 +-
net/batman-adv/gateway_client.c | 20 +-
net/batman-adv/hash.h | 4 +-
net/batman-adv/icmp_socket.c | 17 +-
net/batman-adv/log.c | 12 +-
net/batman-adv/multicast.c | 37 +--
net/batman-adv/network-coding.c | 37 +--
net/batman-adv/originator.c | 62 +++--
net/batman-adv/routing.c | 22 +-
net/batman-adv/send.c | 21 +-
net/batman-adv/soft-interface.c | 10 +-
net/batman-adv/tp_meter.c | 67 +++--
net/batman-adv/translation-table.c | 158 +++++++-----
net/batman-adv/tvlv.c | 25 +-
net/bluetooth/hci_core.c | 5 +-
net/bridge/br.c | 13 +-
net/bridge/br_device.c | 5 +-
net/bridge/br_fdb.c | 60 +++--
net/bridge/br_if.c | 20 +-
net/bridge/br_ioctl.c | 9 +-
net/bridge/br_mdb.c | 15 +-
net/bridge/br_multicast.c | 47 ++--
net/bridge/br_netlink.c | 24 +-
net/bridge/br_stp.c | 20 +-
net/bridge/br_stp_if.c | 25 +-
net/bridge/br_sysfs_br.c | 5 +-
net/bridge/br_sysfs_if.c | 9 +-
net/bridge/br_vlan.c | 5 +-
net/bridge/netfilter/ebt_limit.c | 7 +-
net/bridge/netfilter/ebt_log.c | 5 +-
net/caif/caif_dev.c | 16 +-
net/caif/caif_socket.c | 5 +-
net/caif/cfctrl.c | 40 +--
net/caif/cfmuxl.c | 30 ++-
net/core/datagram.c | 5 +-
net/core/dev.c | 5 +-
net/core/dev_addr_lists.c | 51 ++--
net/core/gen_estimator.c | 4 +-
net/core/gen_stats.c | 8 +-
net/core/net-procfs.c | 5 +-
net/core/net_namespace.c | 31 ++-
net/core/pktgen.c | 4 +-
net/core/request_sock.c | 7 +-
net/core/rtnetlink.c | 5 +-
net/core/sock.c | 40 +--
net/core/sock_reuseport.c | 26 +-
net/dcb/dcbnl.c | 54 ++--
net/dccp/minisocks.c | 5 +-
net/decnet/dn_fib.c | 20 +-
net/decnet/dn_route.c | 24 +-
net/ieee802154/socket.c | 5 +-
net/ipv4/af_inet.c | 10 +-
net/ipv4/cipso_ipv4.c | 19 +-
net/ipv4/esp4.c | 19 +-
net/ipv4/fib_semantics.c | 15 +-
net/ipv4/igmp.c | 82 +++---
net/ipv4/inet_connection_sock.c | 23 +-
net/ipv4/inet_diag.c | 5 +-
net/ipv4/inet_fragment.c | 5 +-
net/ipv4/inet_hashtables.c | 20 +-
net/ipv4/ipmr.c | 19 +-
net/ipv4/ipmr_base.c | 17 +-
net/ipv4/netfilter/ipt_CLUSTERIP.c | 9 +-
net/ipv4/netfilter/nf_nat_snmp_basic_main.c | 5 +-
net/ipv4/raw.c | 5 +-
net/ipv4/route.c | 30 ++-
net/ipv4/tcp_ipv4.c | 8 +-
net/ipv4/tcp_metrics.c | 15 +-
net/ipv4/udp.c | 41 +--
net/ipv4/udp_diag.c | 7 +-
net/ipv6/addrconf.c | 99 +++----
net/ipv6/af_inet6.c | 10 +-
net/ipv6/calipso.c | 19 +-
net/ipv6/esp6.c | 14 +-
net/ipv6/ip6_fib.c | 29 ++-
net/ipv6/ip6_flowlabel.c | 47 ++--
net/ipv6/ip6mr.c | 19 +-
net/ipv6/mcast.c | 93 ++++---
net/ipv6/mip6.c | 15 +-
net/ipv6/netfilter/nf_conntrack_reasm.c | 5 +-
net/ipv6/raw.c | 5 +-
net/ipv6/route.c | 67 +++--
net/ipv6/xfrm6_tunnel.c | 10 +-
net/iucv/af_iucv.c | 5 +-
net/iucv/iucv.c | 25 +-
net/kcm/kcmproc.c | 10 +-
net/kcm/kcmsock.c | 106 ++++----
net/key/af_key.c | 5 +-
net/l2tp/l2tp_core.c | 34 ++-
net/l2tp/l2tp_ip.c | 5 +-
net/llc/llc_conn.c | 10 +-
net/llc/llc_core.c | 10 +-
net/llc/llc_proc.c | 10 +-
net/llc/llc_sap.c | 5 +-
net/mac80211/agg-rx.c | 5 +-
net/mac80211/agg-tx.c | 45 ++--
net/mac80211/cfg.c | 31 ++-
net/mac80211/debugfs.c | 5 +-
net/mac80211/debugfs_netdev.c | 5 +-
net/mac80211/debugfs_sta.c | 5 +-
net/mac80211/ht.c | 7 +-
net/mac80211/ibss.c | 14 +-
net/mac80211/iface.c | 14 +-
net/mac80211/main.c | 5 +-
net/mac80211/mesh_hwmp.c | 58 +++--
net/mac80211/mesh_pathtbl.c | 37 +--
net/mac80211/mesh_plink.c | 36 +--
net/mac80211/mesh_sync.c | 15 +-
net/mac80211/mlme.c | 5 +-
net/mac80211/ocb.c | 14 +-
net/mac80211/rate.c | 20 +-
net/mac80211/rx.c | 25 +-
net/mac80211/sta_info.c | 15 +-
net/mac80211/tdls.c | 5 +-
net/mac80211/tkip.c | 5 +-
net/mac80211/tx.c | 45 ++--
net/mac80211/util.c | 5 +-
net/mac802154/llsec.c | 12 +-
net/netfilter/ipset/ip_set_bitmap_gen.h | 2 +-
net/netfilter/ipset/ip_set_core.c | 24 +-
net/netfilter/ipset/ip_set_hash_gen.h | 6 +-
net/netfilter/ipset/ip_set_list_set.c | 5 +-
net/netfilter/ipvs/ip_vs_app.c | 5 +-
net/netfilter/ipvs/ip_vs_conn.c | 14 +-
net/netfilter/ipvs/ip_vs_ctl.c | 35 ++-
net/netfilter/ipvs/ip_vs_est.c | 10 +-
net/netfilter/ipvs/ip_vs_lblc.c | 10 +-
net/netfilter/ipvs/ip_vs_lblcr.c | 18 +-
net/netfilter/ipvs/ip_vs_proto_sctp.c | 5 +-
net/netfilter/ipvs/ip_vs_proto_tcp.c | 10 +-
net/netfilter/ipvs/ip_vs_rr.c | 12 +-
net/netfilter/ipvs/ip_vs_sync.c | 48 ++--
net/netfilter/ipvs/ip_vs_wrr.c | 10 +-
net/netfilter/ipvs/ip_vs_xmit.c | 18 +-
net/netfilter/nf_conncount.c | 10 +-
net/netfilter/nf_conntrack_core.c | 5 +-
net/netfilter/nf_conntrack_ecache.c | 10 +-
net/netfilter/nf_conntrack_expect.c | 32 ++-
net/netfilter/nf_conntrack_ftp.c | 5 +-
net/netfilter/nf_conntrack_h323_main.c | 26 +-
net/netfilter/nf_conntrack_helper.c | 10 +-
net/netfilter/nf_conntrack_irc.c | 5 +-
net/netfilter/nf_conntrack_netlink.c | 33 ++-
net/netfilter/nf_conntrack_pptp.c | 5 +-
net/netfilter/nf_conntrack_proto_dccp.c | 21 +-
net/netfilter/nf_conntrack_proto_sctp.c | 19 +-
net/netfilter/nf_conntrack_proto_tcp.c | 31 ++-
net/netfilter/nf_conntrack_sane.c | 5 +-
net/netfilter/nf_conntrack_seqadj.c | 10 +-
net/netfilter/nf_conntrack_sip.c | 10 +-
net/netfilter/nf_nat_core.c | 10 +-
net/netfilter/nfnetlink_log.c | 52 ++--
net/netfilter/nfnetlink_queue.c | 36 +--
net/netfilter/nft_limit.c | 7 +-
net/netfilter/xt_RATEEST.c | 5 +-
net/netfilter/xt_dccp.c | 9 +-
net/netfilter/xt_hashlimit.c | 7 +-
net/netfilter/xt_limit.c | 7 +-
net/netfilter/xt_quota.c | 5 +-
net/netfilter/xt_recent.c | 35 +--
net/netrom/af_netrom.c | 32 ++-
net/netrom/nr_route.c | 58 +++--
net/nfc/rawsock.c | 15 +-
net/openvswitch/flow.c | 10 +-
net/openvswitch/meter.c | 15 +-
net/packet/af_packet.c | 34 ++-
net/rds/af_rds.c | 20 +-
net/rose/af_rose.c | 32 ++-
net/rose/rose_route.c | 73 +++---
net/rxrpc/af_rxrpc.c | 15 +-
net/rxrpc/call_accept.c | 5 +-
net/rxrpc/call_event.c | 16 +-
net/rxrpc/call_object.c | 5 +-
net/rxrpc/conn_client.c | 5 +-
net/rxrpc/conn_event.c | 7 +-
net/rxrpc/conn_object.c | 5 +-
net/rxrpc/input.c | 15 +-
net/rxrpc/output.c | 14 +-
net/rxrpc/peer_event.c | 22 +-
net/rxrpc/peer_object.c | 10 +-
net/rxrpc/recvmsg.c | 5 +-
net/rxrpc/sendmsg.c | 5 +-
net/sched/act_bpf.c | 12 +-
net/sched/act_csum.c | 12 +-
net/sched/act_gact.c | 12 +-
net/sched/act_ife.c | 22 +-
net/sched/act_ipt.c | 12 +-
net/sched/act_mirred.c | 19 +-
net/sched/act_nat.c | 5 +-
net/sched/act_pedit.c | 14 +-
net/sched/act_police.c | 12 +-
net/sched/act_sample.c | 12 +-
net/sched/act_simple.c | 12 +-
net/sched/act_skbmod.c | 12 +-
net/sched/act_tunnel_key.c | 12 +-
net/sched/act_vlan.c | 12 +-
net/sched/cls_route.c | 10 +-
net/sched/sch_generic.c | 19 +-
net/sched/sch_mq.c | 5 +-
net/sched/sch_mqprio.c | 14 +-
net/sched/sch_netem.c | 5 +-
net/sched/sch_teql.c | 5 +-
net/sctp/associola.c | 10 +-
net/sctp/ipv6.c | 9 +-
net/sctp/protocol.c | 28 +-
net/sctp/socket.c | 20 +-
net/smc/smc_cdc.c | 5 +-
net/smc/smc_core.c | 42 +--
net/smc/smc_tx.c | 10 +-
net/sunrpc/backchannel_rqst.c | 10 +-
net/sunrpc/sched.c | 42 +--
net/sunrpc/svc.c | 29 ++-
net/sunrpc/svc_xprt.c | 52 ++--
net/sunrpc/svcsock.c | 5 +-
net/sunrpc/xprt.c | 60 +++--
net/sunrpc/xprtrdma/backchannel.c | 17 +-
net/sunrpc/xprtrdma/svc_rdma_backchannel.c | 5 +-
net/sunrpc/xprtrdma/svc_rdma_transport.c | 10 +-
net/sunrpc/xprtrdma/transport.c | 5 +-
net/sunrpc/xprtsock.c | 25 +-
net/switchdev/switchdev.c | 10 +-
net/tipc/bcast.h | 2 +-
net/tipc/discover.c | 20 +-
net/tipc/msg.h | 10 +-
net/tipc/name_distr.c | 20 +-
net/tipc/name_table.c | 71 +++---
net/tipc/node.c | 51 ++--
net/tipc/socket.c | 14 +-
net/tipc/topsrv.c | 50 ++--
net/vmw_vsock/af_vsock.c | 50 ++--
net/vmw_vsock/diag.c | 5 +-
net/vmw_vsock/virtio_transport.c | 36 +--
net/vmw_vsock/virtio_transport_common.c | 44 ++--
net/vmw_vsock/vmci_transport.c | 17 +-
net/wireless/mlme.c | 29 ++-
net/wireless/nl80211.c | 26 +-
net/wireless/reg.c | 19 +-
net/wireless/scan.c | 49 ++--
net/xfrm/xfrm_input.c | 10 +-
net/xfrm/xfrm_output.c | 7 +-
net/xfrm/xfrm_policy.c | 87 ++++---
net/xfrm/xfrm_state.c | 172 ++++++++-----
net/xfrm/xfrm_user.c | 15 +-
security/selinux/netif.c | 15 +-
security/selinux/netnode.c | 12 +-
security/selinux/netport.c | 12 +-
sound/pci/asihpi/hpios.h | 2 +-
sound/soc/intel/atom/sst/sst_ipc.c | 19 +-
sound/soc/omap/ams-delta.c | 10 +-
tools/virtio/ringtest/ptr_ring.c | 2 +-
744 files changed, 10572 insertions(+), 7570 deletions(-)
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 76d89ee..9cc2b96 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -79,6 +79,7 @@ static void __crst_table_upgrade(void *arg)
int crst_table_upgrade(struct mm_struct *mm, unsigned long end)
{
+ unsigned int bh;
unsigned long *table, *pgd;
int rc, notify;
@@ -92,7 +93,7 @@ int crst_table_upgrade(struct mm_struct *mm, unsigned long end)
rc = -ENOMEM;
break;
}
- spin_lock_bh(&mm->page_table_lock);
+ bh = spin_lock_bh(&mm->page_table_lock, SOFTIRQ_ALL_MASK);
pgd = (unsigned long *) mm->pgd;
if (mm->context.asce_limit == _REGION2_SIZE) {
crst_table_init(table, _REGION2_ENTRY_EMPTY);
@@ -110,7 +111,7 @@ int crst_table_upgrade(struct mm_struct *mm, unsigned long end)
_ASCE_USER_BITS | _ASCE_TYPE_REGION1;
}
notify = 1;
- spin_unlock_bh(&mm->page_table_lock);
+ spin_unlock_bh(&mm->page_table_lock, bh);
}
if (notify)
on_each_cpu(__crst_table_upgrade, mm, 0);
@@ -179,6 +180,7 @@ void page_table_free_pgste(struct page *page)
*/
unsigned long *page_table_alloc(struct mm_struct *mm)
{
+ unsigned int bh;
unsigned long *table;
struct page *page;
unsigned int mask, bit;
@@ -186,7 +188,7 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
/* Try to get a fragment of a 4K page as a 2K page table */
if (!mm_alloc_pgste(mm)) {
table = NULL;
- spin_lock_bh(&mm->context.lock);
+ bh = spin_lock_bh(&mm->context.lock, SOFTIRQ_ALL_MASK);
if (!list_empty(&mm->context.pgtable_list)) {
page = list_first_entry(&mm->context.pgtable_list,
struct page, lru);
@@ -202,7 +204,7 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
list_del(&page->lru);
}
}
- spin_unlock_bh(&mm->context.lock);
+ spin_unlock_bh(&mm->context.lock, bh);
if (table)
return table;
}
@@ -226,15 +228,16 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
/* Return the first 2K fragment of the page */
atomic_xor_bits(&page->_refcount, 1 << 24);
memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
- spin_lock_bh(&mm->context.lock);
+ bh = spin_lock_bh(&mm->context.lock, SOFTIRQ_ALL_MASK);
list_add(&page->lru, &mm->context.pgtable_list);
- spin_unlock_bh(&mm->context.lock);
+ spin_unlock_bh(&mm->context.lock, bh);
}
return table;
}
void page_table_free(struct mm_struct *mm, unsigned long *table)
{
+ unsigned int bh;
struct page *page;
unsigned int bit, mask;
@@ -242,14 +245,14 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
if (!mm_alloc_pgste(mm)) {
/* Free 2K page table fragment of a 4K page */
bit = (__pa(table) & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t));
- spin_lock_bh(&mm->context.lock);
+ bh = spin_lock_bh(&mm->context.lock, SOFTIRQ_ALL_MASK);
mask = atomic_xor_bits(&page->_refcount, 1U << (bit + 24));
mask >>= 24;
if (mask & 3)
list_add(&page->lru, &mm->context.pgtable_list);
else
list_del(&page->lru);
- spin_unlock_bh(&mm->context.lock);
+ spin_unlock_bh(&mm->context.lock, bh);
if (mask != 0)
return;
} else {
@@ -263,6 +266,7 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
unsigned long vmaddr)
{
+ unsigned int bh;
struct mm_struct *mm;
struct page *page;
unsigned int bit, mask;
@@ -276,14 +280,14 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
return;
}
bit = (__pa(table) & ~PAGE_MASK) / (PTRS_PER_PTE*sizeof(pte_t));
- spin_lock_bh(&mm->context.lock);
+ bh = spin_lock_bh(&mm->context.lock, SOFTIRQ_ALL_MASK);
mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
mask >>= 24;
if (mask & 3)
list_add_tail(&page->lru, &mm->context.pgtable_list);
else
list_del(&page->lru);
- spin_unlock_bh(&mm->context.lock);
+ spin_unlock_bh(&mm->context.lock, bh);
table = (unsigned long *) (__pa(table) | (1U << bit));
tlb_remove_table(tlb, table);
}
diff --git a/arch/xtensa/platforms/iss/console.c b/arch/xtensa/platforms/iss/console.c
index af81a62..9d5c4cc 100644
--- a/arch/xtensa/platforms/iss/console.c
+++ b/arch/xtensa/platforms/iss/console.c
@@ -51,13 +51,14 @@ static void rs_poll(struct timer_list *);
static int rs_open(struct tty_struct *tty, struct file * filp)
{
+ unsigned int bh;
tty->port = &serial_port;
- spin_lock_bh(&timer_lock);
+ bh = spin_lock_bh(&timer_lock, SOFTIRQ_ALL_MASK);
if (tty->count == 1) {
timer_setup(&serial_timer, rs_poll, 0);
mod_timer(&serial_timer, jiffies + SERIAL_TIMER_VALUE);
}
- spin_unlock_bh(&timer_lock);
+ spin_unlock_bh(&timer_lock, bh);
return 0;
}
@@ -75,10 +76,11 @@ static int rs_open(struct tty_struct *tty, struct file * filp)
*/
static void rs_close(struct tty_struct *tty, struct file * filp)
{
- spin_lock_bh(&timer_lock);
+ unsigned int bh;
+ bh = spin_lock_bh(&timer_lock, SOFTIRQ_ALL_MASK);
if (tty->count == 1)
del_timer_sync(&serial_timer);
- spin_unlock_bh(&timer_lock);
+ spin_unlock_bh(&timer_lock, bh);
}
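The rs_open()/rs_close() hunks show the new spin_lock_bh() shape: it now returns the prior vector-enabled state, exactly like local_irq_save(), and spin_unlock_bh() takes it back. A hedged sketch with a pthread mutex standing in for the spinlock (hypothetical userspace helpers; the kernel versions also manipulate the preempt count):

```c
/*
 * Sketch of the converted spin_lock_bh()/spin_unlock_bh() contract.
 * A pthread mutex stands in for the kernel spinlock.
 */
#include <pthread.h>

#define SOFTIRQ_ALL_MASK 0xffffffffu

static unsigned int bh_enabled = SOFTIRQ_ALL_MASK;

static unsigned int spin_lock_bh_mask(pthread_mutex_t *lock, unsigned int mask)
{
	unsigned int bh = bh_enabled;	/* save the caller's vector state */

	bh_enabled &= ~mask;		/* disable the requested vectors */
	pthread_mutex_lock(lock);
	return bh;			/* handed back to spin_unlock_bh() */
}

static void spin_unlock_bh_mask(pthread_mutex_t *lock, unsigned int bh)
{
	pthread_mutex_unlock(lock);
	bh_enabled = bh;		/* restore, like local_irq_restore() */
}
```

Passing SOFTIRQ_ALL_MASK, as every mechanically converted site in this series does, preserves today's behaviour of disabling all vectors; only callers that opt in to a finer mask change anything.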
diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
index d027ddd..f25ec51 100644
--- a/arch/xtensa/platforms/iss/network.c
+++ b/arch/xtensa/platforms/iss/network.c
@@ -364,10 +364,11 @@ static void iss_net_timer(struct timer_list *t)
static int iss_net_open(struct net_device *dev)
{
+ unsigned int bh;
struct iss_net_private *lp = netdev_priv(dev);
int err;
- spin_lock_bh(&lp->lock);
+ bh = spin_lock_bh(&lp->lock, SOFTIRQ_ALL_MASK);
err = lp->tp.open(lp);
if (err < 0)
@@ -382,26 +383,27 @@ static int iss_net_open(struct net_device *dev)
while ((err = iss_net_rx(dev)) > 0)
;
- spin_unlock_bh(&lp->lock);
- spin_lock_bh(&opened_lock);
+ spin_unlock_bh(&lp->lock, bh);
+ bh = spin_lock_bh(&opened_lock, SOFTIRQ_ALL_MASK);
list_add(&lp->opened_list, &opened);
- spin_unlock_bh(&opened_lock);
- spin_lock_bh(&lp->lock);
+ spin_unlock_bh(&opened_lock, bh);
+ bh = spin_lock_bh(&lp->lock, SOFTIRQ_ALL_MASK);
timer_setup(&lp->timer, iss_net_timer, 0);
lp->timer_val = ISS_NET_TIMER_VALUE;
mod_timer(&lp->timer, jiffies + lp->timer_val);
out:
- spin_unlock_bh(&lp->lock);
+ spin_unlock_bh(&lp->lock, bh);
return err;
}
static int iss_net_close(struct net_device *dev)
{
+ unsigned int bh;
struct iss_net_private *lp = netdev_priv(dev);
netif_stop_queue(dev);
- spin_lock_bh(&lp->lock);
+ bh = spin_lock_bh(&lp->lock, SOFTIRQ_ALL_MASK);
spin_lock(&opened_lock);
list_del(&opened);
@@ -411,17 +413,18 @@ static int iss_net_close(struct net_device *dev)
lp->tp.close(lp);
- spin_unlock_bh(&lp->lock);
+ spin_unlock_bh(&lp->lock, bh);
return 0;
}
static int iss_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
+ unsigned int bh;
struct iss_net_private *lp = netdev_priv(dev);
int len;
netif_stop_queue(dev);
- spin_lock_bh(&lp->lock);
+ bh = spin_lock_bh(&lp->lock, SOFTIRQ_ALL_MASK);
len = lp->tp.write(lp, &skb);
@@ -443,7 +446,7 @@ static int iss_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
pr_err("%s: %s failed(%d)\n", dev->name, __func__, len);
}
- spin_unlock_bh(&lp->lock);
+ spin_unlock_bh(&lp->lock, bh);
dev_kfree_skb(skb);
return NETDEV_TX_OK;
@@ -466,14 +469,15 @@ static void iss_net_tx_timeout(struct net_device *dev)
static int iss_net_set_mac(struct net_device *dev, void *addr)
{
+ unsigned int bh;
struct iss_net_private *lp = netdev_priv(dev);
struct sockaddr *hwaddr = addr;
if (!is_valid_ether_addr(hwaddr->sa_data))
return -EADDRNOTAVAIL;
- spin_lock_bh(&lp->lock);
+ bh = spin_lock_bh(&lp->lock, SOFTIRQ_ALL_MASK);
memcpy(dev->dev_addr, hwaddr->sa_data, ETH_ALEN);
- spin_unlock_bh(&lp->lock);
+ spin_unlock_bh(&lp->lock, bh);
return 0;
}
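Note how iss_net_open() above reuses a single `bh` variable across back-to-back lock sections: each section restores before the next saves, so one slot suffices. Nested sections, by contrast, need one saved state per level, restored innermost-first. A self-contained sketch of the two disciplines (illustrative helpers, not the kernel implementation):

```c
/* Sequential vs nested bh-save disciplines, simulated in userspace. */
#include <assert.h>

#define SOFTIRQ_ALL_MASK 0xffffffffu

static unsigned int bh_enabled = SOFTIRQ_ALL_MASK;

static unsigned int bh_save_disable(unsigned int mask)
{
	unsigned int prev = bh_enabled;

	bh_enabled &= ~mask;
	return prev;
}

static void bh_restore(unsigned int saved)
{
	bh_enabled = saved;
}

/* Sequential sections (as in iss_net_open() above) may reuse one variable */
static void sequential_sections(void)
{
	unsigned int bh;

	bh = bh_save_disable(SOFTIRQ_ALL_MASK);	/* section 1 */
	bh_restore(bh);
	bh = bh_save_disable(SOFTIRQ_ALL_MASK);	/* section 2 */
	bh_restore(bh);
}

/* Nested sections need one save per level, restored in reverse order */
static void nested_sections(void)
{
	unsigned int bh = bh_save_disable(1u << 3);
	unsigned int bh2 = bh_save_disable(1u << 6);

	bh_restore(bh2);
	bh_restore(bh);
}
```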
diff --git a/block/genhd.c b/block/genhd.c
index be5bab2..86cbaa6 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -473,6 +473,7 @@ static int blk_mangle_minor(int minor)
*/
int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
{
+ unsigned int bh;
struct gendisk *disk = part_to_disk(part);
int idx;
@@ -485,9 +486,9 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
/* allocate ext devt */
idr_preload(GFP_KERNEL);
- spin_lock_bh(&ext_devt_lock);
+ bh = spin_lock_bh(&ext_devt_lock, SOFTIRQ_ALL_MASK);
idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_NOWAIT);
- spin_unlock_bh(&ext_devt_lock);
+ spin_unlock_bh(&ext_devt_lock, bh);
idr_preload_end();
if (idx < 0)
@@ -508,13 +509,14 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
*/
void blk_free_devt(dev_t devt)
{
+ unsigned int bh;
if (devt == MKDEV(0, 0))
return;
if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
- spin_lock_bh(&ext_devt_lock);
+ bh = spin_lock_bh(&ext_devt_lock, SOFTIRQ_ALL_MASK);
idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
- spin_unlock_bh(&ext_devt_lock);
+ spin_unlock_bh(&ext_devt_lock, bh);
}
}
@@ -817,6 +819,7 @@ static ssize_t disk_badblocks_store(struct device *dev,
*/
struct gendisk *get_gendisk(dev_t devt, int *partno)
{
+ unsigned int bh;
struct gendisk *disk = NULL;
if (MAJOR(devt) != BLOCK_EXT_MAJOR) {
@@ -828,13 +831,13 @@ struct gendisk *get_gendisk(dev_t devt, int *partno)
} else {
struct hd_struct *part;
- spin_lock_bh(&ext_devt_lock);
+ bh = spin_lock_bh(&ext_devt_lock, SOFTIRQ_ALL_MASK);
part = idr_find(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
if (part && get_disk_and_module(part_to_disk(part))) {
*partno = part->partno;
disk = part_to_disk(part);
}
- spin_unlock_bh(&ext_devt_lock);
+ spin_unlock_bh(&ext_devt_lock, bh);
}
if (!disk)
diff --git a/crypto/ansi_cprng.c b/crypto/ansi_cprng.c
index eff337c..6ecc1a1 100644
--- a/crypto/ansi_cprng.c
+++ b/crypto/ansi_cprng.c
@@ -186,12 +186,13 @@ static int _get_more_prng_bytes(struct prng_context *ctx, int cont_test)
static int get_prng_bytes(char *buf, size_t nbytes, struct prng_context *ctx,
int do_cont_test)
{
+ unsigned int bh;
unsigned char *ptr = buf;
unsigned int byte_count = (unsigned int)nbytes;
int err;
- spin_lock_bh(&ctx->prng_lock);
+ bh = spin_lock_bh(&ctx->prng_lock, SOFTIRQ_ALL_MASK);
err = -EINVAL;
if (ctx->flags & PRNG_NEED_RESET)
@@ -267,7 +268,7 @@ static int get_prng_bytes(char *buf, size_t nbytes, struct prng_context *ctx,
goto remainder;
done:
- spin_unlock_bh(&ctx->prng_lock);
+ spin_unlock_bh(&ctx->prng_lock, bh);
dbgprint(KERN_CRIT "returning %d from get_prng_bytes in context %p\n",
err, ctx);
return err;
@@ -282,10 +283,11 @@ static int reset_prng_context(struct prng_context *ctx,
const unsigned char *key, size_t klen,
const unsigned char *V, const unsigned char *DT)
{
+ unsigned int bh;
int ret;
const unsigned char *prng_key;
- spin_lock_bh(&ctx->prng_lock);
+ bh = spin_lock_bh(&ctx->prng_lock, SOFTIRQ_ALL_MASK);
ctx->flags |= PRNG_NEED_RESET;
prng_key = (key != NULL) ? key : (unsigned char *)DEFAULT_PRNG_KEY;
@@ -318,7 +320,7 @@ static int reset_prng_context(struct prng_context *ctx,
ret = 0;
ctx->flags &= ~PRNG_NEED_RESET;
out:
- spin_unlock_bh(&ctx->prng_lock);
+ spin_unlock_bh(&ctx->prng_lock, bh);
return ret;
}
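All the conversions so far pass SOFTIRQ_ALL_MASK; the payoff of the series is that a caller can instead name only the vectors it actually races with. The masks are built from the upstream softirq enum with BIT(); a sketch of how such masks compose (the enum order below follows mainline, the SOFTIRQ_ALL_MASK definition is an assumption about this series):

```c
/* Building per-vector masks from the upstream softirq enum. */
#include <assert.h>

enum {
	HI_SOFTIRQ = 0, TIMER_SOFTIRQ, NET_TX_SOFTIRQ, NET_RX_SOFTIRQ,
	BLOCK_SOFTIRQ, IRQ_POLL_SOFTIRQ, TASKLET_SOFTIRQ, SCHED_SOFTIRQ,
	HRTIMER_SOFTIRQ, RCU_SOFTIRQ, NR_SOFTIRQS
};

#define BIT(n)           (1u << (n))
/* Assumed definition: every implemented vector enabled/disabled at once */
#define SOFTIRQ_ALL_MASK (BIT(NR_SOFTIRQS) - 1)
```

A networking critical section could then pass `BIT(NET_RX_SOFTIRQ) | BIT(NET_TX_SOFTIRQ)` and leave block, timer and RCU softirqs undisturbed.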
diff --git a/crypto/mcryptd.c b/crypto/mcryptd.c
index 1c8e1b8..911a3f7 100644
--- a/crypto/mcryptd.c
+++ b/crypto/mcryptd.c
@@ -150,6 +150,7 @@ static void mcryptd_opportunistic_flush(void)
*/
static void mcryptd_queue_worker(struct work_struct *work)
{
+ unsigned int bh;
struct mcryptd_cpu_queue *cpu_queue;
struct crypto_async_request *req, *backlog;
int i;
@@ -163,10 +164,10 @@ static void mcryptd_queue_worker(struct work_struct *work)
i = 0;
while (i < MCRYPTD_BATCH || single_task_running()) {
- spin_lock_bh(&cpu_queue->q_lock);
+ bh = spin_lock_bh(&cpu_queue->q_lock, SOFTIRQ_ALL_MASK);
backlog = crypto_get_backlog(&cpu_queue->queue);
req = crypto_dequeue_request(&cpu_queue->queue);
- spin_unlock_bh(&cpu_queue->q_lock);
+ spin_unlock_bh(&cpu_queue->q_lock, bh);
if (!req) {
mcryptd_opportunistic_flush();
diff --git a/drivers/block/rsxx/core.c b/drivers/block/rsxx/core.c
index f2c631c..50f8990 100644
--- a/drivers/block/rsxx/core.c
+++ b/drivers/block/rsxx/core.c
@@ -589,6 +589,7 @@ static int rsxx_eeh_frozen(struct pci_dev *dev)
static void rsxx_eeh_failure(struct pci_dev *dev)
{
+ unsigned int bh;
struct rsxx_cardinfo *card = pci_get_drvdata(dev);
int i;
int cnt = 0;
@@ -599,11 +600,11 @@ static void rsxx_eeh_failure(struct pci_dev *dev)
card->halt = 1;
for (i = 0; i < card->n_targets; i++) {
- spin_lock_bh(&card->ctrl[i].queue_lock);
+ bh = spin_lock_bh(&card->ctrl[i].queue_lock, SOFTIRQ_ALL_MASK);
cnt = rsxx_cleanup_dma_queue(&card->ctrl[i],
&card->ctrl[i].queue,
COMPLETE_DMA);
- spin_unlock_bh(&card->ctrl[i].queue_lock);
+ spin_unlock_bh(&card->ctrl[i].queue_lock, bh);
cnt += rsxx_dma_cancel(&card->ctrl[i]);
diff --git a/drivers/block/rsxx/cregs.c b/drivers/block/rsxx/cregs.c
index c148e83..2567387 100644
--- a/drivers/block/rsxx/cregs.c
+++ b/drivers/block/rsxx/cregs.c
@@ -167,6 +167,7 @@ static int creg_queue_cmd(struct rsxx_cardinfo *card,
creg_cmd_cb callback,
void *cb_private)
{
+ unsigned int bh;
struct creg_cmd *cmd;
/* Don't queue stuff up if we're halted. */
@@ -194,11 +195,11 @@ static int creg_queue_cmd(struct rsxx_cardinfo *card,
cmd->cb_private = cb_private;
cmd->status = 0;
- spin_lock_bh(&card->creg_ctrl.lock);
+ bh = spin_lock_bh(&card->creg_ctrl.lock, SOFTIRQ_ALL_MASK);
list_add_tail(&cmd->list, &card->creg_ctrl.queue);
card->creg_ctrl.q_depth++;
creg_kick_queue(card);
- spin_unlock_bh(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock, bh);
return 0;
}
@@ -235,6 +236,7 @@ static void creg_cmd_timed_out(struct timer_list *t)
static void creg_cmd_done(struct work_struct *work)
{
+ unsigned int bh;
struct rsxx_cardinfo *card;
struct creg_cmd *cmd;
int st = 0;
@@ -249,10 +251,10 @@ static void creg_cmd_done(struct work_struct *work)
if (del_timer_sync(&card->creg_ctrl.cmd_timer) == 0)
card->creg_ctrl.creg_stats.failed_cancel_timer++;
- spin_lock_bh(&card->creg_ctrl.lock);
+ bh = spin_lock_bh(&card->creg_ctrl.lock, SOFTIRQ_ALL_MASK);
cmd = card->creg_ctrl.active_cmd;
card->creg_ctrl.active_cmd = NULL;
- spin_unlock_bh(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock, bh);
if (cmd == NULL) {
dev_err(CARD_TO_DEV(card),
@@ -302,14 +304,15 @@ static void creg_cmd_done(struct work_struct *work)
kmem_cache_free(creg_cmd_pool, cmd);
- spin_lock_bh(&card->creg_ctrl.lock);
+ bh = spin_lock_bh(&card->creg_ctrl.lock, SOFTIRQ_ALL_MASK);
card->creg_ctrl.active = 0;
creg_kick_queue(card);
- spin_unlock_bh(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock, bh);
}
static void creg_reset(struct rsxx_cardinfo *card)
{
+ unsigned int bh;
struct creg_cmd *cmd = NULL;
struct creg_cmd *tmp;
unsigned long flags;
@@ -330,7 +333,7 @@ static void creg_reset(struct rsxx_cardinfo *card)
"Resetting creg interface for recovery\n");
/* Cancel outstanding commands */
- spin_lock_bh(&card->creg_ctrl.lock);
+ bh = spin_lock_bh(&card->creg_ctrl.lock, SOFTIRQ_ALL_MASK);
list_for_each_entry_safe(cmd, tmp, &card->creg_ctrl.queue, list) {
list_del(&cmd->list);
card->creg_ctrl.q_depth--;
@@ -351,7 +354,7 @@ static void creg_reset(struct rsxx_cardinfo *card)
card->creg_ctrl.active = 0;
}
- spin_unlock_bh(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock, bh);
card->creg_ctrl.reset = 0;
spin_lock_irqsave(&card->irq_lock, flags);
@@ -707,6 +710,7 @@ int rsxx_reg_access(struct rsxx_cardinfo *card,
void rsxx_eeh_save_issued_creg(struct rsxx_cardinfo *card)
{
+ unsigned int bh;
struct creg_cmd *cmd = NULL;
cmd = card->creg_ctrl.active_cmd;
@@ -715,20 +719,21 @@ void rsxx_eeh_save_issued_creg(struct rsxx_cardinfo *card)
if (cmd) {
del_timer_sync(&card->creg_ctrl.cmd_timer);
- spin_lock_bh(&card->creg_ctrl.lock);
+ bh = spin_lock_bh(&card->creg_ctrl.lock, SOFTIRQ_ALL_MASK);
list_add(&cmd->list, &card->creg_ctrl.queue);
card->creg_ctrl.q_depth++;
card->creg_ctrl.active = 0;
- spin_unlock_bh(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock, bh);
}
}
void rsxx_kick_creg_queue(struct rsxx_cardinfo *card)
{
- spin_lock_bh(&card->creg_ctrl.lock);
+ unsigned int bh;
+ bh = spin_lock_bh(&card->creg_ctrl.lock, SOFTIRQ_ALL_MASK);
if (!list_empty(&card->creg_ctrl.queue))
creg_kick_queue(card);
- spin_unlock_bh(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock, bh);
}
/*------------ Initialization & Setup --------------*/
@@ -752,12 +757,13 @@ int rsxx_creg_setup(struct rsxx_cardinfo *card)
void rsxx_creg_destroy(struct rsxx_cardinfo *card)
{
+ unsigned int bh;
struct creg_cmd *cmd;
struct creg_cmd *tmp;
int cnt = 0;
/* Cancel outstanding commands */
- spin_lock_bh(&card->creg_ctrl.lock);
+ bh = spin_lock_bh(&card->creg_ctrl.lock, SOFTIRQ_ALL_MASK);
list_for_each_entry_safe(cmd, tmp, &card->creg_ctrl.queue, list) {
list_del(&cmd->list);
if (cmd->cb)
@@ -782,7 +788,7 @@ void rsxx_creg_destroy(struct rsxx_cardinfo *card)
"Canceled active creg command\n");
kmem_cache_free(creg_cmd_pool, cmd);
}
- spin_unlock_bh(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock, bh);
cancel_work_sync(&card->creg_ctrl.done_work);
}
diff --git a/drivers/block/rsxx/dma.c b/drivers/block/rsxx/dma.c
index 8fbc1bf..ab6ac00 100644
--- a/drivers/block/rsxx/dma.c
+++ b/drivers/block/rsxx/dma.c
@@ -275,14 +275,15 @@ int rsxx_cleanup_dma_queue(struct rsxx_dma_ctrl *ctrl,
static void rsxx_requeue_dma(struct rsxx_dma_ctrl *ctrl,
struct rsxx_dma *dma)
{
+ unsigned int bh;
/*
* Requeued DMAs go to the front of the queue so they are issued
* first.
*/
- spin_lock_bh(&ctrl->queue_lock);
+ bh = spin_lock_bh(&ctrl->queue_lock, SOFTIRQ_ALL_MASK);
ctrl->stats.sw_q_depth++;
list_add(&dma->list, &ctrl->queue);
- spin_unlock_bh(&ctrl->queue_lock);
+ spin_unlock_bh(&ctrl->queue_lock, bh);
}
static void rsxx_handle_dma_error(struct rsxx_dma_ctrl *ctrl,
@@ -395,6 +396,7 @@ static void dma_engine_stalled(struct timer_list *t)
static void rsxx_issue_dmas(struct rsxx_dma_ctrl *ctrl)
{
+ unsigned int bh;
struct rsxx_dma *dma;
int tag;
int cmds_pending = 0;
@@ -408,22 +410,22 @@ static void rsxx_issue_dmas(struct rsxx_dma_ctrl *ctrl)
return;
while (1) {
- spin_lock_bh(&ctrl->queue_lock);
+ bh = spin_lock_bh(&ctrl->queue_lock, SOFTIRQ_ALL_MASK);
if (list_empty(&ctrl->queue)) {
- spin_unlock_bh(&ctrl->queue_lock);
+ spin_unlock_bh(&ctrl->queue_lock, bh);
break;
}
- spin_unlock_bh(&ctrl->queue_lock);
+ spin_unlock_bh(&ctrl->queue_lock, bh);
tag = pop_tracker(ctrl->trackers);
if (tag == -1)
break;
- spin_lock_bh(&ctrl->queue_lock);
+ bh = spin_lock_bh(&ctrl->queue_lock, SOFTIRQ_ALL_MASK);
dma = list_entry(ctrl->queue.next, struct rsxx_dma, list);
list_del(&dma->list);
ctrl->stats.sw_q_depth--;
- spin_unlock_bh(&ctrl->queue_lock);
+ spin_unlock_bh(&ctrl->queue_lock, bh);
/*
* This will catch any DMAs that slipped in right before the
@@ -507,6 +509,7 @@ static void rsxx_issue_dmas(struct rsxx_dma_ctrl *ctrl)
static void rsxx_dma_done(struct rsxx_dma_ctrl *ctrl)
{
+ unsigned int bh;
struct rsxx_dma *dma;
unsigned long flags;
u16 count;
@@ -583,10 +586,10 @@ static void rsxx_dma_done(struct rsxx_dma_ctrl *ctrl)
rsxx_enable_ier(ctrl->card, CR_INTR_DMA(ctrl->id));
spin_unlock_irqrestore(&ctrl->card->irq_lock, flags);
- spin_lock_bh(&ctrl->queue_lock);
+ bh = spin_lock_bh(&ctrl->queue_lock, SOFTIRQ_ALL_MASK);
if (ctrl->stats.sw_q_depth)
queue_work(ctrl->issue_wq, &ctrl->issue_dma_work);
- spin_unlock_bh(&ctrl->queue_lock);
+ spin_unlock_bh(&ctrl->queue_lock, bh);
}
static void rsxx_schedule_issue(struct work_struct *work)
@@ -683,6 +686,7 @@ blk_status_t rsxx_dma_queue_bio(struct rsxx_cardinfo *card,
rsxx_dma_cb cb,
void *cb_data)
{
+ unsigned int bh;
struct list_head dma_list[RSXX_MAX_TARGETS];
struct bio_vec bvec;
struct bvec_iter iter;
@@ -753,10 +757,10 @@ blk_status_t rsxx_dma_queue_bio(struct rsxx_cardinfo *card,
for (i = 0; i < card->n_targets; i++) {
if (!list_empty(&dma_list[i])) {
- spin_lock_bh(&card->ctrl[i].queue_lock);
+ bh = spin_lock_bh(&card->ctrl[i].queue_lock, SOFTIRQ_ALL_MASK);
card->ctrl[i].stats.sw_q_depth += dma_cnt[i];
list_splice_tail(&dma_list[i], &card->ctrl[i].queue);
- spin_unlock_bh(&card->ctrl[i].queue_lock);
+ spin_unlock_bh(&card->ctrl[i].queue_lock, bh);
queue_work(card->ctrl[i].issue_wq,
&card->ctrl[i].issue_dma_work);
@@ -995,6 +999,7 @@ int rsxx_dma_cancel(struct rsxx_dma_ctrl *ctrl)
void rsxx_dma_destroy(struct rsxx_cardinfo *card)
{
+ unsigned int bh;
struct rsxx_dma_ctrl *ctrl;
int i;
@@ -1015,9 +1020,9 @@ void rsxx_dma_destroy(struct rsxx_cardinfo *card)
del_timer_sync(&ctrl->activity_timer);
/* Clean up the DMA queue */
- spin_lock_bh(&ctrl->queue_lock);
+ bh = spin_lock_bh(&ctrl->queue_lock, SOFTIRQ_ALL_MASK);
rsxx_cleanup_dma_queue(ctrl, &ctrl->queue, COMPLETE_DMA);
- spin_unlock_bh(&ctrl->queue_lock);
+ spin_unlock_bh(&ctrl->queue_lock, bh);
rsxx_dma_cancel(ctrl);
@@ -1032,6 +1037,7 @@ void rsxx_dma_destroy(struct rsxx_cardinfo *card)
int rsxx_eeh_save_issued_dmas(struct rsxx_cardinfo *card)
{
+ unsigned int bh;
int i;
int j;
int cnt;
@@ -1071,13 +1077,13 @@ int rsxx_eeh_save_issued_dmas(struct rsxx_cardinfo *card)
cnt++;
}
- spin_lock_bh(&card->ctrl[i].queue_lock);
+ bh = spin_lock_bh(&card->ctrl[i].queue_lock, SOFTIRQ_ALL_MASK);
list_splice(&issued_dmas[i], &card->ctrl[i].queue);
atomic_sub(cnt, &card->ctrl[i].stats.hw_q_depth);
card->ctrl[i].stats.sw_q_depth += cnt;
card->ctrl[i].e_cnt = 0;
- spin_unlock_bh(&card->ctrl[i].queue_lock);
+ spin_unlock_bh(&card->ctrl[i].queue_lock, bh);
}
kfree(issued_dmas);
diff --git a/drivers/block/umem.c b/drivers/block/umem.c
index 5c7fb8c..33d25af 100644
--- a/drivers/block/umem.c
+++ b/drivers/block/umem.c
@@ -410,6 +410,7 @@ static int add_bio(struct cardinfo *card)
static void process_page(unsigned long data)
{
+ unsigned int bh;
/* check if any of the requests in the page are DMA_COMPLETE,
* and deal with them appropriately.
* If we find a descriptor without DMA_COMPLETE in the semaphore, then
@@ -421,7 +422,7 @@ static void process_page(unsigned long data)
struct cardinfo *card = (struct cardinfo *)data;
unsigned int dma_status = card->dma_status;
- spin_lock_bh(&card->lock);
+ bh = spin_lock_bh(&card->lock, SOFTIRQ_ALL_MASK);
if (card->Active < 0)
goto out_unlock;
page = &card->mm_pages[card->Active];
@@ -496,7 +497,7 @@ static void process_page(unsigned long data)
mm_start_io(card);
}
out_unlock:
- spin_unlock_bh(&card->lock);
+ spin_unlock_bh(&card->lock, bh);
while (return_bio) {
struct bio *bio = return_bio;
@@ -720,17 +721,18 @@ static void check_batteries(struct cardinfo *card)
static void check_all_batteries(struct timer_list *unused)
{
+ unsigned int bh;
int i;
for (i = 0; i < num_cards; i++)
if (!(cards[i].flags & UM_FLAG_NO_BATT)) {
struct cardinfo *card = &cards[i];
- spin_lock_bh(&card->lock);
+ bh = spin_lock_bh(&card->lock, SOFTIRQ_ALL_MASK);
if (card->Active >= 0)
card->check_batteries = 1;
else
check_batteries(card);
- spin_unlock_bh(&card->lock);
+ spin_unlock_bh(&card->lock, bh);
}
init_battery_timer();
diff --git a/drivers/connector/cn_queue.c b/drivers/connector/cn_queue.c
index 9c54fdf..1a94afb 100644
--- a/drivers/connector/cn_queue.c
+++ b/drivers/connector/cn_queue.c
@@ -75,6 +75,7 @@ int cn_queue_add_callback(struct cn_queue_dev *dev, const char *name,
void (*callback)(struct cn_msg *,
struct netlink_skb_parms *))
{
+ unsigned int bh;
struct cn_callback_entry *cbq, *__cbq;
int found = 0;
@@ -82,7 +83,7 @@ int cn_queue_add_callback(struct cn_queue_dev *dev, const char *name,
if (!cbq)
return -ENOMEM;
- spin_lock_bh(&dev->queue_lock);
+ bh = spin_lock_bh(&dev->queue_lock, SOFTIRQ_ALL_MASK);
list_for_each_entry(__cbq, &dev->queue_list, callback_entry) {
if (cn_cb_equal(&__cbq->id.id, id)) {
found = 1;
@@ -91,7 +92,7 @@ int cn_queue_add_callback(struct cn_queue_dev *dev, const char *name,
}
if (!found)
list_add_tail(&cbq->callback_entry, &dev->queue_list);
- spin_unlock_bh(&dev->queue_lock);
+ spin_unlock_bh(&dev->queue_lock, bh);
if (found) {
cn_queue_release_callback(cbq);
@@ -106,10 +107,11 @@ int cn_queue_add_callback(struct cn_queue_dev *dev, const char *name,
void cn_queue_del_callback(struct cn_queue_dev *dev, struct cb_id *id)
{
+ unsigned int bh;
struct cn_callback_entry *cbq, *n;
int found = 0;
- spin_lock_bh(&dev->queue_lock);
+ bh = spin_lock_bh(&dev->queue_lock, SOFTIRQ_ALL_MASK);
list_for_each_entry_safe(cbq, n, &dev->queue_list, callback_entry) {
if (cn_cb_equal(&cbq->id.id, id)) {
list_del(&cbq->callback_entry);
@@ -117,7 +119,7 @@ void cn_queue_del_callback(struct cn_queue_dev *dev, struct cb_id *id)
break;
}
}
- spin_unlock_bh(&dev->queue_lock);
+ spin_unlock_bh(&dev->queue_lock, bh);
if (found)
cn_queue_release_callback(cbq);
@@ -143,12 +145,13 @@ struct cn_queue_dev *cn_queue_alloc_dev(const char *name, struct sock *nls)
void cn_queue_free_dev(struct cn_queue_dev *dev)
{
+ unsigned int bh;
struct cn_callback_entry *cbq, *n;
- spin_lock_bh(&dev->queue_lock);
+ bh = spin_lock_bh(&dev->queue_lock, SOFTIRQ_ALL_MASK);
list_for_each_entry_safe(cbq, n, &dev->queue_list, callback_entry)
list_del(&cbq->callback_entry);
- spin_unlock_bh(&dev->queue_lock);
+ spin_unlock_bh(&dev->queue_lock, bh);
while (atomic_read(&dev->refcnt)) {
pr_info("Waiting for %s to become free: refcnt=%d.\n",
diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c
index eeb7d31..b9adef9 100644
--- a/drivers/connector/connector.c
+++ b/drivers/connector/connector.c
@@ -74,6 +74,7 @@ static int cn_already_initialized;
int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid, u32 __group,
gfp_t gfp_mask)
{
+ unsigned int bh;
struct cn_callback_entry *__cbq;
unsigned int size;
struct sk_buff *skb;
@@ -86,7 +87,7 @@ int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid, u32 __group,
if (portid || __group) {
group = __group;
} else {
- spin_lock_bh(&dev->cbdev->queue_lock);
+ bh = spin_lock_bh(&dev->cbdev->queue_lock, SOFTIRQ_ALL_MASK);
list_for_each_entry(__cbq, &dev->cbdev->queue_list,
callback_entry) {
if (cn_cb_equal(&__cbq->id.id, &msg->id)) {
@@ -95,7 +96,7 @@ int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid, u32 __group,
break;
}
}
- spin_unlock_bh(&dev->cbdev->queue_lock);
+ spin_unlock_bh(&dev->cbdev->queue_lock, bh);
if (!found)
return -ENODEV;
@@ -143,6 +144,7 @@ EXPORT_SYMBOL_GPL(cn_netlink_send);
*/
static int cn_call_callback(struct sk_buff *skb)
{
+ unsigned int bh;
struct nlmsghdr *nlh;
struct cn_callback_entry *i, *cbq = NULL;
struct cn_dev *dev = &cdev;
@@ -155,7 +157,7 @@ static int cn_call_callback(struct sk_buff *skb)
if (nlh->nlmsg_len < NLMSG_HDRLEN + sizeof(struct cn_msg) + msg->len)
return -EINVAL;
- spin_lock_bh(&dev->cbdev->queue_lock);
+ bh = spin_lock_bh(&dev->cbdev->queue_lock, SOFTIRQ_ALL_MASK);
list_for_each_entry(i, &dev->cbdev->queue_list, callback_entry) {
if (cn_cb_equal(&i->id.id, &msg->id)) {
refcount_inc(&i->refcnt);
@@ -163,7 +165,7 @@ static int cn_call_callback(struct sk_buff *skb)
break;
}
}
- spin_unlock_bh(&dev->cbdev->queue_lock);
+ spin_unlock_bh(&dev->cbdev->queue_lock, bh);
if (cbq != NULL) {
cbq->callback(msg, nsp);
@@ -242,12 +244,13 @@ EXPORT_SYMBOL_GPL(cn_del_callback);
static int __maybe_unused cn_proc_show(struct seq_file *m, void *v)
{
+ unsigned int bh;
struct cn_queue_dev *dev = cdev.cbdev;
struct cn_callback_entry *cbq;
seq_printf(m, "Name ID\n");
- spin_lock_bh(&dev->queue_lock);
+ bh = spin_lock_bh(&dev->queue_lock, SOFTIRQ_ALL_MASK);
list_for_each_entry(cbq, &dev->queue_list, callback_entry) {
seq_printf(m, "%-15s %u:%u\n",
@@ -256,7 +259,7 @@ static int __maybe_unused cn_proc_show(struct seq_file *m, void *v)
cbq->id.id.val);
}
- spin_unlock_bh(&dev->queue_lock);
+ spin_unlock_bh(&dev->queue_lock, bh);
return 0;
}
diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c
index 801aeab..aaee981 100644
--- a/drivers/crypto/atmel-aes.c
+++ b/drivers/crypto/atmel-aes.c
@@ -423,10 +423,11 @@ static inline size_t atmel_aes_padlen(size_t len, size_t block_size)
static struct atmel_aes_dev *atmel_aes_find_dev(struct atmel_aes_base_ctx *ctx)
{
+ unsigned int bh;
struct atmel_aes_dev *aes_dd = NULL;
struct atmel_aes_dev *tmp;
- spin_lock_bh(&atmel_aes.lock);
+ bh = spin_lock_bh(&atmel_aes.lock, SOFTIRQ_ALL_MASK);
if (!ctx->dd) {
list_for_each_entry(tmp, &atmel_aes.dev_list, list) {
aes_dd = tmp;
@@ -437,7 +438,7 @@ static struct atmel_aes_dev *atmel_aes_find_dev(struct atmel_aes_base_ctx *ctx)
aes_dd = ctx->dd;
}
- spin_unlock_bh(&atmel_aes.lock);
+ spin_unlock_bh(&atmel_aes.lock, bh);
return aes_dd;
}
diff --git a/drivers/crypto/atmel-sha.c b/drivers/crypto/atmel-sha.c
index 8a19df2..6c0a754 100644
--- a/drivers/crypto/atmel-sha.c
+++ b/drivers/crypto/atmel-sha.c
@@ -406,10 +406,11 @@ static void atmel_sha_fill_padding(struct atmel_sha_reqctx *ctx, int length)
static struct atmel_sha_dev *atmel_sha_find_dev(struct atmel_sha_ctx *tctx)
{
+ unsigned int bh;
struct atmel_sha_dev *dd = NULL;
struct atmel_sha_dev *tmp;
- spin_lock_bh(&atmel_sha.lock);
+ bh = spin_lock_bh(&atmel_sha.lock, SOFTIRQ_ALL_MASK);
if (!tctx->dd) {
list_for_each_entry(tmp, &atmel_sha.dev_list, list) {
dd = tmp;
@@ -420,7 +421,7 @@ static struct atmel_sha_dev *atmel_sha_find_dev(struct atmel_sha_ctx *tctx)
dd = tctx->dd;
}
- spin_unlock_bh(&atmel_sha.lock);
+ spin_unlock_bh(&atmel_sha.lock, bh);
return dd;
}
diff --git a/drivers/crypto/atmel-tdes.c b/drivers/crypto/atmel-tdes.c
index 97b0423..db2a3b1 100644
--- a/drivers/crypto/atmel-tdes.c
+++ b/drivers/crypto/atmel-tdes.c
@@ -198,10 +198,11 @@ static void atmel_tdes_write_n(struct atmel_tdes_dev *dd, u32 offset,
static struct atmel_tdes_dev *atmel_tdes_find_dev(struct atmel_tdes_ctx *ctx)
{
+ unsigned int bh;
struct atmel_tdes_dev *tdes_dd = NULL;
struct atmel_tdes_dev *tmp;
- spin_lock_bh(&atmel_tdes.lock);
+ bh = spin_lock_bh(&atmel_tdes.lock, SOFTIRQ_ALL_MASK);
if (!ctx->dd) {
list_for_each_entry(tmp, &atmel_tdes.dev_list, list) {
tdes_dd = tmp;
@@ -211,7 +212,7 @@ static struct atmel_tdes_dev *atmel_tdes_find_dev(struct atmel_tdes_ctx *ctx)
} else {
tdes_dd = ctx->dd;
}
- spin_unlock_bh(&atmel_tdes.lock);
+ spin_unlock_bh(&atmel_tdes.lock, bh);
return tdes_dd;
}
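The other half of the series, which these driver conversions rely on, is that softirq processing becomes re-entrant: one vector may interrupt another, but a vector can never interrupt itself. A small simulation of that rule (hypothetical helpers tracking which vectors are mid-execution on one simulated CPU):

```c
/* Re-entrancy rule: vectors may nest, but never interrupt themselves. */
#include <assert.h>
#include <stdbool.h>

static unsigned int bh_running;	/* vectors currently executing */

static bool softirq_may_run(unsigned int vec_bit)
{
	return !(bh_running & vec_bit);	/* self-interruption forbidden */
}

static bool softirq_enter(unsigned int vec_bit)
{
	if (!softirq_may_run(vec_bit))
		return false;
	bh_running |= vec_bit;
	return true;
}

static void softirq_exit(unsigned int vec_bit)
{
	bh_running &= ~vec_bit;
}
```

This is why each converted lock site only needs to mask the vectors that touch its own data: an unrelated vector firing mid-section is now harmless rather than forbidden.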
diff --git a/drivers/crypto/axis/artpec6_crypto.c b/drivers/crypto/axis/artpec6_crypto.c
index 7f07a50..5912232 100644
--- a/drivers/crypto/axis/artpec6_crypto.c
+++ b/drivers/crypto/axis/artpec6_crypto.c
@@ -459,10 +459,11 @@ static inline bool artpec6_crypto_busy(void)
static int artpec6_crypto_submit(struct artpec6_crypto_req_common *req)
{
+ unsigned int bh;
struct artpec6_crypto *ac = dev_get_drvdata(artpec6_crypto_dev);
int ret = -EBUSY;
- spin_lock_bh(&ac->queue_lock);
+ bh = spin_lock_bh(&ac->queue_lock, SOFTIRQ_ALL_MASK);
if (!artpec6_crypto_busy()) {
list_add_tail(&req->list, &ac->pending);
@@ -474,7 +475,7 @@ static int artpec6_crypto_submit(struct artpec6_crypto_req_common *req)
artpec6_crypto_common_destroy(req);
}
- spin_unlock_bh(&ac->queue_lock);
+ spin_unlock_bh(&ac->queue_lock, bh);
return ret;
}
@@ -2084,6 +2085,7 @@ static void artpec6_crypto_timeout(struct timer_list *t)
static void artpec6_crypto_task(unsigned long data)
{
+ unsigned int bh;
struct artpec6_crypto *ac = (struct artpec6_crypto *)data;
struct artpec6_crypto_req_common *req;
struct artpec6_crypto_req_common *n;
@@ -2093,7 +2095,7 @@ static void artpec6_crypto_task(unsigned long data)
return;
}
- spin_lock_bh(&ac->queue_lock);
+ bh = spin_lock_bh(&ac->queue_lock, SOFTIRQ_ALL_MASK);
list_for_each_entry_safe(req, n, &ac->pending, list) {
struct artpec6_crypto_dma_descriptors *dma = req->dma;
@@ -2132,7 +2134,7 @@ static void artpec6_crypto_task(unsigned long data)
artpec6_crypto_process_queue(ac);
- spin_unlock_bh(&ac->queue_lock);
+ spin_unlock_bh(&ac->queue_lock, bh);
}
static void artpec6_crypto_complete_crypto(struct crypto_async_request *req)
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index acdd720..e3b83f3 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -327,6 +327,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
u32 status, void *areq),
void *areq)
{
+ unsigned int bh;
struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
struct caam_jrentry_info *head_entry;
int head, tail, desc_size;
@@ -339,14 +340,14 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
return -EIO;
}
- spin_lock_bh(&jrp->inplock);
+ bh = spin_lock_bh(&jrp->inplock, SOFTIRQ_ALL_MASK);
head = jrp->head;
tail = READ_ONCE(jrp->tail);
if (!rd_reg32(&jrp->rregs->inpring_avail) ||
CIRC_SPACE(head, tail, JOBR_DEPTH) <= 0) {
- spin_unlock_bh(&jrp->inplock);
+ spin_unlock_bh(&jrp->inplock, bh);
dma_unmap_single(dev, desc_dma, desc_size, DMA_TO_DEVICE);
return -EBUSY;
}
@@ -379,7 +380,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
wr_reg32(&jrp->rregs->inpring_jobadd, 1);
- spin_unlock_bh(&jrp->inplock);
+ spin_unlock_bh(&jrp->inplock, bh);
return 0;
}
diff --git a/drivers/crypto/cavium/cpt/cptvf_reqmanager.c b/drivers/crypto/cavium/cpt/cptvf_reqmanager.c
index b0ba433..f988e66 100644
--- a/drivers/crypto/cavium/cpt/cptvf_reqmanager.c
+++ b/drivers/crypto/cavium/cpt/cptvf_reqmanager.c
@@ -342,6 +342,7 @@ static inline void process_pending_queue(struct cpt_vf *cptvf,
struct pending_qinfo *pqinfo,
int qno)
{
+ unsigned int bh;
struct pci_dev *pdev = cptvf->pdev;
struct pending_queue *pqueue = &pqinfo->queue[qno];
struct pending_entry *pentry = NULL;
@@ -350,10 +351,10 @@ static inline void process_pending_queue(struct cpt_vf *cptvf,
unsigned char ccode;
while (1) {
- spin_lock_bh(&pqueue->lock);
+ bh = spin_lock_bh(&pqueue->lock, SOFTIRQ_ALL_MASK);
pentry = &pqueue->head[pqueue->front];
if (unlikely(!pentry->busy)) {
- spin_unlock_bh(&pqueue->lock);
+ spin_unlock_bh(&pqueue->lock, bh);
break;
}
@@ -361,7 +362,7 @@ static inline void process_pending_queue(struct cpt_vf *cptvf,
if (unlikely(!info)) {
dev_err(&pdev->dev, "Pending Entry post arg NULL\n");
pending_queue_inc_front(pqinfo, qno);
- spin_unlock_bh(&pqueue->lock);
+ spin_unlock_bh(&pqueue->lock, bh);
continue;
}
@@ -378,7 +379,7 @@ static inline void process_pending_queue(struct cpt_vf *cptvf,
pentry->post_arg = NULL;
pending_queue_inc_front(pqinfo, qno);
do_request_cleanup(cptvf, info);
- spin_unlock_bh(&pqueue->lock);
+ spin_unlock_bh(&pqueue->lock, bh);
break;
} else if (status->s.compcode == COMPLETION_CODE_INIT) {
/* check for timeout */
@@ -392,14 +393,14 @@ static inline void process_pending_queue(struct cpt_vf *cptvf,
pentry->post_arg = NULL;
pending_queue_inc_front(pqinfo, qno);
do_request_cleanup(cptvf, info);
- spin_unlock_bh(&pqueue->lock);
+ spin_unlock_bh(&pqueue->lock, bh);
break;
} else if ((*info->alternate_caddr ==
(~COMPLETION_CODE_INIT)) &&
(info->extra_time < TIME_IN_RESET_COUNT)) {
info->time_in = jiffies;
info->extra_time++;
- spin_unlock_bh(&pqueue->lock);
+ spin_unlock_bh(&pqueue->lock, bh);
break;
}
}
@@ -409,7 +410,7 @@ static inline void process_pending_queue(struct cpt_vf *cptvf,
pentry->post_arg = NULL;
atomic64_dec((&pqueue->pending_count));
pending_queue_inc_front(pqinfo, qno);
- spin_unlock_bh(&pqueue->lock);
+ spin_unlock_bh(&pqueue->lock, bh);
do_post_process(info->cptvf, info);
/*
@@ -422,6 +423,7 @@ static inline void process_pending_queue(struct cpt_vf *cptvf,
int process_request(struct cpt_vf *cptvf, struct cpt_request_info *req)
{
+ unsigned int bh;
int ret = 0, clear = 0, queue = 0;
struct cpt_info_buffer *info = NULL;
struct cptvf_request *cpt_req = NULL;
@@ -500,10 +502,10 @@ int process_request(struct cpt_vf *cptvf, struct cpt_request_info *req)
}
get_pending_entry:
- spin_lock_bh(&pqueue->lock);
+ bh = spin_lock_bh(&pqueue->lock, SOFTIRQ_ALL_MASK);
pentry = get_free_pending_entry(pqueue, cptvf->pqinfo.qlen);
if (unlikely(!pentry)) {
- spin_unlock_bh(&pqueue->lock);
+ spin_unlock_bh(&pqueue->lock, bh);
if (clear == 0) {
process_pending_queue(cptvf, &cptvf->pqinfo, queue);
clear = 1;
@@ -541,7 +543,7 @@ int process_request(struct cpt_vf *cptvf, struct cpt_request_info *req)
cptinst.s.ei3 = vq_cmd.cptr.u64;
ret = send_cpt_command(cptvf, &cptinst, queue);
- spin_unlock_bh(&pqueue->lock);
+ spin_unlock_bh(&pqueue->lock, bh);
if (unlikely(ret)) {
dev_err(&pdev->dev, "Send command failed for AE\n");
ret = -EFAULT;
diff --git a/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c b/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
index 4a362fc..b6cf94f 100644
--- a/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
+++ b/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
@@ -380,31 +380,34 @@ static inline int softreq_map_iobuf(struct nitrox_softreq *sr,
static inline void backlog_list_add(struct nitrox_softreq *sr,
struct nitrox_cmdq *cmdq)
{
+ unsigned int bh;
INIT_LIST_HEAD(&sr->backlog);
- spin_lock_bh(&cmdq->backlog_lock);
+ bh = spin_lock_bh(&cmdq->backlog_lock, SOFTIRQ_ALL_MASK);
list_add_tail(&sr->backlog, &cmdq->backlog_head);
atomic_inc(&cmdq->backlog_count);
atomic_set(&sr->status, REQ_BACKLOG);
- spin_unlock_bh(&cmdq->backlog_lock);
+ spin_unlock_bh(&cmdq->backlog_lock, bh);
}
static inline void response_list_add(struct nitrox_softreq *sr,
struct nitrox_cmdq *cmdq)
{
+ unsigned int bh;
INIT_LIST_HEAD(&sr->response);
- spin_lock_bh(&cmdq->response_lock);
+ bh = spin_lock_bh(&cmdq->response_lock, SOFTIRQ_ALL_MASK);
list_add_tail(&sr->response, &cmdq->response_head);
- spin_unlock_bh(&cmdq->response_lock);
+ spin_unlock_bh(&cmdq->response_lock, bh);
}
static inline void response_list_del(struct nitrox_softreq *sr,
struct nitrox_cmdq *cmdq)
{
- spin_lock_bh(&cmdq->response_lock);
+ unsigned int bh;
+ bh = spin_lock_bh(&cmdq->response_lock, SOFTIRQ_ALL_MASK);
list_del(&sr->response);
- spin_unlock_bh(&cmdq->response_lock);
+ spin_unlock_bh(&cmdq->response_lock, bh);
}
static struct nitrox_softreq *
@@ -435,11 +438,12 @@ static inline bool cmdq_full(struct nitrox_cmdq *cmdq, int qlen)
static void post_se_instr(struct nitrox_softreq *sr,
struct nitrox_cmdq *cmdq)
{
+ unsigned int bh;
struct nitrox_device *ndev = sr->ndev;
int idx;
u8 *ent;
- spin_lock_bh(&cmdq->cmdq_lock);
+ bh = spin_lock_bh(&cmdq->cmdq_lock, SOFTIRQ_ALL_MASK);
idx = cmdq->write_idx;
/* copy the instruction */
@@ -459,11 +463,12 @@ static void post_se_instr(struct nitrox_softreq *sr,
cmdq->write_idx = incr_index(idx, 1, ndev->qlen);
- spin_unlock_bh(&cmdq->cmdq_lock);
+ spin_unlock_bh(&cmdq->cmdq_lock, bh);
}
static int post_backlog_cmds(struct nitrox_cmdq *cmdq)
{
+ unsigned int bh;
struct nitrox_device *ndev = cmdq->ndev;
struct nitrox_softreq *sr, *tmp;
int ret = 0;
@@ -471,7 +476,7 @@ static int post_backlog_cmds(struct nitrox_cmdq *cmdq)
if (!atomic_read(&cmdq->backlog_count))
return 0;
- spin_lock_bh(&cmdq->backlog_lock);
+ bh = spin_lock_bh(&cmdq->backlog_lock, SOFTIRQ_ALL_MASK);
list_for_each_entry_safe(sr, tmp, &cmdq->backlog_head, backlog) {
struct skcipher_request *skreq;
@@ -494,7 +499,7 @@ static int post_backlog_cmds(struct nitrox_cmdq *cmdq)
/* backlog requests are posted, wakeup with -EINPROGRESS */
skcipher_request_complete(skreq, -EINPROGRESS);
}
- spin_unlock_bh(&cmdq->backlog_lock);
+ spin_unlock_bh(&cmdq->backlog_lock, bh);
return ret;
}
diff --git a/drivers/crypto/ccree/cc_request_mgr.c b/drivers/crypto/ccree/cc_request_mgr.c
index 83a8aaae..4c0c135 100644
--- a/drivers/crypto/ccree/cc_request_mgr.c
+++ b/drivers/crypto/ccree/cc_request_mgr.c
@@ -335,12 +335,13 @@ static int cc_do_send_request(struct cc_drvdata *drvdata,
static void cc_enqueue_backlog(struct cc_drvdata *drvdata,
struct cc_bl_item *bli)
{
+ unsigned int bh;
struct cc_req_mgr_handle *mgr = drvdata->request_mgr_handle;
- spin_lock_bh(&mgr->bl_lock);
+ bh = spin_lock_bh(&mgr->bl_lock, SOFTIRQ_ALL_MASK);
list_add_tail(&bli->list, &mgr->backlog);
++mgr->bl_len;
- spin_unlock_bh(&mgr->bl_lock);
+ spin_unlock_bh(&mgr->bl_lock, bh);
tasklet_schedule(&mgr->comptask);
}
@@ -412,6 +413,7 @@ int cc_send_request(struct cc_drvdata *drvdata, struct cc_crypto_req *cc_req,
struct cc_hw_desc *desc, unsigned int len,
struct crypto_async_request *req)
{
+ unsigned int bh;
int rc;
struct cc_req_mgr_handle *mgr = drvdata->request_mgr_handle;
bool ivgen = !!cc_req->ivgen_dma_addr_len;
@@ -427,7 +429,7 @@ int cc_send_request(struct cc_drvdata *drvdata, struct cc_crypto_req *cc_req,
return rc;
}
- spin_lock_bh(&mgr->hw_lock);
+ bh = spin_lock_bh(&mgr->hw_lock, SOFTIRQ_ALL_MASK);
rc = cc_queues_status(drvdata, mgr, total_len);
#ifdef CC_DEBUG_FORCE_BACKLOG
@@ -436,7 +438,7 @@ int cc_send_request(struct cc_drvdata *drvdata, struct cc_crypto_req *cc_req,
#endif /* CC_DEBUG_FORCE_BACKLOG */
if (rc == -ENOSPC && backlog_ok) {
- spin_unlock_bh(&mgr->hw_lock);
+ spin_unlock_bh(&mgr->hw_lock, bh);
bli = kmalloc(sizeof(*bli), flags);
if (!bli) {
@@ -456,7 +458,7 @@ int cc_send_request(struct cc_drvdata *drvdata, struct cc_crypto_req *cc_req,
rc = cc_do_send_request(drvdata, cc_req, desc, len, false,
ivgen);
- spin_unlock_bh(&mgr->hw_lock);
+ spin_unlock_bh(&mgr->hw_lock, bh);
return rc;
}
@@ -464,6 +466,7 @@ int cc_send_sync_request(struct cc_drvdata *drvdata,
struct cc_crypto_req *cc_req, struct cc_hw_desc *desc,
unsigned int len)
{
+ unsigned int bh;
int rc;
struct device *dev = drvdata_to_dev(drvdata);
struct cc_req_mgr_handle *mgr = drvdata->request_mgr_handle;
@@ -479,13 +482,13 @@ int cc_send_sync_request(struct cc_drvdata *drvdata,
}
while (true) {
- spin_lock_bh(&mgr->hw_lock);
+ bh = spin_lock_bh(&mgr->hw_lock, SOFTIRQ_ALL_MASK);
rc = cc_queues_status(drvdata, mgr, len + 1);
if (!rc)
break;
- spin_unlock_bh(&mgr->hw_lock);
+ spin_unlock_bh(&mgr->hw_lock, bh);
if (rc != -EAGAIN) {
cc_pm_put_suspend(dev);
return rc;
@@ -495,7 +498,7 @@ int cc_send_sync_request(struct cc_drvdata *drvdata,
}
rc = cc_do_send_request(drvdata, cc_req, desc, len, true, false);
- spin_unlock_bh(&mgr->hw_lock);
+ spin_unlock_bh(&mgr->hw_lock, bh);
if (rc != -EINPROGRESS) {
cc_pm_put_suspend(dev);
@@ -668,12 +671,13 @@ static void comp_handler(unsigned long devarg)
#if defined(CONFIG_PM)
int cc_resume_req_queue(struct cc_drvdata *drvdata)
{
+ unsigned int bh;
struct cc_req_mgr_handle *request_mgr_handle =
drvdata->request_mgr_handle;
- spin_lock_bh(&request_mgr_handle->hw_lock);
+ bh = spin_lock_bh(&request_mgr_handle->hw_lock, SOFTIRQ_ALL_MASK);
request_mgr_handle->is_runtime_suspended = false;
- spin_unlock_bh(&request_mgr_handle->hw_lock);
+ spin_unlock_bh(&request_mgr_handle->hw_lock, bh);
return 0;
}
@@ -684,18 +688,19 @@ int cc_resume_req_queue(struct cc_drvdata *drvdata)
*/
int cc_suspend_req_queue(struct cc_drvdata *drvdata)
{
+ unsigned int bh;
struct cc_req_mgr_handle *request_mgr_handle =
drvdata->request_mgr_handle;
/* lock the send_request */
- spin_lock_bh(&request_mgr_handle->hw_lock);
+ bh = spin_lock_bh(&request_mgr_handle->hw_lock, SOFTIRQ_ALL_MASK);
if (request_mgr_handle->req_queue_head !=
request_mgr_handle->req_queue_tail) {
- spin_unlock_bh(&request_mgr_handle->hw_lock);
+ spin_unlock_bh(&request_mgr_handle->hw_lock, bh);
return -EBUSY;
}
request_mgr_handle->is_runtime_suspended = true;
- spin_unlock_bh(&request_mgr_handle->hw_lock);
+ spin_unlock_bh(&request_mgr_handle->hw_lock, bh);
return 0;
}
diff --git a/drivers/crypto/chelsio/chtls/chtls_cm.c b/drivers/crypto/chelsio/chtls/chtls_cm.c
index 8af8c84..67bee49 100644
--- a/drivers/crypto/chelsio/chtls/chtls_cm.c
+++ b/drivers/crypto/chelsio/chtls/chtls_cm.c
@@ -1307,7 +1307,9 @@ static DEFINE_SPINLOCK(reap_list_lock);
*/
DECLARE_TASK_FUNC(process_reap_list, task_param)
{
- spin_lock_bh(&reap_list_lock);
+ unsigned int bh;
+
+ bh = spin_lock_bh(&reap_list_lock, SOFTIRQ_ALL_MASK);
while (reap_list) {
struct sock *sk = reap_list;
struct chtls_sock *csk = rcu_dereference_sk_user_data(sk);
@@ -1326,7 +1328,7 @@ DECLARE_TASK_FUNC(process_reap_list, task_param)
sock_put(sk);
spin_lock(&reap_list_lock);
}
- spin_unlock_bh(&reap_list_lock);
+ spin_unlock_bh(&reap_list_lock, bh);
}
static DECLARE_WORK(reap_task, process_reap_list);
@@ -1838,12 +1840,13 @@ static void send_abort_rpl(struct sock *sk, struct sk_buff *skb,
static void t4_defer_reply(struct sk_buff *skb, struct chtls_dev *cdev,
defer_handler_t handler)
{
+ unsigned int bh;
DEFERRED_SKB_CB(skb)->handler = handler;
- spin_lock_bh(&cdev->deferq.lock);
+ bh = spin_lock_bh(&cdev->deferq.lock, SOFTIRQ_ALL_MASK);
__skb_queue_tail(&cdev->deferq, skb);
if (skb_queue_len(&cdev->deferq) == 1)
schedule_work(&cdev->deferq_task);
- spin_unlock_bh(&cdev->deferq.lock);
+ spin_unlock_bh(&cdev->deferq.lock, bh);
}
static void chtls_send_abort_rpl(struct sock *sk, struct sk_buff *skb,
diff --git a/drivers/crypto/chelsio/chtls/chtls_hw.c b/drivers/crypto/chelsio/chtls/chtls_hw.c
index 4909607..2fbe923 100644
--- a/drivers/crypto/chelsio/chtls/chtls_hw.c
+++ b/drivers/crypto/chelsio/chtls/chtls_hw.c
@@ -139,6 +139,7 @@ int chtls_init_kmap(struct chtls_dev *cdev, struct cxgb4_lld_info *lldi)
static int get_new_keyid(struct chtls_sock *csk, u32 optname)
{
+ unsigned int bh;
struct net_device *dev = csk->egress_dev;
struct chtls_dev *cdev = csk->cdev;
struct chtls_hws *hws;
@@ -148,7 +149,7 @@ static int get_new_keyid(struct chtls_sock *csk, u32 optname)
adap = netdev2adap(dev);
hws = &csk->tlshws;
- spin_lock_bh(&cdev->kmap.lock);
+ bh = spin_lock_bh(&cdev->kmap.lock, SOFTIRQ_ALL_MASK);
keyid = find_first_zero_bit(cdev->kmap.addr, cdev->kmap.size);
if (keyid < cdev->kmap.size) {
__set_bit(keyid, cdev->kmap.addr);
@@ -160,12 +161,13 @@ static int get_new_keyid(struct chtls_sock *csk, u32 optname)
} else {
keyid = -1;
}
- spin_unlock_bh(&cdev->kmap.lock);
+ spin_unlock_bh(&cdev->kmap.lock, bh);
return keyid;
}
void free_tls_keyid(struct sock *sk)
{
+ unsigned int bh;
struct chtls_sock *csk = rcu_dereference_sk_user_data(sk);
struct net_device *dev = csk->egress_dev;
struct chtls_dev *cdev = csk->cdev;
@@ -178,7 +180,7 @@ void free_tls_keyid(struct sock *sk)
adap = netdev2adap(dev);
hws = &csk->tlshws;
- spin_lock_bh(&cdev->kmap.lock);
+ bh = spin_lock_bh(&cdev->kmap.lock, SOFTIRQ_ALL_MASK);
if (hws->rxkey >= 0) {
__clear_bit(hws->rxkey, cdev->kmap.addr);
atomic_dec(&adap->chcr_stats.tls_key);
@@ -189,7 +191,7 @@ void free_tls_keyid(struct sock *sk)
atomic_dec(&adap->chcr_stats.tls_key);
hws->txkey = -1;
}
- spin_unlock_bh(&cdev->kmap.lock);
+ spin_unlock_bh(&cdev->kmap.lock, bh);
}
unsigned int keyid_to_addr(int start_addr, int keyid)
diff --git a/drivers/crypto/chelsio/chtls/chtls_main.c b/drivers/crypto/chelsio/chtls/chtls_main.c
index f59b044..5816d3f 100644
--- a/drivers/crypto/chelsio/chtls/chtls_main.c
+++ b/drivers/crypto/chelsio/chtls/chtls_main.c
@@ -170,17 +170,18 @@ static void chtls_unregister_dev(struct chtls_dev *cdev)
static void process_deferq(struct work_struct *task_param)
{
+ unsigned int bh;
struct chtls_dev *cdev = container_of(task_param,
struct chtls_dev, deferq_task);
struct sk_buff *skb;
- spin_lock_bh(&cdev->deferq.lock);
+ bh = spin_lock_bh(&cdev->deferq.lock, SOFTIRQ_ALL_MASK);
while ((skb = __skb_dequeue(&cdev->deferq)) != NULL) {
- spin_unlock_bh(&cdev->deferq.lock);
+ spin_unlock_bh(&cdev->deferq.lock, bh);
DEFERRED_SKB_CB(skb)->handler(cdev, skb);
- spin_lock_bh(&cdev->deferq.lock);
+ bh = spin_lock_bh(&cdev->deferq.lock, SOFTIRQ_ALL_MASK);
}
- spin_unlock_bh(&cdev->deferq.lock);
+ spin_unlock_bh(&cdev->deferq.lock, bh);
}
static int chtls_get_skb(struct chtls_dev *cdev)
--
2.7.4
Disabling the softirqs is currently an all-or-nothing operation: either
all softirq vectors are enabled or none of them is. However we plan to
introduce per-vector granularity of this ability to improve latency
response and make each softirq vector interruptible by the others.
The first step carried out here is to provide the necessary APIs to
control the per-vector enable bits.
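The enabled/pending split can be modeled in plain C. The sketch below mirrors the helpers this patch adds to linux/interrupt.h, with the per-CPU accessors replaced by a plain variable; it is an illustration of the bit layout, not kernel code.

```c
#include <assert.h>

/*
 * Model of the per-CPU softirq_data word: the low 16 bits hold the
 * pending vectors, the bits from SOFTIRQ_ENABLED_SHIFT upward hold the
 * enabled vectors.  A plain variable stands in for irq_stat.__softirq_data.
 */
#define NR_SOFTIRQS		10
#define SOFTIRQ_ALL_MASK	((1U << NR_SOFTIRQS) - 1)
#define SOFTIRQ_ENABLED_SHIFT	16
#define SOFTIRQ_PENDING_MASK	((1U << SOFTIRQ_ENABLED_SHIFT) - 1)

static unsigned int softirq_data;

static unsigned int local_softirq_enabled(void)
{
	return softirq_data >> SOFTIRQ_ENABLED_SHIFT;
}

static unsigned int local_softirq_pending(void)
{
	return softirq_data & SOFTIRQ_PENDING_MASK;
}

static void softirq_enabled_nand(unsigned int enabled)
{
	softirq_data &= ~(enabled << SOFTIRQ_ENABLED_SHIFT);
}

static void softirq_enabled_or(unsigned int enabled)
{
	softirq_data |= enabled << SOFTIRQ_ENABLED_SHIFT;
}

static void softirq_enabled_set(unsigned int enabled)
{
	/* Rewrite the enabled half, preserve the pending half */
	softirq_data = (enabled << SOFTIRQ_ENABLED_SHIFT) | local_softirq_pending();
}
```

Note that `softirq_enabled_set()` preserves the pending half of the word, which is why it cannot be a plain assignment.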
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
arch/s390/include/asm/hardirq.h | 7 +++++-
include/linux/interrupt.h | 54 +++++++++++++++++++++++++++++++++++++----
2 files changed, 55 insertions(+), 6 deletions(-)
diff --git a/arch/s390/include/asm/hardirq.h b/arch/s390/include/asm/hardirq.h
index 84ad789..5a6c5c7 100644
--- a/arch/s390/include/asm/hardirq.h
+++ b/arch/s390/include/asm/hardirq.h
@@ -13,7 +13,12 @@
#include <asm/lowcore.h>
-#define local_softirq_pending() (S390_lowcore.softirq_data)
+#define local_softirq_data() (S390_lowcore.softirq_data)
+#define local_softirq_pending() (local_softirq_data() & SOFTIRQ_PENDING_MASK)
+#define local_softirq_enabled() (local_softirq_data() >> SOFTIRQ_ENABLED_SHIFT)
+#define softirq_enabled_nand(x) (S390_lowcore.softirq_data &= ~((x) << SOFTIRQ_ENABLED_SHIFT))
+#define softirq_enabled_or(x) (S390_lowcore.softirq_data |= ((x) << SOFTIRQ_ENABLED_SHIFT))
+#define softirq_enabled_set(x) (S390_lowcore.softirq_data = ((x) << SOFTIRQ_ENABLED_SHIFT) | local_softirq_pending())
#define softirq_pending_nand(x) (S390_lowcore.softirq_data &= ~(x))
#define softirq_pending_or(x) (S390_lowcore.softirq_data |= (x))
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index a577a54..4882196 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -468,19 +468,63 @@ enum
#define SOFTIRQ_STOP_IDLE_MASK (~(1 << RCU_SOFTIRQ))
#define SOFTIRQ_ALL_MASK (BIT(NR_SOFTIRQS) - 1)
-#ifndef local_softirq_pending
+#define SOFTIRQ_ENABLED_SHIFT 16
+#define SOFTIRQ_PENDING_MASK (BIT(SOFTIRQ_ENABLED_SHIFT) - 1)
+
+
+#ifndef local_softirq_data
#ifndef local_softirq_data_ref
#define local_softirq_data_ref irq_stat.__softirq_data
#endif
-#define local_softirq_pending() (__this_cpu_read(local_softirq_data_ref))
-#define softirq_pending_nand(x) (__this_cpu_and(local_softirq_data_ref, ~(x)))
-#define softirq_pending_or(x) (__this_cpu_or(local_softirq_data_ref, (x)))
+static inline unsigned int local_softirq_data(void)
+{
+ return __this_cpu_read(local_softirq_data_ref);
+}
+static inline unsigned int local_softirq_enabled(void)
+{
+ return local_softirq_data() >> SOFTIRQ_ENABLED_SHIFT;
+}
+
+static inline unsigned int local_softirq_pending(void)
+{
+ return local_softirq_data() & SOFTIRQ_PENDING_MASK;
+}
+
+static inline void softirq_enabled_nand(unsigned int enabled)
+{
+ enabled <<= SOFTIRQ_ENABLED_SHIFT;
+ __this_cpu_and(local_softirq_data_ref, ~enabled);
+}
+
+static inline void softirq_enabled_or(unsigned int enabled)
+{
+ enabled <<= SOFTIRQ_ENABLED_SHIFT;
+ __this_cpu_or(local_softirq_data_ref, enabled);
+}
+
+static inline void softirq_enabled_set(unsigned int enabled)
+{
+ unsigned int data;
+
+ data = enabled << SOFTIRQ_ENABLED_SHIFT;
+ data |= local_softirq_pending();
+ __this_cpu_write(local_softirq_data_ref, data);
+}
+
+static inline void softirq_pending_nand(unsigned int pending)
+{
+ __this_cpu_and(local_softirq_data_ref, ~pending);
+}
+
+static inline void softirq_pending_or(unsigned int pending)
+{
+ __this_cpu_or(local_softirq_data_ref, pending);
+}
#endif /* local_softirq_pending */
-
/* map softirq index to softirq name. update 'softirq_to_name' in
* kernel/softirq.c when adding a new softirq.
*/
--
2.7.4
From: Frederic Weisbecker <[email protected]>
Using the bottom-half masking APIs defined in linux/bottom_half.h won't
be possible without passing the relevant softirq vectors, which are
currently defined in linux/interrupt.h.
Yet we can't include linux/interrupt.h from linux/bottom_half.h due to
circular dependencies.
Now the vector bits belong to bottom halves anyway, so moving them there
is more natural and avoids nasty header dances.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
include/linux/bottom_half.h | 30 ++++++++++++++++++++++++++++++
include/linux/interrupt.h | 30 ------------------------------
2 files changed, 30 insertions(+), 30 deletions(-)
diff --git a/include/linux/bottom_half.h b/include/linux/bottom_half.h
index a104f81..a9571ad 100644
--- a/include/linux/bottom_half.h
+++ b/include/linux/bottom_half.h
@@ -4,6 +4,36 @@
#include <linux/preempt.h>
+/* PLEASE, avoid to allocate new softirqs, if you need not _really_ high
+ frequency threaded job scheduling. For almost all the purposes
+ tasklets are more than enough. F.e. all serial device BHs et
+ al. should be converted to tasklets, not to softirqs.
+ */
+
+enum
+{
+ HI_SOFTIRQ=0,
+ TIMER_SOFTIRQ,
+ NET_TX_SOFTIRQ,
+ NET_RX_SOFTIRQ,
+ BLOCK_SOFTIRQ,
+ IRQ_POLL_SOFTIRQ,
+ TASKLET_SOFTIRQ,
+ SCHED_SOFTIRQ,
+ HRTIMER_SOFTIRQ, /* Unused, but kept as tools rely on the
+ numbering. Sigh! */
+ RCU_SOFTIRQ, /* Preferable RCU should always be the last softirq */
+
+ NR_SOFTIRQS
+};
+
+#define SOFTIRQ_STOP_IDLE_MASK (~(1 << RCU_SOFTIRQ))
+#define SOFTIRQ_ALL_MASK (BIT(NR_SOFTIRQS) - 1)
+
+#define SOFTIRQ_ENABLED_SHIFT 16
+#define SOFTIRQ_PENDING_MASK (BIT(SOFTIRQ_ENABLED_SHIFT) - 1)
+
+
#ifdef CONFIG_TRACE_IRQFLAGS
extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
#else
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 4882196..b4425e6 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -442,36 +442,6 @@ extern bool force_irqthreads;
#define hard_irq_disable() do { } while(0)
#endif
-/* PLEASE, avoid to allocate new softirqs, if you need not _really_ high
- frequency threaded job scheduling. For almost all the purposes
- tasklets are more than enough. F.e. all serial device BHs et
- al. should be converted to tasklets, not to softirqs.
- */
-
-enum
-{
- HI_SOFTIRQ=0,
- TIMER_SOFTIRQ,
- NET_TX_SOFTIRQ,
- NET_RX_SOFTIRQ,
- BLOCK_SOFTIRQ,
- IRQ_POLL_SOFTIRQ,
- TASKLET_SOFTIRQ,
- SCHED_SOFTIRQ,
- HRTIMER_SOFTIRQ, /* Unused, but kept as tools rely on the
- numbering. Sigh! */
- RCU_SOFTIRQ, /* Preferable RCU should always be the last softirq */
-
- NR_SOFTIRQS
-};
-
-#define SOFTIRQ_STOP_IDLE_MASK (~(1 << RCU_SOFTIRQ))
-#define SOFTIRQ_ALL_MASK (BIT(NR_SOFTIRQS) - 1)
-
-#define SOFTIRQ_ENABLED_SHIFT 16
-#define SOFTIRQ_PENDING_MASK (BIT(SOFTIRQ_ENABLED_SHIFT) - 1)
-
-
#ifndef local_softirq_data
#ifndef local_softirq_data_ref
--
2.7.4
This pair of functions is implemented on top of local_bh_disable(),
which is going to take a softirq vector mask in order to apply
fine-grained vector disablement. The opening function will return the
mask of vectors enabled prior to the call, following a model similar to
local_irq_save/restore(). Subsequent calls to local_bh_disable() and
friends can then stack up:
bh = local_bh_disable(vec_mask);
nf_log_buf_open(&bh2) {
*bh2 = local_bh_disable(...)
}
...
nf_log_buf_close(bh2) {
local_bh_enable(bh2);
}
local_bh_enable(bh);
To prepare for that, make nf_log_buf_open() able to return the saved
vector enabled mask and pass it back to nf_log_buf_close(). We'll plug
it into local_bh_disable() in a subsequent patch.
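As a rough illustration of the save/restore contract, the model below stubs out the bottom-half plumbing with a plain variable (the real mask handling only lands in a later patch), keeping only the nf_log_buf_open()/nf_log_buf_close() shape described above.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins: a single enabled-vector mask instead of per-CPU state */
#define SOFTIRQ_ALL_MASK 0x3ffU

static unsigned int bh_enabled = SOFTIRQ_ALL_MASK;

static unsigned int local_bh_disable(unsigned int mask)
{
	unsigned int old = bh_enabled;

	bh_enabled &= ~mask;
	return old;		/* saved state, restored by local_bh_enable() */
}

static void local_bh_enable(unsigned int old)
{
	bh_enabled = old;
}

struct nf_log_buf { int count; };

/* The open function saves the caller's vector enabled mask through *bh... */
static struct nf_log_buf *nf_log_buf_open(unsigned int *bh)
{
	*bh = local_bh_disable(SOFTIRQ_ALL_MASK);
	return calloc(1, sizeof(struct nf_log_buf));
}

/* ...and the close function takes it back to restore the previous state */
static void nf_log_buf_close(struct nf_log_buf *m, unsigned int bh)
{
	free(m);
	local_bh_enable(bh);
}
```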
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
include/net/netfilter/nf_log.h | 4 ++--
net/ipv4/netfilter/nf_log_arp.c | 5 +++--
net/ipv4/netfilter/nf_log_ipv4.c | 5 +++--
net/ipv6/netfilter/nf_log_ipv6.c | 5 +++--
net/netfilter/nf_log.c | 4 ++--
5 files changed, 13 insertions(+), 10 deletions(-)
diff --git a/include/net/netfilter/nf_log.h b/include/net/netfilter/nf_log.h
index 0d39208..ab55ff9 100644
--- a/include/net/netfilter/nf_log.h
+++ b/include/net/netfilter/nf_log.h
@@ -96,9 +96,9 @@ void nf_log_trace(struct net *net,
struct nf_log_buf;
-struct nf_log_buf *nf_log_buf_open(void);
+struct nf_log_buf *nf_log_buf_open(unsigned int *bh);
__printf(2, 3) int nf_log_buf_add(struct nf_log_buf *m, const char *f, ...);
-void nf_log_buf_close(struct nf_log_buf *m);
+void nf_log_buf_close(struct nf_log_buf *m, unsigned int bh);
/* common logging functions */
int nf_log_dump_udp_header(struct nf_log_buf *m, const struct sk_buff *skb,
diff --git a/net/ipv4/netfilter/nf_log_arp.c b/net/ipv4/netfilter/nf_log_arp.c
index df5c2a2..3696911 100644
--- a/net/ipv4/netfilter/nf_log_arp.c
+++ b/net/ipv4/netfilter/nf_log_arp.c
@@ -85,12 +85,13 @@ static void nf_log_arp_packet(struct net *net, u_int8_t pf,
const char *prefix)
{
struct nf_log_buf *m;
+ unsigned int bh;
/* FIXME: Disabled from containers until syslog ns is supported */
if (!net_eq(net, &init_net) && !sysctl_nf_log_all_netns)
return;
- m = nf_log_buf_open();
+ m = nf_log_buf_open(&bh);
if (!loginfo)
loginfo = &default_loginfo;
@@ -99,7 +100,7 @@ static void nf_log_arp_packet(struct net *net, u_int8_t pf,
prefix);
dump_arp_packet(m, loginfo, skb, 0);
- nf_log_buf_close(m);
+ nf_log_buf_close(m, bh);
}
static struct nf_logger nf_arp_logger __read_mostly = {
diff --git a/net/ipv4/netfilter/nf_log_ipv4.c b/net/ipv4/netfilter/nf_log_ipv4.c
index 1e6f28c..996f386 100644
--- a/net/ipv4/netfilter/nf_log_ipv4.c
+++ b/net/ipv4/netfilter/nf_log_ipv4.c
@@ -317,12 +317,13 @@ static void nf_log_ip_packet(struct net *net, u_int8_t pf,
const char *prefix)
{
struct nf_log_buf *m;
+ unsigned int bh;
/* FIXME: Disabled from containers until syslog ns is supported */
if (!net_eq(net, &init_net) && !sysctl_nf_log_all_netns)
return;
- m = nf_log_buf_open();
+ m = nf_log_buf_open(&bh);
if (!loginfo)
loginfo = &default_loginfo;
@@ -335,7 +336,7 @@ static void nf_log_ip_packet(struct net *net, u_int8_t pf,
dump_ipv4_packet(net, m, loginfo, skb, 0);
- nf_log_buf_close(m);
+ nf_log_buf_close(m, bh);
}
static struct nf_logger nf_ip_logger __read_mostly = {
diff --git a/net/ipv6/netfilter/nf_log_ipv6.c b/net/ipv6/netfilter/nf_log_ipv6.c
index c6bf580..62bff78 100644
--- a/net/ipv6/netfilter/nf_log_ipv6.c
+++ b/net/ipv6/netfilter/nf_log_ipv6.c
@@ -349,12 +349,13 @@ static void nf_log_ip6_packet(struct net *net, u_int8_t pf,
const char *prefix)
{
struct nf_log_buf *m;
+ unsigned int bh;
/* FIXME: Disabled from containers until syslog ns is supported */
if (!net_eq(net, &init_net) && !sysctl_nf_log_all_netns)
return;
- m = nf_log_buf_open();
+ m = nf_log_buf_open(&bh);
if (!loginfo)
loginfo = &default_loginfo;
@@ -367,7 +368,7 @@ static void nf_log_ip6_packet(struct net *net, u_int8_t pf,
dump_ipv6_packet(net, m, loginfo, skb, skb_network_offset(skb), 1);
- nf_log_buf_close(m);
+ nf_log_buf_close(m, bh);
}
static struct nf_logger nf_ip6_logger __read_mostly = {
diff --git a/net/netfilter/nf_log.c b/net/netfilter/nf_log.c
index a61d6df..06ded8a31 100644
--- a/net/netfilter/nf_log.c
+++ b/net/netfilter/nf_log.c
@@ -292,7 +292,7 @@ __printf(2, 3) int nf_log_buf_add(struct nf_log_buf *m, const char *f, ...)
}
EXPORT_SYMBOL_GPL(nf_log_buf_add);
-struct nf_log_buf *nf_log_buf_open(void)
+struct nf_log_buf *nf_log_buf_open(unsigned int *bh)
{
struct nf_log_buf *m = kmalloc(sizeof(*m), GFP_ATOMIC);
@@ -307,7 +307,7 @@ struct nf_log_buf *nf_log_buf_open(void)
}
EXPORT_SYMBOL_GPL(nf_log_buf_open);
-void nf_log_buf_close(struct nf_log_buf *m)
+void nf_log_buf_close(struct nf_log_buf *m, unsigned int bh)
{
m->buf[m->count] = 0;
printk("%s\n", m->buf);
--
2.7.4
From: Frederic Weisbecker <[email protected]>
Tasklets and net-rx vectors don't quite get along. If one is interrupted
by another, we may run into a nasty spin_lock recursion:
[ 135.427198] Call Trace:
[ 135.429650] <IRQ>
[ 135.431690] dump_stack+0x67/0x95
[ 135.435024] spin_bug+0x95/0xf0
[ 135.438187] do_raw_spin_lock+0x77/0xa0
[ 135.442079] _raw_spin_lock_nested+0x40/0x50
[ 135.446439] ? tcp_v4_rcv+0x9da/0xb10
[ 135.450131] tcp_v4_rcv+0x9da/0xb10
[ 135.453650] ? ip_local_deliver+0x78/0x260
[ 135.457758] ip_local_deliver+0xdf/0x260
[ 135.461728] ip_rcv+0x4e/0x80
[ 135.464716] __netif_receive_skb_one_core+0x55/0x80
[ 135.469623] __netif_receive_skb+0x1b/0x70
[ 135.473757] netif_receive_skb_internal+0x92/0x390
[ 135.478574] napi_gro_receive+0xdf/0x1a0
[ 135.482545] rtl8169_poll+0x2b8/0x670
[ 135.486211] net_rx_action+0x1f8/0x3e0
[ 135.489989] __do_softirq+0x1a0/0x63c
[ 135.493691] irq_exit+0x10f/0x120
[ 135.497033] do_IRQ+0x71/0x130
[ 135.500137] common_interrupt+0xf/0xf
[ 135.503839] RIP: 0010:_raw_spin_unlock_irqrestore+0x59/0x70
[ 135.509471] Code: 75 21 53 9d e8 e8 1d a5 ff bf 01 00 00 00 e8 8e f6 97 ff 65 8b 05 ef 06 8d 7e 85 c0 74 0e 5b 41 5c 5d c3 e8 c9 20 a5 ff 53 9d <eb> dd e8 90 d4 8b ff 5b 41 5c 5d c3 66 66 2e 0f 1f 84 00 00 00 00
[ 135.528347] RSP: 0018:ffff88021fb03d28 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffde
[ 135.535989] RAX: ffff880217762480 RBX: 0000000000000246 RCX: 0000000000000002
[ 135.543201] RDX: 0000000000000000 RSI: ffff880217762c70 RDI: ffff880217762480
[ 135.550332] RBP: ffff88021fb03d38 R08: 0000000000000001 R09: 0000000000000000
[ 135.557519] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88021fa59b40
[ 135.564719] R13: ffff88021fa59b40 R14: 00000000fffd78b5 R15: 000000000e000001
[ 135.571905] ? common_interrupt+0xa/0xf
[ 135.575783] mod_timer+0x196/0x440
[ 135.579221] sk_reset_timer+0x18/0x30
[ 135.582940] tcp_schedule_loss_probe+0xe9/0x120
[ 135.587515] tcp_write_xmit+0x2c4/0x1240
[ 135.591468] tcp_tsq_write.part.46+0x5e/0xb0
[ 135.595756] tcp_tsq_handler+0xa3/0xb0
[ 135.599534] tcp_tasklet_func+0xdc/0x120
[ 135.603488] tasklet_action_common.isra.17+0xa3/0xb0
[ 135.608471] tasklet_action+0x2d/0x30
[ 135.612161] __do_softirq+0x1a0/0x63c
[ 135.615847] irq_exit+0x10f/0x120
[ 135.619173] do_IRQ+0x71/0x130
[ 135.622251] common_interrupt+0xf/0xf
[ 135.625949] </IRQ>
This is an ugly workaround until we find a proper solution.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
kernel/softirq.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index f4cb1ea..d95295f 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -320,6 +320,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
unsigned long old_flags = current->flags;
int max_restart = MAX_SOFTIRQ_RESTART;
struct softirq_action *h;
+ bool tasklet_enabled = false, net_rx_enabled = false;
bool in_hardirq;
__u32 pending;
int softirq_bit;
@@ -338,6 +339,10 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
in_hardirq = lockdep_softirq_start();
restart:
+ if (local_softirq_enabled() & BIT(TASKLET_SOFTIRQ))
+ tasklet_enabled = true;
+ if (local_softirq_enabled() & BIT(NET_RX_SOFTIRQ))
+ net_rx_enabled = true;
/* Reset the pending bitmask before enabling irqs */
softirq_pending_nand(pending);
@@ -358,8 +363,16 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
trace_softirq_entry(vec_nr);
softirq_enabled_nand(BIT(vec_nr));
+ if (vec_nr == NET_RX_SOFTIRQ && tasklet_enabled)
+ softirq_enabled_nand(BIT(TASKLET_SOFTIRQ));
+ if (vec_nr == TASKLET_SOFTIRQ && net_rx_enabled)
+ softirq_enabled_nand(BIT(NET_RX_SOFTIRQ));
barrier();
h->action(h);
+ if (vec_nr == TASKLET_SOFTIRQ && net_rx_enabled)
+ softirq_enabled_or(BIT(NET_RX_SOFTIRQ));
+ if (vec_nr == NET_RX_SOFTIRQ && tasklet_enabled)
+ softirq_enabled_or(BIT(TASKLET_SOFTIRQ));
softirq_enabled_or(BIT(vec_nr));
trace_softirq_exit(vec_nr);
if (unlikely(prev_count != preempt_count())) {
--
2.7.4
Now we can rely on the vector enabled bits to know if some vector is
disabled. Hence we can also now drive the softirq offset on top of it.
As a result, the softirq offset doesn't need to nest anymore: the
vector enabled mask does the nesting on the stack on its behalf:
// Start with local_bh_disabled() == SOFTIRQ_ALL_MASK
...
bh = local_bh_disable(BIT(NET_RX_SOFTIRQ)) {
bh = local_bh_disabled();
local_bh_disabled() &= ~BIT(NET_RX_SOFTIRQ);
// First vector disabled, inc preempt count
preempt_count += SOFTIRQ_DISABLE_OFFSET;
return bh;
}
....
bh2 = local_bh_disable(BIT(BLOCK_SOFTIRQ)) {
bh2 = local_bh_disabled();
local_bh_disabled() &= ~BIT(BLOCK_SOFTIRQ);
// No need to inc preempt count
return bh2;
}
...
local_bh_enable(bh2) {
local_bh_disabled() = bh2;
// No need to dec preempt count
}
...
local_bh_enable(bh) {
local_bh_disabled() = bh;
preempt_count -= SOFTIRQ_DISABLE_OFFSET;
}
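The walkthrough above can be condensed into a toy user-space model. Everything here is a simplified stand-in for the real per-CPU state; it only demonstrates the invariant that the preempt count moves on the first disable and on the matching outermost enable, with inner disable/enable pairs stacking for free.

```c
#include <assert.h>

#define SOFTIRQ_ALL_MASK	0x3ffU
#define SOFTIRQ_DISABLE_OFFSET	0x200

static unsigned int bh_enabled = SOFTIRQ_ALL_MASK;
static int preempt_count;

static unsigned int local_bh_disable(unsigned int mask)
{
	unsigned int old = bh_enabled;

	if (old == SOFTIRQ_ALL_MASK)		/* first vector disabled */
		preempt_count += SOFTIRQ_DISABLE_OFFSET;
	bh_enabled &= ~mask;
	return old;				/* saved state for the matching enable */
}

static void local_bh_enable(unsigned int old)
{
	bh_enabled = old;
	if (old == SOFTIRQ_ALL_MASK)		/* last disabled vector restored */
		preempt_count -= SOFTIRQ_DISABLE_OFFSET;
}
```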
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
kernel/softirq.c | 25 ++++++++++++++++---------
1 file changed, 16 insertions(+), 9 deletions(-)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index e2435b0..84da16c 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -117,6 +117,10 @@ unsigned int __local_bh_disable_ip(unsigned long ip, unsigned int cnt,
raw_local_irq_save(flags);
+ enabled = local_softirq_enabled();
+ if (enabled != SOFTIRQ_ALL_MASK)
+ cnt &= ~SOFTIRQ_MASK;
+
/*
* The preempt tracer hooks into preempt_count_add and will break
* lockdep because it calls back into lockdep after SOFTIRQ_OFFSET
@@ -131,7 +135,6 @@ unsigned int __local_bh_disable_ip(unsigned long ip, unsigned int cnt,
if (softirq_count() == (cnt & SOFTIRQ_MASK))
trace_softirqs_off(ip);
- enabled = local_softirq_enabled();
softirq_enabled_nand(mask);
raw_local_irq_restore(flags);
@@ -157,6 +160,9 @@ void local_bh_enable_no_softirq(unsigned int bh)
softirq_enabled_set(bh);
+ if (bh != SOFTIRQ_ALL_MASK)
+ return;
+
if (preempt_count() == SOFTIRQ_DISABLE_OFFSET)
trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
@@ -175,18 +181,18 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt, unsigned int bh)
local_irq_disable();
#endif
softirq_enabled_set(bh);
-
- /*
- * Are softirqs going to be turned on now:
- */
- if (softirq_count() == SOFTIRQ_DISABLE_OFFSET)
+ if (bh != SOFTIRQ_ALL_MASK) {
+ cnt &= ~SOFTIRQ_MASK;
+ } else if (!(softirq_count() & SOFTIRQ_OFFSET)) {
+ /* Are softirqs going to be turned on now: */
trace_softirqs_on(ip);
+ }
/*
* Keep preemption disabled until we are done with
* softirq processing:
*/
- preempt_count_sub(cnt - 1);
-
+ if (cnt)
+ preempt_count_sub(cnt - 1);
if (unlikely(!in_interrupt() && local_softirq_pending())) {
/*
@@ -196,7 +202,8 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt, unsigned int bh)
do_softirq();
}
- preempt_count_dec();
+ if (cnt)
+ preempt_count_dec();
#ifdef CONFIG_TRACE_IRQFLAGS
local_irq_enable();
#endif
--
2.7.4
Now that all callers are ready, we can push down the softirq enabled
mask to the core from callers such as spin_lock_bh(), local_bh_disable(),
rcu_read_lock_bh(), etc...
It is applied to the CPU's vector enabled mask in __local_bh_disable_ip(),
which then returns the old value to be restored by __local_bh_enable_ip().
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
include/linux/bottom_half.h | 19 ++++++++++---------
include/linux/rwlock_api_smp.h | 14 ++++++++------
include/linux/spinlock_api_smp.h | 10 +++++-----
kernel/softirq.c | 28 +++++++++++++++++++---------
4 files changed, 42 insertions(+), 29 deletions(-)
diff --git a/include/linux/bottom_half.h b/include/linux/bottom_half.h
index 31fcdae..f8a68c8 100644
--- a/include/linux/bottom_half.h
+++ b/include/linux/bottom_half.h
@@ -37,9 +37,10 @@ enum
#ifdef CONFIG_TRACE_IRQFLAGS
-extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
+extern unsigned int __local_bh_disable_ip(unsigned long ip, unsigned int cnt,
+ unsigned int mask);
#else
-static __always_inline void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
+static __always_inline unsigned int __local_bh_disable_ip(unsigned long ip, unsigned int cnt, unsigned int mask)
{
preempt_count_add(cnt);
barrier();
@@ -48,21 +49,21 @@ static __always_inline void __local_bh_disable_ip(unsigned long ip, unsigned int
static inline unsigned int local_bh_disable(unsigned int mask)
{
- __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET);
- return 0;
+ return __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET, mask);
}
-extern void local_bh_enable_no_softirq(void);
-extern void __local_bh_enable_ip(unsigned long ip, unsigned int cnt);
+extern void local_bh_enable_no_softirq(unsigned int bh);
+extern void __local_bh_enable_ip(unsigned long ip,
+ unsigned int cnt, unsigned int bh);
-static inline void local_bh_enable_ip(unsigned long ip)
+static inline void local_bh_enable_ip(unsigned long ip, unsigned int bh)
{
- __local_bh_enable_ip(ip, SOFTIRQ_DISABLE_OFFSET);
+ __local_bh_enable_ip(ip, SOFTIRQ_DISABLE_OFFSET, bh);
}
static inline void local_bh_enable(unsigned int bh)
{
- __local_bh_enable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET);
+ __local_bh_enable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET, bh);
}
extern void local_bh_disable_all(void);
diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h
index fb66489..90ba7bf 100644
--- a/include/linux/rwlock_api_smp.h
+++ b/include/linux/rwlock_api_smp.h
@@ -173,10 +173,11 @@ static inline void __raw_read_lock_irq(rwlock_t *lock)
static inline unsigned int __raw_read_lock_bh(rwlock_t *lock,
unsigned int mask)
{
- __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+ unsigned int bh;
+ bh = __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, mask);
rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_);
LOCK_CONTENDED(lock, do_raw_read_trylock, do_raw_read_lock);
- return 0;
+ return bh;
}
static inline unsigned long __raw_write_lock_irqsave(rwlock_t *lock)
@@ -202,10 +203,11 @@ static inline void __raw_write_lock_irq(rwlock_t *lock)
static inline unsigned int __raw_write_lock_bh(rwlock_t *lock,
unsigned int mask)
{
- __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+ unsigned int bh;
+ bh = __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, mask);
rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_);
LOCK_CONTENDED(lock, do_raw_write_trylock, do_raw_write_lock);
- return 0;
+ return bh;
}
static inline void __raw_write_lock(rwlock_t *lock)
@@ -253,7 +255,7 @@ static inline void __raw_read_unlock_bh(rwlock_t *lock,
{
rwlock_release(&lock->dep_map, 1, _RET_IP_);
do_raw_read_unlock(lock);
- __local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+ __local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, bh);
}
static inline void __raw_write_unlock_irqrestore(rwlock_t *lock,
@@ -278,7 +280,7 @@ static inline void __raw_write_unlock_bh(rwlock_t *lock,
{
rwlock_release(&lock->dep_map, 1, _RET_IP_);
do_raw_write_unlock(lock);
- __local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+ __local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, bh);
}
#endif /* __LINUX_RWLOCK_API_SMP_H */
diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h
index 42bbf68..6602a56 100644
--- a/include/linux/spinlock_api_smp.h
+++ b/include/linux/spinlock_api_smp.h
@@ -132,9 +132,9 @@ static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
static inline unsigned int __raw_spin_lock_bh(raw_spinlock_t *lock, unsigned int mask)
{
- unsigned int bh = 0;
+ unsigned int bh;
- __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+ bh = __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, mask);
spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
@@ -179,19 +179,19 @@ static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock,
{
spin_release(&lock->dep_map, 1, _RET_IP_);
do_raw_spin_unlock(lock);
- __local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+ __local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, bh);
}
static inline int __raw_spin_trylock_bh(raw_spinlock_t *lock,
unsigned int *bh,
unsigned int mask)
{
- __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+ *bh = __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, mask);
if (do_raw_spin_trylock(lock)) {
spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
return 1;
}
- __local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+ __local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, *bh);
return 0;
}
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 22cc0a7..e2435b0 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -107,13 +107,16 @@ static bool ksoftirqd_running(unsigned long pending)
* where hardirqs are disabled legitimately:
*/
#ifdef CONFIG_TRACE_IRQFLAGS
-void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
+unsigned int __local_bh_disable_ip(unsigned long ip, unsigned int cnt,
+ unsigned int mask)
{
unsigned long flags;
+ unsigned int enabled;
WARN_ON_ONCE(in_irq());
raw_local_irq_save(flags);
+
/*
* The preempt tracer hooks into preempt_count_add and will break
* lockdep because it calls back into lockdep after SOFTIRQ_OFFSET
@@ -127,6 +130,9 @@ void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
*/
if (softirq_count() == (cnt & SOFTIRQ_MASK))
trace_softirqs_off(ip);
+
+ enabled = local_softirq_enabled();
+ softirq_enabled_nand(mask);
raw_local_irq_restore(flags);
if (preempt_count() == cnt) {
@@ -135,6 +141,7 @@ void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
#endif
trace_preempt_off(CALLER_ADDR0, get_lock_parent_ip());
}
+ return enabled;
}
EXPORT_SYMBOL(__local_bh_disable_ip);
#endif /* CONFIG_TRACE_IRQFLAGS */
@@ -143,11 +150,13 @@ EXPORT_SYMBOL(__local_bh_disable_ip);
* Special-case - softirqs can safely be enabled by __do_softirq(),
* without processing still-pending softirqs:
*/
-void local_bh_enable_no_softirq(void)
+void local_bh_enable_no_softirq(unsigned int bh)
{
WARN_ON_ONCE(in_irq());
lockdep_assert_irqs_disabled();
+ softirq_enabled_set(bh);
+
if (preempt_count() == SOFTIRQ_DISABLE_OFFSET)
trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
@@ -155,17 +164,18 @@ void local_bh_enable_no_softirq(void)
trace_softirqs_on(_RET_IP_);
__preempt_count_sub(SOFTIRQ_DISABLE_OFFSET);
-
}
EXPORT_SYMBOL(local_bh_enable_no_softirq);
-void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
+void __local_bh_enable_ip(unsigned long ip, unsigned int cnt, unsigned int bh)
{
WARN_ON_ONCE(in_irq());
lockdep_assert_irqs_enabled();
#ifdef CONFIG_TRACE_IRQFLAGS
local_irq_disable();
#endif
+ softirq_enabled_set(bh);
+
/*
* Are softirqs going to be turned on now:
*/
@@ -177,6 +187,7 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
*/
preempt_count_sub(cnt - 1);
+
if (unlikely(!in_interrupt() && local_softirq_pending())) {
/*
* Run softirq if any pending. And do it in its own stack
@@ -246,9 +257,6 @@ static void local_bh_exit(void)
__preempt_count_sub(SOFTIRQ_OFFSET);
}
-
-
-
/*
* We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
* but break the loop if need_resched() is set or after 2 ms.
@@ -395,15 +403,17 @@ asmlinkage __visible void do_softirq(void)
*/
void irq_enter(void)
{
+ unsigned int bh;
+
rcu_irq_enter();
if (is_idle_task(current) && !in_interrupt()) {
/*
* Prevent raise_softirq from needlessly waking up ksoftirqd
* here, as softirq will be serviced on return from interrupt.
*/
- local_bh_disable(SOFTIRQ_ALL_MASK);
+ bh = local_bh_disable(SOFTIRQ_ALL_MASK);
tick_irq_enter();
- local_bh_enable_no_softirq();
+ local_bh_enable_no_softirq(bh);
}
__irq_enter();
--
2.7.4
This function is implemented on top of spin_lock_bh(), which is going to
handle a softirq mask in order to apply fine-grained vector disablement.
The lock function is going to return the vector enabled mask as it stood
prior to the call, following a model similar to that of
local_irq_save/restore. Subsequent calls to local_bh_disable() and
friends can then stack up:
bh = local_bh_disable(vec_mask);
isdn_net_get_locked_lp(&bh2) {
spin_lock(...)
*bh2 = local_bh_disable(...)
}
...
spin_unlock_bh(..., bh2);
local_bh_enable(bh);
To prepare for that, make isdn_net_get_locked_lp() able to return a
saved vector enabled mask. We'll plug it to spin_[un]lock_bh() in a
subsequent patch.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
drivers/isdn/i4l/isdn_concap.c | 3 ++-
drivers/isdn/i4l/isdn_net.c | 3 ++-
drivers/isdn/i4l/isdn_net.h | 3 ++-
drivers/isdn/i4l/isdn_ppp.c | 3 ++-
4 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/drivers/isdn/i4l/isdn_concap.c b/drivers/isdn/i4l/isdn_concap.c
index 336523e..4352bec 100644
--- a/drivers/isdn/i4l/isdn_concap.c
+++ b/drivers/isdn/i4l/isdn_concap.c
@@ -41,9 +41,10 @@
static int isdn_concap_dl_data_req(struct concap_proto *concap, struct sk_buff *skb)
{
+ unsigned int bh;
struct net_device *ndev = concap->net_dev;
isdn_net_dev *nd = ((isdn_net_local *) netdev_priv(ndev))->netdev;
- isdn_net_local *lp = isdn_net_get_locked_lp(nd);
+ isdn_net_local *lp = isdn_net_get_locked_lp(nd, &bh);
IX25DEBUG("isdn_concap_dl_data_req: %s \n", concap->net_dev->name);
if (!lp) {
diff --git a/drivers/isdn/i4l/isdn_net.c b/drivers/isdn/i4l/isdn_net.c
index c138f66..a9687ca 100644
--- a/drivers/isdn/i4l/isdn_net.c
+++ b/drivers/isdn/i4l/isdn_net.c
@@ -1054,6 +1054,7 @@ isdn_net_xmit(struct net_device *ndev, struct sk_buff *skb)
isdn_net_local *slp;
isdn_net_local *lp = netdev_priv(ndev);
int retv = NETDEV_TX_OK;
+ unsigned int bh;
if (((isdn_net_local *) netdev_priv(ndev))->master) {
printk("isdn BUG at %s:%d!\n", __FILE__, __LINE__);
@@ -1068,7 +1069,7 @@ isdn_net_xmit(struct net_device *ndev, struct sk_buff *skb)
}
#endif
nd = ((isdn_net_local *) netdev_priv(ndev))->netdev;
- lp = isdn_net_get_locked_lp(nd);
+ lp = isdn_net_get_locked_lp(nd, &bh);
if (!lp) {
printk(KERN_WARNING "%s: all channels busy - requeuing!\n", ndev->name);
return NETDEV_TX_BUSY;
diff --git a/drivers/isdn/i4l/isdn_net.h b/drivers/isdn/i4l/isdn_net.h
index cca6d68..f4621b1 100644
--- a/drivers/isdn/i4l/isdn_net.h
+++ b/drivers/isdn/i4l/isdn_net.h
@@ -76,7 +76,8 @@ static __inline__ int isdn_net_lp_busy(isdn_net_local *lp)
* For the given net device, this will get a non-busy channel out of the
* corresponding bundle. The returned channel is locked.
*/
-static __inline__ isdn_net_local *isdn_net_get_locked_lp(isdn_net_dev *nd)
+static __inline__ isdn_net_local *isdn_net_get_locked_lp(isdn_net_dev *nd,
+ unsigned int *bh)
{
unsigned long flags;
isdn_net_local *lp;
diff --git a/drivers/isdn/i4l/isdn_ppp.c b/drivers/isdn/i4l/isdn_ppp.c
index a7b275e..4c5ac13 100644
--- a/drivers/isdn/i4l/isdn_ppp.c
+++ b/drivers/isdn/i4l/isdn_ppp.c
@@ -1258,6 +1258,7 @@ isdn_ppp_xmit(struct sk_buff *skb, struct net_device *netdev)
unsigned int proto = PPP_IP; /* 0x21 */
struct ippp_struct *ipt, *ipts;
int slot, retval = NETDEV_TX_OK;
+ unsigned int bh;
mlp = netdev_priv(netdev);
nd = mlp->netdev; /* get master lp */
@@ -1292,7 +1293,7 @@ isdn_ppp_xmit(struct sk_buff *skb, struct net_device *netdev)
goto out;
}
- lp = isdn_net_get_locked_lp(nd);
+ lp = isdn_net_get_locked_lp(nd, &bh);
if (!lp) {
printk(KERN_WARNING "%s: all channels busy - requeuing!\n", netdev->name);
retval = NETDEV_TX_BUSY;
--
2.7.4
From: Frederic Weisbecker <[email protected]>
This pair of functions is implemented on top of __local_bh_disable_ip(),
which is going to handle a softirq mask in order to apply fine-grained
vector disablement. The lock function is going to return the vector
enabled mask as it stood prior to the call, following a model similar
to that of local_irq_save/restore. Subsequent calls to
local_bh_disable() and friends can then stack up:
bh = local_bh_disable(vec_mask);
bh2 = write_lock_bh(...);
...
write_unlock_bh(..., bh2);
local_bh_enable(bh);
To prepare for that, make write_lock_bh() able to return a saved vector
enabled mask and pass it back to write_unlock_bh(). We'll plug it to
__local_bh_disable_ip() in a subsequent patch.
Thanks to Coccinelle, which helped a lot with scripts such as the
following:
@rw exists@
identifier func;
expression e;
@@
func(...) {
+ unsigned int bh;
...
- write_lock_bh(e);
+ bh = write_lock_bh(e);
...
- write_unlock_bh(e);
+ write_unlock_bh(e, bh);
...
}
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
drivers/block/drbd/drbd_receiver.c | 10 +-
drivers/infiniband/core/roce_gid_mgmt.c | 5 +-
drivers/infiniband/hw/cxgb4/cm.c | 5 +-
drivers/isdn/mISDN/socket.c | 17 +--
drivers/isdn/mISDN/stack.c | 10 +-
drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c | 17 +--
drivers/net/ethernet/chelsio/cxgb3/l2t.c | 10 +-
drivers/net/ethernet/chelsio/cxgb4/clip_tbl.c | 41 +++---
drivers/net/ethernet/chelsio/cxgb4/l2t.c | 17 +--
drivers/net/ethernet/chelsio/cxgb4/smt.c | 5 +-
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 5 +-
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 10 +-
.../net/ethernet/mellanox/mlxsw/spectrum_router.c | 10 +-
.../net/ethernet/mellanox/mlxsw/spectrum_span.c | 5 +-
drivers/net/hamradio/6pack.c | 5 +-
drivers/net/hamradio/mkiss.c | 5 +-
drivers/net/ieee802154/fakelb.c | 20 +--
drivers/net/ppp/ppp_generic.c | 35 +++---
drivers/net/ppp/pppoe.c | 24 ++--
drivers/net/wireless/intel/iwlwifi/mvm/d3.c | 5 +-
.../net/wireless/intersil/hostap/hostap_80211_rx.c | 5 +-
drivers/net/wireless/intersil/hostap/hostap_main.c | 12 +-
drivers/net/wireless/intersil/hostap/hostap_proc.c | 2 +-
drivers/s390/net/netiucv.c | 36 +++---
drivers/s390/net/qeth_l3_main.c | 10 +-
drivers/scsi/bnx2i/bnx2i_iscsi.c | 45 ++++---
drivers/scsi/cxgbi/libcxgbi.c | 15 ++-
drivers/scsi/iscsi_tcp.c | 31 +++--
.../vc04_services/interface/vchiq_arm/vchiq_arm.c | 69 +++++-----
drivers/target/iscsi/iscsi_target_nego.c | 60 +++++----
drivers/tty/hvc/hvc_iucv.c | 10 +-
drivers/xen/pvcalls-back.c | 20 +--
fs/dlm/lowcomms.c | 40 +++---
fs/ocfs2/cluster/tcp.c | 35 +++---
include/linux/rwlock.h | 8 +-
include/linux/rwlock_api_smp.h | 30 +++--
include/linux/spinlock_api_up.h | 2 +-
include/net/ping.h | 1 +
include/net/sock.h | 10 +-
kernel/bpf/reuseport_array.c | 22 ++--
kernel/bpf/sockmap.c | 20 +--
kernel/locking/spinlock.c | 26 ++--
net/6lowpan/ndisc.c | 12 +-
net/appletalk/aarp.c | 48 ++++---
net/appletalk/atalk_proc.c | 6 +-
net/appletalk/ddp.c | 65 ++++++----
net/atm/mpc.c | 5 +-
net/atm/mpoa_caches.c | 41 +++---
net/ax25/ax25_iface.c | 15 ++-
net/ax25/ax25_route.c | 33 ++---
net/bridge/netfilter/ebtables.c | 32 ++---
net/core/dev.c | 19 +--
net/core/link_watch.c | 5 +-
net/core/neighbour.c | 139 ++++++++++++---------
net/core/netpoll.c | 5 +-
net/core/pktgen.c | 5 +-
net/core/rtnetlink.c | 10 +-
net/core/skbuff.c | 5 +-
net/core/sock.c | 10 +-
net/decnet/af_decnet.c | 20 +--
net/decnet/dn_table.c | 27 ++--
net/hsr/hsr_device.c | 7 +-
net/ieee802154/6lowpan/tx.c | 5 +-
net/ieee802154/socket.c | 20 +--
net/ipv4/arp.c | 10 +-
net/ipv4/ipmr.c | 17 +--
net/ipv4/ping.c | 22 ++--
net/ipv4/raw.c | 10 +-
net/ipv6/addrconf.c | 120 ++++++++++--------
net/ipv6/anycast.c | 38 +++---
net/ipv6/ip6_fib.c | 10 +-
net/ipv6/ip6mr.c | 27 ++--
net/ipv6/ipv6_sockglue.c | 11 +-
net/ipv6/mcast.c | 128 +++++++++++--------
net/ipv6/ndisc.c | 17 +--
net/ipv6/netfilter/nf_tproxy_ipv6.c | 5 +-
net/iucv/af_iucv.c | 20 +--
net/kcm/kcmsock.c | 24 ++--
net/l2tp/l2tp_core.c | 33 ++---
net/l2tp/l2tp_debugfs.c | 5 +-
net/l2tp/l2tp_ip.c | 29 +++--
net/l2tp/l2tp_ip6.c | 29 +++--
net/lapb/lapb_iface.c | 15 ++-
net/mac802154/llsec.c | 31 +++--
net/netfilter/ipset/ip_set_core.c | 45 ++++---
net/netfilter/ipvs/ip_vs_conn.c | 8 +-
net/netfilter/nf_conntrack_proto_gre.c | 27 ++--
net/netfilter/nf_log_common.c | 5 +-
net/netfilter/nf_nat_redirect.c | 5 +-
net/netfilter/nfnetlink_log.c | 7 +-
net/netfilter/nfnetlink_queue.c | 12 +-
net/netfilter/nft_meta.c | 13 +-
net/netfilter/nft_set_rbtree.c | 32 +++--
net/rds/tcp.c | 10 +-
net/rds/tcp_connect.c | 5 +-
net/rds/tcp_listen.c | 15 ++-
net/rds/tcp_recv.c | 5 +-
net/rds/tcp_send.c | 5 +-
net/rxrpc/ar-internal.h | 15 ++-
net/rxrpc/call_accept.c | 12 +-
net/rxrpc/call_object.c | 5 +-
net/rxrpc/conn_client.c | 5 +-
net/rxrpc/conn_event.c | 5 +-
net/rxrpc/recvmsg.c | 26 ++--
net/rxrpc/sendmsg.c | 10 +-
net/sctp/ipv6.c | 5 +-
net/sctp/proc.c | 5 +-
net/sctp/socket.c | 5 +-
net/smc/af_smc.c | 10 +-
net/smc/smc_cdc.c | 5 +-
net/smc/smc_core.c | 41 +++---
net/sunrpc/xprtsock.c | 45 ++++---
net/tipc/monitor.c | 54 ++++----
net/tipc/node.c | 14 ++-
net/tipc/topsrv.c | 35 +++---
net/tls/tls_sw.c | 10 +-
net/x25/af_x25.c | 45 ++++---
net/x25/x25_forward.c | 25 ++--
net/x25/x25_link.c | 30 +++--
net/x25/x25_proc.c | 6 +-
net/x25/x25_route.c | 25 ++--
net/xfrm/xfrm_policy.c | 7 +-
122 files changed, 1494 insertions(+), 1050 deletions(-)
diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index 75f6b47..a763105 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -704,6 +704,7 @@ static void drbd_incoming_connection(struct sock *sk)
static int prepare_listen_socket(struct drbd_connection *connection, struct accept_wait_data *ad)
{
+ unsigned int bh;
int err, sndbuf_size, rcvbuf_size, my_addr_len;
struct sockaddr_in6 my_addr;
struct socket *s_listen;
@@ -740,11 +741,11 @@ static int prepare_listen_socket(struct drbd_connection *connection, struct acce
goto out;
ad->s_listen = s_listen;
- write_lock_bh(&s_listen->sk->sk_callback_lock);
+ bh = write_lock_bh(&s_listen->sk->sk_callback_lock, SOFTIRQ_ALL_MASK);
ad->original_sk_state_change = s_listen->sk->sk_state_change;
s_listen->sk->sk_state_change = drbd_incoming_connection;
s_listen->sk->sk_user_data = ad;
- write_unlock_bh(&s_listen->sk->sk_callback_lock);
+ write_unlock_bh(&s_listen->sk->sk_callback_lock, bh);
what = "listen";
err = s_listen->ops->listen(s_listen, 5);
@@ -767,10 +768,11 @@ static int prepare_listen_socket(struct drbd_connection *connection, struct acce
static void unregister_state_change(struct sock *sk, struct accept_wait_data *ad)
{
- write_lock_bh(&sk->sk_callback_lock);
+ unsigned int bh;
+ bh = write_lock_bh(&sk->sk_callback_lock, SOFTIRQ_ALL_MASK);
sk->sk_state_change = ad->original_sk_state_change;
sk->sk_user_data = NULL;
- write_unlock_bh(&sk->sk_callback_lock);
+ write_unlock_bh(&sk->sk_callback_lock, bh);
}
static struct socket *drbd_wait_for_connect(struct drbd_connection *connection, struct accept_wait_data *ad)
diff --git a/drivers/infiniband/core/roce_gid_mgmt.c b/drivers/infiniband/core/roce_gid_mgmt.c
index ee36619..d1ed810 100644
--- a/drivers/infiniband/core/roce_gid_mgmt.c
+++ b/drivers/infiniband/core/roce_gid_mgmt.c
@@ -370,6 +370,7 @@ static void enum_netdev_ipv4_ips(struct ib_device *ib_dev,
static void enum_netdev_ipv6_ips(struct ib_device *ib_dev,
u8 port, struct net_device *ndev)
{
+ unsigned int bh;
struct inet6_ifaddr *ifp;
struct inet6_dev *in6_dev;
struct sin6_list {
@@ -388,7 +389,7 @@ static void enum_netdev_ipv6_ips(struct ib_device *ib_dev,
if (!in6_dev)
return;
- read_lock_bh(&in6_dev->lock);
+ bh = read_lock_bh(&in6_dev->lock, SOFTIRQ_ALL_MASK);
list_for_each_entry(ifp, &in6_dev->addr_list, if_list) {
struct sin6_list *entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
@@ -399,7 +400,7 @@ static void enum_netdev_ipv6_ips(struct ib_device *ib_dev,
entry->sin6.sin6_addr = ifp->addr;
list_add_tail(&entry->list, &sin6_list);
}
- read_unlock_bh(&in6_dev->lock);
+ read_unlock_bh(&in6_dev->lock, bh);
in6_dev_put(in6_dev);
diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
index 0f83cbe..e064387 100644
--- a/drivers/infiniband/hw/cxgb4/cm.c
+++ b/drivers/infiniband/hw/cxgb4/cm.c
@@ -3163,6 +3163,7 @@ static int pick_local_ipaddrs(struct c4iw_dev *dev, struct iw_cm_id *cm_id)
static int get_lladdr(struct net_device *dev, struct in6_addr *addr,
unsigned char banned_flags)
{
+ unsigned int bh;
struct inet6_dev *idev;
int err = -EADDRNOTAVAIL;
@@ -3171,7 +3172,7 @@ static int get_lladdr(struct net_device *dev, struct in6_addr *addr,
if (idev != NULL) {
struct inet6_ifaddr *ifp;
- read_lock_bh(&idev->lock);
+ bh = read_lock_bh(&idev->lock, SOFTIRQ_ALL_MASK);
list_for_each_entry(ifp, &idev->addr_list, if_list) {
if (ifp->scope == IFA_LINK &&
!(ifp->flags & banned_flags)) {
@@ -3180,7 +3181,7 @@ static int get_lladdr(struct net_device *dev, struct in6_addr *addr,
break;
}
}
- read_unlock_bh(&idev->lock);
+ read_unlock_bh(&idev->lock, bh);
}
rcu_read_unlock();
return err;
diff --git a/drivers/isdn/mISDN/socket.c b/drivers/isdn/mISDN/socket.c
index 18c0a12..2c83d3d 100644
--- a/drivers/isdn/mISDN/socket.c
+++ b/drivers/isdn/mISDN/socket.c
@@ -54,16 +54,18 @@ _l2_alloc_skb(unsigned int len, gfp_t gfp_mask)
static void
mISDN_sock_link(struct mISDN_sock_list *l, struct sock *sk)
{
- write_lock_bh(&l->lock);
+ unsigned int bh;
+ bh = write_lock_bh(&l->lock, SOFTIRQ_ALL_MASK);
sk_add_node(sk, &l->head);
- write_unlock_bh(&l->lock);
+ write_unlock_bh(&l->lock, bh);
}
static void mISDN_sock_unlink(struct mISDN_sock_list *l, struct sock *sk)
{
- write_lock_bh(&l->lock);
+ unsigned int bh;
+ bh = write_lock_bh(&l->lock, SOFTIRQ_ALL_MASK);
sk_del_node_init(sk);
- write_unlock_bh(&l->lock);
+ write_unlock_bh(&l->lock, bh);
}
static int
@@ -474,6 +476,7 @@ static int data_sock_getsockopt(struct socket *sock, int level, int optname,
static int
data_sock_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
{
+ unsigned int bh;
struct sockaddr_mISDN *maddr = (struct sockaddr_mISDN *) addr;
struct sock *sk = sock->sk;
struct sock *csk;
@@ -499,7 +502,7 @@ data_sock_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
}
if (sk->sk_protocol < ISDN_P_B_START) {
- read_lock_bh(&data_sockets.lock);
+ bh = read_lock_bh(&data_sockets.lock, SOFTIRQ_ALL_MASK);
sk_for_each(csk, &data_sockets.head) {
if (sk == csk)
continue;
@@ -510,11 +513,11 @@ data_sock_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
if (IS_ISDN_P_TE(csk->sk_protocol)
== IS_ISDN_P_TE(sk->sk_protocol))
continue;
- read_unlock_bh(&data_sockets.lock);
+ read_unlock_bh(&data_sockets.lock, bh);
err = -EBUSY;
goto done;
}
- read_unlock_bh(&data_sockets.lock);
+ read_unlock_bh(&data_sockets.lock, bh);
}
_pms(sk)->ch.send = mISDN_send;
diff --git a/drivers/isdn/mISDN/stack.c b/drivers/isdn/mISDN/stack.c
index d97c6dd..fafa705 100644
--- a/drivers/isdn/mISDN/stack.c
+++ b/drivers/isdn/mISDN/stack.c
@@ -428,6 +428,7 @@ int
connect_layer1(struct mISDNdevice *dev, struct mISDNchannel *ch,
u_int protocol, struct sockaddr_mISDN *adr)
{
+ unsigned int bh;
struct mISDN_sock *msk = container_of(ch, struct mISDN_sock, ch);
struct channel_req rq;
int err;
@@ -452,9 +453,9 @@ connect_layer1(struct mISDNdevice *dev, struct mISDNchannel *ch,
dev->id);
if (err)
return err;
- write_lock_bh(&dev->D.st->l1sock.lock);
+ bh = write_lock_bh(&dev->D.st->l1sock.lock, SOFTIRQ_ALL_MASK);
sk_add_node(&msk->sk, &dev->D.st->l1sock.head);
- write_unlock_bh(&dev->D.st->l1sock.lock);
+ write_unlock_bh(&dev->D.st->l1sock.lock, bh);
break;
default:
return -ENOPROTOOPT;
@@ -572,6 +573,7 @@ create_l2entity(struct mISDNdevice *dev, struct mISDNchannel *ch,
void
delete_channel(struct mISDNchannel *ch)
{
+ unsigned int bh;
struct mISDN_sock *msk = container_of(ch, struct mISDN_sock, ch);
struct mISDNchannel *pch;
@@ -594,9 +596,9 @@ delete_channel(struct mISDNchannel *ch)
case ISDN_P_TE_S0:
case ISDN_P_NT_E1:
case ISDN_P_TE_E1:
- write_lock_bh(&ch->st->l1sock.lock);
+ bh = write_lock_bh(&ch->st->l1sock.lock, SOFTIRQ_ALL_MASK);
sk_del_node_init(&msk->sk);
- write_unlock_bh(&ch->st->l1sock.lock);
+ write_unlock_bh(&ch->st->l1sock.lock, bh);
ch->st->dev->D.ctrl(&ch->st->dev->D, CLOSE_CHANNEL, NULL);
break;
case ISDN_P_LAPD_TE:
diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c
index 10a1520..26a2b4d 100644
--- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c
+++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c
@@ -1061,19 +1061,20 @@ EXPORT_SYMBOL(cxgb3_ofld_send);
static int is_offloading(struct net_device *dev)
{
+ unsigned int bh;
struct adapter *adapter;
int i;
- read_lock_bh(&adapter_list_lock);
+ bh = read_lock_bh(&adapter_list_lock, SOFTIRQ_ALL_MASK);
list_for_each_entry(adapter, &adapter_list, adapter_list) {
for_each_port(adapter, i) {
if (dev == adapter->port[i]) {
- read_unlock_bh(&adapter_list_lock);
+ read_unlock_bh(&adapter_list_lock, bh);
return 1;
}
}
}
- read_unlock_bh(&adapter_list_lock);
+ read_unlock_bh(&adapter_list_lock, bh);
return 0;
}
@@ -1209,16 +1210,18 @@ static void free_tid_maps(struct tid_info *t)
static inline void add_adapter(struct adapter *adap)
{
- write_lock_bh(&adapter_list_lock);
+ unsigned int bh;
+ bh = write_lock_bh(&adapter_list_lock, SOFTIRQ_ALL_MASK);
list_add_tail(&adap->adapter_list, &adapter_list);
- write_unlock_bh(&adapter_list_lock);
+ write_unlock_bh(&adapter_list_lock, bh);
}
static inline void remove_adapter(struct adapter *adap)
{
- write_lock_bh(&adapter_list_lock);
+ unsigned int bh;
+ bh = write_lock_bh(&adapter_list_lock, SOFTIRQ_ALL_MASK);
list_del(&adap->adapter_list);
- write_unlock_bh(&adapter_list_lock);
+ write_unlock_bh(&adapter_list_lock, bh);
}
int cxgb3_offload_activate(struct adapter *adapter)
diff --git a/drivers/net/ethernet/chelsio/cxgb3/l2t.c b/drivers/net/ethernet/chelsio/cxgb3/l2t.c
index 1e2d466..3c27b94 100644
--- a/drivers/net/ethernet/chelsio/cxgb3/l2t.c
+++ b/drivers/net/ethernet/chelsio/cxgb3/l2t.c
@@ -305,6 +305,7 @@ static inline void reuse_entry(struct l2t_entry *e, struct neighbour *neigh)
struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct dst_entry *dst,
struct net_device *dev, const void *daddr)
{
+ unsigned int bh;
struct l2t_entry *e = NULL;
struct neighbour *neigh;
struct port_info *p;
@@ -333,7 +334,7 @@ struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct dst_entry *dst,
hash = arp_hash(addr, ifidx, d);
- write_lock_bh(&d->lock);
+ bh = write_lock_bh(&d->lock, SOFTIRQ_ALL_MASK);
for (e = d->l2tab[hash].first; e; e = e->next)
if (e->addr == addr && e->ifindex == ifidx &&
e->smt_idx == smt_idx) {
@@ -362,7 +363,7 @@ struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct dst_entry *dst,
spin_unlock(&e->lock);
}
done_unlock:
- write_unlock_bh(&d->lock);
+ write_unlock_bh(&d->lock, bh);
done_rcu:
if (neigh)
neigh_release(neigh);
@@ -401,6 +402,7 @@ static void handle_failed_resolution(struct t3cdev *dev, struct sk_buff_head *ar
*/
void t3_l2t_update(struct t3cdev *dev, struct neighbour *neigh)
{
+ unsigned int bh;
struct sk_buff_head arpq;
struct l2t_entry *e;
struct l2t_data *d = L2DATA(dev);
@@ -408,13 +410,13 @@ void t3_l2t_update(struct t3cdev *dev, struct neighbour *neigh)
int ifidx = neigh->dev->ifindex;
int hash = arp_hash(addr, ifidx, d);
- read_lock_bh(&d->lock);
+ bh = read_lock_bh(&d->lock, SOFTIRQ_ALL_MASK);
for (e = d->l2tab[hash].first; e; e = e->next)
if (e->addr == addr && e->ifindex == ifidx) {
spin_lock(&e->lock);
goto found;
}
- read_unlock_bh(&d->lock);
+ read_unlock_bh(&d->lock, bh);
return;
found:
diff --git a/drivers/net/ethernet/chelsio/cxgb4/clip_tbl.c b/drivers/net/ethernet/chelsio/cxgb4/clip_tbl.c
index 33314fe..a3da529 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/clip_tbl.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/clip_tbl.c
@@ -73,6 +73,8 @@ static int clip6_release_mbox(const struct net_device *dev,
int cxgb4_clip_get(const struct net_device *dev, const u32 *lip, u8 v6)
{
+	unsigned int bh;
struct adapter *adap = netdev2adap(dev);
struct clip_tbl *ctbl = adap->clipt;
struct clip_entry *ce, *cte;
@@ -85,7 +87,7 @@ int cxgb4_clip_get(const struct net_device *dev, const u32 *lip, u8 v6)
hash = clip_addr_hash(ctbl, addr, v6);
- read_lock_bh(&ctbl->lock);
+ bh = read_lock_bh(&ctbl->lock, SOFTIRQ_ALL_MASK);
list_for_each_entry(cte, &ctbl->hash_list[hash], list) {
if (cte->addr6.sin6_family == AF_INET6 && v6)
ret = memcmp(lip, cte->addr6.sin6_addr.s6_addr,
@@ -95,14 +97,14 @@ int cxgb4_clip_get(const struct net_device *dev, const u32 *lip, u8 v6)
sizeof(struct in_addr));
if (!ret) {
ce = cte;
- read_unlock_bh(&ctbl->lock);
+ read_unlock_bh(&ctbl->lock, bh);
refcount_inc(&ce->refcnt);
return 0;
}
}
- read_unlock_bh(&ctbl->lock);
+ read_unlock_bh(&ctbl->lock, bh);
- write_lock_bh(&ctbl->lock);
+ bh = write_lock_bh(&ctbl->lock, SOFTIRQ_ALL_MASK);
if (!list_empty(&ctbl->ce_free_head)) {
ce = list_first_entry(&ctbl->ce_free_head,
struct clip_entry, list);
@@ -118,7 +120,7 @@ int cxgb4_clip_get(const struct net_device *dev, const u32 *lip, u8 v6)
lip, sizeof(struct in6_addr));
ret = clip6_get_mbox(dev, (const struct in6_addr *)lip);
if (ret) {
- write_unlock_bh(&ctbl->lock);
+ write_unlock_bh(&ctbl->lock, bh);
dev_err(adap->pdev_dev,
"CLIP FW cmd failed with error %d, "
"Connections using %pI6c wont be "
@@ -132,13 +134,13 @@ int cxgb4_clip_get(const struct net_device *dev, const u32 *lip, u8 v6)
sizeof(struct in_addr));
}
} else {
- write_unlock_bh(&ctbl->lock);
+ write_unlock_bh(&ctbl->lock, bh);
dev_info(adap->pdev_dev, "CLIP table overflow, "
"Connections using %pI6c wont be offloaded",
(void *)lip);
return -ENOMEM;
}
- write_unlock_bh(&ctbl->lock);
+ write_unlock_bh(&ctbl->lock, bh);
refcount_set(&ce->refcnt, 1);
return 0;
}
@@ -147,6 +149,7 @@ EXPORT_SYMBOL(cxgb4_clip_get);
void cxgb4_clip_release(const struct net_device *dev, const u32 *lip, u8 v6)
{
unsigned int bh;
+ unsigned int bh2;
struct adapter *adap = netdev2adap(dev);
struct clip_tbl *ctbl = adap->clipt;
struct clip_entry *ce, *cte;
@@ -159,7 +162,7 @@ void cxgb4_clip_release(const struct net_device *dev, const u32 *lip, u8 v6)
hash = clip_addr_hash(ctbl, addr, v6);
- read_lock_bh(&ctbl->lock);
+ bh = read_lock_bh(&ctbl->lock, SOFTIRQ_ALL_MASK);
list_for_each_entry(cte, &ctbl->hash_list[hash], list) {
if (cte->addr6.sin6_family == AF_INET6 && v6)
ret = memcmp(lip, cte->addr6.sin6_addr.s6_addr,
@@ -169,16 +172,16 @@ void cxgb4_clip_release(const struct net_device *dev, const u32 *lip, u8 v6)
sizeof(struct in_addr));
if (!ret) {
ce = cte;
- read_unlock_bh(&ctbl->lock);
+ read_unlock_bh(&ctbl->lock, bh);
goto found;
}
}
- read_unlock_bh(&ctbl->lock);
+ read_unlock_bh(&ctbl->lock, bh);
return;
found:
- write_lock_bh(&ctbl->lock);
- bh = spin_lock_bh(&ce->lock, SOFTIRQ_ALL_MASK);
+ bh = write_lock_bh(&ctbl->lock, SOFTIRQ_ALL_MASK);
+ bh2 = spin_lock_bh(&ce->lock, SOFTIRQ_ALL_MASK);
if (refcount_dec_and_test(&ce->refcnt)) {
list_del(&ce->list);
INIT_LIST_HEAD(&ce->list);
@@ -187,8 +190,8 @@ void cxgb4_clip_release(const struct net_device *dev, const u32 *lip, u8 v6)
if (v6)
clip6_release_mbox(dev, (const struct in6_addr *)lip);
}
- spin_unlock_bh(&ce->lock, bh);
- write_unlock_bh(&ctbl->lock);
+ spin_unlock_bh(&ce->lock, bh2);
+ write_unlock_bh(&ctbl->lock, bh);
}
EXPORT_SYMBOL(cxgb4_clip_release);
@@ -199,6 +202,7 @@ EXPORT_SYMBOL(cxgb4_clip_release);
static int cxgb4_update_dev_clip(struct net_device *root_dev,
struct net_device *dev)
{
+ unsigned int bh;
struct inet6_dev *idev = NULL;
struct inet6_ifaddr *ifa;
int ret = 0;
@@ -207,13 +211,13 @@ static int cxgb4_update_dev_clip(struct net_device *root_dev,
if (!idev)
return ret;
- read_lock_bh(&idev->lock);
+ bh = read_lock_bh(&idev->lock, SOFTIRQ_ALL_MASK);
list_for_each_entry(ifa, &idev->addr_list, if_list) {
ret = cxgb4_clip_get(dev, (const u32 *)ifa->addr.s6_addr, 1);
if (ret < 0)
break;
}
- read_unlock_bh(&idev->lock);
+ read_unlock_bh(&idev->lock, bh);
return ret;
}
@@ -252,13 +256,14 @@ EXPORT_SYMBOL(cxgb4_update_root_dev_clip);
int clip_tbl_show(struct seq_file *seq, void *v)
{
+ unsigned int bh;
struct adapter *adapter = seq->private;
struct clip_tbl *ctbl = adapter->clipt;
struct clip_entry *ce;
char ip[60];
int i;
- read_lock_bh(&ctbl->lock);
+ bh = read_lock_bh(&ctbl->lock, SOFTIRQ_ALL_MASK);
seq_puts(seq, "IP Address Users\n");
for (i = 0 ; i < ctbl->clipt_size; ++i) {
@@ -271,7 +276,7 @@ int clip_tbl_show(struct seq_file *seq, void *v)
}
seq_printf(seq, "Free clip entries : %d\n", atomic_read(&ctbl->nfree));
- read_unlock_bh(&ctbl->lock);
+ read_unlock_bh(&ctbl->lock, bh);
return 0;
}
diff --git a/drivers/net/ethernet/chelsio/cxgb4/l2t.c b/drivers/net/ethernet/chelsio/cxgb4/l2t.c
index 3653fd0..7cf1976 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/l2t.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/l2t.c
@@ -422,6 +422,7 @@ struct l2t_entry *cxgb4_l2t_get(struct l2t_data *d, struct neighbour *neigh,
const struct net_device *physdev,
unsigned int priority)
{
+ unsigned int bh;
u8 lport;
u16 vlan;
struct l2t_entry *e;
@@ -440,7 +441,7 @@ struct l2t_entry *cxgb4_l2t_get(struct l2t_data *d, struct neighbour *neigh,
else
vlan = VLAN_NONE;
- write_lock_bh(&d->lock);
+ bh = write_lock_bh(&d->lock, SOFTIRQ_ALL_MASK);
for (e = d->l2tab[hash].first; e; e = e->next)
if (!addreq(e, addr) && e->ifindex == ifidx &&
e->vlan == vlan && e->lport == lport) {
@@ -470,7 +471,7 @@ struct l2t_entry *cxgb4_l2t_get(struct l2t_data *d, struct neighbour *neigh,
spin_unlock(&e->lock);
}
done:
- write_unlock_bh(&d->lock);
+ write_unlock_bh(&d->lock, bh);
return e;
}
EXPORT_SYMBOL(cxgb4_l2t_get);
@@ -536,6 +537,7 @@ static void handle_failed_resolution(struct adapter *adap, struct l2t_entry *e)
*/
void t4_l2t_update(struct adapter *adap, struct neighbour *neigh)
{
+ unsigned int bh;
struct l2t_entry *e;
struct sk_buff_head *arpq = NULL;
struct l2t_data *d = adap->l2t;
@@ -544,7 +546,7 @@ void t4_l2t_update(struct adapter *adap, struct neighbour *neigh)
int ifidx = neigh->dev->ifindex;
int hash = addr_hash(d, addr, addr_len, ifidx);
- read_lock_bh(&d->lock);
+ bh = read_lock_bh(&d->lock, SOFTIRQ_ALL_MASK);
for (e = d->l2tab[hash].first; e; e = e->next)
if (!addreq(e, addr) && e->ifindex == ifidx) {
spin_lock(&e->lock);
@@ -553,7 +555,7 @@ void t4_l2t_update(struct adapter *adap, struct neighbour *neigh)
spin_unlock(&e->lock);
break;
}
- read_unlock_bh(&d->lock);
+ read_unlock_bh(&d->lock, bh);
return;
found:
@@ -588,11 +590,12 @@ void t4_l2t_update(struct adapter *adap, struct neighbour *neigh)
struct l2t_entry *t4_l2t_alloc_switching(struct adapter *adap, u16 vlan,
u8 port, u8 *eth_addr)
{
+ unsigned int bh;
struct l2t_data *d = adap->l2t;
struct l2t_entry *e;
int ret;
- write_lock_bh(&d->lock);
+ bh = write_lock_bh(&d->lock, SOFTIRQ_ALL_MASK);
e = find_or_alloc_l2e(d, vlan, port, eth_addr);
if (e) {
spin_lock(&e->lock); /* avoid race with t4_l2t_free */
@@ -606,7 +609,7 @@ struct l2t_entry *t4_l2t_alloc_switching(struct adapter *adap, u16 vlan,
if (ret < 0) {
_t4_l2e_free(e);
spin_unlock(&e->lock);
- write_unlock_bh(&d->lock);
+ write_unlock_bh(&d->lock, bh);
return NULL;
}
} else {
@@ -615,7 +618,7 @@ struct l2t_entry *t4_l2t_alloc_switching(struct adapter *adap, u16 vlan,
spin_unlock(&e->lock);
}
- write_unlock_bh(&d->lock);
+ write_unlock_bh(&d->lock, bh);
return e;
}
diff --git a/drivers/net/ethernet/chelsio/cxgb4/smt.c b/drivers/net/ethernet/chelsio/cxgb4/smt.c
index c660e9d..7201434 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/smt.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/smt.c
@@ -210,10 +210,11 @@ static int write_smt_entry(struct adapter *adapter, struct smt_entry *e)
static struct smt_entry *t4_smt_alloc_switching(struct adapter *adap, u16 pfvf,
u8 *smac)
{
+ unsigned int bh;
struct smt_data *s = adap->smt;
struct smt_entry *e;
- write_lock_bh(&s->lock);
+ bh = write_lock_bh(&s->lock, SOFTIRQ_ALL_MASK);
e = find_or_alloc_smte(s, smac);
if (e) {
spin_lock(&e->lock);
@@ -228,7 +229,7 @@ static struct smt_entry *t4_smt_alloc_switching(struct adapter *adap, u16 pfvf,
}
spin_unlock(&e->lock);
}
- write_unlock_bh(&s->lock);
+ write_unlock_bh(&s->lock, bh);
return e;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index 48a1f16..bb6b21c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -386,6 +386,7 @@ static void mlx5e_rep_update_flows(struct mlx5e_priv *priv,
static void mlx5e_rep_neigh_update(struct work_struct *work)
{
+ unsigned int bh;
struct mlx5e_neigh_hash_entry *nhe =
container_of(work, struct mlx5e_neigh_hash_entry, neigh_update_work);
struct neighbour *n = nhe->n;
@@ -403,11 +404,11 @@ static void mlx5e_rep_neigh_update(struct work_struct *work)
* We use this lock to avoid inconsistency between the neigh validity
* and it's hw address.
*/
- read_lock_bh(&n->lock);
+ bh = read_lock_bh(&n->lock, SOFTIRQ_ALL_MASK);
memcpy(ha, n->ha, ETH_ALEN);
nud_state = n->nud_state;
dead = n->dead;
- read_unlock_bh(&n->lock);
+ read_unlock_bh(&n->lock, bh);
neigh_connected = (nud_state & NUD_VALID) && !dead;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 9fed540..ffe8e61 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -2307,6 +2307,7 @@ static int mlx5e_create_encap_header_ipv4(struct mlx5e_priv *priv,
struct net_device *mirred_dev,
struct mlx5e_encap_entry *e)
{
+ unsigned int bh;
int max_encap_size = MLX5_CAP_ESW(priv->mdev, max_encap_header_size);
int ipv4_encap_size = ETH_HLEN + sizeof(struct iphdr) + VXLAN_HLEN;
struct ip_tunnel_key *tun_key = &e->tun_info.key;
@@ -2366,10 +2367,10 @@ static int mlx5e_create_encap_header_ipv4(struct mlx5e_priv *priv,
if (err)
goto free_encap;
- read_lock_bh(&n->lock);
+ bh = read_lock_bh(&n->lock, SOFTIRQ_ALL_MASK);
nud_state = n->nud_state;
ether_addr_copy(e->h_dest, n->ha);
- read_unlock_bh(&n->lock);
+ read_unlock_bh(&n->lock, bh);
switch (e->tunnel_type) {
case MLX5_HEADER_TYPE_VXLAN:
@@ -2416,6 +2417,7 @@ static int mlx5e_create_encap_header_ipv6(struct mlx5e_priv *priv,
struct net_device *mirred_dev,
struct mlx5e_encap_entry *e)
{
+ unsigned int bh;
int max_encap_size = MLX5_CAP_ESW(priv->mdev, max_encap_header_size);
int ipv6_encap_size = ETH_HLEN + sizeof(struct ipv6hdr) + VXLAN_HLEN;
struct ip_tunnel_key *tun_key = &e->tun_info.key;
@@ -2475,10 +2477,10 @@ static int mlx5e_create_encap_header_ipv6(struct mlx5e_priv *priv,
if (err)
goto free_encap;
- read_lock_bh(&n->lock);
+ bh = read_lock_bh(&n->lock, SOFTIRQ_ALL_MASK);
nud_state = n->nud_state;
ether_addr_copy(e->h_dest, n->ha);
- read_unlock_bh(&n->lock);
+ read_unlock_bh(&n->lock, bh);
switch (e->tunnel_type) {
case MLX5_HEADER_TYPE_VXLAN:
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
index 2ab9cf2..420d0e0 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
@@ -2345,6 +2345,7 @@ struct mlxsw_sp_netevent_work {
static void mlxsw_sp_router_neigh_event_work(struct work_struct *work)
{
+ unsigned int bh;
struct mlxsw_sp_netevent_work *net_work =
container_of(work, struct mlxsw_sp_netevent_work, work);
struct mlxsw_sp *mlxsw_sp = net_work->mlxsw_sp;
@@ -2358,11 +2359,11 @@ static void mlxsw_sp_router_neigh_event_work(struct work_struct *work)
* then we are guaranteed to receive another event letting us
* know about it.
*/
- read_lock_bh(&n->lock);
+ bh = read_lock_bh(&n->lock, SOFTIRQ_ALL_MASK);
memcpy(ha, n->ha, ETH_ALEN);
nud_state = n->nud_state;
dead = n->dead;
- read_unlock_bh(&n->lock);
+ read_unlock_bh(&n->lock, bh);
rtnl_lock();
mlxsw_sp_span_respin(mlxsw_sp);
@@ -3379,6 +3380,7 @@ static void mlxsw_sp_nexthop_rif_fini(struct mlxsw_sp_nexthop *nh)
static int mlxsw_sp_nexthop_neigh_init(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_nexthop *nh)
{
+ unsigned int bh;
struct mlxsw_sp_neigh_entry *neigh_entry;
struct neighbour *n;
u8 nud_state, dead;
@@ -3418,10 +3420,10 @@ static int mlxsw_sp_nexthop_neigh_init(struct mlxsw_sp *mlxsw_sp,
nh->neigh_entry = neigh_entry;
list_add_tail(&nh->neigh_list_node, &neigh_entry->nexthop_list);
- read_lock_bh(&n->lock);
+ bh = read_lock_bh(&n->lock, SOFTIRQ_ALL_MASK);
nud_state = n->nud_state;
dead = n->dead;
- read_unlock_bh(&n->lock);
+ read_unlock_bh(&n->lock, bh);
__mlxsw_sp_nexthop_neigh_update(nh, !(nud_state & NUD_VALID && !dead));
return 0;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
index d965fd2..40517d2 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
@@ -110,6 +110,7 @@ static int mlxsw_sp_span_dmac(struct neigh_table *tbl,
struct net_device *dev,
unsigned char dmac[ETH_ALEN])
{
+ unsigned int bh;
struct neighbour *neigh = neigh_lookup(tbl, pkey, dev);
int err = 0;
@@ -121,12 +122,12 @@ static int mlxsw_sp_span_dmac(struct neigh_table *tbl,
neigh_event_send(neigh, NULL);
- read_lock_bh(&neigh->lock);
+ bh = read_lock_bh(&neigh->lock, SOFTIRQ_ALL_MASK);
if ((neigh->nud_state & NUD_VALID) && !neigh->dead)
memcpy(dmac, neigh->ha, ETH_ALEN);
else
err = -ENOENT;
- read_unlock_bh(&neigh->lock);
+ read_unlock_bh(&neigh->lock, bh);
neigh_release(neigh);
return err;
diff --git a/drivers/net/hamradio/6pack.c b/drivers/net/hamradio/6pack.c
index 8317571..0bf7009 100644
--- a/drivers/net/hamradio/6pack.c
+++ b/drivers/net/hamradio/6pack.c
@@ -661,12 +661,13 @@ static int sixpack_open(struct tty_struct *tty)
*/
static void sixpack_close(struct tty_struct *tty)
{
+ unsigned int bh;
struct sixpack *sp;
- write_lock_bh(&disc_data_lock);
+ bh = write_lock_bh(&disc_data_lock, SOFTIRQ_ALL_MASK);
sp = tty->disc_data;
tty->disc_data = NULL;
- write_unlock_bh(&disc_data_lock);
+ write_unlock_bh(&disc_data_lock, bh);
if (!sp)
return;
diff --git a/drivers/net/hamradio/mkiss.c b/drivers/net/hamradio/mkiss.c
index ff57063..dce4a3f 100644
--- a/drivers/net/hamradio/mkiss.c
+++ b/drivers/net/hamradio/mkiss.c
@@ -786,12 +786,13 @@ static int mkiss_open(struct tty_struct *tty)
static void mkiss_close(struct tty_struct *tty)
{
+ unsigned int bh;
struct mkiss *ax;
- write_lock_bh(&disc_data_lock);
+ bh = write_lock_bh(&disc_data_lock, SOFTIRQ_ALL_MASK);
ax = tty->disc_data;
tty->disc_data = NULL;
- write_unlock_bh(&disc_data_lock);
+ write_unlock_bh(&disc_data_lock, bh);
if (!ax)
return;
diff --git a/drivers/net/ieee802154/fakelb.c b/drivers/net/ieee802154/fakelb.c
index 3b0588d..72d7369 100644
--- a/drivers/net/ieee802154/fakelb.c
+++ b/drivers/net/ieee802154/fakelb.c
@@ -57,20 +57,22 @@ static int fakelb_hw_ed(struct ieee802154_hw *hw, u8 *level)
static int fakelb_hw_channel(struct ieee802154_hw *hw, u8 page, u8 channel)
{
+ unsigned int bh;
struct fakelb_phy *phy = hw->priv;
- write_lock_bh(&fakelb_ifup_phys_lock);
+ bh = write_lock_bh(&fakelb_ifup_phys_lock, SOFTIRQ_ALL_MASK);
phy->page = page;
phy->channel = channel;
- write_unlock_bh(&fakelb_ifup_phys_lock);
+ write_unlock_bh(&fakelb_ifup_phys_lock, bh);
return 0;
}
static int fakelb_hw_xmit(struct ieee802154_hw *hw, struct sk_buff *skb)
{
+ unsigned int bh;
struct fakelb_phy *current_phy = hw->priv, *phy;
- read_lock_bh(&fakelb_ifup_phys_lock);
+ bh = read_lock_bh(&fakelb_ifup_phys_lock, SOFTIRQ_ALL_MASK);
WARN_ON(current_phy->suspended);
list_for_each_entry(phy, &fakelb_ifup_phys, list_ifup) {
if (current_phy == phy)
@@ -84,7 +86,7 @@ static int fakelb_hw_xmit(struct ieee802154_hw *hw, struct sk_buff *skb)
ieee802154_rx_irqsafe(phy->hw, newskb, 0xcc);
}
}
- read_unlock_bh(&fakelb_ifup_phys_lock);
+ read_unlock_bh(&fakelb_ifup_phys_lock, bh);
ieee802154_xmit_complete(hw, skb, false);
return 0;
@@ -92,24 +94,26 @@ static int fakelb_hw_xmit(struct ieee802154_hw *hw, struct sk_buff *skb)
static int fakelb_hw_start(struct ieee802154_hw *hw)
{
+ unsigned int bh;
struct fakelb_phy *phy = hw->priv;
- write_lock_bh(&fakelb_ifup_phys_lock);
+ bh = write_lock_bh(&fakelb_ifup_phys_lock, SOFTIRQ_ALL_MASK);
phy->suspended = false;
list_add(&phy->list_ifup, &fakelb_ifup_phys);
- write_unlock_bh(&fakelb_ifup_phys_lock);
+ write_unlock_bh(&fakelb_ifup_phys_lock, bh);
return 0;
}
static void fakelb_hw_stop(struct ieee802154_hw *hw)
{
+ unsigned int bh;
struct fakelb_phy *phy = hw->priv;
- write_lock_bh(&fakelb_ifup_phys_lock);
+ bh = write_lock_bh(&fakelb_ifup_phys_lock, SOFTIRQ_ALL_MASK);
phy->suspended = true;
list_del(&phy->list_ifup);
- write_unlock_bh(&fakelb_ifup_phys_lock);
+ write_unlock_bh(&fakelb_ifup_phys_lock, bh);
}
static int
diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
index c6cda50..23bc409 100644
--- a/drivers/net/ppp/ppp_generic.c
+++ b/drivers/net/ppp/ppp_generic.c
@@ -1935,7 +1935,8 @@ static void __ppp_channel_push(struct channel *pch)
static void ppp_channel_push(struct channel *pch)
{
- read_lock_bh(&pch->upl);
+ unsigned int bh;
+ bh = read_lock_bh(&pch->upl, SOFTIRQ_ALL_MASK);
if (pch->ppp) {
(*this_cpu_ptr(pch->ppp->xmit_recursion))++;
__ppp_channel_push(pch);
@@ -1943,7 +1944,7 @@ static void ppp_channel_push(struct channel *pch)
} else {
__ppp_channel_push(pch);
}
- read_unlock_bh(&pch->upl);
+ read_unlock_bh(&pch->upl, bh);
}
/*
@@ -1970,6 +1971,7 @@ ppp_do_recv(struct ppp *ppp, struct sk_buff *skb, struct channel *pch)
void
ppp_input(struct ppp_channel *chan, struct sk_buff *skb)
{
+ unsigned int bh;
struct channel *pch = chan->ppp;
int proto;
@@ -1978,7 +1980,7 @@ ppp_input(struct ppp_channel *chan, struct sk_buff *skb)
return;
}
- read_lock_bh(&pch->upl);
+ bh = read_lock_bh(&pch->upl, SOFTIRQ_ALL_MASK);
if (!pskb_may_pull(skb, 2)) {
kfree_skb(skb);
if (pch->ppp) {
@@ -2002,20 +2004,21 @@ ppp_input(struct ppp_channel *chan, struct sk_buff *skb)
}
done:
- read_unlock_bh(&pch->upl);
+ read_unlock_bh(&pch->upl, bh);
}
/* Put a 0-length skb in the receive queue as an error indication */
void
ppp_input_error(struct ppp_channel *chan, int code)
{
+ unsigned int bh;
struct channel *pch = chan->ppp;
struct sk_buff *skb;
if (!pch)
return;
- read_lock_bh(&pch->upl);
+ bh = read_lock_bh(&pch->upl, SOFTIRQ_ALL_MASK);
if (pch->ppp) {
skb = alloc_skb(0, GFP_ATOMIC);
if (skb) {
@@ -2024,7 +2027,7 @@ ppp_input_error(struct ppp_channel *chan, int code)
ppp_do_recv(pch->ppp, skb, pch);
}
}
- read_unlock_bh(&pch->upl);
+ read_unlock_bh(&pch->upl, bh);
}
/*
@@ -2606,14 +2609,15 @@ int ppp_channel_index(struct ppp_channel *chan)
*/
int ppp_unit_number(struct ppp_channel *chan)
{
+ unsigned int bh;
struct channel *pch = chan->ppp;
int unit = -1;
if (pch) {
- read_lock_bh(&pch->upl);
+ bh = read_lock_bh(&pch->upl, SOFTIRQ_ALL_MASK);
if (pch->ppp)
unit = pch->ppp->file.index;
- read_unlock_bh(&pch->upl);
+ read_unlock_bh(&pch->upl, bh);
}
return unit;
}
@@ -2623,14 +2627,15 @@ int ppp_unit_number(struct ppp_channel *chan)
*/
char *ppp_dev_name(struct ppp_channel *chan)
{
+ unsigned int bh;
struct channel *pch = chan->ppp;
char *name = NULL;
if (pch) {
- read_lock_bh(&pch->upl);
+ bh = read_lock_bh(&pch->upl, SOFTIRQ_ALL_MASK);
if (pch->ppp && pch->ppp->dev)
name = pch->ppp->dev->name;
- read_unlock_bh(&pch->upl);
+ read_unlock_bh(&pch->upl, bh);
}
return name;
}
@@ -3134,6 +3139,7 @@ static int
ppp_connect_channel(struct channel *pch, int unit)
{
+ unsigned int bh;
struct ppp *ppp;
struct ppp_net *pn;
int ret = -ENXIO;
@@ -3145,7 +3151,7 @@ ppp_connect_channel(struct channel *pch, int unit)
ppp = ppp_find_unit(pn, unit);
if (!ppp)
goto out;
- write_lock_bh(&pch->upl);
+ bh = write_lock_bh(&pch->upl, SOFTIRQ_ALL_MASK);
ret = -EINVAL;
if (pch->ppp)
goto outl;
@@ -3173,7 +3179,7 @@ ppp_connect_channel(struct channel *pch, int unit)
ret = 0;
outl:
- write_unlock_bh(&pch->upl);
+ write_unlock_bh(&pch->upl, bh);
out:
mutex_unlock(&pn->all_ppp_mutex);
return ret;
@@ -3185,13 +3191,14 @@ ppp_connect_channel(struct channel *pch, int unit)
static int
ppp_disconnect_channel(struct channel *pch)
{
+ unsigned int bh;
struct ppp *ppp;
int err = -EINVAL;
- write_lock_bh(&pch->upl);
+ bh = write_lock_bh(&pch->upl, SOFTIRQ_ALL_MASK);
ppp = pch->ppp;
pch->ppp = NULL;
- write_unlock_bh(&pch->upl);
+ write_unlock_bh(&pch->upl, bh);
if (ppp) {
/* remove it from the ppp unit's list */
ppp_lock(ppp);
diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c
index 62dc564..68bf348 100644
--- a/drivers/net/ppp/pppoe.c
+++ b/drivers/net/ppp/pppoe.c
@@ -230,13 +230,14 @@ static void __delete_item(struct pppoe_net *pn, __be16 sid,
static inline struct pppox_sock *get_item(struct pppoe_net *pn, __be16 sid,
unsigned char *addr, int ifindex)
{
+ unsigned int bh;
struct pppox_sock *po;
- read_lock_bh(&pn->hash_lock);
+ bh = read_lock_bh(&pn->hash_lock, SOFTIRQ_ALL_MASK);
po = __get_item(pn, sid, addr, ifindex);
if (po)
sock_hold(sk_pppox(po));
- read_unlock_bh(&pn->hash_lock);
+ read_unlock_bh(&pn->hash_lock, bh);
return po;
}
@@ -265,9 +266,10 @@ static inline struct pppox_sock *get_item_by_addr(struct net *net,
static inline void delete_item(struct pppoe_net *pn, __be16 sid,
char *addr, int ifindex)
{
- write_lock_bh(&pn->hash_lock);
+ unsigned int bh;
+ bh = write_lock_bh(&pn->hash_lock, SOFTIRQ_ALL_MASK);
__delete_item(pn, sid, addr, ifindex);
- write_unlock_bh(&pn->hash_lock);
+ write_unlock_bh(&pn->hash_lock, bh);
}
/***************************************************************************
@@ -279,11 +281,12 @@ static inline void delete_item(struct pppoe_net *pn, __be16 sid,
static void pppoe_flush_dev(struct net_device *dev)
{
+ unsigned int bh;
struct pppoe_net *pn;
int i;
pn = pppoe_pernet(dev_net(dev));
- write_lock_bh(&pn->hash_lock);
+ bh = write_lock_bh(&pn->hash_lock, SOFTIRQ_ALL_MASK);
for (i = 0; i < PPPOE_HASH_SIZE; i++) {
struct pppox_sock *po = pn->hash_table[i];
struct sock *sk;
@@ -327,11 +330,11 @@ static void pppoe_flush_dev(struct net_device *dev)
*/
BUG_ON(pppoe_pernet(dev_net(dev)) == NULL);
- write_lock_bh(&pn->hash_lock);
+ write_lock_bh(&pn->hash_lock, SOFTIRQ_ALL_MASK);
po = pn->hash_table[i];
}
}
- write_unlock_bh(&pn->hash_lock);
+ write_unlock_bh(&pn->hash_lock, bh);
}
static int pppoe_device_event(struct notifier_block *this,
@@ -612,6 +615,7 @@ static int pppoe_release(struct socket *sock)
static int pppoe_connect(struct socket *sock, struct sockaddr *uservaddr,
int sockaddr_len, int flags)
{
+ unsigned int bh;
struct sock *sk = sock->sk;
struct sockaddr_pppox *sp = (struct sockaddr_pppox *)uservaddr;
struct pppox_sock *po = pppox_sk(sk);
@@ -684,9 +688,9 @@ static int pppoe_connect(struct socket *sock, struct sockaddr *uservaddr,
&sp->sa_addr.pppoe,
sizeof(struct pppoe_addr));
- write_lock_bh(&pn->hash_lock);
+ bh = write_lock_bh(&pn->hash_lock, SOFTIRQ_ALL_MASK);
error = __set_item(pn, po);
- write_unlock_bh(&pn->hash_lock);
+ write_unlock_bh(&pn->hash_lock, bh);
if (error < 0)
goto err_put;
@@ -1054,7 +1058,7 @@ static void *pppoe_seq_start(struct seq_file *seq, loff_t *pos)
struct pppoe_net *pn = pppoe_pernet(seq_file_net(seq));
loff_t l = *pos;
- read_lock_bh(&pn->hash_lock);
+ read_lock_bh(&pn->hash_lock, SOFTIRQ_ALL_MASK);
return l ? pppoe_get_idx(pn, --l) : SEQ_START_TOKEN;
}
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
index 79bdae9..ea8c31f 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
@@ -99,13 +99,14 @@ void iwl_mvm_ipv6_addr_change(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct inet6_dev *idev)
{
+ unsigned int bh;
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct inet6_ifaddr *ifa;
int idx = 0;
memset(mvmvif->tentative_addrs, 0, sizeof(mvmvif->tentative_addrs));
- read_lock_bh(&idev->lock);
+ bh = read_lock_bh(&idev->lock, SOFTIRQ_ALL_MASK);
list_for_each_entry(ifa, &idev->addr_list, if_list) {
mvmvif->target_ipv6_addrs[idx] = ifa->addr;
if (ifa->flags & IFA_F_TENTATIVE)
@@ -114,7 +115,7 @@ void iwl_mvm_ipv6_addr_change(struct ieee80211_hw *hw,
if (idx >= IWL_PROTO_OFFLOAD_NUM_IPV6_ADDRS_MAX)
break;
}
- read_unlock_bh(&idev->lock);
+ read_unlock_bh(&idev->lock, bh);
mvmvif->num_target_ipv6_addrs = idx;
}
diff --git a/drivers/net/wireless/intersil/hostap/hostap_80211_rx.c b/drivers/net/wireless/intersil/hostap/hostap_80211_rx.c
index 61be822..d6cf897 100644
--- a/drivers/net/wireless/intersil/hostap/hostap_80211_rx.c
+++ b/drivers/net/wireless/intersil/hostap/hostap_80211_rx.c
@@ -532,10 +532,11 @@ hostap_rx_frame_mgmt(local_info_t *local, struct sk_buff *skb,
static struct net_device *prism2_rx_get_wds(local_info_t *local,
u8 *addr)
{
+ unsigned int bh;
struct hostap_interface *iface = NULL;
struct list_head *ptr;
- read_lock_bh(&local->iface_lock);
+ bh = read_lock_bh(&local->iface_lock, SOFTIRQ_ALL_MASK);
list_for_each(ptr, &local->hostap_interfaces) {
iface = list_entry(ptr, struct hostap_interface, list);
if (iface->type == HOSTAP_INTERFACE_WDS &&
@@ -543,7 +544,7 @@ static struct net_device *prism2_rx_get_wds(local_info_t *local,
break;
iface = NULL;
}
- read_unlock_bh(&local->iface_lock);
+ read_unlock_bh(&local->iface_lock, bh);
return iface ? iface->dev : NULL;
}
diff --git a/drivers/net/wireless/intersil/hostap/hostap_main.c b/drivers/net/wireless/intersil/hostap/hostap_main.c
index 012930d..004318f 100644
--- a/drivers/net/wireless/intersil/hostap/hostap_main.c
+++ b/drivers/net/wireless/intersil/hostap/hostap_main.c
@@ -142,12 +142,13 @@ static inline int prism2_wds_special_addr(u8 *addr)
int prism2_wds_add(local_info_t *local, u8 *remote_addr,
int rtnl_locked)
{
+ unsigned int bh;
struct net_device *dev;
struct list_head *ptr;
struct hostap_interface *iface, *empty, *match;
empty = match = NULL;
- read_lock_bh(&local->iface_lock);
+ bh = read_lock_bh(&local->iface_lock, SOFTIRQ_ALL_MASK);
list_for_each(ptr, &local->hostap_interfaces) {
iface = list_entry(ptr, struct hostap_interface, list);
if (iface->type != HOSTAP_INTERFACE_WDS)
@@ -163,12 +164,12 @@ int prism2_wds_add(local_info_t *local, u8 *remote_addr,
if (!match && empty && !prism2_wds_special_addr(remote_addr)) {
/* take pre-allocated entry into use */
memcpy(empty->u.wds.remote_addr, remote_addr, ETH_ALEN);
- read_unlock_bh(&local->iface_lock);
+ read_unlock_bh(&local->iface_lock, bh);
printk(KERN_DEBUG "%s: using pre-allocated WDS netdevice %s\n",
local->dev->name, empty->dev->name);
return 0;
}
- read_unlock_bh(&local->iface_lock);
+ read_unlock_bh(&local->iface_lock, bh);
if (!prism2_wds_special_addr(remote_addr)) {
if (match)
@@ -702,6 +703,7 @@ static int prism2_open(struct net_device *dev)
static int prism2_set_mac_address(struct net_device *dev, void *p)
{
+ unsigned int bh;
struct hostap_interface *iface;
local_info_t *local;
struct list_head *ptr;
@@ -714,13 +716,13 @@ static int prism2_set_mac_address(struct net_device *dev, void *p)
ETH_ALEN) < 0 || local->func->reset_port(dev))
return -EINVAL;
- read_lock_bh(&local->iface_lock);
+ bh = read_lock_bh(&local->iface_lock, SOFTIRQ_ALL_MASK);
list_for_each(ptr, &local->hostap_interfaces) {
iface = list_entry(ptr, struct hostap_interface, list);
memcpy(iface->dev->dev_addr, addr->sa_data, ETH_ALEN);
}
memcpy(local->dev->dev_addr, addr->sa_data, ETH_ALEN);
- read_unlock_bh(&local->iface_lock);
+ read_unlock_bh(&local->iface_lock, bh);
return 0;
}
diff --git a/drivers/net/wireless/intersil/hostap/hostap_proc.c b/drivers/net/wireless/intersil/hostap/hostap_proc.c
index 56ae726..f3042bb 100644
--- a/drivers/net/wireless/intersil/hostap/hostap_proc.c
+++ b/drivers/net/wireless/intersil/hostap/hostap_proc.c
@@ -98,7 +98,7 @@ static int prism2_wds_proc_show(struct seq_file *m, void *v)
static void *prism2_wds_proc_start(struct seq_file *m, loff_t *_pos)
{
local_info_t *local = PDE_DATA(file_inode(m->file));
- read_lock_bh(&local->iface_lock);
+ read_lock_bh(&local->iface_lock, SOFTIRQ_ALL_MASK);
return seq_list_start(&local->hostap_interfaces, *_pos);
}
diff --git a/drivers/s390/net/netiucv.c b/drivers/s390/net/netiucv.c
index 5ce2424..1a48261 100644
--- a/drivers/s390/net/netiucv.c
+++ b/drivers/s390/net/netiucv.c
@@ -543,6 +543,7 @@ static void netiucv_callback_connack(struct iucv_path *path, u8 ipuser[16])
static int netiucv_callback_connreq(struct iucv_path *path, u8 *ipvmid,
u8 *ipuser)
{
+ unsigned int bh;
struct iucv_connection *conn = path->private;
struct iucv_event ev;
static char tmp_user[9];
@@ -553,7 +554,7 @@ static int netiucv_callback_connreq(struct iucv_path *path, u8 *ipvmid,
memcpy(tmp_user, netiucv_printname(ipvmid, 8), 8);
memcpy(tmp_udat, ipuser, 16);
EBCASC(tmp_udat, 16);
- read_lock_bh(&iucv_connection_rwlock);
+ bh = read_lock_bh(&iucv_connection_rwlock, SOFTIRQ_ALL_MASK);
list_for_each_entry(conn, &iucv_connection_list, list) {
if (strncmp(ipvmid, conn->userid, 8) ||
strncmp(ipuser, conn->userdata, 16))
@@ -567,7 +568,7 @@ static int netiucv_callback_connreq(struct iucv_path *path, u8 *ipvmid,
}
IUCV_DBF_TEXT_(setup, 2, "Connection requested for %s.%s\n",
tmp_user, netiucv_printname(tmp_udat, 16));
- read_unlock_bh(&iucv_connection_rwlock);
+ read_unlock_bh(&iucv_connection_rwlock, bh);
return rc;
}
@@ -1476,6 +1477,7 @@ static int netiucv_check_user(const char *buf, size_t count, char *username,
static ssize_t user_write(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
+ unsigned int bh;
struct netiucv_priv *priv = dev_get_drvdata(dev);
struct net_device *ndev = priv->conn->netdev;
char username[9];
@@ -1494,17 +1496,17 @@ static ssize_t user_write(struct device *dev, struct device_attribute *attr,
IUCV_DBF_TEXT(setup, 2, "user_write: device active\n");
return -EPERM;
}
- read_lock_bh(&iucv_connection_rwlock);
+ bh = read_lock_bh(&iucv_connection_rwlock, SOFTIRQ_ALL_MASK);
list_for_each_entry(cp, &iucv_connection_list, list) {
if (!strncmp(username, cp->userid, 9) &&
!strncmp(userdata, cp->userdata, 17) && cp->netdev != ndev) {
- read_unlock_bh(&iucv_connection_rwlock);
+ read_unlock_bh(&iucv_connection_rwlock, bh);
IUCV_DBF_TEXT_(setup, 2, "user_write: Connection to %s "
"already exists\n", netiucv_printuser(cp));
return -EEXIST;
}
}
- read_unlock_bh(&iucv_connection_rwlock);
+ read_unlock_bh(&iucv_connection_rwlock, bh);
memcpy(priv->conn->userid, username, 9);
memcpy(priv->conn->userdata, userdata, 17);
return count;
@@ -1845,6 +1847,7 @@ static struct iucv_connection *netiucv_new_connection(struct net_device *dev,
char *username,
char *userdata)
{
+ unsigned int bh;
struct iucv_connection *conn;
conn = kzalloc(sizeof(*conn), GFP_KERNEL);
@@ -1879,9 +1882,9 @@ static struct iucv_connection *netiucv_new_connection(struct net_device *dev,
fsm_newstate(conn->fsm, CONN_STATE_STOPPED);
}
- write_lock_bh(&iucv_connection_rwlock);
+ bh = write_lock_bh(&iucv_connection_rwlock, SOFTIRQ_ALL_MASK);
list_add_tail(&conn->list, &iucv_connection_list);
- write_unlock_bh(&iucv_connection_rwlock);
+ write_unlock_bh(&iucv_connection_rwlock, bh);
return conn;
out_tx:
@@ -1900,11 +1903,12 @@ static struct iucv_connection *netiucv_new_connection(struct net_device *dev,
*/
static void netiucv_remove_connection(struct iucv_connection *conn)
{
+ unsigned int bh;
IUCV_DBF_TEXT(trace, 3, __func__);
- write_lock_bh(&iucv_connection_rwlock);
+ bh = write_lock_bh(&iucv_connection_rwlock, SOFTIRQ_ALL_MASK);
list_del_init(&conn->list);
- write_unlock_bh(&iucv_connection_rwlock);
+ write_unlock_bh(&iucv_connection_rwlock, bh);
fsm_deltimer(&conn->timer);
netiucv_purge_skb_queue(&conn->collect_queue);
if (conn->path) {
@@ -2007,6 +2011,7 @@ static struct net_device *netiucv_init_netdevice(char *username, char *userdata)
static ssize_t connection_store(struct device_driver *drv, const char *buf,
size_t count)
{
+ unsigned int bh;
char username[9];
char userdata[17];
int rc;
@@ -2019,17 +2024,17 @@ static ssize_t connection_store(struct device_driver *drv, const char *buf,
if (rc)
return rc;
- read_lock_bh(&iucv_connection_rwlock);
+ bh = read_lock_bh(&iucv_connection_rwlock, SOFTIRQ_ALL_MASK);
list_for_each_entry(cp, &iucv_connection_list, list) {
if (!strncmp(username, cp->userid, 9) &&
!strncmp(userdata, cp->userdata, 17)) {
- read_unlock_bh(&iucv_connection_rwlock);
+ read_unlock_bh(&iucv_connection_rwlock, bh);
IUCV_DBF_TEXT_(setup, 2, "conn_write: Connection to %s "
"already exists\n", netiucv_printuser(cp));
return -EEXIST;
}
}
- read_unlock_bh(&iucv_connection_rwlock);
+ read_unlock_bh(&iucv_connection_rwlock, bh);
dev = netiucv_init_netdevice(username, userdata);
if (!dev) {
@@ -2071,6 +2076,7 @@ static DRIVER_ATTR_WO(connection);
static ssize_t remove_store(struct device_driver *drv, const char *buf,
size_t count)
{
+ unsigned int bh;
struct iucv_connection *cp;
struct net_device *ndev;
struct netiucv_priv *priv;
@@ -2092,14 +2098,14 @@ static ssize_t remove_store(struct device_driver *drv, const char *buf,
}
name[i] = '\0';
- read_lock_bh(&iucv_connection_rwlock);
+ bh = read_lock_bh(&iucv_connection_rwlock, SOFTIRQ_ALL_MASK);
list_for_each_entry(cp, &iucv_connection_list, list) {
ndev = cp->netdev;
priv = netdev_priv(ndev);
dev = priv->dev;
if (strncmp(name, ndev->name, count))
continue;
- read_unlock_bh(&iucv_connection_rwlock);
+ read_unlock_bh(&iucv_connection_rwlock, bh);
if (ndev->flags & (IFF_UP | IFF_RUNNING)) {
dev_warn(dev, "The IUCV device is connected"
" to %s and cannot be removed\n",
@@ -2111,7 +2117,7 @@ static ssize_t remove_store(struct device_driver *drv, const char *buf,
netiucv_unregister_device(dev);
return count;
}
- read_unlock_bh(&iucv_connection_rwlock);
+ read_unlock_bh(&iucv_connection_rwlock, bh);
IUCV_DBF_TEXT(data, 2, "remove_write: unknown device\n");
return -EINVAL;
}
diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
index 16df663..085cbd9 100644
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -1229,6 +1229,7 @@ static void qeth_l3_add_mc6_to_hash(struct qeth_card *card,
/* called with rcu_read_lock */
static void qeth_l3_add_vlan_mc6(struct qeth_card *card)
{
+ unsigned int bh;
struct inet6_dev *in_dev;
u16 vid;
@@ -1248,15 +1249,16 @@ static void qeth_l3_add_vlan_mc6(struct qeth_card *card)
in_dev = in6_dev_get(netdev);
if (!in_dev)
continue;
- read_lock_bh(&in_dev->lock);
+ bh = read_lock_bh(&in_dev->lock, SOFTIRQ_ALL_MASK);
qeth_l3_add_mc6_to_hash(card, in_dev);
- read_unlock_bh(&in_dev->lock);
+ read_unlock_bh(&in_dev->lock, bh);
in6_dev_put(in_dev);
}
}
static void qeth_l3_add_multicast_ipv6(struct qeth_card *card)
{
+ unsigned int bh;
struct inet6_dev *in6_dev;
QETH_CARD_TEXT(card, 4, "chkmcv6");
@@ -1268,10 +1270,10 @@ static void qeth_l3_add_multicast_ipv6(struct qeth_card *card)
return;
rcu_read_lock();
- read_lock_bh(&in6_dev->lock);
+ bh = read_lock_bh(&in6_dev->lock, SOFTIRQ_ALL_MASK);
qeth_l3_add_mc6_to_hash(card, in6_dev);
qeth_l3_add_vlan_mc6(card);
- read_unlock_bh(&in6_dev->lock);
+ read_unlock_bh(&in6_dev->lock, bh);
rcu_read_unlock();
in6_dev_put(in6_dev);
}
diff --git a/drivers/scsi/bnx2i/bnx2i_iscsi.c b/drivers/scsi/bnx2i/bnx2i_iscsi.c
index cda021f..12817ce 100644
--- a/drivers/scsi/bnx2i/bnx2i_iscsi.c
+++ b/drivers/scsi/bnx2i/bnx2i_iscsi.c
@@ -606,9 +606,10 @@ void bnx2i_drop_session(struct iscsi_cls_session *cls_session)
static int bnx2i_ep_destroy_list_add(struct bnx2i_hba *hba,
struct bnx2i_endpoint *ep)
{
- write_lock_bh(&hba->ep_rdwr_lock);
+ unsigned int bh;
+ bh = write_lock_bh(&hba->ep_rdwr_lock, SOFTIRQ_ALL_MASK);
list_add_tail(&ep->link, &hba->ep_destroy_list);
- write_unlock_bh(&hba->ep_rdwr_lock);
+ write_unlock_bh(&hba->ep_rdwr_lock, bh);
return 0;
}
@@ -623,9 +624,10 @@ static int bnx2i_ep_destroy_list_add(struct bnx2i_hba *hba,
static int bnx2i_ep_destroy_list_del(struct bnx2i_hba *hba,
struct bnx2i_endpoint *ep)
{
- write_lock_bh(&hba->ep_rdwr_lock);
+ unsigned int bh;
+ bh = write_lock_bh(&hba->ep_rdwr_lock, SOFTIRQ_ALL_MASK);
list_del_init(&ep->link);
- write_unlock_bh(&hba->ep_rdwr_lock);
+ write_unlock_bh(&hba->ep_rdwr_lock, bh);
return 0;
}
@@ -640,9 +642,10 @@ static int bnx2i_ep_destroy_list_del(struct bnx2i_hba *hba,
static int bnx2i_ep_ofld_list_add(struct bnx2i_hba *hba,
struct bnx2i_endpoint *ep)
{
- write_lock_bh(&hba->ep_rdwr_lock);
+ unsigned int bh;
+ bh = write_lock_bh(&hba->ep_rdwr_lock, SOFTIRQ_ALL_MASK);
list_add_tail(&ep->link, &hba->ep_ofld_list);
- write_unlock_bh(&hba->ep_rdwr_lock);
+ write_unlock_bh(&hba->ep_rdwr_lock, bh);
return 0;
}
@@ -656,9 +659,10 @@ static int bnx2i_ep_ofld_list_add(struct bnx2i_hba *hba,
static int bnx2i_ep_ofld_list_del(struct bnx2i_hba *hba,
struct bnx2i_endpoint *ep)
{
- write_lock_bh(&hba->ep_rdwr_lock);
+ unsigned int bh;
+ bh = write_lock_bh(&hba->ep_rdwr_lock, SOFTIRQ_ALL_MASK);
list_del_init(&ep->link);
- write_unlock_bh(&hba->ep_rdwr_lock);
+ write_unlock_bh(&hba->ep_rdwr_lock, bh);
return 0;
}
@@ -673,11 +677,12 @@ static int bnx2i_ep_ofld_list_del(struct bnx2i_hba *hba,
struct bnx2i_endpoint *
bnx2i_find_ep_in_ofld_list(struct bnx2i_hba *hba, u32 iscsi_cid)
{
+ unsigned int bh;
struct list_head *list;
struct list_head *tmp;
struct bnx2i_endpoint *ep = NULL;
- read_lock_bh(&hba->ep_rdwr_lock);
+ bh = read_lock_bh(&hba->ep_rdwr_lock, SOFTIRQ_ALL_MASK);
list_for_each_safe(list, tmp, &hba->ep_ofld_list) {
ep = (struct bnx2i_endpoint *)list;
@@ -685,7 +690,7 @@ bnx2i_find_ep_in_ofld_list(struct bnx2i_hba *hba, u32 iscsi_cid)
break;
ep = NULL;
}
- read_unlock_bh(&hba->ep_rdwr_lock);
+ read_unlock_bh(&hba->ep_rdwr_lock, bh);
if (!ep)
printk(KERN_ERR "l5 cid %d not found\n", iscsi_cid);
@@ -701,11 +706,12 @@ bnx2i_find_ep_in_ofld_list(struct bnx2i_hba *hba, u32 iscsi_cid)
struct bnx2i_endpoint *
bnx2i_find_ep_in_destroy_list(struct bnx2i_hba *hba, u32 iscsi_cid)
{
+ unsigned int bh;
struct list_head *list;
struct list_head *tmp;
struct bnx2i_endpoint *ep = NULL;
- read_lock_bh(&hba->ep_rdwr_lock);
+ bh = read_lock_bh(&hba->ep_rdwr_lock, SOFTIRQ_ALL_MASK);
list_for_each_safe(list, tmp, &hba->ep_destroy_list) {
ep = (struct bnx2i_endpoint *)list;
@@ -713,7 +719,7 @@ bnx2i_find_ep_in_destroy_list(struct bnx2i_hba *hba, u32 iscsi_cid)
break;
ep = NULL;
}
- read_unlock_bh(&hba->ep_rdwr_lock);
+ read_unlock_bh(&hba->ep_rdwr_lock, bh);
if (!ep)
printk(KERN_ERR "l5 cid %d not found\n", iscsi_cid);
@@ -731,9 +737,10 @@ bnx2i_find_ep_in_destroy_list(struct bnx2i_hba *hba, u32 iscsi_cid)
static void bnx2i_ep_active_list_add(struct bnx2i_hba *hba,
struct bnx2i_endpoint *ep)
{
- write_lock_bh(&hba->ep_rdwr_lock);
+ unsigned int bh;
+ bh = write_lock_bh(&hba->ep_rdwr_lock, SOFTIRQ_ALL_MASK);
list_add_tail(&ep->link, &hba->ep_active_list);
- write_unlock_bh(&hba->ep_rdwr_lock);
+ write_unlock_bh(&hba->ep_rdwr_lock, bh);
}
@@ -747,9 +754,10 @@ static void bnx2i_ep_active_list_add(struct bnx2i_hba *hba,
static void bnx2i_ep_active_list_del(struct bnx2i_hba *hba,
struct bnx2i_endpoint *ep)
{
- write_lock_bh(&hba->ep_rdwr_lock);
+ unsigned int bh;
+ bh = write_lock_bh(&hba->ep_rdwr_lock, SOFTIRQ_ALL_MASK);
list_del_init(&ep->link);
- write_unlock_bh(&hba->ep_rdwr_lock);
+ write_unlock_bh(&hba->ep_rdwr_lock, bh);
}
@@ -1558,6 +1566,7 @@ static int bnx2i_ep_get_param(struct iscsi_endpoint *ep,
static int bnx2i_host_get_param(struct Scsi_Host *shost,
enum iscsi_host_param param, char *buf)
{
+ unsigned int bh;
struct bnx2i_hba *hba = iscsi_host_priv(shost);
int len = 0;
@@ -1571,7 +1580,7 @@ static int bnx2i_host_get_param(struct Scsi_Host *shost,
case ISCSI_HOST_PARAM_IPADDRESS: {
struct list_head *active_list = &hba->ep_active_list;
- read_lock_bh(&hba->ep_rdwr_lock);
+ bh = read_lock_bh(&hba->ep_rdwr_lock, SOFTIRQ_ALL_MASK);
if (!list_empty(&hba->ep_active_list)) {
struct bnx2i_endpoint *bnx2i_ep;
struct cnic_sock *csk;
@@ -1585,7 +1594,7 @@ static int bnx2i_host_get_param(struct Scsi_Host *shost,
else
len = sprintf(buf, "%pI4\n", csk->src_ip);
}
- read_unlock_bh(&hba->ep_rdwr_lock);
+ read_unlock_bh(&hba->ep_rdwr_lock, bh);
break;
}
default:
diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c
index 6f59276..2885e60 100644
--- a/drivers/scsi/cxgbi/libcxgbi.c
+++ b/drivers/scsi/cxgbi/libcxgbi.c
@@ -836,16 +836,17 @@ EXPORT_SYMBOL_GPL(cxgbi_sock_established);
static void cxgbi_inform_iscsi_conn_closing(struct cxgbi_sock *csk)
{
+ unsigned int bh;
log_debug(1 << CXGBI_DBG_SOCK,
"csk 0x%p, state %u, flags 0x%lx, conn 0x%p.\n",
csk, csk->state, csk->flags, csk->user_data);
if (csk->state != CTP_ESTABLISHED) {
- read_lock_bh(&csk->callback_lock);
+ bh = read_lock_bh(&csk->callback_lock, SOFTIRQ_ALL_MASK);
if (csk->user_data)
iscsi_conn_failure(csk->user_data,
ISCSI_ERR_TCP_CONN_CLOSE);
- read_unlock_bh(&csk->callback_lock);
+ read_unlock_bh(&csk->callback_lock, bh);
}
}
@@ -2377,6 +2378,7 @@ int cxgbi_bind_conn(struct iscsi_cls_session *cls_session,
struct iscsi_cls_conn *cls_conn,
u64 transport_eph, int is_leading)
{
+ unsigned int bh;
struct iscsi_conn *conn = cls_conn->dd_data;
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
struct cxgbi_conn *cconn = tcp_conn->dd_data;
@@ -2407,12 +2409,12 @@ int cxgbi_bind_conn(struct iscsi_cls_session *cls_session,
/* calculate the tag idx bits needed for this conn based on cmds_max */
cconn->task_idx_bits = (__ilog2_u32(conn->session->cmds_max - 1)) + 1;
- write_lock_bh(&csk->callback_lock);
+ bh = write_lock_bh(&csk->callback_lock, SOFTIRQ_ALL_MASK);
csk->user_data = conn;
cconn->chba = cep->chba;
cconn->cep = cep;
cep->cconn = cconn;
- write_unlock_bh(&csk->callback_lock);
+ write_unlock_bh(&csk->callback_lock, bh);
cxgbi_conn_max_xmit_dlength(conn);
cxgbi_conn_max_recv_dlength(conn);
@@ -2664,6 +2666,7 @@ EXPORT_SYMBOL_GPL(cxgbi_ep_poll);
void cxgbi_ep_disconnect(struct iscsi_endpoint *ep)
{
+ unsigned int bh;
struct cxgbi_endpoint *cep = ep->dd_data;
struct cxgbi_conn *cconn = cep->cconn;
struct cxgbi_sock *csk = cep->csk;
@@ -2674,10 +2677,10 @@ void cxgbi_ep_disconnect(struct iscsi_endpoint *ep)
if (cconn && cconn->iconn) {
iscsi_suspend_tx(cconn->iconn);
- write_lock_bh(&csk->callback_lock);
+ bh = write_lock_bh(&csk->callback_lock, SOFTIRQ_ALL_MASK);
cep->csk->user_data = NULL;
cconn->cep = NULL;
- write_unlock_bh(&csk->callback_lock);
+ write_unlock_bh(&csk->callback_lock, bh);
}
iscsi_destroy_endpoint(ep);
diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
index 44a6a66..5d26296 100644
--- a/drivers/scsi/iscsi_tcp.c
+++ b/drivers/scsi/iscsi_tcp.c
@@ -129,14 +129,15 @@ static inline int iscsi_sw_sk_state_check(struct sock *sk)
static void iscsi_sw_tcp_data_ready(struct sock *sk)
{
+ unsigned int bh;
struct iscsi_conn *conn;
struct iscsi_tcp_conn *tcp_conn;
read_descriptor_t rd_desc;
- read_lock_bh(&sk->sk_callback_lock);
+ bh = read_lock_bh(&sk->sk_callback_lock, SOFTIRQ_ALL_MASK);
conn = sk->sk_user_data;
if (!conn) {
- read_unlock_bh(&sk->sk_callback_lock);
+ read_unlock_bh(&sk->sk_callback_lock, bh);
return;
}
tcp_conn = conn->dd_data;
@@ -156,20 +157,21 @@ static void iscsi_sw_tcp_data_ready(struct sock *sk)
/* If we had to (atomically) map a highmem page,
* unmap it now. */
iscsi_tcp_segment_unmap(&tcp_conn->in.segment);
- read_unlock_bh(&sk->sk_callback_lock);
+ read_unlock_bh(&sk->sk_callback_lock, bh);
}
static void iscsi_sw_tcp_state_change(struct sock *sk)
{
+ unsigned int bh;
struct iscsi_tcp_conn *tcp_conn;
struct iscsi_sw_tcp_conn *tcp_sw_conn;
struct iscsi_conn *conn;
void (*old_state_change)(struct sock *);
- read_lock_bh(&sk->sk_callback_lock);
+ bh = read_lock_bh(&sk->sk_callback_lock, SOFTIRQ_ALL_MASK);
conn = sk->sk_user_data;
if (!conn) {
- read_unlock_bh(&sk->sk_callback_lock);
+ read_unlock_bh(&sk->sk_callback_lock, bh);
return;
}
@@ -179,7 +181,7 @@ static void iscsi_sw_tcp_state_change(struct sock *sk)
tcp_sw_conn = tcp_conn->dd_data;
old_state_change = tcp_sw_conn->old_state_change;
- read_unlock_bh(&sk->sk_callback_lock);
+ read_unlock_bh(&sk->sk_callback_lock, bh);
old_state_change(sk);
}
@@ -190,22 +192,23 @@ static void iscsi_sw_tcp_state_change(struct sock *sk)
**/
static void iscsi_sw_tcp_write_space(struct sock *sk)
{
+ unsigned int bh;
struct iscsi_conn *conn;
struct iscsi_tcp_conn *tcp_conn;
struct iscsi_sw_tcp_conn *tcp_sw_conn;
void (*old_write_space)(struct sock *);
- read_lock_bh(&sk->sk_callback_lock);
+ bh = read_lock_bh(&sk->sk_callback_lock, SOFTIRQ_ALL_MASK);
conn = sk->sk_user_data;
if (!conn) {
- read_unlock_bh(&sk->sk_callback_lock);
+ read_unlock_bh(&sk->sk_callback_lock, bh);
return;
}
tcp_conn = conn->dd_data;
tcp_sw_conn = tcp_conn->dd_data;
old_write_space = tcp_sw_conn->old_write_space;
- read_unlock_bh(&sk->sk_callback_lock);
+ read_unlock_bh(&sk->sk_callback_lock, bh);
old_write_space(sk);
@@ -215,12 +218,13 @@ static void iscsi_sw_tcp_write_space(struct sock *sk)
static void iscsi_sw_tcp_conn_set_callbacks(struct iscsi_conn *conn)
{
+ unsigned int bh;
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
struct sock *sk = tcp_sw_conn->sock->sk;
/* assign new callbacks */
- write_lock_bh(&sk->sk_callback_lock);
+ bh = write_lock_bh(&sk->sk_callback_lock, SOFTIRQ_ALL_MASK);
sk->sk_user_data = conn;
tcp_sw_conn->old_data_ready = sk->sk_data_ready;
tcp_sw_conn->old_state_change = sk->sk_state_change;
@@ -228,24 +232,25 @@ static void iscsi_sw_tcp_conn_set_callbacks(struct iscsi_conn *conn)
sk->sk_data_ready = iscsi_sw_tcp_data_ready;
sk->sk_state_change = iscsi_sw_tcp_state_change;
sk->sk_write_space = iscsi_sw_tcp_write_space;
- write_unlock_bh(&sk->sk_callback_lock);
+ write_unlock_bh(&sk->sk_callback_lock, bh);
}
static void
iscsi_sw_tcp_conn_restore_callbacks(struct iscsi_conn *conn)
{
+ unsigned int bh;
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
struct sock *sk = tcp_sw_conn->sock->sk;
/* restore socket callbacks, see also: iscsi_conn_set_callbacks() */
- write_lock_bh(&sk->sk_callback_lock);
+ bh = write_lock_bh(&sk->sk_callback_lock, SOFTIRQ_ALL_MASK);
sk->sk_user_data = NULL;
sk->sk_data_ready = tcp_sw_conn->old_data_ready;
sk->sk_state_change = tcp_sw_conn->old_state_change;
sk->sk_write_space = tcp_sw_conn->old_write_space;
sk->sk_no_check_tx = 0;
- write_unlock_bh(&sk->sk_callback_lock);
+ write_unlock_bh(&sk->sk_callback_lock, bh);
}
/**
diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
index bc05c69..a812d61 100644
--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
@@ -2692,6 +2692,7 @@ need_resume(VCHIQ_STATE_T *state)
static int
block_resume(VCHIQ_ARM_STATE_T *arm_state)
{
+ unsigned int bh;
int status = VCHIQ_SUCCESS;
const unsigned long timeout_val =
msecs_to_jiffies(FORCE_SUSPEND_TIMEOUT_MS);
@@ -2713,12 +2714,12 @@ block_resume(VCHIQ_ARM_STATE_T *arm_state)
vchiq_log_error(vchiq_susp_log_level, "%s wait for "
"previously blocked clients failed", __func__);
status = VCHIQ_ERROR;
- write_lock_bh(&arm_state->susp_res_lock);
+ write_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
goto out;
}
vchiq_log_info(vchiq_susp_log_level, "%s previously blocked "
"clients resumed", __func__);
- write_lock_bh(&arm_state->susp_res_lock);
+ bh = write_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
}
/* We need to wait for resume to complete if it's in process */
@@ -2730,7 +2731,7 @@ block_resume(VCHIQ_ARM_STATE_T *arm_state)
"many times for resume", __func__);
goto out;
}
- write_unlock_bh(&arm_state->susp_res_lock);
+ write_unlock_bh(&arm_state->susp_res_lock, bh);
vchiq_log_info(vchiq_susp_log_level, "%s wait for resume",
__func__);
if (wait_for_completion_interruptible_timeout(
@@ -2741,11 +2742,11 @@ block_resume(VCHIQ_ARM_STATE_T *arm_state)
resume_state_names[arm_state->vc_resume_state +
VC_RESUME_NUM_OFFSET]);
status = VCHIQ_ERROR;
- write_lock_bh(&arm_state->susp_res_lock);
+ write_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
goto out;
}
vchiq_log_info(vchiq_susp_log_level, "%s resumed", __func__);
- write_lock_bh(&arm_state->susp_res_lock);
+ bh = write_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
resume_count++;
}
reinit_completion(&arm_state->resume_blocker);
@@ -2816,6 +2817,7 @@ vchiq_arm_vcsuspend(VCHIQ_STATE_T *state)
void
vchiq_platform_check_suspend(VCHIQ_STATE_T *state)
{
+ unsigned int bh;
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
int susp = 0;
@@ -2824,13 +2826,13 @@ vchiq_platform_check_suspend(VCHIQ_STATE_T *state)
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
- write_lock_bh(&arm_state->susp_res_lock);
+ bh = write_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
if (arm_state->vc_suspend_state == VC_SUSPEND_REQUESTED &&
arm_state->vc_resume_state == VC_RESUME_RESUMED) {
set_suspend_state(arm_state, VC_SUSPEND_IN_PROGRESS);
susp = 1;
}
- write_unlock_bh(&arm_state->susp_res_lock);
+ write_unlock_bh(&arm_state->susp_res_lock, bh);
if (susp)
vchiq_platform_suspend(state);
@@ -2888,6 +2890,7 @@ output_timeout_error(VCHIQ_STATE_T *state)
VCHIQ_STATUS_T
vchiq_arm_force_suspend(VCHIQ_STATE_T *state)
{
+ unsigned int bh;
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
VCHIQ_STATUS_T status = VCHIQ_ERROR;
long rc = 0;
@@ -2898,7 +2901,7 @@ vchiq_arm_force_suspend(VCHIQ_STATE_T *state)
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
- write_lock_bh(&arm_state->susp_res_lock);
+ bh = write_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
status = block_resume(arm_state);
if (status != VCHIQ_SUCCESS)
@@ -2944,7 +2947,7 @@ vchiq_arm_force_suspend(VCHIQ_STATE_T *state)
&arm_state->vc_suspend_complete,
msecs_to_jiffies(FORCE_SUSPEND_TIMEOUT_MS));
- write_lock_bh(&arm_state->susp_res_lock);
+ write_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
if (rc < 0) {
vchiq_log_warning(vchiq_susp_log_level, "%s "
"interrupted waiting for suspend", __func__);
@@ -2989,7 +2992,7 @@ vchiq_arm_force_suspend(VCHIQ_STATE_T *state)
unblock_resume(arm_state);
unlock:
- write_unlock_bh(&arm_state->susp_res_lock);
+ write_unlock_bh(&arm_state->susp_res_lock, bh);
out:
vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, status);
@@ -2999,6 +3002,7 @@ vchiq_arm_force_suspend(VCHIQ_STATE_T *state)
void
vchiq_check_suspend(VCHIQ_STATE_T *state)
{
+ unsigned int bh;
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
if (!arm_state)
@@ -3006,13 +3010,13 @@ vchiq_check_suspend(VCHIQ_STATE_T *state)
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
- write_lock_bh(&arm_state->susp_res_lock);
+ bh = write_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
if (arm_state->vc_suspend_state != VC_SUSPEND_SUSPENDED &&
arm_state->first_connect &&
!vchiq_videocore_wanted(state)) {
vchiq_arm_vcsuspend(state);
}
- write_unlock_bh(&arm_state->susp_res_lock);
+ write_unlock_bh(&arm_state->susp_res_lock, bh);
out:
vchiq_log_trace(vchiq_susp_log_level, "%s exit", __func__);
@@ -3021,6 +3025,8 @@ vchiq_check_suspend(VCHIQ_STATE_T *state)
int
vchiq_arm_allow_resume(VCHIQ_STATE_T *state)
{
+ unsigned int bh;
+ unsigned int bh;
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
int resume = 0;
int ret = -1;
@@ -3030,10 +3036,10 @@ vchiq_arm_allow_resume(VCHIQ_STATE_T *state)
vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
- write_lock_bh(&arm_state->susp_res_lock);
+ bh = write_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
unblock_resume(arm_state);
resume = vchiq_check_resume(state);
- write_unlock_bh(&arm_state->susp_res_lock);
+ write_unlock_bh(&arm_state->susp_res_lock, bh);
if (resume) {
if (wait_for_completion_interruptible(
@@ -3046,7 +3052,7 @@ vchiq_arm_allow_resume(VCHIQ_STATE_T *state)
}
}
- read_lock_bh(&arm_state->susp_res_lock);
+ bh = read_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
if (arm_state->vc_suspend_state == VC_SUSPEND_SUSPENDED) {
vchiq_log_info(vchiq_susp_log_level,
"%s: Videocore remains suspended", __func__);
@@ -3055,7 +3061,7 @@ vchiq_arm_allow_resume(VCHIQ_STATE_T *state)
"%s: Videocore resumed", __func__);
ret = 0;
}
- read_unlock_bh(&arm_state->susp_res_lock);
+ read_unlock_bh(&arm_state->susp_res_lock, bh);
out:
vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, ret);
return ret;
@@ -3088,6 +3094,7 @@ VCHIQ_STATUS_T
vchiq_use_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
enum USE_TYPE_E use_type)
{
+ unsigned int bh;
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
VCHIQ_STATUS_T ret = VCHIQ_SUCCESS;
char entity[16];
@@ -3114,7 +3121,7 @@ vchiq_use_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
goto out;
}
- write_lock_bh(&arm_state->susp_res_lock);
+ bh = write_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
while (arm_state->resume_blocked) {
/* If we call 'use' while force suspend is waiting for suspend,
* then we're about to block the thread which the force is
@@ -3143,14 +3150,14 @@ vchiq_use_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
"wait for resume blocker interrupted",
__func__, entity);
ret = VCHIQ_ERROR;
- write_lock_bh(&arm_state->susp_res_lock);
+ write_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
arm_state->blocked_count--;
- write_unlock_bh(&arm_state->susp_res_lock);
+ write_unlock_bh(&arm_state->susp_res_lock, bh);
goto out;
}
vchiq_log_info(vchiq_susp_log_level, "%s %s resume "
"unblocked", __func__, entity);
- write_lock_bh(&arm_state->susp_res_lock);
+ write_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
if (--arm_state->blocked_count == 0)
complete_all(&arm_state->blocked_blocker);
}
@@ -3179,7 +3186,7 @@ vchiq_use_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
"%s %s count %d, state count %d",
__func__, entity, *entity_uc, local_uc);
- write_unlock_bh(&arm_state->susp_res_lock);
+ write_unlock_bh(&arm_state->susp_res_lock, bh);
/* Completion is in a done state when we're not suspended, so this won't
* block for the non-suspended case. */
@@ -3220,6 +3227,7 @@ vchiq_use_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
VCHIQ_STATUS_T
vchiq_release_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service)
{
+ unsigned int bh;
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
VCHIQ_STATUS_T ret = VCHIQ_SUCCESS;
char entity[16];
@@ -3241,7 +3249,7 @@ vchiq_release_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service)
entity_uc = &arm_state->peer_use_count;
}
- write_lock_bh(&arm_state->susp_res_lock);
+ bh = write_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
if (!arm_state->videocore_use_count || !(*entity_uc)) {
/* Don't use BUG_ON - don't allow user thread to crash kernel */
WARN_ON(!arm_state->videocore_use_count);
@@ -3272,7 +3280,7 @@ vchiq_release_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service)
arm_state->videocore_use_count);
unlock:
- write_unlock_bh(&arm_state->susp_res_lock);
+ write_unlock_bh(&arm_state->susp_res_lock, bh);
out:
vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, ret);
@@ -3419,6 +3427,7 @@ struct service_data_struct {
void
vchiq_dump_service_use_state(VCHIQ_STATE_T *state)
{
+ unsigned int bh;
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
struct service_data_struct *service_data;
int i, found = 0;
@@ -3441,7 +3450,7 @@ vchiq_dump_service_use_state(VCHIQ_STATE_T *state)
if (!service_data)
return;
- read_lock_bh(&arm_state->susp_res_lock);
+ bh = read_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
vc_suspend_state = arm_state->vc_suspend_state;
vc_resume_state = arm_state->vc_resume_state;
peer_count = arm_state->peer_use_count;
@@ -3470,7 +3479,7 @@ vchiq_dump_service_use_state(VCHIQ_STATE_T *state)
break;
}
- read_unlock_bh(&arm_state->susp_res_lock);
+ read_unlock_bh(&arm_state->susp_res_lock, bh);
vchiq_log_warning(vchiq_susp_log_level,
"-- Videcore suspend state: %s --",
@@ -3505,6 +3514,7 @@ vchiq_dump_service_use_state(VCHIQ_STATE_T *state)
VCHIQ_STATUS_T
vchiq_check_service(VCHIQ_SERVICE_T *service)
{
+ unsigned int bh;
VCHIQ_ARM_STATE_T *arm_state;
VCHIQ_STATUS_T ret = VCHIQ_ERROR;
@@ -3515,10 +3525,10 @@ vchiq_check_service(VCHIQ_SERVICE_T *service)
arm_state = vchiq_platform_get_arm_state(service->state);
- read_lock_bh(&arm_state->susp_res_lock);
+ bh = read_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
if (service->service_use_count)
ret = VCHIQ_SUCCESS;
- read_unlock_bh(&arm_state->susp_res_lock);
+ read_unlock_bh(&arm_state->susp_res_lock, bh);
if (ret == VCHIQ_ERROR) {
vchiq_log_error(vchiq_susp_log_level,
@@ -3544,17 +3554,18 @@ void vchiq_on_remote_use_active(VCHIQ_STATE_T *state)
void vchiq_platform_conn_state_changed(VCHIQ_STATE_T *state,
VCHIQ_CONNSTATE_T oldstate, VCHIQ_CONNSTATE_T newstate)
{
+ unsigned int bh;
VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
vchiq_log_info(vchiq_susp_log_level, "%d: %s->%s", state->id,
get_conn_state_name(oldstate), get_conn_state_name(newstate));
if (state->conn_state == VCHIQ_CONNSTATE_CONNECTED) {
- write_lock_bh(&arm_state->susp_res_lock);
+ bh = write_lock_bh(&arm_state->susp_res_lock, SOFTIRQ_ALL_MASK);
if (!arm_state->first_connect) {
char threadname[16];
arm_state->first_connect = 1;
- write_unlock_bh(&arm_state->susp_res_lock);
+ write_unlock_bh(&arm_state->susp_res_lock, bh);
snprintf(threadname, sizeof(threadname), "vchiq-keep/%d",
state->id);
arm_state->ka_thread = kthread_create(
@@ -3569,7 +3580,7 @@ void vchiq_platform_conn_state_changed(VCHIQ_STATE_T *state,
wake_up_process(arm_state->ka_thread);
}
} else
- write_unlock_bh(&arm_state->susp_res_lock);
+ write_unlock_bh(&arm_state->susp_res_lock, bh);
}
}
diff --git a/fs/ocfs2/cluster/tcp.c b/fs/ocfs2/cluster/tcp.c
index 7d9eea7..1137e8c 100644
--- a/fs/ocfs2/cluster/tcp.c
+++ b/fs/ocfs2/cluster/tcp.c
@@ -598,10 +598,11 @@ static void o2net_set_nn_state(struct o2net_node *nn,
/* see o2net_register_callbacks() */
static void o2net_data_ready(struct sock *sk)
{
+ unsigned int bh;
void (*ready)(struct sock *sk);
struct o2net_sock_container *sc;
- read_lock_bh(&sk->sk_callback_lock);
+ bh = read_lock_bh(&sk->sk_callback_lock, SOFTIRQ_ALL_MASK);
sc = sk->sk_user_data;
if (sc) {
sclog(sc, "data_ready hit\n");
@@ -611,7 +612,7 @@ static void o2net_data_ready(struct sock *sk)
} else {
ready = sk->sk_data_ready;
}
- read_unlock_bh(&sk->sk_callback_lock);
+ read_unlock_bh(&sk->sk_callback_lock, bh);
ready(sk);
}
@@ -619,10 +620,11 @@ static void o2net_data_ready(struct sock *sk)
/* see o2net_register_callbacks() */
static void o2net_state_change(struct sock *sk)
{
+ unsigned int bh;
void (*state_change)(struct sock *sk);
struct o2net_sock_container *sc;
- read_lock_bh(&sk->sk_callback_lock);
+ bh = read_lock_bh(&sk->sk_callback_lock, SOFTIRQ_ALL_MASK);
sc = sk->sk_user_data;
if (sc == NULL) {
state_change = sk->sk_state_change;
@@ -649,7 +651,7 @@ static void o2net_state_change(struct sock *sk)
break;
}
out:
- read_unlock_bh(&sk->sk_callback_lock);
+ read_unlock_bh(&sk->sk_callback_lock, bh);
state_change(sk);
}
@@ -661,7 +663,8 @@ static void o2net_state_change(struct sock *sk)
static void o2net_register_callbacks(struct sock *sk,
struct o2net_sock_container *sc)
{
- write_lock_bh(&sk->sk_callback_lock);
+ unsigned int bh;
+ bh = write_lock_bh(&sk->sk_callback_lock, SOFTIRQ_ALL_MASK);
/* accepted sockets inherit the old listen socket data ready */
if (sk->sk_data_ready == o2net_listen_data_ready) {
@@ -680,22 +683,23 @@ static void o2net_register_callbacks(struct sock *sk,
mutex_init(&sc->sc_send_lock);
- write_unlock_bh(&sk->sk_callback_lock);
+ write_unlock_bh(&sk->sk_callback_lock, bh);
}
static int o2net_unregister_callbacks(struct sock *sk,
struct o2net_sock_container *sc)
{
+ unsigned int bh;
int ret = 0;
- write_lock_bh(&sk->sk_callback_lock);
+ bh = write_lock_bh(&sk->sk_callback_lock, SOFTIRQ_ALL_MASK);
if (sk->sk_user_data == sc) {
ret = 1;
sk->sk_user_data = NULL;
sk->sk_data_ready = sc->sc_data_ready;
sk->sk_state_change = sc->sc_state_change;
}
- write_unlock_bh(&sk->sk_callback_lock);
+ write_unlock_bh(&sk->sk_callback_lock, bh);
return ret;
}
@@ -1986,9 +1990,10 @@ static void o2net_accept_many(struct work_struct *work)
static void o2net_listen_data_ready(struct sock *sk)
{
+ unsigned int bh;
void (*ready)(struct sock *sk);
- read_lock_bh(&sk->sk_callback_lock);
+ bh = read_lock_bh(&sk->sk_callback_lock, SOFTIRQ_ALL_MASK);
ready = sk->sk_user_data;
if (ready == NULL) { /* check for teardown race */
ready = sk->sk_data_ready;
@@ -2015,13 +2020,14 @@ static void o2net_listen_data_ready(struct sock *sk)
}
out:
- read_unlock_bh(&sk->sk_callback_lock);
+ read_unlock_bh(&sk->sk_callback_lock, bh);
if (ready != NULL)
ready(sk);
}
static int o2net_open_listening_sock(__be32 addr, __be16 port)
{
+ unsigned int bh;
struct socket *sock = NULL;
int ret;
struct sockaddr_in sin = {
@@ -2038,10 +2044,10 @@ static int o2net_open_listening_sock(__be32 addr, __be16 port)
sock->sk->sk_allocation = GFP_ATOMIC;
- write_lock_bh(&sock->sk->sk_callback_lock);
+ bh = write_lock_bh(&sock->sk->sk_callback_lock, SOFTIRQ_ALL_MASK);
sock->sk->sk_user_data = sock->sk->sk_data_ready;
sock->sk->sk_data_ready = o2net_listen_data_ready;
- write_unlock_bh(&sock->sk->sk_callback_lock);
+ write_unlock_bh(&sock->sk->sk_callback_lock, bh);
o2net_listen_sock = sock;
INIT_WORK(&o2net_listen_work, o2net_accept_many);
@@ -2104,6 +2110,7 @@ int o2net_start_listening(struct o2nm_node *node)
* tearing it down */
void o2net_stop_listening(struct o2nm_node *node)
{
+ unsigned int bh;
struct socket *sock = o2net_listen_sock;
size_t i;
@@ -2111,10 +2118,10 @@ void o2net_stop_listening(struct o2nm_node *node)
BUG_ON(o2net_listen_sock == NULL);
/* stop the listening socket from generating work */
- write_lock_bh(&sock->sk->sk_callback_lock);
+ bh = write_lock_bh(&sock->sk->sk_callback_lock, SOFTIRQ_ALL_MASK);
sock->sk->sk_data_ready = sock->sk->sk_user_data;
sock->sk->sk_user_data = NULL;
- write_unlock_bh(&sock->sk->sk_callback_lock);
+ write_unlock_bh(&sock->sk->sk_callback_lock, bh);
for (i = 0; i < ARRAY_SIZE(o2net_nodes); i++) {
struct o2nm_node *node = o2nm_get_node_by_num(i);
diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h
index 3dcd617..cda528b 100644
--- a/include/linux/rwlock.h
+++ b/include/linux/rwlock.h
@@ -99,9 +99,9 @@ do { \
#endif
#define read_lock_irq(lock) _raw_read_lock_irq(lock)
-#define read_lock_bh(lock) _raw_read_lock_bh(lock)
+#define read_lock_bh(lock, mask) _raw_read_lock_bh(lock, mask)
#define write_lock_irq(lock) _raw_write_lock_irq(lock)
-#define write_lock_bh(lock) _raw_write_lock_bh(lock)
+#define write_lock_bh(lock, mask) _raw_write_lock_bh(lock, mask)
#define read_unlock(lock) _raw_read_unlock(lock)
#define write_unlock(lock) _raw_write_unlock(lock)
#define read_unlock_irq(lock) _raw_read_unlock_irq(lock)
@@ -112,14 +112,14 @@ do { \
typecheck(unsigned long, flags); \
_raw_read_unlock_irqrestore(lock, flags); \
} while (0)
-#define read_unlock_bh(lock) _raw_read_unlock_bh(lock)
+#define read_unlock_bh(lock, bh) _raw_read_unlock_bh(lock, bh)
#define write_unlock_irqrestore(lock, flags) \
do { \
typecheck(unsigned long, flags); \
_raw_write_unlock_irqrestore(lock, flags); \
} while (0)
-#define write_unlock_bh(lock) _raw_write_unlock_bh(lock)
+#define write_unlock_bh(lock, bh) _raw_write_unlock_bh(lock, bh)
#define write_trylock_irqsave(lock, flags) \
({ \
diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h
index 86ebb4b..fb66489 100644
--- a/include/linux/rwlock_api_smp.h
+++ b/include/linux/rwlock_api_smp.h
@@ -17,8 +17,8 @@
void __lockfunc _raw_read_lock(rwlock_t *lock) __acquires(lock);
void __lockfunc _raw_write_lock(rwlock_t *lock) __acquires(lock);
-void __lockfunc _raw_read_lock_bh(rwlock_t *lock) __acquires(lock);
-void __lockfunc _raw_write_lock_bh(rwlock_t *lock) __acquires(lock);
+unsigned int __lockfunc _raw_read_lock_bh(rwlock_t *lock, unsigned int mask) __acquires(lock);
+unsigned int __lockfunc _raw_write_lock_bh(rwlock_t *lock, unsigned int mask) __acquires(lock);
void __lockfunc _raw_read_lock_irq(rwlock_t *lock) __acquires(lock);
void __lockfunc _raw_write_lock_irq(rwlock_t *lock) __acquires(lock);
unsigned long __lockfunc _raw_read_lock_irqsave(rwlock_t *lock)
@@ -29,8 +29,8 @@ int __lockfunc _raw_read_trylock(rwlock_t *lock);
int __lockfunc _raw_write_trylock(rwlock_t *lock);
void __lockfunc _raw_read_unlock(rwlock_t *lock) __releases(lock);
void __lockfunc _raw_write_unlock(rwlock_t *lock) __releases(lock);
-void __lockfunc _raw_read_unlock_bh(rwlock_t *lock) __releases(lock);
-void __lockfunc _raw_write_unlock_bh(rwlock_t *lock) __releases(lock);
+void __lockfunc _raw_read_unlock_bh(rwlock_t *lock, unsigned int bh) __releases(lock);
+void __lockfunc _raw_write_unlock_bh(rwlock_t *lock, unsigned int bh) __releases(lock);
void __lockfunc _raw_read_unlock_irq(rwlock_t *lock) __releases(lock);
void __lockfunc _raw_write_unlock_irq(rwlock_t *lock) __releases(lock);
void __lockfunc
@@ -49,11 +49,11 @@ _raw_write_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
#endif
#ifdef CONFIG_INLINE_READ_LOCK_BH
-#define _raw_read_lock_bh(lock) __raw_read_lock_bh(lock)
+#define _raw_read_lock_bh(lock, mask) __raw_read_lock_bh(lock, mask)
#endif
#ifdef CONFIG_INLINE_WRITE_LOCK_BH
-#define _raw_write_lock_bh(lock) __raw_write_lock_bh(lock)
+#define _raw_write_lock_bh(lock, mask) __raw_write_lock_bh(lock, mask)
#endif
#ifdef CONFIG_INLINE_READ_LOCK_IRQ
@@ -89,11 +89,11 @@ _raw_write_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
#endif
#ifdef CONFIG_INLINE_READ_UNLOCK_BH
-#define _raw_read_unlock_bh(lock) __raw_read_unlock_bh(lock)
+#define _raw_read_unlock_bh(lock, bh) __raw_read_unlock_bh(lock, bh)
#endif
#ifdef CONFIG_INLINE_WRITE_UNLOCK_BH
-#define _raw_write_unlock_bh(lock) __raw_write_unlock_bh(lock)
+#define _raw_write_unlock_bh(lock, bh) __raw_write_unlock_bh(lock, bh)
#endif
#ifdef CONFIG_INLINE_READ_UNLOCK_IRQ
@@ -170,11 +170,13 @@ static inline void __raw_read_lock_irq(rwlock_t *lock)
LOCK_CONTENDED(lock, do_raw_read_trylock, do_raw_read_lock);
}
-static inline void __raw_read_lock_bh(rwlock_t *lock)
+static inline unsigned int __raw_read_lock_bh(rwlock_t *lock,
+ unsigned int mask)
{
__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_);
LOCK_CONTENDED(lock, do_raw_read_trylock, do_raw_read_lock);
+ return 0;
}
static inline unsigned long __raw_write_lock_irqsave(rwlock_t *lock)
@@ -197,11 +199,13 @@ static inline void __raw_write_lock_irq(rwlock_t *lock)
LOCK_CONTENDED(lock, do_raw_write_trylock, do_raw_write_lock);
}
-static inline void __raw_write_lock_bh(rwlock_t *lock)
+static inline unsigned int __raw_write_lock_bh(rwlock_t *lock,
+ unsigned int mask)
{
__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_);
LOCK_CONTENDED(lock, do_raw_write_trylock, do_raw_write_lock);
+ return 0;
}
static inline void __raw_write_lock(rwlock_t *lock)
@@ -244,7 +248,8 @@ static inline void __raw_read_unlock_irq(rwlock_t *lock)
preempt_enable();
}
-static inline void __raw_read_unlock_bh(rwlock_t *lock)
+static inline void __raw_read_unlock_bh(rwlock_t *lock,
+ unsigned int bh)
{
rwlock_release(&lock->dep_map, 1, _RET_IP_);
do_raw_read_unlock(lock);
@@ -268,7 +273,8 @@ static inline void __raw_write_unlock_irq(rwlock_t *lock)
preempt_enable();
}
-static inline void __raw_write_unlock_bh(rwlock_t *lock)
+static inline void __raw_write_unlock_bh(rwlock_t *lock,
+ unsigned int bh)
{
rwlock_release(&lock->dep_map, 1, _RET_IP_);
do_raw_write_unlock(lock);
diff --git a/include/linux/spinlock_api_up.h b/include/linux/spinlock_api_up.h
index a4b6124..e2bfafe 100644
--- a/include/linux/spinlock_api_up.h
+++ b/include/linux/spinlock_api_up.h
@@ -61,7 +61,7 @@
#define _raw_write_lock(lock) __LOCK(lock)
#define _raw_spin_lock_bh(lock, mask) ({ __LOCK_BH(lock); SOFTIRQ_ALL_MASK; })
-#define _raw_read_lock_bh(lock) ({ __LOCK_BH(lock); 0; })
+#define _raw_read_lock_bh(lock, mask) ({ __LOCK_BH(lock); SOFTIRQ_ALL_MASK; })
-#define _raw_write_lock_bh(lock) ({ __LOCK_BH(lock); 0; })
+#define _raw_write_lock_bh(lock, mask) ({ __LOCK_BH(lock); SOFTIRQ_ALL_MASK; })
#define _raw_spin_lock_irq(lock) __LOCK_IRQ(lock)
#define _raw_read_lock_irq(lock) __LOCK_IRQ(lock)
#define _raw_write_lock_irq(lock) __LOCK_IRQ(lock)
--
2.7.4
From: Frederic Weisbecker <[email protected]>
So far, softirq disablement and softirq callback processing have been
handled the same way: increment the softirq offset, trace softirqs off,
disable preemption, etc...
The only difference remains in how the preempt count is incremented:
by 1 (SOFTIRQ_OFFSET) for softirq processing, which can't nest since
softirq processing isn't re-entrant, and by 2 (SOFTIRQ_DISABLE_OFFSET)
for softirq disablement, which can nest.
Now their behaviours are going to diverge entirely. Softirq processing
will need to be re-entrant and accept stacked SOFTIRQ_OFFSET increments.
OTOH softirq disablement will be driven by the vector enabled mask and
toggled only once any vector gets disabled.
Maintaining both behaviours under the same handler is going to be messy,
so move the preempt count handling for softirq processing to its own
helpers.
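As a side note, the accounting described above can be modelled in user
space. This is a minimal sketch, not kernel code: the SOFTIRQ_* constants
mirror include/linux/preempt.h, but the model_* helper names are
hypothetical and invented for this illustration.

```c
/*
 * User-space model of the softirq preempt-count accounting.
 * Constants mirror include/linux/preempt.h; the model_* helpers
 * are hypothetical names, not the kernel API.
 */
#include <assert.h>

#define SOFTIRQ_SHIFT		8
#define SOFTIRQ_OFFSET		(1UL << SOFTIRQ_SHIFT)
#define SOFTIRQ_DISABLE_OFFSET	(2 * SOFTIRQ_OFFSET)
#define SOFTIRQ_MASK		(0xffUL << SOFTIRQ_SHIFT)

static unsigned long preempt_count;

static unsigned long softirq_count(void)
{
	return preempt_count & SOFTIRQ_MASK;
}

/* Disablement nests: each disable adds two SOFTIRQ_OFFSET units (even). */
static void model_bh_disable(void) { preempt_count += SOFTIRQ_DISABLE_OFFSET; }
static void model_bh_enable(void)  { preempt_count -= SOFTIRQ_DISABLE_OFFSET; }

/* Serving can't nest: a single SOFTIRQ_OFFSET makes the field odd. */
static void model_serve_enter(void) { preempt_count += SOFTIRQ_OFFSET; }
static void model_serve_exit(void)  { preempt_count -= SOFTIRQ_OFFSET; }

/* The odd bit alone tells serving apart from (even) disablement. */
static int model_in_serving_softirq(void)
{
	return !!(softirq_count() & SOFTIRQ_OFFSET);
}
```

The parity trick is the whole point: disablement only ever moves the
softirq field by even amounts, so the low bit of the field is free to
flag the one non-nesting state, serving.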
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
kernel/softirq.c | 74 ++++++++++++++++++++++++++++++++++++++++++++------------
1 file changed, 58 insertions(+), 16 deletions(-)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index ae9e29f..22cc0a7 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -139,19 +139,6 @@ void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
EXPORT_SYMBOL(__local_bh_disable_ip);
#endif /* CONFIG_TRACE_IRQFLAGS */
-static void __local_bh_enable(unsigned int cnt)
-{
- lockdep_assert_irqs_disabled();
-
- if (preempt_count() == cnt)
- trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
-
- if (softirq_count() == (cnt & SOFTIRQ_MASK))
- trace_softirqs_on(_RET_IP_);
-
- __preempt_count_sub(cnt);
-}
-
/*
* Special-case - softirqs can safely be enabled by __do_softirq(),
* without processing still-pending softirqs:
@@ -159,7 +146,16 @@ static void __local_bh_enable(unsigned int cnt)
void local_bh_enable_no_softirq(void)
{
WARN_ON_ONCE(in_irq());
- __local_bh_enable(SOFTIRQ_DISABLE_OFFSET);
+ lockdep_assert_irqs_disabled();
+
+ if (preempt_count() == SOFTIRQ_DISABLE_OFFSET)
+ trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
+
+ if (softirq_count() == SOFTIRQ_DISABLE_OFFSET)
+ trace_softirqs_on(_RET_IP_);
+
+ __preempt_count_sub(SOFTIRQ_DISABLE_OFFSET);
}
EXPORT_SYMBOL(local_bh_enable_no_softirq);
@@ -207,6 +203,52 @@ void local_bh_enable_all(void)
local_bh_enable(SOFTIRQ_ALL_MASK);
}
+static void local_bh_enter(unsigned long ip)
+{
+ unsigned long flags;
+
+ WARN_ON_ONCE(in_irq());
+
+ raw_local_irq_save(flags);
+ /*
+ * The preempt tracer hooks into preempt_count_add and will break
+ * lockdep because it calls back into lockdep after SOFTIRQ_OFFSET
+ * is set and before current->softirq_enabled is cleared.
+ * We must manually increment preempt_count here and manually
+ * call the trace_preempt_off later.
+ */
+ __preempt_count_add(SOFTIRQ_OFFSET);
+ /*
+ * Were softirqs turned off above:
+ */
+ if (softirq_count() == SOFTIRQ_OFFSET)
+ trace_softirqs_off(ip);
+ raw_local_irq_restore(flags);
+
+ if (preempt_count() == SOFTIRQ_OFFSET) {
+#ifdef CONFIG_DEBUG_PREEMPT
+ current->preempt_disable_ip = get_lock_parent_ip();
+#endif
+ trace_preempt_off(CALLER_ADDR0, get_lock_parent_ip());
+ }
+}
+
+static void local_bh_exit(void)
+{
+ lockdep_assert_irqs_disabled();
+
+ if (preempt_count() == SOFTIRQ_OFFSET)
+ trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
+
+ if (softirq_count() == SOFTIRQ_OFFSET)
+ trace_softirqs_on(_RET_IP_);
+
+ __preempt_count_sub(SOFTIRQ_OFFSET);
+}
+
/*
* We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
* but break the loop if need_resched() is set or after 2 ms.
@@ -276,7 +318,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
pending = local_softirq_pending() & local_softirq_enabled();
account_irq_enter_time(current);
- __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
+ local_bh_enter(_RET_IP_);
in_hardirq = lockdep_softirq_start();
restart:
@@ -325,7 +367,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
lockdep_softirq_end(in_hardirq);
account_irq_exit_time(current);
- __local_bh_enable(SOFTIRQ_OFFSET);
+ local_bh_exit();
WARN_ON_ONCE(in_interrupt());
current_restore_flags(old_flags, PF_MEMALLOC);
}
--
2.7.4
The current softirq_count() layout is designed as follows:
* Serving softirqs is done under SOFTIRQ_OFFSET. It makes softirq_count()
odd and, since it can't nest (softirq serving is not re-entrant), it's
easy to differentiate it from softirq disablement, which uses even
values.
* Disabling softirqs is done under SOFTIRQ_OFFSET * 2. This can nest,
so incrementing by even values is fine to differentiate it from serving
softirqs.
Now the design is going to change:
* Serving softirqs will need to be re-entrant to allow a vector to
interrupt another.
* Disabling softirqs can't nest anymore at the softirq_count() level.
This is all driven by the vector disabled mask now.
In order to support this new layout, simply swap them. Serving softirqs
now uses even-value increments and disabling softirqs now uses an
odd-value toggle.
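For illustration only, the swapped layout can be modelled outside the
kernel. The constants below follow the patch's include/linux/preempt.h
changes; the helper names are invented for this sketch and are not the
kernel API.

```c
/*
 * User-space sketch of the swapped softirq_count() layout:
 * disablement is an odd-bit toggle, serving uses even nesting.
 */
#include <assert.h>

#define SOFTIRQ_SHIFT		8
#define SOFTIRQ_OFFSET		(1UL << SOFTIRQ_SHIFT)
#define SOFTIRQ_SERVING_OFFSET	(2 * SOFTIRQ_OFFSET)
#define SOFTIRQ_MASK		(0xffUL << SOFTIRQ_SHIFT)
#define SOFTIRQ_SERVING_MASK	(SOFTIRQ_MASK & ~SOFTIRQ_OFFSET)

static unsigned long pc; /* model of preempt_count */

static unsigned long softirq_field(void) { return pc & SOFTIRQ_MASK; }

/* Disablement is now a single odd-bit toggle; nesting is tracked
 * elsewhere by the per-vector enabled mask. */
static void bh_disable_toggle(void) { pc += SOFTIRQ_OFFSET; }
static void bh_enable_toggle(void)  { pc -= SOFTIRQ_OFFSET; }

/* Serving is re-entrant: each nesting level adds an even offset. */
static void serve_enter(void) { pc += SOFTIRQ_SERVING_OFFSET; }
static void serve_exit(void)  { pc -= SOFTIRQ_SERVING_OFFSET; }

static int in_serving(void)
{
	return !!(softirq_field() & SOFTIRQ_SERVING_MASK);
}
```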
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
include/linux/bottom_half.h | 6 +++---
include/linux/preempt.h | 9 +++++----
kernel/softirq.c | 21 +++++++++++----------
kernel/trace/ring_buffer.c | 2 +-
kernel/trace/trace.c | 2 +-
5 files changed, 21 insertions(+), 19 deletions(-)
diff --git a/include/linux/bottom_half.h b/include/linux/bottom_half.h
index f8a68c8..74c986a 100644
--- a/include/linux/bottom_half.h
+++ b/include/linux/bottom_half.h
@@ -49,7 +49,7 @@ static __always_inline unsigned int __local_bh_disable_ip(unsigned long ip, unsi
static inline unsigned int local_bh_disable(unsigned int mask)
{
- return __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET, mask);
+ return __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_OFFSET, mask);
}
extern void local_bh_enable_no_softirq(unsigned int bh);
@@ -58,12 +58,12 @@ extern void __local_bh_enable_ip(unsigned long ip,
static inline void local_bh_enable_ip(unsigned long ip, unsigned int bh)
{
- __local_bh_enable_ip(ip, SOFTIRQ_DISABLE_OFFSET, bh);
+ __local_bh_enable_ip(ip, SOFTIRQ_OFFSET, bh);
}
static inline void local_bh_enable(unsigned int bh)
{
- __local_bh_enable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET, bh);
+ __local_bh_enable_ip(_THIS_IP_, SOFTIRQ_OFFSET, bh);
}
extern void local_bh_disable_all(void);
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index cf3fc3c..c4d9672 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -51,7 +51,8 @@
#define HARDIRQ_OFFSET (1UL << HARDIRQ_SHIFT)
#define NMI_OFFSET (1UL << NMI_SHIFT)
-#define SOFTIRQ_DISABLE_OFFSET (2 * SOFTIRQ_OFFSET)
+#define SOFTIRQ_SERVING_OFFSET (2 * SOFTIRQ_OFFSET)
+#define SOFTIRQ_SERVING_MASK (SOFTIRQ_MASK & ~SOFTIRQ_OFFSET)
/* We use the MSB mostly because its available */
#define PREEMPT_NEED_RESCHED 0x80000000
@@ -101,10 +102,10 @@
#define in_irq() (hardirq_count())
#define in_softirq() (softirq_count())
#define in_interrupt() (irq_count())
-#define in_serving_softirq() (softirq_count() & SOFTIRQ_OFFSET)
+#define in_serving_softirq() (softirq_count() & ~SOFTIRQ_OFFSET)
#define in_nmi() (preempt_count() & NMI_MASK)
#define in_task() (!(preempt_count() & \
- (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
+ (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_SERVING_MASK)))
/*
* The preempt_count offset after preempt_disable();
@@ -133,7 +134,7 @@
*
* Work as expected.
*/
-#define SOFTIRQ_LOCK_OFFSET (SOFTIRQ_DISABLE_OFFSET + PREEMPT_LOCK_OFFSET)
+#define SOFTIRQ_LOCK_OFFSET (SOFTIRQ_OFFSET + PREEMPT_LOCK_OFFSET)
/*
* Are we running in atomic context? WARNING: this macro cannot
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 84da16c..3efa59e 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -163,13 +163,13 @@ void local_bh_enable_no_softirq(unsigned int bh)
if (bh != SOFTIRQ_ALL_MASK)
return;
- if (preempt_count() == SOFTIRQ_DISABLE_OFFSET)
+ if (preempt_count() == SOFTIRQ_OFFSET)
trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
- if (softirq_count() == SOFTIRQ_DISABLE_OFFSET)
+ if (softirq_count() == SOFTIRQ_OFFSET)
trace_softirqs_on(_RET_IP_);
- __preempt_count_sub(SOFTIRQ_DISABLE_OFFSET);
+ __preempt_count_sub(SOFTIRQ_OFFSET);
}
EXPORT_SYMBOL(local_bh_enable_no_softirq);
@@ -181,9 +181,10 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt, unsigned int bh)
local_irq_disable();
#endif
softirq_enabled_set(bh);
+
if (bh != SOFTIRQ_ALL_MASK) {
cnt &= ~SOFTIRQ_MASK;
- } else if (!(softirq_count() & SOFTIRQ_OFFSET)) {
+ } else if (!(softirq_count() & SOFTIRQ_SERVING_MASK)) {
/* Are softirqs going to be turned on now: */
trace_softirqs_on(ip);
}
@@ -235,15 +236,15 @@ static void local_bh_enter(unsigned long ip)
* We must manually increment preempt_count here and manually
* call the trace_preempt_off later.
*/
- __preempt_count_add(SOFTIRQ_OFFSET);
+ __preempt_count_add(SOFTIRQ_SERVING_OFFSET);
/*
* Were softirqs turned off above:
*/
- if (softirq_count() == SOFTIRQ_OFFSET)
+ if (softirq_count() == SOFTIRQ_SERVING_OFFSET)
trace_softirqs_off(ip);
raw_local_irq_restore(flags);
- if (preempt_count() == SOFTIRQ_OFFSET) {
+ if (preempt_count() == SOFTIRQ_SERVING_OFFSET) {
#ifdef CONFIG_DEBUG_PREEMPT
current->preempt_disable_ip = get_lock_parent_ip();
#endif
@@ -255,13 +256,13 @@ static void local_bh_exit(void)
{
lockdep_assert_irqs_disabled();
- if (preempt_count() == SOFTIRQ_OFFSET)
+ if (preempt_count() == SOFTIRQ_SERVING_OFFSET)
trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
- if (softirq_count() == SOFTIRQ_OFFSET)
+ if (softirq_count() == SOFTIRQ_SERVING_OFFSET)
trace_softirqs_on(_RET_IP_);
- __preempt_count_sub(SOFTIRQ_OFFSET);
+ __preempt_count_sub(SOFTIRQ_SERVING_OFFSET);
}
/*
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 65bd461..0fedc5c 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2655,7 +2655,7 @@ trace_recursive_lock(struct ring_buffer_per_cpu *cpu_buffer)
unsigned long pc = preempt_count();
int bit;
- if (!(pc & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
+ if (in_task())
bit = RB_CTX_NORMAL;
else
bit = pc & NMI_MASK ? RB_CTX_NMI :
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index bf6f1d7..af1abd6 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2143,7 +2143,7 @@ tracing_generic_entry_update(struct trace_entry *entry, unsigned long flags,
#endif
((pc & NMI_MASK ) ? TRACE_FLAG_NMI : 0) |
((pc & HARDIRQ_MASK) ? TRACE_FLAG_HARDIRQ : 0) |
- ((pc & SOFTIRQ_OFFSET) ? TRACE_FLAG_SOFTIRQ : 0) |
+ ((pc & SOFTIRQ_SERVING_MASK) ? TRACE_FLAG_SOFTIRQ : 0) |
(tif_need_resched() ? TRACE_FLAG_NEED_RESCHED : 0) |
(test_preempt_need_resched() ? TRACE_FLAG_PREEMPT_RESCHED : 0);
}
--
2.7.4
From: Frederic Weisbecker <[email protected]>
This pair of functions is implemented on top of spin_[un]lock_bh(), which
is going to handle a softirq mask in order to apply fine-grained vector
disablement. The lock function is going to return the mask of vectors
enabled prior to the last call to local_bh_disable(), following a model
similar to that of local_irq_save/restore. Subsequent calls to
local_bh_disable() and friends can then stack up:
bh = local_bh_disable(vec_mask);
bh2 = write_seqlock_bh(...) {
return spin_lock_bh(...);
}
...
write_sequnlock_bh(..., bh2) {
spin_unlock_bh(..., bh2);
}
local_bh_enable(bh);
To prepare for that, make write_seqlock_bh() able to return a saved
vector enabled mask and pass it back to write_sequnlock_bh(). Then plug
the whole thing into spin_[un]lock_bh().
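The save/restore stacking above can be sketched as a small user-space
model. The 8-bit vector mask and the bh_disable/lock_bh names are
assumptions made for this illustration; they are not the kernel API.

```c
/*
 * Sketch of the local_irq_save/restore-style stacking of
 * softirq vector masks through lock wrappers.
 */
#include <assert.h>

#define SOFTIRQ_ALL_MASK	0xffu
#define NET_RX_BIT		(1u << 3)	/* hypothetical vector bits */
#define BLOCK_BIT		(1u << 4)

static unsigned int enabled = SOFTIRQ_ALL_MASK;

/* Returns the previously enabled vectors, like local_irq_save(). */
static unsigned int bh_disable(unsigned int mask)
{
	unsigned int old = enabled;

	enabled &= ~mask;
	return old;
}

/* Restores the saved state, like local_irq_restore(). */
static void bh_restore(unsigned int old) { enabled = old; }

/* A wrapper in the style of write_seqlock_bh(): forward the caller's
 * mask, hand the saved state back for the unlock side. */
static unsigned int lock_bh(unsigned int mask) { return bh_disable(mask); }
static void unlock_bh(unsigned int saved) { bh_restore(saved); }
```

Because each caller restores the exact snapshot it was handed, nested
sections unwind in LIFO order and each level re-enables only the vectors
that were enabled when it started.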
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
include/linux/seqlock.h | 21 +++++++++++++--------
net/core/neighbour.c | 5 +++--
net/ipv4/inetpeer.c | 5 +++--
net/ipv4/sysctl_net_ipv4.c | 5 +++--
net/ipv4/tcp_metrics.c | 5 +++--
net/rxrpc/conn_service.c | 4 ++--
6 files changed, 27 insertions(+), 18 deletions(-)
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index c22e19c..720e6e0 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -455,16 +455,19 @@ static inline void write_sequnlock(seqlock_t *sl)
spin_unlock(&sl->lock);
}
-static inline void write_seqlock_bh(seqlock_t *sl)
+static inline unsigned int write_seqlock_bh(seqlock_t *sl, unsigned int mask)
{
- spin_lock_bh(&sl->lock, SOFTIRQ_ALL_MASK);
+ unsigned int bh;
+ bh = spin_lock_bh(&sl->lock, mask);
write_seqcount_begin(&sl->seqcount);
+ return bh;
}
-static inline void write_sequnlock_bh(seqlock_t *sl)
+static inline void write_sequnlock_bh(seqlock_t *sl,
+ unsigned int bh)
{
write_seqcount_end(&sl->seqcount);
- spin_unlock_bh(&sl->lock, 0);
+ spin_unlock_bh(&sl->lock, bh);
}
static inline void write_seqlock_irq(seqlock_t *sl)
@@ -542,14 +545,16 @@ static inline void done_seqretry(seqlock_t *lock, int seq)
read_sequnlock_excl(lock);
}
-static inline void read_seqlock_excl_bh(seqlock_t *sl)
+static inline unsigned int read_seqlock_excl_bh(seqlock_t *sl,
+ unsigned int mask)
{
- spin_lock_bh(&sl->lock, SOFTIRQ_ALL_MASK);
+ return spin_lock_bh(&sl->lock, mask);
}
-static inline void read_sequnlock_excl_bh(seqlock_t *sl)
+static inline void read_sequnlock_excl_bh(seqlock_t *sl,
+ unsigned int bh)
{
- spin_unlock_bh(&sl->lock, 0);
+ spin_unlock_bh(&sl->lock, bh);
}
static inline void read_seqlock_excl_irq(seqlock_t *sl)
diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index ec55470..733449e 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -1083,6 +1083,7 @@ EXPORT_SYMBOL(__neigh_event_send);
static void neigh_update_hhs(struct neighbour *neigh)
{
struct hh_cache *hh;
+ unsigned int bh;
void (*update)(struct hh_cache*, const struct net_device*, const unsigned char *)
= NULL;
@@ -1092,9 +1093,9 @@ static void neigh_update_hhs(struct neighbour *neigh)
if (update) {
hh = &neigh->hh;
if (hh->hh_len) {
- write_seqlock_bh(&hh->hh_lock);
+ bh = write_seqlock_bh(&hh->hh_lock, SOFTIRQ_ALL_MASK);
update(hh, neigh->dev, neigh->ha);
- write_sequnlock_bh(&hh->hh_lock);
+ write_sequnlock_bh(&hh->hh_lock, bh);
}
}
}
diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
index d757b96..224d30e 100644
--- a/net/ipv4/inetpeer.c
+++ b/net/ipv4/inetpeer.c
@@ -182,6 +182,7 @@ struct inet_peer *inet_getpeer(struct inet_peer_base *base,
struct rb_node **pp, *parent;
unsigned int gc_cnt, seq;
int invalidated;
+ unsigned int bh;
/* Attempt a lockless lookup first.
* Because of a concurrent writer, we might not find an existing entry.
@@ -203,7 +204,7 @@ struct inet_peer *inet_getpeer(struct inet_peer_base *base,
* At least, nodes should be hot in our cache.
*/
parent = NULL;
- write_seqlock_bh(&base->lock);
+ bh = write_seqlock_bh(&base->lock, SOFTIRQ_ALL_MASK);
gc_cnt = 0;
p = lookup(daddr, base, seq, gc_stack, &gc_cnt, &parent, &pp);
@@ -228,7 +229,7 @@ struct inet_peer *inet_getpeer(struct inet_peer_base *base,
}
if (gc_cnt)
inet_peer_gc(base, gc_stack, gc_cnt);
- write_sequnlock_bh(&base->lock);
+ write_sequnlock_bh(&base->lock, bh);
return p;
}
diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
index b92f422..b6d1d52 100644
--- a/net/ipv4/sysctl_net_ipv4.c
+++ b/net/ipv4/sysctl_net_ipv4.c
@@ -56,15 +56,16 @@ static int sysctl_tcp_low_latency __read_mostly;
static void set_local_port_range(struct net *net, int range[2])
{
bool same_parity = !((range[0] ^ range[1]) & 1);
+ unsigned int bh;
- write_seqlock_bh(&net->ipv4.ip_local_ports.lock);
+ bh = write_seqlock_bh(&net->ipv4.ip_local_ports.lock, SOFTIRQ_ALL_MASK);
if (same_parity && !net->ipv4.ip_local_ports.warned) {
net->ipv4.ip_local_ports.warned = true;
pr_err_ratelimited("ip_local_port_range: prefer different parity for start/end values.\n");
}
net->ipv4.ip_local_ports.range[0] = range[0];
net->ipv4.ip_local_ports.range[1] = range[1];
- write_sequnlock_bh(&net->ipv4.ip_local_ports.lock);
+ write_sequnlock_bh(&net->ipv4.ip_local_ports.lock, bh);
}
/* Validate changes from /proc interface. */
diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c
index fd6ba88..c65d499 100644
--- a/net/ipv4/tcp_metrics.c
+++ b/net/ipv4/tcp_metrics.c
@@ -574,6 +574,7 @@ void tcp_fastopen_cache_set(struct sock *sk, u16 mss,
u16 try_exp)
{
struct dst_entry *dst = __sk_dst_get(sk);
+ unsigned int bh;
struct tcp_metrics_block *tm;
if (!dst)
@@ -583,7 +584,7 @@ void tcp_fastopen_cache_set(struct sock *sk, u16 mss,
if (tm) {
struct tcp_fastopen_metrics *tfom = &tm->tcpm_fastopen;
- write_seqlock_bh(&fastopen_seqlock);
+ bh = write_seqlock_bh(&fastopen_seqlock, SOFTIRQ_ALL_MASK);
if (mss)
tfom->mss = mss;
if (cookie && cookie->len > 0)
@@ -596,7 +597,7 @@ void tcp_fastopen_cache_set(struct sock *sk, u16 mss,
tfom->last_syn_loss = jiffies;
} else
tfom->syn_loss = 0;
- write_sequnlock_bh(&fastopen_seqlock);
+ write_sequnlock_bh(&fastopen_seqlock, bh);
}
rcu_read_unlock();
}
diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c
index 80773a5..e253cd9 100644
--- a/net/rxrpc/conn_service.c
+++ b/net/rxrpc/conn_service.c
@@ -71,7 +71,7 @@ static void rxrpc_publish_service_conn(struct rxrpc_peer *peer,
struct rxrpc_conn_proto k = conn->proto;
struct rb_node **pp, *parent;
- write_seqlock_bh(&peer->service_conn_lock);
+ write_seqlock_bh(&peer->service_conn_lock, SOFTIRQ_ALL_MASK);
pp = &peer->service_conns.rb_node;
parent = NULL;
@@ -191,7 +191,7 @@ void rxrpc_unpublish_service_conn(struct rxrpc_connection *conn)
{
struct rxrpc_peer *peer = conn->params.peer;
- write_seqlock_bh(&peer->service_conn_lock);
+ write_seqlock_bh(&peer->service_conn_lock, SOFTIRQ_ALL_MASK);
if (test_and_clear_bit(RXRPC_CONN_IN_SERVICE_CONNS, &conn->flags))
rb_erase(&conn->service_node, &peer->service_conns);
write_sequnlock_bh(&peer->service_conn_lock);
--
2.7.4
This pair of functions is implemented on top of spin_lock_bh(), which is
going to handle a softirq mask in order to apply fine-grained vector
disablement. The lock function is going to return the mask of vectors
enabled prior to the last call to local_bh_disable(), following a model
similar to that of local_irq_save/restore. Subsequent calls to
local_bh_disable() and friends can then stack up:
bh = local_bh_disable(vec_mask);
bh2 = diva_os_enter_spin_lock() {
return spin_lock_bh(...);
}
...
diva_os_leave_spin_lock(bh2) {
spin_unlock_bh(..., bh2);
}
local_bh_enable(bh);
To prepare for that, make diva_os_enter_spin_lock() able to return a
saved vector enabled mask and pass it back to diva_os_leave_spin_lock().
We'll plug it into spin_lock_bh() in a subsequent patch.
Thanks to Coccinelle, which helped a lot with scripts such as the
following:
@spin exists@
identifier func;
expression e, e1, e2;
@@
func(...) {
+ unsigned int bh;
...
- diva_os_enter_spin_lock(e, e1, e2);
+ bh = diva_os_enter_spin_lock(e, e1, e2);
...
- diva_os_leave_spin_lock(e, e1, e2);
+ diva_os_leave_spin_lock(e, e1, e2, bh);
...
}
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
drivers/isdn/hardware/eicon/capifunc.c | 53 ++++++++------
drivers/isdn/hardware/eicon/dadapter.c | 39 ++++++----
drivers/isdn/hardware/eicon/debug.c | 129 +++++++++++++++++++--------------
drivers/isdn/hardware/eicon/debug_if.h | 6 +-
drivers/isdn/hardware/eicon/diva.c | 45 +++++++-----
drivers/isdn/hardware/eicon/idifunc.c | 22 +++---
drivers/isdn/hardware/eicon/io.c | 88 ++++++++++++----------
drivers/isdn/hardware/eicon/mntfunc.c | 13 ++--
drivers/isdn/hardware/eicon/platform.h | 9 ++-
drivers/isdn/hardware/eicon/um_idi.c | 104 ++++++++++++++++----------
10 files changed, 304 insertions(+), 204 deletions(-)
diff --git a/drivers/isdn/hardware/eicon/capifunc.c b/drivers/isdn/hardware/eicon/capifunc.c
index 7a0bdbd..c345286 100644
--- a/drivers/isdn/hardware/eicon/capifunc.c
+++ b/drivers/isdn/hardware/eicon/capifunc.c
@@ -390,6 +390,7 @@ static void clean_adapter(int id, struct list_head *free_mem_q)
*/
static void divacapi_remove_card(DESCRIPTOR *d)
{
+ unsigned int bh;
diva_card *card = NULL;
diva_os_spin_lock_magic_t old_irql;
LIST_HEAD(free_mem_q);
@@ -401,7 +402,7 @@ static void divacapi_remove_card(DESCRIPTOR *d)
* Ensures that there is no call from sendf to CAPI in
* the time CAPI controller is about to be removed.
*/
- diva_os_enter_spin_lock(&api_lock, &old_irql, "remove card");
+ bh = diva_os_enter_spin_lock(&api_lock, &old_irql, "remove card");
list_for_each(tmp, &cards) {
card = list_entry(tmp, diva_card, list);
if (card->d.request == d->request) {
@@ -410,7 +411,7 @@ static void divacapi_remove_card(DESCRIPTOR *d)
break;
}
}
- diva_os_leave_spin_lock(&api_lock, &old_irql, "remove card");
+ diva_os_leave_spin_lock(&api_lock, &old_irql, "remove card", bh);
if (card) {
/*
@@ -423,7 +424,7 @@ static void divacapi_remove_card(DESCRIPTOR *d)
* Now get API lock (to ensure stable state of LI tables)
* and update the adapter map/LI table.
*/
- diva_os_enter_spin_lock(&api_lock, &old_irql, "remove card");
+ bh = diva_os_enter_spin_lock(&api_lock, &old_irql, "remove card");
clean_adapter(card->Id - 1, &free_mem_q);
DBG_TRC(("DelAdapterMap (%d) -> (%d)",
@@ -431,7 +432,7 @@ static void divacapi_remove_card(DESCRIPTOR *d)
ControllerMap[card->Id] = 0;
DBG_TRC(("adapter remove, max_adapter=%d",
max_adapter));
- diva_os_leave_spin_lock(&api_lock, &old_irql, "remove card");
+ diva_os_leave_spin_lock(&api_lock, &old_irql, "remove card", bh);
/* After releasing the lock, we can free the memory */
diva_os_free(0, card);
@@ -449,13 +450,14 @@ static void divacapi_remove_card(DESCRIPTOR *d)
*/
static void divacapi_remove_cards(void)
{
+ unsigned int bh;
DESCRIPTOR d;
struct list_head *tmp;
diva_card *card;
diva_os_spin_lock_magic_t old_irql;
rescan:
- diva_os_enter_spin_lock(&api_lock, &old_irql, "remove cards");
+ bh = diva_os_enter_spin_lock(&api_lock, &old_irql, "remove cards");
list_for_each(tmp, &cards) {
card = list_entry(tmp, diva_card, list);
diva_os_leave_spin_lock(&api_lock, &old_irql, "remove cards");
@@ -463,7 +465,7 @@ static void divacapi_remove_cards(void)
divacapi_remove_card(&d);
goto rescan;
}
- diva_os_leave_spin_lock(&api_lock, &old_irql, "remove cards");
+ diva_os_leave_spin_lock(&api_lock, &old_irql, "remove cards", bh);
}
/*
@@ -471,13 +473,15 @@ static void divacapi_remove_cards(void)
*/
static void sync_callback(ENTITY *e)
{
+ unsigned int bh;
diva_os_spin_lock_magic_t old_irql;
DBG_TRC(("cb:Id=%x,Rc=%x,Ind=%x", e->Id, e->Rc, e->Ind))
- diva_os_enter_spin_lock(&api_lock, &old_irql, "sync_callback");
+ bh = diva_os_enter_spin_lock(&api_lock, &old_irql,
+ "sync_callback");
callback(e);
- diva_os_leave_spin_lock(&api_lock, &old_irql, "sync_callback");
+ diva_os_leave_spin_lock(&api_lock, &old_irql, "sync_callback", bh);
}
/*
@@ -485,6 +489,7 @@ static void sync_callback(ENTITY *e)
*/
static int diva_add_card(DESCRIPTOR *d)
{
+ unsigned int bh;
int k = 0, i = 0;
diva_os_spin_lock_magic_t old_irql;
diva_card *card = NULL;
@@ -521,9 +526,9 @@ static int diva_add_card(DESCRIPTOR *d)
return (0);
}
- diva_os_enter_spin_lock(&api_lock, &old_irql, "find id");
+ bh = diva_os_enter_spin_lock(&api_lock, &old_irql, "find id");
card->Id = find_free_id();
- diva_os_leave_spin_lock(&api_lock, &old_irql, "find id");
+ diva_os_leave_spin_lock(&api_lock, &old_irql, "find id", bh);
strlcpy(ctrl->manu, M_COMPANY, sizeof(ctrl->manu));
ctrl->version.majorversion = 2;
@@ -630,7 +635,7 @@ static int diva_add_card(DESCRIPTOR *d)
}
/* Prevent access to line interconnect table in process update */
- diva_os_enter_spin_lock(&api_lock, &old_irql, "add card");
+ bh = diva_os_enter_spin_lock(&api_lock, &old_irql, "add card");
j = 0;
for (i = 0; i < k; i++) {
@@ -686,7 +691,7 @@ static int diva_add_card(DESCRIPTOR *d)
list_add(&(card->list), &cards);
AutomaticLaw(a);
- diva_os_leave_spin_lock(&api_lock, &old_irql, "add card");
+ diva_os_leave_spin_lock(&api_lock, &old_irql, "add card", bh);
if (mem_to_free) {
diva_os_free(0, mem_to_free);
@@ -733,6 +738,7 @@ static void diva_register_appl(struct capi_ctr *ctrl, __u16 appl,
diva_os_spin_lock_magic_t old_irql;
unsigned int mem_len;
int nconn = rp->level3cnt;
+ unsigned int bh;
if (diva_os_in_irq()) {
@@ -809,7 +815,7 @@ static void diva_register_appl(struct capi_ctr *ctrl, __u16 appl,
}
/* initialize application data */
- diva_os_enter_spin_lock(&api_lock, &old_irql, "register_appl");
+ bh = diva_os_enter_spin_lock(&api_lock, &old_irql, "register_appl");
this = &application[appl - 1];
memset(this, 0, sizeof(APPL));
@@ -838,7 +844,7 @@ static void diva_register_appl(struct capi_ctr *ctrl, __u16 appl,
}
CapiRegister(this->Id);
- diva_os_leave_spin_lock(&api_lock, &old_irql, "register_appl");
+ diva_os_leave_spin_lock(&api_lock, &old_irql, "register_appl", bh);
}
@@ -847,6 +853,7 @@ static void diva_register_appl(struct capi_ctr *ctrl, __u16 appl,
*/
static void diva_release_appl(struct capi_ctr *ctrl, __u16 appl)
{
+ unsigned int bh;
diva_os_spin_lock_magic_t old_irql;
APPL *this = &application[appl - 1];
void *mem_to_free = NULL;
@@ -858,14 +865,14 @@ static void diva_release_appl(struct capi_ctr *ctrl, __u16 appl)
return;
}
- diva_os_enter_spin_lock(&api_lock, &old_irql, "release_appl");
+ bh = diva_os_enter_spin_lock(&api_lock, &old_irql, "release_appl");
if (this->Id) {
CapiRelease(this->Id);
mem_to_free = this->DataNCCI;
this->DataNCCI = NULL;
this->Id = 0;
}
- diva_os_leave_spin_lock(&api_lock, &old_irql, "release_appl");
+ diva_os_leave_spin_lock(&api_lock, &old_irql, "release_appl", bh);
if (mem_to_free)
diva_os_free(0, mem_to_free);
@@ -888,6 +895,7 @@ static u16 diva_send_message(struct capi_ctr *ctrl,
word clength = GET_WORD(&msg->header.length);
word command = GET_WORD(&msg->header.command);
u16 retval = CAPI_NOERROR;
+ unsigned int bh;
if (diva_os_in_irq()) {
DBG_ERR(("CAPI_SEND_MSG - in irq context !"))
@@ -900,10 +908,10 @@ static u16 diva_send_message(struct capi_ctr *ctrl,
return CAPI_REGOSRESOURCEERR;
}
- diva_os_enter_spin_lock(&api_lock, &old_irql, "send message");
+ bh = diva_os_enter_spin_lock(&api_lock, &old_irql, "send message");
if (!this->Id) {
- diva_os_leave_spin_lock(&api_lock, &old_irql, "send message");
+ diva_os_leave_spin_lock(&api_lock, &old_irql, "send message", bh);
return CAPI_ILLAPPNR;
}
@@ -997,7 +1005,7 @@ static u16 diva_send_message(struct capi_ctr *ctrl,
}
write_end:
- diva_os_leave_spin_lock(&api_lock, &old_irql, "send message");
+ diva_os_leave_spin_lock(&api_lock, &old_irql, "send message", bh);
if (retval == CAPI_NOERROR)
diva_os_free_message_buffer(dmb);
return retval;
@@ -1163,13 +1171,16 @@ static void remove_main_structs(void)
*/
static void do_api_remove_start(void)
{
+ unsigned int bh;
diva_os_spin_lock_magic_t old_irql;
int ret = 1, count = 100;
do {
- diva_os_enter_spin_lock(&api_lock, &old_irql, "api remove start");
+ bh = diva_os_enter_spin_lock(&api_lock, &old_irql,
+ "api remove start");
ret = api_remove_start();
- diva_os_leave_spin_lock(&api_lock, &old_irql, "api remove start");
+ diva_os_leave_spin_lock(&api_lock, &old_irql,
+ "api remove start", bh);
diva_os_sleep(10);
} while (ret && count--);
diff --git a/drivers/isdn/hardware/eicon/dadapter.c b/drivers/isdn/hardware/eicon/dadapter.c
index 5142099..842c1f1 100644
--- a/drivers/isdn/hardware/eicon/dadapter.c
+++ b/drivers/isdn/hardware/eicon/dadapter.c
@@ -106,6 +106,7 @@ void diva_didd_load_time_finit(void) {
return -1 adapter array overflow
-------------------------------------------------------------------------- */
static int diva_didd_add_descriptor(DESCRIPTOR *d) {
+ unsigned int bh;
diva_os_spin_lock_magic_t irql;
int i;
if (d->type == IDI_DIMAINT) {
@@ -123,16 +124,17 @@ static int diva_didd_add_descriptor(DESCRIPTOR *d) {
return (NEW_MAX_DESCRIPTORS);
}
for (i = 0; i < NEW_MAX_DESCRIPTORS; i++) {
- diva_os_enter_spin_lock(&didd_spin, &irql, "didd_add");
+ bh = diva_os_enter_spin_lock(&didd_spin, &irql, "didd_add");
if (HandleTable[i].type == 0) {
memcpy(&HandleTable[i], d, sizeof(*d));
Adapters++;
- diva_os_leave_spin_lock(&didd_spin, &irql, "didd_add");
+ diva_os_leave_spin_lock(&didd_spin, &irql, "didd_add",
+ bh);
diva_notify_adapter_change(d, 0); /* we have new adapter */
DBG_TRC(("Add adapter[%d], request=%08x", (i + 1), d->request))
return (i + 1);
}
- diva_os_leave_spin_lock(&didd_spin, &irql, "didd_add");
+ diva_os_leave_spin_lock(&didd_spin, &irql, "didd_add", bh);
}
DBG_ERR(("Can't add adapter, out of resources"))
return (-1);
@@ -143,6 +145,7 @@ static int diva_didd_add_descriptor(DESCRIPTOR *d) {
return 0 on success
-------------------------------------------------------------------------- */
static int diva_didd_remove_descriptor(IDI_CALL request) {
+ unsigned int bh;
diva_os_spin_lock_magic_t irql;
int i;
if (request == MAdapter.request) {
@@ -155,10 +158,12 @@ static int diva_didd_remove_descriptor(IDI_CALL request) {
for (i = 0; (Adapters && (i < NEW_MAX_DESCRIPTORS)); i++) {
if (HandleTable[i].request == request) {
diva_notify_adapter_change(&HandleTable[i], 1); /* About to remove */
- diva_os_enter_spin_lock(&didd_spin, &irql, "didd_rm");
+ bh = diva_os_enter_spin_lock(&didd_spin, &irql,
+ "didd_rm");
memset(&HandleTable[i], 0x00, sizeof(HandleTable[0]));
Adapters--;
- diva_os_leave_spin_lock(&didd_spin, &irql, "didd_rm");
+ diva_os_leave_spin_lock(&didd_spin, &irql, "didd_rm",
+ bh);
DBG_TRC(("Remove adapter[%d], request=%08x", (i + 1), request))
return (0);
}
@@ -171,13 +176,14 @@ static int diva_didd_remove_descriptor(IDI_CALL request) {
return 1 if not enough space to save all available adapters
-------------------------------------------------------------------------- */
static int diva_didd_read_adapter_array(DESCRIPTOR *buffer, int length) {
+ unsigned int bh;
diva_os_spin_lock_magic_t irql;
int src, dst;
memset(buffer, 0x00, length);
length /= sizeof(DESCRIPTOR);
DBG_TRC(("DIDD_Read, space = %d, Adapters = %d", length, Adapters + 2))
- diva_os_enter_spin_lock(&didd_spin, &irql, "didd_read");
+ bh = diva_os_enter_spin_lock(&didd_spin, &irql, "didd_read");
for (src = 0, dst = 0;
(Adapters && (src < NEW_MAX_DESCRIPTORS) && (dst < length));
src++) {
@@ -186,7 +192,7 @@ static int diva_didd_read_adapter_array(DESCRIPTOR *buffer, int length) {
dst++;
}
}
- diva_os_leave_spin_lock(&didd_spin, &irql, "didd_read");
+ diva_os_leave_spin_lock(&didd_spin, &irql, "didd_read", bh);
if (dst < length) {
memcpy(&buffer[dst], &MAdapter, sizeof(DESCRIPTOR));
dst++;
@@ -268,19 +274,22 @@ static void IDI_CALL_LINK_T diva_dadapter_request( \
static dword diva_register_adapter_callback( \
didd_adapter_change_callback_t callback,
void IDI_CALL_ENTITY_T *context) {
+ unsigned int bh;
diva_os_spin_lock_magic_t irql;
dword i;
for (i = 0; i < DIVA_DIDD_MAX_NOTIFICATIONS; i++) {
- diva_os_enter_spin_lock(&didd_spin, &irql, "didd_nfy_add");
+ bh = diva_os_enter_spin_lock(&didd_spin, &irql,
+ "didd_nfy_add");
if (!NotificationTable[i].callback) {
NotificationTable[i].callback = callback;
NotificationTable[i].context = context;
- diva_os_leave_spin_lock(&didd_spin, &irql, "didd_nfy_add");
+ diva_os_leave_spin_lock(&didd_spin, &irql,
+ "didd_nfy_add", bh);
DBG_TRC(("Register adapter notification[%d]=%08x", i + 1, callback))
return (i + 1);
}
- diva_os_leave_spin_lock(&didd_spin, &irql, "didd_nfy_add");
+ diva_os_leave_spin_lock(&didd_spin, &irql, "didd_nfy_add", bh);
}
DBG_ERR(("Can't register adapter notification, overflow"))
return (0);
@@ -289,12 +298,13 @@ static dword diva_register_adapter_callback( \
IDI client does register his notification function
-------------------------------------------------------------------------- */
static void diva_remove_adapter_callback(dword handle) {
+ unsigned int bh;
diva_os_spin_lock_magic_t irql;
if (handle && ((--handle) < DIVA_DIDD_MAX_NOTIFICATIONS)) {
- diva_os_enter_spin_lock(&didd_spin, &irql, "didd_nfy_rm");
+ bh = diva_os_enter_spin_lock(&didd_spin, &irql, "didd_nfy_rm");
NotificationTable[handle].callback = NULL;
NotificationTable[handle].context = NULL;
- diva_os_leave_spin_lock(&didd_spin, &irql, "didd_nfy_rm");
+ diva_os_leave_spin_lock(&didd_spin, &irql, "didd_nfy_rm", bh);
DBG_TRC(("Remove adapter notification[%d]", (int)(handle + 1)))
return;
}
@@ -307,17 +317,18 @@ static void diva_remove_adapter_callback(dword handle) {
Step 2: Read Adapter Array
-------------------------------------------------------------------------- */
static void diva_notify_adapter_change(DESCRIPTOR *d, int removal) {
+ unsigned int bh;
int i, do_notify;
didd_adapter_change_notification_t nfy;
diva_os_spin_lock_magic_t irql;
for (i = 0; i < DIVA_DIDD_MAX_NOTIFICATIONS; i++) {
do_notify = 0;
- diva_os_enter_spin_lock(&didd_spin, &irql, "didd_nfy");
+ bh = diva_os_enter_spin_lock(&didd_spin, &irql, "didd_nfy");
if (NotificationTable[i].callback) {
memcpy(&nfy, &NotificationTable[i], sizeof(nfy));
do_notify = 1;
}
- diva_os_leave_spin_lock(&didd_spin, &irql, "didd_nfy");
+ diva_os_leave_spin_lock(&didd_spin, &irql, "didd_nfy", bh);
if (do_notify) {
(*(nfy.callback))(nfy.context, d, removal);
}
diff --git a/drivers/isdn/hardware/eicon/debug.c b/drivers/isdn/hardware/eicon/debug.c
index 3017881..cbfcd5c 100644
--- a/drivers/isdn/hardware/eicon/debug.c
+++ b/drivers/isdn/hardware/eicon/debug.c
@@ -307,19 +307,20 @@ dword diva_dbg_q_length(void) {
entry.
*/
diva_dbg_entry_head_t *diva_maint_get_message(word *size,
- diva_os_spin_lock_magic_t *old_irql) {
+ diva_os_spin_lock_magic_t *old_irql,
+ unsigned int *bh) {
diva_dbg_entry_head_t *pmsg = NULL;
- diva_os_enter_spin_lock(&dbg_q_lock, old_irql, "read");
+ *bh = diva_os_enter_spin_lock(&dbg_q_lock, old_irql, "read");
if (dbg_q_busy) {
- diva_os_leave_spin_lock(&dbg_q_lock, old_irql, "read_busy");
+ diva_os_leave_spin_lock(&dbg_q_lock, old_irql, "read_busy", *bh);
return NULL;
}
dbg_q_busy = 1;
if (!(pmsg = (diva_dbg_entry_head_t *)queuePeekMsg(dbg_queue, size))) {
dbg_q_busy = 0;
- diva_os_leave_spin_lock(&dbg_q_lock, old_irql, "read_empty");
+ diva_os_leave_spin_lock(&dbg_q_lock, old_irql, "read_empty", *bh);
}
return (pmsg);
@@ -330,7 +331,8 @@ diva_dbg_entry_head_t *diva_maint_get_message(word *size,
acknowledge last message and unlock queue
*/
void diva_maint_ack_message(int do_release,
- diva_os_spin_lock_magic_t *old_irql) {
+ diva_os_spin_lock_magic_t *old_irql,
+ unsigned int bh) {
if (!dbg_q_busy) {
return;
}
@@ -338,7 +340,7 @@ void diva_maint_ack_message(int do_release,
queueFreeMsg(dbg_queue);
}
dbg_q_busy = 0;
- diva_os_leave_spin_lock(&dbg_q_lock, old_irql, "read_ack");
+ diva_os_leave_spin_lock(&dbg_q_lock, old_irql, "read_ack", bh);
}
@@ -369,6 +371,7 @@ void diva_maint_prtComp(char *format, ...) {
}
static void DI_register(void *arg) {
+ unsigned int bh;
diva_os_spin_lock_magic_t old_irql;
dword sec, usec;
pDbgHandle hDbg;
@@ -386,14 +389,15 @@ static void DI_register(void *arg) {
return;
}
- diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "register");
+ bh = diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "register");
for (id = 1; id < ARRAY_SIZE(clients); id++) {
if (clients[id].hDbg == hDbg) {
/*
driver already registered
*/
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql,
+ "register", bh);
return;
}
if (clients[id].hDbg) { /* slot is busy */
@@ -476,7 +480,7 @@ static void DI_register(void *arg) {
}
}
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register", bh);
}
static void DI_deregister(pDbgHandle hDbg) {
@@ -485,11 +489,12 @@ static void DI_deregister(pDbgHandle hDbg) {
int i;
word size;
byte *pmem = NULL;
+ unsigned int bh, bh2;
diva_os_get_time(&sec, &usec);
- diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "read");
- diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "read");
+ bh = diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "read");
+ bh2 = diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "read");
for (i = 1; i < ARRAY_SIZE(clients); i++) {
if (clients[i].hDbg == hDbg) {
@@ -551,8 +556,8 @@ static void DI_deregister(pDbgHandle hDbg) {
}
}
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "read_ack");
- diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "read_ack");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "read_ack", bh2);
+ diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "read_ack", bh);
if (pmem) {
diva_os_free(0, pmem);
@@ -571,6 +576,7 @@ static void DI_format(int do_lock,
int type,
char *format,
va_list ap) {
+ unsigned int bh;
diva_os_spin_lock_magic_t old_irql;
dword sec, usec;
diva_dbg_entry_head_t *pmsg = NULL;
@@ -595,7 +601,7 @@ static void DI_format(int do_lock,
diva_os_get_time(&sec, &usec);
if (do_lock) {
- diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "format");
+ bh = diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "format");
}
switch (type) {
@@ -720,7 +726,7 @@ static void DI_format(int do_lock,
}
if (do_lock) {
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "format");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "format", bh);
}
}
@@ -728,6 +734,7 @@ static void DI_format(int do_lock,
Write driver ID and driver revision to callers buffer
*/
int diva_get_driver_info(dword id, byte *data, int data_length) {
+ unsigned int bh;
diva_os_spin_lock_magic_t old_irql;
byte *p = data;
int to_copy;
@@ -737,7 +744,7 @@ int diva_get_driver_info(dword id, byte *data, int data_length) {
return (-1);
}
- diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "driver info");
+ bh = diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "driver info");
if (clients[id].hDbg) {
*p++ = 1;
@@ -774,19 +781,20 @@ int diva_get_driver_info(dword id, byte *data, int data_length) {
}
*p++ = 0;
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "driver info");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "driver info", bh);
return (p - data);
}
int diva_get_driver_dbg_mask(dword id, byte *data) {
+ unsigned int bh;
diva_os_spin_lock_magic_t old_irql;
int ret = -1;
if (!data || !id || (id >= ARRAY_SIZE(clients))) {
return (-1);
}
- diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "driver info");
+ bh = diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "driver info");
if (clients[id].hDbg) {
ret = 4;
@@ -796,12 +804,13 @@ int diva_get_driver_dbg_mask(dword id, byte *data) {
*data++ = (byte)(clients[id].hDbg->dbgMask >> 24);
}
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "driver info");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "driver info", bh);
return (ret);
}
int diva_set_driver_dbg_mask(dword id, dword mask) {
+ unsigned int bh, bh2;
diva_os_spin_lock_magic_t old_irql, old_irql1;
int ret = -1;
@@ -810,8 +819,9 @@ int diva_set_driver_dbg_mask(dword id, dword mask) {
return (-1);
}
- diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "dbg mask");
- diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "dbg mask");
+ bh = diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1,
+ "dbg mask");
+ bh2 = diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "dbg mask");
if (clients[id].hDbg) {
dword old_mask = clients[id].hDbg->dbgMask;
@@ -823,14 +833,14 @@ int diva_set_driver_dbg_mask(dword id, dword mask) {
}
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "dbg mask");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "dbg mask", bh2);
if (clients[id].request_pending) {
clients[id].request_pending = 0;
(*(clients[id].request))((ENTITY *)(*(clients[id].pIdiLib->DivaSTraceGetHandle))(clients[id].pIdiLib->hLib));
}
- diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "dbg mask");
+ diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "dbg mask", bh);
return (ret);
}
@@ -856,6 +866,7 @@ static int diva_get_idi_adapter_info(IDI_CALL request, dword *serial, dword *log
Register XDI adapter as MAINT compatible driver
*/
void diva_mnt_add_xdi_adapter(const DESCRIPTOR *d) {
+ unsigned int bh, bh2;
diva_os_spin_lock_magic_t old_irql, old_irql1;
dword sec, usec, logical, serial, org_mask;
int id, free_id = -1;
@@ -881,13 +892,15 @@ void diva_mnt_add_xdi_adapter(const DESCRIPTOR *d) {
}
memset(pmem, 0x00, DivaSTraceGetMemotyRequirement(d->channels));
- diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "register");
- diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "register");
+ bh = diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1,
+ "register");
+ bh2 = diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "register");
for (id = 1; id < ARRAY_SIZE(clients); id++) {
if (clients[id].hDbg && (clients[id].request == d->request)) {
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register");
- diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "register");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register", bh2);
+ diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1,
+ "register", bh);
diva_os_free(0, pmem);
return;
}
@@ -908,8 +921,9 @@ void diva_mnt_add_xdi_adapter(const DESCRIPTOR *d) {
}
if (free_id < 0) {
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register");
- diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "register");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register", bh2);
+ diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1,
+ "register", bh);
diva_os_free(0, pmem);
return;
}
@@ -967,8 +981,9 @@ void diva_mnt_add_xdi_adapter(const DESCRIPTOR *d) {
clients[id].request = NULL;
clients[id].request_pending = 0;
clients[id].hDbg = NULL;
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register");
- diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "register");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register", bh2);
+ diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1,
+ "register", bh);
diva_os_free(0, pmem);
return;
}
@@ -1006,14 +1021,14 @@ void diva_mnt_add_xdi_adapter(const DESCRIPTOR *d) {
org_mask = clients[id].Dbg.dbgMask;
clients[id].Dbg.dbgMask = 0;
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register", bh2);
if (clients[id].request_pending) {
clients[id].request_pending = 0;
(*(clients[id].request))((ENTITY *)(*(clients[id].pIdiLib->DivaSTraceGetHandle))(clients[id].pIdiLib->hLib));
}
- diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "register");
+ diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "register", bh);
diva_set_driver_dbg_mask(id, org_mask);
}
@@ -1027,11 +1042,12 @@ void diva_mnt_remove_xdi_adapter(const DESCRIPTOR *d) {
int i;
word size;
byte *pmem = NULL;
+ unsigned int bh, bh2;
diva_os_get_time(&sec, &usec);
- diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "read");
- diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "read");
+ bh = diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "read");
+ bh2 = diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "read");
for (i = 1; i < ARRAY_SIZE(clients); i++) {
if (clients[i].hDbg && (clients[i].request == d->request)) {
@@ -1094,8 +1110,8 @@ void diva_mnt_remove_xdi_adapter(const DESCRIPTOR *d) {
}
}
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "read_ack");
- diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "read_ack");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "read_ack", bh2);
+ diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "read_ack", bh);
if (pmem) {
diva_os_free(0, pmem);
@@ -1355,13 +1371,14 @@ static void single_p(byte *P, word *PLength, byte Id) {
}
static void diva_maint_xdi_cb(ENTITY *e) {
+ unsigned int bh, bh2;
diva_strace_context_t *pLib = DIVAS_CONTAINING_RECORD(e, diva_strace_context_t, e);
diva_maint_client_t *pC;
diva_os_spin_lock_magic_t old_irql, old_irql1;
- diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "xdi_cb");
- diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "xdi_cb");
+ bh = diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "xdi_cb");
+ bh2 = diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "xdi_cb");
pC = (diva_maint_client_t *)pLib->hAdapter;
@@ -1378,7 +1395,7 @@ static void diva_maint_xdi_cb(ENTITY *e) {
}
}
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "xdi_cb");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "xdi_cb", bh2);
if (pC->request_pending) {
@@ -1386,7 +1403,7 @@ static void diva_maint_xdi_cb(ENTITY *e) {
(*(pC->request))(e);
}
- diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "xdi_cb");
+ diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "xdi_cb", bh);
}
@@ -1916,6 +1933,7 @@ void diva_mnt_internal_dprintf(dword drv_id, dword type, char *fmt, ...) {
Shutdown all adapters before driver removal
*/
int diva_mnt_shutdown_xdi_adapters(void) {
+ unsigned int bh, bh2;
diva_os_spin_lock_magic_t old_irql, old_irql1;
int i, fret = 0;
byte *pmem;
@@ -1924,8 +1942,9 @@ int diva_mnt_shutdown_xdi_adapters(void) {
for (i = 1; i < ARRAY_SIZE(clients); i++) {
pmem = NULL;
- diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "unload");
- diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "unload");
+ bh = diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1,
+ "unload");
+ bh2 = diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "unload");
if (clients[i].hDbg && clients[i].pIdiLib && clients[i].request) {
if ((*(clients[i].pIdiLib->DivaSTraceLibraryStop))(clients[i].pIdiLib) == 1) {
@@ -1955,7 +1974,7 @@ int diva_mnt_shutdown_xdi_adapters(void) {
}
}
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "unload");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "unload", bh2);
if (clients[i].hDbg && clients[i].pIdiLib && clients[i].request && clients[i].request_pending) {
clients[i].request_pending = 0;
(*(clients[i].request))((ENTITY *)(*(clients[i].pIdiLib->DivaSTraceGetHandle))(clients[i].pIdiLib->hLib));
@@ -1964,7 +1983,8 @@ int diva_mnt_shutdown_xdi_adapters(void) {
clients[i].dma_handle = -1;
}
}
- diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "unload");
+ diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1,
+ "unload", bh);
if (pmem) {
diva_os_free(0, pmem);
@@ -1979,11 +1999,13 @@ int diva_mnt_shutdown_xdi_adapters(void) {
Affects B- and Audio Tap trace mask at run time
*/
int diva_set_trace_filter(int filter_length, const char *filter) {
+ unsigned int bh, bh2;
diva_os_spin_lock_magic_t old_irql, old_irql1;
int i, ch, on, client_b_on, client_atap_on;
- diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "dbg mask");
- diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "write_filter");
+ bh = diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1,
+ "dbg mask");
+ bh2 = diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "write_filter");
if (filter_length <= DIVA_MAX_SELECTIVE_FILTER_LENGTH) {
memcpy(&TraceFilter[0], filter, filter_length);
@@ -2015,29 +2037,30 @@ int diva_set_trace_filter(int filter_length, const char *filter) {
for (i = 1; i < ARRAY_SIZE(clients); i++) {
if (clients[i].hDbg && clients[i].pIdiLib && clients[i].request && clients[i].request_pending) {
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "write_filter");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "write_filter", bh2);
clients[i].request_pending = 0;
(*(clients[i].request))((ENTITY *)(*(clients[i].pIdiLib->DivaSTraceGetHandle))(clients[i].pIdiLib->hLib));
- diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "write_filter");
+ bh2 = diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "write_filter");
}
}
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "write_filter");
- diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "dbg mask");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "write_filter", bh2);
+ diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "dbg mask", bh);
return (filter_length);
}
int diva_get_trace_filter(int max_length, char *filter) {
+ unsigned int bh;
diva_os_spin_lock_magic_t old_irql;
int len;
- diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "read_filter");
+ bh = diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "read_filter");
len = strlen(&TraceFilter[0]) + 1;
if (max_length >= len) {
memcpy(filter, &TraceFilter[0], len);
}
- diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "read_filter");
+ diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "read_filter", bh);
return (len);
}
diff --git a/drivers/isdn/hardware/eicon/debug_if.h b/drivers/isdn/hardware/eicon/debug_if.h
index fc5953a..baead0f 100644
--- a/drivers/isdn/hardware/eicon/debug_if.h
+++ b/drivers/isdn/hardware/eicon/debug_if.h
@@ -45,9 +45,11 @@ int diva_maint_init(byte *base, unsigned long length, int do_init);
void *diva_maint_finit(void);
dword diva_dbg_q_length(void);
diva_dbg_entry_head_t *diva_maint_get_message(word *size,
- diva_os_spin_lock_magic_t *old_irql);
+ diva_os_spin_lock_magic_t *old_irql,
+ unsigned int *bh);
void diva_maint_ack_message(int do_release,
- diva_os_spin_lock_magic_t *old_irql);
+ diva_os_spin_lock_magic_t *old_irql,
+ unsigned int bh);
void diva_maint_prtComp(char *format, ...);
void diva_maint_wakeup_read(void);
int diva_get_driver_info(dword id, byte *data, int data_length);
diff --git a/drivers/isdn/hardware/eicon/diva.c b/drivers/isdn/hardware/eicon/diva.c
index 1b25d8b..92e11ea 100644
--- a/drivers/isdn/hardware/eicon/diva.c
+++ b/drivers/isdn/hardware/eicon/diva.c
@@ -166,6 +166,7 @@ static diva_os_xdi_adapter_t *diva_q_get_next(struct list_head *what)
-------------------------------------------------------------------------- */
void *diva_driver_add_card(void *pdev, unsigned long CardOrdinal)
{
+ unsigned int bh;
diva_os_spin_lock_magic_t old_irql;
diva_os_xdi_adapter_t *pdiva, *pa;
int i, j, max, nr;
@@ -189,14 +190,14 @@ void *diva_driver_add_card(void *pdev, unsigned long CardOrdinal)
nr = 1;
}
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "add card");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql, "add card");
for (i = 0; i < max; i++) {
if (!diva_find_free_adapters(i, nr)) {
pdiva->controller = i + 1;
pdiva->xdi_adapter.ANum = pdiva->controller;
IoAdapters[i] = &pdiva->xdi_adapter;
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add card");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add card", bh);
create_adapter_proc(pdiva); /* add adapter to proc file system */
DBG_LOG(("add %s:%d",
@@ -204,7 +205,7 @@ void *diva_driver_add_card(void *pdev, unsigned long CardOrdinal)
[CardOrdinal].Name,
pdiva->controller))
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "add card");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql, "add card");
pa = pdiva;
for (j = 1; j < nr; j++) { /* slave adapters, if any */
pa = diva_q_get_next(&pa->link);
@@ -212,23 +213,23 @@ void *diva_driver_add_card(void *pdev, unsigned long CardOrdinal)
pa->controller = i + 1 + j;
pa->xdi_adapter.ANum = pa->controller;
IoAdapters[i + j] = &pa->xdi_adapter;
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add card");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add card", bh);
DBG_LOG(("add slave adapter (%d)",
pa->controller))
create_adapter_proc(pa); /* add adapter to proc file system */
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "add card");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql, "add card");
} else {
DBG_ERR(("slave adapter problem"))
break;
}
}
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add card");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add card", bh);
return (pdiva);
}
}
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add card");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add card", bh);
/*
Not able to add adapter - remove it and return error
@@ -260,17 +261,19 @@ int divasa_xdi_driver_entry(void)
-------------------------------------------------------------------------- */
static diva_os_xdi_adapter_t *get_and_remove_from_queue(void)
{
+ unsigned int bh;
diva_os_spin_lock_magic_t old_irql;
diva_os_xdi_adapter_t *a = NULL;
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "driver_unload");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql,
+ "driver_unload");
if (!list_empty(&adapter_queue)) {
a = list_entry(adapter_queue.next, diva_os_xdi_adapter_t, link);
list_del(adapter_queue.next);
}
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "driver_unload");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "driver_unload", bh);
return (a);
}
@@ -282,12 +285,13 @@ void diva_driver_remove_card(void *pdiva)
diva_os_spin_lock_magic_t old_irql;
diva_os_xdi_adapter_t *a[4];
diva_os_xdi_adapter_t *pa;
+ unsigned int bh;
int i;
pa = a[0] = (diva_os_xdi_adapter_t *) pdiva;
a[1] = a[2] = a[3] = NULL;
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "remode adapter");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql, "remode adapter");
for (i = 1; i < 4; i++) {
if ((pa = diva_q_get_next(&pa->link))
@@ -302,7 +306,7 @@ void diva_driver_remove_card(void *pdiva)
list_del(&a[i]->link);
}
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "driver_unload");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "driver_unload", bh);
(*(a[0]->interface.cleanup_adapter_proc)) (a[0]);
@@ -326,6 +330,7 @@ static void *divas_create_pci_card(int handle, void *pci_dev_handle)
diva_supported_cards_info_t *pI = &divas_supported_cards[handle];
diva_os_spin_lock_magic_t old_irql;
diva_os_xdi_adapter_t *a;
+ unsigned int bh;
DBG_LOG(("found %d-%s", pI->CardOrdinal, CardProperties[pI->CardOrdinal].Name))
@@ -348,14 +353,14 @@ static void *divas_create_pci_card(int handle, void *pci_dev_handle)
Add master adapter first, so slave adapters will receive higher
numbers as master adapter
*/
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "found_pci_card");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql, "found_pci_card");
list_add_tail(&a->link, &adapter_queue);
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "found_pci_card");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "found_pci_card", bh);
if ((*(pI->init_card)) (a)) {
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "found_pci_card");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql, "found_pci_card");
list_del(&a->link);
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "found_pci_card");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "found_pci_card", bh);
diva_os_free(0, a);
DBG_ERR(("A: can't get adapter resources"));
return NULL;
@@ -391,6 +396,7 @@ void *diva_xdi_open_adapter(void *os_handle, const void __user *src,
int length, void *mptr,
divas_xdi_copy_from_user_fn_t cp_fn)
{
+ unsigned int bh;
diva_xdi_um_cfg_cmd_t *msg = (diva_xdi_um_cfg_cmd_t *)mptr;
diva_os_xdi_adapter_t *a = NULL;
diva_os_spin_lock_magic_t old_irql;
@@ -405,14 +411,14 @@ void *diva_xdi_open_adapter(void *os_handle, const void __user *src,
DBG_ERR(("A: A(?) open, write error"))
return NULL;
}
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "open_adapter");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql, "open_adapter");
list_for_each(tmp, &adapter_queue) {
a = list_entry(tmp, diva_os_xdi_adapter_t, link);
if (a->controller == (int)msg->adapter)
break;
a = NULL;
}
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "open_adapter");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "open_adapter", bh);
if (!a) {
DBG_ERR(("A: A(%d) open, adapter not found", msg->adapter))
@@ -614,11 +620,12 @@ void diva_xdi_display_adapter_features(int card)
void diva_add_slave_adapter(diva_os_xdi_adapter_t *a)
{
+ unsigned int bh;
diva_os_spin_lock_magic_t old_irql;
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "add_slave");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql, "add_slave");
list_add_tail(&a->link, &adapter_queue);
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add_slave");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add_slave", bh);
}
int diva_card_read_xlog(diva_os_xdi_adapter_t *a)
diff --git a/drivers/isdn/hardware/eicon/idifunc.c b/drivers/isdn/hardware/eicon/idifunc.c
index fef6586..1c489c3 100644
--- a/drivers/isdn/hardware/eicon/idifunc.c
+++ b/drivers/isdn/hardware/eicon/idifunc.c
@@ -60,20 +60,21 @@ static diva_os_spin_lock_t ll_lock;
*/
static udiva_card *find_card_in_list(DESCRIPTOR *d)
{
+ unsigned int bh;
udiva_card *card;
struct list_head *tmp;
diva_os_spin_lock_magic_t old_irql;
- diva_os_enter_spin_lock(&ll_lock, &old_irql, "find card");
+ bh = diva_os_enter_spin_lock(&ll_lock, &old_irql, "find card");
list_for_each(tmp, &cards) {
card = list_entry(tmp, udiva_card, list);
if (card->d.request == d->request) {
diva_os_leave_spin_lock(&ll_lock, &old_irql,
- "find card");
+ "find card", bh);
return (card);
}
}
- diva_os_leave_spin_lock(&ll_lock, &old_irql, "find card");
+ diva_os_leave_spin_lock(&ll_lock, &old_irql, "find card", bh);
return ((udiva_card *) NULL);
}
@@ -82,6 +83,7 @@ static udiva_card *find_card_in_list(DESCRIPTOR *d)
*/
static void um_new_card(DESCRIPTOR *d)
{
+ unsigned int bh;
int adapter_nr = 0;
udiva_card *card = NULL;
IDI_SYNC_REQ sync_req;
@@ -100,9 +102,9 @@ static void um_new_card(DESCRIPTOR *d)
sync_req.xdi_logical_adapter_number.info.logical_adapter_number;
card->Id = adapter_nr;
if (!(diva_user_mode_idi_create_adapter(d, adapter_nr))) {
- diva_os_enter_spin_lock(&ll_lock, &old_irql, "add card");
+ bh = diva_os_enter_spin_lock(&ll_lock, &old_irql, "add card");
list_add_tail(&card->list, &cards);
- diva_os_leave_spin_lock(&ll_lock, &old_irql, "add card");
+ diva_os_leave_spin_lock(&ll_lock, &old_irql, "add card", bh);
} else {
DBG_ERR(("could not create user mode idi card %d",
adapter_nr));
@@ -115,6 +117,7 @@ static void um_new_card(DESCRIPTOR *d)
*/
static void um_remove_card(DESCRIPTOR *d)
{
+ unsigned int bh;
diva_os_spin_lock_magic_t old_irql;
udiva_card *card = NULL;
@@ -123,9 +126,9 @@ static void um_remove_card(DESCRIPTOR *d)
return;
}
diva_user_mode_idi_remove_adapter(card->Id);
- diva_os_enter_spin_lock(&ll_lock, &old_irql, "remove card");
+ bh = diva_os_enter_spin_lock(&ll_lock, &old_irql, "remove card");
list_del(&card->list);
- diva_os_leave_spin_lock(&ll_lock, &old_irql, "remove card");
+ diva_os_leave_spin_lock(&ll_lock, &old_irql, "remove card", bh);
DBG_LOG(("idi proc entry removed for card %d", card->Id));
diva_os_free(0, card);
}
@@ -135,11 +138,12 @@ static void um_remove_card(DESCRIPTOR *d)
*/
static void __exit remove_all_idi_proc(void)
{
+ unsigned int bh;
udiva_card *card;
diva_os_spin_lock_magic_t old_irql;
rescan:
- diva_os_enter_spin_lock(&ll_lock, &old_irql, "remove all");
+ bh = diva_os_enter_spin_lock(&ll_lock, &old_irql, "remove all");
if (!list_empty(&cards)) {
card = list_entry(cards.next, udiva_card, list);
list_del(&card->list);
@@ -148,7 +152,7 @@ static void __exit remove_all_idi_proc(void)
diva_os_free(0, card);
goto rescan;
}
- diva_os_leave_spin_lock(&ll_lock, &old_irql, "remove all");
+ diva_os_leave_spin_lock(&ll_lock, &old_irql, "remove all", bh);
}
/*
diff --git a/drivers/isdn/hardware/eicon/io.c b/drivers/isdn/hardware/eicon/io.c
index 8851ce5..df3d1f8 100644
--- a/drivers/isdn/hardware/eicon/io.c
+++ b/drivers/isdn/hardware/eicon/io.c
@@ -202,6 +202,7 @@ dump_trap_frame(PISDN_ADAPTER IoAdapter, byte __iomem *exceptionFrame)
-------------------------------------------------------------------------- */
void request(PISDN_ADAPTER IoAdapter, ENTITY *e)
{
+ unsigned int bh;
byte i;
diva_os_spin_lock_magic_t irql;
/*
@@ -222,7 +223,7 @@ void request(PISDN_ADAPTER IoAdapter, ENTITY *e)
pI->descriptor_number = -1;
return;
}
- diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql, "dma_op");
+ bh = diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql, "dma_op");
if (pI->operation == IDI_SYNC_REQ_DMA_DESCRIPTOR_ALLOC) {
pI->descriptor_number = diva_alloc_dma_map_entry(\
(struct _diva_dma_map_entry *)IoAdapter->dma_map);
@@ -249,7 +250,7 @@ void request(PISDN_ADAPTER IoAdapter, ENTITY *e)
pI->descriptor_number = -1;
pI->operation = -1;
}
- diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "dma_op");
+ diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "dma_op", bh);
} return;
#endif
case IDI_SYNC_REQ_XDI_GET_LOGICAL_ADAPTER_NUMBER: {
@@ -373,7 +374,8 @@ void request(PISDN_ADAPTER IoAdapter, ENTITY *e)
DBG_FTL(("xdi: uninitialized Adapter used - ignore request"))
return;
}
- diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req");
+ bh = diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql,
+ "data_req");
/*
* assign an entity
*/
@@ -383,7 +385,8 @@ void request(PISDN_ADAPTER IoAdapter, ENTITY *e)
{
DBG_FTL(("xdi: all Ids in use (max=%d) --> Req ignored",
IoAdapter->e_max))
- diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req");
+ diva_os_leave_spin_lock(&IoAdapter->data_spin_lock,
+ &irql, "data_req", bh);
return;
}
/*
@@ -416,7 +419,8 @@ void request(PISDN_ADAPTER IoAdapter, ENTITY *e)
(*(IoAdapter->os_trap_nfy_Fnc))(IoAdapter, IoAdapter->ANum);
}
}
- diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req");
+ diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql,
+ "data_req", bh);
return;
}
/*
@@ -444,12 +448,14 @@ void request(PISDN_ADAPTER IoAdapter, ENTITY *e)
* queue the DPC to process the request
*/
diva_os_schedule_soft_isr(&IoAdapter->req_soft_isr);
- diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req");
+ diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req",
+ bh);
}
/* ---------------------------------------------------------------------
Main DPC routine
--------------------------------------------------------------------- */
void DIDpcRoutine(struct _diva_os_soft_isr *psoft_isr, void *Context) {
+ unsigned int bh;
PISDN_ADAPTER IoAdapter = (PISDN_ADAPTER)Context;
ADAPTER *a = &IoAdapter->a;
diva_os_atomic_t *pin_dpc = &IoAdapter->in_dpc;
@@ -469,9 +475,8 @@ void DIDpcRoutine(struct _diva_os_soft_isr *psoft_isr, void *Context) {
if (IoAdapter->pcm_pending) {
struct pc_maint *pcm;
diva_os_spin_lock_magic_t OldIrql;
- diva_os_enter_spin_lock(&IoAdapter->data_spin_lock,
- &OldIrql,
- "data_dpc");
+ bh = diva_os_enter_spin_lock(&IoAdapter->data_spin_lock,
+ &OldIrql, "data_dpc");
pcm = (struct pc_maint *)IoAdapter->pcm_data;
switch (IoAdapter->pcm_pending) {
case 1: /* ask card for XLOG */
@@ -489,8 +494,7 @@ void DIDpcRoutine(struct _diva_os_soft_isr *psoft_isr, void *Context) {
break;
}
diva_os_leave_spin_lock(&IoAdapter->data_spin_lock,
- &OldIrql,
- "data_dpc");
+ &OldIrql, "data_dpc", bh);
}
/* ---------------------------------------------------------------- */
}
@@ -501,6 +505,7 @@ void DIDpcRoutine(struct _diva_os_soft_isr *psoft_isr, void *Context) {
static void
pcm_req(PISDN_ADAPTER IoAdapter, ENTITY *e)
{
+ unsigned int bh;
diva_os_spin_lock_magic_t OldIrql;
int i, rc;
ADAPTER *a = &IoAdapter->a;
@@ -511,45 +516,43 @@ pcm_req(PISDN_ADAPTER IoAdapter, ENTITY *e)
*/
if (IoAdapter->Properties.Card == CARD_MAE)
{
- diva_os_enter_spin_lock(&IoAdapter->data_spin_lock,
- &OldIrql,
- "data_pcm_1");
+ bh = diva_os_enter_spin_lock(&IoAdapter->data_spin_lock,
+ &OldIrql, "data_pcm_1");
IoAdapter->pcm_data = (void *)pcm;
IoAdapter->pcm_pending = 1;
diva_os_schedule_soft_isr(&IoAdapter->req_soft_isr);
- diva_os_leave_spin_lock(&IoAdapter->data_spin_lock,
- &OldIrql,
- "data_pcm_1");
+ diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &OldIrql,
+ "data_pcm_1", bh);
for (rc = 0, i = (IoAdapter->trapped ? 3000 : 250); !rc && (i > 0); --i)
{
diva_os_sleep(1);
if (IoAdapter->pcm_pending == 3) {
- diva_os_enter_spin_lock(&IoAdapter->data_spin_lock,
+ bh = diva_os_enter_spin_lock(&IoAdapter->data_spin_lock,
&OldIrql,
"data_pcm_3");
IoAdapter->pcm_pending = 0;
IoAdapter->pcm_data = NULL;
diva_os_leave_spin_lock(&IoAdapter->data_spin_lock,
&OldIrql,
- "data_pcm_3");
+ "data_pcm_3", bh);
return;
}
- diva_os_enter_spin_lock(&IoAdapter->data_spin_lock,
+ bh = diva_os_enter_spin_lock(&IoAdapter->data_spin_lock,
&OldIrql,
"data_pcm_2");
diva_os_schedule_soft_isr(&IoAdapter->req_soft_isr);
diva_os_leave_spin_lock(&IoAdapter->data_spin_lock,
&OldIrql,
- "data_pcm_2");
+ "data_pcm_2", bh);
}
- diva_os_enter_spin_lock(&IoAdapter->data_spin_lock,
+ bh = diva_os_enter_spin_lock(&IoAdapter->data_spin_lock,
&OldIrql,
"data_pcm_4");
IoAdapter->pcm_pending = 0;
IoAdapter->pcm_data = NULL;
diva_os_leave_spin_lock(&IoAdapter->data_spin_lock,
&OldIrql,
- "data_pcm_4");
+ "data_pcm_4", bh);
goto Trapped;
}
/*
@@ -755,48 +758,55 @@ void io_inc(ADAPTER *a, void *adr)
/*------------------------------------------------------------------*/
void free_entity(ADAPTER *a, byte e_no)
{
+ unsigned int bh;
PISDN_ADAPTER IoAdapter;
diva_os_spin_lock_magic_t irql;
IoAdapter = (PISDN_ADAPTER) a->io;
- diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_free");
+ bh = diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql,
+ "data_free");
IoAdapter->e_tbl[e_no].e = NULL;
IoAdapter->e_count--;
- diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_free");
+ diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql,
+ "data_free", bh);
}
void assign_queue(ADAPTER *a, byte e_no, word ref)
{
+ unsigned int bh;
PISDN_ADAPTER IoAdapter;
diva_os_spin_lock_magic_t irql;
IoAdapter = (PISDN_ADAPTER) a->io;
- diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_assign");
+ bh = diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql,
+ "data_assign");
IoAdapter->e_tbl[e_no].assign_ref = ref;
IoAdapter->e_tbl[e_no].next = (byte)IoAdapter->assign;
IoAdapter->assign = e_no;
- diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_assign");
+ diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql,
+ "data_assign", bh);
}
byte get_assign(ADAPTER *a, word ref)
{
+ unsigned int bh;
PISDN_ADAPTER IoAdapter;
diva_os_spin_lock_magic_t irql;
byte e_no;
IoAdapter = (PISDN_ADAPTER) a->io;
- diva_os_enter_spin_lock(&IoAdapter->data_spin_lock,
- &irql,
- "data_assign_get");
+ bh = diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql,
+ "data_assign_get");
for (e_no = (byte)IoAdapter->assign;
e_no && IoAdapter->e_tbl[e_no].assign_ref != ref;
e_no = IoAdapter->e_tbl[e_no].next);
- diva_os_leave_spin_lock(&IoAdapter->data_spin_lock,
- &irql,
- "data_assign_get");
+ diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql,
+ "data_assign_get", bh);
return e_no;
}
void req_queue(ADAPTER *a, byte e_no)
{
+ unsigned int bh;
PISDN_ADAPTER IoAdapter;
diva_os_spin_lock_magic_t irql;
IoAdapter = (PISDN_ADAPTER) a->io;
- diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req_q");
+ bh = diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql,
+ "data_req_q");
IoAdapter->e_tbl[e_no].next = 0;
if (IoAdapter->head) {
IoAdapter->e_tbl[IoAdapter->tail].next = e_no;
@@ -806,7 +816,8 @@ void req_queue(ADAPTER *a, byte e_no)
IoAdapter->head = e_no;
IoAdapter->tail = e_no;
}
- diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req_q");
+ diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql,
+ "data_req_q", bh);
}
byte look_req(ADAPTER *a)
{
@@ -816,13 +827,16 @@ byte look_req(ADAPTER *a)
}
void next_req(ADAPTER *a)
{
+ unsigned int bh;
PISDN_ADAPTER IoAdapter;
diva_os_spin_lock_magic_t irql;
IoAdapter = (PISDN_ADAPTER) a->io;
- diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req_next");
+ bh = diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql,
+ "data_req_next");
IoAdapter->head = IoAdapter->e_tbl[IoAdapter->head].next;
if (!IoAdapter->head) IoAdapter->tail = 0;
- diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req_next");
+ diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql,
+ "data_req_next", bh);
}
/*------------------------------------------------------------------*/
/* memory map functions */
diff --git a/drivers/isdn/hardware/eicon/mntfunc.c b/drivers/isdn/hardware/eicon/mntfunc.c
index 1cd9aff..b7a87ec 100644
--- a/drivers/isdn/hardware/eicon/mntfunc.c
+++ b/drivers/isdn/hardware/eicon/mntfunc.c
@@ -137,6 +137,7 @@ int maint_read_write(void __user *buf, int count)
{
byte data[128];
dword cmd, id, mask;
+ unsigned int bh;
int ret = 0;
if (count < (3 * sizeof(dword)))
@@ -218,17 +219,17 @@ int maint_read_write(void __user *buf, int count)
for (;;) {
if (!(pmsg =
- diva_maint_get_message(&size, &old_irql))) {
+ diva_maint_get_message(&size, &old_irql, &bh))) {
break;
}
if (size > mask) {
- diva_maint_ack_message(0, &old_irql);
+ diva_maint_ack_message(0, &old_irql, bh);
ret = -EINVAL;
break;
}
ret = size;
memcpy(pbuf, pmsg, size);
- diva_maint_ack_message(1, &old_irql);
+ diva_maint_ack_message(1, &old_irql, bh);
if ((count < size) ||
diva_os_copy_to_user(NULL, buf, (void *) pbuf, size))
ret = -EFAULT;
@@ -255,11 +256,11 @@ int maint_read_write(void __user *buf, int count)
for (;;) {
if (!(pmsg =
- diva_maint_get_message(&size, &old_irql))) {
+ diva_maint_get_message(&size, &old_irql, &bh))) {
break;
}
if ((size + 8) > mask) {
- diva_maint_ack_message(0, &old_irql);
+ diva_maint_ack_message(0, &old_irql, bh);
break;
}
/*
@@ -273,7 +274,7 @@ int maint_read_write(void __user *buf, int count)
Write message
*/
memcpy(&pbuf[written], pmsg, size);
- diva_maint_ack_message(1, &old_irql);
+ diva_maint_ack_message(1, &old_irql, bh);
written += size;
mask -= (size + 4);
}
diff --git a/drivers/isdn/hardware/eicon/platform.h b/drivers/isdn/hardware/eicon/platform.h
index 62e2073..345c901 100644
--- a/drivers/isdn/hardware/eicon/platform.h
+++ b/drivers/isdn/hardware/eicon/platform.h
@@ -235,12 +235,13 @@ typedef long diva_os_spin_lock_magic_t;
typedef spinlock_t diva_os_spin_lock_t;
static __inline__ int diva_os_initialize_spin_lock(spinlock_t *lock, void *unused) { \
spin_lock_init(lock); return (0); }
-static __inline__ void diva_os_enter_spin_lock(diva_os_spin_lock_t *a, \
- diva_os_spin_lock_magic_t *old_irql, \
- void *dbg) { spin_lock_bh(a); }
+static __inline__ unsigned int diva_os_enter_spin_lock(diva_os_spin_lock_t *a, \
+ diva_os_spin_lock_magic_t *old_irql, \
+ void *dbg) { spin_lock_bh(a); }
static __inline__ void diva_os_leave_spin_lock(diva_os_spin_lock_t *a, \
diva_os_spin_lock_magic_t *old_irql, \
- void *dbg) { spin_unlock_bh(a); }
+ void *dbg,
+ unsigned int bh) { spin_unlock_bh(a); }
#define diva_os_destroy_spin_lock(a, b) do { } while (0)
diff --git a/drivers/isdn/hardware/eicon/um_idi.c b/drivers/isdn/hardware/eicon/um_idi.c
index db4dd4f..138be33 100644
--- a/drivers/isdn/hardware/eicon/um_idi.c
+++ b/drivers/isdn/hardware/eicon/um_idi.c
@@ -121,6 +121,7 @@ void diva_user_mode_idi_finit(void)
------------------------------------------------------------------------- */
int diva_user_mode_idi_create_adapter(const DESCRIPTOR *d, int adapter_nr)
{
+ unsigned int bh;
diva_os_spin_lock_magic_t old_irql;
diva_um_idi_adapter_t *a =
(diva_um_idi_adapter_t *) diva_os_malloc(0,
@@ -139,9 +140,11 @@ int diva_user_mode_idi_create_adapter(const DESCRIPTOR *d, int adapter_nr)
DBG_LOG(("DIDD_ADD A(%d), type:%02x, features:%04x, channels:%d",
adapter_nr, a->d.type, a->d.features, a->d.channels));
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "create_adapter");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql,
+ "create_adapter");
list_add_tail(&a->link, &adapter_q);
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "create_adapter");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "create_adapter",
+ bh);
return (0);
}
@@ -208,6 +211,7 @@ static void cleanup_entity(divas_um_idi_entity_t *e)
------------------------------------------------------------------------ */
void *divas_um_idi_create_entity(dword adapter_nr, void *file)
{
+ unsigned int bh;
divas_um_idi_entity_t *e;
diva_um_idi_adapter_t *a;
diva_os_spin_lock_magic_t old_irql;
@@ -235,7 +239,8 @@ void *divas_um_idi_create_entity(dword adapter_nr, void *file)
return NULL;
}
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "create_entity");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql,
+ "create_entity");
/*
Look for Adapter requested
*/
@@ -243,7 +248,8 @@ void *divas_um_idi_create_entity(dword adapter_nr, void *file)
/*
No adapter was found, or this adapter was removed
*/
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "create_entity");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql,
+ "create_entity", bh);
DBG_LOG(("A: no adapter(%ld)", adapter_nr));
@@ -259,7 +265,8 @@ void *divas_um_idi_create_entity(dword adapter_nr, void *file)
list_add_tail(&e->link, &a->entity_q); /* link from adapter */
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "create_entity");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql,
+ "create_entity", bh);
DBG_LOG(("A(%ld), create E(%08x)", adapter_nr, e));
}
@@ -272,6 +279,7 @@ void *divas_um_idi_create_entity(dword adapter_nr, void *file)
------------------------------------------------------------------------ */
int divas_um_idi_delete_entity(int adapter_nr, void *entity)
{
+ unsigned int bh;
divas_um_idi_entity_t *e;
diva_um_idi_adapter_t *a;
diva_os_spin_lock_magic_t old_irql;
@@ -279,11 +287,12 @@ int divas_um_idi_delete_entity(int adapter_nr, void *entity)
if (!(e = (divas_um_idi_entity_t *) entity))
return (-1);
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "delete_entity");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql,
+ "delete_entity");
if ((a = e->adapter)) {
list_del(&e->link);
}
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "delete_entity");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "delete_entity", bh);
diva_um_idi_stop_wdog(entity);
cleanup_entity(e);
@@ -304,6 +313,7 @@ int diva_um_idi_read(void *entity,
void *dst,
int max_length, divas_um_idi_copy_to_user_fn_t cp_fn)
{
+ unsigned int bh;
divas_um_idi_entity_t *e;
diva_um_idi_adapter_t *a;
const void *data;
@@ -311,14 +321,14 @@ int diva_um_idi_read(void *entity,
diva_um_idi_data_queue_t *q;
diva_os_spin_lock_magic_t old_irql;
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "read");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql, "read");
e = (divas_um_idi_entity_t *) entity;
if (!e || (!(a = e->adapter)) ||
(e->status & DIVA_UM_IDI_REMOVE_PENDING) ||
(e->status & DIVA_UM_IDI_REMOVED) ||
(a->status & DIVA_UM_IDI_ADAPTER_REMOVED)) {
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "read");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "read", bh);
DBG_ERR(("E(%08x) read failed - adapter removed", e))
return (-1);
}
@@ -354,7 +364,7 @@ int diva_um_idi_read(void *entity,
DBG_ERR(("A: A(%d) E(%08x) read small buffer",
a->adapter_nr, e, ret));
diva_os_leave_spin_lock(&adapter_lock, &old_irql,
- "read");
+ "read", bh);
return (-2);
}
/*
@@ -373,7 +383,7 @@ int diva_um_idi_read(void *entity,
DBG_TRC(("A(%d) E(%08x) read=%d", a->adapter_nr, e, ret));
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "read");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "read", bh);
return (ret);
}
@@ -384,6 +394,7 @@ int diva_um_idi_write(void *entity,
const void *src,
int length, divas_um_idi_copy_from_user_fn_t cp_fn)
{
+ unsigned int bh;
divas_um_idi_entity_t *e;
diva_um_idi_adapter_t *a;
diva_um_idi_req_hdr_t *req;
@@ -391,14 +402,14 @@ int diva_um_idi_write(void *entity,
int ret = 0;
diva_os_spin_lock_magic_t old_irql;
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "write");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql, "write");
e = (divas_um_idi_entity_t *) entity;
if (!e || (!(a = e->adapter)) ||
(e->status & DIVA_UM_IDI_REMOVE_PENDING) ||
(e->status & DIVA_UM_IDI_REMOVED) ||
(a->status & DIVA_UM_IDI_ADAPTER_REMOVED)) {
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write", bh);
DBG_ERR(("E(%08x) write failed - adapter removed", e))
return (-1);
}
@@ -406,13 +417,13 @@ int diva_um_idi_write(void *entity,
DBG_TRC(("A(%d) E(%08x) write(%d)", a->adapter_nr, e, length));
if ((length < sizeof(*req)) || (length > sizeof(e->buffer))) {
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write", bh);
return (-2);
}
if (e->status & DIVA_UM_IDI_RC_PENDING) {
DBG_ERR(("A: A(%d) E(%08x) rc pending", a->adapter_nr, e));
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write", bh);
return (-1); /* should wait for RC code first */
}
@@ -423,7 +434,7 @@ int diva_um_idi_write(void *entity,
if ((ret = (*cp_fn) (os_handle, e->buffer, src, length)) < 0) {
DBG_TRC(("A: A(%d) E(%08x) write error=%d", a->adapter_nr,
e, ret));
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write", bh);
return (ret);
}
@@ -436,9 +447,8 @@ int diva_um_idi_write(void *entity,
diva_data_q_get_segment4write(&e->data))) {
DBG_ERR(("A(%d) get_features, no free buffer",
a->adapter_nr));
- diva_os_leave_spin_lock(&adapter_lock,
- &old_irql,
- "write");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql,
+ "write", bh);
return (0);
}
diva_user_mode_idi_adapter_features(a, &(((diva_um_idi_ind_hdr_t
@@ -449,7 +459,7 @@ int diva_um_idi_write(void *entity,
diva_data_q_ack_segment4write(&e->data,
sizeof(diva_um_idi_ind_hdr_t));
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write", bh);
diva_os_wakeup_read(e->os_context);
}
@@ -464,20 +474,23 @@ int diva_um_idi_write(void *entity,
req->type & DIVA_UM_IDI_REQ_TYPE_MASK));
switch (process_idi_request(e, req)) {
case -1:
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql,
+ "write", bh);
return (-1);
case -2:
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql,
+ "write", bh);
diva_os_wakeup_read(e->os_context);
break;
default:
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql,
+ "write", bh);
break;
}
break;
default:
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write", bh);
return (-1);
}
@@ -491,13 +504,14 @@ int diva_um_idi_write(void *entity,
-------------------------------------------------------------------------- */
static void diva_um_idi_xdi_callback(ENTITY *entity)
{
+ unsigned int bh;
divas_um_idi_entity_t *e = DIVAS_CONTAINING_RECORD(entity,
divas_um_idi_entity_t,
e);
diva_os_spin_lock_magic_t old_irql;
int call_wakeup = 0;
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "xdi_callback");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql, "xdi_callback");
if (e->e.complete == 255) {
if (!(e->status & DIVA_UM_IDI_REMOVE_PENDING)) {
@@ -509,7 +523,8 @@ static void diva_um_idi_xdi_callback(ENTITY *entity)
}
}
e->e.Rc = 0;
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "xdi_callback");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql,
+ "xdi_callback", bh);
if (call_wakeup) {
diva_os_wakeup_read(e->os_context);
@@ -523,7 +538,8 @@ static void diva_um_idi_xdi_callback(ENTITY *entity)
call_wakeup = process_idi_ind(e, e->e.Ind);
}
e->e.Ind = 0;
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "xdi_callback");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql,
+ "xdi_callback", bh);
if (call_wakeup) {
diva_os_wakeup_read(e->os_context);
}
@@ -759,6 +775,7 @@ static int write_return_code(divas_um_idi_entity_t *e, byte rc)
-------------------------------------------------------------------------- */
int diva_user_mode_idi_ind_ready(void *entity, void *os_handle)
{
+ unsigned int bh;
divas_um_idi_entity_t *e;
diva_um_idi_adapter_t *a;
diva_os_spin_lock_magic_t old_irql;
@@ -766,7 +783,7 @@ int diva_user_mode_idi_ind_ready(void *entity, void *os_handle)
if (!entity)
return (-1);
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "ind_ready");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql, "ind_ready");
e = (divas_um_idi_entity_t *) entity;
a = e->adapter;
@@ -774,7 +791,8 @@ int diva_user_mode_idi_ind_ready(void *entity, void *os_handle)
/*
Adapter was unloaded
*/
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "ind_ready");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "ind_ready",
+ bh);
return (-1); /* adapter was removed */
}
if (e->status & DIVA_UM_IDI_REMOVED) {
@@ -782,7 +800,8 @@ int diva_user_mode_idi_ind_ready(void *entity, void *os_handle)
entity was removed as result of adapter removal
user should assign this entity again
*/
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "ind_ready");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "ind_ready",
+ bh);
return (-1);
}
@@ -792,7 +811,7 @@ int diva_user_mode_idi_ind_ready(void *entity, void *os_handle)
ret = 0;
}
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "ind_ready");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "ind_ready", bh);
return (ret);
}
@@ -804,19 +823,21 @@ void *diva_um_id_get_os_context(void *entity)
int divas_um_idi_entity_assigned(void *entity)
{
+ unsigned int bh;
divas_um_idi_entity_t *e;
diva_um_idi_adapter_t *a;
int ret;
diva_os_spin_lock_magic_t old_irql;
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "assigned?");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql, "assigned?");
e = (divas_um_idi_entity_t *) entity;
if (!e || (!(a = e->adapter)) ||
(e->status & DIVA_UM_IDI_REMOVED) ||
(a->status & DIVA_UM_IDI_ADAPTER_REMOVED)) {
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "assigned?");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "assigned?",
+ bh);
return (0);
}
@@ -828,24 +849,27 @@ int divas_um_idi_entity_assigned(void *entity)
DBG_TRC(("Id:%02x, rc_count:%d, status:%08x", e->e.Id, e->rc_count,
e->status))
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "assigned?");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "assigned?",
+ bh);
return (ret);
}
int divas_um_idi_entity_start_remove(void *entity)
{
+ unsigned int bh;
divas_um_idi_entity_t *e;
diva_um_idi_adapter_t *a;
diva_os_spin_lock_magic_t old_irql;
- diva_os_enter_spin_lock(&adapter_lock, &old_irql, "start_remove");
+ bh = diva_os_enter_spin_lock(&adapter_lock, &old_irql, "start_remove");
e = (divas_um_idi_entity_t *) entity;
if (!e || (!(a = e->adapter)) ||
(e->status & DIVA_UM_IDI_REMOVED) ||
(a->status & DIVA_UM_IDI_ADAPTER_REMOVED)) {
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "start_remove");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql,
+ "start_remove", bh);
return (0);
}
@@ -853,7 +877,8 @@ int divas_um_idi_entity_start_remove(void *entity)
/*
Entity BUSY
*/
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "start_remove");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql,
+ "start_remove", bh);
return (1);
}
@@ -861,7 +886,8 @@ int divas_um_idi_entity_start_remove(void *entity)
/*
Remove request was already pending, and arrived now
*/
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "start_remove");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql,
+ "start_remove", bh);
return (0); /* REMOVE was pending */
}
@@ -880,7 +906,7 @@ int divas_um_idi_entity_start_remove(void *entity)
if (a->d.request)
(*(a->d.request)) (&e->e);
- diva_os_leave_spin_lock(&adapter_lock, &old_irql, "start_remove");
+ diva_os_leave_spin_lock(&adapter_lock, &old_irql, "start_remove", bh);
return (0);
}
--
2.7.4
This pair of functions is implemented on top of local_bh_disable(),
which is going to handle a softirq mask in order to apply fine-grained
vector disablement. The lock function is going to return the mask of
vectors that were enabled prior to the last call to local_bh_disable(),
following a model similar to that of local_irq_save/restore(). Subsequent
calls to local_bh_disable() and friends can then stack up:
bh = local_bh_disable(vec_mask);
tcp_get_md5sig_pool(&bh2) {
*bh2 = local_bh_disable(...)
}
...
tcp_put_md5sig_pool(bh2) {
local_bh_enable(bh2);
}
local_bh_enable(bh);
To prepare for that, make tcp_get_md5sig_pool() able to return a saved
vector enabled mask and have tcp_put_md5sig_pool() take it back. We'll
plug it into local_bh_disable() in a subsequent patch.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
include/net/tcp.h | 4 ++--
net/ipv4/tcp.c | 5 ++++-
net/ipv4/tcp_ipv4.c | 14 ++++++++------
net/ipv6/tcp_ipv6.c | 14 ++++++++------
4 files changed, 22 insertions(+), 15 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 770917d..7fe357a 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1553,8 +1553,8 @@ static inline struct tcp_md5sig_key *tcp_md5_do_lookup(const struct sock *sk,
bool tcp_alloc_md5sig_pool(void);
-struct tcp_md5sig_pool *tcp_get_md5sig_pool(void);
-static inline void tcp_put_md5sig_pool(void)
+struct tcp_md5sig_pool *tcp_get_md5sig_pool(unsigned int *bh);
+static inline void tcp_put_md5sig_pool(unsigned int bh)
{
local_bh_enable();
}
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 10c6246..dfd9bae 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -3665,16 +3665,19 @@ EXPORT_SYMBOL(tcp_alloc_md5sig_pool);
* and BH disabled, to make sure another thread or softirq handling
* wont try to get same context.
*/
-struct tcp_md5sig_pool *tcp_get_md5sig_pool(void)
+struct tcp_md5sig_pool *tcp_get_md5sig_pool(unsigned int *bh)
{
local_bh_disable();
+ *bh = 0;
if (tcp_md5sig_pool_populated) {
/* coupled with smp_wmb() in __tcp_alloc_md5sig_pool() */
smp_rmb();
return this_cpu_ptr(&tcp_md5sig_pool);
}
+
local_bh_enable();
+
return NULL;
}
EXPORT_SYMBOL(tcp_get_md5sig_pool);
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 44c09ed..0378e77 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1194,8 +1194,9 @@ static int tcp_v4_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key,
{
struct tcp_md5sig_pool *hp;
struct ahash_request *req;
+ unsigned int bh;
- hp = tcp_get_md5sig_pool();
+ hp = tcp_get_md5sig_pool(&bh);
if (!hp)
goto clear_hash_noput;
req = hp->md5_req;
@@ -1210,11 +1211,11 @@ static int tcp_v4_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key,
if (crypto_ahash_final(req))
goto clear_hash;
- tcp_put_md5sig_pool();
+ tcp_put_md5sig_pool(bh);
return 0;
clear_hash:
- tcp_put_md5sig_pool();
+ tcp_put_md5sig_pool(bh);
clear_hash_noput:
memset(md5_hash, 0, 16);
return 1;
@@ -1228,6 +1229,7 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key,
struct ahash_request *req;
const struct tcphdr *th = tcp_hdr(skb);
__be32 saddr, daddr;
+ unsigned int bh;
if (sk) { /* valid for establish/request sockets */
saddr = sk->sk_rcv_saddr;
@@ -1238,7 +1240,7 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key,
daddr = iph->daddr;
}
- hp = tcp_get_md5sig_pool();
+ hp = tcp_get_md5sig_pool(&bh);
if (!hp)
goto clear_hash_noput;
req = hp->md5_req;
@@ -1256,11 +1258,11 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key,
if (crypto_ahash_final(req))
goto clear_hash;
- tcp_put_md5sig_pool();
+ tcp_put_md5sig_pool(bh);
return 0;
clear_hash:
- tcp_put_md5sig_pool();
+ tcp_put_md5sig_pool(bh);
clear_hash_noput:
memset(md5_hash, 0, 16);
return 1;
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 03e6b7a..360efc3 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -610,8 +610,9 @@ static int tcp_v6_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key,
{
struct tcp_md5sig_pool *hp;
struct ahash_request *req;
+ unsigned int bh;
- hp = tcp_get_md5sig_pool();
+ hp = tcp_get_md5sig_pool(&bh);
if (!hp)
goto clear_hash_noput;
req = hp->md5_req;
@@ -626,11 +627,11 @@ static int tcp_v6_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key,
if (crypto_ahash_final(req))
goto clear_hash;
- tcp_put_md5sig_pool();
+ tcp_put_md5sig_pool(bh);
return 0;
clear_hash:
- tcp_put_md5sig_pool();
+ tcp_put_md5sig_pool(bh);
clear_hash_noput:
memset(md5_hash, 0, 16);
return 1;
@@ -645,6 +646,7 @@ static int tcp_v6_md5_hash_skb(char *md5_hash,
struct tcp_md5sig_pool *hp;
struct ahash_request *req;
const struct tcphdr *th = tcp_hdr(skb);
+ unsigned int bh;
if (sk) { /* valid for establish/request sockets */
saddr = &sk->sk_v6_rcv_saddr;
@@ -655,7 +657,7 @@ static int tcp_v6_md5_hash_skb(char *md5_hash,
daddr = &ip6h->daddr;
}
- hp = tcp_get_md5sig_pool();
+ hp = tcp_get_md5sig_pool(&bh);
if (!hp)
goto clear_hash_noput;
req = hp->md5_req;
@@ -673,11 +675,11 @@ static int tcp_v6_md5_hash_skb(char *md5_hash,
if (crypto_ahash_final(req))
goto clear_hash;
- tcp_put_md5sig_pool();
+ tcp_put_md5sig_pool(bh);
return 0;
clear_hash:
- tcp_put_md5sig_pool();
+ tcp_put_md5sig_pool(bh);
clear_hash_noput:
memset(md5_hash, 0, 16);
return 1;
--
2.7.4
From: Frederic Weisbecker <[email protected]>
Make do_softirq() re-entrant: a vector that is being processed or
disabled can now be interrupted by another vector. This way a single
vector can no longer monopolize the CPU for a long while at the expense
of the others, which may rely on predictable latency, especially in
softirq-disabled sections that used to disable all vectors.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
kernel/softirq.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 457bf60..f4cb1ea 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -195,7 +195,7 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt, unsigned int bh)
if (cnt)
preempt_count_sub(cnt - 1);
- if (unlikely(!in_interrupt() && local_softirq_pending())) {
+ if (unlikely(!in_irq() && (local_softirq_pending() & local_softirq_enabled()))) {
/*
* Run softirq if any pending. And do it in its own stack
* as we may be calling this deep in a task call stack already.
@@ -387,7 +387,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
lockdep_softirq_end(in_hardirq);
account_irq_exit_time(current);
local_bh_exit();
- WARN_ON_ONCE(in_interrupt());
+ WARN_ON_ONCE(in_irq());
current_restore_flags(old_flags, PF_MEMALLOC);
}
@@ -396,12 +396,12 @@ asmlinkage __visible void do_softirq(void)
__u32 pending;
unsigned long flags;
- if (in_interrupt())
+ if (in_irq())
return;
local_irq_save(flags);
- pending = local_softirq_pending();
+ pending = local_softirq_pending() & local_softirq_enabled();
if (pending && !ksoftirqd_running(pending))
do_softirq_own_stack();
@@ -432,7 +432,7 @@ void irq_enter(void)
static inline void invoke_softirq(void)
{
- if (ksoftirqd_running(local_softirq_pending()))
+ if (ksoftirqd_running(local_softirq_pending() & local_softirq_enabled()))
return;
if (!force_irqthreads) {
@@ -481,7 +481,7 @@ void irq_exit(void)
#endif
account_irq_exit_time(current);
preempt_count_sub(HARDIRQ_OFFSET);
- if (!in_interrupt() && local_softirq_pending())
+ if (!in_irq() && (local_softirq_pending() & local_softirq_enabled()))
invoke_softirq();
tick_irq_exit();
@@ -712,13 +712,13 @@ void __init softirq_init(void)
static int ksoftirqd_should_run(unsigned int cpu)
{
- return local_softirq_pending();
+ return local_softirq_pending() & local_softirq_enabled();
}
static void run_ksoftirqd(unsigned int cpu)
{
local_irq_disable();
- if (local_softirq_pending()) {
+ if (local_softirq_pending() & local_softirq_enabled()) {
/*
* We can safely run softirq on inline stack, as we are not deep
* in the task stack here.
--
2.7.4
This pair of functions is implemented on top of local_bh_disable(),
which is going to handle a softirq mask in order to apply fine-grained
vector disablement. The lock function is going to return the mask of
vectors that were enabled prior to the last call to local_bh_disable(),
following a model similar to that of local_irq_save/restore(). Subsequent
calls to local_bh_disable() and friends can then stack up:
bh = local_bh_disable(vec_mask);
bh2 = rcu_read_lock_bh() {
bh2 = local_bh_disable(...)
return bh2;
}
...
rcu_read_unlock_bh(bh2) {
local_bh_enable(bh2);
}
local_bh_enable(bh);
To prepare for that, make rcu_read_lock_bh() able to return a saved vector
enabled mask and pass it back to rcu_read_unlock_bh(). We'll plug it
into local_bh_disable() in a subsequent patch.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
crypto/pcrypt.c | 5 ++--
drivers/infiniband/ulp/ipoib/ipoib_main.c | 5 ++--
drivers/net/hyperv/rndis_filter.c | 5 ++--
drivers/net/macsec.c | 12 +++++----
drivers/net/vrf.c | 19 ++++++++------
drivers/vhost/net.c | 5 ++--
include/linux/rcupdate.h | 5 ++--
include/net/arp.h | 10 ++++---
include/net/ip6_fib.h | 1 +
include/net/ndisc.h | 10 ++++---
include/net/neighbour.h | 1 +
kernel/padata.c | 5 ++--
kernel/rcu/rcuperf.c | 2 +-
kernel/rcu/rcutorture.c | 2 +-
net/caif/caif_dev.c | 5 ++--
net/core/dev.c | 7 ++---
net/core/neighbour.c | 37 +++++++++++++++-----------
net/core/pktgen.c | 5 ++--
net/decnet/dn_route.c | 27 +++++++++++--------
net/ipv4/fib_semantics.c | 5 ++--
net/ipv4/ip_output.c | 7 ++---
net/ipv4/netfilter/ipt_CLUSTERIP.c | 5 ++--
net/ipv6/addrconf.c | 21 ++++++++-------
net/ipv6/ip6_fib.c | 4 +--
net/ipv6/ip6_flowlabel.c | 43 ++++++++++++++++++-------------
net/ipv6/ip6_output.c | 12 +++++----
net/ipv6/route.c | 15 ++++++-----
net/ipv6/xfrm6_tunnel.c | 5 ++--
net/l2tp/l2tp_core.c | 33 ++++++++++++++----------
net/llc/llc_core.c | 5 ++--
net/llc/llc_proc.c | 13 +++++++---
net/llc/llc_sap.c | 5 ++--
net/netfilter/ipset/ip_set_core.c | 10 ++++---
net/netfilter/ipset/ip_set_hash_gen.h | 15 ++++++-----
net/netfilter/nfnetlink_log.c | 17 ++++++++----
35 files changed, 229 insertions(+), 154 deletions(-)
diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
index f8ec3d4..490358c 100644
--- a/crypto/pcrypt.c
+++ b/crypto/pcrypt.c
@@ -73,12 +73,13 @@ struct pcrypt_aead_ctx {
static int pcrypt_do_parallel(struct padata_priv *padata, unsigned int *cb_cpu,
struct padata_pcrypt *pcrypt)
{
+ unsigned int bh;
unsigned int cpu_index, cpu, i;
struct pcrypt_cpumask *cpumask;
cpu = *cb_cpu;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
cpumask = rcu_dereference_bh(pcrypt->cb_cpumask);
if (cpumask_test_cpu(cpu, cpumask->mask))
goto out;
@@ -95,7 +96,7 @@ static int pcrypt_do_parallel(struct padata_priv *padata, unsigned int *cb_cpu,
*cb_cpu = cpu;
out:
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return padata_do_parallel(pcrypt->pinst, padata, cpu);
}
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
index eaefa43..709a3e1 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -1260,13 +1260,14 @@ static u32 ipoib_addr_hash(struct ipoib_neigh_hash *htbl, u8 *daddr)
struct ipoib_neigh *ipoib_neigh_get(struct net_device *dev, u8 *daddr)
{
+ unsigned int bh;
struct ipoib_dev_priv *priv = ipoib_priv(dev);
struct ipoib_neigh_table *ntbl = &priv->ntbl;
struct ipoib_neigh_hash *htbl;
struct ipoib_neigh *neigh = NULL;
u32 hash_val;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
htbl = rcu_dereference_bh(ntbl->htbl);
@@ -1292,7 +1293,7 @@ struct ipoib_neigh *ipoib_neigh_get(struct net_device *dev, u8 *daddr)
}
out_unlock:
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return neigh;
}
diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
index 2a5209f..8c95eac 100644
--- a/drivers/net/hyperv/rndis_filter.c
+++ b/drivers/net/hyperv/rndis_filter.c
@@ -214,6 +214,7 @@ static void dump_rndis_message(struct net_device *netdev,
static int rndis_filter_send_request(struct rndis_device *dev,
struct rndis_request *req)
{
+ unsigned int bh;
struct hv_netvsc_packet *packet;
struct hv_page_buffer page_buf[2];
struct hv_page_buffer *pb = page_buf;
@@ -245,9 +246,9 @@ static int rndis_filter_send_request(struct rndis_device *dev,
trace_rndis_send(dev->ndev, 0, &req->request_msg);
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
ret = netvsc_send(dev->ndev, packet, NULL, pb, NULL);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return ret;
}
diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index 7de88b3..eb7b6b7 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -597,6 +597,7 @@ static void count_tx(struct net_device *dev, int ret, int len)
static void macsec_encrypt_done(struct crypto_async_request *base, int err)
{
+ unsigned int bh;
struct sk_buff *skb = base->data;
struct net_device *dev = skb->dev;
struct macsec_dev *macsec = macsec_priv(dev);
@@ -605,13 +606,13 @@ static void macsec_encrypt_done(struct crypto_async_request *base, int err)
aead_request_free(macsec_skb_cb(skb)->req);
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
macsec_encrypt_finish(skb, dev);
macsec_count_tx(skb, &macsec->secy.tx_sc, macsec_skb_cb(skb)->tx_sa);
len = skb->len;
ret = dev_queue_xmit(skb);
count_tx(dev, ret, len);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
macsec_txsa_put(sa);
dev_put(dev);
@@ -886,6 +887,7 @@ static void count_rx(struct net_device *dev, int len)
static void macsec_decrypt_done(struct crypto_async_request *base, int err)
{
+ unsigned int bh;
struct sk_buff *skb = base->data;
struct net_device *dev = skb->dev;
struct macsec_dev *macsec = macsec_priv(dev);
@@ -899,10 +901,10 @@ static void macsec_decrypt_done(struct crypto_async_request *base, int err)
if (!err)
macsec_skb_cb(skb)->valid = true;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
pn = ntohl(macsec_ethhdr(skb)->packet_number);
if (!macsec_post_decrypt(skb, &macsec->secy, pn)) {
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
kfree_skb(skb);
goto out;
}
@@ -915,7 +917,7 @@ static void macsec_decrypt_done(struct crypto_async_request *base, int err)
if (gro_cells_receive(&macsec->gro_cells, skb) == NET_RX_SUCCESS)
count_rx(dev, len);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
out:
macsec_rxsa_put(rx_sa);
diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
index f93547f..4f7c6cb 100644
--- a/drivers/net/vrf.c
+++ b/drivers/net/vrf.c
@@ -327,6 +327,7 @@ static netdev_tx_t vrf_xmit(struct sk_buff *skb, struct net_device *dev)
static int vrf_finish_direct(struct net *net, struct sock *sk,
struct sk_buff *skb)
{
+ unsigned int bh;
struct net_device *vrf_dev = skb->dev;
if (!list_empty(&vrf_dev->ptype_all) &&
@@ -337,9 +338,9 @@ static int vrf_finish_direct(struct net *net, struct sock *sk,
eth_zero_addr(eth->h_dest);
eth->h_proto = skb->protocol;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
dev_queue_xmit_nit(skb, vrf_dev);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
skb_pull(skb, ETH_HLEN);
}
@@ -352,6 +353,7 @@ static int vrf_finish_direct(struct net *net, struct sock *sk,
static int vrf_finish_output6(struct net *net, struct sock *sk,
struct sk_buff *skb)
{
+ unsigned int bh;
struct dst_entry *dst = skb_dst(skb);
struct net_device *dev = dst->dev;
struct neighbour *neigh;
@@ -363,7 +365,7 @@ static int vrf_finish_output6(struct net *net, struct sock *sk,
skb->protocol = htons(ETH_P_IPV6);
skb->dev = dev;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
nexthop = rt6_nexthop((struct rt6_info *)dst, &ipv6_hdr(skb)->daddr);
neigh = __ipv6_neigh_lookup_noref(dst->dev, nexthop);
if (unlikely(!neigh))
@@ -371,10 +373,10 @@ static int vrf_finish_output6(struct net *net, struct sock *sk,
if (!IS_ERR(neigh)) {
sock_confirm_neigh(skb, neigh);
ret = neigh_output(neigh, skb);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return ret;
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
IP6_INC_STATS(dev_net(dst->dev),
ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES);
@@ -544,6 +546,7 @@ static int vrf_rt6_create(struct net_device *dev)
/* modelled after ip_finish_output2 */
static int vrf_finish_output(struct net *net, struct sock *sk, struct sk_buff *skb)
{
+ unsigned int bh;
struct dst_entry *dst = skb_dst(skb);
struct rtable *rt = (struct rtable *)dst;
struct net_device *dev = dst->dev;
@@ -570,7 +573,7 @@ static int vrf_finish_output(struct net *net, struct sock *sk, struct sk_buff *s
skb = skb2;
}
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
nexthop = (__force u32)rt_nexthop(rt, ip_hdr(skb)->daddr);
neigh = __ipv4_neigh_lookup_noref(dev, nexthop);
@@ -579,11 +582,11 @@ static int vrf_finish_output(struct net *net, struct sock *sk, struct sk_buff *s
if (!IS_ERR(neigh)) {
sock_confirm_neigh(skb, neigh);
ret = neigh_output(neigh, skb);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return ret;
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
err:
vrf_tx_error(skb->dev, skb);
return ret;
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 4e656f8..a467932 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -371,11 +371,12 @@ static void vhost_zerocopy_signal_used(struct vhost_net *net,
static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
{
+ unsigned int bh;
struct vhost_net_ubuf_ref *ubufs = ubuf->ctx;
struct vhost_virtqueue *vq = ubufs->vq;
int cnt;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
/* set len to mark this desc buffers done DMA */
vq->heads[ubuf->desc].len = success ?
@@ -392,7 +393,7 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
if (cnt <= 1 || !(cnt % 16))
vhost_poll_queue(&vq->poll);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
}
static inline unsigned long busy_clock(void)
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 75e5b39..60fbd15 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -700,13 +700,14 @@ static inline void rcu_read_unlock(void)
* rcu_read_unlock_bh() from one task if the matching rcu_read_lock_bh()
* was invoked from some other task.
*/
-static inline void rcu_read_lock_bh(void)
+static inline unsigned int rcu_read_lock_bh(void)
{
local_bh_disable();
__acquire(RCU_BH);
rcu_lock_acquire(&rcu_bh_lock_map);
RCU_LOCKDEP_WARN(!rcu_is_watching(),
"rcu_read_lock_bh() used illegally while idle");
+ return 0;
}
/*
@@ -714,7 +715,7 @@ static inline void rcu_read_lock_bh(void)
*
* See rcu_read_lock_bh() for more information.
*/
-static inline void rcu_read_unlock_bh(void)
+static inline void rcu_read_unlock_bh(unsigned int bh)
{
RCU_LOCKDEP_WARN(!rcu_is_watching(),
"rcu_read_unlock_bh() used illegally while idle");
diff --git a/include/net/arp.h b/include/net/arp.h
index 977aabf..576a874 100644
--- a/include/net/arp.h
+++ b/include/net/arp.h
@@ -28,22 +28,24 @@ static inline struct neighbour *__ipv4_neigh_lookup_noref(struct net_device *dev
static inline struct neighbour *__ipv4_neigh_lookup(struct net_device *dev, u32 key)
{
+ unsigned int bh;
struct neighbour *n;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
n = __ipv4_neigh_lookup_noref(dev, key);
if (n && !refcount_inc_not_zero(&n->refcnt))
n = NULL;
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return n;
}
static inline void __ipv4_confirm_neigh(struct net_device *dev, u32 key)
{
+ unsigned int bh;
struct neighbour *n;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
n = __ipv4_neigh_lookup_noref(dev, key);
if (n) {
unsigned long now = jiffies;
@@ -52,7 +54,7 @@ static inline void __ipv4_confirm_neigh(struct net_device *dev, u32 key)
if (n->confirmed != now)
n->confirmed = now;
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
}
void arp_init(void);
diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
index 3d49305..d12fec0 100644
--- a/include/net/ip6_fib.h
+++ b/include/net/ip6_fib.h
@@ -439,6 +439,7 @@ struct ipv6_route_iter {
loff_t skip;
struct fib6_table *tbl;
int sernum;
+ unsigned int bh;
};
extern const struct seq_operations ipv6_route_seq_ops;
diff --git a/include/net/ndisc.h b/include/net/ndisc.h
index ddfbb59..d43423d 100644
--- a/include/net/ndisc.h
+++ b/include/net/ndisc.h
@@ -381,13 +381,14 @@ static inline struct neighbour *__ipv6_neigh_lookup_noref(struct net_device *dev
static inline struct neighbour *__ipv6_neigh_lookup(struct net_device *dev, const void *pkey)
{
+ unsigned int bh;
struct neighbour *n;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
n = __ipv6_neigh_lookup_noref(dev, pkey);
if (n && !refcount_inc_not_zero(&n->refcnt))
n = NULL;
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return n;
}
@@ -395,9 +396,10 @@ static inline struct neighbour *__ipv6_neigh_lookup(struct net_device *dev, cons
static inline void __ipv6_confirm_neigh(struct net_device *dev,
const void *pkey)
{
+ unsigned int bh;
struct neighbour *n;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
n = __ipv6_neigh_lookup_noref(dev, pkey);
if (n) {
unsigned long now = jiffies;
@@ -406,7 +408,7 @@ static inline void __ipv6_confirm_neigh(struct net_device *dev,
if (n->confirmed != now)
n->confirmed = now;
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
}
int ndisc_init(void);
diff --git a/include/net/neighbour.h b/include/net/neighbour.h
index 6c1eecd..b804121 100644
--- a/include/net/neighbour.h
+++ b/include/net/neighbour.h
@@ -374,6 +374,7 @@ struct neigh_seq_state {
struct neighbour *n, loff_t *pos);
unsigned int bucket;
unsigned int flags;
+ unsigned int bh;
#define NEIGH_SEQ_NEIGH_ONLY 0x00000001
#define NEIGH_SEQ_IS_PNEIGH 0x00000002
#define NEIGH_SEQ_SKIP_NOARP 0x00000004
diff --git a/kernel/padata.c b/kernel/padata.c
index d568cc5..8a2fbd4 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -104,11 +104,12 @@ static void padata_parallel_worker(struct work_struct *parallel_work)
int padata_do_parallel(struct padata_instance *pinst,
struct padata_priv *padata, int cb_cpu)
{
+ unsigned int bh;
int target_cpu, err;
struct padata_parallel_queue *queue;
struct parallel_data *pd;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
pd = rcu_dereference_bh(pinst->pd);
@@ -142,7 +143,7 @@ int padata_do_parallel(struct padata_instance *pinst,
queue_work_on(target_cpu, pinst->wq, &queue->work);
out:
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return err;
}
diff --git a/kernel/rcu/rcuperf.c b/kernel/rcu/rcuperf.c
index 3424452..fa25db2 100644
--- a/kernel/rcu/rcuperf.c
+++ b/kernel/rcu/rcuperf.c
@@ -201,7 +201,7 @@ static int rcu_bh_perf_read_lock(void) __acquires(RCU_BH)
static void rcu_bh_perf_read_unlock(int idx) __releases(RCU_BH)
{
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(0);
}
static struct rcu_perf_ops rcu_bh_ops = {
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index c596c6f..cb3abdc 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -446,7 +446,7 @@ static int rcu_bh_torture_read_lock(void) __acquires(RCU_BH)
static void rcu_bh_torture_read_unlock(int idx) __releases(RCU_BH)
{
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(0);
}
static void rcu_bh_torture_deferred_free(struct rcu_torture *p)
diff --git a/net/caif/caif_dev.c b/net/caif/caif_dev.c
index 711d715..264f715 100644
--- a/net/caif/caif_dev.c
+++ b/net/caif/caif_dev.c
@@ -165,13 +165,14 @@ static void caif_flow_cb(struct sk_buff *skb)
static int transmit(struct cflayer *layer, struct cfpkt *pkt)
{
+ unsigned int bh;
int err, high = 0, qlen = 0;
struct caif_device_entry *caifd =
container_of(layer, struct caif_device_entry, layer);
struct sk_buff *skb;
struct netdev_queue *txq;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
skb = cfpkt_tonative(pkt);
skb->dev = caifd->netdev;
@@ -225,7 +226,7 @@ static int transmit(struct cflayer *layer, struct cfpkt *pkt)
_CAIF_CTRLCMD_PHYIF_FLOW_OFF_IND,
caifd->layer.id);
noxoff:
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
err = dev_queue_xmit(skb);
if (err > 0)
diff --git a/net/core/dev.c b/net/core/dev.c
index 82114e1..2898fb8 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3725,6 +3725,7 @@ struct netdev_queue *netdev_pick_tx(struct net_device *dev,
*/
static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
{
+ unsigned int bh;
struct net_device *dev = skb->dev;
struct netdev_queue *txq;
struct Qdisc *q;
@@ -3739,7 +3740,7 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
/* Disable soft irqs for various locks below. Also
* stops preemption for RCU.
*/
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
skb_update_prio(skb);
@@ -3820,13 +3821,13 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
}
rc = -ENETDOWN;
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
atomic_long_inc(&dev->tx_dropped);
kfree_skb_list(skb);
return rc;
out:
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return rc;
}
diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index 91592fc..98cc21c 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -438,11 +438,12 @@ static struct neigh_hash_table *neigh_hash_grow(struct neigh_table *tbl,
struct neighbour *neigh_lookup(struct neigh_table *tbl, const void *pkey,
struct net_device *dev)
{
+ unsigned int bh;
struct neighbour *n;
NEIGH_CACHE_STAT_INC(tbl, lookups);
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
n = __neigh_lookup_noref(tbl, pkey, dev);
if (n) {
if (!refcount_inc_not_zero(&n->refcnt))
@@ -450,7 +451,7 @@ struct neighbour *neigh_lookup(struct neigh_table *tbl, const void *pkey,
NEIGH_CACHE_STAT_INC(tbl, hits);
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return n;
}
EXPORT_SYMBOL(neigh_lookup);
@@ -458,6 +459,7 @@ EXPORT_SYMBOL(neigh_lookup);
struct neighbour *neigh_lookup_nodev(struct neigh_table *tbl, struct net *net,
const void *pkey)
{
+ unsigned int bh;
struct neighbour *n;
unsigned int key_len = tbl->key_len;
u32 hash_val;
@@ -465,7 +467,7 @@ struct neighbour *neigh_lookup_nodev(struct neigh_table *tbl, struct net *net,
NEIGH_CACHE_STAT_INC(tbl, lookups);
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
nht = rcu_dereference_bh(tbl->nht);
hash_val = tbl->hash(pkey, NULL, nht->hash_rnd) >> (32 - nht->hash_shift);
@@ -481,7 +483,7 @@ struct neighbour *neigh_lookup_nodev(struct neigh_table *tbl, struct net *net,
}
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return n;
}
EXPORT_SYMBOL(neigh_lookup_nodev);
@@ -1856,6 +1858,7 @@ static int neightbl_fill_parms(struct sk_buff *skb, struct neigh_parms *parms)
static int neightbl_fill_info(struct sk_buff *skb, struct neigh_table *tbl,
u32 pid, u32 seq, int type, int flags)
{
+ unsigned int bh;
struct nlmsghdr *nlh;
struct ndtmsg *ndtmsg;
@@ -1890,11 +1893,11 @@ static int neightbl_fill_info(struct sk_buff *skb, struct neigh_table *tbl,
.ndtc_proxy_qlen = tbl->proxy_queue.qlen,
};
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
nht = rcu_dereference_bh(tbl->nht);
ndc.ndtc_hash_rnd = nht->hash_rnd[0];
ndc.ndtc_hash_mask = ((1 << nht->hash_shift) - 1);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
if (nla_put(skb, NDTA_CONFIG, sizeof(ndc), &ndc))
goto nla_put_failure;
@@ -2330,6 +2333,7 @@ static bool neigh_ifindex_filtered(struct net_device *dev, int filter_idx)
static int neigh_dump_table(struct neigh_table *tbl, struct sk_buff *skb,
struct netlink_callback *cb)
{
+ unsigned int bh;
struct net *net = sock_net(skb->sk);
const struct nlmsghdr *nlh = cb->nlh;
struct nlattr *tb[NDA_MAX + 1];
@@ -2357,7 +2361,7 @@ static int neigh_dump_table(struct neigh_table *tbl, struct sk_buff *skb,
flags |= NLM_F_DUMP_FILTERED;
}
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
nht = rcu_dereference_bh(tbl->nht);
for (h = s_h; h < (1 << nht->hash_shift); h++) {
@@ -2384,7 +2388,7 @@ static int neigh_dump_table(struct neigh_table *tbl, struct sk_buff *skb,
}
rc = skb->len;
out:
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
cb->args[1] = h;
cb->args[2] = idx;
return rc;
@@ -2470,10 +2474,11 @@ static int neigh_dump_info(struct sk_buff *skb, struct netlink_callback *cb)
void neigh_for_each(struct neigh_table *tbl, void (*cb)(struct neighbour *, void *), void *cookie)
{
+ unsigned int bh;
int chain;
struct neigh_hash_table *nht;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
nht = rcu_dereference_bh(tbl->nht);
read_lock(&tbl->lock); /* avoid resizes */
@@ -2486,7 +2491,7 @@ void neigh_for_each(struct neigh_table *tbl, void (*cb)(struct neighbour *, void
cb(n, cookie);
}
read_unlock(&tbl->lock);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
}
EXPORT_SYMBOL(neigh_for_each);
@@ -2528,6 +2533,7 @@ EXPORT_SYMBOL(__neigh_for_each_release);
int neigh_xmit(int index, struct net_device *dev,
const void *addr, struct sk_buff *skb)
{
+ unsigned int bh;
int err = -EAFNOSUPPORT;
if (likely(index < NEIGH_NR_TABLES)) {
struct neigh_table *tbl;
@@ -2536,17 +2542,17 @@ int neigh_xmit(int index, struct net_device *dev,
tbl = neigh_tables[index];
if (!tbl)
goto out;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
neigh = __neigh_lookup_noref(tbl, addr, dev);
if (!neigh)
neigh = __neigh_create(tbl, addr, dev, false);
err = PTR_ERR(neigh);
if (IS_ERR(neigh)) {
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
goto out_kfree_skb;
}
err = neigh->output(neigh, skb);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
}
else if (index == NEIGH_LINK_TABLE) {
err = dev_hard_header(skb, dev, ntohs(skb->protocol),
@@ -2753,7 +2759,7 @@ void *neigh_seq_start(struct seq_file *seq, loff_t *pos, struct neigh_table *tbl
state->bucket = 0;
state->flags = (neigh_seq_flags & ~NEIGH_SEQ_IS_PNEIGH);
- rcu_read_lock_bh();
+ state->bh = rcu_read_lock_bh();
state->nht = rcu_dereference_bh(tbl->nht);
return *pos ? neigh_get_idx_any(seq, pos) : SEQ_START_TOKEN;
@@ -2790,7 +2796,8 @@ EXPORT_SYMBOL(neigh_seq_next);
void neigh_seq_stop(struct seq_file *seq, void *v)
__releases(rcu_bh)
{
- rcu_read_unlock_bh();
+ struct neigh_seq_state *state = seq->private;
+ rcu_read_unlock_bh(state->bh);
}
EXPORT_SYMBOL(neigh_seq_stop);
diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index 7f69384..6e2bea0 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -2502,6 +2502,7 @@ static u32 pktgen_dst_metrics[RTAX_MAX + 1] = {
static int pktgen_output_ipsec(struct sk_buff *skb, struct pktgen_dev *pkt_dev)
{
+ unsigned int bh;
struct xfrm_state *x = pkt_dev->flows[pkt_dev->curfl].x;
int err = 0;
struct net *net = dev_net(pkt_dev->odev);
@@ -2519,9 +2520,9 @@ static int pktgen_output_ipsec(struct sk_buff *skb, struct pktgen_dev *pkt_dev)
if ((x->props.mode == XFRM_MODE_TUNNEL) && (pkt_dev->spi != 0))
skb->_skb_refdst = (unsigned long)&pkt_dev->xdst.u.dst | SKB_DST_NOREF;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
err = x->outer_mode->output(x, skb);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
if (err) {
XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTSTATEMODEERROR);
goto error;
diff --git a/net/decnet/dn_route.c b/net/decnet/dn_route.c
index 1c002c0..e1180f9 100644
--- a/net/decnet/dn_route.c
+++ b/net/decnet/dn_route.c
@@ -1247,11 +1247,12 @@ static int dn_route_output_slow(struct dst_entry **pprt, const struct flowidn *o
*/
static int __dn_route_output_key(struct dst_entry **pprt, const struct flowidn *flp, int flags)
{
+ unsigned int bh;
unsigned int hash = dn_hash(flp->saddr, flp->daddr);
struct dn_route *rt = NULL;
if (!(flags & MSG_TRYHARD)) {
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
for (rt = rcu_dereference_bh(dn_rt_hash_table[hash].chain); rt;
rt = rcu_dereference_bh(rt->dn_next)) {
if ((flp->daddr == rt->fld.daddr) &&
@@ -1260,12 +1261,12 @@ static int __dn_route_output_key(struct dst_entry **pprt, const struct flowidn *
dn_is_output_route(rt) &&
(rt->fld.flowidn_oif == flp->flowidn_oif)) {
dst_hold_and_use(&rt->dst, jiffies);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
*pprt = &rt->dst;
return 0;
}
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
}
return dn_route_output_slow(pprt, flp, flags);
@@ -1725,6 +1726,7 @@ static int dn_cache_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
*/
int dn_cache_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
+ unsigned int bh;
struct net *net = sock_net(skb->sk);
struct dn_route *rt;
int h, s_h;
@@ -1748,7 +1750,7 @@ int dn_cache_dump(struct sk_buff *skb, struct netlink_callback *cb)
continue;
if (h > s_h)
s_idx = 0;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
for(rt = rcu_dereference_bh(dn_rt_hash_table[h].chain), idx = 0;
rt;
rt = rcu_dereference_bh(rt->dn_next), idx++) {
@@ -1759,12 +1761,12 @@ int dn_cache_dump(struct sk_buff *skb, struct netlink_callback *cb)
cb->nlh->nlmsg_seq, RTM_NEWROUTE,
1, NLM_F_MULTI) < 0) {
skb_dst_drop(skb);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
goto done;
}
skb_dst_drop(skb);
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
}
done:
@@ -1775,6 +1777,7 @@ int dn_cache_dump(struct sk_buff *skb, struct netlink_callback *cb)
#ifdef CONFIG_PROC_FS
struct dn_rt_cache_iter_state {
+ unsigned int bh;
int bucket;
};
@@ -1784,25 +1787,26 @@ static struct dn_route *dn_rt_cache_get_first(struct seq_file *seq)
struct dn_rt_cache_iter_state *s = seq->private;
for(s->bucket = dn_rt_hash_mask; s->bucket >= 0; --s->bucket) {
- rcu_read_lock_bh();
+ s->bh = rcu_read_lock_bh();
rt = rcu_dereference_bh(dn_rt_hash_table[s->bucket].chain);
if (rt)
break;
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(s->bh);
}
return rt;
}
static struct dn_route *dn_rt_cache_get_next(struct seq_file *seq, struct dn_route *rt)
{
+ unsigned int bh;
struct dn_rt_cache_iter_state *s = seq->private;
rt = rcu_dereference_bh(rt->dn_next);
while (!rt) {
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(s->bh);
if (--s->bucket < 0)
break;
- rcu_read_lock_bh();
+ s->bh = rcu_read_lock_bh();
rt = rcu_dereference_bh(dn_rt_hash_table[s->bucket].chain);
}
return rt;
@@ -1828,8 +1832,9 @@ static void *dn_rt_cache_seq_next(struct seq_file *seq, void *v, loff_t *pos)
static void dn_rt_cache_seq_stop(struct seq_file *seq, void *v)
{
+ struct dn_rt_cache_iter_state *s = seq->private;
if (v)
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(s->bh);
}
static int dn_rt_cache_seq_show(struct seq_file *seq, void *v)
diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
index f3c89cc..e87de42 100644
--- a/net/ipv4/fib_semantics.c
+++ b/net/ipv4/fib_semantics.c
@@ -1684,19 +1684,20 @@ int fib_sync_up(struct net_device *dev, unsigned int nh_flags)
#ifdef CONFIG_IP_ROUTE_MULTIPATH
static bool fib_good_nh(const struct fib_nh *nh)
{
+ unsigned int bh;
int state = NUD_REACHABLE;
if (nh->nh_scope == RT_SCOPE_LINK) {
struct neighbour *n;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
n = __ipv4_neigh_lookup_noref(nh->nh_dev,
(__force u32)nh->nh_gw);
if (n)
state = n->nud_state;
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
}
return !!(state & NUD_VALID);
diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
index 9c4e72e..ffa7747 100644
--- a/net/ipv4/ip_output.c
+++ b/net/ipv4/ip_output.c
@@ -183,6 +183,7 @@ EXPORT_SYMBOL_GPL(ip_build_and_send_pkt);
static int ip_finish_output2(struct net *net, struct sock *sk, struct sk_buff *skb)
{
+ unsigned int bh;
struct dst_entry *dst = skb_dst(skb);
struct rtable *rt = (struct rtable *)dst;
struct net_device *dev = dst->dev;
@@ -217,7 +218,7 @@ static int ip_finish_output2(struct net *net, struct sock *sk, struct sk_buff *s
return res;
}
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
nexthop = (__force u32) rt_nexthop(rt, ip_hdr(skb)->daddr);
neigh = __ipv4_neigh_lookup_noref(dev, nexthop);
if (unlikely(!neigh))
@@ -228,10 +229,10 @@ static int ip_finish_output2(struct net *net, struct sock *sk, struct sk_buff *s
sock_confirm_neigh(skb, neigh);
res = neigh_output(neigh, skb);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return res;
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
net_dbg_ratelimited("%s: No header cache and no neighbour!\n",
__func__);
diff --git a/net/ipv4/netfilter/ipt_CLUSTERIP.c b/net/ipv4/netfilter/ipt_CLUSTERIP.c
index 2c8d313..b65449d 100644
--- a/net/ipv4/netfilter/ipt_CLUSTERIP.c
+++ b/net/ipv4/netfilter/ipt_CLUSTERIP.c
@@ -142,9 +142,10 @@ __clusterip_config_find(struct net *net, __be32 clusterip)
static inline struct clusterip_config *
clusterip_config_find_get(struct net *net, __be32 clusterip, int entry)
{
+ unsigned int bh;
struct clusterip_config *c;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
c = __clusterip_config_find(net, clusterip);
if (c) {
#ifdef CONFIG_PROC_FS
@@ -161,7 +162,7 @@ clusterip_config_find_get(struct net *net, __be32 clusterip, int entry)
}
}
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return c;
}
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index d51a8c0..9f1d3d0 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -987,6 +987,7 @@ static struct inet6_ifaddr *
ipv6_add_addr(struct inet6_dev *idev, struct ifa6_config *cfg,
bool can_block, struct netlink_ext_ack *extack)
{
+ unsigned int bh;
gfp_t gfp_flags = can_block ? GFP_KERNEL : GFP_ATOMIC;
int addr_type = ipv6_addr_type(cfg->pfx);
struct net *net = dev_net(idev->dev);
@@ -1072,11 +1073,11 @@ ipv6_add_addr(struct inet6_dev *idev, struct ifa6_config *cfg,
/* For caller */
refcount_set(&ifa->refcnt, 1);
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
err = ipv6_add_addr_hash(idev->dev, ifa);
if (err < 0) {
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
goto out;
}
@@ -1093,7 +1094,7 @@ ipv6_add_addr(struct inet6_dev *idev, struct ifa6_config *cfg,
in6_ifa_hold(ifa);
write_unlock(&idev->lock);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
inet6addr_notifier_call_chain(NETDEV_UP, ifa);
out:
@@ -4339,13 +4340,14 @@ int ipv6_chk_home_addr(struct net *net, const struct in6_addr *addr)
static void addrconf_verify_rtnl(void)
{
+ unsigned int bh;
unsigned long now, next, next_sec, next_sched;
struct inet6_ifaddr *ifp;
int i;
ASSERT_RTNL();
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
now = jiffies;
next = round_jiffies_up(now + ADDR_CHECK_FREQUENCY);
@@ -4418,11 +4420,11 @@ static void addrconf_verify_rtnl(void)
spin_lock(&ifpub->lock);
ifpub->regen_count = 0;
spin_unlock(&ifpub->lock);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
ipv6_create_tempaddr(ifpub, ifp, true);
in6_ifa_put(ifpub);
in6_ifa_put(ifp);
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
goto restart;
}
} else if (time_before(ifp->tstamp + ifp->prefered_lft * HZ - regen_advance * HZ, next))
@@ -4451,7 +4453,7 @@ static void addrconf_verify_rtnl(void)
pr_debug("now = %lu, schedule = %lu, rounded schedule = %lu => %lu\n",
now, next, next_sec, next_sched);
mod_delayed_work(addrconf_wq, &addr_chk_work, next_sched - now);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
}
static void addrconf_verify_work(struct work_struct *w)
@@ -5714,10 +5716,11 @@ static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp)
static void ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp)
{
- rcu_read_lock_bh();
+ unsigned int bh;
+ bh = rcu_read_lock_bh();
if (likely(ifp->idev->dead == 0))
__ipv6_ifa_notify(event, ifp);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
}
#ifdef CONFIG_SYSCTL
diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
index 5516f55..89e4083 100644
--- a/net/ipv6/ip6_fib.c
+++ b/net/ipv6/ip6_fib.c
@@ -2386,7 +2386,7 @@ static void *ipv6_route_seq_start(struct seq_file *seq, loff_t *pos)
struct net *net = seq_file_net(seq);
struct ipv6_route_iter *iter = seq->private;
- rcu_read_lock_bh();
+ iter->bh = rcu_read_lock_bh();
iter->tbl = ipv6_route_seq_next_table(NULL, net);
iter->skip = *pos;
@@ -2413,7 +2413,7 @@ static void ipv6_route_seq_stop(struct seq_file *seq, void *v)
if (ipv6_route_iter_active(iter))
fib6_walker_unlink(net, &iter->w);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(iter->bh);
}
const struct seq_operations ipv6_route_seq_ops = {
diff --git a/net/ipv6/ip6_flowlabel.c b/net/ipv6/ip6_flowlabel.c
index cb54a8a..61cb39c 100644
--- a/net/ipv6/ip6_flowlabel.c
+++ b/net/ipv6/ip6_flowlabel.c
@@ -84,13 +84,14 @@ static inline struct ip6_flowlabel *__fl_lookup(struct net *net, __be32 label)
static struct ip6_flowlabel *fl_lookup(struct net *net, __be32 label)
{
+ unsigned int bh;
struct ip6_flowlabel *fl;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
fl = __fl_lookup(net, label);
if (fl && !atomic_inc_not_zero(&fl->users))
fl = NULL;
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return fl;
}
@@ -240,22 +241,23 @@ static struct ip6_flowlabel *fl_intern(struct net *net,
struct ip6_flowlabel *fl6_sock_lookup(struct sock *sk, __be32 label)
{
+ unsigned int bh;
struct ipv6_fl_socklist *sfl;
struct ipv6_pinfo *np = inet6_sk(sk);
label &= IPV6_FLOWLABEL_MASK;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
for_each_sk_fl_rcu(np, sfl) {
struct ip6_flowlabel *fl = sfl->fl;
if (fl->label == label) {
fl->lastuse = jiffies;
atomic_inc(&fl->users);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return fl;
}
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return NULL;
}
EXPORT_SYMBOL_GPL(fl6_sock_lookup);
@@ -441,6 +443,7 @@ fl_create(struct net *net, struct sock *sk, struct in6_flowlabel_req *freq,
static int mem_check(struct sock *sk)
{
+ unsigned int bh;
struct ipv6_pinfo *np = inet6_sk(sk);
struct ipv6_fl_socklist *sfl;
int room = FL_MAX_SIZE - atomic_read(&fl_size);
@@ -449,10 +452,10 @@ static int mem_check(struct sock *sk)
if (room > FL_MAX_SIZE - FL_MAX_PER_SOCK)
return 0;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
for_each_sk_fl_rcu(np, sfl)
count++;
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
if (room <= 0 ||
((count >= FL_MAX_PER_SOCK ||
@@ -476,6 +479,7 @@ static inline void fl_link(struct ipv6_pinfo *np, struct ipv6_fl_socklist *sfl,
int ipv6_flowlabel_opt_get(struct sock *sk, struct in6_flowlabel_req *freq,
int flags)
{
+ unsigned int bh;
struct ipv6_pinfo *np = inet6_sk(sk);
struct ipv6_fl_socklist *sfl;
@@ -489,7 +493,7 @@ int ipv6_flowlabel_opt_get(struct sock *sk, struct in6_flowlabel_req *freq,
return 0;
}
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
for_each_sk_fl_rcu(np, sfl) {
if (sfl->fl->label == (np->flow_label & IPV6_FLOWLABEL_MASK)) {
@@ -501,17 +505,18 @@ int ipv6_flowlabel_opt_get(struct sock *sk, struct in6_flowlabel_req *freq,
freq->flr_linger = sfl->fl->linger / HZ;
spin_unlock_bh(&ip6_fl_lock);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return 0;
}
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return -ENOENT;
}
int ipv6_flowlabel_opt(struct sock *sk, char __user *optval, int optlen)
{
+ unsigned int bh;
int uninitialized_var(err);
struct net *net = sock_net(sk);
struct ipv6_pinfo *np = inet6_sk(sk);
@@ -558,15 +563,15 @@ int ipv6_flowlabel_opt(struct sock *sk, char __user *optval, int optlen)
return -ESRCH;
case IPV6_FL_A_RENEW:
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
for_each_sk_fl_rcu(np, sfl) {
if (sfl->fl->label == freq.flr_label) {
err = fl6_renew(sfl->fl, freq.flr_linger, freq.flr_expires);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return err;
}
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
if (freq.flr_share == IPV6_FL_S_NONE &&
ns_capable(net->user_ns, CAP_NET_ADMIN)) {
@@ -608,11 +613,11 @@ int ipv6_flowlabel_opt(struct sock *sk, char __user *optval, int optlen)
if (freq.flr_label) {
err = -EEXIST;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
for_each_sk_fl_rcu(np, sfl) {
if (sfl->fl->label == freq.flr_label) {
if (freq.flr_flags&IPV6_FL_F_EXCL) {
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
goto done;
}
fl1 = sfl->fl;
@@ -620,7 +625,7 @@ int ipv6_flowlabel_opt(struct sock *sk, char __user *optval, int optlen)
break;
}
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
if (!fl1)
fl1 = fl_lookup(net, freq.flr_label);
@@ -695,6 +700,7 @@ int ipv6_flowlabel_opt(struct sock *sk, char __user *optval, int optlen)
struct ip6fl_iter_state {
struct seq_net_private p;
struct pid_namespace *pid_ns;
+ unsigned int bh;
int bucket;
};
@@ -757,7 +763,7 @@ static void *ip6fl_seq_start(struct seq_file *seq, loff_t *pos)
state->pid_ns = proc_pid_ns(file_inode(seq->file));
- rcu_read_lock_bh();
+ state->bh = rcu_read_lock_bh();
return *pos ? ip6fl_get_idx(seq, *pos - 1) : SEQ_START_TOKEN;
}
@@ -776,7 +782,8 @@ static void *ip6fl_seq_next(struct seq_file *seq, void *v, loff_t *pos)
static void ip6fl_seq_stop(struct seq_file *seq, void *v)
__releases(RCU)
{
- rcu_read_unlock_bh();
+ struct ip6fl_iter_state *state = ip6fl_seq_private(seq);
+ rcu_read_unlock_bh(state->bh);
}
static int ip6fl_seq_show(struct seq_file *seq, void *v)
diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
index f9f8f554..93d49a9 100644
--- a/net/ipv6/ip6_output.c
+++ b/net/ipv6/ip6_output.c
@@ -61,6 +61,7 @@
static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *skb)
{
+ unsigned int bh;
struct dst_entry *dst = skb_dst(skb);
struct net_device *dev = dst->dev;
struct neighbour *neigh;
@@ -110,7 +111,7 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
return res;
}
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
nexthop = rt6_nexthop((struct rt6_info *)dst, &ipv6_hdr(skb)->daddr);
neigh = __ipv6_neigh_lookup_noref(dst->dev, nexthop);
if (unlikely(!neigh))
@@ -118,10 +119,10 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
if (!IS_ERR(neigh)) {
sock_confirm_neigh(skb, neigh);
ret = neigh_output(neigh, skb);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return ret;
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES);
kfree_skb(skb);
@@ -924,6 +925,7 @@ static struct dst_entry *ip6_sk_dst_check(struct sock *sk,
static int ip6_dst_lookup_tail(struct net *net, const struct sock *sk,
struct dst_entry **dst, struct flowi6 *fl6)
{
+ unsigned int bh;
#ifdef CONFIG_IPV6_OPTIMISTIC_DAD
struct neighbour *n;
struct rt6_info *rt;
@@ -989,11 +991,11 @@ static int ip6_dst_lookup_tail(struct net *net, const struct sock *sk,
* dst entry of the nexthop router
*/
rt = (struct rt6_info *) *dst;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
n = __ipv6_neigh_lookup_noref(rt->dst.dev,
rt6_nexthop(rt, &fl6->daddr));
err = n && !(n->nud_state & NUD_VALID) ? -EINVAL : 0;
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
if (err) {
struct inet6_ifaddr *ifp;
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index 480a79f..ce653f9 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -517,6 +517,7 @@ static void rt6_probe_deferred(struct work_struct *w)
static void rt6_probe(struct fib6_info *rt)
{
+ unsigned int bh;
struct __rt6_probe_work *work;
const struct in6_addr *nh_gw;
struct neighbour *neigh;
@@ -535,7 +536,7 @@ static void rt6_probe(struct fib6_info *rt)
nh_gw = &rt->fib6_nh.nh_gw;
dev = rt->fib6_nh.nh_dev;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
neigh = __ipv6_neigh_lookup_noref(dev, nh_gw);
if (neigh) {
struct inet6_dev *idev;
@@ -567,7 +568,7 @@ static void rt6_probe(struct fib6_info *rt)
}
out:
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
}
#else
static inline void rt6_probe(struct fib6_info *rt)
@@ -589,6 +590,7 @@ static inline int rt6_check_dev(struct fib6_info *rt, int oif)
static inline enum rt6_nud_state rt6_check_neigh(struct fib6_info *rt)
{
+ unsigned int bh;
enum rt6_nud_state ret = RT6_NUD_FAIL_HARD;
struct neighbour *neigh;
@@ -596,7 +598,7 @@ static inline enum rt6_nud_state rt6_check_neigh(struct fib6_info *rt)
!(rt->fib6_flags & RTF_GATEWAY))
return RT6_NUD_SUCCEED;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
neigh = __ipv6_neigh_lookup_noref(rt->fib6_nh.nh_dev,
&rt->fib6_nh.nh_gw);
if (neigh) {
@@ -614,7 +616,7 @@ static inline enum rt6_nud_state rt6_check_neigh(struct fib6_info *rt)
ret = IS_ENABLED(CONFIG_IPV6_ROUTER_PREF) ?
RT6_NUD_SUCCEED : RT6_NUD_FAIL_DO_RR;
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return ret;
}
@@ -1788,6 +1790,7 @@ void rt6_age_exceptions(struct fib6_info *rt,
struct fib6_gc_args *gc_args,
unsigned long now)
{
+ unsigned int bh;
struct rt6_exception_bucket *bucket;
struct rt6_exception *rt6_ex;
struct hlist_node *tmp;
@@ -1796,7 +1799,7 @@ void rt6_age_exceptions(struct fib6_info *rt,
if (!rcu_access_pointer(rt->rt6i_exception_bucket))
return;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
spin_lock(&rt6_exception_lock);
bucket = rcu_dereference_protected(rt->rt6i_exception_bucket,
lockdep_is_held(&rt6_exception_lock));
@@ -1812,7 +1815,7 @@ void rt6_age_exceptions(struct fib6_info *rt,
}
}
spin_unlock(&rt6_exception_lock);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
}
/* must be called with rcu lock held */
diff --git a/net/ipv6/xfrm6_tunnel.c b/net/ipv6/xfrm6_tunnel.c
index 4a46df8..1989703 100644
--- a/net/ipv6/xfrm6_tunnel.c
+++ b/net/ipv6/xfrm6_tunnel.c
@@ -101,13 +101,14 @@ static struct xfrm6_tunnel_spi *__xfrm6_tunnel_spi_lookup(struct net *net, const
__be32 xfrm6_tunnel_spi_lookup(struct net *net, const xfrm_address_t *saddr)
{
+ unsigned int bh;
struct xfrm6_tunnel_spi *x6spi;
u32 spi;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
x6spi = __xfrm6_tunnel_spi_lookup(net, saddr);
spi = x6spi ? x6spi->spi : 0;
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return htonl(spi);
}
EXPORT_SYMBOL(xfrm6_tunnel_spi_lookup);
diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
index 82cdf90..f8515be 100644
--- a/net/l2tp/l2tp_core.c
+++ b/net/l2tp/l2tp_core.c
@@ -165,19 +165,20 @@ EXPORT_SYMBOL(l2tp_tunnel_free);
/* Lookup a tunnel. A new reference is held on the returned tunnel. */
struct l2tp_tunnel *l2tp_tunnel_get(const struct net *net, u32 tunnel_id)
{
+ unsigned int bh;
const struct l2tp_net *pn = l2tp_pernet(net);
struct l2tp_tunnel *tunnel;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
if (tunnel->tunnel_id == tunnel_id) {
l2tp_tunnel_inc_refcount(tunnel);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return tunnel;
}
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return NULL;
}
@@ -185,19 +186,20 @@ EXPORT_SYMBOL_GPL(l2tp_tunnel_get);
struct l2tp_tunnel *l2tp_tunnel_get_nth(const struct net *net, int nth)
{
+ unsigned int bh;
const struct l2tp_net *pn = l2tp_pernet(net);
struct l2tp_tunnel *tunnel;
int count = 0;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
if (++count > nth) {
l2tp_tunnel_inc_refcount(tunnel);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return tunnel;
}
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return NULL;
}
@@ -227,20 +229,21 @@ EXPORT_SYMBOL_GPL(l2tp_tunnel_get_session);
struct l2tp_session *l2tp_session_get(const struct net *net, u32 session_id)
{
+ unsigned int bh;
struct hlist_head *session_list;
struct l2tp_session *session;
session_list = l2tp_session_id_hash_2(l2tp_pernet(net), session_id);
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
hlist_for_each_entry_rcu(session, session_list, global_hlist)
if (session->session_id == session_id) {
l2tp_session_inc_refcount(session);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return session;
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return NULL;
}
@@ -275,23 +278,24 @@ EXPORT_SYMBOL_GPL(l2tp_session_get_nth);
struct l2tp_session *l2tp_session_get_by_ifname(const struct net *net,
const char *ifname)
{
+ unsigned int bh;
struct l2tp_net *pn = l2tp_pernet(net);
int hash;
struct l2tp_session *session;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
for (hash = 0; hash < L2TP_HASH_SIZE_2; hash++) {
hlist_for_each_entry_rcu(session, &pn->l2tp_session_hlist[hash], global_hlist) {
if (!strcmp(session->ifname, ifname)) {
l2tp_session_inc_refcount(session);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return session;
}
}
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return NULL;
}
@@ -1723,15 +1727,16 @@ static __net_init int l2tp_init_net(struct net *net)
static __net_exit void l2tp_exit_net(struct net *net)
{
+ unsigned int bh;
struct l2tp_net *pn = l2tp_pernet(net);
struct l2tp_tunnel *tunnel = NULL;
int hash;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
l2tp_tunnel_delete(tunnel);
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
flush_workqueue(l2tp_wq);
rcu_barrier();
diff --git a/net/llc/llc_core.c b/net/llc/llc_core.c
index 260b3dc..9270e93 100644
--- a/net/llc/llc_core.c
+++ b/net/llc/llc_core.c
@@ -69,13 +69,14 @@ static struct llc_sap *__llc_sap_find(unsigned char sap_value)
*/
struct llc_sap *llc_sap_find(unsigned char sap_value)
{
+ unsigned int bh;
struct llc_sap *sap;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
sap = __llc_sap_find(sap_value);
if (!sap || !llc_sap_hold_safe(sap))
sap = NULL;
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return sap;
}
diff --git a/net/llc/llc_proc.c b/net/llc/llc_proc.c
index f3a36c1..6f068c8 100644
--- a/net/llc/llc_proc.c
+++ b/net/llc/llc_proc.c
@@ -59,8 +59,9 @@ static struct sock *llc_get_sk_idx(loff_t pos)
static void *llc_seq_start(struct seq_file *seq, loff_t *pos)
{
loff_t l = *pos;
+ unsigned int *bh = seq->private;
- rcu_read_lock_bh();
+ *bh = rcu_read_lock_bh();
return l ? llc_get_sk_idx(--l) : SEQ_START_TOKEN;
}
@@ -113,6 +114,8 @@ static void *llc_seq_next(struct seq_file *seq, void *v, loff_t *pos)
static void llc_seq_stop(struct seq_file *seq, void *v)
{
+ unsigned int *bh = seq->private;
+
if (v && v != SEQ_START_TOKEN) {
struct sock *sk = v;
struct llc_sock *llc = llc_sk(sk);
@@ -120,7 +123,7 @@ static void llc_seq_stop(struct seq_file *seq, void *v)
spin_unlock_bh(&sap->sk_lock);
}
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(*bh);
}
static int llc_seq_socket_show(struct seq_file *seq, void *v)
@@ -225,11 +228,13 @@ int __init llc_proc_init(void)
if (!llc_proc_dir)
goto out;
- p = proc_create_seq("socket", 0444, llc_proc_dir, &llc_seq_socket_ops);
+ p = proc_create_seq_private("socket", 0444, llc_proc_dir,
+ &llc_seq_socket_ops, sizeof(unsigned int), NULL);
if (!p)
goto out_socket;
- p = proc_create_seq("core", 0444, llc_proc_dir, &llc_seq_core_ops);
+ p = proc_create_seq_private("core", 0444, llc_proc_dir,
+ &llc_seq_core_ops, sizeof(unsigned int), NULL);
if (!p)
goto out_core;
diff --git a/net/llc/llc_sap.c b/net/llc/llc_sap.c
index a7f7b8f..d8dfcca 100644
--- a/net/llc/llc_sap.c
+++ b/net/llc/llc_sap.c
@@ -319,12 +319,13 @@ static inline bool llc_dgram_match(const struct llc_sap *sap,
static struct sock *llc_lookup_dgram(struct llc_sap *sap,
const struct llc_addr *laddr)
{
+ unsigned int bh;
struct sock *rc;
struct hlist_nulls_node *node;
int slot = llc_sk_laddr_hashfn(sap, laddr);
struct hlist_nulls_head *laddr_hb = &sap->sk_laddr_hash[slot];
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
again:
sk_nulls_for_each_rcu(rc, node, laddr_hb) {
if (llc_dgram_match(sap, laddr, rc)) {
@@ -348,7 +349,7 @@ static struct sock *llc_lookup_dgram(struct llc_sap *sap,
if (unlikely(get_nulls_value(node) != slot))
goto again;
found:
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return rc;
}
diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
index bc4bd247..bc93d4d 100644
--- a/net/netfilter/ipset/ip_set_core.c
+++ b/net/netfilter/ipset/ip_set_core.c
@@ -560,6 +560,7 @@ int
ip_set_test(ip_set_id_t index, const struct sk_buff *skb,
const struct xt_action_param *par, struct ip_set_adt_opt *opt)
{
+ unsigned int bh;
struct ip_set *set = ip_set_rcu_get(xt_net(par), index);
int ret = 0;
@@ -570,9 +571,9 @@ ip_set_test(ip_set_id_t index, const struct sk_buff *skb,
!(opt->family == set->family || set->family == NFPROTO_UNSPEC))
return 0;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
ret = set->variant->kadt(set, skb, par, IPSET_TEST, opt);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
if (ret == -EAGAIN) {
/* Type requests element to be completed */
@@ -1659,6 +1660,7 @@ static int ip_set_utest(struct net *net, struct sock *ctnl, struct sk_buff *skb,
const struct nlattr * const attr[],
struct netlink_ext_ack *extack)
{
+ unsigned int bh;
struct ip_set_net *inst = ip_set_pernet(net);
struct ip_set *set;
struct nlattr *tb[IPSET_ATTR_ADT_MAX + 1] = {};
@@ -1678,9 +1680,9 @@ static int ip_set_utest(struct net *net, struct sock *ctnl, struct sk_buff *skb,
set->type->adt_policy, NULL))
return -IPSET_ERR_PROTOCOL;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
ret = set->variant->uadt(set, tb, IPSET_TEST, NULL, 0, 0);
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
/* Userspace can't trigger element to be re-added */
if (ret == -EAGAIN)
ret = 1;
diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
index 8a33dac..7c31c5e 100644
--- a/net/netfilter/ipset/ip_set_hash_gen.h
+++ b/net/netfilter/ipset/ip_set_hash_gen.h
@@ -548,6 +548,7 @@ mtype_gc(struct timer_list *t)
static int
mtype_resize(struct ip_set *set, bool retried)
{
+ unsigned int bh;
struct htype *h = set->data;
struct htable *t, *orig;
u8 htable_bits;
@@ -567,10 +568,10 @@ mtype_resize(struct ip_set *set, bool retried)
if (!tmp)
return -ENOMEM;
#endif
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
orig = rcu_dereference_bh_nfnl(h->table);
htable_bits = orig->htable_bits;
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
retry:
ret = 0;
@@ -1033,6 +1034,7 @@ mtype_test(struct ip_set *set, void *value, const struct ip_set_ext *ext,
static int
mtype_head(struct ip_set *set, struct sk_buff *skb)
{
+ unsigned int bh;
struct htype *h = set->data;
const struct htable *t;
struct nlattr *nested;
@@ -1051,11 +1053,11 @@ mtype_head(struct ip_set *set, struct sk_buff *skb)
spin_unlock_bh(&set->lock);
}
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
t = rcu_dereference_bh_nfnl(h->table);
memsize = mtype_ahash_memsize(h, t) + set->ext_size;
htable_bits = t->htable_bits;
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
nested = ipset_nest_start(skb, IPSET_ATTR_DATA);
if (!nested)
@@ -1090,15 +1092,16 @@ mtype_head(struct ip_set *set, struct sk_buff *skb)
static void
mtype_uref(struct ip_set *set, struct netlink_callback *cb, bool start)
{
+ unsigned int bh;
struct htype *h = set->data;
struct htable *t;
if (start) {
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
t = rcu_dereference_bh_nfnl(h->table);
atomic_inc(&t->uref);
cb->args[IPSET_CB_PRIVATE] = (unsigned long)t;
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
} else if (cb->args[IPSET_CB_PRIVATE]) {
t = (struct htable *)cb->args[IPSET_CB_PRIVATE];
if (atomic_dec_and_test(&t->uref) && atomic_read(&t->ref)) {
diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter/nfnetlink_log.c
index 332c69d..05f6c44 100644
--- a/net/netfilter/nfnetlink_log.c
+++ b/net/netfilter/nfnetlink_log.c
@@ -123,13 +123,14 @@ instance_get(struct nfulnl_instance *inst)
static struct nfulnl_instance *
instance_lookup_get(struct nfnl_log_net *log, u_int16_t group_num)
{
+ unsigned int bh;
struct nfulnl_instance *inst;
- rcu_read_lock_bh();
+ bh = rcu_read_lock_bh();
inst = __instance_lookup(log, group_num);
if (inst && !refcount_inc_not_zero(&inst->use))
inst = NULL;
- rcu_read_unlock_bh();
+ rcu_read_unlock_bh(bh);
return inst;
}
@@ -955,6 +956,7 @@ static const struct nfnetlink_subsystem nfulnl_subsys = {
#ifdef CONFIG_PROC_FS
struct iter_state {
+ unsigned int bh;
struct seq_net_private p;
unsigned int bucket;
};
@@ -1009,8 +1011,11 @@ static struct hlist_node *get_idx(struct net *net, struct iter_state *st,
static void *seq_start(struct seq_file *s, loff_t *pos)
__acquires(rcu_bh)
{
- rcu_read_lock_bh();
- return get_idx(seq_file_net(s), s->private, *pos);
+ struct iter_state *st = s->private;
+
+ st->bh = rcu_read_lock_bh();
+
+ return get_idx(seq_file_net(s), st, *pos);
}
static void *seq_next(struct seq_file *s, void *v, loff_t *pos)
@@ -1022,7 +1027,9 @@ static void *seq_next(struct seq_file *s, void *v, loff_t *pos)
static void seq_stop(struct seq_file *s, void *v)
__releases(rcu_bh)
{
- rcu_read_unlock_bh();
+ struct iter_state *st = s->private;
+
+ rcu_read_unlock_bh(st->bh);
}
static int seq_show(struct seq_file *s, void *v)
--
2.7.4
From: Frederic Weisbecker <[email protected]>
Disable a vector while it is being processed. This prepares for softirq
re-entrancy with an obvious single constraint: a vector can't be
interrupted by itself.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
kernel/softirq.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 3efa59e..457bf60 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -357,7 +357,10 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
kstat_incr_softirqs_this_cpu(vec_nr);
trace_softirq_entry(vec_nr);
+ softirq_enabled_nand(BIT(vec_nr));
+ barrier();
h->action(h);
+ softirq_enabled_or(BIT(vec_nr));
trace_softirq_exit(vec_nr);
if (unlikely(prev_count != preempt_count())) {
pr_err("huh, entered softirq %u %s %p with preempt_count %08x, exited with %08x?\n",
--
2.7.4
We are going to extend the softirq bits with an enabled vector mask.
Provide the field with a more generic name so that we can later lay out the
pending states on the lower bits and the enabled states on the higher bits.
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
---
arch/arm/include/asm/hardirq.h | 2 +-
arch/arm64/include/asm/hardirq.h | 2 +-
arch/h8300/kernel/asm-offsets.c | 2 +-
arch/ia64/include/asm/hardirq.h | 2 +-
arch/ia64/include/asm/processor.h | 2 +-
arch/m68k/include/asm/hardirq.h | 2 +-
arch/m68k/kernel/asm-offsets.c | 2 +-
arch/parisc/include/asm/hardirq.h | 2 +-
arch/powerpc/include/asm/hardirq.h | 2 +-
arch/s390/include/asm/hardirq.h | 6 +++---
arch/sh/include/asm/hardirq.h | 2 +-
arch/sparc/include/asm/cpudata_64.h | 2 +-
arch/sparc/include/asm/hardirq_64.h | 4 ++--
arch/um/include/asm/hardirq.h | 2 +-
arch/x86/include/asm/hardirq.h | 2 +-
include/asm-generic/hardirq.h | 2 +-
include/linux/interrupt.h | 10 +++++-----
17 files changed, 24 insertions(+), 24 deletions(-)
diff --git a/arch/arm/include/asm/hardirq.h b/arch/arm/include/asm/hardirq.h
index cba23ea..e5b06dd 100644
--- a/arch/arm/include/asm/hardirq.h
+++ b/arch/arm/include/asm/hardirq.h
@@ -9,7 +9,7 @@
#define NR_IPI 7
typedef struct {
- unsigned int __softirq_pending;
+ unsigned int __softirq_data;
#ifdef CONFIG_SMP
unsigned int ipi_irqs[NR_IPI];
#endif
diff --git a/arch/arm64/include/asm/hardirq.h b/arch/arm64/include/asm/hardirq.h
index 1473fc2..e9add887 100644
--- a/arch/arm64/include/asm/hardirq.h
+++ b/arch/arm64/include/asm/hardirq.h
@@ -23,7 +23,7 @@
#define NR_IPI 7
typedef struct {
- unsigned int __softirq_pending;
+ unsigned int __softirq_data;
unsigned int ipi_irqs[NR_IPI];
} ____cacheline_aligned irq_cpustat_t;
diff --git a/arch/h8300/kernel/asm-offsets.c b/arch/h8300/kernel/asm-offsets.c
index 85e6050..719d4cf 100644
--- a/arch/h8300/kernel/asm-offsets.c
+++ b/arch/h8300/kernel/asm-offsets.c
@@ -32,7 +32,7 @@ int main(void)
/* offsets into the irq_cpustat_t struct */
DEFINE(CPUSTAT_SOFTIRQ_PENDING, offsetof(irq_cpustat_t,
- __softirq_pending));
+ __softirq_data));
/* offsets into the thread struct */
OFFSET(THREAD_KSP, thread_struct, ksp);
diff --git a/arch/ia64/include/asm/hardirq.h b/arch/ia64/include/asm/hardirq.h
index ccde7c2..004f609 100644
--- a/arch/ia64/include/asm/hardirq.h
+++ b/arch/ia64/include/asm/hardirq.h
@@ -13,7 +13,7 @@
#define __ARCH_IRQ_STAT 1
-#define local_softirq_pending_ref ia64_cpu_info.softirq_pending
+#define local_softirq_data_ref ia64_cpu_info.softirq_data
#include <linux/threads.h>
#include <linux/irq.h>
diff --git a/arch/ia64/include/asm/processor.h b/arch/ia64/include/asm/processor.h
index 10061ccf..ea29e19 100644
--- a/arch/ia64/include/asm/processor.h
+++ b/arch/ia64/include/asm/processor.h
@@ -188,7 +188,7 @@ union ia64_rr {
* state comes earlier:
*/
struct cpuinfo_ia64 {
- unsigned int softirq_pending;
+ unsigned int softirq_data;
unsigned long itm_delta; /* # of clock cycles between clock ticks */
unsigned long itm_next; /* interval timer mask value to use for next clock tick */
unsigned long nsec_per_cyc; /* (1000000000<<IA64_NSEC_PER_CYC_SHIFT)/itc_freq */
diff --git a/arch/m68k/include/asm/hardirq.h b/arch/m68k/include/asm/hardirq.h
index 1179316..6ad364c 100644
--- a/arch/m68k/include/asm/hardirq.h
+++ b/arch/m68k/include/asm/hardirq.h
@@ -15,7 +15,7 @@ static inline void ack_bad_irq(unsigned int irq)
/* entry.S is sensitive to the offsets of these fields */
typedef struct {
- unsigned int __softirq_pending;
+ unsigned int __softirq_data;
} ____cacheline_aligned irq_cpustat_t;
#include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
diff --git a/arch/m68k/kernel/asm-offsets.c b/arch/m68k/kernel/asm-offsets.c
index ccea355..93b6bea 100644
--- a/arch/m68k/kernel/asm-offsets.c
+++ b/arch/m68k/kernel/asm-offsets.c
@@ -64,7 +64,7 @@ int main(void)
#endif
/* offsets into the irq_cpustat_t struct */
- DEFINE(CPUSTAT_SOFTIRQ_PENDING, offsetof(irq_cpustat_t, __softirq_pending));
+ DEFINE(CPUSTAT_SOFTIRQ_PENDING, offsetof(irq_cpustat_t, __softirq_data));
/* signal defines */
DEFINE(LSIGSEGV, SIGSEGV);
diff --git a/arch/parisc/include/asm/hardirq.h b/arch/parisc/include/asm/hardirq.h
index 1a1235a..28d8cee 100644
--- a/arch/parisc/include/asm/hardirq.h
+++ b/arch/parisc/include/asm/hardirq.h
@@ -17,7 +17,7 @@
#endif
typedef struct {
- unsigned int __softirq_pending;
+ unsigned int __softirq_data;
unsigned int kernel_stack_usage;
unsigned int irq_stack_usage;
#ifdef CONFIG_SMP
diff --git a/arch/powerpc/include/asm/hardirq.h b/arch/powerpc/include/asm/hardirq.h
index f1e9067..d3a896b 100644
--- a/arch/powerpc/include/asm/hardirq.h
+++ b/arch/powerpc/include/asm/hardirq.h
@@ -6,7 +6,7 @@
#include <linux/irq.h>
typedef struct {
- unsigned int __softirq_pending;
+ unsigned int __softirq_data;
unsigned int timer_irqs_event;
unsigned int broadcast_irqs_event;
unsigned int timer_irqs_others;
diff --git a/arch/s390/include/asm/hardirq.h b/arch/s390/include/asm/hardirq.h
index dfbc3c6c0..e26325f 100644
--- a/arch/s390/include/asm/hardirq.h
+++ b/arch/s390/include/asm/hardirq.h
@@ -13,9 +13,9 @@
#include <asm/lowcore.h>
-#define local_softirq_pending() (S390_lowcore.softirq_pending)
-#define set_softirq_pending(x) (S390_lowcore.softirq_pending = (x))
-#define or_softirq_pending(x) (S390_lowcore.softirq_pending |= (x))
+#define local_softirq_pending() (S390_lowcore.softirq_data)
+#define set_softirq_pending(x) (S390_lowcore.softirq_data = (x))
+#define or_softirq_pending(x) (S390_lowcore.softirq_data |= (x))
#define __ARCH_IRQ_STAT
#define __ARCH_HAS_DO_SOFTIRQ
diff --git a/arch/sh/include/asm/hardirq.h b/arch/sh/include/asm/hardirq.h
index edaea35..e364a63 100644
--- a/arch/sh/include/asm/hardirq.h
+++ b/arch/sh/include/asm/hardirq.h
@@ -6,7 +6,7 @@
#include <linux/irq.h>
typedef struct {
- unsigned int __softirq_pending;
+ unsigned int __softirq_data;
unsigned int __nmi_count; /* arch dependent */
} ____cacheline_aligned irq_cpustat_t;
diff --git a/arch/sparc/include/asm/cpudata_64.h b/arch/sparc/include/asm/cpudata_64.h
index 666d6b5..ff6d5f2 100644
--- a/arch/sparc/include/asm/cpudata_64.h
+++ b/arch/sparc/include/asm/cpudata_64.h
@@ -11,7 +11,7 @@
typedef struct {
/* Dcache line 1 */
- unsigned int __softirq_pending; /* must be 1st, see rtrap.S */
+ unsigned int __softirq_data; /* must be 1st, see rtrap.S */
unsigned int __nmi_count;
unsigned long clock_tick; /* %tick's per second */
unsigned long __pad;
diff --git a/arch/sparc/include/asm/hardirq_64.h b/arch/sparc/include/asm/hardirq_64.h
index 75b92bf..8ff0458 100644
--- a/arch/sparc/include/asm/hardirq_64.h
+++ b/arch/sparc/include/asm/hardirq_64.h
@@ -11,8 +11,8 @@
#define __ARCH_IRQ_STAT
-#define local_softirq_pending_ref \
- __cpu_data.__softirq_pending
+#define local_softirq_data_ref \
+ __cpu_data.__softirq_data
void ack_bad_irq(unsigned int irq);
diff --git a/arch/um/include/asm/hardirq.h b/arch/um/include/asm/hardirq.h
index b426796..9684493 100644
--- a/arch/um/include/asm/hardirq.h
+++ b/arch/um/include/asm/hardirq.h
@@ -6,7 +6,7 @@
#include <linux/threads.h>
typedef struct {
- unsigned int __softirq_pending;
+ unsigned int __softirq_data;
} ____cacheline_aligned irq_cpustat_t;
#include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
index a8e8e12..875f7de 100644
--- a/arch/x86/include/asm/hardirq.h
+++ b/arch/x86/include/asm/hardirq.h
@@ -5,7 +5,7 @@
#include <linux/threads.h>
typedef struct {
- unsigned int __softirq_pending;
+ unsigned int __softirq_data;
#if IS_ENABLED(CONFIG_KVM_INTEL)
u8 kvm_cpu_l1tf_flush_l1d;
#endif
diff --git a/include/asm-generic/hardirq.h b/include/asm-generic/hardirq.h
index d14214d..4ea87b5 100644
--- a/include/asm-generic/hardirq.h
+++ b/include/asm-generic/hardirq.h
@@ -6,7 +6,7 @@
#include <linux/threads.h>
typedef struct {
- unsigned int __softirq_pending;
+ unsigned int __softirq_data;
} ____cacheline_aligned irq_cpustat_t;
#include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index eeceac3..5888545 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -434,13 +434,13 @@ extern bool force_irqthreads;
#ifndef local_softirq_pending
-#ifndef local_softirq_pending_ref
-#define local_softirq_pending_ref irq_stat.__softirq_pending
+#ifndef local_softirq_data_ref
+#define local_softirq_data_ref irq_stat.__softirq_data
#endif
-#define local_softirq_pending() (__this_cpu_read(local_softirq_pending_ref))
-#define set_softirq_pending(x) (__this_cpu_write(local_softirq_pending_ref, (x)))
-#define or_softirq_pending(x) (__this_cpu_or(local_softirq_pending_ref, (x)))
+#define local_softirq_pending() (__this_cpu_read(local_softirq_data_ref))
+#define set_softirq_pending(x) (__this_cpu_write(local_softirq_data_ref, (x)))
+#define or_softirq_pending(x) (__this_cpu_or(local_softirq_data_ref, (x)))
#endif /* local_softirq_pending */
--
2.7.4
Hi Frederic,
On Thu, Oct 11, 2018 at 01:12:16AM +0200, Frederic Weisbecker wrote:
> From: Frederic Weisbecker <[email protected]>
>
> Make do_softirq() re-entrant and allow a vector, being either processed
> or disabled, to be interrupted by another vector. This way a vector
> won't be able to monopolize the CPU for a long while at the expense of
> the others that may rely on some predictable latency, especially on
> softirq disabled sections that used to disable all vectors.
>
I understand that a long-running softirq can be preempted/interrupted by
other softirqs, which is not possible today. I have a few questions about
your patches.
(1) When softirq processing is pushed to ksoftirqd, the long-running
softirq can still block other softirqs (those not in SOFTIRQ_NOW_MASK) for a
while, correct?
(2) When softirq processing happens asynchronously, a particular softirq
like TASKLET can keep interrupting an already running softirq like TIMER/NET_RX,
correct? In the worst case, a long-running softirq like NET_RX can interrupt
a TIMER softirq. But I guess this is expected with this design, i.e.
each softirq is independent and whichever comes most recently gets to interrupt
the previously running softirqs.
Thanks,
Pavan
--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
On Thu, Oct 11, 2018 at 01:11:59AM +0200, Frederic Weisbecker wrote:
> This pair of functions is implemented on top of local_bh_disable(), which
> is going to handle a softirq mask in order to apply fine-grained vector
> disablement. The lock function is going to return the previous vectors
> enabled mask prior to the last call to local_bh_disable(), following a
> similar model to that of local_irq_save/restore. Subsequent calls to
> local_bh_disable() and friends can then stack up:
>
> bh = local_bh_disable(vec_mask);
> bh2 = rcu_read_lock_bh() {
> bh2 = local_bh_disable(...)
> return bh2;
> }
> ...
> rcu_read_unlock_bh(bh2) {
> local_bh_enable(bh2);
> }
> local_bh_enable(bh);
>
> To prepare for that, make rcu_read_lock_bh() able to return a saved vector
> enabled mask and pass it back to rcu_read_unlock_bh(). We'll plug it
> to local_bh_disable() in a subsequent patch.
> Signed-off-by: Frederic Weisbecker <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Sebastian Andrzej Siewior <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Linus Torvalds <[email protected]>
> Cc: David S. Miller <[email protected]>
> Cc: Mauro Carvalho Chehab <[email protected]>
> Cc: Paul E. McKenney <[email protected]>
> ---
> crypto/pcrypt.c | 5 ++--
> drivers/infiniband/ulp/ipoib/ipoib_main.c | 5 ++--
> drivers/net/hyperv/rndis_filter.c | 5 ++--
> drivers/net/macsec.c | 12 +++++----
> drivers/net/vrf.c | 19 ++++++++------
> drivers/vhost/net.c | 5 ++--
> include/linux/rcupdate.h | 5 ++--
> include/net/arp.h | 10 ++++---
> include/net/ip6_fib.h | 1 +
> include/net/ndisc.h | 10 ++++---
> include/net/neighbour.h | 1 +
> kernel/padata.c | 5 ++--
> kernel/rcu/rcuperf.c | 2 +-
> kernel/rcu/rcutorture.c | 2 +-
> net/caif/caif_dev.c | 5 ++--
> net/core/dev.c | 7 ++---
> net/core/neighbour.c | 37 +++++++++++++++-----------
> net/core/pktgen.c | 5 ++--
> net/decnet/dn_route.c | 27 +++++++++++--------
> net/ipv4/fib_semantics.c | 5 ++--
> net/ipv4/ip_output.c | 7 ++---
> net/ipv4/netfilter/ipt_CLUSTERIP.c | 5 ++--
> net/ipv6/addrconf.c | 21 ++++++++-------
> net/ipv6/ip6_fib.c | 4 +--
> net/ipv6/ip6_flowlabel.c | 43 ++++++++++++++++++-------------
> net/ipv6/ip6_output.c | 12 +++++----
> net/ipv6/route.c | 15 ++++++-----
> net/ipv6/xfrm6_tunnel.c | 5 ++--
> net/l2tp/l2tp_core.c | 33 ++++++++++++++----------
> net/llc/llc_core.c | 5 ++--
> net/llc/llc_proc.c | 13 +++++++---
> net/llc/llc_sap.c | 5 ++--
> net/netfilter/ipset/ip_set_core.c | 10 ++++---
> net/netfilter/ipset/ip_set_hash_gen.h | 15 ++++++-----
> net/netfilter/nfnetlink_log.c | 17 ++++++++----
> 35 files changed, 229 insertions(+), 154 deletions(-)
>
> diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
> index f8ec3d4..490358c 100644
> --- a/crypto/pcrypt.c
> +++ b/crypto/pcrypt.c
> @@ -73,12 +73,13 @@ struct pcrypt_aead_ctx {
> static int pcrypt_do_parallel(struct padata_priv *padata, unsigned int *cb_cpu,
> struct padata_pcrypt *pcrypt)
> {
> + unsigned int bh;
> unsigned int cpu_index, cpu, i;
> struct pcrypt_cpumask *cpumask;
>
> cpu = *cb_cpu;
>
> - rcu_read_lock_bh();
> + bh = rcu_read_lock_bh();
> cpumask = rcu_dereference_bh(pcrypt->cb_cpumask);
> if (cpumask_test_cpu(cpu, cpumask->mask))
> goto out;
> @@ -95,7 +96,7 @@ static int pcrypt_do_parallel(struct padata_priv *padata, unsigned int *cb_cpu,
> *cb_cpu = cpu;
>
> out:
> - rcu_read_unlock_bh();
> + rcu_read_unlock_bh(bh);
> return padata_do_parallel(pcrypt->pinst, padata, cpu);
> }
This complicates the RCU API for -bh and doesn't look pretty at all. Is there
anything better we can do so we don't have to touch existing readers at all?
Also, I thought softirqs were kind of a thing of the past, and that threaded
interrupts are the preferred form of interrupt bottom half these days,
especially for -rt. Maybe that was just wishful thinking on my part :-)
thanks,
- Joel
On Thu, 11 Oct 2018 01:11:47 +0200
Frederic Weisbecker <[email protected]> wrote:
> 945 files changed, 13857 insertions(+), 9767 deletions(-)
Impressive :)
I have to ask a dumb question, though. Might it not be better to add a
new set of functions like:
local_softirq_disable(mask);
spin_lock_softirq(lock, mask);
Then just define the existing functions to call the new ones with
SOFTIRQ_ALL_MASK? It would achieve something like the same result with
far less churn and conflict potential; then individual call sites could be
changed at leisure? For extra credit, somebody could do a checkpatch rule
to keep new calls to the _bh functions from being added.
Thanks,
jon
On Tue, Oct 16, 2018 at 04:03:59PM -0600, Jonathan Corbet wrote:
> I have to ask a dumb question, though. Might it not be better to add a
> new set of functions like:
>
> local_softirq_disable(mask);
> spin_lock_softirq(lock, mask);
>
> Then just define the existing functions to call the new ones with
> SOFTIRQ_ALL_MASK? It would achieve something like the same result with
> far less churn and conflict potential; then individual call sites could be
> changed at leisure?
I was thinking the exact same thing...
Thanks,
Richard
Hi Pavan,
On Tue, Oct 16, 2018 at 09:45:52AM +0530, Pavan Kondeti wrote:
> Hi Frederic,
>
> On Thu, Oct 11, 2018 at 01:12:16AM +0200, Frederic Weisbecker wrote:
> > From: Frederic Weisbecker <[email protected]>
> >
> > Make do_softirq() re-entrant and allow a vector, being either processed
> > or disabled, to be interrupted by another vector. This way a vector
> > won't be able to monopolize the CPU for a long while at the expense of
> > the others that may rely on some predictable latency, especially on
> > softirq disabled sections that used to disable all vectors.
> >
> I understand that a long running softirq can be preempted/interrupted by
> other softirqs which is not possible today. I have few questions on your
> patches.
>
> (1) When softirq processing is pushed to ksoftirqd, then the long running
> softirq can still block other softirqs (not in SOFTIRQ_NOW_MASK) for a while.
> correct?
No, Ksoftirqd is treated the same as IRQ tail processing here: a vector can
interrupt another. So for example, a NET_RX softirq running in Ksoftirqd can
be interrupted by a TIMER softirq running in hardirq tail.
>
> (2) When softirqs processing happens asynchronously, a particular softirq
> like TASKLET can keep interrupting an already running softirq like TIMER/NET_RX,
> correct? In worse case scenario, a long running softirq like NET_RX interrupt
> a TIMER softirq. But I guess this is something expected with this. i.e
> each softirq is independent and whichever comes recent gets to interrupt the
> previously running softirqs.
Exactly, and that's inherent to interrupts in general. The only way to work
around that is to thread each vector independently, but that's a whole different
dimension :-)
Thanks!
On Mon, Oct 15, 2018 at 10:28:44PM -0700, Joel Fernandes wrote:
> > diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
> > index f8ec3d4..490358c 100644
> > --- a/crypto/pcrypt.c
> > +++ b/crypto/pcrypt.c
> > @@ -73,12 +73,13 @@ struct pcrypt_aead_ctx {
> > static int pcrypt_do_parallel(struct padata_priv *padata, unsigned int *cb_cpu,
> > struct padata_pcrypt *pcrypt)
> > {
> > + unsigned int bh;
> > unsigned int cpu_index, cpu, i;
> > struct pcrypt_cpumask *cpumask;
> >
> > cpu = *cb_cpu;
> >
> > - rcu_read_lock_bh();
> > + bh = rcu_read_lock_bh();
> > cpumask = rcu_dereference_bh(pcrypt->cb_cpumask);
> > if (cpumask_test_cpu(cpu, cpumask->mask))
> > goto out;
> > @@ -95,7 +96,7 @@ static int pcrypt_do_parallel(struct padata_priv *padata, unsigned int *cb_cpu,
> > *cb_cpu = cpu;
> >
> > out:
> > - rcu_read_unlock_bh();
> > + rcu_read_unlock_bh(bh);
> > return padata_do_parallel(pcrypt->pinst, padata, cpu);
> > }
>
> This complicates the RCU API for -bh and doesn't look pretty at all. Is there
> anything better we can do so we don't have to touch existing readers at all?
Indeed, so I'm going to give up on the idea of converting all the callers
at once; it's unmaintainable anyway. I'll keep the RCU API as is for now,
i.e. disable all softirqs, and we'll see later if we need per vector granularity.
Surely that would be too much fun to handle, with per vector quiescent states and grace
periods ;-)
>
> Also, I thought softirqs were kind of a thing of the past, and threaded
> interrupts are the more preferred interrupt bottom halves these days,
> especially for -rt. Maybe that was just wishful thinking on my part :-)
We all wish that. I think it was the plan, but threaded IRQs involve context
switches, and IIUC that's a barrier that's hard to cross in some performance
measurements.
On Wed, Oct 17, 2018 at 02:44:19AM +0200, Frederic Weisbecker wrote:
> On Mon, Oct 15, 2018 at 10:28:44PM -0700, Joel Fernandes wrote:
> > > diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
> > > index f8ec3d4..490358c 100644
> > > --- a/crypto/pcrypt.c
> > > +++ b/crypto/pcrypt.c
> > > @@ -73,12 +73,13 @@ struct pcrypt_aead_ctx {
> > > static int pcrypt_do_parallel(struct padata_priv *padata, unsigned int *cb_cpu,
> > > struct padata_pcrypt *pcrypt)
> > > {
> > > + unsigned int bh;
> > > unsigned int cpu_index, cpu, i;
> > > struct pcrypt_cpumask *cpumask;
> > >
> > > cpu = *cb_cpu;
> > >
> > > - rcu_read_lock_bh();
> > > + bh = rcu_read_lock_bh();
> > > cpumask = rcu_dereference_bh(pcrypt->cb_cpumask);
> > > if (cpumask_test_cpu(cpu, cpumask->mask))
> > > goto out;
> > > @@ -95,7 +96,7 @@ static int pcrypt_do_parallel(struct padata_priv *padata, unsigned int *cb_cpu,
> > > *cb_cpu = cpu;
> > >
> > > out:
> > > - rcu_read_unlock_bh();
> > > + rcu_read_unlock_bh(bh);
> > > return padata_do_parallel(pcrypt->pinst, padata, cpu);
> > > }
> >
> > This complicates the RCU API for -bh and doesn't look pretty at all. Is there
> > anything better we can do so we don't have to touch existing readers at all?
>
> Indeed, so I'm going to give up with the idea of converting all the callers
> in once, this is unmaintainable anyway. I'll keep the RCU API as is for now,
> ie: disable all softirqs, and we'll see later if we need per vector granularity.
> Surely that would be too fun to handle, with per vector quiescent states and grace
> periods ;-)
Cool, sounds good.
> >
> > Also, I thought softirqs were kind of a thing of the past, and threaded
> > interrupts are the more preferred interrupt bottom halves these days,
> > especially for -rt. Maybe that was just wishful thinking on my part :-)
>
> We all wish that. I think it was the plan but threaded IRQs involve context
> switches and IIUC it's the border that's hard to cross on some performance
> measurements.
Ok, thanks.
- Joel
On Tue, Oct 16, 2018 at 04:03:59PM -0600, Jonathan Corbet wrote:
> On Thu, 11 Oct 2018 01:11:47 +0200
> Frederic Weisbecker <[email protected]> wrote:
>
> > 945 files changed, 13857 insertions(+), 9767 deletions(-)
>
> Impressive :)
In the wrong way :)
>
> I have to ask a dumb question, though. Might it not be better to add a
> new set of functions like:
>
> local_softirq_disable(mask);
> spin_lock_softirq(lock, mask);
>
> Then just define the existing functions to call the new ones with
> SOFTIRQ_ALL_MASK? It would achieve something like the same result with
> far less churn and conflict potential; then individual call sites could be
> changed at leisure? For extra credit, somebody could do a checkpatch rule
> to keep new calls to the _bh functions from being added.
So it's not a dumb question at all. That's in fact the core of the suggestions
I got while discussing this lately on IRC with the initial Cc list.
That's definitely the direction I'll take in v2: keep the current API,
introduce new ones with per vector granularity and convert call sites iteratively.
The diffstat will shrink tremendously and the change may eventually become
maintainable.
The reason I didn't take that approach in this version is a little technical
detail:
_ Serving softirqs is done under SOFTIRQ_OFFSET: (1 << SOFTIRQ_SHIFT)
_ Disabling softirqs is done under SOFTIRQ_OFFSET * 2
We do that to differentiate the two states. Serving softirqs can't nest, whereas
disabling softirqs can. So we just need to check whether the value is odd to
identify a serving-softirq state.
Now things are going to change, as serving softirqs will be able to nest too.
And having that saved bh state allowed me to make softirq disablement
non-nesting, so I just needed to invert the two accounting schemes. I wanted to
do that because otherwise we would need to share SOFTIRQ_MASK between two
counters, which leaves a maximum of 16 for each. That's enough for serving
softirqs, as they can't nest deeper than NR_SOFTIRQS, but the softirq
disablement depth was unpredictable, even though 16 levels is already insane
anyway...
There is an easy alternative though:
local_bh_enter()
{
	bool nesting = false;

	if (preempt_count() & SOFTIRQ_OFFSET)
		nesting = true;
	else
		preempt_count() |= SOFTIRQ_OFFSET;

	return nesting;
}

local_bh_exit(bool nesting)
{
	if (!nesting)
		preempt_count() &= ~SOFTIRQ_OFFSET;
}

do_softirq()
{
	bool nesting = local_bh_enter();

	// process softirqs ....

	local_bh_exit(nesting);
}
But I guess it was just too obvious for me to be considered :-S
Hi Frederic,
On Wed, Oct 17, 2018 at 02:26:02AM +0200, Frederic Weisbecker wrote:
> Hi Pavan,
>
> On Tue, Oct 16, 2018 at 09:45:52AM +0530, Pavan Kondeti wrote:
> > Hi Frederic,
> >
> > On Thu, Oct 11, 2018 at 01:12:16AM +0200, Frederic Weisbecker wrote:
> > > From: Frederic Weisbecker <[email protected]>
> > >
> > > Make do_softirq() re-entrant and allow a vector, being either processed
> > > or disabled, to be interrupted by another vector. This way a vector
> > > won't be able to monopolize the CPU for a long while at the expense of
> > > the others that may rely on some predictable latency, especially on
> > > softirq disabled sections that used to disable all vectors.
> > >
> > I understand that a long running softirq can be preempted/interrupted by
> > other softirqs which is not possible today. I have few questions on your
> > patches.
> >
> > (1) When softirq processing is pushed to ksoftirqd, then the long running
> > softirq can still block other softirqs (not in SOFTIRQ_NOW_MASK) for a while.
> > correct?
>
> No, Ksoftirqd is treated the same as IRQ tail processing here: a vector can
> interrupt another. So for example, a NET_RX softirq running in Ksoftirqd can
> be interrupted by a TIMER softirq running in hardirq tail.
>
When ksoftirqd is running, we only allow softirqs in SOFTIRQ_NOW_MASK
to run after serving an interrupt. So I don't see how TIMER, which is not
in SOFTIRQ_NOW_MASK, can interrupt a NET_RX softirq running in ksoftirqd
context.
> >
> > (2) When softirqs processing happens asynchronously, a particular softirq
> > like TASKLET can keep interrupting an already running softirq like TIMER/NET_RX,
> > correct? In worse case scenario, a long running softirq like NET_RX interrupt
> > a TIMER softirq. But I guess this is something expected with this. i.e
> > each softirq is independent and whichever comes recent gets to interrupt the
> > previously running softirqs.
>
> Exactly, and that's inherent with interrupts in general. The only way to work
> around that is to thread each vector independantly but that's a whole different
> dimension :-)
>
Right.
Assigning a thread to each vector also may not solve this problem, because
preemption would be disabled while a softirq vector is running in its own
thread.
I guess there are no hard priorities among softirq vectors. Earlier
it was first come, first served; now it is not. If we had priorities
defined (don't know how :-)), we could disable the lower-prio vectors while a
higher-prio vector is being handled. This way we could guarantee that the TIMER
softirq or HI-TASKLET won't be starved while a long-running softirq like
NET_RX/NET_TX/RCU is running.
Thanks,
Pavan
--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.