I am announcing the review cycle for the 3.5.7u1 release of stable patches.
This new release contains 270 patches. The proposed patches are posted as
responses to this message. The same patches are also available at the
following repository:
git://kernel.ubuntu.com/ubuntu/linux.git linux-3.5.y-review
If there are any problems, or if anything is missing, please respond to
this message or to any of the follow-up patches. Note that responses should
be sent within 2 days; after that, the final release of 3.5.7u1 will be
made.
Not everything was queued up for 3.5; due to the number of patches already
queued, I'll first proceed with this release.
For more information about the 3.5.y.z tree, take a look at
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable
See below for the diffstat and shortlog of this release.
-Herton
--
.../devicetree/bindings/arm/atmel-at91.txt | 2 +-
.../bindings/pinctrl/nvidia,tegra20-pinmux.txt | 2 +-
.../bindings/pinctrl/nvidia,tegra30-pinmux.txt | 2 +-
Documentation/hwmon/coretemp | 1 +
arch/arm/include/asm/hwcap.h | 3 +-
arch/arm/include/asm/vfpmacros.h | 12 +-
arch/arm/kernel/smp.c | 14 +-
arch/arm/mach-at91/at91rm9200_devices.c | 2 +-
arch/arm/mach-at91/at91sam9260_devices.c | 2 +-
arch/arm/mach-at91/at91sam9261_devices.c | 2 +-
arch/arm/mach-at91/at91sam9263_devices.c | 2 +-
arch/arm/mach-at91/at91sam9rl_devices.c | 2 +-
arch/arm/mach-at91/setup.c | 2 +-
arch/arm/plat-omap/counter_32k.c | 21 +-
arch/arm/vfp/vfpmodule.c | 9 +-
arch/mips/ath79/clock.c | 109 ++-
arch/mips/include/asm/mach-ath79/ar71xx_regs.h | 23 +
arch/mips/kernel/kgdb.c | 9 +
arch/powerpc/platforms/pseries/eeh_driver.c | 95 +-
arch/s390/boot/compressed/vmlinux.lds.S | 2 +-
arch/s390/kernel/vmlinux.lds.S | 2 +-
arch/tile/Makefile | 4 +
arch/x86/include/asm/efi.h | 1 +
arch/x86/kernel/e820.c | 3 +
arch/x86/kernel/entry_32.S | 8 +-
arch/x86/kernel/entry_64.S | 2 +-
arch/x86/kernel/setup.c | 30 +-
arch/x86/mm/init.c | 58 +-
arch/x86/mm/init_64.c | 7 +-
arch/x86/oprofile/nmi_int.c | 2 +-
arch/x86/platform/efi/efi.c | 43 +-
arch/x86/xen/enlighten.c | 18 +-
block/blk-core.c | 11 +-
block/blk-sysfs.c | 6 +
drivers/acpi/ec.c | 30 +-
drivers/bcma/main.c | 5 +-
drivers/cpufreq/powernow-k8.c | 9 +-
drivers/dma/dmaengine.c | 2 +-
drivers/dma/imx-dma.c | 4 +-
drivers/dma/sirf-dma.c | 4 +-
drivers/edac/amd64_edac.c | 11 +-
drivers/extcon/extcon_class.c | 7 +
drivers/firewire/core-cdev.c | 4 +-
drivers/gpu/drm/i915/i915_gem.c | 1 -
drivers/gpu/drm/i915/i915_reg.h | 2 +-
drivers/gpu/drm/i915/intel_display.c | 4 +-
drivers/gpu/drm/i915/intel_pm.c | 4 +-
drivers/gpu/drm/radeon/evergreen_cs.c | 1 +
drivers/gpu/drm/radeon/radeon_legacy_encoders.c | 6 +-
drivers/hv/channel.c | 24 +-
drivers/hwmon/coretemp.c | 7 +-
drivers/iommu/tegra-smmu.c | 2 +-
drivers/md/raid10.c | 2 +-
drivers/mfd/88pm860x-core.c | 89 +-
drivers/mmc/host/sdhci-s3c.c | 2 +-
drivers/mtd/nand/nand_base.c | 8 +-
drivers/net/ethernet/intel/e1000e/hw.h | 2 +
drivers/net/ethernet/intel/e1000e/netdev.c | 2 +
drivers/net/usb/cdc_ether.c | 41 +-
drivers/net/usb/qmi_wwan.c | 14 +
.../net/wireless/ath/ath9k/ar9003_2p2_initvals.h | 164 ++--
drivers/net/wireless/ath/ath9k/beacon.c | 2 +-
drivers/net/wireless/ath/ath9k/main.c | 2 +-
drivers/net/wireless/ath/ath9k/xmit.c | 53 +-
drivers/net/wireless/b43/main.c | 4 +
drivers/net/wireless/ipw2x00/ipw2200.c | 2 +-
drivers/net/wireless/iwlwifi/iwl-agn-devices.c | 39 +-
drivers/pcmcia/pxa2xx_sharpsl.c | 2 +-
drivers/pinctrl/core.c | 4 +-
drivers/pinctrl/pinconf.c | 4 -
drivers/pinctrl/pinctrl-tegra.c | 2 +-
drivers/pinctrl/pinctrl-tegra30.c | 24 +-
drivers/rtc/rtc-imxdi.c | 2 +
drivers/scsi/qla2xxx/qla_target.c | 2 +-
drivers/scsi/scsi_debug.c | 2 +-
drivers/scsi/storvsc_drv.c | 5 +
drivers/staging/android/binder.c | 30 +-
drivers/staging/comedi/drivers/amplc_pc236.c | 2 +-
drivers/target/iscsi/iscsi_target.c | 2 +-
drivers/target/iscsi/iscsi_target_core.h | 4 +-
drivers/target/iscsi/iscsi_target_tpg.c | 12 +
drivers/target/target_core_cdb.c | 48 +-
drivers/target/target_core_configfs.c | 8 +-
drivers/target/target_core_file.c | 41 +-
drivers/target/target_core_file.h | 1 +
drivers/tty/vt/vt.c | 13 +
drivers/usb/class/cdc-acm.c | 22 +-
drivers/usb/core/driver.c | 4 +
drivers/usb/core/hub.c | 40 +-
drivers/usb/dwc3/gadget.c | 1 +
drivers/usb/gadget/at91_udc.c | 2 +-
drivers/usb/host/pci-quirks.c | 9 +-
drivers/usb/host/xhci-ring.c | 11 +
drivers/usb/host/xhci.c | 7 +-
drivers/usb/musb/am35x.c | 6 +
drivers/usb/serial/mct_u232.c | 14 +-
drivers/usb/serial/metro-usb.c | 15 +-
drivers/usb/serial/mos7840.c | 30 +-
drivers/usb/serial/opticon.c | 11 +-
drivers/usb/serial/option.c | 84 +-
drivers/usb/serial/quatech2.c | 8 +
drivers/usb/serial/sierra.c | 26 +-
drivers/usb/serial/whiteheat.c | 1 +
drivers/usb/storage/unusual_devs.h | 6 +
drivers/vhost/net.c | 3 +-
drivers/video/udlfb.c | 2 +-
drivers/video/via/via_clock.c | 19 +
fs/autofs4/root.c | 6 +-
fs/ceph/addr.c | 11 +-
fs/ceph/debugfs.c | 1 +
fs/ceph/export.c | 20 +-
fs/ceph/mds_client.c | 13 +-
fs/compat_ioctl.c | 2 +
fs/ecryptfs/ecryptfs_kernel.h | 2 +
fs/ecryptfs/file.c | 100 +--
fs/ecryptfs/inode.c | 65 +-
fs/ecryptfs/main.c | 24 +-
fs/ecryptfs/mmap.c | 39 +-
fs/exec.c | 3 +-
fs/ext4/balloc.c | 8 +-
fs/ext4/bitmap.c | 6 +-
fs/ext4/ext4.h | 11 +-
fs/ext4/ext4_jbd2.c | 6 +-
fs/ext4/extents.c | 57 +-
fs/ext4/ialloc.c | 4 +-
fs/ext4/mballoc.c | 14 +-
fs/ext4/resize.c | 5 +-
fs/ext4/super.c | 7 +-
fs/gfs2/export.c | 4 +
fs/isofs/export.c | 2 +-
fs/jbd/commit.c | 45 +-
fs/jbd/transaction.c | 64 +-
fs/lockd/clntxdr.c | 2 +-
fs/lockd/mon.c | 4 +-
fs/lockd/svcproc.c | 3 +-
fs/nfs/blocklayout/blocklayout.c | 275 +++++-
fs/nfs/blocklayout/blocklayout.h | 1 +
fs/nfsd/nfs4idmap.c | 2 +-
fs/nfsd/nfs4state.c | 19 +-
fs/proc/stat.c | 14 +-
fs/reiserfs/inode.c | 6 +-
fs/sysfs/dir.c | 16 +-
fs/udf/super.c | 5 +-
fs/xfs/xfs_export.c | 3 +
include/drm/drm_pciids.h | 3 +
include/linux/ceph/libceph.h | 2 +-
include/linux/ceph/messenger.h | 60 +-
include/linux/ceph/mon_client.h | 2 +-
include/linux/ceph/msgpool.h | 3 +-
include/linux/ceph/osd_client.h | 2 +-
include/linux/ceph/osdmap.h | 6 +-
include/linux/efi.h | 5 +
include/linux/memblock.h | 1 +
include/linux/mtd/nand.h | 3 -
include/net/cfg80211.h | 1 +
include/net/netfilter/nf_conntrack_ecache.h | 1 +
init/main.c | 3 +
kernel/cgroup.c | 41 +-
kernel/debug/kdb/kdb_io.c | 33 +-
kernel/module.c | 4 +
kernel/sched/stop_task.c | 22 +-
kernel/sys.c | 12 +-
kernel/time/tick-sched.c | 1 -
kernel/time/timekeeping.c | 2 +-
kernel/timer.c | 10 +-
kernel/trace/ring_buffer.c | 4 +
lib/genalloc.c | 2 +-
mm/memblock.c | 24 +
mm/rmap.c | 20 +-
mm/shmem.c | 6 +-
net/bluetooth/smp.c | 6 +-
net/ceph/ceph_common.c | 21 +-
net/ceph/crypto.c | 1 +
net/ceph/crypto.h | 3 +-
net/ceph/debugfs.c | 4 +
net/ceph/messenger.c | 945 ++++++++++++--------
net/ceph/mon_client.c | 127 ++-
net/ceph/msgpool.c | 7 +-
net/ceph/osd_client.c | 100 ++-
net/ceph/osdmap.c | 38 +-
net/core/pktgen.c | 2 +-
net/core/skbuff.c | 6 +-
net/ipv4/netfilter/nf_nat_sip.c | 10 +-
net/mac80211/iface.c | 2 +-
net/mac80211/mlme.c | 5 +-
net/mac80211/sta_info.c | 4 +-
net/mac80211/status.c | 4 +-
net/mac80211/tx.c | 22 +-
net/mac80211/util.c | 4 +-
net/mac80211/wpa.c | 3 +-
net/netfilter/nf_conntrack_core.c | 16 +-
net/netfilter/nf_conntrack_expect.c | 29 +-
net/netfilter/nfnetlink_log.c | 2 +-
net/netfilter/xt_limit.c | 8 +-
net/sunrpc/cache.c | 4 +-
net/sunrpc/xprtsock.c | 62 +-
net/wireless/mlme.c | 12 +-
scripts/package/buildtar | 2 +-
sound/pci/hda/hda_codec.c | 10 +-
sound/pci/hda/hda_intel.c | 31 +-
sound/pci/hda/patch_cirrus.c | 6 +-
sound/pci/hda/patch_realtek.c | 39 +-
sound/pci/hda/patch_via.c | 4 +
sound/soc/codecs/wm2200.c | 3 +-
sound/soc/sh/fsi.c | 15 +-
usr/gen_init_cpio.c | 43 +-
206 files changed, 2831 insertions(+), 1504 deletions(-)
Alex Deucher (2):
drm/radeon: add some new SI PCI ids
drm/radeon: add error output if VM CS fails on cayman
Alex Elder (38):
libceph: eliminate connection state "DEAD"
libceph: kill bad_proto ceph connection op
libceph: rename socket callbacks
libceph: rename kvec_reset and kvec_add functions
libceph: embed ceph messenger structure in ceph_client
libceph: start separating connection flags from state
libceph: start tracking connection socket state
libceph: provide osd number when creating osd
libceph: set CLOSED state bit in con_init
libceph: embed ceph connection structure in mon_client
libceph: init monitor connection when opening
libceph: fully initialize connection in con_init()
libceph: tweak ceph_alloc_msg()
libceph: have messages point to their connection
libceph: have messages take a connection reference
libceph: make ceph_con_revoke() a msg operation
libceph: make ceph_con_revoke_message() a msg op
libceph: encapsulate out message data setup
libceph: encapsulate advancing msg page
libceph: don't mark footer complete before it is
libceph: move init_bio_*() functions up
libceph: move init of bio_iter
libceph: don't use bio_iter as a flag
libceph: SOCK_CLOSED is a flag, not a state
libceph: don't change socket state on sock event
libceph: just set SOCK_CLOSED when state changes
libceph: don't touch con state in con_close_socket()
libceph: clear CONNECTING in ceph_con_close()
libceph: clear NEGOTIATING when done
libceph: define and use an explicit CONNECTED state
libceph: separate banner and connect writes
libceph: distinguish two phases of connect sequence
libceph: small changes to messenger.c
libceph: add some fine ASCII art
libceph: only kunmap kmapped pages
rbd: reset BACKOFF if unable to re-queue
ceph: avoid 32-bit page index overflow
libceph: drop declaration of ceph_con_get()
Alexander Holler (1):
video/udlfb: fix line counting in fb_write
Alexis R. Cortes (1):
usb: host: xhci: New system added for Compliance Mode Patch on SN65LVPE502CP
Amerigo Wang (1):
pktgen: fix crash when generating IPv6 packets
Andreas Herrmann (1):
cpufreq / powernow-k8: Remove usage of smp_processor_id() in preemptible code
Andrew Morton (1):
amd64_edac:__amd64_set_scrub_rate(): avoid overindexing scrubrates[]
Anisse Astier (2):
ehci: fix Lucid nohandoff pci quirk to be more generic with BIOS versions
ehci: Add yet-another Lucid nohandoff pci quirk
Arnd Bergmann (1):
pcmcia: sharpsl: don't discard sharpsl_pcmcia_ops
Arve Hjønnevåg (2):
Staging: android: binder: Fix memory leak on thread/process exit
Staging: android: binder: Allow using highmem for binder buffers
Barry Song (2):
dmaengine: sirf: fix a typo in dma_prep_interleaved
dmaengine: sirf: fix a typo in moving running dma_desc to active queue
Bjørn Mork (2):
USB: option: blacklist net interface on ZTE devices
USB: option: add more ZTE devices
Bo Shen (1):
ARM: at91/i2c: change id to let i2c-gpio work
Brian Norris (1):
mtd: nand: allow NAND_NO_SUBPAGE_WRITE to be set from driver
Bruce Allan (1):
e1000e: add device IDs for i218
Chris Metcalf (1):
arch/tile: avoid generating .eh_frame information in modules
Christoph Hellwig (1):
iscsit: remove incorrect unlock in iscsit_build_sendtargets_resp
Colin Cross (1):
ARM: OMAP: counter: add locking to read_persistent_clock
Daisuke Nishimura (1):
cgroup: notify_on_release may not be triggered in some cases
Dan Carpenter (4):
timekeeping: Cast raw_interval to u64 to avoid shift overflow
md/raid10: use correct limit variable
libceph: fix NULL dereference in reset_connection()
oprofile, x86: Fix wrapping bug in op_x86_get_ctrl()
Dan Williams (1):
qmi_wwan/cdc_ether: move Novatel 551 and E362 to qmi_wwan
Daniel Drake (1):
viafb: don't touch clock state on OLPC XO-1.5
Dave Young (1):
Revert "x86/mm: Fix the size calculation of mapping tables"
David Henningsson (2):
ALSA: hda - do not detect jack on internal speakers for Realtek
ALSA: hda - Always check array bounds in alc_get_line_out_pfx
David Vrabel (1):
xen/x86: don't corrupt %eip when returning from a signal handler
David Zafman (1):
ceph: fix dentry reference leak in encode_fh()
Dmitry Monakhov (1):
ext4: race-condition protection for ext4_convert_unwritten_extents_endio
Dylan Reid (1):
ALSA: hda - Fix hang caused by race during suspend.
Egbert Eich (1):
drm/radeon: Don't destroy I2C Bus Rec in radeon_ext_tmds_enc_destroy().
Eric Dumazet (1):
net: fix secpath kmemleak
Fabio Estevam (1):
drivers/dma/dmaengine.c: lower the priority of 'failed to get' dma channel message
Fabio Porcedda (1):
usb: gadget: at91_udc: fix dt support
Felipe Balbi (1):
usb: dwc3: gadget: fix 'endpoint always busy' bug
Felix Fietkau (4):
ath9k: use ieee80211_free_txskb
mac80211: use ieee80211_free_txskb to fix possible skb leaks
mac80211: use ieee80211_free_txskb in a few more places
Revert "ath9k_hw: Updated AR9003 tx gain table for 5GHz"
Feng Tang (2):
ACPI: EC: Make the GPE storm threshold a module parameter
ACPI: EC: Add a quirk for CLEVO M720T/M730T laptop
Gabor Juhos (1):
MIPS: ath79: Fix CPU/DDR frequency calculation for SRIF PLLs
Gavin Shan (1):
powerpc/eeh: Lock module while handling EEH event
Geert Uytterhoeven (1):
sysfs: sysfs_pathname/sysfs_add_one: Use strlcat() instead of strcat()
Guanjun He (1):
libceph: prevent the race of incoming work during teardown
Guennadi Liakhovetski (1):
ASoC: fsi: don't reschedule DMA from an atomic context
Guenter Roeck (1):
hwmon: (coretemp) Add support for Atom CE4110/4150/4170
Haojian Zhuang (1):
pinctrl: remove mutex lock in groups show
Heiko Carstens (1):
s390: fix linker script for 31 bit builds
Herton Ronaldo Krzesinski (1):
Revert "sched: Add missing call to calc_load_exit_idle()"
Hildner, Christian (1):
timers: Fix endless looping between cascade() and internal_add_timer()
Hiro Sugawara (1):
iommu/tegra: smmu: Fix deadly typo
Hugh Dickins (1):
tmpfs,ceph,gfs2,isofs,reiserfs,xfs: fix fh_len checking
Ian Abbott (1):
staging: comedi: amplc_pc236: fix invalid register access during detach
Ian Kent (1):
autofs4 - fix reset pending flag on mount fail
Ivan Shugov (1):
ARM: at91: at91sam9g10: fix SOC type detection
J. Bruce Fields (2):
nfsd4: fix nfs4 stateid leak
nfsd4: don't pin clientids to pseudoflavors
Jacob Shin (2):
x86: Exclude E820_RESERVED regions and memory holes above 4 GB from direct mapping.
x86, mm: Find_early_table_space based on ranges that are actually being mapped
Jaehoon Chung (2):
block: remove the duplicated setting for congestion_threshold
mmc: sdhci-s3c: fix the wrong number of max bus clocks
Jan Beulich (1):
x86-64: Fix page table accounting
Jan Engelhardt (1):
netfilter: xt_limit: have r->cost != 0 case work
Jan Kara (2):
jbd: Fix assertion failure in commit code due to lacking transaction credits
mm: fix XFS oops due to dirty pages without buffers on s390
Jan Luebbe (1):
drivers/rtc/rtc-imxdi.c: add missing spin lock initialization
Jani Nikula (1):
drm/i915: use adjusted_mode instead of mode for checking the 6bpc force flag
Jason Wessel (2):
mips,kgdb: fix recursive page fault with CONFIG_KPROBES
kdb,vt_console: Fix missed data due to pager overruns
Jim Schutt (1):
libceph: avoid truncation due to racing banners
Johan Hedberg (1):
Bluetooth: SMP: Fix setting unknown auth_req bits
Johan Hovold (13):
USB: metro-usb: fix io after disconnect
USB: whiteheat: fix memory leak in error path
USB: quatech2: fix memory leak in error path
USB: quatech2: fix io after disconnect
USB: opticon: fix DMA from stack
USB: opticon: fix memory leak in error path
USB: mct_u232: fix broken close
USB: sierra: fix memory leak in attach error path
USB: sierra: fix memory leak in probe error path
USB: mos7840: fix urb leak at release
USB: mos7840: fix port-device leak in error path
USB: mos7840: remove NULL-urb submission
USB: mos7840: remove invalid disconnect handling
Johannes Berg (1):
iwlwifi: fix 6000 series channel switch command
Josh Triplett (1):
efi: Defer freeing boot services memory until after ACPI init
Josh Wu (1):
ARM: at91/tc: fix typo in the DT document
K. Y. Srinivasan (2):
storvsc: Account for in-transit packets in the RESET path
Drivers: hv: Cleanup error handling in vmbus_open()
Kees Cook (4):
kernel/sys.c: fix stack memory content leak via UNAME26
use clamp_t in UNAME26 fix
gen_init_cpio: avoid stack overflow when expanding
fs/compat_ioctl.c: VIDEO_SET_SPU_PALETTE missing error check
Kenneth Graunke (1):
drm/i915: Set guardband clipping workaround bit in the right register.
Konrad Rzeszutek Wilk (2):
xen/bootup: allow read_tscp call for Xen PV guests.
xen/bootup: allow {read|write}_cr8 pvops call.
Larry Finger (1):
b43: Fix oops on unload when firmware not found
Lennart Sorensen (1):
USB: serial: Fix memory leak in sierra_release()
Lukas Czerner (2):
scsi_debug: Fix off-by-one bug when unmapping region
ext4: Avoid underflow in ext4_trim_fs()
Malahal Naineni (1):
NFSD: pass null terminated buf to kstrtouint()
Mark Brown (3):
mfd: 88pm860x: Move _IO resources out of ioport_ioresource
ASoC: wm2200: Use rev A register patches on rev B
ASoC: wm2200: Fix non-inverted OUT2 mute control
Matthew Garrett (1):
module: taint kernel when lve module is loaded
Michael S. Tsirkin (1):
vhost: fix mergeable bufs on BE hosts
Michael Shigorin (1):
usb-storage: add unusual_devs entry for Casio EX-N1 digital camera
Michal Hocko (1):
nohz: Fix idle ticks in cpu summary line of /proc/stat
Michal Marek (1):
kbuild: Do not package /boot and /lib in make tar-pkg
Mike Galbraith (1):
sched: Fix migration thread runtime bogosity
Ming Lei (1):
USB: cdc-acm: fix pipe type of write endpoint
Nicholas Bellinger (5):
iscsi-target: Correctly set 0xffffffff field within ISCSI_OP_REJECT PDU
target/file: Re-enable optional fd_buffered_io=1 operation
iscsi-target: Add explicit set of cache_dynamic_acls=1 for TPG demo-mode
iscsi-target: Bump defaults for nopin_timeout + nopin_response_timeout values
target: Re-add explict zeroing of INQUIRY bounce buffer memory
Nicolas Boullis (1):
usb: acm: fix the computation of the number of data bits
Nikola Pajkovsky (1):
udf: fix retun value on error path in udf_load_logicalvol
Octavian Purdila (1):
usb hub: send clear_tt_buffer_complete events when canceling TT clear work
Oleg Nesterov (1):
freezer: exec should clear PF_NOFREEZE along with PF_KTHREAD
Oliver Neukum (2):
xhci: endianness xhci_calculate_intel_u2_timeout
xhci: fix integer overflow
Olof Johansson (1):
x86: efi: Turn off efi_enabled after setup on mixed fw/kernel
Pablo Neira Ayuso (3):
netfilter: nf_nat_sip: fix incorrect handling of EBUSY for RTCP expectation
netfilter: nf_ct_expect: fix possible access to uninitialized timer
netfilter: nf_conntrack: fix racy timer handling with reliable events
Paolo Bonzini (2):
target: support zero allocation length in INQUIRY
target: fix truncation of mode data, support zero allocation length
Patrick McHardy (2):
netfilter: nf_nat_sip: fix via header translation with multiple parameters
netfilter: nfnetlink_log: fix NLA_PUT macro removal bug
Paul Walmsley (1):
ARM: 7566/1: vfp: fix save and restore when running on pre-VFPv3 and CONFIG_VFPv3 set
Peng Tao (3):
pnfsblock: fix partial page buffer wirte
pnfsblock: fix non-aligned DIO read
pnfsblock: fix non-aligned DIO write
Peter Huewe (2):
extcon: Unregister compat class at module unload to fix oops
extcon: unregister compat link on cleanup
Peter Senna Tschudin (1):
target: fix return code in target_core_init_configfs error path
Piotr Haber (1):
bcma: fix unregistration of cores
Pritesh Raithatha (3):
dt: Document: correct tegra20/30 pinctrl slew-rate name
pinctrl: tegra: set low power mode bank width to 2
pinctrl: tegra: correct bank for pingroup and drv pingroup
Roland Dreier (1):
qla2xxx: Fix endianness of task management response code
Russell King (1):
ARM: vfp: fix saving d16-d31 vfp registers on v6+ kernels
Sage Weil (32):
libceph: drop connection refcounting for mon_client
libceph: transition socket state prior to actual connect
libceph: use con get/put methods
libceph: drop ceph_con_get/put helpers and nref member
libceph: set peer name on con_open, not init
libceph: initialize mon_client con only once
libceph: allow sock transition from CONNECTING to CLOSED
libceph: initialize msgpool message types
libceph: report socket read/write error message
libceph: fix mutex coverage for ceph_con_close
libceph: resubmit linger ops when pg mapping changes
libceph: (re)initialize bio_iter on start of message receive
libceph: protect ceph_con_open() with mutex
libceph: reset connection retry on successfully negotiation
libceph: fix fault locking; close socket on lossy fault
libceph: move msgr clear_standby under con mutex protection
libceph: move ceph_con_send() closed check under the con mutex
libceph: drop gratuitous socket close calls in con_work
libceph: close socket directly from ceph_con_close()
libceph: drop unnecessary CLOSED check in socket state change callback
libceph: replace connection state bits with states
libceph: clean up con flags
libceph: clear all flags on con_close
libceph: fix handling of immediate socket connect failure
libceph: revoke mon_client messages on session restart
libceph: verify state after retaking con lock after dispatch
libceph: avoid dropping con mutex before fault
libceph: change ceph_con_in_msg_alloc convention to be less weird
libceph: recheck con state after allocating incoming message
libceph: delay debugfs initialization until we learn global_id
libceph: avoid NULL kref_put when osd reset races with alloc_msg
libceph: check for invalid mapping
Sarah Sharp (4):
USB: Enable LPM after a failed probe.
usb: Don't enable LPM if the exit latency is zero.
usb: Send Set SEL before enabling parent U1/U2 timeout.
xhci: Fix potential NULL ptr deref in command cancellation.
Sasha Levin (1):
SUNRPC: Prevent kernel stack corruption on long values of flush
Stanislav Kinsbursky (1):
lockd: use rpc client's cl_nodename for id encoding
Stanislav Yakovlev (1):
net/wireless: ipw2200: Fix panic occurring in ipw_handle_promiscuous_tx()
Stanislaw Gruszka (2):
cfg80211/mac80211: avoid state mishmash on deauth
mac80211: check if key has TKIP type before updating IV
Stefan Richter (1):
firewire: cdev: fix user memory corruption (i386 userland on amd64 kernel)
Stefano Babic (1):
usb: musb: am35xx: drop spurious unplugging a device
Stefán Freyr (1):
ALSA: hda - add dock support for Thinkpad T430
Sylvain Munaut (1):
libceph: fix crypto key null deref, memory leak
Takashi Iwai (4):
ALSA: hda - Add missing hda_gen_spec to struct via_spec
ALSA: hda - Fix memory leaks at error path in patch_cirrus.c
ALSA: hda - Fix registration race of VGA switcheroo
ALSA: hda - Fix silent headphone output from Toshiba P200
Tao Ma (2):
ext4: remove erroneous ext4_superblock_csum_set() in update_backups()
ext4: Checksum the block bitmap properly with bigalloc enabled
Tejun Heo (4):
block: lift the initial queue bypass mode on blk_register_queue() instead of blk_init_allocated_queue()
block: fix request_queue->flags initialization
Revert "cgroup: Drop task_lock(parent) on cgroup_fork()"
Revert "cgroup: Remove task_lock() from cgroup_post_fork()"
Thadeu Lima de Souza Cascardo (1):
genalloc: stop crashing the system when destroying a pool
Theodore Ts'o (1):
ext4: fix metadata checksum calculation for the superblock
Tim Sally (1):
eCryptfs: check for eCryptfs cipher support at mount
Trond Myklebust (6):
SUNRPC: Ensure that the TCP socket is closed when in CLOSE_WAIT
NLM: nlm_lookup_file() may return NLMv4-specific error codes
SUNRPC: Clear the connect flag when socket state is TCP_CLOSE_WAIT
Revert "SUNRPC: Ensure we close the socket on EPIPE errors too..."
SUNRPC: Prevent races in xs_abort_connection()
SUNRPC: Get rid of the xs_error_report socket callback
Tyler Hicks (6):
eCryptfs: Copy up POSIX ACL and read-only flags from lower mount
eCryptfs: Revert to a writethrough cache model
eCryptfs: Initialize empty lower files when opening them
eCryptfs: Unlink lower inode when ecryptfs_create() fails
eCryptfs: Write out all dirty pages just before releasing the lower file
eCryptfs: Call lower ->flush() from ecryptfs_flush()
Vaibhav Nagarnaik (1):
ring-buffer: Check for uninitialized cpu buffer before resizing
Wei Yongjun (2):
pinctrl: fix missing unlock on error in pinctrl_groups_show()
dmaengine: imx-dma: fix missing unlock on error in imxdma_xfer_desc()
Will Deacon (1):
ARM: 7559/1: smp: switch away from the idmap before updating init_mm.mm_count
Willy Tarreau (1):
drm/i915: remove useless BUG_ON which caused a regression in 3.5.
Xi Wang (3):
libceph: fix overflow in __decode_pool_names()
libceph: fix overflow in osdmap_decode()
libceph: fix overflow in osdmap_apply_incremental()
Yan, Zheng (1):
ceph: Fix oops when handling mdsmap that decreases max_mds
Yinghai Lu (3):
x86, mm: Trim memory in memblock to be page aligned
x86, mm: Use memblock memory loop instead of e820_RAM
x86, mm: Undo incorrect revert in arch/x86/mm/init.c
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Nikola Pajkovsky <[email protected]>
commit 68766a2edcd5cd744262a70a2f67a320ac944760 upstream.
In case we detect a problem and bail out, we fail to set "ret" to a
nonzero value, and udf_load_logicalvol will mistakenly report success.
Signed-off-by: Nikola Pajkovsky <[email protected]>
Signed-off-by: Jan Kara <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/udf/super.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/udf/super.c b/fs/udf/super.c
index e660ffd..4988a8a 100644
--- a/fs/udf/super.c
+++ b/fs/udf/super.c
@@ -1287,6 +1287,7 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
udf_err(sb, "error loading logical volume descriptor: "
"Partition table too long (%u > %lu)\n", table_len,
sb->s_blocksize - sizeof(*lvd));
+ ret = 1;
goto out_bh;
}
@@ -1331,8 +1332,10 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
UDF_ID_SPARABLE,
strlen(UDF_ID_SPARABLE))) {
if (udf_load_sparable_map(sb, map,
- (struct sparablePartitionMap *)gpm) < 0)
+ (struct sparablePartitionMap *)gpm) < 0) {
+ ret = 1;
goto out_bh;
+ }
} else if (!strncmp(upm2->partIdent.ident,
UDF_ID_METADATA,
strlen(UDF_ID_METADATA))) {
--
1.7.9.5
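A quick illustration of the bug pattern fixed above, for readers skimming the
series: a minimal, self-contained userspace sketch (made-up names, not the
actual udf code) of an error path that bails out without moving "ret" off its
success default, so the caller is told everything went fine.

#include <stdio.h>

/* Hypothetical stand-in for udf_load_logicalvol(): 0 means success. */
static int load_logicalvol(unsigned int table_len, unsigned int max_len)
{
	int ret = 0;			/* assume success */

	if (table_len > max_len) {
		/* report the error, but forget "ret = 1;" before bailing */
		goto out_bh;
	}
	/* ... parse the partition maps ... */
out_bh:
	return ret;			/* still 0 on the error path */
}

int main(void)
{
	printf("load_logicalvol() = %d\n", load_logicalvol(10, 4));
	return 0;
}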
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Tyler Hicks <[email protected]>
commit 069ddcda37b2cf5bb4b6031a944c0e9359213262 upstream.
When the eCryptfs mount options do not include '-o acl', but the lower
filesystem's mount options do include 'acl', the MS_POSIXACL flag is not
flipped on in the eCryptfs super block flags. This flag is what the VFS
checks in do_last() when deciding if the current umask should be applied
to a newly created inode's mode or not. When a default POSIX ACL mask is
set on a directory, the current umask is incorrectly applied to new
inodes created in the directory. This patch ignores the MS_POSIXACL flag
passed into ecryptfs_mount() and sets the flag on the eCryptfs super
block depending on the flag's presence on the lower super block.
Additionally, it is incorrect to allow a writeable eCryptfs mount on top
of a read-only lower mount. This missing check did not actually allow writes to
the read-only lower mount, because permission checks are still performed
on the lower filesystem's objects, but it is best to simply not allow an
rw mount on top of an ro mount. However, a ro eCryptfs mount on top of a rw
mount is valid and still allowed.
https://launchpad.net/bugs/1009207
Signed-off-by: Tyler Hicks <[email protected]>
Reported-by: Stefan Beller <[email protected]>
Cc: John Johansen <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ecryptfs/main.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/fs/ecryptfs/main.c b/fs/ecryptfs/main.c
index 6895493..df217dc 100644
--- a/fs/ecryptfs/main.c
+++ b/fs/ecryptfs/main.c
@@ -505,7 +505,6 @@ static struct dentry *ecryptfs_mount(struct file_system_type *fs_type, int flags
goto out;
}
- s->s_flags = flags;
rc = bdi_setup_and_register(&sbi->bdi, "ecryptfs", BDI_CAP_MAP_COPY);
if (rc)
goto out1;
@@ -541,6 +540,15 @@ static struct dentry *ecryptfs_mount(struct file_system_type *fs_type, int flags
}
ecryptfs_set_superblock_lower(s, path.dentry->d_sb);
+
+ /**
+ * Set the POSIX ACL flag based on whether they're enabled in the lower
+ * mount. Force a read-only eCryptfs mount if the lower mount is ro.
+ * Allow a ro eCryptfs mount even when the lower mount is rw.
+ */
+ s->s_flags = flags & ~MS_POSIXACL;
+ s->s_flags |= path.dentry->d_sb->s_flags & (MS_RDONLY | MS_POSIXACL);
+
s->s_maxbytes = path.dentry->d_sb->s_maxbytes;
s->s_blocksize = path.dentry->d_sb->s_blocksize;
s->s_magic = ECRYPTFS_SUPER_MAGIC;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Tyler Hicks <[email protected]>
commit 821f7494a77627fb1ab539591c57b22cdca702d6 upstream.
A change was made about a year ago to get eCryptfs to better utilize its
page cache during writes. The idea was to do the page encryption
operations during page writeback, rather than doing them when initially
writing into the page cache, to reduce the number of page encryption
operations during sequential writes. This meant that the encrypted page
would only be written to the lower filesystem during page writeback,
which was a change from how eCryptfs had previously written to the lower
filesystem in ecryptfs_write_end().
The change caused a few eCryptfs-internal bugs that were shaken out.
Unfortunately, more grave side effects have been identified that will
force changes outside of eCryptfs. Because the lower filesystem isn't
consulted until page writeback, eCryptfs has no way to pass lower write
errors (ENOSPC, mainly) back to userspace. Additionally, it was reported
that quotas could be bypassed because of the way eCryptfs may sometimes
open the lower filesystem using a privileged kthread.
It would be nice to resolve the latest issues, but it is best if the
eCryptfs commits be reverted to the old behavior in the meantime.
This reverts:
32001d6f "eCryptfs: Flush file in vma close"
5be79de2 "eCryptfs: Flush dirty pages in setattr"
57db4e8d "ecryptfs: modify write path to encrypt page in writepage"
Signed-off-by: Tyler Hicks <[email protected]>
Tested-by: Colin King <[email protected]>
Cc: Colin King <[email protected]>
Cc: Thieu Le <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ecryptfs/file.c | 33 ++-------------------------------
fs/ecryptfs/inode.c | 6 ------
fs/ecryptfs/mmap.c | 39 +++++++++++++--------------------------
3 files changed, 15 insertions(+), 63 deletions(-)
diff --git a/fs/ecryptfs/file.c b/fs/ecryptfs/file.c
index 2b17f2f..49fc575 100644
--- a/fs/ecryptfs/file.c
+++ b/fs/ecryptfs/file.c
@@ -138,27 +138,6 @@ out:
return rc;
}
-static void ecryptfs_vma_close(struct vm_area_struct *vma)
-{
- filemap_write_and_wait(vma->vm_file->f_mapping);
-}
-
-static const struct vm_operations_struct ecryptfs_file_vm_ops = {
- .close = ecryptfs_vma_close,
- .fault = filemap_fault,
-};
-
-static int ecryptfs_file_mmap(struct file *file, struct vm_area_struct *vma)
-{
- int rc;
-
- rc = generic_file_mmap(file, vma);
- if (!rc)
- vma->vm_ops = &ecryptfs_file_vm_ops;
-
- return rc;
-}
-
struct kmem_cache *ecryptfs_file_info_cache;
/**
@@ -292,15 +271,7 @@ static int ecryptfs_release(struct inode *inode, struct file *file)
static int
ecryptfs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
{
- int rc = 0;
-
- rc = generic_file_fsync(file, start, end, datasync);
- if (rc)
- goto out;
- rc = vfs_fsync_range(ecryptfs_file_to_lower(file), start, end,
- datasync);
-out:
- return rc;
+ return vfs_fsync(ecryptfs_file_to_lower(file), datasync);
}
static int ecryptfs_fasync(int fd, struct file *file, int flag)
@@ -369,7 +340,7 @@ const struct file_operations ecryptfs_main_fops = {
#ifdef CONFIG_COMPAT
.compat_ioctl = ecryptfs_compat_ioctl,
#endif
- .mmap = ecryptfs_file_mmap,
+ .mmap = generic_file_mmap,
.open = ecryptfs_open,
.flush = ecryptfs_flush,
.release = ecryptfs_release,
diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c
index 02e2fec..b01c7a9 100644
--- a/fs/ecryptfs/inode.c
+++ b/fs/ecryptfs/inode.c
@@ -977,12 +977,6 @@ static int ecryptfs_setattr(struct dentry *dentry, struct iattr *ia)
goto out;
}
- if (S_ISREG(inode->i_mode)) {
- rc = filemap_write_and_wait(inode->i_mapping);
- if (rc)
- goto out;
- fsstack_copy_attr_all(inode, lower_inode);
- }
memcpy(&lower_ia, ia, sizeof(lower_ia));
if (ia->ia_valid & ATTR_FILE)
lower_ia.ia_file = ecryptfs_file_to_lower(ia->ia_file);
diff --git a/fs/ecryptfs/mmap.c b/fs/ecryptfs/mmap.c
index a46b3a8..bd1d57f 100644
--- a/fs/ecryptfs/mmap.c
+++ b/fs/ecryptfs/mmap.c
@@ -66,18 +66,6 @@ static int ecryptfs_writepage(struct page *page, struct writeback_control *wbc)
{
int rc;
- /*
- * Refuse to write the page out if we are called from reclaim context
- * since our writepage() path may potentially allocate memory when
- * calling into the lower fs vfs_write() which may in turn invoke
- * us again.
- */
- if (current->flags & PF_MEMALLOC) {
- redirty_page_for_writepage(wbc, page);
- rc = 0;
- goto out;
- }
-
rc = ecryptfs_encrypt_page(page);
if (rc) {
ecryptfs_printk(KERN_WARNING, "Error encrypting "
@@ -498,7 +486,6 @@ static int ecryptfs_write_end(struct file *file,
struct ecryptfs_crypt_stat *crypt_stat =
&ecryptfs_inode_to_private(ecryptfs_inode)->crypt_stat;
int rc;
- int need_unlock_page = 1;
ecryptfs_printk(KERN_DEBUG, "Calling fill_zeros_to_end_of_page"
"(page w/ index = [0x%.16lx], to = [%d])\n", index, to);
@@ -519,26 +506,26 @@ static int ecryptfs_write_end(struct file *file,
"zeros in page with index = [0x%.16lx]\n", index);
goto out;
}
- set_page_dirty(page);
- unlock_page(page);
- need_unlock_page = 0;
+ rc = ecryptfs_encrypt_page(page);
+ if (rc) {
+ ecryptfs_printk(KERN_WARNING, "Error encrypting page (upper "
+ "index [0x%.16lx])\n", index);
+ goto out;
+ }
if (pos + copied > i_size_read(ecryptfs_inode)) {
i_size_write(ecryptfs_inode, pos + copied);
ecryptfs_printk(KERN_DEBUG, "Expanded file size to "
"[0x%.16llx]\n",
(unsigned long long)i_size_read(ecryptfs_inode));
- balance_dirty_pages_ratelimited(mapping);
- rc = ecryptfs_write_inode_size_to_metadata(ecryptfs_inode);
- if (rc) {
- printk(KERN_ERR "Error writing inode size to metadata; "
- "rc = [%d]\n", rc);
- goto out;
- }
}
- rc = copied;
+ rc = ecryptfs_write_inode_size_to_metadata(ecryptfs_inode);
+ if (rc)
+ printk(KERN_ERR "Error writing inode size to metadata; "
+ "rc = [%d]\n", rc);
+ else
+ rc = copied;
out:
- if (need_unlock_page)
- unlock_page(page);
+ unlock_page(page);
page_cache_release(page);
return rc;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Tyler Hicks <[email protected]>
commit 8bc2d3cf612994a960c2e8eaea37f6676f67082a upstream.
ecryptfs_create() creates a lower inode, allocates an eCryptfs inode,
initializes the eCryptfs inode and cryptographic metadata attached to
the inode, and then writes the metadata to the header of the file.
If an error were to occur after the lower inode was created, an empty
lower file would be left in the lower filesystem. This is a problem
because ecryptfs_open() refuses to open any lower files which do not
have the appropriate metadata in the file header.
This patch properly unlinks the lower inode when an error occurs in the
later stages of ecryptfs_create(), reducing the chance that an empty
lower file will be left in the lower filesystem.
https://launchpad.net/bugs/872905
Signed-off-by: Tyler Hicks <[email protected]>
Cc: John Johansen <[email protected]>
Cc: Colin Ian King <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ecryptfs/inode.c | 55 ++++++++++++++++++++++++++++++---------------------
1 file changed, 32 insertions(+), 23 deletions(-)
diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c
index 68166ea..a14abba 100644
--- a/fs/ecryptfs/inode.c
+++ b/fs/ecryptfs/inode.c
@@ -143,6 +143,31 @@ static int ecryptfs_interpose(struct dentry *lower_dentry,
return 0;
}
+static int ecryptfs_do_unlink(struct inode *dir, struct dentry *dentry,
+ struct inode *inode)
+{
+ struct dentry *lower_dentry = ecryptfs_dentry_to_lower(dentry);
+ struct inode *lower_dir_inode = ecryptfs_inode_to_lower(dir);
+ struct dentry *lower_dir_dentry;
+ int rc;
+
+ dget(lower_dentry);
+ lower_dir_dentry = lock_parent(lower_dentry);
+ rc = vfs_unlink(lower_dir_inode, lower_dentry);
+ if (rc) {
+ printk(KERN_ERR "Error in vfs_unlink; rc = [%d]\n", rc);
+ goto out_unlock;
+ }
+ fsstack_copy_attr_times(dir, lower_dir_inode);
+ set_nlink(inode, ecryptfs_inode_to_lower(inode)->i_nlink);
+ inode->i_ctime = dir->i_ctime;
+ d_drop(dentry);
+out_unlock:
+ unlock_dir(lower_dir_dentry);
+ dput(lower_dentry);
+ return rc;
+}
+
/**
* ecryptfs_do_create
* @directory_inode: inode of the new file's dentry's parent in ecryptfs
@@ -182,8 +207,10 @@ ecryptfs_do_create(struct inode *directory_inode,
}
inode = __ecryptfs_get_inode(lower_dentry->d_inode,
directory_inode->i_sb);
- if (IS_ERR(inode))
+ if (IS_ERR(inode)) {
+ vfs_unlink(lower_dir_dentry->d_inode, lower_dentry);
goto out_lock;
+ }
fsstack_copy_attr_times(directory_inode, lower_dir_dentry->d_inode);
fsstack_copy_inode_size(directory_inode, lower_dir_dentry->d_inode);
out_lock:
@@ -265,7 +292,9 @@ ecryptfs_create(struct inode *directory_inode, struct dentry *ecryptfs_dentry,
* that this on disk file is prepared to be an ecryptfs file */
rc = ecryptfs_initialize_file(ecryptfs_dentry, ecryptfs_inode);
if (rc) {
- drop_nlink(ecryptfs_inode);
+ ecryptfs_do_unlink(directory_inode, ecryptfs_dentry,
+ ecryptfs_inode);
+ make_bad_inode(ecryptfs_inode);
unlock_new_inode(ecryptfs_inode);
iput(ecryptfs_inode);
goto out;
@@ -477,27 +506,7 @@ out_lock:
static int ecryptfs_unlink(struct inode *dir, struct dentry *dentry)
{
- int rc = 0;
- struct dentry *lower_dentry = ecryptfs_dentry_to_lower(dentry);
- struct inode *lower_dir_inode = ecryptfs_inode_to_lower(dir);
- struct dentry *lower_dir_dentry;
-
- dget(lower_dentry);
- lower_dir_dentry = lock_parent(lower_dentry);
- rc = vfs_unlink(lower_dir_inode, lower_dentry);
- if (rc) {
- printk(KERN_ERR "Error in vfs_unlink; rc = [%d]\n", rc);
- goto out_unlock;
- }
- fsstack_copy_attr_times(dir, lower_dir_inode);
- set_nlink(dentry->d_inode,
- ecryptfs_inode_to_lower(dentry->d_inode)->i_nlink);
- dentry->d_inode->i_ctime = dir->i_ctime;
- d_drop(dentry);
-out_unlock:
- unlock_dir(lower_dir_dentry);
- dput(lower_dentry);
- return rc;
+ return ecryptfs_do_unlink(dir, dentry, dentry->d_inode);
}
static int ecryptfs_symlink(struct inode *dir, struct dentry *dentry,
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Mike Galbraith <[email protected]>
commit 8f6189684eb4e85e6c593cd710693f09c944450a upstream.
Make the stop scheduler class do the same accounting as other classes.
Migration threads can be caught in the act while doing exec balancing,
leading to the below due to use of unmaintained ->se.exec_start. The
load that triggered this particular instance was an apparently out of
control heavily threaded application that does system monitoring in
what equated to an exec bomb, with one of the VERY frequently migrated
tasks being ps.
%CPU PID USER CMD
99.3 45 root [migration/10]
97.7 53 root [migration/12]
97.0 57 root [migration/13]
90.1 49 root [migration/11]
89.6 65 root [migration/15]
88.7 17 root [migration/3]
80.4 37 root [migration/8]
78.1 41 root [migration/9]
44.2 13 root [migration/2]
Signed-off-by: Mike Galbraith <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
kernel/sched/stop_task.c | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c
index 7b386e8..da5eb5b 100644
--- a/kernel/sched/stop_task.c
+++ b/kernel/sched/stop_task.c
@@ -27,8 +27,10 @@ static struct task_struct *pick_next_task_stop(struct rq *rq)
{
struct task_struct *stop = rq->stop;
- if (stop && stop->on_rq)
+ if (stop && stop->on_rq) {
+ stop->se.exec_start = rq->clock_task;
return stop;
+ }
return NULL;
}
@@ -52,6 +54,21 @@ static void yield_task_stop(struct rq *rq)
static void put_prev_task_stop(struct rq *rq, struct task_struct *prev)
{
+ struct task_struct *curr = rq->curr;
+ u64 delta_exec;
+
+ delta_exec = rq->clock_task - curr->se.exec_start;
+ if (unlikely((s64)delta_exec < 0))
+ delta_exec = 0;
+
+ schedstat_set(curr->se.statistics.exec_max,
+ max(curr->se.statistics.exec_max, delta_exec));
+
+ curr->se.sum_exec_runtime += delta_exec;
+ account_group_exec_runtime(curr, delta_exec);
+
+ curr->se.exec_start = rq->clock_task;
+ cpuacct_charge(curr, delta_exec);
}
static void task_tick_stop(struct rq *rq, struct task_struct *curr, int queued)
@@ -60,6 +77,9 @@ static void task_tick_stop(struct rq *rq, struct task_struct *curr, int queued)
static void set_curr_task_stop(struct rq *rq)
{
+ struct task_struct *stop = rq->stop;
+
+ stop->se.exec_start = rq->clock_task;
}
static void switched_to_stop(struct rq *rq, struct task_struct *p)
--
1.7.9.5
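For reviewers less familiar with the scheduler classes, the hunks above simply
give the stop class the same runtime bookkeeping the other classes already
perform in their update_curr()-style paths. Stripped of the schedstat and
cpuacct calls, the pattern is the one below (field names as in the diff;
illustration only, not extra code to apply):

	u64 delta_exec = rq->clock_task - curr->se.exec_start;

	if (unlikely((s64)delta_exec < 0))	/* clock went backwards */
		delta_exec = 0;

	curr->se.sum_exec_runtime += delta_exec;  /* charge the runtime  */
	curr->se.exec_start = rq->clock_task;	  /* restart the window  */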
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Tim Sally <[email protected]>
commit 5f5b331d5c21228a6519dcb793fc1629646c51a6 upstream.
The issue occurs when eCryptfs is mounted with a cipher supported by
the crypto subsystem but not by eCryptfs. The mount succeeds and an
error does not occur until a write. This change checks for eCryptfs
cipher support at mount time.
Resolves Launchpad issue #338914, reported by Tyler Hicks in 03/2009.
https://bugs.launchpad.net/ecryptfs/+bug/338914
Signed-off-by: Tim Sally <[email protected]>
Signed-off-by: Tyler Hicks <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ecryptfs/main.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/fs/ecryptfs/main.c b/fs/ecryptfs/main.c
index c2a9c39..240832e 100644
--- a/fs/ecryptfs/main.c
+++ b/fs/ecryptfs/main.c
@@ -280,6 +280,7 @@ static int ecryptfs_parse_options(struct ecryptfs_sb_info *sbi, char *options,
char *fnek_src;
char *cipher_key_bytes_src;
char *fn_cipher_key_bytes_src;
+ u8 cipher_code;
*check_ruid = 0;
@@ -421,6 +422,18 @@ static int ecryptfs_parse_options(struct ecryptfs_sb_info *sbi, char *options,
&& !fn_cipher_key_bytes_set)
mount_crypt_stat->global_default_fn_cipher_key_bytes =
mount_crypt_stat->global_default_cipher_key_size;
+
+ cipher_code = ecryptfs_code_for_cipher_string(
+ mount_crypt_stat->global_default_cipher_name,
+ mount_crypt_stat->global_default_cipher_key_size);
+ if (!cipher_code) {
+ ecryptfs_printk(KERN_ERR,
+ "eCryptfs doesn't support cipher: %s",
+ mount_crypt_stat->global_default_cipher_name);
+ rc = -EINVAL;
+ goto out;
+ }
+
mutex_lock(&key_tfm_list_mutex);
if (!ecryptfs_tfm_exists(mount_crypt_stat->global_default_cipher_name,
NULL)) {
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Patrick McHardy <[email protected]>
commit f22eb25cf5b1157b29ef88c793b71972efc47143 upstream.
Via-headers are parsed beginning at the first character after the Via-address.
When the address is translated first and its length decreases, the offset to
start parsing at is incorrect and header parameters might be missed.
Update the offset after translating the Via-address to fix this.
Signed-off-by: Patrick McHardy <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Acked-by: David S. Miller <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ipv4/netfilter/nf_nat_sip.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/netfilter/nf_nat_sip.c b/net/ipv4/netfilter/nf_nat_sip.c
index bb71caa..6dbcdb1 100644
--- a/net/ipv4/netfilter/nf_nat_sip.c
+++ b/net/ipv4/netfilter/nf_nat_sip.c
@@ -148,7 +148,7 @@ static unsigned int ip_nat_sip(struct sk_buff *skb, unsigned int dataoff,
if (ct_sip_parse_header_uri(ct, *dptr, NULL, *datalen,
hdr, NULL, &matchoff, &matchlen,
&addr, &port) > 0) {
- unsigned int matchend, poff, plen, buflen, n;
+ unsigned int olen, matchend, poff, plen, buflen, n;
char buffer[sizeof("nnn.nnn.nnn.nnn:nnnnn")];
/* We're only interested in headers related to this
@@ -163,11 +163,12 @@ static unsigned int ip_nat_sip(struct sk_buff *skb, unsigned int dataoff,
goto next;
}
+ olen = *datalen;
if (!map_addr(skb, dataoff, dptr, datalen, matchoff, matchlen,
&addr, port))
return NF_DROP;
- matchend = matchoff + matchlen;
+ matchend = matchoff + matchlen + *datalen - olen;
/* The maddr= parameter (RFC 2361) specifies where to send
* the reply. */
--
1.7.9.5
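Because the off-by-N reasoning above is easy to misread, here is a small
self-contained illustration of the offset correction (plain userspace C with
made-up numbers, not the kernel code). When the translated Via address is
shorter than the original, the point at which parsing resumes must be
recomputed against the new message length:

#include <stdio.h>

int main(void)
{
	unsigned int datalen  = 120;	/* SIP payload length before NAT  */
	unsigned int matchoff = 40;	/* offset of the Via address      */
	unsigned int matchlen = 21;	/* length of the original address */
	unsigned int olen     = datalen;

	/* Pretend map_addr() rewrote the address to something 6 bytes
	 * shorter, shrinking the whole message accordingly. */
	datalen -= 6;

	unsigned int stale   = matchoff + matchlen;			/* 61 */
	unsigned int correct = matchoff + matchlen + datalen - olen;	/* 55 */

	printf("resume parsing at offset %u, not %u\n", correct, stale);
	return 0;
}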
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Pablo Neira Ayuso <[email protected]>
commit 3f509c689a07a4aa989b426893d8491a7ffcc410 upstream.
We're hitting a bug while trying to reinsert an already existing
expectation:
kernel BUG at kernel/timer.c:895!
invalid opcode: 0000 [#1] SMP
[...]
Call Trace:
<IRQ>
[<ffffffffa0069563>] nf_ct_expect_related_report+0x4a0/0x57a [nf_conntrack]
[<ffffffff812d423a>] ? in4_pton+0x72/0x131
[<ffffffffa00ca69e>] ip_nat_sdp_media+0xeb/0x185 [nf_nat_sip]
[<ffffffffa00b5b9b>] set_expected_rtp_rtcp+0x32d/0x39b [nf_conntrack_sip]
[<ffffffffa00b5f15>] process_sdp+0x30c/0x3ec [nf_conntrack_sip]
[<ffffffff8103f1eb>] ? irq_exit+0x9a/0x9c
[<ffffffffa00ca738>] ? ip_nat_sdp_media+0x185/0x185 [nf_nat_sip]
We have to remove the RTP expectation if the RTCP expectation hits EBUSY
since we keep trying with other ports until we succeed.
Reported-by: Rafal Fitt <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Acked-by: David S. Miller <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ipv4/netfilter/nf_nat_sip.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/net/ipv4/netfilter/nf_nat_sip.c b/net/ipv4/netfilter/nf_nat_sip.c
index ea4a238..bb71caa 100644
--- a/net/ipv4/netfilter/nf_nat_sip.c
+++ b/net/ipv4/netfilter/nf_nat_sip.c
@@ -501,7 +501,10 @@ static unsigned int ip_nat_sdp_media(struct sk_buff *skb, unsigned int dataoff,
ret = nf_ct_expect_related(rtcp_exp);
if (ret == 0)
break;
- else if (ret != -EBUSY) {
+ else if (ret == -EBUSY) {
+ nf_ct_unexpect_related(rtp_exp);
+ continue;
+ } else if (ret < 0) {
nf_ct_unexpect_related(rtp_exp);
port = 0;
break;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Pablo Neira Ayuso <[email protected]>
commit 2614f86490122bf51eb7c12ec73927f1900f4e7d upstream.
In __nf_ct_expect_check, the function refresh_timer returns 1
if a matching expectation is found and its timer is successfully
refreshed. This results in nf_ct_expect_related returning 0.
Note that at this point:
- the passed expectation is not inserted in the expectation table
and its timer was not initialized, since we have refreshed one
matching/existing expectation.
- nf_ct_expect_alloc uses kmem_cache_alloc, so the expectation
timer is in some undefined state just after the allocation,
until it is appropriately initialized.
This can be a problem for the SIP helper during the expectation
addition:
...
if (nf_ct_expect_related(rtp_exp) == 0) {
if (nf_ct_expect_related(rtcp_exp) != 0)
nf_ct_unexpect_related(rtp_exp);
...
Note that nf_ct_expect_related(rtp_exp) may return 0 for the timer refresh
case that is detailed above. Then, if nf_ct_expect_related(rtcp_exp)
returns != 0, nf_ct_unexpect_related(rtp_exp) is called, which does:
spin_lock_bh(&nf_conntrack_lock);
if (del_timer(&exp->timeout)) {
nf_ct_unlink_expect(exp);
nf_ct_expect_put(exp);
}
spin_unlock_bh(&nf_conntrack_lock);
Note that del_timer always returns false for a timer that has been
initialized but is not pending. However, this timer was never initialized,
since setup_timer was not called; therefore, the expectation timer remains
in some undefined state. If I'm not missing anything, this may lead to the
removal of a nonexistent expectation.
To fix this, the optimization that allows refreshing an expectation
is removed. Now nf_conntrack_expect_related looks more consistent
to me since it always adds the expectation when it returns
success.
Thanks to Patrick McHardy for participating in the discussion of
this patch.
I think this may be the source of the problem described by:
http://marc.info/?l=netfilter-devel&m=134073514719421&w=2
Reported-by: Rafal Fitt <[email protected]>
Acked-by: Patrick McHardy <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Acked-by: David S. Miller <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/netfilter/nf_conntrack_expect.c | 29 ++++++-----------------------
1 file changed, 6 insertions(+), 23 deletions(-)
diff --git a/net/netfilter/nf_conntrack_expect.c b/net/netfilter/nf_conntrack_expect.c
index 45cf602..527651a 100644
--- a/net/netfilter/nf_conntrack_expect.c
+++ b/net/netfilter/nf_conntrack_expect.c
@@ -361,23 +361,6 @@ static void evict_oldest_expect(struct nf_conn *master,
}
}
-static inline int refresh_timer(struct nf_conntrack_expect *i)
-{
- struct nf_conn_help *master_help = nfct_help(i->master);
- const struct nf_conntrack_expect_policy *p;
-
- if (!del_timer(&i->timeout))
- return 0;
-
- p = &rcu_dereference_protected(
- master_help->helper,
- lockdep_is_held(&nf_conntrack_lock)
- )->expect_policy[i->class];
- i->timeout.expires = jiffies + p->timeout * HZ;
- add_timer(&i->timeout);
- return 1;
-}
-
static inline int __nf_ct_expect_check(struct nf_conntrack_expect *expect)
{
const struct nf_conntrack_expect_policy *p;
@@ -386,7 +369,7 @@ static inline int __nf_ct_expect_check(struct nf_conntrack_expect *expect)
struct nf_conn_help *master_help = nfct_help(master);
struct nf_conntrack_helper *helper;
struct net *net = nf_ct_exp_net(expect);
- struct hlist_node *n;
+ struct hlist_node *n, *next;
unsigned int h;
int ret = 1;
@@ -395,12 +378,12 @@ static inline int __nf_ct_expect_check(struct nf_conntrack_expect *expect)
goto out;
}
h = nf_ct_expect_dst_hash(&expect->tuple);
- hlist_for_each_entry(i, n, &net->ct.expect_hash[h], hnode) {
+ hlist_for_each_entry_safe(i, n, next, &net->ct.expect_hash[h], hnode) {
if (expect_matches(i, expect)) {
- /* Refresh timer: if it's dying, ignore.. */
- if (refresh_timer(i)) {
- ret = 0;
- goto out;
+ if (del_timer(&i->timeout)) {
+ nf_ct_unlink_expect(i);
+ nf_ct_expect_put(i);
+ break;
}
} else if (expect_clash(i, expect)) {
ret = -EBUSY;
--
1.7.9.5
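To make the failure mode above concrete, here is a condensed sketch of the
problematic sequence in the SIP helper (kernel-style illustration only,
paraphrasing the code quoted in the commit message; not a patch to apply):

	rtp_exp = nf_ct_expect_alloc(ct);  /* kmem_cache_alloc(): the timeout
					    * timer is uninitialized memory  */

	if (nf_ct_expect_related(rtp_exp) == 0) {
		/* Before this fix, 0 could also mean "an existing, matching
		 * expectation had its timer refreshed": rtp_exp itself was
		 * never inserted and setup_timer() was never called on
		 * rtp_exp->timeout. */
		if (nf_ct_expect_related(rtcp_exp) != 0)
			/* ...which ends up doing del_timer(&rtp_exp->timeout)
			 * on an uninitialized timer; if that "succeeds", an
			 * expectation that is not in the table gets unlinked. */
			nf_ct_unexpect_related(rtp_exp);
	}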
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Pablo Neira Ayuso <[email protected]>
commit 5b423f6a40a0327f9d40bc8b97ce9be266f74368 upstream.
Existing code assumes that del_timer returns true for alive conntrack
entries. However, this is not true if reliable events are enabled.
In that case, del_timer may return true for entries that were
just inserted in the dying list. Note that packets / ctnetlink may
hold references to conntrack entries that were just inserted into that
list.
This patch fixes the issue by adding an independent timer for
event delivery. This increases the size of the ecache extension.
Still we can revisit this later and use variable size extensions
to allocate this area on demand.
Tested-by: Oliver Smith <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Acked-by: David Miller <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/net/netfilter/nf_conntrack_ecache.h | 1 +
net/netfilter/nf_conntrack_core.c | 16 +++++++++++-----
2 files changed, 12 insertions(+), 5 deletions(-)
diff --git a/include/net/netfilter/nf_conntrack_ecache.h b/include/net/netfilter/nf_conntrack_ecache.h
index e1ce104..4a045cd 100644
--- a/include/net/netfilter/nf_conntrack_ecache.h
+++ b/include/net/netfilter/nf_conntrack_ecache.h
@@ -18,6 +18,7 @@ struct nf_conntrack_ecache {
u16 ctmask; /* bitmask of ct events to be delivered */
u16 expmask; /* bitmask of expect events to be delivered */
u32 pid; /* netlink pid of destroyer */
+ struct timer_list timeout;
};
static inline struct nf_conntrack_ecache *
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index ac3af97..06ed51c 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -249,12 +249,15 @@ static void death_by_event(unsigned long ul_conntrack)
{
struct nf_conn *ct = (void *)ul_conntrack;
struct net *net = nf_ct_net(ct);
+ struct nf_conntrack_ecache *ecache = nf_ct_ecache_find(ct);
+
+ BUG_ON(ecache == NULL);
if (nf_conntrack_event(IPCT_DESTROY, ct) < 0) {
/* bad luck, let's retry again */
- ct->timeout.expires = jiffies +
+ ecache->timeout.expires = jiffies +
(random32() % net->ct.sysctl_events_retry_timeout);
- add_timer(&ct->timeout);
+ add_timer(&ecache->timeout);
return;
}
/* we've got the event delivered, now it's dying */
@@ -268,6 +271,9 @@ static void death_by_event(unsigned long ul_conntrack)
void nf_ct_insert_dying_list(struct nf_conn *ct)
{
struct net *net = nf_ct_net(ct);
+ struct nf_conntrack_ecache *ecache = nf_ct_ecache_find(ct);
+
+ BUG_ON(ecache == NULL);
/* add this conntrack to the dying list */
spin_lock_bh(&nf_conntrack_lock);
@@ -275,10 +281,10 @@ void nf_ct_insert_dying_list(struct nf_conn *ct)
&net->ct.dying);
spin_unlock_bh(&nf_conntrack_lock);
/* set a new timer to retry event delivery */
- setup_timer(&ct->timeout, death_by_event, (unsigned long)ct);
- ct->timeout.expires = jiffies +
+ setup_timer(&ecache->timeout, death_by_event, (unsigned long)ct);
+ ecache->timeout.expires = jiffies +
(random32() % net->ct.sysctl_events_retry_timeout);
- add_timer(&ct->timeout);
+ add_timer(&ecache->timeout);
}
EXPORT_SYMBOL_GPL(nf_ct_insert_dying_list);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Patrick McHardy <[email protected]>
commit 2dba62c30ec3040ef1b647a8976fd7e6854cc7a7 upstream.
Commit 1db20a52 (nfnetlink_log: Stop using NLA_PUT*().) incorrectly
converted a NLA_PUT_BE16 macro to nla_put_be32() in nfnetlink_log:
- NLA_PUT_BE16(inst->skb, NFULA_HWTYPE, htons(skb->dev->type));
+ if (nla_put_be32(inst->skb, NFULA_HWTYPE, htons(skb->dev->type))
Signed-off-by: Patrick McHardy <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Acked-by: David S. Miller <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/netfilter/nfnetlink_log.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter/nfnetlink_log.c
index 3c3cfc0..bbc1d91 100644
--- a/net/netfilter/nfnetlink_log.c
+++ b/net/netfilter/nfnetlink_log.c
@@ -476,7 +476,7 @@ __build_packet_message(struct nfulnl_instance *inst,
}
if (indev && skb_mac_header_was_set(skb)) {
- if (nla_put_be32(inst->skb, NFULA_HWTYPE, htons(skb->dev->type)) ||
+ if (nla_put_be16(inst->skb, NFULA_HWTYPE, htons(skb->dev->type)) ||
nla_put_be16(inst->skb, NFULA_HWLEN,
htons(skb->dev->hard_header_len)) ||
nla_put(inst->skb, NFULA_HWHEADER, skb->dev->hard_header_len,
--
1.7.9.5
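The practical effect of using the wrong helper is an attribute of the wrong
size: nla_put_be16() emits a 2-byte payload (nla_len = 6 with the standard
4-byte attribute header), while nla_put_be32() emits a 4-byte payload
(nla_len = 8), so userspace that validates NFULA_HWTYPE as a 16-bit attribute
may reject or misread it. A tiny standalone check of those lengths using only
the uapi netlink macros (illustration, not part of the fix):

#include <stdio.h>
#include <linux/netlink.h>	/* NLA_HDRLEN */

int main(void)
{
	printf("nla_len with a 16-bit payload (nla_put_be16): %d\n",
	       NLA_HDRLEN + 2);
	printf("nla_len with a 32-bit payload (nla_put_be32): %d\n",
	       NLA_HDRLEN + 4);
	return 0;
}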
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Jan Kara <[email protected]>
commit 09e05d4805e6c524c1af74e524e5d0528bb3fef3 upstream.
ext3 users of data=journal mode with blocksize < pagesize were occasionally
hitting an assertion failure in journal_commit_transaction() checking whether the
transaction has at least as many credits reserved as buffers attached. The
core of the problem is that when a file gets truncated, buffers that still need
checkpointing or that are attached to the committing transaction are left with
buffer_mapped set. When this happens to buffers beyond i_size attached to a
page straddling i_size, a subsequent write extending the file will see these
buffers and, as they are mapped (but the underlying blocks were freed), things go
awry from here.
The assertion failure just coincidentally (and in this case luckily, as we would
otherwise start corrupting the filesystem) triggers due to the journal_head not being properly
cleaned up as well.
Under some rare circumstances this bug could even hit data=ordered mode users.
There the assertion won't trigger and we would end up corrupting the
filesystem.
We fix the problem by unmapping buffers if possible (in lots of cases we just
need a buffer attached to a transaction as a place holder but it must not be
written out anyway). And in one case, we just have to bite the bullet and wait
for transaction commit to finish.
Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Jan Kara <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/jbd/commit.c | 45 ++++++++++++++++++++++++++---------
fs/jbd/transaction.c | 64 ++++++++++++++++++++++++++++++++++----------------
2 files changed, 78 insertions(+), 31 deletions(-)
diff --git a/fs/jbd/commit.c b/fs/jbd/commit.c
index 52c15c7..86b39b1 100644
--- a/fs/jbd/commit.c
+++ b/fs/jbd/commit.c
@@ -86,7 +86,12 @@ nope:
static void release_data_buffer(struct buffer_head *bh)
{
if (buffer_freed(bh)) {
+ WARN_ON_ONCE(buffer_dirty(bh));
clear_buffer_freed(bh);
+ clear_buffer_mapped(bh);
+ clear_buffer_new(bh);
+ clear_buffer_req(bh);
+ bh->b_bdev = NULL;
release_buffer_page(bh);
} else
put_bh(bh);
@@ -866,17 +871,35 @@ restart_loop:
* there's no point in keeping a checkpoint record for
* it. */
- /* A buffer which has been freed while still being
- * journaled by a previous transaction may end up still
- * being dirty here, but we want to avoid writing back
- * that buffer in the future after the "add to orphan"
- * operation been committed, That's not only a performance
- * gain, it also stops aliasing problems if the buffer is
- * left behind for writeback and gets reallocated for another
- * use in a different page. */
- if (buffer_freed(bh) && !jh->b_next_transaction) {
- clear_buffer_freed(bh);
- clear_buffer_jbddirty(bh);
+ /*
+ * A buffer which has been freed while still being journaled by
+ * a previous transaction.
+ */
+ if (buffer_freed(bh)) {
+ /*
+ * If the running transaction is the one containing
+ * "add to orphan" operation (b_next_transaction !=
+ * NULL), we have to wait for that transaction to
+ * commit before we can really get rid of the buffer.
+ * So just clear b_modified to not confuse transaction
+ * credit accounting and refile the buffer to
+ * BJ_Forget of the running transaction. If the just
+ * committed transaction contains "add to orphan"
+ * operation, we can completely invalidate the buffer
+ * now. We are rather throughout in that since the
+ * buffer may be still accessible when blocksize <
+ * pagesize and it is attached to the last partial
+ * page.
+ */
+ jh->b_modified = 0;
+ if (!jh->b_next_transaction) {
+ clear_buffer_freed(bh);
+ clear_buffer_jbddirty(bh);
+ clear_buffer_mapped(bh);
+ clear_buffer_new(bh);
+ clear_buffer_req(bh);
+ bh->b_bdev = NULL;
+ }
}
if (buffer_jbddirty(bh)) {
diff --git a/fs/jbd/transaction.c b/fs/jbd/transaction.c
index febc10d..78b7f84 100644
--- a/fs/jbd/transaction.c
+++ b/fs/jbd/transaction.c
@@ -1843,15 +1843,16 @@ static int __dispose_buffer(struct journal_head *jh, transaction_t *transaction)
* We're outside-transaction here. Either or both of j_running_transaction
* and j_committing_transaction may be NULL.
*/
-static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
+static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh,
+ int partial_page)
{
transaction_t *transaction;
struct journal_head *jh;
int may_free = 1;
- int ret;
BUFFER_TRACE(bh, "entry");
+retry:
/*
* It is safe to proceed here without the j_list_lock because the
* buffers cannot be stolen by try_to_free_buffers as long as we are
@@ -1879,10 +1880,18 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
* clear the buffer dirty bit at latest at the moment when the
* transaction marking the buffer as freed in the filesystem
* structures is committed because from that moment on the
- * buffer can be reallocated and used by a different page.
+ * block can be reallocated and used by a different page.
* Since the block hasn't been freed yet but the inode has
* already been added to orphan list, it is safe for us to add
* the buffer to BJ_Forget list of the newest transaction.
+ *
+ * Also we have to clear buffer_mapped flag of a truncated buffer
+ * because the buffer_head may be attached to the page straddling
+ * i_size (can happen only when blocksize < pagesize) and thus the
+ * buffer_head can be reused when the file is extended again. So we end
+ * up keeping around invalidated buffers attached to transactions'
+ * BJ_Forget list just to stop checkpointing code from cleaning up
+ * the transaction this buffer was modified in.
*/
transaction = jh->b_transaction;
if (transaction == NULL) {
@@ -1909,13 +1918,9 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
* committed, the buffer won't be needed any
* longer. */
JBUFFER_TRACE(jh, "checkpointed: add to BJ_Forget");
- ret = __dispose_buffer(jh,
+ may_free = __dispose_buffer(jh,
journal->j_running_transaction);
- journal_put_journal_head(jh);
- spin_unlock(&journal->j_list_lock);
- jbd_unlock_bh_state(bh);
- spin_unlock(&journal->j_state_lock);
- return ret;
+ goto zap_buffer;
} else {
/* There is no currently-running transaction. So the
* orphan record which we wrote for this file must have
@@ -1923,13 +1928,9 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
* the committing transaction, if it exists. */
if (journal->j_committing_transaction) {
JBUFFER_TRACE(jh, "give to committing trans");
- ret = __dispose_buffer(jh,
+ may_free = __dispose_buffer(jh,
journal->j_committing_transaction);
- journal_put_journal_head(jh);
- spin_unlock(&journal->j_list_lock);
- jbd_unlock_bh_state(bh);
- spin_unlock(&journal->j_state_lock);
- return ret;
+ goto zap_buffer;
} else {
/* The orphan record's transaction has
* committed. We can cleanse this buffer */
@@ -1950,10 +1951,24 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
}
/*
* The buffer is committing, we simply cannot touch
- * it. So we just set j_next_transaction to the
- * running transaction (if there is one) and mark
- * buffer as freed so that commit code knows it should
- * clear dirty bits when it is done with the buffer.
+ * it. If the page is straddling i_size we have to wait
+ * for commit and try again.
+ */
+ if (partial_page) {
+ tid_t tid = journal->j_committing_transaction->t_tid;
+
+ journal_put_journal_head(jh);
+ spin_unlock(&journal->j_list_lock);
+ jbd_unlock_bh_state(bh);
+ spin_unlock(&journal->j_state_lock);
+ log_wait_commit(journal, tid);
+ goto retry;
+ }
+ /*
+ * OK, buffer won't be reachable after truncate. We just set
+ * j_next_transaction to the running transaction (if there is
+ * one) and mark buffer as freed so that commit code knows it
+ * should clear dirty bits when it is done with the buffer.
*/
set_buffer_freed(bh);
if (journal->j_running_transaction && buffer_jbddirty(bh))
@@ -1976,6 +1991,14 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
}
zap_buffer:
+ /*
+ * This is tricky. Although the buffer is truncated, it may be reused
+ * if blocksize < pagesize and it is attached to the page straddling
+ * EOF. Since the buffer might have been added to BJ_Forget list of the
+ * running transaction, journal_get_write_access() won't clear
+ * b_modified and credit accounting gets confused. So clear b_modified
+ * here. */
+ jh->b_modified = 0;
journal_put_journal_head(jh);
zap_buffer_no_jh:
spin_unlock(&journal->j_list_lock);
@@ -2024,7 +2047,8 @@ void journal_invalidatepage(journal_t *journal,
if (offset <= curr_off) {
/* This block is wholly outside the truncation point */
lock_buffer(bh);
- may_free &= journal_unmap_buffer(journal, bh);
+ may_free &= journal_unmap_buffer(journal, bh,
+ offset > 0);
unlock_buffer(bh);
}
curr_off = next_off;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Jan Engelhardt <[email protected]>
commit 82e6bfe2fbc4d48852114c4f979137cd5bf1d1a8 upstream.
Commit v2.6.19-rc1~1272^2~41 tells us that r->cost != 0 can happen when
a running state is saved to userspace and then reinstated from there.
Make sure that private xt_limit area is initialized with correct values.
Otherwise, random matchings occur due to the use of uninitialized memory.
Signed-off-by: Jan Engelhardt <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Acked-by: David S. Miller <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/netfilter/xt_limit.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/net/netfilter/xt_limit.c b/net/netfilter/xt_limit.c
index 5c22ce8..a4c1e45 100644
--- a/net/netfilter/xt_limit.c
+++ b/net/netfilter/xt_limit.c
@@ -117,11 +117,11 @@ static int limit_mt_check(const struct xt_mtchk_param *par)
/* For SMP, we only want to use one set of state. */
r->master = priv;
+ /* User avg in seconds * XT_LIMIT_SCALE: convert to jiffies *
+ 128. */
+ priv->prev = jiffies;
+ priv->credit = user2credits(r->avg * r->burst); /* Credits full. */
if (r->cost == 0) {
- /* User avg in seconds * XT_LIMIT_SCALE: convert to jiffies *
- 128. */
- priv->prev = jiffies;
- priv->credit = user2credits(r->avg * r->burst); /* Credits full. */
r->credit_cap = priv->credit; /* Credits full. */
r->cost = user2credits(r->avg);
}
--
1.7.9.5
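As a side note, a small stand-alone C sketch (hypothetical names, not the
netfilter code) of the pattern the patch moves to: the per-instance private
state is filled in unconditionally, so a rule restored from user space with
cost != 0 no longer runs on whatever happened to be in freshly allocated memory.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct saved_cfg { unsigned int cost, avg, burst, credit_cap; };
struct priv_state { unsigned long prev; unsigned int credit; };

/* Old shape: private state only initialized on a fresh rule (cost == 0). */
static void check_old(struct saved_cfg *cfg, struct priv_state *p)
{
        if (cfg->cost == 0) {
                p->prev = 1000;                 /* stand-in for jiffies */
                p->credit = cfg->avg * cfg->burst;
                cfg->credit_cap = p->credit;
                cfg->cost = cfg->avg;
        }
        /* restored rule (cost != 0): p->prev / p->credit keep garbage */
}

/* Fixed shape: private state is always initialized. */
static void check_new(struct saved_cfg *cfg, struct priv_state *p)
{
        p->prev = 1000;
        p->credit = cfg->avg * cfg->burst;
        if (cfg->cost == 0) {
                cfg->credit_cap = p->credit;
                cfg->cost = cfg->avg;
        }
}

int main(void)
{
        struct saved_cfg restored = { .cost = 7, .avg = 2, .burst = 5 };
        struct priv_state *p = malloc(sizeof(*p));

        memset(p, 0xAA, sizeof(*p));            /* simulate uninitialized heap */
        check_old(&restored, p);
        printf("old: credit = 0x%x (whatever was in memory)\n", p->credit);

        check_new(&restored, p);
        printf("new: credit = %u\n", p->credit);
        free(p);
        return 0;
}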
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: "J. Bruce Fields" <[email protected]>
commit cf9182e90b2af04245ac4fae497fe73fc71285b4 upstream.
Processes that open and close multiple files may end up setting this
oo_last_closed_stid without freeing what was previously pointed to.
This can result in a major leak, visible for example by watching the
nfsd4_stateids line of /proc/slabinfo.
Reported-by: Cyril B. <[email protected]>
Tested-by: Cyril B. <[email protected]>
Signed-off-by: J. Bruce Fields <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/nfsd/nfs4state.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index e8ead04..be324aa 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -3748,6 +3748,7 @@ nfsd4_close(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
memcpy(&close->cl_stateid, &stp->st_stid.sc_stateid, sizeof(stateid_t));
nfsd4_close_open_stateid(stp);
+ release_last_closed_stateid(oo);
oo->oo_last_closed_stid = stp;
/* place unused nfs4_stateowners on so_close_lru list to be
--
1.7.9.5
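The leak pattern itself is generic; here is a minimal stand-alone sketch
(hypothetical names, nothing nfsd-specific) of caching "the last closed object"
with and without releasing the previously cached one.
#include <stdio.h>
#include <stdlib.h>

struct stateid { int id; };

static struct stateid *last_closed;

/* fixed == 0 reproduces the leak: the pointer is overwritten and the
 * previously cached object is never freed. */
static void record_last_closed(struct stateid *st, int fixed)
{
        if (fixed)
                free(last_closed);      /* release what was cached before */
        last_closed = st;
}

int main(void)
{
        for (int i = 0; i < 3; i++) {
                struct stateid *st = malloc(sizeof(*st));
                st->id = i;
                record_last_closed(st, 1);      /* pass 0 to watch the leak
                                                 * grow under valgrind/ASan */
        }
        printf("last closed id: %d\n", last_closed->id);
        free(last_closed);
        return 0;
}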
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Malahal Naineni <[email protected]>
commit 9959ba0c241a71c7ed8133401cfbbee2720da0b5 upstream.
The 'buf' is prepared with null termination with the intention of using it for
this purpose, but 'name' is passed instead!
Signed-off-by: Malahal Naineni <[email protected]>
Signed-off-by: J. Bruce Fields <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/nfsd/nfs4idmap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/nfsd/nfs4idmap.c b/fs/nfsd/nfs4idmap.c
index dae36f1..b6f782e 100644
--- a/fs/nfsd/nfs4idmap.c
+++ b/fs/nfsd/nfs4idmap.c
@@ -598,7 +598,7 @@ numeric_name_to_id(struct svc_rqst *rqstp, int type, const char *name, u32 namel
/* Just to make sure it's null-terminated: */
memcpy(buf, name, namelen);
buf[namelen] = '\0';
- ret = kstrtouint(name, 10, id);
+ ret = kstrtouint(buf, 10, id);
return ret == 0;
}
--
1.7.9.5
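A small user-space analogue of the fix (strtoul standing in for kstrtouint,
names hypothetical): the parse has to read the null-terminated copy rather than
the original, possibly unterminated, slice.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int numeric_name_to_id(const char *name, size_t namelen, unsigned *id)
{
        char buf[16];

        if (namelen + 1 > sizeof(buf))
                return 0;
        memcpy(buf, name, namelen);
        buf[namelen] = '\0';            /* the whole point of making the copy */

        char *end;
        unsigned long val = strtoul(buf, &end, 10);     /* parse buf, not name */
        if (end == buf || *end != '\0')
                return 0;
        *id = (unsigned)val;
        return 1;
}

int main(void)
{
        /* "1234" followed by non-digit noise, no terminator inside the slice */
        const char raw[] = { '1', '2', '3', '4', 'X', 'Y' };
        unsigned id;

        if (numeric_name_to_id(raw, 4, &id))
                printf("parsed id %u\n", id);
        else
                printf("not a numeric name\n");
        return 0;
}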
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Mark Brown <[email protected]>
commit c10c2aab634a3c61c46b98875988b2f53040bc9c upstream.
The removal of mach/io.h from most ARM platforms also set the range of
valid IO ports to be empty for most platforms when previously any 32
bit integer had been valid. This makes it impossible to add IO resources
as the added range no longer fits inside the root resource for IO ports.
Since we're not really using IO memory at all, fix this by defining our
own root resource outside the normal tree and making that the parent of
all IO resources. This also ensures we won't conflict with real IO ports
if we ever run on a platform which happens to use them.
Signed-off-by: Mark Brown <[email protected]>
Acked-by: Arnd Bergmann <[email protected]>
Acked-by: Haojian Zhuang <[email protected]>
Tested-by: Haojian Zhuang <[email protected]>
Signed-off-by: Samuel Ortiz <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/mfd/88pm860x-core.c | 89 +++++++++++++++++++++++++++++--------------
1 file changed, 61 insertions(+), 28 deletions(-)
diff --git a/drivers/mfd/88pm860x-core.c b/drivers/mfd/88pm860x-core.c
index 87bd5ba..1a9e5f1 100644
--- a/drivers/mfd/88pm860x-core.c
+++ b/drivers/mfd/88pm860x-core.c
@@ -21,40 +21,73 @@
#define INT_STATUS_NUM 3
+static struct resource io_parent = {
+ .start = 0,
+ .end = 0xffffffff,
+ .flags = IORESOURCE_IO,
+};
+
static struct resource bk_resources[] __devinitdata = {
- {PM8606_BACKLIGHT1, PM8606_BACKLIGHT1, "backlight-0", IORESOURCE_IO,},
- {PM8606_BACKLIGHT2, PM8606_BACKLIGHT2, "backlight-1", IORESOURCE_IO,},
- {PM8606_BACKLIGHT3, PM8606_BACKLIGHT3, "backlight-2", IORESOURCE_IO,},
+ {PM8606_BACKLIGHT1, PM8606_BACKLIGHT1, "backlight-0", IORESOURCE_IO,
+ &io_parent,},
+ {PM8606_BACKLIGHT2, PM8606_BACKLIGHT2, "backlight-1", IORESOURCE_IO,
+ &io_parent,},
+ {PM8606_BACKLIGHT3, PM8606_BACKLIGHT3, "backlight-2", IORESOURCE_IO,
+ &io_parent,},
};
static struct resource led_resources[] __devinitdata = {
- {PM8606_LED1_RED, PM8606_LED1_RED, "led0-red", IORESOURCE_IO,},
- {PM8606_LED1_GREEN, PM8606_LED1_GREEN, "led0-green", IORESOURCE_IO,},
- {PM8606_LED1_BLUE, PM8606_LED1_BLUE, "led0-blue", IORESOURCE_IO,},
- {PM8606_LED2_RED, PM8606_LED2_RED, "led1-red", IORESOURCE_IO,},
- {PM8606_LED2_GREEN, PM8606_LED2_GREEN, "led1-green", IORESOURCE_IO,},
- {PM8606_LED2_BLUE, PM8606_LED2_BLUE, "led1-blue", IORESOURCE_IO,},
+ {PM8606_LED1_RED, PM8606_LED1_RED, "led0-red", IORESOURCE_IO,
+ &io_parent,},
+ {PM8606_LED1_GREEN, PM8606_LED1_GREEN, "led0-green", IORESOURCE_IO,
+ &io_parent,},
+ {PM8606_LED1_BLUE, PM8606_LED1_BLUE, "led0-blue", IORESOURCE_IO,
+ &io_parent,},
+ {PM8606_LED2_RED, PM8606_LED2_RED, "led1-red", IORESOURCE_IO,
+ &io_parent,},
+ {PM8606_LED2_GREEN, PM8606_LED2_GREEN, "led1-green", IORESOURCE_IO,
+ &io_parent,},
+ {PM8606_LED2_BLUE, PM8606_LED2_BLUE, "led1-blue", IORESOURCE_IO,
+ &io_parent,},
};
static struct resource regulator_resources[] __devinitdata = {
- {PM8607_ID_BUCK1, PM8607_ID_BUCK1, "buck-1", IORESOURCE_IO,},
- {PM8607_ID_BUCK2, PM8607_ID_BUCK2, "buck-2", IORESOURCE_IO,},
- {PM8607_ID_BUCK3, PM8607_ID_BUCK3, "buck-3", IORESOURCE_IO,},
- {PM8607_ID_LDO1, PM8607_ID_LDO1, "ldo-01", IORESOURCE_IO,},
- {PM8607_ID_LDO2, PM8607_ID_LDO2, "ldo-02", IORESOURCE_IO,},
- {PM8607_ID_LDO3, PM8607_ID_LDO3, "ldo-03", IORESOURCE_IO,},
- {PM8607_ID_LDO4, PM8607_ID_LDO4, "ldo-04", IORESOURCE_IO,},
- {PM8607_ID_LDO5, PM8607_ID_LDO5, "ldo-05", IORESOURCE_IO,},
- {PM8607_ID_LDO6, PM8607_ID_LDO6, "ldo-06", IORESOURCE_IO,},
- {PM8607_ID_LDO7, PM8607_ID_LDO7, "ldo-07", IORESOURCE_IO,},
- {PM8607_ID_LDO8, PM8607_ID_LDO8, "ldo-08", IORESOURCE_IO,},
- {PM8607_ID_LDO9, PM8607_ID_LDO9, "ldo-09", IORESOURCE_IO,},
- {PM8607_ID_LDO10, PM8607_ID_LDO10, "ldo-10", IORESOURCE_IO,},
- {PM8607_ID_LDO11, PM8607_ID_LDO11, "ldo-11", IORESOURCE_IO,},
- {PM8607_ID_LDO12, PM8607_ID_LDO12, "ldo-12", IORESOURCE_IO,},
- {PM8607_ID_LDO13, PM8607_ID_LDO13, "ldo-13", IORESOURCE_IO,},
- {PM8607_ID_LDO14, PM8607_ID_LDO14, "ldo-14", IORESOURCE_IO,},
- {PM8607_ID_LDO15, PM8607_ID_LDO15, "ldo-15", IORESOURCE_IO,},
+ {PM8607_ID_BUCK1, PM8607_ID_BUCK1, "buck-1", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_BUCK2, PM8607_ID_BUCK2, "buck-2", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_BUCK3, PM8607_ID_BUCK3, "buck-3", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO1, PM8607_ID_LDO1, "ldo-01", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO2, PM8607_ID_LDO2, "ldo-02", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO3, PM8607_ID_LDO3, "ldo-03", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO4, PM8607_ID_LDO4, "ldo-04", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO5, PM8607_ID_LDO5, "ldo-05", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO6, PM8607_ID_LDO6, "ldo-06", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO7, PM8607_ID_LDO7, "ldo-07", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO8, PM8607_ID_LDO8, "ldo-08", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO9, PM8607_ID_LDO9, "ldo-09", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO10, PM8607_ID_LDO10, "ldo-10", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO11, PM8607_ID_LDO11, "ldo-11", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO12, PM8607_ID_LDO12, "ldo-12", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO13, PM8607_ID_LDO13, "ldo-13", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO14, PM8607_ID_LDO14, "ldo-14", IORESOURCE_IO,
+ &io_parent,},
+ {PM8607_ID_LDO15, PM8607_ID_LDO15, "ldo-15", IORESOURCE_IO,
+ &io_parent,},
};
static struct resource touch_resources[] __devinitdata = {
@@ -91,7 +124,7 @@ static struct resource charger_resources[] __devinitdata = {
};
static struct resource rtc_resources[] __devinitdata = {
- {PM8607_IRQ_RTC, PM8607_IRQ_RTC, "rtc", IORESOURCE_IRQ,},
+ {PM8607_IRQ_RTC, PM8607_IRQ_RTC, "rtc", IORESOURCE_IRQ, &io_parent,},
};
static struct mfd_cell bk_devs[] = {
--
1.7.9.5
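A toy model (stand-alone, not driver code) of the containment rule that was
being violated: a child range can only be inserted under a parent whose range
covers it, so once the platform's IO root shrank to empty every insertion
failed, and parenting the resources under a wide private root restores the
old behaviour.
#include <stdio.h>

struct range { unsigned long start, end; const char *name; };

static int insert_under(const struct range *parent, const struct range *child)
{
        return child->start >= parent->start && child->end <= parent->end;
}

int main(void)
{
        struct range empty_root   = { 0, 0, "platform ioport root (empty)" };
        struct range private_root = { 0, 0xffffffff, "io_parent" };
        struct range backlight    = { 1, 1, "backlight-0" };   /* register index */

        printf("%s under %s: %s\n", backlight.name, empty_root.name,
               insert_under(&empty_root, &backlight) ? "ok" : "rejected");
        printf("%s under %s: %s\n", backlight.name, private_root.name,
               insert_under(&private_root, &backlight) ? "ok" : "rejected");
        return 0;
}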
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Paolo Bonzini <[email protected]>
commit ffe7b0e9326d9c68f5688bef691dd49f1e0d3651 upstream.
INQUIRY processing already uses an on-heap bounce buffer for loopback,
but not for other fabrics. Switch this to a cheaper on-stack bounce
buffer, similar to the one used by MODE SENSE and REQUEST SENSE, and
use it unconditionally. With this in place, zero allocation length is
handled simply by checking the return address of transport_kmap_data_sg.
Testcase: sg_raw /dev/sdb 12 00 83 00 00 00
should fail with ILLEGAL REQUEST / INVALID FIELD IN CDB sense
does not fail without the patch
fails correctly with the series
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
[ herton: code to be patched is in target_core_cdb.c on 3.5 ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/target/target_core_cdb.c | 31 ++++++-------------------------
1 file changed, 6 insertions(+), 25 deletions(-)
diff --git a/drivers/target/target_core_cdb.c b/drivers/target/target_core_cdb.c
index bf7d38a..5b20579 100644
--- a/drivers/target/target_core_cdb.c
+++ b/drivers/target/target_core_cdb.c
@@ -605,30 +605,11 @@ int target_emulate_inquiry(struct se_cmd *cmd)
{
struct se_device *dev = cmd->se_dev;
struct se_portal_group *tpg = cmd->se_lun->lun_sep->sep_tpg;
- unsigned char *buf, *map_buf;
+ unsigned char *rbuf;
unsigned char *cdb = cmd->t_task_cdb;
+ unsigned char buf[SE_INQUIRY_BUF];
int p, ret;
- map_buf = transport_kmap_data_sg(cmd);
- /*
- * If SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC is not set, then we
- * know we actually allocated a full page. Otherwise, if the
- * data buffer is too small, allocate a temporary buffer so we
- * don't have to worry about overruns in all our INQUIRY
- * emulation handling.
- */
- if (cmd->data_length < SE_INQUIRY_BUF &&
- (cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC)) {
- buf = kzalloc(SE_INQUIRY_BUF, GFP_KERNEL);
- if (!buf) {
- transport_kunmap_data_sg(cmd);
- cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
- return -ENOMEM;
- }
- } else {
- buf = map_buf;
- }
-
if (dev == tpg->tpg_virt_lun0.lun_se_dev)
buf[0] = 0x3f; /* Not connected */
else
@@ -660,11 +641,11 @@ int target_emulate_inquiry(struct se_cmd *cmd)
ret = -EINVAL;
out:
- if (buf != map_buf) {
- memcpy(map_buf, buf, cmd->data_length);
- kfree(buf);
+ rbuf = transport_kmap_data_sg(cmd);
+ if (rbuf) {
+ memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length));
+ transport_kunmap_data_sg(cmd);
}
- transport_kunmap_data_sg(cmd);
if (!ret)
target_complete_cmd(cmd, GOOD);
--
1.7.9.5
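A reduced user-space sketch of the bounce-buffer shape the patch moves to
(sizes and field contents here are illustrative, not the real INQUIRY layout):
build the full response on the stack, then copy at most the caller's allocation
length, which makes a zero-length request a natural no-op.
#include <stdio.h>
#include <string.h>

#define INQ_BUF 36

static size_t emulate_inquiry(unsigned char *out, size_t data_length)
{
        unsigned char buf[INQ_BUF] = { 0 };

        buf[0] = 0x00;                  /* peripheral device type: disk */
        memcpy(&buf[8], "SKETCH  ", 8); /* vendor id, illustrative only */

        size_t n = data_length < sizeof(buf) ? data_length : sizeof(buf);
        if (out && n)
                memcpy(out, buf, n);    /* never overrun a short allocation */
        return n;
}

int main(void)
{
        unsigned char small[4];

        printf("copied %zu of %d bytes\n",
               emulate_inquiry(small, sizeof(small)), INQ_BUF);
        printf("copied %zu bytes for a zero-length request\n",
               emulate_inquiry(NULL, 0));
        return 0;
}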
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Senna Tschudin <[email protected]>
commit 37bb7899ca366dc212b71b150e78566d04808cc0 upstream.
This patch fixes error cases within target_core_init_configfs() to
properly set ret = -ENOMEM before jumping to the out_global exception
path.
This was originally discovered with the following Coccinelle semantic
match information:
Convert a nonnegative error return code to a negative one, as returned
elsewhere in the function. A simplified version of the semantic match
that finds this problem is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
(
if@p1 (\(ret < 0\|ret != 0\))
{ ... return ret; }
|
ret@p1 = 0
)
... when != ret = e1
when != &ret
*if(...)
{
... when != ret = e2
when forall
return ret;
}
// </smpl>
Signed-off-by: Peter Senna Tschudin <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/target/target_core_configfs.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
index 801efa8..06aca11 100644
--- a/drivers/target/target_core_configfs.c
+++ b/drivers/target/target_core_configfs.c
@@ -3132,6 +3132,7 @@ static int __init target_core_init_configfs(void)
GFP_KERNEL);
if (!target_cg->default_groups) {
pr_err("Unable to allocate target_cg->default_groups\n");
+ ret = -ENOMEM;
goto out_global;
}
@@ -3147,6 +3148,7 @@ static int __init target_core_init_configfs(void)
GFP_KERNEL);
if (!hba_cg->default_groups) {
pr_err("Unable to allocate hba_cg->default_groups\n");
+ ret = -ENOMEM;
goto out_global;
}
config_group_init_type_name(&alua_group,
@@ -3162,6 +3164,7 @@ static int __init target_core_init_configfs(void)
GFP_KERNEL);
if (!alua_cg->default_groups) {
pr_err("Unable to allocate alua_cg->default_groups\n");
+ ret = -ENOMEM;
goto out_global;
}
@@ -3173,14 +3176,17 @@ static int __init target_core_init_configfs(void)
* Add core/alua/lu_gps/default_lu_gp
*/
lu_gp = core_alua_allocate_lu_gp("default_lu_gp", 1);
- if (IS_ERR(lu_gp))
+ if (IS_ERR(lu_gp)) {
+ ret = -ENOMEM;
goto out_global;
+ }
lu_gp_cg = &alua_lu_gps_group;
lu_gp_cg->default_groups = kzalloc(sizeof(struct config_group) * 2,
GFP_KERNEL);
if (!lu_gp_cg->default_groups) {
pr_err("Unable to allocate lu_gp_cg->default_groups\n");
+ ret = -ENOMEM;
goto out_global;
}
--
1.7.9.5
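The underlying rule is easy to state with a stand-alone sketch (hypothetical
function, not the target code): every failure branch that jumps to a shared
exit label has to set a negative errno first, otherwise the function unwinds
correctly but still reports success to its caller.
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int init_groups(int fail_second)
{
        int ret = 0;
        void *a = NULL, *b = NULL;

        a = malloc(32);
        if (!a) {
                ret = -ENOMEM;          /* without this, we would return 0 */
                goto out;
        }
        b = fail_second ? NULL : malloc(32);
        if (!b) {
                ret = -ENOMEM;
                goto out;
        }
out:
        free(a);                        /* free(NULL) is a no-op */
        free(b);
        return ret;
}

int main(void)
{
        printf("ok path:   %d\n", init_groups(0));
        printf("fail path: %d\n", init_groups(1));
        return 0;
}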
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust <[email protected]>
commit a519fc7a70d1a918574bb826cc6905b87b482eb9 upstream.
Instead of doing a shutdown() call, we need to do an actual close().
Ditto if/when the server is sending us junk RPC headers.
Signed-off-by: Trond Myklebust <[email protected]>
Tested-by: Simon Kirby <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/sunrpc/xprtsock.c | 21 ++++++++++++++++-----
1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index b88c6bf..00ff343 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1028,6 +1028,16 @@ static void xs_udp_data_ready(struct sock *sk, int len)
read_unlock_bh(&sk->sk_callback_lock);
}
+/*
+ * Helper function to force a TCP close if the server is sending
+ * junk and/or it has put us in CLOSE_WAIT
+ */
+static void xs_tcp_force_close(struct rpc_xprt *xprt)
+{
+ set_bit(XPRT_CONNECTION_CLOSE, &xprt->state);
+ xprt_force_disconnect(xprt);
+}
+
static inline void xs_tcp_read_fraghdr(struct rpc_xprt *xprt, struct xdr_skb_reader *desc)
{
struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
@@ -1054,7 +1064,7 @@ static inline void xs_tcp_read_fraghdr(struct rpc_xprt *xprt, struct xdr_skb_rea
/* Sanity check of the record length */
if (unlikely(transport->tcp_reclen < 8)) {
dprintk("RPC: invalid TCP record fragment length\n");
- xprt_force_disconnect(xprt);
+ xs_tcp_force_close(xprt);
return;
}
dprintk("RPC: reading TCP record fragment of length %d\n",
@@ -1135,7 +1145,7 @@ static inline void xs_tcp_read_calldir(struct sock_xprt *transport,
break;
default:
dprintk("RPC: invalid request message type\n");
- xprt_force_disconnect(&transport->xprt);
+ xs_tcp_force_close(&transport->xprt);
}
xs_tcp_check_fraghdr(transport);
}
@@ -1458,6 +1468,8 @@ static void xs_tcp_cancel_linger_timeout(struct rpc_xprt *xprt)
static void xs_sock_mark_closed(struct rpc_xprt *xprt)
{
smp_mb__before_clear_bit();
+ clear_bit(XPRT_CONNECTION_ABORT, &xprt->state);
+ clear_bit(XPRT_CONNECTION_CLOSE, &xprt->state);
clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
clear_bit(XPRT_CLOSING, &xprt->state);
smp_mb__after_clear_bit();
@@ -1515,8 +1527,8 @@ static void xs_tcp_state_change(struct sock *sk)
break;
case TCP_CLOSE_WAIT:
/* The server initiated a shutdown of the socket */
- xprt_force_disconnect(xprt);
xprt->connect_cookie++;
+ xs_tcp_force_close(xprt);
case TCP_CLOSING:
/*
* If the server closed down the connection, make sure that
@@ -2159,8 +2171,7 @@ static void xs_tcp_setup_socket(struct work_struct *work)
/* We're probably in TIME_WAIT. Get rid of existing socket,
* and retry
*/
- set_bit(XPRT_CONNECTION_CLOSE, &xprt->state);
- xprt_force_disconnect(xprt);
+ xs_tcp_force_close(xprt);
break;
case -ECONNREFUSED:
case -ECONNRESET:
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Paolo Bonzini <[email protected]>
commit 7a3f369ce31694017996524a1cdb08208a839077 upstream.
The offset was not bumped back to the full size after writing the
header of the MODE SENSE response, so the last 1 or 2 bytes were
not copied.
On top of this, support zero-length requests by checking for the
return value of transport_kmap_data_sg.
Testcase: sg_raw -r20 /dev/sdb 5a 00 0a 00 00 00 00 00 14 00
last byte should be 0x1e
it is 0x00 without the patch
it is correct with the patch
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
[ herton: backported, code to be patched is on target_core_cdb.c,
target_emulate_modesense, the function was moved/renamed later ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/target/target_core_cdb.c | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/drivers/target/target_core_cdb.c b/drivers/target/target_core_cdb.c
index 5b20579..3dc3393 100644
--- a/drivers/target/target_core_cdb.c
+++ b/drivers/target/target_core_cdb.c
@@ -856,7 +856,7 @@ int target_emulate_modesense(struct se_cmd *cmd)
unsigned char *rbuf;
int type = dev->transport->get_device_type(dev);
int ten = (cmd->t_task_cdb[0] == MODE_SENSE_10);
- int offset = ten ? 8 : 4;
+ u32 offset = ten ? 8 : 4;
int length = 0;
unsigned char buf[SE_MODE_PAGE_BUF];
@@ -889,6 +889,7 @@ int target_emulate_modesense(struct se_cmd *cmd)
offset -= 2;
buf[0] = (offset >> 8) & 0xff;
buf[1] = offset & 0xff;
+ offset += 2;
if ((cmd->se_lun->lun_access & TRANSPORT_LUNFLAGS_READ_ONLY) ||
(cmd->se_deve &&
@@ -898,13 +899,10 @@ int target_emulate_modesense(struct se_cmd *cmd)
if ((dev->se_sub_dev->se_dev_attrib.emulate_write_cache > 0) &&
(dev->se_sub_dev->se_dev_attrib.emulate_fua_write > 0))
target_modesense_dpofua(&buf[3], type);
-
- if ((offset + 2) > cmd->data_length)
- offset = cmd->data_length;
-
} else {
offset -= 1;
buf[0] = offset & 0xff;
+ offset += 1;
if ((cmd->se_lun->lun_access & TRANSPORT_LUNFLAGS_READ_ONLY) ||
(cmd->se_deve &&
@@ -914,14 +912,13 @@ int target_emulate_modesense(struct se_cmd *cmd)
if ((dev->se_sub_dev->se_dev_attrib.emulate_write_cache > 0) &&
(dev->se_sub_dev->se_dev_attrib.emulate_fua_write > 0))
target_modesense_dpofua(&buf[2], type);
-
- if ((offset + 1) > cmd->data_length)
- offset = cmd->data_length;
}
rbuf = transport_kmap_data_sg(cmd);
- memcpy(rbuf, buf, offset);
- transport_kunmap_data_sg(cmd);
+ if (rbuf) {
+ memcpy(rbuf, buf, min(offset, cmd->data_length));
+ transport_kunmap_data_sg(cmd);
+ }
target_complete_cmd(cmd, GOOD);
return 0;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Gavin Shan <[email protected]>
commit feadf7c0a1a7c08c74bebb4a13b755f8c40e3bbc upstream.
The EEH core is talking with the PCI device driver to determine the
action (purely reset, or PCI device removal). During the period, the
driver might be unloaded and in turn causes kernel crash as follows:
EEH: Detected PCI bus error on PHB#4-PE#10000
EEH: This PCI device has failed 3 times in the last hour
lpfc 0004:01:00.0: 0:2710 PCI channel disable preparing for reset
Unable to handle kernel paging request for data at address 0x00000490
Faulting instruction address: 0xd00000000e682c90
cpu 0x1: Vector: 300 (Data Access) at [c000000fc75ffa20]
pc: d00000000e682c90: .lpfc_io_error_detected+0x30/0x240 [lpfc]
lr: d00000000e682c8c: .lpfc_io_error_detected+0x2c/0x240 [lpfc]
sp: c000000fc75ffca0
msr: 8000000000009032
dar: 490
dsisr: 40000000
current = 0xc000000fc79b88b0
paca = 0xc00000000edb0380 softe: 0 irq_happened: 0x00
pid = 3386, comm = eehd
enter ? for help
[c000000fc75ffca0] c000000fc75ffd30 (unreliable)
[c000000fc75ffd30] c00000000004fd3c .eeh_report_error+0x7c/0xf0
[c000000fc75ffdc0] c00000000004ee00 .eeh_pe_dev_traverse+0xa0/0x180
[c000000fc75ffe70] c00000000004ffd8 .eeh_handle_event+0x68/0x300
[c000000fc75fff00] c0000000000503a0 .eeh_event_handler+0x130/0x1a0
[c000000fc75fff90] c000000000020138 .kernel_thread+0x54/0x70
1:mon>
The patch takes a reference on the corresponding driver module while the
EEH core negotiates with the PCI device driver, so that the module can't
be unloaded during that period and it is safe to invoke its callbacks.
Reported-by: Alexey Kardashevskiy <[email protected]>
Signed-off-by: Gavin Shan <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
[ herton: backported for 3.5, adjusted driver assignments, return 0
instead of NULL, assume dev is not NULL ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/powerpc/platforms/pseries/eeh_driver.c | 95 +++++++++++++++++++++------
1 file changed, 74 insertions(+), 21 deletions(-)
diff --git a/arch/powerpc/platforms/pseries/eeh_driver.c b/arch/powerpc/platforms/pseries/eeh_driver.c
index baf92cd..041e28d 100644
--- a/arch/powerpc/platforms/pseries/eeh_driver.c
+++ b/arch/powerpc/platforms/pseries/eeh_driver.c
@@ -25,6 +25,7 @@
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
+#include <linux/module.h>
#include <linux/pci.h>
#include <asm/eeh.h>
#include <asm/eeh_event.h>
@@ -47,6 +48,41 @@ static inline const char *eeh_pcid_name(struct pci_dev *pdev)
return "";
}
+/**
+ * eeh_pcid_get - Get the PCI device driver
+ * @pdev: PCI device
+ *
+ * The function is used to retrieve the PCI device driver for
+ * the indicated PCI device. Besides, we will increase the reference
+ * of the PCI device driver to prevent that being unloaded on
+ * the fly. Otherwise, kernel crash would be seen.
+ */
+static inline struct pci_driver *eeh_pcid_get(struct pci_dev *pdev)
+{
+ if (!pdev || !pdev->driver)
+ return NULL;
+
+ if (!try_module_get(pdev->driver->driver.owner))
+ return NULL;
+
+ return pdev->driver;
+}
+
+/**
+ * eeh_pcid_put - Dereference on the PCI device driver
+ * @pdev: PCI device
+ *
+ * The function is called to do dereference on the PCI device
+ * driver of the indicated PCI device.
+ */
+static inline void eeh_pcid_put(struct pci_dev *pdev)
+{
+ if (!pdev || !pdev->driver)
+ return;
+
+ module_put(pdev->driver->driver.owner);
+}
+
#if 0
static void print_device_node_tree(struct pci_dn *pdn, int dent)
{
@@ -126,18 +162,20 @@ static void eeh_enable_irq(struct pci_dev *dev)
static int eeh_report_error(struct pci_dev *dev, void *userdata)
{
enum pci_ers_result rc, *res = userdata;
- struct pci_driver *driver = dev->driver;
+ struct pci_driver *driver;
dev->error_state = pci_channel_io_frozen;
- if (!driver)
- return 0;
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
eeh_disable_irq(dev);
if (!driver->err_handler ||
- !driver->err_handler->error_detected)
+ !driver->err_handler->error_detected) {
+ eeh_pcid_put(dev);
return 0;
+ }
rc = driver->err_handler->error_detected(dev, pci_channel_io_frozen);
@@ -145,6 +183,7 @@ static int eeh_report_error(struct pci_dev *dev, void *userdata)
if (rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;
if (*res == PCI_ERS_RESULT_NONE) *res = rc;
+ eeh_pcid_put(dev);
return 0;
}
@@ -160,12 +199,16 @@ static int eeh_report_error(struct pci_dev *dev, void *userdata)
static int eeh_report_mmio_enabled(struct pci_dev *dev, void *userdata)
{
enum pci_ers_result rc, *res = userdata;
- struct pci_driver *driver = dev->driver;
+ struct pci_driver *driver;
+
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
- if (!driver ||
- !driver->err_handler ||
- !driver->err_handler->mmio_enabled)
+ if (!driver->err_handler ||
+ !driver->err_handler->mmio_enabled) {
+ eeh_pcid_put(dev);
return 0;
+ }
rc = driver->err_handler->mmio_enabled(dev);
@@ -173,6 +216,7 @@ static int eeh_report_mmio_enabled(struct pci_dev *dev, void *userdata)
if (rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;
if (*res == PCI_ERS_RESULT_NONE) *res = rc;
+ eeh_pcid_put(dev);
return 0;
}
@@ -189,18 +233,20 @@ static int eeh_report_mmio_enabled(struct pci_dev *dev, void *userdata)
static int eeh_report_reset(struct pci_dev *dev, void *userdata)
{
enum pci_ers_result rc, *res = userdata;
- struct pci_driver *driver = dev->driver;
-
- if (!driver)
- return 0;
+ struct pci_driver *driver;
dev->error_state = pci_channel_io_normal;
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
+
eeh_enable_irq(dev);
if (!driver->err_handler ||
- !driver->err_handler->slot_reset)
+ !driver->err_handler->slot_reset) {
+ eeh_pcid_put(dev);
return 0;
+ }
rc = driver->err_handler->slot_reset(dev);
if ((*res == PCI_ERS_RESULT_NONE) ||
@@ -208,6 +254,7 @@ static int eeh_report_reset(struct pci_dev *dev, void *userdata)
if (*res == PCI_ERS_RESULT_DISCONNECT &&
rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;
+ eeh_pcid_put(dev);
return 0;
}
@@ -222,21 +269,24 @@ static int eeh_report_reset(struct pci_dev *dev, void *userdata)
*/
static int eeh_report_resume(struct pci_dev *dev, void *userdata)
{
- struct pci_driver *driver = dev->driver;
+ struct pci_driver *driver;
dev->error_state = pci_channel_io_normal;
- if (!driver)
- return 0;
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
eeh_enable_irq(dev);
if (!driver->err_handler ||
- !driver->err_handler->resume)
+ !driver->err_handler->resume) {
+ eeh_pcid_put(dev);
return 0;
+ }
driver->err_handler->resume(dev);
+ eeh_pcid_put(dev);
return 0;
}
@@ -250,21 +300,24 @@ static int eeh_report_resume(struct pci_dev *dev, void *userdata)
*/
static int eeh_report_failure(struct pci_dev *dev, void *userdata)
{
- struct pci_driver *driver = dev->driver;
+ struct pci_driver *driver;
dev->error_state = pci_channel_io_perm_failure;
- if (!driver)
- return 0;
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
eeh_disable_irq(dev);
if (!driver->err_handler ||
- !driver->err_handler->error_detected)
+ !driver->err_handler->error_detected) {
+ eeh_pcid_put(dev);
return 0;
+ }
driver->err_handler->error_detected(dev, pci_channel_io_perm_failure);
+ eeh_pcid_put(dev);
return 0;
}
--
1.7.9.5
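A toy model of the pinning pattern (stand-alone, with a plain counter standing
in for try_module_get()/module_put(); none of this is the EEH code): the owner
of the callback table is pinned before the callback is looked up and released
on every exit path, so it cannot disappear mid-call.
#include <stdio.h>

struct driver {
        int refcount;
        void (*error_detected)(void);
};

static int driver_get(struct driver *drv)
{
        if (!drv || !drv->error_detected)
                return 0;
        drv->refcount++;                /* stand-in for try_module_get() */
        return 1;
}

static void driver_put(struct driver *drv)
{
        drv->refcount--;                /* stand-in for module_put() */
}

static void report_error(struct driver *drv)
{
        if (!driver_get(drv))
                return;
        drv->error_detected();          /* safe: the owner cannot go away here */
        driver_put(drv);
}

static void demo_error_detected(void) { printf("handler ran\n"); }

int main(void)
{
        struct driver drv = { 0, demo_error_detected };

        report_error(&drv);
        printf("refcount back to %d\n", drv.refcount);
        return 0;
}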
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Jaehoon Chung <[email protected]>
commit e32463b2f7801d6561887c01db37b34958504635 upstream.
blk_queue_congestion_threshold() is already called from blk_queue_make_request(),
so the call here is a duplicate and can be removed.
Signed-off-by: Jaehoon Chung <[email protected]>
Signed-off-by: Kyungmin Park <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
[ herton: only a cleanup, but allows clean application of commit 749fefe ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
block/blk-core.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 93eb3e4..ad39394 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -703,8 +703,6 @@ blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
if (elevator_init(q, NULL))
return NULL;
- blk_queue_congestion_threshold(q);
-
/* all done, end the initial bypass */
blk_queue_bypass_end(q);
return q;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Roland Dreier <[email protected]>
commit e4b11b89f9039ca97b2ed1b6efeb6749fbdeb252 upstream.
The qla2xxx firmware actually expects the task management response
code in a CTIO IOCB with SCSI status mode 1 to be in little-endian
byte order, ie the response code should be the first byte in the
sense_data[] array. The old code erroneously byte-swapped the
response code, which puts it in the wrong place on the wire and leads
to initiators thinking every task management request succeeds (since
they see 0 in the byte where they look for the response code).
Signed-off-by: Roland Dreier <[email protected]>
Cc: Chad Dupuis <[email protected]>
Cc: Arun Easi <[email protected]>
Acked-by: Saurav Kashyap <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/scsi/qla2xxx/qla_target.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
index 77759c7..5a71e84 100644
--- a/drivers/scsi/qla2xxx/qla_target.c
+++ b/drivers/scsi/qla2xxx/qla_target.c
@@ -1403,7 +1403,7 @@ static void qlt_24xx_send_task_mgmt_ctio(struct scsi_qla_host *ha,
ctio->u.status1.scsi_status =
__constant_cpu_to_le16(SS_RESPONSE_INFO_LEN_VALID);
ctio->u.status1.response_len = __constant_cpu_to_le16(8);
- ((uint32_t *)ctio->u.status1.sense_data)[0] = cpu_to_be32(resp_code);
+ ctio->u.status1.sense_data[0] = resp_code;
qla2x00_start_iocbs(ha, ha->req);
}
--
1.7.9.5
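On a little-endian host the effect is easy to reproduce in a stand-alone sketch
(the response code value below is just an example): byte-swapping the code
before a 32-bit store pushes it to the far end of the word, so byte 0, the byte
the initiator inspects, reads as zero.
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        uint8_t sense_data[8] = { 0 };
        uint8_t resp_code = 0x05;       /* example response code */

        uint32_t swapped = htonl(resp_code);
        memcpy(sense_data, &swapped, sizeof(swapped));  /* the old, buggy store */
        printf("byte-swapped store: sense_data[0] = 0x%02x\n", sense_data[0]);

        memset(sense_data, 0, sizeof(sense_data));
        sense_data[0] = resp_code;                      /* the fixed store */
        printf("plain byte store:   sense_data[0] = 0x%02x\n", sense_data[0]);
        return 0;
}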
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Tejun Heo <[email protected]>
commit 60ea8226cbd5c8301f9a39edc574ddabcb8150e0 upstream.
A queue newly allocated with blk_alloc_queue_node() has only
QUEUE_FLAG_BYPASS set. For request-based drivers,
blk_init_allocated_queue() is called and q->queue_flags is overwritten
with QUEUE_FLAG_DEFAULT which doesn't include BYPASS even though the
initial bypass is still in effect.
In blk_init_allocated_queue(), OR QUEUE_FLAG_DEFAULT into q->queue_flags
instead of overwriting it.
Signed-off-by: Tejun Heo <[email protected]>
Acked-by: Vivek Goyal <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
block/blk-core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 96335a7..e17ce4b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -686,7 +686,7 @@ blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
q->request_fn = rfn;
q->prep_rq_fn = NULL;
q->unprep_rq_fn = NULL;
- q->queue_flags = QUEUE_FLAG_DEFAULT;
+ q->queue_flags |= QUEUE_FLAG_DEFAULT;
/* Override internal queue lock with supplied lock pointer */
if (lock)
--
1.7.9.5
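The one-character difference is easy to see in isolation (the flag names below
are made up): OR-ing the defaults in preserves a bit that was set when the
object was allocated, while a plain assignment silently drops it.
#include <stdio.h>

#define FLAG_BYPASS   (1u << 0)   /* set when the object is allocated */
#define FLAG_DEFAULT  (1u << 1)   /* what later initialization wants to add */

int main(void)
{
        unsigned int assigned = FLAG_BYPASS;
        unsigned int ored     = FLAG_BYPASS;

        assigned = FLAG_DEFAULT;        /* old behaviour: BYPASS is lost */
        ored    |= FLAG_DEFAULT;        /* fixed behaviour: BYPASS survives */

        printf("assigned keeps bypass: %s\n",
               (assigned & FLAG_BYPASS) ? "yes" : "no");
        printf("OR-ed keeps bypass:    %s\n",
               (ored & FLAG_BYPASS) ? "yes" : "no");
        return 0;
}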
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel Drake <[email protected]>
commit 012a1211845eab69a5488d59eb87d24cc518c627 upstream.
As detailed in the thread titled "viafb PLL/clock tweaking causes XO-1.5
instability," enabling or disabling the IGA1/IGA2 clocks causes occasional
stability problems during suspend/resume cycles on this platform.
This is rather odd, as the documentation suggests that clocks have two
states (on/off) and the default (stable) configuration is configured to
enable the clock only when it is needed. However, explicitly enabling *or*
disabling the clock triggers this system instability, suggesting that there
is a 3rd state at play here.
Leaving the clock enable/disable registers alone solves this problem.
This fixes spurious reboots during suspend/resume behaviour introduced by
commit b692a63a.
Signed-off-by: Daniel Drake <[email protected]>
Signed-off-by: Florian Tobias Schandinat <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/video/via/via_clock.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/video/via/via_clock.c b/drivers/video/via/via_clock.c
index af8f26b..db1e392 100644
--- a/drivers/video/via/via_clock.c
+++ b/drivers/video/via/via_clock.c
@@ -25,6 +25,7 @@
#include <linux/kernel.h>
#include <linux/via-core.h>
+#include <asm/olpc.h>
#include "via_clock.h"
#include "global.h"
#include "debug.h"
@@ -289,6 +290,10 @@ static void dummy_set_pll(struct via_pll_config config)
printk(KERN_INFO "Using undocumented set PLL.\n%s", via_slap);
}
+static void noop_set_clock_state(u8 state)
+{
+}
+
void via_clock_init(struct via_clock *clock, int gfx_chip)
{
switch (gfx_chip) {
@@ -346,4 +351,18 @@ void via_clock_init(struct via_clock *clock, int gfx_chip)
break;
}
+
+ if (machine_is_olpc()) {
+ /* The OLPC XO-1.5 cannot suspend/resume reliably if the
+ * IGA1/IGA2 clocks are set as on or off (memory rot
+ * occasionally happens during suspend under such
+ * configurations).
+ *
+ * The only known stable scenario is to leave this bits as-is,
+ * which in their default states are documented to enable the
+ * clock only when it is needed.
+ */
+ clock->set_primary_clock_state = noop_set_clock_state;
+ clock->set_secondary_clock_state = noop_set_clock_state;
+ }
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Jani Nikula <[email protected]>
commit 0c96c65b48fba3ffe9822a554cbc0cd610765cd5 upstream.
The dithering introduced in
commit 3b5c78a35cf7511c15e09a9b0ffab290a42d9bcf
Author: Adam Jackson <[email protected]>
Date: Tue Dec 13 15:41:00 2011 -0800
drm/i915/dp: Dither down to 6bpc if it makes the mode fit
stores the INTEL_MODE_DP_FORCE_6BPC flag in the private_flags of the
adjusted mode, while i9xx_crtc_mode_set() and ironlake_crtc_mode_set() use
the original mode, without the flag, so it would never have any
effect. However, the BPC was clamped by VBT settings, making things work by
coincidence, until that part was removed in
commit 4344b813f105a19f793f1fd93ad775b784648b95
Author: Daniel Vetter <[email protected]>
Date: Fri Aug 10 11:10:20 2012 +0200
Use adjusted_mode instead of mode when checking for
INTEL_MODE_DP_FORCE_6BPC to make the flag have effect.
v2: Don't forget to fix this in i9xx_crtc_mode_set() also, pointed out by
Daniel both before and after sending the first patch.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=47621
CC: Adam Jackson <[email protected]>
Signed-off-by: Jani Nikula <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/gpu/drm/i915/intel_display.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 6f0d039..0865f27 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -4087,7 +4087,7 @@ static int i9xx_crtc_mode_set(struct drm_crtc *crtc,
/* default to 8bpc */
pipeconf &= ~(PIPECONF_BPP_MASK | PIPECONF_DITHER_EN);
if (is_dp) {
- if (mode->private_flags & INTEL_MODE_DP_FORCE_6BPC) {
+ if (adjusted_mode->private_flags & INTEL_MODE_DP_FORCE_6BPC) {
pipeconf |= PIPECONF_BPP_6 |
PIPECONF_DITHER_EN |
PIPECONF_DITHER_TYPE_SP;
@@ -4464,7 +4464,7 @@ static int ironlake_crtc_mode_set(struct drm_crtc *crtc,
/* determine panel color depth */
temp = I915_READ(PIPECONF(pipe));
temp &= ~PIPE_BPC_MASK;
- dither = intel_choose_pipe_bpp_dither(crtc, &pipe_bpp, mode);
+ dither = intel_choose_pipe_bpp_dither(crtc, &pipe_bpp, adjusted_mode);
switch (pipe_bpp) {
case 18:
temp |= PIPE_6BPC;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Matthew Garrett <[email protected]>
commit c99af3752bb52ba3aece5315279a57a477edfaf1 upstream.
Cloudlinux have a product called lve that includes a kernel module. This
was previously GPLed but is now under a proprietary license, but the
module continues to declare MODULE_LICENSE("GPL") and makes use of some
EXPORT_SYMBOL_GPL symbols. Forcibly taint it in order to avoid this.
Signed-off-by: Matthew Garrett <[email protected]>
Cc: Alex Lyashkov <[email protected]>
Signed-off-by: Rusty Russell <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
kernel/module.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/module.c b/kernel/module.c
index 4edbd9c..9ad9ee9 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -2730,6 +2730,10 @@ static int check_module_license_and_versions(struct module *mod)
if (strcmp(mod->name, "driverloader") == 0)
add_taint_module(mod, TAINT_PROPRIETARY_MODULE);
+ /* lve claims to be GPL but upstream won't provide source */
+ if (strcmp(mod->name, "lve") == 0)
+ add_taint_module(mod, TAINT_PROPRIETARY_MODULE);
+
#ifdef CONFIG_MODVERSIONS
if ((mod->num_syms && !mod->crcs)
|| (mod->num_gpl_syms && !mod->gpl_crcs)
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: "J. Bruce Fields" <[email protected]>
commit 68eb35081e297b37db49d854cda144c6a3397699 upstream.
I added cr_flavor to the data compared in same_creds without any
justification, in d5497fc693a446ce9100fcf4117c3f795ddfd0d2 "nfsd4: move
rq_flavor into svc_cred".
Recent client changes then started making
mount -osec=krb5 server:/export /mnt/
echo "hello" >/mnt/TMP
umount /mnt/
mount -osec=krb5i server:/export /mnt/
echo "hello" >/mnt/TMP
to fail due to a clid_inuse on the second open.
Mounting sequentially like this with different flavors probably isn't
that common outside artificial tests. Also, the real bug here may be
that the server isn't just destroying the former clientid in this case
(because it isn't good enough at recognizing when the old state is
gone). But it prompted some discussion and a look back at the spec, and
I think the check was probably wrong. Fix and document.
Signed-off-by: J. Bruce Fields <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/nfsd/nfs4state.c | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index be324aa..02ac082 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1215,10 +1215,26 @@ static bool groups_equal(struct group_info *g1, struct group_info *g2)
return true;
}
+/*
+ * RFC 3530 language requires clid_inuse be returned when the
+ * "principal" associated with a requests differs from that previously
+ * used. We use uid, gid's, and gss principal string as our best
+ * approximation. We also don't want to allow non-gss use of a client
+ * established using gss: in theory cr_principal should catch that
+ * change, but in practice cr_principal can be null even in the gss case
+ * since gssd doesn't always pass down a principal string.
+ */
+static bool is_gss_cred(struct svc_cred *cr)
+{
+ /* Is cr_flavor one of the gss "pseudoflavors"?: */
+ return (cr->cr_flavor > RPC_AUTH_MAXFLAVOR);
+}
+
+
static bool
same_creds(struct svc_cred *cr1, struct svc_cred *cr2)
{
- if ((cr1->cr_flavor != cr2->cr_flavor)
+ if ((is_gss_cred(cr1) != is_gss_cred(cr2))
|| (cr1->cr_uid != cr2->cr_uid)
|| (cr1->cr_gid != cr2->cr_gid)
|| !groups_equal(cr1->cr_group_info, cr2->cr_group_info))
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Stanislav Kinsbursky <[email protected]>
commit 303a7ce92064c285a04c870f2dc0192fdb2968cb upstream.
Taking the hostname from the uts namespace is not safe, because this could be
performed during an umount operation on child reaper death, and in that case
current->nsproxy is already NULL.
Signed-off-by: Stanislav Kinsbursky <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/lockd/mon.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c
index 7ef14b3..606a8dd 100644
--- a/fs/lockd/mon.c
+++ b/fs/lockd/mon.c
@@ -40,6 +40,7 @@ struct nsm_args {
u32 proc;
char *mon_name;
+ char *nodename;
};
struct nsm_res {
@@ -94,6 +95,7 @@ static int nsm_mon_unmon(struct nsm_handle *nsm, u32 proc, struct nsm_res *res,
.vers = 3,
.proc = NLMPROC_NSM_NOTIFY,
.mon_name = nsm->sm_mon_name,
+ .nodename = utsname()->nodename,
};
struct rpc_message msg = {
.rpc_argp = &args,
@@ -430,7 +432,7 @@ static void encode_my_id(struct xdr_stream *xdr, const struct nsm_args *argp)
{
__be32 *p;
- encode_nsm_string(xdr, utsname()->nodename);
+ encode_nsm_string(xdr, argp->nodename);
p = xdr_reserve_space(xdr, 4 + 4 + 4);
*p++ = cpu_to_be32(argp->prog);
*p++ = cpu_to_be32(argp->vers);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger <[email protected]>
commit b32f4c7ed85c5cee2a21a55c9f59ebc9d57a2463 upstream.
This patch re-adds the ability to optionally run in buffered FILEIO mode
(eg: w/o O_DSYNC) for device backends in order to once again use the
Linux buffered cache as a write-back storage mechanism.
This logic was originally dropped with mainline v3.5-rc commit:
commit a4dff3043c231d57f982af635c9d2192ee40e5ae
Author: Nicholas Bellinger <[email protected]>
Date: Wed May 30 16:25:41 2012 -0700
target/file: Use O_DSYNC by default for FILEIO backends
The difference with this patch is that fd_create_virtdevice() now
forces the explicit setting of emulate_write_cache=1 when buffered FILEIO
operation has been enabled.
(v2: Switch to FDBD_HAS_BUFFERED_IO_WCE + add more detailed
comment as requested by hch)
Reported-by: Ferry <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/target/target_core_file.c | 41 ++++++++++++++++++++++++++++++++++---
drivers/target/target_core_file.h | 1 +
2 files changed, 39 insertions(+), 3 deletions(-)
diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index 9f99d04..c0d6e6d 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -138,6 +138,19 @@ static struct se_device *fd_create_virtdevice(
* of pure timestamp updates.
*/
flags = O_RDWR | O_CREAT | O_LARGEFILE | O_DSYNC;
+ /*
+ * Optionally allow fd_buffered_io=1 to be enabled for people
+ * who want use the fs buffer cache as an WriteCache mechanism.
+ *
+ * This means that in event of a hard failure, there is a risk
+ * of silent data-loss if the SCSI client has *not* performed a
+ * forced unit access (FUA) write, or issued SYNCHRONIZE_CACHE
+ * to write-out the entire device cache.
+ */
+ if (fd_dev->fbd_flags & FDBD_HAS_BUFFERED_IO_WCE) {
+ pr_debug("FILEIO: Disabling O_DSYNC, using buffered FILEIO\n");
+ flags &= ~O_DSYNC;
+ }
file = filp_open(dev_p, flags, 0600);
if (IS_ERR(file)) {
@@ -205,6 +218,12 @@ static struct se_device *fd_create_virtdevice(
if (!dev)
goto fail;
+ if (fd_dev->fbd_flags & FDBD_HAS_BUFFERED_IO_WCE) {
+ pr_debug("FILEIO: Forcing setting of emulate_write_cache=1"
+ " with FDBD_HAS_BUFFERED_IO_WCE\n");
+ dev->se_sub_dev->se_dev_attrib.emulate_write_cache = 1;
+ }
+
fd_dev->fd_dev_id = fd_host->fd_host_dev_id_count++;
fd_dev->fd_queue_depth = dev->queue_depth;
@@ -422,6 +441,7 @@ enum {
static match_table_t tokens = {
{Opt_fd_dev_name, "fd_dev_name=%s"},
{Opt_fd_dev_size, "fd_dev_size=%s"},
+ {Opt_fd_buffered_io, "fd_buffered_io=%d"},
{Opt_err, NULL}
};
@@ -433,7 +453,7 @@ static ssize_t fd_set_configfs_dev_params(
struct fd_dev *fd_dev = se_dev->se_dev_su_ptr;
char *orig, *ptr, *arg_p, *opts;
substring_t args[MAX_OPT_ARGS];
- int ret = 0, token;
+ int ret = 0, arg, token;
opts = kstrdup(page, GFP_KERNEL);
if (!opts)
@@ -477,6 +497,19 @@ static ssize_t fd_set_configfs_dev_params(
" bytes\n", fd_dev->fd_dev_size);
fd_dev->fbd_flags |= FBDF_HAS_SIZE;
break;
+ case Opt_fd_buffered_io:
+ match_int(args, &arg);
+ if (arg != 1) {
+ pr_err("bogus fd_buffered_io=%d value\n", arg);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ pr_debug("FILEIO: Using buffered I/O"
+ " operations for struct fd_dev\n");
+
+ fd_dev->fbd_flags |= FDBD_HAS_BUFFERED_IO_WCE;
+ break;
default:
break;
}
@@ -508,8 +541,10 @@ static ssize_t fd_show_configfs_dev_params(
ssize_t bl = 0;
bl = sprintf(b + bl, "TCM FILEIO ID: %u", fd_dev->fd_dev_id);
- bl += sprintf(b + bl, " File: %s Size: %llu Mode: O_DSYNC\n",
- fd_dev->fd_dev_name, fd_dev->fd_dev_size);
+ bl += sprintf(b + bl, " File: %s Size: %llu Mode: %s\n",
+ fd_dev->fd_dev_name, fd_dev->fd_dev_size,
+ (fd_dev->fbd_flags & FDBD_HAS_BUFFERED_IO_WCE) ?
+ "Buffered-WCE" : "O_DSYNC");
return bl;
}
diff --git a/drivers/target/target_core_file.h b/drivers/target/target_core_file.h
index 70ce7fd..876ae53 100644
--- a/drivers/target/target_core_file.h
+++ b/drivers/target/target_core_file.h
@@ -14,6 +14,7 @@
#define FBDF_HAS_PATH 0x01
#define FBDF_HAS_SIZE 0x02
+#define FDBD_HAS_BUFFERED_IO_WCE 0x04
struct fd_dev {
u32 fbd_flags;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Brian Norris <[email protected]>
commit bf7a01bf7987b63b121d572b240c132ec44129c4 upstream.
The NAND_CHIPOPTIONS_MSK has limited utility and is causing real bugs. It
silently masks off at least one flag that might be set by the driver
(NAND_NO_SUBPAGE_WRITE). This breaks the GPMI NAND driver and possibly
others.
Really, as long as driver writers exercise a small amount of care with
NAND_* options, this mask is not necessary at all; it was only here to
prevent certain options from accidentally being set by the driver. But the
original thought turns out to be a bad idea occasionally. Thus, kill it.
Note, this patch fixes some major gpmi-nand breakage.
Signed-off-by: Brian Norris <[email protected]>
Tested-by: Huang Shijie <[email protected]>
Signed-off-by: Artem Bityutskiy <[email protected]>
Signed-off-by: David Woodhouse <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/mtd/nand/nand_base.c | 8 +++-----
include/linux/mtd/nand.h | 3 ---
2 files changed, 3 insertions(+), 8 deletions(-)
diff --git a/drivers/mtd/nand/nand_base.c b/drivers/mtd/nand/nand_base.c
index a11253a..c429abd 100644
--- a/drivers/mtd/nand/nand_base.c
+++ b/drivers/mtd/nand/nand_base.c
@@ -2914,8 +2914,7 @@ static int nand_flash_detect_onfi(struct mtd_info *mtd, struct nand_chip *chip,
if (le16_to_cpu(p->features) & 1)
*busw = NAND_BUSWIDTH_16;
- chip->options &= ~NAND_CHIPOPTIONS_MSK;
- chip->options |= NAND_NO_READRDY & NAND_CHIPOPTIONS_MSK;
+ chip->options |= NAND_NO_READRDY;
pr_info("ONFI flash detected\n");
return 1;
@@ -3080,9 +3079,8 @@ static struct nand_flash_dev *nand_get_flash_type(struct mtd_info *mtd,
mtd->erasesize <<= ((id_data[3] & 0x03) << 1);
}
}
- /* Get chip options, preserve non chip based options */
- chip->options &= ~NAND_CHIPOPTIONS_MSK;
- chip->options |= type->options & NAND_CHIPOPTIONS_MSK;
+ /* Get chip options */
+ chip->options |= type->options;
/*
* Check if chip is not a Samsung device. Do not clear the
diff --git a/include/linux/mtd/nand.h b/include/linux/mtd/nand.h
index 57977c6..e5cf2c8 100644
--- a/include/linux/mtd/nand.h
+++ b/include/linux/mtd/nand.h
@@ -212,9 +212,6 @@ typedef enum {
#define NAND_SUBPAGE_READ(chip) ((chip->ecc.mode == NAND_ECC_SOFT) \
&& (chip->page_shift > 9))
-/* Mask to zero out the chip options, which come from the id table */
-#define NAND_CHIPOPTIONS_MSK 0x0000ffff
-
/* Non chip related options */
/* This option skips the bbt scan during initialization. */
#define NAND_SKIP_BBTSCAN 0x00010000
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Christoph Hellwig <[email protected]>
commit 904753da183566c71211d23c169a80184648c121 upstream.
Fix a potential multiple spin-unlock -> deadlock scenario during the
overflow check within iscsit_build_sendtargets_resp() as found by
sparse static checking.
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/target/iscsi/iscsi_target.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index d3114d1..6d1d906 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -3269,7 +3269,6 @@ static int iscsit_build_sendtargets_response(struct iscsi_cmd *cmd)
len += 1;
if ((len + payload_len) > buffer_len) {
- spin_unlock(&tiqn->tiqn_tpg_lock);
end_of_buf = 1;
goto eob;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Fabio Estevam <[email protected]>
commit 0eb5a35801df3c438ce3fc91310a415ea4452c00 upstream.
Do the same as commit a03a202e95fd ("dmaengine: failure to get a
specific DMA channel is not critical") to get rid of the following
messages during kernel boot:
dmaengine_get: failed to get dma1chan0: (-22)
dmaengine_get: failed to get dma1chan1: (-22)
dmaengine_get: failed to get dma1chan2: (-22)
dmaengine_get: failed to get dma1chan3: (-22)
..
Signed-off-by: Fabio Estevam <[email protected]>
Cc: Vinod Koul <[email protected]>
Cc: Dan Williams <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
[ herton: adjust context ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/dma/dmaengine.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 2397f6f..6c87d67 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -578,7 +578,7 @@ void dmaengine_get(void)
list_del_rcu(&device->global_node);
break;
} else if (err)
- pr_err("%s: failed to get %s: (%d)\n",
+ pr_debug("%s: failed to get %s: (%d)\n",
__func__, dma_chan_name(chan), err);
}
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Jaehoon Chung <[email protected]>
commit 5feb54a1ab91a237e247c013b8c4fb100ea347b1 upstream.
We can use up to four bus clocks, but on module removal we didn't
disable the fourth bus clock.
Signed-off-by: Jaehoon Chung <[email protected]>
Signed-off-by: Kyungmin Park <[email protected]>
Signed-off-by: Chris Ball <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/mmc/host/sdhci-s3c.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/mmc/host/sdhci-s3c.c b/drivers/mmc/host/sdhci-s3c.c
index a50c205..02b7a4a 100644
--- a/drivers/mmc/host/sdhci-s3c.c
+++ b/drivers/mmc/host/sdhci-s3c.c
@@ -656,7 +656,7 @@ static int __devexit sdhci_s3c_remove(struct platform_device *pdev)
pm_runtime_disable(&pdev->dev);
- for (ptr = 0; ptr < 3; ptr++) {
+ for (ptr = 0; ptr < MAX_BUS_CLK; ptr++) {
if (sc->clk_bus[ptr]) {
clk_disable(sc->clk_bus[ptr]);
clk_put(sc->clk_bus[ptr]);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Dylan Reid <[email protected]>
commit d17344b3547669f5b6ee4fda993d03737a141bd6 upstream.
There was a race condition when the system suspends while hda_power_work
is running in the work queue. If system suspend (snd_hda_suspend)
happens after the work queue releases power_lock but before it calls
hda_call_codec_suspend, codec_suspend runs with power_on=0, causing the
codec to power up for register reads, and hanging when it calls
cancel_delayed_work_sync from the running work queue.
The call chain from the work queue will look like this:
hda_power_work <<- power_on = 1, unlock, then power_on cleared by suspend
hda_call_codec_suspend
hda_set_power_state
snd_hda_codec_read
codec_exec_verb
snd_hda_power_up
snd_hda_power_save
__snd_hda_power_up
cancel_delayed_work_sync <<-- cancelling executing wq
Fix this by waiting for the work queue to finish before starting suspend
if suspend is not happening on the work queue.
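The shape of the fix, as a minimal userspace analogue (pthreads stand in
for the work queue; hda_call_codec_suspend and hda_power_work are the
real names from the patch, everything else is made up): the shared
suspend path waits synchronously for the worker only when it is not
itself running on that worker.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_t power_worker;

/* analogue of hda_call_codec_suspend(codec, in_wq) */
static void codec_suspend(bool in_wq)
{
        /* only wait for the worker when we are not the worker ourselves */
        if (!in_wq)
                pthread_join(power_worker, NULL);
        printf("codec suspended (in_wq=%d)\n", in_wq);
}

/* analogue of hda_power_work() */
static void *power_work(void *unused)
{
        (void)unused;
        codec_suspend(true);
        return NULL;
}

int main(void)
{
        pthread_create(&power_worker, NULL, power_work, NULL);
        /* analogue of snd_hda_suspend(): safe to wait for the worker here */
        codec_suspend(false);
        return 0;
}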
Signed-off-by: Dylan Reid <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
[ herton: backported to 3.5:
* hda_call_codec_suspend doesn't return state
* adjusted context ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
sound/pci/hda/hda_codec.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
index f1c6164..b1b6238 100644
--- a/sound/pci/hda/hda_codec.c
+++ b/sound/pci/hda/hda_codec.c
@@ -3542,7 +3542,7 @@ static inline void hda_exec_init_verbs(struct hda_codec *codec) {}
/*
* call suspend and power-down; used both from PM and power-save
*/
-static void hda_call_codec_suspend(struct hda_codec *codec)
+static void hda_call_codec_suspend(struct hda_codec *codec, bool in_wq)
{
if (codec->patch_ops.suspend)
codec->patch_ops.suspend(codec, PMSG_SUSPEND);
@@ -3551,7 +3551,9 @@ static void hda_call_codec_suspend(struct hda_codec *codec)
codec->afg ? codec->afg : codec->mfg,
AC_PWRST_D3);
#ifdef CONFIG_SND_HDA_POWER_SAVE
- cancel_delayed_work(&codec->power_work);
+ /* Cancel delayed work if we aren't currently running from it. */
+ if (!in_wq)
+ cancel_delayed_work_sync(&codec->power_work);
spin_lock(&codec->power_lock);
snd_hda_update_power_acct(codec);
trace_hda_power_down(codec);
@@ -4372,7 +4374,7 @@ static void hda_power_work(struct work_struct *work)
}
spin_unlock(&codec->power_lock);
- hda_call_codec_suspend(codec);
+ hda_call_codec_suspend(codec, true);
if (bus->ops.pm_notify)
bus->ops.pm_notify(bus);
}
@@ -5038,7 +5040,7 @@ int snd_hda_suspend(struct hda_bus *bus)
list_for_each_entry(codec, &bus->codec_list, list) {
if (hda_codec_is_power_on(codec))
- hda_call_codec_suspend(codec);
+ hda_call_codec_suspend(codec, false);
}
return 0;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Colin Cross <[email protected]>
commit 9d7d6e363b06934221b81a859d509844c97380df upstream.
read_persistent_clock uses global state; use a spinlock to ensure that
non-atomic updates to it don't overlap and cause time to move backwards.
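Roughly the same pattern as a userspace sketch, with a pthread mutex
standing in for the spinlock and made-up counter values: the read of the
previous sample and the update of the accumulated time have to happen as
one unit, otherwise two concurrent readers can interleave and produce a
non-monotonic result.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t last_counter;   /* previous raw counter sample */
static uint64_t total_ticks;    /* monotonically accumulated ticks */

static uint64_t read_persistent(uint32_t counter_now)
{
        uint64_t snapshot;

        pthread_mutex_lock(&lock);
        /* unsigned 32-bit subtraction stays correct across a counter wrap */
        total_ticks += (uint32_t)(counter_now - last_counter);
        last_counter = counter_now;
        snapshot = total_ticks;
        pthread_mutex_unlock(&lock);

        return snapshot;
}

int main(void)
{
        printf("%llu\n", (unsigned long long)read_persistent(100));
        printf("%llu\n", (unsigned long long)read_persistent(250));
        printf("%llu\n", (unsigned long long)read_persistent(400));
        return 0;
}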
Signed-off-by: Colin Cross <[email protected]>
Signed-off-by: R Sricharan <[email protected]>
Signed-off-by: Tony Lindgren <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/arm/plat-omap/counter_32k.c | 21 ++++++++++++++-------
1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/arch/arm/plat-omap/counter_32k.c b/arch/arm/plat-omap/counter_32k.c
index 2132c4f..721eae5 100644
--- a/arch/arm/plat-omap/counter_32k.c
+++ b/arch/arm/plat-omap/counter_32k.c
@@ -52,22 +52,29 @@ static u32 notrace omap_32k_read_sched_clock(void)
* nsecs and adds to a monotonically increasing timespec.
*/
static struct timespec persistent_ts;
-static cycles_t cycles, last_cycles;
+static cycles_t cycles;
static unsigned int persistent_mult, persistent_shift;
+static DEFINE_SPINLOCK(read_persistent_clock_lock);
+
static void omap_read_persistent_clock(struct timespec *ts)
{
unsigned long long nsecs;
- cycles_t delta;
- struct timespec *tsp = &persistent_ts;
+ cycles_t last_cycles;
+ unsigned long flags;
+
+ spin_lock_irqsave(&read_persistent_clock_lock, flags);
last_cycles = cycles;
cycles = sync32k_cnt_reg ? __raw_readl(sync32k_cnt_reg) : 0;
- delta = cycles - last_cycles;
- nsecs = clocksource_cyc2ns(delta, persistent_mult, persistent_shift);
+ nsecs = clocksource_cyc2ns(cycles - last_cycles,
+ persistent_mult, persistent_shift);
+
+ timespec_add_ns(&persistent_ts, nsecs);
+
+ *ts = persistent_ts;
- timespec_add_ns(tsp, nsecs);
- *ts = *tsp;
+ spin_unlock_irqrestore(&read_persistent_clock_lock, flags);
}
/**
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Feng Tang <[email protected]>
commit 67bfa9b60bd689601554526d144b21d529f78a09 upstream.
Enlarging the GPE storm threshold back to 20 makes that laptop's EC
work fine in interrupt mode instead of polling mode.
https://bugzilla.kernel.org/show_bug.cgi?id=45151
Reported-and-Tested-by: Francesco <[email protected]>
Signed-off-by: Feng Tang <[email protected]>
Signed-off-by: Len Brown <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/acpi/ec.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
index 615264c..a51df96 100644
--- a/drivers/acpi/ec.c
+++ b/drivers/acpi/ec.c
@@ -930,6 +930,17 @@ static int ec_flag_msi(const struct dmi_system_id *id)
return 0;
}
+/*
+ * Clevo M720 notebook actually works ok with IRQ mode, if we lifted
+ * the GPE storm threshold back to 20
+ */
+static int ec_enlarge_storm_threshold(const struct dmi_system_id *id)
+{
+ pr_debug("Setting the EC GPE storm threshold to 20\n");
+ ec_storm_threshold = 20;
+ return 0;
+}
+
static struct dmi_system_id __initdata ec_dmi_table[] = {
{
ec_skip_dsdt_scan, "Compal JFL92", {
@@ -961,10 +972,13 @@ static struct dmi_system_id __initdata ec_dmi_table[] = {
{
ec_validate_ecdt, "ASUS hardware", {
DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer Inc.") }, NULL},
+ {
+ ec_enlarge_storm_threshold, "CLEVO hardware", {
+ DMI_MATCH(DMI_SYS_VENDOR, "CLEVO Co."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "M720T/M730T"),}, NULL},
{},
};
-
int __init acpi_ec_ecdt_probe(void)
{
acpi_status status;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Lukas Czerner <[email protected]>
commit bc977749e967daa56de1922cf4cb38525631c51c upstream.
Currently it is possible to unmap one more block than the user requested,
due to an off-by-one error in unmap_region(). This is probably because
the end variable, despite its name, actually points to the last block to
unmap + 1, while the condition handles it as the last block of the
region to unmap.
The bug was not previously spotted probably due to the fact that the
region was not zeroed, which has changed with commit
be1dd78de5686c062bb3103f9e86d444a10ed783. With that commit we were able
to corrupt the ext4 file system on 256M scsi_debug device with LBPRZ
enabled using fstrim.
Since the 'end' semantics are the same in several functions there, this
commit just fixes the condition to use the 'end' variable correctly in
that context.
Reported-by: Paolo Bonzini <[email protected]>
Signed-off-by: Lukas Czerner <[email protected]>
Reviewed-by: Martin K. Petersen <[email protected]>
Acked-by: Douglas Gilbert <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/scsi/scsi_debug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 182d5a5..f4cc413 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -2054,7 +2054,7 @@ static void unmap_region(sector_t lba, unsigned int len)
block = lba + alignment;
rem = do_div(block, granularity);
- if (rem == 0 && lba + granularity <= end && block < map_size) {
+ if (rem == 0 && lba + granularity < end && block < map_size) {
clear_bit(block, map_storep);
if (scsi_debug_lbprz)
memset(fake_storep +
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: "K. Y. Srinivasan" <[email protected]>
commit 5c1b10ab7f93d24f29b5630286e323d1c5802d5c upstream.
Properly account for I/O in transit before returning from the RESET call.
In the absence of this patch, we could have a situation where the host may
respond to a command that was issued prior to the issuance of the RESET
command at some arbitrary time after responding to the RESET command.
Currently, the host does not do anything with the RESET command.
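The underlying idiom, as a userspace sketch (a condition variable stands
in for the driver's wait, and all names are made up): the reset path
blocks until the count of packets still in flight drops to zero before
reporting SUCCESS.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t drained = PTHREAD_COND_INITIALIZER;
static int outstanding = 3;     /* packets still owned by the "host" */

static void complete_one(void)
{
        pthread_mutex_lock(&lock);
        if (--outstanding == 0)
                pthread_cond_signal(&drained);
        pthread_mutex_unlock(&lock);
}

static void *host_responds(void *unused)
{
        (void)unused;
        complete_one();
        complete_one();
        complete_one();
        return NULL;
}

static void wait_to_drain(void)
{
        pthread_mutex_lock(&lock);
        while (outstanding > 0)
                pthread_cond_wait(&drained, &lock);
        pthread_mutex_unlock(&lock);
}

int main(void)
{
        pthread_t host;

        pthread_create(&host, NULL, host_responds, NULL);
        wait_to_drain();        /* don't report success while I/O is in transit */
        printf("reset complete, no I/O in transit\n");
        pthread_join(host, NULL);
        return 0;
}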
Signed-off-by: K. Y. Srinivasan <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/scsi/storvsc_drv.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index 528d52b..0144078 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1221,7 +1221,12 @@ static int storvsc_host_reset_handler(struct scsi_cmnd *scmnd)
/*
* At this point, all outstanding requests in the adapter
* should have been flushed out and return to us
+ * There is a potential race here where the host may be in
+ * the process of responding when we return from here.
+ * Just wait for all in-transit packets to be accounted for
+ * before we return from here.
*/
+ storvsc_wait_to_drain(stor_device);
return SUCCESS;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Holler <[email protected]>
commit b8c4321f3d194469007f5f5f2b34ec278c264a04 upstream.
Lines 0 and 1 were both written to line 0 (on the display) and all subsequent
lines had an offset of -1. The result was that the last line on the display
was never overwritten by writes to /dev/fbN.
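A tiny worked example (the line length is made up) of how the stray
"- 1" shifted the refresh window one line up, so the last written line
never reached the device:

#include <stdio.h>

int main(void)
{
        unsigned int line_length = 4096;        /* hypothetical fix.line_length */
        unsigned int offset = 5 * line_length;  /* write starts at line 5 */

        int old_start = (int)(offset / line_length) - 1;  /* 4: one line too early */
        int new_start = (int)(offset / line_length);      /* 5: matches the write */

        if (old_start < 0)
                old_start = 0;

        printf("old start=%d, new start=%d\n", old_start, new_start);
        return 0;
}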
Signed-off-by: Alexander Holler <[email protected]>
Acked-by: Bernie Thompson <[email protected]>
Signed-off-by: Florian Tobias Schandinat <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/video/udlfb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/video/udlfb.c b/drivers/video/udlfb.c
index 8af6414..38fcfff 100644
--- a/drivers/video/udlfb.c
+++ b/drivers/video/udlfb.c
@@ -647,7 +647,7 @@ static ssize_t dlfb_ops_write(struct fb_info *info, const char __user *buf,
result = fb_sys_write(info, buf, count, ppos);
if (result > 0) {
- int start = max((int)(offset / info->fix.line_length) - 1, 0);
+ int start = max((int)(offset / info->fix.line_length), 0);
int lines = min((u32)((result / info->fix.line_length) + 1),
(u32)info->var.yres);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai <[email protected]>
commit 7819d1c70eb6a57e43554d86e10b39d1e106ed65 upstream.
The commit [4b527b65 ALSA: hda - limit internal mic boost for Asus
X202E] introduced the use of auto-parser code, but it forgot to add
struct hda_gen_spec at the head of codec->spec, which the auto-parser
silently assumes. Without this record, memory corruption may result.
This patch adds the missing piece.
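The constraint, as a standalone sketch with trimmed-down struct members:
casting the opaque spec pointer to the generic type is valid only
because the generic state is the first member.

#include <stdio.h>

struct gen_spec {                       /* stands in for struct hda_gen_spec */
        int some_generic_state;
};

struct via_spec {
        struct gen_spec gen;            /* must be the first member */
        int num_mixers;                 /* driver-private fields follow */
};

int main(void)
{
        struct via_spec spec = { .gen = { .some_generic_state = 42 }, .num_mixers = 6 };
        void *opaque = &spec;           /* what codec->spec holds */
        struct gen_spec *g = opaque;    /* what the auto-parser does with it */

        printf("generic state = %d\n", g->some_generic_state);
        return 0;
}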
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
sound/pci/hda/patch_via.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
index 9c35043..92dff3e 100644
--- a/sound/pci/hda/patch_via.c
+++ b/sound/pci/hda/patch_via.c
@@ -118,6 +118,8 @@ enum {
};
struct via_spec {
+ struct hda_gen_spec gen;
+
/* codec parameterization */
const struct snd_kcontrol_new *mixers[6];
unsigned int num_mixers;
@@ -246,6 +248,7 @@ static struct via_spec * via_new_spec(struct hda_codec *codec)
/* VT1708BCE & VT1708S are almost same */
if (spec->codec_type == VT1708BCE)
spec->codec_type = VT1708S;
+ snd_hda_gen_init(&spec->gen);
return spec;
}
@@ -1628,6 +1631,7 @@ static void via_free(struct hda_codec *codec)
vt1708_stop_hp_work(spec);
kfree(spec->bind_cap_vol);
kfree(spec->bind_cap_sw);
+ snd_hda_gen_free(&spec->gen);
kfree(spec);
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: David Henningsson <[email protected]>
commit f7f4b2322bf7b8c5929b7eb5a667091f32592580 upstream.
This caused the internal speaker to mute itself because it was detected
as present, which happened after powersave.
It was found on Dell XPS 15 (L502x), ALC665.
Reported-by: Da Fox <[email protected]>
Signed-off-by: David Henningsson <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
sound/pci/hda/patch_realtek.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index 95b9090..f6784d7 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -588,6 +588,8 @@ static void alc_line_automute(struct hda_codec *codec)
{
struct alc_spec *spec = codec->spec;
+ if (spec->autocfg.line_out_type == AUTO_PIN_SPEAKER_OUT)
+ return;
/* check LO jack only when it's different from HP */
if (spec->autocfg.line_out_pins[0] == spec->autocfg.hp_pins[0])
return;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai <[email protected]>
commit c5e0b6dbad9b4d18c561af90b384d02373f1c994 upstream.
The proper destructor should be called at the error path.
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
sound/pci/hda/patch_cirrus.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c
index 9647ed4..557f27d 100644
--- a/sound/pci/hda/patch_cirrus.c
+++ b/sound/pci/hda/patch_cirrus.c
@@ -1417,7 +1417,7 @@ static int patch_cs420x(struct hda_codec *codec)
return 0;
error:
- kfree(codec->spec);
+ cs_free(codec);
codec->spec = NULL;
return err;
}
@@ -1974,7 +1974,7 @@ static int patch_cs4210(struct hda_codec *codec)
return 0;
error:
- kfree(codec->spec);
+ cs_free(codec);
codec->spec = NULL;
return err;
}
@@ -1999,7 +1999,7 @@ static int patch_cs4213(struct hda_codec *codec)
return 0;
error:
- kfree(codec->spec);
+ cs_free(codec);
codec->spec = NULL;
return err;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Amerigo Wang <[email protected]>
commit 5aa8b572007c4bca1e6d3dd4c4820f1ae49d6bb2 upstream.
For IPv6, sizeof(struct ipv6hdr) = 40, thus the following
expression can end up negative:
datalen = pkt_dev->cur_pkt_size - 14 -
sizeof(struct ipv6hdr) - sizeof(struct udphdr) -
pkt_dev->pkt_overhead;
And the check "if (datalen < sizeof(struct pktgen_hdr))" will not catch
it, because "datalen" is promoted to unsigned in the comparison, so the
bogus length causes a crash later.
This is a quick fix that checks whether "datalen" is negative. A
follow-up patch will increase the default value of 'min_pkt_size' for IPv6.
This bug has existed for a long time, so Cc -stable too.
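A standalone sketch of the promotion problem (the struct below is just a
16-byte stand-in for struct pktgen_hdr): once datalen goes negative, the
plain comparison against sizeof() can no longer catch it.

#include <stdio.h>

struct pktgen_hdr_standin {
        unsigned int a, b, c, d;        /* 16 bytes, like the real header */
};

int main(void)
{
        int datalen = -6;       /* what the IPv6 size calculation can produce */

        /* old check: datalen is promoted to an unsigned type, -6 becomes huge */
        if (datalen < sizeof(struct pktgen_hdr_standin))
                printf("old check would have clamped datalen\n");
        else
                printf("old check misses the negative value\n");

        /* the fix: test the sign explicitly first */
        if (datalen < 0 || datalen < sizeof(struct pktgen_hdr_standin))
                printf("fixed check clamps datalen\n");
        return 0;
}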
Cc: David S. Miller <[email protected]>
Signed-off-by: Cong Wang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/core/pktgen.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index cce9e53..aa278cd 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -2927,7 +2927,7 @@ static struct sk_buff *fill_packet_ipv6(struct net_device *odev,
sizeof(struct ipv6hdr) - sizeof(struct udphdr) -
pkt_dev->pkt_overhead;
- if (datalen < sizeof(struct pktgen_hdr)) {
+ if (datalen < 0 || datalen < sizeof(struct pktgen_hdr)) {
datalen = sizeof(struct pktgen_hdr);
net_info_ratelimited("increased datalen to %d\n", datalen);
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Jason Wessel <[email protected]>
commit 17b572e82032bc246324ce136696656b66d4e3f1 upstream.
It is possible to miss data when using the kdb pager. The kdb pager
does not pay attention to the maximum column constraint of the screen
or serial terminal. The result is that the shown-line count is not
incremented correctly and the pager will print more lines than fit on
the screen. Obviously that is less than useful when using a VGA console
where you cannot scroll back.
The pager will now look at the kdb_buffer string to see how many
characters are printed. It might not be perfect considering you can
output ASCII that might move the cursor position, but it is a
substantially better approximation for viewing dmesg and trace logs.
This also means that the vt screen needs to set the kdb COLUMNS
variable.
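A simplified forward-scan analogue of the new accounting (not the kdb
code itself): screen rows are consumed both by explicit newlines and by
wrapping at the column limit.

#include <stdio.h>
#include <string.h>

static int display_rows(const char *s, int columns)
{
        int rows = 0, col = 0;

        for (; *s; s++) {
                if (*s == '\n') {
                        rows++;
                        col = 0;
                } else if (*s == '\r') {
                        col = 0;
                } else if (++col > columns) {
                        rows++;         /* wrapped onto an extra row */
                        col = 1;
                }
        }
        return rows;
}

int main(void)
{
        char buf[202];

        memset(buf, 'x', 200);          /* one 200-character "line"... */
        buf[200] = '\n';
        buf[201] = '\0';

        /* ...occupies 3 rows on an 80-column screen, not 1 */
        printf("rows used: %d\n", display_rows(buf, 80));
        return 0;
}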
Signed-off-by: Jason Wessel <[email protected]>
[ herton: adjusted context ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/tty/vt/vt.c | 13 +++++++++++++
kernel/debug/kdb/kdb_io.c | 33 ++++++++++++++++++++++++++++-----
2 files changed, 41 insertions(+), 5 deletions(-)
diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
index 84cbf29..a13f7e1 100644
--- a/drivers/tty/vt/vt.c
+++ b/drivers/tty/vt/vt.c
@@ -3475,6 +3475,19 @@ int con_debug_enter(struct vc_data *vc)
kdb_set(2, setargs);
}
}
+ if (vc->vc_cols < 999) {
+ int colcount;
+ char cols[4];
+ const char *setargs[3] = {
+ "set",
+ "COLUMNS",
+ cols,
+ };
+ if (kdbgetintenv(setargs[0], &colcount)) {
+ snprintf(cols, 4, "%i", vc->vc_cols);
+ kdb_set(2, setargs);
+ }
+ }
#endif /* CONFIG_KGDB_KDB */
return ret;
}
diff --git a/kernel/debug/kdb/kdb_io.c b/kernel/debug/kdb/kdb_io.c
index bb9520f..572e604 100644
--- a/kernel/debug/kdb/kdb_io.c
+++ b/kernel/debug/kdb/kdb_io.c
@@ -552,6 +552,7 @@ int vkdb_printf(const char *fmt, va_list ap)
{
int diag;
int linecount;
+ int colcount;
int logging, saved_loglevel = 0;
int saved_trap_printk;
int got_printf_lock = 0;
@@ -584,6 +585,10 @@ int vkdb_printf(const char *fmt, va_list ap)
if (diag || linecount <= 1)
linecount = 24;
+ diag = kdbgetintenv("COLUMNS", &colcount);
+ if (diag || colcount <= 1)
+ colcount = 80;
+
diag = kdbgetintenv("LOGGING", &logging);
if (diag)
logging = 0;
@@ -690,7 +695,7 @@ kdb_printit:
gdbstub_msg_write(kdb_buffer, retlen);
} else {
if (dbg_io_ops && !dbg_io_ops->is_console) {
- len = strlen(kdb_buffer);
+ len = retlen;
cp = kdb_buffer;
while (len--) {
dbg_io_ops->write_char(*cp);
@@ -709,11 +714,29 @@ kdb_printit:
printk(KERN_INFO "%s", kdb_buffer);
}
- if (KDB_STATE(PAGER) && strchr(kdb_buffer, '\n'))
- kdb_nextline++;
+ if (KDB_STATE(PAGER)) {
+ /*
+ * Check printed string to decide how to bump the
+ * kdb_nextline to control when the more prompt should
+ * show up.
+ */
+ int got = 0;
+ len = retlen;
+ while (len--) {
+ if (kdb_buffer[len] == '\n') {
+ kdb_nextline++;
+ got = 0;
+ } else if (kdb_buffer[len] == '\r') {
+ got = 0;
+ } else {
+ got++;
+ }
+ }
+ kdb_nextline += got / (colcount + 1);
+ }
/* check for having reached the LINES number of printed lines */
- if (kdb_nextline == linecount) {
+ if (kdb_nextline >= linecount) {
char buf1[16] = "";
#if defined(CONFIG_SMP)
char buf2[32];
@@ -776,7 +799,7 @@ kdb_printit:
kdb_grepping_flag = 0;
kdb_printf("\n");
} else if (buf1[0] == ' ') {
- kdb_printf("\n");
+ kdb_printf("\r");
suspend_grep = 1; /* for this recursion */
} else if (buf1[0] == '\n') {
kdb_nextline = linecount - 1;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Carpenter <[email protected]>
commit 91502f099dfc5a1e8812898e26ee280713e1d002 upstream.
Clang complains that we are assigning a variable to itself. This should
be using bad_sectors like the similar earlier check does.
Bug has been present since 3.1-rc1. It is minor but could
conceivably cause corruption or other bad behaviour.
Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/md/raid10.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index e987da4..17fae37 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3139,7 +3139,7 @@ static sector_t sync_request(struct mddev *mddev, sector_t sector_nr,
else {
bad_sectors -= (sector - first_bad);
if (max_sync > bad_sectors)
- max_sync = max_sync;
+ max_sync = bad_sectors;
continue;
}
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Konrad Rzeszutek Wilk <[email protected]>
commit 1a7bbda5b1ab0e02622761305a32dc38735b90b2 upstream.
We actually do not do anything about it. Just return a default
value of zero and if the kernel tries to write anything but 0
we BUG_ON.
This fixes the case when a user tries to suspend the machine
and it blows up in save_processor_state because 'read_cr8' is set
to NULL and we get:
kernel BUG at /home/konrad/ssd/linux/arch/x86/include/asm/paravirt.h:100!
invalid opcode: 0000 [#1] SMP
Pid: 2687, comm: init.late Tainted: G O 3.6.0upstream-00002-gac264ac-dirty #4 Bochs Bochs
RIP: e030:[<ffffffff814d5f42>] [<ffffffff814d5f42>] save_processor_state+0x212/0x270
.. snip..
Call Trace:
[<ffffffff810733bf>] do_suspend_lowlevel+0xf/0xac
[<ffffffff8107330c>] ? x86_acpi_suspend_lowlevel+0x10c/0x150
[<ffffffff81342ee2>] acpi_suspend_enter+0x57/0xd5
Signed-off-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/x86/xen/enlighten.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 93dcfdc..c1656e0 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -952,7 +952,16 @@ static void xen_write_cr4(unsigned long cr4)
native_write_cr4(cr4);
}
-
+#ifdef CONFIG_X86_64
+static inline unsigned long xen_read_cr8(void)
+{
+ return 0;
+}
+static inline void xen_write_cr8(unsigned long val)
+{
+ BUG_ON(val);
+}
+#endif
static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
{
int ret;
@@ -1121,6 +1130,11 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
.read_cr4_safe = native_read_cr4_safe,
.write_cr4 = xen_write_cr4,
+#ifdef CONFIG_X86_64
+ .read_cr8 = xen_read_cr8,
+ .write_cr8 = xen_write_cr8,
+#endif
+
.wbinvd = native_wbinvd,
.read_msr = native_read_msr_safe,
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit e22004235a900213625acd6583ac913d5a30c155 upstream.
The functions ceph_con_out_kvec_reset() and ceph_con_out_kvec_add()
are entirely private functions, so drop the "ceph_" prefix in their
names to make them slightly more wieldy.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Yehuda Sadeh <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 48 ++++++++++++++++++++++++------------------------
1 file changed, 24 insertions(+), 24 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 5fb9937..d0b27e9 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -486,14 +486,14 @@ static u32 get_global_seq(struct ceph_messenger *msgr, u32 gt)
return ret;
}
-static void ceph_con_out_kvec_reset(struct ceph_connection *con)
+static void con_out_kvec_reset(struct ceph_connection *con)
{
con->out_kvec_left = 0;
con->out_kvec_bytes = 0;
con->out_kvec_cur = &con->out_kvec[0];
}
-static void ceph_con_out_kvec_add(struct ceph_connection *con,
+static void con_out_kvec_add(struct ceph_connection *con,
size_t size, void *data)
{
int index;
@@ -534,7 +534,7 @@ static void prepare_write_message(struct ceph_connection *con)
struct ceph_msg *m;
u32 crc;
- ceph_con_out_kvec_reset(con);
+ con_out_kvec_reset(con);
con->out_kvec_is_msg = true;
con->out_msg_done = false;
@@ -542,9 +542,9 @@ static void prepare_write_message(struct ceph_connection *con)
* TCP packet that's a good thing. */
if (con->in_seq > con->in_seq_acked) {
con->in_seq_acked = con->in_seq;
- ceph_con_out_kvec_add(con, sizeof (tag_ack), &tag_ack);
+ con_out_kvec_add(con, sizeof (tag_ack), &tag_ack);
con->out_temp_ack = cpu_to_le64(con->in_seq_acked);
- ceph_con_out_kvec_add(con, sizeof (con->out_temp_ack),
+ con_out_kvec_add(con, sizeof (con->out_temp_ack),
&con->out_temp_ack);
}
@@ -576,12 +576,12 @@ static void prepare_write_message(struct ceph_connection *con)
BUG_ON(le32_to_cpu(m->hdr.front_len) != m->front.iov_len);
/* tag + hdr + front + middle */
- ceph_con_out_kvec_add(con, sizeof (tag_msg), &tag_msg);
- ceph_con_out_kvec_add(con, sizeof (m->hdr), &m->hdr);
- ceph_con_out_kvec_add(con, m->front.iov_len, m->front.iov_base);
+ con_out_kvec_add(con, sizeof (tag_msg), &tag_msg);
+ con_out_kvec_add(con, sizeof (m->hdr), &m->hdr);
+ con_out_kvec_add(con, m->front.iov_len, m->front.iov_base);
if (m->middle)
- ceph_con_out_kvec_add(con, m->middle->vec.iov_len,
+ con_out_kvec_add(con, m->middle->vec.iov_len,
m->middle->vec.iov_base);
/* fill in crc (except data pages), footer */
@@ -630,12 +630,12 @@ static void prepare_write_ack(struct ceph_connection *con)
con->in_seq_acked, con->in_seq);
con->in_seq_acked = con->in_seq;
- ceph_con_out_kvec_reset(con);
+ con_out_kvec_reset(con);
- ceph_con_out_kvec_add(con, sizeof (tag_ack), &tag_ack);
+ con_out_kvec_add(con, sizeof (tag_ack), &tag_ack);
con->out_temp_ack = cpu_to_le64(con->in_seq_acked);
- ceph_con_out_kvec_add(con, sizeof (con->out_temp_ack),
+ con_out_kvec_add(con, sizeof (con->out_temp_ack),
&con->out_temp_ack);
con->out_more = 1; /* more will follow.. eventually.. */
@@ -648,8 +648,8 @@ static void prepare_write_ack(struct ceph_connection *con)
static void prepare_write_keepalive(struct ceph_connection *con)
{
dout("prepare_write_keepalive %p\n", con);
- ceph_con_out_kvec_reset(con);
- ceph_con_out_kvec_add(con, sizeof (tag_keepalive), &tag_keepalive);
+ con_out_kvec_reset(con);
+ con_out_kvec_add(con, sizeof (tag_keepalive), &tag_keepalive);
set_bit(WRITE_PENDING, &con->state);
}
@@ -694,8 +694,8 @@ static struct ceph_auth_handshake *get_connect_authorizer(struct ceph_connection
*/
static void prepare_write_banner(struct ceph_connection *con)
{
- ceph_con_out_kvec_add(con, strlen(CEPH_BANNER), CEPH_BANNER);
- ceph_con_out_kvec_add(con, sizeof (con->msgr->my_enc_addr),
+ con_out_kvec_add(con, strlen(CEPH_BANNER), CEPH_BANNER);
+ con_out_kvec_add(con, sizeof (con->msgr->my_enc_addr),
&con->msgr->my_enc_addr);
con->out_more = 0;
@@ -742,10 +742,10 @@ static int prepare_write_connect(struct ceph_connection *con)
con->out_connect.authorizer_len = auth ?
cpu_to_le32(auth->authorizer_buf_len) : 0;
- ceph_con_out_kvec_add(con, sizeof (con->out_connect),
+ con_out_kvec_add(con, sizeof (con->out_connect),
&con->out_connect);
if (auth && auth->authorizer_buf_len)
- ceph_con_out_kvec_add(con, auth->authorizer_buf_len,
+ con_out_kvec_add(con, auth->authorizer_buf_len,
auth->authorizer_buf);
con->out_more = 0;
@@ -939,7 +939,7 @@ static int write_partial_msg_pages(struct ceph_connection *con)
/* prepare and queue up footer, too */
if (!do_datacrc)
con->out_msg->footer.flags |= CEPH_MSG_FOOTER_NOCRC;
- ceph_con_out_kvec_reset(con);
+ con_out_kvec_reset(con);
prepare_write_message_footer(con);
ret = 1;
out:
@@ -1402,7 +1402,7 @@ static int process_connect(struct ceph_connection *con)
return -1;
}
con->auth_retry = 1;
- ceph_con_out_kvec_reset(con);
+ con_out_kvec_reset(con);
ret = prepare_write_connect(con);
if (ret < 0)
return ret;
@@ -1423,7 +1423,7 @@ static int process_connect(struct ceph_connection *con)
ENTITY_NAME(con->peer_name),
ceph_pr_addr(&con->peer_addr.in_addr));
reset_connection(con);
- ceph_con_out_kvec_reset(con);
+ con_out_kvec_reset(con);
ret = prepare_write_connect(con);
if (ret < 0)
return ret;
@@ -1449,7 +1449,7 @@ static int process_connect(struct ceph_connection *con)
le32_to_cpu(con->out_connect.connect_seq),
le32_to_cpu(con->in_reply.connect_seq));
con->connect_seq = le32_to_cpu(con->in_reply.connect_seq);
- ceph_con_out_kvec_reset(con);
+ con_out_kvec_reset(con);
ret = prepare_write_connect(con);
if (ret < 0)
return ret;
@@ -1466,7 +1466,7 @@ static int process_connect(struct ceph_connection *con)
le32_to_cpu(con->in_reply.global_seq));
get_global_seq(con->msgr,
le32_to_cpu(con->in_reply.global_seq));
- ceph_con_out_kvec_reset(con);
+ con_out_kvec_reset(con);
ret = prepare_write_connect(con);
if (ret < 0)
return ret;
@@ -1873,7 +1873,7 @@ more:
/* open the socket first? */
if (con->sock == NULL) {
- ceph_con_out_kvec_reset(con);
+ con_out_kvec_reset(con);
prepare_write_banner(con);
ret = prepare_write_connect(con);
if (ret < 0)
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 928443cd9644e7cfd46f687dbeffda2d1a357ff9 upstream.
A ceph_connection holds a mixture of connection state (as in "state
machine" state) and connection flags in a single "state" field. To
make the distinction more clear, define a new "flags" field and use
it rather than the "state" field to hold Boolean flag values.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/messenger.h | 18 ++++++++++-----
net/ceph/messenger.c | 50 ++++++++++++++++++++--------------------
2 files changed, 37 insertions(+), 31 deletions(-)
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index 771b2ed..34e9506 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -103,20 +103,25 @@ struct ceph_msg_pos {
#define MAX_DELAY_INTERVAL (5 * 60 * HZ)
/*
- * ceph_connection state bit flags
+ * ceph_connection flag bits
*/
+
#define LOSSYTX 0 /* we can close channel or drop messages on errors */
-#define CONNECTING 1
-#define NEGOTIATING 2
#define KEEPALIVE_PENDING 3
#define WRITE_PENDING 4 /* we have data ready to send */
+#define SOCK_CLOSED 11 /* socket state changed to closed */
+#define BACKOFF 15
+
+/*
+ * ceph_connection states
+ */
+#define CONNECTING 1
+#define NEGOTIATING 2
#define STANDBY 8 /* no outgoing messages, socket closed. we keep
* the ceph_connection around to maintain shared
* state with the peer. */
#define CLOSED 10 /* we've closed the connection */
-#define SOCK_CLOSED 11 /* socket state changed to closed */
#define OPENING 13 /* open connection w/ (possibly new) peer */
-#define BACKOFF 15
/*
* A single connection with another host.
@@ -133,7 +138,8 @@ struct ceph_connection {
struct ceph_messenger *msgr;
struct socket *sock;
- unsigned long state; /* connection state (see flags above) */
+ unsigned long flags;
+ unsigned long state;
const char *error_msg; /* error message, if any */
struct ceph_entity_addr peer_addr; /* peer address */
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index a04557e..7ec608c 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -176,7 +176,7 @@ static void ceph_sock_write_space(struct sock *sk)
* buffer. See net/ipv4/tcp_input.c:tcp_check_space()
* and net/core/stream.c:sk_stream_write_space().
*/
- if (test_bit(WRITE_PENDING, &con->state)) {
+ if (test_bit(WRITE_PENDING, &con->flags)) {
if (sk_stream_wspace(sk) >= sk_stream_min_wspace(sk)) {
dout("%s %p queueing write work\n", __func__, con);
clear_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
@@ -203,7 +203,7 @@ static void ceph_sock_state_change(struct sock *sk)
dout("%s TCP_CLOSE\n", __func__);
case TCP_CLOSE_WAIT:
dout("%s TCP_CLOSE_WAIT\n", __func__);
- if (test_and_set_bit(SOCK_CLOSED, &con->state) == 0) {
+ if (test_and_set_bit(SOCK_CLOSED, &con->flags) == 0) {
if (test_bit(CONNECTING, &con->state))
con->error_msg = "connection failed";
else
@@ -395,9 +395,9 @@ void ceph_con_close(struct ceph_connection *con)
ceph_pr_addr(&con->peer_addr.in_addr));
set_bit(CLOSED, &con->state); /* in case there's queued work */
clear_bit(STANDBY, &con->state); /* avoid connect_seq bump */
- clear_bit(LOSSYTX, &con->state); /* so we retry next connect */
- clear_bit(KEEPALIVE_PENDING, &con->state);
- clear_bit(WRITE_PENDING, &con->state);
+ clear_bit(LOSSYTX, &con->flags); /* so we retry next connect */
+ clear_bit(KEEPALIVE_PENDING, &con->flags);
+ clear_bit(WRITE_PENDING, &con->flags);
mutex_lock(&con->mutex);
reset_connection(con);
con->peer_global_seq = 0;
@@ -618,7 +618,7 @@ static void prepare_write_message(struct ceph_connection *con)
prepare_write_message_footer(con);
}
- set_bit(WRITE_PENDING, &con->state);
+ set_bit(WRITE_PENDING, &con->flags);
}
/*
@@ -639,7 +639,7 @@ static void prepare_write_ack(struct ceph_connection *con)
&con->out_temp_ack);
con->out_more = 1; /* more will follow.. eventually.. */
- set_bit(WRITE_PENDING, &con->state);
+ set_bit(WRITE_PENDING, &con->flags);
}
/*
@@ -650,7 +650,7 @@ static void prepare_write_keepalive(struct ceph_connection *con)
dout("prepare_write_keepalive %p\n", con);
con_out_kvec_reset(con);
con_out_kvec_add(con, sizeof (tag_keepalive), &tag_keepalive);
- set_bit(WRITE_PENDING, &con->state);
+ set_bit(WRITE_PENDING, &con->flags);
}
/*
@@ -679,7 +679,7 @@ static struct ceph_auth_handshake *get_connect_authorizer(struct ceph_connection
if (IS_ERR(auth))
return auth;
- if (test_bit(CLOSED, &con->state) || test_bit(OPENING, &con->state))
+ if (test_bit(CLOSED, &con->state) || test_bit(OPENING, &con->flags))
return ERR_PTR(-EAGAIN);
con->auth_reply_buf = auth->authorizer_reply_buf;
@@ -699,7 +699,7 @@ static void prepare_write_banner(struct ceph_connection *con)
&con->msgr->my_enc_addr);
con->out_more = 0;
- set_bit(WRITE_PENDING, &con->state);
+ set_bit(WRITE_PENDING, &con->flags);
}
static int prepare_write_connect(struct ceph_connection *con)
@@ -749,7 +749,7 @@ static int prepare_write_connect(struct ceph_connection *con)
auth->authorizer_buf);
con->out_more = 0;
- set_bit(WRITE_PENDING, &con->state);
+ set_bit(WRITE_PENDING, &con->flags);
return 0;
}
@@ -1496,7 +1496,7 @@ static int process_connect(struct ceph_connection *con)
le32_to_cpu(con->in_reply.connect_seq));
if (con->in_reply.flags & CEPH_MSG_CONNECT_LOSSY)
- set_bit(LOSSYTX, &con->state);
+ set_bit(LOSSYTX, &con->flags);
prepare_read_tag(con);
break;
@@ -1937,14 +1937,14 @@ do_next:
prepare_write_ack(con);
goto more;
}
- if (test_and_clear_bit(KEEPALIVE_PENDING, &con->state)) {
+ if (test_and_clear_bit(KEEPALIVE_PENDING, &con->flags)) {
prepare_write_keepalive(con);
goto more;
}
}
/* Nothing to do! */
- clear_bit(WRITE_PENDING, &con->state);
+ clear_bit(WRITE_PENDING, &con->flags);
dout("try_write nothing else to write.\n");
ret = 0;
out:
@@ -2110,7 +2110,7 @@ static void con_work(struct work_struct *work)
mutex_lock(&con->mutex);
restart:
- if (test_and_clear_bit(BACKOFF, &con->state)) {
+ if (test_and_clear_bit(BACKOFF, &con->flags)) {
dout("con_work %p backing off\n", con);
if (queue_delayed_work(ceph_msgr_wq, &con->work,
round_jiffies_relative(con->delay))) {
@@ -2139,7 +2139,7 @@ restart:
con_close_socket(con);
}
- if (test_and_clear_bit(SOCK_CLOSED, &con->state))
+ if (test_and_clear_bit(SOCK_CLOSED, &con->flags))
goto fault;
ret = try_read(con);
@@ -2178,7 +2178,7 @@ static void ceph_fault(struct ceph_connection *con)
dout("fault %p state %lu to peer %s\n",
con, con->state, ceph_pr_addr(&con->peer_addr.in_addr));
- if (test_bit(LOSSYTX, &con->state)) {
+ if (test_bit(LOSSYTX, &con->flags)) {
dout("fault on LOSSYTX channel\n");
goto out;
}
@@ -2200,9 +2200,9 @@ static void ceph_fault(struct ceph_connection *con)
/* If there are no messages queued or keepalive pending, place
* the connection in a STANDBY state */
if (list_empty(&con->out_queue) &&
- !test_bit(KEEPALIVE_PENDING, &con->state)) {
+ !test_bit(KEEPALIVE_PENDING, &con->flags)) {
dout("fault %p setting STANDBY clearing WRITE_PENDING\n", con);
- clear_bit(WRITE_PENDING, &con->state);
+ clear_bit(WRITE_PENDING, &con->flags);
set_bit(STANDBY, &con->state);
} else {
/* retry after a delay. */
@@ -2226,7 +2226,7 @@ static void ceph_fault(struct ceph_connection *con)
* that when con_work restarts we schedule the
* delay then.
*/
- set_bit(BACKOFF, &con->state);
+ set_bit(BACKOFF, &con->flags);
}
}
@@ -2282,8 +2282,8 @@ static void clear_standby(struct ceph_connection *con)
mutex_lock(&con->mutex);
dout("clear_standby %p and ++connect_seq\n", con);
con->connect_seq++;
- WARN_ON(test_bit(WRITE_PENDING, &con->state));
- WARN_ON(test_bit(KEEPALIVE_PENDING, &con->state));
+ WARN_ON(test_bit(WRITE_PENDING, &con->flags));
+ WARN_ON(test_bit(KEEPALIVE_PENDING, &con->flags));
mutex_unlock(&con->mutex);
}
}
@@ -2321,7 +2321,7 @@ void ceph_con_send(struct ceph_connection *con, struct ceph_msg *msg)
/* if there wasn't anything waiting to send before, queue
* new work */
clear_standby(con);
- if (test_and_set_bit(WRITE_PENDING, &con->state) == 0)
+ if (test_and_set_bit(WRITE_PENDING, &con->flags) == 0)
queue_con(con);
}
EXPORT_SYMBOL(ceph_con_send);
@@ -2388,8 +2388,8 @@ void ceph_con_keepalive(struct ceph_connection *con)
{
dout("con_keepalive %p\n", con);
clear_standby(con);
- if (test_and_set_bit(KEEPALIVE_PENDING, &con->state) == 0 &&
- test_and_set_bit(WRITE_PENDING, &con->state) == 0)
+ if (test_and_set_bit(KEEPALIVE_PENDING, &con->flags) == 0 &&
+ test_and_set_bit(WRITE_PENDING, &con->flags) == 0)
queue_con(con);
}
EXPORT_SYMBOL(ceph_con_keepalive);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 15d9882c336db2db73ccf9871ae2398e452f694c upstream.
A ceph client has a pointer to a ceph messenger structure in it.
There is always exactly one ceph messenger for a ceph client, so
there is no need to allocate it separately from the ceph client
structure.
Switch the ceph_client structure to embed its ceph_messenger
structure.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Yehuda Sadeh <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ceph/mds_client.c | 2 +-
include/linux/ceph/libceph.h | 2 +-
include/linux/ceph/messenger.h | 9 +++++----
net/ceph/ceph_common.c | 18 +++++-------------
net/ceph/messenger.c | 30 +++++++++---------------------
net/ceph/mon_client.c | 6 +++---
net/ceph/osd_client.c | 4 ++--
7 files changed, 26 insertions(+), 45 deletions(-)
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 200bc87..ad30261 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -394,7 +394,7 @@ static struct ceph_mds_session *register_session(struct ceph_mds_client *mdsc,
s->s_seq = 0;
mutex_init(&s->s_mutex);
- ceph_con_init(mdsc->fsc->client->msgr, &s->s_con);
+ ceph_con_init(&mdsc->fsc->client->msgr, &s->s_con);
s->s_con.private = s;
s->s_con.ops = &mds_con_ops;
s->s_con.peer_name.type = CEPH_ENTITY_TYPE_MDS;
diff --git a/include/linux/ceph/libceph.h b/include/linux/ceph/libceph.h
index e71d683..98ec36a 100644
--- a/include/linux/ceph/libceph.h
+++ b/include/linux/ceph/libceph.h
@@ -132,7 +132,7 @@ struct ceph_client {
u32 supported_features;
u32 required_features;
- struct ceph_messenger *msgr; /* messenger instance */
+ struct ceph_messenger msgr; /* messenger instance */
struct ceph_mon_client monc;
struct ceph_osd_client osdc;
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index ce7a483..771b2ed 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -203,10 +203,11 @@ extern int ceph_msgr_init(void);
extern void ceph_msgr_exit(void);
extern void ceph_msgr_flush(void);
-extern struct ceph_messenger *ceph_messenger_create(
- struct ceph_entity_addr *myaddr,
- u32 features, u32 required);
-extern void ceph_messenger_destroy(struct ceph_messenger *);
+extern void ceph_messenger_init(struct ceph_messenger *msgr,
+ struct ceph_entity_addr *myaddr,
+ u32 supported_features,
+ u32 required_features,
+ bool nocrc);
extern void ceph_con_init(struct ceph_messenger *msgr,
struct ceph_connection *con);
diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c
index ba4323b..58b09ef 100644
--- a/net/ceph/ceph_common.c
+++ b/net/ceph/ceph_common.c
@@ -468,19 +468,15 @@ struct ceph_client *ceph_create_client(struct ceph_options *opt, void *private,
/* msgr */
if (ceph_test_opt(client, MYIP))
myaddr = &client->options->my_addr;
- client->msgr = ceph_messenger_create(myaddr,
- client->supported_features,
- client->required_features);
- if (IS_ERR(client->msgr)) {
- err = PTR_ERR(client->msgr);
- goto fail;
- }
- client->msgr->nocrc = ceph_test_opt(client, NOCRC);
+ ceph_messenger_init(&client->msgr, myaddr,
+ client->supported_features,
+ client->required_features,
+ ceph_test_opt(client, NOCRC));
/* subsystems */
err = ceph_monc_init(&client->monc, client);
if (err < 0)
- goto fail_msgr;
+ goto fail;
err = ceph_osdc_init(&client->osdc, client);
if (err < 0)
goto fail_monc;
@@ -489,8 +485,6 @@ struct ceph_client *ceph_create_client(struct ceph_options *opt, void *private,
fail_monc:
ceph_monc_stop(&client->monc);
-fail_msgr:
- ceph_messenger_destroy(client->msgr);
fail:
kfree(client);
return ERR_PTR(err);
@@ -508,8 +502,6 @@ void ceph_destroy_client(struct ceph_client *client)
ceph_debugfs_client_cleanup(client);
- ceph_messenger_destroy(client->msgr);
-
ceph_destroy_options(client->options);
kfree(client);
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index d0b27e9..a04557e 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2249,18 +2249,14 @@ out:
/*
- * create a new messenger instance
+ * initialize a new messenger instance
*/
-struct ceph_messenger *ceph_messenger_create(struct ceph_entity_addr *myaddr,
- u32 supported_features,
- u32 required_features)
+void ceph_messenger_init(struct ceph_messenger *msgr,
+ struct ceph_entity_addr *myaddr,
+ u32 supported_features,
+ u32 required_features,
+ bool nocrc)
{
- struct ceph_messenger *msgr;
-
- msgr = kzalloc(sizeof(*msgr), GFP_KERNEL);
- if (msgr == NULL)
- return ERR_PTR(-ENOMEM);
-
msgr->supported_features = supported_features;
msgr->required_features = required_features;
@@ -2273,19 +2269,11 @@ struct ceph_messenger *ceph_messenger_create(struct ceph_entity_addr *myaddr,
msgr->inst.addr.type = 0;
get_random_bytes(&msgr->inst.addr.nonce, sizeof(msgr->inst.addr.nonce));
encode_my_addr(msgr);
+ msgr->nocrc = nocrc;
- dout("messenger_create %p\n", msgr);
- return msgr;
-}
-EXPORT_SYMBOL(ceph_messenger_create);
-
-void ceph_messenger_destroy(struct ceph_messenger *msgr)
-{
- dout("destroy %p\n", msgr);
- kfree(msgr);
- dout("destroyed messenger %p\n", msgr);
+ dout("%s %p\n", __func__, msgr);
}
-EXPORT_SYMBOL(ceph_messenger_destroy);
+EXPORT_SYMBOL(ceph_messenger_init);
static void clear_standby(struct ceph_connection *con)
{
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index d0649a9..37c4cd7 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -763,7 +763,7 @@ int ceph_monc_init(struct ceph_mon_client *monc, struct ceph_client *cl)
monc->con = kmalloc(sizeof(*monc->con), GFP_KERNEL);
if (!monc->con)
goto out_monmap;
- ceph_con_init(monc->client->msgr, monc->con);
+ ceph_con_init(&monc->client->msgr, monc->con);
monc->con->private = monc;
monc->con->ops = &mon_con_ops;
@@ -888,8 +888,8 @@ static void handle_auth_reply(struct ceph_mon_client *monc,
} else if (!was_auth && monc->auth->ops->is_authenticated(monc->auth)) {
dout("authenticated, starting session\n");
- monc->client->msgr->inst.name.type = CEPH_ENTITY_TYPE_CLIENT;
- monc->client->msgr->inst.name.num =
+ monc->client->msgr.inst.name.type = CEPH_ENTITY_TYPE_CLIENT;
+ monc->client->msgr.inst.name.num =
cpu_to_le64(monc->auth->global_id);
__send_subscribe(monc);
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index ca59e66..dc532c5 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -639,7 +639,7 @@ static struct ceph_osd *create_osd(struct ceph_osd_client *osdc)
INIT_LIST_HEAD(&osd->o_osd_lru);
osd->o_incarnation = 1;
- ceph_con_init(osdc->client->msgr, &osd->o_con);
+ ceph_con_init(&osdc->client->msgr, &osd->o_con);
osd->o_con.private = osd;
osd->o_con.ops = &osd_con_ops;
osd->o_con.peer_name.type = CEPH_ENTITY_TYPE_OSD;
@@ -1391,7 +1391,7 @@ void ceph_osdc_handle_map(struct ceph_osd_client *osdc, struct ceph_msg *msg)
epoch, maplen);
newmap = osdmap_apply_incremental(&p, next,
osdc->osdmap,
- osdc->client->msgr);
+ &osdc->client->msgr);
if (IS_ERR(newmap)) {
err = PTR_ERR(newmap);
goto bad;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit e10006f807ffc4d5b1d861305d18d9e8145891ca upstream.
Pass the osd number to the create_osd() routine, and move the
initialization of fields that depend on it therein.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/osd_client.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index dc532c5..cb5b847 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -624,7 +624,7 @@ static void osd_reset(struct ceph_connection *con)
/*
* Track open sessions with osds.
*/
-static struct ceph_osd *create_osd(struct ceph_osd_client *osdc)
+static struct ceph_osd *create_osd(struct ceph_osd_client *osdc, int onum)
{
struct ceph_osd *osd;
@@ -634,6 +634,7 @@ static struct ceph_osd *create_osd(struct ceph_osd_client *osdc)
atomic_set(&osd->o_ref, 1);
osd->o_osdc = osdc;
+ osd->o_osd = onum;
INIT_LIST_HEAD(&osd->o_requests);
INIT_LIST_HEAD(&osd->o_linger_requests);
INIT_LIST_HEAD(&osd->o_osd_lru);
@@ -643,6 +644,7 @@ static struct ceph_osd *create_osd(struct ceph_osd_client *osdc)
osd->o_con.private = osd;
osd->o_con.ops = &osd_con_ops;
osd->o_con.peer_name.type = CEPH_ENTITY_TYPE_OSD;
+ osd->o_con.peer_name.num = cpu_to_le64(onum);
INIT_LIST_HEAD(&osd->o_keepalive_item);
return osd;
@@ -998,15 +1000,13 @@ static int __map_request(struct ceph_osd_client *osdc,
req->r_osd = __lookup_osd(osdc, o);
if (!req->r_osd && o >= 0) {
err = -ENOMEM;
- req->r_osd = create_osd(osdc);
+ req->r_osd = create_osd(osdc, o);
if (!req->r_osd) {
list_move(&req->r_req_lru_item, &osdc->req_notarget);
goto out;
}
dout("map_request osd %p is osd%d\n", req->r_osd, o);
- req->r_osd->o_osd = o;
- req->r_osd->o_con.peer_name.num = cpu_to_le64(o);
__insert_osd(osdc, req->r_osd);
ceph_con_open(&req->r_osd->o_con, &osdc->osdmap->osd_addr[o]);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 67130934fb579fdf0f2f6d745960264378b57dc8 upstream.
A monitor client has a pointer to a ceph connection structure in it.
This is the only one of the three ceph client types that does it this
way; the OSD and MDS clients embed the connection into their main
structures. There is always exactly one ceph connection for a
monitor client, so there is no need to allocate it separately from the
monitor client structure.
So switch the ceph_mon_client structure to embed its
ceph_connection structure.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/mon_client.h | 2 +-
net/ceph/mon_client.c | 47 +++++++++++++++++----------------------
2 files changed, 21 insertions(+), 28 deletions(-)
diff --git a/include/linux/ceph/mon_client.h b/include/linux/ceph/mon_client.h
index 545f859..2113e38 100644
--- a/include/linux/ceph/mon_client.h
+++ b/include/linux/ceph/mon_client.h
@@ -70,7 +70,7 @@ struct ceph_mon_client {
bool hunting;
int cur_mon; /* last monitor i contacted */
unsigned long sub_sent, sub_renew_after;
- struct ceph_connection *con;
+ struct ceph_connection con;
bool have_fsid;
/* pending generic requests */
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index 37c4cd7..ec75996 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -106,9 +106,9 @@ static void __send_prepared_auth_request(struct ceph_mon_client *monc, int len)
monc->pending_auth = 1;
monc->m_auth->front.iov_len = len;
monc->m_auth->hdr.front_len = cpu_to_le32(len);
- ceph_con_revoke(monc->con, monc->m_auth);
+ ceph_con_revoke(&monc->con, monc->m_auth);
ceph_msg_get(monc->m_auth); /* keep our ref */
- ceph_con_send(monc->con, monc->m_auth);
+ ceph_con_send(&monc->con, monc->m_auth);
}
/*
@@ -117,8 +117,8 @@ static void __send_prepared_auth_request(struct ceph_mon_client *monc, int len)
static void __close_session(struct ceph_mon_client *monc)
{
dout("__close_session closing mon%d\n", monc->cur_mon);
- ceph_con_revoke(monc->con, monc->m_auth);
- ceph_con_close(monc->con);
+ ceph_con_revoke(&monc->con, monc->m_auth);
+ ceph_con_close(&monc->con);
monc->cur_mon = -1;
monc->pending_auth = 0;
ceph_auth_reset(monc->auth);
@@ -142,9 +142,9 @@ static int __open_session(struct ceph_mon_client *monc)
monc->want_next_osdmap = !!monc->want_next_osdmap;
dout("open_session mon%d opening\n", monc->cur_mon);
- monc->con->peer_name.type = CEPH_ENTITY_TYPE_MON;
- monc->con->peer_name.num = cpu_to_le64(monc->cur_mon);
- ceph_con_open(monc->con,
+ monc->con.peer_name.type = CEPH_ENTITY_TYPE_MON;
+ monc->con.peer_name.num = cpu_to_le64(monc->cur_mon);
+ ceph_con_open(&monc->con,
&monc->monmap->mon_inst[monc->cur_mon].addr);
/* initiatiate authentication handshake */
@@ -226,8 +226,8 @@ static void __send_subscribe(struct ceph_mon_client *monc)
msg->front.iov_len = p - msg->front.iov_base;
msg->hdr.front_len = cpu_to_le32(msg->front.iov_len);
- ceph_con_revoke(monc->con, msg);
- ceph_con_send(monc->con, ceph_msg_get(msg));
+ ceph_con_revoke(&monc->con, msg);
+ ceph_con_send(&monc->con, ceph_msg_get(msg));
monc->sub_sent = jiffies | 1; /* never 0 */
}
@@ -247,7 +247,7 @@ static void handle_subscribe_ack(struct ceph_mon_client *monc,
if (monc->hunting) {
pr_info("mon%d %s session established\n",
monc->cur_mon,
- ceph_pr_addr(&monc->con->peer_addr.in_addr));
+ ceph_pr_addr(&monc->con.peer_addr.in_addr));
monc->hunting = false;
}
dout("handle_subscribe_ack after %d seconds\n", seconds);
@@ -461,7 +461,7 @@ static int do_generic_request(struct ceph_mon_client *monc,
req->request->hdr.tid = cpu_to_le64(req->tid);
__insert_generic_request(monc, req);
monc->num_generic_requests++;
- ceph_con_send(monc->con, ceph_msg_get(req->request));
+ ceph_con_send(&monc->con, ceph_msg_get(req->request));
mutex_unlock(&monc->mutex);
err = wait_for_completion_interruptible(&req->completion);
@@ -684,8 +684,8 @@ static void __resend_generic_request(struct ceph_mon_client *monc)
for (p = rb_first(&monc->generic_request_tree); p; p = rb_next(p)) {
req = rb_entry(p, struct ceph_mon_generic_request, node);
- ceph_con_revoke(monc->con, req->request);
- ceph_con_send(monc->con, ceph_msg_get(req->request));
+ ceph_con_revoke(&monc->con, req->request);
+ ceph_con_send(&monc->con, ceph_msg_get(req->request));
}
}
@@ -705,7 +705,7 @@ static void delayed_work(struct work_struct *work)
__close_session(monc);
__open_session(monc); /* continue hunting */
} else {
- ceph_con_keepalive(monc->con);
+ ceph_con_keepalive(&monc->con);
__validate_auth(monc);
@@ -760,19 +760,16 @@ int ceph_monc_init(struct ceph_mon_client *monc, struct ceph_client *cl)
goto out;
/* connection */
- monc->con = kmalloc(sizeof(*monc->con), GFP_KERNEL);
- if (!monc->con)
- goto out_monmap;
- ceph_con_init(&monc->client->msgr, monc->con);
- monc->con->private = monc;
- monc->con->ops = &mon_con_ops;
+ ceph_con_init(&monc->client->msgr, &monc->con);
+ monc->con.private = monc;
+ monc->con.ops = &mon_con_ops;
/* authentication */
monc->auth = ceph_auth_init(cl->options->name,
cl->options->key);
if (IS_ERR(monc->auth)) {
err = PTR_ERR(monc->auth);
- goto out_con;
+ goto out_monmap;
}
monc->auth->want_keys =
CEPH_ENTITY_TYPE_AUTH | CEPH_ENTITY_TYPE_MON |
@@ -824,8 +821,6 @@ out_subscribe_ack:
ceph_msg_put(monc->m_subscribe_ack);
out_auth:
ceph_auth_destroy(monc->auth);
-out_con:
- monc->con->ops->put(monc->con);
out_monmap:
kfree(monc->monmap);
out:
@@ -841,9 +836,7 @@ void ceph_monc_stop(struct ceph_mon_client *monc)
mutex_lock(&monc->mutex);
__close_session(monc);
- monc->con->private = NULL;
- monc->con->ops->put(monc->con);
- monc->con = NULL;
+ monc->con.private = NULL;
mutex_unlock(&monc->mutex);
@@ -1029,7 +1022,7 @@ static void mon_fault(struct ceph_connection *con)
if (!monc->hunting)
pr_info("mon%d %s session lost, "
"hunting for new mon\n", monc->cur_mon,
- ceph_pr_addr(&monc->con->peer_addr.in_addr));
+ ceph_pr_addr(&monc->con.peer_addr.in_addr));
__close_session(monc);
if (!monc->hunting) {
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit ec87ef4309d33bd9c87a53bb5152a86ae7a65f25 upstream.
All references to the embedded ceph_connection come from the msgr
workqueue, which is drained prior to mon_client destruction. That
means we can ignore con refcounting entirely.
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/mon_client.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index ec75996..b59ca7a 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -1037,9 +1037,23 @@ out:
mutex_unlock(&monc->mutex);
}
+/*
+ * We can ignore refcounting on the connection struct, as all references
+ * will come from the messenger workqueue, which is drained prior to
+ * mon_client destruction.
+ */
+static struct ceph_connection *con_get(struct ceph_connection *con)
+{
+ return con;
+}
+
+static void con_put(struct ceph_connection *con)
+{
+}
+
static const struct ceph_connection_operations mon_con_ops = {
- .get = ceph_con_get,
- .put = ceph_con_put,
+ .get = con_get,
+ .put = con_put,
.dispatch = dispatch,
.fault = mon_fault,
.alloc_msg = mon_alloc_msg,
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 1bfd89f4e6e1adc6a782d94aa5d4c53be1e404d7 upstream.
Move the initialization of a ceph connection's private pointer,
operations vector pointer, and peer name information into
ceph_con_init(). Rearrange the arguments so the connection pointer
is first. Hide the byte-swapping of the peer entity number inside
ceph_con_init().
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ceph/mds_client.c | 7 ++-----
include/linux/ceph/messenger.h | 6 ++++--
net/ceph/messenger.c | 9 ++++++++-
net/ceph/mon_client.c | 8 +++-----
net/ceph/osd_client.c | 7 ++-----
5 files changed, 19 insertions(+), 18 deletions(-)
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index ad30261..ecd7f15 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -394,11 +394,8 @@ static struct ceph_mds_session *register_session(struct ceph_mds_client *mdsc,
s->s_seq = 0;
mutex_init(&s->s_mutex);
- ceph_con_init(&mdsc->fsc->client->msgr, &s->s_con);
- s->s_con.private = s;
- s->s_con.ops = &mds_con_ops;
- s->s_con.peer_name.type = CEPH_ENTITY_TYPE_MDS;
- s->s_con.peer_name.num = cpu_to_le64(mds);
+ ceph_con_init(&s->s_con, s, &mds_con_ops, &mdsc->fsc->client->msgr,
+ CEPH_ENTITY_TYPE_MDS, mds);
spin_lock_init(&s->s_gen_ttl_lock);
s->s_cap_gen = 0;
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index 5f30c81..7ed7a87 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -219,8 +219,10 @@ extern void ceph_messenger_init(struct ceph_messenger *msgr,
u32 required_features,
bool nocrc);
-extern void ceph_con_init(struct ceph_messenger *msgr,
- struct ceph_connection *con);
+extern void ceph_con_init(struct ceph_connection *con, void *private,
+ const struct ceph_connection_operations *ops,
+ struct ceph_messenger *msgr, __u8 entity_type,
+ __u64 entity_num);
extern void ceph_con_open(struct ceph_connection *con,
struct ceph_entity_addr *addr);
extern bool ceph_con_opened(struct ceph_connection *con);
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 603d8b5..d8986e8 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -521,15 +521,22 @@ void ceph_con_put(struct ceph_connection *con)
/*
* initialize a new connection.
*/
-void ceph_con_init(struct ceph_messenger *msgr, struct ceph_connection *con)
+void ceph_con_init(struct ceph_connection *con, void *private,
+ const struct ceph_connection_operations *ops,
+ struct ceph_messenger *msgr, __u8 entity_type, __u64 entity_num)
{
dout("con_init %p\n", con);
memset(con, 0, sizeof(*con));
+ con->private = private;
+ con->ops = ops;
atomic_set(&con->nref, 1);
con->msgr = msgr;
con_sock_state_init(con);
+ con->peer_name.type = (__u8) entity_type;
+ con->peer_name.num = cpu_to_le64(entity_num);
+
mutex_init(&con->mutex);
INIT_LIST_HEAD(&con->out_queue);
INIT_LIST_HEAD(&con->out_sent);
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index fec3147..dad0abb 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -142,11 +142,9 @@ static int __open_session(struct ceph_mon_client *monc)
monc->sub_renew_after = jiffies; /* i.e., expired */
monc->want_next_osdmap = !!monc->want_next_osdmap;
- ceph_con_init(&monc->client->msgr, &monc->con);
- monc->con.private = monc;
- monc->con.ops = &mon_con_ops;
- monc->con.peer_name.type = CEPH_ENTITY_TYPE_MON;
- monc->con.peer_name.num = cpu_to_le64(monc->cur_mon);
+ ceph_con_init(&monc->con, monc, &mon_con_ops,
+ &monc->client->msgr,
+ CEPH_ENTITY_TYPE_MON, monc->cur_mon);
dout("open_session mon%d opening\n", monc->cur_mon);
ceph_con_open(&monc->con,
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index cb5b847..01ec0ac 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -640,11 +640,8 @@ static struct ceph_osd *create_osd(struct ceph_osd_client *osdc, int onum)
INIT_LIST_HEAD(&osd->o_osd_lru);
osd->o_incarnation = 1;
- ceph_con_init(&osdc->client->msgr, &osd->o_con);
- osd->o_con.private = osd;
- osd->o_con.ops = &osd_con_ops;
- osd->o_con.peer_name.type = CEPH_ENTITY_TYPE_OSD;
- osd->o_con.peer_name.num = cpu_to_le64(onum);
+ ceph_con_init(&osd->o_con, osd, &osd_con_ops, &osdc->client->msgr,
+ CEPH_ENTITY_TYPE_OSD, onum);
INIT_LIST_HEAD(&osd->o_keepalive_item);
return osd;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 38941f8031bf042dba3ced6394ba3a3b16c244ea upstream.
When a ceph message is queued for sending it is placed on a list of
pending messages (ceph_connection->out_queue). When they are
actually sent over the wire, they are moved from that list to
another (ceph_connection->out_sent). When acknowledgement for the
message is received, it is removed from the sent messages list.
During that entire time the message is "in the possession" of a
single ceph connection. Keep track of that connection in the
message. This will be used in the next patch (and is a helpful
bit of information for debugging anyway).
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/messenger.h | 3 +++
net/ceph/messenger.c | 27 +++++++++++++++++++++++++--
2 files changed, 28 insertions(+), 2 deletions(-)
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index 7ed7a87..7d48ffc 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -77,7 +77,10 @@ struct ceph_msg {
unsigned nr_pages; /* size of page array */
unsigned page_alignment; /* io offset in first page */
struct ceph_pagelist *pagelist; /* instead of pages */
+
+ struct ceph_connection *con;
struct list_head list_head;
+
struct kref kref;
struct bio *bio; /* instead of pages/pagelist */
struct bio *bio_iter; /* bio iterator */
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 9e12806..dc88846 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -414,6 +414,9 @@ static int con_close_socket(struct ceph_connection *con)
static void ceph_msg_remove(struct ceph_msg *msg)
{
list_del_init(&msg->list_head);
+ BUG_ON(msg->con == NULL);
+ msg->con = NULL;
+
ceph_msg_put(msg);
}
static void ceph_msg_remove_list(struct list_head *head)
@@ -433,6 +436,8 @@ static void reset_connection(struct ceph_connection *con)
ceph_msg_remove_list(&con->out_sent);
if (con->in_msg) {
+ BUG_ON(con->in_msg->con != con);
+ con->in_msg->con = NULL;
ceph_msg_put(con->in_msg);
con->in_msg = NULL;
}
@@ -625,8 +630,10 @@ static void prepare_write_message(struct ceph_connection *con)
&con->out_temp_ack);
}
+ BUG_ON(list_empty(&con->out_queue));
m = list_first_entry(&con->out_queue, struct ceph_msg, list_head);
con->out_msg = m;
+ BUG_ON(m->con != con);
/* put message on sent list */
ceph_msg_get(m);
@@ -1810,6 +1817,8 @@ static int read_partial_message(struct ceph_connection *con)
"error allocating memory for incoming message";
return -ENOMEM;
}
+
+ BUG_ON(con->in_msg->con != con);
m = con->in_msg;
m->front.iov_len = 0; /* haven't read it yet */
if (m->middle)
@@ -1905,6 +1914,8 @@ static void process_message(struct ceph_connection *con)
{
struct ceph_msg *msg;
+ BUG_ON(con->in_msg->con != con);
+ con->in_msg->con = NULL;
msg = con->in_msg;
con->in_msg = NULL;
@@ -2264,6 +2275,8 @@ static void ceph_fault(struct ceph_connection *con)
con_close_socket(con);
if (con->in_msg) {
+ BUG_ON(con->in_msg->con != con);
+ con->in_msg->con = NULL;
ceph_msg_put(con->in_msg);
con->in_msg = NULL;
}
@@ -2382,6 +2395,8 @@ void ceph_con_send(struct ceph_connection *con, struct ceph_msg *msg)
/* queue */
mutex_lock(&con->mutex);
+ BUG_ON(msg->con != NULL);
+ msg->con = con;
BUG_ON(!list_empty(&msg->list_head));
list_add_tail(&msg->list_head, &con->out_queue);
dout("----- %p to %s%lld %d=%s len %d+%d+%d -----\n", msg,
@@ -2407,13 +2422,16 @@ void ceph_con_revoke(struct ceph_connection *con, struct ceph_msg *msg)
{
mutex_lock(&con->mutex);
if (!list_empty(&msg->list_head)) {
- dout("con_revoke %p msg %p - was on queue\n", con, msg);
+ dout("%s %p msg %p - was on queue\n", __func__, con, msg);
list_del_init(&msg->list_head);
+ BUG_ON(msg->con == NULL);
+ msg->con = NULL;
+
ceph_msg_put(msg);
msg->hdr.seq = 0;
}
if (con->out_msg == msg) {
- dout("con_revoke %p msg %p - was sending\n", con, msg);
+ dout("%s %p msg %p - was sending\n", __func__, con, msg);
con->out_msg = NULL;
if (con->out_kvec_is_msg) {
con->out_skip = con->out_kvec_bytes;
@@ -2482,6 +2500,8 @@ struct ceph_msg *ceph_msg_new(int type, int front_len, gfp_t flags,
if (m == NULL)
goto out;
kref_init(&m->kref);
+
+ m->con = NULL;
INIT_LIST_HEAD(&m->list_head);
m->hdr.tid = 0;
@@ -2602,6 +2622,8 @@ static bool ceph_con_in_msg_alloc(struct ceph_connection *con,
mutex_unlock(&con->mutex);
con->in_msg = con->ops->alloc_msg(con, hdr, &skip);
mutex_lock(&con->mutex);
+ if (con->in_msg)
+ con->in_msg->con = con;
if (skip)
con->in_msg = NULL;
@@ -2615,6 +2637,7 @@ static bool ceph_con_in_msg_alloc(struct ceph_connection *con,
type, front_len);
return false;
}
+ con->in_msg->con = con;
con->in_msg->page_alignment = le16_to_cpu(hdr->data_off);
}
memcpy(&con->in_msg->hdr, &con->in_hdr, sizeof(con->in_hdr));
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 6740a845b2543cc46e1902ba21bac743fbadd0dc upstream.
ceph_con_revoke() is passed both a message and a ceph connection.
Now that any message associated with a connection holds a pointer
to that connection, there's no need to provide the connection when
revoking a message.
This has the added benefit of precluding the possibility of
providing the wrong connection pointer. If the message's connection
pointer is null, it is not being tracked by any connection, so
revoking it is a no-op. This is supported as a convenience for
upper layers, so they can revoke a message that is not actually
"in flight."
Rename the function ceph_msg_revoke() to reflect that it is really
an operation on a message, not a connection.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/messenger.h | 3 ++-
net/ceph/messenger.c | 7 ++++++-
net/ceph/mon_client.c | 8 ++++----
net/ceph/osd_client.c | 4 ++--
4 files changed, 14 insertions(+), 8 deletions(-)
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index 7d48ffc..13bd9cd 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -231,7 +231,8 @@ extern void ceph_con_open(struct ceph_connection *con,
extern bool ceph_con_opened(struct ceph_connection *con);
extern void ceph_con_close(struct ceph_connection *con);
extern void ceph_con_send(struct ceph_connection *con, struct ceph_msg *msg);
-extern void ceph_con_revoke(struct ceph_connection *con, struct ceph_msg *msg);
+
+extern void ceph_msg_revoke(struct ceph_msg *msg);
extern void ceph_con_revoke_message(struct ceph_connection *con,
struct ceph_msg *msg);
extern void ceph_con_keepalive(struct ceph_connection *con);
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 964a8c3..81d8373 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2425,8 +2425,13 @@ EXPORT_SYMBOL(ceph_con_send);
/*
* Revoke a message that was previously queued for send
*/
-void ceph_con_revoke(struct ceph_connection *con, struct ceph_msg *msg)
+void ceph_msg_revoke(struct ceph_msg *msg)
{
+ struct ceph_connection *con = msg->con;
+
+ if (!con)
+ return; /* Message not in our possession */
+
mutex_lock(&con->mutex);
if (!list_empty(&msg->list_head)) {
dout("%s %p msg %p - was on queue\n", __func__, con, msg);
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index f65111c..e9db3de 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -106,7 +106,7 @@ static void __send_prepared_auth_request(struct ceph_mon_client *monc, int len)
monc->pending_auth = 1;
monc->m_auth->front.iov_len = len;
monc->m_auth->hdr.front_len = cpu_to_le32(len);
- ceph_con_revoke(&monc->con, monc->m_auth);
+ ceph_msg_revoke(monc->m_auth);
ceph_msg_get(monc->m_auth); /* keep our ref */
ceph_con_send(&monc->con, monc->m_auth);
}
@@ -117,7 +117,7 @@ static void __send_prepared_auth_request(struct ceph_mon_client *monc, int len)
static void __close_session(struct ceph_mon_client *monc)
{
dout("__close_session closing mon%d\n", monc->cur_mon);
- ceph_con_revoke(&monc->con, monc->m_auth);
+ ceph_msg_revoke(monc->m_auth);
ceph_con_close(&monc->con);
monc->con.private = NULL;
monc->cur_mon = -1;
@@ -229,7 +229,7 @@ static void __send_subscribe(struct ceph_mon_client *monc)
msg->front.iov_len = p - msg->front.iov_base;
msg->hdr.front_len = cpu_to_le32(msg->front.iov_len);
- ceph_con_revoke(&monc->con, msg);
+ ceph_msg_revoke(msg);
ceph_con_send(&monc->con, ceph_msg_get(msg));
monc->sub_sent = jiffies | 1; /* never 0 */
@@ -688,7 +688,7 @@ static void __resend_generic_request(struct ceph_mon_client *monc)
for (p = rb_first(&monc->generic_request_tree); p; p = rb_next(p)) {
req = rb_entry(p, struct ceph_mon_generic_request, node);
- ceph_con_revoke(&monc->con, req->request);
+ ceph_msg_revoke(req->request);
ceph_con_send(&monc->con, ceph_msg_get(req->request));
}
}
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index d137bf0..d934c85 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -852,7 +852,7 @@ static void __unregister_request(struct ceph_osd_client *osdc,
if (req->r_osd) {
/* make sure the original request isn't in flight. */
- ceph_con_revoke(&req->r_osd->o_con, req->r_request);
+ ceph_msg_revoke(req->r_request);
list_del_init(&req->r_osd_item);
if (list_empty(&req->r_osd->o_requests) &&
@@ -879,7 +879,7 @@ static void __unregister_request(struct ceph_osd_client *osdc,
static void __cancel_request(struct ceph_osd_request *req)
{
if (req->r_sent && req->r_osd) {
- ceph_con_revoke(&req->r_osd->o_con, req->r_request);
+ ceph_msg_revoke(req->r_request);
req->r_sent = 0;
}
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 8921d114f5574c6da2cdd00749d185633ecf88f3 upstream.
ceph_con_revoke_message() is passed both a message and a ceph
connection. A ceph_msg allocated for incoming messages on a
connection always has a pointer to that connection, so there's no
need to provide the connection when revoking such a message.
Note that the existing logic does not preclude the message supplied
being a null/bogus message pointer. The only user of this interface
is the OSD client, and the only value an osd client passes is a
request's r_reply field. That is always non-null (except briefly in
an error path in ceph_osdc_alloc_request(), and that drops the
only reference so the request won't ever have a reply to revoke).
So we can safely assume the passed-in message is non-null, but add a
BUG_ON() to make it very obvious we are imposing this restriction.
Rename the function ceph_msg_revoke_incoming() to reflect that it is
really an operation on an incoming message.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/messenger.h | 4 ++--
net/ceph/messenger.c | 22 ++++++++++++++++------
net/ceph/osd_client.c | 9 ++++-----
3 files changed, 22 insertions(+), 13 deletions(-)
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index 13bd9cd..9c1f755 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -233,8 +233,8 @@ extern void ceph_con_close(struct ceph_connection *con);
extern void ceph_con_send(struct ceph_connection *con, struct ceph_msg *msg);
extern void ceph_msg_revoke(struct ceph_msg *msg);
-extern void ceph_con_revoke_message(struct ceph_connection *con,
- struct ceph_msg *msg);
+extern void ceph_msg_revoke_incoming(struct ceph_msg *msg);
+
extern void ceph_con_keepalive(struct ceph_connection *con);
extern struct ceph_connection *ceph_con_get(struct ceph_connection *con);
extern void ceph_con_put(struct ceph_connection *con);
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 81d8373..ff12d32 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2460,17 +2460,27 @@ void ceph_msg_revoke(struct ceph_msg *msg)
/*
* Revoke a message that we may be reading data into
*/
-void ceph_con_revoke_message(struct ceph_connection *con, struct ceph_msg *msg)
+void ceph_msg_revoke_incoming(struct ceph_msg *msg)
{
+ struct ceph_connection *con;
+
+ BUG_ON(msg == NULL);
+ if (!msg->con) {
+ dout("%s msg %p null con\n", __func__, msg);
+
+ return; /* Message not in our possession */
+ }
+
+ con = msg->con;
mutex_lock(&con->mutex);
- if (con->in_msg && con->in_msg == msg) {
+ if (con->in_msg == msg) {
unsigned int front_len = le32_to_cpu(con->in_hdr.front_len);
unsigned int middle_len = le32_to_cpu(con->in_hdr.middle_len);
unsigned int data_len = le32_to_cpu(con->in_hdr.data_len);
/* skip rest of message */
- dout("con_revoke_pages %p msg %p revoked\n", con, msg);
- con->in_base_pos = con->in_base_pos -
+ dout("%s %p msg %p revoked\n", __func__, con, msg);
+ con->in_base_pos = con->in_base_pos -
sizeof(struct ceph_msg_header) -
front_len -
middle_len -
@@ -2481,8 +2491,8 @@ void ceph_con_revoke_message(struct ceph_connection *con, struct ceph_msg *msg)
con->in_tag = CEPH_MSGR_TAG_READY;
con->in_seq++;
} else {
- dout("con_revoke_pages %p msg %p pages %p no-op\n",
- con, con->in_msg, msg);
+ dout("%s %p in_msg %p msg %p no-op\n",
+ __func__, con, con->in_msg, msg);
}
mutex_unlock(&con->mutex);
}
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index d934c85..db2da54 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -140,10 +140,9 @@ void ceph_osdc_release_request(struct kref *kref)
if (req->r_request)
ceph_msg_put(req->r_request);
if (req->r_con_filling_msg) {
- dout("release_request revoking pages %p from con %p\n",
+ dout("%s revoking pages %p from con %p\n", __func__,
req->r_pages, req->r_con_filling_msg);
- ceph_con_revoke_message(req->r_con_filling_msg,
- req->r_reply);
+ ceph_msg_revoke_incoming(req->r_reply);
req->r_con_filling_msg->ops->put(req->r_con_filling_msg);
}
if (req->r_reply)
@@ -2022,9 +2021,9 @@ static struct ceph_msg *get_reply(struct ceph_connection *con,
}
if (req->r_con_filling_msg) {
- dout("get_reply revoking msg %p from old con %p\n",
+ dout("%s revoking msg %p from old con %p\n", __func__,
req->r_reply, req->r_con_filling_msg);
- ceph_con_revoke_message(req->r_con_filling_msg, req->r_reply);
+ ceph_msg_revoke_incoming(req->r_reply);
req->r_con_filling_msg->ops->put(req->r_con_filling_msg);
req->r_con_filling_msg = NULL;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Xi Wang <[email protected]>
commit e91a9b639a691e0982088b5954eaafb5a25c8f1c upstream.
On 32-bit systems, a large `n' would overflow `n * sizeof(u32)' and bypass
the check ceph_decode_need(p, end, n * sizeof(u32), bad). It would also
overflow the subsequent kmalloc() size, leading to an out-of-bounds write.
Signed-off-by: Xi Wang <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
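[Reviewer note, kept below the fold so it is not part of the commit
message: a minimal userspace sketch of the wraparound the new bound on
n is meant to prevent. The struct and names below are illustrative only
and are not taken from the kernel sources.]

	#include <stdint.h>
	#include <stdio.h>

	/* Illustrative stand-in for the allocated header; the real code
	 * allocates sizeof(*pg) + n * sizeof(u32) for the mapping. */
	struct pg_hdr {
		uint64_t pgid;
		uint32_t len;
	};

	int main(void)
	{
		uint32_t n = UINT32_MAX / 2 + 1;	/* hostile element count */

		/* With 32-bit unsigned arithmetic, n * sizeof(u32) wraps to 0,
		 * so a bounds check on that product is bypassed and the later
		 * allocation of sizeof(*pg) + n * sizeof(u32) is undersized. */
		uint32_t wrapped = n * (uint32_t)sizeof(uint32_t);
		printf("n = %u, n * 4 wraps to %u\n", (unsigned)n, (unsigned)wrapped);

		/* The added check rejects n before any multiplication is done:
		 * n may not exceed (UINT_MAX - header size) / element size. */
		if (n > (UINT32_MAX - sizeof(struct pg_hdr)) / sizeof(uint32_t))
			printf("rejected: n is too large to allocate safely\n");

		return 0;
	}

The first printf shows the 32-bit product wrapping to 0, which is why
the check divides UINT_MAX rather than multiplying n.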
net/ceph/osdmap.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/net/ceph/osdmap.c b/net/ceph/osdmap.c
index 95b2762..bc73341 100644
--- a/net/ceph/osdmap.c
+++ b/net/ceph/osdmap.c
@@ -667,6 +667,9 @@ struct ceph_osdmap *osdmap_decode(void **p, void *end)
ceph_decode_need(p, end, sizeof(u32) + sizeof(u64), bad);
ceph_decode_copy(p, &pgid, sizeof(pgid));
n = ceph_decode_32(p);
+ err = -EINVAL;
+ if (n > (UINT_MAX - sizeof(*pg)) / sizeof(u32))
+ goto bad;
ceph_decode_need(p, end, n * sizeof(u32), bad);
err = -ENOMEM;
pg = kmalloc(sizeof(*pg) + n*sizeof(u32), GFP_NOFS);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 89a86be0ce20022f6ede8bccec078dbb3d63caaa upstream.
Once we call ->connect(), we are racing against the actual
connection, and a subsequent transition from CONNECTING ->
CONNECTED. Set the state to CONNECTING before that, under the
protection of the mutex, to avoid the race.
This was introduced in 928443cd9644e7cfd46f687dbeffda2d1a357ff9,
with the original socket state code.
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index ff12d32..c52d587 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -321,6 +321,7 @@ static int ceph_tcp_connect(struct ceph_connection *con)
dout("connect %s\n", ceph_pr_addr(&con->peer_addr.in_addr));
+ con_sock_state_connecting(con);
ret = sock->ops->connect(sock, (struct sockaddr *)paddr, sizeof(*paddr),
O_NONBLOCK);
if (ret == -EINPROGRESS) {
@@ -336,8 +337,6 @@ static int ceph_tcp_connect(struct ceph_connection *con)
return ret;
}
con->sock = sock;
- con_sock_state_connecting(con);
-
return 0;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Carpenter <[email protected]>
commit 26ce171915f348abd1f41da1ed139d93750d987f upstream.
We dereference "con->in_msg" on the line after it was set to NULL.
Signed-off-by: Dan Carpenter <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index c52d587..0a6fdf8 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -440,7 +440,7 @@ static void reset_connection(struct ceph_connection *con)
con->in_msg->con = NULL;
ceph_msg_put(con->in_msg);
con->in_msg = NULL;
- ceph_con_put(con->in_msg->con);
+ ceph_con_put(con);
}
con->connect_seq = 0;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit d59315ca8c0de00df9b363f94a2641a30961ca1c upstream.
These are no longer used. Every ceph_connection instance is embedded in
another structure, and refcounts are manipulated via the get/put ops.
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/messenger.h | 1 -
net/ceph/messenger.c | 28 +---------------------------
2 files changed, 1 insertion(+), 28 deletions(-)
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index 9c1f755..f624b75 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -135,7 +135,6 @@ struct ceph_msg_pos {
*/
struct ceph_connection {
void *private;
- atomic_t nref;
const struct ceph_connection_operations *ops;
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index ddb710c..7329c8d 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -501,30 +501,6 @@ bool ceph_con_opened(struct ceph_connection *con)
}
/*
- * generic get/put
- */
-struct ceph_connection *ceph_con_get(struct ceph_connection *con)
-{
- int nref = __atomic_add_unless(&con->nref, 1, 0);
-
- dout("con_get %p nref = %d -> %d\n", con, nref, nref + 1);
-
- return nref ? con : NULL;
-}
-
-void ceph_con_put(struct ceph_connection *con)
-{
- int nref = atomic_dec_return(&con->nref);
-
- BUG_ON(nref < 0);
- if (nref == 0) {
- BUG_ON(con->sock);
- kfree(con);
- }
- dout("con_put %p nref = %d -> %d\n", con, nref + 1, nref);
-}
-
-/*
* initialize a new connection.
*/
void ceph_con_init(struct ceph_connection *con, void *private,
@@ -535,7 +511,6 @@ void ceph_con_init(struct ceph_connection *con, void *private,
memset(con, 0, sizeof(*con));
con->private = private;
con->ops = ops;
- atomic_set(&con->nref, 1);
con->msgr = msgr;
con_sock_state_init(con);
@@ -1951,8 +1926,7 @@ static int try_write(struct ceph_connection *con)
{
int ret = 1;
- dout("try_write start %p state %lu nref %d\n", con, con->state,
- atomic_read(&con->nref));
+ dout("try_write start %p state %lu\n", con, con->state);
more:
dout("try_write out_kvec_bytes %d\n", con->out_kvec_bytes);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 739c905baa018c99003564ebc367d93aa44d4861 upstream.
Move the code that prepares to write the data portion of a message
into its own function.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 37 +++++++++++++++++++++++--------------
1 file changed, 23 insertions(+), 14 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 7329c8d..c7efb92 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -565,6 +565,24 @@ static void con_out_kvec_add(struct ceph_connection *con,
con->out_kvec_bytes += size;
}
+static void prepare_write_message_data(struct ceph_connection *con)
+{
+ struct ceph_msg *msg = con->out_msg;
+
+ BUG_ON(!msg);
+ BUG_ON(!msg->hdr.data_len);
+
+ /* initialize page iterator */
+ con->out_msg_pos.page = 0;
+ if (msg->pages)
+ con->out_msg_pos.page_pos = msg->page_alignment;
+ else
+ con->out_msg_pos.page_pos = 0;
+ con->out_msg_pos.data_pos = 0;
+ con->out_msg_pos.did_page_crc = false;
+ con->out_more = 1; /* data + footer will follow */
+}
+
/*
* Prepare footer for currently outgoing message, and finish things
* off. Assumes out_kvec* are already valid.. we just add on to the end.
@@ -657,26 +675,17 @@ static void prepare_write_message(struct ceph_connection *con)
con->out_msg->footer.middle_crc = cpu_to_le32(crc);
} else
con->out_msg->footer.middle_crc = 0;
- con->out_msg->footer.data_crc = 0;
- dout("prepare_write_message front_crc %u data_crc %u\n",
+ dout("%s front_crc %u middle_crc %u\n", __func__,
le32_to_cpu(con->out_msg->footer.front_crc),
le32_to_cpu(con->out_msg->footer.middle_crc));
/* is there a data payload? */
- if (le32_to_cpu(m->hdr.data_len) > 0) {
- /* initialize page iterator */
- con->out_msg_pos.page = 0;
- if (m->pages)
- con->out_msg_pos.page_pos = m->page_alignment;
- else
- con->out_msg_pos.page_pos = 0;
- con->out_msg_pos.data_pos = 0;
- con->out_msg_pos.did_page_crc = false;
- con->out_more = 1; /* data + footer will follow */
- } else {
+ con->out_msg->footer.data_crc = 0;
+ if (m->hdr.data_len)
+ prepare_write_message_data(con);
+ else
/* no, queue up footer too and be done */
prepare_write_message_footer(con);
- }
set_bit(WRITE_PENDING, &con->flags);
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 572c588edadaa3da3992bd8a0fed830bbcc861f8 upstream.
If a message has a non-null bio pointer, its bio_iter field is
initialized in write_partial_msg_pages() if this has not been done
already. This is really a one-time setup operation for sending a
message's (bio) data, so move that initialization code into
prepare_write_message_data() which serves that purpose.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 5e8dbc0d..b83c963 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -603,6 +603,10 @@ static void prepare_write_message_data(struct ceph_connection *con)
con->out_msg_pos.page_pos = msg->page_alignment;
else
con->out_msg_pos.page_pos = 0;
+#ifdef CONFIG_BLOCK
+ if (msg->bio && !msg->bio_iter)
+ init_bio_iter(msg->bio, &msg->bio_iter, &msg->bio_seg);
+#endif
con->out_msg_pos.data_pos = 0;
con->out_msg_pos.did_page_crc = false;
con->out_more = 1; /* data + footer will follow */
@@ -942,11 +946,6 @@ static int write_partial_msg_pages(struct ceph_connection *con)
con, msg, con->out_msg_pos.page, msg->nr_pages,
con->out_msg_pos.page_pos);
-#ifdef CONFIG_BLOCK
- if (msg->bio && !msg->bio_iter)
- init_bio_iter(msg->bio, &msg->bio_iter, &msg->bio_seg);
-#endif
-
while (data_len > con->out_msg_pos.data_pos) {
struct page *page = NULL;
int max_write = PAGE_SIZE;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit a8d00e3cdef4c1c4f194414b72b24cd995439a05 upstream.
The following commit changed it so the SOCK_CLOSED bit was stored in
a connection's new "flags" field rather than its "state" field.
libceph: start separating connection flags from state
commit 928443cd
That bit is used in con_close_socket() to protect against setting an
error message more than once in the socket event handler function.
Unfortunately, the field being operated on in that function was not
updated to be "flags" as it should have been. This fixes that
error.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index d47305a..d0aca62 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -397,11 +397,11 @@ static int con_close_socket(struct ceph_connection *con)
dout("con_close_socket on %p sock %p\n", con, con->sock);
if (!con->sock)
return 0;
- set_bit(SOCK_CLOSED, &con->state);
+ set_bit(SOCK_CLOSED, &con->flags);
rc = con->sock->ops->shutdown(con->sock, SHUT_RDWR);
sock_release(con->sock);
con->sock = NULL;
- clear_bit(SOCK_CLOSED, &con->state);
+ clear_bit(SOCK_CLOSED, &con->flags);
con_sock_state_closed(con);
return rc;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 456ea46865787283088b23a8a7f69244513b95f0 upstream.
In con_close_socket(), a connection's SOCK_CLOSED flag gets set and
then cleared while its shutdown method is called and its reference
gets dropped.
Previously, that flag got set only if it had not already been set,
so setting it in con_close_socket() might have prevented additional
processing being done on a socket being shut down. We no longer set
SOCK_CLOSED in the socket event routine conditionally, so setting
that bit here no longer provides whatever benefit it might have
provided before.
A race condition could still leave the SOCK_CLOSED bit set even
after we've issued the call to con_close_socket(), so we still clear
that bit after shutting the socket down. Add a comment explaining
the reason for this.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index cd1aaa8..dfff350 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -392,10 +392,16 @@ static int con_close_socket(struct ceph_connection *con)
dout("con_close_socket on %p sock %p\n", con, con->sock);
if (!con->sock)
return 0;
- set_bit(SOCK_CLOSED, &con->flags);
rc = con->sock->ops->shutdown(con->sock, SHUT_RDWR);
sock_release(con->sock);
con->sock = NULL;
+
+ /*
+ * Forcibly clear the SOCK_CLOSE flag. It gets set
+ * independent of the connection mutex, and we could have
+ * received a socket close event before we had the chance to
+ * shut the socket down.
+ */
clear_bit(SOCK_CLOSED, &con->flags);
con_sock_state_closed(con);
return rc;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 3ec50d1868a9e0493046400bb1fdd054c7f64ebd upstream.
A connection state's NEGOTIATING bit gets set while in CONNECTING
state after we have successfully exchanged a ceph banner and IP
addresses with the connection's peer (the server). But that bit
is not cleared again--at least not until another connection attempt
is initiated.
Instead, clear it as soon as the connection is fully established.
Also, clear it when a socket connection gets prematurely closed
in the midst of establishing a ceph connection (in case we had
reached the point where it was set).
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 5e8033f..9e586ea 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1562,6 +1562,7 @@ static int process_connect(struct ceph_connection *con)
fail_protocol(con);
return -1;
}
+ clear_bit(NEGOTIATING, &con->state);
clear_bit(CONNECTING, &con->state);
con->peer_global_seq = le32_to_cpu(con->in_reply.global_seq);
con->connect_seq++;
@@ -1951,7 +1952,6 @@ more:
/* open the socket first? */
if (con->sock == NULL) {
- clear_bit(NEGOTIATING, &con->state);
set_bit(CONNECTING, &con->state);
con_out_kvec_reset(con);
@@ -2190,10 +2190,12 @@ static void con_work(struct work_struct *work)
mutex_lock(&con->mutex);
restart:
if (test_and_clear_bit(SOCK_CLOSED, &con->flags)) {
- if (test_and_clear_bit(CONNECTING, &con->state))
+ if (test_and_clear_bit(CONNECTING, &con->state)) {
+ clear_bit(NEGOTIATING, &con->state);
con->error_msg = "connection failed";
- else
+ } else {
con->error_msg = "socket closed";
+ }
goto fault;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit bb9e6bba5d8b85b631390f8dbe8a24ae1ff5b48a upstream.
A connection that is closed will no longer be connecting. So
clear the CONNECTING state bit in ceph_con_close(). Similarly,
if the socket has been closed we no longer are in connecting
state (a new connect sequence will need to be initiated).
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index dfff350..5e8033f 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -462,6 +462,7 @@ void ceph_con_close(struct ceph_connection *con)
dout("con_close %p peer %s\n", con,
ceph_pr_addr(&con->peer_addr.in_addr));
clear_bit(NEGOTIATING, &con->state);
+ clear_bit(CONNECTING, &con->state);
clear_bit(STANDBY, &con->state); /* avoid connect_seq bump */
set_bit(CLOSED, &con->state);
@@ -2189,7 +2190,7 @@ static void con_work(struct work_struct *work)
mutex_lock(&con->mutex);
restart:
if (test_and_clear_bit(SOCK_CLOSED, &con->flags)) {
- if (test_bit(CONNECTING, &con->state))
+ if (test_and_clear_bit(CONNECTING, &con->state))
con->error_msg = "connection failed";
else
con->error_msg = "socket closed";
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 7593af920baac37752190a0db703d2732bed4a3b upstream.
Currently a ceph connection enters a "CONNECTING" state when it
begins the process of (re-)connecting with its peer. Once the two
ends have successfully exchanged their banner and addresses, an
additional NEGOTIATING bit is set in the ceph connection's state to
indicate the connection information exchange has begun. The
CONNECTING bit/state continues to be set during this phase.
Rather than have the CONNECTING state continue while the NEGOTIATING
bit is set, interpret these two phases as distinct states. In other
words, when NEGOTIATING is set, clear CONNECTING. That way only
one of them will be active at a time.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 52 +++++++++++++++++++++++++++-----------------------
1 file changed, 28 insertions(+), 24 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 86294c1..fe340b2 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1559,7 +1559,6 @@ static int process_connect(struct ceph_connection *con)
return -1;
}
clear_bit(NEGOTIATING, &con->state);
- clear_bit(CONNECTING, &con->state);
set_bit(CONNECTED, &con->state);
con->peer_global_seq = le32_to_cpu(con->in_reply.global_seq);
con->connect_seq++;
@@ -2000,7 +1999,8 @@ more_kvec:
}
do_next:
- if (!test_bit(CONNECTING, &con->state)) {
+ if (!test_bit(CONNECTING, &con->state) &&
+ !test_bit(NEGOTIATING, &con->state)) {
/* is anything else pending? */
if (!list_empty(&con->out_queue)) {
prepare_write_message(con);
@@ -2057,25 +2057,29 @@ more:
}
if (test_bit(CONNECTING, &con->state)) {
- if (!test_bit(NEGOTIATING, &con->state)) {
- dout("try_read connecting\n");
- ret = read_partial_banner(con);
- if (ret <= 0)
- goto out;
- ret = process_banner(con);
- if (ret < 0)
- goto out;
-
- /* Banner is good, exchange connection info */
- ret = prepare_write_connect(con);
- if (ret < 0)
- goto out;
- prepare_read_connect(con);
- set_bit(NEGOTIATING, &con->state);
-
- /* Send connection info before awaiting response */
+ dout("try_read connecting\n");
+ ret = read_partial_banner(con);
+ if (ret <= 0)
goto out;
- }
+ ret = process_banner(con);
+ if (ret < 0)
+ goto out;
+
+ clear_bit(CONNECTING, &con->state);
+ set_bit(NEGOTIATING, &con->state);
+
+ /* Banner is good, exchange connection info */
+ ret = prepare_write_connect(con);
+ if (ret < 0)
+ goto out;
+ prepare_read_connect(con);
+
+ /* Send connection info before awaiting response */
+ goto out;
+ }
+
+ if (test_bit(NEGOTIATING, &con->state)) {
+ dout("try_read negotiating\n");
ret = read_partial_connect(con);
if (ret <= 0)
goto out;
@@ -2197,12 +2201,12 @@ restart:
if (test_and_clear_bit(SOCK_CLOSED, &con->flags)) {
if (test_and_clear_bit(CONNECTED, &con->state))
con->error_msg = "socket closed";
- else if (test_and_clear_bit(CONNECTING, &con->state)) {
- clear_bit(NEGOTIATING, &con->state);
+ else if (test_and_clear_bit(NEGOTIATING, &con->state))
+ con->error_msg = "negotiation failed";
+ else if (test_and_clear_bit(CONNECTING, &con->state))
con->error_msg = "connection failed";
- } else {
+ else
con->error_msg = "unrecognized con state";
- }
goto fault;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit ab166d5aa3bc036fba7efaca6e4e43a7e9510acf upstream.
There are two phases in the process of linking together the two ends
of a ceph connection. The first involves exchanging a banner and
IP addresses, and if that is successful a second phase exchanges
some detail about each side's connection capabilities.
When initiating a connection, the client side now queues to send
its information for both phases of this process at the same time.
This is probably a bit more efficient, but it is slightly messier
from a layering perspective in the code.
So rearrange things so that the client doesn't send the connection
information until it has received and processed the response in the
initial banner phase (in process_banner()).
Move the code (in the (con->sock == NULL) case in try_write()) that
prepares for writing the connection information, delaying doing that
until the banner exchange has completed. Move the code that begins
the transition to this second "NEGOTIATING" phase out of
process_banner() and into its caller, so preparing to write the
connection information and preparing to read the response are
adjacent to each other.
Finally, preparing to write the connection information now requires
the output kvec to be reset in all cases, so move that into
prepare_write_connect() and delete it from all callers.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index dc95437..86294c1 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -841,6 +841,7 @@ static int prepare_write_connect(struct ceph_connection *con)
con->out_connect.authorizer_len = auth ?
cpu_to_le32(auth->authorizer_buf_len) : 0;
+ con_out_kvec_reset(con);
con_out_kvec_add(con, sizeof (con->out_connect),
&con->out_connect);
if (auth && auth->authorizer_buf_len)
@@ -1430,8 +1431,6 @@ static int process_banner(struct ceph_connection *con)
ceph_pr_addr(&con->msgr->inst.addr.in_addr));
}
- set_bit(NEGOTIATING, &con->state);
- prepare_read_connect(con);
return 0;
}
@@ -1481,7 +1480,6 @@ static int process_connect(struct ceph_connection *con)
return -1;
}
con->auth_retry = 1;
- con_out_kvec_reset(con);
ret = prepare_write_connect(con);
if (ret < 0)
return ret;
@@ -1502,7 +1500,6 @@ static int process_connect(struct ceph_connection *con)
ENTITY_NAME(con->peer_name),
ceph_pr_addr(&con->peer_addr.in_addr));
reset_connection(con);
- con_out_kvec_reset(con);
ret = prepare_write_connect(con);
if (ret < 0)
return ret;
@@ -1528,7 +1525,6 @@ static int process_connect(struct ceph_connection *con)
le32_to_cpu(con->out_connect.connect_seq),
le32_to_cpu(con->in_reply.connect_seq));
con->connect_seq = le32_to_cpu(con->in_reply.connect_seq);
- con_out_kvec_reset(con);
ret = prepare_write_connect(con);
if (ret < 0)
return ret;
@@ -1545,7 +1541,6 @@ static int process_connect(struct ceph_connection *con)
le32_to_cpu(con->in_reply.global_seq));
get_global_seq(con->msgr,
le32_to_cpu(con->in_reply.global_seq));
- con_out_kvec_reset(con);
ret = prepare_write_connect(con);
if (ret < 0)
return ret;
@@ -1958,9 +1953,6 @@ more:
con_out_kvec_reset(con);
prepare_write_banner(con);
- ret = prepare_write_connect(con);
- if (ret < 0)
- goto out;
prepare_read_banner(con);
BUG_ON(con->in_msg);
@@ -2073,6 +2065,16 @@ more:
ret = process_banner(con);
if (ret < 0)
goto out;
+
+ /* Banner is good, exchange connection info */
+ ret = prepare_write_connect(con);
+ if (ret < 0)
+ goto out;
+ prepare_read_connect(con);
+ set_bit(NEGOTIATING, &con->state);
+
+ /* Send connection info before awaiting response */
+ goto out;
}
ret = read_partial_connect(con);
if (ret <= 0)
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 5821bd8ccdf5d17ab2c391c773756538603838c3 upstream.
This patch gathers a few small changes in "net/ceph/messenger.c":
out_msg_pos_next()
- small logic change that mostly affects indentation
write_partial_msg_pages().
- use a local variable trail_off to represent the offset into
a message of the trail portion of the data (if present)
- once we are in the trail portion we will always be there, so we
don't always need to check against our data position
- avoid computing len twice after we've reached the trail
- get rid of the variable tmpcrc, which is not needed
- trail_off and trail_len never change so mark them const
- update some comments
read_partial_message_bio()
- bio_iovec_idx() will never return an error, so don't bother
checking for it
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 63 +++++++++++++++++++++++++-------------------------
1 file changed, 31 insertions(+), 32 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index fe340b2..64db2c2 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -907,21 +907,23 @@ static void out_msg_pos_next(struct ceph_connection *con, struct page *page,
con->out_msg_pos.data_pos += sent;
con->out_msg_pos.page_pos += sent;
- if (sent == len) {
- con->out_msg_pos.page_pos = 0;
- con->out_msg_pos.page++;
- con->out_msg_pos.did_page_crc = false;
- if (in_trail)
- list_move_tail(&page->lru,
- &msg->trail->head);
- else if (msg->pagelist)
- list_move_tail(&page->lru,
- &msg->pagelist->head);
+ if (sent < len)
+ return;
+
+ BUG_ON(sent != len);
+ con->out_msg_pos.page_pos = 0;
+ con->out_msg_pos.page++;
+ con->out_msg_pos.did_page_crc = false;
+ if (in_trail)
+ list_move_tail(&page->lru,
+ &msg->trail->head);
+ else if (msg->pagelist)
+ list_move_tail(&page->lru,
+ &msg->pagelist->head);
#ifdef CONFIG_BLOCK
- else if (msg->bio)
- iter_bio_next(&msg->bio_iter, &msg->bio_seg);
+ else if (msg->bio)
+ iter_bio_next(&msg->bio_iter, &msg->bio_seg);
#endif
- }
}
/*
@@ -940,30 +942,31 @@ static int write_partial_msg_pages(struct ceph_connection *con)
int ret;
int total_max_write;
bool in_trail = false;
- size_t trail_len = (msg->trail ? msg->trail->length : 0);
+ const size_t trail_len = (msg->trail ? msg->trail->length : 0);
+ const size_t trail_off = data_len - trail_len;
dout("write_partial_msg_pages %p msg %p page %d/%d offset %d\n",
con, msg, con->out_msg_pos.page, msg->nr_pages,
con->out_msg_pos.page_pos);
+ /*
+ * Iterate through each page that contains data to be
+ * written, and send as much as possible for each.
+ *
+ * If we are calculating the data crc (the default), we will
+ * need to map the page. If we have no pages, they have
+ * been revoked, so use the zero page.
+ */
while (data_len > con->out_msg_pos.data_pos) {
struct page *page = NULL;
int max_write = PAGE_SIZE;
int bio_offset = 0;
- total_max_write = data_len - trail_len -
- con->out_msg_pos.data_pos;
-
- /*
- * if we are calculating the data crc (the default), we need
- * to map the page. if our pages[] has been revoked, use the
- * zero page.
- */
-
- /* have we reached the trail part of the data? */
- if (con->out_msg_pos.data_pos >= data_len - trail_len) {
- in_trail = true;
+ in_trail = in_trail || con->out_msg_pos.data_pos >= trail_off;
+ if (!in_trail)
+ total_max_write = trail_off - con->out_msg_pos.data_pos;
+ if (in_trail) {
total_max_write = data_len - con->out_msg_pos.data_pos;
page = list_first_entry(&msg->trail->head,
@@ -990,14 +993,13 @@ static int write_partial_msg_pages(struct ceph_connection *con)
if (do_datacrc && !con->out_msg_pos.did_page_crc) {
void *base;
- u32 crc;
- u32 tmpcrc = le32_to_cpu(msg->footer.data_crc);
+ u32 crc = le32_to_cpu(msg->footer.data_crc);
char *kaddr;
kaddr = kmap(page);
BUG_ON(kaddr == NULL);
base = kaddr + con->out_msg_pos.page_pos + bio_offset;
- crc = crc32c(tmpcrc, base, len);
+ crc = crc32c(crc, base, len);
msg->footer.data_crc = cpu_to_le32(crc);
con->out_msg_pos.did_page_crc = true;
}
@@ -1702,9 +1704,6 @@ static int read_partial_message_bio(struct ceph_connection *con,
void *p;
int ret, left;
- if (IS_ERR(bv))
- return PTR_ERR(bv);
-
left = min((int)(data_len - con->in_msg_pos.data_pos),
(int)(bv->bv_len - con->in_msg_pos.page_pos));
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit fbb85a478f6d4cce6942f1c25c6a68ec5b1e7e7f upstream.
It is possible to close a socket that is in the OPENING state. For
example, it can happen if ceph_con_close() is called on the con before
the TCP connection is established. con_work() will come around and shut
down the socket.
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 25 +++++++++++++------------
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 5adf786..16814d1 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -48,17 +48,17 @@
* | ----------------------
* | \
* + con_sock_state_closed() \
- * |\ \
- * | \ \
- * | ----------- \
- * | | CLOSING | socket event; \
- * | ----------- await close \
- * | ^ |
- * | | |
- * | + con_sock_state_closing() |
- * | / \ |
- * | / --------------- |
- * | / \ v
+ * |+--------------------------- \
+ * | \ \ \
+ * | ----------- \ \
+ * | | CLOSING | socket event; \ \
+ * | ----------- await close \ \
+ * | ^ \ |
+ * | | \ |
+ * | + con_sock_state_closing() \ |
+ * | / \ | |
+ * | / --------------- | |
+ * | / \ v v
* | / --------------
* | / -----------------| CONNECTING | socket created, TCP
* | | / -------------- connect initiated
@@ -241,7 +241,8 @@ static void con_sock_state_closed(struct ceph_connection *con)
old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CLOSED);
if (WARN_ON(old_state != CON_SOCK_STATE_CONNECTED &&
- old_state != CON_SOCK_STATE_CLOSING))
+ old_state != CON_SOCK_STATE_CLOSING &&
+ old_state != CON_SOCK_STATE_CONNECTING))
printk("%s: unexpected old state %d\n", __func__, old_state);
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Guanjun He <[email protected]>
commit a2a3258417eb6a1799cf893350771428875a8287 upstream.
Add an atomic variable 'stopping' as a flag in struct ceph_messenger,
set this flag to 1 in ceph_destroy_client(), and add a check of the flag
in ceph_data_ready(): if it is set (1), just return.
Signed-off-by: Guanjun He <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/messenger.h | 1 +
net/ceph/ceph_common.c | 2 ++
net/ceph/messenger.c | 5 +++++
3 files changed, 8 insertions(+)
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index ec22abd..de39cda 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -50,6 +50,7 @@ struct ceph_messenger {
struct ceph_entity_inst inst; /* my name+address */
struct ceph_entity_addr my_enc_addr;
+ atomic_t stopping;
bool nocrc;
/*
diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c
index 58b09ef..3b45e01 100644
--- a/net/ceph/ceph_common.c
+++ b/net/ceph/ceph_common.c
@@ -495,6 +495,8 @@ void ceph_destroy_client(struct ceph_client *client)
{
dout("destroy_client %p\n", client);
+ atomic_set(&client->msgr.stopping, 1);
+
/* unmount */
ceph_osdc_stop(&client->osdc);
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 16814d1..63e1252 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -254,6 +254,9 @@ static void con_sock_state_closed(struct ceph_connection *con)
static void ceph_sock_data_ready(struct sock *sk, int count_unused)
{
struct ceph_connection *con = sk->sk_user_data;
+ if (atomic_read(&con->msgr->stopping)) {
+ return;
+ }
if (sk->sk_state != TCP_CLOSE_WAIT) {
dout("%s on %p state = %lu, queueing work\n", __func__,
@@ -2413,6 +2416,8 @@ void ceph_messenger_init(struct ceph_messenger *msgr,
encode_my_addr(msgr);
msgr->nocrc = nocrc;
+ atomic_set(&msgr->stopping, 0);
+
dout("%s %p\n", __func__, msgr);
}
EXPORT_SYMBOL(ceph_messenger_init);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 8c50c817566dfa4581f82373aac39f3e608a7dc8 upstream.
Hold the mutex while twiddling all of the state bits to avoid possible
races. While we're here, make note of why we cannot close the socket
directly.
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Reviewed-by: Yehuda Sadeh <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 6e2f678..e65b15d 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -503,6 +503,7 @@ static void reset_connection(struct ceph_connection *con)
*/
void ceph_con_close(struct ceph_connection *con)
{
+ mutex_lock(&con->mutex);
dout("con_close %p peer %s\n", con,
ceph_pr_addr(&con->peer_addr.in_addr));
clear_bit(NEGOTIATING, &con->state);
@@ -515,11 +516,16 @@ void ceph_con_close(struct ceph_connection *con)
clear_bit(KEEPALIVE_PENDING, &con->flags);
clear_bit(WRITE_PENDING, &con->flags);
- mutex_lock(&con->mutex);
reset_connection(con);
con->peer_global_seq = 0;
cancel_delayed_work(&con->work);
mutex_unlock(&con->mutex);
+
+ /*
+ * We cannot close the socket directly from here because the
+ * work threads use it without holding the mutex. Instead, let
+ * con_work() do it.
+ */
queue_con(con);
}
EXPORT_SYMBOL(ceph_con_close);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 6194ea895e447fdf4adfd23f67873a32bf4f15ae upstream.
The linger op registration (i.e., watch) modifies the object state. As
such, the OSD will reply with success if it has already applied without
doing the associated side-effects (setting up the watch session state).
If we lose the ACK and resubmit, we will see success but the watch will not
be correctly registered and we won't get notifies.
To fix this, always resubmit the linger op with a new tid. We accomplish
this by re-registering as a linger (i.e., 'registered') if we are not yet
registered. Then the second loop will treat this just like a normal
case of re-registering.
This mirrors a similar fix on the userland ceph.git, commit 5dd68b95, and
ceph bug #2796.
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Reviewed-by: Yehuda Sadeh <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/osd_client.c | 26 +++++++++++++++++++++-----
1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index 4475d17..752b498 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -888,7 +888,9 @@ static void __register_linger_request(struct ceph_osd_client *osdc,
{
dout("__register_linger_request %p\n", req);
list_add_tail(&req->r_linger_item, &osdc->req_linger);
- list_add_tail(&req->r_linger_osd, &req->r_osd->o_linger_requests);
+ if (req->r_osd)
+ list_add_tail(&req->r_linger_osd,
+ &req->r_osd->o_linger_requests);
}
static void __unregister_linger_request(struct ceph_osd_client *osdc,
@@ -1302,8 +1304,9 @@ static void kick_requests(struct ceph_osd_client *osdc, int force_resend)
dout("kick_requests %s\n", force_resend ? " (force resend)" : "");
mutex_lock(&osdc->request_mutex);
- for (p = rb_first(&osdc->requests); p; p = rb_next(p)) {
+ for (p = rb_first(&osdc->requests); p; ) {
req = rb_entry(p, struct ceph_osd_request, r_node);
+ p = rb_next(p);
err = __map_request(osdc, req, force_resend);
if (err < 0)
continue; /* error */
@@ -1311,10 +1314,23 @@ static void kick_requests(struct ceph_osd_client *osdc, int force_resend)
dout("%p tid %llu maps to no osd\n", req, req->r_tid);
needmap++; /* request a newer map */
} else if (err > 0) {
- dout("%p tid %llu requeued on osd%d\n", req, req->r_tid,
- req->r_osd ? req->r_osd->o_osd : -1);
- if (!req->r_linger)
+ if (!req->r_linger) {
+ dout("%p tid %llu requeued on osd%d\n", req,
+ req->r_tid,
+ req->r_osd ? req->r_osd->o_osd : -1);
req->r_flags |= CEPH_OSD_FLAG_RETRY;
+ }
+ }
+ if (req->r_linger && list_empty(&req->r_linger_item)) {
+ /*
+ * register as a linger so that we will
+ * re-submit below and get a new tid
+ */
+ dout("%p tid %llu restart on osd%d\n",
+ req, req->r_tid,
+ req->r_osd ? req->r_osd->o_osd : -1);
+ __register_linger_request(osdc, req);
+ __unregister_request(osdc, req);
}
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 3b5ede07b55b52c3be27749d183d87257d032065 upstream.
If we fault on a lossy connection, we should still close the socket
immediately, and do so under the con mutex.
We should also take the con mutex before printing out the state bits in
the debug output.
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 07204f1..9aaf539 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2330,22 +2330,23 @@ fault:
*/
static void ceph_fault(struct ceph_connection *con)
{
+ mutex_lock(&con->mutex);
+
pr_err("%s%lld %s %s\n", ENTITY_NAME(con->peer_name),
ceph_pr_addr(&con->peer_addr.in_addr), con->error_msg);
dout("fault %p state %lu to peer %s\n",
con, con->state, ceph_pr_addr(&con->peer_addr.in_addr));
- if (test_bit(LOSSYTX, &con->flags)) {
- dout("fault on LOSSYTX channel\n");
- goto out;
- }
-
- mutex_lock(&con->mutex);
if (test_bit(CLOSED, &con->state))
goto out_unlock;
con_close_socket(con);
+ if (test_bit(LOSSYTX, &con->flags)) {
+ dout("fault on LOSSYTX channel\n");
+ goto out_unlock;
+ }
+
if (con->in_msg) {
BUG_ON(con->in_msg->con != con);
con->in_msg->con = NULL;
@@ -2392,7 +2393,6 @@ static void ceph_fault(struct ceph_connection *con)
out_unlock:
mutex_unlock(&con->mutex);
-out:
/*
* in case we faulted due to authentication, invalidate our
* current tickets so that we can get new ones.
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 00650931e52e97fe64096bec167f5a6780dfd94a upstream.
Avoid dropping and retaking con->mutex in the ceph_con_send() case by
leaving locking up to the caller.
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 9aaf539..1a3cb4a 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2441,12 +2441,10 @@ static void clear_standby(struct ceph_connection *con)
{
/* come back from STANDBY? */
if (test_and_clear_bit(STANDBY, &con->state)) {
- mutex_lock(&con->mutex);
dout("clear_standby %p and ++connect_seq\n", con);
con->connect_seq++;
WARN_ON(test_bit(WRITE_PENDING, &con->flags));
WARN_ON(test_bit(KEEPALIVE_PENDING, &con->flags));
- mutex_unlock(&con->mutex);
}
}
@@ -2483,11 +2481,12 @@ void ceph_con_send(struct ceph_connection *con, struct ceph_msg *msg)
le32_to_cpu(msg->hdr.front_len),
le32_to_cpu(msg->hdr.middle_len),
le32_to_cpu(msg->hdr.data_len));
+
+ clear_standby(con);
mutex_unlock(&con->mutex);
/* if there wasn't anything waiting to send before, queue
* new work */
- clear_standby(con);
if (test_and_set_bit(WRITE_PENDING, &con->flags) == 0)
queue_con(con);
}
@@ -2574,7 +2573,9 @@ void ceph_msg_revoke_incoming(struct ceph_msg *msg)
void ceph_con_keepalive(struct ceph_connection *con)
{
dout("con_keepalive %p\n", con);
+ mutex_lock(&con->mutex);
clear_standby(con);
+ mutex_unlock(&con->mutex);
if (test_and_set_bit(KEEPALIVE_PENDING, &con->flags) == 0 &&
test_and_set_bit(WRITE_PENDING, &con->flags) == 0)
queue_con(con);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 2e8cb10063820af7ed7638e3fd9013eee21266e7 upstream.
If the state is CLOSED or OPENING, we shouldn't have a socket.
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 20e60a8..32ab7cd 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2284,15 +2284,15 @@ restart:
dout("con_work %p STANDBY\n", con);
goto done;
}
- if (test_bit(CLOSED, &con->state)) { /* e.g. if we are replaced */
- dout("con_work CLOSED\n");
- con_close_socket(con);
+ if (test_bit(CLOSED, &con->state)) {
+ dout("con_work %p CLOSED\n", con);
+ BUG_ON(con->sock);
goto done;
}
if (test_and_clear_bit(OPENING, &con->state)) {
/* reopen w/ new peer */
dout("con_work OPENING\n");
- con_close_socket(con);
+ BUG_ON(con->sock);
}
ret = try_read(con);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit ee76e0736db8455e3b11827d6899bd2a4e1d0584 upstream.
It is simpler to do this immediately, since we already hold the con mutex.
It also avoids the need to deal with a not-quite-CLOSED socket in con_work.
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 32ab7cd..46ce113 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -519,14 +519,8 @@ void ceph_con_close(struct ceph_connection *con)
reset_connection(con);
con->peer_global_seq = 0;
cancel_delayed_work(&con->work);
+ con_close_socket(con);
mutex_unlock(&con->mutex);
-
- /*
- * We cannot close the socket directly from here because the
- * work threads use it without holding the mutex. Instead, let
- * con_work() do it.
- */
- queue_con(con);
}
EXPORT_SYMBOL(ceph_con_close);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit d7353dd5aaf22ed611fbcd0d4a4a12fb30659290 upstream.
If we are CLOSED, the socket is closed and we won't get these.
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 46ce113..e7320cd 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -296,9 +296,6 @@ static void ceph_sock_state_change(struct sock *sk)
dout("%s %p state = %lu sk_state = %u\n", __func__,
con, con->state, sk->sk_state);
- if (test_bit(CLOSED, &con->state))
- return;
-
switch (sk->sk_state) {
case TCP_CLOSE:
dout("%s TCP_CLOSE\n", __func__);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 4a8616920860920abaa51193146fe36b38ef09aa upstream.
Rename the flags with a CON_FLAG prefix, move the definitions into the .c file,
and (better) document their meaning.
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/messenger.h | 10 -------
net/ceph/messenger.c | 62 +++++++++++++++++++++++-----------------
2 files changed, 36 insertions(+), 36 deletions(-)
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index dc684f6..9844241 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -107,16 +107,6 @@ struct ceph_msg_pos {
#define MAX_DELAY_INTERVAL (5 * 60 * HZ)
/*
- * ceph_connection flag bits
- */
-
-#define LOSSYTX 0 /* we can close channel or drop messages on errors */
-#define KEEPALIVE_PENDING 3
-#define WRITE_PENDING 4 /* we have data ready to send */
-#define SOCK_CLOSED 11 /* socket state changed to closed */
-#define BACKOFF 15
-
-/*
* A single connection with another host.
*
* We maintain a queue of outgoing messages, and some session state to
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 563e46a..b872db5 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -87,6 +87,15 @@
#define CON_STATE_OPEN 5 /* -> STANDBY, CLOSED */
#define CON_STATE_STANDBY 6 /* -> PREOPEN, CLOSED */
+/*
+ * ceph_connection flag bits
+ */
+#define CON_FLAG_LOSSYTX 0 /* we can close channel or drop
+ * messages on errors */
+#define CON_FLAG_KEEPALIVE_PENDING 1 /* we need to send a keepalive */
+#define CON_FLAG_WRITE_PENDING 2 /* we have data ready to send */
+#define CON_FLAG_SOCK_CLOSED 3 /* socket state changed to closed */
+#define CON_FLAG_BACKOFF 4 /* need to retry queuing delayed work */
/* static tag bytes (protocol control messages) */
static char tag_msg = CEPH_MSGR_TAG_MSG;
@@ -288,7 +297,7 @@ static void ceph_sock_write_space(struct sock *sk)
* buffer. See net/ipv4/tcp_input.c:tcp_check_space()
* and net/core/stream.c:sk_stream_write_space().
*/
- if (test_bit(WRITE_PENDING, &con->flags)) {
+ if (test_bit(CON_FLAG_WRITE_PENDING, &con->flags)) {
if (sk_stream_wspace(sk) >= sk_stream_min_wspace(sk)) {
dout("%s %p queueing write work\n", __func__, con);
clear_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
@@ -313,7 +322,7 @@ static void ceph_sock_state_change(struct sock *sk)
case TCP_CLOSE_WAIT:
dout("%s TCP_CLOSE_WAIT\n", __func__);
con_sock_state_closing(con);
- set_bit(SOCK_CLOSED, &con->flags);
+ set_bit(CON_FLAG_SOCK_CLOSED, &con->flags);
queue_con(con);
break;
case TCP_ESTABLISHED:
@@ -449,12 +458,12 @@ static int con_close_socket(struct ceph_connection *con)
con->sock = NULL;
/*
- * Forcibly clear the SOCK_CLOSE flag. It gets set
+ * Forcibly clear the SOCK_CLOSED flag. It gets set
* independent of the connection mutex, and we could have
* received a socket close event before we had the chance to
* shut the socket down.
*/
- clear_bit(SOCK_CLOSED, &con->flags);
+ clear_bit(CON_FLAG_SOCK_CLOSED, &con->flags);
con_sock_state_closed(con);
return rc;
}
@@ -516,9 +525,9 @@ void ceph_con_close(struct ceph_connection *con)
ceph_pr_addr(&con->peer_addr.in_addr));
con->state = CON_STATE_CLOSED;
- clear_bit(LOSSYTX, &con->flags); /* so we retry next connect */
- clear_bit(KEEPALIVE_PENDING, &con->flags);
- clear_bit(WRITE_PENDING, &con->flags);
+ clear_bit(CON_FLAG_LOSSYTX, &con->flags); /* so we retry next connect */
+ clear_bit(CON_FLAG_KEEPALIVE_PENDING, &con->flags);
+ clear_bit(CON_FLAG_WRITE_PENDING, &con->flags);
reset_connection(con);
con->peer_global_seq = 0;
@@ -770,7 +779,7 @@ static void prepare_write_message(struct ceph_connection *con)
/* no, queue up footer too and be done */
prepare_write_message_footer(con);
- set_bit(WRITE_PENDING, &con->flags);
+ set_bit(CON_FLAG_WRITE_PENDING, &con->flags);
}
/*
@@ -791,7 +800,7 @@ static void prepare_write_ack(struct ceph_connection *con)
&con->out_temp_ack);
con->out_more = 1; /* more will follow.. eventually.. */
- set_bit(WRITE_PENDING, &con->flags);
+ set_bit(CON_FLAG_WRITE_PENDING, &con->flags);
}
/*
@@ -802,7 +811,7 @@ static void prepare_write_keepalive(struct ceph_connection *con)
dout("prepare_write_keepalive %p\n", con);
con_out_kvec_reset(con);
con_out_kvec_add(con, sizeof (tag_keepalive), &tag_keepalive);
- set_bit(WRITE_PENDING, &con->flags);
+ set_bit(CON_FLAG_WRITE_PENDING, &con->flags);
}
/*
@@ -845,7 +854,7 @@ static void prepare_write_banner(struct ceph_connection *con)
&con->msgr->my_enc_addr);
con->out_more = 0;
- set_bit(WRITE_PENDING, &con->flags);
+ set_bit(CON_FLAG_WRITE_PENDING, &con->flags);
}
static int prepare_write_connect(struct ceph_connection *con)
@@ -896,7 +905,7 @@ static int prepare_write_connect(struct ceph_connection *con)
auth->authorizer_buf);
con->out_more = 0;
- set_bit(WRITE_PENDING, &con->flags);
+ set_bit(CON_FLAG_WRITE_PENDING, &con->flags);
return 0;
}
@@ -1622,7 +1631,7 @@ static int process_connect(struct ceph_connection *con)
le32_to_cpu(con->in_reply.connect_seq));
if (con->in_reply.flags & CEPH_MSG_CONNECT_LOSSY)
- set_bit(LOSSYTX, &con->flags);
+ set_bit(CON_FLAG_LOSSYTX, &con->flags);
con->delay = 0; /* reset backoff memory */
@@ -2061,14 +2070,15 @@ do_next:
prepare_write_ack(con);
goto more;
}
- if (test_and_clear_bit(KEEPALIVE_PENDING, &con->flags)) {
+ if (test_and_clear_bit(CON_FLAG_KEEPALIVE_PENDING,
+ &con->flags)) {
prepare_write_keepalive(con);
goto more;
}
}
/* Nothing to do! */
- clear_bit(WRITE_PENDING, &con->flags);
+ clear_bit(CON_FLAG_WRITE_PENDING, &con->flags);
dout("try_write nothing else to write.\n");
ret = 0;
out:
@@ -2241,7 +2251,7 @@ static void con_work(struct work_struct *work)
mutex_lock(&con->mutex);
restart:
- if (test_and_clear_bit(SOCK_CLOSED, &con->flags)) {
+ if (test_and_clear_bit(CON_FLAG_SOCK_CLOSED, &con->flags)) {
switch (con->state) {
case CON_STATE_CONNECTING:
con->error_msg = "connection failed";
@@ -2260,7 +2270,7 @@ restart:
goto fault;
}
- if (test_and_clear_bit(BACKOFF, &con->flags)) {
+ if (test_and_clear_bit(CON_FLAG_BACKOFF, &con->flags)) {
dout("con_work %p backing off\n", con);
if (queue_delayed_work(ceph_msgr_wq, &con->work,
round_jiffies_relative(con->delay))) {
@@ -2336,7 +2346,7 @@ static void ceph_fault(struct ceph_connection *con)
con_close_socket(con);
- if (test_bit(LOSSYTX, &con->flags)) {
+ if (test_bit(CON_FLAG_LOSSYTX, &con->flags)) {
dout("fault on LOSSYTX channel, marking CLOSED\n");
con->state = CON_STATE_CLOSED;
goto out_unlock;
@@ -2356,9 +2366,9 @@ static void ceph_fault(struct ceph_connection *con)
/* If there are no messages queued or keepalive pending, place
* the connection in a STANDBY state */
if (list_empty(&con->out_queue) &&
- !test_bit(KEEPALIVE_PENDING, &con->flags)) {
+ !test_bit(CON_FLAG_KEEPALIVE_PENDING, &con->flags)) {
dout("fault %p setting STANDBY clearing WRITE_PENDING\n", con);
- clear_bit(WRITE_PENDING, &con->flags);
+ clear_bit(CON_FLAG_WRITE_PENDING, &con->flags);
con->state = CON_STATE_STANDBY;
} else {
/* retry after a delay. */
@@ -2383,7 +2393,7 @@ static void ceph_fault(struct ceph_connection *con)
* that when con_work restarts we schedule the
* delay then.
*/
- set_bit(BACKOFF, &con->flags);
+ set_bit(CON_FLAG_BACKOFF, &con->flags);
}
}
@@ -2440,8 +2450,8 @@ static void clear_standby(struct ceph_connection *con)
dout("clear_standby %p and ++connect_seq\n", con);
con->state = CON_STATE_PREOPEN;
con->connect_seq++;
- WARN_ON(test_bit(WRITE_PENDING, &con->flags));
- WARN_ON(test_bit(KEEPALIVE_PENDING, &con->flags));
+ WARN_ON(test_bit(CON_FLAG_WRITE_PENDING, &con->flags));
+ WARN_ON(test_bit(CON_FLAG_KEEPALIVE_PENDING, &con->flags));
}
}
@@ -2482,7 +2492,7 @@ void ceph_con_send(struct ceph_connection *con, struct ceph_msg *msg)
/* if there wasn't anything waiting to send before, queue
* new work */
- if (test_and_set_bit(WRITE_PENDING, &con->flags) == 0)
+ if (test_and_set_bit(CON_FLAG_WRITE_PENDING, &con->flags) == 0)
queue_con(con);
}
EXPORT_SYMBOL(ceph_con_send);
@@ -2571,8 +2581,8 @@ void ceph_con_keepalive(struct ceph_connection *con)
mutex_lock(&con->mutex);
clear_standby(con);
mutex_unlock(&con->mutex);
- if (test_and_set_bit(KEEPALIVE_PENDING, &con->flags) == 0 &&
- test_and_set_bit(WRITE_PENDING, &con->flags) == 0)
+ if (test_and_set_bit(CON_FLAG_KEEPALIVE_PENDING, &con->flags) == 0 &&
+ test_and_set_bit(CON_FLAG_WRITE_PENDING, &con->flags) == 0)
queue_con(con);
}
EXPORT_SYMBOL(ceph_con_keepalive);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 8007b8d626b49c34fb146ec16dc639d8b10c862f upstream.
If the connect() call immediately fails such that sock == NULL, we
still need con_close_socket() to reset our socket state to CLOSED.
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 26 +++++++++++++++++++-------
1 file changed, 19 insertions(+), 7 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index fa16f2c..a6a0c7a 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -224,6 +224,8 @@ static void con_sock_state_init(struct ceph_connection *con)
old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CLOSED);
if (WARN_ON(old_state != CON_SOCK_STATE_NEW))
printk("%s: unexpected old state %d\n", __func__, old_state);
+ dout("%s con %p sock %d -> %d\n", __func__, con, old_state,
+ CON_SOCK_STATE_CLOSED);
}
static void con_sock_state_connecting(struct ceph_connection *con)
@@ -233,6 +235,8 @@ static void con_sock_state_connecting(struct ceph_connection *con)
old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CONNECTING);
if (WARN_ON(old_state != CON_SOCK_STATE_CLOSED))
printk("%s: unexpected old state %d\n", __func__, old_state);
+ dout("%s con %p sock %d -> %d\n", __func__, con, old_state,
+ CON_SOCK_STATE_CONNECTING);
}
static void con_sock_state_connected(struct ceph_connection *con)
@@ -242,6 +246,8 @@ static void con_sock_state_connected(struct ceph_connection *con)
old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CONNECTED);
if (WARN_ON(old_state != CON_SOCK_STATE_CONNECTING))
printk("%s: unexpected old state %d\n", __func__, old_state);
+ dout("%s con %p sock %d -> %d\n", __func__, con, old_state,
+ CON_SOCK_STATE_CONNECTED);
}
static void con_sock_state_closing(struct ceph_connection *con)
@@ -253,6 +259,8 @@ static void con_sock_state_closing(struct ceph_connection *con)
old_state != CON_SOCK_STATE_CONNECTED &&
old_state != CON_SOCK_STATE_CLOSING))
printk("%s: unexpected old state %d\n", __func__, old_state);
+ dout("%s con %p sock %d -> %d\n", __func__, con, old_state,
+ CON_SOCK_STATE_CLOSING);
}
static void con_sock_state_closed(struct ceph_connection *con)
@@ -262,8 +270,11 @@ static void con_sock_state_closed(struct ceph_connection *con)
old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CLOSED);
if (WARN_ON(old_state != CON_SOCK_STATE_CONNECTED &&
old_state != CON_SOCK_STATE_CLOSING &&
- old_state != CON_SOCK_STATE_CONNECTING))
+ old_state != CON_SOCK_STATE_CONNECTING &&
+ old_state != CON_SOCK_STATE_CLOSED))
printk("%s: unexpected old state %d\n", __func__, old_state);
+ dout("%s con %p sock %d -> %d\n", __func__, con, old_state,
+ CON_SOCK_STATE_CLOSED);
}
/*
@@ -448,14 +459,14 @@ static int ceph_tcp_sendpage(struct socket *sock, struct page *page,
*/
static int con_close_socket(struct ceph_connection *con)
{
- int rc;
+ int rc = 0;
dout("con_close_socket on %p sock %p\n", con, con->sock);
- if (!con->sock)
- return 0;
- rc = con->sock->ops->shutdown(con->sock, SHUT_RDWR);
- sock_release(con->sock);
- con->sock = NULL;
+ if (con->sock) {
+ rc = con->sock->ops->shutdown(con->sock, SHUT_RDWR);
+ sock_release(con->sock);
+ con->sock = NULL;
+ }
/*
* Forcibly clear the SOCK_CLOSED flag. It gets set
@@ -464,6 +475,7 @@ static int con_close_socket(struct ceph_connection *con)
* shut the socket down.
*/
clear_bit(CON_FLAG_SOCK_CLOSED, &con->flags);
+
con_sock_state_closed(con);
return rc;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 7b862e07b1a4d5c963d19027f10ea78085f27f9b upstream.
We drop the con mutex when delivering a message. When we retake the
lock, we need to verify we are still in the OPEN state before
preparing to read the next tag, or else we risk stepping on a
connection that has been closed.
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index a6a0c7a..feb5a2a 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2003,7 +2003,6 @@ static void process_message(struct ceph_connection *con)
con->ops->dispatch(con, msg);
mutex_lock(&con->mutex);
- prepare_read_tag(con);
}
@@ -2213,6 +2212,8 @@ more:
if (con->in_tag == CEPH_MSGR_TAG_READY)
goto more;
process_message(con);
+ if (con->state == CON_STATE_OPEN)
+ prepare_read_tag(con);
goto more;
}
if (con->in_tag == CEPH_MSGR_TAG_ACK) {
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 4740a623d20c51d167da7f752b63e2b8714b2543 upstream.
This function's calling convention is very limiting. In particular,
we can't return any error other than ENOMEM (and only implicitly),
which is a problem (see next patch).
Instead, return a normal 0 or error code, and make the skip a pointer
output parameter. Drop the useless in_hdr argument (we have the con
pointer).
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 56 ++++++++++++++++++++++++++++----------------------
1 file changed, 31 insertions(+), 25 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index c3b628c..13b549b 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1733,9 +1733,7 @@ static int read_partial_message_section(struct ceph_connection *con,
return 1;
}
-static bool ceph_con_in_msg_alloc(struct ceph_connection *con,
- struct ceph_msg_header *hdr);
-
+static int ceph_con_in_msg_alloc(struct ceph_connection *con, int *skip);
static int read_partial_message_pages(struct ceph_connection *con,
struct page **pages,
@@ -1864,9 +1862,14 @@ static int read_partial_message(struct ceph_connection *con)
/* allocate message? */
if (!con->in_msg) {
+ int skip = 0;
+
dout("got hdr type %d front %d data %d\n", con->in_hdr.type,
con->in_hdr.front_len, con->in_hdr.data_len);
- if (ceph_con_in_msg_alloc(con, &con->in_hdr)) {
+ ret = ceph_con_in_msg_alloc(con, &skip);
+ if (ret < 0)
+ return ret;
+ if (skip) {
/* skip this message */
dout("alloc_msg said skip message\n");
BUG_ON(con->in_msg);
@@ -1876,12 +1879,8 @@ static int read_partial_message(struct ceph_connection *con)
con->in_seq++;
return 0;
}
- if (!con->in_msg) {
- con->error_msg =
- "error allocating memory for incoming message";
- return -ENOMEM;
- }
+ BUG_ON(!con->in_msg);
BUG_ON(con->in_msg->con != con);
m = con->in_msg;
m->front.iov_len = 0; /* haven't read it yet */
@@ -2715,43 +2714,50 @@ static int ceph_alloc_middle(struct ceph_connection *con, struct ceph_msg *msg)
* connection, and save the result in con->in_msg. Uses the
* connection's private alloc_msg op if available.
*
- * Returns true if the message should be skipped, false otherwise.
- * If true is returned (skip message), con->in_msg will be NULL.
- * If false is returned, con->in_msg will contain a pointer to the
- * newly-allocated message, or NULL in case of memory exhaustion.
+ * Returns 0 on success, or a negative error code.
+ *
+ * On success, if we set *skip = 1:
+ * - the next message should be skipped and ignored.
+ * - con->in_msg == NULL
+ * or if we set *skip = 0:
+ * - con->in_msg is non-null.
+ * On error (ENOMEM, EAGAIN, ...),
+ * - con->in_msg == NULL
*/
-static bool ceph_con_in_msg_alloc(struct ceph_connection *con,
- struct ceph_msg_header *hdr)
+static int ceph_con_in_msg_alloc(struct ceph_connection *con, int *skip)
{
+ struct ceph_msg_header *hdr = &con->in_hdr;
int type = le16_to_cpu(hdr->type);
int front_len = le32_to_cpu(hdr->front_len);
int middle_len = le32_to_cpu(hdr->middle_len);
- int ret;
+ int ret = 0;
BUG_ON(con->in_msg != NULL);
if (con->ops->alloc_msg) {
- int skip = 0;
-
mutex_unlock(&con->mutex);
- con->in_msg = con->ops->alloc_msg(con, hdr, &skip);
+ con->in_msg = con->ops->alloc_msg(con, hdr, skip);
mutex_lock(&con->mutex);
if (con->in_msg) {
con->in_msg->con = con->ops->get(con);
BUG_ON(con->in_msg->con == NULL);
}
- if (skip)
+ if (*skip) {
con->in_msg = NULL;
-
- if (!con->in_msg)
- return skip != 0;
+ return 0;
+ }
+ if (!con->in_msg) {
+ con->error_msg =
+ "error allocating memory for incoming message";
+ return -ENOMEM;
+ }
}
if (!con->in_msg) {
con->in_msg = ceph_msg_new(type, front_len, GFP_NOFS, false);
if (!con->in_msg) {
pr_err("unable to allocate msg type %d len %d\n",
type, front_len);
- return false;
+ return -ENOMEM;
}
con->in_msg->con = con->ops->get(con);
BUG_ON(con->in_msg->con == NULL);
@@ -2767,7 +2773,7 @@ static bool ceph_con_in_msg_alloc(struct ceph_connection *con,
}
}
- return false;
+ return ret;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sylvain Munaut <[email protected]>
commit f0666b1ac875ff32fe290219b150ec62eebbe10e upstream.
Avoid crashing if the crypto key payload was NULL, as when it was not correctly
allocated and initialized. Also, avoid leaking it.
Signed-off-by: Sylvain Munaut <[email protected]>
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/crypto.c | 1 +
net/ceph/crypto.h | 3 ++-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/net/ceph/crypto.c b/net/ceph/crypto.c
index b780cb7..9da7fdd 100644
--- a/net/ceph/crypto.c
+++ b/net/ceph/crypto.c
@@ -466,6 +466,7 @@ void ceph_key_destroy(struct key *key) {
struct ceph_crypto_key *ckey = key->payload.data;
ceph_crypto_key_destroy(ckey);
+ kfree(ckey);
}
struct key_type key_type_ceph = {
diff --git a/net/ceph/crypto.h b/net/ceph/crypto.h
index 1919d15..3572dc5 100644
--- a/net/ceph/crypto.h
+++ b/net/ceph/crypto.h
@@ -16,7 +16,8 @@ struct ceph_crypto_key {
static inline void ceph_crypto_key_destroy(struct ceph_crypto_key *key)
{
- kfree(key->key);
+ if (key)
+ kfree(key->key);
}
extern int ceph_crypto_key_clone(struct ceph_crypto_key *dst,
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 6139919133377652992a5fe134e22abce3e9c25e upstream.
We drop the lock when calling the ->alloc_msg() con op, which means
we need to (a) not clobber con->in_msg without the mutex held, and (b)
verify that we are still in the OPEN state when we retake
it to avoid causing any mayhem. If the state does change, -EAGAIN
will get us back to con_work() and loop.
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 13b549b..b6655b1 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2735,9 +2735,16 @@ static int ceph_con_in_msg_alloc(struct ceph_connection *con, int *skip)
BUG_ON(con->in_msg != NULL);
if (con->ops->alloc_msg) {
+ struct ceph_msg *msg;
+
mutex_unlock(&con->mutex);
- con->in_msg = con->ops->alloc_msg(con, hdr, skip);
+ msg = con->ops->alloc_msg(con, hdr, skip);
mutex_lock(&con->mutex);
+ if (con->state != CON_STATE_OPEN) {
+ ceph_msg_put(msg);
+ return -EAGAIN;
+ }
+ con->in_msg = msg;
if (con->in_msg) {
con->in_msg->con = con->ops->get(con);
BUG_ON(con->in_msg->con == NULL);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Jim Schutt <[email protected]>
commit 6d4221b53707486dfad3f5bfe568d2ce7f4c9863 upstream.
Because the Ceph client messenger uses a non-blocking connect, it is
possible for the sending of the client banner to race with the
arrival of the banner sent by the peer.
When ceph_sock_state_change() notices the connect has completed, it
schedules work to process the socket via con_work(). During this
time the peer is writing its banner, and arrival of the peer banner
races with con_work().
If con_work() calls try_read() before the peer banner arrives, there
is nothing for it to do, after which con_work() calls try_write() to
send the client's banner. In this case Ceph's protocol negotiation
can complete successfully.
The server-side messenger immediately sends its banner and addresses
after accepting a connect request, *before* actually attempting to
read or verify the banner from the client. As a result, it is
possible for the banner from the server to arrive before con_work()
calls try_read(). If that happens, try_read() will read the banner
and prepare protocol negotiation info via prepare_write_connect().
prepare_write_connect() calls con_out_kvec_reset(), which discards
the as-yet-unsent client banner. Next, con_work() calls
try_write(), which sends the protocol negotiation info rather than
the banner that the peer is expecting.
The result is that the peer sees an invalid banner, and the client
reports "negotiation failed".
Fix this by moving con_out_kvec_reset() out of
prepare_write_connect() to its callers at all locations except the
one where the banner might still need to be sent.
[[email protected]: added note about server-side behavior]
Signed-off-by: Jim Schutt <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index b6655b1..b141c86 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -911,7 +911,6 @@ static int prepare_write_connect(struct ceph_connection *con)
con->out_connect.authorizer_len = auth ?
cpu_to_le32(auth->authorizer_buf_len) : 0;
- con_out_kvec_reset(con);
con_out_kvec_add(con, sizeof (con->out_connect),
&con->out_connect);
if (auth && auth->authorizer_buf_len)
@@ -1553,6 +1552,7 @@ static int process_connect(struct ceph_connection *con)
return -1;
}
con->auth_retry = 1;
+ con_out_kvec_reset(con);
ret = prepare_write_connect(con);
if (ret < 0)
return ret;
@@ -1573,6 +1573,7 @@ static int process_connect(struct ceph_connection *con)
ENTITY_NAME(con->peer_name),
ceph_pr_addr(&con->peer_addr.in_addr));
reset_connection(con);
+ con_out_kvec_reset(con);
ret = prepare_write_connect(con);
if (ret < 0)
return ret;
@@ -1597,6 +1598,7 @@ static int process_connect(struct ceph_connection *con)
le32_to_cpu(con->out_connect.connect_seq),
le32_to_cpu(con->in_reply.connect_seq));
con->connect_seq = le32_to_cpu(con->in_reply.connect_seq);
+ con_out_kvec_reset(con);
ret = prepare_write_connect(con);
if (ret < 0)
return ret;
@@ -1613,6 +1615,7 @@ static int process_connect(struct ceph_connection *con)
le32_to_cpu(con->in_reply.global_seq));
get_global_seq(con->msgr,
le32_to_cpu(con->in_reply.global_seq));
+ con_out_kvec_reset(con);
ret = prepare_write_connect(con);
if (ret < 0)
return ret;
@@ -2131,7 +2134,11 @@ more:
BUG_ON(con->state != CON_STATE_CONNECTING);
con->state = CON_STATE_NEGOTIATING;
- /* Banner is good, exchange connection info */
+ /*
+ * Received banner is good, exchange connection info.
+ * Do not reset out_kvec, as sending our banner raced
+ * with receiving peer banner after connect completed.
+ */
ret = prepare_write_connect(con);
if (ret < 0)
goto out;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 588377d6199034c36d335e7df5818b731fea072c upstream.
If ceph_fault() is unable to queue work after a delay, it sets the
BACKOFF connection flag so con_work() will attempt to do so.
In con_work(), when BACKOFF is set, if queue_delayed_work() doesn't
result in newly-queued work, it simply ignores this condition and
proceeds as if no backoff delay were desired. There are two
problems with this--one of which is a bug.
The first problem is simply that the intended behavior is to back
off, and if we aren't able to queue the work item to run after a delay,
we're not doing that.
The only reason queue_delayed_work() won't queue work is if the
provided work item is already queued. In the messenger, this
means that con_work() is already scheduled to be run again. So
if we simply set the BACKOFF flag again when this occurs, we know
the next con_work() call will again attempt to hold off activity
on the connection until after the delay.
The second problem--the bug--is a leak of a reference count. If
queue_delayed_work() returns 0 in con_work(), con->ops->put() drops
the connection reference held on entry to con_work(). However,
processing is (was) allowed to continue, and at the end of the
function a second con->ops->put() is called.
This patch fixes both problems.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 8ba0eee..0de041f 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2296,10 +2296,11 @@ restart:
mutex_unlock(&con->mutex);
return;
} else {
- con->ops->put(con);
dout("con_work %p FAILED to back off %lu\n", con,
con->delay);
+ set_bit(CON_FLAG_BACKOFF, &con->flags);
}
+ goto done;
}
if (con->state == CON_STATE_STANDBY) {
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 9bd952615a42d7e2ce3fa2c632e808e804637a1a upstream.
The ceph_con_in_msg_alloc() function drops con->mutex while it allocates a
message. If that races with a timeout that resends a zillion messages and
resets the connection, and the ->alloc_msg() method returns a NULL message,
it will call ceph_msg_put(NULL) and BUG.
Fix by only calling put if msg is non-NULL.
Fixes http://tracker.newdream.net/issues/3142
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 0de041f..692243a 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2746,7 +2746,8 @@ static int ceph_con_in_msg_alloc(struct ceph_connection *con, int *skip)
msg = con->ops->alloc_msg(con, hdr, skip);
mutex_lock(&con->mutex);
if (con->state != CON_STATE_OPEN) {
- ceph_msg_put(msg);
+ if (msg)
+ ceph_msg_put(msg);
return -EAGAIN;
}
con->in_msg = msg;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: "Yan, Zheng" <[email protected]>
commit 3e8f43a089f06279c5f76a9ccd42578eebf7bfa5 upstream.
When i >= newmap->m_max_mds, ceph_mdsmap_get_addr(newmap, i) returns
NULL. Passing NULL to memcmp() triggers an oops.
Signed-off-by: Yan, Zheng <[email protected]>
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ceph/mds_client.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 5ac6434..7f1682d 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -2634,7 +2634,8 @@ static void check_new_map(struct ceph_mds_client *mdsc,
ceph_mdsmap_is_laggy(newmap, i) ? " (laggy)" : "",
session_state_name(s->s_state));
- if (memcmp(ceph_mdsmap_get_addr(oldmap, i),
+ if (i >= newmap->m_max_mds ||
+ memcmp(ceph_mdsmap_get_addr(oldmap, i),
ceph_mdsmap_get_addr(newmap, i),
sizeof(struct ceph_entity_addr))) {
if (s->s_state == CEPH_MDS_SESSION_OPENING) {
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 6285bc231277419255f3498d3eb5ddc9f8e7fe79 upstream.
A pgoff_t is defined (by default) to have type (unsigned long). On
architectures such as i686 that's a 32-bit type. The ceph address
space code was attempting to produce 64 bit offsets by shifting a
page's index by PAGE_CACHE_SHIFT, but the result was not what was
desired because the shift occurred before the result got promoted
to 64 bits.
Fix this by converting all uses of page->index used in this way to
use the page_offset() macro, which ensures the 64-bit result has the
intended value.
This fixes http://tracker.newdream.net/issues/3112
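As an illustration of the promotion issue (an editorial sketch, not part of
the patch; it assumes the common 4 KiB page size, i.e. PAGE_CACHE_SHIFT == 12):

    pgoff_t index = 0x100000;                         /* page 2^20, byte offset 4 GiB */
    loff_t bad  = index << PAGE_CACHE_SHIFT;          /* shift done in 32 bits on i686, wraps to 0 */
    loff_t good = (loff_t)index << PAGE_CACHE_SHIFT;  /* 0x100000000, what page_offset(page) yields */

The widening to 64 bits happens only after the 32-bit shift has already
wrapped, which is why the cast (or page_offset()) must come before the shift.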
Reported-by: Mohamed Pakkeer <[email protected]>
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ceph/addr.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 8b67304..32ee086 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -205,7 +205,7 @@ static int readpage_nounlock(struct file *filp, struct page *page)
dout("readpage inode %p file %p page %p index %lu\n",
inode, filp, page, page->index);
err = ceph_osdc_readpages(osdc, ceph_vino(inode), &ci->i_layout,
- page->index << PAGE_CACHE_SHIFT, &len,
+ (u64) page_offset(page), &len,
ci->i_truncate_seq, ci->i_truncate_size,
&page, 1, 0);
if (err == -ENOENT)
@@ -286,7 +286,7 @@ static int start_read(struct inode *inode, struct list_head *page_list, int max)
int nr_pages = 0;
int ret;
- off = page->index << PAGE_CACHE_SHIFT;
+ off = (u64) page_offset(page);
/* count pages */
next_index = page->index;
@@ -426,7 +426,7 @@ static int writepage_nounlock(struct page *page, struct writeback_control *wbc)
struct ceph_inode_info *ci;
struct ceph_fs_client *fsc;
struct ceph_osd_client *osdc;
- loff_t page_off = page->index << PAGE_CACHE_SHIFT;
+ loff_t page_off = page_offset(page);
int len = PAGE_CACHE_SIZE;
loff_t i_size;
int err = 0;
@@ -817,8 +817,7 @@ get_more_pages:
/* ok */
if (locked_pages == 0) {
/* prepare async write request */
- offset = (unsigned long long)page->index
- << PAGE_CACHE_SHIFT;
+ offset = (u64) page_offset(page);
len = wsize;
req = ceph_osdc_new_request(&fsc->client->osdc,
&ci->i_layout,
@@ -1180,7 +1179,7 @@ static int ceph_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
struct inode *inode = vma->vm_file->f_dentry->d_inode;
struct page *page = vmf->page;
struct ceph_mds_client *mdsc = ceph_inode_to_client(inode)->mdsc;
- loff_t off = page->index << PAGE_CACHE_SHIFT;
+ loff_t off = page_offset(page);
loff_t size, len;
int ret;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Mark Brown <[email protected]>
commit a1b98e12b7f8fad2f0aa3c08a3302bcac7ae1ec7 upstream.
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
sound/soc/codecs/wm2200.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sound/soc/codecs/wm2200.c b/sound/soc/codecs/wm2200.c
index 58ab97a..c8bff6d 100644
--- a/sound/soc/codecs/wm2200.c
+++ b/sound/soc/codecs/wm2200.c
@@ -1028,7 +1028,7 @@ SOC_DOUBLE_R_TLV("OUT2 Digital Volume", WM2200_DAC_DIGITAL_VOLUME_2L,
WM2200_DAC_DIGITAL_VOLUME_2R, WM2200_OUT2L_VOL_SHIFT, 0x9f, 0,
digital_tlv),
SOC_DOUBLE("OUT2 Switch", WM2200_PDM_1, WM2200_SPK1L_MUTE_SHIFT,
- WM2200_SPK1R_MUTE_SHIFT, 1, 0),
+ WM2200_SPK1R_MUTE_SHIFT, 1, 1),
};
WM2200_MIXER_ENUMS(OUT1L, WM2200_OUT1LMIX_INPUT_1_SOURCE);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sarah Sharp <[email protected]>
commit 65a95b75bc5afa7bbb844e222481044c1c4767eb upstream.
The Set SEL control transfer tells a device the exit latencies
associated with a device-initiated U1 or U2 exit. Since a parent hub may
initiate a transition to U1 soon after a downstream port's U1 timeout is
set, we need to make sure the device receives the Set SEL transfer
before the parent hub timeout is set.
This patch should be backported to kernels as old as 3.5, that contain
the commit 1ea7e0e8e3d0f50901d335ea4178ab2aa8c88201 "USB: Add support to
enable/disable USB3 link states."
Signed-off-by: Sarah Sharp <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/core/hub.c | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
index 209815a..c16230b 100644
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -3199,16 +3199,6 @@ static int usb_set_device_initiated_lpm(struct usb_device *udev,
if (enable) {
/*
- * First, let the device know about the exit latencies
- * associated with the link state we're about to enable.
- */
- ret = usb_req_set_sel(udev, state);
- if (ret < 0) {
- dev_warn(&udev->dev, "Set SEL for device-initiated "
- "%s failed.\n", usb3_lpm_names[state]);
- return -EBUSY;
- }
- /*
* Now send the control transfer to enable device-initiated LPM
* for either U1 or U2.
*/
@@ -3293,7 +3283,7 @@ static int usb_set_lpm_timeout(struct usb_device *udev,
static void usb_enable_link_state(struct usb_hcd *hcd, struct usb_device *udev,
enum usb3_link_state state)
{
- int timeout;
+ int timeout, ret;
__u8 u1_mel = udev->bos->ss_cap->bU1devExitLat;
__le16 u2_mel = udev->bos->ss_cap->bU2DevExitLat;
@@ -3305,6 +3295,17 @@ static void usb_enable_link_state(struct usb_hcd *hcd, struct usb_device *udev,
(state == USB3_LPM_U2 && u2_mel == 0))
return;
+ /*
+ * First, let the device know about the exit latencies
+ * associated with the link state we're about to enable.
+ */
+ ret = usb_req_set_sel(udev, state);
+ if (ret < 0) {
+ dev_warn(&udev->dev, "Set SEL for device-initiated %s failed.\n",
+ usb3_lpm_names[state]);
+ return;
+ }
+
/* We allow the host controller to set the U1/U2 timeout internally
* first, so that it can change its schedule to account for the
* additional latency to send data to a device in a lower power
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Guennadi Liakhovetski <[email protected]>
commit 57451e437796548d658d03c2c4aab659eafcd799 upstream.
shdma doesn't support transfer re-scheduling or triggering from callbacks
or from atomic context. The fsi driver issues DMA transfers from a tasklet
context, which is a bug. To fix it, convert the tasklet to a work item.
Reported-by: Do Q.Thang <[email protected]>
Tested-by: Do Q.Thang <[email protected]>
Signed-off-by: Guennadi Liakhovetski <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
sound/soc/sh/fsi.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/sound/soc/sh/fsi.c b/sound/soc/sh/fsi.c
index 2ef9853..e96c395 100644
--- a/sound/soc/sh/fsi.c
+++ b/sound/soc/sh/fsi.c
@@ -20,6 +20,7 @@
#include <linux/sh_dma.h>
#include <linux/slab.h>
#include <linux/module.h>
+#include <linux/workqueue.h>
#include <sound/soc.h>
#include <sound/sh_fsi.h>
@@ -223,7 +224,7 @@ struct fsi_stream {
*/
struct dma_chan *chan;
struct sh_dmae_slave slave; /* see fsi_handler_init() */
- struct tasklet_struct tasklet;
+ struct work_struct work;
dma_addr_t dma;
};
@@ -1085,9 +1086,9 @@ static void fsi_dma_complete(void *data)
snd_pcm_period_elapsed(io->substream);
}
-static void fsi_dma_do_tasklet(unsigned long data)
+static void fsi_dma_do_work(struct work_struct *work)
{
- struct fsi_stream *io = (struct fsi_stream *)data;
+ struct fsi_stream *io = container_of(work, struct fsi_stream, work);
struct fsi_priv *fsi = fsi_stream_to_priv(io);
struct dma_chan *chan;
struct snd_soc_dai *dai;
@@ -1140,7 +1141,7 @@ static void fsi_dma_do_tasklet(unsigned long data)
* FIXME
*
* In DMAEngine case, codec and FSI cannot be started simultaneously
- * since FSI is using tasklet.
+ * since FSI is using the scheduler work queue.
* Therefore, in capture case, probably FSI FIFO will have got
* overflow error in this point.
* in that case, DMA cannot start transfer until error was cleared.
@@ -1164,7 +1165,7 @@ static bool fsi_dma_filter(struct dma_chan *chan, void *param)
static int fsi_dma_transfer(struct fsi_priv *fsi, struct fsi_stream *io)
{
- tasklet_schedule(&io->tasklet);
+ schedule_work(&io->work);
return 0;
}
@@ -1195,14 +1196,14 @@ static int fsi_dma_probe(struct fsi_priv *fsi, struct fsi_stream *io)
if (!io->chan)
return -EIO;
- tasklet_init(&io->tasklet, fsi_dma_do_tasklet, (unsigned long)io);
+ INIT_WORK(&io->work, fsi_dma_do_work);
return 0;
}
static int fsi_dma_remove(struct fsi_priv *fsi, struct fsi_stream *io)
{
- tasklet_kill(&io->tasklet);
+ cancel_work_sync(&io->work);
fsi_stream_stop(fsi, io);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Guenter Roeck <[email protected]>
commit 1102dcab849313bd5a340b299b5cf61b518fbc0f upstream.
TjMax for the CE4100 series of Atom CPUs was previously reported to be
110 degrees C.
cpuinfo logs on the web show existing CPU types CE4110, CE4150, and CE4170,
reported as "model name : Intel(R) Atom(TM) CPU CE41{1|5|7}0 @ 1.{2|6}0GHz"
with model 28 (0x1c) and stepping 10 (0x0a). Add the three known variants
to the tjmax table.
Signed-off-by: Guenter Roeck <[email protected]>
Acked-by: Jean Delvare <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
Documentation/hwmon/coretemp | 1 +
drivers/hwmon/coretemp.c | 7 +++++--
2 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/Documentation/hwmon/coretemp b/Documentation/hwmon/coretemp
index c86b50c..f17256f 100644
--- a/Documentation/hwmon/coretemp
+++ b/Documentation/hwmon/coretemp
@@ -105,6 +105,7 @@ Process Processor TjMax(C)
330/230 125
E680/660/640/620 90
E680T/660T/640T/620T 110
+ CE4170/4150/4110 110
45nm Core2 Processors
Solo ULV SU3500/3300 100
diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
index 637c51c..fba26ee 100644
--- a/drivers/hwmon/coretemp.c
+++ b/drivers/hwmon/coretemp.c
@@ -205,8 +205,11 @@ static struct tjmax __cpuinitconst tjmax_table[] = {
{ "CPU N455", 100000 },
{ "CPU N470", 100000 },
{ "CPU N475", 100000 },
- { "CPU 230", 100000 },
- { "CPU 330", 125000 },
+ { "CPU 230", 100000 }, /* Model 0x1c, stepping 2 */
+ { "CPU 330", 125000 }, /* Model 0x1c, stepping 2 */
+ { "CPU CE4110", 110000 }, /* Model 0x1c, stepping 10 */
+ { "CPU CE4150", 110000 }, /* Model 0x1c, stepping 10 */
+ { "CPU CE4170", 110000 }, /* Model 0x1c, stepping 10 */
};
static int __cpuinit adjust_tjmax(struct cpuinfo_x86 *c, u32 id,
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Felipe Balbi <[email protected]>
commit 041d81f493d90c940ec41f0ec98bc7c4f2fba431 upstream.
If a USB transfer has already been started, meaning
we have already issued the StartTransfer command to that
particular endpoint, the DWC3_EP_BUSY flag has also
already been set.
When we try to cancel this transfer, which is already
in the controller's cache, we will not receive an XferComplete
event and we must clear DWC3_EP_BUSY in order to allow
subsequent requests to be properly started.
The best place to clear that flag is right after issuing
DWC3_DEPCMD_ENDTRANSFER.
Reported-by: Moiz Sonasath <[email protected]>
Signed-off-by: Felipe Balbi <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/dwc3/gadget.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 0b24d9d..c7721db 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -1832,6 +1832,7 @@ static void dwc3_stop_active_transfer(struct dwc3 *dwc, u32 epnum)
ret = dwc3_send_gadget_ep_cmd(dwc, dep->number, cmd, ¶ms);
WARN_ON_ONCE(ret);
dep->res_trans_idx = 0;
+ dep->flags &= ~DWC3_EP_BUSY;
}
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust <[email protected]>
commit cd0b16c1c3cda12dbed1f8de8f1a9b0591990724 upstream.
If the filehandle is stale, or open access is denied for some reason,
nlm_fopen() may return one of the NLMv4-specific error codes nlm4_stale_fh
or nlm4_failed. These get passed right through nlm_lookup_file(),
and so when nlmsvc_retrieve_args() calls the latter, it needs to filter
the result through the cast_status() machinery.
Failure to do so will trigger the BUG_ON() in encode_nlm_stat...
Signed-off-by: Trond Myklebust <[email protected]>
Reported-by: Larry McVoy <[email protected]>
Signed-off-by: J. Bruce Fields <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/lockd/clntxdr.c | 2 +-
fs/lockd/svcproc.c | 3 ++-
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/fs/lockd/clntxdr.c b/fs/lockd/clntxdr.c
index d269ada..982d267 100644
--- a/fs/lockd/clntxdr.c
+++ b/fs/lockd/clntxdr.c
@@ -223,7 +223,7 @@ static void encode_nlm_stat(struct xdr_stream *xdr,
{
__be32 *p;
- BUG_ON(be32_to_cpu(stat) > NLM_LCK_DENIED_GRACE_PERIOD);
+ WARN_ON_ONCE(be32_to_cpu(stat) > NLM_LCK_DENIED_GRACE_PERIOD);
p = xdr_reserve_space(xdr, 4);
*p = stat;
}
diff --git a/fs/lockd/svcproc.c b/fs/lockd/svcproc.c
index d27aab1..d413af3 100644
--- a/fs/lockd/svcproc.c
+++ b/fs/lockd/svcproc.c
@@ -67,7 +67,8 @@ nlmsvc_retrieve_args(struct svc_rqst *rqstp, struct nlm_args *argp,
/* Obtain file pointer. Not used by FREE_ALL call. */
if (filp != NULL) {
- if ((error = nlm_lookup_file(rqstp, &file, &lock->fh)) != 0)
+ error = cast_status(nlm_lookup_file(rqstp, &file, &lock->fh));
+ if (error != 0)
goto no_locks;
*filp = file;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sasha Levin <[email protected]>
commit 212ba90696ab4884e2025b0b13726d67aadc2cd4 upstream.
The buffer in read_flush() is too small for the longest possible value it may
need to hold. This can lead to kernel stack corruption:
[ 43.047329] Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: ffffffff833e64b4
[ 43.047329]
[ 43.049030] Pid: 6015, comm: trinity-child18 Tainted: G W 3.5.0-rc7-next-20120716-sasha #221
[ 43.050038] Call Trace:
[ 43.050435] [<ffffffff836c60c2>] panic+0xcd/0x1f4
[ 43.050931] [<ffffffff833e64b4>] ? read_flush.isra.7+0xe4/0x100
[ 43.051602] [<ffffffff810e94e6>] __stack_chk_fail+0x16/0x20
[ 43.052206] [<ffffffff833e64b4>] read_flush.isra.7+0xe4/0x100
[ 43.052951] [<ffffffff833e6500>] ? read_flush_pipefs+0x30/0x30
[ 43.053594] [<ffffffff833e652c>] read_flush_procfs+0x2c/0x30
[ 43.053596] [<ffffffff812b9a8c>] proc_reg_read+0x9c/0xd0
[ 43.053596] [<ffffffff812b99f0>] ? proc_reg_write+0xd0/0xd0
[ 43.053596] [<ffffffff81250d5b>] do_loop_readv_writev+0x4b/0x90
[ 43.053596] [<ffffffff81250fd6>] do_readv_writev+0xf6/0x1d0
[ 43.053596] [<ffffffff812510ee>] vfs_readv+0x3e/0x60
[ 43.053596] [<ffffffff812511b8>] sys_readv+0x48/0xb0
[ 43.053596] [<ffffffff8378167d>] system_call_fastpath+0x1a/0x1f
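For reference (an editorial note, not part of the original changelog): the
value is printed with "%lu", and on 64-bit kernels the largest unsigned long,
18446744073709551615, has 20 digits; adding the trailing '\n' and the NUL
terminator gives 22 bytes, hence tbuf[22] and the switch from sprintf() to a
bounded snprintf() below.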
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: J. Bruce Fields <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/sunrpc/cache.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index 47ad266..97bbcdc 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -1406,11 +1406,11 @@ static ssize_t read_flush(struct file *file, char __user *buf,
size_t count, loff_t *ppos,
struct cache_detail *cd)
{
- char tbuf[20];
+ char tbuf[22];
unsigned long p = *ppos;
size_t len;
- sprintf(tbuf, "%lu\n", convert_to_wallclock(cd->flush_time));
+ snprintf(tbuf, sizeof(tbuf), "%lu\n", convert_to_wallclock(cd->flush_time));
len = strlen(tbuf);
if (p >= len)
return 0;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicolas Boullis <[email protected]>
commit 301a29da6e891e7eb95c843af0ecdbe86d01f723 upstream.
The current code assumes that CSIZE is 0000060, which appears to be
wrong on some arches (such as powerpc).
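For context (an editorial note, assuming the asm-generic termbits values
CS5=0000000, CS6=0000020, CS7=0000040, CS8=0000060): the lookup being removed,

    newline.bDataBits = acm_tty_size[(termios->c_cflag & CSIZE) >> 4];

only yields indices 0..3 when CSIZE is 0000060. On architectures that place
CSIZE in different bits, the shift can index past the 4-entry table, which the
explicit switch statement below avoids.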
Signed-off-by: Nicolas Boullis <[email protected]>
Acked-by: Oliver Neukum <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/class/cdc-acm.c | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
index f4593ee..780e0d0 100644
--- a/drivers/usb/class/cdc-acm.c
+++ b/drivers/usb/class/cdc-acm.c
@@ -818,10 +818,6 @@ static const __u32 acm_tty_speed[] = {
2500000, 3000000, 3500000, 4000000
};
-static const __u8 acm_tty_size[] = {
- 5, 6, 7, 8
-};
-
static void acm_tty_set_termios(struct tty_struct *tty,
struct ktermios *termios_old)
{
@@ -835,7 +831,21 @@ static void acm_tty_set_termios(struct tty_struct *tty,
newline.bParityType = termios->c_cflag & PARENB ?
(termios->c_cflag & PARODD ? 1 : 2) +
(termios->c_cflag & CMSPAR ? 2 : 0) : 0;
- newline.bDataBits = acm_tty_size[(termios->c_cflag & CSIZE) >> 4];
+ switch (termios->c_cflag & CSIZE) {
+ case CS5:
+ newline.bDataBits = 5;
+ break;
+ case CS6:
+ newline.bDataBits = 6;
+ break;
+ case CS7:
+ newline.bDataBits = 7;
+ break;
+ case CS8:
+ default:
+ newline.bDataBits = 8;
+ break;
+ }
/* FIXME: Needs to clear unsupported bits in the termios */
acm->clocal = ((termios->c_cflag & CLOCAL) != 0);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: "Alexis R. Cortes" <[email protected]>
commit 470809741a28c3092279f4e1f3f432e534d46068 upstream.
This minor change adds a new system to which the "Fix Compliance Mode
on SN65LVPE502CP Hardware" patch also has to be applied.
System added:
Vendor: Hewlett-Packard. System Model: Z1
Signed-off-by: Alexis R. Cortes <[email protected]>
Acked-by: Sarah Sharp <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/host/xhci.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index db4a2fa..ad29888 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -479,7 +479,8 @@ static bool compliance_mode_recovery_timer_quirk_check(void)
if (strstr(dmi_product_name, "Z420") ||
strstr(dmi_product_name, "Z620") ||
- strstr(dmi_product_name, "Z820"))
+ strstr(dmi_product_name, "Z820") ||
+ strstr(dmi_product_name, "Z1"))
return true;
return false;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: =?UTF-8?q?Bj=C3=B8rn=20Mork?= <[email protected]>
commit 4b35f1c52943851b310afb09047bfe991ac8f5ae upstream.
Signed-off-by: Bjørn Mork <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/option.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
index 26d073f..b473e958 100644
--- a/drivers/usb/serial/option.c
+++ b/drivers/usb/serial/option.c
@@ -895,6 +895,12 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0165, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0167, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t)&net_intf4_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0191, 0xff, 0xff, 0xff), /* ZTE EuFi890 */
+ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0199, 0xff, 0xff, 0xff), /* ZTE MF820S */
+ .driver_info = (kernel_ulong_t)&net_intf1_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0257, 0xff, 0xff, 0xff), /* ZTE MF821 */
+ .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0326, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t)&net_intf4_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1008, 0xff, 0xff, 0xff),
@@ -903,6 +909,8 @@ static const struct usb_device_id option_ids[] = {
.driver_info = (kernel_ulong_t)&net_intf4_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1012, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t)&net_intf4_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1021, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1057, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1058, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1059, 0xff, 0xff, 0xff) },
@@ -1080,8 +1088,16 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1298, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1299, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1300, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1401, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1402, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t)&net_intf2_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1424, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1425, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1426, 0xff, 0xff, 0xff), /* ZTE MF91 */
+ .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2002, 0xff,
0xff, 0xff), .driver_info = (kernel_ulong_t)&zte_k3765_z_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2003, 0xff, 0xff, 0xff) },
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Theodore Ts'o <[email protected]>
commit 06db49e68ae70cf16819b85a14057acb2820776a upstream.
The function ext4_handle_dirty_super() was calculating the superblock
checksum on the wrong block data. As a result, when the superblock is modified
while it is mounted (most commonly, when inodes are added or removed
from the orphan list), the superblock checksum would be wrong. We
didn't notice because the checksum *was* being correctly calculated
in ext4_commit_super(), and this would get called when the file system
was unmounted. So the problem only became obvious if the system
crashed while the file system was mounted.
Fix this by removing the poorly designed function signature for
ext4_superblock_csum_set(); if it only took a single argument, the
pointer to a struct superblock, the ambiguity which caused this
mistake would have been impossible.
Reported-by: George Spelvin <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
[ herton: backport for 3.5: s_dirt etc. still exists, adjust
ext4_superblock_csum_set calls on ext4_mark_super_dirty and on
__ext4_handle_dirty_super ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ext4/ext4.h | 7 ++-----
fs/ext4/ext4_jbd2.c | 6 ++----
fs/ext4/super.c | 7 ++++---
3 files changed, 8 insertions(+), 12 deletions(-)
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 01434f2..42f5a18 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -2039,8 +2039,7 @@ extern int ext4_resize_fs(struct super_block *sb, ext4_fsblk_t n_blocks_count);
extern int ext4_calculate_overhead(struct super_block *sb);
extern int ext4_superblock_csum_verify(struct super_block *sb,
struct ext4_super_block *es);
-extern void ext4_superblock_csum_set(struct super_block *sb,
- struct ext4_super_block *es);
+extern void ext4_superblock_csum_set(struct super_block *sb);
extern void *ext4_kvmalloc(size_t size, gfp_t flags);
extern void *ext4_kvzalloc(size_t size, gfp_t flags);
extern void ext4_kvfree(void *ptr);
@@ -2323,9 +2322,7 @@ static inline void ext4_unlock_group(struct super_block *sb,
static inline void ext4_mark_super_dirty(struct super_block *sb)
{
- struct ext4_super_block *es = EXT4_SB(sb)->s_es;
-
- ext4_superblock_csum_set(sb, es);
+ ext4_superblock_csum_set(sb);
if (EXT4_SB(sb)->s_journal == NULL)
sb->s_dirt =1;
}
diff --git a/fs/ext4/ext4_jbd2.c b/fs/ext4/ext4_jbd2.c
index 90f7c2e..f4f0ee6 100644
--- a/fs/ext4/ext4_jbd2.c
+++ b/fs/ext4/ext4_jbd2.c
@@ -145,15 +145,13 @@ int __ext4_handle_dirty_super(const char *where, unsigned int line,
int err = 0;
if (ext4_handle_valid(handle)) {
- ext4_superblock_csum_set(sb,
- (struct ext4_super_block *)bh->b_data);
+ ext4_superblock_csum_set(sb);
err = jbd2_journal_dirty_metadata(handle, bh);
if (err)
ext4_journal_abort_handle(where, line, __func__,
bh, handle, err);
} else if (now) {
- ext4_superblock_csum_set(sb,
- (struct ext4_super_block *)bh->b_data);
+ ext4_superblock_csum_set(sb);
mark_buffer_dirty(bh);
} else
sb->s_dirt = 1;
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 975405c..993392b 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -144,9 +144,10 @@ int ext4_superblock_csum_verify(struct super_block *sb,
return es->s_checksum == ext4_superblock_csum(sb, es);
}
-void ext4_superblock_csum_set(struct super_block *sb,
- struct ext4_super_block *es)
+void ext4_superblock_csum_set(struct super_block *sb)
{
+ struct ext4_super_block *es = EXT4_SB(sb)->s_es;
+
if (!EXT4_HAS_RO_COMPAT_FEATURE(sb,
EXT4_FEATURE_RO_COMPAT_METADATA_CSUM))
return;
@@ -4330,7 +4331,7 @@ static int ext4_commit_super(struct super_block *sb, int sync)
&EXT4_SB(sb)->s_freeinodes_counter));
sb->s_dirt = 0;
BUFFER_TRACE(sbh, "marking dirty");
- ext4_superblock_csum_set(sb, es);
+ ext4_superblock_csum_set(sb);
mark_buffer_dirty(sbh);
if (sync) {
error = sync_dirty_buffer(sbh);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Kees Cook <[email protected]>
commit 31fd84b95eb211d5db460a1dda85e004800a7b52 upstream.
The min/max call needed to have explicit types on some architectures
(e.g. mn10300). Use clamp_t instead to avoid the warning:
kernel/sys.c: In function 'override_release':
kernel/sys.c:1287:10: warning: comparison of distinct pointer types lacks a cast [enabled by default]
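As a rough illustration (not the actual kernel macro definitions), clamp_t()
bounds a value using one explicit type, which is what the nested min()/max()
pair was trying to do:

    /* conceptually, copy = clamp_t(size_t, len, 1, sizeof(buf)) means: */
    size_t copy = len;
    if (copy < (size_t)1)
            copy = 1;               /* lower bound */
    if (copy > sizeof(buf))
            copy = sizeof(buf);     /* upper bound */

Because every comparison above is done in size_t, the type-matching check
inside min()/max() no longer has mismatched operands to warn about.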
Reported-by: Fengguang Wu <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
kernel/sys.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sys.c b/kernel/sys.c
index 1b66408..b6fe559 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -1284,7 +1284,7 @@ static int override_release(char __user *release, size_t len)
rest++;
}
v = ((LINUX_VERSION_CODE >> 8) & 0xff) + 40;
- copy = min(sizeof(buf), max_t(size_t, 1, len));
+ copy = clamp_t(size_t, len, 1, sizeof(buf));
copy = scnprintf(buf, copy, "2.6.%u%s", v, rest);
ret = copy_to_user(release, buf, copy + 1);
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Michal Hocko <[email protected]>
commit 7386cdbf2f57ea8cff3c9fde93f206e58b9fe13f upstream.
Git commit 09a1d34f8535ecf9 "nohz: Make idle/iowait counter update
conditional" introduced a bug with regard to cpu hotplug. The effect is
that the number of idle ticks in the cpu summary line in /proc/stat
still counts ticks for offline cpus.
Reproduction is easy: just start a workload that keeps all cpus busy,
switch off one or more cpus, and then watch the idle field in top.
On a dual-core with one cpu 100% busy and one offline cpu you will get
something like this:
%Cpu(s): 48.7 us, 1.3 sy, 0.0 ni, 50.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
The problem is that an offline cpu still has ts->idle_active == 1.
To fix this, we should make sure that the cpu is online before calling
get_cpu_idle_time_us() and get_cpu_iowait_time_us().
[Srivatsa: Rebased to current mainline]
Reported-by: Martin Schwidefsky <[email protected]>
Signed-off-by: Michal Hocko <[email protected]>
Reviewed-by: Srivatsa S. Bhat <[email protected]>
Signed-off-by: Srivatsa S. Bhat <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Cc: [email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/proc/stat.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/fs/proc/stat.c b/fs/proc/stat.c
index 64c3b31..e296572 100644
--- a/fs/proc/stat.c
+++ b/fs/proc/stat.c
@@ -45,10 +45,13 @@ static cputime64_t get_iowait_time(int cpu)
static u64 get_idle_time(int cpu)
{
- u64 idle, idle_time = get_cpu_idle_time_us(cpu, NULL);
+ u64 idle, idle_time = -1ULL;
+
+ if (cpu_online(cpu))
+ idle_time = get_cpu_idle_time_us(cpu, NULL);
if (idle_time == -1ULL)
- /* !NO_HZ so we can rely on cpustat.idle */
+ /* !NO_HZ or cpu offline so we can rely on cpustat.idle */
idle = kcpustat_cpu(cpu).cpustat[CPUTIME_IDLE];
else
idle = usecs_to_cputime64(idle_time);
@@ -58,10 +61,13 @@ static u64 get_idle_time(int cpu)
static u64 get_iowait_time(int cpu)
{
- u64 iowait, iowait_time = get_cpu_iowait_time_us(cpu, NULL);
+ u64 iowait, iowait_time = -1ULL;
+
+ if (cpu_online(cpu))
+ iowait_time = get_cpu_iowait_time_us(cpu, NULL);
if (iowait_time == -1ULL)
- /* !NO_HZ so we can rely on cpustat.iowait */
+ /* !NO_HZ or cpu offline so we can rely on cpustat.iowait */
iowait = kcpustat_cpu(cpu).cpustat[CPUTIME_IOWAIT];
else
iowait = usecs_to_cputime64(iowait_time);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Carpenter <[email protected]>
commit 44009105081b51417f311f4c3be0061870b6b8ed upstream.
The "event" variable is a u16 so the shift will always wrap to zero
making the line a no-op.
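A minimal illustration of why the cast is needed (values are made up):

    u16 event = 0x0f00;                 /* only bits 8-11 are of interest */
    u64 val = 0;

    val |= (event & 0x0F00) << 24;      /* shift is done as a 32-bit int:
                                         * bits 8-11 land at positions 32-35,
                                         * which do not exist in an int, so
                                         * nothing ever reaches val */
    val |= (u64)(event & 0x0F00) << 24; /* cast first: the shift happens in
                                         * 64 bits and the bits are kept */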
Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: Robert Richter <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/x86/oprofile/nmi_int.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/oprofile/nmi_int.c b/arch/x86/oprofile/nmi_int.c
index 26b8a85..48768df 100644
--- a/arch/x86/oprofile/nmi_int.c
+++ b/arch/x86/oprofile/nmi_int.c
@@ -55,7 +55,7 @@ u64 op_x86_get_ctrl(struct op_x86_model_spec const *model,
val |= counter_config->extra;
event &= model->event_mask ? model->event_mask : 0xFF;
val |= event & 0xFF;
- val |= (event & 0x0F00) << 24;
+ val |= (u64)(event & 0x0F00) << 24;
return val;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Piotr Haber <[email protected]>
commit 1fffa905adffbf0d3767fc978ef09afb830275eb upstream.
When cores are unregistered, entries need to be removed from the cores
list in a safe manner.
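For context, a sketch of why the plain iterator is unsafe here (illustrative,
not a new API):

    /* list_for_each_entry() re-reads core->list.next at the top of each
     * iteration; if device_unregister() drops the last reference and the
     * core is freed, that read becomes a use-after-free.  The _safe
     * variant caches the next entry in 'tmp' before the body runs:
     */
    list_for_each_entry_safe(core, tmp, &bus->cores, list) {
            list_del(&core->list);          /* ok: 'tmp' already saved */
            if (core->dev_registered)
                    device_unregister(&core->dev);
    }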
Reported-by: Stanislaw Gruszka <[email protected]>
Reviewed-by: Arend Van Spriel <[email protected]>
Signed-off-by: Piotr Haber <[email protected]>
Signed-off-by: John W. Linville <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/bcma/main.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/bcma/main.c b/drivers/bcma/main.c
index 7e138ec..edb9233 100644
--- a/drivers/bcma/main.c
+++ b/drivers/bcma/main.c
@@ -131,9 +131,10 @@ static int bcma_register_cores(struct bcma_bus *bus)
static void bcma_unregister_cores(struct bcma_bus *bus)
{
- struct bcma_device *core;
+ struct bcma_device *core, *tmp;
- list_for_each_entry(core, &bus->cores, list) {
+ list_for_each_entry_safe(core, tmp, &bus->cores, list) {
+ list_del(&core->list);
if (core->dev_registered)
device_unregister(&core->dev);
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johannes Berg <[email protected]>
commit 8f7b8db6e0557c8437adf9371e020cd89a7e85dc upstream.
The channel switch command for 6000 series devices
is larger than the maximum inline command size of
320 bytes. The command is therefore refused with a
warning. Fix this by allocating the command and
using the NOCOPY mechanism.
Reviewed-by: Emmanuel Grumbach <[email protected]>
Signed-off-by: Johannes Berg <[email protected]>
[ herton: file name is different on 3.5, code differs a little bit at
the end, adjusted context ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/net/wireless/iwlwifi/iwl-agn-devices.c | 39 +++++++++++++++---------
1 file changed, 24 insertions(+), 15 deletions(-)
diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-devices.c b/drivers/net/wireless/iwlwifi/iwl-agn-devices.c
index 48533b3..8ab0a7c 100644
--- a/drivers/net/wireless/iwlwifi/iwl-agn-devices.c
+++ b/drivers/net/wireless/iwlwifi/iwl-agn-devices.c
@@ -653,7 +653,7 @@ static int iwl6000_hw_channel_switch(struct iwl_priv *priv,
* See iwlagn_mac_channel_switch.
*/
struct iwl_rxon_context *ctx = &priv->contexts[IWL_RXON_CTX_BSS];
- struct iwl6000_channel_switch_cmd cmd;
+ struct iwl6000_channel_switch_cmd *cmd;
const struct iwl_channel_info *ch_info;
u32 switch_time_in_usec, ucode_switch_time;
u16 ch;
@@ -663,18 +663,25 @@ static int iwl6000_hw_channel_switch(struct iwl_priv *priv,
struct ieee80211_vif *vif = ctx->vif;
struct iwl_host_cmd hcmd = {
.id = REPLY_CHANNEL_SWITCH,
- .len = { sizeof(cmd), },
+ .len = { sizeof(*cmd), },
.flags = CMD_SYNC,
- .data = { &cmd, },
+ .dataflags[0] = IWL_HCMD_DFL_NOCOPY,
};
+ int err;
- cmd.band = priv->band == IEEE80211_BAND_2GHZ;
+ cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
+ if (!cmd)
+ return -ENOMEM;
+
+ hcmd.data[0] = cmd;
+
+ cmd->band = priv->band == IEEE80211_BAND_2GHZ;
ch = ch_switch->channel->hw_value;
IWL_DEBUG_11H(priv, "channel switch from %u to %u\n",
ctx->active.channel, ch);
- cmd.channel = cpu_to_le16(ch);
- cmd.rxon_flags = ctx->staging.flags;
- cmd.rxon_filter_flags = ctx->staging.filter_flags;
+ cmd->channel = cpu_to_le16(ch);
+ cmd->rxon_flags = ctx->staging.flags;
+ cmd->rxon_filter_flags = ctx->staging.filter_flags;
switch_count = ch_switch->count;
tsf_low = ch_switch->timestamp & 0x0ffffffff;
/*
@@ -690,30 +697,32 @@ static int iwl6000_hw_channel_switch(struct iwl_priv *priv,
switch_count = 0;
}
if (switch_count <= 1)
- cmd.switch_time = cpu_to_le32(priv->ucode_beacon_time);
+ cmd->switch_time = cpu_to_le32(priv->ucode_beacon_time);
else {
switch_time_in_usec =
vif->bss_conf.beacon_int * switch_count * TIME_UNIT;
ucode_switch_time = iwl_usecs_to_beacons(priv,
switch_time_in_usec,
beacon_interval);
- cmd.switch_time = iwl_add_beacon_time(priv,
- priv->ucode_beacon_time,
- ucode_switch_time,
- beacon_interval);
+ cmd->switch_time = iwl_add_beacon_time(priv,
+ priv->ucode_beacon_time,
+ ucode_switch_time,
+ beacon_interval);
}
IWL_DEBUG_11H(priv, "uCode time for the switch is 0x%x\n",
- cmd.switch_time);
+ cmd->switch_time);
ch_info = iwl_get_channel_info(priv, priv->band, ch);
if (ch_info)
- cmd.expect_beacon = is_channel_radar(ch_info);
+ cmd->expect_beacon = is_channel_radar(ch_info);
else {
IWL_ERR(priv, "invalid channel switch from %u to %u\n",
ctx->active.channel, ch);
return -EFAULT;
}
- return iwl_dvm_send_cmd(priv, &hcmd);
+ err = iwl_dvm_send_cmd(priv, &hcmd);
+ kfree(cmd);
+ return err;
}
struct iwl_lib_ops iwl6000_lib = {
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Felix Fietkau <[email protected]>
commit d4fa14cd62bd078c8e3ef39283b9f237e5b2ff0f upstream.
Free tx status skbs when draining power save buffers, pending frames, or
when tearing down a vif.
Fixes remaining conditions that can lead to hostapd/wpa_supplicant hangs when
running out of socket write memory.
Signed-off-by: Felix Fietkau <[email protected]>
Signed-off-by: John W. Linville <[email protected]>
[ herton: adjust context ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/mac80211/iface.c | 2 +-
net/mac80211/sta_info.c | 4 ++--
net/mac80211/util.c | 4 ++--
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
index 8664111..4b4d9c5 100644
--- a/net/mac80211/iface.c
+++ b/net/mac80211/iface.c
@@ -711,7 +711,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata,
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
if (info->control.vif == &sdata->vif) {
__skb_unlink(skb, &local->pending[i]);
- dev_kfree_skb_irq(skb);
+ ieee80211_free_txskb(&local->hw, skb);
}
}
}
diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
index de455f8..277bd96 100644
--- a/net/mac80211/sta_info.c
+++ b/net/mac80211/sta_info.c
@@ -591,7 +591,7 @@ static bool sta_info_cleanup_expire_buffered_ac(struct ieee80211_local *local,
*/
if (!skb)
break;
- dev_kfree_skb(skb);
+ ieee80211_free_txskb(&local->hw, skb);
}
/*
@@ -622,7 +622,7 @@ static bool sta_info_cleanup_expire_buffered_ac(struct ieee80211_local *local,
printk(KERN_DEBUG "Buffered frame expired (STA %pM)\n",
sta->sta.addr);
#endif
- dev_kfree_skb(skb);
+ ieee80211_free_txskb(&local->hw, skb);
}
/*
diff --git a/net/mac80211/util.c b/net/mac80211/util.c
index f564b5e..92d84f5 100644
--- a/net/mac80211/util.c
+++ b/net/mac80211/util.c
@@ -400,7 +400,7 @@ void ieee80211_add_pending_skb(struct ieee80211_local *local,
int queue = info->hw_queue;
if (WARN_ON(!info->control.vif)) {
- kfree_skb(skb);
+ ieee80211_free_txskb(&local->hw, skb);
return;
}
@@ -425,7 +425,7 @@ void ieee80211_add_pending_skbs_fn(struct ieee80211_local *local,
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
if (WARN_ON(!info->control.vif)) {
- kfree_skb(skb);
+ ieee80211_free_txskb(&local->hw, skb);
continue;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Pritesh Raithatha <[email protected]>
commit 154f3ebf53edcfbe28728452b4ab37a118581125 upstream.
Signed-off-by: Pritesh Raithatha <[email protected]>
Acked-by: Stephen Warren <[email protected]>
Tested-by: Stephen Warren <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/pinctrl/pinctrl-tegra.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/pinctrl/pinctrl-tegra.c b/drivers/pinctrl/pinctrl-tegra.c
index b693486..2c9166d 100644
--- a/drivers/pinctrl/pinctrl-tegra.c
+++ b/drivers/pinctrl/pinctrl-tegra.c
@@ -466,7 +466,7 @@ static int tegra_pinconf_reg(struct tegra_pmx *pmx,
*bank = g->drv_bank;
*reg = g->drv_reg;
*bit = g->lpmd_bit;
- *width = 1;
+ *width = 2;
break;
case TEGRA_PINCONF_PARAM_DRIVE_DOWN_STRENGTH:
*bank = g->drv_bank;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Heiko Carstens <[email protected]>
commit c985cb37f1b39c2c8035af741a2a0b79f1fbaca7 upstream.
Because of a change in the s390 arch backend of binutils (commit 23ecd77
"Pick the default arch depending on the target size" in the binutils
repo), 31-bit builds will fail since the linker would now try to create
64-bit binary output.
Fix this by setting OUTPUT_ARCH to s390:31-bit instead of s390.
Thanks to Andreas Krebbel for figuring out the issue.
Fixes this build error:
LD init/built-in.o
s390x-4.7.2-ld: s390:31-bit architecture of input file
`arch/s390/kernel/head.o' is incompatible with s390:64-bit output
Cc: Andreas Krebbel <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Martin Schwidefsky <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/s390/boot/compressed/vmlinux.lds.S | 2 +-
arch/s390/kernel/vmlinux.lds.S | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/s390/boot/compressed/vmlinux.lds.S b/arch/s390/boot/compressed/vmlinux.lds.S
index d80f79d..8e1fb82 100644
--- a/arch/s390/boot/compressed/vmlinux.lds.S
+++ b/arch/s390/boot/compressed/vmlinux.lds.S
@@ -5,7 +5,7 @@ OUTPUT_FORMAT("elf64-s390", "elf64-s390", "elf64-s390")
OUTPUT_ARCH(s390:64-bit)
#else
OUTPUT_FORMAT("elf32-s390", "elf32-s390", "elf32-s390")
-OUTPUT_ARCH(s390)
+OUTPUT_ARCH(s390:31-bit)
#endif
ENTRY(startup)
diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
index 21109c6..1343d7c 100644
--- a/arch/s390/kernel/vmlinux.lds.S
+++ b/arch/s390/kernel/vmlinux.lds.S
@@ -8,7 +8,7 @@
#ifndef CONFIG_64BIT
OUTPUT_FORMAT("elf32-s390", "elf32-s390", "elf32-s390")
-OUTPUT_ARCH(s390)
+OUTPUT_ARCH(s390:31-bit)
ENTRY(startup)
jiffies = jiffies_64 + 4;
#else
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Stefán Freyr <[email protected]>
commit 84f98fdf7865fbd35b312eb39ea91e5618c514c7 upstream.
I have a Lenovo ThinkPad T430 and an UltraBase Series 3 docking
station.
Without this patch, if I plug my headphones into the jack on the
computer, everything works fine. The computer speakers mute and the
audio is played in the headphones. However, if I plug into the docking
station headphone jack the computer speakers are muted but there is no
audio in the headphones.
Addresses https://bugs.launchpad.net/bugs/1060372
Signed-off-by: Joseph Salisbury <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
sound/pci/hda/patch_realtek.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index 2549f0b..e996453 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -6028,6 +6028,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x17aa, 0x21e9, "Thinkpad Edge 15", ALC269_FIXUP_SKU_IGNORE),
SND_PCI_QUIRK(0x17aa, 0x21f6, "Thinkpad T530", ALC269_FIXUP_LENOVO_DOCK),
SND_PCI_QUIRK(0x17aa, 0x21fa, "Thinkpad X230", ALC269_FIXUP_LENOVO_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x21f3, "Thinkpad T430", ALC269_FIXUP_LENOVO_DOCK),
SND_PCI_QUIRK(0x17aa, 0x21fb, "Thinkpad T430s", ALC269_FIXUP_LENOVO_DOCK),
SND_PCI_QUIRK(0x17aa, 0x2203, "Thinkpad X230 Tablet", ALC269_FIXUP_LENOVO_DOCK),
SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Tao Ma <[email protected]>
commit 79f1ba49569e5aec919b653c55b03274c2331701 upstream.
In mke2fs, we checksum the whole bitmap block, which is correct. In the
kernel, however, we use EXT4_BLOCKS_PER_GROUP to indicate the size of the
checksummed bitmap, which is wrong when bigalloc is enabled. The right
size should be EXT4_CLUSTERS_PER_GROUP, and this patch fixes it.
Also, as every caller of ext4_block_bitmap_csum_set and
ext4_block_bitmap_csum_verify passes in EXT4_BLOCKS_PER_GROUP(sb)/8, we'd
better remove this parameter and set it in the function itself.
Signed-off-by: Tao Ma <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Reviewed-by: Lukas Czerner <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ext4/balloc.c | 8 +++-----
fs/ext4/bitmap.c | 6 ++++--
fs/ext4/ext4.h | 4 ++--
fs/ext4/ialloc.c | 4 +---
fs/ext4/mballoc.c | 9 +++------
fs/ext4/resize.c | 3 +--
6 files changed, 14 insertions(+), 20 deletions(-)
diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
index 1b50890..cf18217 100644
--- a/fs/ext4/balloc.c
+++ b/fs/ext4/balloc.c
@@ -174,8 +174,7 @@ void ext4_init_block_bitmap(struct super_block *sb, struct buffer_head *bh,
ext4_free_inodes_set(sb, gdp, 0);
ext4_itable_unused_set(sb, gdp, 0);
memset(bh->b_data, 0xff, sb->s_blocksize);
- ext4_block_bitmap_csum_set(sb, block_group, gdp, bh,
- EXT4_BLOCKS_PER_GROUP(sb) / 8);
+ ext4_block_bitmap_csum_set(sb, block_group, gdp, bh);
return;
}
memset(bh->b_data, 0, sb->s_blocksize);
@@ -212,8 +211,7 @@ void ext4_init_block_bitmap(struct super_block *sb, struct buffer_head *bh,
*/
ext4_mark_bitmap_end(num_clusters_in_group(sb, block_group),
sb->s_blocksize * 8, bh->b_data);
- ext4_block_bitmap_csum_set(sb, block_group, gdp, bh,
- EXT4_BLOCKS_PER_GROUP(sb) / 8);
+ ext4_block_bitmap_csum_set(sb, block_group, gdp, bh);
ext4_group_desc_csum_set(sb, block_group, gdp);
}
@@ -350,7 +348,7 @@ void ext4_validate_block_bitmap(struct super_block *sb,
return;
}
if (unlikely(!ext4_block_bitmap_csum_verify(sb, block_group,
- desc, bh, EXT4_BLOCKS_PER_GROUP(sb) / 8))) {
+ desc, bh))) {
ext4_unlock_group(sb, block_group);
ext4_error(sb, "bg %u: bad block bitmap checksum", block_group);
return;
diff --git a/fs/ext4/bitmap.c b/fs/ext4/bitmap.c
index ad9d96e..e76c10e 100644
--- a/fs/ext4/bitmap.c
+++ b/fs/ext4/bitmap.c
@@ -65,11 +65,12 @@ void ext4_inode_bitmap_csum_set(struct super_block *sb, ext4_group_t group,
int ext4_block_bitmap_csum_verify(struct super_block *sb, ext4_group_t group,
struct ext4_group_desc *gdp,
- struct buffer_head *bh, int sz)
+ struct buffer_head *bh)
{
__u32 hi;
__u32 provided, calculated;
struct ext4_sb_info *sbi = EXT4_SB(sb);
+ int sz = EXT4_CLUSTERS_PER_GROUP(sb) / 8;
if (!EXT4_HAS_RO_COMPAT_FEATURE(sb,
EXT4_FEATURE_RO_COMPAT_METADATA_CSUM))
@@ -91,8 +92,9 @@ int ext4_block_bitmap_csum_verify(struct super_block *sb, ext4_group_t group,
void ext4_block_bitmap_csum_set(struct super_block *sb, ext4_group_t group,
struct ext4_group_desc *gdp,
- struct buffer_head *bh, int sz)
+ struct buffer_head *bh)
{
+ int sz = EXT4_CLUSTERS_PER_GROUP(sb) / 8;
__u32 csum;
struct ext4_sb_info *sbi = EXT4_SB(sb);
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 42f5a18..a14c45e 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -1860,10 +1860,10 @@ int ext4_inode_bitmap_csum_verify(struct super_block *sb, ext4_group_t group,
struct buffer_head *bh, int sz);
void ext4_block_bitmap_csum_set(struct super_block *sb, ext4_group_t group,
struct ext4_group_desc *gdp,
- struct buffer_head *bh, int sz);
+ struct buffer_head *bh);
int ext4_block_bitmap_csum_verify(struct super_block *sb, ext4_group_t group,
struct ext4_group_desc *gdp,
- struct buffer_head *bh, int sz);
+ struct buffer_head *bh);
/* balloc.c */
extern void ext4_validate_block_bitmap(struct super_block *sb,
diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
index 6866bc2..76dcdcf 100644
--- a/fs/ext4/ialloc.c
+++ b/fs/ext4/ialloc.c
@@ -754,9 +754,7 @@ got:
ext4_free_group_clusters_set(sb, gdp,
ext4_free_clusters_after_init(sb, group, gdp));
ext4_block_bitmap_csum_set(sb, group, gdp,
- block_bitmap_bh,
- EXT4_BLOCKS_PER_GROUP(sb) /
- 8);
+ block_bitmap_bh);
ext4_group_desc_csum_set(sb, group, gdp);
}
ext4_unlock_group(sb, group);
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 1cd6994..70f97a5 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -2798,8 +2798,7 @@ ext4_mb_mark_diskspace_used(struct ext4_allocation_context *ac,
}
len = ext4_free_group_clusters(sb, gdp) - ac->ac_b_ex.fe_len;
ext4_free_group_clusters_set(sb, gdp, len);
- ext4_block_bitmap_csum_set(sb, ac->ac_b_ex.fe_group, gdp, bitmap_bh,
- EXT4_BLOCKS_PER_GROUP(sb) / 8);
+ ext4_block_bitmap_csum_set(sb, ac->ac_b_ex.fe_group, gdp, bitmap_bh);
ext4_group_desc_csum_set(sb, ac->ac_b_ex.fe_group, gdp);
ext4_unlock_group(sb, ac->ac_b_ex.fe_group);
@@ -4659,8 +4658,7 @@ do_more:
ret = ext4_free_group_clusters(sb, gdp) + count_clusters;
ext4_free_group_clusters_set(sb, gdp, ret);
- ext4_block_bitmap_csum_set(sb, block_group, gdp, bitmap_bh,
- EXT4_BLOCKS_PER_GROUP(sb) / 8);
+ ext4_block_bitmap_csum_set(sb, block_group, gdp, bitmap_bh);
ext4_group_desc_csum_set(sb, block_group, gdp);
ext4_unlock_group(sb, block_group);
percpu_counter_add(&sbi->s_freeclusters_counter, count_clusters);
@@ -4805,8 +4803,7 @@ int ext4_group_add_blocks(handle_t *handle, struct super_block *sb,
mb_free_blocks(NULL, &e4b, bit, count);
blk_free_count = blocks_freed + ext4_free_group_clusters(sb, desc);
ext4_free_group_clusters_set(sb, desc, blk_free_count);
- ext4_block_bitmap_csum_set(sb, block_group, desc, bitmap_bh,
- EXT4_BLOCKS_PER_GROUP(sb) / 8);
+ ext4_block_bitmap_csum_set(sb, block_group, desc, bitmap_bh);
ext4_group_desc_csum_set(sb, block_group, desc);
ext4_unlock_group(sb, block_group);
percpu_counter_add(&sbi->s_freeclusters_counter,
diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
index dc1affc..367510e 100644
--- a/fs/ext4/resize.c
+++ b/fs/ext4/resize.c
@@ -1119,8 +1119,7 @@ static int ext4_set_bitmap_checksums(struct super_block *sb,
bh = ext4_get_bitmap(sb, group_data->block_bitmap);
if (!bh)
return -EIO;
- ext4_block_bitmap_csum_set(sb, group, gdp, bh,
- EXT4_BLOCKS_PER_GROUP(sb) / 8);
+ ext4_block_bitmap_csum_set(sb, group, gdp, bh);
brelse(bh);
return 0;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
commit 5f40b909728ad784eb43aa309d3c4e9bdf050781 upstream.
When booting a secondary CPU, the primary CPU hands two sets of page
tables via the secondary_data struct:
(1) swapper_pg_dir: a normal, cacheable, shared (if SMP) mapping
of the kernel image (i.e. the tables used by init_mm).
(2) idmap_pgd: an uncached mapping of the .idmap.text ELF
section.
The idmap is generally used when enabling and disabling the MMU, which
includes early CPU boot. In this case, the secondary CPU switches to
swapper as soon as it enters C code:
struct mm_struct *mm = &init_mm;
unsigned int cpu = smp_processor_id();
/*
* All kernel threads share the same mm context; grab a
* reference and switch to it.
*/
atomic_inc(&mm->mm_count);
current->active_mm = mm;
cpumask_set_cpu(cpu, mm_cpumask(mm));
cpu_switch_mm(mm->pgd, mm);
This causes a problem on ARMv7, where the identity mapping is treated as
strongly-ordered, leading to architecturally UNPREDICTABLE behaviour of
exclusive accesses, such as those used by atomic_inc.
This patch re-orders the secondary_start_kernel function so that we
switch to swapper before performing any exclusive accesses.
Cc: David McKay <[email protected]>
Reported-by: Gilles Chanteperdrix <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/arm/kernel/smp.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index ea73045..dbde2b9 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -222,18 +222,24 @@ static void percpu_timer_setup(void);
asmlinkage void __cpuinit secondary_start_kernel(void)
{
struct mm_struct *mm = &init_mm;
- unsigned int cpu = smp_processor_id();
+ unsigned int cpu;
+
+ /*
+ * The identity mapping is uncached (strongly ordered), so
+ * switch away from it before attempting any exclusive accesses.
+ */
+ cpu_switch_mm(mm->pgd, mm);
+ enter_lazy_tlb(mm, current);
+ local_flush_tlb_all();
/*
* All kernel threads share the same mm context; grab a
* reference and switch to it.
*/
+ cpu = smp_processor_id();
atomic_inc(&mm->mm_count);
current->active_mm = mm;
cpumask_set_cpu(cpu, mm_cpumask(mm));
- cpu_switch_mm(mm->pgd, mm);
- enter_lazy_tlb(mm, current);
- local_flush_tlb_all();
printk("CPU%u: Booted secondary processor\n", cpu);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Arve Hjønnevåg <[email protected]>
commit 675d66b0ed5fd170d6a44cf8dbb3fa56a5347bdb upstream.
If a thread or process exited while a reply, one-way transaction or
death notification was pending, the struct holding the pending work
was leaked.
Signed-off-by: Arve Hjønnevåg <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/staging/android/binder.c | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/android/binder.c b/drivers/staging/android/binder.c
index c283212..80acd0e 100644
--- a/drivers/staging/android/binder.c
+++ b/drivers/staging/android/binder.c
@@ -2507,14 +2507,38 @@ static void binder_release_work(struct list_head *list)
struct binder_transaction *t;
t = container_of(w, struct binder_transaction, work);
- if (t->buffer->target_node && !(t->flags & TF_ONE_WAY))
+ if (t->buffer->target_node &&
+ !(t->flags & TF_ONE_WAY)) {
binder_send_failed_reply(t, BR_DEAD_REPLY);
+ } else {
+ binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
+ "binder: undelivered transaction %d\n",
+ t->debug_id);
+ t->buffer->transaction = NULL;
+ kfree(t);
+ binder_stats_deleted(BINDER_STAT_TRANSACTION);
+ }
} break;
case BINDER_WORK_TRANSACTION_COMPLETE: {
+ binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
+ "binder: undelivered TRANSACTION_COMPLETE\n");
kfree(w);
binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
} break;
+ case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
+ case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
+ struct binder_ref_death *death;
+
+ death = container_of(w, struct binder_ref_death, work);
+ binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
+ "binder: undelivered death notification, %p\n",
+ death->cookie);
+ kfree(death);
+ binder_stats_deleted(BINDER_STAT_DEATH);
+ } break;
default:
+ pr_err("binder: unexpected work type, %d, not freed\n",
+ w->type);
break;
}
}
@@ -2984,6 +3008,7 @@ static void binder_deferred_release(struct binder_proc *proc)
nodes++;
rb_erase(&node->rb_node, &proc->nodes);
list_del_init(&node->work.entry);
+ binder_release_work(&node->async_todo);
if (hlist_empty(&node->refs)) {
kfree(node);
binder_stats_deleted(BINDER_STAT_NODE);
@@ -3022,6 +3047,7 @@ static void binder_deferred_release(struct binder_proc *proc)
binder_delete_ref(ref);
}
binder_release_work(&proc->todo);
+ binder_release_work(&proc->delivered_death);
buffers = 0;
while ((n = rb_first(&proc->allocated_buffers))) {
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Lukas Czerner <[email protected]>
commit 5de35e8d5c02d271c20e18337e01bc20e6ef472e upstream.
Currently, if the len argument in ext4_trim_fs() is smaller than one
block, the 'end' variable underflows. Avoid that by returning EINVAL if
len is smaller than the file system block size.
Also remove a useless unlikely().
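A worked example of the underflow (numbers are illustrative):

    /* from ext4_trim_fs(), with a 4096-byte block size (blocksize_bits == 12) */
    end = start + (range->len >> sb->s_blocksize_bits) - 1;

    /* range->len == 512 and start == 0:
     *   512 >> 12 == 0, so end = 0 + 0 - 1
     * which wraps around in the unsigned 'end' instead of flagging the
     * bogus request -- hence the new range->len < sb->s_blocksize check.
     */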
Signed-off-by: Lukas Czerner <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ext4/mballoc.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 70f97a5..af848bb 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -4983,8 +4983,9 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
end = start + (range->len >> sb->s_blocksize_bits) - 1;
minlen = range->minlen >> sb->s_blocksize_bits;
- if (unlikely(minlen > EXT4_CLUSTERS_PER_GROUP(sb)) ||
- unlikely(start >= max_blks))
+ if (minlen > EXT4_CLUSTERS_PER_GROUP(sb) ||
+ start >= max_blks ||
+ range->len < sb->s_blocksize)
return -EINVAL;
if (end >= max_blks)
end = max_blks - 1;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Chris Metcalf <[email protected]>
commit 627072b06c362bbe7dc256f618aaa63351f0cfe6 upstream.
The tile tool chain uses the .eh_frame information for backtracing.
The vmlinux build drops any .eh_frame sections at link time, but when
present in kernel modules, it causes a module load failure due to the
presence of unsupported pc-relative relocations. When compiling to
use compiler feedback support, the compiler by default omits .eh_frame
information, so we don't see this problem. But when not using feedback,
we need to explicitly suppress the .eh_frame.
Signed-off-by: Chris Metcalf <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/tile/Makefile | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/tile/Makefile b/arch/tile/Makefile
index e20b0a0..7da1015 100644
--- a/arch/tile/Makefile
+++ b/arch/tile/Makefile
@@ -26,6 +26,10 @@ $(error Set TILERA_ROOT or CROSS_COMPILE when building $(ARCH) on $(HOST_ARCH))
endif
endif
+# The tile compiler may emit .eh_frame information for backtracing.
+# In kernel modules, this causes load failures due to unsupported relocations.
+KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
+
ifneq ($(CONFIG_DEBUG_EXTRA_FLAGS),"")
KBUILD_CFLAGS += $(CONFIG_DEBUG_EXTRA_FLAGS)
endif
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Deucher <[email protected]>
commit c71721324c612f7f040657ce9917d87f530f9784 upstream.
So we know why the CS was rejected.
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/gpu/drm/radeon/evergreen_cs.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c b/drivers/gpu/drm/radeon/evergreen_cs.c
index f2e5c54..49231aa 100644
--- a/drivers/gpu/drm/radeon/evergreen_cs.c
+++ b/drivers/gpu/drm/radeon/evergreen_cs.c
@@ -2777,6 +2777,7 @@ static bool evergreen_vm_reg_valid(u32 reg)
case CAYMAN_SQ_EX_ALLOC_TABLE_SLOTS:
return true;
default:
+ DRM_ERROR("Invalid register 0x%x in CS\n", reg);
return false;
}
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Oliver Neukum <[email protected]>
commit 966e7a854177097083683176ced871558b631a12 upstream.
An le16 is accessed without conversion.
This patch should be backported to kernels as old as 3.5, that contain
the commit e3567d2c15a7a8e2f992a5f7c7683453ca406d82 "xhci: Add Intel
U1/U2 timeout policy."
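A generic sketch of what the conversion does (not the xHCI code itself; the
variable names are placeholders):

    __le16 raw = dev_exit_latency;      /* descriptor fields are little-endian
                                         * as stored by the device */
    u32 ns;

    ns = raw * 1000;                    /* wrong: byte-swapped value on
                                         * big-endian hosts */
    ns = le16_to_cpu(raw) * 1000;       /* correct everywhere; le16_to_cpu()
                                         * is a no-op on little-endian hosts */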
Signed-off-by: Oliver Neukum <[email protected]>
Signed-off-by: Sarah Sharp <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/host/xhci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index ad29888..1f9b8a4 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -4141,7 +4141,7 @@ static u16 xhci_calculate_intel_u2_timeout(struct usb_device *udev,
(xhci_service_interval_to_ns(desc) > timeout_ns))
timeout_ns = xhci_service_interval_to_ns(desc);
- u2_del_ns = udev->bos->ss_cap->bU2DevExitLat * 1000;
+ u2_del_ns = le16_to_cpu(udev->bos->ss_cap->bU2DevExitLat) * 1000ULL;
if (u2_del_ns > timeout_ns)
timeout_ns = u2_del_ns;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Wei Yongjun <[email protected]>
commit 720dfd250e48a8c7fd1b2b8645955413989c4ee0 upstream.
Add the missing unlock on the error handling path in function
imxdma_xfer_desc().
Signed-off-by: Wei Yongjun <[email protected]>
Signed-off-by: Vinod Koul <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/dma/imx-dma.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/dma/imx-dma.c b/drivers/dma/imx-dma.c
index 5084975..8aa9113 100644
--- a/drivers/dma/imx-dma.c
+++ b/drivers/dma/imx-dma.c
@@ -474,8 +474,10 @@ static int imxdma_xfer_desc(struct imxdma_desc *d)
slot = i;
break;
}
- if (slot < 0)
+ if (slot < 0) {
+ spin_unlock_irqrestore(&imxdma->lock, flags);
return -EBUSY;
+ }
imxdma->slots_2d[slot].xsr = d->x;
imxdma->slots_2d[slot].ysr = d->y;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Dave Young <[email protected]>
commit 7b16bbf97375d9fb7fc107b3f80afeb94a204e44 upstream.
Commit 722bc6b16771 ("x86/mm: Fix the size calculation of mapping
tables") tried to address the issue that the first 2/4M should use 4k
pages if PSE is enabled, but the extra counts should only be valid for
x86_32.
This commit caused a kdump regression: the kdump kernel hangs.
Work is in progress to fundamentally fix the various page table
initialization issues that we have, via the design suggested
by H. Peter Anvin, but it's not ready yet to be merged.
So, to get a working kdump, revert to the last known working version,
which is the revert of this commit and of a follow-up fix (which was
incomplete):
bd2753b2dda7 x86/mm: Only add extra pages count for the first memory range during pre-allocation
Tested kdump on physical and virtual machines.
Signed-off-by: Dave Young <[email protected]>
Acked-by: Yinghai Lu <[email protected]>
Acked-by: Cong Wang <[email protected]>
Acked-by: Flavio Leitner <[email protected]>
Tested-by: Flavio Leitner <[email protected]>
Cc: Dan Carpenter <[email protected]>
Cc: Cong Wang <[email protected]>
Cc: Flavio Leitner <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: [email protected]
Cc: Vivek Goyal <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Andrew Morton <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/x86/mm/init.c | 22 +++++++++-------------
1 file changed, 9 insertions(+), 13 deletions(-)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index bc4e9d8..37b2e6a 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -29,14 +29,8 @@ int direct_gbpages
#endif
;
-struct map_range {
- unsigned long start;
- unsigned long end;
- unsigned page_size_mask;
-};
-
-static void __init find_early_table_space(struct map_range *mr, unsigned long end,
- int use_pse, int use_gbpages)
+static void __init find_early_table_space(unsigned long end, int use_pse,
+ int use_gbpages)
{
unsigned long puds, pmds, ptes, tables, start = 0, good_end = end;
phys_addr_t base;
@@ -61,10 +55,6 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
#ifdef CONFIG_X86_32
extra += PMD_SIZE;
#endif
- /* The first 2/4M doesn't use large pages. */
- if (mr->start < PMD_SIZE)
- extra += mr->end - mr->start;
-
ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
} else
ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
@@ -95,6 +85,12 @@ void __init native_pagetable_reserve(u64 start, u64 end)
memblock_reserve(start, end - start);
}
+struct map_range {
+ unsigned long start;
+ unsigned long end;
+ unsigned page_size_mask;
+};
+
#ifdef CONFIG_X86_32
#define NR_RANGE_MR 3
#else /* CONFIG_X86_64 */
@@ -267,7 +263,7 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
* nodes are discovered.
*/
if (!after_bootmem)
- find_early_table_space(&mr[0], end, use_pse, use_gbpages);
+ find_early_table_space(end, use_pse, use_gbpages);
for (i = 0; i < nr_range; i++)
ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Barry Song <[email protected]>
commit 5997e089e4c3a7f0958a8fb0a54ec2b5a6f06168 upstream.
Either DEV_TO_MEM or MEM_TO_DEV is supported, so change the OR to an
AND.
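Spelled out, the old test rejected every request (an illustrative
simplification of the check in sirfsoc_dma_prep_interleaved()):

    /* xt->dir holds a single value, so one of the two inequalities is
     * always true and the OR of them is true for every direction:
     */
    if ((xt->dir != DMA_MEM_TO_DEV) || (xt->dir != DMA_DEV_TO_MEM))
            ;       /* always taken -> every request failed */

    /* with &&, only directions that match neither supported value fail: */
    if ((xt->dir != DMA_MEM_TO_DEV) && (xt->dir != DMA_DEV_TO_MEM))
            ;       /* taken only for genuinely unsupported directions */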
Signed-off-by: Barry Song <[email protected]>
Signed-off-by: Vinod Koul <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/dma/sirf-dma.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/dma/sirf-dma.c b/drivers/dma/sirf-dma.c
index 434ad31..35a329d 100644
--- a/drivers/dma/sirf-dma.c
+++ b/drivers/dma/sirf-dma.c
@@ -428,7 +428,7 @@ static struct dma_async_tx_descriptor *sirfsoc_dma_prep_interleaved(
unsigned long iflags;
int ret;
- if ((xt->dir != DMA_MEM_TO_DEV) || (xt->dir != DMA_DEV_TO_MEM)) {
+ if ((xt->dir != DMA_MEM_TO_DEV) && (xt->dir != DMA_DEV_TO_MEM)) {
ret = -EINVAL;
goto err_dir;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrew Morton <[email protected]>
commit 168bfeef7bba3f9784f7540b053e4ac72b769ce9 upstream.
If none of the elements in scrubrates[] matches, this loop will cause
__amd64_set_scrub_rate() to incorrectly use the n+1th element.
As the function is designed to use the final scrubrates[] element in the
case of no match, we can fix this bug by simply terminating the array
search at the n-1th element.
Boris: this code is fragile anyway, see here why:
http://marc.info/?l=linux-kernel&m=135102834131236&w=2
It will be rewritten more robustly soonish.
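To make the off-by-one concrete (sketch; the "skip non-recommended rates"
check and scrubrates[] contents are omitted):

    /* old: if nothing matches, the loop exits with i == ARRAY_SIZE(scrubrates) */
    for (i = 0; i < ARRAY_SIZE(scrubrates); i++) {
            if (scrubrates[i].bandwidth <= new_bw)
                    break;
    }
    scrubval = scrubrates[i].scrubval;      /* reads one element past the end */

    /* fixed: stop at the n-1th element, so the no-match case lands on the
     * final "turn scrubbing off" entry by design */
    for (i = 0; i < ARRAY_SIZE(scrubrates) - 1; i++) {
            if (scrubrates[i].bandwidth <= new_bw)
                    break;
    }
    scrubval = scrubrates[i].scrubval;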
Reported-by: Denis Kirjanov <[email protected]>
Cc: Doug Thompson <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/edac/amd64_edac.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
index 7be9b72..a814ea8 100644
--- a/drivers/edac/amd64_edac.c
+++ b/drivers/edac/amd64_edac.c
@@ -170,8 +170,11 @@ static int __amd64_set_scrub_rate(struct pci_dev *ctl, u32 new_bw, u32 min_rate)
* memory controller and apply to register. Search for the first
* bandwidth entry that is greater or equal than the setting requested
* and program that. If at last entry, turn off DRAM scrubbing.
+ *
+ * If no suitable bandwidth is found, turn off DRAM scrubbing entirely
+ * by falling back to the last element in scrubrates[].
*/
- for (i = 0; i < ARRAY_SIZE(scrubrates); i++) {
+ for (i = 0; i < ARRAY_SIZE(scrubrates) - 1; i++) {
/*
* skip scrub rates which aren't recommended
* (see F10 BKDG, F3x58)
@@ -181,12 +184,6 @@ static int __amd64_set_scrub_rate(struct pci_dev *ctl, u32 new_bw, u32 min_rate)
if (scrubrates[i].bandwidth <= new_bw)
break;
-
- /*
- * if no suitable bandwidth found, turn off DRAM scrubbing
- * entirely by falling back to the last element in the
- * scrubrates array.
- */
}
scrubval = scrubrates[i].scrubval;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust <[email protected]>
commit 4bc1e68ed6a8b59be8a79eb719be515a55c7bc68 upstream.
The call to xprt_disconnect_done() that is triggered by a successful
connection reset will trigger another automatic wakeup of all tasks
on the xprt->pending rpc_wait_queue. In particular it will cause an
early wake up of the task that called xprt_connect().
All we really want to do here is clear all the socket-specific state
flags, so we split that functionality out of xs_sock_mark_closed()
into a helper that can be called by xs_abort_connection().
Reported-by: Chris Perl <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Tested-by: Chris Perl <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/sunrpc/xprtsock.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index a3f1990..67c46ec 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1465,7 +1465,7 @@ static void xs_tcp_cancel_linger_timeout(struct rpc_xprt *xprt)
xprt_clear_connecting(xprt);
}
-static void xs_sock_mark_closed(struct rpc_xprt *xprt)
+static void xs_sock_reset_connection_flags(struct rpc_xprt *xprt)
{
smp_mb__before_clear_bit();
clear_bit(XPRT_CONNECTION_ABORT, &xprt->state);
@@ -1473,6 +1473,11 @@ static void xs_sock_mark_closed(struct rpc_xprt *xprt)
clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
clear_bit(XPRT_CLOSING, &xprt->state);
smp_mb__after_clear_bit();
+}
+
+static void xs_sock_mark_closed(struct rpc_xprt *xprt)
+{
+ xs_sock_reset_connection_flags(xprt);
/* Mark transport as closed and wake up all pending tasks */
xprt_disconnect_done(xprt);
}
@@ -2028,10 +2033,8 @@ static void xs_abort_connection(struct sock_xprt *transport)
any.sa_family = AF_UNSPEC;
result = kernel_connect(transport->sock, &any, sizeof(any), 0);
if (!result)
- xs_sock_mark_closed(&transport->xprt);
- else
- dprintk("RPC: AF_UNSPEC connect return code %d\n",
- result);
+ xs_sock_reset_connection_flags(&transport->xprt);
+ dprintk("RPC: AF_UNSPEC connect return code %d\n", result);
}
static void xs_tcp_reuse_connection(struct sock_xprt *transport)
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Hiro Sugawara <[email protected]>
commit d0078e72314df2e5ede03f2102cddde06767c374 upstream.
Fix a deadly typo in the macro definition.
Signed-off-by: Hiro Sugawara <[email protected]>
Signed-off-by: Hiroshi Doyu <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/iommu/tegra-smmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 3f3d09d..1fcc936 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -148,7 +148,7 @@
#define SMMU_ADDR_TO_PFN(addr) ((addr) >> 12)
#define SMMU_ADDR_TO_PDN(addr) ((addr) >> 22)
-#define SMMU_PDN_TO_ADDR(addr) ((pdn) << 22)
+#define SMMU_PDN_TO_ADDR(pdn) ((pdn) << 22)
#define _READABLE (1 << SMMU_PTB_DATA_ASID_READABLE_SHIFT)
#define _WRITABLE (1 << SMMU_PTB_DATA_ASID_WRITABLE_SHIFT)
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Josh Wu <[email protected]>
commit 11930c530f3edf81160e4962e363d579f5cdce7e upstream.
Signed-off-by: Josh Wu <[email protected]>
Signed-off-by: Nicolas Ferre <[email protected]>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
.../devicetree/bindings/arm/atmel-at91.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/devicetree/bindings/arm/atmel-at91.txt b/Documentation/devicetree/bindings/arm/atmel-at91.txt
index ecc81e3..d187e9f 100644
--- a/Documentation/devicetree/bindings/arm/atmel-at91.txt
+++ b/Documentation/devicetree/bindings/arm/atmel-at91.txt
@@ -8,7 +8,7 @@ PIT Timer required properties:
shared across all System Controller members.
TC/TCLIB Timer required properties:
-- compatible: Should be "atmel,<chip>-pit".
+- compatible: Should be "atmel,<chip>-tcb".
<chip> can be "at91rm9200" or "at91sam9x5"
- reg: Should contain registers location and length
- interrupts: Should contain all interrupts for the TC block
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Bo Shen <[email protected]>
commit 7840487cd6298f9f931103b558290d8d98d41c49 upstream.
The i2c core driver turns the platform device ID into the bus number.
When the platform device ID is -1, the bus number is assigned dynamically.
When writing code, we need to know the bus number in order to call
i2c_register_board_info(int busnum, ...) to register devices; if -1 is
used, we do not know its value.
In order to solve this issue, set the platform device ID to a fixed
number. Here 0 is used, to match the busnum used in
i2c_register_board_info().
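A sketch of how the fixed ID lines up with the board-info registration (the
board setup code and device name below are hypothetical):

    /* hypothetical board setup: devices registered for bus number 0 */
    static struct i2c_board_info __initdata board_i2c_devices[] = {
            { I2C_BOARD_INFO("example-codec", 0x1a) },   /* made-up device */
    };

    static void __init board_init(void)
    {
            i2c_register_board_info(0, board_i2c_devices,
                                    ARRAY_SIZE(board_i2c_devices));
    }

    /* ...which only lines up because the i2c-gpio adapter now has a fixed id: */
    static struct platform_device at91sam9260_twi_device = {
            .name              = "i2c-gpio",
            .id                = 0,          /* becomes i2c bus 0 */
            .dev.platform_data = &pdata,
    };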
Signed-off-by: Bo Shen <[email protected]>
Acked-by: Jean Delvare <[email protected]>
Signed-off-by: Nicolas Ferre <[email protected]>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <[email protected]>
Acked-by: Ludovic Desroches <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/arm/mach-at91/at91rm9200_devices.c | 2 +-
arch/arm/mach-at91/at91sam9260_devices.c | 2 +-
arch/arm/mach-at91/at91sam9261_devices.c | 2 +-
arch/arm/mach-at91/at91sam9263_devices.c | 2 +-
arch/arm/mach-at91/at91sam9rl_devices.c | 2 +-
5 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/arm/mach-at91/at91rm9200_devices.c b/arch/arm/mach-at91/at91rm9200_devices.c
index e6b7d05..6a466c3 100644
--- a/arch/arm/mach-at91/at91rm9200_devices.c
+++ b/arch/arm/mach-at91/at91rm9200_devices.c
@@ -463,7 +463,7 @@ static struct i2c_gpio_platform_data pdata = {
static struct platform_device at91rm9200_twi_device = {
.name = "i2c-gpio",
- .id = -1,
+ .id = 0,
.dev.platform_data = &pdata,
};
diff --git a/arch/arm/mach-at91/at91sam9260_devices.c b/arch/arm/mach-at91/at91sam9260_devices.c
index 0ded951..138feac 100644
--- a/arch/arm/mach-at91/at91sam9260_devices.c
+++ b/arch/arm/mach-at91/at91sam9260_devices.c
@@ -471,7 +471,7 @@ static struct i2c_gpio_platform_data pdata = {
static struct platform_device at91sam9260_twi_device = {
.name = "i2c-gpio",
- .id = -1,
+ .id = 0,
.dev.platform_data = &pdata,
};
diff --git a/arch/arm/mach-at91/at91sam9261_devices.c b/arch/arm/mach-at91/at91sam9261_devices.c
index 9295e90..da8aa8f 100644
--- a/arch/arm/mach-at91/at91sam9261_devices.c
+++ b/arch/arm/mach-at91/at91sam9261_devices.c
@@ -285,7 +285,7 @@ static struct i2c_gpio_platform_data pdata = {
static struct platform_device at91sam9261_twi_device = {
.name = "i2c-gpio",
- .id = -1,
+ .id = 0,
.dev.platform_data = &pdata,
};
diff --git a/arch/arm/mach-at91/at91sam9263_devices.c b/arch/arm/mach-at91/at91sam9263_devices.c
index 175e000..a01f9ed 100644
--- a/arch/arm/mach-at91/at91sam9263_devices.c
+++ b/arch/arm/mach-at91/at91sam9263_devices.c
@@ -542,7 +542,7 @@ static struct i2c_gpio_platform_data pdata = {
static struct platform_device at91sam9263_twi_device = {
.name = "i2c-gpio",
- .id = -1,
+ .id = 0,
.dev.platform_data = &pdata,
};
diff --git a/arch/arm/mach-at91/at91sam9rl_devices.c b/arch/arm/mach-at91/at91sam9rl_devices.c
index 9c0b148..d25d775 100644
--- a/arch/arm/mach-at91/at91sam9rl_devices.c
+++ b/arch/arm/mach-at91/at91sam9rl_devices.c
@@ -314,7 +314,7 @@ static struct i2c_gpio_platform_data pdata = {
static struct platform_device at91sam9rl_twi_device = {
.name = "i2c-gpio",
- .id = -1,
+ .id = 0,
.dev.platform_data = &pdata,
};
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Lennart Sorensen <[email protected]>
commit f7bc5051667b74c3861f79eed98c60d5c3b883f7 upstream.
I found a memory leak in sierra_release() (well sierra_probe() I guess)
that looses 8 bytes each time the driver releases a device.
Signed-off-by: Len Sorensen <[email protected]>
Acked-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/sierra.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/usb/serial/sierra.c b/drivers/usb/serial/sierra.c
index d423d36..7d7ab91 100644
--- a/drivers/usb/serial/sierra.c
+++ b/drivers/usb/serial/sierra.c
@@ -964,6 +964,7 @@ static void sierra_release(struct usb_serial *serial)
continue;
kfree(portdata);
}
+ kfree(serial->private);
}
#ifdef CONFIG_PM
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: "K. Y. Srinivasan" <[email protected]>
commit 1392550240aaa72ce3a094a38bd23525cd67ce60 upstream.
Fix a memory leak in the error handling path in the function vmbus_open().
Signed-off-by: K. Y. Srinivasan <[email protected]>
Reviewed-by: Haiyang Zhang <[email protected]>
Reported-by: Jason Wang <[email protected]>
Acked-by: Jason Wang <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/hv/channel.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index 4065374..f4c3d28 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -146,14 +146,14 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
if (ret != 0) {
err = ret;
- goto errorout;
+ goto error0;
}
ret = hv_ringbuffer_init(
&newchannel->inbound, in, recv_ringbuffer_size);
if (ret != 0) {
err = ret;
- goto errorout;
+ goto error0;
}
@@ -168,7 +168,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
if (ret != 0) {
err = ret;
- goto errorout;
+ goto error0;
}
/* Create and init the channel open message */
@@ -177,7 +177,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
GFP_KERNEL);
if (!open_info) {
err = -ENOMEM;
- goto errorout;
+ goto error0;
}
init_completion(&open_info->waitevent);
@@ -193,7 +193,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
if (userdatalen > MAX_USER_DEFINED_BYTES) {
err = -EINVAL;
- goto errorout;
+ goto error0;
}
if (userdatalen)
@@ -208,19 +208,18 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
sizeof(struct vmbus_channel_open_channel));
if (ret != 0)
- goto cleanup;
+ goto error1;
t = wait_for_completion_timeout(&open_info->waitevent, 5*HZ);
if (t == 0) {
err = -ETIMEDOUT;
- goto errorout;
+ goto error1;
}
if (open_info->response.open_result.status)
err = open_info->response.open_result.status;
-cleanup:
spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
list_del(&open_info->msglistentry);
spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
@@ -228,9 +227,12 @@ cleanup:
kfree(open_info);
return err;
-errorout:
- hv_ringbuffer_cleanup(&newchannel->outbound);
- hv_ringbuffer_cleanup(&newchannel->inbound);
+error1:
+ spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
+ list_del(&open_info->msglistentry);
+ spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+
+error0:
free_pages((unsigned long)out,
get_order(send_ringbuffer_size + recv_ringbuffer_size));
kfree(open_info);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit c129197c99550d356cf5f69b046994dd53cd1b9d upstream.
Make sure command buffer is deallocated in case of errors during attach.
Cc: <[email protected]>
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/whiteheat.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/usb/serial/whiteheat.c b/drivers/usb/serial/whiteheat.c
index 473635e..b65ac03 100644
--- a/drivers/usb/serial/whiteheat.c
+++ b/drivers/usb/serial/whiteheat.c
@@ -405,6 +405,7 @@ no_firmware:
"%s: please contact [email protected]\n",
serial->type->description);
kfree(result);
+ kfree(command);
return -ENODEV;
no_command_private:
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit 2f0295adf6438188c4cd0868f2b1976a2b034e1d upstream.
Make sure no control urb is submitted during close after a disconnect by
checking the disconnected flag.
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/quatech2.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/usb/serial/quatech2.c b/drivers/usb/serial/quatech2.c
index d170271..09d736a 100644
--- a/drivers/usb/serial/quatech2.c
+++ b/drivers/usb/serial/quatech2.c
@@ -434,6 +434,12 @@ static void qt2_close(struct usb_serial_port *port)
port_priv->urb_in_use = false;
spin_unlock_irqrestore(&port_priv->urb_lock, flags);
+ mutex_lock(&port->serial->disc_mutex);
+ if (port->serial->disconnected) {
+ mutex_unlock(&port->serial->disc_mutex);
+ return;
+ }
+
/* flush the port transmit buffer */
i = usb_control_msg(serial->dev,
usb_rcvctrlpipe(serial->dev, 0),
@@ -465,6 +471,7 @@ static void qt2_close(struct usb_serial_port *port)
dev_err(&port->dev, "%s - close port failed %i\n",
__func__, i);
+ mutex_unlock(&port->serial->disc_mutex);
}
static void qt2_disconnect(struct usb_serial *serial)
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit acbf0e5263de563e25f7c104868e4490b9e72b13 upstream.
Fix memory leak in write error path.
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/opticon.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/usb/serial/opticon.c b/drivers/usb/serial/opticon.c
index 5ba2700..623358a 100644
--- a/drivers/usb/serial/opticon.c
+++ b/drivers/usb/serial/opticon.c
@@ -289,7 +289,7 @@ static int opticon_write(struct tty_struct *tty, struct usb_serial_port *port,
if (!dr) {
dev_err(&port->dev, "out of memory\n");
count = -ENOMEM;
- goto error;
+ goto error_no_dr;
}
dr->bRequestType = USB_TYPE_VENDOR | USB_RECIP_INTERFACE | USB_DIR_OUT;
@@ -319,6 +319,8 @@ static int opticon_write(struct tty_struct *tty, struct usb_serial_port *port,
return count;
error:
+ kfree(dr);
+error_no_dr:
usb_free_urb(urb);
error_no_urb:
kfree(buffer);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit 5260e458f5eff269a43e4f1e9c47186c57b88ddb upstream.
Make sure generic close is called at close.
The driver relies on the generic write implementation but did not call
generic close.
Note that the call to kill the read urb is not redundant, as mct_u232
uses an interrupt urb from the second port as the read urb and that
generic close therefore fails to kill it.
Compile-only tested.
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/mct_u232.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/usb/serial/mct_u232.c b/drivers/usb/serial/mct_u232.c
index a71fa0a..d06130d 100644
--- a/drivers/usb/serial/mct_u232.c
+++ b/drivers/usb/serial/mct_u232.c
@@ -519,12 +519,14 @@ static void mct_u232_dtr_rts(struct usb_serial_port *port, int on)
static void mct_u232_close(struct usb_serial_port *port)
{
- if (port->serial->dev) {
- /* shutdown our urbs */
- usb_kill_urb(port->write_urb);
- usb_kill_urb(port->read_urb);
- usb_kill_urb(port->interrupt_in_urb);
- }
+ /*
+ * Must kill the read urb as it is actually an interrupt urb, which
+ * generic close thus fails to kill.
+ */
+ usb_kill_urb(port->read_urb);
+ usb_kill_urb(port->interrupt_in_urb);
+
+ usb_serial_generic_close(port);
} /* mct_u232_close */
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit 084817d79399ab5ccab2f90a148b0369912a8369 upstream.
Move interface data allocation to attach so that it is deallocated on
errors in usb-serial probe.
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/sierra.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/drivers/usb/serial/sierra.c b/drivers/usb/serial/sierra.c
index f72bbaa..6aeddcd 100644
--- a/drivers/usb/serial/sierra.c
+++ b/drivers/usb/serial/sierra.c
@@ -162,7 +162,6 @@ static int sierra_probe(struct usb_serial *serial,
{
int result = 0;
struct usb_device *udev;
- struct sierra_intf_private *data;
u8 ifnum;
udev = serial->dev;
@@ -189,11 +188,6 @@ static int sierra_probe(struct usb_serial *serial,
return -ENODEV;
}
- data = serial->private = kzalloc(sizeof(struct sierra_intf_private), GFP_KERNEL);
- if (!data)
- return -ENOMEM;
- spin_lock_init(&data->susp_lock);
-
return result;
}
@@ -890,11 +884,20 @@ static void sierra_dtr_rts(struct usb_serial_port *port, int on)
static int sierra_startup(struct usb_serial *serial)
{
struct usb_serial_port *port;
+ struct sierra_intf_private *intfdata;
struct sierra_port_private *portdata;
struct sierra_iface_info *himemoryp = NULL;
int i;
u8 ifnum;
+ intfdata = kzalloc(sizeof(*intfdata), GFP_KERNEL);
+ if (!intfdata)
+ return -ENOMEM;
+
+ spin_lock_init(&intfdata->susp_lock);
+
+ usb_set_serial_data(serial, intfdata);
+
/* Set Device mode to D0 */
sierra_set_power_state(serial->dev, 0x0000);
@@ -952,6 +955,7 @@ err:
portdata = usb_get_serial_port_data(serial->port[i]);
kfree(portdata);
}
+ kfree(intfdata);
return -ENOMEM;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit 3eb55cc4ed88eee3b5230f66abcdbd2a91639eda upstream.
The driver set the usb-serial port pointers to NULL on errors in attach,
effectively preventing usb-serial core from decrementing the port ref
counters and releasing the port devices and associated data.
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/mos7840.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
index 748bc6a..f717db4 100644
--- a/drivers/usb/serial/mos7840.c
+++ b/drivers/usb/serial/mos7840.c
@@ -2685,7 +2685,6 @@ error:
kfree(mos7840_port->ctrl_buf);
usb_free_urb(mos7840_port->control_urb);
kfree(mos7840_port);
- serial->port[i] = NULL;
}
return status;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit e681b66f2e19fadbe8a7e2a17900978cb6bc921f upstream.
Remove private zombie flag used to signal disconnect and to prevent
control urb from being submitted from interrupt urb completion handler.
The control urb will not be re-submitted as both the control urb and the
interrupt urb are killed on disconnect.
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/mos7840.c | 13 +------------
1 file changed, 1 insertion(+), 12 deletions(-)
diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
index eab84c1..9b819c4 100644
--- a/drivers/usb/serial/mos7840.c
+++ b/drivers/usb/serial/mos7840.c
@@ -223,7 +223,6 @@ struct moschip_port {
__u8 shadowMCR; /* last MCR value received */
char open;
char open_ports;
- char zombie;
wait_queue_head_t wait_chase; /* for handling sleeping while waiting for chase to finish */
wait_queue_head_t delta_msr_wait; /* for handling sleeping while waiting for msr change to happen */
int delta_msr_cond;
@@ -692,14 +691,7 @@ static void mos7840_interrupt_callback(struct urb *urb)
wreg = MODEM_STATUS_REGISTER;
break;
}
- spin_lock(&mos7840_port->pool_lock);
- if (!mos7840_port->zombie) {
- rv = mos7840_get_reg(mos7840_port, wval, wreg, &Data);
- } else {
- spin_unlock(&mos7840_port->pool_lock);
- return;
- }
- spin_unlock(&mos7840_port->pool_lock);
+ rv = mos7840_get_reg(mos7840_port, wval, wreg, &Data);
}
}
}
@@ -2701,9 +2693,6 @@ static void mos7840_disconnect(struct usb_serial *serial)
mos7840_port = mos7840_get_port_private(serial->port[i]);
dbg ("mos7840_port %d = %p", i, mos7840_port);
if (mos7840_port) {
- spin_lock_irqsave(&mos7840_port->pool_lock, flags);
- mos7840_port->zombie = 1;
- spin_unlock_irqrestore(&mos7840_port->pool_lock, flags);
usb_kill_urb(mos7840_port->control_urb);
}
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Anisse Astier <[email protected]>
commit 8daf8b6086f9d575200cd0aa3797e26137255609 upstream.
Board name changed on another shipping Lucid tablet.
Signed-off-by: Anisse Astier <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/host/pci-quirks.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/usb/host/pci-quirks.c b/drivers/usb/host/pci-quirks.c
index ead4525..39f9e4a 100644
--- a/drivers/usb/host/pci-quirks.c
+++ b/drivers/usb/host/pci-quirks.c
@@ -548,6 +548,13 @@ static const struct dmi_system_id __devinitconst ehci_dmi_nohandoff_table[] = {
DMI_MATCH(DMI_BIOS_VERSION, "Lucid-"),
},
},
+ {
+ /* Pegatron Lucid (Ordissimo) */
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "Ordissimo"),
+ DMI_MATCH(DMI_BIOS_VERSION, "Lucid-"),
+ },
+ },
{ }
};
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Oleg Nesterov <[email protected]>
commit b40a79591ca918e7b91b0d9b6abd5d00f2e88c19 upstream.
flush_old_exec() clears PF_KTHREAD but forgets about PF_NOFREEZE.
Signed-off-by: Oleg Nesterov <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/exec.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/exec.c b/fs/exec.c
index e95aeed..8726a93 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1110,7 +1110,8 @@ int flush_old_exec(struct linux_binprm * bprm)
bprm->mm = NULL; /* We're using it now */
set_fs(USER_DS);
- current->flags &= ~(PF_RANDOMIZE | PF_FORKNOEXEC | PF_KTHREAD);
+ current->flags &=
+ ~(PF_RANDOMIZE | PF_FORKNOEXEC | PF_KTHREAD | PF_NOFREEZE);
flush_thread();
current->personality &= ~bprm->per_clear;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Kees Cook <[email protected]>
commit 20f1de659b77364d55d4e7fad2ef657e7730323f upstream.
Fix possible overflow of the buffer used for expanding environment
variables when building file list.
In the extremely unlikely case of an attacker having control over the
environment variables visible to gen_init_cpio, control over the
contents of the file gen_init_cpio parses, and gen_init_cpio was built
without compiler hardening, the attacker can gain arbitrary execution
control via a stack buffer overflow.
$ cat usr/crash.list
file foo ${BIG}${BIG}${BIG}${BIG}${BIG}${BIG} 0755 0 0
$ BIG=$(perl -e 'print "A" x 4096;') ./usr/gen_init_cpio usr/crash.list
*** buffer overflow detected ***: ./usr/gen_init_cpio terminated
This also replaces the space-indenting with tabs.
Patch based on existing fix extracted from grsecurity.
Signed-off-by: Kees Cook <[email protected]>
Cc: Michal Marek <[email protected]>
Cc: Brad Spengler <[email protected]>
Cc: PaX Team <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
usr/gen_init_cpio.c | 43 +++++++++++++++++++++++--------------------
1 file changed, 23 insertions(+), 20 deletions(-)
diff --git a/usr/gen_init_cpio.c b/usr/gen_init_cpio.c
index af0f22f..aca6edc 100644
--- a/usr/gen_init_cpio.c
+++ b/usr/gen_init_cpio.c
@@ -303,7 +303,7 @@ static int cpio_mkfile(const char *name, const char *location,
int retval;
int rc = -1;
int namesize;
- int i;
+ unsigned int i;
mode |= S_IFREG;
@@ -381,25 +381,28 @@ error:
static char *cpio_replace_env(char *new_location)
{
- char expanded[PATH_MAX + 1];
- char env_var[PATH_MAX + 1];
- char *start;
- char *end;
-
- for (start = NULL; (start = strstr(new_location, "${")); ) {
- end = strchr(start, '}');
- if (start < end) {
- *env_var = *expanded = '\0';
- strncat(env_var, start + 2, end - start - 2);
- strncat(expanded, new_location, start - new_location);
- strncat(expanded, getenv(env_var), PATH_MAX);
- strncat(expanded, end + 1, PATH_MAX);
- strncpy(new_location, expanded, PATH_MAX);
- } else
- break;
- }
-
- return new_location;
+ char expanded[PATH_MAX + 1];
+ char env_var[PATH_MAX + 1];
+ char *start;
+ char *end;
+
+ for (start = NULL; (start = strstr(new_location, "${")); ) {
+ end = strchr(start, '}');
+ if (start < end) {
+ *env_var = *expanded = '\0';
+ strncat(env_var, start + 2, end - start - 2);
+ strncat(expanded, new_location, start - new_location);
+ strncat(expanded, getenv(env_var),
+ PATH_MAX - strlen(expanded));
+ strncat(expanded, end + 1,
+ PATH_MAX - strlen(expanded));
+ strncpy(new_location, expanded, PATH_MAX);
+ new_location[PATH_MAX] = 0;
+ } else
+ break;
+ }
+
+ return new_location;
}
--
1.7.9.5
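[ note: the fix above bounds each strncat() by the space remaining in the
  destination (PATH_MAX - strlen(expanded)) and explicitly NUL-terminates after
  the final strncpy(). A self-contained userspace sketch of that bounded-append
  pattern, with made-up input strings, is: ]
#include <limits.h>
#include <stdio.h>
#include <string.h>
#ifndef PATH_MAX
#define PATH_MAX 4096
#endif
/* Append src to dst (capacity PATH_MAX + 1 bytes) without overflowing. */
static void append_bounded(char *dst, const char *src)
{
	strncat(dst, src, PATH_MAX - strlen(dst));
}
int main(void)
{
	char expanded[PATH_MAX + 1] = "";
	const char *parts[] = { "/usr", "/share", "/example", NULL };
	char big[2 * PATH_MAX];
	int i;
	for (i = 0; parts[i]; i++)
		append_bounded(expanded, parts[i]);
	/* Even a hostile, oversized part cannot overflow the buffer. */
	memset(big, 'A', sizeof(big) - 1);
	big[sizeof(big) - 1] = '\0';
	append_bounded(expanded, big);
	printf("len=%zu (never exceeds %d)\n", strlen(expanded), PATH_MAX);
	return 0;
}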
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Jan Luebbe <[email protected]>
commit fee0de7791f967c2c5f0d43eb7b7261761b45e64 upstream.
Signed-off-by: Jan Luebbe <[email protected]>
Cc: Alessandro Zummo <[email protected]>
Cc: Roland Stigge <[email protected]>
Cc: Grant Likely <[email protected]>
Tested-by: Roland Stigge <[email protected]>
Cc: Sascha Hauer <[email protected]>
Cc: Russell King <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/rtc/rtc-imxdi.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/rtc/rtc-imxdi.c b/drivers/rtc/rtc-imxdi.c
index 891cd6c..4eed510 100644
--- a/drivers/rtc/rtc-imxdi.c
+++ b/drivers/rtc/rtc-imxdi.c
@@ -392,6 +392,8 @@ static int dryice_rtc_probe(struct platform_device *pdev)
if (imxdi->ioaddr == NULL)
return -ENOMEM;
+ spin_lock_init(&imxdi->irq_lock);
+
imxdi->irq = platform_get_irq(pdev, 0);
if (imxdi->irq < 0)
return imxdi->irq;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Olof Johansson <[email protected]>
commit 5189c2a7c7769ee9d037d76c1a7b8550ccf3481c upstream.
When 32-bit EFI is used with 64-bit kernel (or vice versa), turn off
efi_enabled once setup is done. Beyond setup, it is normally used to
determine if runtime services are available and we will have none.
This will resolve issues stemming from efivars modprobe panicking on a
32/64-bit setup, as well as some reboot issues on similar setups.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=45991
Reported-by: Marko Kohtala <[email protected]>
Reported-by: Maxim Kammerer <[email protected]>
Signed-off-by: Olof Johansson <[email protected]>
Acked-by: Maarten Lankhorst <[email protected]>
Cc: Matthew Garrett <[email protected]>
Signed-off-by: Matt Fleming <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/x86/include/asm/efi.h | 1 +
arch/x86/kernel/setup.c | 12 ++++++++++++
arch/x86/platform/efi/efi.c | 18 ++++++++++--------
3 files changed, 23 insertions(+), 8 deletions(-)
diff --git a/arch/x86/include/asm/efi.h b/arch/x86/include/asm/efi.h
index c9dcc18..029189d 100644
--- a/arch/x86/include/asm/efi.h
+++ b/arch/x86/include/asm/efi.h
@@ -98,6 +98,7 @@ extern void efi_set_executable(efi_memory_desc_t *md, bool executable);
extern int efi_memblock_x86_reserve_range(void);
extern void efi_call_phys_prelog(void);
extern void efi_call_phys_epilog(void);
+extern void efi_unmap_memmap(void);
#ifndef CONFIG_EFI
/*
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 6cafbcd..e860517 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1050,6 +1050,18 @@ void __init setup_arch(char **cmdline_p)
mcheck_init();
arch_init_ideal_nops();
+
+#ifdef CONFIG_EFI
+ /* Once setup is done above, disable efi_enabled on mismatched
+ * firmware/kernel archtectures since there is no support for
+ * runtime services.
+ */
+ if (efi_enabled && IS_ENABLED(CONFIG_X86_64) != efi_64bit) {
+ pr_info("efi: Setup done, disabling due to 32/64-bit mismatch\n");
+ efi_unmap_memmap();
+ efi_enabled = 0;
+ }
+#endif
}
#ifdef CONFIG_X86_32
diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
index b3dbbdb..72d8899 100644
--- a/arch/x86/platform/efi/efi.c
+++ b/arch/x86/platform/efi/efi.c
@@ -69,11 +69,15 @@ EXPORT_SYMBOL(efi);
struct efi_memory_map memmap;
bool efi_64bit;
-static bool efi_native;
static struct efi efi_phys __initdata;
static efi_system_table_t efi_systab __initdata;
+static inline bool efi_is_native(void)
+{
+ return IS_ENABLED(CONFIG_X86_64) == efi_64bit;
+}
+
static int __init setup_noefi(char *arg)
{
efi_enabled = 0;
@@ -419,7 +423,7 @@ void __init efi_reserve_boot_services(void)
}
}
-static void __init efi_unmap_memmap(void)
+void __init efi_unmap_memmap(void)
{
if (memmap.map) {
early_iounmap(memmap.map, memmap.nr_map * memmap.desc_size);
@@ -431,7 +435,7 @@ void __init efi_free_boot_services(void)
{
void *p;
- if (!efi_native)
+ if (!efi_is_native())
return;
for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
@@ -683,12 +687,10 @@ void __init efi_init(void)
return;
}
efi_phys.systab = (efi_system_table_t *)boot_params.efi_info.efi_systab;
- efi_native = !efi_64bit;
#else
efi_phys.systab = (efi_system_table_t *)
(boot_params.efi_info.efi_systab |
((__u64)boot_params.efi_info.efi_systab_hi<<32));
- efi_native = efi_64bit;
#endif
if (efi_systab_init(efi_phys.systab)) {
@@ -722,7 +724,7 @@ void __init efi_init(void)
* that doesn't match the kernel 32/64-bit mode.
*/
- if (!efi_native)
+ if (!efi_is_native())
pr_info("No EFI runtime due to 32/64-bit mismatch with kernel\n");
else if (efi_runtime_init()) {
efi_enabled = 0;
@@ -734,7 +736,7 @@ void __init efi_init(void)
return;
}
#ifdef CONFIG_X86_32
- if (efi_native) {
+ if (efi_is_native()) {
x86_platform.get_wallclock = efi_get_time;
x86_platform.set_wallclock = efi_set_rtc_mmss;
}
@@ -800,7 +802,7 @@ void __init efi_enter_virtual_mode(void)
* non-native EFI
*/
- if (!efi_native) {
+ if (!efi_is_native()) {
efi_unmap_memmap();
return;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger <[email protected]>
commit dea5f0998aa82bdeca260b87c653db11e91329b2 upstream.
This patch fixes a regression in spc_emulate_inquiry() code where the
local scope bounce buffer was no longer getting it's memory zeroed,
causing various problems with SCSI initiators that depend upon areas
of INQUIRY EVPD=0x83 payload having been zeroed.
This bug was introduced with the following v3.7-rc1 patch + CC'ed
stable commit:
commit ffe7b0e9326d9c68f5688bef691dd49f1e0d3651
Author: Paolo Bonzini <[email protected]>
Date: Fri Sep 7 17:30:38 2012 +0200
target: support zero allocation length in INQUIRY
Go ahead and re-add the missing memset of bounce buffer memory to be
copied into the outgoing se_cmd descriptor kmapped SGL payload.
Reported-by: Kelsey Prantis <[email protected]>
Cc: Kelsey Prantis <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Andy Grover <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
[ herton: code to be patched is in target_core_cdb.c on 3.5 ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/target/target_core_cdb.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/target/target_core_cdb.c b/drivers/target/target_core_cdb.c
index 3dc3393..dd09f0f 100644
--- a/drivers/target/target_core_cdb.c
+++ b/drivers/target/target_core_cdb.c
@@ -610,6 +610,8 @@ int target_emulate_inquiry(struct se_cmd *cmd)
unsigned char buf[SE_INQUIRY_BUF];
int p, ret;
+ memset(buf, 0, SE_INQUIRY_BUF);
+
if (dev == tpg->tpg_virt_lun0.lun_se_dev)
buf[0] = 0x3f; /* Not connected */
else
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 261030215d970c62f799e6e508e3c68fc7ec2aa9 upstream.
For some reason the declaration of ceph_con_get() and
ceph_con_put() did not get deleted in this commit:
d59315ca libceph: drop ceph_con_get/put helpers and nref member
Clean that up.
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/messenger.h | 2 --
1 file changed, 2 deletions(-)
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index 9844241..189ae06 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -215,8 +215,6 @@ extern void ceph_msg_revoke(struct ceph_msg *msg);
extern void ceph_msg_revoke_incoming(struct ceph_msg *msg);
extern void ceph_con_keepalive(struct ceph_connection *con);
-extern struct ceph_connection *ceph_con_get(struct ceph_connection *con);
-extern void ceph_con_put(struct ceph_connection *con);
extern struct ceph_msg *ceph_msg_new(int type, int front_len, gfp_t flags,
bool can_fail);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Paul Walmsley <[email protected]>
commit 39141ddfb63a664f26d3f42f64ee386e879b492c upstream.
After commit 846a136881b8f73c1f74250bf6acfaa309cab1f2 ("ARM: vfp: fix
saving d16-d31 vfp registers on v6+ kernels"), the OMAP 2430SDP board
started crashing during boot with omap2plus_defconfig:
[ 3.875122] mmcblk0: mmc0:e624 SD04G 3.69 GiB
[ 3.915954] mmcblk0: p1
[ 4.086639] Internal error: Oops - undefined instruction: 0 [#1] SMP ARM
[ 4.093719] Modules linked in:
[ 4.096954] CPU: 0 Not tainted (3.6.0-02232-g759e00b #570)
[ 4.103149] PC is at vfp_reload_hw+0x1c/0x44
[ 4.107666] LR is at __und_usr_fault_32+0x0/0x8
It turns out that the context save/restore fix unmasked a latent bug
in commit 5aaf254409f8d58229107b59507a8235b715a960 ("ARM: 6203/1: Make
VFPv3 usable on ARMv6"). When CONFIG_VFPv3 is set, but the kernel is
booted on a pre-VFPv3 core, the code attempts to save and restore the
d16-d31 VFP registers. These are only present on non-D16 VFPv3+, so
this results in an undefined instruction exception. The code didn't
crash before commit 846a136 because the save and restore code was
only touching d0-d15, present on all VFP.
Fix by implementing a request from Russell King to add a new HWCAP
flag that affirmatively indicates the presence of the d16-d31
registers:
http://marc.info/?l=linux-arm-kernel&m=135013547905283&w=2
and some feedback from Måns to clarify the name of the HWCAP flag.
Signed-off-by: Paul Walmsley <[email protected]>
Cc: Tony Lindgren <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Dave Martin <[email protected]>
Cc: Måns Rullgård <[email protected]>
Signed-off-by: Russell King <[email protected]>
[ herton: no uapi directory on 3.5 ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/arm/include/asm/hwcap.h | 3 ++-
arch/arm/include/asm/vfpmacros.h | 12 ++++++------
arch/arm/vfp/vfpmodule.c | 9 ++++++---
3 files changed, 14 insertions(+), 10 deletions(-)
diff --git a/arch/arm/include/asm/hwcap.h b/arch/arm/include/asm/hwcap.h
index 9176261..a2fe893 100644
--- a/arch/arm/include/asm/hwcap.h
+++ b/arch/arm/include/asm/hwcap.h
@@ -18,11 +18,12 @@
#define HWCAP_THUMBEE (1 << 11)
#define HWCAP_NEON (1 << 12)
#define HWCAP_VFPv3 (1 << 13)
-#define HWCAP_VFPv3D16 (1 << 14)
+#define HWCAP_VFPv3D16 (1 << 14) /* also set for VFPv4-D16 */
#define HWCAP_TLS (1 << 15)
#define HWCAP_VFPv4 (1 << 16)
#define HWCAP_IDIVA (1 << 17)
#define HWCAP_IDIVT (1 << 18)
+#define HWCAP_VFPD32 (1 << 19) /* set if VFP has 32 regs (not 16) */
#define HWCAP_IDIV (HWCAP_IDIVA | HWCAP_IDIVT)
#if defined(__KERNEL__)
diff --git a/arch/arm/include/asm/vfpmacros.h b/arch/arm/include/asm/vfpmacros.h
index bf53047..c49c8f7 100644
--- a/arch/arm/include/asm/vfpmacros.h
+++ b/arch/arm/include/asm/vfpmacros.h
@@ -27,9 +27,9 @@
#if __LINUX_ARM_ARCH__ <= 6
ldr \tmp, =elf_hwcap @ may not have MVFR regs
ldr \tmp, [\tmp, #0]
- tst \tmp, #HWCAP_VFPv3D16
- ldceql p11, cr0, [\base],#32*4 @ FLDMIAD \base!, {d16-d31}
- addne \base, \base, #32*4 @ step over unused register space
+ tst \tmp, #HWCAP_VFPD32
+ ldcnel p11, cr0, [\base],#32*4 @ FLDMIAD \base!, {d16-d31}
+ addeq \base, \base, #32*4 @ step over unused register space
#else
VFPFMRX \tmp, MVFR0 @ Media and VFP Feature Register 0
and \tmp, \tmp, #MVFR0_A_SIMD_MASK @ A_SIMD field
@@ -51,9 +51,9 @@
#if __LINUX_ARM_ARCH__ <= 6
ldr \tmp, =elf_hwcap @ may not have MVFR regs
ldr \tmp, [\tmp, #0]
- tst \tmp, #HWCAP_VFPv3D16
- stceql p11, cr0, [\base],#32*4 @ FSTMIAD \base!, {d16-d31}
- addne \base, \base, #32*4 @ step over unused register space
+ tst \tmp, #HWCAP_VFPD32
+ stcnel p11, cr0, [\base],#32*4 @ FSTMIAD \base!, {d16-d31}
+ addeq \base, \base, #32*4 @ step over unused register space
#else
VFPFMRX \tmp, MVFR0 @ Media and VFP Feature Register 0
and \tmp, \tmp, #MVFR0_A_SIMD_MASK @ A_SIMD field
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index c834b32..3b44e0d 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -701,11 +701,14 @@ static int __init vfp_init(void)
elf_hwcap |= HWCAP_VFPv3;
/*
- * Check for VFPv3 D16. CPUs in this configuration
- * only have 16 x 64bit registers.
+ * Check for VFPv3 D16 and VFPv4 D16. CPUs in
+ * this configuration only have 16 x 64bit
+ * registers.
*/
if (((fmrx(MVFR0) & MVFR0_A_SIMD_MASK)) == 1)
- elf_hwcap |= HWCAP_VFPv3D16;
+ elf_hwcap |= HWCAP_VFPv3D16; /* also v4-D16 */
+ else
+ elf_hwcap |= HWCAP_VFPD32;
}
#endif
/*
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Herton Ronaldo Krzesinski <[email protected]>
This reverts commit 900404e5d201d0a6d2806f615b41e939713d55db on 3.5,
which is commit 749c8814f08f12baa4a9c2812a7c6ede7d69507d upstream.
The change was originally intended to fix a mismerge in 3.6. The 3.5
stable branch was unaffected by the issue, as reported by Huacai Chen on
the stable mailing list (on the thread 'Seems like "sched: Add missing call
to calc_load_exit_idle()" should be reverted in 3.5 branch').
As concluded by Peter Zijlstra:
"(...) You are right, v3.5.5 has one calc_load_exit_idle() too many, the
one in tick_nohz_update_jiffies() needs to go. (...)"
Cc: Huacai Chen <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Jonathan Nieder <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Charles Wang <[email protected]>
Cc: Ingo Molnar <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
kernel/time/tick-sched.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index a057ed4..4a08472 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -145,7 +145,6 @@ static void tick_nohz_update_jiffies(ktime_t now)
tick_do_update_jiffies64(now);
local_irq_restore(flags);
- calc_load_exit_idle();
touch_softlockup_watchdog();
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Jan Kara <[email protected]>
commit ef5d437f71afdf4afdbab99213add99f4b1318fd upstream.
On s390 any write to a page (even from kernel itself) sets architecture
specific page dirty bit. Thus when a page is written to via buffered
write, HW dirty bit gets set and when we later map and unmap the page,
page_remove_rmap() finds the dirty bit and calls set_page_dirty().
Dirtying of a page which shouldn't be dirty can cause all sorts of
problems to filesystems. The bug we observed in practice is that
buffers from the page get freed, so when the page gets later marked as
dirty and writeback writes it, XFS crashes due to an assertion
BUG_ON(!PagePrivate(page)) in page_buffers() called from
xfs_count_page_state().
A similar problem can also happen when the zero_user_segment() call from
xfs_vm_writepage() (or block_write_full_page() for that matter) sets the
hardware dirty bit during writeback, later buffers get freed, and then
the page is unmapped.
Fix the issue by ignoring s390 HW dirty bit for page cache pages of
mappings with mapping_cap_account_dirty(). This is safe because for
such mappings when a page gets marked as writeable in PTE it is also
marked dirty in do_wp_page() or do_page_fault(). When the dirty bit is
cleared by clear_page_dirty_for_io(), the page gets writeprotected in
page_mkclean(). So pagecache page is writeable if and only if it is
dirty.
Thanks to Hugh Dickins for pointing out mapping has to have
mapping_cap_account_dirty() for things to work and proposing a cleaned
up variant of the patch.
The patch has survived about two hours of running fsx-linux on tmpfs
while heavily swapping and several days of running on our build machines
where the original problem was triggered.
Signed-off-by: Jan Kara <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Heiko Carstens <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
mm/rmap.c | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 0f3b7cd..aa95e59 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -56,6 +56,7 @@
#include <linux/mmu_notifier.h>
#include <linux/migrate.h>
#include <linux/hugetlb.h>
+#include <linux/backing-dev.h>
#include <asm/tlbflush.h>
@@ -971,11 +972,8 @@ int page_mkclean(struct page *page)
if (page_mapped(page)) {
struct address_space *mapping = page_mapping(page);
- if (mapping) {
+ if (mapping)
ret = page_mkclean_file(mapping, page);
- if (page_test_and_clear_dirty(page_to_pfn(page), 1))
- ret = 1;
- }
}
return ret;
@@ -1161,6 +1159,7 @@ void page_add_file_rmap(struct page *page)
*/
void page_remove_rmap(struct page *page)
{
+ struct address_space *mapping = page_mapping(page);
bool anon = PageAnon(page);
bool locked;
unsigned long flags;
@@ -1183,8 +1182,19 @@ void page_remove_rmap(struct page *page)
* this if the page is anon, so about to be freed; but perhaps
* not if it's in swapcache - there might be another pte slot
* containing the swap entry, but page not yet written to swap.
+ *
+ * And we can skip it on file pages, so long as the filesystem
+ * participates in dirty tracking; but need to catch shm and tmpfs
+ * and ramfs pages which have been modified since creation by read
+ * fault.
+ *
+ * Note that mapping must be decided above, before decrementing
+ * mapcount (which luckily provides a barrier): once page is unmapped,
+ * it could be truncated and page->mapping reset to NULL at any moment.
+ * Note also that we are relying on page_mapping(page) to set mapping
+ * to &swapper_space when PageSwapCache(page).
*/
- if ((!anon || PageSwapCache(page)) &&
+ if (mapping && !mapping_cap_account_dirty(mapping) &&
page_test_and_clear_dirty(page_to_pfn(page), 1))
set_page_dirty(page);
/*
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Jacob Shin <[email protected]>
commit 844ab6f993b1d32eb40512503d35ff6ad0c57030 upstream.
Current logic finds enough space for direct mapping page tables from 0
to end. Instead, we only need to find enough space to cover mr[0].start
to mr[nr_range].end -- the range that is actually being mapped by
init_memory_mapping()
This is needed after 1bbbbe779aabe1f0768c2bf8f8c0a5583679b54a, to address
the panic reported here:
https://lkml.org/lkml/2012/10/20/160
https://lkml.org/lkml/2012/10/21/157
Signed-off-by: Jacob Shin <[email protected]>
Link: http://lkml.kernel.org/r/20121024195311.GB11779@jshin-Toonie
Tested-by: Tom Rini <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/x86/mm/init.c | 70 ++++++++++++++++++++++++++++++----------------------
1 file changed, 41 insertions(+), 29 deletions(-)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 37b2e6a..5e967a3 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -29,36 +29,54 @@ int direct_gbpages
#endif
;
-static void __init find_early_table_space(unsigned long end, int use_pse,
- int use_gbpages)
+struct map_range {
+ unsigned long start;
+ unsigned long end;
+ unsigned page_size_mask;
+};
+
+/*
+ * First calculate space needed for kernel direct mapping page tables to cover
+ * mr[0].start to mr[nr_range - 1].end, while accounting for possible 2M and 1GB
+ * pages. Then find enough contiguous space for those page tables.
+ */
+static void __init find_early_table_space(struct map_range *mr, int nr_range)
{
- unsigned long puds, pmds, ptes, tables, start = 0, good_end = end;
+ int i;
+ unsigned long puds = 0, pmds = 0, ptes = 0, tables;
+ unsigned long start = 0, good_end;
phys_addr_t base;
- puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
- tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
+ for (i = 0; i < nr_range; i++) {
+ unsigned long range, extra;
- if (use_gbpages) {
- unsigned long extra;
+ range = mr[i].end - mr[i].start;
+ puds += (range + PUD_SIZE - 1) >> PUD_SHIFT;
- extra = end - ((end>>PUD_SHIFT) << PUD_SHIFT);
- pmds = (extra + PMD_SIZE - 1) >> PMD_SHIFT;
- } else
- pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
-
- tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
+ if (mr[i].page_size_mask & (1 << PG_LEVEL_1G)) {
+ extra = range - ((range >> PUD_SHIFT) << PUD_SHIFT);
+ pmds += (extra + PMD_SIZE - 1) >> PMD_SHIFT;
+ } else {
+ pmds += (range + PMD_SIZE - 1) >> PMD_SHIFT;
+ }
- if (use_pse) {
- unsigned long extra;
-
- extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
+ if (mr[i].page_size_mask & (1 << PG_LEVEL_2M)) {
+ extra = range - ((range >> PMD_SHIFT) << PMD_SHIFT);
#ifdef CONFIG_X86_32
- extra += PMD_SIZE;
+ extra += PMD_SIZE;
#endif
- ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
- } else
- ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ /* The first 2/4M doesn't use large pages. */
+ if (mr[i].start < PMD_SIZE)
+ extra += range;
+
+ ptes += (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ } else {
+ ptes += (range + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ }
+ }
+ tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
+ tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE);
#ifdef CONFIG_X86_32
@@ -76,7 +94,7 @@ static void __init find_early_table_space(unsigned long end, int use_pse,
pgt_buf_top = pgt_buf_start + (tables >> PAGE_SHIFT);
printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx]\n",
- end - 1, pgt_buf_start << PAGE_SHIFT,
+ mr[nr_range - 1].end - 1, pgt_buf_start << PAGE_SHIFT,
(pgt_buf_top << PAGE_SHIFT) - 1);
}
@@ -85,12 +103,6 @@ void __init native_pagetable_reserve(u64 start, u64 end)
memblock_reserve(start, end - start);
}
-struct map_range {
- unsigned long start;
- unsigned long end;
- unsigned page_size_mask;
-};
-
#ifdef CONFIG_X86_32
#define NR_RANGE_MR 3
#else /* CONFIG_X86_64 */
@@ -263,7 +275,7 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
* nodes are discovered.
*/
if (!after_bootmem)
- find_early_table_space(end, use_pse, use_gbpages);
+ find_early_table_space(mr, nr_range);
for (i = 0; i < nr_range; i++)
ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
--
1.7.9.5
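[ note: a worked example of the per-range arithmetic introduced above. The shift
  values are the usual x86_64 ones and are assumed here purely for illustration;
  for a 4 GiB range mapped with 2 MiB pages the loop needs 4 PUD entries,
  2048 PMD entries and no PTEs: ]
#include <stdio.h>
/* Typical x86_64 values, assumed for this sketch only. */
#define PAGE_SHIFT 12
#define PMD_SHIFT  21
#define PUD_SHIFT  30
#define PAGE_SIZE  (1ULL << PAGE_SHIFT)
#define PMD_SIZE   (1ULL << PMD_SHIFT)
#define PUD_SIZE   (1ULL << PUD_SHIFT)
int main(void)
{
	/* One mapped range, 0 .. 4 GiB, with PG_LEVEL_2M set but not PG_LEVEL_1G. */
	unsigned long long start = 0, end = 4ULL << 30;
	unsigned long long range = end - start;
	unsigned long long puds = (range + PUD_SIZE - 1) >> PUD_SHIFT;
	unsigned long long pmds = (range + PMD_SIZE - 1) >> PMD_SHIFT;
	/* With 2M pages, only the tail that is not PMD-aligned needs PTEs. */
	unsigned long long extra = range - ((range >> PMD_SHIFT) << PMD_SHIFT);
	unsigned long long ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
	printf("puds=%llu pmds=%llu ptes=%llu\n", puds, pmds, ptes);
	return 0;
}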
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Yinghai Lu <[email protected]>
commit f82f64dd9f485e13f29f369772d4a0e868e5633a upstream.
Commit
844ab6f9 x86, mm: Find_early_table_space based on ranges that are actually being mapped
wrongly added back some lines that had been removed in commit
7b16bbf97 Revert "x86/mm: Fix the size calculation of mapping tables"
Remove them again.
Signed-off-by: Yinghai Lu <[email protected]>
Link: http://lkml.kernel.org/r/CAE9FiQW_vuaYQbmagVnxT2DGsYc=9tNeAbdBq53sYkitPOwxSQ@mail.gmail.com
Acked-by: Jacob Shin <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/x86/mm/init.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 5e967a3..4e4dcda 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -65,10 +65,6 @@ static void __init find_early_table_space(struct map_range *mr, int nr_range)
#ifdef CONFIG_X86_32
extra += PMD_SIZE;
#endif
- /* The first 2/4M doesn't use large pages. */
- if (mr[i].start < PMD_SIZE)
- extra += range;
-
ptes += (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
} else {
ptes += (range + PAGE_SIZE - 1) >> PAGE_SHIFT;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Bruce Allan <[email protected]>
commit 16e310ae6ed352c4963b1f2413fcd88fa693eeda upstream.
i218 is the next-generation LOM that will be available on systems with the
Lynx Point LP Platform Controller Hub (PCH) chipset from Intel. This patch
provides the initial support of those devices.
Signed-off-by: Bruce Allan <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/net/ethernet/intel/e1000e/hw.h | 2 ++
drivers/net/ethernet/intel/e1000e/netdev.c | 2 ++
2 files changed, 4 insertions(+)
diff --git a/drivers/net/ethernet/intel/e1000e/hw.h b/drivers/net/ethernet/intel/e1000e/hw.h
index ed5b409..d37bfd9 100644
--- a/drivers/net/ethernet/intel/e1000e/hw.h
+++ b/drivers/net/ethernet/intel/e1000e/hw.h
@@ -412,6 +412,8 @@ enum e1e_registers {
#define E1000_DEV_ID_PCH2_LV_V 0x1503
#define E1000_DEV_ID_PCH_LPT_I217_LM 0x153A
#define E1000_DEV_ID_PCH_LPT_I217_V 0x153B
+#define E1000_DEV_ID_PCH_LPTLP_I218_LM 0x155A
+#define E1000_DEV_ID_PCH_LPTLP_I218_V 0x1559
#define E1000_REVISION_4 4
diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
index 9bbfa1d..7e750ae 100644
--- a/drivers/net/ethernet/intel/e1000e/netdev.c
+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
@@ -6519,6 +6519,8 @@ static DEFINE_PCI_DEVICE_TABLE(e1000_pci_tbl) = {
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LPT_I217_LM), board_pch_lpt },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LPT_I217_V), board_pch_lpt },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LPTLP_I218_LM), board_pch_lpt },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LPTLP_I218_V), board_pch_lpt },
{ 0, 0, 0, 0, 0, 0, 0 } /* terminate list */
};
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Thadeu Lima de Souza Cascardo <[email protected]>
commit eedce141cd2dad8d0cefc5468ef41898949a7031 upstream.
The genalloc code uses the bitmap API from include/linux/bitmap.h and
lib/bitmap.c, which is based on long values. Both bitmap_set from
lib/bitmap.c and bitmap_set_ll, which is the lockless version from
genalloc.c, use BITMAP_LAST_WORD_MASK to set the first bits in a long in
the bitmap.
That one uses (1 << bits) - 1, 0b111, if you are setting the first three
bits. This means that the API counts from the least significant bits
(LSB from now on) to the MSB. The LSB in the first long is bit 0, then.
The same works for the lookup functions.
The genalloc code uses longs for the bitmap, as it should. In
include/linux/genalloc.h, struct gen_pool_chunk has unsigned long
bits[0] as its last member. When allocating the struct, genalloc should
reserve enough space for the bitmap. This should be a proper number of
longs that can fit the amount of bits in the bitmap.
However, genalloc allocates an integer number of bytes that fit the
amount of bits, but may not be an integer amount of longs. 9 bytes, for
example, could be allocated for 70 bits.
This is a problem in itself if the Least Significant Bit in a long is in
the byte with the largest address, which happens in Big Endian machines.
This means genalloc is not allocating the byte in which it will try to
set or check for a bit.
This may end up in memory corruption, where genalloc will try to set the
bits it has not allocated. In fact, genalloc may not set these bits
because it may find them already set, because they were not zeroed since
they were not allocated. And that's what causes a BUG when
gen_pool_destroy is called and check for any set bits.
What really happens is that genalloc uses kmalloc_node with __GFP_ZERO
on gen_pool_add_virt. With SLAB and SLUB, this means the whole slab
will be cleared, not only the requested bytes. Since struct
gen_pool_chunk has a size that is a multiple of 8, and slab sizes are
multiples of 8, we get lucky and allocate and clear the right amount of
bytes.
However, this is not the case with SLOB or with older code that did memset
after allocating instead of using __GFP_ZERO.
So, a simple module as this (running 3.6.0), will cause a crash when
rmmod'ed.
[root@phantom-lp2 foo]# cat foo.c
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/genalloc.h>
MODULE_LICENSE("GPL");
MODULE_VERSION("0.1");
static struct gen_pool *foo_pool;
static __init int foo_init(void)
{
int ret;
foo_pool = gen_pool_create(10, -1);
if (!foo_pool)
return -ENOMEM;
ret = gen_pool_add(foo_pool, 0xa0000000, 32 << 10, -1);
if (ret) {
gen_pool_destroy(foo_pool);
return ret;
}
return 0;
}
static __exit void foo_exit(void)
{
gen_pool_destroy(foo_pool);
}
module_init(foo_init);
module_exit(foo_exit);
[root@phantom-lp2 foo]# zcat /proc/config.gz | grep SLOB
CONFIG_SLOB=y
[root@phantom-lp2 foo]# insmod ./foo.ko
[root@phantom-lp2 foo]# rmmod foo
------------[ cut here ]------------
kernel BUG at lib/genalloc.c:243!
cpu 0x4: Vector: 700 (Program Check) at [c0000000bb0e7960]
pc: c0000000003cb50c: .gen_pool_destroy+0xac/0x110
lr: c0000000003cb4fc: .gen_pool_destroy+0x9c/0x110
sp: c0000000bb0e7be0
msr: 8000000000029032
current = 0xc0000000bb0e0000
paca = 0xc000000006d30e00 softe: 0 irq_happened: 0x01
pid = 13044, comm = rmmod
kernel BUG at lib/genalloc.c:243!
[c0000000bb0e7ca0] d000000004b00020 .foo_exit+0x20/0x38 [foo]
[c0000000bb0e7d20] c0000000000dff98 .SyS_delete_module+0x1a8/0x290
[c0000000bb0e7e30] c0000000000097d4 syscall_exit+0x0/0x94
--- Exception: c00 (System Call) at 000000800753d1a0
SP (fffd0b0e640) is in userspace
Signed-off-by: Thadeu Lima de Souza Cascardo <[email protected]>
Cc: Paul Gortmaker <[email protected]>
Cc: Benjamin Gaignard <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
lib/genalloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/genalloc.c b/lib/genalloc.c
index 6bc04aa..7cb7a5d 100644
--- a/lib/genalloc.c
+++ b/lib/genalloc.c
@@ -176,7 +176,7 @@ int gen_pool_add_virt(struct gen_pool *pool, unsigned long virt, phys_addr_t phy
struct gen_pool_chunk *chunk;
int nbits = size >> pool->min_alloc_order;
int nbytes = sizeof(struct gen_pool_chunk) +
- (nbits + BITS_PER_BYTE - 1) / BITS_PER_BYTE;
+ BITS_TO_LONGS(nbits) * sizeof(long);
chunk = kmalloc_node(nbytes, GFP_KERNEL | __GFP_ZERO, nid);
if (unlikely(chunk == NULL))
--
1.7.9.5
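[ note: the one-line fix above changes the chunk bitmap allocation from a
  byte-granular size to a long-granular one. A tiny userspace sketch of the two
  calculations, using the 70-bit example from the changelog (9 bytes before the
  fix, 16 bytes on a 64-bit machine after it): ]
#include <stdio.h>
#define BITS_PER_BYTE 8
#define BITS_PER_LONG (sizeof(long) * BITS_PER_BYTE)
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)
int main(void)
{
	unsigned int nbits = 70;
	/* Old calculation: round up to whole bytes -- 9 bytes for 70 bits. */
	size_t old_bytes = (nbits + BITS_PER_BYTE - 1) / BITS_PER_BYTE;
	/* Fixed calculation: round up to whole longs -- 16 bytes on 64-bit. */
	size_t new_bytes = BITS_TO_LONGS(nbits) * sizeof(long);
	printf("70 bits: old=%zu bytes, fixed=%zu bytes\n", old_bytes, new_bytes);
	return 0;
}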
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Josh Triplett <[email protected]>
commit 785107923a83d8456bbd8564e288a24d84109a46 upstream.
Some new ACPI 5.0 tables reference resources stored in boot services
memory, so keep that memory around until we have ACPI and can extract
data from it.
Signed-off-by: Josh Triplett <[email protected]>
Link: http://lkml.kernel.org/r/baaa6d44bdc4eb0c58e5d1b4ccd2c729f854ac55.1348876882.git.josh@joshtriplett.org
Signed-off-by: H. Peter Anvin <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/x86/platform/efi/efi.c | 31 ++++++++++++++++++-------------
include/linux/efi.h | 5 +++++
init/main.c | 3 +++
3 files changed, 26 insertions(+), 13 deletions(-)
diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
index f55a4ce..b3dbbdb 100644
--- a/arch/x86/platform/efi/efi.c
+++ b/arch/x86/platform/efi/efi.c
@@ -419,10 +419,21 @@ void __init efi_reserve_boot_services(void)
}
}
-static void __init efi_free_boot_services(void)
+static void __init efi_unmap_memmap(void)
+{
+ if (memmap.map) {
+ early_iounmap(memmap.map, memmap.nr_map * memmap.desc_size);
+ memmap.map = NULL;
+ }
+}
+
+void __init efi_free_boot_services(void)
{
void *p;
+ if (!efi_native)
+ return;
+
for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
efi_memory_desc_t *md = p;
unsigned long long start = md->phys_addr;
@@ -438,6 +449,8 @@ static void __init efi_free_boot_services(void)
free_bootmem_late(start, size);
}
+
+ efi_unmap_memmap();
}
static int __init efi_systab_init(void *phys)
@@ -787,8 +800,10 @@ void __init efi_enter_virtual_mode(void)
* non-native EFI
*/
- if (!efi_native)
- goto out;
+ if (!efi_native) {
+ efi_unmap_memmap();
+ return;
+ }
/* Merge contiguous regions of the same type and attribute */
for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
@@ -878,13 +893,6 @@ void __init efi_enter_virtual_mode(void)
}
/*
- * Thankfully, it does seem that no runtime services other than
- * SetVirtualAddressMap() will touch boot services code, so we can
- * get rid of it all at this point
- */
- efi_free_boot_services();
-
- /*
* Now that EFI is in virtual mode, update the function
* pointers in the runtime service table to the new virtual addresses.
*
@@ -907,9 +915,6 @@ void __init efi_enter_virtual_mode(void)
if (__supported_pte_mask & _PAGE_NX)
runtime_code_page_mkexec();
-out:
- early_iounmap(memmap.map, memmap.nr_map * memmap.desc_size);
- memmap.map = NULL;
kfree(new_memmap);
}
diff --git a/include/linux/efi.h b/include/linux/efi.h
index ec45ccd..5782114 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -496,6 +496,11 @@ extern void efi_map_pal_code (void);
extern void efi_memmap_walk (efi_freemem_callback_t callback, void *arg);
extern void efi_gettimeofday (struct timespec *ts);
extern void efi_enter_virtual_mode (void); /* switch EFI to virtual mode, if possible */
+#ifdef CONFIG_X86
+extern void efi_free_boot_services(void);
+#else
+static inline void efi_free_boot_services(void) {}
+#endif
extern u64 efi_get_iobase (void);
extern u32 efi_mem_type (unsigned long phys_addr);
extern u64 efi_mem_attributes (unsigned long phys_addr);
diff --git a/init/main.c b/init/main.c
index b5cc0a7..593c8b6 100644
--- a/init/main.c
+++ b/init/main.c
@@ -630,6 +630,9 @@ asmlinkage void __init start_kernel(void)
acpi_early_init(); /* before LAPIC and SMP init */
sfi_init_late();
+ if (efi_enabled)
+ efi_free_boot_services();
+
ftrace_init();
/* Do the rest non-__init'ed, we're now alive */
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Kees Cook <[email protected]>
commit 12176503366885edd542389eed3aaf94be163fdb upstream.
The compat ioctl for VIDEO_SET_SPU_PALETTE was missing an error check
while converting ioctl arguments. This could lead to leaking kernel
stack contents into userspace.
Patch extracted from existing fix in grsecurity.
Signed-off-by: Kees Cook <[email protected]>
Cc: David Miller <[email protected]>
Cc: Brad Spengler <[email protected]>
Cc: PaX Team <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/compat_ioctl.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/fs/compat_ioctl.c b/fs/compat_ioctl.c
index debdfe0..5d2069f 100644
--- a/fs/compat_ioctl.c
+++ b/fs/compat_ioctl.c
@@ -210,6 +210,8 @@ static int do_video_set_spu_palette(unsigned int fd, unsigned int cmd,
err = get_user(palp, &up->palette);
err |= get_user(length, &up->length);
+ if (err)
+ return -EFAULT;
up_native = compat_alloc_user_space(sizeof(struct video_spu_palette));
err = put_user(compat_ptr(palp), &up_native->palette);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Williams <[email protected]>
commit f8295ec22cb0f1ee6849b862addbfa3ea9320755 upstream.
These devices provide QMI and ethernet functionality via a standard CDC
ethernet descriptor. But when driven by cdc_ether, the QMI
functionality is unavailable because only cdc_ether can claim the USB
interface. Thus blacklist the devices in cdc_ether and add their IDs to
qmi_wwan, which enables both QMI and ethernet simultaneously.
Signed-off-by: Dan Williams <[email protected]>
Acked-by: Greg Kroah-Hartman <[email protected]>
Acked-by: Bjørn Mork <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/net/usb/cdc_ether.c | 41 ++++++++++++++++++++++++++---------------
drivers/net/usb/qmi_wwan.c | 14 ++++++++++++++
2 files changed, 40 insertions(+), 15 deletions(-)
diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
index a03de71..d012982 100644
--- a/drivers/net/usb/cdc_ether.c
+++ b/drivers/net/usb/cdc_ether.c
@@ -592,6 +592,32 @@ static const struct usb_device_id products [] = {
.driver_info = 0,
},
+/* Novatel USB551L and MC551 - handled by qmi_wwan */
+{
+ .match_flags = USB_DEVICE_ID_MATCH_VENDOR
+ | USB_DEVICE_ID_MATCH_PRODUCT
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+ .idVendor = NOVATEL_VENDOR_ID,
+ .idProduct = 0xB001,
+ .bInterfaceClass = USB_CLASS_COMM,
+ .bInterfaceSubClass = USB_CDC_SUBCLASS_ETHERNET,
+ .bInterfaceProtocol = USB_CDC_PROTO_NONE,
+ .driver_info = 0,
+},
+
+/* Novatel E362 - handled by qmi_wwan */
+{
+ .match_flags = USB_DEVICE_ID_MATCH_VENDOR
+ | USB_DEVICE_ID_MATCH_PRODUCT
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+ .idVendor = NOVATEL_VENDOR_ID,
+ .idProduct = 0x9010,
+ .bInterfaceClass = USB_CLASS_COMM,
+ .bInterfaceSubClass = USB_CDC_SUBCLASS_ETHERNET,
+ .bInterfaceProtocol = USB_CDC_PROTO_NONE,
+ .driver_info = 0,
+},
+
/*
* WHITELIST!!!
*
@@ -604,21 +630,6 @@ static const struct usb_device_id products [] = {
* because of bugs/quirks in a given product (like Zaurus, above).
*/
{
- /* Novatel USB551L */
- /* This match must come *before* the generic CDC-ETHER match so that
- * we get FLAG_WWAN set on the device, since it's descriptors are
- * generic CDC-ETHER.
- */
- .match_flags = USB_DEVICE_ID_MATCH_VENDOR
- | USB_DEVICE_ID_MATCH_PRODUCT
- | USB_DEVICE_ID_MATCH_INT_INFO,
- .idVendor = NOVATEL_VENDOR_ID,
- .idProduct = 0xB001,
- .bInterfaceClass = USB_CLASS_COMM,
- .bInterfaceSubClass = USB_CDC_SUBCLASS_ETHERNET,
- .bInterfaceProtocol = USB_CDC_PROTO_NONE,
- .driver_info = (unsigned long)&wwan_info,
-}, {
/* ZTE (Vodafone) K3805-Z */
.match_flags = USB_DEVICE_ID_MATCH_VENDOR
| USB_DEVICE_ID_MATCH_PRODUCT
diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
index d124bdd..466af29 100644
--- a/drivers/net/usb/qmi_wwan.c
+++ b/drivers/net/usb/qmi_wwan.c
@@ -579,6 +579,20 @@ static const struct usb_device_id products[] = {
.bInterfaceProtocol = 0xff,
.driver_info = (unsigned long)&qmi_wwan_sierra,
},
+ { /* Novatel USB551L and MC551 */
+ USB_DEVICE_AND_INTERFACE_INFO(0x1410, 0xb001,
+ USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_ETHERNET,
+ USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long)&qmi_wwan_info,
+ },
+ { /* Novatel E362 */
+ USB_DEVICE_AND_INTERFACE_INFO(0x1410, 0x9010,
+ USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_ETHERNET,
+ USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long)&qmi_wwan_info,
+ },
/* Gobi 1000 devices */
{QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sarah Sharp <[email protected]>
commit 43a09f7fb01fa1e091416a2aa49b6c666458c1ee upstream.
The command cancellation code doesn't check whether find_trb_seg()
couldn't find the segment that contains the TRB to be canceled. This
could cause a NULL pointer deference later in the function when next_trb
is called. It's unlikely to happen unless something is wrong with the
command ring pointers, so add some debugging in case it happens.
This patch should be backported to stable kernels as old as 3.0, that
contain the commit b63f4053cc8aa22a98e3f9a97845afe6c15d0a0d "xHCI:
handle command after aborting the command ring".
Signed-off-by: Sarah Sharp <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/host/xhci-ring.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 47b6bb6..755858e 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -1228,6 +1228,17 @@ static void xhci_cmd_to_noop(struct xhci_hcd *xhci, struct xhci_cd *cur_cd)
cur_seg = find_trb_seg(xhci->cmd_ring->first_seg,
xhci->cmd_ring->dequeue, &cycle_state);
+ if (!cur_seg) {
+ xhci_warn(xhci, "Command ring mismatch, dequeue = %p %llx (dma)\n",
+ xhci->cmd_ring->dequeue,
+ (unsigned long long)
+ xhci_trb_virt_to_dma(xhci->cmd_ring->deq_seg,
+ xhci->cmd_ring->dequeue));
+ xhci_debug_ring(xhci, xhci->cmd_ring);
+ xhci_dbg_ring_ptrs(xhci, xhci->cmd_ring);
+ return;
+ }
+
/* find the command trb matched by cd from command ring */
for (cmd_trb = xhci->cmd_ring->dequeue;
cmd_trb != xhci->cmd_ring->enqueue;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Anisse Astier <[email protected]>
commit c323dc023b9501e5d09582ec7efd1d40a9001d99 upstream.
BIOS vendors keep changing the BIOS versions. Only match the beginning
of the string to match all Lucid tablets with board name M11JB.
Signed-off-by: Anisse Astier <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/host/pci-quirks.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/usb/host/pci-quirks.c b/drivers/usb/host/pci-quirks.c
index 966d148..ead4525 100644
--- a/drivers/usb/host/pci-quirks.c
+++ b/drivers/usb/host/pci-quirks.c
@@ -545,7 +545,7 @@ static const struct dmi_system_id __devinitconst ehci_dmi_nohandoff_table[] = {
/* Pegatron Lucid (Ordissimo AIRIS) */
.matches = {
DMI_MATCH(DMI_BOARD_NAME, "M11JB"),
- DMI_MATCH(DMI_BIOS_VERSION, "Lucid-GE-133"),
+ DMI_MATCH(DMI_BIOS_VERSION, "Lucid-"),
},
},
{ }
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit 28c3ae9a8cf45f439c9a0779ebd0256e2ae72813 upstream.
The private int_urb is never allocated so the submission from the
control completion handler will always fail. Remove this odd piece of
broken code.
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
[ herton: adjusted context ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/mos7840.c | 15 +--------------
1 file changed, 1 insertion(+), 14 deletions(-)
diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
index f717db4..eab84c1 100644
--- a/drivers/usb/serial/mos7840.c
+++ b/drivers/usb/serial/mos7840.c
@@ -219,7 +219,6 @@ struct moschip_port {
int port_num; /*Actual port number in the device(1,2,etc) */
struct urb *write_urb; /* write URB for this port */
struct urb *read_urb; /* read URB for this port */
- struct urb *int_urb;
__u8 shadowLCR; /* last LCR value received */
__u8 shadowMCR; /* last MCR value received */
char open;
@@ -494,7 +493,6 @@ static void mos7840_control_callback(struct urb *urb)
unsigned char *data;
struct moschip_port *mos7840_port;
__u8 regval = 0x0;
- int result = 0;
int status = urb->status;
mos7840_port = urb->context;
@@ -513,7 +511,7 @@ static void mos7840_control_callback(struct urb *urb)
default:
dbg("%s - nonzero urb status received: %d", __func__,
status);
- goto exit;
+ return;
}
dbg("%s urb buffer size is %d", __func__, urb->actual_length);
@@ -526,17 +524,6 @@ static void mos7840_control_callback(struct urb *urb)
mos7840_handle_new_msr(mos7840_port, regval);
else if (mos7840_port->MsrLsr == 1)
mos7840_handle_new_lsr(mos7840_port, regval);
-
-exit:
- spin_lock(&mos7840_port->pool_lock);
- if (!mos7840_port->zombie)
- result = usb_submit_urb(mos7840_port->int_urb, GFP_ATOMIC);
- spin_unlock(&mos7840_port->pool_lock);
- if (result) {
- dev_err(&urb->dev->dev,
- "%s - Error %d submitting interrupt urb\n",
- __func__, result);
- }
}
static int mos7840_get_reg(struct moschip_port *mcs, __u16 Wval, __u16 reg,
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit 65a4cdbb170e4ec1a7fa0e94936d47e24a17b0e8 upstream.
Make sure control urb is freed at release.
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/mos7840.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
index 009c1d9..748bc6a 100644
--- a/drivers/usb/serial/mos7840.c
+++ b/drivers/usb/serial/mos7840.c
@@ -2755,6 +2755,7 @@ static void mos7840_release(struct usb_serial *serial)
del_timer_sync(&mos7840_port->led_timer1);
del_timer_sync(&mos7840_port->led_timer2);
}
+ usb_free_urb(mos7840_port->control_urb);
kfree(mos7840_port->ctrl_buf);
kfree(mos7840_port->dr);
kfree(mos7840_port);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit 7e41f9bcdd2e813ea2a3c40db291d87ea06b559f upstream.
Make sure port private data is deallocated on errors in attach.
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/sierra.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/usb/serial/sierra.c b/drivers/usb/serial/sierra.c
index 7d7ab91..f72bbaa 100644
--- a/drivers/usb/serial/sierra.c
+++ b/drivers/usb/serial/sierra.c
@@ -910,7 +910,7 @@ static int sierra_startup(struct usb_serial *serial)
dev_dbg(&port->dev, "%s: kmalloc for "
"sierra_port_private (%d) failed!\n",
__func__, i);
- return -ENOMEM;
+ goto err;
}
spin_lock_init(&portdata->lock);
init_usb_anchor(&portdata->active);
@@ -947,6 +947,13 @@ static int sierra_startup(struct usb_serial *serial)
}
return 0;
+err:
+ for (--i; i >= 0; --i) {
+ portdata = usb_get_serial_port_data(serial->port[i]);
+ kfree(portdata);
+ }
+
+ return -ENOMEM;
}
static void sierra_release(struct usb_serial *serial)
--
1.7.9.5
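As a side note for reviewers unfamiliar with this unwind idiom: below is a minimal, self-contained userspace sketch of the same pattern (struct port_priv, NUM_PORTS and ports_startup() are made-up names for illustration only, not driver code).

/* Illustrative only: allocate per-port private data and, on failure,
 * free everything allocated so far before returning an error. */
#include <stdio.h>
#include <stdlib.h>

#define NUM_PORTS 4

struct port_priv { int id; };

static struct port_priv *priv[NUM_PORTS];

static int ports_startup(void)
{
	int i;

	for (i = 0; i < NUM_PORTS; i++) {
		priv[i] = calloc(1, sizeof(*priv[i]));
		if (!priv[i])
			goto err;	/* was: return error, leaking 0..i-1 */
		priv[i]->id = i;
	}
	return 0;

err:
	/* unwind: free only the entries that were actually allocated */
	for (--i; i >= 0; --i) {
		free(priv[i]);
		priv[i] = NULL;
	}
	return -1;
}

int main(void)
{
	printf("startup: %d\n", ports_startup());
	return 0;
}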
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit ea0dbebffe118724cd4df7d9b071ea8ee48d48f0 upstream.
Make sure to allocate the control-message buffer dynamically as some
platforms cannot do DMA from stack.
Note that only the first byte of the old buffer was used.
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/opticon.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/usb/serial/opticon.c b/drivers/usb/serial/opticon.c
index 02cb1b7..5ba2700 100644
--- a/drivers/usb/serial/opticon.c
+++ b/drivers/usb/serial/opticon.c
@@ -158,7 +158,11 @@ static int send_control_msg(struct usb_serial_port *port, u8 requesttype,
{
struct usb_serial *serial = port->serial;
int retval;
- u8 buffer[2];
+ u8 *buffer;
+
+ buffer = kzalloc(1, GFP_KERNEL);
+ if (!buffer)
+ return -ENOMEM;
buffer[0] = val;
/* Send the message to the vendor control endpoint
@@ -167,6 +171,7 @@ static int send_control_msg(struct usb_serial_port *port, u8 requesttype,
requesttype,
USB_DIR_OUT|USB_TYPE_VENDOR|USB_RECIP_INTERFACE,
0, 0, buffer, 1, 0);
+ kfree(buffer);
return retval;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit b8a0055050b6294826171641b182c09f78f4cc63 upstream.
Fix memory leak in attach error path where the read urb was never freed.
Cc: Bill Pemberton <[email protected]>
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/quatech2.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/usb/serial/quatech2.c b/drivers/usb/serial/quatech2.c
index 8dd88eb..d170271 100644
--- a/drivers/usb/serial/quatech2.c
+++ b/drivers/usb/serial/quatech2.c
@@ -829,6 +829,7 @@ static int qt2_setup_urbs(struct usb_serial *serial)
if (status != 0) {
dev_err(&serial->dev->dev,
"%s - submit read urb failed %i\n", __func__, status);
+ usb_free_urb(serial_priv->read_urb);
return status;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit 2ee44fbeac92c36e53779a57ee84cfee1affe418 upstream.
Make sure no control urb is submitted during close after a disconnect by
checking the disconnected flag.
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/metro-usb.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/drivers/usb/serial/metro-usb.c b/drivers/usb/serial/metro-usb.c
index d47eb06..aafd914 100644
--- a/drivers/usb/serial/metro-usb.c
+++ b/drivers/usb/serial/metro-usb.c
@@ -188,16 +188,13 @@ static void metrousb_cleanup(struct usb_serial_port *port)
{
dev_dbg(&port->dev, "%s\n", __func__);
- if (port->serial->dev) {
- /* Shutdown any interrupt in urbs. */
- if (port->interrupt_in_urb) {
- usb_unlink_urb(port->interrupt_in_urb);
- usb_kill_urb(port->interrupt_in_urb);
- }
-
- /* Send deactivate cmd to device */
+ usb_unlink_urb(port->interrupt_in_urb);
+ usb_kill_urb(port->interrupt_in_urb);
+
+ mutex_lock(&port->serial->disc_mutex);
+ if (!port->serial->disconnected)
metrousb_send_unidirectional_cmd(UNI_CMD_CLOSE, port);
- }
+ mutex_unlock(&port->serial->disc_mutex);
}
static int metrousb_open(struct tty_struct *tty, struct usb_serial_port *port)
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Shigorin <[email protected]>
commit d7870af7e2e3a91b462075ec1ca669b482215187 upstream.
This commit sets removable subclass for Casio EX-N1 digital camera.
The patch has been tested within an ALT Linux kernel:
http://git.altlinux.org/people/led/packages/?p=kernel-image-3.0.git;a=commitdiff;h=c0fd891836e89fe0c93a4d536a59216d90e4e3e7
See also https://bugzilla.kernel.org/show_bug.cgi?id=49221
Signed-off-by: Oleksandr Chumachenko <[email protected]>
Signed-off-by: Michael Shigorin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/storage/unusual_devs.h | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
index 1719886..dd2c64f 100644
--- a/drivers/usb/storage/unusual_devs.h
+++ b/drivers/usb/storage/unusual_devs.h
@@ -1004,6 +1004,12 @@ UNUSUAL_DEV( 0x07cf, 0x1001, 0x1000, 0x9999,
USB_SC_8070, USB_PR_CB, NULL,
US_FL_NEED_OVERRIDE | US_FL_FIX_INQUIRY ),
+/* Submitted by Oleksandr Chumachenko <[email protected]> */
+UNUSUAL_DEV( 0x07cf, 0x1167, 0x0100, 0x0100,
+ "Casio",
+ "EX-N1 DigitalCamera",
+ USB_SC_8070, USB_PR_DEVICE, NULL, 0),
+
/* Submitted by Hartmut Wahl <[email protected]>*/
UNUSUAL_DEV( 0x0839, 0x000a, 0x0001, 0x0001,
"Samsung",
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Yinghai Lu <[email protected]>
commit 1f2ff682ac951ed82cc043cf140d2851084512df upstream.
We need to handle E820_RAM and E820_RESERVED_KERNEL at the same time.
Also, memblock keeps page-aligned ranges for RAM, so we can avoid mapping
partial pages.
Signed-off-by: Yinghai Lu <[email protected]>
Link: http://lkml.kernel.org/r/CAE9FiQVZirvaBMFYRfXMmWEcHbKSicQEHz4VAwUv0xFCk51ZNw@mail.gmail.com
Acked-by: Jacob Shin <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/x86/kernel/setup.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index e615c31..6cafbcd 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -920,18 +920,19 @@ void __init setup_arch(char **cmdline_p)
#ifdef CONFIG_X86_64
if (max_pfn > max_low_pfn) {
int i;
- for (i = 0; i < e820.nr_map; i++) {
- struct e820entry *ei = &e820.map[i];
+ unsigned long start, end;
+ unsigned long start_pfn, end_pfn;
- if (ei->addr + ei->size <= 1UL << 32)
- continue;
+ for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn,
+ NULL) {
- if (ei->type == E820_RESERVED)
+ end = PFN_PHYS(end_pfn);
+ if (end <= (1UL<<32))
continue;
+ start = PFN_PHYS(start_pfn);
max_pfn_mapped = init_memory_mapping(
- ei->addr < 1UL << 32 ? 1UL << 32 : ei->addr,
- ei->addr + ei->size);
+ max((1UL<<32), start), end);
}
/* can we preseve max_low_pfn ?*/
--
1.7.9.5
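For illustration, the clamping the patch performs per pfn range can be sketched in plain C as below; PAGE_SHIFT = 12 and the helper names are assumptions made only for this sketch, not kernel code.

/* Illustrative only: map just the part of each pfn range above 4 GiB. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PFN_PHYS(pfn) ((uint64_t)(pfn) << PAGE_SHIFT)
#define FOUR_GIB (1ULL << 32)

static void map_range_above_4g(uint64_t start_pfn, uint64_t end_pfn)
{
	uint64_t start, end;

	end = PFN_PHYS(end_pfn);
	if (end <= FOUR_GIB)		/* range entirely below 4 GiB: skip */
		return;

	start = PFN_PHYS(start_pfn);
	if (start < FOUR_GIB)		/* clamp the low end to 4 GiB */
		start = FOUR_GIB;

	printf("would map %#llx - %#llx\n",
	       (unsigned long long)start, (unsigned long long)end);
}

int main(void)
{
	map_range_above_4g(0x100, 0x80000);	/* below 4 GiB: skipped */
	map_range_above_4g(0x80000, 0x180000);	/* straddles 4 GiB: clamped */
	return 0;
}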
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: "Michael S. Tsirkin" <[email protected]>
commit 910a578f7e9400a78a3b13aba0b4d2df16a2cb05 upstream.
We copy the head count to a 16-bit field; this works by chance on LE, but on
BE the guest gets 0. Fix it up.
Signed-off-by: Michael S. Tsirkin <[email protected]>
Tested-by: Alexander Graf <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/vhost/net.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index f82a739..3c4dcc3d9 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -379,7 +379,8 @@ static void handle_rx(struct vhost_net *net)
.hdr.gso_type = VIRTIO_NET_HDR_GSO_NONE
};
size_t total_len = 0;
- int err, headcount, mergeable;
+ int err, mergeable;
+ s16 headcount;
size_t vhost_hlen, sock_hlen;
size_t vhost_len, sock_len;
/* TODO: check that we are running from vhost_worker? */
--
1.7.9.5
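A minimal userspace sketch of the underlying hazard, assuming the bug class described above (copying the first two bytes of an int into a 16-bit field); this is not the vhost code itself.

/* Illustrative only: copying the first two bytes of an int into a
 * 16-bit field takes the low half on little-endian but the high half
 * (zero, for small counts) on big-endian. Declaring the source as a
 * 16-bit type, as the patch does, avoids the problem. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	int headcount = 3;		/* what the buggy code had */
	int16_t headcount16 = 3;	/* what the patch uses */
	uint16_t field;

	memcpy(&field, &headcount, sizeof(field));
	printf("from int: %u (3 on LE, 0 on BE)\n", field);

	memcpy(&field, &headcount16, sizeof(field));
	printf("from s16: %u (3 on either endianness)\n", field);
	return 0;
}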
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Felix Fietkau <[email protected]>
commit 73b26df5fa1a6245d6fc982362518b620bc7c2fe upstream.
This reverts commit a240dc7b3c7463bd60cf0a9b2a90f52f78aae0fd.
This commit is reducing tx power by at least 10 db on some devices,
e.g. the Buffalo WZR-HP-G450H.
Signed-off-by: Felix Fietkau <[email protected]>
Cc: [email protected]
Signed-off-by: John W. Linville <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
.../net/wireless/ath/ath9k/ar9003_2p2_initvals.h | 164 ++++++++++----------
1 file changed, 82 insertions(+), 82 deletions(-)
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h b/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h
index 952cb2b..21d9a40 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h
+++ b/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h
@@ -533,107 +533,107 @@ static const u32 ar9300_2p2_baseband_core[][2] = {
static const u32 ar9300Modes_high_power_tx_gain_table_2p2[][5] = {
/* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */
- {0x0000a2dc, 0x000cfff0, 0x000cfff0, 0x03aaa352, 0x03aaa352},
- {0x0000a2e0, 0x000f0000, 0x000f0000, 0x03ccc584, 0x03ccc584},
- {0x0000a2e4, 0x03f00000, 0x03f00000, 0x03f0f800, 0x03f0f800},
+ {0x0000a2dc, 0x00033800, 0x00033800, 0x03aaa352, 0x03aaa352},
+ {0x0000a2e0, 0x0003c000, 0x0003c000, 0x03ccc584, 0x03ccc584},
+ {0x0000a2e4, 0x03fc0000, 0x03fc0000, 0x03f0f800, 0x03f0f800},
{0x0000a2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000},
{0x0000a410, 0x000050d9, 0x000050d9, 0x000050d9, 0x000050d9},
{0x0000a500, 0x00000000, 0x00000000, 0x00000000, 0x00000000},
{0x0000a504, 0x06000003, 0x06000003, 0x04000002, 0x04000002},
{0x0000a508, 0x0a000020, 0x0a000020, 0x08000004, 0x08000004},
{0x0000a50c, 0x10000023, 0x10000023, 0x0b000200, 0x0b000200},
- {0x0000a510, 0x15000028, 0x15000028, 0x0f000202, 0x0f000202},
- {0x0000a514, 0x1b00002b, 0x1b00002b, 0x12000400, 0x12000400},
- {0x0000a518, 0x1f020028, 0x1f020028, 0x16000402, 0x16000402},
- {0x0000a51c, 0x2502002b, 0x2502002b, 0x19000404, 0x19000404},
- {0x0000a520, 0x2a04002a, 0x2a04002a, 0x1c000603, 0x1c000603},
- {0x0000a524, 0x2e06002a, 0x2e06002a, 0x21000a02, 0x21000a02},
- {0x0000a528, 0x3302202d, 0x3302202d, 0x25000a04, 0x25000a04},
- {0x0000a52c, 0x3804202c, 0x3804202c, 0x28000a20, 0x28000a20},
- {0x0000a530, 0x3c06202c, 0x3c06202c, 0x2c000e20, 0x2c000e20},
- {0x0000a534, 0x4108202d, 0x4108202d, 0x30000e22, 0x30000e22},
- {0x0000a538, 0x4506402d, 0x4506402d, 0x34000e24, 0x34000e24},
- {0x0000a53c, 0x4906222d, 0x4906222d, 0x38001640, 0x38001640},
- {0x0000a540, 0x4d062231, 0x4d062231, 0x3c001660, 0x3c001660},
- {0x0000a544, 0x50082231, 0x50082231, 0x3f001861, 0x3f001861},
- {0x0000a548, 0x5608422e, 0x5608422e, 0x43001a81, 0x43001a81},
- {0x0000a54c, 0x5a08442e, 0x5a08442e, 0x47001a83, 0x47001a83},
- {0x0000a550, 0x5e0a4431, 0x5e0a4431, 0x4a001c84, 0x4a001c84},
- {0x0000a554, 0x640a4432, 0x640a4432, 0x4e001ce3, 0x4e001ce3},
- {0x0000a558, 0x680a4434, 0x680a4434, 0x52001ce5, 0x52001ce5},
- {0x0000a55c, 0x6c0a6434, 0x6c0a6434, 0x56001ce9, 0x56001ce9},
- {0x0000a560, 0x6f0a6633, 0x6f0a6633, 0x5a001ceb, 0x5a001ceb},
- {0x0000a564, 0x730c6634, 0x730c6634, 0x5d001eec, 0x5d001eec},
- {0x0000a568, 0x730c6634, 0x730c6634, 0x5d001eec, 0x5d001eec},
- {0x0000a56c, 0x730c6634, 0x730c6634, 0x5d001eec, 0x5d001eec},
- {0x0000a570, 0x730c6634, 0x730c6634, 0x5d001eec, 0x5d001eec},
- {0x0000a574, 0x730c6634, 0x730c6634, 0x5d001eec, 0x5d001eec},
- {0x0000a578, 0x730c6634, 0x730c6634, 0x5d001eec, 0x5d001eec},
- {0x0000a57c, 0x730c6634, 0x730c6634, 0x5d001eec, 0x5d001eec},
+ {0x0000a510, 0x16000220, 0x16000220, 0x0f000202, 0x0f000202},
+ {0x0000a514, 0x1c000223, 0x1c000223, 0x12000400, 0x12000400},
+ {0x0000a518, 0x21002220, 0x21002220, 0x16000402, 0x16000402},
+ {0x0000a51c, 0x27002223, 0x27002223, 0x19000404, 0x19000404},
+ {0x0000a520, 0x2b022220, 0x2b022220, 0x1c000603, 0x1c000603},
+ {0x0000a524, 0x2f022222, 0x2f022222, 0x21000a02, 0x21000a02},
+ {0x0000a528, 0x34022225, 0x34022225, 0x25000a04, 0x25000a04},
+ {0x0000a52c, 0x3a02222a, 0x3a02222a, 0x28000a20, 0x28000a20},
+ {0x0000a530, 0x3e02222c, 0x3e02222c, 0x2c000e20, 0x2c000e20},
+ {0x0000a534, 0x4202242a, 0x4202242a, 0x30000e22, 0x30000e22},
+ {0x0000a538, 0x4702244a, 0x4702244a, 0x34000e24, 0x34000e24},
+ {0x0000a53c, 0x4b02244c, 0x4b02244c, 0x38001640, 0x38001640},
+ {0x0000a540, 0x4e02246c, 0x4e02246c, 0x3c001660, 0x3c001660},
+ {0x0000a544, 0x52022470, 0x52022470, 0x3f001861, 0x3f001861},
+ {0x0000a548, 0x55022490, 0x55022490, 0x43001a81, 0x43001a81},
+ {0x0000a54c, 0x59022492, 0x59022492, 0x47001a83, 0x47001a83},
+ {0x0000a550, 0x5d022692, 0x5d022692, 0x4a001c84, 0x4a001c84},
+ {0x0000a554, 0x61022892, 0x61022892, 0x4e001ce3, 0x4e001ce3},
+ {0x0000a558, 0x65024890, 0x65024890, 0x52001ce5, 0x52001ce5},
+ {0x0000a55c, 0x69024892, 0x69024892, 0x56001ce9, 0x56001ce9},
+ {0x0000a560, 0x6e024c92, 0x6e024c92, 0x5a001ceb, 0x5a001ceb},
+ {0x0000a564, 0x74026e92, 0x74026e92, 0x5d001eec, 0x5d001eec},
+ {0x0000a568, 0x74026e92, 0x74026e92, 0x5d001eec, 0x5d001eec},
+ {0x0000a56c, 0x74026e92, 0x74026e92, 0x5d001eec, 0x5d001eec},
+ {0x0000a570, 0x74026e92, 0x74026e92, 0x5d001eec, 0x5d001eec},
+ {0x0000a574, 0x74026e92, 0x74026e92, 0x5d001eec, 0x5d001eec},
+ {0x0000a578, 0x74026e92, 0x74026e92, 0x5d001eec, 0x5d001eec},
+ {0x0000a57c, 0x74026e92, 0x74026e92, 0x5d001eec, 0x5d001eec},
{0x0000a580, 0x00800000, 0x00800000, 0x00800000, 0x00800000},
{0x0000a584, 0x06800003, 0x06800003, 0x04800002, 0x04800002},
{0x0000a588, 0x0a800020, 0x0a800020, 0x08800004, 0x08800004},
{0x0000a58c, 0x10800023, 0x10800023, 0x0b800200, 0x0b800200},
- {0x0000a590, 0x15800028, 0x15800028, 0x0f800202, 0x0f800202},
- {0x0000a594, 0x1b80002b, 0x1b80002b, 0x12800400, 0x12800400},
- {0x0000a598, 0x1f820028, 0x1f820028, 0x16800402, 0x16800402},
- {0x0000a59c, 0x2582002b, 0x2582002b, 0x19800404, 0x19800404},
- {0x0000a5a0, 0x2a84002a, 0x2a84002a, 0x1c800603, 0x1c800603},
- {0x0000a5a4, 0x2e86002a, 0x2e86002a, 0x21800a02, 0x21800a02},
- {0x0000a5a8, 0x3382202d, 0x3382202d, 0x25800a04, 0x25800a04},
- {0x0000a5ac, 0x3884202c, 0x3884202c, 0x28800a20, 0x28800a20},
- {0x0000a5b0, 0x3c86202c, 0x3c86202c, 0x2c800e20, 0x2c800e20},
- {0x0000a5b4, 0x4188202d, 0x4188202d, 0x30800e22, 0x30800e22},
- {0x0000a5b8, 0x4586402d, 0x4586402d, 0x34800e24, 0x34800e24},
- {0x0000a5bc, 0x4986222d, 0x4986222d, 0x38801640, 0x38801640},
- {0x0000a5c0, 0x4d862231, 0x4d862231, 0x3c801660, 0x3c801660},
- {0x0000a5c4, 0x50882231, 0x50882231, 0x3f801861, 0x3f801861},
- {0x0000a5c8, 0x5688422e, 0x5688422e, 0x43801a81, 0x43801a81},
- {0x0000a5cc, 0x5a88442e, 0x5a88442e, 0x47801a83, 0x47801a83},
- {0x0000a5d0, 0x5e8a4431, 0x5e8a4431, 0x4a801c84, 0x4a801c84},
- {0x0000a5d4, 0x648a4432, 0x648a4432, 0x4e801ce3, 0x4e801ce3},
- {0x0000a5d8, 0x688a4434, 0x688a4434, 0x52801ce5, 0x52801ce5},
- {0x0000a5dc, 0x6c8a6434, 0x6c8a6434, 0x56801ce9, 0x56801ce9},
- {0x0000a5e0, 0x6f8a6633, 0x6f8a6633, 0x5a801ceb, 0x5a801ceb},
- {0x0000a5e4, 0x738c6634, 0x738c6634, 0x5d801eec, 0x5d801eec},
- {0x0000a5e8, 0x738c6634, 0x738c6634, 0x5d801eec, 0x5d801eec},
- {0x0000a5ec, 0x738c6634, 0x738c6634, 0x5d801eec, 0x5d801eec},
- {0x0000a5f0, 0x738c6634, 0x738c6634, 0x5d801eec, 0x5d801eec},
- {0x0000a5f4, 0x738c6634, 0x738c6634, 0x5d801eec, 0x5d801eec},
- {0x0000a5f8, 0x738c6634, 0x738c6634, 0x5d801eec, 0x5d801eec},
- {0x0000a5fc, 0x738c6634, 0x738c6634, 0x5d801eec, 0x5d801eec},
+ {0x0000a590, 0x16800220, 0x16800220, 0x0f800202, 0x0f800202},
+ {0x0000a594, 0x1c800223, 0x1c800223, 0x12800400, 0x12800400},
+ {0x0000a598, 0x21802220, 0x21802220, 0x16800402, 0x16800402},
+ {0x0000a59c, 0x27802223, 0x27802223, 0x19800404, 0x19800404},
+ {0x0000a5a0, 0x2b822220, 0x2b822220, 0x1c800603, 0x1c800603},
+ {0x0000a5a4, 0x2f822222, 0x2f822222, 0x21800a02, 0x21800a02},
+ {0x0000a5a8, 0x34822225, 0x34822225, 0x25800a04, 0x25800a04},
+ {0x0000a5ac, 0x3a82222a, 0x3a82222a, 0x28800a20, 0x28800a20},
+ {0x0000a5b0, 0x3e82222c, 0x3e82222c, 0x2c800e20, 0x2c800e20},
+ {0x0000a5b4, 0x4282242a, 0x4282242a, 0x30800e22, 0x30800e22},
+ {0x0000a5b8, 0x4782244a, 0x4782244a, 0x34800e24, 0x34800e24},
+ {0x0000a5bc, 0x4b82244c, 0x4b82244c, 0x38801640, 0x38801640},
+ {0x0000a5c0, 0x4e82246c, 0x4e82246c, 0x3c801660, 0x3c801660},
+ {0x0000a5c4, 0x52822470, 0x52822470, 0x3f801861, 0x3f801861},
+ {0x0000a5c8, 0x55822490, 0x55822490, 0x43801a81, 0x43801a81},
+ {0x0000a5cc, 0x59822492, 0x59822492, 0x47801a83, 0x47801a83},
+ {0x0000a5d0, 0x5d822692, 0x5d822692, 0x4a801c84, 0x4a801c84},
+ {0x0000a5d4, 0x61822892, 0x61822892, 0x4e801ce3, 0x4e801ce3},
+ {0x0000a5d8, 0x65824890, 0x65824890, 0x52801ce5, 0x52801ce5},
+ {0x0000a5dc, 0x69824892, 0x69824892, 0x56801ce9, 0x56801ce9},
+ {0x0000a5e0, 0x6e824c92, 0x6e824c92, 0x5a801ceb, 0x5a801ceb},
+ {0x0000a5e4, 0x74826e92, 0x74826e92, 0x5d801eec, 0x5d801eec},
+ {0x0000a5e8, 0x74826e92, 0x74826e92, 0x5d801eec, 0x5d801eec},
+ {0x0000a5ec, 0x74826e92, 0x74826e92, 0x5d801eec, 0x5d801eec},
+ {0x0000a5f0, 0x74826e92, 0x74826e92, 0x5d801eec, 0x5d801eec},
+ {0x0000a5f4, 0x74826e92, 0x74826e92, 0x5d801eec, 0x5d801eec},
+ {0x0000a5f8, 0x74826e92, 0x74826e92, 0x5d801eec, 0x5d801eec},
+ {0x0000a5fc, 0x74826e92, 0x74826e92, 0x5d801eec, 0x5d801eec},
{0x0000a600, 0x00000000, 0x00000000, 0x00000000, 0x00000000},
{0x0000a604, 0x00000000, 0x00000000, 0x00000000, 0x00000000},
- {0x0000a608, 0x01804601, 0x01804601, 0x00000000, 0x00000000},
- {0x0000a60c, 0x01804601, 0x01804601, 0x00000000, 0x00000000},
- {0x0000a610, 0x01804601, 0x01804601, 0x00000000, 0x00000000},
- {0x0000a614, 0x01804601, 0x01804601, 0x01404000, 0x01404000},
- {0x0000a618, 0x01804601, 0x01804601, 0x01404501, 0x01404501},
- {0x0000a61c, 0x01804601, 0x01804601, 0x02008501, 0x02008501},
- {0x0000a620, 0x03408d02, 0x03408d02, 0x0280ca03, 0x0280ca03},
- {0x0000a624, 0x0300cc03, 0x0300cc03, 0x03010c04, 0x03010c04},
- {0x0000a628, 0x03410d04, 0x03410d04, 0x04014c04, 0x04014c04},
- {0x0000a62c, 0x03410d04, 0x03410d04, 0x04015005, 0x04015005},
- {0x0000a630, 0x03410d04, 0x03410d04, 0x04015005, 0x04015005},
- {0x0000a634, 0x03410d04, 0x03410d04, 0x04015005, 0x04015005},
- {0x0000a638, 0x03410d04, 0x03410d04, 0x04015005, 0x04015005},
- {0x0000a63c, 0x03410d04, 0x03410d04, 0x04015005, 0x04015005},
- {0x0000b2dc, 0x000cfff0, 0x000cfff0, 0x03aaa352, 0x03aaa352},
- {0x0000b2e0, 0x000f0000, 0x000f0000, 0x03ccc584, 0x03ccc584},
- {0x0000b2e4, 0x03f00000, 0x03f00000, 0x03f0f800, 0x03f0f800},
+ {0x0000a608, 0x00000000, 0x00000000, 0x00000000, 0x00000000},
+ {0x0000a60c, 0x00000000, 0x00000000, 0x00000000, 0x00000000},
+ {0x0000a610, 0x00000000, 0x00000000, 0x00000000, 0x00000000},
+ {0x0000a614, 0x02004000, 0x02004000, 0x01404000, 0x01404000},
+ {0x0000a618, 0x02004801, 0x02004801, 0x01404501, 0x01404501},
+ {0x0000a61c, 0x02808a02, 0x02808a02, 0x02008501, 0x02008501},
+ {0x0000a620, 0x0380ce03, 0x0380ce03, 0x0280ca03, 0x0280ca03},
+ {0x0000a624, 0x04411104, 0x04411104, 0x03010c04, 0x03010c04},
+ {0x0000a628, 0x04411104, 0x04411104, 0x04014c04, 0x04014c04},
+ {0x0000a62c, 0x04411104, 0x04411104, 0x04015005, 0x04015005},
+ {0x0000a630, 0x04411104, 0x04411104, 0x04015005, 0x04015005},
+ {0x0000a634, 0x04411104, 0x04411104, 0x04015005, 0x04015005},
+ {0x0000a638, 0x04411104, 0x04411104, 0x04015005, 0x04015005},
+ {0x0000a63c, 0x04411104, 0x04411104, 0x04015005, 0x04015005},
+ {0x0000b2dc, 0x00033800, 0x00033800, 0x03aaa352, 0x03aaa352},
+ {0x0000b2e0, 0x0003c000, 0x0003c000, 0x03ccc584, 0x03ccc584},
+ {0x0000b2e4, 0x03fc0000, 0x03fc0000, 0x03f0f800, 0x03f0f800},
{0x0000b2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000},
- {0x0000c2dc, 0x000cfff0, 0x000cfff0, 0x03aaa352, 0x03aaa352},
- {0x0000c2e0, 0x000f0000, 0x000f0000, 0x03ccc584, 0x03ccc584},
- {0x0000c2e4, 0x03f00000, 0x03f00000, 0x03f0f800, 0x03f0f800},
+ {0x0000c2dc, 0x00033800, 0x00033800, 0x03aaa352, 0x03aaa352},
+ {0x0000c2e0, 0x0003c000, 0x0003c000, 0x03ccc584, 0x03ccc584},
+ {0x0000c2e4, 0x03fc0000, 0x03fc0000, 0x03f0f800, 0x03f0f800},
{0x0000c2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000},
{0x00016044, 0x012492d4, 0x012492d4, 0x012492d4, 0x012492d4},
- {0x00016048, 0x61200001, 0x61200001, 0x66480001, 0x66480001},
+ {0x00016048, 0x66480001, 0x66480001, 0x66480001, 0x66480001},
{0x00016068, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c},
{0x00016444, 0x012492d4, 0x012492d4, 0x012492d4, 0x012492d4},
- {0x00016448, 0x61200001, 0x61200001, 0x66480001, 0x66480001},
+ {0x00016448, 0x66480001, 0x66480001, 0x66480001, 0x66480001},
{0x00016468, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c},
{0x00016844, 0x012492d4, 0x012492d4, 0x012492d4, 0x012492d4},
- {0x00016848, 0x61200001, 0x61200001, 0x66480001, 0x66480001},
+ {0x00016848, 0x66480001, 0x66480001, 0x66480001, 0x66480001},
{0x00016868, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c},
};
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Geert Uytterhoeven <[email protected]>
commit 66081a72517a131430dcf986775f3268aafcb546 upstream.
The warning check for duplicate sysfs entries can cause a buffer overflow
when printing the warning, as strcat() doesn't check buffer sizes.
Use strlcat() instead.
Since strlcat() doesn't return a pointer to the passed buffer, unlike
strcat(), I had to convert the nested concatenation in sysfs_add_one() to
an admittedly more obscure comma operator construct, to avoid emitting code
for the concatenation if CONFIG_BUG is disabled.
Signed-off-by: Geert Uytterhoeven <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/sysfs/dir.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/fs/sysfs/dir.c b/fs/sysfs/dir.c
index e6bb9b2..5a035b5 100644
--- a/fs/sysfs/dir.c
+++ b/fs/sysfs/dir.c
@@ -478,20 +478,18 @@ int __sysfs_add_one(struct sysfs_addrm_cxt *acxt, struct sysfs_dirent *sd)
/**
* sysfs_pathname - return full path to sysfs dirent
* @sd: sysfs_dirent whose path we want
- * @path: caller allocated buffer
+ * @path: caller allocated buffer of size PATH_MAX
*
* Gives the name "/" to the sysfs_root entry; any path returned
* is relative to wherever sysfs is mounted.
- *
- * XXX: does no error checking on @path size
*/
static char *sysfs_pathname(struct sysfs_dirent *sd, char *path)
{
if (sd->s_parent) {
sysfs_pathname(sd->s_parent, path);
- strcat(path, "/");
+ strlcat(path, "/", PATH_MAX);
}
- strcat(path, sd->s_name);
+ strlcat(path, sd->s_name, PATH_MAX);
return path;
}
@@ -524,9 +522,11 @@ int sysfs_add_one(struct sysfs_addrm_cxt *acxt, struct sysfs_dirent *sd)
char *path = kzalloc(PATH_MAX, GFP_KERNEL);
WARN(1, KERN_WARNING
"sysfs: cannot create duplicate filename '%s'\n",
- (path == NULL) ? sd->s_name :
- strcat(strcat(sysfs_pathname(acxt->parent_sd, path), "/"),
- sd->s_name));
+ (path == NULL) ? sd->s_name
+ : (sysfs_pathname(acxt->parent_sd, path),
+ strlcat(path, "/", PATH_MAX),
+ strlcat(path, sd->s_name, PATH_MAX),
+ path));
kfree(path);
}
--
1.7.9.5
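For reference, a minimal sketch of the bounded concatenation being introduced; glibc has no strlcat(), so the demo defines a simplified local version (my_strlcat) purely for illustration.

/* Illustrative only: a bounded concatenation never writes past the
 * given buffer size, unlike strcat(). */
#include <stdio.h>
#include <string.h>

static size_t my_strlcat(char *dst, const char *src, size_t size)
{
	size_t dlen = strlen(dst);
	size_t slen = strlen(src);

	if (dlen >= size)
		return size + slen;	/* dst already full */
	/* copy at most size - dlen - 1 chars, always NUL-terminate */
	strncpy(dst + dlen, src, size - dlen - 1);
	dst[size - 1] = '\0';
	return dlen + slen;		/* length it tried to create */
}

int main(void)
{
	char path[16] = "/class";

	/* strcat(path, "/a-very-long-duplicate-name") would overflow;
	 * the bounded version truncates instead. */
	my_strlcat(path, "/", sizeof(path));
	my_strlcat(path, "a-very-long-duplicate-name", sizeof(path));
	printf("%s\n", path);	/* prints "/class/a-very-l" */
	return 0;
}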
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Yinghai Lu <[email protected]>
commit 6ede1fd3cb404c0016de6ac529df46d561bd558b upstream.
We will not map partial pages, so we need to make sure memblock
allocation does not hand those bytes out.
We also use for_each_mem_pfn_range() when looping over the memory
ranges to map, to keep the two consistent.
Signed-off-by: Yinghai Lu <[email protected]>
Link: http://lkml.kernel.org/r/CAE9FiQVZirvaBMFYRfXMmWEcHbKSicQEHz4VAwUv0xFCk51ZNw@mail.gmail.com
Acked-by: Jacob Shin <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/x86/kernel/e820.c | 3 +++
include/linux/memblock.h | 1 +
mm/memblock.c | 24 ++++++++++++++++++++++++
3 files changed, 28 insertions(+)
diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index 4185797..a42889d 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -1077,6 +1077,9 @@ void __init memblock_x86_fill(void)
memblock_add(ei->addr, ei->size);
}
+ /* throw away partial pages */
+ memblock_trim_memory(PAGE_SIZE);
+
memblock_dump_all();
}
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 19dc455..c948c44 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -57,6 +57,7 @@ int memblock_add(phys_addr_t base, phys_addr_t size);
int memblock_remove(phys_addr_t base, phys_addr_t size);
int memblock_free(phys_addr_t base, phys_addr_t size);
int memblock_reserve(phys_addr_t base, phys_addr_t size);
+void memblock_trim_memory(phys_addr_t align);
#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
diff --git a/mm/memblock.c b/mm/memblock.c
index 5cc6731..900a97d 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -928,6 +928,30 @@ int __init_memblock memblock_is_region_reserved(phys_addr_t base, phys_addr_t si
return memblock_overlaps_region(&memblock.reserved, base, size) >= 0;
}
+void __init_memblock memblock_trim_memory(phys_addr_t align)
+{
+ int i;
+ phys_addr_t start, end, orig_start, orig_end;
+ struct memblock_type *mem = &memblock.memory;
+
+ for (i = 0; i < mem->cnt; i++) {
+ orig_start = mem->regions[i].base;
+ orig_end = mem->regions[i].base + mem->regions[i].size;
+ start = round_up(orig_start, align);
+ end = round_down(orig_end, align);
+
+ if (start == orig_start && end == orig_end)
+ continue;
+
+ if (start < end) {
+ mem->regions[i].base = start;
+ mem->regions[i].size = end - start;
+ } else {
+ memblock_remove_region(mem, i);
+ i--;
+ }
+ }
+}
void __init_memblock memblock_set_current_limit(phys_addr_t limit)
{
--
1.7.9.5
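A small standalone sketch of the trimming arithmetic, using local round_up()/round_down() macros that mirror the kernel's power-of-two variants (illustration only, not the memblock code).

/* Illustrative only: trim a region to page alignment; if nothing page
 * aligned is left, the region would be dropped. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL
#define round_down(x, a)	((x) & ~((a) - 1))
#define round_up(x, a)		(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	uint64_t orig_start = 0x1000123;	/* not page aligned */
	uint64_t orig_end   = 0x2000fff;
	uint64_t start = round_up(orig_start, PAGE_SIZE);
	uint64_t end   = round_down(orig_end, PAGE_SIZE);

	if (start < end)
		printf("trimmed region: %#llx - %#llx\n",
		       (unsigned long long)start, (unsigned long long)end);
	else
		printf("region smaller than one page: dropped\n");
	return 0;
}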
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Larry Finger <[email protected]>
commit f89ff6441df06abc2d95f3ef67525923032d6283 upstream.
When b43 fails to find firmware at load time, a subsequent unload will
oops because ieee80211_unregister_hw() is called even though the
corresponding register call was never made.
Commit 2d838bb608e2d1f6cb4280e76748cb812dc822e7 fixed the same problem
for b43legacy.
Signed-off-by: Larry Finger <[email protected]>
Tested-by: Markus Kanet <[email protected]>
Cc: Markus Kanet <[email protected]>
Signed-off-by: John W. Linville <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/net/wireless/b43/main.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/wireless/b43/main.c b/drivers/net/wireless/b43/main.c
index b80352b..c04d7b6 100644
--- a/drivers/net/wireless/b43/main.c
+++ b/drivers/net/wireless/b43/main.c
@@ -5369,6 +5369,8 @@ static void b43_bcma_remove(struct bcma_device *core)
cancel_work_sync(&wldev->restart_work);
B43_WARN_ON(!wl);
+ if (!wldev->fw.ucode.data)
+ return; /* NULL if firmware never loaded */
if (wl->current_dev == wldev && wl->hw_registred) {
b43_leds_stop(wldev);
ieee80211_unregister_hw(wl->hw);
@@ -5443,6 +5445,8 @@ static void b43_ssb_remove(struct ssb_device *sdev)
cancel_work_sync(&wldev->restart_work);
B43_WARN_ON(!wl);
+ if (!wldev->fw.ucode.data)
+ return; /* NULL if firmware never loaded */
if (wl->current_dev == wldev && wl->hw_registred) {
b43_leds_stop(wldev);
ieee80211_unregister_hw(wl->hw);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Ivan Shugov <[email protected]>
commit 3d9a0183dd3423353e9e363bcc261c1220d05f9f upstream.
Newer at91sam9g10 SoC revisions can't be detected, so the kernel fails to boot
with this kind of kernel panic:
"AT91: Impossible to detect the SOC type"
CPU: ARM926EJ-S [41069265] revision 5 (ARMv5TEJ), cr=00053177
CPU: VIVT data cache, VIVT instruction cache
Machine: Atmel AT91SAM9G10-EK
Ignoring tag cmdline (using the default kernel command line)
bootconsole [earlycon0] enabled
Memory policy: ECC disabled, Data cache writeback
Kernel panic - not syncing: AT91: Impossible to detect the SOC type
[<c00133d4>] (unwind_backtrace+0x0/0xe0) from [<c02366dc>] (panic+0x78/0x1cc)
[<c02366dc>] (panic+0x78/0x1cc) from [<c02fa35c>] (at91_map_io+0x90/0xc8)
[<c02fa35c>] (at91_map_io+0x90/0xc8) from [<c02f9860>] (paging_init+0x564/0x6d0)
[<c02f9860>] (paging_init+0x564/0x6d0) from [<c02f7914>] (setup_arch+0x464/0x704)
[<c02f7914>] (setup_arch+0x464/0x704) from [<c02f44f8>] (start_kernel+0x6c/0x2d4)
[<c02f44f8>] (start_kernel+0x6c/0x2d4) from [<20008040>] (0x20008040)
The reason for this is that the Debug Unit Chip ID Register has changed between
Engineering Sample and definitive revision of the SoC. Changing the check of
cidr to socid will address the problem. We do not integrate this check to the
list just above because we also have to make sure that the extended id is
disregarded.
Signed-off-by: Ivan Shugov <[email protected]>
[[email protected]: change commit message]
Signed-off-by: Nicolas Ferre <[email protected]>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/arm/mach-at91/setup.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/mach-at91/setup.c b/arch/arm/mach-at91/setup.c
index 944bffb..31bb13d 100644
--- a/arch/arm/mach-at91/setup.c
+++ b/arch/arm/mach-at91/setup.c
@@ -151,7 +151,7 @@ static void __init soc_detect(u32 dbgu_base)
}
/* at91sam9g10 */
- if ((cidr & ~AT91_CIDR_EXT) == ARCH_ID_AT91SAM9G10) {
+ if ((socid & ~AT91_CIDR_EXT) == ARCH_ID_AT91SAM9G10) {
at91_soc_initdata.type = AT91_SOC_SAM9G10;
at91_boot_soc = at91sam9261_soc;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust <[email protected]>
commit f878b657ce8e7d3673afe48110ec208a29e38c4a upstream.
Chris Perl reports that we're seeing races between the wakeup call in
xs_error_report and the connect attempts. Basically, Chris has shown
that in certain circumstances, the call to xs_error_report causes the
rpc_task that is responsible for reconnecting to wake up early, thus
triggering a disconnect and retry.
Since the sk->sk_error_report() calls in the socket layer are always
followed by a tcp_done() in the cases where we care about waking up
the rpc_tasks, just let the state_change callbacks take responsibility
for those wake ups.
Reported-by: Chris Perl <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Tested-by: Chris Perl <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/sunrpc/xprtsock.c | 25 -------------------------
1 file changed, 25 deletions(-)
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 67c46ec..420abf8 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -254,7 +254,6 @@ struct sock_xprt {
void (*old_data_ready)(struct sock *, int);
void (*old_state_change)(struct sock *);
void (*old_write_space)(struct sock *);
- void (*old_error_report)(struct sock *);
};
/*
@@ -781,7 +780,6 @@ static void xs_save_old_callbacks(struct sock_xprt *transport, struct sock *sk)
transport->old_data_ready = sk->sk_data_ready;
transport->old_state_change = sk->sk_state_change;
transport->old_write_space = sk->sk_write_space;
- transport->old_error_report = sk->sk_error_report;
}
static void xs_restore_old_callbacks(struct sock_xprt *transport, struct sock *sk)
@@ -789,7 +787,6 @@ static void xs_restore_old_callbacks(struct sock_xprt *transport, struct sock *s
sk->sk_data_ready = transport->old_data_ready;
sk->sk_state_change = transport->old_state_change;
sk->sk_write_space = transport->old_write_space;
- sk->sk_error_report = transport->old_error_report;
}
static void xs_reset_transport(struct sock_xprt *transport)
@@ -1558,25 +1555,6 @@ static void xs_tcp_state_change(struct sock *sk)
read_unlock_bh(&sk->sk_callback_lock);
}
-/**
- * xs_error_report - callback mainly for catching socket errors
- * @sk: socket
- */
-static void xs_error_report(struct sock *sk)
-{
- struct rpc_xprt *xprt;
-
- read_lock_bh(&sk->sk_callback_lock);
- if (!(xprt = xprt_from_sock(sk)))
- goto out;
- dprintk("RPC: %s client %p...\n"
- "RPC: error %d\n",
- __func__, xprt, sk->sk_err);
- xprt_wake_pending_tasks(xprt, -EAGAIN);
-out:
- read_unlock_bh(&sk->sk_callback_lock);
-}
-
static void xs_write_space(struct sock *sk)
{
struct socket *sock;
@@ -1876,7 +1854,6 @@ static int xs_local_finish_connecting(struct rpc_xprt *xprt,
sk->sk_user_data = xprt;
sk->sk_data_ready = xs_local_data_ready;
sk->sk_write_space = xs_udp_write_space;
- sk->sk_error_report = xs_error_report;
sk->sk_allocation = GFP_ATOMIC;
xprt_clear_connected(xprt);
@@ -1965,7 +1942,6 @@ static void xs_udp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
sk->sk_user_data = xprt;
sk->sk_data_ready = xs_udp_data_ready;
sk->sk_write_space = xs_udp_write_space;
- sk->sk_error_report = xs_error_report;
sk->sk_no_check = UDP_CSUM_NORCV;
sk->sk_allocation = GFP_ATOMIC;
@@ -2079,7 +2055,6 @@ static int xs_tcp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
sk->sk_data_ready = xs_tcp_data_ready;
sk->sk_state_change = xs_tcp_state_change;
sk->sk_write_space = xs_tcp_write_space;
- sk->sk_error_report = xs_error_report;
sk->sk_allocation = GFP_ATOMIC;
/* socket options */
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Barry Song <[email protected]>
commit 26fd12209c08fe947be1828896ef4ffc5bd0e6df upstream.
list_move_tail(&schan->queued, &schan->active) makes list_empty(schan->queued)
undefined; we should change it either to:
list_move_tail(schan->queued.next, &schan->active)
or
list_move_tail(&sdesc->node, &schan->active)
Signed-off-by: Barry Song <[email protected]>
Signed-off-by: Vinod Koul <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/dma/sirf-dma.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/dma/sirf-dma.c b/drivers/dma/sirf-dma.c
index 35a329d..c439489 100644
--- a/drivers/dma/sirf-dma.c
+++ b/drivers/dma/sirf-dma.c
@@ -109,7 +109,7 @@ static void sirfsoc_dma_execute(struct sirfsoc_dma_chan *schan)
sdesc = list_first_entry(&schan->queued, struct sirfsoc_dma_desc,
node);
/* Move the first queued descriptor to active list */
- list_move_tail(&schan->queued, &schan->active);
+ list_move_tail(&sdesc->node, &schan->active);
/* Start the DMA transfer */
writel_relaxed(sdesc->width, sdma->base + SIRFSOC_DMA_WIDTH_0 +
--
1.7.9.5
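To make the distinction concrete, here is a miniature, self-contained list_head sketch showing the correct call, with the entry's node passed to list_move_tail(); passing the list head itself, as the old code did, splices the head out of its own ring and orphans the descriptor. This is an illustration only, not the kernel's list.h.

/* Illustrative only: minimal circular list, same semantics as list.h. */
#include <stdio.h>
#include <stdbool.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void __list_del(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static void list_move_tail(struct list_head *e, struct list_head *h)
{
	__list_del(e);
	list_add_tail(e, h);
}

static bool list_empty(const struct list_head *h) { return h->next == h; }

int main(void)
{
	struct list_head queued, active, desc;

	INIT_LIST_HEAD(&queued);
	INIT_LIST_HEAD(&active);
	list_add_tail(&desc, &queued);

	/* correct: move the descriptor's own node, not &queued */
	list_move_tail(&desc, &active);
	printf("queued empty: %d, active empty: %d\n",
	       list_empty(&queued), list_empty(&active));	/* 1, 0 */
	return 0;
}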
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust <[email protected]>
commit b9d2bb2ee537424a7f855e1f93eed44eb9ee0854 upstream.
This reverts commit 55420c24a0d4d1fce70ca713f84aa00b6b74a70e.
Now that we clear the connected flag when entering TCP_CLOSE_WAIT,
the deadlock described in this commit is no longer possible.
Instead, the resulting call to xs_tcp_shutdown() can interfere
with pending reconnection attempts.
Reported-by: Chris Perl <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Tested-by: Chris Perl <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/sunrpc/xprtsock.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 683f990b..a3f1990 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -737,10 +737,10 @@ static int xs_tcp_send_request(struct rpc_task *task)
dprintk("RPC: sendmsg returned unrecognized error %d\n",
-status);
case -ECONNRESET:
- case -EPIPE:
xs_tcp_shutdown(xprt);
case -ECONNREFUSED:
case -ENOTCONN:
+ case -EPIPE:
clear_bit(SOCK_ASYNC_NOSPACE, &transport->sock->flags);
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust <[email protected]>
commit d0bea455dd48da1ecbd04fedf00eb89437455fdc upstream.
This is needed to ensure that we call xprt_connect() upon the next
call to call_connect().
Signed-off-by: Trond Myklebust <[email protected]>
Tested-by: Chris Perl <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/sunrpc/xprtsock.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 00ff343..683f990b 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1528,6 +1528,7 @@ static void xs_tcp_state_change(struct sock *sk)
case TCP_CLOSE_WAIT:
/* The server initiated a shutdown of the socket */
xprt->connect_cookie++;
+ clear_bit(XPRT_CONNECTED, &xprt->state);
xs_tcp_force_close(xprt);
case TCP_CLOSING:
/*
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Jan Beulich <[email protected]>
commit 876ee61aadf01aa0db981b5d249cbdd53dc28b5e upstream.
Commit 20167d3421a089a1bf1bd680b150dc69c9506810 ("x86-64: Fix
accounting in kernel_physical_mapping_init()") went a little too
far by entirely removing the counting of pre-populated page
tables: this should be done at boot time (to cover the page
tables set up in early boot code), but shouldn't be done during
memory hot add.
Hence, re-add the removed increments of "pages", but make them
and the one in phys_pte_init() conditional upon !after_bootmem.
Reported-Acked-and-Tested-by: Hugh Dickins <[email protected]>
Signed-off-by: Jan Beulich <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/x86/mm/init_64.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 2b6b4a3..3baff25 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -386,7 +386,8 @@ phys_pte_init(pte_t *pte_page, unsigned long addr, unsigned long end,
* these mappings are more intelligent.
*/
if (pte_val(*pte)) {
- pages++;
+ if (!after_bootmem)
+ pages++;
continue;
}
@@ -451,6 +452,8 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long address, unsigned long end,
* attributes.
*/
if (page_size_mask & (1 << PG_LEVEL_2M)) {
+ if (!after_bootmem)
+ pages++;
last_map_addr = next;
continue;
}
@@ -526,6 +529,8 @@ phys_pud_init(pud_t *pud_page, unsigned long addr, unsigned long end,
* attributes.
*/
if (page_size_mask & (1 << PG_LEVEL_1G)) {
+ if (!after_bootmem)
+ pages++;
last_map_addr = next;
continue;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Oliver Neukum <[email protected]>
commit 16b45fdf9c4e82f5d3bc53aa70737650e7c8d5ed upstream.
xhci_service_interval_to_ns() returns long long
to avoid an overflow. However, the type cast happens
too late. The fix is to force ULL from the beginning.
This patch should be backported to kernels as old as 3.5, that contain
the commit e3567d2c15a7a8e2f992a5f7c7683453ca406d82 "xhci: Add Intel
U1/U2 timeout policy."
Signed-off-by: Oliver Neukum <[email protected]>
Signed-off-by: Sarah Sharp <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/host/xhci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 1f9b8a4..5f831c2 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -4020,7 +4020,7 @@ int xhci_update_device(struct usb_hcd *hcd, struct usb_device *udev)
static unsigned long long xhci_service_interval_to_ns(
struct usb_endpoint_descriptor *desc)
{
- return (1 << (desc->bInterval - 1)) * 125 * 1000;
+ return (1ULL << (desc->bInterval - 1)) * 125 * 1000;
}
static u16 xhci_get_timeout_no_hub_lpm(struct usb_device *udev,
--
1.7.9.5
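A short standalone illustration of why the widening has to happen before the multiply; bInterval = 16 is the largest value the USB descriptor field allows and is used here only as an example.

/* Illustrative only: with bInterval = 16 the product is about 4.1e9,
 * which does not fit in a 32-bit int, so the old expression
 * (1 << (bInterval - 1)) * 125 * 1000 overflowed before the result was
 * widened to long long. Forcing 1ULL keeps the whole computation 64-bit. */
#include <limits.h>
#include <stdio.h>

int main(void)
{
	int bInterval = 16;
	unsigned long long ns = (1ULL << (bInterval - 1)) * 125 * 1000;

	printf("interval = %llu ns\n", ns);	/* 4096000000 */
	printf("fits in a 32-bit int? %s\n",
	       ns <= INT_MAX ? "yes" : "no (old int expression overflowed)");
	return 0;
}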
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Huewe <[email protected]>
commit 0dc77b6dabec8fd298392018cc0de5214af2dc43 upstream.
If you compile extcon with CONFIG_ANDROID and then load and unload the
module, you get a simple oops, as the driver does not unregister its
compat class and thus cannot register it again.
Full trace:
root@(none):~# modprobe extcon_class
root@(none):~# rmmod extcon_class
root@(none):~# modprobe extcon_class
------------[ cut here ]------------
WARNING: at fs/sysfs/dir.c:536 sysfs_add_one+0xde/0x100()
sysfs: cannot create duplicate filename '/class/switch'
Modules linked in: extcon_class(+) [last unloaded: extcon_class]
Call Trace:
9f451a00: [<602a58bc>] printk+0x0/0xa8
9f451a18: [<60039b43>] warn_slowpath_common+0x93/0xd0
9f451a28: [<6012c6de>] sysfs_add_one+0xde/0x100
9f451a50: [<601d3d90>] strcat+0x0/0x40
9f451a68: [<60039cdc>] warn_slowpath_fmt+0x9c/0xa0
9f451a90: [<6002fe32>] unblock_signals+0x0/0x84
9f451ab0: [<60039c40>] warn_slowpath_fmt+0x0/0xa0
9f451ac0: [<6002fe32>] unblock_signals+0x0/0x84
9f451ae8: [<6012bd97>] sysfs_pathname.isra.10+0x57/0x70
9f451b00: [<601d3d90>] strcat+0x0/0x40
9f451b18: [<6012bd97>] sysfs_pathname.isra.10+0x57/0x70
9f451b48: [<6012c6de>] sysfs_add_one+0xde/0x100
9f451b78: [<6012c96f>] create_dir+0x8f/0x100
9f451bc0: [<a0861000>] extcon_class_init+0x0/0x12 [extcon_class]
9f451bd8: [<6012cda6>] sysfs_create_dir+0xa6/0x1c0
9f451be8: [<601d89f1>] kvasprintf+0x81/0xa0
9f451bf8: [<601cf0f0>] kobject_get+0x0/0x50
9f451c18: [<601cf396>] kobject_add_internal+0x96/0x280
9f451c60: [<a0861000>] extcon_class_init+0x0/0x12 [extcon_class]
9f451c78: [<601cfb93>] kobject_add+0xd3/0x140
9f451cc0: [<601cfac0>] kobject_add+0x0/0x140
9f451cd0: [<6002fe32>] unblock_signals+0x0/0x84
9f451cf8: [<6002fffc>] set_signals+0x29/0x3f
9f451d28: [<600c1de1>] kmem_cache_alloc+0xe1/0x100
9f451d78: [<601cffa0>] kobject_create_and_add+0x50/0xa0
9f451da8: [<601fbe76>] class_compat_register+0x56/0x80
9f451dc8: [<a085d118>] create_extcon_class+0x88/0xd0 [extcon_class]
9f451de8: [<a0861010>] extcon_class_init+0x10/0x12 [extcon_class]
9f451df8: [<600189a8>] do_one_initcall+0x48/0x1f0
9f451e20: [<60061920>] blocking_notifier_call_chain+0x0/0x20
9f451e30: [<60061920>] blocking_notifier_call_chain+0x0/0x20
9f451e58: [<6007e3c3>] sys_init_module+0xa3/0x280
9f451e88: [<6001e2ad>] handle_syscall+0x8d/0x90
9f451ea8: [<60033370>] userspace+0x405/0x531
9f451ee8: [<6001e380>] copy_chunk_to_user+0x0/0x40
9f451ef8: [<6001e5cd>] do_op_one_page+0x14d/0x220
9f451fd8: [<6001a355>] fork_handler+0x95/0xa0
---[ end trace dd512cc03fe1c367 ]---
------------[ cut here ]------------
WARNING: at lib/kobject.c:196 kobject_add_internal+0x26e/0x280()
kobject_add_internal failed for switch with -EEXIST, don't try to
register things with the same name in the same directory.
Modules linked in: extcon_class(+) [last unloaded: extcon_class]
Call Trace:
9f451ad0: [<602a58bc>] printk+0x0/0xa8
9f451ae8: [<60039b43>] warn_slowpath_common+0x93/0xd0
9f451af8: [<601cf56e>] kobject_add_internal+0x26e/0x280
9f451b18: [<601cf140>] kobject_put+0x0/0x70
9f451b20: [<a0861000>] extcon_class_init+0x0/0x12 [extcon_class]
9f451b38: [<60039cdc>] warn_slowpath_fmt+0x9c/0xa0
9f451b88: [<60039c40>] warn_slowpath_fmt+0x0/0xa0
9f451bc0: [<a0861000>] extcon_class_init+0x0/0x12 [extcon_class]
9f451bd8: [<6012cda6>] sysfs_create_dir+0xa6/0x1c0
9f451be8: [<601d89f1>] kvasprintf+0x81/0xa0
9f451bf8: [<601cf0f0>] kobject_get+0x0/0x50
9f451c18: [<601cf56e>] kobject_add_internal+0x26e/0x280
9f451c60: [<a0861000>] extcon_class_init+0x0/0x12 [extcon_class]
9f451c78: [<601cfb93>] kobject_add+0xd3/0x140
9f451cc0: [<601cfac0>] kobject_add+0x0/0x140
9f451cd0: [<6002fe32>] unblock_signals+0x0/0x84
9f451cf8: [<6002fffc>] set_signals+0x29/0x3f
9f451d28: [<600c1de1>] kmem_cache_alloc+0xe1/0x100
9f451d78: [<601cffa0>] kobject_create_and_add+0x50/0xa0
9f451da8: [<601fbe76>] class_compat_register+0x56/0x80
9f451dc8: [<a085d118>] create_extcon_class+0x88/0xd0 [extcon_class]
9f451de8: [<a0861010>] extcon_class_init+0x10/0x12 [extcon_class]
9f451df8: [<600189a8>] do_one_initcall+0x48/0x1f0
9f451e20: [<60061920>] blocking_notifier_call_chain+0x0/0x20
9f451e30: [<60061920>] blocking_notifier_call_chain+0x0/0x20
9f451e58: [<6007e3c3>] sys_init_module+0xa3/0x280
9f451e88: [<6001e2ad>] handle_syscall+0x8d/0x90
9f451ea8: [<60033370>] userspace+0x405/0x531
9f451ee8: [<6001e380>] copy_chunk_to_user+0x0/0x40
9f451ef8: [<6001e5cd>] do_op_one_page+0x14d/0x220
9f451fd8: [<6001a355>] fork_handler+0x95/0xa0
---[ end trace dd512cc03fe1c368 ]---
kobject_create_and_add: kobject_add error: -17
------------[ cut here ]------------
WARNING: at drivers/extcon/extcon_class.c:545
create_extcon_class+0xbc/0xd0 [extcon_class]()
cannot allocate
Modules linked in: extcon_class(+) [last unloaded: extcon_class]
Call Trace:
9f451c80: [<602a58bc>] printk+0x0/0xa8
9f451c98: [<60039b43>] warn_slowpath_common+0x93/0xd0
9f451ca0: [<6002fe32>] unblock_signals+0x0/0x84
9f451ca8: [<a085d14c>] create_extcon_class+0xbc/0xd0 [extcon_class]
9f451cd0: [<a0861000>] extcon_class_init+0x0/0x12 [extcon_class]
9f451ce8: [<60039cdc>] warn_slowpath_fmt+0x9c/0xa0
9f451d20: [<6002fe32>] unblock_signals+0x0/0x84
9f451d28: [<60039c40>] warn_slowpath_fmt+0x0/0xa0
9f451d48: [<6002fffc>] set_signals+0x29/0x3f
9f451d58: [<601cf172>] kobject_put+0x32/0x70
9f451d78: [<600c22c3>] kfree+0xb3/0x100
9f451da8: [<601fbe9a>] class_compat_register+0x7a/0x80
9f451dc8: [<a085d14c>] create_extcon_class+0xbc/0xd0 [extcon_class]
9f451de8: [<a0861010>] extcon_class_init+0x10/0x12 [extcon_class]
9f451df8: [<600189a8>] do_one_initcall+0x48/0x1f0
9f451e20: [<60061920>] blocking_notifier_call_chain+0x0/0x20
9f451e30: [<60061920>] blocking_notifier_call_chain+0x0/0x20
9f451e58: [<6007e3c3>] sys_init_module+0xa3/0x280
9f451e88: [<6001e2ad>] handle_syscall+0x8d/0x90
9f451ea8: [<60033370>] userspace+0x405/0x531
9f451ee8: [<6001e380>] copy_chunk_to_user+0x0/0x40
9f451ef8: [<6001e5cd>] do_op_one_page+0x14d/0x220
9f451fd8: [<6001a355>] fork_handler+0x95/0xa0
---[ end trace dd512cc03fe1c369 ]---
FATAL: Error inserting extcon_class
(/lib/modules/3.6.0-rc6-00178-g811315f/kernel/drivers/extcon/extcon_class.ko):
Cannot allocate memory
This patch fixes this.
Signed-off-by: Peter Huewe <[email protected]>
Signed-off-by: Chanwoo Choi <[email protected]>
[ herton: file name is extcon_class.c on 3.5 ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/extcon/extcon_class.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/extcon/extcon_class.c b/drivers/extcon/extcon_class.c
index 159aeb0..01bac36 100644
--- a/drivers/extcon/extcon_class.c
+++ b/drivers/extcon/extcon_class.c
@@ -821,6 +821,9 @@ module_init(extcon_class_init);
static void __exit extcon_class_exit(void)
{
+#if defined(CONFIG_ANDROID)
+ class_compat_unregister(switch_class);
+#endif
class_destroy(extcon_class);
}
module_exit(extcon_class_exit);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Deucher <[email protected]>
commit b6aa22db7857ab7ed042d6c56b800bfc727cfdff upstream.
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/drm/drm_pciids.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/include/drm/drm_pciids.h b/include/drm/drm_pciids.h
index bae1d11..fc0f7cc 100644
--- a/include/drm/drm_pciids.h
+++ b/include/drm/drm_pciids.h
@@ -205,6 +205,8 @@
{0x1002, 0x6788, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI|RADEON_NEW_MEMMAP}, \
{0x1002, 0x678A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6790, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x6791, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x6792, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6798, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6799, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI|RADEON_NEW_MEMMAP}, \
{0x1002, 0x679A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI|RADEON_NEW_MEMMAP}, \
@@ -217,6 +219,7 @@
{0x1002, 0x6808, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PITCAIRN|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6809, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PITCAIRN|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6810, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PITCAIRN|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x6811, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PITCAIRN|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6816, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PITCAIRN|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6817, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PITCAIRN|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6818, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PITCAIRN|RADEON_NEW_MEMMAP}, \
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Wei Yongjun <[email protected]>
commit b4dd784ba8af03bf1f9ee5118c792d7abd4919bd upstream.
Add the missing unlock on the error handle path in function
pinctrl_groups_show().
Signed-off-by: Wei Yongjun <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/pinctrl/core.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
index 0cc053a..5e59c45 100644
--- a/drivers/pinctrl/core.c
+++ b/drivers/pinctrl/core.c
@@ -1069,8 +1069,10 @@ static int pinctrl_groups_show(struct seq_file *s, void *what)
seq_printf(s, "group: %s\n", gname);
for (i = 0; i < num_pins; i++) {
pname = pin_get_name(pctldev, pins[i]);
- if (WARN_ON(!pname))
+ if (WARN_ON(!pname)) {
+ mutex_unlock(&pinctrl_mutex);
return -EINVAL;
+ }
seq_printf(s, "pin %d (%s)\n", pins[i], pname);
}
seq_puts(s, "\n");
--
1.7.9.5
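A trivial standalone illustration of the rule the patch enforces: every early return from a locked section must drop the lock first. Plain pthreads stand in for pinctrl_mutex here; build with -pthread. Illustration only, not pinctrl code.

/* Illustrative only: the unlock on the error path is what the patch adds. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static int show_groups(int npins)
{
	pthread_mutex_lock(&lock);

	for (int i = 0; i < npins; i++) {
		if (i == 2) {				/* simulated lookup failure */
			pthread_mutex_unlock(&lock);	/* the "missing" unlock */
			return -1;
		}
		printf("pin %d\n", i);
	}

	pthread_mutex_unlock(&lock);
	return 0;
}

int main(void)
{
	printf("first call:  %d\n", show_groups(5));
	/* Without the unlock on the error path, this call would deadlock. */
	printf("second call: %d\n", show_groups(2));
	return 0;
}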
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Huewe <[email protected]>
commit 824a1bc045cef278aec15bef35d8d0b59ce77856 upstream.
Since extcon registers this compat link at device registration
(extcon_dev_register), we should probably remove it at deregistration/cleanup.
Signed-off-by: Peter Huewe <[email protected]>
Signed-off-by: Chanwoo Choi <[email protected]>
[ herton: file name is extcon_class.c on 3.5 ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/extcon/extcon_class.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/extcon/extcon_class.c b/drivers/extcon/extcon_class.c
index 01bac36..0a710b0 100644
--- a/drivers/extcon/extcon_class.c
+++ b/drivers/extcon/extcon_class.c
@@ -575,6 +575,10 @@ static void extcon_cleanup(struct extcon_dev *edev, bool skip)
kfree(edev->cables);
}
+#if defined(CONFIG_ANDROID)
+ if (switch_class)
+ class_compat_remove_link(switch_class, edev->dev, NULL);
+#endif
device_unregister(edev->dev);
put_device(edev->dev);
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Andreas Herrmann <[email protected]>
commit e4df1cbcc1f329e53a1fff7450b2229e0addff20 upstream.
Commit 6889125b8b4e09c5e53e6ecab3433bed1ce198c9
(cpufreq/powernow-k8: workqueue user shouldn't migrate the kworker to another CPU)
causes powernow-k8 to trigger a preempt warning, e.g.:
BUG: using smp_processor_id() in preemptible [00000000] code: cpufreq/3776
caller is powernowk8_target+0x20/0x49
Pid: 3776, comm: cpufreq Not tainted 3.6.0 #9
Call Trace:
[<ffffffff8125b447>] debug_smp_processor_id+0xc7/0xe0
[<ffffffff814877e7>] powernowk8_target+0x20/0x49
[<ffffffff81482b02>] __cpufreq_driver_target+0x82/0x8a
[<ffffffff81484fc6>] cpufreq_governor_performance+0x4e/0x54
[<ffffffff81482c50>] __cpufreq_governor+0x8c/0xc9
[<ffffffff81482e6f>] __cpufreq_set_policy+0x1a9/0x21e
[<ffffffff814839af>] store_scaling_governor+0x16f/0x19b
[<ffffffff81484f16>] ? cpufreq_update_policy+0x124/0x124
[<ffffffff8162b4a5>] ? _raw_spin_unlock_irqrestore+0x2c/0x49
[<ffffffff81483640>] store+0x60/0x88
[<ffffffff811708c0>] sysfs_write_file+0xf4/0x130
[<ffffffff8111243b>] vfs_write+0xb5/0x151
[<ffffffff811126e0>] sys_write+0x4a/0x71
[<ffffffff816319a9>] system_call_fastpath+0x16/0x1b
Fix this by always using work_on_cpu().
Signed-off-by: Andreas Herrmann <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/cpufreq/powernow-k8.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/drivers/cpufreq/powernow-k8.c b/drivers/cpufreq/powernow-k8.c
index 1a40935..c671369 100644
--- a/drivers/cpufreq/powernow-k8.c
+++ b/drivers/cpufreq/powernow-k8.c
@@ -1223,14 +1223,7 @@ static int powernowk8_target(struct cpufreq_policy *pol,
struct powernowk8_target_arg pta = { .pol = pol, .targfreq = targfreq,
.relation = relation };
- /*
- * Must run on @pol->cpu. cpufreq core is responsible for ensuring
- * that we're bound to the current CPU and pol->cpu stays online.
- */
- if (smp_processor_id() == pol->cpu)
- return powernowk8_target_fn(&pta);
- else
- return work_on_cpu(pol->cpu, powernowk8_target_fn, &pta);
+ return work_on_cpu(pol->cpu, powernowk8_target_fn, &pta);
}
/* Driver entry point to verify the policy and range of frequencies */
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Arve Hjønnevåg <[email protected]>
commit 585650dcec88e704a19bb226a34b6a7166111623 upstream.
The default kernel mapping for the pages allocated for the binder
buffers is never used. Set the __GFP_HIGHMEM flag when allocating
these pages so we don't needlessly use low memory pages that may
be required elsewhere.
Signed-off-by: Arve Hjønnevåg <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/staging/android/binder.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/android/binder.c b/drivers/staging/android/binder.c
index 80acd0e..223639a 100644
--- a/drivers/staging/android/binder.c
+++ b/drivers/staging/android/binder.c
@@ -655,7 +655,7 @@ static int binder_update_page_range(struct binder_proc *proc, int allocate,
page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];
BUG_ON(*page);
- *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ *page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
if (*page == NULL) {
printk(KERN_ERR "binder: %d: binder_alloc_buf failed "
"for page at %p\n", proc->pid, page_addr);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Ian Abbott <[email protected]>
commit aaeb61a97b7159ebe30b18a422d04eeabfa8790b upstream.
`pc236_detach()` is called by the comedi core if it attempted to attach
a device and failed. `pc236_detach()` calls `pc236_intr_disable()` if
the comedi device private data pointer (`devpriv`) is non-null. This
test is insufficient as `pc236_intr_disable()` accesses hardware
registers and the attach routine may have failed before it has saved
their I/O base addresses.
Fix it by checking `dev->iobase` is non-zero before calling
`pc236_intr_disable()` as that means the I/O base addresses have been
saved and the hardware registers can be accessed. It also implies the
comedi device private data pointer is valid, so there is no need to
check it.
Signed-off-by: Ian Abbott <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
[ herton: devpriv is a macro on 3.5, just check dev->iobase ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/staging/comedi/drivers/amplc_pc236.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/comedi/drivers/amplc_pc236.c b/drivers/staging/comedi/drivers/amplc_pc236.c
index 57ba322..1fe5ac9 100644
--- a/drivers/staging/comedi/drivers/amplc_pc236.c
+++ b/drivers/staging/comedi/drivers/amplc_pc236.c
@@ -479,7 +479,7 @@ static int pc236_attach(struct comedi_device *dev, struct comedi_devconfig *it)
static void pc236_detach(struct comedi_device *dev)
{
- if (devpriv)
+ if (dev->iobase)
pc236_intr_disable(dev);
if (dev->irq)
free_irq(dev->irq, dev);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Octavian Purdila <[email protected]>
commit 3b6054da68f9b0d5ed6a7ed0f42a79e61904352c upstream.
There is a race condition in the USB hub code, in the handling of TT clear
requests, that can leave the HCD driver in a deadlock. Usually, when a TT
clear request is scheduled, it is executed immediately:
<7>[ 6.077583] usb 2-1.3: unlink qh1-0e01/f4d4db00 start 0 [1/2 us]
<3>[ 6.078041] usb 2-1: clear tt buffer port 3, a3 ep2 t04048d82
<7>[ 6.078299] hub_tt_work:731
<7>[ 9.309089] usb 2-1.5: link qh1-0e01/f4d506c0 start 0 [1/2 us]
<7>[ 9.324526] ehci_hcd 0000:00:1d.0: reused qh f4d4db00 schedule
<7>[ 9.324539] usb 2-1.3: link qh1-0e01/f4d4db00 start 0 [1/2 us]
<7>[ 9.341530] usb 1-1.1: link qh4-0e01/f397aec0 start 2 [1/2 us]
<7>[ 10.116159] usb 2-1.3: unlink qh1-0e01/f4d4db00 start 0 [1/2 us]
<3>[ 10.116459] usb 2-1: clear tt buffer port 3, a3 ep2 t04048d82
<7>[ 10.116537] hub_tt_work:731
However, if a suspend operation is triggered before hub_tt_work is
scheduled, hub_quiesce will cancel the work without notifying the HCD
driver:
<3>[ 35.033941] usb 2-1: clear tt buffer port 3, a3 ep2 t04048d80
<5>[ 35.034022] sd 0:0:0:0: [sda] Stopping disk
<7>[ 35.034039] hub 2-1:1.0: hub_suspend
<7>[ 35.034067] usb 2-1: unlink qh256-0001/f3b1ab00 start 1 [1/0 us]
<7>[ 35.035085] hub 1-0:1.0: hub_suspend
<7>[ 35.035102] usb usb1: bus suspend, wakeup 0
<7>[ 35.035106] ehci_hcd 0000:00:1a.0: suspend root hub
<7>[ 35.035298] hub 2-0:1.0: hub_suspend
<7>[ 35.035313] usb usb2: bus suspend, wakeup 0
<7>[ 35.035315] ehci_hcd 0000:00:1d.0: suspend root hub
<6>[ 35.250017] PM: suspend of devices complete after 216.979 msecs
<6>[ 35.250822] PM: late suspend of devices complete after 0.799 msecs
<7>[ 35.252343] ehci_hcd 0000:00:1d.0: wakeup: 1
<7>[ 35.262923] ehci_hcd 0000:00:1d.0: --> PCI D3hot
<7>[ 35.263302] ehci_hcd 0000:00:1a.0: wakeup: 1
<7>[ 35.273912] ehci_hcd 0000:00:1a.0: --> PCI D3hot
<6>[ 35.274254] PM: noirq suspend of devices complete after 23.442 msecs
<6>[ 35.274975] ACPI: Preparing to enter system sleep state S3
<6>[ 35.292666] PM: Saving platform NVS memory
<7>[ 35.295030] Disabling non-boot CPUs ...
<6>[ 35.297351] CPU 1 is now offline
<6>[ 35.300345] CPU 2 is now offline
<6>[ 35.303929] CPU 3 is now offline
<7>[ 35.303931] lockdep: fixing up alternatives.
<6>[ 35.304825] Extended CMOS year: 2000
When the device resumes, the EHCI driver gets stuck in
ehci_endpoint_disable() waiting for the tt_clearing flag to be reset:
<0>[ 47.610967] usb 2-1.3: **** DPM device timeout ****
<7>[ 47.610972] f2f11c60 00000092 f2f11c0c c10624a5 00000003 f4c6e880 c1c8a4c0 c1c8a4c0
<7>[ 47.610983] 15c55698 0000000b f56b34c0 f2a45b70 f4c6e880 00000082 f2a4602c f2f11c30
<7>[ 47.610993] c10787f8 f4cac000 f2a45b70 00000000 f4cac010 f2f11c58 00000046 00000001
<7>[ 47.611004] Call Trace:
<7>[ 47.611006] [<c10624a5>] ? sched_clock_cpu+0xf5/0x160
<7>[ 47.611019] [<c10787f8>] ? lock_release_holdtime.part.22+0x88/0xf0
<7>[ 47.611026] [<c103ed46>] ? lock_timer_base.isra.35+0x26/0x50
<7>[ 47.611034] [<c17592d3>] ? schedule_timeout+0x133/0x290
<7>[ 47.611044] [<c175b43e>] schedule+0x1e/0x50
<7>[ 47.611051] [<c17592d8>] schedule_timeout+0x138/0x290
<7>[ 47.611057] [<c10624a5>] ? sched_clock_cpu+0xf5/0x160
<7>[ 47.611063] [<c103e560>] ? usleep_range+0x40/0x40
<7>[ 47.611070] [<c1759445>] schedule_timeout_uninterruptible+0x15/0x20
<7>[ 47.611077] [<c14935f4>] ehci_endpoint_disable+0x64/0x160
<7>[ 47.611084] [<c147d1ee>] ? usb_hcd_flush_endpoint+0x10e/0x1d0
<7>[ 47.611092] [<c1165663>] ? sysfs_add_file+0x13/0x20
<7>[ 47.611100] [<c147d5a9>] usb_hcd_disable_endpoint+0x29/0x40
<7>[ 47.611107] [<c147fafc>] usb_disable_endpoint+0x5c/0x80
<7>[ 47.611111] [<c147fb57>] usb_disable_interface+0x37/0x50
<7>[ 47.611116] [<c1477650>] usb_reset_and_verify_device+0x4b0/0x640
<7>[ 47.611122] [<c1474665>] ? hub_port_status+0xb5/0x100
<7>[ 47.611129] [<c147a975>] usb_port_resume+0xd5/0x220
<7>[ 47.611136] [<c148877f>] generic_resume+0xf/0x30
<7>[ 47.611142] [<c14821a3>] usb_resume+0x133/0x180
<7>[ 47.611147] [<c1473b10>] ? usb_dev_thaw+0x10/0x10
<7>[ 47.611152] [<c1473b1d>] usb_dev_resume+0xd/0x10
<7>[ 47.611157] [<c13baa60>] dpm_run_callback+0x40/0xb0
<7>[ 47.611164] [<c13bdb03>] ? pm_runtime_enable+0x43/0x70
<7>[ 47.611171] [<c13bafc6>] device_resume+0x1a6/0x2c0
<7>[ 47.611177] [<c13ba940>] ? dpm_show_time+0xe0/0xe0
<7>[ 47.611183] [<c13bb0f9>] async_resume+0x19/0x40
<7>[ 47.611189] [<c10580c4>] async_run_entry_fn+0x64/0x160
<7>[ 47.611196] [<c104a244>] ? process_one_work+0x104/0x480
<7>[ 47.611203] [<c104a24c>] ? process_one_work+0x10c/0x480
<7>[ 47.611209] [<c104a2c0>] process_one_work+0x180/0x480
<7>[ 47.611215] [<c104a244>] ? process_one_work+0x104/0x480
<7>[ 47.611220] [<c1058060>] ? async_schedule+0x10/0x10
<7>[ 47.611226] [<c104c15c>] worker_thread+0x11c/0x2f0
<7>[ 47.611233] [<c104c040>] ? manage_workers.isra.27+0x1f0/0x1f0
<7>[ 47.611239] [<c10507f8>] kthread+0x78/0x80
<7>[ 47.611244] [<c1750000>] ? timer_cpu_notify+0xd6/0x20d
<7>[ 47.611253] [<c1050780>] ? __init_kthread_worker+0x60/0x60
<7>[ 47.611258] [<c176357e>] kernel_thread_helper+0x6/0xd
<7>[ 47.611283] ------------[ cut here ]------------
This patch changes the hub_quiesce() behavior to flush the TT clear work
instead of canceling it, to make sure that no TT clear request remains
uncompleted before suspend.
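As a side note, a minimal sketch of the behavioral difference between the
two workqueue calls involved (illustrative only, not taken from this patch):

    #include <linux/workqueue.h>

    static struct work_struct tt_work;  /* stands in for hub->tt.clear_work */

    static void quiesce_sketch(void)
    {
            /* cancel_work_sync(): a queued-but-not-yet-run item is removed
             * and never executes, so a pending TT clear request would be
             * silently dropped before suspend.
             */
            cancel_work_sync(&tt_work);

            /* flush_work_sync(): waits until the item has finished running,
             * so any pending TT clear request completes before the hub
             * suspends.
             */
            flush_work_sync(&tt_work);
    }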
Signed-off-by: Octavian Purdila <[email protected]>
Acked-by: Alan Stern <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/core/hub.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
index c16230b..f04ca3c 100644
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -729,13 +729,16 @@ static void hub_tt_work(struct work_struct *work)
int limit = 100;
spin_lock_irqsave (&hub->tt.lock, flags);
- while (--limit && !list_empty (&hub->tt.clear_list)) {
+ while (!list_empty(&hub->tt.clear_list)) {
struct list_head *next;
struct usb_tt_clear *clear;
struct usb_device *hdev = hub->hdev;
const struct hc_driver *drv;
int status;
+ if (!hub->quiescing && --limit < 0)
+ break;
+
next = hub->tt.clear_list.next;
clear = list_entry (next, struct usb_tt_clear, clear_list);
list_del (&clear->clear_list);
@@ -1200,7 +1203,7 @@ static void hub_quiesce(struct usb_hub *hub, enum hub_quiescing_type type)
if (hub->has_indicators)
cancel_delayed_work_sync(&hub->leds);
if (hub->tt.hub)
- cancel_work_sync(&hub->tt.clear_work);
+ flush_work_sync(&hub->tt.clear_work);
}
/* caller has locked the hub device */
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Haojian Zhuang <[email protected]>
commit 7ae9d71e8df27a3ab60a05ae3add08728debc09c upstream.
The mutex is locked twice, once by pinconf_groups_show() and again by
pin_config_group_get(), which results in a deadlock. Avoid taking the
mutex in pinconf_groups_show().
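A rough sketch of the recursive-locking pattern being removed (simplified
and hypothetical, not the actual driver code):

    #include <linux/mutex.h>

    static DEFINE_MUTEX(pinctrl_mutex);

    static void pin_config_group_get_sketch(void)
    {
            mutex_lock(&pinctrl_mutex);     /* second acquisition */
            /* ... read the group configuration ... */
            mutex_unlock(&pinctrl_mutex);
    }

    static void pinconf_groups_show_sketch(void)
    {
            mutex_lock(&pinctrl_mutex);     /* first acquisition */
            pin_config_group_get_sketch();  /* deadlocks: kernel mutexes
                                             * are not recursive */
            mutex_unlock(&pinctrl_mutex);
    }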
Signed-off-by: Haojian Zhuang <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/pinctrl/pinconf.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/drivers/pinctrl/pinconf.c b/drivers/pinctrl/pinconf.c
index 43f474c..baee2cc 100644
--- a/drivers/pinctrl/pinconf.c
+++ b/drivers/pinctrl/pinconf.c
@@ -537,8 +537,6 @@ static int pinconf_groups_show(struct seq_file *s, void *what)
seq_puts(s, "Pin config settings per pin group\n");
seq_puts(s, "Format: group (name): configs\n");
- mutex_lock(&pinctrl_mutex);
-
while (selector < ngroups) {
const char *gname = pctlops->get_group_name(pctldev, selector);
@@ -549,8 +547,6 @@ static int pinconf_groups_show(struct seq_file *s, void *what)
selector++;
}
- mutex_unlock(&pinctrl_mutex);
-
return 0;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai <[email protected]>
commit cb766404e6b8c566569eb9ada02ea45d28729864 upstream.
For some reason, a Toshiba laptop doesn't like the EAPD turned up for the
headphone pin. Add fixup code to force the EAPD down for NID 0x15.
Bugzilla: https://bugzilla.novell.com/show_bug.cgi?id=569991
Signed-off-by: Takashi Iwai <[email protected]>
[ herton: backported to 3.5:
create enum/structs from scratch, instead of bringing
commit 6e72aa5f. alc_apply_fixup with ALC_FIXUP_ACT_PRE_PROBE should
be a noop on this, but keep it as it's a skeleton for possible
quirks in the future being applied through stable ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
sound/pci/hda/patch_realtek.c | 32 ++++++++++++++++++++++++++++++++
1 file changed, 32 insertions(+)
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index e996453..649ce26 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -5546,6 +5546,33 @@ static const struct hda_verb alc268_beep_init_verbs[] = {
{ }
};
+enum {
+ ALC268_FIXUP_HP_EAPD,
+};
+
+static const struct alc_fixup alc268_fixups[] = {
+ [ALC268_FIXUP_HP_EAPD] = {
+ .type = ALC_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+ {0x15, AC_VERB_SET_EAPD_BTLENABLE, 0},
+ {}
+ }
+ },
+};
+
+static const struct alc_model_fixup alc268_fixup_models[] = {
+ {.id = ALC268_FIXUP_HP_EAPD, .name = "hp-eapd"},
+ {}
+};
+
+static const struct snd_pci_quirk alc268_fixup_tbl[] = {
+ /* below is codec SSID since multiple Toshiba laptops have the
+ * same PCI SSID 1179:ff00
+ */
+ SND_PCI_QUIRK(0x1179, 0xff06, "Toshiba P200", ALC268_FIXUP_HP_EAPD),
+ {}
+};
+
/*
* BIOS auto configuration
*/
@@ -5577,6 +5604,9 @@ static int patch_alc268(struct hda_codec *codec)
spec = codec->spec;
+ alc_pick_fixup(codec, alc268_fixup_models, alc268_fixup_tbl, alc268_fixups);
+ alc_apply_fixup(codec, ALC_FIXUP_ACT_PRE_PROBE);
+
/* automatic parse from the BIOS config */
err = alc268_parse_auto_config(codec);
if (err < 0)
@@ -5606,6 +5636,8 @@ static int patch_alc268(struct hda_codec *codec)
codec->patch_ops = alc_patch_ops;
spec->shutup = alc_eapd_shutup;
+ alc_apply_fixup(codec, ALC_FIXUP_ACT_PROBE);
+
return 0;
error:
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: David Vrabel <[email protected]>
commit a349e23d1cf746f8bdc603dcc61fae9ee4a695f6 upstream.
In 32 bit guests, if a userspace process has %eax == -ERESTARTSYS
(-512) or -ERESTARTNOINTR (-513) when it is interrupted by an event
/and/ the process has a pending signal then %eip (and %eax) are
corrupted when returning to the main process after handling the
signal. The application may then crash with SIGSEGV or a SIGILL or it
may have subtly incorrect behaviour (depending on what instruction it
returned to).
This occurs because handle_signal() incorrectly thinks that there is a
system call that needs to be restarted, so it adjusts %eip and %eax to
re-execute the system call instruction (even though user space had not
made a system call).
If %eax == -ERESTARTNOHAND (-514) or -ERESTART_RESTARTBLOCK (-516), then
handle_signal() only corrupts %eax (by setting it to -EINTR). This may
still cause the application to crash or behave incorrectly.
handle_signal() assumes that regs->orig_ax >= 0 means a system call so
any kernel entry point that is not for a system call must push a
negative value for orig_ax. For example, for physical interrupts on
bare metal the inverse of the vector is pushed and page_fault() sets
regs->orig_ax to -1, overwriting the hardware provided error code.
xen_hypervisor_callback() was incorrectly pushing 0 for orig_ax
instead of -1.
Classic Xen kernels pushed %eax, which works because %eax cannot be both
non-negative and -ERESTARTSYS (etc.), but using -1 is consistent with the
other non-system call entry points and avoids some of the tests in
handle_signal().
There were similar bugs in xen_failsafe_callback() of both 32 and
64-bit guests. If the fault was corrected and the normal return path
was used then 0 was incorrectly pushed as the value for orig_ax.
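For context, a paraphrased sketch of the syscall-restart test in
handle_signal() on 32-bit x86 that the orig_ax convention drives
(simplified; the real code also honours SA_RESTART for -ERESTARTSYS):

    #include <linux/errno.h>
    #include <asm/ptrace.h>

    static void restart_check_sketch(struct pt_regs *regs)
    {
            if ((long)regs->orig_ax >= 0) {           /* genuine system call */
                    switch ((long)regs->ax) {
                    case -ERESTARTNOHAND:
                    case -ERESTART_RESTARTBLOCK:
                            regs->ax = -EINTR;
                            break;
                    case -ERESTARTSYS:
                    case -ERESTARTNOINTR:
                            regs->ax = regs->orig_ax; /* restore syscall nr */
                            regs->ip -= 2;            /* back up to the
                                                       * syscall instruction */
                            break;
                    }
            }
            /* With orig_ax == -1 pushed by the non-syscall entry points,
             * none of the above can fire on code that never made a
             * system call.
             */
    }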
Signed-off-by: David Vrabel <[email protected]>
Acked-by: Jan Beulich <[email protected]>
Acked-by: Ian Campbell <[email protected]>
Signed-off-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/x86/kernel/entry_32.S | 8 +++++---
arch/x86/kernel/entry_64.S | 2 +-
2 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
index 623f288..8f8e8ee 100644
--- a/arch/x86/kernel/entry_32.S
+++ b/arch/x86/kernel/entry_32.S
@@ -1016,7 +1016,7 @@ ENTRY(xen_sysenter_target)
ENTRY(xen_hypervisor_callback)
CFI_STARTPROC
- pushl_cfi $0
+ pushl_cfi $-1 /* orig_ax = -1 => not a system call */
SAVE_ALL
TRACE_IRQS_OFF
@@ -1058,14 +1058,16 @@ ENTRY(xen_failsafe_callback)
2: mov 8(%esp),%es
3: mov 12(%esp),%fs
4: mov 16(%esp),%gs
+ /* EAX == 0 => Category 1 (Bad segment)
+ EAX != 0 => Category 2 (Bad IRET) */
testl %eax,%eax
popl_cfi %eax
lea 16(%esp),%esp
CFI_ADJUST_CFA_OFFSET -16
jz 5f
addl $16,%esp
- jmp iret_exc # EAX != 0 => Category 2 (Bad IRET)
-5: pushl_cfi $0 # EAX == 0 => Category 1 (Bad segment)
+ jmp iret_exc
+5: pushl_cfi $-1 /* orig_ax = -1 => not a system call */
SAVE_ALL
jmp ret_from_exception
CFI_ENDPROC
diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 7d65133..e1ed683 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -1381,7 +1381,7 @@ ENTRY(xen_failsafe_callback)
CFI_RESTORE r11
addq $0x30,%rsp
CFI_ADJUST_CFA_OFFSET -0x30
- pushq_cfi $0
+ pushq_cfi $-1 /* orig_ax = -1 => not a system call */
SAVE_ALL
jmp error_exit
CFI_ENDPROC
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Tejun Heo <[email protected]>
commit d87838321124061f6c935069d97f37010fa417e6 upstream.
This reverts commit 7e3aa30ac8c904a706518b725c451bb486daaae9.
The commit incorrectly assumed that the fork path always performed
threadgroup_change_begin/end() and depended on that for synchronization
against the task exit and cgroup migration paths, instead of explicitly
grabbing task_lock().
threadgroup_change is not locked when forking a new process (as opposed
to a new thread in the same process), and even if it were, it wouldn't
be effective, as different processes use different threadgroup locks.
Revert the incorrect optimization.
Signed-off-by: Tejun Heo <[email protected]>
LKML-Reference: <20121008020000.GB2575@localhost>
Acked-by: Li Zefan <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
kernel/cgroup.c | 15 +++------------
1 file changed, 3 insertions(+), 12 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 75d4318..a91aa0b 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -4805,19 +4805,10 @@ void cgroup_post_fork(struct task_struct *child)
*/
if (use_task_css_set_links) {
write_lock(&css_set_lock);
- if (list_empty(&child->cg_list)) {
- /*
- * It's safe to use child->cgroups without task_lock()
- * here because we are protected through
- * threadgroup_change_begin() against concurrent
- * css_set change in cgroup_task_migrate(). Also
- * the task can't exit at that point until
- * wake_up_new_task() is called, so we are protected
- * against cgroup_exit() setting child->cgroup to
- * init_css_set.
- */
+ task_lock(child);
+ if (list_empty(&child->cg_list))
list_add(&child->cg_list, &child->cgroups->tasks);
- }
+ task_unlock(child);
write_unlock(&css_set_lock);
}
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Tejun Heo <[email protected]>
commit 9bb71308b8133d643648776243e4d5599b1c193d upstream.
This reverts commit 7e381b0eb1e1a9805c37335562e8dc02e7d7848c.
The commit incorrectly assumed that the fork path always performed
threadgroup_change_begin/end() and depended on that for synchronization
against the task exit and cgroup migration paths, instead of explicitly
grabbing task_lock().
threadgroup_change is not locked when forking a new process (as opposed
to a new thread in the same process), and even if it were, it wouldn't
be effective, as different processes use different threadgroup locks.
Revert the incorrect optimization.
Signed-off-by: Tejun Heo <[email protected]>
LKML-Reference: <20121008020000.GB2575@localhost>
Acked-by: Li Zefan <[email protected]>
Bitterly-Acked-by: Frederic Weisbecker <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
kernel/cgroup.c | 23 ++++++-----------------
1 file changed, 6 insertions(+), 17 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 63c9596..75d4318 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -4739,31 +4739,20 @@ static const struct file_operations proc_cgroupstats_operations = {
*
* A pointer to the shared css_set was automatically copied in
* fork.c by dup_task_struct(). However, we ignore that copy, since
- * it was not made under the protection of RCU, cgroup_mutex or
- * threadgroup_change_begin(), so it might no longer be a valid
- * cgroup pointer. cgroup_attach_task() might have already changed
- * current->cgroups, allowing the previously referenced cgroup
- * group to be removed and freed.
- *
- * Outside the pointer validity we also need to process the css_set
- * inheritance between threadgoup_change_begin() and
- * threadgoup_change_end(), this way there is no leak in any process
- * wide migration performed by cgroup_attach_proc() that could otherwise
- * miss a thread because it is too early or too late in the fork stage.
+ * it was not made under the protection of RCU or cgroup_mutex, so
+ * might no longer be a valid cgroup pointer. cgroup_attach_task() might
+ * have already changed current->cgroups, allowing the previously
+ * referenced cgroup group to be removed and freed.
*
* At the point that cgroup_fork() is called, 'current' is the parent
* task, and the passed argument 'child' points to the child task.
*/
void cgroup_fork(struct task_struct *child)
{
- /*
- * We don't need to task_lock() current because current->cgroups
- * can't be changed concurrently here. The parent obviously hasn't
- * exited and called cgroup_exit(), and we are synchronized against
- * cgroup migration through threadgroup_change_begin().
- */
+ task_lock(current);
child->cgroups = current->cgroups;
get_css_set(child->cgroups);
+ task_unlock(current);
INIT_LIST_HEAD(&child->cg_list);
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Pritesh Raithatha <[email protected]>
commit a03690e44468dcd3088f6600ab036d17bd2130ff upstream.
Signed-off-by: Pritesh Raithatha <[email protected]>
Acked-by: Stephen Warren <[email protected]>
Tested-by: Stephen Warren <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/pinctrl/pinctrl-tegra30.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/pinctrl/pinctrl-tegra30.c b/drivers/pinctrl/pinctrl-tegra30.c
index 0386fdf..7894f14 100644
--- a/drivers/pinctrl/pinctrl-tegra30.c
+++ b/drivers/pinctrl/pinctrl-tegra30.c
@@ -3345,10 +3345,10 @@ static const struct tegra_function tegra30_functions[] = {
FUNCTION(vi_alt3),
};
-#define MUXCTL_REG_A 0x3000
-#define PINGROUP_REG_A 0x868
+#define DRV_PINGROUP_REG_A 0x868 /* bank 0 */
+#define PINGROUP_REG_A 0x3000 /* bank 1 */
-#define PINGROUP_REG_Y(r) ((r) - MUXCTL_REG_A)
+#define PINGROUP_REG_Y(r) ((r) - PINGROUP_REG_A)
#define PINGROUP_REG_N(r) -1
#define PINGROUP(pg_name, f0, f1, f2, f3, f_safe, r, od, ior) \
@@ -3364,25 +3364,25 @@ static const struct tegra_function tegra30_functions[] = {
}, \
.func_safe = TEGRA_MUX_ ## f_safe, \
.mux_reg = PINGROUP_REG_Y(r), \
- .mux_bank = 0, \
+ .mux_bank = 1, \
.mux_bit = 0, \
.pupd_reg = PINGROUP_REG_Y(r), \
- .pupd_bank = 0, \
+ .pupd_bank = 1, \
.pupd_bit = 2, \
.tri_reg = PINGROUP_REG_Y(r), \
- .tri_bank = 0, \
+ .tri_bank = 1, \
.tri_bit = 4, \
.einput_reg = PINGROUP_REG_Y(r), \
- .einput_bank = 0, \
+ .einput_bank = 1, \
.einput_bit = 5, \
.odrain_reg = PINGROUP_REG_##od(r), \
- .odrain_bank = 0, \
+ .odrain_bank = 1, \
.odrain_bit = 6, \
.lock_reg = PINGROUP_REG_Y(r), \
- .lock_bank = 0, \
+ .lock_bank = 1, \
.lock_bit = 7, \
.ioreset_reg = PINGROUP_REG_##ior(r), \
- .ioreset_bank = 0, \
+ .ioreset_bank = 1, \
.ioreset_bit = 8, \
.drv_reg = -1, \
}
@@ -3401,8 +3401,8 @@ static const struct tegra_function tegra30_functions[] = {
.odrain_reg = -1, \
.lock_reg = -1, \
.ioreset_reg = -1, \
- .drv_reg = ((r) - PINGROUP_REG_A), \
- .drv_bank = 1, \
+ .drv_reg = ((r) - DRV_PINGROUP_REG_A), \
+ .drv_bank = 0, \
.hsm_bit = hsm_b, \
.schmitt_bit = schmitt_b, \
.lpmd_bit = lpmd_b, \
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Daisuke Nishimura <[email protected]>
commit 1f5320d5972aa50d3e8d2b227b636b370e608359 upstream.
notify_on_release must be triggered when the last process in a cgroup is
moved to another one. But if the first (and only) process in a cgroup is
moved to another, notify_on_release is not triggered.
# mkdir /cgroup/cpu/SRC
# mkdir /cgroup/cpu/DST
#
# echo 1 >/cgroup/cpu/SRC/notify_on_release
# echo 1 >/cgroup/cpu/DST/notify_on_release
#
# sleep 300 &
[1] 8629
#
# echo 8629 >/cgroup/cpu/SRC/tasks
# echo 8629 >/cgroup/cpu/DST/tasks
-> notify_on_release for /SRC must be triggered at this point,
but it isn't.
This is because put_css_set() is called before setting CGRP_RELEASABLE
in cgroup_task_migrate(). It is a regression introduced by commit
74a1166d ("cgroups: make procs file writable"), which was merged into
v3.0.
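A hedged note on why the ordering matters (my paraphrase, not from the
commit message): dropping the reference may immediately trigger the
release check, so the flag has to be visible before the drop.

    /* Paraphrased: put_css_set() can drop the last reference and then
     * test CGRP_RELEASABLE to decide whether to invoke the release
     * agent, so the bit must be set before the reference is dropped.
     */
    set_bit(CGRP_RELEASABLE, &oldcgrp->flags);
    put_css_set(oldcg);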
Cc: Ben Blum <[email protected]>
Acked-by: Li Zefan <[email protected]>
Signed-off-by: Daisuke Nishimura <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
kernel/cgroup.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 15462a0..63c9596 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -1927,9 +1927,8 @@ static void cgroup_task_migrate(struct cgroup *cgrp, struct cgroup *oldcgrp,
* trading it for newcg is protected by cgroup_mutex, we're safe to drop
* it here; it will be freed under RCU.
*/
- put_css_set(oldcg);
-
set_bit(CGRP_RELEASABLE, &oldcgrp->flags);
+ put_css_set(oldcg);
}
/**
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Pritesh Raithatha <[email protected]>
commit d6ec6b60a56a1e7d99da1fc69c031fa5ab54ba94 upstream.
Change nvidia,slew_rate* to nvidia,slew-rate*.
Signed-off-by: Pritesh Raithatha <[email protected]>
Acked-by: Stephen Warren <[email protected]>
Tested-by: Stephen Warren <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
.../bindings/pinctrl/nvidia,tegra20-pinmux.txt | 2 +-
.../bindings/pinctrl/nvidia,tegra30-pinmux.txt | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/Documentation/devicetree/bindings/pinctrl/nvidia,tegra20-pinmux.txt b/Documentation/devicetree/bindings/pinctrl/nvidia,tegra20-pinmux.txt
index c8e5782..683fde9 100644
--- a/Documentation/devicetree/bindings/pinctrl/nvidia,tegra20-pinmux.txt
+++ b/Documentation/devicetree/bindings/pinctrl/nvidia,tegra20-pinmux.txt
@@ -93,7 +93,7 @@ Valid values for pin and group names are:
With some exceptions, these support nvidia,high-speed-mode,
nvidia,schmitt, nvidia,low-power-mode, nvidia,pull-down-strength,
- nvidia,pull-up-strength, nvidia,slew_rate-rising, nvidia,slew_rate-falling.
+ nvidia,pull-up-strength, nvidia,slew-rate-rising, nvidia,slew-rate-falling.
drive_ao1, drive_ao2, drive_at1, drive_at2, drive_cdev1, drive_cdev2,
drive_csus, drive_dap1, drive_dap2, drive_dap3, drive_dap4, drive_dbg,
diff --git a/Documentation/devicetree/bindings/pinctrl/nvidia,tegra30-pinmux.txt b/Documentation/devicetree/bindings/pinctrl/nvidia,tegra30-pinmux.txt
index c275b70..6f426ed 100644
--- a/Documentation/devicetree/bindings/pinctrl/nvidia,tegra30-pinmux.txt
+++ b/Documentation/devicetree/bindings/pinctrl/nvidia,tegra30-pinmux.txt
@@ -83,7 +83,7 @@ Valid values for pin and group names are:
drive groups:
These all support nvidia,pull-down-strength, nvidia,pull-up-strength,
- nvidia,slew_rate-rising, nvidia,slew_rate-falling. Most but not all
+ nvidia,slew-rate-rising, nvidia,slew-rate-falling. Most but not all
support nvidia,high-speed-mode, nvidia,schmitt, nvidia,low-power-mode.
ao1, ao2, at1, at2, at3, at4, at5, cdev1, cdev2, cec, crt, csus, dap1,
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Stanislav Yakovlev <[email protected]>
commit bf11315eeda510ea4fc1a2bf972d8155d31d89b4 upstream.
The driver does not account for the space of the radiotap fields when
allocating an skb for the radiotap packet. This leads to a kernel panic
with the following call trace:
...
[67607.676067] [<c152f90f>] error_code+0x67/0x6c
[67607.676067] [<c142f831>] ? skb_put+0x91/0xa0
[67607.676067] [<f8cf5e5b>] ? ipw_handle_promiscuous_tx+0x16b/0x2d0 [ipw2200]
[67607.676067] [<f8cf5e5b>] ipw_handle_promiscuous_tx+0x16b/0x2d0 [ipw2200]
[67607.676067] [<f8cf899b>] ipw_net_hard_start_xmit+0x8b/0x90 [ipw2200]
[67607.676067] [<f8741c5a>] libipw_xmit+0x55a/0x980 [libipw]
[67607.676067] [<c143d3e8>] dev_hard_start_xmit+0x218/0x4d0
...
This bug was found by VittGam.
https://bugzilla.kernel.org/show_bug.cgi?id=43255
Signed-off-by: Stanislav Yakovlev <[email protected]>
Signed-off-by: John W. Linville <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/net/wireless/ipw2x00/ipw2200.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/wireless/ipw2x00/ipw2200.c b/drivers/net/wireless/ipw2x00/ipw2200.c
index 0036737..1f2edf2 100644
--- a/drivers/net/wireless/ipw2x00/ipw2200.c
+++ b/drivers/net/wireless/ipw2x00/ipw2200.c
@@ -10470,7 +10470,7 @@ static void ipw_handle_promiscuous_tx(struct ipw_priv *priv,
} else
len = src->len;
- dst = alloc_skb(len + sizeof(*rt_hdr), GFP_ATOMIC);
+ dst = alloc_skb(len + sizeof(*rt_hdr) + sizeof(u16)*2, GFP_ATOMIC);
if (!dst)
continue;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hedberg <[email protected]>
commit 065a13e2cc665f6547dc7e8a9d6b6565badf940a upstream.
When sending a pairing request or response we should not just blindly
copy the value that the remote device sent. Instead we should at least
make sure to mask out any unknown bits. This is particularly critical
from the upcoming LE Secure Connections feature perspective as
incorrectly indicating support for it (by copying the remote value)
would cause a failure to pair with devices that support it.
Signed-off-by: Johan Hedberg <[email protected]>
Acked-by: Marcel Holtmann <[email protected]>
Signed-off-by: Gustavo Padovan <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/bluetooth/smp.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
index 926043c..22fe004 100644
--- a/net/bluetooth/smp.c
+++ b/net/bluetooth/smp.c
@@ -31,6 +31,8 @@
#define SMP_TIMEOUT msecs_to_jiffies(30000)
+#define AUTH_REQ_MASK 0x07
+
static inline void swap128(u8 src[16], u8 dst[16])
{
int i;
@@ -229,7 +231,7 @@ static void build_pairing_cmd(struct l2cap_conn *conn,
req->max_key_size = SMP_MAX_ENC_KEY_SIZE;
req->init_key_dist = 0;
req->resp_key_dist = dist_keys;
- req->auth_req = authreq;
+ req->auth_req = (authreq & AUTH_REQ_MASK);
return;
}
@@ -238,7 +240,7 @@ static void build_pairing_cmd(struct l2cap_conn *conn,
rsp->max_key_size = SMP_MAX_ENC_KEY_SIZE;
rsp->init_key_dist = 0;
rsp->resp_key_dist = req->resp_key_dist & dist_keys;
- rsp->auth_req = authreq;
+ rsp->auth_req = (authreq & AUTH_REQ_MASK);
}
static u8 check_enc_key_size(struct l2cap_conn *conn, __u8 max_key_size)
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Stanislaw Gruszka <[email protected]>
commit 4045f72bcf3c293c7c5932ef001742d8bb5ded76 upstream.
This patch fixes corruption which can manifest itself as the following
crash when switching on the rfkill switch with the rt2x00 driver:
https://bugzilla.redhat.com/attachment.cgi?id=615362
The pointer key->u.ccmp.tfm of the group key gets corrupted in:
ieee80211_rx_h_michael_mic_verify():
/* update IV in key information to be able to detect replays */
rx->key->u.tkip.rx[rx->security_idx].iv32 = rx->tkip_iv32;
rx->key->u.tkip.rx[rx->security_idx].iv16 = rx->tkip_iv16;
because rt2x00 always sets RX_FLAG_MMIC_STRIPPED, even if the key is not TKIP.
We already check the type of the key on a different path in the
ieee80211_rx_h_michael_mic_verify() function, so adding an additional
check here is reasonable.
Signed-off-by: Stanislaw Gruszka <[email protected]>
Signed-off-by: John W. Linville <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/mac80211/wpa.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/mac80211/wpa.c b/net/mac80211/wpa.c
index bdb53ab..e72562a 100644
--- a/net/mac80211/wpa.c
+++ b/net/mac80211/wpa.c
@@ -106,7 +106,8 @@ ieee80211_rx_h_michael_mic_verify(struct ieee80211_rx_data *rx)
if (status->flag & RX_FLAG_MMIC_ERROR)
goto mic_fail;
- if (!(status->flag & RX_FLAG_IV_STRIPPED) && rx->key)
+ if (!(status->flag & RX_FLAG_IV_STRIPPED) && rx->key &&
+ rx->key->conf.cipher == WLAN_CIPHER_SUITE_TKIP)
goto update_iv;
return RX_CONTINUE;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Stanislaw Gruszka <[email protected]>
commit 6863255bd0e48bc41ae5a066d5c771801e92735a upstream.
Avoid the situation where we are in the associated state in mac80211 and
in the disassociated state in cfg80211. This can result in a crash
during module unload (like the one shown in this thread:
http://marc.info/?t=134373976300001&r=1&w=2) and possibly other
problems.
Reported-by: Pedro Francisco <[email protected]>
Signed-off-by: Stanislaw Gruszka <[email protected]>
Signed-off-by: Johannes Berg <[email protected]>
[herton: use the patch for 3.6 stable]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/net/cfg80211.h | 1 +
net/mac80211/mlme.c | 5 +++--
net/wireless/mlme.c | 12 +++---------
3 files changed, 7 insertions(+), 11 deletions(-)
diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
index 0289d4c..e225dcc 100644
--- a/include/net/cfg80211.h
+++ b/include/net/cfg80211.h
@@ -1130,6 +1130,7 @@ struct cfg80211_deauth_request {
const u8 *ie;
size_t ie_len;
u16 reason_code;
+ bool local_state_change;
};
/**
diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
index 92028de..6d79367 100644
--- a/net/mac80211/mlme.c
+++ b/net/mac80211/mlme.c
@@ -3467,6 +3467,7 @@ int ieee80211_mgd_deauth(struct ieee80211_sub_if_data *sdata,
{
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
u8 frame_buf[DEAUTH_DISASSOC_LEN];
+ bool tx = !req->local_state_change;
mutex_lock(&ifmgd->mtx);
@@ -3483,11 +3484,11 @@ int ieee80211_mgd_deauth(struct ieee80211_sub_if_data *sdata,
if (ifmgd->associated &&
ether_addr_equal(ifmgd->associated->bssid, req->bssid))
ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH,
- req->reason_code, true, frame_buf);
+ req->reason_code, tx, frame_buf);
else
ieee80211_send_deauth_disassoc(sdata, req->bssid,
IEEE80211_STYPE_DEAUTH,
- req->reason_code, true,
+ req->reason_code, tx,
frame_buf);
mutex_unlock(&ifmgd->mtx);
diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
index eb90988..b6626d9 100644
--- a/net/wireless/mlme.c
+++ b/net/wireless/mlme.c
@@ -441,20 +441,14 @@ int __cfg80211_mlme_deauth(struct cfg80211_registered_device *rdev,
.reason_code = reason,
.ie = ie,
.ie_len = ie_len,
+ .local_state_change = local_state_change,
};
ASSERT_WDEV_LOCK(wdev);
- if (local_state_change) {
- if (wdev->current_bss &&
- ether_addr_equal(wdev->current_bss->pub.bssid, bssid)) {
- cfg80211_unhold_bss(wdev->current_bss);
- cfg80211_put_bss(&wdev->current_bss->pub);
- wdev->current_bss = NULL;
- }
-
+ if (local_state_change && (!wdev->current_bss ||
+ !ether_addr_equal(wdev->current_bss->pub.bssid, bssid)))
return 0;
- }
return rdev->ops->deauth(&rdev->wiphy, dev, &req);
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Vaibhav Nagarnaik <[email protected]>
commit 8e49f418c9632790bf456634742d34d97120a784 upstream.
On a system where num_present_cpus < num_possible_cpus, non-present CPUs
don't have per_cpu buffers allocated, even if all CPUs are online.
If per_cpu/<cpu>/buffer_size_kb is modified for such a CPU, it can cause
a panic due to a NULL dereference in ring_buffer_resize().
To fix this, the resize operation is allowed only if the per-cpu buffer
has been initialized.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Vaibhav Nagarnaik <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
kernel/trace/ring_buffer.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index f765465..db6dff1 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1567,6 +1567,10 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
put_online_cpus();
} else {
+ /* Make sure this CPU has been intitialized */
+ if (!cpumask_test_cpu(cpu_id, buffer->cpumask))
+ goto out;
+
cpu_buffer = buffer->buffers[cpu_id];
if (nr_pages == cpu_buffer->nr_pages)
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Kees Cook <[email protected]>
commit 2702b1526c7278c4d65d78de209a465d4de2885e upstream.
Calling uname() with the UNAME26 personality set allows a leak of kernel
stack contents. This fixes it by defensively calculating the length of
the copy_to_user() call, making the len argument unsigned, and
initializing the stack buffer to zero (now technically unneeded, but hey,
overkill).
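A rough illustration of the pre-fix leak pattern (paraphrased sketch, not
the exact kernel code):

    char buf[65];
    /* snprintf() only writes the short "2.6.x..." string into buf ... */
    snprintf(buf, len, "2.6.%u%s", v, rest);
    /* ... but copy_to_user() copies len bytes out of buf, so a caller
     * passing len > 65 receives stack memory located beyond buf.
     */
    ret = copy_to_user(release, buf, len);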
CVE-2012-0957
Reported-by: PaX Team <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: PaX Team <[email protected]>
Cc: Brad Spengler <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
kernel/sys.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/kernel/sys.c b/kernel/sys.c
index 0349bde..1b66408 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -1265,15 +1265,16 @@ DECLARE_RWSEM(uts_sem);
* Work around broken programs that cannot handle "Linux 3.0".
* Instead we map 3.x to 2.6.40+x, so e.g. 3.0 would be 2.6.40
*/
-static int override_release(char __user *release, int len)
+static int override_release(char __user *release, size_t len)
{
int ret = 0;
- char buf[65];
if (current->personality & UNAME26) {
- char *rest = UTS_RELEASE;
+ const char *rest = UTS_RELEASE;
+ char buf[65] = { 0 };
int ndots = 0;
unsigned v;
+ size_t copy;
while (*rest) {
if (*rest == '.' && ++ndots >= 3)
@@ -1283,8 +1284,9 @@ static int override_release(char __user *release, int len)
rest++;
}
v = ((LINUX_VERSION_CODE >> 8) & 0xff) + 40;
- snprintf(buf, len, "2.6.%u%s", v, rest);
- ret = copy_to_user(release, buf, len);
+ copy = min(sizeof(buf), max_t(size_t, 1, len));
+ copy = scnprintf(buf, copy, "2.6.%u%s", v, rest);
+ ret = copy_to_user(release, buf, copy + 1);
}
return ret;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Dmitry Monakhov <[email protected]>
commit dee1f973ca341c266229faa5a1a5bb268bed3531 upstream.
We assumed that at the time we call ext4_convert_unwritten_extents_endio()
the extent in question is fully inside [map->m_lblk, map->m_len] because
it was already split during submission. But this may not be true due to
a race between writeback and fallocate.
If the extent in question is larger than requested, we will split it
again. Special precautions must be taken if a zeroout is required,
because [map->m_lblk, map->m_len] already contains valid data.
Signed-off-by: Dmitry Monakhov <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ext4/extents.c | 57 ++++++++++++++++++++++++++++++++++++++++++-----------
1 file changed, 46 insertions(+), 11 deletions(-)
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 9752106..844e11e 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -52,6 +52,9 @@
#define EXT4_EXT_MARK_UNINIT1 0x2 /* mark first half uninitialized */
#define EXT4_EXT_MARK_UNINIT2 0x4 /* mark second half uninitialized */
+#define EXT4_EXT_DATA_VALID1 0x8 /* first half contains valid data */
+#define EXT4_EXT_DATA_VALID2 0x10 /* second half contains valid data */
+
static __le32 ext4_extent_block_csum(struct inode *inode,
struct ext4_extent_header *eh)
{
@@ -2896,6 +2899,9 @@ static int ext4_split_extent_at(handle_t *handle,
unsigned int ee_len, depth;
int err = 0;
+ BUG_ON((split_flag & (EXT4_EXT_DATA_VALID1 | EXT4_EXT_DATA_VALID2)) ==
+ (EXT4_EXT_DATA_VALID1 | EXT4_EXT_DATA_VALID2));
+
ext_debug("ext4_split_extents_at: inode %lu, logical"
"block %llu\n", inode->i_ino, (unsigned long long)split);
@@ -2954,7 +2960,14 @@ static int ext4_split_extent_at(handle_t *handle,
err = ext4_ext_insert_extent(handle, inode, path, &newex, flags);
if (err == -ENOSPC && (EXT4_EXT_MAY_ZEROOUT & split_flag)) {
- err = ext4_ext_zeroout(inode, &orig_ex);
+ if (split_flag & (EXT4_EXT_DATA_VALID1|EXT4_EXT_DATA_VALID2)) {
+ if (split_flag & EXT4_EXT_DATA_VALID1)
+ err = ext4_ext_zeroout(inode, ex2);
+ else
+ err = ext4_ext_zeroout(inode, ex);
+ } else
+ err = ext4_ext_zeroout(inode, &orig_ex);
+
if (err)
goto fix_extent_len;
/* update the extent length and mark as initialized */
@@ -3007,12 +3020,13 @@ static int ext4_split_extent(handle_t *handle,
uninitialized = ext4_ext_is_uninitialized(ex);
if (map->m_lblk + map->m_len < ee_block + ee_len) {
- split_flag1 = split_flag & EXT4_EXT_MAY_ZEROOUT ?
- EXT4_EXT_MAY_ZEROOUT : 0;
+ split_flag1 = split_flag & EXT4_EXT_MAY_ZEROOUT;
flags1 = flags | EXT4_GET_BLOCKS_PRE_IO;
if (uninitialized)
split_flag1 |= EXT4_EXT_MARK_UNINIT1 |
EXT4_EXT_MARK_UNINIT2;
+ if (split_flag & EXT4_EXT_DATA_VALID2)
+ split_flag1 |= EXT4_EXT_DATA_VALID1;
err = ext4_split_extent_at(handle, inode, path,
map->m_lblk + map->m_len, split_flag1, flags1);
if (err)
@@ -3025,8 +3039,8 @@ static int ext4_split_extent(handle_t *handle,
return PTR_ERR(path);
if (map->m_lblk >= ee_block) {
- split_flag1 = split_flag & EXT4_EXT_MAY_ZEROOUT ?
- EXT4_EXT_MAY_ZEROOUT : 0;
+ split_flag1 = split_flag & (EXT4_EXT_MAY_ZEROOUT |
+ EXT4_EXT_DATA_VALID2);
if (uninitialized)
split_flag1 |= EXT4_EXT_MARK_UNINIT1;
if (split_flag & EXT4_EXT_MARK_UNINIT2)
@@ -3304,26 +3318,47 @@ static int ext4_split_unwritten_extents(handle_t *handle,
split_flag |= ee_block + ee_len <= eof_block ? EXT4_EXT_MAY_ZEROOUT : 0;
split_flag |= EXT4_EXT_MARK_UNINIT2;
-
+ if (flags & EXT4_GET_BLOCKS_CONVERT)
+ split_flag |= EXT4_EXT_DATA_VALID2;
flags |= EXT4_GET_BLOCKS_PRE_IO;
return ext4_split_extent(handle, inode, path, map, split_flag, flags);
}
static int ext4_convert_unwritten_extents_endio(handle_t *handle,
- struct inode *inode,
- struct ext4_ext_path *path)
+ struct inode *inode,
+ struct ext4_map_blocks *map,
+ struct ext4_ext_path *path)
{
struct ext4_extent *ex;
+ ext4_lblk_t ee_block;
+ unsigned int ee_len;
int depth;
int err = 0;
depth = ext_depth(inode);
ex = path[depth].p_ext;
+ ee_block = le32_to_cpu(ex->ee_block);
+ ee_len = ext4_ext_get_actual_len(ex);
ext_debug("ext4_convert_unwritten_extents_endio: inode %lu, logical"
"block %llu, max_blocks %u\n", inode->i_ino,
- (unsigned long long)le32_to_cpu(ex->ee_block),
- ext4_ext_get_actual_len(ex));
+ (unsigned long long)ee_block, ee_len);
+
+ /* If extent is larger than requested then split is required */
+ if (ee_block != map->m_lblk || ee_len > map->m_len) {
+ err = ext4_split_unwritten_extents(handle, inode, map, path,
+ EXT4_GET_BLOCKS_CONVERT);
+ if (err < 0)
+ goto out;
+ ext4_ext_drop_refs(path);
+ path = ext4_ext_find_extent(inode, map->m_lblk, path);
+ if (IS_ERR(path)) {
+ err = PTR_ERR(path);
+ goto out;
+ }
+ depth = ext_depth(inode);
+ ex = path[depth].p_ext;
+ }
err = ext4_ext_get_access(handle, inode, path + depth);
if (err)
@@ -3631,7 +3666,7 @@ ext4_ext_handle_uninitialized_extents(handle_t *handle, struct inode *inode,
}
/* IO end_io complete, convert the filled extent to written */
if ((flags & EXT4_GET_BLOCKS_CONVERT)) {
- ret = ext4_convert_unwritten_extents_endio(handle, inode,
+ ret = ext4_convert_unwritten_extents_endio(handle, inode, map,
path);
if (ret >= 0) {
ext4_update_inode_fsync_trans(handle, inode, 1);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: =?UTF-8?q?Bj=C3=B8rn=20Mork?= <[email protected]>
commit 1452df6f1b7e396d89c2a1fdbdc0e0e839f97671 upstream.
Based on information from the ZTE Windows drivers.
Signed-off-by: Bjørn Mork <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/serial/option.c | 68 ++++++++++++++++++++++++++++++-------------
1 file changed, 48 insertions(+), 20 deletions(-)
diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
index d581ed3..26d073f 100644
--- a/drivers/usb/serial/option.c
+++ b/drivers/usb/serial/option.c
@@ -503,11 +503,19 @@ static const struct option_blacklist_info net_intf5_blacklist = {
.reserved = BIT(5),
};
+static const struct option_blacklist_info net_intf6_blacklist = {
+ .reserved = BIT(6),
+};
+
static const struct option_blacklist_info zte_mf626_blacklist = {
.sendsetup = BIT(0) | BIT(1),
.reserved = BIT(4),
};
+static const struct option_blacklist_info zte_1255_blacklist = {
+ .reserved = BIT(3) | BIT(4),
+};
+
static const struct usb_device_id option_ids[] = {
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COLT) },
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_RICOLA) },
@@ -853,13 +861,19 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0113, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t)&net_intf5_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0117, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0118, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0121, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0118, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0121, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0122, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0123, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0124, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0125, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0126, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0123, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0124, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0125, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf6_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0126, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0128, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0142, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0143, 0xff, 0xff, 0xff) },
@@ -872,7 +886,8 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0156, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0157, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t)&net_intf5_blacklist },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0158, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0158, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0159, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0161, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0162, 0xff, 0xff, 0xff) },
@@ -886,7 +901,8 @@ static const struct usb_device_id option_ids[] = {
.driver_info = (kernel_ulong_t)&net_intf4_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1010, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t)&net_intf4_blacklist },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1012, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1012, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1057, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1058, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1059, 0xff, 0xff, 0xff) },
@@ -1002,18 +1018,24 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1169, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1170, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1244, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1245, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1245, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1246, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1247, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1247, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1248, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1249, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1250, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1251, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1252, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1252, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1253, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1254, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1255, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1256, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1254, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1255, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&zte_1255_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1256, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1257, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1258, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1259, 0xff, 0xff, 0xff) },
@@ -1071,15 +1093,21 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0070, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0073, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0094, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0130, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0133, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0141, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0130, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf1_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0133, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0141, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0147, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0152, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0168, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0168, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0170, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0176, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0178, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0176, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0178, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_CDMA_TECH, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC8710, 0xff, 0xff, 0xff) },
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Ming Lei <[email protected]>
commit c5211187f7ff8e8dbff4ebf7c011ac4c0ffe319c upstream.
If the write endpoint is interrupt type, usb_sndintpipe() should
be passed to usb_fill_int_urb() instead of usb_sndbulkpipe().
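For reference, a minimal sketch of the intended pattern (illustrative
helper, not the driver code):

    #include <linux/usb.h>

    static void fill_write_urb_sketch(struct urb *urb, struct usb_device *udev,
                                      struct usb_endpoint_descriptor *epwrite,
                                      void *buf, int len,
                                      usb_complete_t done, void *ctx)
    {
            /* An interrupt OUT endpoint needs an interrupt pipe; a bulk
             * pipe encodes the wrong transfer type for it.
             */
            if (usb_endpoint_xfer_int(epwrite))
                    usb_fill_int_urb(urb, udev,
                                     usb_sndintpipe(udev, epwrite->bEndpointAddress),
                                     buf, len, done, ctx, epwrite->bInterval);
            else
                    usb_fill_bulk_urb(urb, udev,
                                      usb_sndbulkpipe(udev, epwrite->bEndpointAddress),
                                      buf, len, done, ctx);
    }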
Cc: Oliver Neukum <[email protected]>
Signed-off-by: Ming Lei <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/class/cdc-acm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
index 2e29044..f4593ee 100644
--- a/drivers/usb/class/cdc-acm.c
+++ b/drivers/usb/class/cdc-acm.c
@@ -1234,7 +1234,7 @@ made_compressed_probe:
if (usb_endpoint_xfer_int(epwrite))
usb_fill_int_urb(snd->urb, usb_dev,
- usb_sndbulkpipe(usb_dev, epwrite->bEndpointAddress),
+ usb_sndintpipe(usb_dev, epwrite->bEndpointAddress),
NULL, acm->writesize, acm_write_bulk, snd, epwrite->bInterval);
else
usb_fill_bulk_urb(snd->urb, usb_dev,
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: David Henningsson <[email protected]>
commit 71aa5ebe36a4e936eff281b375a4707b6a8320f2 upstream.
Even when CONFIG_SND_DEBUG is not enabled, we don't want to
return an arbitrary memory location when the channel count is
larger than we expected.
Signed-off-by: David Henningsson <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
sound/pci/hda/patch_realtek.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index f6784d7..2549f0b 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -2486,8 +2486,10 @@ static const char *alc_get_line_out_pfx(struct alc_spec *spec, int ch,
return "PCM";
break;
}
- if (snd_BUG_ON(ch >= ARRAY_SIZE(channel_name)))
+ if (ch >= ARRAY_SIZE(channel_name)) {
+ snd_BUG();
return "PCM";
+ }
return channel_name[ch];
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Jacob Shin <[email protected]>
commit 1bbbbe779aabe1f0768c2bf8f8c0a5583679b54a upstream.
On systems with very large memory (1 TB in our case), BIOS may report a
reserved region or a hole in the E820 map, even above the 4 GB range. Exclude
these from the direct mapping.
[ hpa: this should be done not just for > 4 GB but for everything above the legacy
region (1 MB), at the very least. That, however, turns out to require significant
restructuring. That work is well underway, but is not suitable for rc/stable. ]
Signed-off-by: Jacob Shin <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: H. Peter Anvin <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/x86/kernel/setup.c | 17 +++++++++++++++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 16be6dc..e615c31 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -919,8 +919,21 @@ void __init setup_arch(char **cmdline_p)
#ifdef CONFIG_X86_64
if (max_pfn > max_low_pfn) {
- max_pfn_mapped = init_memory_mapping(1UL<<32,
- max_pfn<<PAGE_SHIFT);
+ int i;
+ for (i = 0; i < e820.nr_map; i++) {
+ struct e820entry *ei = &e820.map[i];
+
+ if (ei->addr + ei->size <= 1UL << 32)
+ continue;
+
+ if (ei->type == E820_RESERVED)
+ continue;
+
+ max_pfn_mapped = init_memory_mapping(
+ ei->addr < 1UL << 32 ? 1UL << 32 : ei->addr,
+ ei->addr + ei->size);
+ }
+
/* can we preseve max_low_pfn ?*/
max_low_pfn = max_pfn;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Egbert Eich <[email protected]>
commit 082918471139b07964967cfe5f70230909c82ae1 upstream.
radeon_i2c_fini() walks through the list of I2C bus records in
rdev->i2c_bus[] to destroy each of them.
However, radeon_ext_tmds_enc_destroy() also has code to destroy its
associated I2C bus record, which has been obtained by radeon_i2c_lookup()
and is therefore also in the i2c_bus[] list.
This causes a double free, resulting in a kernel panic when unloading
the radeon driver.
Removing the destroy code from radeon_ext_tmds_enc_destroy() fixes this
problem.
agd5f: fix compiler warning
Signed-off-by: Egbert Eich <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/gpu/drm/radeon/radeon_legacy_encoders.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/radeon/radeon_legacy_encoders.c b/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
index a0c8222..9e62325 100644
--- a/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
+++ b/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
@@ -974,11 +974,7 @@ static void radeon_legacy_tmds_ext_mode_set(struct drm_encoder *encoder,
static void radeon_ext_tmds_enc_destroy(struct drm_encoder *encoder)
{
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
- struct radeon_encoder_ext_tmds *tmds = radeon_encoder->enc_priv;
- if (tmds) {
- if (tmds->i2c_bus)
- radeon_i2c_destroy(tmds->i2c_bus);
- }
+ /* don't destroy the i2c bus record here, this will be done in radeon_i2c_fini */
kfree(radeon_encoder->enc_priv);
drm_encoder_cleanup(encoder);
kfree(radeon_encoder);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Stefano Babic <[email protected]>
commit 6ff1f3d3bd7c69c62ca5773b1b684bce42eff06a upstream.
On AM3517, tx and rx interrupts are detected together with the disconnect
event. This generates a kernel panic in musb_interrupt(), because rx / tx
are handled after the disconnect.
This issue was seen on a Technexion TAM3517 SOM. When unplugging a
device, tx / rx interrupts are detected together with the disconnect.
This leads to a kernel panic like this:
[ 68.526153] Unable to handle kernel NULL pointer dereference at virtual address 00000011
[ 68.534698] pgd = c0004000
[ 68.537536] [00000011] *pgd=00000000
[ 68.541351] Internal error: Oops: 17 [#1] ARM
[ 68.545928] Modules linked in:
[ 68.549163] CPU: 0 Not tainted (3.6.0-rc5-00020-g9e05905 #178)
[ 68.555694] PC is at rxstate+0x8/0xdc
[ 68.559539] LR is at musb_interrupt+0x98/0x858
[ 68.564239] pc : [<c035cd88>] lr : [<c035af1c>] psr: 40000193
[ 68.564239] sp : ce83fb40 ip : d0906410 fp : 00000000
[ 68.576293] r10: 00000000 r9 : cf3b0e40 r8 : 00000002
[ 68.581817] r7 : 00000019 r6 : 00000001 r5 : 00000001 r4 : 000000d4
[ 68.588684] r3 : 00000000 r2 : 00000000 r1 : ffffffcc r0 : cf23c108
[ 68.595550] Flags: nZcv IRQs off FIQs on Mode SVC_32 ISA ARM Segment ke
Note: this behavior is not seen with a USB hub, while it is
easy to reproduce by connecting a USB pen drive directly to the USB-A
port of the board.
Drop tx / rx interrupts if disconnect is detected.
Signed-off-by: Stefano Babic <[email protected]>
CC: Felipe Balbi <[email protected]>
Tested-by: Dmitry Lifshitz <[email protected]>
Tested-by: Igor Grinberg <[email protected]>
Signed-off-by: Felipe Balbi <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/musb/am35x.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/usb/musb/am35x.c b/drivers/usb/musb/am35x.c
index 9f3eda9..340cff9 100644
--- a/drivers/usb/musb/am35x.c
+++ b/drivers/usb/musb/am35x.c
@@ -311,6 +311,12 @@ static irqreturn_t am35x_musb_interrupt(int irq, void *hci)
ret = IRQ_HANDLED;
}
+ /* Drop spurious RX and TX if device is disconnected */
+ if (musb->int_usb & MUSB_INTR_DISCONNECT) {
+ musb->int_tx = 0;
+ musb->int_rx = 0;
+ }
+
if (musb->int_tx || musb->int_rx || musb->int_usb)
ret |= musb_interrupt(musb);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai <[email protected]>
commit 128960a9ad67e2d119738f5211956e0304517551 upstream.
Delay the registration of the VGA switcheroo client until the end of
probing. Otherwise, switching too quickly may result in an Oops during
probing.
Also add a check of the return value from snd_hda_lock_devices().
Reported-and-tested-by: Daniel J Blueman <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
sound/pci/hda/hda_intel.c | 31 ++++++++++++++++++++-----------
1 file changed, 20 insertions(+), 11 deletions(-)
diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
index 7757536..ff2bf55 100644
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -485,6 +485,7 @@ struct azx {
/* VGA-switcheroo setup */
unsigned int use_vga_switcheroo:1;
+ unsigned int vga_switcheroo_registered:1;
unsigned int init_failed:1; /* delayed init failed */
unsigned int disabled:1; /* disabled by VGA-switcher */
@@ -2523,7 +2524,9 @@ static void azx_vs_set_state(struct pci_dev *pci,
if (disabled) {
azx_suspend(pci, PMSG_FREEZE);
chip->disabled = true;
- snd_hda_lock_devices(chip->bus);
+ if (snd_hda_lock_devices(chip->bus))
+ snd_printk(KERN_WARNING SFX
+ "Cannot lock devices!\n");
} else {
snd_hda_unlock_devices(chip->bus);
chip->disabled = false;
@@ -2566,14 +2569,20 @@ static const struct vga_switcheroo_client_ops azx_vs_ops = {
static int __devinit register_vga_switcheroo(struct azx *chip)
{
+ int err;
+
if (!chip->use_vga_switcheroo)
return 0;
/* FIXME: currently only handling DIS controller
* is there any machine with two switchable HDMI audio controllers?
*/
- return vga_switcheroo_register_audio_client(chip->pci, &azx_vs_ops,
+ err = vga_switcheroo_register_audio_client(chip->pci, &azx_vs_ops,
VGA_SWITCHEROO_DIS,
chip->bus != NULL);
+ if (err < 0)
+ return err;
+ chip->vga_switcheroo_registered = 1;
+ return 0;
}
#else
#define init_vga_switcheroo(chip) /* NOP */
@@ -2593,7 +2602,8 @@ static int azx_free(struct azx *chip)
if (use_vga_switcheroo(chip)) {
if (chip->disabled && chip->bus)
snd_hda_unlock_devices(chip->bus);
- vga_switcheroo_unregister_client(chip->pci);
+ if (chip->vga_switcheroo_registered)
+ vga_switcheroo_unregister_client(chip->pci);
}
if (chip->initialized) {
@@ -2935,14 +2945,6 @@ static int __devinit azx_create(struct snd_card *card, struct pci_dev *pci,
}
ok:
- err = register_vga_switcheroo(chip);
- if (err < 0) {
- snd_printk(KERN_ERR SFX
- "Error registering VGA-switcheroo client\n");
- azx_free(chip);
- return err;
- }
-
err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, chip, &ops);
if (err < 0) {
snd_printk(KERN_ERR SFX "Error creating device [card]!\n");
@@ -3169,6 +3171,13 @@ static int __devinit azx_probe(struct pci_dev *pci,
pci_set_drvdata(pci, card);
+ err = register_vga_switcheroo(chip);
+ if (err < 0) {
+ snd_printk(KERN_ERR SFX
+ "Error registering VGA-switcheroo client\n");
+ goto out_free;
+ }
+
dev++;
return 0;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann <[email protected]>
commit fdc858a466b738d35d3492bc7cf77b1dac98bf7c upstream.
The sharpsl_pcmcia_ops structure gets passed into
sa11xx_drv_pcmcia_probe, where it gets accessed at run-time,
unlike all other pcmcia drivers that pass their structures
into platform_device_add_data, which makes a copy.
This means the gcc warning is valid and the structure
must not be marked as __initdata.
Without this patch, building collie_defconfig results in:
drivers/pcmcia/pxa2xx_sharpsl.c:22:31: fatal error: mach-pxa/hardware.h: No such file or directory
compilation terminated.
make[3]: *** [drivers/pcmcia/pxa2xx_sharpsl.o] Error 1
make[2]: *** [drivers/pcmcia] Error 2
make[1]: *** [drivers] Error 2
make: *** [sub-make] Error 2
Signed-off-by: Arnd Bergmann <[email protected]>
Cc: Dominik Brodowski <[email protected]>
Cc: Russell King <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: [email protected]
Cc: Jochen Friedrich <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/pcmcia/pxa2xx_sharpsl.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/pcmcia/pxa2xx_sharpsl.c b/drivers/pcmcia/pxa2xx_sharpsl.c
index b066273..7dd879c 100644
--- a/drivers/pcmcia/pxa2xx_sharpsl.c
+++ b/drivers/pcmcia/pxa2xx_sharpsl.c
@@ -194,7 +194,7 @@ static void sharpsl_pcmcia_socket_suspend(struct soc_pcmcia_socket *skt)
sharpsl_pcmcia_init_reset(skt);
}
-static struct pcmcia_low_level sharpsl_pcmcia_ops __initdata = {
+static struct pcmcia_low_level sharpsl_pcmcia_ops = {
.owner = THIS_MODULE,
.hw_init = sharpsl_pcmcia_hw_init,
.socket_state = sharpsl_pcmcia_socket_state,
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Kenneth Graunke <[email protected]>
commit 26b6e44afb58432a5e998da0343757404f9de9ee upstream.
A previous patch, namely:
commit bf97b276ca04cee9ab65ffd378fa8e6aedd71ff6
Author: Daniel Vetter <[email protected]>
Date: Wed Apr 11 20:42:41 2012 +0200
drm/i915: implement w/a for incorrect guarband clipping
accidentally set bit 5 in 3D_CHICKEN, which has nothing to do with
clipping. This patch changes it to be set in 3D_CHICKEN3, where it
belongs.
The game "Dante" demonstrates random clipping issues when guardband
clipping is enabled and bit 5 of 3D_CHICKEN3 isn't set. So the
workaround is actually necessary.
Cc: Daniel Vetter <[email protected]>
Cc: Oliver McFadden <[email protected]>
Acked-by: Paul Menzel <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Mika Kuoppala <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/gpu/drm/i915/i915_reg.h | 2 +-
drivers/gpu/drm/i915/intel_pm.c | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 4cad908..84c04c8 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -508,7 +508,7 @@
*/
# define _3D_CHICKEN2_WM_READ_PIPELINED (1 << 14)
#define _3D_CHICKEN3 0x02090
-#define _3D_CHICKEN_SF_DISABLE_FASTCLIP_CULL (1 << 5)
+#define _3D_CHICKEN3_SF_DISABLE_FASTCLIP_CULL (1 << 5)
#define MI_MODE 0x0209c
# define VS_TIMER_DISPATCH (1 << 6)
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index b55aa0e..f8e332d 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3334,8 +3334,8 @@ static void gen6_init_clock_gating(struct drm_device *dev)
GEN6_RCCUNIT_CLOCK_GATE_DISABLE);
/* Bspec says we need to always set all mask bits. */
- I915_WRITE(_3D_CHICKEN, (0xFFFF << 16) |
- _3D_CHICKEN_SF_DISABLE_FASTCLIP_CULL);
+ I915_WRITE(_3D_CHICKEN3, (0xFFFF << 16) |
+ _3D_CHICKEN3_SF_DISABLE_FASTCLIP_CULL);
/*
* According to the spec the following bits should be
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Willy Tarreau <[email protected]>
commit c77d7162a7ae451c2e895d7ef7fbeb0906107472 upstream.
Starting an old X server causes a kernel BUG since commit 1b50247a8d:
------------[ cut here ]------------
kernel BUG at drivers/gpu/drm/i915/i915_gem.c:3661!
invalid opcode: 0000 [#1] SMP
Modules linked in: snd_seq_dummy snd_seq_oss snd_seq_midi_event snd_seq snd_seq_device snd_pcm_oss snd_mixer_oss uvcvideo
+videobuf2_core videodev videobuf2_vmalloc videobuf2_memops uhci_hcd ath9k mac80211 snd_hda_codec_realtek ath9k_common microcode
+ath9k_hw psmouse serio_raw sg ath cfg80211 atl1c lpc_ich mfd_core ehci_hcd snd_hda_intel snd_hda_codec snd_hwdep snd_pcm rtc_cmos
+snd_timer snd evdev eeepc_laptop snd_page_alloc sparse_keymap
Pid: 2866, comm: X Not tainted 3.5.6-rc1-eeepc #1 ASUSTeK Computer INC. 1005HA/1005HA
EIP: 0060:[<c12dc291>] EFLAGS: 00013297 CPU: 0
EIP is at i915_gem_entervt_ioctl+0xf1/0x110
EAX: f5941df4 EBX: f5940000 ECX: 00000000 EDX: 00020000
ESI: f5835400 EDI: 00000000 EBP: f51d7e38 ESP: f51d7e20
DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
CR0: 8005003b CR2: b760e0a0 CR3: 351b6000 CR4: 000007d0
DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
DR6: ffff0ff0 DR7: 00000400
Process X (pid: 2866, ti=f51d6000 task=f61af8d0 task.ti=f51d6000)
Stack:
00000001 00000000 f5835414 f51d7e84 f5835400 f54f85c0 f51d7f10 c12b530b
00000001 c151b139 c14751b6 c152e030 00000b32 00006459 00000059 0000e200
00000001 00000000 00006459 c159ddd0 c12dc1a0 ffffffea 00000000 00000000
Call Trace:
[<c12b530b>] drm_ioctl+0x2eb/0x440
[<c12dc1a0>] ? i915_gem_init+0xe0/0xe0
[<c1052b2b>] ? enqueue_hrtimer+0x1b/0x50
[<c1053321>] ? __hrtimer_start_range_ns+0x161/0x330
[<c10530b3>] ? lock_hrtimer_base+0x23/0x50
[<c1053163>] ? hrtimer_try_to_cancel+0x33/0x70
[<c12b5020>] ? drm_version+0x90/0x90
[<c10ca171>] vfs_ioctl+0x31/0x50
[<c10ca2e4>] do_vfs_ioctl+0x64/0x510
[<c10535de>] ? hrtimer_nanosleep+0x8e/0x100
[<c1052c20>] ? update_rmtp+0x80/0x80
[<c10ca7c9>] sys_ioctl+0x39/0x60
[<c1433949>] syscall_call+0x7/0xb
Code: 83 c4 0c 5b 5e 5f 5d c3 c7 44 24 04 2c 05 53 c1 c7 04 24 6f ef 47 c1 e8 6e e0 fd ff c7 83 38 1e 00 00 00 00 00 00 e9 3f ff ff
+ff <0f> 0b eb fe 0f 0b eb fe 8d b4 26 00 00 00 00 0f 0b eb fe 8d b6
EIP: [<c12dc291>] i915_gem_entervt_ioctl+0xf1/0x110 SS:ESP 0068:f51d7e20
---[ end trace dd332ec083cbd513 ]---
The crash happens here in i915_gem_entervt_ioctl() :
3659 BUG_ON(!list_empty(&dev_priv->mm.active_list));
3660 BUG_ON(!list_empty(&dev_priv->mm.flushing_list));
-> 3661 BUG_ON(!list_empty(&dev_priv->mm.inactive_list));
3662 mutex_unlock(&dev->struct_mutex);
Quoting Chris:
"That BUG_ON there is silly and can simply be removed. The check is to
verify that no batches were submitted to the kernel whilst the UMS/GEM
client was suspended - to which the BUG_ONs are a crude approximation.
Furthermore, the checks are too late, since it means we attempted to
program the hardware whilst it was in an invalid state, the BUG_ONs are
the least of your concerns at that point."
Note that this regression has been introduced in
commit 1b50247a8ddde4af5aaa0e6bc125615372ce6c16
Author: Chris Wilson <[email protected]>
Date: Tue Apr 24 15:47:30 2012 +0100
drm/i915: Remove the list of pinned inactive objects
Cc: Chris Wilson <[email protected]>
Signed-off-by: Willy Tarreau <[email protected]>
[danvet: Added note about the regressing commit and cc: stable.]
Signed-off-by: Daniel Vetter <[email protected]>
[herton: adjusted context]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/gpu/drm/i915/i915_gem.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 35926ad..fc6683a 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3658,7 +3658,6 @@ i915_gem_entervt_ioctl(struct drm_device *dev, void *data,
BUG_ON(!list_empty(&dev_priv->mm.active_list));
BUG_ON(!list_empty(&dev_priv->mm.flushing_list));
- BUG_ON(!list_empty(&dev_priv->mm.inactive_list));
mutex_unlock(&dev->struct_mutex);
ret = drm_irq_install(dev);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sarah Sharp <[email protected]>
commit d01f87c0ffa96cb44faa78710711eb6e974b891c upstream.
Before a driver is probed, we want to disable USB 3.0 Link Power
Management (LPM), in case the driver needs hub-initiated LPM disabled.
After the probe finishes, we want to attempt to re-enable LPM, in order
to balance the LPM ref count.
When a probe fails (such as when libusual doesn't want to bind to a USB
3.0 mass storage device), make sure to balance the LPM ref counts by
re-enabling LPM.
This patch should be backported to kernels as old as 3.5, that contain
the commit 8306095fd2c1100e8244c09bf560f97aca5a311d "USB: Disable USB
3.0 LPM in critical sections."
Signed-off-by: Sarah Sharp <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/core/driver.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c
index f536aeb..da33079 100644
--- a/drivers/usb/core/driver.c
+++ b/drivers/usb/core/driver.c
@@ -371,6 +371,10 @@ static int usb_probe_interface(struct device *dev)
intf->condition = USB_INTERFACE_UNBOUND;
usb_cancel_queued_reset(intf);
+ /* If the LPM disable succeeded, balance the ref counts. */
+ if (!lpm_disable_error)
+ usb_unlocked_enable_lpm(udev);
+
/* Unbound interfaces are always runtime-PM-disabled and -suspended */
if (driver->supports_autosuspend)
pm_runtime_disable(dev);
--
1.7.9.5
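A quick note on the balance the patch above completes (a simplified sketch, not the driver core code): LPM is disabled before the driver's probe routine runs, so every exit path, including a probe failure, has to pair that disable with an enable to keep the ref counts balanced. With hypothetical lpm_disable()/lpm_enable()/do_probe() stand-ins, the pattern is:
#include <stdio.h>
struct device {
	const char *name;
};
/* hypothetical stand-ins for the real LPM and probe calls */
static int lpm_disable(struct device *dev)
{
	printf("LPM disabled for %s\n", dev->name);
	return 0;			/* 0 means the disable succeeded */
}
static void lpm_enable(struct device *dev)
{
	printf("LPM re-enabled for %s\n", dev->name);
}
static int do_probe(struct device *dev)
{
	(void)dev;
	return -1;			/* pretend the driver rejects the device */
}
static int probe_with_balanced_lpm(struct device *dev)
{
	int lpm_disable_error = lpm_disable(dev);
	int err = do_probe(dev);
	if (err) {
		/* failure path: balance the disable only if it succeeded */
		if (!lpm_disable_error)
			lpm_enable(dev);
		return err;
	}
	/* success path: the matching enable happens here as well */
	if (!lpm_disable_error)
		lpm_enable(dev);
	return 0;
}
int main(void)
{
	struct device dev = { "usb-storage" };
	probe_with_balanced_lpm(&dev);
	return 0;
}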
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sarah Sharp <[email protected]>
commit ae8963adb4ad8c5f2a89ca1d99fb7bb721e7599f upstream.
Some USB 3.0 devices signal that they don't implement Link PM by having
all zeroes in the U1/U2 exit latencies in their SuperSpeed BOS
descriptor. Don found that a Western Digital device he has experiences
transfer errors when LPM is enabled. The lsusb output shows the U1/U2
exit latencies are set to zero:
Binary Object Store Descriptor:
bLength 5
bDescriptorType 15
wTotalLength 22
bNumDeviceCaps 2
SuperSpeed USB Device Capability:
bLength 10
bDescriptorType 16
bDevCapabilityType 3
bmAttributes 0x00
Latency Tolerance Messages (LTM) Supported
wSpeedsSupported 0x000e
Device can operate at Full Speed (12Mbps)
Device can operate at High Speed (480Mbps)
Device can operate at SuperSpeed (5Gbps)
bFunctionalitySupport 1
Lowest fully-functional device speed is Full Speed (12Mbps)
bU1DevExitLat 0 micro seconds
bU2DevExitLat 0 micro seconds
The fix is to not enable LPM for a particular link state if we find its
corresponding exit latency is zero.
This patch should be backported to kernels as old as 3.5, that contain
the commit 1ea7e0e8e3d0f50901d335ea4178ab2aa8c88201 "USB: Add support to
enable/disable USB3 link states."
Signed-off-by: Sarah Sharp <[email protected]>
Reported-by: Don Zickus <[email protected]>
Tested-by: Don Zickus <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/core/hub.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
index 70223d5..209815a 100644
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -3294,6 +3294,16 @@ static void usb_enable_link_state(struct usb_hcd *hcd, struct usb_device *udev,
enum usb3_link_state state)
{
int timeout;
+ __u8 u1_mel = udev->bos->ss_cap->bU1devExitLat;
+ __le16 u2_mel = udev->bos->ss_cap->bU2DevExitLat;
+
+ /* If the device says it doesn't have *any* exit latency to come out of
+ * U1 or U2, it's probably lying. Assume it doesn't implement that link
+ * state.
+ */
+ if ((state == USB3_LPM_U1 && u1_mel == 0) ||
+ (state == USB3_LPM_U2 && u2_mel == 0))
+ return;
/* We allow the host controller to set the U1/U2 timeout internally
* first, so that it can change its schedule to account for the
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Mark Brown <[email protected]>
commit 5ae9eb4cbdfd640269dbd66aa3c92ea8e11cc838 upstream.
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
sound/soc/codecs/wm2200.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/sound/soc/codecs/wm2200.c b/sound/soc/codecs/wm2200.c
index 32682c1..58ab97a 100644
--- a/sound/soc/codecs/wm2200.c
+++ b/sound/soc/codecs/wm2200.c
@@ -2091,6 +2091,7 @@ static __devinit int wm2200_i2c_probe(struct i2c_client *i2c,
switch (wm2200->rev) {
case 0:
+ case 1:
ret = regmap_register_patch(wm2200->regmap, wm2200_reva_patch,
ARRAY_SIZE(wm2200_reva_patch));
if (ret != 0) {
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit d63b77f4c552cc3a20506871046ab0fcbc332609 upstream.
If we encounter an invalid (e.g., zeroed) mapping, return an error
and avoid a divide by zero.
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/osd_client.h | 2 +-
include/linux/ceph/osdmap.h | 6 +++---
net/ceph/osd_client.c | 32 ++++++++++++++++++++------------
net/ceph/osdmap.c | 18 ++++++++++++++++--
4 files changed, 40 insertions(+), 18 deletions(-)
diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
index cedfb1a..d9b880e 100644
--- a/include/linux/ceph/osd_client.h
+++ b/include/linux/ceph/osd_client.h
@@ -207,7 +207,7 @@ extern void ceph_osdc_handle_reply(struct ceph_osd_client *osdc,
extern void ceph_osdc_handle_map(struct ceph_osd_client *osdc,
struct ceph_msg *msg);
-extern void ceph_calc_raw_layout(struct ceph_osd_client *osdc,
+extern int ceph_calc_raw_layout(struct ceph_osd_client *osdc,
struct ceph_file_layout *layout,
u64 snapid,
u64 off, u64 *plen, u64 *bno,
diff --git a/include/linux/ceph/osdmap.h b/include/linux/ceph/osdmap.h
index 311ef8d..e88a620 100644
--- a/include/linux/ceph/osdmap.h
+++ b/include/linux/ceph/osdmap.h
@@ -109,9 +109,9 @@ extern struct ceph_osdmap *osdmap_apply_incremental(void **p, void *end,
extern void ceph_osdmap_destroy(struct ceph_osdmap *map);
/* calculate mapping of a file extent to an object */
-extern void ceph_calc_file_object_mapping(struct ceph_file_layout *layout,
- u64 off, u64 *plen,
- u64 *bno, u64 *oxoff, u64 *oxlen);
+extern int ceph_calc_file_object_mapping(struct ceph_file_layout *layout,
+ u64 off, u64 *plen,
+ u64 *bno, u64 *oxoff, u64 *oxlen);
/* calculate mapping of object to a placement group */
extern int ceph_calc_object_layout(struct ceph_object_layout *ol,
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index 752b498..065cc38 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -52,7 +52,7 @@ static int op_has_extent(int op)
op == CEPH_OSD_OP_WRITE);
}
-void ceph_calc_raw_layout(struct ceph_osd_client *osdc,
+int ceph_calc_raw_layout(struct ceph_osd_client *osdc,
struct ceph_file_layout *layout,
u64 snapid,
u64 off, u64 *plen, u64 *bno,
@@ -62,12 +62,15 @@ void ceph_calc_raw_layout(struct ceph_osd_client *osdc,
struct ceph_osd_request_head *reqhead = req->r_request->front.iov_base;
u64 orig_len = *plen;
u64 objoff, objlen; /* extent in object */
+ int r;
reqhead->snapid = cpu_to_le64(snapid);
/* object extent? */
- ceph_calc_file_object_mapping(layout, off, plen, bno,
- &objoff, &objlen);
+ r = ceph_calc_file_object_mapping(layout, off, plen, bno,
+ &objoff, &objlen);
+ if (r < 0)
+ return r;
if (*plen < orig_len)
dout(" skipping last %llu, final file extent %llu~%llu\n",
orig_len - *plen, off, *plen);
@@ -83,7 +86,7 @@ void ceph_calc_raw_layout(struct ceph_osd_client *osdc,
dout("calc_layout bno=%llx %llu~%llu (%d pages)\n",
*bno, objoff, objlen, req->r_num_pages);
-
+ return 0;
}
EXPORT_SYMBOL(ceph_calc_raw_layout);
@@ -112,20 +115,25 @@ EXPORT_SYMBOL(ceph_calc_raw_layout);
*
* fill osd op in request message.
*/
-static void calc_layout(struct ceph_osd_client *osdc,
- struct ceph_vino vino,
- struct ceph_file_layout *layout,
- u64 off, u64 *plen,
- struct ceph_osd_request *req,
- struct ceph_osd_req_op *op)
+static int calc_layout(struct ceph_osd_client *osdc,
+ struct ceph_vino vino,
+ struct ceph_file_layout *layout,
+ u64 off, u64 *plen,
+ struct ceph_osd_request *req,
+ struct ceph_osd_req_op *op)
{
u64 bno;
+ int r;
- ceph_calc_raw_layout(osdc, layout, vino.snap, off,
- plen, &bno, req, op);
+ r = ceph_calc_raw_layout(osdc, layout, vino.snap, off,
+ plen, &bno, req, op);
+ if (r < 0)
+ return r;
snprintf(req->r_oid, sizeof(req->r_oid), "%llx.%08llx", vino.ino, bno);
req->r_oid_len = strlen(req->r_oid);
+
+ return r;
}
/*
diff --git a/net/ceph/osdmap.c b/net/ceph/osdmap.c
index 9600674..d10a72b 100644
--- a/net/ceph/osdmap.c
+++ b/net/ceph/osdmap.c
@@ -945,7 +945,7 @@ bad:
* for now, we write only a single su, until we can
* pass a stride back to the caller.
*/
-void ceph_calc_file_object_mapping(struct ceph_file_layout *layout,
+int ceph_calc_file_object_mapping(struct ceph_file_layout *layout,
u64 off, u64 *plen,
u64 *ono,
u64 *oxoff, u64 *oxlen)
@@ -959,11 +959,17 @@ void ceph_calc_file_object_mapping(struct ceph_file_layout *layout,
dout("mapping %llu~%llu osize %u fl_su %u\n", off, *plen,
osize, su);
+ if (su == 0 || sc == 0)
+ goto invalid;
su_per_object = osize / su;
+ if (su_per_object == 0)
+ goto invalid;
dout("osize %u / su %u = su_per_object %u\n", osize, su,
su_per_object);
- BUG_ON((su & ~PAGE_MASK) != 0);
+ if ((su & ~PAGE_MASK) != 0)
+ goto invalid;
+
/* bl = *off / su; */
t = off;
do_div(t, su);
@@ -991,6 +997,14 @@ void ceph_calc_file_object_mapping(struct ceph_file_layout *layout,
*plen = *oxlen;
dout(" obj extent %llu~%llu\n", *oxoff, *oxlen);
+ return 0;
+
+invalid:
+ dout(" invalid layout\n");
+ *ono = 0;
+ *oxoff = 0;
+ *oxlen = 0;
+ return -EINVAL;
}
EXPORT_SYMBOL(ceph_calc_file_object_mapping);
--
1.7.9.5
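For context on why a zeroed layout is fatal here (a simplified sketch, not the ceph code): the file-to-object mapping divides by the layout's stripe unit, stripe count and object size, so zeroed fields mean a divide by zero. The arithmetic, together with the -EINVAL guards the patch adds, can be illustrated with stand-in types like this:
#include <stdint.h>
#include <errno.h>
#include <stdio.h>
struct layout {
	uint32_t su;		/* stripe unit, bytes */
	uint32_t sc;		/* stripe count, objects per stripe */
	uint32_t object_size;	/* bytes per object */
};
/*
 * Map a file offset to (object number, offset within object).
 * Return -EINVAL for an invalid (e.g. zeroed) layout instead of
 * dividing by zero.
 */
static int map_offset(const struct layout *l, uint64_t off,
		      uint64_t *objno, uint64_t *objoff)
{
	uint32_t su_per_object;
	uint64_t bl, stripeno, stripepos, objsetno;
	if (l->su == 0 || l->sc == 0 || l->object_size == 0)
		return -EINVAL;
	su_per_object = l->object_size / l->su;
	if (su_per_object == 0)
		return -EINVAL;
	bl = off / l->su;		/* which stripe unit overall */
	stripeno = bl / l->sc;		/* which stripe */
	stripepos = bl % l->sc;		/* which object within the stripe */
	objsetno = stripeno / su_per_object;
	*objno = objsetno * l->sc + stripepos;
	*objoff = (stripeno % su_per_object) * l->su + off % l->su;
	return 0;
}
int main(void)
{
	struct layout good = { 65536, 4, 4194304 };
	struct layout zeroed = { 0, 0, 0 };
	uint64_t objno, objoff;
	if (map_offset(&good, 10 * 1024 * 1024, &objno, &objoff) == 0)
		printf("object %llu offset %llu\n",
		       (unsigned long long)objno, (unsigned long long)objoff);
	printf("zeroed layout -> %d\n",
	       map_offset(&zeroed, 0, &objno, &objoff));
	return 0;
}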
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: David Zafman <[email protected]>
commit 52eb5a900a9863a8b77a895f770e5d825c8e02c6 upstream.
A call to d_find_alias() needs a corresponding dput().
This fixes http://tracker.newdream.net/issues/3271
Signed-off-by: David Zafman <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ceph/export.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/fs/ceph/export.c b/fs/ceph/export.c
index 02ce909..9349bb3 100644
--- a/fs/ceph/export.c
+++ b/fs/ceph/export.c
@@ -90,6 +90,8 @@ static int ceph_encode_fh(struct inode *inode, u32 *rawfh, int *max_len,
*max_len = handle_length;
type = 255;
}
+ if (dentry)
+ dput(dentry);
return type;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 5ce765a540f34d1e2005e1210f49f67fdf11e997 upstream.
In write_partial_msg_pages(), pages need to be kmapped in order to
perform a CRC-32c calculation on them. As an artifact of the way
this code used to be structured, the kunmap() call was separated
from the kmap() call and both were done conditionally. But the
conditions under which the kmap() and kunmap() calls were made
differed, so there was a chance a kunmap() call would be done on a
page that had not been mapped.
The symptom of this was tripping a BUG() in kunmap_high() when
pkmap_count[nr] became 0.
Reported-by: Bryan K. Wright <[email protected]>
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index b141c86..8ba0eee 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1069,16 +1069,13 @@ static int write_partial_msg_pages(struct ceph_connection *con)
BUG_ON(kaddr == NULL);
base = kaddr + con->out_msg_pos.page_pos + bio_offset;
crc = crc32c(crc, base, len);
+ kunmap(page);
msg->footer.data_crc = cpu_to_le32(crc);
con->out_msg_pos.did_page_crc = true;
}
ret = ceph_tcp_sendpage(con->sock, page,
con->out_msg_pos.page_pos + bio_offset,
len, 1);
-
- if (do_datacrc)
- kunmap(page);
-
if (ret <= 0)
goto out;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit d1c338a509cea5378df59629ad47382810c38623 upstream.
The debugfs directory includes the cluster fsid and our unique global_id.
We need to delay the initialization of the debug entry until we have
learned both the fsid and our global_id from the monitor, or else the
second client can't create its debugfs entry and will fail (and multiple
client instances aren't properly reflected in debugfs).
Reported by: Yan, Zheng <[email protected]>
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Yehuda Sadeh <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ceph/debugfs.c | 1 +
net/ceph/ceph_common.c | 1 -
net/ceph/debugfs.c | 4 ++++
net/ceph/mon_client.c | 51 +++++++++++++++++++++++++++++++++++++++++++-----
4 files changed, 51 insertions(+), 6 deletions(-)
diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
index fb962ef..6d59006 100644
--- a/fs/ceph/debugfs.c
+++ b/fs/ceph/debugfs.c
@@ -201,6 +201,7 @@ int ceph_fs_debugfs_init(struct ceph_fs_client *fsc)
int err = -ENOMEM;
dout("ceph_fs_debugfs_init\n");
+ BUG_ON(!fsc->client->debugfs_dir);
fsc->debugfs_congestion_kb =
debugfs_create_file("writeback_congestion_kb",
0600,
diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c
index 3b45e01..408a81a 100644
--- a/net/ceph/ceph_common.c
+++ b/net/ceph/ceph_common.c
@@ -83,7 +83,6 @@ int ceph_check_fsid(struct ceph_client *client, struct ceph_fsid *fsid)
return -1;
}
} else {
- pr_info("client%lld fsid %pU\n", ceph_client_id(client), fsid);
memcpy(&client->fsid, fsid, sizeof(*fsid));
}
return 0;
diff --git a/net/ceph/debugfs.c b/net/ceph/debugfs.c
index 54b531a..38b5dc1 100644
--- a/net/ceph/debugfs.c
+++ b/net/ceph/debugfs.c
@@ -189,6 +189,9 @@ int ceph_debugfs_client_init(struct ceph_client *client)
snprintf(name, sizeof(name), "%pU.client%lld", &client->fsid,
client->monc.auth->global_id);
+ dout("ceph_debugfs_client_init %p %s\n", client, name);
+
+ BUG_ON(client->debugfs_dir);
client->debugfs_dir = debugfs_create_dir(name, ceph_debugfs_dir);
if (!client->debugfs_dir)
goto out;
@@ -234,6 +237,7 @@ out:
void ceph_debugfs_client_cleanup(struct ceph_client *client)
{
+ dout("ceph_debugfs_client_cleanup %p\n", client);
debugfs_remove(client->debugfs_osdmap);
debugfs_remove(client->debugfs_monmap);
debugfs_remove(client->osdc.debugfs_file);
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index 105d533..900ea0f 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -311,6 +311,17 @@ int ceph_monc_open_session(struct ceph_mon_client *monc)
EXPORT_SYMBOL(ceph_monc_open_session);
/*
+ * We require the fsid and global_id in order to initialize our
+ * debugfs dir.
+ */
+static bool have_debugfs_info(struct ceph_mon_client *monc)
+{
+ dout("have_debugfs_info fsid %d globalid %lld\n",
+ (int)monc->client->have_fsid, monc->auth->global_id);
+ return monc->client->have_fsid && monc->auth->global_id > 0;
+}
+
+/*
* The monitor responds with mount ack indicate mount success. The
* included client ticket allows the client to talk to MDSs and OSDs.
*/
@@ -320,9 +331,12 @@ static void ceph_monc_handle_map(struct ceph_mon_client *monc,
struct ceph_client *client = monc->client;
struct ceph_monmap *monmap = NULL, *old = monc->monmap;
void *p, *end;
+ int had_debugfs_info, init_debugfs = 0;
mutex_lock(&monc->mutex);
+ had_debugfs_info = have_debugfs_info(monc);
+
dout("handle_monmap\n");
p = msg->front.iov_base;
end = p + msg->front.iov_len;
@@ -344,12 +358,22 @@ static void ceph_monc_handle_map(struct ceph_mon_client *monc,
if (!client->have_fsid) {
client->have_fsid = true;
+ if (!had_debugfs_info && have_debugfs_info(monc)) {
+ pr_info("client%lld fsid %pU\n",
+ ceph_client_id(monc->client),
+ &monc->client->fsid);
+ init_debugfs = 1;
+ }
mutex_unlock(&monc->mutex);
- /*
- * do debugfs initialization without mutex to avoid
- * creating a locking dependency
- */
- ceph_debugfs_client_init(client);
+
+ if (init_debugfs) {
+ /*
+ * do debugfs initialization without mutex to avoid
+ * creating a locking dependency
+ */
+ ceph_debugfs_client_init(monc->client);
+ }
+
goto out_unlocked;
}
out:
@@ -865,8 +889,10 @@ static void handle_auth_reply(struct ceph_mon_client *monc,
{
int ret;
int was_auth = 0;
+ int had_debugfs_info, init_debugfs = 0;
mutex_lock(&monc->mutex);
+ had_debugfs_info = have_debugfs_info(monc);
if (monc->auth->ops)
was_auth = monc->auth->ops->is_authenticated(monc->auth);
monc->pending_auth = 0;
@@ -889,7 +915,22 @@ static void handle_auth_reply(struct ceph_mon_client *monc,
__send_subscribe(monc);
__resend_generic_request(monc);
}
+
+ if (!had_debugfs_info && have_debugfs_info(monc)) {
+ pr_info("client%lld fsid %pU\n",
+ ceph_client_id(monc->client),
+ &monc->client->fsid);
+ init_debugfs = 1;
+ }
mutex_unlock(&monc->mutex);
+
+ if (init_debugfs) {
+ /*
+ * do debugfs initialization without mutex to avoid
+ * creating a locking dependency
+ */
+ ceph_debugfs_client_init(monc->client);
+ }
}
static int __validate_auth(struct ceph_mon_client *monc)
--
1.7.9.5
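As an aside on the gating logic above (a sketch with stand-in names, not the libceph code): both the monmap handler and the auth-reply handler follow the same pattern, remember whether fsid and global_id were already both known, apply the update under the mutex, and initialize debugfs exactly once, outside the lock, when the pair first becomes complete. For example:
#include <stdbool.h>
#include <stdio.h>
struct client {
	bool have_fsid;
	long long global_id;
	bool debugfs_ready;
};
static bool have_debugfs_info(const struct client *c)
{
	return c->have_fsid && c->global_id > 0;
}
/*
 * Used by both reply handlers: note whether the info was already
 * complete, apply the update, and initialize debugfs exactly once,
 * after dropping the lock, when it first becomes complete.
 */
static void handle_reply(struct client *c, bool sets_fsid, long long gid)
{
	bool had, init = false;
	/* mutex_lock(&c->mutex); */
	had = have_debugfs_info(c);
	if (sets_fsid)
		c->have_fsid = true;
	if (gid > 0)
		c->global_id = gid;
	if (!had && have_debugfs_info(c))
		init = true;
	/* mutex_unlock(&c->mutex); */
	if (init) {
		c->debugfs_ready = true;	/* stands in for ceph_debugfs_client_init() */
		printf("debugfs initialized\n");
	}
}
int main(void)
{
	struct client c = { false, 0, false };
	handle_reply(&c, true, 0);	/* monmap arrives first: not enough yet */
	handle_reply(&c, false, 42);	/* auth reply supplies the global_id */
	return 0;
}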
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 8636ea672f0c5ab7478c42c5b6705ebd1db7eb6a upstream.
The ceph_fault() function takes the con mutex, so we should avoid
dropping it before calling it. This fixes a potential race with
another thread calling ceph_con_close(), or _open(), or similar (we
don't reverify con->state after retaking the lock).
Add annotation so that lockdep realizes we will drop the mutex before
returning.
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index feb5a2a..c3b628c 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2336,7 +2336,6 @@ done_unlocked:
return;
fault:
- mutex_unlock(&con->mutex);
ceph_fault(con); /* error/fault path */
goto done_unlocked;
}
@@ -2347,9 +2346,8 @@ fault:
* exponential backoff
*/
static void ceph_fault(struct ceph_connection *con)
+ __releases(con->mutex)
{
- mutex_lock(&con->mutex);
-
pr_err("%s%lld %s %s\n", ENTITY_NAME(con->peer_name),
ceph_pr_addr(&con->peer_addr.in_addr), con->error_msg);
dout("fault %p state %lu to peer %s\n",
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 4f471e4a9c7db0256834e1b376ea50c82e345c3c upstream.
Revoke all mon_client messages when we shut down the old connection.
This is mostly moot since we are re-using the same ceph_connection,
but it is cleaner.
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/mon_client.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index bfd21a8..105d533 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -118,6 +118,9 @@ static void __close_session(struct ceph_mon_client *monc)
{
dout("__close_session closing mon%d\n", monc->cur_mon);
ceph_msg_revoke(monc->m_auth);
+ ceph_msg_revoke_incoming(monc->m_auth_reply);
+ ceph_msg_revoke(monc->m_subscribe);
+ ceph_msg_revoke_incoming(monc->m_subscribe_ack);
ceph_con_close(&monc->con);
monc->cur_mon = -1;
monc->pending_auth = 0;
@@ -685,6 +688,7 @@ static void __resend_generic_request(struct ceph_mon_client *monc)
for (p = rb_first(&monc->generic_request_tree); p; p = rb_next(p)) {
req = rb_entry(p, struct ceph_mon_generic_request, node);
ceph_msg_revoke(req->request);
+ ceph_msg_revoke_incoming(req->reply);
ceph_con_send(&monc->con, ceph_msg_get(req->request));
}
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 43c7427d100769451601b8a36988ac0528ce0124 upstream.
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index b872db5..fa16f2c 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -528,6 +528,8 @@ void ceph_con_close(struct ceph_connection *con)
clear_bit(CON_FLAG_LOSSYTX, &con->flags); /* so we retry next connect */
clear_bit(CON_FLAG_KEEPALIVE_PENDING, &con->flags);
clear_bit(CON_FLAG_WRITE_PENDING, &con->flags);
+ clear_bit(CON_FLAG_KEEPALIVE_PENDING, &con->flags);
+ clear_bit(CON_FLAG_BACKOFF, &con->flags);
reset_connection(con);
con->peer_global_seq = 0;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 8dacc7da69a491c515851e68de6036f21b5663ce upstream.
Use a simple set of 6 enumerated values for the socket states (CON_STATE_*)
and use those instead of the state bits. All of the con->state checks are
now under the protection of the con mutex, so this is safe. It also
simplifies many of the state checks, because we can check for anything
other than the expected state instead of checking various bits for every
race we can think of.
This appears to hold up well to stress testing both with and without socket
failure injection on the server side.
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/messenger.h | 12 ----
net/ceph/messenger.c | 130 +++++++++++++++++++++-------------------
2 files changed, 68 insertions(+), 74 deletions(-)
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index de39cda..dc684f6 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -117,18 +117,6 @@ struct ceph_msg_pos {
#define BACKOFF 15
/*
- * ceph_connection states
- */
-#define CONNECTING 1
-#define NEGOTIATING 2
-#define CONNECTED 5
-#define STANDBY 8 /* no outgoing messages, socket closed. we keep
- * the ceph_connection around to maintain shared
- * state with the peer. */
-#define CLOSED 10 /* we've closed the connection */
-#define OPENING 13 /* open connection w/ (possibly new) peer */
-
-/*
* A single connection with another host.
*
* We maintain a queue of outgoing messages, and some session state to
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index e7320cd..563e46a 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -77,6 +77,17 @@
#define CON_SOCK_STATE_CONNECTED 3 /* -> CLOSING or -> CLOSED */
#define CON_SOCK_STATE_CLOSING 4 /* -> CLOSED */
+/*
+ * connection states
+ */
+#define CON_STATE_CLOSED 1 /* -> PREOPEN */
+#define CON_STATE_PREOPEN 2 /* -> CONNECTING, CLOSED */
+#define CON_STATE_CONNECTING 3 /* -> NEGOTIATING, CLOSED */
+#define CON_STATE_NEGOTIATING 4 /* -> OPEN, CLOSED */
+#define CON_STATE_OPEN 5 /* -> STANDBY, CLOSED */
+#define CON_STATE_STANDBY 6 /* -> PREOPEN, CLOSED */
+
+
/* static tag bytes (protocol control messages) */
static char tag_msg = CEPH_MSGR_TAG_MSG;
static char tag_ack = CEPH_MSGR_TAG_ACK;
@@ -503,11 +514,7 @@ void ceph_con_close(struct ceph_connection *con)
mutex_lock(&con->mutex);
dout("con_close %p peer %s\n", con,
ceph_pr_addr(&con->peer_addr.in_addr));
- clear_bit(NEGOTIATING, &con->state);
- clear_bit(CONNECTING, &con->state);
- clear_bit(CONNECTED, &con->state);
- clear_bit(STANDBY, &con->state); /* avoid connect_seq bump */
- set_bit(CLOSED, &con->state);
+ con->state = CON_STATE_CLOSED;
clear_bit(LOSSYTX, &con->flags); /* so we retry next connect */
clear_bit(KEEPALIVE_PENDING, &con->flags);
@@ -530,8 +537,9 @@ void ceph_con_open(struct ceph_connection *con,
{
mutex_lock(&con->mutex);
dout("con_open %p %s\n", con, ceph_pr_addr(&addr->in_addr));
- set_bit(OPENING, &con->state);
- WARN_ON(!test_and_clear_bit(CLOSED, &con->state));
+
+ BUG_ON(con->state != CON_STATE_CLOSED);
+ con->state = CON_STATE_PREOPEN;
con->peer_name.type = (__u8) entity_type;
con->peer_name.num = cpu_to_le64(entity_num);
@@ -571,7 +579,7 @@ void ceph_con_init(struct ceph_connection *con, void *private,
INIT_LIST_HEAD(&con->out_sent);
INIT_DELAYED_WORK(&con->work, con_work);
- set_bit(CLOSED, &con->state);
+ con->state = CON_STATE_CLOSED;
}
EXPORT_SYMBOL(ceph_con_init);
@@ -809,27 +817,21 @@ static struct ceph_auth_handshake *get_connect_authorizer(struct ceph_connection
if (!con->ops->get_authorizer) {
con->out_connect.authorizer_protocol = CEPH_AUTH_UNKNOWN;
con->out_connect.authorizer_len = 0;
-
return NULL;
}
/* Can't hold the mutex while getting authorizer */
-
mutex_unlock(&con->mutex);
-
auth = con->ops->get_authorizer(con, auth_proto, con->auth_retry);
-
mutex_lock(&con->mutex);
if (IS_ERR(auth))
return auth;
- if (test_bit(CLOSED, &con->state) || test_bit(OPENING, &con->flags))
+ if (con->state != CON_STATE_NEGOTIATING)
return ERR_PTR(-EAGAIN);
con->auth_reply_buf = auth->authorizer_reply_buf;
con->auth_reply_buf_len = auth->authorizer_reply_buf_len;
-
-
return auth;
}
@@ -1484,7 +1486,8 @@ static int process_banner(struct ceph_connection *con)
static void fail_protocol(struct ceph_connection *con)
{
reset_connection(con);
- set_bit(CLOSED, &con->state); /* in case there's queued work */
+ BUG_ON(con->state != CON_STATE_NEGOTIATING);
+ con->state = CON_STATE_CLOSED;
}
static int process_connect(struct ceph_connection *con)
@@ -1558,8 +1561,7 @@ static int process_connect(struct ceph_connection *con)
if (con->ops->peer_reset)
con->ops->peer_reset(con);
mutex_lock(&con->mutex);
- if (test_bit(CLOSED, &con->state) ||
- test_bit(OPENING, &con->state))
+ if (con->state != CON_STATE_NEGOTIATING)
return -EAGAIN;
break;
@@ -1605,8 +1607,10 @@ static int process_connect(struct ceph_connection *con)
fail_protocol(con);
return -1;
}
- clear_bit(NEGOTIATING, &con->state);
- set_bit(CONNECTED, &con->state);
+
+ BUG_ON(con->state != CON_STATE_NEGOTIATING);
+ con->state = CON_STATE_OPEN;
+
con->peer_global_seq = le32_to_cpu(con->in_reply.global_seq);
con->connect_seq++;
con->peer_features = server_feat;
@@ -1994,8 +1998,9 @@ more:
dout("try_write out_kvec_bytes %d\n", con->out_kvec_bytes);
/* open the socket first? */
- if (con->sock == NULL) {
- set_bit(CONNECTING, &con->state);
+ if (con->state == CON_STATE_PREOPEN) {
+ BUG_ON(con->sock);
+ con->state = CON_STATE_CONNECTING;
con_out_kvec_reset(con);
prepare_write_banner(con);
@@ -2046,8 +2051,7 @@ more_kvec:
}
do_next:
- if (!test_bit(CONNECTING, &con->state) &&
- !test_bit(NEGOTIATING, &con->state)) {
+ if (con->state == CON_STATE_OPEN) {
/* is anything else pending? */
if (!list_empty(&con->out_queue)) {
prepare_write_message(con);
@@ -2081,29 +2085,19 @@ static int try_read(struct ceph_connection *con)
{
int ret = -1;
- if (!con->sock)
- return 0;
-
- if (test_bit(STANDBY, &con->state))
+more:
+ dout("try_read start on %p state %lu\n", con, con->state);
+ if (con->state != CON_STATE_CONNECTING &&
+ con->state != CON_STATE_NEGOTIATING &&
+ con->state != CON_STATE_OPEN)
return 0;
- dout("try_read start on %p\n", con);
+ BUG_ON(!con->sock);
-more:
dout("try_read tag %d in_base_pos %d\n", (int)con->in_tag,
con->in_base_pos);
- /*
- * process_connect and process_message drop and re-take
- * con->mutex. make sure we handle a racing close or reopen.
- */
- if (test_bit(CLOSED, &con->state) ||
- test_bit(OPENING, &con->state)) {
- ret = -EAGAIN;
- goto out;
- }
-
- if (test_bit(CONNECTING, &con->state)) {
+ if (con->state == CON_STATE_CONNECTING) {
dout("try_read connecting\n");
ret = read_partial_banner(con);
if (ret <= 0)
@@ -2112,8 +2106,8 @@ more:
if (ret < 0)
goto out;
- clear_bit(CONNECTING, &con->state);
- set_bit(NEGOTIATING, &con->state);
+ BUG_ON(con->state != CON_STATE_CONNECTING);
+ con->state = CON_STATE_NEGOTIATING;
/* Banner is good, exchange connection info */
ret = prepare_write_connect(con);
@@ -2125,7 +2119,7 @@ more:
goto out;
}
- if (test_bit(NEGOTIATING, &con->state)) {
+ if (con->state == CON_STATE_NEGOTIATING) {
dout("try_read negotiating\n");
ret = read_partial_connect(con);
if (ret <= 0)
@@ -2136,6 +2130,8 @@ more:
goto more;
}
+ BUG_ON(con->state != CON_STATE_OPEN);
+
if (con->in_base_pos < 0) {
/*
* skipping + discarding content.
@@ -2169,8 +2165,8 @@ more:
prepare_read_ack(con);
break;
case CEPH_MSGR_TAG_CLOSE:
- clear_bit(CONNECTED, &con->state);
- set_bit(CLOSED, &con->state); /* fixme */
+ con_close_socket(con);
+ con->state = CON_STATE_CLOSED;
goto out;
default:
goto bad_tag;
@@ -2246,14 +2242,21 @@ static void con_work(struct work_struct *work)
mutex_lock(&con->mutex);
restart:
if (test_and_clear_bit(SOCK_CLOSED, &con->flags)) {
- if (test_and_clear_bit(CONNECTED, &con->state))
- con->error_msg = "socket closed";
- else if (test_and_clear_bit(NEGOTIATING, &con->state))
- con->error_msg = "negotiation failed";
- else if (test_and_clear_bit(CONNECTING, &con->state))
+ switch (con->state) {
+ case CON_STATE_CONNECTING:
con->error_msg = "connection failed";
- else
+ break;
+ case CON_STATE_NEGOTIATING:
+ con->error_msg = "negotiation failed";
+ break;
+ case CON_STATE_OPEN:
+ con->error_msg = "socket closed";
+ break;
+ default:
+ dout("unrecognized con state %d\n", (int)con->state);
con->error_msg = "unrecognized con state";
+ BUG();
+ }
goto fault;
}
@@ -2271,17 +2274,16 @@ restart:
}
}
- if (test_bit(STANDBY, &con->state)) {
+ if (con->state == CON_STATE_STANDBY) {
dout("con_work %p STANDBY\n", con);
goto done;
}
- if (test_bit(CLOSED, &con->state)) {
+ if (con->state == CON_STATE_CLOSED) {
dout("con_work %p CLOSED\n", con);
BUG_ON(con->sock);
goto done;
}
- if (test_and_clear_bit(OPENING, &con->state)) {
- /* reopen w/ new peer */
+ if (con->state == CON_STATE_PREOPEN) {
dout("con_work OPENING\n");
BUG_ON(con->sock);
}
@@ -2328,13 +2330,15 @@ static void ceph_fault(struct ceph_connection *con)
dout("fault %p state %lu to peer %s\n",
con, con->state, ceph_pr_addr(&con->peer_addr.in_addr));
- if (test_bit(CLOSED, &con->state))
- goto out_unlock;
+ BUG_ON(con->state != CON_STATE_CONNECTING &&
+ con->state != CON_STATE_NEGOTIATING &&
+ con->state != CON_STATE_OPEN);
con_close_socket(con);
if (test_bit(LOSSYTX, &con->flags)) {
- dout("fault on LOSSYTX channel\n");
+ dout("fault on LOSSYTX channel, marking CLOSED\n");
+ con->state = CON_STATE_CLOSED;
goto out_unlock;
}
@@ -2355,9 +2359,10 @@ static void ceph_fault(struct ceph_connection *con)
!test_bit(KEEPALIVE_PENDING, &con->flags)) {
dout("fault %p setting STANDBY clearing WRITE_PENDING\n", con);
clear_bit(WRITE_PENDING, &con->flags);
- set_bit(STANDBY, &con->state);
+ con->state = CON_STATE_STANDBY;
} else {
/* retry after a delay. */
+ con->state = CON_STATE_PREOPEN;
if (con->delay == 0)
con->delay = BASE_DELAY_INTERVAL;
else if (con->delay < MAX_DELAY_INTERVAL)
@@ -2431,8 +2436,9 @@ EXPORT_SYMBOL(ceph_messenger_init);
static void clear_standby(struct ceph_connection *con)
{
/* come back from STANDBY? */
- if (test_and_clear_bit(STANDBY, &con->state)) {
+ if (con->state == CON_STATE_STANDBY) {
dout("clear_standby %p and ++connect_seq\n", con);
+ con->state = CON_STATE_PREOPEN;
con->connect_seq++;
WARN_ON(test_bit(WRITE_PENDING, &con->flags));
WARN_ON(test_bit(KEEPALIVE_PENDING, &con->flags));
@@ -2451,7 +2457,7 @@ void ceph_con_send(struct ceph_connection *con, struct ceph_msg *msg)
mutex_lock(&con->mutex);
- if (test_bit(CLOSED, &con->state)) {
+ if (con->state == CON_STATE_CLOSED) {
dout("con_send %p closed, dropping %p\n", con, msg);
ceph_msg_put(msg);
mutex_unlock(&con->mutex);
--
1.7.9.5
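A short aside on the rework above (simplified names, not the messenger code): with a single enumerated state field that is only touched under the con mutex, rejecting an unexpected state is one comparison instead of a series of test_bit() checks, for instance:
#include <stdio.h>
enum con_state {
	STATE_CLOSED,
	STATE_PREOPEN,
	STATE_CONNECTING,
	STATE_NEGOTIATING,
	STATE_OPEN,
	STATE_STANDBY,
};
struct conn {
	enum con_state state;	/* only read/written under the con mutex */
};
/* a read pass only makes sense in these three states */
static int may_read(const struct conn *c)
{
	return c->state == STATE_CONNECTING ||
	       c->state == STATE_NEGOTIATING ||
	       c->state == STATE_OPEN;
}
int main(void)
{
	struct conn c = { STATE_NEGOTIATING };
	printf("may_read: %d\n", may_read(&c));
	c.state = STATE_STANDBY;
	printf("may_read: %d\n", may_read(&c));
	return 0;
}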
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit a59b55a602b6c741052d79c1e3643f8440cddd27 upstream.
Take the con mutex before checking whether the connection is closed to
avoid racing with someone else closing it.
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 16 +++++++---------
1 file changed, 7 insertions(+), 9 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 1a3cb4a..20e60a8 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2453,22 +2453,20 @@ static void clear_standby(struct ceph_connection *con)
*/
void ceph_con_send(struct ceph_connection *con, struct ceph_msg *msg)
{
- if (test_bit(CLOSED, &con->state)) {
- dout("con_send %p closed, dropping %p\n", con, msg);
- ceph_msg_put(msg);
- return;
- }
-
/* set src+dst */
msg->hdr.src = con->msgr->inst.name;
-
BUG_ON(msg->front.iov_len != le32_to_cpu(msg->hdr.front_len));
-
msg->needs_out_seq = true;
- /* queue */
mutex_lock(&con->mutex);
+ if (test_bit(CLOSED, &con->state)) {
+ dout("con_send %p closed, dropping %p\n", con, msg);
+ ceph_msg_put(msg);
+ mutex_unlock(&con->mutex);
+ return;
+ }
+
BUG_ON(msg->con != NULL);
msg->con = con->ops->get(con);
BUG_ON(msg->con == NULL);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 5469155f2bc83bb2c88b0a0370c3d54d87eed06e upstream.
Take the con mutex while we are initiating a ceph open. This is necessary
because the con may have previously been in use and then closed, which could
result in a racing workqueue running con_work().
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Yehuda Sadeh <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index f1bd3bb..a477998 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -537,6 +537,7 @@ void ceph_con_open(struct ceph_connection *con,
__u8 entity_type, __u64 entity_num,
struct ceph_entity_addr *addr)
{
+ mutex_lock(&con->mutex);
dout("con_open %p %s\n", con, ceph_pr_addr(&addr->in_addr));
set_bit(OPENING, &con->state);
WARN_ON(!test_and_clear_bit(CLOSED, &con->state));
@@ -546,6 +547,7 @@ void ceph_con_open(struct ceph_connection *con,
memcpy(&con->peer_addr, addr, sizeof(*addr));
con->delay = 0; /* reset backoff memory */
+ mutex_unlock(&con->mutex);
queue_con(con);
}
EXPORT_SYMBOL(ceph_con_open);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 85effe183dd45854d1ad1a370b88cddb403c4c91 upstream.
We exponentially back off when we encounter connection errors. If several
errors accumulate, we will eventually wait ages before even trying to
reconnect.
Fix this by resetting the backoff counter after a successful negotiation/
connection with the remote node. Fixes ceph issue #2802.
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Yehuda Sadeh <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index a477998..07204f1 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1629,6 +1629,8 @@ static int process_connect(struct ceph_connection *con)
if (con->in_reply.flags & CEPH_MSG_CONNECT_LOSSY)
set_bit(LOSSYTX, &con->flags);
+ con->delay = 0; /* reset backoff memory */
+
prepare_read_tag(con);
break;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit a4107026976f06c9a6ce8cc84a763564ee39d901 upstream.
Previously, we were opportunistically initializing the bio_iter if it
appeared to be uninitialized in the middle of the read path. The problem
is that a sequence like:
- start reading message
- initialize bio_iter
- read half a message
- messenger fault, reconnect
- restart reading message
- ** bio_iter now non-NULL, not reinitialized **
- read past end of bio, crash
Instead, initialize the bio_iter unconditionally when we allocate/claim
the message for read.
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Reviewed-by: Yehuda Sadeh <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index e65b15d..f1bd3bb 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1872,6 +1872,11 @@ static int read_partial_message(struct ceph_connection *con)
else
con->in_msg_pos.page_pos = 0;
con->in_msg_pos.data_pos = 0;
+
+#ifdef CONFIG_BLOCK
+ if (m->bio)
+ init_bio_iter(m->bio, &m->bio_iter, &m->bio_seg);
+#endif
}
/* front */
@@ -1888,10 +1893,6 @@ static int read_partial_message(struct ceph_connection *con)
if (ret <= 0)
return ret;
}
-#ifdef CONFIG_BLOCK
- if (m->bio && !m->bio_iter)
- init_bio_iter(m->bio, &m->bio_iter, &m->bio_seg);
-#endif
/* (page) data */
while (con->in_msg_pos.data_pos < data_len) {
@@ -1902,7 +1903,7 @@ static int read_partial_message(struct ceph_connection *con)
return ret;
#ifdef CONFIG_BLOCK
} else if (m->bio) {
-
+ BUG_ON(!m->bio_iter);
ret = read_partial_message_bio(con,
&m->bio_iter, &m->bio_seg,
data_len, do_datacrc);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 3a140a0d5c4b9e35373b016e41dfc85f1e526bdb upstream.
We need to set error_msg to something useful before calling ceph_fault();
do so here for try_{read,write}(). This is more informative than
libceph: osd0 192.168.106.220:6801 (null)
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Reviewed-by: Yehuda Sadeh <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 63e1252..6e2f678 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2287,14 +2287,18 @@ restart:
ret = try_read(con);
if (ret == -EAGAIN)
goto restart;
- if (ret < 0)
+ if (ret < 0) {
+ con->error_msg = "socket error on read";
goto fault;
+ }
ret = try_write(con);
if (ret == -EAGAIN)
goto restart;
- if (ret < 0)
+ if (ret < 0) {
+ con->error_msg = "socket error on write";
goto fault;
+ }
done:
mutex_unlock(&con->mutex);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit d50b409fb8698571d8209e5adfe122e287e31290 upstream.
Initialize the type field for messages in a msgpool. The caller was doing
this for osd ops, but not for the reply messages.
Reported-by: Alex Elder <[email protected]>
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/msgpool.h | 3 ++-
net/ceph/msgpool.c | 7 ++++---
net/ceph/osd_client.c | 7 ++++---
3 files changed, 10 insertions(+), 7 deletions(-)
diff --git a/include/linux/ceph/msgpool.h b/include/linux/ceph/msgpool.h
index a362605..09fa96b 100644
--- a/include/linux/ceph/msgpool.h
+++ b/include/linux/ceph/msgpool.h
@@ -11,10 +11,11 @@
struct ceph_msgpool {
const char *name;
mempool_t *pool;
+ int type; /* preallocated message type */
int front_len; /* preallocated payload size */
};
-extern int ceph_msgpool_init(struct ceph_msgpool *pool,
+extern int ceph_msgpool_init(struct ceph_msgpool *pool, int type,
int front_len, int size, bool blocking,
const char *name);
extern void ceph_msgpool_destroy(struct ceph_msgpool *pool);
diff --git a/net/ceph/msgpool.c b/net/ceph/msgpool.c
index 11d5f41..ddec1c1 100644
--- a/net/ceph/msgpool.c
+++ b/net/ceph/msgpool.c
@@ -12,7 +12,7 @@ static void *msgpool_alloc(gfp_t gfp_mask, void *arg)
struct ceph_msgpool *pool = arg;
struct ceph_msg *msg;
- msg = ceph_msg_new(0, pool->front_len, gfp_mask, true);
+ msg = ceph_msg_new(pool->type, pool->front_len, gfp_mask, true);
if (!msg) {
dout("msgpool_alloc %s failed\n", pool->name);
} else {
@@ -32,10 +32,11 @@ static void msgpool_free(void *element, void *arg)
ceph_msg_put(msg);
}
-int ceph_msgpool_init(struct ceph_msgpool *pool,
+int ceph_msgpool_init(struct ceph_msgpool *pool, int type,
int front_len, int size, bool blocking, const char *name)
{
dout("msgpool %s init\n", name);
+ pool->type = type;
pool->front_len = front_len;
pool->pool = mempool_create(size, msgpool_alloc, msgpool_free, pool);
if (!pool->pool)
@@ -61,7 +62,7 @@ struct ceph_msg *ceph_msgpool_get(struct ceph_msgpool *pool,
WARN_ON(1);
/* try to alloc a fresh message */
- return ceph_msg_new(0, front_len, GFP_NOFS, false);
+ return ceph_msg_new(pool->type, front_len, GFP_NOFS, false);
}
msg = mempool_alloc(pool->pool, GFP_NOFS);
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index c252711..4475d17 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -242,6 +242,7 @@ struct ceph_osd_request *ceph_osdc_alloc_request(struct ceph_osd_client *osdc,
}
ceph_pagelist_init(req->r_trail);
}
+
/* create request message; allow space for oid */
msg_size += MAX_OBJ_NAME_SIZE;
if (snapc)
@@ -255,7 +256,6 @@ struct ceph_osd_request *ceph_osdc_alloc_request(struct ceph_osd_client *osdc,
return NULL;
}
- msg->hdr.type = cpu_to_le16(CEPH_MSG_OSD_OP);
memset(msg->front.iov_base, 0, msg->front.iov_len);
req->r_request = msg;
@@ -1837,11 +1837,12 @@ int ceph_osdc_init(struct ceph_osd_client *osdc, struct ceph_client *client)
if (!osdc->req_mempool)
goto out;
- err = ceph_msgpool_init(&osdc->msgpool_op, OSD_OP_FRONT_LEN, 10, true,
+ err = ceph_msgpool_init(&osdc->msgpool_op, CEPH_MSG_OSD_OP,
+ OSD_OP_FRONT_LEN, 10, true,
"osd_op");
if (err < 0)
goto out_mempool;
- err = ceph_msgpool_init(&osdc->msgpool_op_reply,
+ err = ceph_msgpool_init(&osdc->msgpool_op_reply, CEPH_MSG_OSD_OPREPLY,
OSD_OPREPLY_FRONT_LEN, 10, true,
"osd_op_reply");
if (err < 0)
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 735a72ef952d42a256f79ae3e6dc1c17a45c041b upstream.
Do not re-initialize the con on every connection attempt. When we call
ceph_con_close(), there may still be work queued on the socket (e.g., to
close it), and re-initializing will clobber the work_struct state.
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/mon_client.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index bcc80a0..bfd21a8 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -119,7 +119,6 @@ static void __close_session(struct ceph_mon_client *monc)
dout("__close_session closing mon%d\n", monc->cur_mon);
ceph_msg_revoke(monc->m_auth);
ceph_con_close(&monc->con);
- monc->con.private = NULL;
monc->cur_mon = -1;
monc->pending_auth = 0;
ceph_auth_reset(monc->auth);
@@ -142,9 +141,6 @@ static int __open_session(struct ceph_mon_client *monc)
monc->sub_renew_after = jiffies; /* i.e., expired */
monc->want_next_osdmap = !!monc->want_next_osdmap;
- ceph_con_init(&monc->con, monc, &mon_con_ops,
- &monc->client->msgr);
-
dout("open_session mon%d opening\n", monc->cur_mon);
ceph_con_open(&monc->con,
CEPH_ENTITY_TYPE_MON, monc->cur_mon,
@@ -798,6 +794,9 @@ int ceph_monc_init(struct ceph_mon_client *monc, struct ceph_client *cl)
if (!monc->m_auth)
goto out_auth_reply;
+ ceph_con_init(&monc->con, monc, &mon_con_ops,
+ &monc->client->msgr);
+
monc->cur_mon = -1;
monc->hunting = true;
monc->sub_renew_after = jiffies;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit b7a9e5dd40f17a48a72f249b8bbc989b63bae5fd upstream.
The peer name may change on each open attempt, even when the connection is
reused.
Signed-off-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ceph/mds_client.c | 7 ++++---
include/linux/ceph/messenger.h | 4 ++--
net/ceph/messenger.c | 12 +++++++-----
net/ceph/mon_client.c | 4 ++--
net/ceph/osd_client.c | 10 ++++++----
5 files changed, 21 insertions(+), 16 deletions(-)
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index ecd7f15..5ac6434 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -394,8 +394,7 @@ static struct ceph_mds_session *register_session(struct ceph_mds_client *mdsc,
s->s_seq = 0;
mutex_init(&s->s_mutex);
- ceph_con_init(&s->s_con, s, &mds_con_ops, &mdsc->fsc->client->msgr,
- CEPH_ENTITY_TYPE_MDS, mds);
+ ceph_con_init(&s->s_con, s, &mds_con_ops, &mdsc->fsc->client->msgr);
spin_lock_init(&s->s_gen_ttl_lock);
s->s_cap_gen = 0;
@@ -437,7 +436,8 @@ static struct ceph_mds_session *register_session(struct ceph_mds_client *mdsc,
mdsc->sessions[mds] = s;
atomic_inc(&s->s_ref); /* one ref to sessions[], one to caller */
- ceph_con_open(&s->s_con, ceph_mdsmap_get_addr(mdsc->mdsmap, mds));
+ ceph_con_open(&s->s_con, CEPH_ENTITY_TYPE_MDS, mds,
+ ceph_mdsmap_get_addr(mdsc->mdsmap, mds));
return s;
@@ -2529,6 +2529,7 @@ static void send_mds_reconnect(struct ceph_mds_client *mdsc,
session->s_seq = 0;
ceph_con_open(&session->s_con,
+ CEPH_ENTITY_TYPE_MDS, mds,
ceph_mdsmap_get_addr(mdsc->mdsmap, mds));
/* replay unsafe requests */
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index 6a00acc..ec22abd 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -224,9 +224,9 @@ extern void ceph_messenger_init(struct ceph_messenger *msgr,
extern void ceph_con_init(struct ceph_connection *con, void *private,
const struct ceph_connection_operations *ops,
- struct ceph_messenger *msgr, __u8 entity_type,
- __u64 entity_num);
+ struct ceph_messenger *msgr);
extern void ceph_con_open(struct ceph_connection *con,
+ __u8 entity_type, __u64 entity_num,
struct ceph_entity_addr *addr);
extern bool ceph_con_opened(struct ceph_connection *con);
extern void ceph_con_close(struct ceph_connection *con);
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index dfc4192..5adf786 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -523,12 +523,17 @@ EXPORT_SYMBOL(ceph_con_close);
/*
* Reopen a closed connection, with a new peer address.
*/
-void ceph_con_open(struct ceph_connection *con, struct ceph_entity_addr *addr)
+void ceph_con_open(struct ceph_connection *con,
+ __u8 entity_type, __u64 entity_num,
+ struct ceph_entity_addr *addr)
{
dout("con_open %p %s\n", con, ceph_pr_addr(&addr->in_addr));
set_bit(OPENING, &con->state);
WARN_ON(!test_and_clear_bit(CLOSED, &con->state));
+ con->peer_name.type = (__u8) entity_type;
+ con->peer_name.num = cpu_to_le64(entity_num);
+
memcpy(&con->peer_addr, addr, sizeof(*addr));
con->delay = 0; /* reset backoff memory */
queue_con(con);
@@ -548,7 +553,7 @@ bool ceph_con_opened(struct ceph_connection *con)
*/
void ceph_con_init(struct ceph_connection *con, void *private,
const struct ceph_connection_operations *ops,
- struct ceph_messenger *msgr, __u8 entity_type, __u64 entity_num)
+ struct ceph_messenger *msgr)
{
dout("con_init %p\n", con);
memset(con, 0, sizeof(*con));
@@ -558,9 +563,6 @@ void ceph_con_init(struct ceph_connection *con, void *private,
con_sock_state_init(con);
- con->peer_name.type = (__u8) entity_type;
- con->peer_name.num = cpu_to_le64(entity_num);
-
mutex_init(&con->mutex);
INIT_LIST_HEAD(&con->out_queue);
INIT_LIST_HEAD(&con->out_sent);
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index e9db3de..bcc80a0 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -143,11 +143,11 @@ static int __open_session(struct ceph_mon_client *monc)
monc->want_next_osdmap = !!monc->want_next_osdmap;
ceph_con_init(&monc->con, monc, &mon_con_ops,
- &monc->client->msgr,
- CEPH_ENTITY_TYPE_MON, monc->cur_mon);
+ &monc->client->msgr);
dout("open_session mon%d opening\n", monc->cur_mon);
ceph_con_open(&monc->con,
+ CEPH_ENTITY_TYPE_MON, monc->cur_mon,
&monc->monmap->mon_inst[monc->cur_mon].addr);
/* initiatiate authentication handshake */
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index db2da54..c252711 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -639,8 +639,7 @@ static struct ceph_osd *create_osd(struct ceph_osd_client *osdc, int onum)
INIT_LIST_HEAD(&osd->o_osd_lru);
osd->o_incarnation = 1;
- ceph_con_init(&osd->o_con, osd, &osd_con_ops, &osdc->client->msgr,
- CEPH_ENTITY_TYPE_OSD, onum);
+ ceph_con_init(&osd->o_con, osd, &osd_con_ops, &osdc->client->msgr);
INIT_LIST_HEAD(&osd->o_keepalive_item);
return osd;
@@ -750,7 +749,8 @@ static int __reset_osd(struct ceph_osd_client *osdc, struct ceph_osd *osd)
ret = -EAGAIN;
} else {
ceph_con_close(&osd->o_con);
- ceph_con_open(&osd->o_con, &osdc->osdmap->osd_addr[osd->o_osd]);
+ ceph_con_open(&osd->o_con, CEPH_ENTITY_TYPE_OSD, osd->o_osd,
+ &osdc->osdmap->osd_addr[osd->o_osd]);
osd->o_incarnation++;
}
return ret;
@@ -1005,7 +1005,9 @@ static int __map_request(struct ceph_osd_client *osdc,
dout("map_request osd %p is osd%d\n", req->r_osd, o);
__insert_osd(osdc, req->r_osd);
- ceph_con_open(&req->r_osd->o_con, &osdc->osdmap->osd_addr[o]);
+ ceph_con_open(&req->r_osd->o_con,
+ CEPH_ENTITY_TYPE_OSD, o,
+ &osdc->osdmap->osd_addr[o]);
}
if (req->r_osd) {
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit bc18f4b1c850ab355e38373fbb60fd28568d84b5 upstream.
Sage liked the state diagram I put in my commit description so
I'm putting it in with the code.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 42 +++++++++++++++++++++++++++++++++++++++++-
1 file changed, 41 insertions(+), 1 deletion(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 64db2c2..dfc4192 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -29,7 +29,47 @@
* the sender.
*/
-/* State values for ceph_connection->sock_state; NEW is assumed to be 0 */
+/*
+ * We track the state of the socket on a given connection using
+ * values defined below. The transition to a new socket state is
+ * handled by a function which verifies we aren't coming from an
+ * unexpected state.
+ *
+ * --------
+ * | NEW* | transient initial state
+ * --------
+ * | con_sock_state_init()
+ * v
+ * ----------
+ * | CLOSED | initialized, but no socket (and no
+ * ---------- TCP connection)
+ * ^ \
+ * | \ con_sock_state_connecting()
+ * | ----------------------
+ * | \
+ * + con_sock_state_closed() \
+ * |\ \
+ * | \ \
+ * | ----------- \
+ * | | CLOSING | socket event; \
+ * | ----------- await close \
+ * | ^ |
+ * | | |
+ * | + con_sock_state_closing() |
+ * | / \ |
+ * | / --------------- |
+ * | / \ v
+ * | / --------------
+ * | / -----------------| CONNECTING | socket created, TCP
+ * | | / -------------- connect initiated
+ * | | | con_sock_state_connected()
+ * | | v
+ * -------------
+ * | CONNECTED | TCP connection established
+ * -------------
+ *
+ * State values for ceph_connection->sock_state; NEW is assumed to be 0.
+ */
#define CON_SOCK_STATE_NEW 0 /* -> CLOSED */
#define CON_SOCK_STATE_CLOSED 1 /* -> CONNECTING */
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit e27947c767f5bed15048f4e4dad3e2eb69133697 upstream.
There is no state explicitly defined when a ceph connection is fully
operational. So define one.
It's set when the connection sequence completes successfully, and is
cleared when the connection gets closed.
Be a little more careful when examining the old state when a socket
disconnect event is reported.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/messenger.h | 1 +
net/ceph/messenger.c | 9 +++++++--
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index f624b75..6a00acc 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -120,6 +120,7 @@ struct ceph_msg_pos {
*/
#define CONNECTING 1
#define NEGOTIATING 2
+#define CONNECTED 5
#define STANDBY 8 /* no outgoing messages, socket closed. we keep
* the ceph_connection around to maintain shared
* state with the peer. */
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 9e586ea..dc95437 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -463,6 +463,7 @@ void ceph_con_close(struct ceph_connection *con)
ceph_pr_addr(&con->peer_addr.in_addr));
clear_bit(NEGOTIATING, &con->state);
clear_bit(CONNECTING, &con->state);
+ clear_bit(CONNECTED, &con->state);
clear_bit(STANDBY, &con->state); /* avoid connect_seq bump */
set_bit(CLOSED, &con->state);
@@ -1564,6 +1565,7 @@ static int process_connect(struct ceph_connection *con)
}
clear_bit(NEGOTIATING, &con->state);
clear_bit(CONNECTING, &con->state);
+ set_bit(CONNECTED, &con->state);
con->peer_global_seq = le32_to_cpu(con->in_reply.global_seq);
con->connect_seq++;
con->peer_features = server_feat;
@@ -2114,6 +2116,7 @@ more:
prepare_read_ack(con);
break;
case CEPH_MSGR_TAG_CLOSE:
+ clear_bit(CONNECTED, &con->state);
set_bit(CLOSED, &con->state); /* fixme */
goto out;
default:
@@ -2190,11 +2193,13 @@ static void con_work(struct work_struct *work)
mutex_lock(&con->mutex);
restart:
if (test_and_clear_bit(SOCK_CLOSED, &con->flags)) {
- if (test_and_clear_bit(CONNECTING, &con->state)) {
+ if (test_and_clear_bit(CONNECTED, &con->state))
+ con->error_msg = "socket closed";
+ else if (test_and_clear_bit(CONNECTING, &con->state)) {
clear_bit(NEGOTIATING, &con->state);
con->error_msg = "connection failed";
} else {
- con->error_msg = "socket closed";
+ con->error_msg = "unrecognized con state";
}
goto fault;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 188048bce311ee41e5178bc3255415d0eae28423 upstream.
Currently the socket state change event handler records an error
message on a connection to distinguish a close while connecting from
a close while a connection was already established.
Changing connection information during handling of a socket event is
not very clean, so instead move this assignment inside con_work(),
where it can be done during normal connection-level processing (and
under protection of the connection mutex as well).
Move the handling of a socket closed event up to the top of the
processing loop in con_work(); there's no point in handling backoff
etc. if we have a newly-closed socket to take care of.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index d0aca62..5bd243e 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -261,13 +261,8 @@ static void ceph_sock_state_change(struct sock *sk)
case TCP_CLOSE_WAIT:
dout("%s TCP_CLOSE_WAIT\n", __func__);
con_sock_state_closing(con);
- if (test_and_set_bit(SOCK_CLOSED, &con->flags) == 0) {
- if (test_bit(CONNECTING, &con->state))
- con->error_msg = "connection failed";
- else
- con->error_msg = "socket closed";
+ if (!test_and_set_bit(SOCK_CLOSED, &con->flags))
queue_con(con);
- }
break;
case TCP_ESTABLISHED:
dout("%s TCP_ESTABLISHED\n", __func__);
@@ -2187,6 +2182,14 @@ static void con_work(struct work_struct *work)
mutex_lock(&con->mutex);
restart:
+ if (test_and_clear_bit(SOCK_CLOSED, &con->flags)) {
+ if (test_bit(CONNECTING, &con->state))
+ con->error_msg = "connection failed";
+ else
+ con->error_msg = "socket closed";
+ goto fault;
+ }
+
if (test_and_clear_bit(BACKOFF, &con->flags)) {
dout("con_work %p backing off\n", con);
if (queue_delayed_work(ceph_msgr_wq, &con->work,
@@ -2216,9 +2219,6 @@ restart:
con_close_socket(con);
}
- if (test_and_clear_bit(SOCK_CLOSED, &con->flags))
- goto fault;
-
ret = try_read(con);
if (ret == -EAGAIN)
goto restart;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit d65c9e0b9eb43d14ece9dd843506ccba06162ee7 upstream.
When a TCP_CLOSE or TCP_CLOSE_WAIT event occurs, the SOCK_CLOSED
connection flag bit is set, and if it had not been previously set,
queue_con() is called to ensure con_work() will get a chance to
handle the changed state.
con_work() atomically checks the SOCK_CLOSED bit and clears it if it
was set. This means that even if the bit were set
repeatedly, the related processing in con_work() only gets called
once per transition of the bit from 0 to 1.
What's important then is that we ensure con_work() gets called *at
least* once when a socket close event occurs, not that it gets
called *exactly* once.
The work queue mechanism already takes care of queueing work
only if it is not already queued, so there's no need for us
to call queue_con() conditionally.
So this patch just makes it so the SOCK_CLOSED flag gets set
unconditionally in ceph_sock_state_change().
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 5bd243e..cd1aaa8 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -261,8 +261,8 @@ static void ceph_sock_state_change(struct sock *sk)
case TCP_CLOSE_WAIT:
dout("%s TCP_CLOSE_WAIT\n", __func__);
con_sock_state_closing(con);
- if (!test_and_set_bit(SOCK_CLOSED, &con->flags))
- queue_con(con);
+ set_bit(SOCK_CLOSED, &con->flags);
+ queue_con(con);
break;
case TCP_ESTABLISHED:
dout("%s TCP_ESTABLISHED\n", __func__);
--
1.7.9.5
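A short illustration of why the unconditional set above is safe: the
producer side only records the event, and the consumer side uses an
atomic test-and-clear, so the "closed" handling runs at most once per
0 -> 1 transition no matter how many times the event fires. This is
only a userspace sketch of the pattern (C11 atomics standing in for
set_bit()/test_and_clear_bit()), not the messenger code itself.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_uint flags;		/* stand-in for con->flags */
#define SOCK_CLOSED_BIT	(1u << 0)

/* Event side: just record the event; re-queueing work is harmless. */
static void sock_state_change(void)
{
	atomic_fetch_or(&flags, SOCK_CLOSED_BIT);
	/* queue_con(con) would go here; the workqueue de-duplicates */
}

/* Worker side: consume the flag at most once per 0 -> 1 transition. */
static bool handle_sock_closed(void)
{
	unsigned int old = atomic_fetch_and(&flags, ~SOCK_CLOSED_BIT);

	return old & SOCK_CLOSED_BIT;	/* test_and_clear_bit() analogue */
}

int main(void)
{
	sock_state_change();
	sock_state_change();	/* a second close event changes nothing */
	printf("first pass handled close: %d\n", handle_sock_closed());
	printf("second pass handled close: %d\n", handle_sock_closed());
	return 0;
}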
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit abdaa6a849af1d63153682c11f5bbb22dacb1f6b upstream.
Recently a bug was fixed in which the bio_iter field in a ceph
message was not being properly re-initialized when a message got
re-transmitted:
commit 43643528cce60ca184fe8197efa8e8da7c89a037
Author: Yan, Zheng <[email protected]>
rbd: Clear ceph_msg->bio_iter for retransmitted message
We are now only initializing the bio_iter field when we are about to
start to write message data (in prepare_write_message_data()),
rather than every time we are attempting to write any portion of the
message data (in write_partial_msg_pages()). This means we no
longer need to use the msg->bio_iter field as a flag.
So just don't do that any more. Trust prepare_write_message_data()
to ensure msg->bio_iter is properly initialized, every time we are
about to begin writing (or re-writing) a message's bio data.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index b83c963..d47305a 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -604,7 +604,7 @@ static void prepare_write_message_data(struct ceph_connection *con)
else
con->out_msg_pos.page_pos = 0;
#ifdef CONFIG_BLOCK
- if (msg->bio && !msg->bio_iter)
+ if (msg->bio)
init_bio_iter(msg->bio, &msg->bio_iter, &msg->bio_seg);
#endif
con->out_msg_pos.data_pos = 0;
@@ -672,10 +672,6 @@ static void prepare_write_message(struct ceph_connection *con)
m->hdr.seq = cpu_to_le64(++con->out_seq);
m->needs_out_seq = false;
}
-#ifdef CONFIG_BLOCK
- else
- m->bio_iter = NULL;
-#endif
dout("prepare_write_message %p seq %lld type %d len %d+%d+%d %d pgs\n",
m, con->out_seq, le16_to_cpu(m->hdr.type),
--
1.7.9.5
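The idea in the patch above generalizes: instead of using a NULL
iterator as an implicit "needs init" flag, re-prime the iterator at the
single point where a (re)transmission of the data begins. Here is a toy
sketch of that structure; the segment list is invented and merely stands
in for the bio chain, so treat it as an illustration only.

#include <stdio.h>

struct segment {
	const char *data;
	struct segment *next;
};

struct send_pos {
	struct segment *iter;	/* current position in the message data */
};

/* prepare_write_message_data() analogue: always reset the iterator. */
static void prepare_send(struct send_pos *pos, struct segment *first)
{
	pos->iter = first;	/* no "only if NULL" flag trickery */
}

static void send_all(struct send_pos *pos)
{
	for (; pos->iter; pos->iter = pos->iter->next)
		printf("sending: %s\n", pos->iter->data);
}

int main(void)
{
	struct segment tail = { "tail", NULL };
	struct segment head = { "head", &tail };
	struct send_pos pos;

	prepare_send(&pos, &head);
	send_all(&pos);

	prepare_send(&pos, &head);	/* retransmit: re-prime, not stale state */
	send_all(&pos);
	return 0;
}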
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit df6ad1f97342ebc4270128222e896541405eecdb upstream.
Move init_bio_iter() and iter_bio_next() up in their source file so
they'll be defined before they're needed.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 50 +++++++++++++++++++++++++-------------------------
1 file changed, 25 insertions(+), 25 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index bd36e59..5e8dbc0d 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -565,6 +565,31 @@ static void con_out_kvec_add(struct ceph_connection *con,
con->out_kvec_bytes += size;
}
+#ifdef CONFIG_BLOCK
+static void init_bio_iter(struct bio *bio, struct bio **iter, int *seg)
+{
+ if (!bio) {
+ *iter = NULL;
+ *seg = 0;
+ return;
+ }
+ *iter = bio;
+ *seg = bio->bi_idx;
+}
+
+static void iter_bio_next(struct bio **bio_iter, int *seg)
+{
+ if (*bio_iter == NULL)
+ return;
+
+ BUG_ON(*seg >= (*bio_iter)->bi_vcnt);
+
+ (*seg)++;
+ if (*seg == (*bio_iter)->bi_vcnt)
+ init_bio_iter((*bio_iter)->bi_next, bio_iter, seg);
+}
+#endif
+
static void prepare_write_message_data(struct ceph_connection *con)
{
struct ceph_msg *msg = con->out_msg;
@@ -868,31 +893,6 @@ out:
return ret; /* done! */
}
-#ifdef CONFIG_BLOCK
-static void init_bio_iter(struct bio *bio, struct bio **iter, int *seg)
-{
- if (!bio) {
- *iter = NULL;
- *seg = 0;
- return;
- }
- *iter = bio;
- *seg = bio->bi_idx;
-}
-
-static void iter_bio_next(struct bio **bio_iter, int *seg)
-{
- if (*bio_iter == NULL)
- return;
-
- BUG_ON(*seg >= (*bio_iter)->bi_vcnt);
-
- (*seg)++;
- if (*seg == (*bio_iter)->bi_vcnt)
- init_bio_iter((*bio_iter)->bi_next, bio_iter, seg);
-}
-#endif
-
static void out_msg_pos_next(struct ceph_connection *con, struct page *page,
size_t len, size_t sent, bool in_trail)
{
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 84ca8fc87fcf4ab97bb8acdb59bf97bb4820cb14 upstream.
In write_partial_msg_pages(), once all the data from a page has been
sent we advance to the next one. Put the code that takes care of
this into its own function.
While modifying write_partial_msg_pages(), make its local variable
"in_trail" be Boolean, and use the local variable "msg" (which is
just the connection's current out_msg pointer) consistently.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 58 +++++++++++++++++++++++++++++---------------------
1 file changed, 34 insertions(+), 24 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index c7efb92..434809c 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -891,6 +891,33 @@ static void iter_bio_next(struct bio **bio_iter, int *seg)
}
#endif
+static void out_msg_pos_next(struct ceph_connection *con, struct page *page,
+ size_t len, size_t sent, bool in_trail)
+{
+ struct ceph_msg *msg = con->out_msg;
+
+ BUG_ON(!msg);
+ BUG_ON(!sent);
+
+ con->out_msg_pos.data_pos += sent;
+ con->out_msg_pos.page_pos += sent;
+ if (sent == len) {
+ con->out_msg_pos.page_pos = 0;
+ con->out_msg_pos.page++;
+ con->out_msg_pos.did_page_crc = false;
+ if (in_trail)
+ list_move_tail(&page->lru,
+ &msg->trail->head);
+ else if (msg->pagelist)
+ list_move_tail(&page->lru,
+ &msg->pagelist->head);
+#ifdef CONFIG_BLOCK
+ else if (msg->bio)
+ iter_bio_next(&msg->bio_iter, &msg->bio_seg);
+#endif
+ }
+}
+
/*
* Write as much message data payload as we can. If we finish, queue
* up the footer.
@@ -906,11 +933,11 @@ static int write_partial_msg_pages(struct ceph_connection *con)
bool do_datacrc = !con->msgr->nocrc;
int ret;
int total_max_write;
- int in_trail = 0;
+ bool in_trail = false;
size_t trail_len = (msg->trail ? msg->trail->length : 0);
dout("write_partial_msg_pages %p msg %p page %d/%d offset %d\n",
- con, con->out_msg, con->out_msg_pos.page, con->out_msg->nr_pages,
+ con, msg, con->out_msg_pos.page, msg->nr_pages,
con->out_msg_pos.page_pos);
#ifdef CONFIG_BLOCK
@@ -934,13 +961,12 @@ static int write_partial_msg_pages(struct ceph_connection *con)
/* have we reached the trail part of the data? */
if (con->out_msg_pos.data_pos >= data_len - trail_len) {
- in_trail = 1;
+ in_trail = true;
total_max_write = data_len - con->out_msg_pos.data_pos;
page = list_first_entry(&msg->trail->head,
struct page, lru);
- max_write = PAGE_SIZE;
} else if (msg->pages) {
page = msg->pages[con->out_msg_pos.page];
} else if (msg->pagelist) {
@@ -964,14 +990,14 @@ static int write_partial_msg_pages(struct ceph_connection *con)
if (do_datacrc && !con->out_msg_pos.did_page_crc) {
void *base;
u32 crc;
- u32 tmpcrc = le32_to_cpu(con->out_msg->footer.data_crc);
+ u32 tmpcrc = le32_to_cpu(msg->footer.data_crc);
char *kaddr;
kaddr = kmap(page);
BUG_ON(kaddr == NULL);
base = kaddr + con->out_msg_pos.page_pos + bio_offset;
crc = crc32c(tmpcrc, base, len);
- con->out_msg->footer.data_crc = cpu_to_le32(crc);
+ msg->footer.data_crc = cpu_to_le32(crc);
con->out_msg_pos.did_page_crc = true;
}
ret = ceph_tcp_sendpage(con->sock, page,
@@ -984,30 +1010,14 @@ static int write_partial_msg_pages(struct ceph_connection *con)
if (ret <= 0)
goto out;
- con->out_msg_pos.data_pos += ret;
- con->out_msg_pos.page_pos += ret;
- if (ret == len) {
- con->out_msg_pos.page_pos = 0;
- con->out_msg_pos.page++;
- con->out_msg_pos.did_page_crc = false;
- if (in_trail)
- list_move_tail(&page->lru,
- &msg->trail->head);
- else if (msg->pagelist)
- list_move_tail(&page->lru,
- &msg->pagelist->head);
-#ifdef CONFIG_BLOCK
- else if (msg->bio)
- iter_bio_next(&msg->bio_iter, &msg->bio_seg);
-#endif
- }
+ out_msg_pos_next(con, page, len, (size_t) ret, in_trail);
}
dout("write_partial_msg_pages %p msg %p done\n", con, msg);
/* prepare and queue up footer, too */
if (!do_datacrc)
- con->out_msg->footer.flags |= CEPH_MSG_FOOTER_NOCRC;
+ msg->footer.flags |= CEPH_MSG_FOOTER_NOCRC;
con_out_kvec_reset(con);
prepare_write_message_footer(con);
ret = 1;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit fd154f3c75465abd83b7a395033e3755908a1e6e upstream.
This is a nit, but prepare_write_message() sets the FOOTER_COMPLETE
flag before the CRC for the data portion (recorded in the footer)
has been completely computed. Hold off setting the complete flag
until we've decided it's ready to send.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 434809c..bd36e59 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -592,6 +592,8 @@ static void prepare_write_message_footer(struct ceph_connection *con)
struct ceph_msg *m = con->out_msg;
int v = con->out_kvec_left;
+ m->footer.flags |= CEPH_MSG_FOOTER_COMPLETE;
+
dout("prepare_write_message_footer %p\n", con);
con->out_kvec_is_msg = true;
con->out_kvec[v].iov_base = &m->footer;
@@ -665,7 +667,7 @@ static void prepare_write_message(struct ceph_connection *con)
/* fill in crc (except data pages), footer */
crc = crc32c(0, &m->hdr, offsetof(struct ceph_msg_header, crc));
con->out_msg->hdr.crc = cpu_to_le32(crc);
- con->out_msg->footer.flags = CEPH_MSG_FOOTER_COMPLETE;
+ con->out_msg->footer.flags = 0;
crc = crc32c(0, m->front.iov_base, m->front.iov_len);
con->out_msg->footer.front_crc = cpu_to_le32(crc);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Sage Weil <[email protected]>
commit 36eb71aa57e6a33d61fd90a2fd87f00c6844bc86 upstream.
The ceph_con_get/put() helpers manipulate the embedded con ref
count, which isn't used now that ceph_connections are embedded in
other structures.
Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 0a6fdf8..ddb710c 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -414,7 +414,7 @@ static void ceph_msg_remove(struct ceph_msg *msg)
{
list_del_init(&msg->list_head);
BUG_ON(msg->con == NULL);
- ceph_con_put(msg->con);
+ msg->con->ops->put(msg->con);
msg->con = NULL;
ceph_msg_put(msg);
@@ -440,7 +440,7 @@ static void reset_connection(struct ceph_connection *con)
con->in_msg->con = NULL;
ceph_msg_put(con->in_msg);
con->in_msg = NULL;
- ceph_con_put(con);
+ con->ops->put(con);
}
con->connect_seq = 0;
@@ -1919,7 +1919,7 @@ static void process_message(struct ceph_connection *con)
con->in_msg->con = NULL;
msg = con->in_msg;
con->in_msg = NULL;
- ceph_con_put(con);
+ con->ops->put(con);
/* if first message, set peer_name */
if (con->peer_name.type == 0)
@@ -2281,7 +2281,7 @@ static void ceph_fault(struct ceph_connection *con)
con->in_msg->con = NULL;
ceph_msg_put(con->in_msg);
con->in_msg = NULL;
- ceph_con_put(con);
+ con->ops->put(con);
}
/* Requeue anything that hasn't been acked */
@@ -2400,7 +2400,7 @@ void ceph_con_send(struct ceph_connection *con, struct ceph_msg *msg)
mutex_lock(&con->mutex);
BUG_ON(msg->con != NULL);
- msg->con = ceph_con_get(con);
+ msg->con = con->ops->get(con);
BUG_ON(msg->con == NULL);
BUG_ON(!list_empty(&msg->list_head));
@@ -2436,7 +2436,7 @@ void ceph_msg_revoke(struct ceph_msg *msg)
dout("%s %p msg %p - was on queue\n", __func__, con, msg);
list_del_init(&msg->list_head);
BUG_ON(msg->con == NULL);
- ceph_con_put(msg->con);
+ msg->con->ops->put(msg->con);
msg->con = NULL;
msg->hdr.seq = 0;
@@ -2646,7 +2646,7 @@ static bool ceph_con_in_msg_alloc(struct ceph_connection *con,
con->in_msg = con->ops->alloc_msg(con, hdr, &skip);
mutex_lock(&con->mutex);
if (con->in_msg) {
- con->in_msg->con = ceph_con_get(con);
+ con->in_msg->con = con->ops->get(con);
BUG_ON(con->in_msg->con == NULL);
}
if (skip)
@@ -2662,7 +2662,7 @@ static bool ceph_con_in_msg_alloc(struct ceph_connection *con,
type, front_len);
return false;
}
- con->in_msg->con = ceph_con_get(con);
+ con->in_msg->con = con->ops->get(con);
BUG_ON(con->in_msg->con == NULL);
con->in_msg->page_alignment = le16_to_cpu(hdr->data_off);
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Xi Wang <[email protected]>
commit a5506049500b30dbc5edb4d07a3577477c1f3643 upstream.
On 32-bit systems, a large `pglen' would overflow `pglen*sizeof(u32)'
and bypass the check ceph_decode_need(p, end, pglen*sizeof(u32), bad).
It would also overflow the subsequent kmalloc() size, leading to
out-of-bounds write.
Signed-off-by: Xi Wang <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/osdmap.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/net/ceph/osdmap.c b/net/ceph/osdmap.c
index bc73341..9600674 100644
--- a/net/ceph/osdmap.c
+++ b/net/ceph/osdmap.c
@@ -893,6 +893,10 @@ struct ceph_osdmap *osdmap_apply_incremental(void **p, void *end,
(void) __remove_pg_mapping(&map->pg_temp, pgid);
/* insert */
+ if (pglen > (UINT_MAX - sizeof(*pg)) / sizeof(u32)) {
+ err = -EINVAL;
+ goto bad;
+ }
pg = kmalloc(sizeof(*pg) + sizeof(u32)*pglen, GFP_NOFS);
if (!pg) {
err = -ENOMEM;
--
1.7.9.5
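The guard added above is the standard defence against multiplication
overflow on sizes read from the network: bound the element count before
computing count * element_size. A small userspace model of the same
check follows; the struct and function names are invented for the
example and are not the osdmap decoder.

#include <limits.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct pg_mapping_sketch {
	uint64_t pgid;
	uint32_t osds[];	/* pglen entries follow the header */
};

/* Reject counts that would overflow the 32-bit allocation size. */
static void *alloc_pg_mapping(uint32_t pglen)
{
	if (pglen > (UINT_MAX - sizeof(struct pg_mapping_sketch)) / sizeof(uint32_t))
		return NULL;	/* would overflow: treat as invalid input */
	return malloc(sizeof(struct pg_mapping_sketch) + sizeof(uint32_t) * pglen);
}

int main(void)
{
	void *bad = alloc_pg_mapping(0xffffffffu);	/* wraps on 32-bit */
	void *ok = alloc_pg_mapping(4);

	printf("huge pglen  -> %p\n", bad);	/* NULL: rejected */
	printf("small pglen -> %p\n", ok);
	free(ok);
	return 0;
}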
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Xi Wang <[email protected]>
commit ad3b904c07dfa88603689bf9a67bffbb9b99beb5 upstream.
`len' is read from network and thus needs validation. Otherwise a
large `len' would cause out-of-bounds access via the memcpy() call.
In addition, len = 0xffffffff would overflow the kmalloc() size,
leading to out-of-bounds write.
This patch adds a check of `len' via ceph_decode_need(). Also use
kstrndup rather than kmalloc/memcpy.
[[email protected]: added -ENOMEM return for null kstrndup() result]
Signed-off-by: Xi Wang <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/osdmap.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/net/ceph/osdmap.c b/net/ceph/osdmap.c
index 81e3b84..95b2762 100644
--- a/net/ceph/osdmap.c
+++ b/net/ceph/osdmap.c
@@ -488,15 +488,16 @@ static int __decode_pool_names(void **p, void *end, struct ceph_osdmap *map)
ceph_decode_32_safe(p, end, pool, bad);
ceph_decode_32_safe(p, end, len, bad);
dout(" pool %d len %d\n", pool, len);
+ ceph_decode_need(p, end, len, bad);
pi = __lookup_pg_pool(&map->pg_pools, pool);
if (pi) {
+ char *name = kstrndup(*p, len, GFP_NOFS);
+
+ if (!name)
+ return -ENOMEM;
kfree(pi->name);
- pi->name = kmalloc(len + 1, GFP_NOFS);
- if (pi->name) {
- memcpy(pi->name, *p, len);
- pi->name[len] = '\0';
- dout(" name is %s\n", pi->name);
- }
+ pi->name = name;
+ dout(" name is %s\n", pi->name);
}
*p += len;
}
--
1.7.9.5
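A brief note on the shape of the fix above: validate the
network-supplied length against the remaining buffer first, then let a
bounded-duplication helper allocate and NUL-terminate in one step. A
rough userspace equivalent using strndup() is below; the buffer layout
and names are hypothetical, not the ceph decoder.

#define _GNU_SOURCE		/* for strndup() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Copy a length-prefixed, non-terminated name out of a wire buffer. */
static char *decode_pool_name(const char *p, const char *end, size_t len)
{
	if (len > (size_t)(end - p))	/* the ceph_decode_need() analogue */
		return NULL;
	return strndup(p, len);		/* allocates len + 1, NUL-terminates */
}

int main(void)
{
	const char wire[] = { 'r', 'b', 'd', 'x', 'x' };	/* no terminator */
	char *name = decode_pool_name(wire, wire + sizeof(wire), 3);

	if (!name)
		return 1;	/* bad length or allocation failure */
	printf("pool name: %s\n", name);
	free(name);
	return 0;
}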
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 92ce034b5a740046cc643a21ea21eaad589e0043 upstream.
There are essentially two types of ceph messages: incoming and
outgoing. Outgoing messages are always allocated via ceph_msg_new(),
and at the time of their allocation they are not associated with any
particular connection. Incoming messages are always allocated via
ceph_con_in_msg_alloc(), and they are initially associated with the
connection from which incoming data will be placed into the message.
When an outgoing message gets sent, it becomes associated with a
connection and remains that way until the message is successfully
sent. The association of an incoming message goes away at the point
it is sent to an upper layer via a con->ops->dispatch method.
This patch implements reference counting for all ceph messages, such
that every message holds a reference (and a pointer) to a connection
if and only if it is associated with that connection (as described
above).
For background, here is an explanation of the ceph message
lifecycle, emphasizing when an association exists between a message
and a connection.
Outgoing Messages
An outgoing message is "owned" by its allocator, from the time it is
allocated in ceph_msg_new() up to the point it gets queued for
sending in ceph_con_send(). Prior to that point the message's
msg->con pointer is null; at the point it is queued for sending, its
msg->con pointer is assigned to refer to the connection. At that
time the message is inserted into a connection's out_queue list.
When a message on the out_queue list has been sent to the socket
layer to be put on the wire, it is transferred out of that list and
into the connection's out_sent list. At that point it is still owned
by the connection, and will remain so until an acknowledgement is
received from the recipient that indicates the message was
successfully transferred. When such an acknowledgement is received
(in process_ack()), the message is removed from its list (in
ceph_msg_remove()), at which point it is no longer associated with
the connection.
So basically, any time a message is on one of a connection's lists,
it is associated with that connection. Reference counting outgoing
messages can thus be done at the points a message is added to the
out_queue (in ceph_con_send()) and the point it is removed from
either of those lists (in ceph_msg_remove()), at which point its
connection pointer becomes null.
Incoming Messages
When an incoming message on a connection is getting read (in
read_partial_message()) and there is no message in con->in_msg,
a new one is allocated using ceph_con_in_msg_alloc(). At that
point the message is associated with the connection. Once that
message has been completely and successfully read, it is passed to
upper layer code using the connection's con->ops->dispatch method.
At that point the association between the message and the connection
no longer exists.
Reference counting of connections for incoming messages can be done
by taking a reference to the connection when the message gets
allocated, and releasing that reference when it gets handed off
using the dispatch method.
We should never fail to get a connection reference for a
message, since the caller should already hold one.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 24 ++++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index dc88846..964a8c3 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -415,6 +415,7 @@ static void ceph_msg_remove(struct ceph_msg *msg)
{
list_del_init(&msg->list_head);
BUG_ON(msg->con == NULL);
+ ceph_con_put(msg->con);
msg->con = NULL;
ceph_msg_put(msg);
@@ -440,6 +441,7 @@ static void reset_connection(struct ceph_connection *con)
con->in_msg->con = NULL;
ceph_msg_put(con->in_msg);
con->in_msg = NULL;
+ ceph_con_put(con->in_msg->con);
}
con->connect_seq = 0;
@@ -1918,6 +1920,7 @@ static void process_message(struct ceph_connection *con)
con->in_msg->con = NULL;
msg = con->in_msg;
con->in_msg = NULL;
+ ceph_con_put(con);
/* if first message, set peer_name */
if (con->peer_name.type == 0)
@@ -2279,6 +2282,7 @@ static void ceph_fault(struct ceph_connection *con)
con->in_msg->con = NULL;
ceph_msg_put(con->in_msg);
con->in_msg = NULL;
+ ceph_con_put(con);
}
/* Requeue anything that hasn't been acked */
@@ -2395,8 +2399,11 @@ void ceph_con_send(struct ceph_connection *con, struct ceph_msg *msg)
/* queue */
mutex_lock(&con->mutex);
+
BUG_ON(msg->con != NULL);
- msg->con = con;
+ msg->con = ceph_con_get(con);
+ BUG_ON(msg->con == NULL);
+
BUG_ON(!list_empty(&msg->list_head));
list_add_tail(&msg->list_head, &con->out_queue);
dout("----- %p to %s%lld %d=%s len %d+%d+%d -----\n", msg,
@@ -2425,10 +2432,11 @@ void ceph_con_revoke(struct ceph_connection *con, struct ceph_msg *msg)
dout("%s %p msg %p - was on queue\n", __func__, con, msg);
list_del_init(&msg->list_head);
BUG_ON(msg->con == NULL);
+ ceph_con_put(msg->con);
msg->con = NULL;
+ msg->hdr.seq = 0;
ceph_msg_put(msg);
- msg->hdr.seq = 0;
}
if (con->out_msg == msg) {
dout("%s %p msg %p - was sending\n", __func__, con, msg);
@@ -2437,8 +2445,9 @@ void ceph_con_revoke(struct ceph_connection *con, struct ceph_msg *msg)
con->out_skip = con->out_kvec_bytes;
con->out_kvec_is_msg = false;
}
- ceph_msg_put(msg);
msg->hdr.seq = 0;
+
+ ceph_msg_put(msg);
}
mutex_unlock(&con->mutex);
}
@@ -2622,8 +2631,10 @@ static bool ceph_con_in_msg_alloc(struct ceph_connection *con,
mutex_unlock(&con->mutex);
con->in_msg = con->ops->alloc_msg(con, hdr, &skip);
mutex_lock(&con->mutex);
- if (con->in_msg)
- con->in_msg->con = con;
+ if (con->in_msg) {
+ con->in_msg->con = ceph_con_get(con);
+ BUG_ON(con->in_msg->con == NULL);
+ }
if (skip)
con->in_msg = NULL;
@@ -2637,7 +2648,8 @@ static bool ceph_con_in_msg_alloc(struct ceph_connection *con,
type, front_len);
return false;
}
- con->in_msg->con = con;
+ con->in_msg->con = ceph_con_get(con);
+ BUG_ON(con->in_msg->con == NULL);
con->in_msg->page_alignment = le16_to_cpu(hdr->data_off);
}
memcpy(&con->in_msg->hdr, &con->in_hdr, sizeof(con->in_hdr));
--
1.7.9.5
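To restate the ownership rule from the description above in a compact
form: a message holds a reference on its connection exactly while
msg->con is non-NULL, taken when the association is made and dropped
when it is torn down. The following is only an illustrative userspace
model with invented types, not the libceph API.

#include <assert.h>
#include <stdio.h>

struct conn {
	int refs;
};

struct msg {
	struct conn *con;	/* non-NULL iff the msg holds a conn ref */
};

static struct conn *conn_get(struct conn *c) { c->refs++; return c; }
static void conn_put(struct conn *c) { assert(c->refs > 0); c->refs--; }

/* ceph_con_send() analogue: queueing associates msg with the connection. */
static void msg_queue(struct msg *m, struct conn *c)
{
	assert(m->con == NULL);
	m->con = conn_get(c);
}

/* ceph_msg_remove() analogue: ack received, drop the association. */
static void msg_remove(struct msg *m)
{
	assert(m->con != NULL);
	conn_put(m->con);
	m->con = NULL;
}

int main(void)
{
	struct conn c = { .refs = 1 };	/* the caller's own reference */
	struct msg m = { NULL };

	msg_queue(&m, &c);
	printf("while queued: refs=%d\n", c.refs);	/* 2 */
	msg_remove(&m);
	printf("after ack:    refs=%d\n", c.refs);	/* 1 */
	return 0;
}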
On 11/26/2012 08:57 AM, Herton Ronaldo Krzesinski wrote:
> 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
What kind of version number is that?
-hpa
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 1c20f2d26795803fc4f5155fe4fca5717a5944b6 upstream.
The function ceph_alloc_msg() is only used to allocate a message
that will be assigned to a connection's in_msg pointer. Rename the
function so this implied usage is more clear.
In addition, make that assignment inside the function (again, since
that's precisely what it's intended to be used for). This allows us
to return what is now provided via the passed-in address of a "skip"
variable. The return type is now Boolean to be explicit that there
are only two possible outcomes.
Make sure the result of an ->alloc_msg method call always sets the
value of *skip properly.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 61 +++++++++++++++++++++++++++----------------------
net/ceph/mon_client.c | 3 +++
net/ceph/osd_client.c | 1 +
3 files changed, 38 insertions(+), 27 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index d8986e8..9e12806 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1659,9 +1659,8 @@ static int read_partial_message_section(struct ceph_connection *con,
return 1;
}
-static struct ceph_msg *ceph_alloc_msg(struct ceph_connection *con,
- struct ceph_msg_header *hdr,
- int *skip);
+static bool ceph_con_in_msg_alloc(struct ceph_connection *con,
+ struct ceph_msg_header *hdr);
static int read_partial_message_pages(struct ceph_connection *con,
@@ -1744,7 +1743,6 @@ static int read_partial_message(struct ceph_connection *con)
int ret;
unsigned int front_len, middle_len, data_len;
bool do_datacrc = !con->msgr->nocrc;
- int skip;
u64 seq;
u32 crc;
@@ -1797,9 +1795,7 @@ static int read_partial_message(struct ceph_connection *con)
if (!con->in_msg) {
dout("got hdr type %d front %d data %d\n", con->in_hdr.type,
con->in_hdr.front_len, con->in_hdr.data_len);
- skip = 0;
- con->in_msg = ceph_alloc_msg(con, &con->in_hdr, &skip);
- if (skip) {
+ if (ceph_con_in_msg_alloc(con, &con->in_hdr)) {
/* skip this message */
dout("alloc_msg said skip message\n");
BUG_ON(con->in_msg);
@@ -2581,46 +2577,57 @@ static int ceph_alloc_middle(struct ceph_connection *con, struct ceph_msg *msg)
}
/*
- * Generic message allocator, for incoming messages.
+ * Allocate a message for receiving an incoming message on a
+ * connection, and save the result in con->in_msg. Uses the
+ * connection's private alloc_msg op if available.
+ *
+ * Returns true if the message should be skipped, false otherwise.
+ * If true is returned (skip message), con->in_msg will be NULL.
+ * If false is returned, con->in_msg will contain a pointer to the
+ * newly-allocated message, or NULL in case of memory exhaustion.
*/
-static struct ceph_msg *ceph_alloc_msg(struct ceph_connection *con,
- struct ceph_msg_header *hdr,
- int *skip)
+static bool ceph_con_in_msg_alloc(struct ceph_connection *con,
+ struct ceph_msg_header *hdr)
{
int type = le16_to_cpu(hdr->type);
int front_len = le32_to_cpu(hdr->front_len);
int middle_len = le32_to_cpu(hdr->middle_len);
- struct ceph_msg *msg = NULL;
int ret;
+ BUG_ON(con->in_msg != NULL);
+
if (con->ops->alloc_msg) {
+ int skip = 0;
+
mutex_unlock(&con->mutex);
- msg = con->ops->alloc_msg(con, hdr, skip);
+ con->in_msg = con->ops->alloc_msg(con, hdr, &skip);
mutex_lock(&con->mutex);
- if (!msg || *skip)
- return NULL;
+ if (skip)
+ con->in_msg = NULL;
+
+ if (!con->in_msg)
+ return skip != 0;
}
- if (!msg) {
- *skip = 0;
- msg = ceph_msg_new(type, front_len, GFP_NOFS, false);
- if (!msg) {
+ if (!con->in_msg) {
+ con->in_msg = ceph_msg_new(type, front_len, GFP_NOFS, false);
+ if (!con->in_msg) {
pr_err("unable to allocate msg type %d len %d\n",
type, front_len);
- return NULL;
+ return false;
}
- msg->page_alignment = le16_to_cpu(hdr->data_off);
+ con->in_msg->page_alignment = le16_to_cpu(hdr->data_off);
}
- memcpy(&msg->hdr, &con->in_hdr, sizeof(con->in_hdr));
+ memcpy(&con->in_msg->hdr, &con->in_hdr, sizeof(con->in_hdr));
- if (middle_len && !msg->middle) {
- ret = ceph_alloc_middle(con, msg);
+ if (middle_len && !con->in_msg->middle) {
+ ret = ceph_alloc_middle(con, con->in_msg);
if (ret < 0) {
- ceph_msg_put(msg);
- return NULL;
+ ceph_msg_put(con->in_msg);
+ con->in_msg = NULL;
}
}
- return msg;
+ return false;
}
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index dad0abb..f65111c 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -442,6 +442,7 @@ static struct ceph_msg *get_generic_reply(struct ceph_connection *con,
m = NULL;
} else {
dout("get_generic_reply %lld got %p\n", tid, req->reply);
+ *skip = 0;
m = ceph_msg_get(req->reply);
/*
* we don't need to track the connection reading into
@@ -990,6 +991,8 @@ static struct ceph_msg *mon_alloc_msg(struct ceph_connection *con,
case CEPH_MSG_MDS_MAP:
case CEPH_MSG_OSD_MAP:
m = ceph_msg_new(type, front_len, GFP_NOFS, false);
+ if (!m)
+ return NULL; /* ENOMEM--return skip == 0 */
break;
}
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index 01ec0ac..d137bf0 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -2077,6 +2077,7 @@ static struct ceph_msg *alloc_msg(struct ceph_connection *con,
int type = le16_to_cpu(hdr->type);
int front = le32_to_cpu(hdr->front_len);
+ *skip = 0;
switch (type) {
case CEPH_MSG_OSD_MAP:
case CEPH_MSG_WATCH_NOTIFY:
--
1.7.9.5
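A note on the calling convention introduced above: the helper reports
"skip this message" through its boolean return value and leaves the
allocation result (or NULL) in the connection itself, so callers no
longer juggle a separate out-parameter. Sketched below with hypothetical
types; the skip policy is invented purely to make the example runnable.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct message {
	size_t front_len;
};

struct connection {
	struct message *in_msg;	/* filled in by the allocator below */
};

/*
 * Returns true if the incoming message should be skipped (in_msg stays
 * NULL).  Returns false otherwise; in_msg then points at the new
 * message, or is NULL on memory exhaustion.
 */
static bool con_in_msg_alloc(struct connection *con, size_t front_len)
{
	bool skip = (front_len == 0);	/* stand-in for ops->alloc_msg policy */

	if (skip)
		return true;
	con->in_msg = calloc(1, sizeof(*con->in_msg));
	if (con->in_msg)
		con->in_msg->front_len = front_len;
	return false;
}

int main(void)
{
	struct connection con = { NULL };

	if (con_in_msg_alloc(&con, 0))
		printf("skipping message\n");
	if (!con_in_msg_alloc(&con, 128) && con.in_msg)
		printf("allocated message, front_len=%zu\n", con.in_msg->front_len);
	free(con.in_msg);
	return 0;
}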
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 20581c1faf7b15ae1f8b80c0ec757877b0b53151 upstream.
Hold off initializing a monitor client's connection until just
before it gets opened for use.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/mon_client.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index b59ca7a..fec3147 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -119,6 +119,7 @@ static void __close_session(struct ceph_mon_client *monc)
dout("__close_session closing mon%d\n", monc->cur_mon);
ceph_con_revoke(&monc->con, monc->m_auth);
ceph_con_close(&monc->con);
+ monc->con.private = NULL;
monc->cur_mon = -1;
monc->pending_auth = 0;
ceph_auth_reset(monc->auth);
@@ -141,9 +142,13 @@ static int __open_session(struct ceph_mon_client *monc)
monc->sub_renew_after = jiffies; /* i.e., expired */
monc->want_next_osdmap = !!monc->want_next_osdmap;
- dout("open_session mon%d opening\n", monc->cur_mon);
+ ceph_con_init(&monc->client->msgr, &monc->con);
+ monc->con.private = monc;
+ monc->con.ops = &mon_con_ops;
monc->con.peer_name.type = CEPH_ENTITY_TYPE_MON;
monc->con.peer_name.num = cpu_to_le64(monc->cur_mon);
+
+ dout("open_session mon%d opening\n", monc->cur_mon);
ceph_con_open(&monc->con,
&monc->monmap->mon_inst[monc->cur_mon].addr);
@@ -760,10 +765,6 @@ int ceph_monc_init(struct ceph_mon_client *monc, struct ceph_client *cl)
goto out;
/* connection */
- ceph_con_init(&monc->client->msgr, &monc->con);
- monc->con.private = monc;
- monc->con.ops = &mon_con_ops;
-
/* authentication */
monc->auth = ceph_auth_init(cl->options->name,
cl->options->key);
@@ -836,8 +837,6 @@ void ceph_monc_stop(struct ceph_mon_client *monc)
mutex_lock(&monc->mutex);
__close_session(monc);
- monc->con.private = NULL;
-
mutex_unlock(&monc->mutex);
/*
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit a5988c490ef66cb04ea2f610681949b25c773b3c upstream.
Once a connection is fully initialized, it is really in a CLOSED
state, so make that explicit by setting the bit in its state field.
It is possible for a connection in NEGOTIATING state to get a
failure, leading to ceph_fault() and ultimately ceph_con_close().
Clear that bit if it is set in that case, to reflect that the
connection truly is closed and is no longer participating in a
connect sequence.
Issue a warning if ceph_con_open() is called on a connection that
is not in CLOSED state.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 3239d3d..603d8b5 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -454,11 +454,14 @@ void ceph_con_close(struct ceph_connection *con)
{
dout("con_close %p peer %s\n", con,
ceph_pr_addr(&con->peer_addr.in_addr));
- set_bit(CLOSED, &con->state); /* in case there's queued work */
+ clear_bit(NEGOTIATING, &con->state);
clear_bit(STANDBY, &con->state); /* avoid connect_seq bump */
+ set_bit(CLOSED, &con->state);
+
clear_bit(LOSSYTX, &con->flags); /* so we retry next connect */
clear_bit(KEEPALIVE_PENDING, &con->flags);
clear_bit(WRITE_PENDING, &con->flags);
+
mutex_lock(&con->mutex);
reset_connection(con);
con->peer_global_seq = 0;
@@ -475,7 +478,8 @@ void ceph_con_open(struct ceph_connection *con, struct ceph_entity_addr *addr)
{
dout("con_open %p %s\n", con, ceph_pr_addr(&addr->in_addr));
set_bit(OPENING, &con->state);
- clear_bit(CLOSED, &con->state);
+ WARN_ON(!test_and_clear_bit(CLOSED, &con->state));
+
memcpy(&con->peer_addr, addr, sizeof(*addr));
con->delay = 0; /* reset backoff memory */
queue_con(con);
@@ -530,6 +534,8 @@ void ceph_con_init(struct ceph_messenger *msgr, struct ceph_connection *con)
INIT_LIST_HEAD(&con->out_queue);
INIT_LIST_HEAD(&con->out_sent);
INIT_DELAYED_WORK(&con->work, con_work);
+
+ set_bit(CLOSED, &con->state);
}
EXPORT_SYMBOL(ceph_con_init);
@@ -1937,14 +1943,15 @@ more:
/* open the socket first? */
if (con->sock == NULL) {
+ clear_bit(NEGOTIATING, &con->state);
+ set_bit(CONNECTING, &con->state);
+
con_out_kvec_reset(con);
prepare_write_banner(con);
ret = prepare_write_connect(con);
if (ret < 0)
goto out;
prepare_read_banner(con);
- set_bit(CONNECTING, &con->state);
- clear_bit(NEGOTIATING, &con->state);
BUG_ON(con->in_msg);
con->in_tag = CEPH_MSGR_TAG_READY;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit ce2c8903e76e690846a00a0284e4bd9ee954d680 upstream.
Start explicitly keeping track of the state of a ceph connection's
socket, separate from the state of the connection itself. Create
placeholder functions to encapsulate the state transitions.
--------
| NEW* | transient initial state
--------
| con_sock_state_init()
v
----------
| CLOSED | initialized, but no socket (and no
---------- TCP connection)
^ \
| \ con_sock_state_connecting()
| ----------------------
| \
+ con_sock_state_closed() \
|\ \
| \ \
| ----------- \
| | CLOSING | socket event; \
| ----------- await close \
| ^ |
| | |
| + con_sock_state_closing() |
| / \ |
| / --------------- |
| / \ v
| / --------------
| / -----------------| CONNECTING | socket created, TCP
| | / -------------- connect initiated
| | | con_sock_state_connected()
| | v
-------------
| CONNECTED | TCP connection established
-------------
Make the socket state an atomic variable, reinforcing that it's a
distinct transition with no possible "intermediate/both" states.
This is almost certainly overkill at this point, though the
transitions into CONNECTED and CLOSING state do get called via
socket callback (the rest of the transitions occur with the
connection mutex held). We can back out the atomicity later.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/messenger.h | 8 +++--
net/ceph/messenger.c | 64 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 70 insertions(+), 2 deletions(-)
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index 34e9506..5f30c81 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -137,14 +137,18 @@ struct ceph_connection {
const struct ceph_connection_operations *ops;
struct ceph_messenger *msgr;
+
+ atomic_t sock_state;
struct socket *sock;
+ struct ceph_entity_addr peer_addr; /* peer address */
+ struct ceph_entity_addr peer_addr_for_me;
+
unsigned long flags;
unsigned long state;
const char *error_msg; /* error message, if any */
- struct ceph_entity_addr peer_addr; /* peer address */
struct ceph_entity_name peer_name; /* peer name */
- struct ceph_entity_addr peer_addr_for_me;
+
unsigned peer_features;
u32 connect_seq; /* identify the most recent connection
attempt for this connection, client */
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 7ec608c..3239d3d 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -29,6 +29,14 @@
* the sender.
*/
+/* State values for ceph_connection->sock_state; NEW is assumed to be 0 */
+
+#define CON_SOCK_STATE_NEW 0 /* -> CLOSED */
+#define CON_SOCK_STATE_CLOSED 1 /* -> CONNECTING */
+#define CON_SOCK_STATE_CONNECTING 2 /* -> CONNECTED or -> CLOSING */
+#define CON_SOCK_STATE_CONNECTED 3 /* -> CLOSING or -> CLOSED */
+#define CON_SOCK_STATE_CLOSING 4 /* -> CLOSED */
+
/* static tag bytes (protocol control messages) */
static char tag_msg = CEPH_MSGR_TAG_MSG;
static char tag_ack = CEPH_MSGR_TAG_ACK;
@@ -147,6 +155,55 @@ void ceph_msgr_flush(void)
}
EXPORT_SYMBOL(ceph_msgr_flush);
+/* Connection socket state transition functions */
+
+static void con_sock_state_init(struct ceph_connection *con)
+{
+ int old_state;
+
+ old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CLOSED);
+ if (WARN_ON(old_state != CON_SOCK_STATE_NEW))
+ printk("%s: unexpected old state %d\n", __func__, old_state);
+}
+
+static void con_sock_state_connecting(struct ceph_connection *con)
+{
+ int old_state;
+
+ old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CONNECTING);
+ if (WARN_ON(old_state != CON_SOCK_STATE_CLOSED))
+ printk("%s: unexpected old state %d\n", __func__, old_state);
+}
+
+static void con_sock_state_connected(struct ceph_connection *con)
+{
+ int old_state;
+
+ old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CONNECTED);
+ if (WARN_ON(old_state != CON_SOCK_STATE_CONNECTING))
+ printk("%s: unexpected old state %d\n", __func__, old_state);
+}
+
+static void con_sock_state_closing(struct ceph_connection *con)
+{
+ int old_state;
+
+ old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CLOSING);
+ if (WARN_ON(old_state != CON_SOCK_STATE_CONNECTING &&
+ old_state != CON_SOCK_STATE_CONNECTED &&
+ old_state != CON_SOCK_STATE_CLOSING))
+ printk("%s: unexpected old state %d\n", __func__, old_state);
+}
+
+static void con_sock_state_closed(struct ceph_connection *con)
+{
+ int old_state;
+
+ old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CLOSED);
+ if (WARN_ON(old_state != CON_SOCK_STATE_CONNECTED &&
+ old_state != CON_SOCK_STATE_CLOSING))
+ printk("%s: unexpected old state %d\n", __func__, old_state);
+}
/*
* socket callback functions
@@ -203,6 +260,7 @@ static void ceph_sock_state_change(struct sock *sk)
dout("%s TCP_CLOSE\n", __func__);
case TCP_CLOSE_WAIT:
dout("%s TCP_CLOSE_WAIT\n", __func__);
+ con_sock_state_closing(con);
if (test_and_set_bit(SOCK_CLOSED, &con->flags) == 0) {
if (test_bit(CONNECTING, &con->state))
con->error_msg = "connection failed";
@@ -213,6 +271,7 @@ static void ceph_sock_state_change(struct sock *sk)
break;
case TCP_ESTABLISHED:
dout("%s TCP_ESTABLISHED\n", __func__);
+ con_sock_state_connected(con);
queue_con(con);
break;
default: /* Everything else is uninteresting */
@@ -277,6 +336,7 @@ static int ceph_tcp_connect(struct ceph_connection *con)
return ret;
}
con->sock = sock;
+ con_sock_state_connecting(con);
return 0;
}
@@ -343,6 +403,7 @@ static int con_close_socket(struct ceph_connection *con)
sock_release(con->sock);
con->sock = NULL;
clear_bit(SOCK_CLOSED, &con->state);
+ con_sock_state_closed(con);
return rc;
}
@@ -462,6 +523,9 @@ void ceph_con_init(struct ceph_messenger *msgr, struct ceph_connection *con)
memset(con, 0, sizeof(*con));
atomic_set(&con->nref, 1);
con->msgr = msgr;
+
+ con_sock_state_init(con);
+
mutex_init(&con->mutex);
INIT_LIST_HEAD(&con->out_queue);
INIT_LIST_HEAD(&con->out_sent);
--
1.7.9.5
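For anyone skimming the transition helpers above: each one swaps in the
new socket state atomically and warns if the state it displaced was not
one of the expected predecessors, which makes illegal transitions loud
without adding locking. A compact userspace sketch of that idea follows
(C11 atomics, invented state names); it is an illustration, not the
messenger code.

#include <stdatomic.h>
#include <stdio.h>

enum { S_NEW, S_CLOSED, S_CONNECTING, S_CONNECTED, S_CLOSING };

static atomic_int sock_state = S_NEW;

/* NEW -> CLOSED; anything else is a bug worth reporting. */
static void sock_state_init(void)
{
	int old = atomic_exchange(&sock_state, S_CLOSED);

	if (old != S_NEW)
		fprintf(stderr, "%s: unexpected old state %d\n", __func__, old);
}

/* CLOSED -> CONNECTING; only CLOSED is a legal previous state. */
static void sock_state_connecting(void)
{
	int old = atomic_exchange(&sock_state, S_CONNECTING);

	if (old != S_CLOSED)
		fprintf(stderr, "%s: unexpected old state %d\n", __func__, old);
}

int main(void)
{
	sock_state_init();		/* NEW -> CLOSED, silent */
	sock_state_connecting();	/* CLOSED -> CONNECTING, silent */
	sock_state_connecting();	/* CONNECTING -> CONNECTING, warns */
	return 0;
}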
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 327800bdc2cb9b71f4b458ca07aa9d522668dde0 upstream.
Change the names of the three socket callback functions to make it
more obvious they're specifically associated with a connection's
socket (not the ceph connection that uses it).
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Yehuda Sadeh <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/ceph/messenger.c | 28 ++++++++++++++--------------
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 18f69a8..5fb9937 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -153,46 +153,46 @@ EXPORT_SYMBOL(ceph_msgr_flush);
*/
/* data available on socket, or listen socket received a connect */
-static void ceph_data_ready(struct sock *sk, int count_unused)
+static void ceph_sock_data_ready(struct sock *sk, int count_unused)
{
struct ceph_connection *con = sk->sk_user_data;
if (sk->sk_state != TCP_CLOSE_WAIT) {
- dout("ceph_data_ready on %p state = %lu, queueing work\n",
+ dout("%s on %p state = %lu, queueing work\n", __func__,
con, con->state);
queue_con(con);
}
}
/* socket has buffer space for writing */
-static void ceph_write_space(struct sock *sk)
+static void ceph_sock_write_space(struct sock *sk)
{
struct ceph_connection *con = sk->sk_user_data;
/* only queue to workqueue if there is data we want to write,
* and there is sufficient space in the socket buffer to accept
- * more data. clear SOCK_NOSPACE so that ceph_write_space()
+ * more data. clear SOCK_NOSPACE so that ceph_sock_write_space()
* doesn't get called again until try_write() fills the socket
* buffer. See net/ipv4/tcp_input.c:tcp_check_space()
* and net/core/stream.c:sk_stream_write_space().
*/
if (test_bit(WRITE_PENDING, &con->state)) {
if (sk_stream_wspace(sk) >= sk_stream_min_wspace(sk)) {
- dout("ceph_write_space %p queueing write work\n", con);
+ dout("%s %p queueing write work\n", __func__, con);
clear_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
queue_con(con);
}
} else {
- dout("ceph_write_space %p nothing to write\n", con);
+ dout("%s %p nothing to write\n", __func__, con);
}
}
/* socket's state has changed */
-static void ceph_state_change(struct sock *sk)
+static void ceph_sock_state_change(struct sock *sk)
{
struct ceph_connection *con = sk->sk_user_data;
- dout("ceph_state_change %p state = %lu sk_state = %u\n",
+ dout("%s %p state = %lu sk_state = %u\n", __func__,
con, con->state, sk->sk_state);
if (test_bit(CLOSED, &con->state))
@@ -200,9 +200,9 @@ static void ceph_state_change(struct sock *sk)
switch (sk->sk_state) {
case TCP_CLOSE:
- dout("ceph_state_change TCP_CLOSE\n");
+ dout("%s TCP_CLOSE\n", __func__);
case TCP_CLOSE_WAIT:
- dout("ceph_state_change TCP_CLOSE_WAIT\n");
+ dout("%s TCP_CLOSE_WAIT\n", __func__);
if (test_and_set_bit(SOCK_CLOSED, &con->state) == 0) {
if (test_bit(CONNECTING, &con->state))
con->error_msg = "connection failed";
@@ -212,7 +212,7 @@ static void ceph_state_change(struct sock *sk)
}
break;
case TCP_ESTABLISHED:
- dout("ceph_state_change TCP_ESTABLISHED\n");
+ dout("%s TCP_ESTABLISHED\n", __func__);
queue_con(con);
break;
default: /* Everything else is uninteresting */
@@ -228,9 +228,9 @@ static void set_sock_callbacks(struct socket *sock,
{
struct sock *sk = sock->sk;
sk->sk_user_data = con;
- sk->sk_data_ready = ceph_data_ready;
- sk->sk_write_space = ceph_write_space;
- sk->sk_state_change = ceph_state_change;
+ sk->sk_data_ready = ceph_sock_data_ready;
+ sk->sk_write_space = ceph_sock_write_space;
+ sk->sk_state_change = ceph_sock_state_change;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit e5e372da9a469dfe3ece40277090a7056c566838 upstream.
The ceph connection state "DEAD" is never set and is therefore not
needed. Eliminate it.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Yehuda Sadeh <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/messenger.h | 1 -
net/ceph/messenger.c | 6 ------
2 files changed, 7 deletions(-)
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index 44c87e7..6a1dc80 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -119,7 +119,6 @@ struct ceph_msg_pos {
#define CLOSED 10 /* we've closed the connection */
#define SOCK_CLOSED 11 /* socket state changed to closed */
#define OPENING 13 /* open connection w/ (possibly new) peer */
-#define DEAD 14 /* dead, about to kfree */
#define BACKOFF 15
/*
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 10255e8..ad2eaad 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2091,12 +2091,6 @@ bad_tag:
*/
static void queue_con(struct ceph_connection *con)
{
- if (test_bit(DEAD, &con->state)) {
- dout("queue_con %p ignoring: DEAD\n",
- con);
- return;
- }
-
if (!con->ops->get(con)) {
dout("queue_con %p ref count 0\n", con);
return;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Elder <[email protected]>
commit 6384bb8b8e88a9c6bf2ae0d9517c2c0199177c34 upstream.
No code sets a bad_proto method in its ceph connection operations
vector, so just get rid of it.
Signed-off-by: Alex Elder <[email protected]>
Reviewed-by: Yehuda Sadeh <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
include/linux/ceph/messenger.h | 3 ---
net/ceph/messenger.c | 5 -----
2 files changed, 8 deletions(-)
diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index 6a1dc80..ce7a483 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -31,9 +31,6 @@ struct ceph_connection_operations {
int (*verify_authorizer_reply) (struct ceph_connection *con, int len);
int (*invalidate_authorizer)(struct ceph_connection *con);
- /* protocol version mismatch */
- void (*bad_proto) (struct ceph_connection *con);
-
/* there was some error on the socket (disconnect, whatever) */
void (*fault) (struct ceph_connection *con);
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index ad2eaad..18f69a8 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1360,11 +1360,6 @@ static void fail_protocol(struct ceph_connection *con)
{
reset_connection(con);
set_bit(CLOSED, &con->state); /* in case there's queued work */
-
- mutex_unlock(&con->mutex);
- if (con->ops->bad_proto)
- con->ops->bad_proto(con);
- mutex_lock(&con->mutex);
}
static int process_connect(struct ceph_connection *con)
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Konrad Rzeszutek Wilk <[email protected]>
commit cd0608e71e9757f4dae35bcfb4e88f4d1a03a8ab upstream.
The hypervisor will trap the rdtscp instruction. However, without this
patch we would crash, as .read_tscp is set to NULL. This patch fixes it
by setting .read_tscp to the native_read_tscp call.
Signed-off-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/x86/xen/enlighten.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 405307f..93dcfdc 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1131,6 +1131,8 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
.read_tsc = native_read_tsc,
.read_pmc = native_read_pmc,
+ .read_tscp = native_read_tscp,
+
.iret = xen_iret,
.irq_enable_sysexit = xen_sysexit,
#ifdef CONFIG_X86_64
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Jason Wessel <[email protected]>
commit f0a996eeeda214f4293e234df33b29bec003b536 upstream.
This fault was detected using the kgdb test suite on boot and it
crashes recursively due to the fact that CONFIG_KPROBES on mips adds
an extra die notifier in the page fault handler. The crash signature
looks like this:
kgdbts:RUN bad memory access test
KGDB: re-enter exception: ALL breakpoints killed
Call Trace:
[<807b7548>] dump_stack+0x20/0x54
[<807b7548>] dump_stack+0x20/0x54
The fix for now is to have kgdb return immediately if the fault type
is DIE_PAGE_FAULT and allow the kprobe code to decide what is supposed
to happen.
Cc: Masami Hiramatsu <[email protected]>
Cc: David S. Miller <[email protected]>
Signed-off-by: Jason Wessel <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/mips/kernel/kgdb.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/mips/kernel/kgdb.c b/arch/mips/kernel/kgdb.c
index f4546e9..23817a6 100644
--- a/arch/mips/kernel/kgdb.c
+++ b/arch/mips/kernel/kgdb.c
@@ -283,6 +283,15 @@ static int kgdb_mips_notify(struct notifier_block *self, unsigned long cmd,
struct pt_regs *regs = args->regs;
int trap = (regs->cp0_cause & 0x7c) >> 2;
+#ifdef CONFIG_KPROBES
+ /*
+ * Return immediately if the kprobes fault notifier has set
+ * DIE_PAGE_FAULT.
+ */
+ if (cmd == DIE_PAGE_FAULT)
+ return NOTIFY_DONE;
+#endif /* CONFIG_KPROBES */
+
/* Userspace events, ignore. */
if (user_mode(regs))
return NOTIFY_DONE;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Ian Kent <[email protected]>
commit 49999ab27eab6289a8e4f450e148bdab521361b2 upstream.
In autofs4_d_automount(), if a mount failure occurs, the AUTOFS_INF_PENDING
mount pending flag is not cleared.
One effect of this is when using the "browse" option, directory entry
attributes show up with all "?"s due to the incorrect callback and
subsequent failure return (when in fact no callback should be made).
Signed-off-by: Ian Kent <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/autofs4/root.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/autofs4/root.c b/fs/autofs4/root.c
index 75e5f1c..8c4292f 100644
--- a/fs/autofs4/root.c
+++ b/fs/autofs4/root.c
@@ -392,10 +392,12 @@ static struct vfsmount *autofs4_d_automount(struct path *path)
ino->flags |= AUTOFS_INF_PENDING;
spin_unlock(&sbi->fs_lock);
status = autofs4_mount_wait(dentry);
- if (status)
- return ERR_PTR(status);
spin_lock(&sbi->fs_lock);
ino->flags &= ~AUTOFS_INF_PENDING;
+ if (status) {
+ spin_unlock(&sbi->fs_lock);
+ return ERR_PTR(status);
+ }
}
done:
if (!(ino->flags & AUTOFS_INF_EXPIRING)) {
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Hugh Dickins <[email protected]>
commit 35c2a7f4908d404c9124c2efc6ada4640ca4d5d5 upstream.
Fuzzing with trinity oopsed on the 1st instruction of shmem_fh_to_dentry(),
u64 inum = fid->raw[2];
which is unhelpfully reported as at the end of shmem_alloc_inode():
BUG: unable to handle kernel paging request at ffff880061cd3000
IP: [<ffffffff812190d0>] shmem_alloc_inode+0x40/0x40
Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
Call Trace:
[<ffffffff81488649>] ? exportfs_decode_fh+0x79/0x2d0
[<ffffffff812d77c3>] do_handle_open+0x163/0x2c0
[<ffffffff812d792c>] sys_open_by_handle_at+0xc/0x10
[<ffffffff83a5f3f8>] tracesys+0xe1/0xe6
Right, tmpfs is being stupid to access fid->raw[2] before validating that
fh_len includes it: the buffer kmalloc'ed by do_sys_name_to_handle() may
fall at the end of a page, and the next page not be present.
But some other filesystems (ceph, gfs2, isofs, reiserfs, xfs) are being
careless about fh_len too, in fh_to_dentry() and/or fh_to_parent(), and
could oops in the same way: add the missing fh_len checks to those.
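As an illustration (not part of the patch), a minimal user-space sketch of
the check being added: fh_len counts the 32-bit words supplied by the
caller, so any raw[] index must be bounded by it before being read. The
names here (demo_fid, demo_decode) are made up for the example; only the
pattern mirrors the shmem fix.
#include <stdint.h>
/* Hypothetical file-handle layout, for illustration only. */
struct demo_fid {
	uint32_t raw[8];
};
/* Decode an inode number from raw[1]/raw[2], as shmem does, but only
 * after checking that the caller-supplied length actually covers them. */
static int demo_decode(const struct demo_fid *fid, int fh_len, uint64_t *inum)
{
	if (fh_len < 3)		/* raw[0..2] must be present; the kernel
				 * returns NULL or ERR_PTR(-ESTALE) here */
		return -1;
	*inum = ((uint64_t)fid->raw[2] << 32) | fid->raw[1];
	return 0;
}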
Reported-by: Sasha Levin <[email protected]>
Signed-off-by: Hugh Dickins <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Sage Weil <[email protected]>
Cc: Steven Whitehouse <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ceph/export.c | 18 ++++++++++++++----
fs/gfs2/export.c | 4 ++++
fs/isofs/export.c | 2 +-
fs/reiserfs/inode.c | 6 +++++-
fs/xfs/xfs_export.c | 3 +++
mm/shmem.c | 6 ++++--
6 files changed, 31 insertions(+), 8 deletions(-)
diff --git a/fs/ceph/export.c b/fs/ceph/export.c
index 8e1b60e..02ce909 100644
--- a/fs/ceph/export.c
+++ b/fs/ceph/export.c
@@ -99,7 +99,7 @@ static int ceph_encode_fh(struct inode *inode, u32 *rawfh, int *max_len,
* FIXME: we should try harder by querying the mds for the ino.
*/
static struct dentry *__fh_to_dentry(struct super_block *sb,
- struct ceph_nfs_fh *fh)
+ struct ceph_nfs_fh *fh, int fh_len)
{
struct ceph_mds_client *mdsc = ceph_sb_to_client(sb)->mdsc;
struct inode *inode;
@@ -107,6 +107,9 @@ static struct dentry *__fh_to_dentry(struct super_block *sb,
struct ceph_vino vino;
int err;
+ if (fh_len < sizeof(*fh) / 4)
+ return ERR_PTR(-ESTALE);
+
dout("__fh_to_dentry %llx\n", fh->ino);
vino.ino = fh->ino;
vino.snap = CEPH_NOSNAP;
@@ -150,7 +153,7 @@ static struct dentry *__fh_to_dentry(struct super_block *sb,
* convert connectable fh to dentry
*/
static struct dentry *__cfh_to_dentry(struct super_block *sb,
- struct ceph_nfs_confh *cfh)
+ struct ceph_nfs_confh *cfh, int fh_len)
{
struct ceph_mds_client *mdsc = ceph_sb_to_client(sb)->mdsc;
struct inode *inode;
@@ -158,6 +161,9 @@ static struct dentry *__cfh_to_dentry(struct super_block *sb,
struct ceph_vino vino;
int err;
+ if (fh_len < sizeof(*cfh) / 4)
+ return ERR_PTR(-ESTALE);
+
dout("__cfh_to_dentry %llx (%llx/%x)\n",
cfh->ino, cfh->parent_ino, cfh->parent_name_hash);
@@ -207,9 +213,11 @@ static struct dentry *ceph_fh_to_dentry(struct super_block *sb, struct fid *fid,
int fh_len, int fh_type)
{
if (fh_type == 1)
- return __fh_to_dentry(sb, (struct ceph_nfs_fh *)fid->raw);
+ return __fh_to_dentry(sb, (struct ceph_nfs_fh *)fid->raw,
+ fh_len);
else
- return __cfh_to_dentry(sb, (struct ceph_nfs_confh *)fid->raw);
+ return __cfh_to_dentry(sb, (struct ceph_nfs_confh *)fid->raw,
+ fh_len);
}
/*
@@ -230,6 +238,8 @@ static struct dentry *ceph_fh_to_parent(struct super_block *sb,
if (fh_type == 1)
return ERR_PTR(-ESTALE);
+ if (fh_len < sizeof(*cfh) / 4)
+ return ERR_PTR(-ESTALE);
pr_debug("fh_to_parent %llx/%d\n", cfh->parent_ino,
cfh->parent_name_hash);
diff --git a/fs/gfs2/export.c b/fs/gfs2/export.c
index e8ed6d4..4767774 100644
--- a/fs/gfs2/export.c
+++ b/fs/gfs2/export.c
@@ -161,6 +161,8 @@ static struct dentry *gfs2_fh_to_dentry(struct super_block *sb, struct fid *fid,
case GFS2_SMALL_FH_SIZE:
case GFS2_LARGE_FH_SIZE:
case GFS2_OLD_FH_SIZE:
+ if (fh_len < GFS2_SMALL_FH_SIZE)
+ return NULL;
this.no_formal_ino = ((u64)be32_to_cpu(fh[0])) << 32;
this.no_formal_ino |= be32_to_cpu(fh[1]);
this.no_addr = ((u64)be32_to_cpu(fh[2])) << 32;
@@ -180,6 +182,8 @@ static struct dentry *gfs2_fh_to_parent(struct super_block *sb, struct fid *fid,
switch (fh_type) {
case GFS2_LARGE_FH_SIZE:
case GFS2_OLD_FH_SIZE:
+ if (fh_len < GFS2_LARGE_FH_SIZE)
+ return NULL;
parent.no_formal_ino = ((u64)be32_to_cpu(fh[4])) << 32;
parent.no_formal_ino |= be32_to_cpu(fh[5]);
parent.no_addr = ((u64)be32_to_cpu(fh[6])) << 32;
diff --git a/fs/isofs/export.c b/fs/isofs/export.c
index aa4356d..8cf3f97 100644
--- a/fs/isofs/export.c
+++ b/fs/isofs/export.c
@@ -174,7 +174,7 @@ static struct dentry *isofs_fh_to_parent(struct super_block *sb,
{
struct isofs_fid *ifid = (struct isofs_fid *)fid;
- if (fh_type != 2)
+ if (fh_len < 2 || fh_type != 2)
return NULL;
return isofs_export_iget(sb,
diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c
index a6d4268..0192f86 100644
--- a/fs/reiserfs/inode.c
+++ b/fs/reiserfs/inode.c
@@ -1573,8 +1573,10 @@ struct dentry *reiserfs_fh_to_dentry(struct super_block *sb, struct fid *fid,
reiserfs_warning(sb, "reiserfs-13077",
"nfsd/reiserfs, fhtype=%d, len=%d - odd",
fh_type, fh_len);
- fh_type = 5;
+ fh_type = fh_len;
}
+ if (fh_len < 2)
+ return NULL;
return reiserfs_get_dentry(sb, fid->raw[0], fid->raw[1],
(fh_type == 3 || fh_type >= 5) ? fid->raw[2] : 0);
@@ -1583,6 +1585,8 @@ struct dentry *reiserfs_fh_to_dentry(struct super_block *sb, struct fid *fid,
struct dentry *reiserfs_fh_to_parent(struct super_block *sb, struct fid *fid,
int fh_len, int fh_type)
{
+ if (fh_type > fh_len)
+ fh_type = fh_len;
if (fh_type < 4)
return NULL;
diff --git a/fs/xfs/xfs_export.c b/fs/xfs/xfs_export.c
index 4267922..8c6d1d7 100644
--- a/fs/xfs/xfs_export.c
+++ b/fs/xfs/xfs_export.c
@@ -189,6 +189,9 @@ xfs_fs_fh_to_parent(struct super_block *sb, struct fid *fid,
struct xfs_fid64 *fid64 = (struct xfs_fid64 *)fid;
struct inode *inode = NULL;
+ if (fh_len < xfs_fileid_length(fileid_type))
+ return NULL;
+
switch (fileid_type) {
case FILEID_INO32_GEN_PARENT:
inode = xfs_nfs_get_inode(sb, fid->i32.parent_ino,
diff --git a/mm/shmem.c b/mm/shmem.c
index bd10636..06d48ca 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2364,12 +2364,14 @@ static struct dentry *shmem_fh_to_dentry(struct super_block *sb,
{
struct inode *inode;
struct dentry *dentry = NULL;
- u64 inum = fid->raw[2];
- inum = (inum << 32) | fid->raw[1];
+ u64 inum;
if (fh_len < 3)
return NULL;
+ inum = fid->raw[2];
+ inum = (inum << 32) | fid->raw[1];
+
inode = ilookup5(sb, (unsigned long)(inum + fid->raw[0]),
shmem_match, fid->raw);
if (inode) {
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Carpenter <[email protected]>
commit 5b3900cd409466c0070b234d941650685ad0c791 upstream.
We fixed a bunch of integer overflows in timekeeping code during the 3.6
cycle. I did an audit based on that and found this potential overflow.
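As an illustration (not part of the patch), a small user-space program
showing why the cast matters: without it the shift is evaluated in 32 bits
and wraps before the 64-bit assignment. The 32-bit raw_interval below
stands in for the timekeeper field; its value is made up for the example.
#include <stdint.h>
#include <stdio.h>
int main(void)
{
	uint32_t raw_interval = 0x40000000;	/* illustrative value */
	int shift = 4;
	uint64_t wrong = raw_interval << shift;			/* wraps to 0 */
	uint64_t right = (uint64_t)raw_interval << shift;	/* 0x400000000 */
	printf("wrong=%#llx right=%#llx\n",
	       (unsigned long long)wrong, (unsigned long long)right);
	return 0;
}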
Signed-off-by: Dan Carpenter <[email protected]>
Acked-by: John Stultz <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
[ herton: adapt for 3.5, timekeeper instead of tk pointer ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
kernel/time/timekeeping.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 63c88c1..8954990 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1012,7 +1012,7 @@ static cycle_t logarithmic_accumulation(cycle_t offset, int shift)
}
/* Accumulate raw time */
- raw_nsecs = timekeeper.raw_interval << shift;
+ raw_nsecs = (u64)timekeeper.raw_interval << shift;
raw_nsecs += timekeeper.raw_time.tv_nsec;
if (raw_nsecs >= NSEC_PER_SEC) {
u64 raw_secs = raw_nsecs;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Stefan Richter <[email protected]>
commit 790198f74c9d1b46b6a89504361b1a844670d050 upstream.
Fix two bugs of the /dev/fw* character device concerning the
FW_CDEV_IOC_GET_INFO ioctl with nonzero fw_cdev_get_info.bus_reset.
(Practically all /dev/fw* clients issue this ioctl right after opening
the device.)
Both bugs are caused by sizeof(struct fw_cdev_event_bus_reset) being 36
without natural alignment and 40 with natural alignment.
1) Memory corruption, affecting i386 userland on amd64 kernel:
Userland reserves a 36-byte buffer; the kernel writes 40 bytes.
This has been first found and reported against libraw1394 if
compiled with gcc 4.7 which happens to order libraw1394's stack such
that the bug became visible as data corruption.
2) Information leak, affecting all kernel architectures except i386:
4 bytes of random kernel stack data were leaked to userspace.
Hence limit the respective copy_to_user() to the 32-bit aligned size of
struct fw_cdev_event_bus_reset.
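As an illustration (not part of the patch), a struct of the same shape --
one 64-bit field followed by seven 32-bit fields, 36 bytes of payload --
shows the difference: the i386 ABI aligns u64 members on 4 bytes, so
sizeof() is 36, while x86-64 aligns them on 8 and pads the struct to 40.
The layout below is illustrative only, not the real event structure.
#include <stdint.h>
#include <stdio.h>
struct demo_event {
	uint64_t closure;
	uint32_t words[7];	/* 8 + 28 = 36 bytes of payload */
};
int main(void)
{
	/* Prints 36 when built with -m32, 40 when built for x86-64. */
	printf("sizeof(struct demo_event) = %zu\n", sizeof(struct demo_event));
	return 0;
}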
Reported-by: Simon Kirby <[email protected]>
Signed-off-by: Stefan Richter <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/firewire/core-cdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
index 2783f69..f8d2287 100644
--- a/drivers/firewire/core-cdev.c
+++ b/drivers/firewire/core-cdev.c
@@ -473,8 +473,8 @@ static int ioctl_get_info(struct client *client, union ioctl_arg *arg)
client->bus_reset_closure = a->bus_reset_closure;
if (a->bus_reset != 0) {
fill_bus_reset_event(&bus_reset, client);
- ret = copy_to_user(u64_to_uptr(a->bus_reset),
- &bus_reset, sizeof(bus_reset));
+ /* unaligned size of bus_reset is 36 bytes */
+ ret = copy_to_user(u64_to_uptr(a->bus_reset), &bus_reset, 36);
}
if (ret == 0 && list_empty(&client->link))
list_add_tail(&client->link, &client->device->client_list);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: "Hildner, Christian" <[email protected]>
commit 26cff4e2aa4d666dc6a120ea34336b5057e3e187 upstream.
Adding two (or more) timers with large values for "expires" (they have
to reside within tv5 in the same list) leads to endless looping
between cascade() and internal_add_timer() in case CONFIG_BASE_SMALL
is one and jiffies are crossing the value 1 << 18. The bug was
introduced between 2.6.11 and 2.6.12 (and survived for quite some
time).
This patch ensures that when cascade() is called timers within tv5 are
not added endlessly to their own list again; instead, they are added to
the next lower tv level, tv4, as expected.
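As an illustration (not part of the patch), the arithmetic behind the
MAX_TVAL limit the patch introduces: with the usual TVR_BITS=8/TVN_BITS=6
it works out to 0xffffffff (the old hard-coded limit), while
CONFIG_BASE_SMALL=1 uses TVR_BITS=6/TVN_BITS=4 and the limit drops to
0x3fffff, which is why the hard-coded constant let timers overshoot tv5.
#include <stdio.h>
/* TVR_BITS/TVN_BITS as defined in kernel/timer.c:
 * CONFIG_BASE_SMALL=0 -> 8/6, CONFIG_BASE_SMALL=1 -> 6/4. */
static unsigned long max_tval(int tvr_bits, int tvn_bits)
{
	return (unsigned long)((1ULL << (tvr_bits + 4 * tvn_bits)) - 1);
}
int main(void)
{
	printf("BASE_SMALL=0: MAX_TVAL = %#lx\n", max_tval(8, 6)); /* 0xffffffff */
	printf("BASE_SMALL=1: MAX_TVAL = %#lx\n", max_tval(6, 4)); /* 0x3fffff */
	return 0;
}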
Signed-off-by: Christian Hildner <[email protected]>
Reviewed-by: Jan Kiszka <[email protected]>
Link: http://lkml.kernel.org/r/98673C87CB31274881CFFE0B65ECC87B0F5FC1963E@DEFTHW99EA4MSX.ww902.siemens.net
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
kernel/timer.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/kernel/timer.c b/kernel/timer.c
index 6ec7e7e..ad4ced1 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
@@ -63,6 +63,7 @@ EXPORT_SYMBOL(jiffies_64);
#define TVR_SIZE (1 << TVR_BITS)
#define TVN_MASK (TVN_SIZE - 1)
#define TVR_MASK (TVR_SIZE - 1)
+#define MAX_TVAL ((unsigned long)((1ULL << (TVR_BITS + 4*TVN_BITS)) - 1))
struct tvec {
struct list_head vec[TVN_SIZE];
@@ -356,11 +357,12 @@ static void internal_add_timer(struct tvec_base *base, struct timer_list *timer)
vec = base->tv1.vec + (base->timer_jiffies & TVR_MASK);
} else {
int i;
- /* If the timeout is larger than 0xffffffff on 64-bit
- * architectures then we use the maximum timeout:
+ /* If the timeout is larger than MAX_TVAL (on 64-bit
+ * architectures or with CONFIG_BASE_SMALL=1) then we
+ * use the maximum timeout.
*/
- if (idx > 0xffffffffUL) {
- idx = 0xffffffffUL;
+ if (idx > MAX_TVAL) {
+ idx = MAX_TVAL;
expires = idx + base->timer_jiffies;
}
i = (expires >> (TVR_BITS + 3 * TVN_BITS)) & TVN_MASK;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Russell King <[email protected]>
commit 846a136881b8f73c1f74250bf6acfaa309cab1f2 upstream.
Michael Olbrich reported that his test program fails when built with
-O2 -mcpu=cortex-a8 -mfpu=neon, and a kernel which supports v6 and v7
CPUs:
volatile int x = 2;
volatile int64_t y = 2;
int main() {
volatile int a = 0;
volatile int64_t b = 0;
while (1) {
a = (a + x) % (1 << 30);
b = (b + y) % (1 << 30);
assert(a == b);
}
}
and two instances are run. When built for just v7 CPUs, this program
works fine. It uses the "vadd.i64 d19, d18, d16" VFP instruction.
It appears that we do not save the high-16 double VFP registers across
context switches when the kernel is built for v6 CPUs. Fix that.
Tested-By: Michael Olbrich <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/arm/include/asm/vfpmacros.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/vfpmacros.h b/arch/arm/include/asm/vfpmacros.h
index 3d5fc41..bf53047 100644
--- a/arch/arm/include/asm/vfpmacros.h
+++ b/arch/arm/include/asm/vfpmacros.h
@@ -28,7 +28,7 @@
ldr \tmp, =elf_hwcap @ may not have MVFR regs
ldr \tmp, [\tmp, #0]
tst \tmp, #HWCAP_VFPv3D16
- ldceq p11, cr0, [\base],#32*4 @ FLDMIAD \base!, {d16-d31}
+ ldceql p11, cr0, [\base],#32*4 @ FLDMIAD \base!, {d16-d31}
addne \base, \base, #32*4 @ step over unused register space
#else
VFPFMRX \tmp, MVFR0 @ Media and VFP Feature Register 0
@@ -52,7 +52,7 @@
ldr \tmp, =elf_hwcap @ may not have MVFR regs
ldr \tmp, [\tmp, #0]
tst \tmp, #HWCAP_VFPv3D16
- stceq p11, cr0, [\base],#32*4 @ FSTMIAD \base!, {d16-d31}
+ stceql p11, cr0, [\base],#32*4 @ FSTMIAD \base!, {d16-d31}
addne \base, \base, #32*4 @ step over unused register space
#else
VFPFMRX \tmp, MVFR0 @ Media and VFP Feature Register 0
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Felix Fietkau <[email protected]>
commit c3e7724b6bc2f25e46c38dbe68f09d71fafeafb8 upstream.
A few places free skbs using dev_kfree_skb even though they're called
after ieee80211_subif_start_xmit might have cloned it for tracking tx
status. Use ieee80211_free_txskb here to prevent skb leaks.
Signed-off-by: Felix Fietkau <[email protected]>
Signed-off-by: John W. Linville <[email protected]>
[ herton: adjust context ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/mac80211/status.c | 4 ++--
net/mac80211/tx.c | 22 ++++++++++++----------
2 files changed, 14 insertions(+), 12 deletions(-)
diff --git a/net/mac80211/status.c b/net/mac80211/status.c
index 28cfa98..d64dc07 100644
--- a/net/mac80211/status.c
+++ b/net/mac80211/status.c
@@ -34,7 +34,7 @@ void ieee80211_tx_status_irqsafe(struct ieee80211_hw *hw,
skb_queue_len(&local->skb_queue_unreliable);
while (tmp > IEEE80211_IRQSAFE_QUEUE_LIMIT &&
(skb = skb_dequeue(&local->skb_queue_unreliable))) {
- dev_kfree_skb_irq(skb);
+ ieee80211_free_txskb(hw, skb);
tmp--;
I802_DEBUG_INC(local->tx_status_drop);
}
@@ -162,7 +162,7 @@ static void ieee80211_handle_filtered_frame(struct ieee80211_local *local,
skb_queue_len(&sta->tx_filtered[ac]),
!!test_sta_flag(sta, WLAN_STA_PS_STA), jiffies);
#endif
- dev_kfree_skb(skb);
+ ieee80211_free_txskb(&local->hw, skb);
}
static void ieee80211_check_pending_bar(struct sta_info *sta, u8 *addr, u8 tid)
diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
index 85cf32d..a22a1ad 100644
--- a/net/mac80211/tx.c
+++ b/net/mac80211/tx.c
@@ -358,7 +358,7 @@ static void purge_old_ps_buffers(struct ieee80211_local *local)
total += skb_queue_len(&sta->ps_tx_buf[ac]);
if (skb) {
purged++;
- dev_kfree_skb(skb);
+ ieee80211_free_txskb(&local->hw, skb);
break;
}
}
@@ -478,7 +478,7 @@ ieee80211_tx_h_unicast_ps_buf(struct ieee80211_tx_data *tx)
net_dbg_ratelimited("%s: STA %pM TX buffer for AC %d full - dropping oldest frame\n",
tx->sdata->name, sta->sta.addr, ac);
#endif
- dev_kfree_skb(old);
+ ieee80211_free_txskb(&local->hw, old);
} else
tx->local->total_ps_buffered++;
@@ -1112,7 +1112,7 @@ static bool ieee80211_tx_prep_agg(struct ieee80211_tx_data *tx,
spin_unlock(&tx->sta->lock);
if (purge_skb)
- dev_kfree_skb(purge_skb);
+ ieee80211_free_txskb(&tx->local->hw, purge_skb);
}
/* reset session timer */
@@ -1223,7 +1223,7 @@ static bool ieee80211_tx_frags(struct ieee80211_local *local,
#ifdef CONFIG_MAC80211_VERBOSE_DEBUG
if (WARN_ON_ONCE(q >= local->hw.queues)) {
__skb_unlink(skb, skbs);
- dev_kfree_skb(skb);
+ ieee80211_free_txskb(&local->hw, skb);
continue;
}
#endif
@@ -1368,7 +1368,7 @@ static int invoke_tx_handlers(struct ieee80211_tx_data *tx)
if (unlikely(res == TX_DROP)) {
I802_DEBUG_INC(tx->local->tx_handlers_drop);
if (tx->skb)
- dev_kfree_skb(tx->skb);
+ ieee80211_free_txskb(&tx->local->hw, tx->skb);
else
__skb_queue_purge(&tx->skbs);
return -1;
@@ -1405,7 +1405,7 @@ static bool ieee80211_tx(struct ieee80211_sub_if_data *sdata,
res_prepare = ieee80211_tx_prepare(sdata, &tx, skb);
if (unlikely(res_prepare == TX_DROP)) {
- dev_kfree_skb(skb);
+ ieee80211_free_txskb(&local->hw, skb);
goto out;
} else if (unlikely(res_prepare == TX_QUEUED)) {
goto out;
@@ -1478,7 +1478,7 @@ void ieee80211_xmit(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb)
headroom = max_t(int, 0, headroom);
if (ieee80211_skb_resize(sdata, skb, headroom, may_encrypt)) {
- dev_kfree_skb(skb);
+ ieee80211_free_txskb(&local->hw, skb);
rcu_read_unlock();
return;
}
@@ -2075,8 +2075,10 @@ netdev_tx_t ieee80211_subif_start_xmit(struct sk_buff *skb,
head_need += IEEE80211_ENCRYPT_HEADROOM;
head_need += local->tx_headroom;
head_need = max_t(int, 0, head_need);
- if (ieee80211_skb_resize(sdata, skb, head_need, true))
- goto fail;
+ if (ieee80211_skb_resize(sdata, skb, head_need, true)) {
+ ieee80211_free_txskb(&local->hw, skb);
+ return NETDEV_TX_OK;
+ }
}
if (encaps_data) {
@@ -2211,7 +2213,7 @@ void ieee80211_tx_pending(unsigned long data)
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
if (WARN_ON(!info->control.vif)) {
- kfree_skb(skb);
+ ieee80211_free_txskb(&local->hw, skb);
continue;
}
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Feng Tang <[email protected]>
commit a520d52e99b14ba7db135e916348f12f2a6e09be upstream.
The Linux EC driver includes a mechanism to detect GPE storms,
and switch from interrupt-mode to polling mode. However, polling
mode sometimes doesn't work, so the workaround is problematic.
Also, different systems seem to need the threshold for detecting
the GPE storm at different levels.
ACPI_EC_STORM_THRESHOLD was initially 20 when it was created, and
was changed to 8 in 2.6.28 commit 06cf7d3c7 "ACPI: EC: lower interrupt storm
threshold" to fix kernel bug 11892 by forcing the laptop in that bug to
work in polling mode. However in bug 45151, it works fine in interrupt
mode if we lift the threshold back to 20.
This patch makes the threshold a module parameter so that users have a
flexible option to debug or work around this issue.
The default is unchanged.
This is also a preparation patch to fix specific systems:
https://bugzilla.kernel.org/show_bug.cgi?id=45151
Signed-off-by: Feng Tang <[email protected]>
Signed-off-by: Len Brown <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/acpi/ec.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
index 7edaccc..615264c 100644
--- a/drivers/acpi/ec.c
+++ b/drivers/acpi/ec.c
@@ -71,9 +71,6 @@ enum ec_command {
#define ACPI_EC_UDELAY_GLK 1000 /* Wait 1ms max. to get global lock */
#define ACPI_EC_MSI_UDELAY 550 /* Wait 550us for MSI EC */
-#define ACPI_EC_STORM_THRESHOLD 8 /* number of false interrupts
- per one transaction */
-
enum {
EC_FLAGS_QUERY_PENDING, /* Query is pending */
EC_FLAGS_GPE_STORM, /* GPE storm detected */
@@ -87,6 +84,15 @@ static unsigned int ec_delay __read_mostly = ACPI_EC_DELAY;
module_param(ec_delay, uint, 0644);
MODULE_PARM_DESC(ec_delay, "Timeout(ms) waited until an EC command completes");
+/*
+ * If the number of false interrupts per one transaction exceeds
+ * this threshold, will think there is a GPE storm happened and
+ * will disable the GPE for normal transaction.
+ */
+static unsigned int ec_storm_threshold __read_mostly = 8;
+module_param(ec_storm_threshold, uint, 0644);
+MODULE_PARM_DESC(ec_storm_threshold, "Maxim false GPE numbers not considered as GPE storm");
+
/* If we find an EC via the ECDT, we need to keep a ptr to its context */
/* External interfaces use first EC only, so remember */
typedef int (*acpi_ec_query_func) (void *data);
@@ -319,7 +325,7 @@ static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t)
msleep(1);
/* It is safe to enable the GPE outside of the transaction. */
acpi_enable_gpe(NULL, ec->gpe);
- } else if (t->irq_count > ACPI_EC_STORM_THRESHOLD) {
+ } else if (t->irq_count > ec_storm_threshold) {
pr_info(PREFIX "GPE storm detected, "
"transactions will use polling mode\n");
set_bit(EC_FLAGS_GPE_STORM, &ec->flags);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Felix Fietkau <[email protected]>
commit 249ee72249140fe5b9adc988f97298f0aa5db2fc upstream.
Using ieee80211_free_txskb for tx frames is required, since mac80211 clones
skbs for which socket tx status is requested.
Signed-off-by: Felix Fietkau <[email protected]>
Signed-off-by: John W. Linville <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/net/wireless/ath/ath9k/beacon.c | 2 +-
drivers/net/wireless/ath/ath9k/main.c | 2 +-
drivers/net/wireless/ath/ath9k/xmit.c | 53 +++++++++++++++++--------------
3 files changed, 31 insertions(+), 26 deletions(-)
diff --git a/drivers/net/wireless/ath/ath9k/beacon.c b/drivers/net/wireless/ath/ath9k/beacon.c
index 11bc55e..3f9330a 100644
--- a/drivers/net/wireless/ath/ath9k/beacon.c
+++ b/drivers/net/wireless/ath/ath9k/beacon.c
@@ -121,7 +121,7 @@ static void ath_tx_cabq(struct ieee80211_hw *hw, struct sk_buff *skb)
if (ath_tx_start(hw, skb, &txctl) != 0) {
ath_dbg(common, XMIT, "CABQ TX failed\n");
- dev_kfree_skb_any(skb);
+ ieee80211_free_txskb(hw, skb);
}
}
diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
index dac1a27..b5fa9eb 100644
--- a/drivers/net/wireless/ath/ath9k/main.c
+++ b/drivers/net/wireless/ath/ath9k/main.c
@@ -1161,7 +1161,7 @@ static void ath9k_tx(struct ieee80211_hw *hw, struct sk_buff *skb)
return;
exit:
- dev_kfree_skb_any(skb);
+ ieee80211_free_txskb(hw, skb);
}
static void ath9k_stop(struct ieee80211_hw *hw)
diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
index 4d57139..b78773b 100644
--- a/drivers/net/wireless/ath/ath9k/xmit.c
+++ b/drivers/net/wireless/ath/ath9k/xmit.c
@@ -64,8 +64,7 @@ static void ath_tx_update_baw(struct ath_softc *sc, struct ath_atx_tid *tid,
static struct ath_buf *ath_tx_setup_buffer(struct ath_softc *sc,
struct ath_txq *txq,
struct ath_atx_tid *tid,
- struct sk_buff *skb,
- bool dequeue);
+ struct sk_buff *skb);
enum {
MCS_HT20,
@@ -201,7 +200,15 @@ static void ath_tx_flush_tid(struct ath_softc *sc, struct ath_atx_tid *tid)
fi = get_frame_info(skb);
bf = fi->bf;
- if (bf && fi->retries) {
+ if (!bf) {
+ bf = ath_tx_setup_buffer(sc, txq, tid, skb);
+ if (!bf) {
+ ieee80211_free_txskb(sc->hw, skb);
+ continue;
+ }
+ }
+
+ if (fi->retries) {
list_add_tail(&bf->list, &bf_head);
ath_tx_update_baw(sc, tid, bf->bf_state.seqno);
ath_tx_complete_buf(sc, bf, txq, &bf_head, &ts, 0);
@@ -812,10 +819,13 @@ static enum ATH_AGGR_STATUS ath_tx_form_aggr(struct ath_softc *sc,
fi = get_frame_info(skb);
bf = fi->bf;
if (!fi->bf)
- bf = ath_tx_setup_buffer(sc, txq, tid, skb, true);
+ bf = ath_tx_setup_buffer(sc, txq, tid, skb);
- if (!bf)
+ if (!bf) {
+ __skb_unlink(skb, &tid->buf_q);
+ ieee80211_free_txskb(sc->hw, skb);
continue;
+ }
bf->bf_state.bf_type = BUF_AMPDU | BUF_AGGR;
seqno = bf->bf_state.seqno;
@@ -1717,9 +1727,11 @@ static void ath_tx_send_ampdu(struct ath_softc *sc, struct ath_atx_tid *tid,
return;
}
- bf = ath_tx_setup_buffer(sc, txctl->txq, tid, skb, false);
- if (!bf)
+ bf = ath_tx_setup_buffer(sc, txctl->txq, tid, skb);
+ if (!bf) {
+ ieee80211_free_txskb(sc->hw, skb);
return;
+ }
bf->bf_state.bf_type = BUF_AMPDU;
INIT_LIST_HEAD(&bf_head);
@@ -1743,11 +1755,6 @@ static void ath_tx_send_normal(struct ath_softc *sc, struct ath_txq *txq,
struct ath_buf *bf;
bf = fi->bf;
- if (!bf)
- bf = ath_tx_setup_buffer(sc, txq, tid, skb, false);
-
- if (!bf)
- return;
INIT_LIST_HEAD(&bf_head);
list_add_tail(&bf->list, &bf_head);
@@ -1820,8 +1827,7 @@ u8 ath_txchainmask_reduction(struct ath_softc *sc, u8 chainmask, u32 rate)
static struct ath_buf *ath_tx_setup_buffer(struct ath_softc *sc,
struct ath_txq *txq,
struct ath_atx_tid *tid,
- struct sk_buff *skb,
- bool dequeue)
+ struct sk_buff *skb)
{
struct ath_common *common = ath9k_hw_common(sc->sc_ah);
struct ath_frame_info *fi = get_frame_info(skb);
@@ -1833,7 +1839,7 @@ static struct ath_buf *ath_tx_setup_buffer(struct ath_softc *sc,
bf = ath_tx_get_buffer(sc);
if (!bf) {
ath_dbg(common, XMIT, "TX buffers are full\n");
- goto error;
+ return NULL;
}
ATH_TXBUF_RESET(bf);
@@ -1862,18 +1868,12 @@ static struct ath_buf *ath_tx_setup_buffer(struct ath_softc *sc,
ath_err(ath9k_hw_common(sc->sc_ah),
"dma_mapping_error() on TX\n");
ath_tx_return_buffer(sc, bf);
- goto error;
+ return NULL;
}
fi->bf = bf;
return bf;
-
-error:
- if (dequeue)
- __skb_unlink(skb, &tid->buf_q);
- dev_kfree_skb_any(skb);
- return NULL;
}
/* FIXME: tx power */
@@ -1902,9 +1902,14 @@ static void ath_tx_start_dma(struct ath_softc *sc, struct sk_buff *skb,
*/
ath_tx_send_ampdu(sc, tid, skb, txctl);
} else {
- bf = ath_tx_setup_buffer(sc, txctl->txq, tid, skb, false);
- if (!bf)
+ bf = ath_tx_setup_buffer(sc, txctl->txq, tid, skb);
+ if (!bf) {
+ if (txctl->paprd)
+ dev_kfree_skb_any(skb);
+ else
+ ieee80211_free_txskb(sc->hw, skb);
return;
+ }
bf->bf_state.bfs_paprd = txctl->paprd;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger <[email protected]>
commit cf0eb28d3ba60098865bf7dbcbfdd6b1cc483e3b upstream.
This patch increases the default for nopin_timeout to 15 seconds (wait
between sending a new NopIN ping) and nopin_response_timeout to 30 seconds
(wait for NopOUT response before failing the connection) in order to avoid
false positives by iSCSI Initiators who are not always able (under load) to
respond to NopIN echo PING requests within the current 5 second window.
False positives have been observed recently using Open-iSCSI code on v3.3.x
with heavy large-block READ workloads over small MTU 1 Gb/sec ports, and
increasing these values to more reasonable defaults significantly reduces
the possibility of false positive NopIN response timeout events under
this specific workload.
Historically these have been set low to initiate connection recovery as
soon as possible if we don't hear a ping back, but for modern v3.x code
on 1 -> 10 Gb/sec ports these new defaults make a lot more sense.
Cc: Christoph Hellwig <[email protected]>
Cc: Andy Grover <[email protected]>
Cc: Mike Christie <[email protected]>
Cc: Hannes Reinecke <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/target/iscsi/iscsi_target_core.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/target/iscsi/iscsi_target_core.h b/drivers/target/iscsi/iscsi_target_core.h
index 1dd5716..8bb81d68 100644
--- a/drivers/target/iscsi/iscsi_target_core.h
+++ b/drivers/target/iscsi/iscsi_target_core.h
@@ -25,10 +25,10 @@
#define NA_DATAOUT_TIMEOUT_RETRIES 5
#define NA_DATAOUT_TIMEOUT_RETRIES_MAX 15
#define NA_DATAOUT_TIMEOUT_RETRIES_MIN 1
-#define NA_NOPIN_TIMEOUT 5
+#define NA_NOPIN_TIMEOUT 15
#define NA_NOPIN_TIMEOUT_MAX 60
#define NA_NOPIN_TIMEOUT_MIN 3
-#define NA_NOPIN_RESPONSE_TIMEOUT 5
+#define NA_NOPIN_RESPONSE_TIMEOUT 30
#define NA_NOPIN_RESPONSE_TIMEOUT_MAX 60
#define NA_NOPIN_RESPONSE_TIMEOUT_MIN 3
#define NA_RANDOM_DATAIN_PDU_OFFSETS 0
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger <[email protected]>
commit 38b11bae6ba02da352340aff12ee25755977b222 upstream.
We've had reports in the past about this specific case, so it's time to
go ahead and explicitly set cache_dynamic_acls=1 for generate_node_acls=1
(TPG demo-mode) operation.
During normal generate_node_acls=0 operation with explicit NodeACLs,
se_node_acl memory is persistent to the configfs group located at
/sys/kernel/config/target/$TARGETNAME/$TPGT/acls/$INITIATORNAME, so in
the generate_node_acls=1 case we want the reservation logic to reference
existing per initiator IQN se_node_acl memory (not to generate a new
se_node_acl), so go ahead and always set cache_dynamic_acls=1 when
TPG demo-mode is enabled.
Reported-by: Ronnie Sahlberg <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/target/iscsi/iscsi_target_tpg.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c
index 879d8d0..c3d7bf54 100644
--- a/drivers/target/iscsi/iscsi_target_tpg.c
+++ b/drivers/target/iscsi/iscsi_target_tpg.c
@@ -672,6 +672,12 @@ int iscsit_ta_generate_node_acls(
pr_debug("iSCSI_TPG[%hu] - Generate Initiator Portal Group ACLs: %s\n",
tpg->tpgt, (a->generate_node_acls) ? "Enabled" : "Disabled");
+ if (flag == 1 && a->cache_dynamic_acls == 0) {
+ pr_debug("Explicitly setting cache_dynamic_acls=1 when "
+ "generate_node_acls=1\n");
+ a->cache_dynamic_acls = 1;
+ }
+
return 0;
}
@@ -711,6 +717,12 @@ int iscsit_ta_cache_dynamic_acls(
return -EINVAL;
}
+ if (a->generate_node_acls == 1 && flag == 0) {
+ pr_debug("Skipping cache_dynamic_acls=0 when"
+ " generate_node_acls=1\n");
+ return 0;
+ }
+
a->cache_dynamic_acls = flag;
pr_debug("iSCSI_TPG[%hu] - Cache Dynamic Initiator Portal Group"
" ACLs %s\n", tpg->tpgt, (a->cache_dynamic_acls) ?
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Peng Tao <[email protected]>
commit 96c9eae638765c2bf2ca4f5a6325484f9bb69aa7 upstream.
For DIO writes, if the request is not blocksize aligned, we need to do
internal serialization, which may slow down writers anyway. So we
just bail out and resend them to the MDS.
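As an illustration (not part of the patch), the alignment test being
applied, in user-space form; blksize stands in for the server's
pnfs_blksize and the function name is made up for the example.
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
/* A DIO write goes through the block layout only if both its offset and
 * its length are multiples of the block size; otherwise it is resent
 * through the MDS. */
static bool bl_dio_write_ok(uint64_t offset, uint64_t count, uint32_t blksize)
{
	return (offset % blksize) == 0 && (count % blksize) == 0;
}
int main(void)
{
	printf("%d %d\n", bl_dio_write_ok(8192, 4096, 4096),	/* 1 */
			  bl_dio_write_ok(8192, 5000, 4096));	/* 0 */
	return 0;
}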
Signed-off-by: Peng Tao <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 34 +++++++++++++++++++++++++++++++---
1 file changed, 31 insertions(+), 3 deletions(-)
diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index e5dfef5..1093968 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -685,7 +685,7 @@ bl_write_pagelist(struct nfs_write_data *wdata, int sync)
struct bio *bio = NULL;
struct pnfs_block_extent *be = NULL, *cow_read = NULL;
sector_t isect, last_isect = 0, extent_length = 0;
- struct parallel_io *par;
+ struct parallel_io *par = NULL;
loff_t offset = wdata->args.offset;
size_t count = wdata->args.count;
unsigned int pg_offset, pg_len, saved_len;
@@ -697,6 +697,13 @@ bl_write_pagelist(struct nfs_write_data *wdata, int sync)
NFS_SERVER(header->inode)->pnfs_blksize >> PAGE_CACHE_SHIFT;
dprintk("%s enter, %Zu@%lld\n", __func__, count, offset);
+
+ if (header->dreq != NULL &&
+ (!IS_ALIGNED(offset, NFS_SERVER(header->inode)->pnfs_blksize) ||
+ !IS_ALIGNED(count, NFS_SERVER(header->inode)->pnfs_blksize))) {
+ dprintk("pnfsblock nonblock aligned DIO writes. Resend MDS\n");
+ goto out_mds;
+ }
/* At this point, wdata->pages is a (sequential) list of nfs_pages.
* We want to write each, and if there is an error set pnfs_error
* to have it redone using nfs.
@@ -1197,6 +1204,27 @@ bl_pg_test_read(struct nfs_pageio_descriptor *pgio, struct nfs_page *prev,
return pnfs_generic_pg_test(pgio, prev, req);
}
+void
+bl_pg_init_write(struct nfs_pageio_descriptor *pgio, struct nfs_page *req)
+{
+ if (pgio->pg_dreq != NULL &&
+ !is_aligned_req(req, PAGE_CACHE_SIZE))
+ nfs_pageio_reset_write_mds(pgio);
+ else
+ pnfs_generic_pg_init_write(pgio, req);
+}
+
+static bool
+bl_pg_test_write(struct nfs_pageio_descriptor *pgio, struct nfs_page *prev,
+ struct nfs_page *req)
+{
+ if (pgio->pg_dreq != NULL &&
+ !is_aligned_req(req, PAGE_CACHE_SIZE))
+ return false;
+
+ return pnfs_generic_pg_test(pgio, prev, req);
+}
+
static const struct nfs_pageio_ops bl_pg_read_ops = {
.pg_init = bl_pg_init_read,
.pg_test = bl_pg_test_read,
@@ -1204,8 +1232,8 @@ static const struct nfs_pageio_ops bl_pg_read_ops = {
};
static const struct nfs_pageio_ops bl_pg_write_ops = {
- .pg_init = pnfs_generic_pg_init_write,
- .pg_test = pnfs_generic_pg_test,
+ .pg_init = bl_pg_init_write,
+ .pg_test = bl_pg_test_write,
.pg_doio = pnfs_generic_pg_writepages,
};
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Peng Tao <[email protected]>
commit f742dc4a32587bff50b13dde9d8894b96851951a upstream.
For DIO reads, if the request is not sector aligned, we should reject it
and resend it via the MDS. Otherwise there might be data corruption.
Also teach bl_read_pagelist to handle partial page reads for DIO.
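As an illustration (not part of the patch), the per-page split a partial
DIO read has to do, in user-space form; it mirrors the pg_offset/pg_len
computation added to bl_read_pagelist. DEMO_PAGE_SIZE and the function
name are made up for the example.
#include <stdint.h>
#include <stdio.h>
#define DEMO_PAGE_SIZE 4096UL
/* Given the current file offset and the bytes still to read, compute the
 * offset and length of the I/O within the current page. */
static void dio_page_span(uint64_t f_offset, uint64_t bytes_left,
			  unsigned int *pg_offset, unsigned int *pg_len)
{
	*pg_offset = f_offset & (DEMO_PAGE_SIZE - 1);
	if (*pg_offset + bytes_left > DEMO_PAGE_SIZE)
		*pg_len = DEMO_PAGE_SIZE - *pg_offset;
	else
		*pg_len = bytes_left;
}
int main(void)
{
	unsigned int off, len;
	dio_page_span(4608, 10000, &off, &len);	/* starts 512 bytes into a page */
	printf("pg_offset=%u pg_len=%u\n", off, len);	/* 512 3584 */
	return 0;
}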
Signed-off-by: Peng Tao <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 64 +++++++++++++++++++++++++++++++++-----
1 file changed, 56 insertions(+), 8 deletions(-)
diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 39fa002..e5dfef5 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -252,8 +252,11 @@ bl_read_pagelist(struct nfs_read_data *rdata)
sector_t isect, extent_length = 0;
struct parallel_io *par;
loff_t f_offset = rdata->args.offset;
+ size_t bytes_left = rdata->args.count;
+ unsigned int pg_offset, pg_len;
struct page **pages = rdata->args.pages;
int pg_index = rdata->args.pgbase >> PAGE_CACHE_SHIFT;
+ const bool is_dio = (header->dreq != NULL);
dprintk("%s enter nr_pages %u offset %lld count %u\n", __func__,
rdata->pages.npages, f_offset, (unsigned int)rdata->args.count);
@@ -287,36 +290,53 @@ bl_read_pagelist(struct nfs_read_data *rdata)
extent_length = min(extent_length, cow_length);
}
}
+
+ if (is_dio) {
+ pg_offset = f_offset & ~PAGE_CACHE_MASK;
+ if (pg_offset + bytes_left > PAGE_CACHE_SIZE)
+ pg_len = PAGE_CACHE_SIZE - pg_offset;
+ else
+ pg_len = bytes_left;
+
+ f_offset += pg_len;
+ bytes_left -= pg_len;
+ isect += (pg_offset >> SECTOR_SHIFT);
+ } else {
+ pg_offset = 0;
+ pg_len = PAGE_CACHE_SIZE;
+ }
+
hole = is_hole(be, isect);
if (hole && !cow_read) {
bio = bl_submit_bio(READ, bio);
/* Fill hole w/ zeroes w/o accessing device */
dprintk("%s Zeroing page for hole\n", __func__);
- zero_user_segment(pages[i], 0, PAGE_CACHE_SIZE);
+ zero_user_segment(pages[i], pg_offset, pg_len);
print_page(pages[i]);
SetPageUptodate(pages[i]);
} else {
struct pnfs_block_extent *be_read;
be_read = (hole && cow_read) ? cow_read : be;
- bio = bl_add_page_to_bio(bio, rdata->pages.npages - i,
+ bio = do_add_page_to_bio(bio, rdata->pages.npages - i,
READ,
isect, pages[i], be_read,
- bl_end_io_read, par);
+ bl_end_io_read, par,
+ pg_offset, pg_len);
if (IS_ERR(bio)) {
header->pnfs_error = PTR_ERR(bio);
bio = NULL;
goto out;
}
}
- isect += PAGE_CACHE_SECTORS;
+ isect += (pg_len >> SECTOR_SHIFT);
extent_length -= PAGE_CACHE_SECTORS;
}
if ((isect << SECTOR_SHIFT) >= header->inode->i_size) {
rdata->res.eof = 1;
- rdata->res.count = header->inode->i_size - f_offset;
+ rdata->res.count = header->inode->i_size - rdata->args.offset;
} else {
- rdata->res.count = (isect << SECTOR_SHIFT) - f_offset;
+ rdata->res.count = (isect << SECTOR_SHIFT) - rdata->args.offset;
}
out:
bl_put_extent(be);
@@ -1149,9 +1169,37 @@ bl_clear_layoutdriver(struct nfs_server *server)
return 0;
}
+static bool
+is_aligned_req(struct nfs_page *req, unsigned int alignment)
+{
+ return IS_ALIGNED(req->wb_offset, alignment) &&
+ IS_ALIGNED(req->wb_bytes, alignment);
+}
+
+static void
+bl_pg_init_read(struct nfs_pageio_descriptor *pgio, struct nfs_page *req)
+{
+ if (pgio->pg_dreq != NULL &&
+ !is_aligned_req(req, SECTOR_SIZE))
+ nfs_pageio_reset_read_mds(pgio);
+ else
+ pnfs_generic_pg_init_read(pgio, req);
+}
+
+static bool
+bl_pg_test_read(struct nfs_pageio_descriptor *pgio, struct nfs_page *prev,
+ struct nfs_page *req)
+{
+ if (pgio->pg_dreq != NULL &&
+ !is_aligned_req(req, SECTOR_SIZE))
+ return false;
+
+ return pnfs_generic_pg_test(pgio, prev, req);
+}
+
static const struct nfs_pageio_ops bl_pg_read_ops = {
- .pg_init = pnfs_generic_pg_init_read,
- .pg_test = pnfs_generic_pg_test,
+ .pg_init = bl_pg_init_read,
+ .pg_test = bl_pg_test_read,
.pg_doio = pnfs_generic_pg_readpages,
};
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Peng Tao <[email protected]>
commit fe6e1e8d9fad86873eb74a26e80a8f91f9e870b5 upstream.
If applications use flock to protect their write ranges, generic NFS
will not do a read-modify-write cycle at the page cache level. Therefore
LD should know how to handle non-sector aligned writes. Otherwise
there will be data corruption.
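As an illustration (not part of the patch), the sector rounding the
read-modify-write path relies on: the dirty byte range is widened to
sector boundaries, and the bytes outside the dirty range are read in
before the full sectors are written out. DEMO_SECTOR_SIZE and the
function name are made up for the example.
#include <stdio.h>
#define DEMO_SECTOR_SIZE 512u
/* Expand a dirty byte range within a page to whole sectors
 * (round_down/round_up as used by the partial-page read helper). */
static void sector_span(unsigned int dirty_off, unsigned int dirty_len,
			unsigned int *start, unsigned int *end)
{
	*start = dirty_off & ~(DEMO_SECTOR_SIZE - 1);
	*end = (dirty_off + dirty_len + DEMO_SECTOR_SIZE - 1) &
	       ~(DEMO_SECTOR_SIZE - 1);
}
int main(void)
{
	unsigned int start, end;
	sector_span(100, 700, &start, &end);
	printf("rmw span: [%u, %u)\n", start, end);	/* [0, 1024) */
	return 0;
}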
Signed-off-by: Peng Tao <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 177 +++++++++++++++++++++++++++++++++++---
fs/nfs/blocklayout/blocklayout.h | 1 +
2 files changed, 166 insertions(+), 12 deletions(-)
diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 7ae8a60..39fa002 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -162,25 +162,39 @@ static struct bio *bl_alloc_init_bio(int npg, sector_t isect,
return bio;
}
-static struct bio *bl_add_page_to_bio(struct bio *bio, int npg, int rw,
+static struct bio *do_add_page_to_bio(struct bio *bio, int npg, int rw,
sector_t isect, struct page *page,
struct pnfs_block_extent *be,
void (*end_io)(struct bio *, int err),
- struct parallel_io *par)
+ struct parallel_io *par,
+ unsigned int offset, int len)
{
+ isect = isect + (offset >> SECTOR_SHIFT);
+ dprintk("%s: npg %d rw %d isect %llu offset %u len %d\n", __func__,
+ npg, rw, (unsigned long long)isect, offset, len);
retry:
if (!bio) {
bio = bl_alloc_init_bio(npg, isect, be, end_io, par);
if (!bio)
return ERR_PTR(-ENOMEM);
}
- if (bio_add_page(bio, page, PAGE_CACHE_SIZE, 0) < PAGE_CACHE_SIZE) {
+ if (bio_add_page(bio, page, len, offset) < len) {
bio = bl_submit_bio(rw, bio);
goto retry;
}
return bio;
}
+static struct bio *bl_add_page_to_bio(struct bio *bio, int npg, int rw,
+ sector_t isect, struct page *page,
+ struct pnfs_block_extent *be,
+ void (*end_io)(struct bio *, int err),
+ struct parallel_io *par)
+{
+ return do_add_page_to_bio(bio, npg, rw, isect, page, be,
+ end_io, par, 0, PAGE_CACHE_SIZE);
+}
+
/* This is basically copied from mpage_end_io_read */
static void bl_end_io_read(struct bio *bio, int err)
{
@@ -450,6 +464,106 @@ map_block(struct buffer_head *bh, sector_t isect, struct pnfs_block_extent *be)
return;
}
+static void
+bl_read_single_end_io(struct bio *bio, int error)
+{
+ struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1;
+ struct page *page = bvec->bv_page;
+
+ /* Only one page in bvec */
+ unlock_page(page);
+}
+
+static int
+bl_do_readpage_sync(struct page *page, struct pnfs_block_extent *be,
+ unsigned int offset, unsigned int len)
+{
+ struct bio *bio;
+ struct page *shadow_page;
+ sector_t isect;
+ char *kaddr, *kshadow_addr;
+ int ret = 0;
+
+ dprintk("%s: offset %u len %u\n", __func__, offset, len);
+
+ shadow_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
+ if (shadow_page == NULL)
+ return -ENOMEM;
+
+ bio = bio_alloc(GFP_NOIO, 1);
+ if (bio == NULL)
+ return -ENOMEM;
+
+ isect = (page->index << PAGE_CACHE_SECTOR_SHIFT) +
+ (offset / SECTOR_SIZE);
+
+ bio->bi_sector = isect - be->be_f_offset + be->be_v_offset;
+ bio->bi_bdev = be->be_mdev;
+ bio->bi_end_io = bl_read_single_end_io;
+
+ lock_page(shadow_page);
+ if (bio_add_page(bio, shadow_page,
+ SECTOR_SIZE, round_down(offset, SECTOR_SIZE)) == 0) {
+ unlock_page(shadow_page);
+ bio_put(bio);
+ return -EIO;
+ }
+
+ submit_bio(READ, bio);
+ wait_on_page_locked(shadow_page);
+ if (unlikely(!test_bit(BIO_UPTODATE, &bio->bi_flags))) {
+ ret = -EIO;
+ } else {
+ kaddr = kmap_atomic(page);
+ kshadow_addr = kmap_atomic(shadow_page);
+ memcpy(kaddr + offset, kshadow_addr + offset, len);
+ kunmap_atomic(kshadow_addr);
+ kunmap_atomic(kaddr);
+ }
+ __free_page(shadow_page);
+ bio_put(bio);
+
+ return ret;
+}
+
+static int
+bl_read_partial_page_sync(struct page *page, struct pnfs_block_extent *be,
+ unsigned int dirty_offset, unsigned int dirty_len,
+ bool full_page)
+{
+ int ret = 0;
+ unsigned int start, end;
+
+ if (full_page) {
+ start = 0;
+ end = PAGE_CACHE_SIZE;
+ } else {
+ start = round_down(dirty_offset, SECTOR_SIZE);
+ end = round_up(dirty_offset + dirty_len, SECTOR_SIZE);
+ }
+
+ dprintk("%s: offset %u len %d\n", __func__, dirty_offset, dirty_len);
+ if (!be) {
+ zero_user_segments(page, start, dirty_offset,
+ dirty_offset + dirty_len, end);
+ if (start == 0 && end == PAGE_CACHE_SIZE &&
+ trylock_page(page)) {
+ SetPageUptodate(page);
+ unlock_page(page);
+ }
+ return ret;
+ }
+
+ if (start != dirty_offset)
+ ret = bl_do_readpage_sync(page, be, start, dirty_offset - start);
+
+ if (!ret && (dirty_offset + dirty_len < end))
+ ret = bl_do_readpage_sync(page, be, dirty_offset + dirty_len,
+ end - dirty_offset - dirty_len);
+
+ return ret;
+}
+
/* Given an unmapped page, zero it or read in page for COW, page is locked
* by caller.
*/
@@ -483,7 +597,6 @@ init_page_for_write(struct page *page, struct pnfs_block_extent *cow_read)
SetPageUptodate(page);
cleanup:
- bl_put_extent(cow_read);
if (bh)
free_buffer_head(bh);
if (ret) {
@@ -555,6 +668,7 @@ bl_write_pagelist(struct nfs_write_data *wdata, int sync)
struct parallel_io *par;
loff_t offset = wdata->args.offset;
size_t count = wdata->args.count;
+ unsigned int pg_offset, pg_len, saved_len;
struct page **pages = wdata->args.pages;
struct page *page;
pgoff_t index;
@@ -659,10 +773,11 @@ next_page:
if (!extent_length) {
/* We've used up the previous extent */
bl_put_extent(be);
+ bl_put_extent(cow_read);
bio = bl_submit_bio(WRITE, bio);
/* Get the next one */
be = bl_find_get_extent(BLK_LSEG2EXT(header->lseg),
- isect, NULL);
+ isect, &cow_read);
if (!be || !is_writable(be, isect)) {
header->pnfs_error = -EINVAL;
goto out;
@@ -679,7 +794,26 @@ next_page:
extent_length = be->be_length -
(isect - be->be_f_offset);
}
- if (be->be_state == PNFS_BLOCK_INVALID_DATA) {
+
+ dprintk("%s offset %lld count %Zu\n", __func__, offset, count);
+ pg_offset = offset & ~PAGE_CACHE_MASK;
+ if (pg_offset + count > PAGE_CACHE_SIZE)
+ pg_len = PAGE_CACHE_SIZE - pg_offset;
+ else
+ pg_len = count;
+
+ saved_len = pg_len;
+ if (be->be_state == PNFS_BLOCK_INVALID_DATA &&
+ !bl_is_sector_init(be->be_inval, isect)) {
+ ret = bl_read_partial_page_sync(pages[i], cow_read,
+ pg_offset, pg_len, true);
+ if (ret) {
+ dprintk("%s bl_read_partial_page_sync fail %d\n",
+ __func__, ret);
+ header->pnfs_error = ret;
+ goto out;
+ }
+
ret = bl_mark_sectors_init(be->be_inval, isect,
PAGE_CACHE_SECTORS);
if (unlikely(ret)) {
@@ -688,15 +822,35 @@ next_page:
header->pnfs_error = ret;
goto out;
}
+
+ /* Expand to full page write */
+ pg_offset = 0;
+ pg_len = PAGE_CACHE_SIZE;
+ } else if ((pg_offset & (SECTOR_SIZE - 1)) ||
+ (pg_len & (SECTOR_SIZE - 1))){
+ /* ahh, nasty case. We have to do sync full sector
+ * read-modify-write cycles.
+ */
+ unsigned int saved_offset = pg_offset;
+ ret = bl_read_partial_page_sync(pages[i], be, pg_offset,
+ pg_len, false);
+ pg_offset = round_down(pg_offset, SECTOR_SIZE);
+ pg_len = round_up(saved_offset + pg_len, SECTOR_SIZE)
+ - pg_offset;
}
- bio = bl_add_page_to_bio(bio, wdata->pages.npages - i, WRITE,
+
+
+ bio = do_add_page_to_bio(bio, wdata->pages.npages - i, WRITE,
isect, pages[i], be,
- bl_end_io_write, par);
+ bl_end_io_write, par,
+ pg_offset, pg_len);
if (IS_ERR(bio)) {
header->pnfs_error = PTR_ERR(bio);
bio = NULL;
goto out;
}
+ offset += saved_len;
+ count -= saved_len;
isect += PAGE_CACHE_SECTORS;
last_isect = isect;
extent_length -= PAGE_CACHE_SECTORS;
@@ -714,17 +868,16 @@ next_page:
}
write_done:
- wdata->res.count = (last_isect << SECTOR_SHIFT) - (offset);
- if (count < wdata->res.count) {
- wdata->res.count = count;
- }
+ wdata->res.count = wdata->args.count;
out:
bl_put_extent(be);
+ bl_put_extent(cow_read);
bl_submit_bio(WRITE, bio);
put_parallel(par);
return PNFS_ATTEMPTED;
out_mds:
bl_put_extent(be);
+ bl_put_extent(cow_read);
kfree(par);
return PNFS_NOT_ATTEMPTED;
}
diff --git a/fs/nfs/blocklayout/blocklayout.h b/fs/nfs/blocklayout/blocklayout.h
index 0335069..39bb51a 100644
--- a/fs/nfs/blocklayout/blocklayout.h
+++ b/fs/nfs/blocklayout/blocklayout.h
@@ -41,6 +41,7 @@
#define PAGE_CACHE_SECTORS (PAGE_CACHE_SIZE >> SECTOR_SHIFT)
#define PAGE_CACHE_SECTOR_SHIFT (PAGE_CACHE_SHIFT - SECTOR_SHIFT)
+#define SECTOR_SIZE (1 << SECTOR_SHIFT)
struct block_mount_id {
spinlock_t bm_lock; /* protects list */
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Michal Marek <[email protected]>
commit fe04ddf7c2910362f3817c8156e41cbd6c0ee35d upstream.
There were reports of users destroying their Fedora installs with a kernel
tarball that replaces the /lib -> /usr/lib symlink. Let's remove the
toplevel directories from the tarball to prevent this from happening.
Reported-by: Andi Kleen <[email protected]>
Suggested-by: Ben Hutchings <[email protected]>
Signed-off-by: Michal Marek <[email protected]>
[ herton: dropped unrelated changes to arch/x86/Makefile and
scripts/Makefile.fwinst, which don't apply anyway on 3.5, see commit
3ce9e53e788881da0d5f3912f80e0dd6b501f304 upstream ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
scripts/package/buildtar | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/package/buildtar b/scripts/package/buildtar
index 8a7b155..d0d748e 100644
--- a/scripts/package/buildtar
+++ b/scripts/package/buildtar
@@ -109,7 +109,7 @@ esac
if tar --owner=root --group=root --help >/dev/null 2>&1; then
opts="--owner=root --group=root"
fi
- tar cf - . $opts | ${compress} > "${tarball}${file_ext}"
+ tar cf - boot/* lib/* $opts | ${compress} > "${tarball}${file_ext}"
)
echo "Tarball successfully created in ${tarball}${file_ext}"
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger <[email protected]>
commit f25590f39d543272f7ae7b00d533359c8d7ff331 upstream.
This patch adds a missing iscsi_reject->ffffffff assignment within
iscsit_send_reject() code to properly follow RFC-3720 Section 10.17
Bytes 16 -> 19 for the PDU format definition of ISCSI_OP_REJECT.
We've not seen any initiators care about this bytes in practice, but
as Ronnie reported this was causing trouble with wireshark packet
decoding lets go ahead and fix this up now.
Reported-by: Ronnie Sahlberg <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/target/iscsi/iscsi_target.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index d7dcd67..d3114d1 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -3422,6 +3422,7 @@ static int iscsit_send_reject(
hdr->opcode = ISCSI_OP_REJECT;
hdr->flags |= ISCSI_FLAG_CMD_FINAL;
hton24(hdr->dlength, ISCSI_HDR_LEN);
+ hdr->ffffffff = 0xffffffff;
cmd->stat_sn = conn->stat_sn++;
hdr->statsn = cpu_to_be32(cmd->stat_sn);
hdr->exp_cmdsn = cpu_to_be32(conn->sess->exp_cmd_sn);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Tao Ma <[email protected]>
commit bef53b01faeb791e27605cba1a71ba21364cb23e upstream.
The update_backups() function is used to backup all the metadata
blocks, so we should not take it for granted that 'data' is pointed to
a super block and use ext4_superblock_csum_set to calculate the
checksum there. In case where the data is a group descriptor block,
it will corrupt the last group descriptor, and then e2fsck will
complain about it it.
As all the metadata checksums should already be OK when we do the
backup, remove the wrong ext4_superblock_csum_set and it should be
just fine.
Reported-by: "Theodore Ts'o" <[email protected]>
Signed-off-by: Tao Ma <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
[ herton: adjust context ]
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ext4/resize.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
index b0bdd10..dc1affc 100644
--- a/fs/ext4/resize.c
+++ b/fs/ext4/resize.c
@@ -979,8 +979,6 @@ static void update_backups(struct super_block *sb,
goto exit_err;
}
- ext4_superblock_csum_set(sb, (struct ext4_super_block *)data);
-
while ((group = ext4_list_backups(sb, &three, &five, &seven)) < last) {
struct buffer_head *bh;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Tejun Heo <[email protected]>
commit 749fefe6778e98dfefe3b8bb72a93875196ec554 upstream.
b82d4b197c ("blkcg: make request_queue bypassing on allocation") made
request_queues bypassed on allocation to avoid switching on and off
bypass mode on a queue being initialized. Some drivers allocate and
then destroy a lot of queues without fully initializing them and
incurring bypass latency overhead on each of them could add up to
significant overhead.
Unfortunately, blk_init_allocated_queue() is never used by queues of
bio-based drivers, which means that all bio-based driver queues are in
bypass mode even after initialization and registration complete
successfully.
Due to the limited way request_queues are used by bio drivers, this
problem is hidden pretty well but it shows up when blk-throttle is
used in combination with a bio-based driver. Trying to configure
(echoing to cgroupfs file) blk-throttle for a bio-based driver hangs
indefinitely in blkg_conf_prep() waiting for bypass mode to end.
This patch moves the initial blk_queue_bypass_end() call from
blk_init_allocated_queue() to blk_register_queue(), which is called for
any userland-visible queue regardless of its type.
I believe this is correct because I don't think there is any block
driver which needs or wants working elevator and blk-cgroup on a queue
which isn't visible to userland. If there are such users, we need a
different solution.
Signed-off-by: Tejun Heo <[email protected]>
Reported-by: Joseph Glanville <[email protected]>
Acked-by: Vivek Goyal <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
block/blk-core.c | 7 ++-----
block/blk-sysfs.c | 6 ++++++
2 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index ad39394..96335a7 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -598,8 +598,8 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
/*
* A queue starts its life with bypass turned on to avoid
* unnecessary bypass on/off overhead and nasty surprises during
- * init. The initial bypass will be finished at the end of
- * blk_init_allocated_queue().
+ * init. The initial bypass will be finished when the queue is
+ * registered by blk_register_queue().
*/
q->bypass_depth = 1;
__set_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);
@@ -702,9 +702,6 @@ blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
/* init elevator */
if (elevator_init(q, NULL))
return NULL;
-
- /* all done, end the initial bypass */
- blk_queue_bypass_end(q);
return q;
}
EXPORT_SYMBOL(blk_init_allocated_queue);
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index aa41b47..be7edfc 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -523,6 +523,12 @@ int blk_register_queue(struct gendisk *disk)
if (WARN_ON(!q))
return -ENXIO;
+ /*
+ * Initialization must be complete by now. Finish the initial
+ * bypass from queue allocation.
+ */
+ blk_queue_bypass_end(q);
+
ret = blk_trace_init_sysfs(dev);
if (ret)
return ret;
--
1.7.9.5
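To make the ordering argument above easier to follow, here is a rough sketch
of how a bio-based driver typically sets up its queue on a 3.5-era kernel;
the sketch_* names are invented for illustration. Nothing on this path goes
through blk_init_allocated_queue(), which is why the initial bypass was never
ended for such drivers and the call has to move to blk_register_queue(),
reached via add_disk().

#include <linux/blkdev.h>
#include <linux/genhd.h>

/* Illustrative bio-based setup path; sketch_* names are made up. */
static void sketch_make_request(struct request_queue *q, struct bio *bio)
{
	/* Handle the bio directly; no request/elevator machinery involved. */
	bio_endio(bio, 0);
}

static int sketch_setup(void)
{
	struct request_queue *q;
	struct gendisk *disk;

	q = blk_alloc_queue(GFP_KERNEL);	/* queue starts life bypassed */
	if (!q)
		return -ENOMEM;
	blk_queue_make_request(q, sketch_make_request);

	disk = alloc_disk(1);
	if (!disk) {
		blk_cleanup_queue(q);
		return -ENOMEM;
	}
	disk->queue = q;
	/* ... set major, first_minor, fops, disk_name ... */
	add_disk(disk);		/* -> blk_register_queue(), which with this
				 * patch finally ends the initial bypass */
	return 0;
}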
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet <[email protected]>
commit 3d861f661006606bf159fd6bd973e83dbf21d0f9 upstream.
Mike Kazantsev found 3.5 kernels and beyond were leaking memory,
and tracked the faulty commit down to a1c7fff7e18f59e ("net:
netdev_alloc_skb() use build_skb()").
While that commit seems fine, it uncovered a bug introduced in
commit bad43ca8325 ("net: introduce skb_try_coalesce()"), in the
function kfree_skb_partial():
If the head is stolen, we free the sk_buff without removing the
reference held on the secpath (skb->sp).
So IPsec + IP defrag/reassembly (using skb coalescing), or
TCP coalescing could leak secpath objects.
Fix this bug by calling skb_release_head_state(skb) to properly
release all possible references to linked objects.
Reported-by: Mike Kazantsev <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Bisected-by: Mike Kazantsev <[email protected]>
Tested-by: Mike Kazantsev <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
net/core/skbuff.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index d124306..015f3a7 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3350,10 +3350,12 @@ EXPORT_SYMBOL(__skb_warn_lro_forwarding);
void kfree_skb_partial(struct sk_buff *skb, bool head_stolen)
{
- if (head_stolen)
+ if (head_stolen) {
+ skb_release_head_state(skb);
kmem_cache_free(skbuff_head_cache, skb);
- else
+ } else {
__kfree_skb(skb);
+ }
}
EXPORT_SYMBOL(kfree_skb_partial);
--
1.7.9.5
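For readers less familiar with the failure mode, the bug boils down to
freeing a container object while forgetting a reference it holds. The
userspace sketch below is made up purely to illustrate that pattern (it is
not kernel code); the fixed variant mirrors why skb_release_head_state() has
to run before the sk_buff itself is freed.

#include <stdlib.h>

struct secpath_like { int refcount; };            /* stands in for skb->sp */
struct skb_like     { struct secpath_like *sp; };

static void secpath_put(struct secpath_like *sp)
{
	if (sp && --sp->refcount == 0)
		free(sp);
}

/* Buggy shape: the container is freed, the held reference never dropped. */
static void free_skb_leaky(struct skb_like *skb)
{
	free(skb);                  /* skb->sp leaks */
}

/* Fixed shape, analogous to calling skb_release_head_state() first. */
static void free_skb_fixed(struct skb_like *skb)
{
	secpath_put(skb->sp);       /* drop the held reference first */
	free(skb);
}

int main(void)
{
	struct secpath_like *sp = calloc(1, sizeof(*sp));
	struct skb_like *skb = calloc(1, sizeof(*skb));

	if (!sp || !skb)
		return 1;
	sp->refcount = 1;
	skb->sp = sp;
	free_skb_fixed(skb);        /* swap in free_skb_leaky() and run under
				     * a leak checker to see the difference */
	return 0;
}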
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Gabor Juhos <[email protected]>
commit 97541ccfb9db2bb9cd1dde6344d5834438d14bda upstream.
Besides the CPU and DDR PLLs, the CPU and DDR frequencies
can be derived from other PLLs in the SRIF block on the
AR934x SoCs. The current code does not check whether the SRIF
PLLs are used, which can lead to incorrectly calculated
CPU/DDR frequencies.
Fix it by calculating the frequencies from the SRIF PLLs when
they are used on a given board.
Signed-off-by: Gabor Juhos <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/4324/
Signed-off-by: Ralf Baechle <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
arch/mips/ath79/clock.c | 109 ++++++++++++++++++------
arch/mips/include/asm/mach-ath79/ar71xx_regs.h | 23 +++++
2 files changed, 104 insertions(+), 28 deletions(-)
diff --git a/arch/mips/ath79/clock.c b/arch/mips/ath79/clock.c
index d272857..579f452 100644
--- a/arch/mips/ath79/clock.c
+++ b/arch/mips/ath79/clock.c
@@ -17,6 +17,8 @@
#include <linux/err.h>
#include <linux/clk.h>
+#include <asm/div64.h>
+
#include <asm/mach-ath79/ath79.h>
#include <asm/mach-ath79/ar71xx_regs.h>
#include "common.h"
@@ -166,11 +168,34 @@ static void __init ar933x_clocks_init(void)
ath79_uart_clk.rate = ath79_ref_clk.rate;
}
+static u32 __init ar934x_get_pll_freq(u32 ref, u32 ref_div, u32 nint, u32 nfrac,
+ u32 frac, u32 out_div)
+{
+ u64 t;
+ u32 ret;
+
+ t = ath79_ref_clk.rate;
+ t *= nint;
+ do_div(t, ref_div);
+ ret = t;
+
+ t = ath79_ref_clk.rate;
+ t *= nfrac;
+ do_div(t, ref_div * frac);
+ ret += t;
+
+ ret /= (1 << out_div);
+ return ret;
+}
+
static void __init ar934x_clocks_init(void)
{
- u32 pll, out_div, ref_div, nint, frac, clk_ctrl, postdiv;
+ u32 pll, out_div, ref_div, nint, nfrac, frac, clk_ctrl, postdiv;
u32 cpu_pll, ddr_pll;
u32 bootstrap;
+ void __iomem *dpll_base;
+
+ dpll_base = ioremap(AR934X_SRIF_BASE, AR934X_SRIF_SIZE);
bootstrap = ath79_reset_rr(AR934X_RESET_REG_BOOTSTRAP);
if (bootstrap & AR934X_BOOTSTRAP_REF_CLK_40)
@@ -178,33 +203,59 @@ static void __init ar934x_clocks_init(void)
else
ath79_ref_clk.rate = 25 * 1000 * 1000;
- pll = ath79_pll_rr(AR934X_PLL_CPU_CONFIG_REG);
- out_div = (pll >> AR934X_PLL_CPU_CONFIG_OUTDIV_SHIFT) &
- AR934X_PLL_CPU_CONFIG_OUTDIV_MASK;
- ref_div = (pll >> AR934X_PLL_CPU_CONFIG_REFDIV_SHIFT) &
- AR934X_PLL_CPU_CONFIG_REFDIV_MASK;
- nint = (pll >> AR934X_PLL_CPU_CONFIG_NINT_SHIFT) &
- AR934X_PLL_CPU_CONFIG_NINT_MASK;
- frac = (pll >> AR934X_PLL_CPU_CONFIG_NFRAC_SHIFT) &
- AR934X_PLL_CPU_CONFIG_NFRAC_MASK;
-
- cpu_pll = nint * ath79_ref_clk.rate / ref_div;
- cpu_pll += frac * ath79_ref_clk.rate / (ref_div * (1 << 6));
- cpu_pll /= (1 << out_div);
-
- pll = ath79_pll_rr(AR934X_PLL_DDR_CONFIG_REG);
- out_div = (pll >> AR934X_PLL_DDR_CONFIG_OUTDIV_SHIFT) &
- AR934X_PLL_DDR_CONFIG_OUTDIV_MASK;
- ref_div = (pll >> AR934X_PLL_DDR_CONFIG_REFDIV_SHIFT) &
- AR934X_PLL_DDR_CONFIG_REFDIV_MASK;
- nint = (pll >> AR934X_PLL_DDR_CONFIG_NINT_SHIFT) &
- AR934X_PLL_DDR_CONFIG_NINT_MASK;
- frac = (pll >> AR934X_PLL_DDR_CONFIG_NFRAC_SHIFT) &
- AR934X_PLL_DDR_CONFIG_NFRAC_MASK;
-
- ddr_pll = nint * ath79_ref_clk.rate / ref_div;
- ddr_pll += frac * ath79_ref_clk.rate / (ref_div * (1 << 10));
- ddr_pll /= (1 << out_div);
+ pll = __raw_readl(dpll_base + AR934X_SRIF_CPU_DPLL2_REG);
+ if (pll & AR934X_SRIF_DPLL2_LOCAL_PLL) {
+ out_div = (pll >> AR934X_SRIF_DPLL2_OUTDIV_SHIFT) &
+ AR934X_SRIF_DPLL2_OUTDIV_MASK;
+ pll = __raw_readl(dpll_base + AR934X_SRIF_CPU_DPLL1_REG);
+ nint = (pll >> AR934X_SRIF_DPLL1_NINT_SHIFT) &
+ AR934X_SRIF_DPLL1_NINT_MASK;
+ nfrac = pll & AR934X_SRIF_DPLL1_NFRAC_MASK;
+ ref_div = (pll >> AR934X_SRIF_DPLL1_REFDIV_SHIFT) &
+ AR934X_SRIF_DPLL1_REFDIV_MASK;
+ frac = 1 << 18;
+ } else {
+ pll = ath79_pll_rr(AR934X_PLL_CPU_CONFIG_REG);
+ out_div = (pll >> AR934X_PLL_CPU_CONFIG_OUTDIV_SHIFT) &
+ AR934X_PLL_CPU_CONFIG_OUTDIV_MASK;
+ ref_div = (pll >> AR934X_PLL_CPU_CONFIG_REFDIV_SHIFT) &
+ AR934X_PLL_CPU_CONFIG_REFDIV_MASK;
+ nint = (pll >> AR934X_PLL_CPU_CONFIG_NINT_SHIFT) &
+ AR934X_PLL_CPU_CONFIG_NINT_MASK;
+ nfrac = (pll >> AR934X_PLL_CPU_CONFIG_NFRAC_SHIFT) &
+ AR934X_PLL_CPU_CONFIG_NFRAC_MASK;
+ frac = 1 << 6;
+ }
+
+ cpu_pll = ar934x_get_pll_freq(ath79_ref_clk.rate, ref_div, nint,
+ nfrac, frac, out_div);
+
+ pll = __raw_readl(dpll_base + AR934X_SRIF_DDR_DPLL2_REG);
+ if (pll & AR934X_SRIF_DPLL2_LOCAL_PLL) {
+ out_div = (pll >> AR934X_SRIF_DPLL2_OUTDIV_SHIFT) &
+ AR934X_SRIF_DPLL2_OUTDIV_MASK;
+ pll = __raw_readl(dpll_base + AR934X_SRIF_DDR_DPLL1_REG);
+ nint = (pll >> AR934X_SRIF_DPLL1_NINT_SHIFT) &
+ AR934X_SRIF_DPLL1_NINT_MASK;
+ nfrac = pll & AR934X_SRIF_DPLL1_NFRAC_MASK;
+ ref_div = (pll >> AR934X_SRIF_DPLL1_REFDIV_SHIFT) &
+ AR934X_SRIF_DPLL1_REFDIV_MASK;
+ frac = 1 << 18;
+ } else {
+ pll = ath79_pll_rr(AR934X_PLL_DDR_CONFIG_REG);
+ out_div = (pll >> AR934X_PLL_DDR_CONFIG_OUTDIV_SHIFT) &
+ AR934X_PLL_DDR_CONFIG_OUTDIV_MASK;
+ ref_div = (pll >> AR934X_PLL_DDR_CONFIG_REFDIV_SHIFT) &
+ AR934X_PLL_DDR_CONFIG_REFDIV_MASK;
+ nint = (pll >> AR934X_PLL_DDR_CONFIG_NINT_SHIFT) &
+ AR934X_PLL_DDR_CONFIG_NINT_MASK;
+ nfrac = (pll >> AR934X_PLL_DDR_CONFIG_NFRAC_SHIFT) &
+ AR934X_PLL_DDR_CONFIG_NFRAC_MASK;
+ frac = 1 << 10;
+ }
+
+ ddr_pll = ar934x_get_pll_freq(ath79_ref_clk.rate, ref_div, nint,
+ nfrac, frac, out_div);
clk_ctrl = ath79_pll_rr(AR934X_PLL_CPU_DDR_CLK_CTRL_REG);
@@ -240,6 +291,8 @@ static void __init ar934x_clocks_init(void)
ath79_wdt_clk.rate = ath79_ref_clk.rate;
ath79_uart_clk.rate = ath79_ref_clk.rate;
+
+ iounmap(dpll_base);
}
void __init ath79_clocks_init(void)
diff --git a/arch/mips/include/asm/mach-ath79/ar71xx_regs.h b/arch/mips/include/asm/mach-ath79/ar71xx_regs.h
index 1caa78a..b682422 100644
--- a/arch/mips/include/asm/mach-ath79/ar71xx_regs.h
+++ b/arch/mips/include/asm/mach-ath79/ar71xx_regs.h
@@ -63,6 +63,8 @@
#define AR934X_WMAC_BASE (AR71XX_APB_BASE + 0x00100000)
#define AR934X_WMAC_SIZE 0x20000
+#define AR934X_SRIF_BASE (AR71XX_APB_BASE + 0x00116000)
+#define AR934X_SRIF_SIZE 0x1000
/*
* DDR_CTRL block
@@ -398,4 +400,25 @@
#define AR933X_GPIO_COUNT 30
#define AR934X_GPIO_COUNT 23
+/*
+ * SRIF block
+ */
+#define AR934X_SRIF_CPU_DPLL1_REG 0x1c0
+#define AR934X_SRIF_CPU_DPLL2_REG 0x1c4
+#define AR934X_SRIF_CPU_DPLL3_REG 0x1c8
+
+#define AR934X_SRIF_DDR_DPLL1_REG 0x240
+#define AR934X_SRIF_DDR_DPLL2_REG 0x244
+#define AR934X_SRIF_DDR_DPLL3_REG 0x248
+
+#define AR934X_SRIF_DPLL1_REFDIV_SHIFT 27
+#define AR934X_SRIF_DPLL1_REFDIV_MASK 0x1f
+#define AR934X_SRIF_DPLL1_NINT_SHIFT 18
+#define AR934X_SRIF_DPLL1_NINT_MASK 0x1ff
+#define AR934X_SRIF_DPLL1_NFRAC_MASK 0x0003ffff
+
+#define AR934X_SRIF_DPLL2_LOCAL_PLL BIT(30)
+#define AR934X_SRIF_DPLL2_OUTDIV_SHIFT 13
+#define AR934X_SRIF_DPLL2_OUTDIV_MASK 0x7
+
#endif /* __ASM_MACH_AR71XX_REGS_H */
--
1.7.9.5
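To make the arithmetic in ar934x_get_pll_freq() above concrete, here is a
standalone userspace sketch of the same formula; the register values in
main() are made up for illustration and do not come from any real AR934x
board.

#include <stdint.h>
#include <stdio.h>

/* freq = ref*nint/ref_div + ref*nfrac/(ref_div*frac), then divided by
 * 2^out_div -- the same shape as ar934x_get_pll_freq() in the patch. */
static uint32_t pll_freq_sketch(uint32_t ref, uint32_t ref_div, uint32_t nint,
				uint32_t nfrac, uint32_t frac, uint32_t out_div)
{
	uint64_t t;
	uint32_t ret;

	t = (uint64_t)ref * nint;
	ret = t / ref_div;                       /* integer part    */

	t = (uint64_t)ref * nfrac;
	ret += t / ((uint64_t)ref_div * frac);   /* fractional part */

	return ret / (1u << out_div);            /* output divider  */
}

int main(void)
{
	/* Made-up example: 40 MHz reference, nint = 14, no fractional part,
	 * ref_div = 1, out_div = 0 -> prints 560000000 Hz. */
	printf("%u Hz\n", pll_freq_sketch(40 * 1000 * 1000, 1, 14, 0, 1 << 6, 0));
	return 0;
}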
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Tyler Hicks <[email protected]>
commit 64e6651dcc10e9d2cc6230208a8e6c2cfd19ae18 upstream.
Since eCryptfs only calls fput() on the lower file in
ecryptfs_release(), eCryptfs should call the lower filesystem's
->flush() from ecryptfs_flush().
If the lower filesystem implements ->flush(), then eCryptfs should try
to flush out any dirty pages prior to calling the lower ->flush(). If
the lower filesystem does not implement ->flush(), then eCryptfs has no
need to do anything in ecryptfs_flush() since dirty pages are now
written out to the lower filesystem in ecryptfs_release().
Signed-off-by: Tyler Hicks <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ecryptfs/file.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/fs/ecryptfs/file.c b/fs/ecryptfs/file.c
index 44ce5c6..d45ba45 100644
--- a/fs/ecryptfs/file.c
+++ b/fs/ecryptfs/file.c
@@ -275,8 +275,14 @@ out:
static int ecryptfs_flush(struct file *file, fl_owner_t td)
{
- return file->f_mode & FMODE_WRITE
- ? filemap_write_and_wait(file->f_mapping) : 0;
+ struct file *lower_file = ecryptfs_file_to_lower(file);
+
+ if (lower_file->f_op && lower_file->f_op->flush) {
+ filemap_write_and_wait(file->f_mapping);
+ return lower_file->f_op->flush(lower_file, td);
+ }
+
+ return 0;
}
static int ecryptfs_release(struct inode *inode, struct file *file)
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Tyler Hicks <[email protected]>
commit 7149f2558d5b5b988726662fe58b1c388337805b upstream.
Fixes a regression caused by:
821f749 eCryptfs: Revert to a writethrough cache model
That patch reverted some code (specifically, 32001d6f) that was
necessary to properly handle open() -> mmap() -> close() -> dirty pages
-> munmap(), because the lower file could be closed before the dirty
pages are written out.
Rather than reapplying 32001d6f, this approach is a better way of
ensuring that the lower file is still open in order to handle writing
out the dirty pages. It is called from ecryptfs_release(), while we have
a lock on the lower file pointer, just before the lower file gets the
final fput() and we overwrite the pointer.
https://launchpad.net/bugs/1047261
Signed-off-by: Tyler Hicks <[email protected]>
Reported-by: Artemy Tregubenko <[email protected]>
Tested-by: Artemy Tregubenko <[email protected]>
Tested-by: Colin Ian King <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ecryptfs/main.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/ecryptfs/main.c b/fs/ecryptfs/main.c
index df217dc..c2a9c39 100644
--- a/fs/ecryptfs/main.c
+++ b/fs/ecryptfs/main.c
@@ -162,6 +162,7 @@ void ecryptfs_put_lower_file(struct inode *inode)
inode_info = ecryptfs_inode_to_private(inode);
if (atomic_dec_and_mutex_lock(&inode_info->lower_file_count,
&inode_info->lower_file_mutex)) {
+ filemap_write_and_wait(inode->i_mapping);
fput(inode_info->lower_file);
inode_info->lower_file = NULL;
mutex_unlock(&inode_info->lower_file_mutex);
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Tyler Hicks <[email protected]>
commit e3ccaa9761200952cc269b1f4b7d7bb77a5e071b upstream.
Historically, eCryptfs has only initialized lower files in the
ecryptfs_create() path. Lower file initialization is the act of writing
the cryptographic metadata from the inode's crypt_stat to the header of
the file. The ecryptfs_open() path already expects that metadata to be
in the header of the file.
A number of users have reported empty lower files beneath their
eCryptfs mounts. Most of the causes for those empty files being left
around have been addressed, but the presence of empty files causes
problems due to the lack of proper cryptographic metadata.
To transparently solve this problem, this patch initializes empty lower
files in the ecryptfs_open() error path. If the metadata is unreadable
due to the lower inode size being 0, plaintext passthrough support is
not in use, and the metadata is stored in the header of the file (as
opposed to the user.ecryptfs extended attribute), the lower file will be
initialized.
The number of nested conditionals in ecryptfs_open() was getting out of
hand, so a helper function was created. To avoid the same nested
conditional problem, the conditional logic was reversed inside the
helper function.
https://launchpad.net/bugs/911507
Signed-off-by: Tyler Hicks <[email protected]>
Cc: John Johansen <[email protected]>
Cc: Colin Ian King <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
fs/ecryptfs/ecryptfs_kernel.h | 2 ++
fs/ecryptfs/file.c | 71 ++++++++++++++++++++++++++---------------
fs/ecryptfs/inode.c | 4 +--
3 files changed, 49 insertions(+), 28 deletions(-)
diff --git a/fs/ecryptfs/ecryptfs_kernel.h b/fs/ecryptfs/ecryptfs_kernel.h
index 867b64c..56e3aa5 100644
--- a/fs/ecryptfs/ecryptfs_kernel.h
+++ b/fs/ecryptfs/ecryptfs_kernel.h
@@ -568,6 +568,8 @@ struct ecryptfs_open_req {
struct inode *ecryptfs_get_inode(struct inode *lower_inode,
struct super_block *sb);
void ecryptfs_i_size_init(const char *page_virt, struct inode *inode);
+int ecryptfs_initialize_file(struct dentry *ecryptfs_dentry,
+ struct inode *ecryptfs_inode);
int ecryptfs_decode_and_decrypt_filename(char **decrypted_name,
size_t *decrypted_name_size,
struct dentry *ecryptfs_dentry,
diff --git a/fs/ecryptfs/file.c b/fs/ecryptfs/file.c
index 49fc575..44ce5c6 100644
--- a/fs/ecryptfs/file.c
+++ b/fs/ecryptfs/file.c
@@ -140,6 +140,48 @@ out:
struct kmem_cache *ecryptfs_file_info_cache;
+static int read_or_initialize_metadata(struct dentry *dentry)
+{
+ struct inode *inode = dentry->d_inode;
+ struct ecryptfs_mount_crypt_stat *mount_crypt_stat;
+ struct ecryptfs_crypt_stat *crypt_stat;
+ int rc;
+
+ crypt_stat = &ecryptfs_inode_to_private(inode)->crypt_stat;
+ mount_crypt_stat = &ecryptfs_superblock_to_private(
+ inode->i_sb)->mount_crypt_stat;
+ mutex_lock(&crypt_stat->cs_mutex);
+
+ if (crypt_stat->flags & ECRYPTFS_POLICY_APPLIED &&
+ crypt_stat->flags & ECRYPTFS_KEY_VALID) {
+ rc = 0;
+ goto out;
+ }
+
+ rc = ecryptfs_read_metadata(dentry);
+ if (!rc)
+ goto out;
+
+ if (mount_crypt_stat->flags & ECRYPTFS_PLAINTEXT_PASSTHROUGH_ENABLED) {
+ crypt_stat->flags &= ~(ECRYPTFS_I_SIZE_INITIALIZED
+ | ECRYPTFS_ENCRYPTED);
+ rc = 0;
+ goto out;
+ }
+
+ if (!(mount_crypt_stat->flags & ECRYPTFS_XATTR_METADATA_ENABLED) &&
+ !i_size_read(ecryptfs_inode_to_lower(inode))) {
+ rc = ecryptfs_initialize_file(dentry, inode);
+ if (!rc)
+ goto out;
+ }
+
+ rc = -EIO;
+out:
+ mutex_unlock(&crypt_stat->cs_mutex);
+ return rc;
+}
+
/**
* ecryptfs_open
* @inode: inode speciying file to open
@@ -215,32 +257,9 @@ static int ecryptfs_open(struct inode *inode, struct file *file)
rc = 0;
goto out;
}
- mutex_lock(&crypt_stat->cs_mutex);
- if (!(crypt_stat->flags & ECRYPTFS_POLICY_APPLIED)
- || !(crypt_stat->flags & ECRYPTFS_KEY_VALID)) {
- rc = ecryptfs_read_metadata(ecryptfs_dentry);
- if (rc) {
- ecryptfs_printk(KERN_DEBUG,
- "Valid headers not found\n");
- if (!(mount_crypt_stat->flags
- & ECRYPTFS_PLAINTEXT_PASSTHROUGH_ENABLED)) {
- rc = -EIO;
- printk(KERN_WARNING "Either the lower file "
- "is not in a valid eCryptfs format, "
- "or the key could not be retrieved. "
- "Plaintext passthrough mode is not "
- "enabled; returning -EIO\n");
- mutex_unlock(&crypt_stat->cs_mutex);
- goto out_put;
- }
- rc = 0;
- crypt_stat->flags &= ~(ECRYPTFS_I_SIZE_INITIALIZED
- | ECRYPTFS_ENCRYPTED);
- mutex_unlock(&crypt_stat->cs_mutex);
- goto out;
- }
- }
- mutex_unlock(&crypt_stat->cs_mutex);
+ rc = read_or_initialize_metadata(ecryptfs_dentry);
+ if (rc)
+ goto out_put;
ecryptfs_printk(KERN_DEBUG, "inode w/ addr = [0x%p], i_ino = "
"[0x%.16lx] size: [0x%.16llx]\n", inode, inode->i_ino,
(unsigned long long)i_size_read(inode));
diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c
index b01c7a9..68166ea 100644
--- a/fs/ecryptfs/inode.c
+++ b/fs/ecryptfs/inode.c
@@ -200,8 +200,8 @@ out:
*
* Returns zero on success
*/
-static int ecryptfs_initialize_file(struct dentry *ecryptfs_dentry,
- struct inode *ecryptfs_inode)
+int ecryptfs_initialize_file(struct dentry *ecryptfs_dentry,
+ struct inode *ecryptfs_inode)
{
struct ecryptfs_crypt_stat *crypt_stat =
&ecryptfs_inode_to_private(ecryptfs_inode)->crypt_stat;
--
1.7.9.5
3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Fabio Porcedda <[email protected]>
commit 9c6d196d5aa35e07482f23c3e37755e7a82140e0 upstream.
Don't fail the initialization check for the platform_data
if an associated device tree node is available.
Signed-off-by: Fabio Porcedda <[email protected]>
Signed-off-by: Felipe Balbi <[email protected]>
Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
---
drivers/usb/gadget/at91_udc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/usb/gadget/at91_udc.c b/drivers/usb/gadget/at91_udc.c
index 22865dd..aad16ac 100644
--- a/drivers/usb/gadget/at91_udc.c
+++ b/drivers/usb/gadget/at91_udc.c
@@ -1698,7 +1698,7 @@ static int __devinit at91udc_probe(struct platform_device *pdev)
int retval;
struct resource *res;
- if (!dev->platform_data) {
+ if (!dev->platform_data && !pdev->dev.of_node) {
/* small (so we copy it) but critical! */
DBG("missing platform_data\n");
return -ENODEV;
--
1.7.9.5
On Mon, Nov 26, 2012 at 10:02:09AM -0800, H. Peter Anvin wrote:
> On 11/26/2012 08:57 AM, Herton Ronaldo Krzesinski wrote:
> > 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
>
> What kind of version number is that?
This is an "extended stable" tree, a fork from the last 3.5 stable
update (3.5 isn't maintained anymore by stable upstream). Thus
it seemed better to just follow last released 3.5 stable version, and
append an extraversion to it, which I chose in the end to be u<n>
(u == update). It's unlikely that other 3.5 stable upstream update will
be done, in any case I expect this way to avoid any issue.
>
> -hpa
>
>
--
[]'s
Herton
On 11/26/2012 12:08 PM, Herton Ronaldo Krzesinski wrote:
> On Mon, Nov 26, 2012 at 10:02:09AM -0800, H. Peter Anvin wrote:
>> On 11/26/2012 08:57 AM, Herton Ronaldo Krzesinski wrote:
>>> 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
>>
>> What kind of version number is that?
>
> This is an "extended stable" tree, a fork from the last 3.5 stable
> update (3.5 isn't maintained anymore by stable upstream). Thus
> it seemed better to just follow last released 3.5 stable version, and
> append an extraversion to it, which I chose in the end to be u<n>
> (u == update). It's unlikely that other 3.5 stable upstream update will
> be done, in any case I expect this way to avoid any issue.
>
Why not 3.5.7.1 sticking with the pre-established version numbering scheme?
-hpa
On Mon, Nov 26, 2012 at 12:09:52PM -0800, H. Peter Anvin wrote:
> On 11/26/2012 12:08 PM, Herton Ronaldo Krzesinski wrote:
> > On Mon, Nov 26, 2012 at 10:02:09AM -0800, H. Peter Anvin wrote:
> >> On 11/26/2012 08:57 AM, Herton Ronaldo Krzesinski wrote:
> >>> 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
> >>
> >> What kind of version number is that?
> >
> > This is an "extended stable" tree, a fork from the last 3.5 stable
> > update (3.5 isn't maintained anymore by stable upstream). Thus
> > it seemed better to just follow last released 3.5 stable version, and
> > append an extraversion to it, which I chose in the end to be u<n>
> > (u == update). It's unlikely that other 3.5 stable upstream update will
> > be done, in any case I expect this way to avoid any issue.
> >
>
> Why not 3.5.7.1 sticking with the pre-established version numbering scheme?
That looks better indeed. Since this is the first release, I'll switch
to that scheme.
>
> -hpa
>
>
--
[]'s
Herton
On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Gavin Shan <[email protected]>
>
> commit feadf7c0a1a7c08c74bebb4a13b755f8c40e3bbc upstream.
>
> The EEH core is talking with the PCI device driver to determine the
> action (purely reset, or PCI device removal). During the period, the
> driver might be unloaded and in turn causes kernel crash as follows:
>
> EEH: Detected PCI bus error on PHB#4-PE#10000
> EEH: This PCI device has failed 3 times in the last hour
> lpfc 0004:01:00.0: 0:2710 PCI channel disable preparing for reset
> Unable to handle kernel paging request for data at address 0x00000490
> Faulting instruction address: 0xd00000000e682c90
> cpu 0x1: Vector: 300 (Data Access) at [c000000fc75ffa20]
> pc: d00000000e682c90: .lpfc_io_error_detected+0x30/0x240 [lpfc]
> lr: d00000000e682c8c: .lpfc_io_error_detected+0x2c/0x240 [lpfc]
> sp: c000000fc75ffca0
> msr: 8000000000009032
> dar: 490
> dsisr: 40000000
> current = 0xc000000fc79b88b0
> paca = 0xc00000000edb0380 softe: 0 irq_happened: 0x00
> pid = 3386, comm = eehd
> enter ? for help
> [c000000fc75ffca0] c000000fc75ffd30 (unreliable)
> [c000000fc75ffd30] c00000000004fd3c .eeh_report_error+0x7c/0xf0
> [c000000fc75ffdc0] c00000000004ee00 .eeh_pe_dev_traverse+0xa0/0x180
> [c000000fc75ffe70] c00000000004ffd8 .eeh_handle_event+0x68/0x300
> [c000000fc75fff00] c0000000000503a0 .eeh_event_handler+0x130/0x1a0
> [c000000fc75fff90] c000000000020138 .kernel_thread+0x54/0x70
> 1:mon>
>
> The patch increases the reference of the corresponding driver modules
> while EEH core does the negotiation with PCI device driver so that the
> corresponding driver modules can't be unloaded during the period and
> we're safe to refer the callbacks.
>
> Reported-by: Alexey Kardashevskiy <[email protected]>
> Signed-off-by: Gavin Shan <[email protected]>
> Signed-off-by: Benjamin Herrenschmidt <[email protected]>
> [ herton: backported for 3.5, adjusted driver assignments, return 0
> instead of NULL, assume dev is not NULL ]
> Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
[...]
Greg, you probably want this in 3.4 and 3.6.
Ben.
--
Ben Hutchings
Never attribute to conspiracy what can adequately be explained by stupidity.
On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Tao Ma <[email protected]>
>
> commit bef53b01faeb791e27605cba1a71ba21364cb23e upstream.
>
> The update_backups() function is used to backup all the metadata
> blocks, so we should not take it for granted that 'data' is pointed to
> a super block and use ext4_superblock_csum_set to calculate the
> checksum there. In case where the data is a group descriptor block,
> it will corrupt the last group descriptor, and then e2fsck will
> complain about it it.
>
> As all the metadata checksums should already be OK when we do the
> backup, remove the wrong ext4_superblock_csum_set and it should be
> just fine.
>
> Reported-by: "Theodore Ts'o" <[email protected]>
> Signed-off-by: Tao Ma <[email protected]>
> Signed-off-by: "Theodore Ts'o" <[email protected]>
> [ herton: adjust context ]
> Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
This should also be applicable to 3.6.
Ben.
> ---
> fs/ext4/resize.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
> index b0bdd10..dc1affc 100644
> --- a/fs/ext4/resize.c
> +++ b/fs/ext4/resize.c
> @@ -979,8 +979,6 @@ static void update_backups(struct super_block *sb,
> goto exit_err;
> }
>
> - ext4_superblock_csum_set(sb, (struct ext4_super_block *)data);
> -
> while ((group = ext4_list_backups(sb, &three, &five, &seven)) < last) {
> struct buffer_head *bh;
>
--
Ben Hutchings
Never attribute to conspiracy what can adequately be explained by stupidity.
On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Michal Marek <[email protected]>
>
> commit fe04ddf7c2910362f3817c8156e41cbd6c0ee35d upstream.
>
> There were reports of users destroying their Fedora installs by a kernel
> tarball that replaces the /lib -> /usr/lib symlink. Let's remove the
> toplevel directories from the tarball to prevent this from happening.
>
> Reported-by: Andi Kleen <[email protected]>
> Suggested-by: Ben Hutchings <[email protected]>
> Signed-off-by: Michal Marek <[email protected]>
> [ herton: dropped unrelated changes to arch/x86/Makefile and
> scripts/Makefile.fwinst, which don't apply anyway on 3.5, see commit
> 3ce9e53e788881da0d5f3912f80e0dd6b501f304 upstream ]
> Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
This is missing from 3.4.
Ben.
> ---
> scripts/package/buildtar | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/scripts/package/buildtar b/scripts/package/buildtar
> index 8a7b155..d0d748e 100644
> --- a/scripts/package/buildtar
> +++ b/scripts/package/buildtar
> @@ -109,7 +109,7 @@ esac
> if tar --owner=root --group=root --help >/dev/null 2>&1; then
> opts="--owner=root --group=root"
> fi
> - tar cf - . $opts | ${compress} > "${tarball}${file_ext}"
> + tar cf - boot/* lib/* $opts | ${compress} > "${tarball}${file_ext}"
> )
>
> echo "Tarball successfully created in ${tarball}${file_ext}"
--
Ben Hutchings
Never attribute to conspiracy what can adequately be explained by stupidity.
On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Peng Tao <[email protected]>
>
> commit fe6e1e8d9fad86873eb74a26e80a8f91f9e870b5 upstream.
>
> If applications use flock to protect its write range, generic NFS
> will not do read-modify-write cycle at page cache level. Therefore
> LD should know how to handle non-sector aligned writes. Otherwise
> there will be data corruption.
>
> Signed-off-by: Peng Tao <[email protected]>
> Signed-off-by: Trond Myklebust <[email protected]>
> Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
[...]
I notice that this fix is missing from 3.4, and will need backporting.
Ben.
--
Ben Hutchings
Never attribute to conspiracy what can adequately be explained by stupidity.
On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Peng Tao <[email protected]>
>
> commit f742dc4a32587bff50b13dde9d8894b96851951a upstream.
>
> For DIO read, if it is not sector aligned, we should reject it
> and resend via MDS. Otherwise there might be data corruption.
> Also teach bl_read_pagelist to handle partial page reads for DIO.
>
> Signed-off-by: Peng Tao <[email protected]>
> Signed-off-by: Trond Myklebust <[email protected]>
> Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
[...]
This also hasn't been applied to 3.4, and presumably needs backporting.
Ben.
--
Ben Hutchings
Never attribute to conspiracy what can adequately be explained by stupidity.
On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Jaehoon Chung <[email protected]>
>
> commit 5feb54a1ab91a237e247c013b8c4fb100ea347b1 upstream.
>
> We can use up to four bus-clocks; but on module remove, we didn't
> disable the fourth bus clock.
>
> Signed-off-by: Jaehoon Chung <[email protected]>
> Signed-off-by: Kyungmin Park <[email protected]>
> Signed-off-by: Chris Ball <[email protected]>
> Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
This is missing from 3.4 and 3.6 (but not 3.2); it apparently needed its
context adjusted.
Ben.
> ---
> drivers/mmc/host/sdhci-s3c.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/mmc/host/sdhci-s3c.c b/drivers/mmc/host/sdhci-s3c.c
> index a50c205..02b7a4a 100644
> --- a/drivers/mmc/host/sdhci-s3c.c
> +++ b/drivers/mmc/host/sdhci-s3c.c
> @@ -656,7 +656,7 @@ static int __devexit sdhci_s3c_remove(struct platform_device *pdev)
>
> pm_runtime_disable(&pdev->dev);
>
> - for (ptr = 0; ptr < 3; ptr++) {
> + for (ptr = 0; ptr < MAX_BUS_CLK; ptr++) {
> if (sc->clk_bus[ptr]) {
> clk_disable(sc->clk_bus[ptr]);
> clk_put(sc->clk_bus[ptr]);
--
Ben Hutchings
Never attribute to conspiracy what can adequately be explained by stupidity.
On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Colin Cross <[email protected]>
>
> commit 9d7d6e363b06934221b81a859d509844c97380df upstream.
>
> read_persistent_clock uses a global variable, use a spinlock to
> ensure non-atomic updates to the variable don't overlap and cause
> time to move backwards.
>
> Signed-off-by: Colin Cross <[email protected]>
> Signed-off-by: R Sricharan <[email protected]>
> Signed-off-by: Tony Lindgren <[email protected]>
> Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
[...]
This is also missing from 3.4. I'm attaching the adjusted version for
3.2, which looks like it will work for 3.4.
Ben.
--
Ben Hutchings
Never attribute to conspiracy what can adequately be explained by stupidity.
On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Dan Carpenter <[email protected]>
>
> commit 5b3900cd409466c0070b234d941650685ad0c791 upstream.
>
> We fixed a bunch of integer overflows in timekeeping code during the 3.6
> cycle. I did an audit based on that and found this potential overflow.
>
> Signed-off-by: Dan Carpenter <[email protected]>
> Acked-by: John Stultz <[email protected]>
> Link: http://lkml.kernel.org/r/[email protected]
> Signed-off-by: Thomas Gleixner <[email protected]>
> [ herton: adapt for 3.5, timekeeper instead of tk pointer ]
> Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
This is also missing from 3.4; looks like Herton's version is
applicable.
Ben.
> ---
> kernel/time/timekeeping.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
> index 63c88c1..8954990 100644
> --- a/kernel/time/timekeeping.c
> +++ b/kernel/time/timekeeping.c
> @@ -1012,7 +1012,7 @@ static cycle_t logarithmic_accumulation(cycle_t offset, int shift)
> }
>
> /* Accumulate raw time */
> - raw_nsecs = timekeeper.raw_interval << shift;
> + raw_nsecs = (u64)timekeeper.raw_interval << shift;
> raw_nsecs += timekeeper.raw_time.tv_nsec;
> if (raw_nsecs >= NSEC_PER_SEC) {
> u64 raw_secs = raw_nsecs;
--
Ben Hutchings
Never attribute to conspiracy what can adequately be explained by stupidity.
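For anyone wondering why that one-character cast matters, here is a minimal
userspace demonstration of the truncation it prevents; the interval value and
shift below are made up purely to show the effect and are not real timekeeper
numbers.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t raw_interval = 0x20000000;	/* made-up 32-bit interval */
	int shift = 4;

	/* The shift happens in 32 bits, so high bits are lost before the
	 * (already truncated) result is widened to 64 bits. */
	uint64_t without_cast = raw_interval << shift;

	/* Widen first, then shift: no bits are lost. */
	uint64_t with_cast = (uint64_t)raw_interval << shift;

	printf("without cast: %" PRIu64 "\n", without_cast);	/* 0          */
	printf("with cast:    %" PRIu64 "\n", with_cast);	/* 8589934592 */
	return 0;
}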
On Mon, 2012-11-26 at 14:57 -0200, Herton Ronaldo Krzesinski wrote:
> 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Stanislav Yakovlev <[email protected]>
>
> commit bf11315eeda510ea4fc1a2bf972d8155d31d89b4 upstream.
>
> The driver does not count space of radiotap fields when allocating skb for
> radiotap packet. This leads to kernel panic with the following call trace:
>
> ...
> [67607.676067] [<c152f90f>] error_code+0x67/0x6c
> [67607.676067] [<c142f831>] ? skb_put+0x91/0xa0
> [67607.676067] [<f8cf5e5b>] ? ipw_handle_promiscuous_tx+0x16b/0x2d0 [ipw2200]
> [67607.676067] [<f8cf5e5b>] ipw_handle_promiscuous_tx+0x16b/0x2d0 [ipw2200]
> [67607.676067] [<f8cf899b>] ipw_net_hard_start_xmit+0x8b/0x90 [ipw2200]
> [67607.676067] [<f8741c5a>] libipw_xmit+0x55a/0x980 [libipw]
> [67607.676067] [<c143d3e8>] dev_hard_start_xmit+0x218/0x4d0
> ...
>
> This bug was found by VittGam.
> https://bugzilla.kernel.org/show_bug.cgi?id=43255
>
> Signed-off-by: Stanislav Yakovlev <[email protected]>
> Signed-off-by: John W. Linville <[email protected]>
> Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
This is missing from 3.4; it may just need de-fuzzing to apply.
Ben.
> ---
> drivers/net/wireless/ipw2x00/ipw2200.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/net/wireless/ipw2x00/ipw2200.c b/drivers/net/wireless/ipw2x00/ipw2200.c
> index 0036737..1f2edf2 100644
> --- a/drivers/net/wireless/ipw2x00/ipw2200.c
> +++ b/drivers/net/wireless/ipw2x00/ipw2200.c
> @@ -10470,7 +10470,7 @@ static void ipw_handle_promiscuous_tx(struct ipw_priv *priv,
> } else
> len = src->len;
>
> - dst = alloc_skb(len + sizeof(*rt_hdr), GFP_ATOMIC);
> + dst = alloc_skb(len + sizeof(*rt_hdr) + sizeof(u16)*2, GFP_ATOMIC);
> if (!dst)
> continue;
>
--
Ben Hutchings
Never attribute to conspiracy what can adequately be explained by stupidity.
On Mon, 2012-11-26 at 14:57 -0200, Herton Ronaldo Krzesinski wrote:
> 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Johannes Berg <[email protected]>
>
> commit 8f7b8db6e0557c8437adf9371e020cd89a7e85dc upstream.
>
> The channel switch command for 6000 series devices
> is larger than the maximum inline command size of
> 320 bytes. The command is therefore refused with a
> warning. Fix this by allocating the command and
> using the NOCOPY mechanism.
>
> Reviewed-by: Emmanuel Grumbach <[email protected]>
> Signed-off-by: Johannes Berg <[email protected]>
> [ herton: file name is different on 3.5, code differs a little bit at
> the end, adjusted context ]
> Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
Also missing from 3.4; the filename is different again
(drivers/net/wireless/iwlwifi/iwl-6000.c) but this should otherwise be
applicable with one line of fuzz at the end.
Ben.
> ---
> drivers/net/wireless/iwlwifi/iwl-agn-devices.c | 39 +++++++++++++++---------
> 1 file changed, 24 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-devices.c b/drivers/net/wireless/iwlwifi/iwl-agn-devices.c
> index 48533b3..8ab0a7c 100644
> --- a/drivers/net/wireless/iwlwifi/iwl-agn-devices.c
> +++ b/drivers/net/wireless/iwlwifi/iwl-agn-devices.c
> @@ -653,7 +653,7 @@ static int iwl6000_hw_channel_switch(struct iwl_priv *priv,
> * See iwlagn_mac_channel_switch.
> */
> struct iwl_rxon_context *ctx = &priv->contexts[IWL_RXON_CTX_BSS];
> - struct iwl6000_channel_switch_cmd cmd;
> + struct iwl6000_channel_switch_cmd *cmd;
> const struct iwl_channel_info *ch_info;
> u32 switch_time_in_usec, ucode_switch_time;
> u16 ch;
> @@ -663,18 +663,25 @@ static int iwl6000_hw_channel_switch(struct iwl_priv *priv,
> struct ieee80211_vif *vif = ctx->vif;
> struct iwl_host_cmd hcmd = {
> .id = REPLY_CHANNEL_SWITCH,
> - .len = { sizeof(cmd), },
> + .len = { sizeof(*cmd), },
> .flags = CMD_SYNC,
> - .data = { &cmd, },
> + .dataflags[0] = IWL_HCMD_DFL_NOCOPY,
> };
> + int err;
>
> - cmd.band = priv->band == IEEE80211_BAND_2GHZ;
> + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
> + if (!cmd)
> + return -ENOMEM;
> +
> + hcmd.data[0] = cmd;
> +
> + cmd->band = priv->band == IEEE80211_BAND_2GHZ;
> ch = ch_switch->channel->hw_value;
> IWL_DEBUG_11H(priv, "channel switch from %u to %u\n",
> ctx->active.channel, ch);
> - cmd.channel = cpu_to_le16(ch);
> - cmd.rxon_flags = ctx->staging.flags;
> - cmd.rxon_filter_flags = ctx->staging.filter_flags;
> + cmd->channel = cpu_to_le16(ch);
> + cmd->rxon_flags = ctx->staging.flags;
> + cmd->rxon_filter_flags = ctx->staging.filter_flags;
> switch_count = ch_switch->count;
> tsf_low = ch_switch->timestamp & 0x0ffffffff;
> /*
> @@ -690,30 +697,32 @@ static int iwl6000_hw_channel_switch(struct iwl_priv *priv,
> switch_count = 0;
> }
> if (switch_count <= 1)
> - cmd.switch_time = cpu_to_le32(priv->ucode_beacon_time);
> + cmd->switch_time = cpu_to_le32(priv->ucode_beacon_time);
> else {
> switch_time_in_usec =
> vif->bss_conf.beacon_int * switch_count * TIME_UNIT;
> ucode_switch_time = iwl_usecs_to_beacons(priv,
> switch_time_in_usec,
> beacon_interval);
> - cmd.switch_time = iwl_add_beacon_time(priv,
> - priv->ucode_beacon_time,
> - ucode_switch_time,
> - beacon_interval);
> + cmd->switch_time = iwl_add_beacon_time(priv,
> + priv->ucode_beacon_time,
> + ucode_switch_time,
> + beacon_interval);
> }
> IWL_DEBUG_11H(priv, "uCode time for the switch is 0x%x\n",
> - cmd.switch_time);
> + cmd->switch_time);
> ch_info = iwl_get_channel_info(priv, priv->band, ch);
> if (ch_info)
> - cmd.expect_beacon = is_channel_radar(ch_info);
> + cmd->expect_beacon = is_channel_radar(ch_info);
> else {
> IWL_ERR(priv, "invalid channel switch from %u to %u\n",
> ctx->active.channel, ch);
> return -EFAULT;
> }
>
> - return iwl_dvm_send_cmd(priv, &hcmd);
> + err = iwl_dvm_send_cmd(priv, &hcmd);
> + kfree(cmd);
> + return err;
> }
>
> struct iwl_lib_ops iwl6000_lib = {
--
Ben Hutchings
Never attribute to conspiracy what can adequately be explained by stupidity.
On Mon, 2012-11-26 at 14:58 -0200, Herton Ronaldo Krzesinski wrote:
> 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Johan Hovold <[email protected]>
>
> commit 5260e458f5eff269a43e4f1e9c47186c57b88ddb upstream.
>
> Make sure generic close is called at close.
>
> The driver relies on the generic write implementation but did not call
> generic close.
>
> Note that the call to kill the read urb is not redundant, as mct_u232
> uses an interrupt urb from the second port as the read urb and that
> generic close therefore fails to kill it.
>
> Compile-only tested.
>
> Signed-off-by: Johan Hovold <[email protected]>
> Signed-off-by: Greg Kroah-Hartman <[email protected]>
> Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
[...]
This is missing on 3.4; the version I used for 3.2 (attached) should be
applicable.
Ben.
--
Ben Hutchings
Never attribute to conspiracy what can adequately be explained by stupidity.
On Tue, Nov 27, 2012 at 04:15:51PM +0000, Ben Hutchings wrote:
> On Mon, 2012-11-26 at 14:58 -0200, Herton Ronaldo Krzesinski wrote:
> > 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Johan Hovold <[email protected]>
> >
> > commit 5260e458f5eff269a43e4f1e9c47186c57b88ddb upstream.
> >
> > Make sure generic close is called at close.
> >
> > The driver relies on the generic write implementation but did not call
> > generic close.
> >
> > Note that the call to kill the read urb is not redundant, as mct_u232
> > uses an interrupt urb from the second port as the read urb and that
> > generic close therefore fails to kill it.
> >
> > Compile-only tested.
> >
> > Signed-off-by: Johan Hovold <[email protected]>
> > Signed-off-by: Greg Kroah-Hartman <[email protected]>
> > Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
> [...]
>
> This is missing on 3.4; the version I used for 3.2 (attached) should be
> applicable.
Thanks, that works.
greg k-h
On Tue, Nov 27, 2012 at 02:22:19AM +0000, Ben Hutchings wrote:
> On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> > 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Tao Ma <[email protected]>
> >
> > commit bef53b01faeb791e27605cba1a71ba21364cb23e upstream.
> >
> > The update_backups() function is used to backup all the metadata
> > blocks, so we should not take it for granted that 'data' is pointed to
> > a super block and use ext4_superblock_csum_set to calculate the
> > checksum there. In case where the data is a group descriptor block,
> > it will corrupt the last group descriptor, and then e2fsck will
> > complain about it it.
> >
> > As all the metadata checksums should already be OK when we do the
> > backup, remove the wrong ext4_superblock_csum_set and it should be
> > just fine.
> >
> > Reported-by: "Theodore Ts'o" <[email protected]>
> > Signed-off-by: Tao Ma <[email protected]>
> > Signed-off-by: "Theodore Ts'o" <[email protected]>
> > [ herton: adjust context ]
> > Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
>
> This should also be applicable to 3.6.
Thanks, it didn't apply there, so I didn't take it, deferring to the ext4
developers, who seem to be a bit busy :)
greg k-h
On Tue, Nov 27, 2012 at 02:26:27AM +0000, Ben Hutchings wrote:
> On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> > 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Michal Marek <[email protected]>
> >
> > commit fe04ddf7c2910362f3817c8156e41cbd6c0ee35d upstream.
> >
> > There were reports of users destroying their Fedora installs by a kernel
> > tarball that replaces the /lib -> /usr/lib symlink. Let's remove the
> > toplevel directories from the tarball to prevent this from happening.
> >
> > Reported-by: Andi Kleen <[email protected]>
> > Suggested-by: Ben Hutchings <[email protected]>
> > Signed-off-by: Michal Marek <[email protected]>
> > [ herton: dropped unrelated changes to arch/x86/Makefile and
> > scripts/Makefile.fwinst, which don't apply anyway on 3.5, see commit
> > 3ce9e53e788881da0d5f3912f80e0dd6b501f304 upstream ]
> > Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
>
> This is missing from 3.4.
I don't think it is needed, as 3ce9e53e788881da0d5f3912f80e0dd6b501f304
didn't go into 3.4, so all should be good for now.
thanks,
greg k-h
On Tue, Nov 27, 2012 at 02:18:34AM +0000, Ben Hutchings wrote:
> On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> > 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Gavin Shan <[email protected]>
> >
> > commit feadf7c0a1a7c08c74bebb4a13b755f8c40e3bbc upstream.
> >
> > The EEH core is talking with the PCI device driver to determine the
> > action (purely reset, or PCI device removal). During the period, the
> > driver might be unloaded and in turn causes kernel crash as follows:
> >
> > EEH: Detected PCI bus error on PHB#4-PE#10000
> > EEH: This PCI device has failed 3 times in the last hour
> > lpfc 0004:01:00.0: 0:2710 PCI channel disable preparing for reset
> > Unable to handle kernel paging request for data at address 0x00000490
> > Faulting instruction address: 0xd00000000e682c90
> > cpu 0x1: Vector: 300 (Data Access) at [c000000fc75ffa20]
> > pc: d00000000e682c90: .lpfc_io_error_detected+0x30/0x240 [lpfc]
> > lr: d00000000e682c8c: .lpfc_io_error_detected+0x2c/0x240 [lpfc]
> > sp: c000000fc75ffca0
> > msr: 8000000000009032
> > dar: 490
> > dsisr: 40000000
> > current = 0xc000000fc79b88b0
> > paca = 0xc00000000edb0380 softe: 0 irq_happened: 0x00
> > pid = 3386, comm = eehd
> > enter ? for help
> > [c000000fc75ffca0] c000000fc75ffd30 (unreliable)
> > [c000000fc75ffd30] c00000000004fd3c .eeh_report_error+0x7c/0xf0
> > [c000000fc75ffdc0] c00000000004ee00 .eeh_pe_dev_traverse+0xa0/0x180
> > [c000000fc75ffe70] c00000000004ffd8 .eeh_handle_event+0x68/0x300
> > [c000000fc75fff00] c0000000000503a0 .eeh_event_handler+0x130/0x1a0
> > [c000000fc75fff90] c000000000020138 .kernel_thread+0x54/0x70
> > 1:mon>
> >
> > The patch increases the reference of the corresponding driver modules
> > while EEH core does the negotiation with PCI device driver so that the
> > corresponding driver modules can't be unloaded during the period and
> > we're safe to refer the callbacks.
> >
> > Reported-by: Alexey Kardashevskiy <[email protected]>
> > Signed-off-by: Gavin Shan <[email protected]>
> > Signed-off-by: Benjamin Herrenschmidt <[email protected]>
> > [ herton: backported for 3.5, adjusted driver assignments, return 0
> > instead of NULL, assume dev is not NULL ]
> > Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
> [...]
>
> Greg, you probably want this in 3.4 and 3.6.
Many thanks. Herton, any reason why you didn't forward on this
backported version of the patch?
greg k-h
On Tue, Nov 27, 2012 at 02:33:24AM +0000, Ben Hutchings wrote:
> On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> > 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Peng Tao <[email protected]>
> >
> > commit fe6e1e8d9fad86873eb74a26e80a8f91f9e870b5 upstream.
> >
> > If applications use flock to protect its write range, generic NFS
> > will not do read-modify-write cycle at page cache level. Therefore
> > LD should know how to handle non-sector aligned writes. Otherwise
> > there will be data corruption.
> >
> > Signed-off-by: Peng Tao <[email protected]>
> > Signed-off-by: Trond Myklebust <[email protected]>
> > Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
> [...]
>
> I notice that this fix is missing from 3.4, and will need backporting.
I don't trust myself with backporting this, as I got it wrong. So if
one of the NFS developers wants to do this (same goes for the other NFS
patch), I'll gladly take it.
thanks,
greg k-h
On Tue, Nov 27, 2012 at 03:00:13PM +0000, Ben Hutchings wrote:
> On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> > 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Jaehoon Chung <[email protected]>
> >
> > commit 5feb54a1ab91a237e247c013b8c4fb100ea347b1 upstream.
> >
> > We can use up to four bus-clocks; but on module remove, we didn't
> > disable the fourth bus clock.
> >
> > Signed-off-by: Jaehoon Chung <[email protected]>
> > Signed-off-by: Kyungmin Park <[email protected]>
> > Signed-off-by: Chris Ball <[email protected]>
> > Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
>
> This is missing from 3.4 and 3.6 (but not 3.2); it apparently needed its
> context adjusted.
Thanks, now backported.
greg k-h
On Tue, Nov 27, 2012 at 03:05:20PM +0000, Ben Hutchings wrote:
> On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> > 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Colin Cross <[email protected]>
> >
> > commit 9d7d6e363b06934221b81a859d509844c97380df upstream.
> >
> > read_persistent_clock uses a global variable, use a spinlock to
> > ensure non-atomic updates to the variable don't overlap and cause
> > time to move backwards.
> >
> > Signed-off-by: Colin Cross <[email protected]>
> > Signed-off-by: R Sricharan <[email protected]>
> > Signed-off-by: Tony Lindgren <[email protected]>
> > Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
> [...]
>
> This is also missing from 3.4. I'm attaching the adjusted version for
> 3.2, which looks like it will work for 3.4.
Thanks for the patch, now applied.
greg k-h
On Tue, Nov 27, 2012 at 03:08:27PM +0000, Ben Hutchings wrote:
> On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> > 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Dan Carpenter <[email protected]>
> >
> > commit 5b3900cd409466c0070b234d941650685ad0c791 upstream.
> >
> > We fixed a bunch of integer overflows in timekeeping code during the 3.6
> > cycle. I did an audit based on that and found this potential overflow.
> >
> > Signed-off-by: Dan Carpenter <[email protected]>
> > Acked-by: John Stultz <[email protected]>
> > Link: http://lkml.kernel.org/r/[email protected]
> > Signed-off-by: Thomas Gleixner <[email protected]>
> > [ herton: adapt for 3.5, timekeeper instead of tk pointer ]
> > Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
>
> This is also missing from 3.4; looks like Herton's version is
> applicable.
Thanks, now applied.
greg k-h
On Tue, Nov 27, 2012 at 03:58:51PM +0000, Ben Hutchings wrote:
> On Mon, 2012-11-26 at 14:57 -0200, Herton Ronaldo Krzesinski wrote:
> > 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Stanislav Yakovlev <[email protected]>
> >
> > commit bf11315eeda510ea4fc1a2bf972d8155d31d89b4 upstream.
> >
> > The driver does not count space of radiotap fields when allocating skb for
> > radiotap packet. This leads to kernel panic with the following call trace:
> >
> > ...
> > [67607.676067] [<c152f90f>] error_code+0x67/0x6c
> > [67607.676067] [<c142f831>] ? skb_put+0x91/0xa0
> > [67607.676067] [<f8cf5e5b>] ? ipw_handle_promiscuous_tx+0x16b/0x2d0 [ipw2200]
> > [67607.676067] [<f8cf5e5b>] ipw_handle_promiscuous_tx+0x16b/0x2d0 [ipw2200]
> > [67607.676067] [<f8cf899b>] ipw_net_hard_start_xmit+0x8b/0x90 [ipw2200]
> > [67607.676067] [<f8741c5a>] libipw_xmit+0x55a/0x980 [libipw]
> > [67607.676067] [<c143d3e8>] dev_hard_start_xmit+0x218/0x4d0
> > ...
> >
> > This bug was found by VittGam.
> > https://bugzilla.kernel.org/show_bug.cgi?id=43255
> >
> > Signed-off-by: Stanislav Yakovlev <[email protected]>
> > Signed-off-by: John W. Linville <[email protected]>
> > Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
>
> This is missing from 3.4; it may just need de-fuzzing to apply.
Odd, it applies just fine, I wonder how I missed it.
thanks,
greg k-h
On Tue, Nov 27, 2012 at 04:02:38PM +0000, Ben Hutchings wrote:
> On Mon, 2012-11-26 at 14:57 -0200, Herton Ronaldo Krzesinski wrote:
> > 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Johannes Berg <[email protected]>
> >
> > commit 8f7b8db6e0557c8437adf9371e020cd89a7e85dc upstream.
> >
> > The channel switch command for 6000 series devices
> > is larger than the maximum inline command size of
> > 320 bytes. The command is therefore refused with a
> > warning. Fix this by allocating the command and
> > using the NOCOPY mechanism.
> >
> > Reviewed-by: Emmanuel Grumbach <[email protected]>
> > Signed-off-by: Johannes Berg <[email protected]>
> > [ herton: file name is different on 3.5, code differs a little bit at
> > the end, adjusted context ]
> > Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
>
> Also missing from 3.4; the filename is different again
> (drivers/net/wireless/iwlwifi/iwl-6000.c) but this should otherwise be
> applicable with one line of fuzz at the end.
Thanks, I forced it in by hand.
greg k-h
On Thu, 2012-11-29 at 17:38 -0800, Greg Kroah-Hartman wrote:
> On Tue, Nov 27, 2012 at 02:26:27AM +0000, Ben Hutchings wrote:
> > On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> > > 3.5.7u1 -stable review patch. If anyone has any objections, please let me know.
> > >
> > > ------------------
> > >
> > > From: Michal Marek <[email protected]>
> > >
> > > commit fe04ddf7c2910362f3817c8156e41cbd6c0ee35d upstream.
> > >
> > > There were reports of users destroying their Fedora installs by a kernel
> > > tarball that replaces the /lib -> /usr/lib symlink. Let's remove the
> > > toplevel directories from the tarball to prevent this from happening.
> > >
> > > Reported-by: Andi Kleen <[email protected]>
> > > Suggested-by: Ben Hutchings <[email protected]>
> > > Signed-off-by: Michal Marek <[email protected]>
> > > [ herton: dropped unrelated changes to arch/x86/Makefile and
> > > scripts/Makefile.fwinst, which don't apply anyway on 3.5, see commit
> > > 3ce9e53e788881da0d5f3912f80e0dd6b501f304 upstream ]
> > > Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
> >
> > This is missing from 3.4.
>
> I don't think it is needed, as 3ce9e53e788881da0d5f3912f80e0dd6b501f304
> didn't go into 3.4, so all should be good for now.
No, 3ce9e53e788881da0d5f3912f80e0dd6b501f304 was later and reverted
unintended changes in fe04ddf7c2910362f3817c8156e41cbd6c0ee35d. You
should probably combine the two.
See these stable commits:
3.2: 0767530 kbuild: Do not package /boot and /lib in make tar-pkg
3.6: 4bb50fa kbuild: Do not package /boot and /lib in make tar-pkg
3.6: 0a7f602 kbuild: Fix accidental revert in commit fe04ddf
Ben.
--
Ben Hutchings
Never attribute to conspiracy what can adequately be explained by stupidity.
Greg Kroah-Hartman wrote:
> On Tue, Nov 27, 2012 at 02:26:27AM +0000, Ben Hutchings wrote:
>> On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
>>> commit fe04ddf7c2910362f3817c8156e41cbd6c0ee35d upstream.
>>>
>>> There were reports of users destroying their Fedora installs by a kernel
>>> tarball that replaces the /lib -> /usr/lib symlink. Let's remove the
>>> toplevel directories from the tarball to prevent this from happening.
>>>
>>> Reported-by: Andi Kleen <[email protected]>
>>> Suggested-by: Ben Hutchings <[email protected]>
>>> Signed-off-by: Michal Marek <[email protected]>
>>> [ herton: dropped unrelated changes to arch/x86/Makefile and
>>> scripts/Makefile.fwinst, which don't apply anyway on 3.5, see commit
>>> 3ce9e53e788881da0d5f3912f80e0dd6b501f304 upstream ]
>>> Signed-off-by: Herton Ronaldo Krzesinski <[email protected]>
>>
>> This is missing from 3.4.
>
> I don't think it is needed, as 3ce9e53e788881da0d5f3912f80e0dd6b501f304
> didn't go into 3.4, so all should be good for now.
The dependency's the other way around. Herton's comment above means
to say that 3ce9e53 is being squashed in as a fixup.
Thanks,
Jonathan
> -----Original Message-----
> From: Greg Kroah-Hartman [mailto:[email protected]]
> Sent: Thursday, November 29, 2012 8:46 PM
> To: Ben Hutchings
> Cc: Peng Tao; Myklebust, Trond; [email protected];
> [email protected]; [email protected]; Herton Ronaldo
> Krzesinski
> Subject: Re: [PATCH 041/270] pnfsblock: fix partial page buffer wirte
>
> On Tue, Nov 27, 2012 at 02:33:24AM +0000, Ben Hutchings wrote:
> > On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> > > 3.5.7u1 -stable review patch. If anyone has any objections, please let me
> know.
> > >
> > > ------------------
> > >
> > > From: Peng Tao <[email protected]>
> > >
> > > commit fe6e1e8d9fad86873eb74a26e80a8f91f9e870b5 upstream.
> > >
> > > If applications use flock to protect its write range, generic NFS
> > > will not do read-modify-write cycle at page cache level. Therefore
> > > LD should know how to handle non-sector aligned writes. Otherwise
> > > there will be data corruption.
> > >
> > > Signed-off-by: Peng Tao <[email protected]>
> > > Signed-off-by: Trond Myklebust <[email protected]>
> > > Signed-off-by: Herton Ronaldo Krzesinski
> > > <[email protected]>
> > [...]
> >
> > I notice that this fix is missing from 3.4, and will need backporting.
>
> I don't trust myself with backporting this, as I got it wrong. So if one of the
> NFS developers wants to do this (same goes for the other NFS patch), I'll
> gladly take it.
>
Tao would be the right person to do this since he has access to pNFS blocks hardware and can test the results.
Cheers
Trond
Sorry for the late response. I will get it done over the weekend. The only one that needs backporting is the patch below. The two DIO patches are not necessary because 3.4 NFS DIO does not support pNFS yet.
Thanks,
Tao
________________________________________
From: Myklebust, Trond [[email protected]]
Sent: Friday, November 30, 2012 9:48 PM
To: Greg Kroah-Hartman; Ben Hutchings
Cc: Peng, Tao; [email protected]; [email protected]; [email protected]; Herton Ronaldo Krzesinski
Subject: RE: [PATCH 041/270] pnfsblock: fix partial page buffer wirte
> -----Original Message-----
> From: Greg Kroah-Hartman [mailto:[email protected]]
> Sent: Thursday, November 29, 2012 8:46 PM
> To: Ben Hutchings
> Cc: Peng Tao; Myklebust, Trond; [email protected];
> [email protected]; [email protected]; Herton Ronaldo
> Krzesinski
> Subject: Re: [PATCH 041/270] pnfsblock: fix partial page buffer wirte
>
> On Tue, Nov 27, 2012 at 02:33:24AM +0000, Ben Hutchings wrote:
> > On Mon, 2012-11-26 at 14:55 -0200, Herton Ronaldo Krzesinski wrote:
> > > 3.5.7u1 -stable review patch. If anyone has any objections, please let me
> know.
> > >
> > > ------------------
> > >
> > > From: Peng Tao <[email protected]>
> > >
> > > commit fe6e1e8d9fad86873eb74a26e80a8f91f9e870b5 upstream.
> > >
> > > If applications use flock to protect its write range, generic NFS
> > > will not do read-modify-write cycle at page cache level. Therefore
> > > LD should know how to handle non-sector aligned writes. Otherwise
> > > there will be data corruption.
> > >
> > > Signed-off-by: Peng Tao <[email protected]>
> > > Signed-off-by: Trond Myklebust <[email protected]>
> > > Signed-off-by: Herton Ronaldo Krzesinski
> > > <[email protected]>
> > [...]
> >
> > I notice that this fix is missing from 3.4, and will need backporting.
>
> I don't trust myself with backporting this, as I got it wrong. So if one of the
> NFS developers wants to do this (same goes for the other NFS patch), I'll
> gladly take it.
>
Tao would be the right person to do this since he has access to pNFS blocks hardware and can test the results.
Cheers
Trond