This is the start of the stable review cycle for the 3.18.58 release.
There are 32 patches in this series; all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Wed Jun 21 15:20:24 UTC 2017.
Anything received after that time might be too late.
The whole patch series can be found in one patch at:
kernel.org/pub/linux/kernel/v3.x/stable-review/patch-3.18.58-rc1.gz
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-3.18.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <[email protected]>
Linux 3.18.58-rc1
Hugh Dickins <[email protected]>
mm: larger stack guard gap, between vmas
Thomas Gleixner <[email protected]>
alarmtimer: Rate limit periodic intervals
Heiner Kallweit <[email protected]>
genirq: Release resources in __setup_irq() error path
Yu Zhao <[email protected]>
swap: cond_resched in swap_cgroup_prepare()
James Morse <[email protected]>
mm/memory-failure.c: use compound_head() flags for huge pages
Corentin Labbe <[email protected]>
usb: xhci: ASMedia ASM1042A chipset need shorts TX quirk
Dan Carpenter <[email protected]>
drivers/misc/c2port/c2port-duramar2150.c: checking for NULL instead of IS_ERR()
Chris Brandt <[email protected]>
usb: r8a66597-hcd: decrease timeout
Chris Brandt <[email protected]>
usb: r8a66597-hcd: select a different endpoint on timeout
Johan Hovold <[email protected]>
USB: gadget: dummy_hcd: fix hub-descriptor removable fields
Arnd Bergmann <[email protected]>
pvrusb2: reduce stack usage pvr2_eeprom_analyze()
Anton Bondarenko <[email protected]>
usb: core: fix potential memory leak in error path during hcd creation
Johan Hovold <[email protected]>
USB: hub: fix SS max number of ports
Matt Ranostay <[email protected]>
iio: proximity: as3935: recalibrate RCO after resume
Dan Carpenter <[email protected]>
staging: rtl8188eu: prevent an underflow in rtw_check_beacon_data()
Tony Lindgren <[email protected]>
mfd: omap-usb-tll: Fix inverted bit use for USB TLL mode
Laura Abbott <[email protected]>
x86/mm/32: Set the '__vmalloc_start_set' flag in initmem_init()
Christophe JAILLET <[email protected]>
serial: efm32: Fix parity management in 'efm32_uart_console_get_options()'
Emmanuel Grumbach <[email protected]>
mac80211: don't look at the PM bit of BAR frames
Christophe JAILLET <[email protected]>
vb2: Fix an off by one error in 'vb2_plane_vaddr'
Tomasz Wilczyński <[email protected]>
cpufreq: conservative: Allow down_threshold to take values from 1 to 10
Marc Kleine-Budde <[email protected]>
can: gs_usb: fix memory leak in gs_cmd_reset()
Nicholas Bellinger <[email protected]>
configfs: Fix race between create_link and configfs_rmdir
Dan Carpenter <[email protected]>
sparc64: make string buffers large enough
Ard Biesheuvel <[email protected]>
log2: make order_base_2() behave correctly on const input value zero
Jonathan T. Leighton <[email protected]>
ipv6: Inhibit IPv4-mapped src address on the wire.
Jonathan T. Leighton <[email protected]>
ipv6: Handle IPv4-mapped src to in6addr_any dst.
Anssi Hannula <[email protected]>
net: xilinx_emaclite: fix receive buffer overflow
Anssi Hannula <[email protected]>
net: xilinx_emaclite: fix freezes due to unordered I/O
Sachin Prabhu <[email protected]>
Call echo service immediately after socket reconnect
Richard <[email protected]>
partitions/msdos: FreeBSD UFS2 file systems are not recognized
Heiko Carstens <[email protected]>
s390/vmem: fix identity mapping
-------------
Diffstat:
Documentation/kernel-parameters.txt | 7 ++
Makefile | 4 +-
arch/arc/mm/mmap.c | 2 +-
arch/arm/mm/mmap.c | 4 +-
arch/frv/mm/elf-fdpic.c | 2 +-
arch/mips/mm/mmap.c | 2 +-
arch/parisc/kernel/sys_parisc.c | 15 +--
arch/powerpc/mm/slice.c | 2 +-
arch/s390/mm/vmem.c | 2 +-
arch/sh/mm/mmap.c | 4 +-
arch/sparc/kernel/sys_sparc_64.c | 4 +-
arch/sparc/kernel/traps_64.c | 4 +-
arch/sparc/mm/hugetlbpage.c | 2 +-
arch/tile/mm/hugetlbpage.c | 2 +-
arch/x86/kernel/sys_x86_64.c | 4 +-
arch/x86/mm/hugetlbpage.c | 2 +-
arch/x86/mm/numa_32.c | 1 +
arch/xtensa/kernel/syscall.c | 2 +-
block/partitions/msdos.c | 2 +
drivers/cpufreq/cpufreq_conservative.c | 4 +-
drivers/iio/proximity/as3935.c | 6 +-
drivers/media/usb/pvrusb2/pvrusb2-eeprom.c | 13 +--
drivers/media/v4l2-core/videobuf2-core.c | 2 +-
drivers/mfd/omap-usb-tll.c | 2 +-
drivers/misc/c2port/c2port-duramar2150.c | 4 +-
drivers/net/can/usb/gs_usb.c | 2 +
drivers/net/ethernet/xilinx/xilinx_emaclite.c | 126 ++++++++++++----------
drivers/staging/rtl8188eu/core/rtw_ap.c | 2 +-
drivers/tty/serial/efm32-uart.c | 11 +-
drivers/usb/core/hcd.c | 1 +
drivers/usb/core/hub.c | 8 +-
drivers/usb/gadget/udc/dummy_hcd.c | 6 +-
drivers/usb/host/r8a66597-hcd.c | 6 +-
drivers/usb/host/xhci-pci.c | 3 +
fs/cifs/connect.c | 24 +++--
fs/configfs/symlink.c | 3 +-
fs/hugetlbfs/inode.c | 2 +-
fs/proc/task_mmu.c | 4 -
include/linux/log2.h | 13 ++-
include/linux/mm.h | 57 +++++-----
include/uapi/linux/usb/ch11.h | 3 +
kernel/irq/manage.c | 4 +-
kernel/time/alarmtimer.c | 8 ++
mm/gup.c | 6 +-
mm/memory-failure.c | 5 +-
mm/memory.c | 38 -------
mm/mmap.c | 147 +++++++++++++++-----------
mm/page_cgroup.c | 3 +
net/ipv6/datagram.c | 14 ++-
net/ipv6/ip6_output.c | 3 +
net/ipv6/tcp_ipv6.c | 11 +-
net/ipv6/udp.c | 4 +
net/mac80211/rx.c | 6 +-
53 files changed, 347 insertions(+), 271 deletions(-)
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger <[email protected]>
commit ba80aa909c99802c428682c352b0ee0baac0acd3 upstream.
This patch closes a long-standing race in configfs between
the creation of a new symlink in create_link(), while the
symlink target's config_item is being concurrently removed
via configfs_rmdir().
This can happen because the symlink target's reference
is obtained by config_item_get() in create_link() before
the CONFIGFS_USET_DROPPING bit set by configfs_detach_prep()
during configfs_rmdir() shutdown is actually checked.
This originally manifested itself on ppc64 on v4.8.y under
heavy load using ibmvscsi target ports with Novalink API:
[ 7877.289863] rpadlpar_io: slot U8247.22L.212A91A-V1-C8 added
[ 7879.893760] ------------[ cut here ]------------
[ 7879.893768] WARNING: CPU: 15 PID: 17585 at ./include/linux/kref.h:46 config_item_get+0x7c/0x90 [configfs]
[ 7879.893811] CPU: 15 PID: 17585 Comm: targetcli Tainted: G O 4.8.17-customv2.22 #12
[ 7879.893812] task: c00000018a0d3400 task.stack: c0000001f3b40000
[ 7879.893813] NIP: d000000002c664ec LR: d000000002c60980 CTR: c000000000b70870
[ 7879.893814] REGS: c0000001f3b43810 TRAP: 0700 Tainted: G O (4.8.17-customv2.22)
[ 7879.893815] MSR: 8000000000029033 <SF,EE,ME,IR,DR,RI,LE> CR: 28222242 XER: 00000000
[ 7879.893820] CFAR: d000000002c664bc SOFTE: 1
GPR00: d000000002c60980 c0000001f3b43a90 d000000002c70908 c0000000fbc06820
GPR04: c0000001ef1bd900 0000000000000004 0000000000000001 0000000000000000
GPR08: 0000000000000000 0000000000000001 d000000002c69560 d000000002c66d80
GPR12: c000000000b70870 c00000000e798700 c0000001f3b43ca0 c0000001d4949d40
GPR16: c00000014637e1c0 0000000000000000 0000000000000000 c0000000f2392940
GPR20: c0000001f3b43b98 0000000000000041 0000000000600000 0000000000000000
GPR24: fffffffffffff000 0000000000000000 d000000002c60be0 c0000001f1dac490
GPR28: 0000000000000004 0000000000000000 c0000001ef1bd900 c0000000f2392940
[ 7879.893839] NIP [d000000002c664ec] config_item_get+0x7c/0x90 [configfs]
[ 7879.893841] LR [d000000002c60980] check_perm+0x80/0x2e0 [configfs]
[ 7879.893842] Call Trace:
[ 7879.893844] [c0000001f3b43ac0] [d000000002c60980] check_perm+0x80/0x2e0 [configfs]
[ 7879.893847] [c0000001f3b43b10] [c000000000329770] do_dentry_open+0x2c0/0x460
[ 7879.893849] [c0000001f3b43b70] [c000000000344480] path_openat+0x210/0x1490
[ 7879.893851] [c0000001f3b43c80] [c00000000034708c] do_filp_open+0xfc/0x170
[ 7879.893853] [c0000001f3b43db0] [c00000000032b5bc] do_sys_open+0x1cc/0x390
[ 7879.893856] [c0000001f3b43e30] [c000000000009584] system_call+0x38/0xec
[ 7879.893856] Instruction dump:
[ 7879.893858] 409d0014 38210030 e8010010 7c0803a6 4e800020 3d220000 e94981e0 892a0000
[ 7879.893861] 2f890000 409effe0 39200001 992a0000 <0fe00000> 4bffffd0 60000000 60000000
[ 7879.893866] ---[ end trace 14078f0b3b5ad0aa ]---
To close this race, go ahead and obtain the symlink's target
config_item reference only after the existing CONFIGFS_USET_DROPPING
check succeeds.
This way, if configfs_rmdir() wins, create_link() will return -ENOENT,
and if create_link() wins, configfs_rmdir() will return -EBUSY.
Reported-by: Bryant G. Ly <[email protected]>
Tested-by: Bryant G. Ly <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/configfs/symlink.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
--- a/fs/configfs/symlink.c
+++ b/fs/configfs/symlink.c
@@ -83,14 +83,13 @@ static int create_link(struct config_ite
ret = -ENOMEM;
sl = kmalloc(sizeof(struct configfs_symlink), GFP_KERNEL);
if (sl) {
- sl->sl_target = config_item_get(item);
spin_lock(&configfs_dirent_lock);
if (target_sd->s_type & CONFIGFS_USET_DROPPING) {
spin_unlock(&configfs_dirent_lock);
- config_item_put(item);
kfree(sl);
return -ENOENT;
}
+ sl->sl_target = config_item_get(item);
list_add(&sl->sl_list, &target_sd->s_links);
spin_unlock(&configfs_dirent_lock);
ret = configfs_create_link(sl, parent_item->ci_dentry,
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christophe JAILLET <[email protected]>
commit 5ebb6dd36c9f5fb37b1077b393c254d70a14cb46 upstream.
We should ensure that 'plane_no' is '< vb->num_planes' as done in
'vb2_plane_cookie' just a few lines below.
Fixes: e23ccc0ad925 ("[media] v4l: add videobuf2 Video for Linux 2 driver framework")
Signed-off-by: Christophe JAILLET <[email protected]>
Reviewed-by: Sakari Ailus <[email protected]>
Signed-off-by: Hans Verkuil <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/v4l2-core/videobuf2-core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/media/v4l2-core/videobuf2-core.c
+++ b/drivers/media/v4l2-core/videobuf2-core.c
@@ -1120,7 +1120,7 @@ EXPORT_SYMBOL_GPL(vb2_create_bufs);
*/
void *vb2_plane_vaddr(struct vb2_buffer *vb, unsigned int plane_no)
{
- if (plane_no > vb->num_planes || !vb->planes[plane_no].mem_priv)
+ if (plane_no >= vb->num_planes || !vb->planes[plane_no].mem_priv)
return NULL;
return call_ptr_memop(vb, vaddr, vb->planes[plane_no].mem_priv);
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Emmanuel Grumbach <[email protected]>
commit 769dc04db3ed8484798aceb015b94deacc2ba557 upstream.
When a peer sends a BAR frame with the PM bit clear, we should
not modify its PM state, as mandated by the spec in
IEEE 802.11-2012 10.2.1.2.
Signed-off-by: Emmanuel Grumbach <[email protected]>
Signed-off-by: Johannes Berg <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/mac80211/rx.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
--- a/net/mac80211/rx.c
+++ b/net/mac80211/rx.c
@@ -1319,12 +1319,16 @@ ieee80211_rx_h_sta_process(struct ieee80
*/
if (!(sta->local->hw.flags & IEEE80211_HW_AP_LINK_PS) &&
!ieee80211_has_morefrags(hdr->frame_control) &&
+ !ieee80211_is_back_req(hdr->frame_control) &&
!(status->rx_flags & IEEE80211_RX_DEFERRED_RELEASE) &&
(rx->sdata->vif.type == NL80211_IFTYPE_AP ||
rx->sdata->vif.type == NL80211_IFTYPE_AP_VLAN) &&
- /* PM bit is only checked in frames where it isn't reserved,
+ /*
+ * PM bit is only checked in frames where it isn't reserved,
* in AP mode it's reserved in non-bufferable management frames
* (cf. IEEE 802.11-2012 8.2.4.1.7 Power Management field)
+ * BAR frames should be ignored as specified in
+ * IEEE 802.11-2012 10.2.1.2.
*/
(!ieee80211_is_mgmt(hdr->frame_control) ||
ieee80211_is_bufferable_mmpdu(hdr->frame_control))) {
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tony Lindgren <[email protected]>
commit 8b8a84c54aff4256d592dc18346c65ecf6811b45 upstream.
Commit 16fa3dc75c22 ("mfd: omap-usb-tll: HOST TLL platform driver")
added support for USB TLL, but uses the OMAP_TLL_CHANNEL_CONF_ULPINOBITSTUFF
bit the wrong way. The comments in the code are correct, but the inverted
use of OMAP_TLL_CHANNEL_CONF_ULPINOBITSTUFF causes the setting to be
enabled instead of disabled, unlike what the comments say.
Without this change the Wrigley 3G LTE modem on the droid 4 EHCI bus can
only be pinged a few times before it stops responding.
Fixes: 16fa3dc75c22 ("mfd: omap-usb-tll: HOST TLL platform driver")
Signed-off-by: Tony Lindgren <[email protected]>
Acked-by: Roger Quadros <[email protected]>
Signed-off-by: Lee Jones <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/mfd/omap-usb-tll.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/mfd/omap-usb-tll.c
+++ b/drivers/mfd/omap-usb-tll.c
@@ -376,8 +376,8 @@ int omap_tll_init(struct usbhs_omap_plat
* and use SDR Mode
*/
reg &= ~(OMAP_TLL_CHANNEL_CONF_UTMIAUTOIDLE
- | OMAP_TLL_CHANNEL_CONF_ULPINOBITSTUFF
| OMAP_TLL_CHANNEL_CONF_ULPIDDRMODE);
+ reg |= OMAP_TLL_CHANNEL_CONF_ULPINOBITSTUFF;
} else if (pdata->port_mode[i] ==
OMAP_EHCI_PORT_MODE_HSIC) {
/*
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matt Ranostay <[email protected]>
commit 6272c0de13abf1480f701d38288f28a11b4301c4 upstream.
According to the datasheet, the RCO must be recalibrated
on every power-on reset. Also remove mutex locking in the
calibration function since callers other than the probe
function (which doesn't need it) will have a lock.
Fixes: 24ddb0e4bba4 ("iio: Add AS3935 lightning sensor support")
Cc: George McCollister <[email protected]>
Signed-off-by: Matt Ranostay <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/iio/proximity/as3935.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
--- a/drivers/iio/proximity/as3935.c
+++ b/drivers/iio/proximity/as3935.c
@@ -256,8 +256,6 @@ static irqreturn_t as3935_interrupt_hand
static void calibrate_as3935(struct as3935_state *st)
{
- mutex_lock(&st->lock);
-
/* mask disturber interrupt bit */
as3935_write(st, AS3935_INT, BIT(5));
@@ -267,8 +265,6 @@ static void calibrate_as3935(struct as39
mdelay(2);
as3935_write(st, AS3935_TUNE_CAP, (st->tune_cap / TUNE_CAP_DIV));
-
- mutex_unlock(&st->lock);
}
#ifdef CONFIG_PM_SLEEP
@@ -305,6 +301,8 @@ static int as3935_resume(struct spi_devi
val &= ~AS3935_AFE_PWR_BIT;
ret = as3935_write(st, AS3935_AFE_GAIN, val);
+ calibrate_as3935(st);
+
err_resume:
mutex_unlock(&st->lock);
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Richard <[email protected]>
commit 223220356d5ebc05ead9a8d697abb0c0a906fc81 upstream.
The code in block/partitions/msdos.c recognizes FreeBSD, OpenBSD
and NetBSD partitions and does a reasonable job picking out OpenBSD
and NetBSD UFS subpartitions.
But for FreeBSD the subpartitions are always "bad".
Kernel: <bsd:bad subpartition - ignored
Though all 3 of these BSD systems use UFS as a file system, only
FreeBSD uses relative start addresses in the subpartition
declarations.
The following patch fixes this for FreeBSD partitions and leaves
the code for OpenBSD and NetBSD intact.
Signed-off-by: Richard Narron <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
block/partitions/msdos.c | 2 ++
1 file changed, 2 insertions(+)
--- a/block/partitions/msdos.c
+++ b/block/partitions/msdos.c
@@ -300,6 +300,8 @@ static void parse_bsd(struct parsed_part
continue;
bsd_start = le32_to_cpu(p->p_offset);
bsd_size = le32_to_cpu(p->p_size);
+ if (memcmp(flavour, "bsd\0", 4) == 0)
+ bsd_start += offset;
if (offset == bsd_start && size == bsd_size)
/* full parent partition, we have it already */
continue;
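[ Editorial illustration, not part of the patch; numbers are hypothetical:
  if the FreeBSD slice itself starts at sector 2048 (offset == 2048) and a
  UFS2 subpartition records a slice-relative p_offset of 1024, the old code
  compared the absolute slice start against that relative value and tripped
  the containment check later in parse_bsd() (offset > bsd_start, i.e.
  2048 > 1024), producing the "bad subpartition - ignored" message quoted
  above. With the fix, bsd_start becomes 2048 + 1024 == 3072, which lies
  inside the parent slice and is accepted. ]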
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Carpenter <[email protected]>
commit 784047eb2d3405a35087af70cba46170c5576b25 upstream.
The "len" could be as low as -14 so we should check for negatives.
Fixes: 9a7fe54ddc3a ("staging: r8188eu: Add source files for new driver - part 1")
Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/staging/rtl8188eu/core/rtw_ap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/staging/rtl8188eu/core/rtw_ap.c
+++ b/drivers/staging/rtl8188eu/core/rtw_ap.c
@@ -873,7 +873,7 @@ int rtw_check_beacon_data(struct adapter
return _FAIL;
- if (len > MAX_IE_SZ)
+ if (len < 0 || len > MAX_IE_SZ)
return _FAIL;
pbss_network->IELength = len;
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Anton Bondarenko <[email protected]>
commit 1a744d2eb76aaafb997fda004ae3ae62a1538f85 upstream.
Free memory allocated for address0_mutex if allocation of bandwidth_mutex
failed.
Fixes: feb26ac31a2a ("usb: core: hub: hub_port_init lock controller instead of bus")
Signed-off-by: Anton Bondarenko <[email protected]>
Acked-by: Alan Stern <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/usb/core/hcd.c | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/usb/core/hcd.c
+++ b/drivers/usb/core/hcd.c
@@ -2461,6 +2461,7 @@ struct usb_hcd *usb_create_shared_hcd(co
hcd->bandwidth_mutex = kmalloc(sizeof(*hcd->bandwidth_mutex),
GFP_KERNEL);
if (!hcd->bandwidth_mutex) {
+ kfree(hcd->address0_mutex);
kfree(hcd);
dev_dbg(dev, "hcd bandwidth mutex alloc failed\n");
return NULL;
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann <[email protected]>
commit 6830733d53a4517588e56227b9c8538633f0c496 upstream.
The driver uses a relatively large data structure on the stack, which
showed up on my radar as we get a warning with the "latent entropy"
GCC plugin:
drivers/media/usb/pvrusb2/pvrusb2-eeprom.c:153:1: error: the frame size of 1376 bytes is larger than 1152 bytes [-Werror=frame-larger-than=]
The warning is usually hidden as we raise the warning limit to 2048
when the plugin is enabled, but I'd like to lower that again in the
future, and making this function smaller helps to do that without
build regressions.
Further analysis shows that putting an 'i2c_client' structure on
the stack is not really supported, as the embedded 'struct device'
is not initialized here, and we are only saved by the fact that
the function that is called here does not use the pointer at all.
Fixes: d855497edbfb ("V4L/DVB (4228a): pvrusb2 to kernel 2.6.18")
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Hans Verkuil <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/usb/pvrusb2/pvrusb2-eeprom.c | 11 +++--------
1 file changed, 3 insertions(+), 8 deletions(-)
--- a/drivers/media/usb/pvrusb2/pvrusb2-eeprom.c
+++ b/drivers/media/usb/pvrusb2/pvrusb2-eeprom.c
@@ -123,15 +123,10 @@ int pvr2_eeprom_analyze(struct pvr2_hdw
memset(&tvdata,0,sizeof(tvdata));
eeprom = pvr2_eeprom_fetch(hdw);
- if (!eeprom) return -EINVAL;
+ if (!eeprom)
+ return -EINVAL;
- {
- struct i2c_client fake_client;
- /* Newer version expects a useless client interface */
- fake_client.addr = hdw->eeprom_addr;
- fake_client.adapter = &hdw->i2c_adap;
- tveeprom_hauppauge_analog(&fake_client,&tvdata,eeprom);
- }
+ tveeprom_hauppauge_analog(NULL, &tvdata, eeprom);
trace_eeprom("eeprom assumed v4l tveeprom module");
trace_eeprom("eeprom direct call results:");
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit d81182ce30dbd497a1e7047d7fda2af040347790 upstream.
Flag the first and only port as removable while also leaving the
remaining bits (including the reserved bit zero) unset in accordance
with the specifications:
"Within a byte, if no port exists for a given location, the bit
field representing the port characteristics shall be 0."
Also add a comment marking the legacy PortPwrCtrlMask field.
Fixes: 1cd8fd2887e1 ("usb: gadget: dummy_hcd: add SuperSpeed support")
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: Tatyana Brokhman <[email protected]>
Signed-off-by: Johan Hovold <[email protected]>
Acked-by: Alan Stern <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/usb/gadget/udc/dummy_hcd.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/usb/gadget/udc/dummy_hcd.c
+++ b/drivers/usb/gadget/udc/dummy_hcd.c
@@ -1935,7 +1935,7 @@ ss_hub_descriptor(struct usb_hub_descrip
desc->wHubCharacteristics = cpu_to_le16(0x0001);
desc->bNbrPorts = 1;
desc->u.ss.bHubHdrDecLat = 0x04; /* Worst case: 0.4 micro sec*/
- desc->u.ss.DeviceRemovable = 0xffff;
+ desc->u.ss.DeviceRemovable = 0;
}
static inline void hub_descriptor(struct usb_hub_descriptor *desc)
@@ -1945,8 +1945,8 @@ static inline void hub_descriptor(struct
desc->bDescLength = 9;
desc->wHubCharacteristics = cpu_to_le16(0x0001);
desc->bNbrPorts = 1;
- desc->u.hs.DeviceRemovable[0] = 0xff;
- desc->u.hs.DeviceRemovable[1] = 0xff;
+ desc->u.hs.DeviceRemovable[0] = 0;
+ desc->u.hs.DeviceRemovable[1] = 0xff; /* PortPwrCtrlMask */
}
static int dummy_hub_control(
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: James Morse <[email protected]>
commit 7258ae5c5a2ce2f5969e8b18b881be40ab55433d upstream.
memory_failure() chooses a recovery action function based on the page
flags. For huge pages it uses the tail page's flags, which don't have
anything interesting set, resulting in:
> Memory failure: 0x9be3b4: Unknown page state
> Memory failure: 0x9be3b4: recovery action for unknown page: Failed
Instead, save a copy of the head page's flags if this is a huge page;
this means that if there are no relevant flags for this tail page, we use
the head page's flags instead. This results in the me_huge_page() recovery
action being called:
> Memory failure: 0x9b7969: recovery action for huge page: Delayed
For hugepages that have not yet been allocated, this allows the hugepage
to be dequeued.
Fixes: 524fca1e7356 ("HWPOISON: fix misjudgement of page_action() for errors on mlocked pages")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: James Morse <[email protected]>
Tested-by: Punit Agrawal <[email protected]>
Acked-by: Punit Agrawal <[email protected]>
Acked-by: Naoya Horiguchi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
mm/memory-failure.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1189,7 +1189,10 @@ int memory_failure(unsigned long pfn, in
* page_remove_rmap() in try_to_unmap_one(). So to determine page status
* correctly, we save a copy of the page flags at this time.
*/
- page_flags = p->flags;
+ if (PageHuge(p))
+ page_flags = hpage->flags;
+ else
+ page_flags = p->flags;
/*
* unpoison always clear PG_hwpoison inside page lock
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yu Zhao <[email protected]>
commit ef70762948dde012146926720b70e79736336764 upstream.
I saw need_resched() warnings when swapping on a large swapfile (TBs)
because continuously allocating many pages in swap_cgroup_prepare() took
too long.
We already cond_resched() when freeing pages in swap_cgroup_swapoff(). Do
the same for the page allocation.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Yu Zhao <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Vladimir Davydov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
mm/page_cgroup.c | 3 +++
1 file changed, 3 insertions(+)
--- a/mm/page_cgroup.c
+++ b/mm/page_cgroup.c
@@ -368,6 +368,9 @@ static int swap_cgroup_prepare(int type)
if (!page)
goto not_enough_page;
ctrl->map[idx] = page;
+
+ if (!(idx % SWAP_CLUSTER_MAX))
+ cond_resched();
}
return 0;
not_enough_page:
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Jonathan T. Leighton" <[email protected]>
[ Upstream commit 052d2369d1b479cdbbe020fdd6d057d3c342db74 ]
This patch adds a check on the type of the source address for the case
where the destination address is in6addr_any. If the source is an
IPv4-mapped IPv6 source address, the destination is changed to
::ffff:127.0.0.1, and otherwise the destination is changed to ::1. This
is done in three locations to handle UDP calls to either connect() or
sendmsg() and TCP calls to connect(). Note that udpv6_sendmsg() delays
handling an in6addr_any destination until very late, so the patch only
needs to handle the case where the source is an IPv4-mapped IPv6
address.
Signed-off-by: Jonathan T. Leighton <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/ipv6/datagram.c | 14 +++++++++-----
net/ipv6/tcp_ipv6.c | 11 ++++++++---
net/ipv6/udp.c | 4 ++++
3 files changed, 21 insertions(+), 8 deletions(-)
--- a/net/ipv6/datagram.c
+++ b/net/ipv6/datagram.c
@@ -76,18 +76,22 @@ static int __ip6_datagram_connect(struct
}
}
- addr_type = ipv6_addr_type(&usin->sin6_addr);
-
- if (addr_type == IPV6_ADDR_ANY) {
+ if (ipv6_addr_any(&usin->sin6_addr)) {
/*
* connect to self
*/
- usin->sin6_addr.s6_addr[15] = 0x01;
+ if (ipv6_addr_v4mapped(&sk->sk_v6_rcv_saddr))
+ ipv6_addr_set_v4mapped(htonl(INADDR_LOOPBACK),
+ &usin->sin6_addr);
+ else
+ usin->sin6_addr = in6addr_loopback;
}
+ addr_type = ipv6_addr_type(&usin->sin6_addr);
+
daddr = &usin->sin6_addr;
- if (addr_type == IPV6_ADDR_MAPPED) {
+ if (addr_type & IPV6_ADDR_MAPPED) {
struct sockaddr_in sin;
if (__ipv6_only_sock(sk)) {
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -164,8 +164,13 @@ static int tcp_v6_connect(struct sock *s
* connect() to INADDR_ANY means loopback (BSD'ism).
*/
- if (ipv6_addr_any(&usin->sin6_addr))
- usin->sin6_addr.s6_addr[15] = 0x1;
+ if (ipv6_addr_any(&usin->sin6_addr)) {
+ if (ipv6_addr_v4mapped(&sk->sk_v6_rcv_saddr))
+ ipv6_addr_set_v4mapped(htonl(INADDR_LOOPBACK),
+ &usin->sin6_addr);
+ else
+ usin->sin6_addr = in6addr_loopback;
+ }
addr_type = ipv6_addr_type(&usin->sin6_addr);
@@ -204,7 +209,7 @@ static int tcp_v6_connect(struct sock *s
* TCP over IPv4
*/
- if (addr_type == IPV6_ADDR_MAPPED) {
+ if (addr_type & IPV6_ADDR_MAPPED) {
u32 exthdrlen = icsk->icsk_ext_hdr_len;
struct sockaddr_in sin;
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -1107,6 +1107,10 @@ int udpv6_sendmsg(struct kiocb *iocb, st
if (addr_len < SIN6_LEN_RFC2133)
return -EINVAL;
daddr = &sin6->sin6_addr;
+ if (ipv6_addr_any(daddr) &&
+ ipv6_addr_v4mapped(&np->saddr))
+ ipv6_addr_set_v4mapped(htonl(INADDR_LOOPBACK),
+ daddr);
break;
case AF_INET:
goto do_udp_sendmsg;
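[ Editorial illustration, not part of the patch; addresses are examples:
  a socket whose bound/selected source address is the IPv4-mapped
  ::ffff:192.0.2.1 and which connects (or sends) to :: now gets the
  destination rewritten to ::ffff:127.0.0.1, while a socket with a native
  IPv6 source, or no source yet, keeps the old behaviour and connects to
  ::1. Either way the "connect to INADDR_ANY means loopback" BSD'ism is
  preserved without mixing address families. ]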
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Anssi Hannula <[email protected]>
[ Upstream commit cd224553641848dd17800fe559e4ff5d208553e8 ]
xilinx_emaclite looks at the received data to try to determine the
Ethernet packet length but does not properly clamp it if
proto_type == ETH_P_IP or 1500 < proto_type <= 1518, causing a buffer
overflow and a panic via skb_panic() as the length exceeds the allocated
skb size.
Fix those cases.
Also add an additional unconditional check with WARN_ON() at the end.
Signed-off-by: Anssi Hannula <[email protected]>
Fixes: bb81b2ddfa19 ("net: add Xilinx emac lite device driver")
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/xilinx/xilinx_emaclite.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
@@ -379,7 +379,7 @@ static int xemaclite_send_data(struct ne
*
* Return: Total number of bytes received
*/
-static u16 xemaclite_recv_data(struct net_local *drvdata, u8 *data)
+static u16 xemaclite_recv_data(struct net_local *drvdata, u8 *data, int maxlen)
{
void __iomem *addr;
u16 length, proto_type;
@@ -419,7 +419,7 @@ static u16 xemaclite_recv_data(struct ne
/* Check if received ethernet frame is a raw ethernet frame
* or an IP packet or an ARP packet */
- if (proto_type > (ETH_FRAME_LEN + ETH_FCS_LEN)) {
+ if (proto_type > ETH_DATA_LEN) {
if (proto_type == ETH_P_IP) {
length = ((ntohl(xemaclite_readl(addr +
@@ -427,6 +427,7 @@ static u16 xemaclite_recv_data(struct ne
XEL_RXBUFF_OFFSET)) >>
XEL_HEADER_SHIFT) &
XEL_RPLR_LENGTH_MASK);
+ length = min_t(u16, length, ETH_DATA_LEN);
length += ETH_HLEN + ETH_FCS_LEN;
} else if (proto_type == ETH_P_ARP)
@@ -439,6 +440,9 @@ static u16 xemaclite_recv_data(struct ne
/* Use the length in the frame, plus the header and trailer */
length = proto_type + ETH_HLEN + ETH_FCS_LEN;
+ if (WARN_ON(length > maxlen))
+ length = maxlen;
+
/* Read from the EmacLite device */
xemaclite_aligned_read((u32 __force *) (addr + XEL_RXBUFF_OFFSET),
data, length);
@@ -613,7 +617,7 @@ static void xemaclite_rx_handler(struct
skb_reserve(skb, 2);
- len = xemaclite_recv_data(lp, (u8 *) skb->data);
+ len = xemaclite_recv_data(lp, (u8 *) skb->data, len);
if (!len) {
dev->stats.rx_errors++;
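[ Editorial illustration, not part of the patch: with the fix, a received
  IP frame whose header claims a total length of, say, 0xffff is clamped to
  ETH_DATA_LEN (1500) before the Ethernet header and FCS are added, so at
  most 1500 + 14 + 4 = 1518 bytes are copied; the final
  WARN_ON(length > maxlen) additionally caps the copy at the size of the
  skb that was actually allocated. ]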
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Jonathan T. Leighton" <[email protected]>
[ Upstream commit ec5e3b0a1d41fbda0cc33a45bc9e54e91d9d12c7 ]
This patch adds a check for the problematic case of an IPv4-mapped IPv6
source address and a destination address that is neither an IPv4-mapped
IPv6 address nor in6addr_any, and returns an appropriate error. The
check is done before returning from looking up the route.
Signed-off-by: Jonathan T. Leighton <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/ipv6/ip6_output.c | 3 +++
1 file changed, 3 insertions(+)
--- a/net/ipv6/ip6_output.c
+++ b/net/ipv6/ip6_output.c
@@ -960,6 +960,9 @@ static int ip6_dst_lookup_tail(struct so
}
}
#endif
+ if (ipv6_addr_v4mapped(&fl6->saddr) &&
+ !(ipv6_addr_v4mapped(&fl6->daddr) || ipv6_addr_any(&fl6->daddr)))
+ return -EAFNOSUPPORT;
return 0;
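[ Editorial illustration, not part of the patch; addresses are examples:
  with this check, a flow whose source is the IPv4-mapped ::ffff:192.0.2.1
  and whose destination is a native IPv6 address such as 2001:db8::1 now
  fails with -EAFNOSUPPORT instead of putting an IPv4-mapped source address
  on the wire; IPv4-mapped or unspecified destinations are still allowed. ]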
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sachin Prabhu <[email protected]>
commit b8c600120fc87d53642476f48c8055b38d6e14c7 upstream.
Commit 4fcd1813e640 ("Fix reconnect to not defer smb3 session reconnect
long after socket reconnect") changes the behaviour of the SMB2 echo
service and causes it to renegotiate after a socket reconnect. However
under default settings, the echo service could take up to 120 seconds to
be scheduled.
The patch forces the echo service to be called immediately, resulting in a
negotiate call being made immediately on reconnect.
Signed-off-by: Sachin Prabhu <[email protected]>
Reviewed-by: Pavel Shilovsky <[email protected]>
Signed-off-by: Steve French <[email protected]>
Acked-by: Sachin Prabhu <[email protected]>
Signed-off-by: Pavel Shilovsky <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/cifs/connect.c | 24 ++++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -400,6 +400,9 @@ cifs_reconnect(struct TCP_Server_Info *s
mutex_unlock(&server->srv_mutex);
} while (server->tcpStatus == CifsNeedReconnect);
+ if (server->tcpStatus == CifsNeedNegotiate)
+ mod_delayed_work(cifsiod_wq, &server->echo, 0);
+
return rc;
}
@@ -409,18 +412,27 @@ cifs_echo_request(struct work_struct *wo
int rc;
struct TCP_Server_Info *server = container_of(work,
struct TCP_Server_Info, echo.work);
+ unsigned long echo_interval;
+
+ /*
+ * If we need to renegotiate, set echo interval to zero to
+ * immediately call echo service where we can renegotiate.
+ */
+ if (server->tcpStatus == CifsNeedNegotiate)
+ echo_interval = 0;
+ else
+ echo_interval = SMB_ECHO_INTERVAL;
/*
- * We cannot send an echo if it is disabled or until the
- * NEGOTIATE_PROTOCOL request is done, which is indicated by
- * server->ops->need_neg() == true. Also, no need to ping if
- * we got a response recently.
+ * We cannot send an echo if it is disabled.
+ * Also, no need to ping if we got a response recently.
*/
if (server->tcpStatus == CifsNeedReconnect ||
- server->tcpStatus == CifsExiting || server->tcpStatus == CifsNew ||
+ server->tcpStatus == CifsExiting ||
+ server->tcpStatus == CifsNew ||
(server->ops->can_echo && !server->ops->can_echo(server)) ||
- time_before(jiffies, server->lstrp + SMB_ECHO_INTERVAL - HZ))
+ time_before(jiffies, server->lstrp + echo_interval - HZ))
goto requeue_echo;
rc = server->ops->echo ? server->ops->echo(server) : -ENOSYS;
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Carpenter <[email protected]>
commit 8128a31eaadbcdfa37774bbd28f3f00bac69996a upstream.
c2port_device_register() never returns NULL; it uses error pointers.
Link: http://lkml.kernel.org/r/20170412083321.GC3250@mwanda
Fixes: 65131cd52b9e ("c2port: add c2port support for Eurotech Duramar 2150")
Signed-off-by: Dan Carpenter <[email protected]>
Acked-by: Rodolfo Giometti <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/misc/c2port/c2port-duramar2150.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/misc/c2port/c2port-duramar2150.c
+++ b/drivers/misc/c2port/c2port-duramar2150.c
@@ -129,8 +129,8 @@ static int __init duramar2150_c2port_ini
duramar2150_c2port_dev = c2port_device_register("uc",
&duramar2150_c2port_ops, NULL);
- if (!duramar2150_c2port_dev) {
- ret = -ENODEV;
+ if (IS_ERR(duramar2150_c2port_dev)) {
+ ret = PTR_ERR(duramar2150_c2port_dev);
goto free_region;
}
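[ Editorial illustration, not part of the patch: a minimal userspace sketch
  of the error-pointer idiom the fix switches to. The macros below merely
  mimic the kernel's ERR_PTR()/IS_ERR()/PTR_ERR() for demonstration, and
  fake_register() is a hypothetical stand-in for c2port_device_register(),
  which encodes an errno in the returned pointer and therefore never
  returns NULL on failure. ]

#include <stdio.h>

#define MAX_ERRNO	4095
#define ERR_PTR(err)	((void *)(long)(err))
#define IS_ERR(ptr)	((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)
#define PTR_ERR(ptr)	((long)(ptr))

static void *fake_register(int fail)
{
	static int dummy_device;

	/* on failure, return an errno encoded as a pointer, not NULL */
	return fail ? ERR_PTR(-19 /* ENODEV */) : (void *)&dummy_device;
}

int main(void)
{
	void *dev = fake_register(1);

	if (IS_ERR(dev))	/* the correct check */
		printf("register failed: %ld\n", PTR_ERR(dev));
	if (!dev)		/* the old, wrong check: never true here */
		printf("this line is never reached\n");
	return 0;
}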
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ard Biesheuvel <[email protected]>
commit 29905b52fad0854351f57bab867647e4982285bf upstream.
The function order_base_2() is defined (according to the comment block)
as returning zero on input zero, but subsequently passes the input into
roundup_pow_of_two(), which is explicitly undefined for input zero.
This has gone unnoticed until now, but optimization passes in GCC 7 may
produce constant folded function instances where a constant value of
zero is passed into order_base_2(), resulting in link errors against the
deliberately undefined '____ilog2_NaN'.
So update order_base_2() to adhere to its own documented interface.
[ See
http://marc.info/?l=linux-kernel&m=147672952517795&w=2
and follow-up discussion for more background. The gcc "optimization
pass" is really just broken, but now the GCC trunk problem seems to
have escaped out of just specially built daily images, so we need to
work around it in mainline. - Linus ]
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
include/linux/log2.h | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
--- a/include/linux/log2.h
+++ b/include/linux/log2.h
@@ -194,6 +194,17 @@ unsigned long __rounddown_pow_of_two(uns
* ... and so on.
*/
-#define order_base_2(n) ilog2(roundup_pow_of_two(n))
+static inline __attribute_const__
+int __order_base_2(unsigned long n)
+{
+ return n > 1 ? ilog2(n - 1) + 1 : 0;
+}
+#define order_base_2(n) \
+( \
+ __builtin_constant_p(n) ? ( \
+ ((n) == 0 || (n) == 1) ? 0 : \
+ ilog2((n) - 1) + 1) : \
+ __order_base_2(n) \
+)
#endif /* _LINUX_LOG2_H */
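[ Editorial illustration, not part of the patch: a userspace sketch of the
  documented order_base_2() semantics that the change restores, i.e. the
  smallest k with 2^k >= n, with the special case that inputs 0 and 1 both
  yield 0. The loop below is only a reference model for small values, not
  the kernel implementation. ]

#include <stdio.h>

static int order_base_2_ref(unsigned long n)
{
	int k = 0;

	while ((1UL << k) < n)
		k++;
	return k;	/* 0 -> 0, 1 -> 0, 2 -> 1, 3 -> 2, 4 -> 2, 5 -> 3 ... */
}

int main(void)
{
	unsigned long v[] = { 0, 1, 2, 3, 4, 5, 8, 9 };
	unsigned int i;

	for (i = 0; i < sizeof(v) / sizeof(v[0]); i++)
		printf("order_base_2(%lu) = %d\n", v[i], order_base_2_ref(v[i]));
	return 0;
}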
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Corentin Labbe <[email protected]>
commit d2f48f05cd2a2a0a708fbfa45f1a00a87660d937 upstream.
When plugging in a USB webcam I see the following message:
[106385.615559] xhci_hcd 0000:04:00.0: WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk?
[106390.583860] handle_tx_event: 913 callbacks suppressed
With this patch applied, this message is no longer printed.
Signed-off-by: Corentin Labbe <[email protected]>
Signed-off-by: Mathias Nyman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/usb/host/xhci-pci.c | 3 +++
1 file changed, 3 insertions(+)
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -185,6 +185,9 @@ static void xhci_pci_quirks(struct devic
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
pdev->device == 0x1042)
xhci->quirks |= XHCI_BROKEN_STREAMS;
+ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+ pdev->device == 0x1142)
+ xhci->quirks |= XHCI_TRUST_TX_LENGTH;
if (xhci->quirks & XHCI_RESET_ON_RESUME)
xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Anssi Hannula <[email protected]>
[ Upstream commit acf138f1b00bdd1b7cd9894562ed0c2a1670888e ]
The xilinx_emaclite uses __raw_writel and __raw_readl for register
accesses. Those functions do not imply any kind of memory barriers and
they may be reordered.
The driver does not seem to take that into account, though, and does not
satisfy the ordering requirements of the hardware.
For clear examples, see xemaclite_mdio_write() and xemaclite_mdio_read()
which try to set MDIO address before initiating the transaction.
I'm seeing system freezes with the driver built with GCC 5.4 and current
Linux kernels on a Zynq-7000 SoC immediately when trying to use the
interface.
In commit 123c1407af87 ("net: emaclite: Do not use microblaze and ppc
IO functions") the driver was switched from non-generic
in_be32/out_be32 (memory barriers, big endian) to
__raw_readl/__raw_writel (no memory barriers, native endian), so
apparently the device follows system endianness and the driver was
originally written with the assumption of memory barriers.
Rather than try to hunt for each case of missing barrier, just switch
the driver to use iowrite32/ioread32/iowrite32be/ioread32be depending
on endianness instead.
Tested on little-endian Zynq-7000 ARM SoC FPGA.
Signed-off-by: Anssi Hannula <[email protected]>
Fixes: 123c1407af87 ("net: emaclite: Do not use microblaze and ppc IO functions")
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/xilinx/xilinx_emaclite.c | 116 +++++++++++++-------------
1 file changed, 62 insertions(+), 54 deletions(-)
--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
@@ -100,6 +100,14 @@
/* BUFFER_ALIGN(adr) calculates the number of bytes to the next alignment. */
#define BUFFER_ALIGN(adr) ((ALIGNMENT - ((u32) adr)) % ALIGNMENT)
+#ifdef __BIG_ENDIAN
+#define xemaclite_readl ioread32be
+#define xemaclite_writel iowrite32be
+#else
+#define xemaclite_readl ioread32
+#define xemaclite_writel iowrite32
+#endif
+
/**
* struct net_local - Our private per device data
* @ndev: instance of the network device
@@ -158,15 +166,15 @@ static void xemaclite_enable_interrupts(
u32 reg_data;
/* Enable the Tx interrupts for the first Buffer */
- reg_data = __raw_readl(drvdata->base_addr + XEL_TSR_OFFSET);
- __raw_writel(reg_data | XEL_TSR_XMIT_IE_MASK,
- drvdata->base_addr + XEL_TSR_OFFSET);
+ reg_data = xemaclite_readl(drvdata->base_addr + XEL_TSR_OFFSET);
+ xemaclite_writel(reg_data | XEL_TSR_XMIT_IE_MASK,
+ drvdata->base_addr + XEL_TSR_OFFSET);
/* Enable the Rx interrupts for the first buffer */
- __raw_writel(XEL_RSR_RECV_IE_MASK, drvdata->base_addr + XEL_RSR_OFFSET);
+ xemaclite_writel(XEL_RSR_RECV_IE_MASK, drvdata->base_addr + XEL_RSR_OFFSET);
/* Enable the Global Interrupt Enable */
- __raw_writel(XEL_GIER_GIE_MASK, drvdata->base_addr + XEL_GIER_OFFSET);
+ xemaclite_writel(XEL_GIER_GIE_MASK, drvdata->base_addr + XEL_GIER_OFFSET);
}
/**
@@ -181,17 +189,17 @@ static void xemaclite_disable_interrupts
u32 reg_data;
/* Disable the Global Interrupt Enable */
- __raw_writel(XEL_GIER_GIE_MASK, drvdata->base_addr + XEL_GIER_OFFSET);
+ xemaclite_writel(XEL_GIER_GIE_MASK, drvdata->base_addr + XEL_GIER_OFFSET);
/* Disable the Tx interrupts for the first buffer */
- reg_data = __raw_readl(drvdata->base_addr + XEL_TSR_OFFSET);
- __raw_writel(reg_data & (~XEL_TSR_XMIT_IE_MASK),
- drvdata->base_addr + XEL_TSR_OFFSET);
+ reg_data = xemaclite_readl(drvdata->base_addr + XEL_TSR_OFFSET);
+ xemaclite_writel(reg_data & (~XEL_TSR_XMIT_IE_MASK),
+ drvdata->base_addr + XEL_TSR_OFFSET);
/* Disable the Rx interrupts for the first buffer */
- reg_data = __raw_readl(drvdata->base_addr + XEL_RSR_OFFSET);
- __raw_writel(reg_data & (~XEL_RSR_RECV_IE_MASK),
- drvdata->base_addr + XEL_RSR_OFFSET);
+ reg_data = xemaclite_readl(drvdata->base_addr + XEL_RSR_OFFSET);
+ xemaclite_writel(reg_data & (~XEL_RSR_RECV_IE_MASK),
+ drvdata->base_addr + XEL_RSR_OFFSET);
}
/**
@@ -323,7 +331,7 @@ static int xemaclite_send_data(struct ne
byte_count = ETH_FRAME_LEN;
/* Check if the expected buffer is available */
- reg_data = __raw_readl(addr + XEL_TSR_OFFSET);
+ reg_data = xemaclite_readl(addr + XEL_TSR_OFFSET);
if ((reg_data & (XEL_TSR_XMIT_BUSY_MASK |
XEL_TSR_XMIT_ACTIVE_MASK)) == 0) {
@@ -336,7 +344,7 @@ static int xemaclite_send_data(struct ne
addr = (void __iomem __force *)((u32 __force)addr ^
XEL_BUFFER_OFFSET);
- reg_data = __raw_readl(addr + XEL_TSR_OFFSET);
+ reg_data = xemaclite_readl(addr + XEL_TSR_OFFSET);
if ((reg_data & (XEL_TSR_XMIT_BUSY_MASK |
XEL_TSR_XMIT_ACTIVE_MASK)) != 0)
@@ -347,16 +355,16 @@ static int xemaclite_send_data(struct ne
/* Write the frame to the buffer */
xemaclite_aligned_write(data, (u32 __force *) addr, byte_count);
- __raw_writel((byte_count & XEL_TPLR_LENGTH_MASK),
- addr + XEL_TPLR_OFFSET);
+ xemaclite_writel((byte_count & XEL_TPLR_LENGTH_MASK),
+ addr + XEL_TPLR_OFFSET);
/* Update the Tx Status Register to indicate that there is a
* frame to send. Set the XEL_TSR_XMIT_ACTIVE_MASK flag which
* is used by the interrupt handler to check whether a frame
* has been transmitted */
- reg_data = __raw_readl(addr + XEL_TSR_OFFSET);
+ reg_data = xemaclite_readl(addr + XEL_TSR_OFFSET);
reg_data |= (XEL_TSR_XMIT_BUSY_MASK | XEL_TSR_XMIT_ACTIVE_MASK);
- __raw_writel(reg_data, addr + XEL_TSR_OFFSET);
+ xemaclite_writel(reg_data, addr + XEL_TSR_OFFSET);
return 0;
}
@@ -381,7 +389,7 @@ static u16 xemaclite_recv_data(struct ne
addr = (drvdata->base_addr + drvdata->next_rx_buf_to_use);
/* Verify which buffer has valid data */
- reg_data = __raw_readl(addr + XEL_RSR_OFFSET);
+ reg_data = xemaclite_readl(addr + XEL_RSR_OFFSET);
if ((reg_data & XEL_RSR_RECV_DONE_MASK) == XEL_RSR_RECV_DONE_MASK) {
if (drvdata->rx_ping_pong != 0)
@@ -398,14 +406,14 @@ static u16 xemaclite_recv_data(struct ne
return 0; /* No data was available */
/* Verify that buffer has valid data */
- reg_data = __raw_readl(addr + XEL_RSR_OFFSET);
+ reg_data = xemaclite_readl(addr + XEL_RSR_OFFSET);
if ((reg_data & XEL_RSR_RECV_DONE_MASK) !=
XEL_RSR_RECV_DONE_MASK)
return 0; /* No data was available */
}
/* Get the protocol type of the ethernet frame that arrived */
- proto_type = ((ntohl(__raw_readl(addr + XEL_HEADER_OFFSET +
+ proto_type = ((ntohl(xemaclite_readl(addr + XEL_HEADER_OFFSET +
XEL_RXBUFF_OFFSET)) >> XEL_HEADER_SHIFT) &
XEL_RPLR_LENGTH_MASK);
@@ -414,7 +422,7 @@ static u16 xemaclite_recv_data(struct ne
if (proto_type > (ETH_FRAME_LEN + ETH_FCS_LEN)) {
if (proto_type == ETH_P_IP) {
- length = ((ntohl(__raw_readl(addr +
+ length = ((ntohl(xemaclite_readl(addr +
XEL_HEADER_IP_LENGTH_OFFSET +
XEL_RXBUFF_OFFSET)) >>
XEL_HEADER_SHIFT) &
@@ -436,9 +444,9 @@ static u16 xemaclite_recv_data(struct ne
data, length);
/* Acknowledge the frame */
- reg_data = __raw_readl(addr + XEL_RSR_OFFSET);
+ reg_data = xemaclite_readl(addr + XEL_RSR_OFFSET);
reg_data &= ~XEL_RSR_RECV_DONE_MASK;
- __raw_writel(reg_data, addr + XEL_RSR_OFFSET);
+ xemaclite_writel(reg_data, addr + XEL_RSR_OFFSET);
return length;
}
@@ -465,14 +473,14 @@ static void xemaclite_update_address(str
xemaclite_aligned_write(address_ptr, (u32 __force *) addr, ETH_ALEN);
- __raw_writel(ETH_ALEN, addr + XEL_TPLR_OFFSET);
+ xemaclite_writel(ETH_ALEN, addr + XEL_TPLR_OFFSET);
/* Update the MAC address in the EmacLite */
- reg_data = __raw_readl(addr + XEL_TSR_OFFSET);
- __raw_writel(reg_data | XEL_TSR_PROG_MAC_ADDR, addr + XEL_TSR_OFFSET);
+ reg_data = xemaclite_readl(addr + XEL_TSR_OFFSET);
+ xemaclite_writel(reg_data | XEL_TSR_PROG_MAC_ADDR, addr + XEL_TSR_OFFSET);
/* Wait for EmacLite to finish with the MAC address update */
- while ((__raw_readl(addr + XEL_TSR_OFFSET) &
+ while ((xemaclite_readl(addr + XEL_TSR_OFFSET) &
XEL_TSR_PROG_MAC_ADDR) != 0)
;
}
@@ -642,32 +650,32 @@ static irqreturn_t xemaclite_interrupt(i
u32 tx_status;
/* Check if there is Rx Data available */
- if ((__raw_readl(base_addr + XEL_RSR_OFFSET) &
+ if ((xemaclite_readl(base_addr + XEL_RSR_OFFSET) &
XEL_RSR_RECV_DONE_MASK) ||
- (__raw_readl(base_addr + XEL_BUFFER_OFFSET + XEL_RSR_OFFSET)
+ (xemaclite_readl(base_addr + XEL_BUFFER_OFFSET + XEL_RSR_OFFSET)
& XEL_RSR_RECV_DONE_MASK))
xemaclite_rx_handler(dev);
/* Check if the Transmission for the first buffer is completed */
- tx_status = __raw_readl(base_addr + XEL_TSR_OFFSET);
+ tx_status = xemaclite_readl(base_addr + XEL_TSR_OFFSET);
if (((tx_status & XEL_TSR_XMIT_BUSY_MASK) == 0) &&
(tx_status & XEL_TSR_XMIT_ACTIVE_MASK) != 0) {
tx_status &= ~XEL_TSR_XMIT_ACTIVE_MASK;
- __raw_writel(tx_status, base_addr + XEL_TSR_OFFSET);
+ xemaclite_writel(tx_status, base_addr + XEL_TSR_OFFSET);
tx_complete = true;
}
/* Check if the Transmission for the second buffer is completed */
- tx_status = __raw_readl(base_addr + XEL_BUFFER_OFFSET + XEL_TSR_OFFSET);
+ tx_status = xemaclite_readl(base_addr + XEL_BUFFER_OFFSET + XEL_TSR_OFFSET);
if (((tx_status & XEL_TSR_XMIT_BUSY_MASK) == 0) &&
(tx_status & XEL_TSR_XMIT_ACTIVE_MASK) != 0) {
tx_status &= ~XEL_TSR_XMIT_ACTIVE_MASK;
- __raw_writel(tx_status, base_addr + XEL_BUFFER_OFFSET +
- XEL_TSR_OFFSET);
+ xemaclite_writel(tx_status, base_addr + XEL_BUFFER_OFFSET +
+ XEL_TSR_OFFSET);
tx_complete = true;
}
@@ -700,7 +708,7 @@ static int xemaclite_mdio_wait(struct ne
/* wait for the MDIO interface to not be busy or timeout
after some time.
*/
- while (__raw_readl(lp->base_addr + XEL_MDIOCTRL_OFFSET) &
+ while (xemaclite_readl(lp->base_addr + XEL_MDIOCTRL_OFFSET) &
XEL_MDIOCTRL_MDIOSTS_MASK) {
if (time_before_eq(end, jiffies)) {
WARN_ON(1);
@@ -736,17 +744,17 @@ static int xemaclite_mdio_read(struct mi
* MDIO Address register. Set the Status bit in the MDIO Control
* register to start a MDIO read transaction.
*/
- ctrl_reg = __raw_readl(lp->base_addr + XEL_MDIOCTRL_OFFSET);
- __raw_writel(XEL_MDIOADDR_OP_MASK |
- ((phy_id << XEL_MDIOADDR_PHYADR_SHIFT) | reg),
- lp->base_addr + XEL_MDIOADDR_OFFSET);
- __raw_writel(ctrl_reg | XEL_MDIOCTRL_MDIOSTS_MASK,
- lp->base_addr + XEL_MDIOCTRL_OFFSET);
+ ctrl_reg = xemaclite_readl(lp->base_addr + XEL_MDIOCTRL_OFFSET);
+ xemaclite_writel(XEL_MDIOADDR_OP_MASK |
+ ((phy_id << XEL_MDIOADDR_PHYADR_SHIFT) | reg),
+ lp->base_addr + XEL_MDIOADDR_OFFSET);
+ xemaclite_writel(ctrl_reg | XEL_MDIOCTRL_MDIOSTS_MASK,
+ lp->base_addr + XEL_MDIOCTRL_OFFSET);
if (xemaclite_mdio_wait(lp))
return -ETIMEDOUT;
- rc = __raw_readl(lp->base_addr + XEL_MDIORD_OFFSET);
+ rc = xemaclite_readl(lp->base_addr + XEL_MDIORD_OFFSET);
dev_dbg(&lp->ndev->dev,
"xemaclite_mdio_read(phy_id=%i, reg=%x) == %x\n",
@@ -783,13 +791,13 @@ static int xemaclite_mdio_write(struct m
* Data register. Finally, set the Status bit in the MDIO Control
* register to start a MDIO write transaction.
*/
- ctrl_reg = __raw_readl(lp->base_addr + XEL_MDIOCTRL_OFFSET);
- __raw_writel(~XEL_MDIOADDR_OP_MASK &
- ((phy_id << XEL_MDIOADDR_PHYADR_SHIFT) | reg),
- lp->base_addr + XEL_MDIOADDR_OFFSET);
- __raw_writel(val, lp->base_addr + XEL_MDIOWR_OFFSET);
- __raw_writel(ctrl_reg | XEL_MDIOCTRL_MDIOSTS_MASK,
- lp->base_addr + XEL_MDIOCTRL_OFFSET);
+ ctrl_reg = xemaclite_readl(lp->base_addr + XEL_MDIOCTRL_OFFSET);
+ xemaclite_writel(~XEL_MDIOADDR_OP_MASK &
+ ((phy_id << XEL_MDIOADDR_PHYADR_SHIFT) | reg),
+ lp->base_addr + XEL_MDIOADDR_OFFSET);
+ xemaclite_writel(val, lp->base_addr + XEL_MDIOWR_OFFSET);
+ xemaclite_writel(ctrl_reg | XEL_MDIOCTRL_MDIOSTS_MASK,
+ lp->base_addr + XEL_MDIOCTRL_OFFSET);
return 0;
}
@@ -834,8 +842,8 @@ static int xemaclite_mdio_setup(struct n
/* Enable the MDIO bus by asserting the enable bit in MDIO Control
* register.
*/
- __raw_writel(XEL_MDIOCTRL_MDIOEN_MASK,
- lp->base_addr + XEL_MDIOCTRL_OFFSET);
+ xemaclite_writel(XEL_MDIOCTRL_MDIOEN_MASK,
+ lp->base_addr + XEL_MDIOCTRL_OFFSET);
bus = mdiobus_alloc();
if (!bus) {
@@ -1138,8 +1146,8 @@ static int xemaclite_of_probe(struct pla
dev_warn(dev, "No MAC address found\n");
/* Clear the Tx CSR's in case this is a restart */
- __raw_writel(0, lp->base_addr + XEL_TSR_OFFSET);
- __raw_writel(0, lp->base_addr + XEL_BUFFER_OFFSET + XEL_TSR_OFFSET);
+ xemaclite_writel(0, lp->base_addr + XEL_TSR_OFFSET);
+ xemaclite_writel(0, lp->base_addr + XEL_BUFFER_OFFSET + XEL_TSR_OFFSET);
/* Set the MAC address in the EmacLite device */
xemaclite_update_address(lp, ndev->dev_addr);
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Carpenter <[email protected]>
commit b5c3206190f1fddd100b3060eb15f0d775ffeab8 upstream.
My static checker complains that if "lvl" is ULONG_MAX (this is 64 bit)
then some of the strings will overflow. I don't know if that's possible
but it seems simple enough to make the buffers slightly larger.
Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Cc: Waldemar Brodkorb <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/sparc/kernel/traps_64.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/sparc/kernel/traps_64.c
+++ b/arch/sparc/kernel/traps_64.c
@@ -85,7 +85,7 @@ static void dump_tl1_traplog(struct tl1_
void bad_trap(struct pt_regs *regs, long lvl)
{
- char buffer[32];
+ char buffer[36];
siginfo_t info;
if (notify_die(DIE_TRAP, "bad trap", regs,
@@ -116,7 +116,7 @@ void bad_trap(struct pt_regs *regs, long
void bad_trap_tl1(struct pt_regs *regs, long lvl)
{
- char buffer[32];
+ char buffer[36];
if (notify_die(DIE_TRAP_TL1, "bad trap tl1", regs,
0, lvl, SIGTRAP) == NOTIFY_STOP)
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hugh Dickins <[email protected]>
commit 1be7107fbe18eed3e319a6c3e83c78254b693acb upstream.
Stack guard page is a useful feature to reduce a risk of stack smashing
into a different mapping. We have been using a single page gap which
is sufficient to prevent having stack adjacent to a different mapping.
But this seems to be insufficient in the light of the stack usage in
userspace. E.g. glibc uses alloca() allocations as large as 64kB in many
commonly used functions. Others use constructs like gid_t buffer[NGROUPS_MAX],
which is 256kB, or stack strings with MAX_ARG_STRLEN.
This will become especially dangerous for suid binaries with the default
unlimited stack size limit, because those applications can be tricked into
consuming a large portion of the stack and a single glibc call could jump
over the guard page. These attacks are not theoretical, unfortunately.
Make those attacks less probable by increasing the stack guard gap
to 1MB (on systems with 4k pages; but make it depend on the page size
because systems with larger base pages might cap stack allocations in
the PAGE_SIZE units) which should cover larger alloca() and VLA stack
allocations. It is obviously not a full fix because the problem is
somehow inherent, but it should reduce attack space a lot.
One could argue that the gap size should be configurable from userspace,
but that can be done later when somebody finds that the new 1MB is wrong
for some special case applications. For now, add a kernel command line
option (stack_guard_gap) to specify the stack gap size (in page units).
Implementation-wise, first delete all the old code for the stack guard page:
because although we could get away with accounting one extra page in a
stack vma, accounting a larger gap can break userspace - case in point,
a program run with "ulimit -S -v 20000" failed when the 1MB gap was
counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK
and strict non-overcommit mode.
Instead of keeping gap inside the stack vma, maintain the stack guard
gap as a gap between vmas: using vm_start_gap() in place of vm_start
(or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few
places which need to respect the gap - mainly arch_get_unmapped_area(),
and the vma tree's subtree_gap support for that.
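[ Editorial note, not part of the upstream changelog: conceptually, the new
  helper those callers use looks roughly like the sketch below (the actual
  definition lives in the include/linux/mm.h hunk of this patch). Here
  stack_guard_gap defaults to 256 pages and can be overridden with the
  stack_guard_gap= command line option mentioned above.

	static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
	{
		unsigned long vm_start = vma->vm_start;

		if (vma->vm_flags & VM_GROWSDOWN) {
			vm_start -= stack_guard_gap;
			if (vm_start > vma->vm_start)	/* guard against wrap */
				vm_start = 0;
		}
		return vm_start;
	}
]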
Original-patch-by: Oleg Nesterov <[email protected]>
Original-patch-by: Michal Hocko <[email protected]>
Signed-off-by: Hugh Dickins <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Tested-by: Helge Deller <[email protected]> # parisc
Signed-off-by: Linus Torvalds <[email protected]>
[wt: backport to 4.11: adjust context]
[wt: backport to 4.9: adjust context ; kernel doc was not in admin-guide]
[wt: backport to 4.4: adjust context ; drop ppc hugetlb_radix changes]
[wt: backport to 3.18: adjust context ; no FOLL_POPULATE ;
s390 uses generic arch_get_unmapped_area()]
Signed-off-by: Willy Tarreau <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
Documentation/kernel-parameters.txt | 7 +
arch/arc/mm/mmap.c | 2
arch/arm/mm/mmap.c | 4
arch/frv/mm/elf-fdpic.c | 2
arch/mips/mm/mmap.c | 2
arch/parisc/kernel/sys_parisc.c | 15 ++-
arch/powerpc/mm/slice.c | 2
arch/sh/mm/mmap.c | 4
arch/sparc/kernel/sys_sparc_64.c | 4
arch/sparc/mm/hugetlbpage.c | 2
arch/tile/mm/hugetlbpage.c | 2
arch/x86/kernel/sys_x86_64.c | 4
arch/x86/mm/hugetlbpage.c | 2
arch/xtensa/kernel/syscall.c | 2
fs/hugetlbfs/inode.c | 2
fs/proc/task_mmu.c | 4
include/linux/mm.h | 57 ++++++-------
mm/gup.c | 6 -
mm/memory.c | 38 ---------
mm/mmap.c | 147 +++++++++++++++++++++---------------
20 files changed, 148 insertions(+), 160 deletions(-)
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -3324,6 +3324,13 @@ bytes respectively. Such letter suffixes
spia_pedr=
spia_peddr=
+ stack_guard_gap= [MM]
+ override the default stack gap protection. The value
+ is in page units and it defines how many pages prior
+ to (for stacks growing down) resp. after (for stacks
+ growing up) the main stack are reserved for no other
+ mapping. Default value is 256 pages.
+
stacktrace [FTRACE]
Enabled the stack tracer on boot up.
--- a/arch/arc/mm/mmap.c
+++ b/arch/arc/mm/mmap.c
@@ -64,7 +64,7 @@ arch_get_unmapped_area(struct file *filp
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -89,7 +89,7 @@ arch_get_unmapped_area(struct file *filp
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -140,7 +140,7 @@ arch_get_unmapped_area_topdown(struct fi
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
--- a/arch/frv/mm/elf-fdpic.c
+++ b/arch/frv/mm/elf-fdpic.c
@@ -74,7 +74,7 @@ unsigned long arch_get_unmapped_area(str
addr = PAGE_ALIGN(addr);
vma = find_vma(current->mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
goto success;
}
--- a/arch/mips/mm/mmap.c
+++ b/arch/mips/mm/mmap.c
@@ -92,7 +92,7 @@ static unsigned long arch_get_unmapped_a
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
--- a/arch/parisc/kernel/sys_parisc.c
+++ b/arch/parisc/kernel/sys_parisc.c
@@ -88,7 +88,7 @@ unsigned long arch_get_unmapped_area(str
unsigned long len, unsigned long pgoff, unsigned long flags)
{
struct mm_struct *mm = current->mm;
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma, *prev;
unsigned long task_size = TASK_SIZE;
int do_color_align, last_mmap;
struct vm_unmapped_area_info info;
@@ -115,9 +115,10 @@ unsigned long arch_get_unmapped_area(str
else
addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
+ vma = find_vma_prev(mm, addr, &prev);
if (task_size - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)) &&
+ (!prev || addr >= vm_end_gap(prev)))
goto found_addr;
}
@@ -141,7 +142,7 @@ arch_get_unmapped_area_topdown(struct fi
const unsigned long len, const unsigned long pgoff,
const unsigned long flags)
{
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma, *prev;
struct mm_struct *mm = current->mm;
unsigned long addr = addr0;
int do_color_align, last_mmap;
@@ -175,9 +176,11 @@ arch_get_unmapped_area_topdown(struct fi
addr = COLOR_ALIGN(addr, last_mmap, pgoff);
else
addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
+
+ vma = find_vma_prev(mm, addr, &prev);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)) &&
+ (!prev || addr >= vm_end_gap(prev)))
goto found_addr;
}
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -105,7 +105,7 @@ static int slice_area_is_free(struct mm_
if ((mm->task_size - len) < addr)
return 0;
vma = find_vma(mm, addr);
- return (!vma || (addr + len) <= vma->vm_start);
+ return (!vma || (addr + len) <= vm_start_gap(vma));
}
static int slice_low_has_vma(struct mm_struct *mm, unsigned long slice)
--- a/arch/sh/mm/mmap.c
+++ b/arch/sh/mm/mmap.c
@@ -63,7 +63,7 @@ unsigned long arch_get_unmapped_area(str
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -113,7 +113,7 @@ arch_get_unmapped_area_topdown(struct fi
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -118,7 +118,7 @@ unsigned long arch_get_unmapped_area(str
vma = find_vma(mm, addr);
if (task_size - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -181,7 +181,7 @@ arch_get_unmapped_area_topdown(struct fi
vma = find_vma(mm, addr);
if (task_size - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -115,7 +115,7 @@ hugetlb_get_unmapped_area(struct file *f
addr = ALIGN(addr, HPAGE_SIZE);
vma = find_vma(mm, addr);
if (task_size - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
if (mm->get_unmapped_area == arch_get_unmapped_area)
--- a/arch/tile/mm/hugetlbpage.c
+++ b/arch/tile/mm/hugetlbpage.c
@@ -237,7 +237,7 @@ unsigned long hugetlb_get_unmapped_area(
addr = ALIGN(addr, huge_page_size(h));
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
if (current->mm->get_unmapped_area == arch_get_unmapped_area)
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -127,7 +127,7 @@ arch_get_unmapped_area(struct file *filp
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (end - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -166,7 +166,7 @@ arch_get_unmapped_area_topdown(struct fi
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -144,7 +144,7 @@ hugetlb_get_unmapped_area(struct file *f
addr = ALIGN(addr, huge_page_size(h));
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
if (mm->get_unmapped_area == arch_get_unmapped_area)
--- a/arch/xtensa/kernel/syscall.c
+++ b/arch/xtensa/kernel/syscall.c
@@ -86,7 +86,7 @@ unsigned long arch_get_unmapped_area(str
/* At this point: (!vmm || addr < vmm->vm_end). */
if (TASK_SIZE - len < addr)
return -ENOMEM;
- if (!vmm || addr + len <= vmm->vm_start)
+ if (!vmm || addr + len <= vm_start_gap(vmm))
return addr;
addr = vmm->vm_end;
if (flags & MAP_SHARED)
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -171,7 +171,7 @@ hugetlb_get_unmapped_area(struct file *f
addr = ALIGN(addr, huge_page_size(h));
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -284,11 +284,7 @@ show_map_vma(struct seq_file *m, struct
/* We don't show the stack guard page in /proc/maps */
start = vma->vm_start;
- if (stack_guard_page_start(vma, start))
- start += PAGE_SIZE;
end = vma->vm_end;
- if (stack_guard_page_end(vma, end))
- end -= PAGE_SIZE;
seq_setwidth(m, 25 + sizeof(void *) * 6 - 1);
seq_printf(m, "%08lx-%08lx %c%c%c%c %08llx %02x:%02x %lu ",
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1242,37 +1242,6 @@ int set_page_dirty_lock(struct page *pag
int clear_page_dirty_for_io(struct page *page);
int get_cmdline(struct task_struct *task, char *buffer, int buflen);
-/* Is the vma a continuation of the stack vma above it? */
-static inline int vma_growsdown(struct vm_area_struct *vma, unsigned long addr)
-{
- return vma && (vma->vm_end == addr) && (vma->vm_flags & VM_GROWSDOWN);
-}
-
-static inline int stack_guard_page_start(struct vm_area_struct *vma,
- unsigned long addr)
-{
- return (vma->vm_flags & VM_GROWSDOWN) &&
- (vma->vm_start == addr) &&
- !vma_growsdown(vma->vm_prev, addr);
-}
-
-/* Is the vma a continuation of the stack vma below it? */
-static inline int vma_growsup(struct vm_area_struct *vma, unsigned long addr)
-{
- return vma && (vma->vm_start == addr) && (vma->vm_flags & VM_GROWSUP);
-}
-
-static inline int stack_guard_page_end(struct vm_area_struct *vma,
- unsigned long addr)
-{
- return (vma->vm_flags & VM_GROWSUP) &&
- (vma->vm_end == addr) &&
- !vma_growsup(vma->vm_next, addr);
-}
-
-extern struct task_struct *task_of_stack(struct task_struct *task,
- struct vm_area_struct *vma, bool in_group);
-
extern unsigned long move_page_tables(struct vm_area_struct *vma,
unsigned long old_addr, struct vm_area_struct *new_vma,
unsigned long new_addr, unsigned long len,
@@ -1928,7 +1897,7 @@ void page_cache_async_readahead(struct a
pgoff_t offset,
unsigned long size);
-unsigned long max_sane_readahead(unsigned long nr);
+extern unsigned long stack_guard_gap;
/* Generic expand stack which grows the stack according to GROWS{UP,DOWN} */
extern int expand_stack(struct vm_area_struct *vma, unsigned long address);
@@ -1958,6 +1927,30 @@ static inline struct vm_area_struct * fi
return vma;
}
+static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
+{
+ unsigned long vm_start = vma->vm_start;
+
+ if (vma->vm_flags & VM_GROWSDOWN) {
+ vm_start -= stack_guard_gap;
+ if (vm_start > vma->vm_start)
+ vm_start = 0;
+ }
+ return vm_start;
+}
+
+static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
+{
+ unsigned long vm_end = vma->vm_end;
+
+ if (vma->vm_flags & VM_GROWSUP) {
+ vm_end += stack_guard_gap;
+ if (vm_end < vma->vm_end)
+ vm_end = -PAGE_SIZE;
+ }
+ return vm_end;
+}
+
static inline unsigned long vma_pages(struct vm_area_struct *vma)
{
return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -275,10 +275,8 @@ static int faultin_page(struct task_stru
unsigned int fault_flags = 0;
int ret;
- /* For mlock, just skip the stack guard page. */
- if ((*flags & FOLL_MLOCK) &&
- (stack_guard_page_start(vma, address) ||
- stack_guard_page_end(vma, address + PAGE_SIZE)))
+ /* mlock all present pages, but do not fault in new pages */
+ if (*flags & FOLL_MLOCK)
return -ENOENT;
if (*flags & FOLL_WRITE)
fault_flags |= FAULT_FLAG_WRITE;
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2580,40 +2580,6 @@ out_release:
}
/*
- * This is like a special single-page "expand_{down|up}wards()",
- * except we must first make sure that 'address{-|+}PAGE_SIZE'
- * doesn't hit another vma.
- */
-static inline int check_stack_guard_page(struct vm_area_struct *vma, unsigned long address)
-{
- address &= PAGE_MASK;
- if ((vma->vm_flags & VM_GROWSDOWN) && address == vma->vm_start) {
- struct vm_area_struct *prev = vma->vm_prev;
-
- /*
- * Is there a mapping abutting this one below?
- *
- * That's only ok if it's the same stack mapping
- * that has gotten split..
- */
- if (prev && prev->vm_end == address)
- return prev->vm_flags & VM_GROWSDOWN ? 0 : -ENOMEM;
-
- return expand_downwards(vma, address - PAGE_SIZE);
- }
- if ((vma->vm_flags & VM_GROWSUP) && address + PAGE_SIZE == vma->vm_end) {
- struct vm_area_struct *next = vma->vm_next;
-
- /* As VM_GROWSDOWN but s/below/above/ */
- if (next && next->vm_start == address + PAGE_SIZE)
- return next->vm_flags & VM_GROWSUP ? 0 : -ENOMEM;
-
- return expand_upwards(vma, address + PAGE_SIZE);
- }
- return 0;
-}
-
-/*
* We enter with non-exclusive mmap_sem (to exclude vma changes,
* but allow concurrent faults), and pte mapped but not yet locked.
* We return with mmap_sem still held, but pte unmapped and unlocked.
@@ -2633,10 +2599,6 @@ static int do_anonymous_page(struct mm_s
if (vma->vm_flags & VM_SHARED)
return VM_FAULT_SIGBUS;
- /* Check if we need to add a guard page to the stack */
- if (check_stack_guard_page(vma, address) < 0)
- return VM_FAULT_SIGSEGV;
-
/* Use the zero-page for reads */
if (!(flags & FAULT_FLAG_WRITE)) {
entry = pte_mkspecial(pfn_pte(my_zero_pfn(address),
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -290,6 +290,7 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
unsigned long retval;
unsigned long newbrk, oldbrk;
struct mm_struct *mm = current->mm;
+ struct vm_area_struct *next;
unsigned long min_brk;
bool populate;
@@ -334,7 +335,8 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
}
/* Check against existing mmap mappings. */
- if (find_vma_intersection(mm, oldbrk, newbrk+PAGE_SIZE))
+ next = find_vma(mm, oldbrk);
+ if (next && newbrk + PAGE_SIZE > vm_start_gap(next))
goto out;
/* Ok, looks good - let it rip. */
@@ -357,10 +359,22 @@ out:
static long vma_compute_subtree_gap(struct vm_area_struct *vma)
{
- unsigned long max, subtree_gap;
- max = vma->vm_start;
- if (vma->vm_prev)
- max -= vma->vm_prev->vm_end;
+ unsigned long max, prev_end, subtree_gap;
+
+ /*
+ * Note: in the rare case of a VM_GROWSDOWN above a VM_GROWSUP, we
+ * allow two stack_guard_gaps between them here, and when choosing
+ * an unmapped area; whereas when expanding we only require one.
+ * That's a little inconsistent, but keeps the code here simpler.
+ */
+ max = vm_start_gap(vma);
+ if (vma->vm_prev) {
+ prev_end = vm_end_gap(vma->vm_prev);
+ if (max > prev_end)
+ max -= prev_end;
+ else
+ max = 0;
+ }
if (vma->vm_rb.rb_left) {
subtree_gap = rb_entry(vma->vm_rb.rb_left,
struct vm_area_struct, vm_rb)->rb_subtree_gap;
@@ -453,7 +467,7 @@ static void validate_mm(struct mm_struct
anon_vma_unlock_read(anon_vma);
}
- highest_address = vma->vm_end;
+ highest_address = vm_end_gap(vma);
vma = vma->vm_next;
i++;
}
@@ -622,7 +636,7 @@ void __vma_link_rb(struct mm_struct *mm,
if (vma->vm_next)
vma_gap_update(vma->vm_next);
else
- mm->highest_vm_end = vma->vm_end;
+ mm->highest_vm_end = vm_end_gap(vma);
/*
* vma->vm_prev wasn't known when we followed the rbtree to find the
@@ -874,7 +888,7 @@ again: remove_next = 1 + (end > next->
vma_gap_update(vma);
if (end_changed) {
if (!next)
- mm->highest_vm_end = end;
+ mm->highest_vm_end = vm_end_gap(vma);
else if (!adjust_next)
vma_gap_update(next);
}
@@ -1740,7 +1754,7 @@ unsigned long unmapped_area(struct vm_un
while (true) {
/* Visit left subtree if it looks promising */
- gap_end = vma->vm_start;
+ gap_end = vm_start_gap(vma);
if (gap_end >= low_limit && vma->vm_rb.rb_left) {
struct vm_area_struct *left =
rb_entry(vma->vm_rb.rb_left,
@@ -1751,7 +1765,7 @@ unsigned long unmapped_area(struct vm_un
}
}
- gap_start = vma->vm_prev ? vma->vm_prev->vm_end : 0;
+ gap_start = vma->vm_prev ? vm_end_gap(vma->vm_prev) : 0;
check_current:
/* Check if current node has a suitable gap */
if (gap_start > high_limit)
@@ -1778,8 +1792,8 @@ check_current:
vma = rb_entry(rb_parent(prev),
struct vm_area_struct, vm_rb);
if (prev == vma->vm_rb.rb_left) {
- gap_start = vma->vm_prev->vm_end;
- gap_end = vma->vm_start;
+ gap_start = vm_end_gap(vma->vm_prev);
+ gap_end = vm_start_gap(vma);
goto check_current;
}
}
@@ -1843,7 +1857,7 @@ unsigned long unmapped_area_topdown(stru
while (true) {
/* Visit right subtree if it looks promising */
- gap_start = vma->vm_prev ? vma->vm_prev->vm_end : 0;
+ gap_start = vma->vm_prev ? vm_end_gap(vma->vm_prev) : 0;
if (gap_start <= high_limit && vma->vm_rb.rb_right) {
struct vm_area_struct *right =
rb_entry(vma->vm_rb.rb_right,
@@ -1856,7 +1870,7 @@ unsigned long unmapped_area_topdown(stru
check_current:
/* Check if current node has a suitable gap */
- gap_end = vma->vm_start;
+ gap_end = vm_start_gap(vma);
if (gap_end < low_limit)
return -ENOMEM;
if (gap_start <= high_limit && gap_end - gap_start >= length)
@@ -1882,7 +1896,7 @@ check_current:
struct vm_area_struct, vm_rb);
if (prev == vma->vm_rb.rb_right) {
gap_start = vma->vm_prev ?
- vma->vm_prev->vm_end : 0;
+ vm_end_gap(vma->vm_prev) : 0;
goto check_current;
}
}
@@ -1920,7 +1934,7 @@ arch_get_unmapped_area(struct file *filp
unsigned long len, unsigned long pgoff, unsigned long flags)
{
struct mm_struct *mm = current->mm;
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma, *prev;
struct vm_unmapped_area_info info;
if (len > TASK_SIZE - mmap_min_addr)
@@ -1931,9 +1945,10 @@ arch_get_unmapped_area(struct file *filp
if (addr) {
addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
+ vma = find_vma_prev(mm, addr, &prev);
if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)) &&
+ (!prev || addr >= vm_end_gap(prev)))
return addr;
}
@@ -1956,7 +1971,7 @@ arch_get_unmapped_area_topdown(struct fi
const unsigned long len, const unsigned long pgoff,
const unsigned long flags)
{
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma, *prev;
struct mm_struct *mm = current->mm;
unsigned long addr = addr0;
struct vm_unmapped_area_info info;
@@ -1971,9 +1986,10 @@ arch_get_unmapped_area_topdown(struct fi
/* requesting a specific address */
if (addr) {
addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
+ vma = find_vma_prev(mm, addr, &prev);
if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)) &&
+ (!prev || addr >= vm_end_gap(prev)))
return addr;
}
@@ -2099,21 +2115,19 @@ find_vma_prev(struct mm_struct *mm, unsi
* update accounting. This is shared with both the
* grow-up and grow-down cases.
*/
-static int acct_stack_growth(struct vm_area_struct *vma, unsigned long size, unsigned long grow)
+static int acct_stack_growth(struct vm_area_struct *vma,
+ unsigned long size, unsigned long grow)
{
struct mm_struct *mm = vma->vm_mm;
struct rlimit *rlim = current->signal->rlim;
- unsigned long new_start, actual_size;
+ unsigned long new_start;
/* address space limit tests */
if (!may_expand_vm(mm, grow))
return -ENOMEM;
/* Stack limit test */
- actual_size = size;
- if (size && (vma->vm_flags & (VM_GROWSUP | VM_GROWSDOWN)))
- actual_size -= PAGE_SIZE;
- if (actual_size > ACCESS_ONCE(rlim[RLIMIT_STACK].rlim_cur))
+ if (size > ACCESS_ONCE(rlim[RLIMIT_STACK].rlim_cur))
return -ENOMEM;
/* mlock limit tests */
@@ -2154,17 +2168,30 @@ static int acct_stack_growth(struct vm_a
*/
int expand_upwards(struct vm_area_struct *vma, unsigned long address)
{
+ struct vm_area_struct *next;
+ unsigned long gap_addr;
int error = 0;
if (!(vma->vm_flags & VM_GROWSUP))
return -EFAULT;
/* Guard against wrapping around to address 0. */
- if (address < PAGE_ALIGN(address+4))
- address = PAGE_ALIGN(address+4);
- else
+ address &= PAGE_MASK;
+ address += PAGE_SIZE;
+ if (!address)
return -ENOMEM;
+ /* Enforce stack_guard_gap */
+ gap_addr = address + stack_guard_gap;
+ if (gap_addr < address)
+ return -ENOMEM;
+ next = vma->vm_next;
+ if (next && next->vm_start < gap_addr) {
+ if (!(next->vm_flags & VM_GROWSUP))
+ return -ENOMEM;
+ /* Check that both stack segments have the same anon_vma? */
+ }
+
/* We must make sure the anon_vma is allocated. */
if (unlikely(anon_vma_prepare(vma)))
return -ENOMEM;
@@ -2205,7 +2232,7 @@ int expand_upwards(struct vm_area_struct
if (vma->vm_next)
vma_gap_update(vma->vm_next);
else
- vma->vm_mm->highest_vm_end = address;
+ vma->vm_mm->highest_vm_end = vm_end_gap(vma);
spin_unlock(&vma->vm_mm->page_table_lock);
perf_event_mmap(vma);
@@ -2225,6 +2252,8 @@ int expand_upwards(struct vm_area_struct
int expand_downwards(struct vm_area_struct *vma,
unsigned long address)
{
+ struct vm_area_struct *prev;
+ unsigned long gap_addr;
int error;
address &= PAGE_MASK;
@@ -2232,6 +2261,17 @@ int expand_downwards(struct vm_area_stru
if (error)
return error;
+ /* Enforce stack_guard_gap */
+ gap_addr = address - stack_guard_gap;
+ if (gap_addr > address)
+ return -ENOMEM;
+ prev = vma->vm_prev;
+ if (prev && prev->vm_end > gap_addr) {
+ if (!(prev->vm_flags & VM_GROWSDOWN))
+ return -ENOMEM;
+ /* Check that both stack segments have the same anon_vma? */
+ }
+
/* We must make sure the anon_vma is allocated. */
if (unlikely(anon_vma_prepare(vma)))
return -ENOMEM;
@@ -2283,28 +2323,25 @@ int expand_downwards(struct vm_area_stru
return error;
}
-/*
- * Note how expand_stack() refuses to expand the stack all the way to
- * abut the next virtual mapping, *unless* that mapping itself is also
- * a stack mapping. We want to leave room for a guard page, after all
- * (the guard page itself is not added here, that is done by the
- * actual page faulting logic)
- *
- * This matches the behavior of the guard page logic (see mm/memory.c:
- * check_stack_guard_page()), which only allows the guard page to be
- * removed under these circumstances.
- */
+/* enforced gap between the expanding stack and other mappings. */
+unsigned long stack_guard_gap = 256UL<<PAGE_SHIFT;
+
+static int __init cmdline_parse_stack_guard_gap(char *p)
+{
+ unsigned long val;
+ char *endptr;
+
+ val = simple_strtoul(p, &endptr, 10);
+ if (!*endptr)
+ stack_guard_gap = val << PAGE_SHIFT;
+
+ return 0;
+}
+__setup("stack_guard_gap=", cmdline_parse_stack_guard_gap);
+
#ifdef CONFIG_STACK_GROWSUP
int expand_stack(struct vm_area_struct *vma, unsigned long address)
{
- struct vm_area_struct *next;
-
- address &= PAGE_MASK;
- next = vma->vm_next;
- if (next && next->vm_start == address + PAGE_SIZE) {
- if (!(next->vm_flags & VM_GROWSUP))
- return -ENOMEM;
- }
return expand_upwards(vma, address);
}
@@ -2326,14 +2363,6 @@ find_extend_vma(struct mm_struct *mm, un
#else
int expand_stack(struct vm_area_struct *vma, unsigned long address)
{
- struct vm_area_struct *prev;
-
- address &= PAGE_MASK;
- prev = vma->vm_prev;
- if (prev && prev->vm_end == address) {
- if (!(prev->vm_flags & VM_GROWSDOWN))
- return -ENOMEM;
- }
return expand_downwards(vma, address);
}
@@ -2429,7 +2458,7 @@ detach_vmas_to_be_unmapped(struct mm_str
vma->vm_prev = prev;
vma_gap_update(vma);
} else
- mm->highest_vm_end = prev ? prev->vm_end : 0;
+ mm->highest_vm_end = prev ? vm_end_gap(prev) : 0;
tail_vma->vm_next = NULL;
/* Kill the cache */
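
A quick way to observe the new behaviour from user space (purely an
illustrative sketch, not part of the patch; the default gap is 256 pages
and the exact layout is architecture dependent): hint an anonymous mapping
a couple of pages below the stack vma and see whether the kernel honours
the hint. With the gap enforced in arch_get_unmapped_area(), the hint falls
inside the guard gap, so the mapping should end up somewhere else:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	FILE *f = fopen("/proc/self/maps", "r");
	char line[256];
	unsigned long stack_start = 0, stack_end = 0;

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		if (strstr(line, "[stack]")) {
			sscanf(line, "%lx-%lx", &stack_start, &stack_end);
			break;
		}
	}
	fclose(f);
	if (!stack_start)
		return 1;

	/* Hint an address two pages below the stack vma, i.e. inside the
	   guard gap.  Without MAP_FIXED the kernel may ignore the hint,
	   and with the guard gap in place it should. */
	void *hint = (void *)(stack_start - 2 * (unsigned long)getpagesize());
	void *p = mmap(hint, getpagesize(), PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;
	printf("stack at %#lx, hint %p, kernel placed mapping at %p\n",
	       stack_start, hint, p);
	munmap(p, getpagesize());
	return 0;
}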
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chris Brandt <[email protected]>
commit dd14a3e9b92ac6f0918054f9e3477438760a4fa6 upstream.
The timeout for BULK packets was 300ms which is a long time if other
endpoints or devices are waiting for their turn. Changing it to 50ms
greatly increased the overall performance for multi-endpoint devices.
Fixes: 5d3043586db4 ("usb: r8a66597-hcd: host controller driver for R8A6659")
Signed-off-by: Chris Brandt <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/usb/host/r8a66597-hcd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/usb/host/r8a66597-hcd.c
+++ b/drivers/usb/host/r8a66597-hcd.c
@@ -1269,7 +1269,7 @@ static void set_td_timer(struct r8a66597
time = 30;
break;
default:
- time = 300;
+ time = 50;
break;
}
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner <[email protected]>
commit ff86bf0c65f14346bf2440534f9ba5ac232c39a0 upstream.
The alarmtimer code has another source of potentially rearming itself too
fast. Interval timers with a very small interval have a similar CPU hog
effect as the previously fixed overflow issue.
The reason is that alarmtimers do not implement the normal protection
against this kind of problem which the other posix timers use:
timer expires -> queue signal -> deliver signal -> rearm timer
This scheme brings the rearming under scheduler control and prevents
permanently firing timers which hog the CPU.
Bringing this scheme to the alarm timer code is a major overhaul because it
completely lacks the necessary mechanisms.
So for a quick fix, limit the interval to one jiffy. This is not
problematic in practice as alarmtimers are usually backed by an RTC for
suspend, which has 1 second resolution. It could therefore be argued that
the resolution of this clock should be set to 1 second in general, but
that's outside the scope of this fix.
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Kostya Serebryany <[email protected]>
Cc: syzkaller <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
kernel/time/alarmtimer.c | 8 ++++++++
1 file changed, 8 insertions(+)
--- a/kernel/time/alarmtimer.c
+++ b/kernel/time/alarmtimer.c
@@ -614,6 +614,14 @@ static int alarm_timer_set(struct k_itim
/* start the timer */
timr->it.alarm.interval = timespec_to_ktime(new_setting->it_interval);
+
+ /*
+ * Rate limit to the tick as a hot fix to prevent DOS. Will be
+ * mopped up later.
+ */
+ if (ktime_to_ns(timr->it.alarm.interval) < TICK_NSEC)
+ timr->it.alarm.interval = ktime_set(0, TICK_NSEC);
+
exp = timespec_to_ktime(new_setting->it_value);
/* Convert (if necessary) to absolute time */
if (flags != TIMER_ABSTIME) {
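
For reference, the kind of user-space pattern this clamps looks roughly
like the sketch below (illustrative only, not taken from the report;
CLOCK_REALTIME_ALARM normally requires CAP_WAKE_ALARM, and older glibc
needs -lrt for the posix timer calls). Before this change a 1ns interval
could keep the alarmtimer rearming back to back; with the clamp the number
of expirations over one second is bounded by roughly HZ:

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <time.h>

static volatile sig_atomic_t fired;

static void handler(int sig)
{
	(void)sig;
	fired++;
}

int main(void)
{
	timer_t t;
	struct sigevent sev = {
		.sigev_notify = SIGEV_SIGNAL,
		.sigev_signo  = SIGALRM,
	};
	struct itimerspec its = {
		.it_value    = { .tv_sec = 0, .tv_nsec = 1 },
		.it_interval = { .tv_sec = 0, .tv_nsec = 1 },	/* absurdly small period */
	};
	struct timespec start, now;

	signal(SIGALRM, handler);
	if (timer_create(CLOCK_REALTIME_ALARM, &sev, &t)) {
		perror("timer_create");	/* usually needs CAP_WAKE_ALARM */
		return 1;
	}
	if (timer_settime(t, 0, &its, NULL)) {
		perror("timer_settime");
		return 1;
	}

	/* busy-wait one second so signal delivery doesn't cut the wait short */
	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		clock_gettime(CLOCK_MONOTONIC, &now);
	} while (now.tv_sec - start.tv_sec < 1);

	printf("expirations in ~1s: %d\n", (int)fired);
	timer_delete(t);
	return 0;
}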
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Heiner Kallweit <[email protected]>
commit fa07ab72cbb0d843429e61bf179308aed6cbe0dd upstream.
In case __irq_set_trigger() fails, the resources requested via
irq_request_resources() are not released.
Add the missing release call into the error handling path.
Fixes: c1bacbae8192 ("genirq: Provide irq_request/release_resources chip callbacks")
Signed-off-by: Heiner Kallweit <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
kernel/irq/manage.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1156,8 +1156,10 @@ __setup_irq(unsigned int irq, struct irq
ret = __irq_set_trigger(desc, irq,
new->flags & IRQF_TRIGGER_MASK);
- if (ret)
+ if (ret) {
+ irq_release_resources(desc);
goto out_mask;
+ }
}
desc->istate &= ~(IRQS_AUTODETECT | IRQS_SPURIOUS_DISABLED | \
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chris Brandt <[email protected]>
commit 1f873d857b6c2fefb4dada952674aa01bcfb92bd upstream.
If multiple endpoints on a single device have pending IN URBs and one
endpoint times out due to NAKs (perfectly legal), select a different
endpoint URB to try.
The existing code only checked whether another device address had pending
URBs and ignored other IN endpoints on the current device address. This
leads to endpoints never getting serviced if one endpoint is using NAK as
a flow control method.
Fixes: 5d3043586db4 ("usb: r8a66597-hcd: host controller driver for R8A6659")
Signed-off-by: Chris Brandt <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/usb/host/r8a66597-hcd.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/usb/host/r8a66597-hcd.c
+++ b/drivers/usb/host/r8a66597-hcd.c
@@ -1785,6 +1785,7 @@ static void r8a66597_td_timer(unsigned l
pipe = td->pipe;
pipe_stop(r8a66597, pipe);
+ /* Select a different address or endpoint */
new_td = td;
do {
list_move_tail(&new_td->queue,
@@ -1794,7 +1795,8 @@ static void r8a66597_td_timer(unsigned l
new_td = td;
break;
}
- } while (td != new_td && td->address == new_td->address);
+ } while (td != new_td && td->address == new_td->address &&
+ td->pipe->info.epnum == new_td->pipe->info.epnum);
start_transfer(r8a66597, new_td);
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold <[email protected]>
commit 93491ced3c87c94b12220dbac0527e1356702179 upstream.
Add a define for the maximum number of ports on a SuperSpeed hub as per
USB 3.1 spec Table 10-5, and use it when verifying the retrieved hub
descriptor.
This specifically avoids benign attempts to update the DeviceRemovable
mask for non-existing ports (should we get that far).
Fixes: dbe79bbe9dcb ("USB 3.0 Hub Changes")
Acked-by: Alan Stern <[email protected]>
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/usb/core/hub.c | 8 +++++++-
include/uapi/linux/usb/ch11.h | 3 +++
2 files changed, 10 insertions(+), 1 deletion(-)
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -1346,7 +1346,13 @@ static int hub_configure(struct usb_hub
if (ret < 0) {
message = "can't read hub descriptor";
goto fail;
- } else if (hub->descriptor->bNbrPorts > USB_MAXCHILDREN) {
+ }
+
+ maxchild = USB_MAXCHILDREN;
+ if (hub_is_superspeed(hdev))
+ maxchild = min_t(unsigned, maxchild, USB_SS_MAXPORTS);
+
+ if (hub->descriptor->bNbrPorts > maxchild) {
message = "hub has too many ports!";
ret = -ENODEV;
goto fail;
--- a/include/uapi/linux/usb/ch11.h
+++ b/include/uapi/linux/usb/ch11.h
@@ -22,6 +22,9 @@
*/
#define USB_MAXCHILDREN 31
+/* See USB 3.1 spec Table 10-5 */
+#define USB_SS_MAXPORTS 15
+
/*
* Hub request types
*/
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Laura Abbott <[email protected]>
commit 861ce4a3244c21b0af64f880d5bfe5e6e2fb9e4a upstream.
'__vmalloc_start_set' currently only gets set in initmem_init() when
!CONFIG_NEED_MULTIPLE_NODES. This breaks detection of vmalloc addresses
via virt_addr_valid() when CONFIG_NEED_MULTIPLE_NODES=y, causing
a kernel crash:
[mm/usercopy] 517e1fbeb6: kernel BUG at arch/x86/mm/physaddr.c:78!
Set '__vmalloc_start_set' appropriately for that case as well.
Reported-by: kbuild test robot <[email protected]>
Signed-off-by: Laura Abbott <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Fixes: dc16ecf7fd1f ("x86-32: use specific __vmalloc_start_set flag in __virt_addr_valid")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/x86/mm/numa_32.c | 1 +
1 file changed, 1 insertion(+)
--- a/arch/x86/mm/numa_32.c
+++ b/arch/x86/mm/numa_32.c
@@ -100,5 +100,6 @@ void __init initmem_init(void)
printk(KERN_DEBUG "High memory starts at vaddr %08lx\n",
(ulong) pfn_to_kaddr(highstart_pfn));
+ __vmalloc_start_set = true;
setup_bootmem_allocator();
}
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Kleine-Budde <[email protected]>
commit 5cda3ee5138e91ac369ed9d0b55eab0dab077686 upstream.
This patch adds the missing kfree() in gs_cmd_reset() to free the
memory that is not used anymore after usb_control_msg().
Cc: Maximilian Schneider <[email protected]>
Signed-off-by: Marc Kleine-Budde <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/can/usb/gs_usb.c | 2 ++
1 file changed, 2 insertions(+)
--- a/drivers/net/can/usb/gs_usb.c
+++ b/drivers/net/can/usb/gs_usb.c
@@ -246,6 +246,8 @@ static int gs_cmd_reset(struct gs_usb *g
sizeof(*dm),
1000);
+ kfree(dm);
+
return rc;
}
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Heiko Carstens <[email protected]>
commit c34a69059d7876e0793eb410deedfb08ccb22b02 upstream.
The identity mapping is suboptimal for the last 2GB frame. The mapping
will be established with a mix of 4KB and 1MB mappings instead of a
single 2GB mapping.
This happens because of an off-by-one bug introduced with
commit 50be63450728 ("s390/mm: Convert bootmem to memblock").
Currently the identity mapping looks like this:
0x0000000080000000-0x0000000180000000 4G PUD RW
0x0000000180000000-0x00000001fff00000 2047M PMD RW
0x00000001fff00000-0x0000000200000000 1M PTE RW
With the bug fixed it looks like this:
0x0000000080000000-0x0000000200000000 6G PUD RW
Fixes: 50be63450728 ("s390/mm: Convert bootmem to memblock")
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Martin Schwidefsky <[email protected]>
Cc: Jean Delvare <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/s390/mm/vmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -378,7 +378,7 @@ void __init vmem_map_init(void)
ro_end = (unsigned long)&_eshared & PAGE_MASK;
for_each_memblock(memory, reg) {
start = reg->base;
- end = reg->base + reg->size - 1;
+ end = reg->base + reg->size;
if (start >= ro_end || end <= ro_start)
vmem_add_mem(start, end - start, 0);
else if (start >= ro_start && end <= ro_end)
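
To make the off-by-one concrete, here is a deliberately simplified
user-space model of the granularity decision (not the kernel code; assumes
a 64-bit build): with a half-open end (base + size) the whole block stays
2GB aligned, while the inclusive end (base + size - 1) loses that alignment
and forces a fall-back to smaller mappings:

#include <stdio.h>

/* Toy model: pick the largest granule (4K / 1M / 2G) that both the start
   and the end of the range are aligned to. */
static const char *granule(unsigned long start, unsigned long end)
{
	unsigned long g2 = 1UL << 31;	/* 2GB */
	unsigned long m1 = 1UL << 20;	/* 1MB */

	if (!(start & (g2 - 1)) && !(end & (g2 - 1)))
		return "2G";
	if (!(start & (m1 - 1)) && !(end & (m1 - 1)))
		return "1M";
	return "4K";
}

int main(void)
{
	unsigned long base = 0x80000000UL;	/* 2GB */
	unsigned long size = 0x180000000UL;	/* 6GB */

	printf("end = base + size     -> %s mappings\n", granule(base, base + size));
	printf("end = base + size - 1 -> %s mappings\n", granule(base, base + size - 1));
	return 0;
}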
On Mon, Jun 19, 2017 at 11:20:44PM +0800, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 3.18.58 release.
> There are 32 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Wed Jun 21 15:20:24 UTC 2017.
> Anything received after that time might be too late.
>
Build results:
total: 136 pass: 136 fail: 0
Qemu test results:
total: 111 pass: 111 fail: 0
Details are available at http://kerneltests.org/builders.
Guenter
On Mon, 19 Jun 2017, Greg Kroah-Hartman wrote:
> 3.18-stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Hugh Dickins <[email protected]>
>
> commit 1be7107fbe18eed3e319a6c3e83c78254b693acb upstream.
Here's a few adjustments to the 3.18 patch: no doubt you'll have
already sorted out any build errors (and I have to confess that
I haven't even tried to build this); and the VM_WARN_ON line (as
in 4.4) only fixes a highly unlikely error; but those FOLL_MLOCK
lines in mm/gup.c were mistaken, and do need to be deleted.
Hugh
diff -purN 318n/include/linux/mm.h 318h/include/linux/mm.h
--- 318n/include/linux/mm.h 2017-06-20 16:48:54.050429965 -0700
+++ 318h/include/linux/mm.h 2017-06-20 19:13:05.842191061 -0700
@@ -1242,6 +1242,9 @@ int set_page_dirty_lock(struct page *pag
int clear_page_dirty_for_io(struct page *page);
int get_cmdline(struct task_struct *task, char *buffer, int buflen);
+extern struct task_struct *task_of_stack(struct task_struct *task,
+ struct vm_area_struct *vma, bool in_group);
+
extern unsigned long move_page_tables(struct vm_area_struct *vma,
unsigned long old_addr, struct vm_area_struct *new_vma,
unsigned long new_addr, unsigned long len,
@@ -1897,8 +1900,9 @@ void page_cache_async_readahead(struct a
pgoff_t offset,
unsigned long size);
-extern unsigned long stack_guard_gap;
+unsigned long max_sane_readahead(unsigned long nr);
+extern unsigned long stack_guard_gap;
/* Generic expand stack which grows the stack according to GROWS{UP,DOWN} */
extern int expand_stack(struct vm_area_struct *vma, unsigned long address);
diff -purN 318n/mm/gup.c 318h/mm/gup.c
--- 318n/mm/gup.c 2017-06-20 16:48:54.054429927 -0700
+++ 318h/mm/gup.c 2017-06-20 19:18:19.579275331 -0700
@@ -275,9 +275,6 @@ static int faultin_page(struct task_stru
unsigned int fault_flags = 0;
int ret;
- /* mlock all present pages, but do not fault in new pages */
- if (*flags & FOLL_MLOCK)
- return -ENOENT;
if (*flags & FOLL_WRITE)
fault_flags |= FAULT_FLAG_WRITE;
if (nonblocking)
diff -purN 318n/mm/mmap.c 318h/mm/mmap.c
--- 318n/mm/mmap.c 2017-06-20 16:48:54.054429927 -0700
+++ 318h/mm/mmap.c 2017-06-20 19:43:16.945345744 -0700
@@ -931,7 +931,7 @@ again: remove_next = 1 + (end > next->
else if (next)
vma_gap_update(next);
else
- mm->highest_vm_end = end;
+ VM_WARN_ON(mm->highest_vm_end != vm_end_gap(vma));
}
if (insert && file)
uprobe_mmap(insert);
On Tue, Jun 20, 2017 at 10:49:16PM -0700, Hugh Dickins wrote:
> On Mon, 19 Jun 2017, Greg Kroah-Hartman wrote:
>
> > 3.18-stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Hugh Dickins <[email protected]>
> >
> > commit 1be7107fbe18eed3e319a6c3e83c78254b693acb upstream.
>
> Here's a few adjustments to the 3.18 patch: no doubt you'll have
> already sorted out any build errors (and I have to confess that
> I haven't even tried to build this); and the VM_WARN_ON line (as
> in 4.4) only fixes a highly unlikely error; but those FOLL_MLOCK
> lines in mm/gup.c were mistaken, and do need to be deleted.
Are you sure ? The test on the FOLL_MLOCK flag remains present in
4.11 and mainline :
/* mlock all present pages, but do not fault in new pages */
if ((*flags & (FOLL_POPULATE | FOLL_MLOCK)) == FOLL_MLOCK)
return -ENOENT;
And this test was present although different in 3.18 as well :
/* For mlock, just skip the stack guard page. */
if ((*flags & FOLL_MLOCK) &&
(stack_guard_page_start(vma, address) ||
stack_guard_page_end(vma, address + PAGE_SIZE)))
So by removing it we're totally removing any test on FOLL_MLOCK. That
might be the correct fix, but I'm just a bit surprised since the mainline
patch doesn't remove it, and only removes the test on FOLL_POPULATE.
Thanks,
Willy
On Wed, 21 Jun 2017, Willy Tarreau wrote:
> On Tue, Jun 20, 2017 at 10:49:16PM -0700, Hugh Dickins wrote:
> > On Mon, 19 Jun 2017, Greg Kroah-Hartman wrote:
> >
> > > 3.18-stable review patch. If anyone has any objections, please let me know.
> > >
> > > ------------------
> > >
> > > From: Hugh Dickins <[email protected]>
> > >
> > > commit 1be7107fbe18eed3e319a6c3e83c78254b693acb upstream.
> >
> > Here's a few adjustments to the 3.18 patch: no doubt you'll have
> > already sorted out any build errors (and I have to confess that
> > I haven't even tried to build this); and the VM_WARN_ON line (as
> > in 4.4) only fixes a highly unlikely error; but those FOLL_MLOCK
> > lines in mm/gup.c were mistaken, and do need to be deleted.
>
> Are you sure ? The test on the FOLL_MLOCK flag remains present in
> 4.11 and mainline :
>
> /* mlock all present pages, but do not fault in new pages */
> if ((*flags & (FOLL_POPULATE | FOLL_MLOCK)) == FOLL_MLOCK)
> return -ENOENT;
>
> And this test was present although different in 3.18 as well :
>
> /* For mlock, just skip the stack guard page. */
> if ((*flags & FOLL_MLOCK) &&
> (stack_guard_page_start(vma, address) ||
> stack_guard_page_end(vma, address + PAGE_SIZE)))
>
> So by removing it we're totally removing any test on FOLL_MLOCK. That
> might be the correct fix, but I'm just a bit surprised since the mainline
> patch doesn't remove it, and only removes the test on FOLL_POPULATE.
I think I'm sure :) Please take another look, the intention of those
two FOLL_MLOCK tests is completely different. One of them is about
the mlock2() syscall (I think), which wants not to fault in every
page at syscall time; and the other is about stack guard pages
bogusly included in the vma extents, which must not be faulted in.
The stack guard pages are no longer included in the vma extents,
so we can just delete those lines (note the "&&" in the condition);
but we don't want mlock() to stop faulting its pages in.
Makes sense now, or am I tired and confused?
Hugh
On Tue, Jun 20, 2017 at 11:10:56PM -0700, Hugh Dickins wrote:
> On Wed, 21 Jun 2017, Willy Tarreau wrote:
> > On Tue, Jun 20, 2017 at 10:49:16PM -0700, Hugh Dickins wrote:
> > > On Mon, 19 Jun 2017, Greg Kroah-Hartman wrote:
> > >
> > > > 3.18-stable review patch. If anyone has any objections, please let me know.
> > > >
> > > > ------------------
> > > >
> > > > From: Hugh Dickins <[email protected]>
> > > >
> > > > commit 1be7107fbe18eed3e319a6c3e83c78254b693acb upstream.
> > >
> > > Here's a few adjustments to the 3.18 patch: no doubt you'll have
> > > already sorted out any build errors (and I have to confess that
> > > I haven't even tried to build this); and the VM_WARN_ON line (as
> > > in 4.4) only fixes a highly unlikely error; but those FOLL_MLOCK
> > > lines in mm/gup.c were mistaken, and do need to be deleted.
> >
> > Are you sure ? The test on the FOLL_MLOCK flag remains present in
> > 4.11 and mainline :
> >
> > /* mlock all present pages, but do not fault in new pages */
> > if ((*flags & (FOLL_POPULATE | FOLL_MLOCK)) == FOLL_MLOCK)
> > return -ENOENT;
> >
> > And this test was present although different in 3.18 as well :
> >
> > /* For mlock, just skip the stack guard page. */
> > if ((*flags & FOLL_MLOCK) &&
> > (stack_guard_page_start(vma, address) ||
> > stack_guard_page_end(vma, address + PAGE_SIZE)))
> >
> > So by removing it we're totally removing any test on FOLL_MLOCK. That
> > might be the correct fix, but I'm just a bit surprised since the mainline
> > patch doesn't remove it, and only removes the test on FOLL_POPULATE.
>
> I think I'm sure :) Please take another look, the intention of those
> two FOLL_MLOCK tests is completely different. One of them is about
> the mlock2() syscall (I think), which wants not to fault in every
> page at syscall time; and the other is about stack guard pages
> bogusly included in the vma extents, which must not be faulted in.
> The stack guard pages are no longer included in the vma extents,
> so we can just delete those lines (note the "&&" in the condition);
> but we don't want mlock() to stop faulting its pages in.
>
> Makes sense now, or am I tired and confused?
Your explanation sounds pretty fine to me, so I agree with you :-)
Thanks!
Willy
On Tue, Jun 20, 2017 at 10:49:16PM -0700, Hugh Dickins wrote:
> On Mon, 19 Jun 2017, Greg Kroah-Hartman wrote:
>
> > 3.18-stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Hugh Dickins <[email protected]>
> >
> > commit 1be7107fbe18eed3e319a6c3e83c78254b693acb upstream.
>
> Here's a few adjustments to the 3.18 patch: no doubt you'll have
> already sorted out any build errors (and I have to confess that
> I haven't even tried to build this); and the VM_WARN_ON line (as
> in 4.4) only fixes a highly unlikely error; but those FOLL_MLOCK
> lines in mm/gup.c were mistaken, and do need to be deleted.
Again, many thanks for this, I've now updated my tree with this and will
let it run through 0-day.
greg k-h