From: Hector Martin <[email protected]>
[ Upstream commit 188775181bc05f29372b305ef96485840e351fde ]
At least some JMicron controllers issue buggy oversized DMA reads when
fetching context descriptors, always fetching 0x20 bytes at once for
descriptors which are only 0x10 bytes long. This is often harmless, but
can cause page faults on modern systems with IOMMUs:
DMAR: [DMA Read] Request device [05:00.0] fault addr fff56000 [fault reason 06] PTE Read access is not set
firewire_ohci 0000:05:00.0: DMA context IT0 has stopped, error code: evt_descriptor_read
This works around the problem by always leaving 0x10 padding bytes at
the end of descriptor buffer pages, which should be harmless to do
unconditionally for all controllers, in case others have the same behavior.
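To illustrate the arithmetic (a hedged userspace sketch, not driver code;
the offset value is an assumed example), the reserved 0x10 bytes guarantee
that even an oversized 0x20-byte fetch of the last 0x10-byte descriptor in
the page ends at the page boundary at the latest:
	#include <assert.h>
	#include <stddef.h>
	#include <stdio.h>

	#define EXAMPLE_PAGE_SIZE 4096

	int main(void)
	{
		size_t offset = 16;	/* assumed offset of desc->buffer in the page */
		size_t buffer_size = EXAMPLE_PAGE_SIZE - offset - 0x10;
		/* start of the last 0x10-byte descriptor that fits in the buffer */
		size_t last_desc = offset + buffer_size - 0x10;

		/* a buggy 0x20-byte read of that descriptor stays in the page;
		 * without the 0x10-byte reserve it would end 0x10 bytes past it */
		assert(last_desc + 0x20 <= EXAMPLE_PAGE_SIZE);
		printf("last descriptor at %#zx, oversized read ends at %#zx\n",
		       last_desc, last_desc + 0x20);
		return 0;
	}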
Signed-off-by: Hector Martin <[email protected]>
Reviewed-by: Clemens Ladisch <[email protected]>
Signed-off-by: Stefan Richter <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/firewire/ohci.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/firewire/ohci.c b/drivers/firewire/ohci.c
index 8bf89267dc25..d731b413cb2c 100644
--- a/drivers/firewire/ohci.c
+++ b/drivers/firewire/ohci.c
@@ -1130,7 +1130,13 @@ static int context_add_buffer(struct context *ctx)
return -ENOMEM;
offset = (void *)&desc->buffer - (void *)desc;
- desc->buffer_size = PAGE_SIZE - offset;
+ /*
+ * Some controllers, like JMicron ones, always issue 0x20-byte DMA reads
+ * for descriptors, even 0x10-byte ones. This can cause page faults when
+ * an IOMMU is in use and the oversized read crosses a page boundary.
+ * Work around this by always leaving at least 0x10 bytes of padding.
+ */
+ desc->buffer_size = PAGE_SIZE - offset - 0x10;
desc->buffer_bus = bus_addr + offset;
desc->used = 0;
--
2.15.1
From: Trond Myklebust <[email protected]>
[ Upstream commit 0afa6b4412988019db14c6bfb8c6cbdf120ca9ad ]
Calling __UDPX_INC_STATS() from a preemptible context leads to a
warning of the form:
BUG: using __this_cpu_add() in preemptible [00000000] code: kworker/u5:0/31
caller is xs_udp_data_receive_workfn+0x194/0x270
CPU: 1 PID: 31 Comm: kworker/u5:0 Not tainted 4.15.0-rc8-00076-g90ea9f1 #2
Workqueue: xprtiod xs_udp_data_receive_workfn
Call Trace:
dump_stack+0x85/0xc1
check_preemption_disabled+0xce/0xe0
xs_udp_data_receive_workfn+0x194/0x270
process_one_work+0x318/0x620
worker_thread+0x20a/0x390
? process_one_work+0x620/0x620
kthread+0x120/0x130
? __kthread_bind_mask+0x60/0x60
ret_from_fork+0x24/0x30
Since we're taking a spinlock in those functions anyway, let's fix the
issue by moving the call so that it occurs under the spinlock.
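A hedged, self-contained sketch of the underlying rule (the names
example_counter/example_count are illustrative, not from the patch):
	#include <linux/percpu.h>
	#include <linux/preempt.h>

	static DEFINE_PER_CPU(unsigned long, example_counter);

	static void example_count(void)
	{
		/* __this_cpu_inc() assumes preemption is already disabled ... */
		preempt_disable();
		__this_cpu_inc(example_counter);
		preempt_enable();

		/* ... while this_cpu_inc() handles that itself. A held spinlock
		 * also disables preemption (on non-PREEMPT_RT kernels), which is
		 * why moving __UDPX_INC_STATS() under xprt->recv_lock silences
		 * the check_preemption_disabled() splat above. */
		this_cpu_inc(example_counter);
	}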
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
net/sunrpc/xprtsock.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 8cb40f8ffa5b..30192abfdc3b 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1069,18 +1069,18 @@ static void xs_udp_data_read_skb(struct rpc_xprt *xprt,
/* Suck it into the iovec, verify checksum if not done by hw. */
if (csum_partial_copy_to_xdr(&rovr->rq_private_buf, skb)) {
- __UDPX_INC_STATS(sk, UDP_MIB_INERRORS);
spin_lock(&xprt->recv_lock);
+ __UDPX_INC_STATS(sk, UDP_MIB_INERRORS);
goto out_unpin;
}
- __UDPX_INC_STATS(sk, UDP_MIB_INDATAGRAMS);
spin_lock_bh(&xprt->transport_lock);
xprt_adjust_cwnd(xprt, task, copied);
spin_unlock_bh(&xprt->transport_lock);
spin_lock(&xprt->recv_lock);
xprt_complete_rqst(task, copied);
+ __UDPX_INC_STATS(sk, UDP_MIB_INDATAGRAMS);
out_unpin:
xprt_unpin_rqst(rovr);
out_unlock:
--
2.15.1
From: Jesper Dangaard Brouer <[email protected]>
[ Upstream commit e3d91b0ca523d53158f435a3e13df7f0cb360ea2 ]
V3: More generic skipping of relo-section (suggested by Daniel)
If clang >= 4.0.1 is invoked without the option '-target bpf', it causes
llc/llvm to create two extra ELF sections for "Exception Frames", with
section names '.eh_frame' and '.rel.eh_frame'.
The BPF ELF loader library libbpf fails when loading files with these
sections. The other in-tree BPF ELF loader in samples/bpf/bpf_load.c
handles them gracefully, and the iproute2 loader also seems to work with
these "eh" sections.
The issue in libbpf is caused by bpf_object__elf_collect() skipping
some sections; later, when performing relocation, a relocation entry can
point to one of those skipped sections, which cannot be found by
bpf_object__find_prog_by_idx() in bpf_object__collect_reloc().
This is a general issue that also occurs for other sections, like debug
sections, which are also skipped and can have a relocation section.
As suggested by Daniel, to avoid keeping state about all skipped
sections, instead perform a direct lookup in the ELF object: look up
the section that the relocation section points to and check whether it
contains executable machine instructions (denoted by the sh_flags bit
SHF_EXECINSTR). Use this check to also skip irrelevant relocation sections.
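For illustration, a hedged standalone sketch of the same check using
libelf (compile with -lelf; the command-line handling is an assumption,
not code from libbpf):
	#include <elf.h>
	#include <fcntl.h>
	#include <gelf.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		Elf_Scn *scn = NULL;
		size_t shstrndx;
		Elf *elf;
		int fd;

		if (argc != 2 || elf_version(EV_CURRENT) == EV_NONE)
			return 1;

		fd = open(argv[1], O_RDONLY);
		if (fd < 0)
			return 1;
		elf = elf_begin(fd, ELF_C_READ, NULL);
		if (!elf || elf_getshdrstrndx(elf, &shstrndx))
			return 1;

		/* Walk every section and report whether it holds executable
		 * code; only relocation sections whose target reports exec=1
		 * are interesting to a BPF loader. */
		while ((scn = elf_nextscn(elf, scn)) != NULL) {
			GElf_Shdr sh;

			if (gelf_getshdr(scn, &sh) != &sh)
				continue;
			printf("%-30s exec=%d\n",
			       elf_strptr(elf, shstrndx, sh.sh_name),
			       !!(sh.sh_flags & SHF_EXECINSTR));
		}

		elf_end(elf);
		close(fd);
		return 0;
	}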
Note, for samples/bpf/ the '-target bpf' parameter to clang cannot be used
due to an incompatibility with the asm-embedded headers that some of the
samples include. This is explained in more detail by Yonghong Song in
bpf_devel_QA.
Signed-off-by: Jesper Dangaard Brouer <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
tools/lib/bpf/libbpf.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 35f6dfcdc565..701d29c8364f 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -661,6 +661,24 @@ bpf_object__init_maps(struct bpf_object *obj)
return bpf_object__validate_maps(obj);
}
+static bool section_have_execinstr(struct bpf_object *obj, int idx)
+{
+ Elf_Scn *scn;
+ GElf_Shdr sh;
+
+ scn = elf_getscn(obj->efile.elf, idx);
+ if (!scn)
+ return false;
+
+ if (gelf_getshdr(scn, &sh) != &sh)
+ return false;
+
+ if (sh.sh_flags & SHF_EXECINSTR)
+ return true;
+
+ return false;
+}
+
static int bpf_object__elf_collect(struct bpf_object *obj)
{
Elf *elf = obj->efile.elf;
@@ -742,6 +760,14 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
} else if (sh.sh_type == SHT_REL) {
void *reloc = obj->efile.reloc;
int nr_reloc = obj->efile.nr_reloc + 1;
+ int sec = sh.sh_info; /* points to other section */
+
+ /* Only do relo for section with exec instructions */
+ if (!section_have_execinstr(obj, sec)) {
+ pr_debug("skip relo %s(%d) for section(%d)\n",
+ name, idx, sec);
+ continue;
+ }
reloc = realloc(reloc,
sizeof(*obj->efile.reloc) * nr_reloc);
--
2.15.1
From: Nicholas Piggin <[email protected]>
[ Upstream commit e7bde88cdb4f0e432398a7d29ca2a15d2c18952a ]
The OPAL IMC driver's shutdown handler disables nest PMU counters by
walking nodes and taking the first CPU out of their cpumask, which is
used to index into the paca (get_hard_smp_processor_id()). This does
not always do the right thing, and in particular for CPU-less nodes it
returns NR_CPUS and that overruns the paca and dereferences random
memory.
Fix it by being more careful about checking the returned CPU, and by only
using online CPUs. It's not clear this shutdown code makes sense after
commit 885dcd709b ("powerpc/perf: Add nest IMC PMU support"), but this
should not make things worse.
Currently the bug causes us to call OPAL with a junk CPU number. A
separate patch in development to change the way pacas are allocated
escalates this bug into a crash:
Unable to handle kernel paging request for data at address 0x2a21af1eeb000076
Faulting instruction address: 0xc0000000000a5468
Oops: Kernel access of bad area, sig: 11 [#1]
...
NIP opal_imc_counters_shutdown+0x148/0x1d0
LR opal_imc_counters_shutdown+0x134/0x1d0
Call Trace:
opal_imc_counters_shutdown+0x134/0x1d0 (unreliable)
platform_drv_shutdown+0x44/0x60
device_shutdown+0x1f8/0x350
kernel_restart_prepare+0x54/0x70
kernel_restart+0x28/0xc0
SyS_reboot+0x1d0/0x2c0
system_call+0x58/0x6c
Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/powerpc/platforms/powernv/opal-imc.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/platforms/powernv/opal-imc.c b/arch/powerpc/platforms/powernv/opal-imc.c
index b150f4deaccf..6914b289c86b 100644
--- a/arch/powerpc/platforms/powernv/opal-imc.c
+++ b/arch/powerpc/platforms/powernv/opal-imc.c
@@ -126,9 +126,11 @@ static void disable_nest_pmu_counters(void)
const struct cpumask *l_cpumask;
get_online_cpus();
- for_each_online_node(nid) {
+ for_each_node_with_cpus(nid) {
l_cpumask = cpumask_of_node(nid);
- cpu = cpumask_first(l_cpumask);
+ cpu = cpumask_first_and(l_cpumask, cpu_online_mask);
+ if (cpu >= nr_cpu_ids)
+ continue;
opal_imc_counters_stop(OPAL_IMC_COUNTERS_NEST,
get_hard_smp_processor_id(cpu));
}
--
2.15.1
From: Peter Hutterer <[email protected]>
[ Upstream commit 19eb4ed1141bd1096b9bc84ba9c4d03d5830c143 ]
input_mt_init_slots() resets the ABS_X/Y fuzz to 0 and expects the driver
to call input_mt_report_pointer_emulation(). That is based on the MT
position bits which are already defuzzed - hence a fuzz of 0.
In the case of synaptics semi-mt devices, we report the ABS_X/Y axes
manually. This results in the MT position being defuzzed but the
single-touch emulation missing that defuzzing.
Work around this by re-initializing the ABS_X/Y axes after the MT axes
have been set up, to get the same fuzz value back.
https://bugs.freedesktop.org/show_bug.cgi?id=104533
Signed-off-by: Peter Hutterer <[email protected]>
Signed-off-by: Dmitry Torokhov <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/input/mouse/synaptics.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
index ee5466a374bf..a246fc686bb7 100644
--- a/drivers/input/mouse/synaptics.c
+++ b/drivers/input/mouse/synaptics.c
@@ -1280,6 +1280,16 @@ static void set_input_params(struct psmouse *psmouse,
INPUT_MT_POINTER |
(cr48_profile_sensor ?
INPUT_MT_TRACK : INPUT_MT_SEMI_MT));
+
+ /*
+ * For semi-mt devices we send ABS_X/Y ourselves instead of
+ * input_mt_report_pointer_emulation. But
+ * input_mt_init_slots() resets the fuzz to 0, leading to a
+ * filtered ABS_MT_POSITION_X but an unfiltered ABS_X
+ * position. Let's re-initialize ABS_X/Y here.
+ */
+ if (!cr48_profile_sensor)
+ set_abs_position_params(dev, &priv->info, ABS_X, ABS_Y);
}
if (SYN_CAP_PALMDETECT(info->capabilities))
--
2.15.1
From: Mustafa Ismail <[email protected]>
[ Upstream commit f20d429511affab6a2a9129f46042f43e6ffe396 ]
The iWARP Exception Queue (IEQ) resources are not freed when a QP is
destroyed. Fix this by freeing IEQ resources when freeing QP resources.
Fixes: d37498417947 ("i40iw: add files for iwarp interface")
Signed-off-by: Mustafa Ismail <[email protected]>
Signed-off-by: Shiraz Saleem <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/infiniband/hw/i40iw/i40iw_puda.c | 3 +--
drivers/infiniband/hw/i40iw/i40iw_puda.h | 1 +
drivers/infiniband/hw/i40iw/i40iw_verbs.c | 1 +
3 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/hw/i40iw/i40iw_puda.c b/drivers/infiniband/hw/i40iw/i40iw_puda.c
index 59f70676f0e0..74c7f6879084 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_puda.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_puda.c
@@ -48,7 +48,6 @@ static void i40iw_ieq_tx_compl(struct i40iw_sc_vsi *vsi, void *sqwrid);
static void i40iw_ilq_putback_rcvbuf(struct i40iw_sc_qp *qp, u32 wqe_idx);
static enum i40iw_status_code i40iw_puda_replenish_rq(struct i40iw_puda_rsrc
*rsrc, bool initial);
-static void i40iw_ieq_cleanup_qp(struct i40iw_puda_rsrc *ieq, struct i40iw_sc_qp *qp);
/**
* i40iw_puda_get_listbuf - get buffer from puda list
* @list: list to use for buffers (ILQ or IEQ)
@@ -1480,7 +1479,7 @@ static void i40iw_ieq_tx_compl(struct i40iw_sc_vsi *vsi, void *sqwrid)
* @ieq: ieq resource
* @qp: all pending fpdu buffers
*/
-static void i40iw_ieq_cleanup_qp(struct i40iw_puda_rsrc *ieq, struct i40iw_sc_qp *qp)
+void i40iw_ieq_cleanup_qp(struct i40iw_puda_rsrc *ieq, struct i40iw_sc_qp *qp)
{
struct i40iw_puda_buf *buf;
struct i40iw_pfpdu *pfpdu = &qp->pfpdu;
diff --git a/drivers/infiniband/hw/i40iw/i40iw_puda.h b/drivers/infiniband/hw/i40iw/i40iw_puda.h
index dba05ce7d392..ebe37f157d90 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_puda.h
+++ b/drivers/infiniband/hw/i40iw/i40iw_puda.h
@@ -186,4 +186,5 @@ enum i40iw_status_code i40iw_cqp_qp_create_cmd(struct i40iw_sc_dev *dev, struct
enum i40iw_status_code i40iw_cqp_cq_create_cmd(struct i40iw_sc_dev *dev, struct i40iw_sc_cq *cq);
void i40iw_cqp_qp_destroy_cmd(struct i40iw_sc_dev *dev, struct i40iw_sc_qp *qp);
void i40iw_cqp_cq_destroy_cmd(struct i40iw_sc_dev *dev, struct i40iw_sc_cq *cq);
+void i40iw_ieq_cleanup_qp(struct i40iw_puda_rsrc *ieq, struct i40iw_sc_qp *qp);
#endif
diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
index 62be0a41ad0b..9e7ae7161d2f 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
@@ -428,6 +428,7 @@ void i40iw_free_qp_resources(struct i40iw_device *iwdev,
{
struct i40iw_pbl *iwpbl = &iwqp->iwpbl;
+ i40iw_ieq_cleanup_qp(iwdev->vsi.ieq, &iwqp->sc_qp);
i40iw_dealloc_push_page(iwdev, &iwqp->sc_qp);
if (qp_num)
i40iw_free_resource(iwdev, iwdev->allocated_qps, qp_num);
--
2.15.1
From: Geert Uytterhoeven <[email protected]>
[ Upstream commit c877154d307f4a91e0b5b85b75535713dab945ae ]
fs/ubifs/tnc.c: In function ‘search_dh_cookie’:
fs/ubifs/tnc.c:1893: warning: ‘err’ is used uninitialized in this function
Indeed, err is always used uninitialized.
According to an original review comment from Hyunchul, acknowledged by
Richard, err should be initialized to -ENOENT to avoid the first call to
tnc_next(). But we can achieve the same by reordering the code.
Fixes: 781f675e2d7e ("ubifs: Fix unlink code wrt. double hash lookups")
Reported-by: Hyunchul Lee <[email protected]>
Signed-off-by: Geert Uytterhoeven <[email protected]>
Signed-off-by: Richard Weinberger <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/ubifs/tnc.c | 21 +++++++--------------
1 file changed, 7 insertions(+), 14 deletions(-)
diff --git a/fs/ubifs/tnc.c b/fs/ubifs/tnc.c
index 0a213dcba2a1..ba3d0e0f8615 100644
--- a/fs/ubifs/tnc.c
+++ b/fs/ubifs/tnc.c
@@ -1890,35 +1890,28 @@ static int search_dh_cookie(struct ubifs_info *c, const union ubifs_key *key,
union ubifs_key *dkey;
for (;;) {
- if (!err) {
- err = tnc_next(c, &znode, n);
- if (err)
- goto out;
- }
-
zbr = &znode->zbranch[*n];
dkey = &zbr->key;
if (key_inum(c, dkey) != key_inum(c, key) ||
key_type(c, dkey) != key_type(c, key)) {
- err = -ENOENT;
- goto out;
+ return -ENOENT;
}
err = tnc_read_hashed_node(c, zbr, dent);
if (err)
- goto out;
+ return err;
if (key_hash(c, key) == key_hash(c, dkey) &&
le32_to_cpu(dent->cookie) == cookie) {
*zn = znode;
- goto out;
+ return 0;
}
- }
-
-out:
- return err;
+ err = tnc_next(c, &znode, n);
+ if (err)
+ return err;
+ }
}
static int do_lookup_dh(struct ubifs_info *c, const union ubifs_key *key,
--
2.15.1
From: Xose Vazquez Perez <[email protected]>
[ Upstream commit 3f884a0a8bdf28cfd1e9987d54d83350096cdd46 ]
Replace "" with NULL for product revision level, and merge TEXEL
duplicate entries.
Cc: Hannes Reinecke <[email protected]>
Cc: Martin K. Petersen <[email protected]>
Cc: James E.J. Bottomley <[email protected]>
Cc: SCSI ML <[email protected]>
Signed-off-by: Xose Vazquez Perez <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/scsi/scsi_devinfo.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/scsi/scsi_devinfo.c b/drivers/scsi/scsi_devinfo.c
index cfc095f45e26..ea947a7c2596 100644
--- a/drivers/scsi/scsi_devinfo.c
+++ b/drivers/scsi/scsi_devinfo.c
@@ -109,8 +109,8 @@ static struct {
* seagate controller, which causes SCSI code to reset bus.
*/
{"HP", "C1750A", "3226", BLIST_NOLUN}, /* scanjet iic */
- {"HP", "C1790A", "", BLIST_NOLUN}, /* scanjet iip */
- {"HP", "C2500A", "", BLIST_NOLUN}, /* scanjet iicx */
+ {"HP", "C1790A", NULL, BLIST_NOLUN}, /* scanjet iip */
+ {"HP", "C2500A", NULL, BLIST_NOLUN}, /* scanjet iicx */
{"MEDIAVIS", "CDR-H93MV", "1.31", BLIST_NOLUN}, /* locks up */
{"MICROTEK", "ScanMaker II", "5.61", BLIST_NOLUN}, /* responds to all lun */
{"MITSUMI", "CD-R CR-2201CS", "6119", BLIST_NOLUN}, /* locks up */
@@ -120,7 +120,7 @@ static struct {
{"QUANTUM", "FIREBALL ST4.3S", "0F0C", BLIST_NOLUN}, /* locks up */
{"RELISYS", "Scorpio", NULL, BLIST_NOLUN}, /* responds to all lun */
{"SANKYO", "CP525", "6.64", BLIST_NOLUN}, /* causes failed REQ SENSE, extra reset */
- {"TEXEL", "CD-ROM", "1.06", BLIST_NOLUN},
+ {"TEXEL", "CD-ROM", "1.06", BLIST_NOLUN | BLIST_BORKEN},
{"transtec", "T5008", "0001", BLIST_NOREPORTLUN },
{"YAMAHA", "CDR100", "1.00", BLIST_NOLUN}, /* locks up */
{"YAMAHA", "CDR102", "1.00", BLIST_NOLUN}, /* locks up */
@@ -255,7 +255,6 @@ static struct {
{"ST650211", "CF", NULL, BLIST_RETRY_HWERROR},
{"SUN", "T300", "*", BLIST_SPARSELUN},
{"SUN", "T4", "*", BLIST_SPARSELUN},
- {"TEXEL", "CD-ROM", "1.06", BLIST_BORKEN},
{"Tornado-", "F4", "*", BLIST_NOREPORTLUN},
{"TOSHIBA", "CDROM", NULL, BLIST_ISROM},
{"TOSHIBA", "CD-ROM", NULL, BLIST_ISROM},
--
2.15.1
From: "weiyongjun (A)" <[email protected]>
[ Upstream commit 0ddcff49b672239dda94d70d0fcf50317a9f4b51 ]
'hwname' is allocated in hwsim_new_radio_nl() and should be freed
before leaving from the error handling cases; otherwise it will cause a
memory leak.
Fixes: ff4dd73dd2b4 ("mac80211_hwsim: check HWSIM_ATTR_RADIO_NAME length")
Signed-off-by: Wei Yongjun <[email protected]>
Reviewed-by: Ben Hutchings <[email protected]>
Signed-off-by: Johannes Berg <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/net/wireless/mac80211_hwsim.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
index d148dbf3beeb..977fd7aa2082 100644
--- a/drivers/net/wireless/mac80211_hwsim.c
+++ b/drivers/net/wireless/mac80211_hwsim.c
@@ -3153,8 +3153,10 @@ static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info)
if (info->attrs[HWSIM_ATTR_REG_CUSTOM_REG]) {
u32 idx = nla_get_u32(info->attrs[HWSIM_ATTR_REG_CUSTOM_REG]);
- if (idx >= ARRAY_SIZE(hwsim_world_regdom_custom))
+ if (idx >= ARRAY_SIZE(hwsim_world_regdom_custom)) {
+ kfree(hwname);
return -EINVAL;
+ }
param.regd = hwsim_world_regdom_custom[idx];
}
--
2.15.1
From: Jason Gunthorpe <[email protected]>
[ Upstream commit 3624a8f02568f08aef299d3b117f2226f621177d ]
Returning EOPNOTSUPP is problematic because it can also be
returned by the method function, and we use it in quite a few
places in drivers these days.
Instead, dedicate EPROTONOSUPPORT to indicate that the ioctl framework
is enabled but the requested object and method are not supported by
the kernel. No other case will return this code, and it lets userspace
know to fall back to write().
grep says we do not use it today in the drivers/infiniband subsystem.
Signed-off-by: Jason Gunthorpe <[email protected]>
Reviewed-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/infiniband/core/uverbs_ioctl.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/drivers/infiniband/core/uverbs_ioctl.c b/drivers/infiniband/core/uverbs_ioctl.c
index 5286ad57d903..8f2dc79ad4ec 100644
--- a/drivers/infiniband/core/uverbs_ioctl.c
+++ b/drivers/infiniband/core/uverbs_ioctl.c
@@ -245,16 +245,13 @@ static long ib_uverbs_cmd_verbs(struct ib_device *ib_dev,
uintptr_t data[UVERBS_OPTIMIZE_USING_STACK_SZ / sizeof(uintptr_t)];
#endif
- if (hdr->reserved)
- return -EINVAL;
-
object_spec = uverbs_get_object(ib_dev, hdr->object_id);
if (!object_spec)
- return -EOPNOTSUPP;
+ return -EPROTONOSUPPORT;
method_spec = uverbs_get_method(object_spec, hdr->method_id);
if (!method_spec)
- return -EOPNOTSUPP;
+ return -EPROTONOSUPPORT;
if ((method_spec->flags & UVERBS_ACTION_FLAG_CREATE_ROOT) ^ !file->ucontext)
return -EINVAL;
@@ -310,6 +307,16 @@ static long ib_uverbs_cmd_verbs(struct ib_device *ib_dev,
err = uverbs_handle_method(buf, ctx->uattrs, hdr->num_attrs, ib_dev,
file, method_spec, ctx->uverbs_attr_bundle);
+
+ /*
+ * EPROTONOSUPPORT is ONLY to be returned if the ioctl framework can
+ * not invoke the method because the request is not supported. No
+ * other cases should return this code.
+ */
+ if (unlikely(err == -EPROTONOSUPPORT)) {
+ WARN_ON_ONCE(err == -EPROTONOSUPPORT);
+ err = -EINVAL;
+ }
out:
#ifdef UVERBS_OPTIMIZE_USING_STACK_SZ
if (ctx_size > UVERBS_OPTIMIZE_USING_STACK_SZ)
@@ -348,7 +355,7 @@ long ib_uverbs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
}
if (hdr.reserved) {
- err = -EOPNOTSUPP;
+ err = -EPROTONOSUPPORT;
goto out;
}
--
2.15.1
From: Aaron Sierra <[email protected]>
[ Upstream commit c7e1b4059075c9e8eed101d7cc5da43e95eb5e18 ]
Exar sleep wake-up handling has been done on a per-channel basis by
virtue of INT0 being accessible from each channel's address space. I
believe this was initially done out of necessity, but now that Exar
devices have their own driver, we can do things more efficiently by
registering a dedicated INT0 handler at the PCI device level.
I see this change providing the following benefits:
1. If more than one port is active, it eliminates the redundant bus
cycles spent reading INT0 on every interrupt.
2. This note associated with hooking in the per-channel handler in
8250_port.c is resolved:
/* Fixme: probably not the best place for this */
Cc: Matt Schulte <[email protected]>
Signed-off-by: Aaron Sierra <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/tty/serial/8250/8250_exar.c | 34 ++++++++++++++++++++++++++++++----
drivers/tty/serial/8250/8250_port.c | 26 --------------------------
2 files changed, 30 insertions(+), 30 deletions(-)
diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
index c55624703fdf..e0aa5f03004c 100644
--- a/drivers/tty/serial/8250/8250_exar.c
+++ b/drivers/tty/serial/8250/8250_exar.c
@@ -37,6 +37,7 @@
#define PCI_DEVICE_ID_EXAR_XR17V4358 0x4358
#define PCI_DEVICE_ID_EXAR_XR17V8358 0x8358
+#define UART_EXAR_INT0 0x80
#define UART_EXAR_8XMODE 0x88 /* 8X sampling rate select */
#define UART_EXAR_FCTR 0x08 /* Feature Control Register */
@@ -124,6 +125,7 @@ struct exar8250_board {
struct exar8250 {
unsigned int nr;
struct exar8250_board *board;
+ void __iomem *virt;
int line[0];
};
@@ -134,12 +136,9 @@ static int default_setup(struct exar8250 *priv, struct pci_dev *pcidev,
const struct exar8250_board *board = priv->board;
unsigned int bar = 0;
- if (!pcim_iomap_table(pcidev)[bar] && !pcim_iomap(pcidev, bar, 0))
- return -ENOMEM;
-
port->port.iotype = UPIO_MEM;
port->port.mapbase = pci_resource_start(pcidev, bar) + offset;
- port->port.membase = pcim_iomap_table(pcidev)[bar] + offset;
+ port->port.membase = priv->virt + offset;
port->port.regshift = board->reg_shift;
return 0;
@@ -423,6 +422,25 @@ static void pci_xr17v35x_exit(struct pci_dev *pcidev)
port->port.private_data = NULL;
}
+/*
+ * These Exar UARTs have an extra interrupt indicator that could fire for a
+ * few interrupts that are not presented/cleared through IIR. One of which is
+ * a wakeup interrupt when coming out of sleep. These interrupts are only
+ * cleared by reading global INT0 or INT1 registers as interrupts are
+ * associated with channel 0. The INT[3:0] registers _are_ accessible from each
+ * channel's address space, but for the sake of bus efficiency we register a
+ * dedicated handler at the PCI device level to handle them.
+ */
+static irqreturn_t exar_misc_handler(int irq, void *data)
+{
+ struct exar8250 *priv = data;
+
+ /* Clear all PCI interrupts by reading INT0. No effect on IIR */
+ ioread8(priv->virt + UART_EXAR_INT0);
+
+ return IRQ_HANDLED;
+}
+
static int
exar_pci_probe(struct pci_dev *pcidev, const struct pci_device_id *ent)
{
@@ -451,6 +469,9 @@ exar_pci_probe(struct pci_dev *pcidev, const struct pci_device_id *ent)
return -ENOMEM;
priv->board = board;
+ priv->virt = pcim_iomap(pcidev, bar, 0);
+ if (!priv->virt)
+ return -ENOMEM;
pci_set_master(pcidev);
@@ -464,6 +485,11 @@ exar_pci_probe(struct pci_dev *pcidev, const struct pci_device_id *ent)
uart.port.irq = pci_irq_vector(pcidev, 0);
uart.port.dev = &pcidev->dev;
+ rc = devm_request_irq(&pcidev->dev, uart.port.irq, exar_misc_handler,
+ IRQF_SHARED, "exar_uart", priv);
+ if (rc)
+ return rc;
+
for (i = 0; i < nr_ports && i < maxnr; i++) {
rc = board->setup(priv, pcidev, &uart, i);
if (rc) {
diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
index 8dcfd4978a03..e9d4420869dd 100644
--- a/drivers/tty/serial/8250/8250_port.c
+++ b/drivers/tty/serial/8250/8250_port.c
@@ -445,7 +445,6 @@ static void io_serial_out(struct uart_port *p, int offset, int value)
}
static int serial8250_default_handle_irq(struct uart_port *port);
-static int exar_handle_irq(struct uart_port *port);
static void set_io_from_upio(struct uart_port *p)
{
@@ -1890,26 +1889,6 @@ static int serial8250_default_handle_irq(struct uart_port *port)
return ret;
}
-/*
- * These Exar UARTs have an extra interrupt indicator that could
- * fire for a few unimplemented interrupts. One of which is a
- * wakeup event when coming out of sleep. Put this here just
- * to be on the safe side that these interrupts don't go unhandled.
- */
-static int exar_handle_irq(struct uart_port *port)
-{
- unsigned int iir = serial_port_in(port, UART_IIR);
- int ret = 0;
-
- if (((port->type == PORT_XR17V35X) || (port->type == PORT_XR17D15X)) &&
- serial_port_in(port, UART_EXAR_INT0) != 0)
- ret = 1;
-
- ret |= serial8250_handle_irq(port, iir);
-
- return ret;
-}
-
/*
* Newer 16550 compatible parts such as the SC16C650 & Altera 16550 Soft IP
* have a programmable TX threshold that triggers the THRE interrupt in
@@ -3074,11 +3053,6 @@ static void serial8250_config_port(struct uart_port *port, int flags)
if (port->type == PORT_UNKNOWN)
serial8250_release_std_resource(up);
- /* Fixme: probably not the best place for this */
- if ((port->type == PORT_XR17V35X) ||
- (port->type == PORT_XR17D15X))
- port->handle_irq = exar_handle_irq;
-
register_dev_spec_attr_grp(up);
up->fcr = uart_config[up->port.type].fcr;
}
--
2.15.1
From: Alex Estrin <[email protected]>
[ Upstream commit 1029361084d18cc270f64dfd39529fafa10cfe01 ]
On reboot, the SM can program the port pkey table before ipoib has
registered its event handler, which could result in a missed pkey event
and leave the root interface with the initial pkey value from index 0.
Since an OPA port starts with an invalid pkey in index 0, the root
interface will fail to initialize and stay down with the no-carrier flag.
For IB, the ipoib interface may end up with a pkey different from the
value opensm put in pkey table index 0, resulting in connectivity issues
(different mcast groups, for example).
Close the window by calling the event handler after registration
to make sure the ipoib pkey is in sync with the port pkey table.
Reviewed-by: Mike Marciniszyn <[email protected]>
Reviewed-by: Ira Weiny <[email protected]>
Signed-off-by: Alex Estrin <[email protected]>
Signed-off-by: Dennis Dalessandro <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/infiniband/ulp/ipoib/ipoib_main.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
index a009e943362a..6bc9a768f721 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -2273,6 +2273,9 @@ static struct net_device *ipoib_add_port(const char *format,
priv->ca, ipoib_event);
ib_register_event_handler(&priv->event_handler);
+ /* call event handler to ensure pkey in sync */
+ queue_work(ipoib_workqueue, &priv->flush_heavy);
+
result = register_netdev(priv->dev);
if (result) {
printk(KERN_WARNING "%s: couldn't register ipoib port %d; error %d\n",
--
2.15.1
From: Sebastian Ott <[email protected]>
[ Upstream commit 366b77ae43c5a3bf1a367f15ec8bc16e05035f14 ]
Commit 2a842acab109 ("block: introduce new block status code type")
added blk_status_t usage to the eadm subchannel driver. However,
blk_status_t is not defined when it comes in via <linux/blkdev.h> with
CONFIG_BLOCK=n.
Only include <linux/blk_types.h>, since this is the only dependency eadm has.
This fixes build failures like below:
In file included from drivers/s390/cio/eadm_sch.c:24:0:
./arch/s390/include/asm/eadm.h:111:4: error: unknown type name 'blk_status_t'; did you mean 'si_status'?
blk_status_t error);
Reported-by: Heiko Carstens <[email protected]>
Signed-off-by: Sebastian Ott <[email protected]>
Signed-off-by: Martin Schwidefsky <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/s390/include/asm/eadm.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/s390/include/asm/eadm.h b/arch/s390/include/asm/eadm.h
index eb5323161f11..bb63b2afdf6f 100644
--- a/arch/s390/include/asm/eadm.h
+++ b/arch/s390/include/asm/eadm.h
@@ -4,7 +4,7 @@
#include <linux/types.h>
#include <linux/device.h>
-#include <linux/blkdev.h>
+#include <linux/blk_types.h>
struct arqb {
u64 data;
--
2.15.1
From: Coly Li <[email protected]>
[ Upstream commit 99361bbf26337186f02561109c17a4c4b1a7536a ]
Kernel thread routine bch_writeback_thread() has the following code block,
447 down_write(&dc->writeback_lock);
448~450 if (check conditions) {
451 up_write(&dc->writeback_lock);
452 set_current_state(TASK_INTERRUPTIBLE);
453
454 if (kthread_should_stop())
455 return 0;
456
457 schedule();
458 continue;
459 }
If the condition check is true, the task state is set to TASK_INTERRUPTIBLE
and schedule() is called to wait for someone else to wake it up.
There are two issues in the current code:
1. The task state is set to TASK_INTERRUPTIBLE after the condition checks.
If another process changes the condition and calls wake_up_process(dc->
writeback_thread) in that window, the task state is set back to
TASK_INTERRUPTIBLE at line 452 and the writeback kernel thread loses that
wakeup.
2. At line 454, if kthread_should_stop() is true, the writeback kernel
thread returns to kernel/kthread.c:kthread() in TASK_INTERRUPTIBLE state
and calls do_exit(). It is not good to enter do_exit() with task state
TASK_INTERRUPTIBLE: on the following code path might_sleep() is called and
a warning is reported by __might_sleep(): "WARNING: do not call
blocking ops when !TASK_RUNNING; state=1 set at [xxxx]".
For the first issue, the task state should be set before the condition
checks. Indeed, because dc->writeback_lock is required when modifying all
the conditions, calling set_current_state() inside the code block where
dc->writeback_lock is held would be safe; but this is quite implicit, so I
still move set_current_state() before all the condition checks.
For the second issue, frankly speaking it does not hurt when a kernel
thread exits in TASK_INTERRUPTIBLE state, but this warning message scares
users and makes them feel there might be something risky with bcache that
could hurt their data. Setting the task state to TASK_RUNNING before
returning fixes this problem.
A similar issue in alloc.c:allocator_wait() is also fixed in this patch.
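The canonical pattern both call sites converge on looks roughly like this
(a hedged sketch; wakeup_condition() is a placeholder, not a real bcache
helper):
	for (;;) {
		/* publish the sleeping state before testing the condition,
		 * so a concurrent wake_up_process() cannot be lost */
		set_current_state(TASK_INTERRUPTIBLE);

		if (wakeup_condition() || kthread_should_stop()) {
			/* never leave the loop (or the thread) in
			 * TASK_INTERRUPTIBLE */
			__set_current_state(TASK_RUNNING);
			break;
		}

		schedule();
	}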
Changelog:
v3: merge two similar fixes into one patch
v2: fix the race issue in v1 patch.
v1: initial buggy fix.
Signed-off-by: Coly Li <[email protected]>
Reviewed-by: Hannes Reinecke <[email protected]>
Reviewed-by: Michael Lyle <[email protected]>
Cc: Michael Lyle <[email protected]>
Cc: Junhui Tang <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/md/bcache/alloc.c | 4 +++-
drivers/md/bcache/writeback.c | 7 +++++--
2 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index 934b1fce4ce1..02b576d55758 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -287,8 +287,10 @@ do { \
break; \
\
mutex_unlock(&(ca)->set->bucket_lock); \
- if (kthread_should_stop()) \
+ if (kthread_should_stop()) { \
+ set_current_state(TASK_RUNNING); \
return 0; \
+ } \
\
schedule(); \
mutex_lock(&(ca)->set->bucket_lock); \
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 70454f2ad2fa..f046dedc59ab 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -420,18 +420,21 @@ static int bch_writeback_thread(void *arg)
while (!kthread_should_stop()) {
down_write(&dc->writeback_lock);
+ set_current_state(TASK_INTERRUPTIBLE);
if (!atomic_read(&dc->has_dirty) ||
(!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
!dc->writeback_running)) {
up_write(&dc->writeback_lock);
- set_current_state(TASK_INTERRUPTIBLE);
- if (kthread_should_stop())
+ if (kthread_should_stop()) {
+ set_current_state(TASK_RUNNING);
return 0;
+ }
schedule();
continue;
}
+ set_current_state(TASK_RUNNING);
searched_full_index = refill_dirty(dc);
--
2.15.1
From: John Fastabend <[email protected]>
[ Upstream commit 3d9e952697de89b53227f06d4241f275eb99cfc4 ]
When a program is attached to a map we increment the program refcnt
to ensure that the program is not removed while it is potentially
being referenced from the sockmap side. However, if this same program
also references the map (this is a reasonably common pattern in
my programs) then the verifier will also increment the map's refcnt.
This is to ensure the map doesn't get garbage collected while the
program has a reference to it.
So we are left in a state where the map holds a refcnt on the program,
stopping the program from being removed (and thus from releasing its map
refcnt), and vice versa the program holds a refcnt on the map, stopping
the map from being freed (and thus from releasing its refcnt on the prog).
All this is fine as long as users detach the program while the
map fd is still around. But, if the user omits this detach command
we are left with a dangling map we can no longer release.
To resolve this when the map fd is released decrement the program
references and remove any reference from the map to the program.
This fixes the issue with possibly dangling map and creates a
user side API constraint. That is, the map fd must be held open
for programs to be attached to a map.
Fixes: 174a79ff9515 ("bpf: sockmap with sk redirect support")
Signed-off-by: John Fastabend <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
kernel/bpf/sockmap.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 1890be7ea9cd..53a4787c08d8 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -601,11 +601,6 @@ static void sock_map_free(struct bpf_map *map)
}
rcu_read_unlock();
- if (stab->bpf_verdict)
- bpf_prog_put(stab->bpf_verdict);
- if (stab->bpf_parse)
- bpf_prog_put(stab->bpf_parse);
-
sock_map_remove_complete(stab);
}
@@ -877,6 +872,19 @@ static int sock_map_update_elem(struct bpf_map *map,
return err;
}
+static void sock_map_release(struct bpf_map *map, struct file *map_file)
+{
+ struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
+ struct bpf_prog *orig;
+
+ orig = xchg(&stab->bpf_parse, NULL);
+ if (orig)
+ bpf_prog_put(orig);
+ orig = xchg(&stab->bpf_verdict, NULL);
+ if (orig)
+ bpf_prog_put(orig);
+}
+
const struct bpf_map_ops sock_map_ops = {
.map_alloc = sock_map_alloc,
.map_free = sock_map_free,
@@ -884,6 +892,7 @@ const struct bpf_map_ops sock_map_ops = {
.map_get_next_key = sock_map_get_next_key,
.map_update_elem = sock_map_update_elem,
.map_delete_elem = sock_map_delete_elem,
+ .map_release = sock_map_release,
};
BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock,
--
2.15.1
From: Paul Mackerras <[email protected]>
[ Upstream commit 05f2bb0313a2855e491dadfc8319b7da261d7074 ]
This fixes the computation of the HPTE index to use when the HPT
resizing code encounters a bolted HPTE which is stored in its
secondary HPTE group. The code inverts the HPTE group number, which
is correct, but doesn't then mask it with new_hash_mask. As a result,
new_pteg will be effectively negative, resulting in new_hptep
pointing before the new HPT, which will corrupt memory.
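A hedged worked example of the difference (standalone userspace
arithmetic; the mask and hash values are made up):
	#include <stdio.h>

	int main(void)
	{
		unsigned long new_hash_mask = 0xffff;	/* assumed: 64K PTE groups */
		unsigned long hash = 0x1234;

		unsigned long buggy = ~(hash & new_hash_mask);	/* ~new_pteg */
		unsigned long fixed = ~hash & new_hash_mask;

		/* on a 64-bit build buggy is 0xffffffffffffedcb: used as a group
		 * index it points far before the start of the new HPT, while
		 * fixed (0xedcb) stays within the mask */
		printf("buggy=%#lx fixed=%#lx\n", buggy, fixed);
		return 0;
	}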
In addition, this removes two BUG_ON statements. The condition that
the BUG_ONs were testing -- that we have computed the hash value
incorrectly -- has never been observed in testing, and if it did
occur, would only affect the guest, not the host. Given that
BUG_ON should only be used in conditions where the kernel (i.e.
the host kernel, in this case) can't possibly continue execution,
it is not appropriate here.
Reviewed-by: David Gibson <[email protected]>
Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 2645d484e945..df9b53f40b1e 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -1348,12 +1348,8 @@ static unsigned long resize_hpt_rehash_hpte(struct kvm_resize_hpt *resize,
}
new_pteg = hash & new_hash_mask;
- if (vpte & HPTE_V_SECONDARY) {
- BUG_ON(~pteg != (hash & old_hash_mask));
- new_pteg = ~new_pteg;
- } else {
- BUG_ON(pteg != (hash & old_hash_mask));
- }
+ if (vpte & HPTE_V_SECONDARY)
+ new_pteg = ~hash & new_hash_mask;
new_idx = new_pteg * HPTES_PER_GROUP + (idx % HPTES_PER_GROUP);
new_hptep = (__be64 *)(new->virt + (new_idx << 4));
--
2.15.1
From: David Howells <[email protected]>
[ Upstream commit 8c2f826dc36314059ac146c78d3bf8056b626446 ]
Don't put buffers of data to be handed to crypto on the stack as this may
cause an assertion failure in the kernel (see below). Fix this by using a
kmalloc'ed buffer instead; a sketch of the safe pattern follows the oops.
kernel BUG at ./include/linux/scatterlist.h:147!
...
RIP: 0010:rxkad_encrypt_response.isra.6+0x191/0x1b0 [rxrpc]
RSP: 0018:ffffbe2fc06cfca8 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff989277d59900 RCX: 0000000000000028
RDX: 0000259dc06cfd88 RSI: 0000000000000025 RDI: ffffbe30406cfd88
RBP: ffffbe2fc06cfd60 R08: ffffbe2fc06cfd08 R09: ffffbe2fc06cfd08
R10: 0000000000000000 R11: 0000000000000000 R12: 1ffff7c5f80d9f95
R13: ffffbe2fc06cfd88 R14: ffff98927a3f7aa0 R15: ffffbe2fc06cfd08
FS: 0000000000000000(0000) GS:ffff98927fc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055b1ff28f0f8 CR3: 000000001b412003 CR4: 00000000003606f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
rxkad_respond_to_challenge+0x297/0x330 [rxrpc]
rxrpc_process_connection+0xd1/0x690 [rxrpc]
? process_one_work+0x1c3/0x680
? __lock_is_held+0x59/0xa0
process_one_work+0x249/0x680
worker_thread+0x3a/0x390
? process_one_work+0x680/0x680
kthread+0x121/0x140
? kthread_create_worker_on_cpu+0x70/0x70
ret_from_fork+0x3a/0x50
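The rule behind the change, sketched (a hedged fragment, not a drop-in
from the patch): with CONFIG_VMAP_STACK the stack may live in vmalloc
space, which scatterlists cannot describe, whereas kmalloc memory is
always in the linear mapping:
	struct scatterlist sg;
	struct rxkad_response *resp;

	resp = kzalloc(sizeof(*resp), GFP_NOFS);	/* heap, not stack */
	if (!resp)
		return -ENOMEM;		/* treated like -EAGAIN: requeue and retry */
	sg_init_one(&sg, resp, sizeof(*resp));		/* safe: linear mapping */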
Reported-by: Jonathan Billings <[email protected]>
Reported-by: Marc Dionne <[email protected]>
Signed-off-by: David Howells <[email protected]>
Tested-by: Jonathan Billings <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
net/rxrpc/conn_event.c | 1 +
net/rxrpc/rxkad.c | 92 ++++++++++++++++++++++++++++----------------------
2 files changed, 52 insertions(+), 41 deletions(-)
diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
index 59a51a56e7c8..0435c4167a1a 100644
--- a/net/rxrpc/conn_event.c
+++ b/net/rxrpc/conn_event.c
@@ -404,6 +404,7 @@ void rxrpc_process_connection(struct work_struct *work)
case -EKEYEXPIRED:
case -EKEYREJECTED:
goto protocol_error;
+ case -ENOMEM:
case -EAGAIN:
goto requeue_and_leave;
case -ECONNABORTED:
diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c
index c38b3a1de56c..77cb23c7bd0a 100644
--- a/net/rxrpc/rxkad.c
+++ b/net/rxrpc/rxkad.c
@@ -773,8 +773,7 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn,
{
const struct rxrpc_key_token *token;
struct rxkad_challenge challenge;
- struct rxkad_response resp
- __attribute__((aligned(8))); /* must be aligned for crypto */
+ struct rxkad_response *resp;
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
const char *eproto;
u32 version, nonce, min_level, abort_code;
@@ -818,26 +817,29 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn,
token = conn->params.key->payload.data[0];
/* build the response packet */
- memset(&resp, 0, sizeof(resp));
-
- resp.version = htonl(RXKAD_VERSION);
- resp.encrypted.epoch = htonl(conn->proto.epoch);
- resp.encrypted.cid = htonl(conn->proto.cid);
- resp.encrypted.securityIndex = htonl(conn->security_ix);
- resp.encrypted.inc_nonce = htonl(nonce + 1);
- resp.encrypted.level = htonl(conn->params.security_level);
- resp.kvno = htonl(token->kad->kvno);
- resp.ticket_len = htonl(token->kad->ticket_len);
-
- resp.encrypted.call_id[0] = htonl(conn->channels[0].call_counter);
- resp.encrypted.call_id[1] = htonl(conn->channels[1].call_counter);
- resp.encrypted.call_id[2] = htonl(conn->channels[2].call_counter);
- resp.encrypted.call_id[3] = htonl(conn->channels[3].call_counter);
+ resp = kzalloc(sizeof(struct rxkad_response), GFP_NOFS);
+ if (!resp)
+ return -ENOMEM;
+
+ resp->version = htonl(RXKAD_VERSION);
+ resp->encrypted.epoch = htonl(conn->proto.epoch);
+ resp->encrypted.cid = htonl(conn->proto.cid);
+ resp->encrypted.securityIndex = htonl(conn->security_ix);
+ resp->encrypted.inc_nonce = htonl(nonce + 1);
+ resp->encrypted.level = htonl(conn->params.security_level);
+ resp->kvno = htonl(token->kad->kvno);
+ resp->ticket_len = htonl(token->kad->ticket_len);
+ resp->encrypted.call_id[0] = htonl(conn->channels[0].call_counter);
+ resp->encrypted.call_id[1] = htonl(conn->channels[1].call_counter);
+ resp->encrypted.call_id[2] = htonl(conn->channels[2].call_counter);
+ resp->encrypted.call_id[3] = htonl(conn->channels[3].call_counter);
/* calculate the response checksum and then do the encryption */
- rxkad_calc_response_checksum(&resp);
- rxkad_encrypt_response(conn, &resp, token->kad);
- return rxkad_send_response(conn, &sp->hdr, &resp, token->kad);
+ rxkad_calc_response_checksum(resp);
+ rxkad_encrypt_response(conn, resp, token->kad);
+ ret = rxkad_send_response(conn, &sp->hdr, resp, token->kad);
+ kfree(resp);
+ return ret;
protocol_error:
trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, eproto);
@@ -1048,8 +1050,7 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
struct sk_buff *skb,
u32 *_abort_code)
{
- struct rxkad_response response
- __attribute__((aligned(8))); /* must be aligned for crypto */
+ struct rxkad_response *response;
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
struct rxrpc_crypt session_key;
const char *eproto;
@@ -1061,17 +1062,22 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
_enter("{%d,%x}", conn->debug_id, key_serial(conn->server_key));
+ ret = -ENOMEM;
+ response = kzalloc(sizeof(struct rxkad_response), GFP_NOFS);
+ if (!response)
+ goto temporary_error;
+
eproto = tracepoint_string("rxkad_rsp_short");
abort_code = RXKADPACKETSHORT;
if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header),
- &response, sizeof(response)) < 0)
+ response, sizeof(*response)) < 0)
goto protocol_error;
- if (!pskb_pull(skb, sizeof(response)))
+ if (!pskb_pull(skb, sizeof(*response)))
BUG();
- version = ntohl(response.version);
- ticket_len = ntohl(response.ticket_len);
- kvno = ntohl(response.kvno);
+ version = ntohl(response->version);
+ ticket_len = ntohl(response->ticket_len);
+ kvno = ntohl(response->kvno);
_proto("Rx RESPONSE %%%u { v=%u kv=%u tl=%u }",
sp->hdr.serial, version, kvno, ticket_len);
@@ -1105,31 +1111,31 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
ret = rxkad_decrypt_ticket(conn, skb, ticket, ticket_len, &session_key,
&expiry, _abort_code);
if (ret < 0)
- goto temporary_error_free;
+ goto temporary_error_free_resp;
/* use the session key from inside the ticket to decrypt the
* response */
- rxkad_decrypt_response(conn, &response, &session_key);
+ rxkad_decrypt_response(conn, response, &session_key);
eproto = tracepoint_string("rxkad_rsp_param");
abort_code = RXKADSEALEDINCON;
- if (ntohl(response.encrypted.epoch) != conn->proto.epoch)
+ if (ntohl(response->encrypted.epoch) != conn->proto.epoch)
goto protocol_error_free;
- if (ntohl(response.encrypted.cid) != conn->proto.cid)
+ if (ntohl(response->encrypted.cid) != conn->proto.cid)
goto protocol_error_free;
- if (ntohl(response.encrypted.securityIndex) != conn->security_ix)
+ if (ntohl(response->encrypted.securityIndex) != conn->security_ix)
goto protocol_error_free;
- csum = response.encrypted.checksum;
- response.encrypted.checksum = 0;
- rxkad_calc_response_checksum(&response);
+ csum = response->encrypted.checksum;
+ response->encrypted.checksum = 0;
+ rxkad_calc_response_checksum(response);
eproto = tracepoint_string("rxkad_rsp_csum");
- if (response.encrypted.checksum != csum)
+ if (response->encrypted.checksum != csum)
goto protocol_error_free;
spin_lock(&conn->channel_lock);
for (i = 0; i < RXRPC_MAXCALLS; i++) {
struct rxrpc_call *call;
- u32 call_id = ntohl(response.encrypted.call_id[i]);
+ u32 call_id = ntohl(response->encrypted.call_id[i]);
eproto = tracepoint_string("rxkad_rsp_callid");
if (call_id > INT_MAX)
@@ -1153,12 +1159,12 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
eproto = tracepoint_string("rxkad_rsp_seq");
abort_code = RXKADOUTOFSEQUENCE;
- if (ntohl(response.encrypted.inc_nonce) != conn->security_nonce + 1)
+ if (ntohl(response->encrypted.inc_nonce) != conn->security_nonce + 1)
goto protocol_error_free;
eproto = tracepoint_string("rxkad_rsp_level");
abort_code = RXKADLEVELFAIL;
- level = ntohl(response.encrypted.level);
+ level = ntohl(response->encrypted.level);
if (level > RXRPC_SECURITY_ENCRYPT)
goto protocol_error_free;
conn->params.security_level = level;
@@ -1168,9 +1174,10 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
* as for a client connection */
ret = rxrpc_get_server_data_key(conn, &session_key, expiry, kvno);
if (ret < 0)
- goto temporary_error_free;
+ goto temporary_error_free_ticket;
kfree(ticket);
+ kfree(response);
_leave(" = 0");
return 0;
@@ -1179,12 +1186,15 @@ protocol_error_unlock:
protocol_error_free:
kfree(ticket);
protocol_error:
+ kfree(response);
trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, eproto);
*_abort_code = abort_code;
return -EPROTO;
-temporary_error_free:
+temporary_error_free_ticket:
kfree(ticket);
+temporary_error_free_resp:
+ kfree(response);
temporary_error:
/* Ignore the response packet if we got a temporary error such as
* ENOMEM. We just want to send the challenge again. Note that we
--
2.15.1
From: Ross Lagerwall <[email protected]>
[ Upstream commit f599c64fdf7d9c108e8717fb04bc41c680120da4 ]
When a netfront device is set up it registers a netdev fairly early on,
before it has set up the queues and is actually usable. A userspace tool
like NetworkManager will immediately try to open it and access its state
as soon as it appears. The bug can be reproduced by hotplugging VIFs
until the VM runs out of grant refs. It registers the netdev but fails
to set up any queues (since there are no more grant refs). In the
meantime, NetworkManager opens the device and the kernel crashes trying
to access the queues (of which there are none).
Fix this in two ways:
* For initial setup, register the netdev much later, after the queues
are setup. This avoids the race entirely.
* During a suspend/resume cycle, the frontend reconnects to the backend
and the queues are recreated. It is possible (though highly unlikely) to
race with something opening the device and accessing the queues after
they have been destroyed but before they have been recreated. Extend the
region covered by the rtnl semaphore to protect against this race. There
is a possibility that we fail to recreate the queues so check for this
in the open function.
Signed-off-by: Ross Lagerwall <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/net/xen-netfront.c | 46 ++++++++++++++++++++++++----------------------
1 file changed, 24 insertions(+), 22 deletions(-)
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index c980cdbd6e53..25b4a856f0bb 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -351,6 +351,9 @@ static int xennet_open(struct net_device *dev)
unsigned int i = 0;
struct netfront_queue *queue = NULL;
+ if (!np->queues)
+ return -ENODEV;
+
for (i = 0; i < num_queues; ++i) {
queue = &np->queues[i];
napi_enable(&queue->napi);
@@ -1358,18 +1361,8 @@ static int netfront_probe(struct xenbus_device *dev,
#ifdef CONFIG_SYSFS
info->netdev->sysfs_groups[0] = &xennet_dev_group;
#endif
- err = register_netdev(info->netdev);
- if (err) {
- pr_warn("%s: register_netdev err=%d\n", __func__, err);
- goto fail;
- }
return 0;
-
- fail:
- xennet_free_netdev(netdev);
- dev_set_drvdata(&dev->dev, NULL);
- return err;
}
static void xennet_end_access(int ref, void *page)
@@ -1738,8 +1731,6 @@ static void xennet_destroy_queues(struct netfront_info *info)
{
unsigned int i;
- rtnl_lock();
-
for (i = 0; i < info->netdev->real_num_tx_queues; i++) {
struct netfront_queue *queue = &info->queues[i];
@@ -1748,8 +1739,6 @@ static void xennet_destroy_queues(struct netfront_info *info)
netif_napi_del(&queue->napi);
}
- rtnl_unlock();
-
kfree(info->queues);
info->queues = NULL;
}
@@ -1765,8 +1754,6 @@ static int xennet_create_queues(struct netfront_info *info,
if (!info->queues)
return -ENOMEM;
- rtnl_lock();
-
for (i = 0; i < *num_queues; i++) {
struct netfront_queue *queue = &info->queues[i];
@@ -1775,7 +1762,7 @@ static int xennet_create_queues(struct netfront_info *info,
ret = xennet_init_queue(queue);
if (ret < 0) {
- dev_warn(&info->netdev->dev,
+ dev_warn(&info->xbdev->dev,
"only created %d queues\n", i);
*num_queues = i;
break;
@@ -1789,10 +1776,8 @@ static int xennet_create_queues(struct netfront_info *info,
netif_set_real_num_tx_queues(info->netdev, *num_queues);
- rtnl_unlock();
-
if (*num_queues == 0) {
- dev_err(&info->netdev->dev, "no queues\n");
+ dev_err(&info->xbdev->dev, "no queues\n");
return -EINVAL;
}
return 0;
@@ -1829,6 +1814,7 @@ static int talk_to_netback(struct xenbus_device *dev,
goto out;
}
+ rtnl_lock();
if (info->queues)
xennet_destroy_queues(info);
@@ -1839,6 +1825,7 @@ static int talk_to_netback(struct xenbus_device *dev,
info->queues = NULL;
goto out;
}
+ rtnl_unlock();
/* Create shared ring, alloc event channel -- for each queue */
for (i = 0; i < num_queues; ++i) {
@@ -1935,8 +1922,10 @@ abort_transaction_no_dev_fatal:
xenbus_transaction_end(xbt, 1);
destroy_ring:
xennet_disconnect_backend(info);
+ rtnl_lock();
xennet_destroy_queues(info);
out:
+ rtnl_unlock();
device_unregister(&dev->dev);
return err;
}
@@ -1966,6 +1955,15 @@ static int xennet_connect(struct net_device *dev)
netdev_update_features(dev);
rtnl_unlock();
+ if (dev->reg_state == NETREG_UNINITIALIZED) {
+ err = register_netdev(dev);
+ if (err) {
+ pr_warn("%s: register_netdev err=%d\n", __func__, err);
+ device_unregister(&np->xbdev->dev);
+ return err;
+ }
+ }
+
/*
* All public and private state should now be sane. Get
* ready to start sending and receiving packets and give the driver
@@ -2151,10 +2149,14 @@ static int xennet_remove(struct xenbus_device *dev)
xennet_disconnect_backend(info);
- unregister_netdev(info->netdev);
+ if (info->netdev->reg_state == NETREG_REGISTERED)
+ unregister_netdev(info->netdev);
- if (info->queues)
+ if (info->queues) {
+ rtnl_lock();
xennet_destroy_queues(info);
+ rtnl_unlock();
+ }
xennet_free_netdev(info->netdev);
return 0;
--
2.15.1
From: Jan H. Schönherr <[email protected]>
[ Upstream commit ee190ca6516bc8257e3d36187ca6f0f71a9ec477 ]
follow_pte_pmd() can theoretically return after having acquired a PMD
lock, even when DAX was not compiled with CONFIG_FS_DAX_PMD.
Release the PMD lock unconditionally.
Link: http://lkml.kernel.org/r/[email protected]
Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Jan H. Schönherr <[email protected]>
Reviewed-by: Ross Zwisler <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/dax.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/dax.c b/fs/dax.c
index 191306cd8b6b..ddb4981ae32e 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -630,8 +630,8 @@ static void dax_mapping_entry_mkclean(struct address_space *mapping,
set_pmd_at(vma->vm_mm, address, pmdp, pmd);
mmu_notifier_invalidate_range(vma->vm_mm, start, end);
unlock_pmd:
- spin_unlock(ptl);
#endif
+ spin_unlock(ptl);
} else {
if (pfn != pte_pfn(*ptep))
goto unlock_pte;
--
2.15.1
From: Ross Lagerwall <[email protected]>
[ Upstream commit 3ac7292a25db1c607a50752055a18aba32ac2176 ]
The page given to gnttab_end_foreign_access() to free could be a
compound page so use put_page() instead of free_page() since it can
handle both compound and single pages correctly.
This bug was discovered when migrating a Xen VM with several VIFs and
CONFIG_DEBUG_VM enabled. It hits a BUG usually after fewer than 10
iterations. All netfront devices disconnect from the backend during a
suspend/resume and this will call gnttab_end_foreign_access() if a
netfront queue has an outstanding skb. The mismatch between calling
get_page() and free_page() on a compound page causes a reference
counting error which is detected when DEBUG_VM is enabled.
Signed-off-by: Ross Lagerwall <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/xen/grant-table.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 2c6a9114d332..1fb374466e84 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -328,7 +328,7 @@ static void gnttab_handle_deferred(unsigned long unused)
if (entry->page) {
pr_debug("freeing g.e. %#x (pfn %#lx)\n",
entry->ref, page_to_pfn(entry->page));
- __free_page(entry->page);
+ put_page(entry->page);
} else
pr_info("freeing g.e. %#x\n", entry->ref);
kfree(entry);
@@ -384,7 +384,7 @@ void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
if (gnttab_end_foreign_access_ref(ref, readonly)) {
put_free_entry(ref);
if (page != 0)
- free_page(page);
+ put_page(virt_to_page(page));
} else
gnttab_add_deferred(ref, readonly,
page ? virt_to_page(page) : NULL);
--
2.15.1
From: Chuck Lever <[email protected]>
[ Upstream commit 175e03101d36c3034f3c80038d4c28838351a7f2 ]
A single NFSv4 WRITE compound can often have three operations:
PUTFH, WRITE, then GETATTR.
When the WRITE payload is sent in a Read chunk, the client places
the GETATTR in the inline part of the RPC/RDMA message, just after
the WRITE operation (sans payload). The position value in the Read
chunk enables the receiver to insert the Read chunk at the correct
place in the received XDR stream; that is between the WRITE and
GETATTR.
According to RFC 8166, an NFS/RDMA client does not have to add XDR
round-up to the Read chunk that carries the WRITE payload. The
receiver adds XDR round-up padding if it is absent and the
receiver's XDR decoder requires it to be present.
Commit 193bcb7b3719 ("svcrdma: Populate tail iovec when receiving")
attempted to add support for receiving such a compound so that just
the WRITE payload appears in rq_arg's page list, and the trailing
GETATTR is placed in rq_arg's tail iovec. (TCP just strings the
whole compound into the head iovec and page list, without regard
to the alignment of the WRITE payload).
The server transport logic also had to accommodate the optional XDR
round-up of the Read chunk, which it did simply by lengthening the
tail iovec when round-up was needed. This approach is adequate for
the NFSv2 and NFSv3 WRITE decoders.
Unfortunately it is not sufficient for nfsd4_decode_write. When the
Read chunk length is a couple of bytes less than PAGE_SIZE, the
computation at the end of nfsd4_decode_write allows argp->pagelen to
go negative, which breaks the logic in read_buf that looks for the
tail iovec.
The result is that a WRITE operation whose payload length is just
less than a multiple of a page succeeds, but the subsequent GETATTR
in the same compound fails with NFS4ERR_OP_ILLEGAL because the XDR
decoder can't find it. Clients ignore the error, but they must
update their attribute cache via a separate round trip.
As nfsd4_decode_write appears to expect the payload itself to always
have appropriate XDR round-up, have svc_rdma_build_normal_read_chunk
add the Read chunk XDR round-up to the page_len rather than
lengthening the tail iovec.
Reported-by: Olga Kornievskaia <[email protected]>
Fixes: 193bcb7b3719 ("svcrdma: Populate tail iovec when receiving")
Signed-off-by: Chuck Lever <[email protected]>
Tested-by: Olga Kornievskaia <[email protected]>
Signed-off-by: J. Bruce Fields <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
net/sunrpc/xprtrdma/svc_rdma_rw.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/net/sunrpc/xprtrdma/svc_rdma_rw.c b/net/sunrpc/xprtrdma/svc_rdma_rw.c
index 9bd04549a1ad..12b9a7e0b6d2 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_rw.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_rw.c
@@ -727,12 +727,16 @@ static int svc_rdma_build_normal_read_chunk(struct svc_rqst *rqstp,
head->arg.head[0].iov_len - info->ri_position;
head->arg.head[0].iov_len = info->ri_position;
- /* Read chunk may need XDR roundup (see RFC 5666, s. 3.7).
+ /* Read chunk may need XDR roundup (see RFC 8166, s. 3.4.5.2).
*
- * NFSv2/3 write decoders need the length of the tail to
- * contain the size of the roundup padding.
+ * If the client already rounded up the chunk length, the
+ * length does not change. Otherwise, the length of the page
+ * list is increased to include XDR round-up.
+ *
+ * Currently these chunks always start at page offset 0,
+ * thus the rounded-up length never crosses a page boundary.
*/
- head->arg.tail[0].iov_len += 4 - (info->ri_chunklen & 3);
+ info->ri_chunklen = XDR_QUADLEN(info->ri_chunklen) << 2;
head->arg.page_len = info->ri_chunklen;
head->arg.len += info->ri_chunklen;
--
2.15.1
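For reference, XDR encodes data in four-byte (quad) units, so the chunk length has to be rounded up to the next multiple of four. Below is a minimal userspace sketch of the computation the patch switches to; it assumes the usual definition XDR_QUADLEN(l) == ((l) + 3) >> 2 and is illustrative only, not part of the patch.

#include <stdio.h>

/* Round-up applied to info->ri_chunklen in the hunk above:
 * XDR_QUADLEN(x) << 2 rounds x up to the next multiple of 4.
 */
#define XDR_QUADLEN(l)  (((l) + 3) >> 2)

int main(void)
{
        unsigned int lens[] = { 4093, 4094, 4095, 4096 };
        unsigned int i;

        for (i = 0; i < sizeof(lens) / sizeof(lens[0]); i++) {
                unsigned int rounded = XDR_QUADLEN(lens[i]) << 2;

                printf("payload %u -> padded %u (pad %u)\n",
                       lens[i], rounded, rounded - lens[i]);
        }
        return 0;
}

For a payload a couple of bytes short of PAGE_SIZE this adds the padding to the page list length instead of lengthening the tail iovec, which is what keeps argp->pagelen from going negative in nfsd4_decode_write.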
From: Tony Lindgren <[email protected]>
[ Upstream commit 69728051f5bf15efaf6edfbcfe1b5a49a2437918 ]
If a device is runtime PM suspended when we enter suspend and has
a dedicated wake IRQ, we can get the following warning:
WARNING: CPU: 0 PID: 108 at kernel/irq/manage.c:526 enable_irq+0x40/0x94
[ 102.087860] Unbalanced enable for IRQ 147
...
(enable_irq) from [<c06117a8>] (dev_pm_arm_wake_irq+0x4c/0x60)
(dev_pm_arm_wake_irq) from [<c0618360>]
(device_wakeup_arm_wake_irqs+0x58/0x9c)
(device_wakeup_arm_wake_irqs) from [<c0615948>]
(dpm_suspend_noirq+0x10/0x48)
(dpm_suspend_noirq) from [<c01ac7ac>]
(suspend_devices_and_enter+0x30c/0xf14)
(suspend_devices_and_enter) from [<c01adf20>]
(enter_state+0xad4/0xbd8)
(enter_state) from [<c01ad3ec>] (pm_suspend+0x38/0x98)
(pm_suspend) from [<c01ab3e8>] (state_store+0x68/0xc8)
This is because the dedicated wake IRQ for the device may have been
already enabled earlier by dev_pm_enable_wake_irq_check(). Fix the
issue by checking for runtime PM suspended status.
This issue can be easily reproduced by setting the serial console log level
to zero, letting the serial console idle, and suspending the system from
an ssh terminal. On resume, dmesg will contain the warning above.
The reason I have not run into this issue earlier is that I typically
run my PM test cases from a serial console instead of over ssh.

Fixes: c84345597558 (PM / wakeirq: Enable dedicated wakeirq for suspend)
Signed-off-by: Tony Lindgren <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/base/power/wakeirq.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/base/power/wakeirq.c b/drivers/base/power/wakeirq.c
index ae0429827f31..67c50738834b 100644
--- a/drivers/base/power/wakeirq.c
+++ b/drivers/base/power/wakeirq.c
@@ -323,7 +323,8 @@ void dev_pm_arm_wake_irq(struct wake_irq *wirq)
return;
if (device_may_wakeup(wirq->dev)) {
- if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED)
+ if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED &&
+ !pm_runtime_status_suspended(wirq->dev))
enable_irq(wirq->irq);
enable_irq_wake(wirq->irq);
@@ -345,7 +346,8 @@ void dev_pm_disarm_wake_irq(struct wake_irq *wirq)
if (device_may_wakeup(wirq->dev)) {
disable_irq_wake(wirq->irq);
- if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED)
+ if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED &&
+ !pm_runtime_status_suspended(wirq->dev))
disable_irq_nosync(wirq->irq);
}
}
--
2.15.1
From: Arnd Bergmann <[email protected]>
[ Upstream commit ade7db991b47ab3016a414468164f4966bd08202 ]
This bug was fixed before, but came up again with the latest
compiler in another function:
fs/cifs/cifssmb.c: In function 'CIFSSMBSetEA':
fs/cifs/cifssmb.c:6362:3: error: 'strncpy' offset 8 is out of the bounds [0, 4] [-Werror=array-bounds]
strncpy(parm_data->list[0].name, ea_name, name_len);
Let's apply the same fix that was used for the other instances.
Fixes: b2a3ad9ca502 ("cifs: silence compiler warnings showing up with gcc-4.7.0")
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Steve French <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/cifs/cifssmb.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
index 35dc5bf01ee2..7fd39ea6e22e 100644
--- a/fs/cifs/cifssmb.c
+++ b/fs/cifs/cifssmb.c
@@ -6331,9 +6331,7 @@ SetEARetry:
pSMB->InformationLevel =
cpu_to_le16(SMB_SET_FILE_EA);
- parm_data =
- (struct fealist *) (((char *) &pSMB->hdr.Protocol) +
- offset);
+ parm_data = (void *)pSMB + offsetof(struct smb_hdr, Protocol) + offset;
pSMB->ParameterOffset = cpu_to_le16(param_offset);
pSMB->DataOffset = cpu_to_le16(offset);
pSMB->SetupCount = 1;
--
2.15.1
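The warning goes away because the new expression computes the target address from the whole SMB request buffer instead of deriving it from the four-byte Protocol member, so the compiler no longer sees a write past a small array. Here is a standalone sketch of the same pointer arithmetic; the structure and field names are hypothetical stand-ins, not the real CIFS definitions.

#include <stdio.h>
#include <stddef.h>
#include <string.h>

struct req {
        int command;
        unsigned char protocol[4];      /* small member the data area follows */
        char data[64];
};

int main(void)
{
        struct req r;
        size_t offset = 8;              /* offset of the payload past 'protocol' */
        char *p;

        memset(&r, 0, sizeof(r));

        /* Old shape (the kind of expression that tripped -Warray-bounds
         * in CIFSSMBSetEA):
         *     p = ((char *)&r.protocol) + offset;
         * New shape: the same address, derived from the enclosing object.
         */
        p = (char *)&r + offsetof(struct req, protocol) + offset;

        strcpy(p, "value");
        printf("wrote \"%s\" at byte offset %zu of the request\n",
               p, (size_t)(p - (char *)&r));
        return 0;
}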
From: "Steven Rostedt (VMware)" <[email protected]>
[ Upstream commit 97fe22adf33f06519bfdf7dad33bcd562e366c8f ]
Al Viro discovered a bug in the glob ftrace filtering code where "*a*b" is
treated the same as "a*b", and functions that would be selected by "*a*b"
but not "a*b" are not selected with "*a*b".
Add tests for patterns "*a*b" and "a*b*" to the glob selftest.
Link: http://lkml.kernel.org/r/[email protected]
Cc: Shuah Khan <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc
index 589d52b211b7..27a54a17da65 100644
--- a/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc
+++ b/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc
@@ -29,6 +29,12 @@ ftrace_filter_check '*schedule*' '^.*schedule.*$'
# filter by *, end match
ftrace_filter_check 'schedule*' '^schedule.*$'
+# filter by *mid*end
+ftrace_filter_check '*aw*lock' '.*aw.*lock$'
+
+# filter by start*mid*
+ftrace_filter_check 'mutex*try*' '^mutex.*try.*'
+
# Advanced full-glob matching feature is recently supported.
# Skip the tests if we are sure the kernel does not support it.
if grep -q 'accepts: .* glob-matching-pattern' README ; then
--
2.15.1
From: Don Hiatt <[email protected]>
[ Upstream commit 87daac68f77a3e21a1113f816e6a7be0b38bdde8 ]
iWarp devices do not support the creation of address handles
so return AH_ATTR_TYPE_UNDEFINED for all iWarp devices.
While we are here reduce the size of port_num to u8 and add
a comment.
Fixes: 44c58487d51a ("IB/core: Define 'ib' and 'roce' rdma_ah_attr types")
Reported-by: Parav Pandit <[email protected]>
CC: Sean Hefty <[email protected]>
Reviewed-by: Ira Weiny <[email protected]>
Reviewed-by: Shiraz Saleem <[email protected]>
Signed-off-by: Don Hiatt <[email protected]>
Signed-off-by: Dennis Dalessandro <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
include/rdma/ib_verbs.h | 20 ++++++++++++++------
1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index a9fae49a1883..08f3d8699a27 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -866,6 +866,7 @@ struct ib_mr_status {
__attribute_const__ enum ib_rate mult_to_ib_rate(int mult);
enum rdma_ah_attr_type {
+ RDMA_AH_ATTR_TYPE_UNDEFINED,
RDMA_AH_ATTR_TYPE_IB,
RDMA_AH_ATTR_TYPE_ROCE,
RDMA_AH_ATTR_TYPE_OPA,
@@ -3762,17 +3763,24 @@ static inline void rdma_ah_set_grh(struct rdma_ah_attr *attr,
grh->traffic_class = traffic_class;
}
-/*Get AH type */
+/**
+ * rdma_ah_find_type - Return address handle type.
+ *
+ * @dev: Device to be checked
+ * @port_num: Port number
+ */
static inline enum rdma_ah_attr_type rdma_ah_find_type(struct ib_device *dev,
- u32 port_num)
+ u8 port_num)
{
if (rdma_protocol_roce(dev, port_num))
return RDMA_AH_ATTR_TYPE_ROCE;
- else if ((rdma_protocol_ib(dev, port_num)) &&
- (rdma_cap_opa_ah(dev, port_num)))
- return RDMA_AH_ATTR_TYPE_OPA;
- else
+ if (rdma_protocol_ib(dev, port_num)) {
+ if (rdma_cap_opa_ah(dev, port_num))
+ return RDMA_AH_ATTR_TYPE_OPA;
return RDMA_AH_ATTR_TYPE_IB;
+ }
+
+ return RDMA_AH_ATTR_TYPE_UNDEFINED;
}
/**
--
2.15.1
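A detail worth noting is that the new enumerator is listed first, so RDMA_AH_ATTR_TYPE_UNDEFINED has the value 0 and a zero-initialised attribute now reads as "undefined" rather than as a valid IB type. A small illustrative sketch of the pattern follows; the boolean parameters are hypothetical stand-ins for the rdma_protocol_*()/rdma_cap_opa_ah() checks.

#include <stdio.h>

enum ah_type { AH_TYPE_UNDEFINED, AH_TYPE_IB, AH_TYPE_ROCE, AH_TYPE_OPA };

static enum ah_type find_type(int is_roce, int is_ib, int has_opa_ah)
{
        if (is_roce)
                return AH_TYPE_ROCE;
        if (is_ib)
                return has_opa_ah ? AH_TYPE_OPA : AH_TYPE_IB;

        /* e.g. iWarp: address handles are not supported */
        return AH_TYPE_UNDEFINED;
}

int main(void)
{
        printf("iwarp -> %d (undefined)\n", find_type(0, 0, 0));
        printf("roce  -> %d\n", find_type(1, 0, 0));
        printf("opa   -> %d\n", find_type(0, 1, 1));
        return 0;
}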
From: Will Deacon <[email protected]>
[ Upstream commit 11dc13224c975efcec96647a4768a6f1bb7a19a8 ]
When queuing on the qspinlock, the count field for the current CPU's head
node is incremented. This needn't be atomic because locking in e.g. IRQ
context is balanced and so an IRQ will return with node->count as it
found it.
However, the compiler could in theory reorder the initialisation of
node[idx] before the increment of the head node->count, causing an
IRQ to overwrite the initialised node and potentially corrupt the lock
state.
Avoid the potential for this harmful compiler reordering by placing a
barrier() between the increment of the head node->count and the subsequent
node initialisation.
Signed-off-by: Will Deacon <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
kernel/locking/qspinlock.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 294294c71ba4..50dc42aeaa56 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -379,6 +379,14 @@ queue:
tail = encode_tail(smp_processor_id(), idx);
node += idx;
+
+ /*
+ * Ensure that we increment the head node->count before initialising
+ * the actual node. If the compiler is kind enough to reorder these
+ * stores, then an IRQ could overwrite our assignments.
+ */
+ barrier();
+
node->locked = 0;
node->next = NULL;
pv_init_node(node);
--
2.15.1
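barrier() is a pure compiler barrier: it emits no instruction and only forbids the compiler from moving memory accesses across it. The userspace sketch below uses the usual asm-based definition and a slot counter in place of node->count; in the kernel the concurrent actor is an IRQ on the same CPU, which a plain userspace program can only describe in comments, so treat this strictly as an illustration of the ordering constraint.

#include <stdio.h>

/* Conventional definition of a compiler-only barrier. */
#define barrier()       __asm__ __volatile__("" ::: "memory")

struct node {
        int locked;
        struct node *next;
};

static struct node nodes[4];
static int count;

static void queue_node(void)
{
        struct node *node = &nodes[count];

        count++;        /* claim the slot index first ... */

        barrier();      /* ... and only then initialise the slot, so a
                         * nested user (an IRQ, in the qspinlock case)
                         * entering here already sees the bumped count
                         * and picks the next slot instead of this one.
                         */
        node->locked = 0;
        node->next = NULL;
}

int main(void)
{
        queue_node();
        printf("count=%d, nodes[0].locked=%d\n", count, nodes[0].locked);
        return 0;
}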
From: Daniel Borkmann <[email protected]>
[ Upstream commit 941ff6f11c020913f5cddf543a9ec63475d7c082 ]
Fix two issues in the reuseport_bpf selftests that were
reported by Linaro CI:
[...]
+ ./reuseport_bpf
---- IPv4 UDP ----
Testing EBPF mod 10...
Reprograming, testing mod 5...
./reuseport_bpf: ebpf error. log:
0: (bf) r6 = r1
1: (20) r0 = *(u32 *)skb[0]
2: (97) r0 %= 10
3: (95) exit
processed 4 insns
: Operation not permitted
+ echo FAIL
[...]
---- IPv4 TCP ----
Testing EBPF mod 10...
./reuseport_bpf: failed to bind send socket: Address already in use
+ echo FAIL
[...]
For the former, adjust the rlimit, since this was the cause of the
failure to load the BPF prog; for the latter, add SO_REUSEADDR.
Reported-by: Naresh Kamboju <[email protected]>
Link: https://bugs.linaro.org/show_bug.cgi?id=3502
Signed-off-by: Daniel Borkmann <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
tools/testing/selftests/net/reuseport_bpf.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/net/reuseport_bpf.c b/tools/testing/selftests/net/reuseport_bpf.c
index 4a8217448f20..cad14cd0ea92 100644
--- a/tools/testing/selftests/net/reuseport_bpf.c
+++ b/tools/testing/selftests/net/reuseport_bpf.c
@@ -21,6 +21,7 @@
#include <sys/epoll.h>
#include <sys/types.h>
#include <sys/socket.h>
+#include <sys/resource.h>
#include <unistd.h>
#ifndef ARRAY_SIZE
@@ -190,11 +191,14 @@ static void send_from(struct test_params p, uint16_t sport, char *buf,
struct sockaddr * const saddr = new_any_sockaddr(p.send_family, sport);
struct sockaddr * const daddr =
new_loopback_sockaddr(p.send_family, p.recv_port);
- const int fd = socket(p.send_family, p.protocol, 0);
+ const int fd = socket(p.send_family, p.protocol, 0), one = 1;
if (fd < 0)
error(1, errno, "failed to create send socket");
+ if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one)))
+ error(1, errno, "failed to set reuseaddr");
+
if (bind(fd, saddr, sockaddr_size()))
error(1, errno, "failed to bind send socket");
@@ -433,6 +437,21 @@ void enable_fastopen(void)
}
}
+static struct rlimit rlim_old, rlim_new;
+
+static __attribute__((constructor)) void main_ctor(void)
+{
+ getrlimit(RLIMIT_MEMLOCK, &rlim_old);
+ rlim_new.rlim_cur = rlim_old.rlim_cur + (1UL << 20);
+ rlim_new.rlim_max = rlim_old.rlim_max + (1UL << 20);
+ setrlimit(RLIMIT_MEMLOCK, &rlim_new);
+}
+
+static __attribute__((destructor)) void main_dtor(void)
+{
+ setrlimit(RLIMIT_MEMLOCK, &rlim_old);
+}
+
int main(void)
{
fprintf(stderr, "---- IPv4 UDP ----\n");
--
2.15.1
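Both changes are ordinary userspace idioms: BPF program memory is charged against RLIMIT_MEMLOCK, and rebinding the same source port across sub-tests needs SO_REUSEADDR. The sketch below shows the two calls outside the selftest; the loopback address, port 0 and the 1 MiB bump are arbitrary illustrative choices, and finite rlimits are assumed.

#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void)
{
        struct rlimit rl;
        struct sockaddr_in addr;
        int fd, one = 1;

        /* Raise the locked-memory limit so small BPF programs can be
         * charged against it without hitting EPERM.
         */
        if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0) {
                rl.rlim_cur += 1UL << 20;
                rl.rlim_max += 1UL << 20;
                if (setrlimit(RLIMIT_MEMLOCK, &rl))
                        perror("setrlimit");
        }

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
                return 1;

        /* Allow rebinding an address/port that a previous run left in
         * TIME_WAIT, the kind of failure the TCP part of the test hit.
         */
        if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one)))
                perror("SO_REUSEADDR");

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(0);       /* any free port */

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)))
                perror("bind");

        close(fd);
        return 0;
}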
From: "Rafael J. Wysocki" <[email protected]>
[ Upstream commit 3cd091a773936c54344a519f7ee1379ccb620bee ]
Commit 662591461c4b (ACPI / EC: Drop EC noirq hooks to fix a
regression) modified the ACPI EC driver so that it doesn't switch
over to busy polling mode during noirq stages of system suspend and
resume in an attempt to fix an issue resulting from that behavior.
However, that modification introduced a system resume regression on
Thinkpad X240, so make the EC driver switch over to the polling mode
during noirq stages of system suspend and resume again, which
effectively reverts the problematic commit.
Fixes: 662591461c4b (ACPI / EC: Drop EC noirq hooks to fix a regression)
Link: https://bugzilla.kernel.org/show_bug.cgi?id=197863
Reported-by: Markus Demleitner <[email protected]>
Tested-by: Markus Demleitner <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/acpi/ec.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
index df842465634a..fedbcfd45b67 100644
--- a/drivers/acpi/ec.c
+++ b/drivers/acpi/ec.c
@@ -1927,6 +1927,9 @@ static int acpi_ec_suspend_noirq(struct device *dev)
ec->reference_count >= 1)
acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_DISABLE);
+ if (acpi_sleep_no_ec_events())
+ acpi_ec_enter_noirq(ec);
+
return 0;
}
@@ -1934,6 +1937,9 @@ static int acpi_ec_resume_noirq(struct device *dev)
{
struct acpi_ec *ec = acpi_driver_data(to_acpi_device(dev));
+ if (acpi_sleep_no_ec_events())
+ acpi_ec_leave_noirq(ec);
+
if (ec_no_wakeup && test_bit(EC_FLAGS_STARTED, &ec->flags) &&
ec->reference_count >= 1)
acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_ENABLE);
--
2.15.1
From: "J. Bruce Fields" <[email protected]>
[ Upstream commit 0078117c6d9160031b866cfa1853514d4f6865d2 ]
A client that sends more than a hundred ops in a single compound
currently gets an rpc-level GARBAGE_ARGS error.
It would be more helpful to return NFS4ERR_RESOURCE, since that gives
the client a better idea how to recover (for example by splitting up the
compound into smaller compounds).
This is all a bit academic since we've never actually seen a reason for
clients to send such long compounds, but we may as well fix it.
While we're there, just use NFSD_MAX_OPS_PER_COMPOUND == 16, the
constant we already use in the 4.1 case, instead of hard-coding 100.
Chances anyone actually uses even 16 ops per compound are small enough
that I think there is a negligible risk of any regression.
This fixes pynfs test COMP6.
Reported-by: "Lu, Xinyu" <[email protected]>
Signed-off-by: J. Bruce Fields <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/nfsd/nfs4proc.c | 3 +++
fs/nfsd/nfs4xdr.c | 9 +++++++--
2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index 8ce60986fb75..8706b141e4a4 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -1712,6 +1712,9 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
status = nfserr_minor_vers_mismatch;
if (nfsd_minorversion(args->minorversion, NFSD_TEST) <= 0)
goto out;
+ status = nfserr_resource;
+ if (args->opcnt > NFSD_MAX_OPS_PER_COMPOUND)
+ goto out;
status = nfs41_check_op_ordering(args);
if (status) {
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 2c61c6b8ae09..5dcd7cb45b2d 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -1918,8 +1918,13 @@ nfsd4_decode_compound(struct nfsd4_compoundargs *argp)
if (argp->taglen > NFSD4_MAX_TAGLEN)
goto xdr_error;
- if (argp->opcnt > 100)
- goto xdr_error;
+ /*
+ * NFS4ERR_RESOURCE is a more helpful error than GARBAGE_ARGS
+ * here, so we return success at the xdr level so that
+ * nfsd4_proc can handle this as an NFS-level error.
+ */
+ if (argp->opcnt > NFSD_MAX_OPS_PER_COMPOUND)
+ return 0;
if (argp->opcnt > ARRAY_SIZE(argp->iops)) {
argp->ops = kzalloc(argp->opcnt * sizeof(*argp->ops), GFP_KERNEL);
--
2.15.1
From: Stephen Boyd <[email protected]>
[ Upstream commit 95a2562590c2f64a0398183f978d5cf3db6d0284 ]
On some platforms there's an ITS available but it's not enabled
because reading or writing the registers is denied by the
firmware. In fact, reading or writing them will cause the system
to reset. We could remove the node from DT in such a case, but
it's better to skip nodes that are marked as "disabled" in DT so
that we can describe the hardware that exists and use the status
property to indicate how the firmware has configured things.
Cc: Stuart Yoder <[email protected]>
Cc: Laurentiu Tudor <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Rajendra Nayak <[email protected]>
Signed-off-by: Stephen Boyd <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/irqchip/irq-gic-v3-its-pci-msi.c | 2 ++
drivers/irqchip/irq-gic-v3-its-platform-msi.c | 2 ++
drivers/irqchip/irq-gic-v3-its.c | 2 ++
drivers/staging/fsl-mc/bus/irq-gic-v3-its-fsl-mc-msi.c | 2 ++
4 files changed, 8 insertions(+)
diff --git a/drivers/irqchip/irq-gic-v3-its-pci-msi.c b/drivers/irqchip/irq-gic-v3-its-pci-msi.c
index 14a8c0a7e095..25a98de5cfb2 100644
--- a/drivers/irqchip/irq-gic-v3-its-pci-msi.c
+++ b/drivers/irqchip/irq-gic-v3-its-pci-msi.c
@@ -132,6 +132,8 @@ static int __init its_pci_of_msi_init(void)
for (np = of_find_matching_node(NULL, its_device_id); np;
np = of_find_matching_node(np, its_device_id)) {
+ if (!of_device_is_available(np))
+ continue;
if (!of_property_read_bool(np, "msi-controller"))
continue;
diff --git a/drivers/irqchip/irq-gic-v3-its-platform-msi.c b/drivers/irqchip/irq-gic-v3-its-platform-msi.c
index 833a90fe33ae..8881a053c173 100644
--- a/drivers/irqchip/irq-gic-v3-its-platform-msi.c
+++ b/drivers/irqchip/irq-gic-v3-its-platform-msi.c
@@ -154,6 +154,8 @@ static void __init its_pmsi_of_init(void)
for (np = of_find_matching_node(NULL, its_device_id); np;
np = of_find_matching_node(np, its_device_id)) {
+ if (!of_device_is_available(np))
+ continue;
if (!of_property_read_bool(np, "msi-controller"))
continue;
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index af57f8473a88..13f195c9743e 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -3083,6 +3083,8 @@ static int __init its_of_probe(struct device_node *node)
for (np = of_find_matching_node(node, its_device_id); np;
np = of_find_matching_node(np, its_device_id)) {
+ if (!of_device_is_available(np))
+ continue;
if (!of_property_read_bool(np, "msi-controller")) {
pr_warn("%pOF: no msi-controller property, ITS ignored\n",
np);
diff --git a/drivers/staging/fsl-mc/bus/irq-gic-v3-its-fsl-mc-msi.c b/drivers/staging/fsl-mc/bus/irq-gic-v3-its-fsl-mc-msi.c
index 123e4af58408..50260cb5056d 100644
--- a/drivers/staging/fsl-mc/bus/irq-gic-v3-its-fsl-mc-msi.c
+++ b/drivers/staging/fsl-mc/bus/irq-gic-v3-its-fsl-mc-msi.c
@@ -75,6 +75,8 @@ int __init its_fsl_mc_msi_init(void)
for (np = of_find_matching_node(NULL, its_device_id); np;
np = of_find_matching_node(np, its_device_id)) {
+ if (!of_device_is_available(np))
+ continue;
if (!of_property_read_bool(np, "msi-controller"))
continue;
--
2.15.1
From: Jia Zhang <[email protected]>
[ Upstream commit 595dd46ebfc10be041a365d0a3fa99df50b6ba73 ]
Commit:
df04abfd181a ("fs/proc/kcore.c: Add bounce buffer for ktext data")
... introduced a bounce buffer to work around CONFIG_HARDENED_USERCOPY=y.
However, accessing the vsyscall user page will cause an SMAP fault.
Replacing memcpy() with copy_from_user() fixes this bug, but adding
a common way to handle this sort of user page may be useful in the future.
Currently, only the vsyscall page requires KCORE_USER.
Signed-off-by: Jia Zhang <[email protected]>
Reviewed-by: Jiri Olsa <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/mm/init_64.c | 3 +--
fs/proc/kcore.c | 4 ++++
include/linux/kcore.h | 1 +
3 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index fe85d1204db8..642357aff216 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1180,8 +1180,7 @@ void __init mem_init(void)
after_bootmem = 1;
/* Register memory areas for /proc/kcore */
- kclist_add(&kcore_vsyscall, (void *)VSYSCALL_ADDR,
- PAGE_SIZE, KCORE_OTHER);
+ kclist_add(&kcore_vsyscall, (void *)VSYSCALL_ADDR, PAGE_SIZE, KCORE_USER);
mem_init_print_info(NULL);
}
diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index e8a93bc8285d..d1e82761de81 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -510,6 +510,10 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
/* we have to zero-fill user buffer even if no read */
if (copy_to_user(buffer, buf, tsz))
return -EFAULT;
+ } else if (m->type == KCORE_USER) {
+ /* User page is handled prior to normal kernel page: */
+ if (copy_to_user(buffer, (char *)start, tsz))
+ return -EFAULT;
} else {
if (kern_addr_valid(start)) {
/*
diff --git a/include/linux/kcore.h b/include/linux/kcore.h
index 7ff25a808fef..80db19d3a505 100644
--- a/include/linux/kcore.h
+++ b/include/linux/kcore.h
@@ -10,6 +10,7 @@ enum kcore_type {
KCORE_VMALLOC,
KCORE_RAM,
KCORE_VMEMMAP,
+ KCORE_USER,
KCORE_OTHER,
};
--
2.15.1
From: Alexey Dobriyan <[email protected]>
[ Upstream commit ac7f1061c2c11bb8936b1b6a94cdb48de732f7a4 ]
Current code does:
if (sscanf(dentry->d_name.name, "%lx-%lx", start, end) != 2)
However sscanf() is broken garbage.
It silently accepts whitespace between format specifiers
(did you know that?).
It silently accepts valid strings which result in integer overflow.
Do not use sscanf() for any even remotely reliable parsing code.
OK
# readlink '/proc/1/map_files/55a23af39000-55a23b05b000'
/lib/systemd/systemd
broken
# readlink '/proc/1/map_files/ 55a23af39000-55a23b05b000'
/lib/systemd/systemd
broken
# readlink '/proc/1/map_files/55a23af39000-55a23b05b000 '
/lib/systemd/systemd
very broken
# readlink '/proc/1/map_files/1000000000000000055a23af39000-55a23b05b000'
/lib/systemd/systemd
Andrei said:
: This patch breaks criu. It was a bug in criu. And this bug is on a minor
: path, which works when memfd_create() isn't available. It is a reason why
: I ask to not backport this patch to stable kernels.
:
: In CRIU this bug can be triggered, only if this patch will be backported
: to a kernel which version is lower than v3.16.
Link: http://lkml.kernel.org/r/20171120212706.GA14325@avx2
Signed-off-by: Alexey Dobriyan <[email protected]>
Cc: Pavel Emelyanov <[email protected]>
Cc: Andrei Vagin <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/proc/base.c | 29 ++++++++++++++++++++++++++++-
1 file changed, 28 insertions(+), 1 deletion(-)
diff --git a/fs/proc/base.c b/fs/proc/base.c
index 9d357b2ea6cb..2ff11a693360 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -100,6 +100,8 @@
#include "internal.h"
#include "fd.h"
+#include "../../lib/kstrtox.h"
+
/* NOTE:
* Implementing inode permission operations in /proc is almost
* certainly an error. Permission checks need to happen during
@@ -1908,8 +1910,33 @@ end_instantiate:
static int dname_to_vma_addr(struct dentry *dentry,
unsigned long *start, unsigned long *end)
{
- if (sscanf(dentry->d_name.name, "%lx-%lx", start, end) != 2)
+ const char *str = dentry->d_name.name;
+ unsigned long long sval, eval;
+ unsigned int len;
+
+ len = _parse_integer(str, 16, &sval);
+ if (len & KSTRTOX_OVERFLOW)
+ return -EINVAL;
+ if (sval != (unsigned long)sval)
+ return -EINVAL;
+ str += len;
+
+ if (*str != '-')
return -EINVAL;
+ str++;
+
+ len = _parse_integer(str, 16, &eval);
+ if (len & KSTRTOX_OVERFLOW)
+ return -EINVAL;
+ if (eval != (unsigned long)eval)
+ return -EINVAL;
+ str += len;
+
+ if (*str != '\0')
+ return -EINVAL;
+
+ *start = sval;
+ *end = eval;
return 0;
}
--
2.15.1
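_parse_integer() and KSTRTOX_OVERFLOW are kernel-internal, but the same strictness can be approximated in userspace with strtoull() plus explicit checks. The sketch below contrasts the permissive "%lx-%lx" conversion with a strict parser; it is illustrative only and does not use the kernel helpers.

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <ctype.h>

/* Strict "<hex>-<hex>" parser: no leading whitespace, no sign, no
 * overflow, no trailing garbage -- the properties sscanf() lacks.
 */
static int parse_range(const char *s, unsigned long *start, unsigned long *end)
{
        unsigned long long v;
        char *p;

        if (!isxdigit((unsigned char)*s))
                return -1;
        errno = 0;
        v = strtoull(s, &p, 16);
        if (errno == ERANGE || v != (unsigned long)v || *p != '-')
                return -1;
        *start = (unsigned long)v;

        s = p + 1;
        if (!isxdigit((unsigned char)*s))
                return -1;
        errno = 0;
        v = strtoull(s, &p, 16);
        if (errno == ERANGE || v != (unsigned long)v || *p != '\0')
                return -1;
        *end = (unsigned long)v;
        return 0;
}

int main(void)
{
        const char *names[] = {
                "55a23af39000-55a23b05b000",
                " 55a23af39000-55a23b05b000",                   /* leading space  */
                "55a23af39000-55a23b05b000 ",                   /* trailing space */
                "1000000000000000055a23af39000-55a23b05b000",   /* overflow       */
        };
        unsigned long s, e;
        unsigned int i;

        for (i = 0; i < sizeof(names) / sizeof(names[0]); i++)
                printf("%-48s sscanf:%s strict:%s\n", names[i],
                       sscanf(names[i], "%lx-%lx", &s, &e) == 2 ? "accepts" : "rejects",
                       parse_range(names[i], &s, &e) == 0 ? "accepts" : "rejects");
        return 0;
}

On a typical glibc system the sscanf() column accepts all four names, while the strict parser accepts only the first, mirroring the before/after behaviour of dname_to_vma_addr().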
From: Tang Junhui <[email protected]>
[ Upstream commit 7f4fc93d4713394ee8f1cd44c238e046e11b4f15 ]
When I attach a back-end device to a cache set that is not registered
yet, the back-end device does not attach successfully, yet no error is
returned:
[root]# echo 87859280-fec6-4bcc-20df7ca8f86b > /sys/block/sde/bcache/attach
[root]#
In sysfs_attach(), the return value "v" is initialized to "size" in
the beginning, and if no cache set exist in bch_cache_sets, the "v" value
would not change any more, and return to sysfs, sysfs regard it as success
since the "size" is a positive number.
This patch fixes this issue by assigning "v" with "-ENOENT" in the
initialization.
Signed-off-by: Tang Junhui <[email protected]>
Reviewed-by: Michael Lyle <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/md/bcache/sysfs.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index 6dd03cf9053b..5d81cd06af00 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -193,7 +193,7 @@ STORE(__cached_dev)
{
struct cached_dev *dc = container_of(kobj, struct cached_dev,
disk.kobj);
- ssize_t v = size;
+ ssize_t v;
struct cache_set *c;
struct kobj_uevent_env *env;
@@ -270,6 +270,7 @@ STORE(__cached_dev)
if (bch_parse_uuid(buf, set_uuid) < 16)
return -EINVAL;
+ v = -ENOENT;
list_for_each_entry(c, &bch_cache_sets, list) {
v = bch_cached_dev_attach(dc, c, set_uuid);
if (!v)
@@ -277,7 +278,7 @@ STORE(__cached_dev)
}
pr_err("Can't attach %s: cache set not found", buf);
- size = v;
+ return v;
}
if (attr == &sysfs_detach && dc->disk.c)
--
2.15.1
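The underlying pattern is simply that a search loop has to start from a "not found" error code so an empty list cannot be mistaken for success. A small illustrative sketch with a plain C list follows; the names are hypothetical, not the bcache structures.

#include <stdio.h>
#include <errno.h>
#include <stddef.h>

struct cache_set {
        int id;
        struct cache_set *next;
};

/* Returns 0 on success, -ENOENT if no matching set exists. */
static int attach_to_set(struct cache_set *sets, int wanted)
{
        int ret = -ENOENT;              /* the fix: start from an error */
        struct cache_set *c;

        for (c = sets; c; c = c->next) {
                if (c->id == wanted) {
                        ret = 0;        /* stand-in for a successful attach */
                        break;
                }
        }
        return ret;
}

int main(void)
{
        struct cache_set b = { 2, NULL };
        struct cache_set a = { 1, &b };

        printf("attach to 2   : %d\n", attach_to_set(&a, 2));   /* 0       */
        printf("attach to 9   : %d\n", attach_to_set(&a, 9));   /* -ENOENT */
        printf("empty list    : %d\n", attach_to_set(NULL, 2)); /* -ENOENT */
        return 0;
}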
From: Michael Kelley <[email protected]>
[ Upstream commit d207af2eab3f8668b95ad02b21930481c42806fd ]
for_each_cpu_wrap() was originally added in the #else half of a
large "#if NR_CPUS == 1" statement, but was omitted in the #if
half. This patch adds the missing #if half to prevent compile
errors when NR_CPUS is 1.
Reported-by: kbuild test robot <[email protected]>
Signed-off-by: Michael Kelley <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Fixes: c743f0a5c50f ("sched/fair, cpumask: Export for_each_cpu_wrap()")
Link: http://lkml.kernel.org/r/SN6PR1901MB2045F087F59450507D4FCC17CBF50@SN6PR1901MB2045.namprd19.prod.outlook.com
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
include/linux/cpumask.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 8d3125c493b2..db461a07bf38 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -165,6 +165,8 @@ static inline unsigned int cpumask_local_spread(unsigned int i, int node)
for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
#define for_each_cpu_not(cpu, mask) \
for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
+#define for_each_cpu_wrap(cpu, mask, start) \
+ for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask, (void)(start))
#define for_each_cpu_and(cpu, mask, and) \
for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask, (void)and)
#else
--
2.15.1
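The (void) casts in the stub exist only to evaluate the otherwise-unused mask and start arguments, so UP builds do not trip over unused-variable warnings. A standalone sketch showing that the degenerate macro visits CPU 0 exactly once:

#include <stdio.h>

/* UP (NR_CPUS == 1) stub: the body runs once for CPU 0 and the mask
 * and start arguments are evaluated but otherwise ignored.
 */
#define for_each_cpu_wrap(cpu, mask, start) \
        for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)(mask), (void)(start))

int main(void)
{
        unsigned long mask = 0x1;
        int cpu, start = 0;

        for_each_cpu_wrap(cpu, &mask, start)
                printf("visiting cpu %d (start=%d)\n", cpu, start);
        return 0;
}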
From: Thomas Richter <[email protected]>
[ Upstream commit 7a92453620d42c3a5fea94a864dc6aa04c262b93 ]
On Intel, the test case trace+probe_libc_inet_pton.sh succeeds and the
output is:
[root@f27 perf]# ./perf trace --no-syscalls
-e probe_libc:inet_pton/max-stack=3/ ping -6 -c 1 ::1
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.037 ms
--- ::1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms
0.000 probe_libc:inet_pton:(7fa40ac618a0))
__GI___inet_pton (/usr/lib64/libc-2.26.so)
getaddrinfo (/usr/lib64/libc-2.26.so)
main (/usr/bin/ping)
The kernel stack unwinder is used; it is selected implicitly
as call-graph=fp (frame pointer).
On s390x only dwarf is available for stack unwinding, and it is
done in user space. This requires a different parameter setup and
different result checking for s390x and Intel.
This patch adds separate perf trace setup and result checking
for Intel and s390x. On s390x, specify this command line to
get a call graph and handle the different call-graph result
checking:
[root@s35lp76 perf]# ./perf trace --no-syscalls
-e probe_libc:inet_pton/call-graph=dwarf/ ping -6 -c 1 ::1
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.041 ms
--- ::1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
0.000 probe_libc:inet_pton:(3ffb9942060))
__GI___inet_pton (/usr/lib64/libc-2.26.so)
gaih_inet (inlined)
__GI_getaddrinfo (inlined)
main (/usr/bin/ping)
__libc_start_main (/usr/lib64/libc-2.26.so)
_start (/usr/bin/ping)
[root@s35lp76 perf]#
Before:
[root@s8360047 perf]# ./perf test -vv 58
58: probe libc's inet_pton & backtrace it with ping :
--- start ---
test child forked, pid 26349
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.079 ms
--- ::1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
0.000 probe_libc:inet_pton:(3ff925c2060))
test child finished with -1
---- end ----
probe libc's inet_pton & backtrace it with ping: FAILED!
[root@s8360047 perf]#
After:
[root@s35lp76 perf]# ./perf test -vv 57
57: probe libc's inet_pton & backtrace it with ping :
--- start ---
test child forked, pid 38708
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.038 ms
--- ::1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms
0.000 probe_libc:inet_pton:(3ff87342060))
__GI___inet_pton (/usr/lib64/libc-2.26.so)
gaih_inet (inlined)
__GI_getaddrinfo (inlined)
main (/usr/bin/ping)
__libc_start_main (/usr/lib64/libc-2.26.so)
_start (/usr/bin/ping)
test child finished with 0
---- end ----
probe libc's inet_pton & backtrace it with ping: Ok
[root@s35lp76 perf]#
On Intel the test case runs unchanged and succeeds.
Signed-off-by: Thomas Richter <[email protected]>
Reviewed-by: Hendrik Brueckner <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
.../perf/tests/shell/trace+probe_libc_inet_pton.sh | 21 +++++++++++++++++----
1 file changed, 17 insertions(+), 4 deletions(-)
diff --git a/tools/perf/tests/shell/trace+probe_libc_inet_pton.sh b/tools/perf/tests/shell/trace+probe_libc_inet_pton.sh
index 7a84d73324e3..a2f757da49d9 100755
--- a/tools/perf/tests/shell/trace+probe_libc_inet_pton.sh
+++ b/tools/perf/tests/shell/trace+probe_libc_inet_pton.sh
@@ -22,10 +22,23 @@ trace_libc_inet_pton_backtrace() {
expected[4]="rtt min.*"
expected[5]="[0-9]+\.[0-9]+[[:space:]]+probe_libc:inet_pton:\([[:xdigit:]]+\)"
expected[6]=".*inet_pton[[:space:]]\($libc\)$"
- expected[7]="getaddrinfo[[:space:]]\($libc\)$"
- expected[8]=".*\(.*/bin/ping.*\)$"
+ case "$(uname -m)" in
+ s390x)
+ eventattr='call-graph=dwarf'
+ expected[7]="gaih_inet[[:space:]]\(inlined\)$"
+ expected[8]="__GI_getaddrinfo[[:space:]]\(inlined\)$"
+ expected[9]="main[[:space:]]\(.*/bin/ping.*\)$"
+ expected[10]="__libc_start_main[[:space:]]\($libc\)$"
+ expected[11]="_start[[:space:]]\(.*/bin/ping.*\)$"
+ ;;
+ *)
+ eventattr='max-stack=3'
+ expected[7]="getaddrinfo[[:space:]]\($libc\)$"
+ expected[8]=".*\(.*/bin/ping.*\)$"
+ ;;
+ esac
- perf trace --no-syscalls -e probe_libc:inet_pton/max-stack=3/ ping -6 -c 1 ::1 2>&1 | grep -v ^$ | while read line ; do
+ perf trace --no-syscalls -e probe_libc:inet_pton/$eventattr/ ping -6 -c 1 ::1 2>&1 | grep -v ^$ | while read line ; do
echo $line
echo "$line" | egrep -q "${expected[$idx]}"
if [ $? -ne 0 ] ; then
@@ -33,7 +46,7 @@ trace_libc_inet_pton_backtrace() {
exit 1
fi
let idx+=1
- [ $idx -eq 9 ] && break
+ [ -z "${expected[$idx]}" ] && break
done
}
--
2.15.1
From: Mathieu Malaterre <[email protected]>
[ Upstream commit e728789c52afccc1275cba1dd812f03abe16ea3c ]
In commit c7f5d105495a ("net: Add eth_platform_get_mac_address() helper."),
two declarations were added:
int eth_platform_get_mac_address(struct device *dev, u8 *mac_addr);
unsigned char *arch_get_platform_get_mac_address(void);
An extra '_get' was introduced in arch_get_platform_get_mac_address(); remove
it. This also fixes the following compile warning seen with W=1:
CC net/ethernet/eth.o
net/ethernet/eth.c:523:24: warning: no previous prototype for ‘arch_get_platform_mac_address’ [-Wmissing-prototypes]
unsigned char * __weak arch_get_platform_mac_address(void)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
AR net/ethernet/built-in.o
Signed-off-by: Mathieu Malaterre <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
include/linux/etherdevice.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h
index 2d9f80848d4b..c643cc7fefb5 100644
--- a/include/linux/etherdevice.h
+++ b/include/linux/etherdevice.h
@@ -31,7 +31,7 @@
#ifdef __KERNEL__
struct device;
int eth_platform_get_mac_address(struct device *dev, u8 *mac_addr);
-unsigned char *arch_get_platform_get_mac_address(void);
+unsigned char *arch_get_platform_mac_address(void);
u32 eth_get_headlen(void *data, unsigned int max_len);
__be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev);
extern const struct header_ops eth_header_ops;
--
2.15.1
From: Ulf Hansson <[email protected]>
[ Upstream commit a3381e3a65cbaf612c8f584906c4dba27e84267c ]
Commit b539cc82d493 (PM / Domains: Ignore domain-idle-states that are
not compatible) made it possible to ignore non-compatible
domain-idle-states OF nodes. However, when that happens during the OF
parsing, the number of elements in the allocated array exceeds the
number actually needed, thus wasting memory.
Fix this by pre-iterating the genpd OF node and counting the number of
compatible domain-idle-states nodes before doing the allocation. While
at it, rework the code a bit to avoid open coding the parts responsible
for the OF node iteration.
Let's also take the opportunity to clarify, in the function header of
of_genpd_parse_idle_states(), what is returned in case of errors.
Fixes: b539cc82d493 (PM / Domains: Ignore domain-idle-states that are not compatible)
Signed-off-by: Ulf Hansson <[email protected]>
Reviewed-by: Lina Iyer <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/base/power/domain.c | 76 +++++++++++++++++++++++++++------------------
1 file changed, 45 insertions(+), 31 deletions(-)
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 70f8904f46a3..b3b78079aa9f 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -2206,6 +2206,38 @@ static int genpd_parse_state(struct genpd_power_state *genpd_state,
return 0;
}
+static int genpd_iterate_idle_states(struct device_node *dn,
+ struct genpd_power_state *states)
+{
+ int ret;
+ struct of_phandle_iterator it;
+ struct device_node *np;
+ int i = 0;
+
+ ret = of_count_phandle_with_args(dn, "domain-idle-states", NULL);
+ if (ret <= 0)
+ return ret;
+
+ /* Loop over the phandles until all the requested entry is found */
+ of_for_each_phandle(&it, ret, dn, "domain-idle-states", NULL, 0) {
+ np = it.node;
+ if (!of_match_node(idle_state_match, np))
+ continue;
+ if (states) {
+ ret = genpd_parse_state(&states[i], np);
+ if (ret) {
+ pr_err("Parsing idle state node %pOF failed with err %d\n",
+ np, ret);
+ of_node_put(np);
+ return ret;
+ }
+ }
+ i++;
+ }
+
+ return i;
+}
+
/**
* of_genpd_parse_idle_states: Return array of idle states for the genpd.
*
@@ -2215,49 +2247,31 @@ static int genpd_parse_state(struct genpd_power_state *genpd_state,
*
* Returns the device states parsed from the OF node. The memory for the states
* is allocated by this function and is the responsibility of the caller to
- * free the memory after use.
+ * free the memory after use. If no domain idle states is found it returns
+ * -EINVAL and in case of errors, a negative error code.
*/
int of_genpd_parse_idle_states(struct device_node *dn,
struct genpd_power_state **states, int *n)
{
struct genpd_power_state *st;
- struct device_node *np;
- int i = 0;
- int err, ret;
- int count;
- struct of_phandle_iterator it;
- const struct of_device_id *match_id;
+ int ret;
- count = of_count_phandle_with_args(dn, "domain-idle-states", NULL);
- if (count <= 0)
- return -EINVAL;
+ ret = genpd_iterate_idle_states(dn, NULL);
+ if (ret <= 0)
+ return ret < 0 ? ret : -EINVAL;
- st = kcalloc(count, sizeof(*st), GFP_KERNEL);
+ st = kcalloc(ret, sizeof(*st), GFP_KERNEL);
if (!st)
return -ENOMEM;
- /* Loop over the phandles until all the requested entry is found */
- of_for_each_phandle(&it, err, dn, "domain-idle-states", NULL, 0) {
- np = it.node;
- match_id = of_match_node(idle_state_match, np);
- if (!match_id)
- continue;
- ret = genpd_parse_state(&st[i++], np);
- if (ret) {
- pr_err
- ("Parsing idle state node %pOF failed with err %d\n",
- np, ret);
- of_node_put(np);
- kfree(st);
- return ret;
- }
+ ret = genpd_iterate_idle_states(dn, st);
+ if (ret <= 0) {
+ kfree(st);
+ return ret < 0 ? ret : -EINVAL;
}
- *n = i;
- if (!i)
- kfree(st);
- else
- *states = st;
+ *states = st;
+ *n = ret;
return 0;
}
--
2.15.1
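The rework follows the common "probe with a NULL buffer to count, allocate exactly that many, then fill" pattern. Below is a userspace sketch of that shape, with a plain string array standing in for the OF phandle iteration; every name in it is hypothetical.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct state { int latency; };

/* One helper serves both passes: with states == NULL it only counts the
 * entries that would be kept; with a buffer it also fills them in.
 * Returns the number of compatible entries.
 */
static int iterate_idle_states(const char **nodes, int nr, struct state *states)
{
        int i, kept = 0;

        for (i = 0; i < nr; i++) {
                if (strcmp(nodes[i], "compatible") != 0)
                        continue;                       /* skip non-matching nodes */
                if (states)
                        states[kept].latency = i * 10;  /* pretend parse result */
                kept++;
        }
        return kept;
}

int main(void)
{
        const char *nodes[] = { "compatible", "other", "compatible" };
        struct state *st;
        int n;

        n = iterate_idle_states(nodes, 3, NULL);        /* pass 1: count         */
        if (n <= 0)
                return 1;

        st = calloc(n, sizeof(*st));                    /* exact-size allocation */
        if (!st)
                return 1;

        n = iterate_idle_states(nodes, 3, st);          /* pass 2: fill          */
        printf("kept %d states, first latency %d\n", n, st[0].latency);
        free(st);
        return 0;
}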
From: Guanglei Li <[email protected]>
[ Upstream commit 2c0aa08631b86a4678dbc93b9caa5248014b4458 ]
Scenario:
1. A port goes down and fail-over is performed
2. An application issues an rds_bind() syscall
PID: 47039 TASK: ffff89887e2fe640 CPU: 47 COMMAND: "kworker/u:6"
#0 [ffff898e35f159f0] machine_kexec at ffffffff8103abf9
#1 [ffff898e35f15a60] crash_kexec at ffffffff810b96e3
#2 [ffff898e35f15b30] oops_end at ffffffff8150f518
#3 [ffff898e35f15b60] no_context at ffffffff8104854c
#4 [ffff898e35f15ba0] __bad_area_nosemaphore at ffffffff81048675
#5 [ffff898e35f15bf0] bad_area_nosemaphore at ffffffff810487d3
#6 [ffff898e35f15c00] do_page_fault at ffffffff815120b8
#7 [ffff898e35f15d10] page_fault at ffffffff8150ea95
[exception RIP: unknown or invalid address]
RIP: 0000000000000000 RSP: ffff898e35f15dc8 RFLAGS: 00010282
RAX: 00000000fffffffe RBX: ffff889b77f6fc00 RCX:ffffffff81c99d88
RDX: 0000000000000000 RSI: ffff896019ee08e8 RDI:ffff889b77f6fc00
RBP: ffff898e35f15df0 R8: ffff896019ee08c8 R9:0000000000000000
R10: 0000000000000400 R11: 0000000000000000 R12:ffff896019ee08c0
R13: ffff889b77f6fe68 R14: ffffffff81c99d80 R15: ffffffffa022a1e0
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
#8 [ffff898e35f15dc8] cma_ndev_work_handler at ffffffffa022a228 [rdma_cm]
#9 [ffff898e35f15df8] process_one_work at ffffffff8108a7c6
#10 [ffff898e35f15e58] worker_thread at ffffffff8108bda0
#11 [ffff898e35f15ee8] kthread at ffffffff81090fe6
PID: 45659 TASK: ffff880d313d2500 CPU: 31 COMMAND: "oracle_45659_ap"
#0 [ffff881024ccfc98] __schedule at ffffffff8150bac4
#1 [ffff881024ccfd40] schedule at ffffffff8150c2cf
#2 [ffff881024ccfd50] __mutex_lock_slowpath at ffffffff8150cee7
#3 [ffff881024ccfdc0] mutex_lock at ffffffff8150cdeb
#4 [ffff881024ccfde0] rdma_destroy_id at ffffffffa022a027 [rdma_cm]
#5 [ffff881024ccfe10] rds_ib_laddr_check at ffffffffa0357857 [rds_rdma]
#6 [ffff881024ccfe50] rds_trans_get_preferred at ffffffffa0324c2a [rds]
#7 [ffff881024ccfe80] rds_bind at ffffffffa031d690 [rds]
#8 [ffff881024ccfeb0] sys_bind at ffffffff8142a670
PID: 45659                            PID: 47039
rds_ib_laddr_check
  /* create id_priv with a null event_handler */
  rdma_create_id
  rdma_bind_addr
    cma_acquire_dev
      /* add id_priv to cma_dev->id_list */
      cma_attach_to_dev
                                      cma_ndev_work_handler
                                        /* event_handler is null */
                                        id_priv->id.event_handler
Signed-off-by: Guanglei Li <[email protected]>
Signed-off-by: Honglei Wang <[email protected]>
Reviewed-by: Junxiao Bi <[email protected]>
Reviewed-by: Yanjun Zhu <[email protected]>
Reviewed-by: Leon Romanovsky <[email protected]>
Acked-by: Santosh Shilimkar <[email protected]>
Acked-by: Doug Ledford <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
net/rds/ib.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/rds/ib.c b/net/rds/ib.c
index a0954ace3774..c21eb4850b9d 100644
--- a/net/rds/ib.c
+++ b/net/rds/ib.c
@@ -346,7 +346,8 @@ static int rds_ib_laddr_check(struct net *net, __be32 addr)
/* Create a CMA ID and try to bind it. This catches both
* IB and iWARP capable NICs.
*/
- cm_id = rdma_create_id(&init_net, NULL, NULL, RDMA_PS_TCP, IB_QPT_RC);
+ cm_id = rdma_create_id(&init_net, rds_rdma_cm_event_handler,
+ NULL, RDMA_PS_TCP, IB_QPT_RC);
if (IS_ERR(cm_id))
return PTR_ERR(cm_id);
--
2.15.1
From: Jiri Olsa <[email protected]>
[ Upstream commit 49c0ae80eb32426fa133246200628e529067c595 ]
Stephane reported that we don't properly set the PERIOD sample type for
events with a period term defined.
Before:
$ perf record -e cpu/cpu-cycles,period=1000/u ls
$ perf evlist -v
cpu/cpu-cycles,period=1000/u: ... sample_type: IP|TID|TIME|PERIOD, ...
After:
$ perf record -e cpu/cpu-cycles,period=1000/u ls
$ perf evlist -v
cpu/cpu-cycles,period=1000/u: ... sample_type: IP|TID|TIME, ...
Set the PERIOD sample type based on the period term setup.
Committer note:
When we use -c or a period=N term in the event definition, then we don't
need to ask the kernel, for this event, via perf_event_attr.sample_type
|= PERF_SAMPLE_PERIOD, to put the event period in each sample for this
event, as we know it already, it is in perf_event_attr.sample_period.
Reported-by: Stephane Eranian <[email protected]>
Signed-off-by: Jiri Olsa <[email protected]>
Tested-by: Stephane Eranian <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
tools/perf/util/evsel.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 7c335d84becf..81d032e56ea5 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -736,12 +736,14 @@ static void apply_config_terms(struct perf_evsel *evsel,
if (!(term->weak && opts->user_interval != ULLONG_MAX)) {
attr->sample_period = term->val.period;
attr->freq = 0;
+ perf_evsel__reset_sample_bit(evsel, PERIOD);
}
break;
case PERF_EVSEL__CONFIG_TERM_FREQ:
if (!(term->weak && opts->user_freq != UINT_MAX)) {
attr->sample_freq = term->val.freq;
attr->freq = 1;
+ perf_evsel__set_sample_bit(evsel, PERIOD);
}
break;
case PERF_EVSEL__CONFIG_TERM_TIME:
--
2.15.1
From: Jiri Olsa <[email protected]>
[ Upstream commit f290aa1ffa45ed7e37599840878b4dae68269ee1 ]
Stephane reported that we don't unset the PERIOD sample type when
--no-period is specified. Add the unset check and reset PERIOD when
--no-period is specified.
Committer notes:
Check the sample_type, it shouldn't have PERF_SAMPLE_PERIOD there when
--no-period is used.
Before:
# perf record --no-period sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.018 MB perf.data (7 samples) ]
# perf evlist -v
cycles:ppp: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|PERIOD, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, enable_on_exec: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1
#
After:
[root@jouet ~]# perf record --no-period sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.019 MB perf.data (17 samples) ]
[root@jouet ~]# perf evlist -v
cycles:ppp: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, enable_on_exec: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1
[root@jouet ~]#
Reported-by: Stephane Eranian <[email protected]>
Signed-off-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Tested-by: Stephane Eranian <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
tools/perf/builtin-record.c | 3 ++-
tools/perf/perf.h | 1 +
tools/perf/util/evsel.c | 11 ++++++++---
3 files changed, 11 insertions(+), 4 deletions(-)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 0c95ffefb6cc..272ff4b5e80e 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -1611,7 +1611,8 @@ static struct option __record_options[] = {
OPT_BOOLEAN_SET('T', "timestamp", &record.opts.sample_time,
&record.opts.sample_time_set,
"Record the sample timestamps"),
- OPT_BOOLEAN('P', "period", &record.opts.period, "Record the sample period"),
+ OPT_BOOLEAN_SET('P', "period", &record.opts.period, &record.opts.period_set,
+ "Record the sample period"),
OPT_BOOLEAN('n', "no-samples", &record.opts.no_samples,
"don't sample"),
OPT_BOOLEAN_SET('N', "no-buildid-cache", &record.no_buildid_cache,
diff --git a/tools/perf/perf.h b/tools/perf/perf.h
index f75f3dec7485..55086389fc06 100644
--- a/tools/perf/perf.h
+++ b/tools/perf/perf.h
@@ -50,6 +50,7 @@ struct record_opts {
bool sample_time_set;
bool sample_cpu;
bool period;
+ bool period_set;
bool running_time;
bool full_auxtrace;
bool auxtrace_snapshot_mode;
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 4c31a22bbaa0..7c335d84becf 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -943,9 +943,6 @@ void perf_evsel__config(struct perf_evsel *evsel, struct record_opts *opts,
if (target__has_cpu(&opts->target) || opts->sample_cpu)
perf_evsel__set_sample_bit(evsel, CPU);
- if (opts->period)
- perf_evsel__set_sample_bit(evsel, PERIOD);
-
/*
* When the user explicitly disabled time don't force it here.
*/
@@ -1047,6 +1044,14 @@ void perf_evsel__config(struct perf_evsel *evsel, struct record_opts *opts,
apply_config_terms(evsel, opts);
evsel->ignore_missing_thread = opts->ignore_missing_thread;
+
+ /* The --period option takes the precedence. */
+ if (opts->period_set) {
+ if (opts->period)
+ perf_evsel__set_sample_bit(evsel, PERIOD);
+ else
+ perf_evsel__reset_sample_bit(evsel, PERIOD);
+ }
}
static int perf_evsel__alloc_fd(struct perf_evsel *evsel, int ncpus, int nthreads)
--
2.15.1
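OPT_BOOLEAN_SET records, next to the boolean itself, whether the user passed the option at all, so "not specified" can be told apart from "explicitly disabled". The sketch below illustrates that tri-state idea outside the perf option parser, with hand-rolled argument handling and hypothetical option names.

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

struct opts {
        bool period;            /* value of --period / --no-period */
        bool period_set;        /* did the user pass either form?  */
};

static void parse(struct opts *o, int argc, char **argv)
{
        int i;

        for (i = 1; i < argc; i++) {
                if (strcmp(argv[i], "--period") == 0) {
                        o->period = true;
                        o->period_set = true;
                } else if (strcmp(argv[i], "--no-period") == 0) {
                        o->period = false;
                        o->period_set = true;
                }
        }
}

int main(int argc, char **argv)
{
        struct opts o = { false, false };

        parse(&o, argc, argv);

        /* Only an explicit option overrides the per-event default, which
         * mirrors the "--period takes precedence" hunk in evsel.c.
         */
        if (!o.period_set)
                printf("default behaviour (option not given)\n");
        else if (o.period)
                printf("PERIOD sample bit forced on\n");
        else
                printf("PERIOD sample bit forced off\n");
        return 0;
}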
From: Matt Redfearn <[email protected]>
[ Upstream commit 7bf8b16d1b60419c865e423b907a05f413745b3e ]
The GIC supports running in External Interrupt Controller (EIC) mode,
and will signal this via cpu_has_veic if enabled in hardware. Currently
the generic kernel will panic if cpu_has_veic is set - but the GIC can
legitimately set this flag if either configured to boot in EIC mode, or
if the GIC driver enables this mode. Make the kernel not panic in this
case, and instead just check if the GIC is present. If so, use its CPU
local interrupt routing functions. If an EIC is present, but it is not
the GIC, then the kernel does not know how to get the VIRQ for the CPU
local interrupts and should panic. Support for alternative EICs being
present is needed here for the generic kernel to support them.
Suggested-by: Paul Burton <[email protected]>
Signed-off-by: Matt Redfearn <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/18191/
Signed-off-by: James Hogan <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/mips/generic/irq.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/arch/mips/generic/irq.c b/arch/mips/generic/irq.c
index 394f8161e462..cb7fdaeef426 100644
--- a/arch/mips/generic/irq.c
+++ b/arch/mips/generic/irq.c
@@ -22,10 +22,10 @@ int get_c0_fdc_int(void)
{
int mips_cpu_fdc_irq;
- if (cpu_has_veic)
- panic("Unimplemented!");
- else if (mips_gic_present())
+ if (mips_gic_present())
mips_cpu_fdc_irq = gic_get_c0_fdc_int();
+ else if (cpu_has_veic)
+ panic("Unimplemented!");
else if (cp0_fdc_irq >= 0)
mips_cpu_fdc_irq = MIPS_CPU_IRQ_BASE + cp0_fdc_irq;
else
@@ -38,10 +38,10 @@ int get_c0_perfcount_int(void)
{
int mips_cpu_perf_irq;
- if (cpu_has_veic)
- panic("Unimplemented!");
- else if (mips_gic_present())
+ if (mips_gic_present())
mips_cpu_perf_irq = gic_get_c0_perfcount_int();
+ else if (cpu_has_veic)
+ panic("Unimplemented!");
else if (cp0_perfcount_irq >= 0)
mips_cpu_perf_irq = MIPS_CPU_IRQ_BASE + cp0_perfcount_irq;
else
@@ -54,10 +54,10 @@ unsigned int get_c0_compare_int(void)
{
int mips_cpu_timer_irq;
- if (cpu_has_veic)
- panic("Unimplemented!");
- else if (mips_gic_present())
+ if (mips_gic_present())
mips_cpu_timer_irq = gic_get_c0_compare_int();
+ else if (cpu_has_veic)
+ panic("Unimplemented!");
else
mips_cpu_timer_irq = MIPS_CPU_IRQ_BASE + cp0_compare_irq;
--
2.15.1
From: Yonghong Song <[email protected]>
[ Upstream commit 09584b406742413ac4c8d7e030374d4daa045b69 ]
With CONFIG_BPF_JIT_ALWAYS_ON defined in the config file,
tools/testing/selftests/bpf/test_kmod.sh failed like below:
[root@localhost bpf]# ./test_kmod.sh
sysctl: setting key "net.core.bpf_jit_enable": Invalid argument
[ JIT enabled:0 hardened:0 ]
[ 132.175681] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
[ 132.458834] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
[ JIT enabled:1 hardened:0 ]
[ 133.456025] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
[ 133.730935] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
[ JIT enabled:1 hardened:1 ]
[ 134.769730] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
[ 135.050864] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
[ JIT enabled:1 hardened:2 ]
[ 136.442882] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
[ 136.821810] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
[root@localhost bpf]#
The test_kmod.sh script loads/removes test_bpf.ko multiple times with
different settings for the sysctls net.core.bpf_jit_{enable,harden}.
The failing test #297 of test_bpf.ko is designed such that JIT always
fails.
Commit 290af86629b2 (bpf: introduce BPF_JIT_ALWAYS_ON config)
introduced the following tightening logic:
...
if (!bpf_prog_is_dev_bound(fp->aux)) {
fp = bpf_int_jit_compile(fp);
#ifdef CONFIG_BPF_JIT_ALWAYS_ON
if (!fp->jited) {
*err = -ENOTSUPP;
return fp;
}
#endif
...
With this logic, Test #297 always gets return value -ENOTSUPP
when CONFIG_BPF_JIT_ALWAYS_ON is defined, causing the test failure.
This patch fixes the failure by marking Test #297 as an expected
failure when CONFIG_BPF_JIT_ALWAYS_ON is defined.
Fixes: 290af86629b2 (bpf: introduce BPF_JIT_ALWAYS_ON config)
Signed-off-by: Yonghong Song <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
lib/test_bpf.c | 31 ++++++++++++++++++++++++++-----
1 file changed, 26 insertions(+), 5 deletions(-)
diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 6fbb73f3f531..64701b4c9900 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -83,6 +83,7 @@ struct bpf_test {
__u32 result;
} test[MAX_SUBTESTS];
int (*fill_helper)(struct bpf_test *self);
+ int expected_errcode; /* used when FLAG_EXPECTED_FAIL is set in the aux */
__u8 frag_data[MAX_DATA];
int stack_depth; /* for eBPF only, since tests don't call verifier */
};
@@ -1987,7 +1988,9 @@ static struct bpf_test tests[] = {
},
CLASSIC | FLAG_NO_DATA | FLAG_EXPECTED_FAIL,
{ },
- { }
+ { },
+ .fill_helper = NULL,
+ .expected_errcode = -EINVAL,
},
{
"check: div_k_0",
@@ -1997,7 +2000,9 @@ static struct bpf_test tests[] = {
},
CLASSIC | FLAG_NO_DATA | FLAG_EXPECTED_FAIL,
{ },
- { }
+ { },
+ .fill_helper = NULL,
+ .expected_errcode = -EINVAL,
},
{
"check: unknown insn",
@@ -2008,7 +2013,9 @@ static struct bpf_test tests[] = {
},
CLASSIC | FLAG_EXPECTED_FAIL,
{ },
- { }
+ { },
+ .fill_helper = NULL,
+ .expected_errcode = -EINVAL,
},
{
"check: out of range spill/fill",
@@ -2018,7 +2025,9 @@ static struct bpf_test tests[] = {
},
CLASSIC | FLAG_NO_DATA | FLAG_EXPECTED_FAIL,
{ },
- { }
+ { },
+ .fill_helper = NULL,
+ .expected_errcode = -EINVAL,
},
{
"JUMPS + HOLES",
@@ -2110,6 +2119,8 @@ static struct bpf_test tests[] = {
CLASSIC | FLAG_NO_DATA | FLAG_EXPECTED_FAIL,
{ },
{ },
+ .fill_helper = NULL,
+ .expected_errcode = -EINVAL,
},
{
"check: LDX + RET X",
@@ -2120,6 +2131,8 @@ static struct bpf_test tests[] = {
CLASSIC | FLAG_NO_DATA | FLAG_EXPECTED_FAIL,
{ },
{ },
+ .fill_helper = NULL,
+ .expected_errcode = -EINVAL,
},
{ /* Mainly checking JIT here. */
"M[]: alt STX + LDX",
@@ -2294,6 +2307,8 @@ static struct bpf_test tests[] = {
CLASSIC | FLAG_NO_DATA | FLAG_EXPECTED_FAIL,
{ },
{ },
+ .fill_helper = NULL,
+ .expected_errcode = -EINVAL,
},
{ /* Passes checker but fails during runtime. */
"LD [SKF_AD_OFF-1]",
@@ -5356,6 +5371,7 @@ static struct bpf_test tests[] = {
{ },
{ },
.fill_helper = bpf_fill_maxinsns4,
+ .expected_errcode = -EINVAL,
},
{ /* Mainly checking JIT here. */
"BPF_MAXINSNS: Very long jump",
@@ -5411,10 +5427,15 @@ static struct bpf_test tests[] = {
{
"BPF_MAXINSNS: Jump, gap, jump, ...",
{ },
+#ifdef CONFIG_BPF_JIT_ALWAYS_ON
+ CLASSIC | FLAG_NO_DATA | FLAG_EXPECTED_FAIL,
+#else
CLASSIC | FLAG_NO_DATA,
+#endif
{ },
{ { 0, 0xababcbac } },
.fill_helper = bpf_fill_maxinsns11,
+ .expected_errcode = -ENOTSUPP,
},
{
"BPF_MAXINSNS: ld_abs+get_processor_id",
@@ -6193,7 +6214,7 @@ static struct bpf_prog *generate_filter(int which, int *err)
*err = bpf_prog_create(&fp, &fprog);
if (tests[which].aux & FLAG_EXPECTED_FAIL) {
- if (*err == -EINVAL) {
+ if (*err == tests[which].expected_errcode) {
pr_cont("PASS\n");
/* Verifier rejected filter as expected. */
*err = 0;
--
2.15.1
From: Hans de Goede <[email protected]>
[ Upstream commit 54ddce7062242036402242242c07c60c0b505f84 ]
The battery code uses acpi_device->dep_unmet to check for unmet deps and
if there are unmet deps it does not bind to the device to avoid errors
about missing OpRegions when calling ACPI methods on the device.
The problem of missing OpRegions when there are unmet deps also applies
to the _STA method of some battery devices; calling it too early results
in errors like these:
[ 0.123579] ACPI Error: No handler for Region [ECRM] (00000000ba9edc4c) [GenericSerialBus] (20170831/evregion-166)
[ 0.123601] ACPI Error: Region GenericSerialBus (ID=9) has no handler (20170831/exfldio-299)
[ 0.123618] ACPI Error: Method parse/execution failed \_SB.I2C1.BAT1._STA, AE_NOT_EXIST (20170831/psparse-550)
This commit fixes these errors happening when acpi_bus_get_status() gets
called, by checking dep_unmet for battery devices and reporting a status
of 0 until all dependencies are met.
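For context, a minimal sketch (illustration only, assuming the acpi_walk_dep_device_list() helper present in this kernel era) of how the provider side is expected to clear such a dependency: once the GenericSerialBus OpRegion handler is registered, the provider walks its dependents so their dep_unmet counters drop and later acpi_bus_get_status() calls evaluate the real _STA.

#include <linux/acpi.h>

/* Hypothetical provider-side hook (name and surrounding code assumed):
 * called after the OpRegion handler backing 'adev' has been installed,
 * it lets the ACPI core decrement dep_unmet on every device that lists
 * 'adev' in its _DEP.
 */
static void example_opregion_ready(struct acpi_device *adev)
{
	acpi_walk_dep_device_list(adev->handle);
}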
Signed-off-by: Hans de Goede <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/acpi/bus.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
index b6d58cc58f5f..f0348e388d01 100644
--- a/drivers/acpi/bus.c
+++ b/drivers/acpi/bus.c
@@ -146,6 +146,12 @@ int acpi_bus_get_status(struct acpi_device *device)
return 0;
}
+ /* Battery devices must have their deps met before calling _STA */
+ if (acpi_device_is_battery(device) && device->dep_unmet) {
+ acpi_set_device_status(device, 0);
+ return 0;
+ }
+
status = acpi_bus_get_status_handle(device->handle, &sta);
if (ACPI_FAILURE(status))
return -ENODEV;
--
2.15.1
From: Chen Yu <[email protected]>
[ Upstream commit 70f6bf2a3b7e40c3f802b0ea837762a8bc6c1430 ]
When maxcpus=1 is in the kernel command line, the BP is responsible
for re-enabling HWP; currently only the APs invoke
intel_pstate_hwp_enable() during their online process, which might
put the system into an unstable state after resume.
Fix this by enabling HWP explicitly on the BP during resume.
Reported-by: Doug Smythies <[email protected]>
Suggested-by: Srinivas Pandruvada <[email protected]>
Signed-off-by: Yu Chen <[email protected]>
[ rjw: Subject/changelog, minor modifications ]
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/cpufreq/intel_pstate.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 93a0e88bef76..20226d4243f2 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -779,6 +779,8 @@ static int intel_pstate_hwp_save_state(struct cpufreq_policy *policy)
return 0;
}
+static void intel_pstate_hwp_enable(struct cpudata *cpudata);
+
static int intel_pstate_resume(struct cpufreq_policy *policy)
{
if (!hwp_active)
@@ -786,6 +788,9 @@ static int intel_pstate_resume(struct cpufreq_policy *policy)
mutex_lock(&intel_pstate_limits_lock);
+ if (policy->cpu == 0)
+ intel_pstate_hwp_enable(all_cpu_data[policy->cpu]);
+
all_cpu_data[policy->cpu]->epp_policy = 0;
intel_pstate_hwp_set(policy->cpu);
--
2.15.1
From: Karol Herbst <[email protected]>
[ Upstream commit fe9748b7b41cee11f8db57fb8b20bc540a33102a ]
Fixes failure to compile with recent envyas as a result of the 'movw'
alias being removed for v5.
A bit of history:
v3 only has a 16-bit sign-extended immediate mov op. In order to set
the high bits, there's a separate 'sethi' op. envyas validates that
the value passed to mov(imm) is between -0x8000 and 0x7fff. In order
to simplify macros that load both the low and high word, a 'movw'
alias was added which takes an unsigned 16-bit immediate. However the
actual hardware op still sign extends.
v5 has a full 32-bit immediate mov op. The v3 16-bit immediate mov op
is gone (loads 0 into the dst reg). However due to a bug in envyas,
the movw alias still existed, and selected the no-longer-present v3
16-bit immediate mov op. As a result usage of movw on v5 is the same
as mov with a 0x0 argument.
The proper fix throughout is to only ever use the 'movw' alias in
combination with 'sethi'. Anything else should get the sign-extended
validation to ensure that the intended value ends up in the
destination register.
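To make the v5 failure mode concrete (an illustration drawn from the memx.fuc hunk below): on a gk208 (v5) PMU the old 'movw $r6 0x001620' in memx_func_enter assembled to the removed v3 op and left $r6 holding 0, so the subsequent nv_rd32()/nv_wr32() pair targeted offset 0x000000 instead of 0x001620. With 'mov $r6 0x001620' the intended address is loaded on both v3 and v5, since 0x1620 also passes the sign-extended range check.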
Changes in the fuc3 binaries are the result of a different encoding being
selected for a mov with an 8-bit value.
v2: added commit message written by Ilia, thanks for that!
v3: messed up rebasing, now it should apply
Signed-off-by: Karol Herbst <[email protected]>
Signed-off-by: Ben Skeggs <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
.../drm/nouveau/nvkm/subdev/pmu/fuc/gf100.fuc3.h | 746 +++++++--------
.../drm/nouveau/nvkm/subdev/pmu/fuc/gk208.fuc5.h | 802 ++++++++--------
.../drm/nouveau/nvkm/subdev/pmu/fuc/gt215.fuc3.h | 1006 ++++++++++----------
.../gpu/drm/nouveau/nvkm/subdev/pmu/fuc/memx.fuc | 30 +-
4 files changed, 1292 insertions(+), 1292 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/gf100.fuc3.h b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/gf100.fuc3.h
index 53d01fb00a8b..1dbe593e5960 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/gf100.fuc3.h
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/gf100.fuc3.h
@@ -47,8 +47,8 @@ static uint32_t gf100_pmu_data[] = {
0x00000000,
0x00000000,
0x584d454d,
- 0x00000756,
- 0x00000748,
+ 0x00000754,
+ 0x00000746,
0x00000000,
0x00000000,
0x00000000,
@@ -69,8 +69,8 @@ static uint32_t gf100_pmu_data[] = {
0x00000000,
0x00000000,
0x46524550,
- 0x0000075a,
0x00000758,
+ 0x00000756,
0x00000000,
0x00000000,
0x00000000,
@@ -91,8 +91,8 @@ static uint32_t gf100_pmu_data[] = {
0x00000000,
0x00000000,
0x5f433249,
- 0x00000b8a,
- 0x00000a2d,
+ 0x00000b88,
+ 0x00000a2b,
0x00000000,
0x00000000,
0x00000000,
@@ -113,8 +113,8 @@ static uint32_t gf100_pmu_data[] = {
0x00000000,
0x00000000,
0x54534554,
- 0x00000bb3,
- 0x00000b8c,
+ 0x00000bb1,
+ 0x00000b8a,
0x00000000,
0x00000000,
0x00000000,
@@ -135,8 +135,8 @@ static uint32_t gf100_pmu_data[] = {
0x00000000,
0x00000000,
0x454c4449,
- 0x00000bbf,
0x00000bbd,
+ 0x00000bbb,
0x00000000,
0x00000000,
0x00000000,
@@ -237,19 +237,19 @@ static uint32_t gf100_pmu_data[] = {
0x000005d3,
0x00000003,
0x00000002,
- 0x0000069d,
+ 0x0000069b,
0x00040004,
0x00000000,
- 0x000006b9,
+ 0x000006b7,
0x00010005,
0x00000000,
- 0x000006d6,
+ 0x000006d4,
0x00010006,
0x00000000,
0x0000065b,
0x00000007,
0x00000000,
- 0x000006e1,
+ 0x000006df,
/* 0x03c4: memx_func_tail */
/* 0x03c4: memx_ts_start */
0x00000000,
@@ -1373,432 +1373,432 @@ static uint32_t gf100_pmu_code[] = {
/* 0x065b: memx_func_wait_vblank */
0x9800f840,
0x66b00016,
- 0x130bf400,
+ 0x120bf400,
0xf40166b0,
0x0ef4060b,
/* 0x066d: memx_func_wait_vblank_head1 */
- 0x2077f12e,
- 0x070ef400,
-/* 0x0674: memx_func_wait_vblank_head0 */
- 0x000877f1,
-/* 0x0678: memx_func_wait_vblank_0 */
- 0x07c467f1,
- 0xcf0664b6,
- 0x67fd0066,
- 0xf31bf404,
-/* 0x0688: memx_func_wait_vblank_1 */
- 0x07c467f1,
- 0xcf0664b6,
- 0x67fd0066,
- 0xf30bf404,
-/* 0x0698: memx_func_wait_vblank_fini */
- 0xf80410b6,
-/* 0x069d: memx_func_wr32 */
- 0x00169800,
- 0xb6011598,
- 0x60f90810,
- 0xd0fc50f9,
- 0x21f4e0fc,
- 0x0242b640,
- 0xf8e91bf4,
-/* 0x06b9: memx_func_wait */
- 0x2c87f000,
- 0xcf0684b6,
- 0x1e980088,
- 0x011d9800,
- 0x98021c98,
- 0x10b6031b,
- 0xa321f410,
-/* 0x06d6: memx_func_delay */
- 0x1e9800f8,
- 0x0410b600,
- 0xf87e21f4,
-/* 0x06e1: memx_func_train */
-/* 0x06e3: memx_exec */
- 0xf900f800,
- 0xb9d0f9e0,
- 0xb2b902c1,
-/* 0x06ed: memx_exec_next */
- 0x00139802,
- 0xe70410b6,
- 0xe701f034,
- 0xb601e033,
- 0x30f00132,
- 0xde35980c,
- 0x12b855f9,
- 0xe41ef406,
- 0x98f10b98,
- 0xcbbbf20c,
- 0xc4b7f102,
- 0x06b4b607,
- 0xfc00bbcf,
- 0xf5e0fcd0,
- 0xf8033621,
-/* 0x0729: memx_info */
- 0x01c67000,
-/* 0x072f: memx_info_data */
- 0xf10e0bf4,
- 0xf103ccc7,
- 0xf40800b7,
-/* 0x073a: memx_info_train */
- 0xc7f10b0e,
- 0xb7f10bcc,
-/* 0x0742: memx_info_send */
- 0x21f50100,
- 0x00f80336,
-/* 0x0748: memx_recv */
- 0xf401d6b0,
- 0xd6b0980b,
- 0xd80bf400,
-/* 0x0756: memx_init */
- 0x00f800f8,
-/* 0x0758: perf_recv */
-/* 0x075a: perf_init */
+ 0x2077f02c,
+/* 0x0673: memx_func_wait_vblank_head0 */
+ 0xf0060ef4,
+/* 0x0676: memx_func_wait_vblank_0 */
+ 0x67f10877,
+ 0x64b607c4,
+ 0x0066cf06,
+ 0xf40467fd,
+/* 0x0686: memx_func_wait_vblank_1 */
+ 0x67f1f31b,
+ 0x64b607c4,
+ 0x0066cf06,
+ 0xf40467fd,
+/* 0x0696: memx_func_wait_vblank_fini */
+ 0x10b6f30b,
+/* 0x069b: memx_func_wr32 */
+ 0x9800f804,
+ 0x15980016,
+ 0x0810b601,
+ 0x50f960f9,
+ 0xe0fcd0fc,
+ 0xb64021f4,
+ 0x1bf40242,
+/* 0x06b7: memx_func_wait */
+ 0xf000f8e9,
+ 0x84b62c87,
+ 0x0088cf06,
+ 0x98001e98,
+ 0x1c98011d,
+ 0x031b9802,
+ 0xf41010b6,
+ 0x00f8a321,
+/* 0x06d4: memx_func_delay */
+ 0xb6001e98,
+ 0x21f40410,
+/* 0x06df: memx_func_train */
+ 0xf800f87e,
+/* 0x06e1: memx_exec */
+ 0xf9e0f900,
+ 0x02c1b9d0,
+/* 0x06eb: memx_exec_next */
+ 0x9802b2b9,
+ 0x10b60013,
+ 0xf034e704,
+ 0xe033e701,
+ 0x0132b601,
+ 0x980c30f0,
+ 0x55f9de35,
+ 0xf40612b8,
+ 0x0b98e41e,
+ 0xf20c98f1,
+ 0xf102cbbb,
+ 0xb607c4b7,
+ 0xbbcf06b4,
+ 0xfcd0fc00,
+ 0x3621f5e0,
+/* 0x0727: memx_info */
+ 0x7000f803,
+ 0x0bf401c6,
+/* 0x072d: memx_info_data */
+ 0xccc7f10e,
+ 0x00b7f103,
+ 0x0b0ef408,
+/* 0x0738: memx_info_train */
+ 0x0bccc7f1,
+ 0x0100b7f1,
+/* 0x0740: memx_info_send */
+ 0x033621f5,
+/* 0x0746: memx_recv */
+ 0xd6b000f8,
+ 0x980bf401,
+ 0xf400d6b0,
+ 0x00f8d80b,
+/* 0x0754: memx_init */
+/* 0x0756: perf_recv */
0x00f800f8,
-/* 0x075c: i2c_drive_scl */
- 0xf40036b0,
- 0x07f1110b,
- 0x04b607e0,
- 0x0001d006,
- 0x00f804bd,
-/* 0x0770: i2c_drive_scl_lo */
- 0x07e407f1,
- 0xd00604b6,
- 0x04bd0001,
-/* 0x077e: i2c_drive_sda */
+/* 0x0758: perf_init */
+/* 0x075a: i2c_drive_scl */
0x36b000f8,
0x110bf400,
0x07e007f1,
0xd00604b6,
- 0x04bd0002,
-/* 0x0792: i2c_drive_sda_lo */
+ 0x04bd0001,
+/* 0x076e: i2c_drive_scl_lo */
0x07f100f8,
0x04b607e4,
+ 0x0001d006,
+ 0x00f804bd,
+/* 0x077c: i2c_drive_sda */
+ 0xf40036b0,
+ 0x07f1110b,
+ 0x04b607e0,
0x0002d006,
0x00f804bd,
-/* 0x07a0: i2c_sense_scl */
- 0xf10132f4,
- 0xb607c437,
- 0x33cf0634,
- 0x0431fd00,
- 0xf4060bf4,
-/* 0x07b6: i2c_sense_scl_done */
- 0x00f80131,
-/* 0x07b8: i2c_sense_sda */
- 0xf10132f4,
- 0xb607c437,
- 0x33cf0634,
- 0x0432fd00,
- 0xf4060bf4,
-/* 0x07ce: i2c_sense_sda_done */
- 0x00f80131,
-/* 0x07d0: i2c_raise_scl */
- 0x47f140f9,
- 0x37f00898,
- 0x5c21f501,
-/* 0x07dd: i2c_raise_scl_wait */
- 0xe8e7f107,
- 0x7e21f403,
- 0x07a021f5,
- 0xb60901f4,
- 0x1bf40142,
-/* 0x07f1: i2c_raise_scl_done */
- 0xf840fcef,
-/* 0x07f5: i2c_start */
- 0xa021f500,
- 0x0d11f407,
- 0x07b821f5,
- 0xf40611f4,
-/* 0x0806: i2c_start_rep */
- 0x37f0300e,
- 0x5c21f500,
- 0x0137f007,
- 0x077e21f5,
- 0xb60076bb,
- 0x50f90465,
- 0xbb046594,
- 0x50bd0256,
- 0xfc0475fd,
- 0xd021f550,
- 0x0464b607,
-/* 0x0833: i2c_start_send */
- 0xf01f11f4,
+/* 0x0790: i2c_drive_sda_lo */
+ 0x07e407f1,
+ 0xd00604b6,
+ 0x04bd0002,
+/* 0x079e: i2c_sense_scl */
+ 0x32f400f8,
+ 0xc437f101,
+ 0x0634b607,
+ 0xfd0033cf,
+ 0x0bf40431,
+ 0x0131f406,
+/* 0x07b4: i2c_sense_scl_done */
+/* 0x07b6: i2c_sense_sda */
+ 0x32f400f8,
+ 0xc437f101,
+ 0x0634b607,
+ 0xfd0033cf,
+ 0x0bf40432,
+ 0x0131f406,
+/* 0x07cc: i2c_sense_sda_done */
+/* 0x07ce: i2c_raise_scl */
+ 0x40f900f8,
+ 0x089847f1,
+ 0xf50137f0,
+/* 0x07db: i2c_raise_scl_wait */
+ 0xf1075a21,
+ 0xf403e8e7,
+ 0x21f57e21,
+ 0x01f4079e,
+ 0x0142b609,
+/* 0x07ef: i2c_raise_scl_done */
+ 0xfcef1bf4,
+/* 0x07f3: i2c_start */
+ 0xf500f840,
+ 0xf4079e21,
+ 0x21f50d11,
+ 0x11f407b6,
+ 0x300ef406,
+/* 0x0804: i2c_start_rep */
+ 0xf50037f0,
+ 0xf0075a21,
+ 0x21f50137,
+ 0x76bb077c,
+ 0x0465b600,
+ 0x659450f9,
+ 0x0256bb04,
+ 0x75fd50bd,
+ 0xf550fc04,
+ 0xb607ce21,
+ 0x11f40464,
+/* 0x0831: i2c_start_send */
+ 0x0037f01f,
+ 0x077c21f5,
+ 0x1388e7f1,
+ 0xf07e21f4,
0x21f50037,
- 0xe7f1077e,
+ 0xe7f1075a,
0x21f41388,
- 0x0037f07e,
- 0x075c21f5,
- 0x1388e7f1,
-/* 0x084f: i2c_start_out */
- 0xf87e21f4,
-/* 0x0851: i2c_stop */
- 0x0037f000,
- 0x075c21f5,
- 0xf50037f0,
- 0xf1077e21,
- 0xf403e8e7,
- 0x37f07e21,
- 0x5c21f501,
- 0x88e7f107,
- 0x7e21f413,
+/* 0x084d: i2c_start_out */
+/* 0x084f: i2c_stop */
+ 0xf000f87e,
+ 0x21f50037,
+ 0x37f0075a,
+ 0x7c21f500,
+ 0xe8e7f107,
+ 0x7e21f403,
0xf50137f0,
- 0xf1077e21,
+ 0xf1075a21,
0xf41388e7,
- 0x00f87e21,
-/* 0x0884: i2c_bitw */
- 0x077e21f5,
- 0x03e8e7f1,
- 0xbb7e21f4,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x07d021f5,
- 0xf40464b6,
- 0xe7f11811,
- 0x21f41388,
- 0x0037f07e,
- 0x075c21f5,
- 0x1388e7f1,
-/* 0x08c3: i2c_bitw_out */
- 0xf87e21f4,
-/* 0x08c5: i2c_bitr */
- 0x0137f000,
- 0x077e21f5,
- 0x03e8e7f1,
- 0xbb7e21f4,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x07d021f5,
- 0xf40464b6,
- 0x21f51b11,
- 0x37f007b8,
- 0x5c21f500,
+ 0x37f07e21,
+ 0x7c21f501,
0x88e7f107,
0x7e21f413,
- 0xf4013cf0,
-/* 0x090a: i2c_bitr_done */
- 0x00f80131,
-/* 0x090c: i2c_get_byte */
- 0xf00057f0,
-/* 0x0912: i2c_get_byte_next */
- 0x54b60847,
- 0x0076bb01,
+/* 0x0882: i2c_bitw */
+ 0x21f500f8,
+ 0xe7f1077c,
+ 0x21f403e8,
+ 0x0076bb7e,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
0x21f550fc,
- 0x64b608c5,
- 0x2b11f404,
- 0xb60553fd,
- 0x1bf40142,
- 0x0137f0d8,
- 0xb60076bb,
- 0x50f90465,
- 0xbb046594,
- 0x50bd0256,
- 0xfc0475fd,
- 0x8421f550,
- 0x0464b608,
-/* 0x095c: i2c_get_byte_done */
-/* 0x095e: i2c_put_byte */
- 0x47f000f8,
-/* 0x0961: i2c_put_byte_next */
- 0x0142b608,
- 0xbb3854ff,
+ 0x64b607ce,
+ 0x1811f404,
+ 0x1388e7f1,
+ 0xf07e21f4,
+ 0x21f50037,
+ 0xe7f1075a,
+ 0x21f41388,
+/* 0x08c1: i2c_bitw_out */
+/* 0x08c3: i2c_bitr */
+ 0xf000f87e,
+ 0x21f50137,
+ 0xe7f1077c,
+ 0x21f403e8,
+ 0x0076bb7e,
+ 0xf90465b6,
+ 0x04659450,
+ 0xbd0256bb,
+ 0x0475fd50,
+ 0x21f550fc,
+ 0x64b607ce,
+ 0x1b11f404,
+ 0x07b621f5,
+ 0xf50037f0,
+ 0xf1075a21,
+ 0xf41388e7,
+ 0x3cf07e21,
+ 0x0131f401,
+/* 0x0908: i2c_bitr_done */
+/* 0x090a: i2c_get_byte */
+ 0x57f000f8,
+ 0x0847f000,
+/* 0x0910: i2c_get_byte_next */
+ 0xbb0154b6,
0x65b60076,
0x9450f904,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x088421f5,
+ 0x08c321f5,
0xf40464b6,
- 0x46b03411,
- 0xd81bf400,
- 0xb60076bb,
- 0x50f90465,
- 0xbb046594,
- 0x50bd0256,
- 0xfc0475fd,
- 0xc521f550,
- 0x0464b608,
- 0xbb0f11f4,
- 0x36b00076,
- 0x061bf401,
-/* 0x09b7: i2c_put_byte_done */
- 0xf80132f4,
-/* 0x09b9: i2c_addr */
- 0x0076bb00,
+ 0x53fd2b11,
+ 0x0142b605,
+ 0xf0d81bf4,
+ 0x76bb0137,
+ 0x0465b600,
+ 0x659450f9,
+ 0x0256bb04,
+ 0x75fd50bd,
+ 0xf550fc04,
+ 0xb6088221,
+/* 0x095a: i2c_get_byte_done */
+ 0x00f80464,
+/* 0x095c: i2c_put_byte */
+/* 0x095f: i2c_put_byte_next */
+ 0xb60847f0,
+ 0x54ff0142,
+ 0x0076bb38,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
0x21f550fc,
- 0x64b607f5,
- 0x2911f404,
- 0x012ec3e7,
- 0xfd0134b6,
- 0x76bb0553,
+ 0x64b60882,
+ 0x3411f404,
+ 0xf40046b0,
+ 0x76bbd81b,
0x0465b600,
0x659450f9,
0x0256bb04,
0x75fd50bd,
0xf550fc04,
- 0xb6095e21,
-/* 0x09fe: i2c_addr_done */
- 0x00f80464,
-/* 0x0a00: i2c_acquire_addr */
- 0xb6f8cec7,
- 0xe0b702e4,
- 0xee980d1c,
-/* 0x0a0f: i2c_acquire */
- 0xf500f800,
- 0xf40a0021,
- 0xd9f00421,
- 0x4021f403,
-/* 0x0a1e: i2c_release */
- 0x21f500f8,
- 0x21f40a00,
- 0x03daf004,
- 0xf84021f4,
-/* 0x0a2d: i2c_recv */
- 0x0132f400,
- 0xb6f8c1c7,
- 0x16b00214,
- 0x3a1ff528,
- 0xf413a001,
- 0x0032980c,
- 0x0ccc13a0,
- 0xf4003198,
- 0xd0f90231,
- 0xd0f9e0f9,
- 0x000067f1,
- 0x100063f1,
- 0xbb016792,
+ 0xb608c321,
+ 0x11f40464,
+ 0x0076bb0f,
+ 0xf40136b0,
+ 0x32f4061b,
+/* 0x09b5: i2c_put_byte_done */
+/* 0x09b7: i2c_addr */
+ 0xbb00f801,
0x65b60076,
0x9450f904,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x0a0f21f5,
- 0xfc0464b6,
- 0x00d6b0d0,
- 0x00b31bf5,
- 0xbb0057f0,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x09b921f5,
- 0xf50464b6,
- 0xc700d011,
- 0x76bbe0c5,
- 0x0465b600,
- 0x659450f9,
- 0x0256bb04,
- 0x75fd50bd,
- 0xf550fc04,
- 0xb6095e21,
- 0x11f50464,
- 0x57f000ad,
+ 0x07f321f5,
+ 0xf40464b6,
+ 0xc3e72911,
+ 0x34b6012e,
+ 0x0553fd01,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0x5c21f550,
+ 0x0464b609,
+/* 0x09fc: i2c_addr_done */
+/* 0x09fe: i2c_acquire_addr */
+ 0xcec700f8,
+ 0x02e4b6f8,
+ 0x0d1ce0b7,
+ 0xf800ee98,
+/* 0x0a0d: i2c_acquire */
+ 0xfe21f500,
+ 0x0421f409,
+ 0xf403d9f0,
+ 0x00f84021,
+/* 0x0a1c: i2c_release */
+ 0x09fe21f5,
+ 0xf00421f4,
+ 0x21f403da,
+/* 0x0a2b: i2c_recv */
+ 0xf400f840,
+ 0xc1c70132,
+ 0x0214b6f8,
+ 0xf52816b0,
+ 0xa0013a1f,
+ 0x980cf413,
+ 0x13a00032,
+ 0x31980ccc,
+ 0x0231f400,
+ 0xe0f9d0f9,
+ 0x67f1d0f9,
+ 0x63f10000,
+ 0x67921000,
0x0076bb01,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
0x21f550fc,
- 0x64b609b9,
- 0x8a11f504,
+ 0x64b60a0d,
+ 0xb0d0fc04,
+ 0x1bf500d6,
+ 0x57f000b3,
0x0076bb00,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
0x21f550fc,
- 0x64b6090c,
- 0x6a11f404,
- 0xbbe05bcb,
+ 0x64b609b7,
+ 0xd011f504,
+ 0xe0c5c700,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0x5c21f550,
+ 0x0464b609,
+ 0x00ad11f5,
+ 0xbb0157f0,
0x65b60076,
0x9450f904,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x085121f5,
- 0xb90464b6,
- 0x74bd025b,
-/* 0x0b33: i2c_recv_not_rd08 */
- 0xb0430ef4,
- 0x1bf401d6,
- 0x0057f03d,
- 0x09b921f5,
- 0xc73311f4,
- 0x21f5e0c5,
- 0x11f4095e,
- 0x0057f029,
- 0x09b921f5,
- 0xc71f11f4,
- 0x21f5e0b5,
- 0x11f4095e,
- 0x5121f515,
- 0xc774bd08,
- 0x1bf408c5,
- 0x0232f409,
-/* 0x0b73: i2c_recv_not_wr08 */
-/* 0x0b73: i2c_recv_done */
- 0xc7030ef4,
- 0x21f5f8ce,
- 0xe0fc0a1e,
- 0x12f4d0fc,
- 0x027cb90a,
- 0x033621f5,
-/* 0x0b88: i2c_recv_exit */
-/* 0x0b8a: i2c_init */
- 0x00f800f8,
-/* 0x0b8c: test_recv */
- 0x05d817f1,
+ 0x09b721f5,
+ 0xf50464b6,
+ 0xbb008a11,
+ 0x65b60076,
+ 0x9450f904,
+ 0x56bb0465,
+ 0xfd50bd02,
+ 0x50fc0475,
+ 0x090a21f5,
+ 0xf40464b6,
+ 0x5bcb6a11,
+ 0x0076bbe0,
+ 0xf90465b6,
+ 0x04659450,
+ 0xbd0256bb,
+ 0x0475fd50,
+ 0x21f550fc,
+ 0x64b6084f,
+ 0x025bb904,
+ 0x0ef474bd,
+/* 0x0b31: i2c_recv_not_rd08 */
+ 0x01d6b043,
+ 0xf03d1bf4,
+ 0x21f50057,
+ 0x11f409b7,
+ 0xe0c5c733,
+ 0x095c21f5,
+ 0xf02911f4,
+ 0x21f50057,
+ 0x11f409b7,
+ 0xe0b5c71f,
+ 0x095c21f5,
+ 0xf51511f4,
+ 0xbd084f21,
+ 0x08c5c774,
+ 0xf4091bf4,
+ 0x0ef40232,
+/* 0x0b71: i2c_recv_not_wr08 */
+/* 0x0b71: i2c_recv_done */
+ 0xf8cec703,
+ 0x0a1c21f5,
+ 0xd0fce0fc,
+ 0xb90a12f4,
+ 0x21f5027c,
+/* 0x0b86: i2c_recv_exit */
+ 0x00f80336,
+/* 0x0b88: i2c_init */
+/* 0x0b8a: test_recv */
+ 0x17f100f8,
+ 0x14b605d8,
+ 0x0011cf06,
+ 0xf10110b6,
+ 0xb605d807,
+ 0x01d00604,
+ 0xf104bd00,
+ 0xf1d900e7,
+ 0xf5134fe3,
+ 0xf8025621,
+/* 0x0bb1: test_init */
+ 0x00e7f100,
+ 0x5621f508,
+/* 0x0bbb: idle_recv */
+ 0xf800f802,
+/* 0x0bbd: idle */
+ 0x0031f400,
+ 0x05d417f1,
0xcf0614b6,
0x10b60011,
- 0xd807f101,
+ 0xd407f101,
0x0604b605,
0xbd0001d0,
- 0x00e7f104,
- 0x4fe3f1d9,
- 0x5621f513,
-/* 0x0bb3: test_init */
- 0xf100f802,
- 0xf50800e7,
- 0xf8025621,
-/* 0x0bbd: idle_recv */
-/* 0x0bbf: idle */
- 0xf400f800,
- 0x17f10031,
- 0x14b605d4,
- 0x0011cf06,
- 0xf10110b6,
- 0xb605d407,
- 0x01d00604,
-/* 0x0bdb: idle_loop */
- 0xf004bd00,
- 0x32f45817,
-/* 0x0be1: idle_proc */
-/* 0x0be1: idle_proc_exec */
- 0xb910f902,
- 0x21f5021e,
- 0x10fc033f,
- 0xf40911f4,
- 0x0ef40231,
-/* 0x0bf5: idle_proc_next */
- 0x5810b6ef,
- 0xf4061fb8,
- 0x02f4e61b,
- 0x0028f4dd,
- 0x00bb0ef4,
+/* 0x0bd9: idle_loop */
+ 0x5817f004,
+/* 0x0bdf: idle_proc */
+/* 0x0bdf: idle_proc_exec */
+ 0xf90232f4,
+ 0x021eb910,
+ 0x033f21f5,
+ 0x11f410fc,
+ 0x0231f409,
+/* 0x0bf3: idle_proc_next */
+ 0xb6ef0ef4,
+ 0x1fb85810,
+ 0xe61bf406,
+ 0xf4dd02f4,
+ 0x0ef40028,
+ 0x000000bb,
0x00000000,
0x00000000,
0x00000000,
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/gk208.fuc5.h b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/gk208.fuc5.h
index c4edbc79e41a..e0222cb832fb 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/gk208.fuc5.h
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/gk208.fuc5.h
@@ -47,8 +47,8 @@ static uint32_t gk208_pmu_data[] = {
0x00000000,
0x00000000,
0x584d454d,
- 0x000005f3,
- 0x000005e5,
+ 0x000005ee,
+ 0x000005e0,
0x00000000,
0x00000000,
0x00000000,
@@ -69,8 +69,8 @@ static uint32_t gk208_pmu_data[] = {
0x00000000,
0x00000000,
0x46524550,
- 0x000005f7,
- 0x000005f5,
+ 0x000005f2,
+ 0x000005f0,
0x00000000,
0x00000000,
0x00000000,
@@ -91,8 +91,8 @@ static uint32_t gk208_pmu_data[] = {
0x00000000,
0x00000000,
0x5f433249,
- 0x000009f8,
- 0x000008a2,
+ 0x000009f3,
+ 0x0000089d,
0x00000000,
0x00000000,
0x00000000,
@@ -113,8 +113,8 @@ static uint32_t gk208_pmu_data[] = {
0x00000000,
0x00000000,
0x54534554,
- 0x00000a16,
- 0x000009fa,
+ 0x00000a11,
+ 0x000009f5,
0x00000000,
0x00000000,
0x00000000,
@@ -135,8 +135,8 @@ static uint32_t gk208_pmu_data[] = {
0x00000000,
0x00000000,
0x454c4449,
- 0x00000a21,
- 0x00000a1f,
+ 0x00000a1c,
+ 0x00000a1a,
0x00000000,
0x00000000,
0x00000000,
@@ -234,22 +234,22 @@ static uint32_t gk208_pmu_data[] = {
/* 0x037c: memx_func_next */
0x00000002,
0x00000000,
- 0x000004cf,
+ 0x000004cc,
0x00000003,
0x00000002,
- 0x00000546,
+ 0x00000541,
0x00040004,
0x00000000,
- 0x00000563,
+ 0x0000055e,
0x00010005,
0x00000000,
- 0x0000057d,
+ 0x00000578,
0x00010006,
0x00000000,
- 0x00000541,
+ 0x0000053c,
0x00000007,
0x00000000,
- 0x00000589,
+ 0x00000584,
/* 0x03c4: memx_func_tail */
/* 0x03c4: memx_ts_start */
0x00000000,
@@ -1239,454 +1239,454 @@ static uint32_t gk208_pmu_code[] = {
0x0001f604,
0x00f804bd,
/* 0x045c: memx_func_enter */
- 0x162067f1,
- 0xf55d77f1,
- 0x047e6eb2,
- 0xd8b20000,
- 0xf90487fd,
- 0xfc80f960,
- 0x7ee0fcd0,
- 0x0700002d,
- 0x7e6eb2fe,
+ 0x47162046,
+ 0x6eb2f55d,
+ 0x0000047e,
+ 0x87fdd8b2,
+ 0xf960f904,
+ 0xfcd0fc80,
+ 0x002d7ee0,
+ 0xb2fe0700,
+ 0x00047e6e,
+ 0xfdd8b200,
+ 0x60f90487,
+ 0xd0fc80f9,
+ 0x2d7ee0fc,
+ 0xf0460000,
+ 0x7e6eb226,
0xb2000004,
0x0487fdd8,
0x80f960f9,
0xe0fcd0fc,
0x00002d7e,
- 0x26f067f1,
- 0x047e6eb2,
- 0xd8b20000,
- 0xf90487fd,
- 0xfc80f960,
- 0x7ee0fcd0,
- 0x0600002d,
- 0x07e04004,
- 0xbd0006f6,
-/* 0x04b9: memx_func_enter_wait */
- 0x07c04604,
- 0xf00066cf,
- 0x0bf40464,
- 0xcf2c06f7,
- 0x06b50066,
-/* 0x04cf: memx_func_leave */
- 0x0600f8f1,
- 0x0066cf2c,
- 0x06f206b5,
- 0x07e44004,
- 0xbd0006f6,
-/* 0x04e1: memx_func_leave_wait */
- 0x07c04604,
- 0xf00066cf,
- 0x1bf40464,
- 0xf067f1f7,
+ 0xe0400406,
+ 0x0006f607,
+/* 0x04b6: memx_func_enter_wait */
+ 0xc04604bd,
+ 0x0066cf07,
+ 0xf40464f0,
+ 0x2c06f70b,
+ 0xb50066cf,
+ 0x00f8f106,
+/* 0x04cc: memx_func_leave */
+ 0x66cf2c06,
+ 0xf206b500,
+ 0xe4400406,
+ 0x0006f607,
+/* 0x04de: memx_func_leave_wait */
+ 0xc04604bd,
+ 0x0066cf07,
+ 0xf40464f0,
+ 0xf046f71b,
0xb2010726,
0x00047e6e,
0xfdd8b200,
0x60f90587,
0xd0fc80f9,
0x2d7ee0fc,
- 0x67f10000,
- 0x6eb21620,
- 0x0000047e,
- 0x87fdd8b2,
- 0xf960f905,
- 0xfcd0fc80,
- 0x002d7ee0,
- 0x0aa24700,
- 0x047e6eb2,
- 0xd8b20000,
- 0xf90587fd,
- 0xfc80f960,
- 0x7ee0fcd0,
- 0xf800002d,
-/* 0x0541: memx_func_wait_vblank */
+ 0x20460000,
+ 0x7e6eb216,
+ 0xb2000004,
+ 0x0587fdd8,
+ 0x80f960f9,
+ 0xe0fcd0fc,
+ 0x00002d7e,
+ 0xb20aa247,
+ 0x00047e6e,
+ 0xfdd8b200,
+ 0x60f90587,
+ 0xd0fc80f9,
+ 0x2d7ee0fc,
+ 0x00f80000,
+/* 0x053c: memx_func_wait_vblank */
+ 0xf80410b6,
+/* 0x0541: memx_func_wr32 */
+ 0x00169800,
+ 0xb6011598,
+ 0x60f90810,
+ 0xd0fc50f9,
+ 0x2d7ee0fc,
+ 0x42b60000,
+ 0xe81bf402,
+/* 0x055e: memx_func_wait */
+ 0x2c0800f8,
+ 0x980088cf,
+ 0x1d98001e,
+ 0x021c9801,
+ 0xb6031b98,
+ 0x747e1010,
+ 0x00f80000,
+/* 0x0578: memx_func_delay */
+ 0xb6001e98,
+ 0x587e0410,
+ 0x00f80000,
+/* 0x0584: memx_func_train */
+/* 0x0586: memx_exec */
+ 0xe0f900f8,
+ 0xc1b2d0f9,
+/* 0x058e: memx_exec_next */
+ 0x1398b2b2,
0x0410b600,
-/* 0x0546: memx_func_wr32 */
- 0x169800f8,
- 0x01159800,
- 0xf90810b6,
- 0xfc50f960,
+ 0x01f034e7,
+ 0x01e033e7,
+ 0xf00132b6,
+ 0x35980c30,
+ 0xa655f9de,
+ 0xe51ef412,
+ 0x98f10b98,
+ 0xcbbbf20c,
+ 0x07c44b02,
+ 0xfc00bbcf,
0x7ee0fcd0,
- 0xb600002d,
- 0x1bf40242,
-/* 0x0563: memx_func_wait */
- 0x0800f8e8,
- 0x0088cf2c,
- 0x98001e98,
- 0x1c98011d,
- 0x031b9802,
- 0x7e1010b6,
- 0xf8000074,
-/* 0x057d: memx_func_delay */
- 0x001e9800,
- 0x7e0410b6,
- 0xf8000058,
-/* 0x0589: memx_func_train */
-/* 0x058b: memx_exec */
- 0xf900f800,
- 0xb2d0f9e0,
-/* 0x0593: memx_exec_next */
- 0x98b2b2c1,
- 0x10b60013,
- 0xf034e704,
- 0xe033e701,
- 0x0132b601,
- 0x980c30f0,
- 0x55f9de35,
- 0x1ef412a6,
- 0xf10b98e5,
- 0xbbf20c98,
- 0xc44b02cb,
- 0x00bbcf07,
- 0xe0fcd0fc,
- 0x00029f7e,
-/* 0x05ca: memx_info */
- 0xc67000f8,
- 0x0c0bf401,
-/* 0x05d0: memx_info_data */
- 0x4b03cc4c,
- 0x0ef40800,
-/* 0x05d9: memx_info_train */
- 0x0bcc4c09,
-/* 0x05df: memx_info_send */
- 0x7e01004b,
0xf800029f,
-/* 0x05e5: memx_recv */
- 0x01d6b000,
- 0xb0a30bf4,
- 0x0bf400d6,
-/* 0x05f3: memx_init */
- 0xf800f8dc,
-/* 0x05f5: perf_recv */
-/* 0x05f7: perf_init */
- 0xf800f800,
-/* 0x05f9: i2c_drive_scl */
- 0x0036b000,
- 0x400d0bf4,
- 0x01f607e0,
- 0xf804bd00,
-/* 0x0609: i2c_drive_scl_lo */
- 0x07e44000,
- 0xbd0001f6,
-/* 0x0613: i2c_drive_sda */
- 0xb000f804,
- 0x0bf40036,
- 0x07e0400d,
- 0xbd0002f6,
-/* 0x0623: i2c_drive_sda_lo */
- 0x4000f804,
- 0x02f607e4,
- 0xf804bd00,
-/* 0x062d: i2c_sense_scl */
- 0x0132f400,
- 0xcf07c443,
- 0x31fd0033,
- 0x060bf404,
-/* 0x063f: i2c_sense_scl_done */
- 0xf80131f4,
-/* 0x0641: i2c_sense_sda */
- 0x0132f400,
- 0xcf07c443,
- 0x32fd0033,
- 0x060bf404,
-/* 0x0653: i2c_sense_sda_done */
- 0xf80131f4,
-/* 0x0655: i2c_raise_scl */
- 0x4440f900,
- 0x01030898,
- 0x0005f97e,
-/* 0x0660: i2c_raise_scl_wait */
- 0x7e03e84e,
- 0x7e000058,
- 0xf400062d,
- 0x42b60901,
- 0xef1bf401,
-/* 0x0674: i2c_raise_scl_done */
- 0x00f840fc,
-/* 0x0678: i2c_start */
- 0x00062d7e,
- 0x7e0d11f4,
- 0xf4000641,
- 0x0ef40611,
-/* 0x0689: i2c_start_rep */
- 0x7e00032e,
- 0x030005f9,
- 0x06137e01,
+/* 0x05c5: memx_info */
+ 0x01c67000,
+/* 0x05cb: memx_info_data */
+ 0x4c0c0bf4,
+ 0x004b03cc,
+ 0x090ef408,
+/* 0x05d4: memx_info_train */
+ 0x4b0bcc4c,
+/* 0x05da: memx_info_send */
+ 0x9f7e0100,
+ 0x00f80002,
+/* 0x05e0: memx_recv */
+ 0xf401d6b0,
+ 0xd6b0a30b,
+ 0xdc0bf400,
+/* 0x05ee: memx_init */
+ 0x00f800f8,
+/* 0x05f0: perf_recv */
+/* 0x05f2: perf_init */
+ 0x00f800f8,
+/* 0x05f4: i2c_drive_scl */
+ 0xf40036b0,
+ 0xe0400d0b,
+ 0x0001f607,
+ 0x00f804bd,
+/* 0x0604: i2c_drive_scl_lo */
+ 0xf607e440,
+ 0x04bd0001,
+/* 0x060e: i2c_drive_sda */
+ 0x36b000f8,
+ 0x0d0bf400,
+ 0xf607e040,
+ 0x04bd0002,
+/* 0x061e: i2c_drive_sda_lo */
+ 0xe44000f8,
+ 0x0002f607,
+ 0x00f804bd,
+/* 0x0628: i2c_sense_scl */
+ 0x430132f4,
+ 0x33cf07c4,
+ 0x0431fd00,
+ 0xf4060bf4,
+/* 0x063a: i2c_sense_scl_done */
+ 0x00f80131,
+/* 0x063c: i2c_sense_sda */
+ 0x430132f4,
+ 0x33cf07c4,
+ 0x0432fd00,
+ 0xf4060bf4,
+/* 0x064e: i2c_sense_sda_done */
+ 0x00f80131,
+/* 0x0650: i2c_raise_scl */
+ 0x984440f9,
+ 0x7e010308,
+/* 0x065b: i2c_raise_scl_wait */
+ 0x4e0005f4,
+ 0x587e03e8,
+ 0x287e0000,
+ 0x01f40006,
+ 0x0142b609,
+/* 0x066f: i2c_raise_scl_done */
+ 0xfcef1bf4,
+/* 0x0673: i2c_start */
+ 0x7e00f840,
+ 0xf4000628,
+ 0x3c7e0d11,
+ 0x11f40006,
+ 0x2e0ef406,
+/* 0x0684: i2c_start_rep */
+ 0xf47e0003,
+ 0x01030005,
+ 0x00060e7e,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0x06507e50,
+ 0x0464b600,
+/* 0x06af: i2c_start_send */
+ 0x031d11f4,
+ 0x060e7e00,
+ 0x13884e00,
+ 0x0000587e,
+ 0xf47e0003,
+ 0x884e0005,
+ 0x00587e13,
+/* 0x06c9: i2c_start_out */
+/* 0x06cb: i2c_stop */
+ 0x0300f800,
+ 0x05f47e00,
+ 0x7e000300,
+ 0x4e00060e,
+ 0x587e03e8,
+ 0x01030000,
+ 0x0005f47e,
+ 0x7e13884e,
+ 0x03000058,
+ 0x060e7e01,
+ 0x13884e00,
+ 0x0000587e,
+/* 0x06fa: i2c_bitw */
+ 0x0e7e00f8,
+ 0xe84e0006,
+ 0x00587e03,
0x0076bb00,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
- 0x557e50fc,
+ 0x507e50fc,
0x64b60006,
- 0x1d11f404,
-/* 0x06b4: i2c_start_send */
- 0x137e0003,
- 0x884e0006,
- 0x00587e13,
- 0x7e000300,
- 0x4e0005f9,
- 0x587e1388,
-/* 0x06ce: i2c_start_out */
- 0x00f80000,
-/* 0x06d0: i2c_stop */
- 0xf97e0003,
- 0x00030005,
- 0x0006137e,
- 0x7e03e84e,
+ 0x1711f404,
+ 0x7e13884e,
0x03000058,
- 0x05f97e01,
+ 0x05f47e00,
0x13884e00,
0x0000587e,
- 0x137e0103,
- 0x884e0006,
- 0x00587e13,
-/* 0x06ff: i2c_bitw */
- 0x7e00f800,
- 0x4e000613,
- 0x587e03e8,
- 0x76bb0000,
+/* 0x0738: i2c_bitw_out */
+/* 0x073a: i2c_bitr */
+ 0x010300f8,
+ 0x00060e7e,
+ 0x7e03e84e,
+ 0xbb000058,
+ 0x65b60076,
+ 0x9450f904,
+ 0x56bb0465,
+ 0xfd50bd02,
+ 0x50fc0475,
+ 0x0006507e,
+ 0xf40464b6,
+ 0x3c7e1a11,
+ 0x00030006,
+ 0x0005f47e,
+ 0x7e13884e,
+ 0xf0000058,
+ 0x31f4013c,
+/* 0x077d: i2c_bitr_done */
+/* 0x077f: i2c_get_byte */
+ 0x0500f801,
+/* 0x0783: i2c_get_byte_next */
+ 0xb6080400,
+ 0x76bb0154,
0x0465b600,
0x659450f9,
0x0256bb04,
0x75fd50bd,
0x7e50fc04,
- 0xb6000655,
+ 0xb600073a,
0x11f40464,
- 0x13884e17,
- 0x0000587e,
- 0xf97e0003,
- 0x884e0005,
- 0x00587e13,
-/* 0x073d: i2c_bitw_out */
-/* 0x073f: i2c_bitr */
- 0x0300f800,
- 0x06137e01,
- 0x03e84e00,
- 0x0000587e,
+ 0x0553fd2a,
+ 0xf40142b6,
+ 0x0103d81b,
0xb60076bb,
0x50f90465,
0xbb046594,
0x50bd0256,
0xfc0475fd,
- 0x06557e50,
+ 0x06fa7e50,
0x0464b600,
- 0x7e1a11f4,
- 0x03000641,
- 0x05f97e00,
- 0x13884e00,
- 0x0000587e,
- 0xf4013cf0,
-/* 0x0782: i2c_bitr_done */
- 0x00f80131,
-/* 0x0784: i2c_get_byte */
- 0x08040005,
-/* 0x0788: i2c_get_byte_next */
- 0xbb0154b6,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x00073f7e,
- 0xf40464b6,
- 0x53fd2a11,
- 0x0142b605,
- 0x03d81bf4,
- 0x0076bb01,
- 0xf90465b6,
- 0x04659450,
- 0xbd0256bb,
- 0x0475fd50,
- 0xff7e50fc,
- 0x64b60006,
-/* 0x07d1: i2c_get_byte_done */
-/* 0x07d3: i2c_put_byte */
- 0x0400f804,
-/* 0x07d5: i2c_put_byte_next */
- 0x0142b608,
- 0xbb3854ff,
+/* 0x07cc: i2c_get_byte_done */
+/* 0x07ce: i2c_put_byte */
+ 0x080400f8,
+/* 0x07d0: i2c_put_byte_next */
+ 0xff0142b6,
+ 0x76bb3854,
+ 0x0465b600,
+ 0x659450f9,
+ 0x0256bb04,
+ 0x75fd50bd,
+ 0x7e50fc04,
+ 0xb60006fa,
+ 0x11f40464,
+ 0x0046b034,
+ 0xbbd81bf4,
0x65b60076,
0x9450f904,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x0006ff7e,
+ 0x00073a7e,
0xf40464b6,
- 0x46b03411,
- 0xd81bf400,
+ 0x76bb0f11,
+ 0x0136b000,
+ 0xf4061bf4,
+/* 0x0826: i2c_put_byte_done */
+ 0x00f80132,
+/* 0x0828: i2c_addr */
0xb60076bb,
0x50f90465,
0xbb046594,
0x50bd0256,
0xfc0475fd,
- 0x073f7e50,
+ 0x06737e50,
0x0464b600,
- 0xbb0f11f4,
- 0x36b00076,
- 0x061bf401,
-/* 0x082b: i2c_put_byte_done */
- 0xf80132f4,
-/* 0x082d: i2c_addr */
- 0x0076bb00,
+ 0xe72911f4,
+ 0xb6012ec3,
+ 0x53fd0134,
+ 0x0076bb05,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
- 0x787e50fc,
- 0x64b60006,
- 0x2911f404,
- 0x012ec3e7,
- 0xfd0134b6,
- 0x76bb0553,
- 0x0465b600,
- 0x659450f9,
- 0x0256bb04,
- 0x75fd50bd,
- 0x7e50fc04,
- 0xb60007d3,
-/* 0x0872: i2c_addr_done */
- 0x00f80464,
-/* 0x0874: i2c_acquire_addr */
- 0xb6f8cec7,
- 0xe0b705e4,
- 0x00f8d014,
-/* 0x0880: i2c_acquire */
- 0x0008747e,
+ 0xce7e50fc,
+ 0x64b60007,
+/* 0x086d: i2c_addr_done */
+/* 0x086f: i2c_acquire_addr */
+ 0xc700f804,
+ 0xe4b6f8ce,
+ 0x14e0b705,
+/* 0x087b: i2c_acquire */
+ 0x7e00f8d0,
+ 0x7e00086f,
+ 0xf0000004,
+ 0x2d7e03d9,
+ 0x00f80000,
+/* 0x088c: i2c_release */
+ 0x00086f7e,
0x0000047e,
- 0x7e03d9f0,
+ 0x7e03daf0,
0xf800002d,
-/* 0x0891: i2c_release */
- 0x08747e00,
- 0x00047e00,
- 0x03daf000,
- 0x00002d7e,
-/* 0x08a2: i2c_recv */
- 0x32f400f8,
- 0xf8c1c701,
- 0xb00214b6,
- 0x1ff52816,
- 0x13b80134,
- 0x98000cf4,
- 0x13b80032,
- 0x98000ccc,
- 0x31f40031,
- 0xf9d0f902,
- 0xd6d0f9e0,
- 0x10000000,
- 0xbb016792,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x0008807e,
- 0xfc0464b6,
- 0x00d6b0d0,
- 0x00b01bf5,
- 0x76bb0005,
+/* 0x089d: i2c_recv */
+ 0x0132f400,
+ 0xb6f8c1c7,
+ 0x16b00214,
+ 0x341ff528,
+ 0xf413b801,
+ 0x3298000c,
+ 0xcc13b800,
+ 0x3198000c,
+ 0x0231f400,
+ 0xe0f9d0f9,
+ 0x00d6d0f9,
+ 0x92100000,
+ 0x76bb0167,
0x0465b600,
0x659450f9,
0x0256bb04,
0x75fd50bd,
0x7e50fc04,
- 0xb600082d,
- 0x11f50464,
- 0xc5c700cc,
- 0x0076bbe0,
- 0xf90465b6,
- 0x04659450,
- 0xbd0256bb,
- 0x0475fd50,
- 0xd37e50fc,
- 0x64b60007,
- 0xa911f504,
- 0xbb010500,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x00082d7e,
- 0xf50464b6,
- 0xbb008711,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x0007847e,
- 0xf40464b6,
- 0x5bcb6711,
- 0x0076bbe0,
+ 0xb600087b,
+ 0xd0fc0464,
+ 0xf500d6b0,
+ 0x0500b01b,
+ 0x0076bb00,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
- 0xd07e50fc,
- 0x64b60006,
- 0xbd5bb204,
- 0x410ef474,
-/* 0x09a4: i2c_recv_not_rd08 */
- 0xf401d6b0,
- 0x00053b1b,
- 0x00082d7e,
- 0xc73211f4,
- 0xd37ee0c5,
- 0x11f40007,
- 0x7e000528,
- 0xf400082d,
- 0xb5c71f11,
- 0x07d37ee0,
- 0x1511f400,
- 0x0006d07e,
- 0xc5c774bd,
- 0x091bf408,
- 0xf40232f4,
-/* 0x09e2: i2c_recv_not_wr08 */
-/* 0x09e2: i2c_recv_done */
- 0xcec7030e,
- 0x08917ef8,
- 0xfce0fc00,
- 0x0912f4d0,
- 0x9f7e7cb2,
-/* 0x09f6: i2c_recv_exit */
- 0x00f80002,
-/* 0x09f8: i2c_init */
-/* 0x09fa: test_recv */
- 0x584100f8,
- 0x0011cf04,
- 0x400110b6,
- 0x01f60458,
- 0xde04bd00,
- 0x134fd900,
- 0x0001de7e,
-/* 0x0a16: test_init */
- 0x004e00f8,
- 0x01de7e08,
-/* 0x0a1f: idle_recv */
+ 0x287e50fc,
+ 0x64b60008,
+ 0xcc11f504,
+ 0xe0c5c700,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0x07ce7e50,
+ 0x0464b600,
+ 0x00a911f5,
+ 0x76bb0105,
+ 0x0465b600,
+ 0x659450f9,
+ 0x0256bb04,
+ 0x75fd50bd,
+ 0x7e50fc04,
+ 0xb6000828,
+ 0x11f50464,
+ 0x76bb0087,
+ 0x0465b600,
+ 0x659450f9,
+ 0x0256bb04,
+ 0x75fd50bd,
+ 0x7e50fc04,
+ 0xb600077f,
+ 0x11f40464,
+ 0xe05bcb67,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0x06cb7e50,
+ 0x0464b600,
+ 0x74bd5bb2,
+/* 0x099f: i2c_recv_not_rd08 */
+ 0xb0410ef4,
+ 0x1bf401d6,
+ 0x7e00053b,
+ 0xf4000828,
+ 0xc5c73211,
+ 0x07ce7ee0,
+ 0x2811f400,
+ 0x287e0005,
+ 0x11f40008,
+ 0xe0b5c71f,
+ 0x0007ce7e,
+ 0x7e1511f4,
+ 0xbd0006cb,
+ 0x08c5c774,
+ 0xf4091bf4,
+ 0x0ef40232,
+/* 0x09dd: i2c_recv_not_wr08 */
+/* 0x09dd: i2c_recv_done */
+ 0xf8cec703,
+ 0x00088c7e,
+ 0xd0fce0fc,
+ 0xb20912f4,
+ 0x029f7e7c,
+/* 0x09f1: i2c_recv_exit */
+/* 0x09f3: i2c_init */
0xf800f800,
-/* 0x0a21: idle */
- 0x0031f400,
- 0xcf045441,
- 0x10b60011,
- 0x04544001,
- 0xbd0001f6,
-/* 0x0a35: idle_loop */
- 0xf4580104,
-/* 0x0a3a: idle_proc */
-/* 0x0a3a: idle_proc_exec */
- 0x10f90232,
- 0xa87e1eb2,
- 0x10fc0002,
- 0xf40911f4,
- 0x0ef40231,
-/* 0x0a4d: idle_proc_next */
- 0x5810b6f0,
- 0x1bf41fa6,
- 0xe002f4e8,
- 0xf40028f4,
- 0x0000c60e,
+/* 0x09f5: test_recv */
+ 0x04584100,
+ 0xb60011cf,
+ 0x58400110,
+ 0x0001f604,
+ 0x00de04bd,
+ 0x7e134fd9,
+ 0xf80001de,
+/* 0x0a11: test_init */
+ 0x08004e00,
+ 0x0001de7e,
+/* 0x0a1a: idle_recv */
+ 0x00f800f8,
+/* 0x0a1c: idle */
+ 0x410031f4,
+ 0x11cf0454,
+ 0x0110b600,
+ 0xf6045440,
+ 0x04bd0001,
+/* 0x0a30: idle_loop */
+ 0x32f45801,
+/* 0x0a35: idle_proc */
+/* 0x0a35: idle_proc_exec */
+ 0xb210f902,
+ 0x02a87e1e,
+ 0xf410fc00,
+ 0x31f40911,
+ 0xf00ef402,
+/* 0x0a48: idle_proc_next */
+ 0xa65810b6,
+ 0xe81bf41f,
+ 0xf4e002f4,
+ 0x0ef40028,
+ 0x000000c6,
+ 0x00000000,
0x00000000,
0x00000000,
0x00000000,
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/gt215.fuc3.h b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/gt215.fuc3.h
index 6a2572e8945a..defddf5957ee 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/gt215.fuc3.h
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/gt215.fuc3.h
@@ -47,8 +47,8 @@ static uint32_t gt215_pmu_data[] = {
0x00000000,
0x00000000,
0x584d454d,
- 0x0000083a,
- 0x0000082c,
+ 0x00000833,
+ 0x00000825,
0x00000000,
0x00000000,
0x00000000,
@@ -69,8 +69,8 @@ static uint32_t gt215_pmu_data[] = {
0x00000000,
0x00000000,
0x46524550,
- 0x0000083e,
- 0x0000083c,
+ 0x00000837,
+ 0x00000835,
0x00000000,
0x00000000,
0x00000000,
@@ -91,8 +91,8 @@ static uint32_t gt215_pmu_data[] = {
0x00000000,
0x00000000,
0x5f433249,
- 0x00000c6e,
- 0x00000b11,
+ 0x00000c67,
+ 0x00000b0a,
0x00000000,
0x00000000,
0x00000000,
@@ -113,8 +113,8 @@ static uint32_t gt215_pmu_data[] = {
0x00000000,
0x00000000,
0x54534554,
- 0x00000c97,
- 0x00000c70,
+ 0x00000c90,
+ 0x00000c69,
0x00000000,
0x00000000,
0x00000000,
@@ -135,8 +135,8 @@ static uint32_t gt215_pmu_data[] = {
0x00000000,
0x00000000,
0x454c4449,
- 0x00000ca3,
- 0x00000ca1,
+ 0x00000c9c,
+ 0x00000c9a,
0x00000000,
0x00000000,
0x00000000,
@@ -234,22 +234,22 @@ static uint32_t gt215_pmu_data[] = {
/* 0x037c: memx_func_next */
0x00000002,
0x00000000,
- 0x000005a0,
+ 0x0000059f,
0x00000003,
0x00000002,
- 0x00000632,
+ 0x0000062f,
0x00040004,
0x00000000,
- 0x0000064e,
+ 0x0000064b,
0x00010005,
0x00000000,
- 0x0000066b,
+ 0x00000668,
0x00010006,
0x00000000,
- 0x000005f0,
+ 0x000005ef,
0x00000007,
0x00000000,
- 0x00000676,
+ 0x00000673,
/* 0x03c4: memx_func_tail */
/* 0x03c4: memx_ts_start */
0x00000000,
@@ -1305,560 +1305,560 @@ static uint32_t gt215_pmu_code[] = {
0x67f102d7,
0x63f1fffc,
0x76fdffff,
- 0x0267f104,
- 0x0576fd00,
- 0x70f980f9,
- 0xe0fcd0fc,
- 0xf04021f4,
+ 0x0267f004,
+ 0xf90576fd,
+ 0xfc70f980,
+ 0xf4e0fcd0,
+ 0x67f04021,
+ 0xe007f104,
+ 0x0604b607,
+ 0xbd0006d0,
+/* 0x0581: memx_func_enter_wait */
+ 0xc067f104,
+ 0x0664b607,
+ 0xf00066cf,
+ 0x0bf40464,
+ 0x2c67f0f3,
+ 0xcf0664b6,
+ 0x06800066,
+/* 0x059f: memx_func_leave */
+ 0xf000f8f1,
+ 0x64b62c67,
+ 0x0066cf06,
+ 0xf0f20680,
0x07f10467,
- 0x04b607e0,
+ 0x04b607e4,
0x0006d006,
-/* 0x0582: memx_func_enter_wait */
+/* 0x05ba: memx_func_leave_wait */
0x67f104bd,
0x64b607c0,
0x0066cf06,
0xf40464f0,
- 0x67f0f30b,
- 0x0664b62c,
- 0x800066cf,
- 0x00f8f106,
-/* 0x05a0: memx_func_leave */
- 0xb62c67f0,
- 0x66cf0664,
- 0xf2068000,
- 0xf10467f0,
- 0xb607e407,
- 0x06d00604,
-/* 0x05bb: memx_func_leave_wait */
- 0xf104bd00,
- 0xb607c067,
- 0x66cf0664,
- 0x0464f000,
- 0xf1f31bf4,
- 0xb9161087,
- 0x21f4028e,
- 0x02d7b904,
- 0xffcc67f1,
- 0xffff63f1,
- 0xf90476fd,
- 0xfc70f980,
- 0xf4e0fcd0,
- 0x00f84021,
-/* 0x05f0: memx_func_wait_vblank */
- 0xb0001698,
- 0x0bf40066,
- 0x0166b013,
- 0xf4060bf4,
-/* 0x0602: memx_func_wait_vblank_head1 */
- 0x77f12e0e,
- 0x0ef40020,
-/* 0x0609: memx_func_wait_vblank_head0 */
- 0x0877f107,
-/* 0x060d: memx_func_wait_vblank_0 */
- 0xc467f100,
- 0x0664b607,
- 0xfd0066cf,
- 0x1bf40467,
-/* 0x061d: memx_func_wait_vblank_1 */
- 0xc467f1f3,
- 0x0664b607,
- 0xfd0066cf,
- 0x0bf40467,
-/* 0x062d: memx_func_wait_vblank_fini */
- 0x0410b6f3,
-/* 0x0632: memx_func_wr32 */
- 0x169800f8,
- 0x01159800,
- 0xf90810b6,
- 0xfc50f960,
- 0xf4e0fcd0,
- 0x42b64021,
- 0xe91bf402,
-/* 0x064e: memx_func_wait */
- 0x87f000f8,
- 0x0684b62c,
- 0x980088cf,
- 0x1d98001e,
- 0x021c9801,
- 0xb6031b98,
- 0x21f41010,
-/* 0x066b: memx_func_delay */
- 0x9800f8a3,
- 0x10b6001e,
- 0x7e21f404,
-/* 0x0676: memx_func_train */
- 0x57f100f8,
- 0x77f10003,
- 0x97f10000,
- 0x93f00000,
- 0x029eb970,
- 0xb90421f4,
- 0xe7f102d8,
- 0x21f42710,
-/* 0x0695: memx_func_train_loop_outer */
- 0x0158e07e,
- 0x0083f101,
- 0xe097f102,
- 0x1193f011,
- 0x80f990f9,
+ 0x87f1f31b,
+ 0x8eb91610,
+ 0x0421f402,
+ 0xf102d7b9,
+ 0xf1ffcc67,
+ 0xfdffff63,
+ 0x80f90476,
+ 0xd0fc70f9,
+ 0x21f4e0fc,
+/* 0x05ef: memx_func_wait_vblank */
+ 0x9800f840,
+ 0x66b00016,
+ 0x120bf400,
+ 0xf40166b0,
+ 0x0ef4060b,
+/* 0x0601: memx_func_wait_vblank_head1 */
+ 0x2077f02c,
+/* 0x0607: memx_func_wait_vblank_head0 */
+ 0xf0060ef4,
+/* 0x060a: memx_func_wait_vblank_0 */
+ 0x67f10877,
+ 0x64b607c4,
+ 0x0066cf06,
+ 0xf40467fd,
+/* 0x061a: memx_func_wait_vblank_1 */
+ 0x67f1f31b,
+ 0x64b607c4,
+ 0x0066cf06,
+ 0xf40467fd,
+/* 0x062a: memx_func_wait_vblank_fini */
+ 0x10b6f30b,
+/* 0x062f: memx_func_wr32 */
+ 0x9800f804,
+ 0x15980016,
+ 0x0810b601,
+ 0x50f960f9,
0xe0fcd0fc,
- 0xf94021f4,
- 0x0067f150,
-/* 0x06b5: memx_func_train_loop_inner */
- 0x1187f100,
- 0x9068ff11,
- 0xfd109894,
- 0x97f10589,
- 0x93f00720,
- 0xf990f910,
- 0xfcd0fc80,
- 0x4021f4e0,
- 0x008097f1,
- 0xb91093f0,
- 0x21f4029e,
- 0x02d8b904,
- 0xf92088c5,
+ 0xb64021f4,
+ 0x1bf40242,
+/* 0x064b: memx_func_wait */
+ 0xf000f8e9,
+ 0x84b62c87,
+ 0x0088cf06,
+ 0x98001e98,
+ 0x1c98011d,
+ 0x031b9802,
+ 0xf41010b6,
+ 0x00f8a321,
+/* 0x0668: memx_func_delay */
+ 0xb6001e98,
+ 0x21f40410,
+/* 0x0673: memx_func_train */
+ 0xf000f87e,
+ 0x77f00357,
+ 0x0097f100,
+ 0x7093f000,
+ 0xf4029eb9,
+ 0xd8b90421,
+ 0x10e7f102,
+ 0x7e21f427,
+/* 0x0690: memx_func_train_loop_outer */
+ 0x010158e0,
+ 0x020083f1,
+ 0x11e097f1,
+ 0xf91193f0,
+ 0xfc80f990,
+ 0xf4e0fcd0,
+ 0x50f94021,
+/* 0x06af: memx_func_train_loop_inner */
+ 0xf10067f0,
+ 0xff111187,
+ 0x98949068,
+ 0x0589fd10,
+ 0x072097f1,
+ 0xf91093f0,
0xfc80f990,
0xf4e0fcd0,
0x97f14021,
- 0x93f0053c,
- 0x0287f110,
- 0x0083f130,
- 0xf990f980,
+ 0x93f00080,
+ 0x029eb910,
+ 0xb90421f4,
+ 0x88c502d8,
+ 0xf990f920,
0xfcd0fc80,
0x4021f4e0,
- 0x0560e7f1,
- 0xf110e3f0,
- 0xf10000d7,
- 0x908000d3,
- 0xb7f100dc,
- 0xb3f08480,
- 0xa321f41e,
- 0x000057f1,
- 0xffff97f1,
- 0x830093f1,
-/* 0x0734: memx_func_train_loop_4x */
- 0x0080a7f1,
- 0xb910a3f0,
- 0x21f402ae,
- 0x02d8b904,
- 0xffdfb7f1,
- 0xffffb3f1,
- 0xf9048bfd,
- 0xfc80f9a0,
+ 0x053c97f1,
+ 0xf11093f0,
+ 0xf1300287,
+ 0xf9800083,
+ 0xfc80f990,
0xf4e0fcd0,
- 0xa7f14021,
- 0xa3f0053c,
- 0x0287f110,
- 0x0083f130,
- 0xf9a0f980,
- 0xfcd0fc80,
- 0x4021f4e0,
- 0x0560e7f1,
- 0xf110e3f0,
- 0xf10000d7,
- 0xb98000d3,
- 0xb7f102dc,
- 0xb3f02710,
- 0xa321f400,
- 0xf402eeb9,
- 0xddb90421,
- 0x949dff02,
+ 0xe7f14021,
+ 0xe3f00560,
+ 0x00d7f110,
+ 0x00d3f100,
+ 0x00dc9080,
+ 0x8480b7f1,
+ 0xf41eb3f0,
+ 0x57f0a321,
+ 0xff97f100,
+ 0x0093f1ff,
+/* 0x072d: memx_func_train_loop_4x */
+ 0x80a7f183,
+ 0x10a3f000,
+ 0xf402aeb9,
+ 0xd8b90421,
+ 0xdfb7f102,
+ 0xffb3f1ff,
+ 0x048bfdff,
+ 0x80f9a0f9,
+ 0xe0fcd0fc,
+ 0xf14021f4,
+ 0xf0053ca7,
+ 0x87f110a3,
+ 0x83f13002,
+ 0xa0f98000,
+ 0xd0fc80f9,
+ 0x21f4e0fc,
+ 0x60e7f140,
+ 0x10e3f005,
+ 0x0000d7f1,
+ 0x8000d3f1,
+ 0xf102dcb9,
+ 0xf02710b7,
+ 0x21f400b3,
+ 0x02eeb9a3,
+ 0xb90421f4,
+ 0x9dff02dd,
+ 0x0150b694,
+ 0xf4045670,
+ 0x7aa0921e,
+ 0xa9800bcc,
+ 0x0160b600,
+ 0x700470b6,
+ 0x1ef51066,
+ 0x50fcff01,
0x700150b6,
- 0x1ef40456,
- 0xcc7aa092,
- 0x00a9800b,
- 0xb60160b6,
- 0x66700470,
- 0x001ef510,
- 0xb650fcff,
- 0x56700150,
- 0xd41ef507,
-/* 0x07c7: memx_exec */
- 0xf900f8fe,
- 0xb9d0f9e0,
- 0xb2b902c1,
-/* 0x07d1: memx_exec_next */
- 0x00139802,
- 0xe70410b6,
- 0xe701f034,
- 0xb601e033,
- 0x30f00132,
- 0xde35980c,
- 0x12b855f9,
- 0xe41ef406,
- 0x98f10b98,
- 0xcbbbf20c,
- 0xc4b7f102,
- 0x06b4b607,
- 0xfc00bbcf,
- 0xf5e0fcd0,
+ 0x1ef50756,
+ 0x00f8fed6,
+/* 0x07c0: memx_exec */
+ 0xd0f9e0f9,
+ 0xb902c1b9,
+/* 0x07ca: memx_exec_next */
+ 0x139802b2,
+ 0x0410b600,
+ 0x01f034e7,
+ 0x01e033e7,
+ 0xf00132b6,
+ 0x35980c30,
+ 0xb855f9de,
+ 0x1ef40612,
+ 0xf10b98e4,
+ 0xbbf20c98,
+ 0xb7f102cb,
+ 0xb4b607c4,
+ 0x00bbcf06,
+ 0xe0fcd0fc,
+ 0x033621f5,
+/* 0x0806: memx_info */
+ 0xc67000f8,
+ 0x0e0bf401,
+/* 0x080c: memx_info_data */
+ 0x03ccc7f1,
+ 0x0800b7f1,
+/* 0x0817: memx_info_train */
+ 0xf10b0ef4,
+ 0xf10bccc7,
+/* 0x081f: memx_info_send */
+ 0xf50100b7,
0xf8033621,
-/* 0x080d: memx_info */
- 0x01c67000,
-/* 0x0813: memx_info_data */
- 0xf10e0bf4,
- 0xf103ccc7,
- 0xf40800b7,
-/* 0x081e: memx_info_train */
- 0xc7f10b0e,
- 0xb7f10bcc,
-/* 0x0826: memx_info_send */
- 0x21f50100,
- 0x00f80336,
-/* 0x082c: memx_recv */
- 0xf401d6b0,
- 0xd6b0980b,
- 0xd80bf400,
-/* 0x083a: memx_init */
- 0x00f800f8,
-/* 0x083c: perf_recv */
-/* 0x083e: perf_init */
- 0x00f800f8,
-/* 0x0840: i2c_drive_scl */
- 0xf40036b0,
- 0x07f1110b,
- 0x04b607e0,
- 0x0001d006,
- 0x00f804bd,
-/* 0x0854: i2c_drive_scl_lo */
- 0x07e407f1,
- 0xd00604b6,
- 0x04bd0001,
-/* 0x0862: i2c_drive_sda */
- 0x36b000f8,
- 0x110bf400,
- 0x07e007f1,
- 0xd00604b6,
- 0x04bd0002,
-/* 0x0876: i2c_drive_sda_lo */
- 0x07f100f8,
- 0x04b607e4,
- 0x0002d006,
- 0x00f804bd,
-/* 0x0884: i2c_sense_scl */
- 0xf10132f4,
- 0xb607c437,
- 0x33cf0634,
- 0x0431fd00,
- 0xf4060bf4,
-/* 0x089a: i2c_sense_scl_done */
- 0x00f80131,
-/* 0x089c: i2c_sense_sda */
- 0xf10132f4,
- 0xb607c437,
- 0x33cf0634,
- 0x0432fd00,
- 0xf4060bf4,
-/* 0x08b2: i2c_sense_sda_done */
- 0x00f80131,
-/* 0x08b4: i2c_raise_scl */
- 0x47f140f9,
- 0x37f00898,
- 0x4021f501,
-/* 0x08c1: i2c_raise_scl_wait */
+/* 0x0825: memx_recv */
+ 0x01d6b000,
+ 0xb0980bf4,
+ 0x0bf400d6,
+/* 0x0833: memx_init */
+ 0xf800f8d8,
+/* 0x0835: perf_recv */
+/* 0x0837: perf_init */
+ 0xf800f800,
+/* 0x0839: i2c_drive_scl */
+ 0x0036b000,
+ 0xf1110bf4,
+ 0xb607e007,
+ 0x01d00604,
+ 0xf804bd00,
+/* 0x084d: i2c_drive_scl_lo */
+ 0xe407f100,
+ 0x0604b607,
+ 0xbd0001d0,
+/* 0x085b: i2c_drive_sda */
+ 0xb000f804,
+ 0x0bf40036,
+ 0xe007f111,
+ 0x0604b607,
+ 0xbd0002d0,
+/* 0x086f: i2c_drive_sda_lo */
+ 0xf100f804,
+ 0xb607e407,
+ 0x02d00604,
+ 0xf804bd00,
+/* 0x087d: i2c_sense_scl */
+ 0x0132f400,
+ 0x07c437f1,
+ 0xcf0634b6,
+ 0x31fd0033,
+ 0x060bf404,
+/* 0x0893: i2c_sense_scl_done */
+ 0xf80131f4,
+/* 0x0895: i2c_sense_sda */
+ 0x0132f400,
+ 0x07c437f1,
+ 0xcf0634b6,
+ 0x32fd0033,
+ 0x060bf404,
+/* 0x08ab: i2c_sense_sda_done */
+ 0xf80131f4,
+/* 0x08ad: i2c_raise_scl */
+ 0xf140f900,
+ 0xf0089847,
+ 0x21f50137,
+/* 0x08ba: i2c_raise_scl_wait */
+ 0xe7f10839,
+ 0x21f403e8,
+ 0x7d21f57e,
+ 0x0901f408,
+ 0xf40142b6,
+/* 0x08ce: i2c_raise_scl_done */
+ 0x40fcef1b,
+/* 0x08d2: i2c_start */
+ 0x21f500f8,
+ 0x11f4087d,
+ 0x9521f50d,
+ 0x0611f408,
+/* 0x08e3: i2c_start_rep */
+ 0xf0300ef4,
+ 0x21f50037,
+ 0x37f00839,
+ 0x5b21f501,
+ 0x0076bb08,
+ 0xf90465b6,
+ 0x04659450,
+ 0xbd0256bb,
+ 0x0475fd50,
+ 0x21f550fc,
+ 0x64b608ad,
+ 0x1f11f404,
+/* 0x0910: i2c_start_send */
+ 0xf50037f0,
+ 0xf1085b21,
+ 0xf41388e7,
+ 0x37f07e21,
+ 0x3921f500,
+ 0x88e7f108,
+ 0x7e21f413,
+/* 0x092c: i2c_start_out */
+/* 0x092e: i2c_stop */
+ 0x37f000f8,
+ 0x3921f500,
+ 0x0037f008,
+ 0x085b21f5,
+ 0x03e8e7f1,
+ 0xf07e21f4,
+ 0x21f50137,
+ 0xe7f10839,
+ 0x21f41388,
+ 0x0137f07e,
+ 0x085b21f5,
+ 0x1388e7f1,
+ 0xf87e21f4,
+/* 0x0961: i2c_bitw */
+ 0x5b21f500,
0xe8e7f108,
0x7e21f403,
- 0x088421f5,
- 0xb60901f4,
- 0x1bf40142,
-/* 0x08d5: i2c_raise_scl_done */
- 0xf840fcef,
-/* 0x08d9: i2c_start */
- 0x8421f500,
- 0x0d11f408,
- 0x089c21f5,
- 0xf40611f4,
-/* 0x08ea: i2c_start_rep */
- 0x37f0300e,
- 0x4021f500,
- 0x0137f008,
- 0x086221f5,
0xb60076bb,
0x50f90465,
0xbb046594,
0x50bd0256,
0xfc0475fd,
- 0xb421f550,
+ 0xad21f550,
0x0464b608,
-/* 0x0917: i2c_start_send */
- 0xf01f11f4,
- 0x21f50037,
- 0xe7f10862,
- 0x21f41388,
- 0x0037f07e,
- 0x084021f5,
- 0x1388e7f1,
-/* 0x0933: i2c_start_out */
- 0xf87e21f4,
-/* 0x0935: i2c_stop */
- 0x0037f000,
- 0x084021f5,
- 0xf50037f0,
- 0xf1086221,
- 0xf403e8e7,
+ 0xf11811f4,
+ 0xf41388e7,
0x37f07e21,
- 0x4021f501,
+ 0x3921f500,
0x88e7f108,
0x7e21f413,
- 0xf50137f0,
- 0xf1086221,
- 0xf41388e7,
- 0x00f87e21,
-/* 0x0968: i2c_bitw */
- 0x086221f5,
- 0x03e8e7f1,
- 0xbb7e21f4,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x08b421f5,
- 0xf40464b6,
- 0xe7f11811,
+/* 0x09a0: i2c_bitw_out */
+/* 0x09a2: i2c_bitr */
+ 0x37f000f8,
+ 0x5b21f501,
+ 0xe8e7f108,
+ 0x7e21f403,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0xad21f550,
+ 0x0464b608,
+ 0xf51b11f4,
+ 0xf0089521,
+ 0x21f50037,
+ 0xe7f10839,
0x21f41388,
- 0x0037f07e,
- 0x084021f5,
- 0x1388e7f1,
-/* 0x09a7: i2c_bitw_out */
- 0xf87e21f4,
-/* 0x09a9: i2c_bitr */
- 0x0137f000,
- 0x086221f5,
- 0x03e8e7f1,
- 0xbb7e21f4,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x08b421f5,
- 0xf40464b6,
- 0x21f51b11,
- 0x37f0089c,
- 0x4021f500,
- 0x88e7f108,
- 0x7e21f413,
- 0xf4013cf0,
-/* 0x09ee: i2c_bitr_done */
- 0x00f80131,
-/* 0x09f0: i2c_get_byte */
- 0xf00057f0,
-/* 0x09f6: i2c_get_byte_next */
- 0x54b60847,
+ 0x013cf07e,
+/* 0x09e7: i2c_bitr_done */
+ 0xf80131f4,
+/* 0x09e9: i2c_get_byte */
+ 0x0057f000,
+/* 0x09ef: i2c_get_byte_next */
+ 0xb60847f0,
+ 0x76bb0154,
+ 0x0465b600,
+ 0x659450f9,
+ 0x0256bb04,
+ 0x75fd50bd,
+ 0xf550fc04,
+ 0xb609a221,
+ 0x11f40464,
+ 0x0553fd2b,
+ 0xf40142b6,
+ 0x37f0d81b,
0x0076bb01,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
0x21f550fc,
- 0x64b609a9,
- 0x2b11f404,
- 0xb60553fd,
- 0x1bf40142,
- 0x0137f0d8,
- 0xb60076bb,
- 0x50f90465,
- 0xbb046594,
- 0x50bd0256,
- 0xfc0475fd,
- 0x6821f550,
- 0x0464b609,
-/* 0x0a40: i2c_get_byte_done */
-/* 0x0a42: i2c_put_byte */
- 0x47f000f8,
-/* 0x0a45: i2c_put_byte_next */
- 0x0142b608,
- 0xbb3854ff,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x096821f5,
- 0xf40464b6,
- 0x46b03411,
- 0xd81bf400,
+ 0x64b60961,
+/* 0x0a39: i2c_get_byte_done */
+/* 0x0a3b: i2c_put_byte */
+ 0xf000f804,
+/* 0x0a3e: i2c_put_byte_next */
+ 0x42b60847,
+ 0x3854ff01,
0xb60076bb,
0x50f90465,
0xbb046594,
0x50bd0256,
0xfc0475fd,
- 0xa921f550,
+ 0x6121f550,
0x0464b609,
- 0xbb0f11f4,
- 0x36b00076,
- 0x061bf401,
-/* 0x0a9b: i2c_put_byte_done */
- 0xf80132f4,
-/* 0x0a9d: i2c_addr */
- 0x0076bb00,
+ 0xb03411f4,
+ 0x1bf40046,
+ 0x0076bbd8,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
0x21f550fc,
- 0x64b608d9,
- 0x2911f404,
- 0x012ec3e7,
- 0xfd0134b6,
- 0x76bb0553,
+ 0x64b609a2,
+ 0x0f11f404,
+ 0xb00076bb,
+ 0x1bf40136,
+ 0x0132f406,
+/* 0x0a94: i2c_put_byte_done */
+/* 0x0a96: i2c_addr */
+ 0x76bb00f8,
0x0465b600,
0x659450f9,
0x0256bb04,
0x75fd50bd,
0xf550fc04,
- 0xb60a4221,
-/* 0x0ae2: i2c_addr_done */
- 0x00f80464,
-/* 0x0ae4: i2c_acquire_addr */
- 0xb6f8cec7,
- 0xe0b702e4,
- 0xee980d1c,
-/* 0x0af3: i2c_acquire */
- 0xf500f800,
- 0xf40ae421,
- 0xd9f00421,
- 0x4021f403,
-/* 0x0b02: i2c_release */
- 0x21f500f8,
- 0x21f40ae4,
- 0x03daf004,
- 0xf84021f4,
-/* 0x0b11: i2c_recv */
- 0x0132f400,
- 0xb6f8c1c7,
- 0x16b00214,
- 0x3a1ff528,
- 0xf413a001,
- 0x0032980c,
- 0x0ccc13a0,
- 0xf4003198,
- 0xd0f90231,
- 0xd0f9e0f9,
- 0x000067f1,
- 0x100063f1,
- 0xbb016792,
+ 0xb608d221,
+ 0x11f40464,
+ 0x2ec3e729,
+ 0x0134b601,
+ 0xbb0553fd,
0x65b60076,
0x9450f904,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x0af321f5,
- 0xfc0464b6,
- 0x00d6b0d0,
- 0x00b31bf5,
- 0xbb0057f0,
+ 0x0a3b21f5,
+/* 0x0adb: i2c_addr_done */
+ 0xf80464b6,
+/* 0x0add: i2c_acquire_addr */
+ 0xf8cec700,
+ 0xb702e4b6,
+ 0x980d1ce0,
+ 0x00f800ee,
+/* 0x0aec: i2c_acquire */
+ 0x0add21f5,
+ 0xf00421f4,
+ 0x21f403d9,
+/* 0x0afb: i2c_release */
+ 0xf500f840,
+ 0xf40add21,
+ 0xdaf00421,
+ 0x4021f403,
+/* 0x0b0a: i2c_recv */
+ 0x32f400f8,
+ 0xf8c1c701,
+ 0xb00214b6,
+ 0x1ff52816,
+ 0x13a0013a,
+ 0x32980cf4,
+ 0xcc13a000,
+ 0x0031980c,
+ 0xf90231f4,
+ 0xf9e0f9d0,
+ 0x0067f1d0,
+ 0x0063f100,
+ 0x01679210,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0xec21f550,
+ 0x0464b60a,
+ 0xd6b0d0fc,
+ 0xb31bf500,
+ 0x0057f000,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0x9621f550,
+ 0x0464b60a,
+ 0x00d011f5,
+ 0xbbe0c5c7,
0x65b60076,
0x9450f904,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x0a9d21f5,
+ 0x0a3b21f5,
0xf50464b6,
- 0xc700d011,
- 0x76bbe0c5,
+ 0xf000ad11,
+ 0x76bb0157,
0x0465b600,
0x659450f9,
0x0256bb04,
0x75fd50bd,
0xf550fc04,
- 0xb60a4221,
+ 0xb60a9621,
0x11f50464,
- 0x57f000ad,
- 0x0076bb01,
- 0xf90465b6,
- 0x04659450,
- 0xbd0256bb,
- 0x0475fd50,
- 0x21f550fc,
- 0x64b60a9d,
- 0x8a11f504,
- 0x0076bb00,
- 0xf90465b6,
- 0x04659450,
- 0xbd0256bb,
- 0x0475fd50,
- 0x21f550fc,
- 0x64b609f0,
- 0x6a11f404,
- 0xbbe05bcb,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x093521f5,
- 0xb90464b6,
- 0x74bd025b,
-/* 0x0c17: i2c_recv_not_rd08 */
- 0xb0430ef4,
- 0x1bf401d6,
- 0x0057f03d,
- 0x0a9d21f5,
- 0xc73311f4,
- 0x21f5e0c5,
- 0x11f40a42,
- 0x0057f029,
- 0x0a9d21f5,
- 0xc71f11f4,
- 0x21f5e0b5,
- 0x11f40a42,
- 0x3521f515,
- 0xc774bd09,
- 0x1bf408c5,
- 0x0232f409,
-/* 0x0c57: i2c_recv_not_wr08 */
-/* 0x0c57: i2c_recv_done */
- 0xc7030ef4,
- 0x21f5f8ce,
- 0xe0fc0b02,
- 0x12f4d0fc,
- 0x027cb90a,
- 0x033621f5,
-/* 0x0c6c: i2c_recv_exit */
-/* 0x0c6e: i2c_init */
+ 0x76bb008a,
+ 0x0465b600,
+ 0x659450f9,
+ 0x0256bb04,
+ 0x75fd50bd,
+ 0xf550fc04,
+ 0xb609e921,
+ 0x11f40464,
+ 0xe05bcb6a,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0x2e21f550,
+ 0x0464b609,
+ 0xbd025bb9,
+ 0x430ef474,
+/* 0x0c10: i2c_recv_not_rd08 */
+ 0xf401d6b0,
+ 0x57f03d1b,
+ 0x9621f500,
+ 0x3311f40a,
+ 0xf5e0c5c7,
+ 0xf40a3b21,
+ 0x57f02911,
+ 0x9621f500,
+ 0x1f11f40a,
+ 0xf5e0b5c7,
+ 0xf40a3b21,
+ 0x21f51511,
+ 0x74bd092e,
+ 0xf408c5c7,
+ 0x32f4091b,
+ 0x030ef402,
+/* 0x0c50: i2c_recv_not_wr08 */
+/* 0x0c50: i2c_recv_done */
+ 0xf5f8cec7,
+ 0xfc0afb21,
+ 0xf4d0fce0,
+ 0x7cb90a12,
+ 0x3621f502,
+/* 0x0c65: i2c_recv_exit */
+/* 0x0c67: i2c_init */
+ 0xf800f803,
+/* 0x0c69: test_recv */
+ 0xd817f100,
+ 0x0614b605,
+ 0xb60011cf,
+ 0x07f10110,
+ 0x04b605d8,
+ 0x0001d006,
+ 0xe7f104bd,
+ 0xe3f1d900,
+ 0x21f5134f,
+ 0x00f80256,
+/* 0x0c90: test_init */
+ 0x0800e7f1,
+ 0x025621f5,
+/* 0x0c9a: idle_recv */
0x00f800f8,
-/* 0x0c70: test_recv */
- 0x05d817f1,
- 0xcf0614b6,
- 0x10b60011,
- 0xd807f101,
- 0x0604b605,
- 0xbd0001d0,
- 0x00e7f104,
- 0x4fe3f1d9,
- 0x5621f513,
-/* 0x0c97: test_init */
- 0xf100f802,
- 0xf50800e7,
- 0xf8025621,
-/* 0x0ca1: idle_recv */
-/* 0x0ca3: idle */
- 0xf400f800,
- 0x17f10031,
- 0x14b605d4,
- 0x0011cf06,
- 0xf10110b6,
- 0xb605d407,
- 0x01d00604,
-/* 0x0cbf: idle_loop */
- 0xf004bd00,
- 0x32f45817,
-/* 0x0cc5: idle_proc */
-/* 0x0cc5: idle_proc_exec */
- 0xb910f902,
- 0x21f5021e,
- 0x10fc033f,
- 0xf40911f4,
- 0x0ef40231,
-/* 0x0cd9: idle_proc_next */
- 0x5810b6ef,
- 0xf4061fb8,
- 0x02f4e61b,
- 0x0028f4dd,
- 0x00bb0ef4,
+/* 0x0c9c: idle */
+ 0xf10031f4,
+ 0xb605d417,
+ 0x11cf0614,
+ 0x0110b600,
+ 0x05d407f1,
+ 0xd00604b6,
+ 0x04bd0001,
+/* 0x0cb8: idle_loop */
+ 0xf45817f0,
+/* 0x0cbe: idle_proc */
+/* 0x0cbe: idle_proc_exec */
+ 0x10f90232,
+ 0xf5021eb9,
+ 0xfc033f21,
+ 0x0911f410,
+ 0xf40231f4,
+/* 0x0cd2: idle_proc_next */
+ 0x10b6ef0e,
+ 0x061fb858,
+ 0xf4e61bf4,
+ 0x28f4dd02,
+ 0xbb0ef400,
+ 0x00000000,
+ 0x00000000,
0x00000000,
0x00000000,
0x00000000,
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/memx.fuc b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/memx.fuc
index ec03f9a4290b..1663bf943d77 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/memx.fuc
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/fuc/memx.fuc
@@ -82,15 +82,15 @@ memx_train_tail:
// $r0 - zero
memx_func_enter:
#if NVKM_PPWR_CHIPSET == GT215
- movw $r8 0x1610
+ mov $r8 0x1610
nv_rd32($r7, $r8)
imm32($r6, 0xfffffffc)
and $r7 $r6
- movw $r6 0x2
+ mov $r6 0x2
or $r7 $r6
nv_wr32($r8, $r7)
#else
- movw $r6 0x001620
+ mov $r6 0x001620
imm32($r7, ~0x00000aa2);
nv_rd32($r8, $r6)
and $r8 $r7
@@ -101,7 +101,7 @@ memx_func_enter:
and $r8 $r7
nv_wr32($r6, $r8)
- movw $r6 0x0026f0
+ mov $r6 0x0026f0
nv_rd32($r8, $r6)
and $r8 $r7
nv_wr32($r6, $r8)
@@ -136,19 +136,19 @@ memx_func_leave:
bra nz #memx_func_leave_wait
#if NVKM_PPWR_CHIPSET == GT215
- movw $r8 0x1610
+ mov $r8 0x1610
nv_rd32($r7, $r8)
imm32($r6, 0xffffffcc)
and $r7 $r6
nv_wr32($r8, $r7)
#else
- movw $r6 0x0026f0
+ mov $r6 0x0026f0
imm32($r7, 0x00000001)
nv_rd32($r8, $r6)
or $r8 $r7
nv_wr32($r6, $r8)
- movw $r6 0x001620
+ mov $r6 0x001620
nv_rd32($r8, $r6)
or $r8 $r7
nv_wr32($r6, $r8)
@@ -177,11 +177,11 @@ memx_func_wait_vblank:
bra #memx_func_wait_vblank_fini
memx_func_wait_vblank_head1:
- movw $r7 0x20
+ mov $r7 0x20
bra #memx_func_wait_vblank_0
memx_func_wait_vblank_head0:
- movw $r7 0x8
+ mov $r7 0x8
memx_func_wait_vblank_0:
nv_iord($r6, NV_PPWR_INPUT)
@@ -273,13 +273,13 @@ memx_func_train:
// $r5 - outer loop counter
// $r6 - inner loop counter
// $r7 - entry counter (#memx_train_head + $r7)
- movw $r5 0x3
- movw $r7 0x0
+ mov $r5 0x3
+ mov $r7 0x0
// Read random memory to wake up... things
imm32($r9, 0x700000)
nv_rd32($r8,$r9)
- movw $r14 0x2710
+ mov $r14 0x2710
call(nsec)
memx_func_train_loop_outer:
@@ -289,9 +289,9 @@ memx_func_train:
nv_wr32($r9, $r8)
push $r5
- movw $r6 0x0
+ mov $r6 0x0
memx_func_train_loop_inner:
- movw $r8 0x1111
+ mov $r8 0x1111
mulu $r9 $r6 $r8
shl b32 $r8 $r9 0x10
or $r8 $r9
@@ -315,7 +315,7 @@ memx_func_train:
// $r5 - inner inner loop counter
// $r9 - result
- movw $r5 0
+ mov $r5 0
imm32($r9, 0x8300ffff)
memx_func_train_loop_4x:
imm32($r10, 0x100080)
--
2.15.1
From: Michal Hocko <[email protected]>
[ Upstream commit 0537250fdc6c876ed4cbbe874c739aebef493ee2 ]
syzbot has noticed that xt_alloc_table_info can allocate a lot of memory.
This is an admin-only interface, but an admin in a namespace is sufficient
as well. eacd86ca3b03 ("net/netfilter/x_tables.c: use kvmalloc() in
xt_alloc_table_info()") has changed the opencoded kmalloc->vmalloc
fallback into kvmalloc. It has dropped __GFP_NORETRY on the way because
vmalloc has simply never fully supported __GFP_NORETRY semantic. This is
still the case because e.g. page tables backing the vmalloc area are
hardcoded GFP_KERNEL.
Revert back to __GFP_NORETRY as a poor man's defence against excessively
large allocation requests here. We will not rule out the OOM killer
completely but __GFP_NORETRY should at least stop the large request in
most cases.
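For reference, the opencoded fallback that eacd86ca3b03 replaced followed roughly the pattern below (a simplified sketch, not the exact removed code); the point is that __GFP_NORETRY only ever applied to the kmalloc() leg, a semantic kvmalloc() cannot fully carry over to its vmalloc fallback:

	/* simplified sketch of the pre-kvmalloc pattern in xt_alloc_table_info() */
	struct xt_table_info *info = NULL;
	size_t sz = sizeof(*info) + size;

	if (sz <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER))
		info = kmalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
	if (!info)
		info = vmalloc(sz);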
[[email protected]: coding-style fixes]
Fixes: eacd86ca3b03 ("net/netfilter/x_tables.c: use kvmalloc() in xt_alloc_table_info()")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Acked-by: Florian Westphal <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Cc: David S. Miller <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
net/netfilter/x_tables.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
index 60c92158a2cd..8a4947ff2ebf 100644
--- a/net/netfilter/x_tables.c
+++ b/net/netfilter/x_tables.c
@@ -1008,7 +1008,12 @@ struct xt_table_info *xt_alloc_table_info(unsigned int size)
if ((size >> PAGE_SHIFT) + 2 > totalram_pages)
return NULL;
- info = kvmalloc(sz, GFP_KERNEL);
+ /* __GFP_NORETRY is not fully supported by kvmalloc but it should
+ * work reasonably well if sz is too large and bail out rather
+ * than shoot all processes down before realizing there is nothing
+ * more to reclaim.
+ */
+ info = kvmalloc(sz, GFP_KERNEL | __GFP_NORETRY);
if (!info)
return NULL;
--
2.15.1
From: Ed Swierk <[email protected]>
[ Upstream commit 9382fe71c0058465e942a633869629929102843d ]
IPv4 and IPv6 packets may arrive with lower-layer padding that is not
included in the L3 length. For example, a short IPv4 packet may have
up to 6 bytes of padding following the IP payload when received on an
Ethernet device with a minimum packet length of 64 bytes.
Higher-layer processing functions in netfilter (e.g. nf_ip_checksum(),
and help() in nf_conntrack_ftp) assume skb->len reflects the length of
the L3 header and payload, rather than referring back to
ip_hdr->tot_len or ipv6_hdr->payload_len, and get confused by
lower-layer padding.
In the normal IPv4 receive path, ip_rcv() trims the packet to
ip_hdr->tot_len before invoking netfilter hooks. In the IPv6 receive
path, ip6_rcv() does the same using ipv6_hdr->payload_len. Similarly
in the br_netfilter receive path, br_validate_ipv4() and
br_validate_ipv6() trim the packet to the L3 length before invoking
netfilter hooks.
Currently in the OVS conntrack receive path, ovs_ct_execute() pulls
the skb to the L3 header but does not trim it to the L3 length before
calling nf_conntrack_in(NF_INET_PRE_ROUTING). When
nf_conntrack_proto_tcp encounters a packet with lower-layer padding,
nf_ip_checksum() fails causing a "nf_ct_tcp: bad TCP checksum" log
message. While extra zero bytes don't affect the checksum, the length
in the IP pseudoheader does. That length is based on skb->len, and
without trimming, it doesn't match the length the sender used when
computing the checksum.
In ovs_ct_execute(), trim the skb to the L3 length before higher-layer
processing.
Signed-off-by: Ed Swierk <[email protected]>
Acked-by: Pravin B Shelar <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
net/openvswitch/conntrack.c | 34 ++++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)
diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
index d558e882ca0c..285f8797c26a 100644
--- a/net/openvswitch/conntrack.c
+++ b/net/openvswitch/conntrack.c
@@ -1097,6 +1097,36 @@ static int ovs_ct_commit(struct net *net, struct sw_flow_key *key,
return 0;
}
+/* Trim the skb to the length specified by the IP/IPv6 header,
+ * removing any trailing lower-layer padding. This prepares the skb
+ * for higher-layer processing that assumes skb->len excludes padding
+ * (such as nf_ip_checksum). The caller needs to pull the skb to the
+ * network header, and ensure ip_hdr/ipv6_hdr points to valid data.
+ */
+static int ovs_skb_network_trim(struct sk_buff *skb)
+{
+ unsigned int len;
+ int err;
+
+ switch (skb->protocol) {
+ case htons(ETH_P_IP):
+ len = ntohs(ip_hdr(skb)->tot_len);
+ break;
+ case htons(ETH_P_IPV6):
+ len = sizeof(struct ipv6hdr)
+ + ntohs(ipv6_hdr(skb)->payload_len);
+ break;
+ default:
+ len = skb->len;
+ }
+
+ err = pskb_trim_rcsum(skb, len);
+ if (err)
+ kfree_skb(skb);
+
+ return err;
+}
+
/* Returns 0 on success, -EINPROGRESS if 'skb' is stolen, or other nonzero
* value if 'skb' is freed.
*/
@@ -1111,6 +1141,10 @@ int ovs_ct_execute(struct net *net, struct sk_buff *skb,
nh_ofs = skb_network_offset(skb);
skb_pull_rcsum(skb, nh_ofs);
+ err = ovs_skb_network_trim(skb);
+ if (err)
+ return err;
+
if (key->ip.frag != OVS_FRAG_TYPE_NONE) {
err = handle_fragments(net, key, info->zone.id, skb);
if (err)
--
2.15.1
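As a worked example of the padding issue fixed by the patch above: a
minimal TCP/IPv4 packet is 40 bytes (20-byte IP header plus 20-byte TCP
header), while the Ethernet minimum payload is 46 bytes, so up to 6 bytes
of padding can arrive with the frame. The sketch below is plain userspace
C with made-up sizes, not OVS or netfilter code; it only illustrates the
mismatch between skb->len and ip_hdr->tot_len that pskb_trim_rcsum()
removes.
#include <stdio.h>

int main(void)
{
        unsigned int tot_len = 40;          /* IP header (20) + TCP header (20) */
        unsigned int eth_min_payload = 46;  /* 64-byte frame minus MAC header and FCS */
        unsigned int skb_len = tot_len < eth_min_payload ? eth_min_payload : tot_len;

        /* The checksum code derives the L4 length from skb->len; with padding
         * it is 6 bytes larger than what the sender used, so verification
         * fails even though the padding bytes themselves are zero. */
        printf("skb->len=%u tot_len=%u padding=%u\n",
               skb_len, tot_len, skb_len - tot_len);
        return 0;
}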
From: Niklas Cassel <[email protected]>
[ Upstream commit 1b84ca187510f60f00f4e15255043ce19bb30410 ]
The interrupt status register in both dwmac1000 and dwmac4 ignores
interrupt enable (for dwmac4) / interrupt mask (for dwmac1000).
Therefore, if we want to check only the bits that can actually trigger
an irq, we have to filter the interrupt status register manually.
Commit 0a764db10337 ("stmmac: Discard masked flags in interrupt status
register") fixed this for dwmac1000. Fix the same issue for dwmac4.
Just like commit 0a764db10337 ("stmmac: Discard masked flags in
interrupt status register"), this makes sure that we do not get
spurious link up/link down prints.
Signed-off-by: Niklas Cassel <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
index 2f7d7ec59962..e1d03489ae63 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
@@ -562,10 +562,12 @@ static int dwmac4_irq_status(struct mac_device_info *hw,
struct stmmac_extra_stats *x)
{
void __iomem *ioaddr = hw->pcsr;
- u32 intr_status;
+ u32 intr_status = readl(ioaddr + GMAC_INT_STATUS);
+ u32 intr_enable = readl(ioaddr + GMAC_INT_EN);
int ret = 0;
- intr_status = readl(ioaddr + GMAC_INT_STATUS);
+ /* Discard disabled bits */
+ intr_status &= intr_enable;
/* Not used events (e.g. MMC interrupts) are not handled. */
if ((intr_status & mmc_tx_irq))
--
2.15.1
From: Yang Shi <[email protected]>
[ Upstream commit 3b454ad35043dfbd3b5d2bb92b0991d6342afb44 ]
In the current design, khugepaged needs to acquire mmap_sem before
scanning an mm. In some corner cases, khugepaged may scan a process
which is modifying its memory mapping, so khugepaged blocks in an
uninterruptible state. The process might hold mmap_sem for a long time
when modifying a huge memory space, which may trigger the khugepaged
hang below:
INFO: task khugepaged:270 blocked for more than 120 seconds.
Tainted: G E 4.9.65-006.ali3000.alios7.x86_64 #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
khugepaged D 0 270 2 0x00000000
ffff883f3deae4c0 0000000000000000 ffff883f610596c0 ffff883f7d359440
ffff883f63818000 ffffc90019adfc78 ffffffff817079a5 d67e5aa8c1860a64
0000000000000246 ffff883f7d359440 ffffc90019adfc88 ffff883f610596c0
Call Trace:
schedule+0x36/0x80
rwsem_down_read_failed+0xf0/0x150
call_rwsem_down_read_failed+0x18/0x30
down_read+0x20/0x40
khugepaged+0x476/0x11d0
kthread+0xe6/0x100
ret_from_fork+0x25/0x30
It is pointless to leave khugepaged blocked on the semaphore, so
replace down_read() with down_read_trylock(): khugepaged moves on to
scan the next mm quickly instead of blocking, and other processes get
more chances to install THP. khugepaged can then come back to scan the
skipped mm when it has finished the current full_scan round.
And it appears that the change can improve khugepaged efficiency a
little bit.
Below is the test result when running LTP on a 24 cores 4GB memory 2
nodes NUMA VM:
pristine w/ trylock
full_scan 197 187
pages_collapsed 21 26
thp_fault_alloc 40818 44466
thp_fault_fallback 18413 16679
thp_collapse_alloc 21 150
thp_collapse_alloc_failed 14 16
thp_file_alloc 369 369
[[email protected]: coding-style fixes]
[[email protected]: tweak comment]
[[email protected]: avoid uninitialized variable use]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Yang Shi <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
mm/khugepaged.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2fe26634e1a2..29221602d802 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1679,10 +1679,14 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
spin_unlock(&khugepaged_mm_lock);
mm = mm_slot->mm;
- down_read(&mm->mmap_sem);
- if (unlikely(khugepaged_test_exit(mm)))
- vma = NULL;
- else
+ /*
+ * Don't wait for semaphore (to avoid long wait times). Just move to
+ * the next mm on the list.
+ */
+ vma = NULL;
+ if (unlikely(!down_read_trylock(&mm->mmap_sem)))
+ goto breakouterloop_mmap_sem;
+ if (likely(!khugepaged_test_exit(mm)))
vma = find_vma(mm, khugepaged_scan.address);
progress++;
--
2.15.1
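The skip-instead-of-block behaviour introduced by the patch above can be
sketched in userspace with POSIX rwlocks (build with -pthread). This is
only an analogy: mmap_sem is a kernel rw_semaphore and khugepaged is a
kthread; the mm id and sleep times below are invented for illustration.
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t mmap_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Scans one "mm": returns 1 if scanned, 0 if skipped for a later pass. */
static int scan_mm(int mm_id)
{
        if (pthread_rwlock_tryrdlock(&mmap_lock) != 0) {
                printf("mm %d busy, skip and revisit on the next full scan\n", mm_id);
                return 0;
        }
        printf("scanning mm %d\n", mm_id);
        pthread_rwlock_unlock(&mmap_lock);
        return 1;
}

static void *writer(void *arg)
{
        (void)arg;
        pthread_rwlock_wrlock(&mmap_lock);  /* the "process" modifies its mappings */
        sleep(1);
        pthread_rwlock_unlock(&mmap_lock);
        return NULL;
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, writer, NULL);
        usleep(100 * 1000);  /* let the writer grab the lock first */
        scan_mm(1);          /* skipped instead of blocking for a second */
        pthread_join(t, NULL);
        scan_mm(1);          /* scanned once the writer is done */
        return 0;
}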
From: Mel Gorman <[email protected]>
[ Upstream commit 69d763fc6d3aee787a3e8c8c35092b4f4960fa5d ]
Minchan Kim asked the following question -- what lock protects the
address_space from being destroyed when a race happens between inode
truncation and __isolate_lru_page()? Jan Kara clarified by describing
the race as follows
CPU1 CPU2
truncate(inode) __isolate_lru_page()
...
truncate_inode_page(mapping, page);
delete_from_page_cache(page)
spin_lock_irqsave(&mapping->tree_lock, flags);
__delete_from_page_cache(page, NULL)
page_cache_tree_delete(..)
... mapping = page_mapping(page);
page->mapping = NULL;
...
spin_unlock_irqrestore(&mapping->tree_lock, flags);
page_cache_free_page(mapping, page)
put_page(page)
if (put_page_testzero(page)) -> false
- inode now has no pages and can be freed including embedded address_space
if (mapping && !mapping->a_ops->migratepage)
- we've dereferenced mapping which is potentially already free.
The race is theoretically possible but unlikely. Before
delete_from_page_cache, truncate_cleanup_page is called, so the page is
likely to be !PageDirty or PageWriteback, which gets skipped by the only
caller that checks the mapping in __isolate_lru_page. Even if the race
occurs, a substantial amount of work has to happen during a tiny window
with no preemption, but it could potentially be done using a virtual
machine to artificially slow one CPU or halt it during the critical
window.
This patch should eliminate the race with truncation by try-locking the
page before dereferencing the mapping and aborting if the lock was not
acquired. There was a suggestion from Huang Ying to use RCU as a
side-effect to prevent mapping being freed. However, I do not like the
solution as it's an unconventional means of preserving a mapping and
it's not a context where rcu_read_lock is obviously protecting rcu data.
Link: http://lkml.kernel.org/r/[email protected]
Fixes: c82449352854 ("mm: compaction: make isolate_lru_page() filter-aware again")
Signed-off-by: Mel Gorman <[email protected]>
Acked-by: Minchan Kim <[email protected]>
Cc: "Huang, Ying" <[email protected]>
Cc: Jan Kara <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
mm/vmscan.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a8a3729bfaa9..b3f5e337b64a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1436,14 +1436,24 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode)
if (PageDirty(page)) {
struct address_space *mapping;
+ bool migrate_dirty;
/*
* Only pages without mappings or that have a
* ->migratepage callback are possible to migrate
- * without blocking
+ * without blocking. However, we can be racing with
+ * truncation so it's necessary to lock the page
+ * to stabilise the mapping as truncation holds
+ * the page lock until after the page is removed
+ * from the page cache.
*/
+ if (!trylock_page(page))
+ return ret;
+
mapping = page_mapping(page);
- if (mapping && !mapping->a_ops->migratepage)
+ migrate_dirty = mapping && mapping->a_ops->migratepage;
+ unlock_page(page);
+ if (!migrate_dirty)
return ret;
}
}
--
2.15.1
From: Subash Abhinov Kasiviswanathan <[email protected]>
[ Upstream commit ea23d5e3bf340e413b8e05c13da233c99c64142b ]
Failures were seen in ICMPv6 fragmentation timeout tests if they were
run after the RFC2460 failure tests. Kernel was not sending out the
ICMPv6 fragment reassembly time exceeded packet after the fragmentation
reassembly timeout of 1 minute had elapsed.
This happened because the frag queue was not released if an error in
IPv6 fragmentation header was detected by RFC2460.
Fixes: 83f1999caeb1 ("netfilter: ipv6: nf_defrag: Pass on packets to stack per RFC2460")
Signed-off-by: Subash Abhinov Kasiviswanathan <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
net/ipv6/netfilter/nf_conntrack_reasm.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
index 5edfe66a3d7a..64ec23388450 100644
--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
+++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
@@ -263,6 +263,7 @@ static int nf_ct_frag6_queue(struct frag_queue *fq, struct sk_buff *skb,
* this case. -DaveM
*/
pr_debug("end of fragment not rounded to 8 bytes.\n");
+ inet_frag_kill(&fq->q, &nf_frags);
return -EPROTO;
}
if (end > fq->q.len) {
--
2.15.1
From: Jens Axboe <[email protected]>
[ Upstream commit 445251d0f4d329aa061f323546cd6388a3bb7ab5 ]
I ran into an issue on my laptop that triggered a bug on the
discard path:
WARNING: CPU: 2 PID: 207 at drivers/nvme/host/core.c:527 nvme_setup_cmd+0x3d3/0x430
Modules linked in: rfcomm fuse ctr ccm bnep arc4 binfmt_misc snd_hda_codec_hdmi nls_iso8859_1 nls_cp437 vfat snd_hda_codec_conexant fat snd_hda_codec_generic iwlmvm snd_hda_intel snd_hda_codec snd_hwdep mac80211 snd_hda_core snd_pcm snd_seq_midi snd_seq_midi_event snd_rawmidi snd_seq x86_pkg_temp_thermal intel_powerclamp kvm_intel uvcvideo iwlwifi btusb snd_seq_device videobuf2_vmalloc btintel videobuf2_memops kvm snd_timer videobuf2_v4l2 bluetooth irqbypass videobuf2_core aesni_intel aes_x86_64 crypto_simd cryptd snd glue_helper videodev cfg80211 ecdh_generic soundcore hid_generic usbhid hid i915 psmouse e1000e ptp pps_core xhci_pci xhci_hcd intel_gtt
CPU: 2 PID: 207 Comm: jbd2/nvme0n1p7- Tainted: G U 4.15.0+ #176
Hardware name: LENOVO 20FBCTO1WW/20FBCTO1WW, BIOS N1FET59W (1.33 ) 12/19/2017
RIP: 0010:nvme_setup_cmd+0x3d3/0x430
RSP: 0018:ffff880423e9f838 EFLAGS: 00010217
RAX: 0000000000000000 RBX: ffff880423e9f8c8 RCX: 0000000000010000
RDX: ffff88022b200010 RSI: 0000000000000002 RDI: 00000000327f0000
RBP: ffff880421251400 R08: ffff88022b200000 R09: 0000000000000009
R10: 0000000000000000 R11: 0000000000000000 R12: 000000000000ffff
R13: ffff88042341e280 R14: 000000000000ffff R15: ffff880421251440
FS: 0000000000000000(0000) GS:ffff880441500000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055b684795030 CR3: 0000000002e09006 CR4: 00000000001606e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
nvme_queue_rq+0x40/0xa00
? __sbitmap_queue_get+0x24/0x90
? blk_mq_get_tag+0xa3/0x250
? wait_woken+0x80/0x80
? blk_mq_get_driver_tag+0x97/0xf0
blk_mq_dispatch_rq_list+0x7b/0x4a0
? deadline_remove_request+0x49/0xb0
blk_mq_do_dispatch_sched+0x4f/0xc0
blk_mq_sched_dispatch_requests+0x106/0x170
__blk_mq_run_hw_queue+0x53/0xa0
__blk_mq_delay_run_hw_queue+0x83/0xa0
blk_mq_run_hw_queue+0x6c/0xd0
blk_mq_sched_insert_request+0x96/0x140
__blk_mq_try_issue_directly+0x3d/0x190
blk_mq_try_issue_directly+0x30/0x70
blk_mq_make_request+0x1a4/0x6a0
generic_make_request+0xfd/0x2f0
? submit_bio+0x5c/0x110
submit_bio+0x5c/0x110
? __blkdev_issue_discard+0x152/0x200
submit_bio_wait+0x43/0x60
ext4_process_freed_data+0x1cd/0x440
? account_page_dirtied+0xe2/0x1a0
ext4_journal_commit_callback+0x4a/0xc0
jbd2_journal_commit_transaction+0x17e2/0x19e0
? kjournald2+0xb0/0x250
kjournald2+0xb0/0x250
? wait_woken+0x80/0x80
? commit_timeout+0x10/0x10
kthread+0x111/0x130
? kthread_create_worker_on_cpu+0x50/0x50
? do_group_exit+0x3a/0xa0
ret_from_fork+0x1f/0x30
Code: 73 89 c1 83 ce 10 c1 e1 10 09 ca 83 f8 04 0f 87 0f ff ff ff 8b 4d 20 48 8b 7d 00 c1 e9 09 48 01 8c c7 00 08 00 00 e9 f8 fe ff ff <0f> ff 4c 89 c7 41 bc 0a 00 00 00 e8 0d 78 d6 ff e9 a1 fc ff ff
---[ end trace 50d361cc444506c8 ]---
print_req_error: I/O error, dev nvme0n1, sector 847167488
Decoding the assembly, the request claims to have 0xffff segments,
while nvme counts two. This turns out to be because we don't check
for a data carrying request on the mq scheduler path, and since
blk_phys_contig_segment() returns true for a non-data request,
we decrement the initial segment count of 0 and end up with
0xffff in the unsigned short.
There are a few issues here:
1) We should initialize the segment count for a discard to 1.
2) The discard merging is currently using the data limits for
segments and sectors.
Fix this up by having attempt_merge() correctly identify the
request, and by initializing the segment count correctly
for discards.
This can only be triggered with mq-deadline on discard capable
devices right now, which isn't a common configuration.
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
block/blk-core.c | 2 ++
block/blk-merge.c | 29 ++++++++++++++++++++++++++---
2 files changed, 28 insertions(+), 3 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index c01f4907dbbc..1feeb1a8aad9 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -3065,6 +3065,8 @@ void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
{
if (bio_has_data(bio))
rq->nr_phys_segments = bio_phys_segments(q, bio);
+ else if (bio_op(bio) == REQ_OP_DISCARD)
+ rq->nr_phys_segments = 1;
rq->__data_len = bio->bi_iter.bi_size;
rq->bio = rq->biotail = bio;
diff --git a/block/blk-merge.c b/block/blk-merge.c
index f5dedd57dff6..8d60a5bbcef9 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -551,6 +551,24 @@ static bool req_no_special_merge(struct request *req)
return !q->mq_ops && req->special;
}
+static bool req_attempt_discard_merge(struct request_queue *q, struct request *req,
+ struct request *next)
+{
+ unsigned short segments = blk_rq_nr_discard_segments(req);
+
+ if (segments >= queue_max_discard_segments(q))
+ goto no_merge;
+ if (blk_rq_sectors(req) + bio_sectors(next->bio) >
+ blk_rq_get_max_sectors(req, blk_rq_pos(req)))
+ goto no_merge;
+
+ req->nr_phys_segments = segments + blk_rq_nr_discard_segments(next);
+ return true;
+no_merge:
+ req_set_nomerge(q, req);
+ return false;
+}
+
static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
struct request *next)
{
@@ -684,9 +702,13 @@ static struct request *attempt_merge(struct request_queue *q,
* If we are allowed to merge, then append bio list
* from next to rq and release next. merge_requests_fn
* will have updated segment counts, update sector
- * counts here.
+ * counts here. Handle DISCARDs separately, as they
+ * have separate settings.
*/
- if (!ll_merge_requests_fn(q, req, next))
+ if (req_op(req) == REQ_OP_DISCARD) {
+ if (!req_attempt_discard_merge(q, req, next))
+ return NULL;
+ } else if (!ll_merge_requests_fn(q, req, next))
return NULL;
/*
@@ -716,7 +738,8 @@ static struct request *attempt_merge(struct request_queue *q,
req->__data_len += blk_rq_bytes(next);
- elv_merge_requests(q, req, next);
+ if (req_op(req) != REQ_OP_DISCARD)
+ elv_merge_requests(q, req, next);
/*
* 'next' is going away, so update stats accordingly
--
2.15.1
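The 0xffff segment count mentioned in the patch above is ordinary
unsigned wraparound in a 16-bit field. A tiny userspace illustration
(not block-layer code):
#include <stdio.h>

int main(void)
{
        unsigned short nr_phys_segments = 0;  /* discard request starts out with 0 */
        nr_phys_segments--;                   /* bogus decrement on the merge path */
        printf("0x%x\n", nr_phys_segments);   /* prints 0xffff */
        return 0;
}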
From: KarimAllah Ahmed <[email protected]>
[ Upstream commit a340b3e229b24a56f1c7f5826b15a3af0f4b13e5 ]
For EPT-violations that are triggered by a read, the pages are also mapped with
write permissions (if their memory region is also writable). That would avoid
getting yet another fault on the same page when a write occurs.
This optimization only happens when you have a "struct page" backing the memory
region. So also enable it for memory regions that do not have a "struct page".
Cc: Paolo Bonzini <[email protected]>
Cc: Radim Krčmář <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: KarimAllah Ahmed <[email protected]>
Reviewed-by: Paolo Bonzini <[email protected]>
Signed-off-by: Radim Krčmář <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
virt/kvm/kvm_main.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d81af263f50b..4f35f0dfe681 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1434,7 +1434,8 @@ static bool vma_is_valid(struct vm_area_struct *vma, bool write_fault)
static int hva_to_pfn_remapped(struct vm_area_struct *vma,
unsigned long addr, bool *async,
- bool write_fault, kvm_pfn_t *p_pfn)
+ bool write_fault, bool *writable,
+ kvm_pfn_t *p_pfn)
{
unsigned long pfn;
int r;
@@ -1460,6 +1461,8 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
}
+ if (writable)
+ *writable = true;
/*
* Get a reference here because callers of *hva_to_pfn* and
@@ -1525,7 +1528,7 @@ retry:
if (vma == NULL)
pfn = KVM_PFN_ERR_FAULT;
else if (vma->vm_flags & (VM_IO | VM_PFNMAP)) {
- r = hva_to_pfn_remapped(vma, addr, async, write_fault, &pfn);
+ r = hva_to_pfn_remapped(vma, addr, async, write_fault, writable, &pfn);
if (r == -EAGAIN)
goto retry;
if (r < 0)
--
2.15.1
From: piaojun <[email protected]>
[ Upstream commit 025bcbde3634b2c9b316f227fed13ad6ad6817fb ]
If metadata is corrupted, such as an 'invalid inode block', the
'mount()' call will fail and the filesystem will be set read-only, as
below:
ocfs2_mount
ocfs2_initialize_super
ocfs2_init_global_system_inodes
ocfs2_iget
ocfs2_read_locked_inode
ocfs2_validate_inode_block
ocfs2_error
ocfs2_handle_error
ocfs2_set_ro_flag(osb, 0); // set readonly
In this situation we need to return -EROFS to 'mount.ocfs2', so that
the user can fix the filesystem with fsck and then mount it again. In
addition, 'mount.ocfs2' should be updated correspondingly, as it
currently returns 1 for every errno. I will post a patch for
'mount.ocfs2' too.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Jun Piao <[email protected]>
Reviewed-by: Alex Chen <[email protected]>
Reviewed-by: Joseph Qi <[email protected]>
Reviewed-by: Changwei Ge <[email protected]>
Reviewed-by: Gang He <[email protected]>
Cc: Mark Fasheh <[email protected]>
Cc: Joel Becker <[email protected]>
Cc: Junxiao Bi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/ocfs2/super.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
index 80733496b22a..24ab735d91dd 100644
--- a/fs/ocfs2/super.c
+++ b/fs/ocfs2/super.c
@@ -474,9 +474,8 @@ static int ocfs2_init_global_system_inodes(struct ocfs2_super *osb)
new = ocfs2_get_system_file_inode(osb, i, osb->slot_num);
if (!new) {
ocfs2_release_system_inodes(osb);
- status = -EINVAL;
+ status = ocfs2_is_soft_readonly(osb) ? -EROFS : -EINVAL;
mlog_errno(status);
- /* FIXME: Should ERROR_RO_FS */
mlog(ML_ERROR, "Unable to load system inode %d, "
"possibly corrupt fs?", i);
goto bail;
@@ -505,7 +504,7 @@ static int ocfs2_init_local_system_inodes(struct ocfs2_super *osb)
new = ocfs2_get_system_file_inode(osb, i, osb->slot_num);
if (!new) {
ocfs2_release_system_inodes(osb);
- status = -EINVAL;
+ status = ocfs2_is_soft_readonly(osb) ? -EROFS : -EINVAL;
mlog(ML_ERROR, "status=%d, sysfile=%d, slot=%d\n",
status, i, osb->slot_num);
goto bail;
--
2.15.1
From: "[email protected]" <[email protected]>
[ Upstream commit c25d99d20ba69824a1e2cc118e04b877cd427afc ]
The latest UV platforms include the new ApachePass NVDIMMs in the UV
address space. This has introduced address ranges in the Global Address
Map Table that are smaller than the previous lowest range, which was
2GB. Fix the address calculation so it accommodates address ranges
from bytes to exabytes.
Signed-off-by: Mike Travis <[email protected]>
Reviewed-by: Andrew Banman <[email protected]>
Reviewed-by: Dimitri Sivanich <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Russ Anderson <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/kernel/apic/x2apic_uv_x.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
index c0b694810ff4..02cfc615e3fb 100644
--- a/arch/x86/kernel/apic/x2apic_uv_x.c
+++ b/arch/x86/kernel/apic/x2apic_uv_x.c
@@ -1140,16 +1140,25 @@ static void __init decode_gam_rng_tbl(unsigned long ptr)
uv_gre_table = gre;
for (; gre->type != UV_GAM_RANGE_TYPE_UNUSED; gre++) {
+ unsigned long size = ((unsigned long)(gre->limit - lgre)
+ << UV_GAM_RANGE_SHFT);
+ int order = 0;
+ char suffix[] = " KMGTPE";
+
+ while (size > 9999 && order < sizeof(suffix)) {
+ size /= 1024;
+ order++;
+ }
+
if (!index) {
pr_info("UV: GAM Range Table...\n");
pr_info("UV: # %20s %14s %5s %4s %5s %3s %2s\n", "Range", "", "Size", "Type", "NASID", "SID", "PN");
}
- pr_info("UV: %2d: 0x%014lx-0x%014lx %5luG %3d %04x %02x %02x\n",
+ pr_info("UV: %2d: 0x%014lx-0x%014lx %5lu%c %3d %04x %02x %02x\n",
index++,
(unsigned long)lgre << UV_GAM_RANGE_SHFT,
(unsigned long)gre->limit << UV_GAM_RANGE_SHFT,
- ((unsigned long)(gre->limit - lgre)) >>
- (30 - UV_GAM_RANGE_SHFT), /* 64M -> 1G */
+ size, suffix[order],
gre->type, gre->nasid, gre->sockid, gre->pnode);
lgre = gre->limit;
--
2.15.1
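The size formatting added by the patch above can be exercised on its
own. The sketch below is a userspace rendition of the new loop; it
assumes a 64 MB range granule (UV_GAM_RANGE_SHFT of 26, as the old
"/* 64M -> 1G */" comment suggests), and the granule counts passed in
are invented, not real GAM table entries.
#include <stdio.h>

static void print_range_size(unsigned long granules)
{
        unsigned long size = granules << 26;  /* granule count -> bytes */
        const char suffix[] = " KMGTPE";
        int order = 0;

        while (size > 9999 && suffix[order + 1] != '\0') {
                size /= 1024;
                order++;
        }
        printf("%5lu%c\n", size, suffix[order]);
}

int main(void)
{
        print_range_size(1);          /* 64 MB range -> "   64M" */
        print_range_size(160);        /* 10 GB range -> "   10G" */
        print_range_size(1UL << 20);  /* 64 TB range -> "   64T" */
        return 0;
}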
From: piaojun <[email protected]>
[ Upstream commit 16c8d569f5704a84164f30ff01b29879f3438065 ]
A race between *set_acl and *get_acl can result in reading incomplete
xattr data, as below:
processA processB
ocfs2_set_acl
ocfs2_xattr_set
__ocfs2_xattr_set_handle
ocfs2_get_acl_nolock
ocfs2_xattr_get_nolock:
processB may read incomplete xattr data if processA has not finished
set_acl yet. So we should take 'ip_xattr_sem' to protect reads of the
extended attribute in ocfs2_get_acl_nolock(), as other processes could
be changing it concurrently.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Jun Piao <[email protected]>
Reviewed-by: Alex Chen <[email protected]>
Cc: Mark Fasheh <[email protected]>
Cc: Joel Becker <[email protected]>
Cc: Junxiao Bi <[email protected]>
Cc: Joseph Qi <[email protected]>
Cc: Changwei Ge <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/ocfs2/acl.c | 6 ++++++
fs/ocfs2/xattr.c | 2 ++
2 files changed, 8 insertions(+)
diff --git a/fs/ocfs2/acl.c b/fs/ocfs2/acl.c
index 40b5cc97f7b0..917fadca8a7b 100644
--- a/fs/ocfs2/acl.c
+++ b/fs/ocfs2/acl.c
@@ -311,7 +311,9 @@ struct posix_acl *ocfs2_iop_get_acl(struct inode *inode, int type)
if (had_lock < 0)
return ERR_PTR(had_lock);
+ down_read(&OCFS2_I(inode)->ip_xattr_sem);
acl = ocfs2_get_acl_nolock(inode, type, di_bh);
+ up_read(&OCFS2_I(inode)->ip_xattr_sem);
ocfs2_inode_unlock_tracker(inode, 0, &oh, had_lock);
brelse(di_bh);
@@ -330,7 +332,9 @@ int ocfs2_acl_chmod(struct inode *inode, struct buffer_head *bh)
if (!(osb->s_mount_opt & OCFS2_MOUNT_POSIX_ACL))
return 0;
+ down_read(&OCFS2_I(inode)->ip_xattr_sem);
acl = ocfs2_get_acl_nolock(inode, ACL_TYPE_ACCESS, bh);
+ up_read(&OCFS2_I(inode)->ip_xattr_sem);
if (IS_ERR(acl) || !acl)
return PTR_ERR(acl);
ret = __posix_acl_chmod(&acl, GFP_KERNEL, inode->i_mode);
@@ -361,8 +365,10 @@ int ocfs2_init_acl(handle_t *handle,
if (!S_ISLNK(inode->i_mode)) {
if (osb->s_mount_opt & OCFS2_MOUNT_POSIX_ACL) {
+ down_read(&OCFS2_I(dir)->ip_xattr_sem);
acl = ocfs2_get_acl_nolock(dir, ACL_TYPE_DEFAULT,
dir_bh);
+ up_read(&OCFS2_I(dir)->ip_xattr_sem);
if (IS_ERR(acl))
return PTR_ERR(acl);
}
diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c
index 5fdf269ba82e..fb0a4eec310c 100644
--- a/fs/ocfs2/xattr.c
+++ b/fs/ocfs2/xattr.c
@@ -638,9 +638,11 @@ int ocfs2_calc_xattr_init(struct inode *dir,
si->value_len);
if (osb->s_mount_opt & OCFS2_MOUNT_POSIX_ACL) {
+ down_read(&OCFS2_I(dir)->ip_xattr_sem);
acl_len = ocfs2_xattr_get_nolock(dir, dir_bh,
OCFS2_XATTR_INDEX_POSIX_ACL_DEFAULT,
"", NULL, 0);
+ up_read(&OCFS2_I(dir)->ip_xattr_sem);
if (acl_len > 0) {
a_size = ocfs2_xattr_entry_real_size(0, acl_len);
if (S_ISDIR(mode))
--
2.15.1
From: "Gustavo A. R. Silva" <[email protected]>
[ Upstream commit e4823fbd229bfbba368b40cdadb8f4eeb20604cc ]
Add suffix ULL to constant 80000 in order to avoid a potential integer
overflow and give the compiler complete information about the proper
arithmetic to use. Notice that this constant is used in a context that
expects an expression of type u64.
The current cast to u64 effectively applies to the whole expression
as an argument of type u64 to be passed to div64_u64, but it does
not prevent it from being evaluated using 32-bit arithmetic instead
of 64-bit arithmetic.
Also, once the expression is properly evaluated using 64-bit arithmetic,
there is no need for the parentheses and the external cast to u64.
Addresses-Coverity-ID: 1357588 ("Unintentional integer overflow")
Signed-off-by: Gustavo A. R. Silva <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
net/ipv4/tcp_nv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_nv.c b/net/ipv4/tcp_nv.c
index 125fc1450b01..09f8773fd769 100644
--- a/net/ipv4/tcp_nv.c
+++ b/net/ipv4/tcp_nv.c
@@ -327,7 +327,7 @@ static void tcpnv_acked(struct sock *sk, const struct ack_sample *sample)
*/
cwnd_by_slope = (u32)
div64_u64(((u64)ca->nv_rtt_max_rate) * ca->nv_min_rtt,
- (u64)(80000 * tp->mss_cache));
+ 80000ULL * tp->mss_cache);
max_win = cwnd_by_slope + nv_pad;
/* If cwnd > max_win, decrease cwnd
--
2.15.1
From: Vitaly Kuznetsov <[email protected]>
[ Upstream commit 89a8f6d4904c8cf3ff8fee9fdaff392a6bbb8bf6 ]
In hyperv_init() it is presumed that we always have access to the VP
index and hypercall MSRs, while according to the specification we should
check whether we are allowed to access the corresponding MSRs before
accessing them.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Cc: Stephen Hemminger <[email protected]>
Cc: [email protected]
Cc: Radim Krčmář <[email protected]>
Cc: Haiyang Zhang <[email protected]>
Cc: "Michael Kelley (EOSG)" <[email protected]>
Cc: Roman Kagan <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: [email protected]
Cc: Paolo Bonzini <[email protected]>
Cc: "K. Y. Srinivasan" <[email protected]>
Cc: Cathy Avery <[email protected]>
Cc: Mohammed Gamal <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/hyperv/hv_init.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index a0b86cf486e0..2e9d58cc371e 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -110,12 +110,19 @@ static int hv_cpu_init(unsigned int cpu)
*/
void hyperv_init(void)
{
- u64 guest_id;
+ u64 guest_id, required_msrs;
union hv_x64_msr_hypercall_contents hypercall_msr;
if (x86_hyper_type != X86_HYPER_MS_HYPERV)
return;
+ /* Absolutely required MSRs */
+ required_msrs = HV_X64_MSR_HYPERCALL_AVAILABLE |
+ HV_X64_MSR_VP_INDEX_AVAILABLE;
+
+ if ((ms_hyperv.features & required_msrs) != required_msrs)
+ return;
+
/* Allocate percpu VP index */
hv_vp_index = kmalloc_array(num_possible_cpus(), sizeof(*hv_vp_index),
GFP_KERNEL);
--
2.15.1
From: Vitaly Kuznetsov <[email protected]>
[ Upstream commit d391f1207067268261add0485f0f34503539c5b0 ]
I was investigating an issue with seabios >= 1.10 which stopped working
for nested KVM on Hyper-V. The problem appears to be in
handle_ept_violation() function: when we do fast mmio we need to skip
the instruction so we do kvm_skip_emulated_instruction(). This, however,
depends on VM_EXIT_INSTRUCTION_LEN field being set correctly in VMCS.
However, this is not the case.
Intel's manual doesn't mandate VM_EXIT_INSTRUCTION_LEN to be set when
EPT MISCONFIG occurs. While on real hardware it was observed to be set,
some hypervisors follow the spec and don't set it; we end up advancing
IP with some random value.
I checked with Microsoft and they confirmed they don't fill
VM_EXIT_INSTRUCTION_LEN on EPT MISCONFIG.
Fix the issue by doing instruction skip through emulator when running
nested.
Fixes: 68c3b4d1676d870f0453c31d5a52e7e65c7448ae
Suggested-by: Radim Krčmář <[email protected]>
Suggested-by: Paolo Bonzini <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
Acked-by: Michael S. Tsirkin <[email protected]>
Signed-off-by: Radim Krčmář <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/kvm/vmx.c | 16 +++++++++++++++-
arch/x86/kvm/x86.c | 3 ++-
2 files changed, 17 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index ae4803b213d0..bdd84ce4491e 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -6765,7 +6765,21 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
if (!is_guest_mode(vcpu) &&
!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
trace_kvm_fast_mmio(gpa);
- return kvm_skip_emulated_instruction(vcpu);
+ /*
+ * Doing kvm_skip_emulated_instruction() depends on undefined
+ * behavior: Intel's manual doesn't mandate
+ * VM_EXIT_INSTRUCTION_LEN to be set in VMCS when EPT MISCONFIG
+ * occurs and while on real hardware it was observed to be set,
+ * other hypervisors (namely Hyper-V) don't set it, we end up
+ * advancing IP with some random value. Disable fast mmio when
+ * running nested and keep it for real hardware in hope that
+ * VM_EXIT_INSTRUCTION_LEN will always be set correctly.
+ */
+ if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
+ return kvm_skip_emulated_instruction(vcpu);
+ else
+ return x86_emulate_instruction(vcpu, gpa, EMULTYPE_SKIP,
+ NULL, 0) == EMULATE_DONE;
}
ret = kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d7728bcd9a3c..3b2c3aa2cd07 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5699,7 +5699,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
* handle watchpoints yet, those would be handled in
* the emulate_ops.
*/
- if (kvm_vcpu_check_breakpoint(vcpu, &r))
+ if (!(emulation_type & EMULTYPE_SKIP) &&
+ kvm_vcpu_check_breakpoint(vcpu, &r))
return r;
ctxt->interruptibility = 0;
--
2.15.1
From: Andy Spencer <[email protected]>
[ Upstream commit 202a0a70e445caee1d0ec7aae814e64b1189fa4d ]
When the frame check sequence (FCS) is split across the last two frames
of a fragmented packet, part of the FCS gets counted twice, once when
subtracting the FCS, and again when subtracting the previously received
data.
For example, if 1602 bytes are received, and the first fragment contains
the first 1600 bytes (including the first two bytes of the FCS), and the
second fragment contains the last two bytes of the FCS:
'skb->len == 1600' from the first fragment
size = lstatus & BD_LENGTH_MASK; # 1602
size -= ETH_FCS_LEN; # 1598
size -= skb->len; # -2
Since the size is unsigned, it wraps around and causes a BUG later in
the packet handling, as shown below:
kernel BUG at ./include/linux/skbuff.h:2068!
Oops: Exception in kernel mode, sig: 5 [#1]
...
NIP [c021ec60] skb_pull+0x24/0x44
LR [c01e2fbc] gfar_clean_rx_ring+0x498/0x690
Call Trace:
[df7edeb0] [c01e2c1c] gfar_clean_rx_ring+0xf8/0x690 (unreliable)
[df7edf20] [c01e33a8] gfar_poll_rx_sq+0x3c/0x9c
[df7edf40] [c023352c] net_rx_action+0x21c/0x274
[df7edf90] [c0329000] __do_softirq+0xd8/0x240
[df7edff0] [c000c108] call_do_irq+0x24/0x3c
[c0597e90] [c00041dc] do_IRQ+0x64/0xc4
[c0597eb0] [c000d920] ret_from_except+0x0/0x18
--- interrupt: 501 at arch_cpu_idle+0x24/0x5c
Change the size to a signed integer and then trim off any part of the
FCS that was received prior to the last fragment.
Fixes: 6c389fc931bc ("gianfar: fix size of scatter-gathered frames")
Signed-off-by: Andy Spencer <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/net/ethernet/freescale/gianfar.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/freescale/gianfar.c b/drivers/net/ethernet/freescale/gianfar.c
index 7f837006bb6a..3bdeb295514b 100644
--- a/drivers/net/ethernet/freescale/gianfar.c
+++ b/drivers/net/ethernet/freescale/gianfar.c
@@ -2932,7 +2932,7 @@ static irqreturn_t gfar_transmit(int irq, void *grp_id)
static bool gfar_add_rx_frag(struct gfar_rx_buff *rxb, u32 lstatus,
struct sk_buff *skb, bool first)
{
- unsigned int size = lstatus & BD_LENGTH_MASK;
+ int size = lstatus & BD_LENGTH_MASK;
struct page *page = rxb->page;
bool last = !!(lstatus & BD_LFLAG(RXBD_LAST));
@@ -2947,11 +2947,16 @@ static bool gfar_add_rx_frag(struct gfar_rx_buff *rxb, u32 lstatus,
if (last)
size -= skb->len;
- /* in case the last fragment consisted only of the FCS */
+ /* Add the last fragment if it contains something other than
+ * the FCS, otherwise drop it and trim off any part of the FCS
+ * that was already received.
+ */
if (size > 0)
skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
rxb->page_offset + RXBUF_ALIGNMENT,
size, GFAR_RXB_TRUESIZE);
+ else if (size < 0)
+ pskb_trim(skb, skb->len + size);
}
/* try reuse page */
--
2.15.1
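The arithmetic from the commit message above can be reproduced in a few
lines of userspace C; the 1602/1600 byte counts are the commit's own
example, not values read from hardware.
#include <stdio.h>

int main(void)
{
        unsigned int usize = 1602;  /* lstatus & BD_LENGTH_MASK */
        usize -= 4;                 /* ETH_FCS_LEN */
        usize -= 1600;              /* skb->len from the first fragment */
        printf("unsigned size: %u\n", usize);  /* 4294967294: bogus fragment length */

        int ssize = 1602;
        ssize -= 4;
        ssize -= 1600;
        printf("signed size: %d\n", ssize);    /* -2: trim two FCS bytes off the skb */
        return 0;
}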
From: Logan Gunthorpe <[email protected]>
[ Upstream commit cbd27448faff4843ac4b66cc71445a10623ff48d ]
When using the max_mw_size parameter of ntb_transport to limit the size of
the Memory windows, communication cannot be established and the queues
freeze.
This is because the mw_size that's reported to the peer is correctly
limited but the size used locally is not. So the MW is initialized
with a buffer smaller than the window but the TX side is using the
full window. This means the TX side will be writing to a region of the
window that points nowhere.
This is easily fixed by applying the same limit to tx_size in
ntb_transport_init_queue().
Fixes: e26a5843f7f5 ("NTB: Split ntb_hw_intel and ntb_transport drivers")
Signed-off-by: Logan Gunthorpe <[email protected]>
Acked-by: Allen Hubbe <[email protected]>
Cc: Dave Jiang <[email protected]>
Signed-off-by: Jon Mason <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/ntb/ntb_transport.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c
index f58d8e305323..18339b7e88a4 100644
--- a/drivers/ntb/ntb_transport.c
+++ b/drivers/ntb/ntb_transport.c
@@ -998,6 +998,9 @@ static int ntb_transport_init_queue(struct ntb_transport_ctx *nt,
mw_base = nt->mw_vec[mw_num].phys_addr;
mw_size = nt->mw_vec[mw_num].phys_size;
+ if (max_mw_size && mw_size > max_mw_size)
+ mw_size = max_mw_size;
+
tx_size = (unsigned int)mw_size / num_qps_mw;
qp_offset = tx_size * (qp_num / mw_count);
--
2.15.1
From: Tang Junhui <[email protected]>
[ Upstream commit 682811b3ce1a5a4e20d700939a9042f01dbc66c4 ]
After a long run of random small IO writes, I rebooted the machine,
and after it powered on I found bcache stuck; the stack is:
[root@ceph153 ~]# cat /proc/2510/task/*/stack
[<ffffffffa06b2455>] closure_sync+0x25/0x90 [bcache]
[<ffffffffa06b6be8>] bch_journal+0x118/0x2b0 [bcache]
[<ffffffffa06b6dc7>] bch_journal_meta+0x47/0x70 [bcache]
[<ffffffffa06be8f7>] bch_prio_write+0x237/0x340 [bcache]
[<ffffffffa06a8018>] bch_allocator_thread+0x3c8/0x3d0 [bcache]
[<ffffffff810a631f>] kthread+0xcf/0xe0
[<ffffffff8164c318>] ret_from_fork+0x58/0x90
[<ffffffffffffffff>] 0xffffffffffffffff
[root@ceph153 ~]# cat /proc/2038/task/*/stack
[<ffffffffa06b1abd>] __bch_btree_map_nodes+0x12d/0x150 [bcache]
[<ffffffffa06b1bd1>] bch_btree_insert+0xf1/0x170 [bcache]
[<ffffffffa06b637f>] bch_journal_replay+0x13f/0x230 [bcache]
[<ffffffffa06c75fe>] run_cache_set+0x79a/0x7c2 [bcache]
[<ffffffffa06c0cf8>] register_bcache+0xd48/0x1310 [bcache]
[<ffffffff812f702f>] kobj_attr_store+0xf/0x20
[<ffffffff8125b216>] sysfs_write_file+0xc6/0x140
[<ffffffff811dfbfd>] vfs_write+0xbd/0x1e0
[<ffffffff811e069f>] SyS_write+0x7f/0xe0
[<ffffffff8164c3c9>] system_call_fastpath+0x16/0x1
The stack shows the register thread and allocator thread were getting
stuck when registering the cache device. I rebooted the machine several
times; the issue always exists on this machine. I debugged the code and
found the call trace below:
register_bcache()
==>run_cache_set()
==>bch_journal_replay()
==>bch_btree_insert()
==>__bch_btree_map_nodes()
==>btree_insert_fn()
==>btree_split() //node need split
==>btree_check_reserve()
btree_check_reserve() checks whether there are enough buckets of
RESERVE_BTREE type. Since the allocator thread has not started working
yet, no buckets of RESERVE_BTREE type have been allocated, so the
register thread waits on c->btree_cache_wait and goes to sleep.
Then the allocator thread is initialized; its call trace is below:
bch_allocator_thread()
==>bch_prio_write()
==>bch_journal_meta()
==>bch_journal()
==>journal_wait_for_write()
journal_wait_for_write() checks whether the journal is full via
journal_full(), and the long run of random small IO writes has
exhausted the journal buckets (journal.blocks_free=0). To release
journal buckets, the allocator calls btree_flush_write() to flush keys
to btree nodes and waits on c->journal.wait until the btree node writes
finish or some journal bucket space becomes available; then the
allocator thread goes to sleep. But in btree_flush_write(), since
bch_journal_replay() has not finished, no btree nodes have a journal
reference (the condition "if (btree_current_write(b)->journal)" is
never satisfied), so there is no btree node to flush, no journal bucket
is released, and the allocator sleeps forever.
Through the above analysis, we can see that:
1) The register thread waits for the allocator thread to allocate
buckets of RESERVE_BTREE type;
2) The allocator thread waits for the register thread to replay the
journal, so it can flush btree nodes and get a journal bucket.
So the two threads get stuck waiting for each other.
Hua Rui provided a patch that allocates some buckets of RESERVE_BTREE
type in advance, so the register thread can get a bucket when a btree
node splits and does not need to wait for the allocator thread. I
tested it: it helps, and the register thread runs a step further, but
eventually still gets stuck. The reason is that only 8 buckets of
RESERVE_BTREE type were allocated, and in bch_journal_replay(), after 2
btree node splits only 4 buckets of RESERVE_BTREE type are left, so
btree_check_reserve() is no longer satisfied and the register thread
goes to sleep again; meanwhile the allocator thread has not flushed
enough btree nodes to release a journal bucket, so they both get stuck
again.
So we need to allocate more buckets of RESERVE_BTREE type in advance,
but how many are enough? By experience and testing, I think it should
be as many as the journal buckets. I modified the code as in this
patch, tested it on the machine, and it works.
This patch is based on Hua Rui's patch and allocates more buckets of
RESERVE_BTREE type in advance, to avoid the register thread and
allocator thread waiting for each other.
[patch v2] ca->sb.njournal_buckets is 0 the first time after cache
creation, when no journal exists yet, so just 8 btree buckets is OK.
Signed-off-by: Hua Rui <[email protected]>
Signed-off-by: Tang Junhui <[email protected]>
Reviewed-by: Michael Lyle <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/md/bcache/btree.c | 9 ++++++---
drivers/md/bcache/super.c | 13 ++++++++++++-
2 files changed, 18 insertions(+), 4 deletions(-)
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 1598d1e04989..89d088cf95d9 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -1868,14 +1868,17 @@ void bch_initial_gc_finish(struct cache_set *c)
*/
for_each_cache(ca, c, i) {
for_each_bucket(b, ca) {
- if (fifo_full(&ca->free[RESERVE_PRIO]))
+ if (fifo_full(&ca->free[RESERVE_PRIO]) &&
+ fifo_full(&ca->free[RESERVE_BTREE]))
break;
if (bch_can_invalidate_bucket(ca, b) &&
!GC_MARK(b)) {
__bch_invalidate_one_bucket(ca, b);
- fifo_push(&ca->free[RESERVE_PRIO],
- b - ca->buckets);
+ if (!fifo_push(&ca->free[RESERVE_PRIO],
+ b - ca->buckets))
+ fifo_push(&ca->free[RESERVE_BTREE],
+ b - ca->buckets);
}
}
}
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 9417170f180a..c8cbb6cc1405 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1823,6 +1823,7 @@ void bch_cache_release(struct kobject *kobj)
static int cache_alloc(struct cache *ca)
{
size_t free;
+ size_t btree_buckets;
struct bucket *b;
__module_get(THIS_MODULE);
@@ -1830,9 +1831,19 @@ static int cache_alloc(struct cache *ca)
bio_init(&ca->journal.bio, ca->journal.bio.bi_inline_vecs, 8);
+ /*
+ * when ca->sb.njournal_buckets is not zero, journal exists,
+ * and in bch_journal_replay(), tree node may split,
+ * so bucket of RESERVE_BTREE type is needed,
+ * the worst situation is all journal buckets are valid journal,
+ * and all the keys need to replay,
+ * so the number of RESERVE_BTREE type buckets should be as much
+ * as journal buckets
+ */
+ btree_buckets = ca->sb.njournal_buckets ?: 8;
free = roundup_pow_of_two(ca->sb.nbuckets) >> 10;
- if (!init_fifo(&ca->free[RESERVE_BTREE], 8, GFP_KERNEL) ||
+ if (!init_fifo(&ca->free[RESERVE_BTREE], btree_buckets, GFP_KERNEL) ||
!init_fifo_exact(&ca->free[RESERVE_PRIO], prio_buckets(ca), GFP_KERNEL) ||
!init_fifo(&ca->free[RESERVE_MOVINGGC], free, GFP_KERNEL) ||
!init_fifo(&ca->free[RESERVE_NONE], free, GFP_KERNEL) ||
--
2.15.1
From: Yisheng Xie <[email protected]>
[ Upstream commit 0486a38bcc4749808edbc848f1bcf232042770fc ]
As in the migrate_pages manpage, errno should be set to EINVAL when
none of the node IDs specified by new_nodes are online and allowed by
the process's current cpuset context, or none of the specified nodes
contain memory. However, when tested with the following case:
new_nodes = 0;
old_nodes = 0xf;
ret = migrate_pages(pid, old_nodes, new_nodes, MAX);
ret will be 0 and no errno is set. As new_nodes is empty, we should
expect EINVAL as documented.
To fix cases like the above, this patch checks whether the intersection
of the target nodes and the current task_nodes is empty, and then
whether the intersection with node_states[N_MEMORY] is empty.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Yisheng Xie <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Chris Salls <[email protected]>
Cc: Christopher Lameter <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Tan Xiaojun <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
mm/mempolicy.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 80b67805b51d..2d3077ce50cd 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1440,10 +1440,14 @@ SYSCALL_DEFINE4(migrate_pages, pid_t, pid, unsigned long, maxnode,
goto out_put;
}
- if (!nodes_subset(*new, node_states[N_MEMORY])) {
- err = -EINVAL;
+ task_nodes = cpuset_mems_allowed(current);
+ nodes_and(*new, *new, task_nodes);
+ if (nodes_empty(*new))
+ goto out_put;
+
+ nodes_and(*new, *new, node_states[N_MEMORY]);
+ if (nodes_empty(*new))
goto out_put;
- }
err = security_task_movememory(task);
if (err)
--
2.15.1
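The commit's test fragment can be turned into a small standalone
reproducer. The sketch below is a hypothetical userspace version using
libnuma's migrate_pages(3) wrapper (build with -lnuma, run on a kernel
with NUMA support); with the fix applied, the empty target nodemask is
expected to fail with EINVAL rather than return 0.
#include <errno.h>
#include <numaif.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        unsigned long old_nodes[2] = { 0xf, 0 };  /* move pages away from nodes 0-3 */
        unsigned long new_nodes[2] = { 0, 0 };    /* ...to an empty set: invalid */
        long ret;

        /* pid 0 means the calling process; maxnode covers the bits supplied above */
        ret = migrate_pages(0, 2 * 8 * sizeof(unsigned long), old_nodes, new_nodes);
        printf("ret=%ld errno=%d (%s)\n", ret, ret < 0 ? errno : 0,
               ret < 0 ? strerror(errno) : "ok");
        return 0;
}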
From: Jacob Keller <[email protected]>
[ Upstream commit 02b4016bfe43d2d5ed043be7ffa56cda6a4d1100 ]
When implementing support for IP_USER_FLOW filters, we correctly
programmed a filter for both the non-fragmented IPv4/Other filter, as
well as the fragmented IPv4 filters. However, we did not properly
program the input set for fragmented IPv4 PCTYPE. This meant that the
filters would almost certainly not match, unless the user specified all
of the flow types.
Add support to program the fragmented IPv4 filter input set. Since we
always program these filters together, we'll assume that the two input
sets must match, and will thus always program the input sets to the same
value.
Signed-off-by: Jacob Keller <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 10 ++++++++++
drivers/net/ethernet/intel/i40e/i40e_main.c | 3 +++
2 files changed, 13 insertions(+)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index 05e89864f781..fc27ba5caa55 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -3648,6 +3648,16 @@ static int i40e_check_fdir_input_set(struct i40e_vsi *vsi,
i40e_write_fd_input_set(pf, index, new_mask);
+ /* IP_USER_FLOW filters match both IPv4/Other and IPv4/Fragmented
+ * frames. If we're programming the input set for IPv4/Other, we also
+ * need to program the IPv4/Fragmented input set. Since we don't have
+ * separate support, we'll always assume and enforce that the two flow
+ * types must have matching input sets.
+ */
+ if (index == I40E_FILTER_PCTYPE_NONF_IPV4_OTHER)
+ i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_FRAG_IPV4,
+ new_mask);
+
/* Add the new offset and update table, if necessary */
if (new_flex_offset) {
err = i40e_add_flex_offset(&pf->l4_flex_pit_list, src_offset,
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index b1cde1b051a4..d36b799116e4 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -5828,6 +5828,9 @@ static void i40e_fdir_filter_exit(struct i40e_pf *pf)
/* Reprogram the default input set for Other/IPv4 */
i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_NONF_IPV4_OTHER,
I40E_L3_SRC_MASK | I40E_L3_DST_MASK);
+
+ i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_FRAG_IPV4,
+ I40E_L3_SRC_MASK | I40E_L3_DST_MASK);
}
/**
--
2.15.1
From: Jake Daryll Obina <[email protected]>
[ Upstream commit 5bdd0c6f89fba430e18d636493398389dadc3b17 ]
If jffs2_iget() fails for a newly-allocated inode, jffs2_do_clear_inode()
can get called twice in the error handling path, the first call in
jffs2_iget() itself and the second through iget_failed(). This can result
to a use-after-free error in the second jffs2_do_clear_inode() call, such
as shown by the oops below wherein the second jffs2_do_clear_inode() call
was trying to free node fragments that were already freed in the first
jffs2_do_clear_inode() call.
[ 78.178860] jffs2: error: (1904) jffs2_do_read_inode_internal: CRC failed for read_inode of inode 24 at physical location 0x1fc00c
[ 78.178914] Unable to handle kernel paging request at virtual address 6b6b6b6b6b6b6b7b
[ 78.185871] pgd = ffffffc03a567000
[ 78.188794] [6b6b6b6b6b6b6b7b] *pgd=0000000000000000, *pud=0000000000000000
[ 78.194968] Internal error: Oops: 96000004 [#1] PREEMPT SMP
...
[ 78.513147] PC is at rb_first_postorder+0xc/0x28
[ 78.516503] LR is at jffs2_kill_fragtree+0x28/0x90 [jffs2]
[ 78.520672] pc : [<ffffff8008323d28>] lr : [<ffffff8000eb1cc8>] pstate: 60000105
[ 78.526757] sp : ffffff800cea38f0
[ 78.528753] x29: ffffff800cea38f0 x28: ffffffc01f3f8e80
[ 78.532754] x27: 0000000000000000 x26: ffffff800cea3c70
[ 78.536756] x25: 00000000dc67c8ae x24: ffffffc033d6945d
[ 78.540759] x23: ffffffc036811740 x22: ffffff800891a5b8
[ 78.544760] x21: 0000000000000000 x20: 0000000000000000
[ 78.548762] x19: ffffffc037d48910 x18: ffffff800891a588
[ 78.552764] x17: 0000000000000800 x16: 0000000000000c00
[ 78.556766] x15: 0000000000000010 x14: 6f2065646f6e695f
[ 78.560767] x13: 6461657220726f66 x12: 2064656c69616620
[ 78.564769] x11: 435243203a6c616e x10: 7265746e695f6564
[ 78.568771] x9 : 6f6e695f64616572 x8 : ffffffc037974038
[ 78.572774] x7 : bbbbbbbbbbbbbbbb x6 : 0000000000000008
[ 78.576775] x5 : 002f91d85bd44a2f x4 : 0000000000000000
[ 78.580777] x3 : 0000000000000000 x2 : 000000403755e000
[ 78.584779] x1 : 6b6b6b6b6b6b6b6b x0 : 6b6b6b6b6b6b6b6b
...
[ 79.038551] [<ffffff8008323d28>] rb_first_postorder+0xc/0x28
[ 79.042962] [<ffffff8000eb5578>] jffs2_do_clear_inode+0x88/0x100 [jffs2]
[ 79.048395] [<ffffff8000eb9ddc>] jffs2_evict_inode+0x3c/0x48 [jffs2]
[ 79.053443] [<ffffff8008201ca8>] evict+0xb0/0x168
[ 79.056835] [<ffffff8008202650>] iput+0x1c0/0x200
[ 79.060228] [<ffffff800820408c>] iget_failed+0x30/0x3c
[ 79.064097] [<ffffff8000eba0c0>] jffs2_iget+0x2d8/0x360 [jffs2]
[ 79.068740] [<ffffff8000eb0a60>] jffs2_lookup+0xe8/0x130 [jffs2]
[ 79.073434] [<ffffff80081f1a28>] lookup_slow+0x118/0x190
[ 79.077435] [<ffffff80081f4708>] walk_component+0xfc/0x28c
[ 79.081610] [<ffffff80081f4dd0>] path_lookupat+0x84/0x108
[ 79.085699] [<ffffff80081f5578>] filename_lookup+0x88/0x100
[ 79.089960] [<ffffff80081f572c>] user_path_at_empty+0x58/0x6c
[ 79.094396] [<ffffff80081ebe14>] vfs_statx+0xa4/0x114
[ 79.098138] [<ffffff80081ec44c>] SyS_newfstatat+0x58/0x98
[ 79.102227] [<ffffff800808354c>] __sys_trace_return+0x0/0x4
[ 79.106489] Code: d65f03c0 f9400001 b40000e1 aa0103e0 (f9400821)
The jffs2_do_clear_inode() call in jffs2_iget() is unnecessary since
iget_failed() will eventually call jffs2_do_clear_inode() if needed, so
just remove it.
Fixes: 5451f79f5f81 ("iget: stop JFFS2 from using iget() and read_inode()")
Reviewed-by: Richard Weinberger <[email protected]>
Signed-off-by: Jake Daryll Obina <[email protected]>
Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/jffs2/fs.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/jffs2/fs.c b/fs/jffs2/fs.c
index e96c6b05e43e..3c96f4bdc549 100644
--- a/fs/jffs2/fs.c
+++ b/fs/jffs2/fs.c
@@ -362,7 +362,6 @@ error_io:
ret = -EIO;
error:
mutex_unlock(&f->sem);
- jffs2_do_clear_inode(c, f);
iget_failed(inode);
return ERR_PTR(ret);
}
--
2.15.1
From: Michael Bringmann <[email protected]>
[ Upstream commit a346137e9142b039fd13af2e59696e3d40c487ef ]
On powerpc systems which allow 'hot-add' of CPU or memory resources,
it may occur that the new resources are to be inserted into nodes that
were not used for these resources at bootup. In the kernel, any node
that is used must be defined and initialized. These empty nodes may
occur when,
* Dedicated vs. shared resources. Shared resources require information
such as the VPHN hcall for CPU assignment to nodes. Associativity
decisions made based on dedicated resource rules, such as
associativity properties in the device tree, may vary from decisions
made using the values returned by the VPHN hcall.
* memoryless nodes at boot. Nodes need to be defined as 'possible' at
boot for operation with other code modules. Previously, the powerpc
code would limit the set of possible nodes to those which have
memory assigned at boot, and were thus online. Subsequent add/remove
of CPUs or memory would only work with this subset of possible
nodes.
* memoryless nodes with CPUs at boot. Due to the previous restriction
on nodes, nodes that had CPUs but no memory were being collapsed
into other nodes that did have memory at boot. In practice this
meant that the node assignment presented by the runtime kernel
differed from the affinity and associativity attributes presented by
the device tree or VPHN hcalls. Nodes that might be known to the
pHyp were not 'possible' in the runtime kernel because they did not
have memory at boot.
This patch ensures that sufficient nodes are defined to support
configuration requirements after boot, as well as at boot. This patch
set fixes a couple of problems.
* Nodes known to powerpc to be memoryless at boot, but to have CPUs in
them are allowed to be 'possible' and 'online'. Memory allocations
for those nodes are taken from another node that does have memory
until and if memory is hot-added to the node.
* Nodes which have no resources assigned at boot, but which may still
be referenced subsequently by affinity or associativity attributes,
are kept in the list of 'possible' nodes for powerpc. Hot-add of
memory or CPUs to the system can reference these nodes and bring them
online instead of redirecting to one of the set of nodes that were
known to have memory at boot.
This patch extracts the value of the lowest domain level (number of
allocable resources) from the device tree property
"ibm,max-associativity-domains" to use as the maximum number of nodes
to setup as possibly available in the system. This new setting will
override the instruction:
nodes_and(node_possible_map, node_possible_map, node_online_map);
presently seen in the function arch/powerpc/mm/numa.c:initmem_init().
If the "ibm,max-associativity-domains" property is not present at
boot, no operation will be performed to define or enable additional
nodes, or enable the above 'nodes_and()'.
Signed-off-by: Michael Bringmann <[email protected]>
Reviewed-by: Nathan Fontenot <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/powerpc/mm/numa.c | 37 ++++++++++++++++++++++++++++++++++---
1 file changed, 34 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index a81279249bfb..0356c6cceff7 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -887,6 +887,34 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
NODE_DATA(nid)->node_spanned_pages = spanned_pages;
}
+static void __init find_possible_nodes(void)
+{
+ struct device_node *rtas;
+ u32 numnodes, i;
+
+ if (min_common_depth <= 0)
+ return;
+
+ rtas = of_find_node_by_path("/rtas");
+ if (!rtas)
+ return;
+
+ if (of_property_read_u32_index(rtas,
+ "ibm,max-associativity-domains",
+ min_common_depth, &numnodes))
+ goto out;
+
+ for (i = 0; i < numnodes; i++) {
+ if (!node_possible(i)) {
+ setup_node_data(i, 0, 0);
+ node_set(i, node_possible_map);
+ }
+ }
+
+out:
+ of_node_put(rtas);
+}
+
void __init initmem_init(void)
{
int nid, cpu;
@@ -900,12 +928,15 @@ void __init initmem_init(void)
memblock_dump_all();
/*
- * Reduce the possible NUMA nodes to the online NUMA nodes,
- * since we do not support node hotplug. This ensures that we
- * lower the maximum NUMA node ID to what is actually present.
+ * Modify the set of possible NUMA nodes to reflect information
+ * available about the set of online nodes, and the set of nodes
+ * that we expect to make use of for this platform's affinity
+ * calculations.
*/
nodes_and(node_possible_map, node_possible_map, node_online_map);
+ find_possible_nodes();
+
for_each_online_node(nid) {
unsigned long start_pfn, end_pfn;
--
2.15.1
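To make the node-selection step above concrete: the patch reads the value at
index min_common_depth out of the "ibm,max-associativity-domains" property and
marks that many nodes as possible. Below is a minimal userspace sketch of just
that selection logic, with a made-up property array, depth and MAX_NODES; the
real code uses of_property_read_u32_index() and node_set(), which are only
mimicked here.

#include <stdio.h>
#include <stdint.h>

#define MAX_NODES 32

int main(void)
{
    /* Stand-in for the "ibm,max-associativity-domains" property contents. */
    const uint32_t max_assoc_domains[] = { 1, 2, 4, 8 };
    const int min_common_depth = 2;     /* assumed depth for the demo */
    uint32_t numnodes = max_assoc_domains[min_common_depth];
    int possible[MAX_NODES] = { 0 };
    uint32_t i;

    for (i = 0; i < numnodes && i < MAX_NODES; i++)
        possible[i] = 1;                /* mimics node_set(i, node_possible_map) */

    printf("%u nodes marked possible; node 0 possible = %d\n",
           numnodes, possible[0]);
    return 0;
}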
From: "Michael J. Ruhl" <[email protected]>
[ Upstream commit 82a979265638c505e12fbe7ba40980dc0901436d ]
The pci_request_irq() interface always adds the IRQF_SHARED bit to
all IRQ requests.
When the kernel is built with CONFIG_DEBUG_SHIRQ config flag, if the
IRQF_SHARED bit is set, a call to the IRQ handler is made from the
__free_irq() function. This is testing a race condition between the
IRQ cleanup and an IRQ racing the cleanup. The HFI driver should be
able to handle this race, but does not.
This race can cause traces that start with this footprint:
BUG: unable to handle kernel NULL pointer dereference at (null)
Call Trace:
<hfi1 irq handler>
...
__free_irq+0x1b3/0x2d0
free_irq+0x35/0x70
pci_free_irq+0x1c/0x30
clean_up_interrupts+0x53/0xf0 [hfi1]
hfi1_start_cleanup+0x122/0x190 [hfi1]
postinit_cleanup+0x1d/0x280 [hfi1]
remove_one+0x233/0x250 [hfi1]
pci_device_remove+0x39/0xc0
Export IRQ cleanup function so it can be called from other modules.
Using the exported cleanup function:
Re-order the driver cleanup code to clean up IRQ resources before
other resources, eliminating the race.
Re-order error path for init so that the race does not occur.
Reduce severity on spurious error message for SDMA IRQs to info.
Reviewed-by: Alex Estrin <[email protected]>
Reviewed-by: Patel Jay P <[email protected]>
Reviewed-by: Mike Marciniszyn <[email protected]>
Signed-off-by: Michael J. Ruhl <[email protected]>
Signed-off-by: Dennis Dalessandro <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/infiniband/hw/hfi1/chip.c | 18 ++++++++++++------
drivers/infiniband/hw/hfi1/hfi.h | 1 +
drivers/infiniband/hw/hfi1/init.c | 4 +++-
3 files changed, 16 insertions(+), 7 deletions(-)
diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
index 0e17d03ef1cb..82114ba86041 100644
--- a/drivers/infiniband/hw/hfi1/chip.c
+++ b/drivers/infiniband/hw/hfi1/chip.c
@@ -8294,8 +8294,8 @@ static irqreturn_t sdma_interrupt(int irq, void *data)
/* handle the interrupt(s) */
sdma_engine_interrupt(sde, status);
} else {
- dd_dev_err_ratelimited(dd, "SDMA engine %u interrupt, but no status bits set\n",
- sde->this_idx);
+ dd_dev_info_ratelimited(dd, "SDMA engine %u interrupt, but no status bits set\n",
+ sde->this_idx);
}
return IRQ_HANDLED;
}
@@ -12967,7 +12967,14 @@ static void disable_intx(struct pci_dev *pdev)
pci_intx(pdev, 0);
}
-static void clean_up_interrupts(struct hfi1_devdata *dd)
+/**
+ * hfi1_clean_up_interrupts() - Free all IRQ resources
+ * @dd: valid device data data structure
+ *
+ * Free the MSI or INTx IRQs and assoicated PCI resources,
+ * if they have been allocated.
+ */
+void hfi1_clean_up_interrupts(struct hfi1_devdata *dd)
{
int i;
@@ -13344,7 +13351,7 @@ static int set_up_interrupts(struct hfi1_devdata *dd)
return 0;
fail:
- clean_up_interrupts(dd);
+ hfi1_clean_up_interrupts(dd);
return ret;
}
@@ -14770,7 +14777,6 @@ void hfi1_start_cleanup(struct hfi1_devdata *dd)
aspm_exit(dd);
free_cntrs(dd);
free_rcverr(dd);
- clean_up_interrupts(dd);
finish_chip_resources(dd);
}
@@ -15229,7 +15235,7 @@ bail_free_rcverr:
bail_free_cntrs:
free_cntrs(dd);
bail_clear_intr:
- clean_up_interrupts(dd);
+ hfi1_clean_up_interrupts(dd);
bail_cleanup:
hfi1_pcie_ddcleanup(dd);
bail_free:
diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h
index 3409eee16092..dc9c951ef946 100644
--- a/drivers/infiniband/hw/hfi1/hfi.h
+++ b/drivers/infiniband/hw/hfi1/hfi.h
@@ -1954,6 +1954,7 @@ void hfi1_verbs_unregister_sysfs(struct hfi1_devdata *dd);
int qsfp_dump(struct hfi1_pportdata *ppd, char *buf, int len);
int hfi1_pcie_init(struct pci_dev *pdev, const struct pci_device_id *ent);
+void hfi1_clean_up_interrupts(struct hfi1_devdata *dd);
void hfi1_pcie_cleanup(struct pci_dev *pdev);
int hfi1_pcie_ddinit(struct hfi1_devdata *dd, struct pci_dev *pdev);
void hfi1_pcie_ddcleanup(struct hfi1_devdata *);
diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
index fba77001c3a7..d4fc8795cdf6 100644
--- a/drivers/infiniband/hw/hfi1/init.c
+++ b/drivers/infiniband/hw/hfi1/init.c
@@ -1039,8 +1039,9 @@ static void shutdown_device(struct hfi1_devdata *dd)
}
dd->flags &= ~HFI1_INITTED;
- /* mask interrupts, but not errors */
+ /* mask and clean up interrupts, but not errors */
set_intr_state(dd, 0);
+ hfi1_clean_up_interrupts(dd);
for (pidx = 0; pidx < dd->num_pports; ++pidx) {
ppd = dd->pport + pidx;
@@ -1696,6 +1697,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
dd_dev_err(dd, "Failed to create /dev devices: %d\n", -j);
if (initfail || ret) {
+ hfi1_clean_up_interrupts(dd);
stop_timers(dd);
flush_workqueue(ib_wq);
for (pidx = 0; pidx < dd->num_pports; ++pidx) {
--
2.15.1
From: Vitaly Kuznetsov <[email protected]>
[ Upstream commit 617ab45c9a8900e64a78b43696c02598b8cad68b ]
When hypercall-based TLB flush was enabled for Hyper-V guests, the PCID
feature was deliberately suppressed as a precaution: back then PCID was
never exposed to Hyper-V guests and it wasn't clear what would happen if
it someday became available. That day has come, and the PCID/INVPCID
features are already exposed on certain Hyper-V hosts.
The TLFS (as of 5.0b) leaves it unclear how TLB flush hypercalls combine
with PCID. In particular, PCID usage is per-CPU: the same mm gets
different CR3 values on different CPUs. If the hypercall did exact
matching this would fail. However, this is not the case. David Zhang
explains:
"In practice, the AddressSpace argument is ignored on any VM that supports
PCIDs.
Architecturally, the AddressSpace argument must match the CR3 with PCID
bits stripped out (i.e., the low 12 bits of AddressSpace should be 0 in
long mode). The flush hypercalls flush all PCIDs for the specified
AddressSpace."
With this, PCID can be enabled.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: David Zhang <[email protected]>
Cc: Stephen Hemminger <[email protected]>
Cc: Haiyang Zhang <[email protected]>
Cc: "Michael Kelley (EOSG)" <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: [email protected]
Cc: "K. Y. Srinivasan" <[email protected]>
Cc: Aditya Bhandari <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/hyperv/mmu.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
index 9cc9e1c1e2db..56c9ebac946f 100644
--- a/arch/x86/hyperv/mmu.c
+++ b/arch/x86/hyperv/mmu.c
@@ -137,7 +137,12 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
}
if (info->mm) {
+ /*
+ * AddressSpace argument must match the CR3 with PCID bits
+ * stripped out.
+ */
flush->address_space = virt_to_phys(info->mm->pgd);
+ flush->address_space &= CR3_ADDR_MASK;
flush->flags = 0;
} else {
flush->address_space = 0;
@@ -219,7 +224,12 @@ static void hyperv_flush_tlb_others_ex(const struct cpumask *cpus,
}
if (info->mm) {
+ /*
+ * AddressSpace argument must match the CR3 with PCID bits
+ * stripped out.
+ */
flush->address_space = virt_to_phys(info->mm->pgd);
+ flush->address_space &= CR3_ADDR_MASK;
flush->flags = 0;
} else {
flush->address_space = 0;
@@ -278,8 +288,6 @@ void hyperv_setup_mmu_ops(void)
if (!(ms_hyperv.hints & HV_X64_REMOTE_TLB_FLUSH_RECOMMENDED))
return;
- setup_clear_cpu_cap(X86_FEATURE_PCID);
-
if (!(ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED)) {
pr_info("Using hypercall for remote TLB flush\n");
pv_mmu_ops.flush_tlb_others = hyperv_flush_tlb_others;
--
2.15.1
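A minimal userspace sketch of the CR3 masking the patch above performs, using
a demo-only PCID mask and CR3 value; the kernel's real CR3_ADDR_MASK clears
more than just the low 12 bits, so this only illustrates stripping the PCID
before handing the address space to the hypercall.

#include <stdio.h>
#include <stdint.h>

/* Demo-only: the PCID lives in the low 12 bits of CR3. */
#define DEMO_PCID_MASK 0xfffULL

int main(void)
{
    uint64_t cr3 = 0x000000012345f00aULL;   /* page-table base plus PCID 0x00a */
    uint64_t address_space = cr3 & ~DEMO_PCID_MASK;

    printf("cr3           = 0x%016llx\n", (unsigned long long)cr3);
    printf("address_space = 0x%016llx\n", (unsigned long long)address_space);
    return 0;
}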
From: Andy Shevchenko <[email protected]>
[ Upstream commit c505cbd45f6e9c539d57dd171d95ec7e5e9f9cd0 ]
Some drivers may use the macro in a runtime flow, like
struct property_entry p[10];
...
p[index++] = PROPERTY_ENTRY_U8("u8 property", u8_data);
In that case, in the absence of the data type, the compiler fails the build:
drivers/char/ipmi/ipmi_dmi.c:79:29: error: Expected ; at end of statement
drivers/char/ipmi/ipmi_dmi.c:79:29: error: got {
Acked-by: Corey Minyard <[email protected]>
Cc: Corey Minyard <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
include/linux/property.h | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/include/linux/property.h b/include/linux/property.h
index 6bebee13c5e0..89d94b349912 100644
--- a/include/linux/property.h
+++ b/include/linux/property.h
@@ -206,7 +206,7 @@ struct property_entry {
*/
#define PROPERTY_ENTRY_INTEGER_ARRAY(_name_, _type_, _val_) \
-{ \
+(struct property_entry) { \
.name = _name_, \
.length = ARRAY_SIZE(_val_) * sizeof(_type_), \
.is_array = true, \
@@ -224,7 +224,7 @@ struct property_entry {
PROPERTY_ENTRY_INTEGER_ARRAY(_name_, u64, _val_)
#define PROPERTY_ENTRY_STRING_ARRAY(_name_, _val_) \
-{ \
+(struct property_entry) { \
.name = _name_, \
.length = ARRAY_SIZE(_val_) * sizeof(const char *), \
.is_array = true, \
@@ -233,7 +233,7 @@ struct property_entry {
}
#define PROPERTY_ENTRY_INTEGER(_name_, _type_, _val_) \
-{ \
+(struct property_entry) { \
.name = _name_, \
.length = sizeof(_type_), \
.is_string = false, \
@@ -250,7 +250,7 @@ struct property_entry {
PROPERTY_ENTRY_INTEGER(_name_, u64, _val_)
#define PROPERTY_ENTRY_STRING(_name_, _val_) \
-{ \
+(struct property_entry) { \
.name = _name_, \
.length = sizeof(_val_), \
.is_string = true, \
@@ -258,7 +258,7 @@ struct property_entry {
}
#define PROPERTY_ENTRY_BOOL(_name_) \
-{ \
+(struct property_entry) { \
.name = _name_, \
}
--
2.15.1
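The essence of the patch above is the switch from a plain brace initializer to
a compound literal, which is what makes the macro usable on the right-hand
side of an assignment. A minimal userspace sketch, with a made-up struct prop
and PROP_U8 macros standing in for property_entry and PROPERTY_ENTRY_U8:

#include <stdio.h>

struct prop {
    const char *name;
    unsigned char value;
};

/* Plain brace initializer: only valid in a definition/initialization. */
#define PROP_U8_INIT(_name_, _val_)  { .name = _name_, .value = _val_ }
/* Compound literal: also valid on the right-hand side of an assignment. */
#define PROP_U8(_name_, _val_)  (struct prop){ .name = _name_, .value = _val_ }

int main(void)
{
    struct prop p[10];
    int index = 0;

    /* p[index++] = PROP_U8_INIT("u8 property", 8);  would not compile */
    p[index++] = PROP_U8("u8 property", 8);          /* compiles fine */

    printf("%s = %d\n", p[0].name, p[0].value);
    return 0;
}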
From: Tang Junhui <[email protected]>
[ Upstream commit 73ac105be390c1de42a2f21643c9778a5e002930 ]
The back-end device sdm has already been attached to a cache set with ID
f67ebe1f-f8bc-4d73-bfe5-9dc88607f119. Trying to attach it to
another cache set returns an error:
[root]# cd /sys/block/sdm/bcache
[root]# echo 5ccd0a63-148e-48b8-afa2-aca9cbd6279f > attach
-bash: echo: write error: Invalid argument
After that, execute a command to modify the label of the bcache
device:
[root]# echo data_disk1 > label
Then we reboot the system. When the system powers on, the back-end
device cannot attach to the cache set, and a message shows up in the log:
Feb 5 12:05:52 ceph152 kernel: [922385.508498] bcache:
bch_cached_dev_attach() couldn't find uuid for sdm in set
In sysfs_attach(), dc->sb.set_uuid was assigned the value passed in
through sysfs, regardless of whether bch_cached_dev_attach() succeeded
or not. For example, if the back-end device had already been attached
to a cache set, bch_cached_dev_attach() would fail, but dc->sb.set_uuid
was changed anyway. Modifying the label of the bcache device then calls
bch_write_bdev_super(), which writes dc->sb.set_uuid to the super block,
so a wrong cache set ID is recorded in the super block. After the system
reboots, the cache set cannot find the uuid of the back-end device, so
the bcache device can no longer exist or be used.
In this patch, we don't assign the cache set ID to dc->sb.set_uuid
in sysfs_attach() directly, but pass it into bch_cached_dev_attach(),
and assign dc->sb.set_uuid to the cache set ID only after the back-end
device has attached to the cache set successfully.
Signed-off-by: Tang Junhui <[email protected]>
Reviewed-by: Michael Lyle <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/md/bcache/bcache.h | 2 +-
drivers/md/bcache/super.c | 10 ++++++----
drivers/md/bcache/sysfs.c | 6 ++++--
3 files changed, 11 insertions(+), 7 deletions(-)
diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index abd31e847f96..e4a3f692057b 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -906,7 +906,7 @@ void bcache_write_super(struct cache_set *);
int bch_flash_dev_create(struct cache_set *c, uint64_t size);
-int bch_cached_dev_attach(struct cached_dev *, struct cache_set *);
+int bch_cached_dev_attach(struct cached_dev *, struct cache_set *, uint8_t *);
void bch_cached_dev_detach(struct cached_dev *);
void bch_cached_dev_run(struct cached_dev *);
void bcache_device_stop(struct bcache_device *);
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index c8cbb6cc1405..31cf9227da72 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -933,7 +933,8 @@ void bch_cached_dev_detach(struct cached_dev *dc)
cached_dev_put(dc);
}
-int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c)
+int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c,
+ uint8_t *set_uuid)
{
uint32_t rtime = cpu_to_le32(get_seconds());
struct uuid_entry *u;
@@ -942,7 +943,8 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c)
bdevname(dc->bdev, buf);
- if (memcmp(dc->sb.set_uuid, c->sb.set_uuid, 16))
+ if ((set_uuid && memcmp(set_uuid, c->sb.set_uuid, 16)) ||
+ (!set_uuid && memcmp(dc->sb.set_uuid, c->sb.set_uuid, 16)))
return -ENOENT;
if (dc->disk.c) {
@@ -1184,7 +1186,7 @@ static void register_bdev(struct cache_sb *sb, struct page *sb_page,
list_add(&dc->list, &uncached_devices);
list_for_each_entry(c, &bch_cache_sets, list)
- bch_cached_dev_attach(dc, c);
+ bch_cached_dev_attach(dc, c, NULL);
if (BDEV_STATE(&dc->sb) == BDEV_STATE_NONE ||
BDEV_STATE(&dc->sb) == BDEV_STATE_STALE)
@@ -1706,7 +1708,7 @@ static void run_cache_set(struct cache_set *c)
bcache_write_super(c);
list_for_each_entry_safe(dc, t, &uncached_devices, list)
- bch_cached_dev_attach(dc, c);
+ bch_cached_dev_attach(dc, c, NULL);
flash_devs_run(c);
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index 234b2f5b286d..6dd03cf9053b 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -265,11 +265,13 @@ STORE(__cached_dev)
}
if (attr == &sysfs_attach) {
- if (bch_parse_uuid(buf, dc->sb.set_uuid) < 16)
+ uint8_t set_uuid[16];
+
+ if (bch_parse_uuid(buf, set_uuid) < 16)
return -EINVAL;
list_for_each_entry(c, &bch_cache_sets, list) {
- v = bch_cached_dev_attach(dc, c);
+ v = bch_cached_dev_attach(dc, c, set_uuid);
if (!v)
return size;
}
--
2.15.1
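The core idea of the patch above is to pass the requested UUID in as a
parameter and only write it into the superblock once the attach has succeeded.
A minimal userspace sketch of that ordering, with made-up structures and only
the UUID comparison of the real attach path:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <stdint.h>

struct cache_set { uint8_t set_uuid[16]; };
struct cached_dev { uint8_t sb_set_uuid[16]; };

static int cached_dev_attach(struct cached_dev *dc, struct cache_set *c,
                             const uint8_t *set_uuid)
{
    /* Fail if the requested UUID does not match this cache set. */
    if (set_uuid && memcmp(set_uuid, c->set_uuid, 16))
        return -ENOENT;
    /* ... the rest of the attach work would happen here ... */
    memcpy(dc->sb_set_uuid, c->set_uuid, 16);   /* only on success */
    return 0;
}

int main(void)
{
    struct cache_set c = { .set_uuid = { 0xf6, 0x7e } };
    struct cached_dev dc = { .sb_set_uuid = { 0 } };
    uint8_t wrong[16] = { 0x5c, 0xcd };

    printf("attach with wrong uuid: %d\n", cached_dev_attach(&dc, &c, wrong));
    printf("attach with right uuid: %d\n", cached_dev_attach(&dc, &c, c.set_uuid));
    return 0;
}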
From: Will Deacon <[email protected]>
[ Upstream commit 202fb4ef81e3ec765c23bd1e6746a5c25b797d0e ]
If the spinlock "next" ticket wraps around between the initial LDR
and the cmpxchg in the LSE version of spin_trylock, then we can erroneously
think that we have successfully acquired the lock because we only check
whether the next ticket returned by the cmpxchg is equal to the owner ticket
in our updated lock word.
This patch fixes the issue by performing a full 32-bit check of the lock
word when trying to determine whether or not the CASA instruction updated
memory.
Reported-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/arm64/include/asm/spinlock.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index 95ad7102b63c..82375b896be5 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -89,8 +89,8 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
" cbnz %w1, 1f\n"
" add %w1, %w0, %3\n"
" casa %w0, %w1, %2\n"
- " and %w1, %w1, #0xffff\n"
- " eor %w1, %w1, %w0, lsr #16\n"
+ " sub %w1, %w1, %3\n"
+ " eor %w1, %w1, %w0\n"
"1:")
: "=&r" (lockval), "=&r" (tmp), "+Q" (*lock)
: "I" (1 << TICKET_SHIFT)
--
2.15.1
From: Daniel Hua <[email protected]>
[ Upstream commit 3a53285228165225a7f76c7d5ff1ddc0213ce0e4 ]
Problem description:
After ethernet cable connect and disconnect for several iterations on a
device with i210, tx timestamp will stop being put into the socket.
Steps to reproduce:
1. Setup a device with i210 and wire it to a 802.1AS capable switch (
Extreme Networks Summit x440 is used in our case)
2. Have the gptp daemon running on the device and make sure it is synced
with the switch
3. Have the switch disable and enable the port, wait for the device gets
resynced with the switch
4. Iterate step 3 until the device is no longer able to get resynced
5. Review the log in dmesg and you will see the warning message "igb : clearing
Tx timestamp hang"
Root cause:
If ptp_tx_work() gets scheduled just before the port gets disabled, a LINK
DOWN event will be processed before ptp_tx_work(), which may cause a
timeout in ptp_tx_work(). In the timeout logic, the TXTT bit (Transmit
timestamp valid bit) of the TSYNCTXCTL register is not cleared, so no new
timestamp is loaded into the TXSTMP register. Consequently, no new
interrupt is triggered by the TSICR.TXTS bit and no more Tx timestamps
are sent to the socket.
Signed-off-by: Daniel Hua <[email protected]>
Tested-by: Aaron Brown <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/net/ethernet/intel/igb/igb_ptp.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ethernet/intel/igb/igb_ptp.c b/drivers/net/ethernet/intel/igb/igb_ptp.c
index 841c2a083349..0746b19ec6d3 100644
--- a/drivers/net/ethernet/intel/igb/igb_ptp.c
+++ b/drivers/net/ethernet/intel/igb/igb_ptp.c
@@ -643,6 +643,10 @@ static void igb_ptp_tx_work(struct work_struct *work)
adapter->ptp_tx_skb = NULL;
clear_bit_unlock(__IGB_PTP_TX_IN_PROGRESS, &adapter->state);
adapter->tx_hwtstamp_timeouts++;
+ /* Clear the tx valid bit in TSYNCTXCTL register to enable
+ * interrupt
+ */
+ rd32(E1000_TXSTMPH);
dev_warn(&adapter->pdev->dev, "clearing Tx timestamp hang\n");
return;
}
@@ -717,6 +721,7 @@ void igb_ptp_rx_hang(struct igb_adapter *adapter)
*/
void igb_ptp_tx_hang(struct igb_adapter *adapter)
{
+ struct e1000_hw *hw = &adapter->hw;
bool timeout = time_is_before_jiffies(adapter->ptp_tx_start +
IGB_PTP_TX_TIMEOUT);
@@ -736,6 +741,10 @@ void igb_ptp_tx_hang(struct igb_adapter *adapter)
adapter->ptp_tx_skb = NULL;
clear_bit_unlock(__IGB_PTP_TX_IN_PROGRESS, &adapter->state);
adapter->tx_hwtstamp_timeouts++;
+ /* Clear the tx valid bit in TSYNCTXCTL register to enable
+ * interrupt
+ */
+ rd32(E1000_TXSTMPH);
dev_warn(&adapter->pdev->dev, "clearing Tx timestamp hang\n");
}
}
--
2.15.1
From: Mark Salter <[email protected]>
[ Upstream commit b6dd4d83dc2f78cebc9a7e6e7e4bc2be4d29b94d ]
The pr_debug() in gic-v3 gic_send_sgi() can trigger a circular locking
warning:
GICv3: CPU10: ICC_SGI1R_EL1 5000400
======================================================
WARNING: possible circular locking dependency detected
4.15.0+ #1 Tainted: G W
------------------------------------------------------
dynamic_debug01/1873 is trying to acquire lock:
((console_sem).lock){-...}, at: [<0000000099c891ec>] down_trylock+0x20/0x4c
but task is already holding lock:
(&rq->lock){-.-.}, at: [<00000000842e1587>] __task_rq_lock+0x54/0xdc
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&rq->lock){-.-.}:
__lock_acquire+0x3b4/0x6e0
lock_acquire+0xf4/0x2a8
_raw_spin_lock+0x4c/0x60
task_fork_fair+0x3c/0x148
sched_fork+0x10c/0x214
copy_process.isra.32.part.33+0x4e8/0x14f0
_do_fork+0xe8/0x78c
kernel_thread+0x48/0x54
rest_init+0x34/0x2a4
start_kernel+0x45c/0x488
-> #1 (&p->pi_lock){-.-.}:
__lock_acquire+0x3b4/0x6e0
lock_acquire+0xf4/0x2a8
_raw_spin_lock_irqsave+0x58/0x70
try_to_wake_up+0x48/0x600
wake_up_process+0x28/0x34
__up.isra.0+0x60/0x6c
up+0x60/0x68
__up_console_sem+0x4c/0x7c
console_unlock+0x328/0x634
vprintk_emit+0x25c/0x390
dev_vprintk_emit+0xc4/0x1fc
dev_printk_emit+0x88/0xa8
__dev_printk+0x58/0x9c
_dev_info+0x84/0xa8
usb_new_device+0x100/0x474
hub_port_connect+0x280/0x92c
hub_event+0x740/0xa84
process_one_work+0x240/0x70c
worker_thread+0x60/0x400
kthread+0x110/0x13c
ret_from_fork+0x10/0x18
-> #0 ((console_sem).lock){-...}:
validate_chain.isra.34+0x6e4/0xa20
__lock_acquire+0x3b4/0x6e0
lock_acquire+0xf4/0x2a8
_raw_spin_lock_irqsave+0x58/0x70
down_trylock+0x20/0x4c
__down_trylock_console_sem+0x3c/0x9c
console_trylock+0x20/0xb0
vprintk_emit+0x254/0x390
vprintk_default+0x58/0x90
vprintk_func+0xbc/0x164
printk+0x80/0xa0
__dynamic_pr_debug+0x84/0xac
gic_raise_softirq+0x184/0x18c
smp_cross_call+0xac/0x218
smp_send_reschedule+0x3c/0x48
resched_curr+0x60/0x9c
check_preempt_curr+0x70/0xdc
wake_up_new_task+0x310/0x470
_do_fork+0x188/0x78c
SyS_clone+0x44/0x50
__sys_trace_return+0x0/0x4
other info that might help us debug this:
Chain exists of:
(console_sem).lock --> &p->pi_lock --> &rq->lock
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(&rq->lock);
lock(&p->pi_lock);
lock(&rq->lock);
lock((console_sem).lock);
*** DEADLOCK ***
2 locks held by dynamic_debug01/1873:
#0: (&p->pi_lock){-.-.}, at: [<000000001366df53>] wake_up_new_task+0x40/0x470
#1: (&rq->lock){-.-.}, at: [<00000000842e1587>] __task_rq_lock+0x54/0xdc
stack backtrace:
CPU: 10 PID: 1873 Comm: dynamic_debug01 Tainted: G W 4.15.0+ #1
Hardware name: GIGABYTE R120-T34-00/MT30-GS2-00, BIOS T48 10/02/2017
Call trace:
dump_backtrace+0x0/0x188
show_stack+0x24/0x2c
dump_stack+0xa4/0xe0
print_circular_bug.isra.31+0x29c/0x2b8
check_prev_add.constprop.39+0x6c8/0x6dc
validate_chain.isra.34+0x6e4/0xa20
__lock_acquire+0x3b4/0x6e0
lock_acquire+0xf4/0x2a8
_raw_spin_lock_irqsave+0x58/0x70
down_trylock+0x20/0x4c
__down_trylock_console_sem+0x3c/0x9c
console_trylock+0x20/0xb0
vprintk_emit+0x254/0x390
vprintk_default+0x58/0x90
vprintk_func+0xbc/0x164
printk+0x80/0xa0
__dynamic_pr_debug+0x84/0xac
gic_raise_softirq+0x184/0x18c
smp_cross_call+0xac/0x218
smp_send_reschedule+0x3c/0x48
resched_curr+0x60/0x9c
check_preempt_curr+0x70/0xdc
wake_up_new_task+0x310/0x470
_do_fork+0x188/0x78c
SyS_clone+0x44/0x50
__sys_trace_return+0x0/0x4
GICv3: CPU0: ICC_SGI1R_EL1 12000
This could be fixed with printk_deferred() but that might lessen its
usefulness for debugging. So change it to pr_devel to keep it out of
production kernels. Developers working on gic-v3 can enable it as
needed in their kernels.
Signed-off-by: Mark Salter <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/irqchip/irq-gic-v3.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index ae9ff72e83ee..4d4d46dc1a6d 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -645,7 +645,7 @@ static void gic_send_sgi(u64 cluster_id, u16 tlist, unsigned int irq)
MPIDR_TO_SGI_AFFINITY(cluster_id, 1) |
tlist << ICC_SGI1R_TARGET_LIST_SHIFT);
- pr_debug("CPU%d: ICC_SGI1R_EL1 %llx\n", smp_processor_id(), val);
+ pr_devel("CPU%d: ICC_SGI1R_EL1 %llx\n", smp_processor_id(), val);
gic_write_sgi1r(val);
}
--
2.15.1
From: Hans de Goede <[email protected]>
[ Upstream commit 63347db0affadcbccd5613116ea8431c70139b3e ]
The acpi_bus_get_status() wrapper for acpi_bus_get_status_handle() has some
code to handle certain device quirks; in some cases we also need this
quirk handling for the initial _STA call.
Specifically on some devices calling _STA before all _DEP dependencies
are met results in errors like these:
[ 0.123579] ACPI Error: No handler for Region [ECRM] (00000000ba9edc4c)
[GenericSerialBus] (20170831/evregion-166)
[ 0.123601] ACPI Error: Region GenericSerialBus (ID=9) has no handler
(20170831/exfldio-299)
[ 0.123618] ACPI Error: Method parse/execution failed
\_SB.I2C1.BAT1._STA, AE_NOT_EXIST (20170831/psparse-550)
acpi_bus_get_status() already has code to avoid this, so by using it we
also silence these errors from the initial _STA call.
Note that in order for the acpi_get_bus_status handling for this to work,
we initialize dep_unmet to 1 until acpi_device_dep_initialize gets called,
this means that battery devices will be instantiated with an initial
status of 0. This is not a problem: acpi_bus_attach will get called soon
after the instantiation anyway and will update the status as its first
order of business.
Signed-off-by: Hans de Goede <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/acpi/scan.c | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
index 2f2f50322ffb..c0984d33c4c8 100644
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -1568,6 +1568,8 @@ void acpi_init_device_object(struct acpi_device *device, acpi_handle handle,
device_initialize(&device->dev);
dev_set_uevent_suppress(&device->dev, true);
acpi_init_coherency(device);
+ /* Assume there are unmet deps until acpi_device_dep_initialize() runs */
+ device->dep_unmet = 1;
}
void acpi_device_add_finalize(struct acpi_device *device)
@@ -1591,6 +1593,14 @@ static int acpi_add_single_object(struct acpi_device **child,
}
acpi_init_device_object(device, handle, type, sta);
+ /*
+ * For ACPI_BUS_TYPE_DEVICE getting the status is delayed till here so
+ * that we can call acpi_bus_get_status() and use its quirk handling.
+ * Note this must be done before the get power-/wakeup_dev-flags calls.
+ */
+ if (type == ACPI_BUS_TYPE_DEVICE)
+ acpi_bus_get_status(device);
+
acpi_bus_get_power_flags(device);
acpi_bus_get_wakeup_device_flags(device);
@@ -1663,9 +1673,11 @@ static int acpi_bus_type_and_status(acpi_handle handle, int *type,
return -ENODEV;
*type = ACPI_BUS_TYPE_DEVICE;
- status = acpi_bus_get_status_handle(handle, sta);
- if (ACPI_FAILURE(status))
- *sta = 0;
+ /*
+ * acpi_add_single_object updates this once we've an acpi_device
+ * so that acpi_bus_get_status' quirk handling can be used.
+ */
+ *sta = 0;
break;
case ACPI_TYPE_PROCESSOR:
*type = ACPI_BUS_TYPE_PROCESSOR;
@@ -1763,6 +1775,8 @@ static void acpi_device_dep_initialize(struct acpi_device *adev)
acpi_status status;
int i;
+ adev->dep_unmet = 0;
+
if (!acpi_has_method(adev->handle, "_DEP"))
return;
--
2.15.1
From: Jean Delvare <[email protected]>
[ Upstream commit a7770ae194569e96a93c48aceb304edded9cc648 ]
The handling of empty DMI strings looks quite broken to me:
* Strings from 1 to 7 spaces are not considered empty.
* True empty DMI strings (string index set to 0) are not considered
empty, and result in allocating a 0-char string.
* Strings with invalid index also result in allocating a 0-char
string.
* Strings starting with 8 spaces are all considered empty, even if
non-space characters follow (sounds like a weird thing to do, but
I have actually seen occurrences of this in DMI tables before.)
* Strings which are considered empty are reported as 8 spaces,
instead of being actually empty.
Some of these issues are the result of an off-by-one error in memcmp,
the rest is incorrect by design.
So let's get it square: missing strings and strings made of only
spaces, regardless of their length, should be treated as empty and
no memory should be allocated for them. All other strings are
non-empty and should be allocated.
Signed-off-by: Jean Delvare <[email protected]>
Fixes: 79da4721117f ("x86: fix DMI out of memory problems")
Cc: Parag Warudkar <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/firmware/dmi_scan.c | 22 +++++++++-------------
1 file changed, 9 insertions(+), 13 deletions(-)
diff --git a/drivers/firmware/dmi_scan.c b/drivers/firmware/dmi_scan.c
index 783041964439..e8db9659a36b 100644
--- a/drivers/firmware/dmi_scan.c
+++ b/drivers/firmware/dmi_scan.c
@@ -18,7 +18,7 @@ EXPORT_SYMBOL_GPL(dmi_kobj);
* of and an antecedent to, SMBIOS, which stands for System
* Management BIOS. See further: http://www.dmtf.org/standards
*/
-static const char dmi_empty_string[] = " ";
+static const char dmi_empty_string[] = "";
static u32 dmi_ver __initdata;
static u32 dmi_len;
@@ -44,25 +44,21 @@ static int dmi_memdev_nr;
static const char * __init dmi_string_nosave(const struct dmi_header *dm, u8 s)
{
const u8 *bp = ((u8 *) dm) + dm->length;
+ const u8 *nsp;
if (s) {
- s--;
- while (s > 0 && *bp) {
+ while (--s > 0 && *bp)
bp += strlen(bp) + 1;
- s--;
- }
-
- if (*bp != 0) {
- size_t len = strlen(bp)+1;
- size_t cmp_len = len > 8 ? 8 : len;
- if (!memcmp(bp, dmi_empty_string, cmp_len))
- return dmi_empty_string;
+ /* Strings containing only spaces are considered empty */
+ nsp = bp;
+ while (*nsp == ' ')
+ nsp++;
+ if (*nsp != '\0')
return bp;
- }
}
- return "";
+ return dmi_empty_string;
}
static const char * __init dmi_string(const struct dmi_header *dm, u8 s)
--
2.15.1
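A minimal userspace sketch of the empty-string rule introduced above: missing
strings and strings made of only spaces map to a canonical empty string, while
anything with a non-space character is returned as-is. The sample strings and
helper name are made up for the demo.

#include <stdio.h>

static const char dmi_empty_string[] = "";

static const char *dmi_sanitize(const char *s)
{
    const char *nsp = s;

    /* Strings containing only spaces are considered empty. */
    while (*nsp == ' ')
        nsp++;
    if (*nsp == '\0')
        return dmi_empty_string;
    return s;
}

int main(void)
{
    const char *samples[] = { "ThinkPad X1", "        ", "   padded name", "" };
    unsigned int i;

    for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
        printf("'%s' -> '%s'\n", samples[i], dmi_sanitize(samples[i]));
    return 0;
}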
From: Arnd Bergmann <[email protected]>
[ Upstream commit ebfc15019cfa72496c674ffcb0b8ef10790dcddc ]
In some configurations, 'partial' does not get initialized, as shown by
this gcc-8 warning:
arch/x86/kernel/dumpstack.c: In function 'show_trace_log_lvl':
arch/x86/kernel/dumpstack.c:156:4: error: 'partial' may be used uninitialized in this function [-Werror=maybe-uninitialized]
show_regs_if_on_stack(&stack_info, regs, partial);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This initializes it to false, to get the previous behavior in this case.
Fixes: a9cdbe72c4e8 ("x86/dumpstack: Fix partial register dumps")
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Nicolas Pitre <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/kernel/dumpstack.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index afbecff161d1..a2d8a3908670 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -109,7 +109,7 @@ void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
struct stack_info stack_info = {0};
unsigned long visit_mask = 0;
int graph_idx = 0;
- bool partial;
+ bool partial = false;
printk("%sCall Trace:\n", log_lvl);
--
2.15.1
From: Alex Estrin <[email protected]>
[ Upstream commit 2b1e7fe16124e86ee9242aeeee859c79a843e3a2 ]
The dd refcount is speculatively incremented prior to allocating
the fd memory with kzalloc(). If that kzalloc() fails, the dd
refcount leaks.
Increment the refcount only on kzalloc() success.
Fixes: e11ffbd57520 ("IB/hfi1: Do not free hfi1 cdev parent structure early")
Reviewed-by: Michael J Ruhl <[email protected]>
Signed-off-by: Alex Estrin <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/infiniband/hw/hfi1/file_ops.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
index fd28f09b4445..ee2253d06984 100644
--- a/drivers/infiniband/hw/hfi1/file_ops.c
+++ b/drivers/infiniband/hw/hfi1/file_ops.c
@@ -191,9 +191,6 @@ static int hfi1_file_open(struct inode *inode, struct file *fp)
if (!atomic_inc_not_zero(&dd->user_refcount))
return -ENXIO;
- /* Just take a ref now. Not all opens result in a context assign */
- kobject_get(&dd->kobj);
-
/* The real work is performed later in assign_ctxt() */
fd = kzalloc(sizeof(*fd), GFP_KERNEL);
@@ -203,6 +200,7 @@ static int hfi1_file_open(struct inode *inode, struct file *fp)
fd->mm = current->mm;
mmgrab(fd->mm);
fd->dd = dd;
+ kobject_get(&fd->dd->kobj);
fp->private_data = fd;
} else {
fp->private_data = NULL;
--
2.15.1
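The fix above boils down to taking the extra reference only after the
allocation has succeeded, so a failed kzalloc() has nothing to undo. A minimal
userspace sketch of that ordering, with a plain int standing in for the
kobject refcount and made-up structure names:

#include <stdio.h>
#include <stdlib.h>

struct dev { int refcount; };
struct fd_priv { struct dev *dd; };

static struct fd_priv *open_fd(struct dev *dd)
{
    struct fd_priv *fd = calloc(1, sizeof(*fd));

    if (!fd)
        return NULL;            /* nothing was taken, nothing leaks */
    fd->dd = dd;
    fd->dd->refcount++;         /* reference taken only on the success path */
    return fd;
}

int main(void)
{
    struct dev dd = { .refcount = 1 };
    struct fd_priv *fd = open_fd(&dd);

    printf("refcount = %d\n", dd.refcount);
    free(fd);
    return 0;
}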
From: Yisheng Xie <[email protected]>
[ Upstream commit 56521e7a02b7b84a5e72691a1fb15570e6055545 ]
As Xiaojun reported the ltp of migrate_pages01 will fail on arm64 system
which has 4 nodes[0...3], all have memory and CONFIG_NODES_SHIFT=2:
migrate_pages01 0 TINFO : test_invalid_nodes
migrate_pages01 14 TFAIL : migrate_pages_common.c:45: unexpected failure - returned value = 0, expected: -1
migrate_pages01 15 TFAIL : migrate_pages_common.c:55: call succeeded unexpectedly
In this case the test_invalid_nodes of migrate_pages01 will call:
SYSC_migrate_pages as:
migrate_pages(0, , {0x0000000000000001}, 64, , {0x0000000000000010}, 64) = 0
The new nodes argument specifies one or more node IDs that are greater
than the maximum supported node ID; however, errno is not set to EINVAL
as expected.
As man pages of set_mempolicy[1], mbind[2], and migrate_pages[3]
mentioned, when nodemask specifies one or more node IDs that are greater
than the maximum supported node ID, the errno should set to EINVAL.
However, get_nodes() only checks whether the bits in
[BITS_PER_LONG*BITS_TO_LONGS(MAX_NUMNODES), maxnode) are zero or not, and
leaves [MAX_NUMNODES, BITS_PER_LONG*BITS_TO_LONGS(MAX_NUMNODES))
unchecked.
This patch checks the bits in [MAX_NUMNODES, maxnode) in get_nodes() so
that migrate_pages() sets errno to EINVAL when the nodemask specifies one
or more node IDs greater than the maximum supported node ID, as the man
pages describe.
[1] http://man7.org/linux/man-pages/man2/set_mempolicy.2.html
[2] http://man7.org/linux/man-pages/man2/mbind.2.html
[3] http://man7.org/linux/man-pages/man2/migrate_pages.2.html
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Yisheng Xie <[email protected]>
Reported-by: Tan Xiaojun <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Chris Salls <[email protected]>
Cc: Christopher Lameter <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
mm/mempolicy.c | 23 ++++++++++++++++++++---
1 file changed, 20 insertions(+), 3 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index a2af6d58a68f..80b67805b51d 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1262,6 +1262,7 @@ static int get_nodes(nodemask_t *nodes, const unsigned long __user *nmask,
unsigned long maxnode)
{
unsigned long k;
+ unsigned long t;
unsigned long nlongs;
unsigned long endmask;
@@ -1278,13 +1279,19 @@ static int get_nodes(nodemask_t *nodes, const unsigned long __user *nmask,
else
endmask = (1UL << (maxnode % BITS_PER_LONG)) - 1;
- /* When the user specified more nodes than supported just check
- if the non supported part is all zero. */
+ /*
+ * When the user specified more nodes than supported just check
+ * if the non supported part is all zero.
+ *
+ * If maxnode have more longs than MAX_NUMNODES, check
+ * the bits in that area first. And then go through to
+ * check the rest bits which equal or bigger than MAX_NUMNODES.
+ * Otherwise, just check bits [MAX_NUMNODES, maxnode).
+ */
if (nlongs > BITS_TO_LONGS(MAX_NUMNODES)) {
if (nlongs > PAGE_SIZE/sizeof(long))
return -EINVAL;
for (k = BITS_TO_LONGS(MAX_NUMNODES); k < nlongs; k++) {
- unsigned long t;
if (get_user(t, nmask + k))
return -EFAULT;
if (k == nlongs - 1) {
@@ -1297,6 +1304,16 @@ static int get_nodes(nodemask_t *nodes, const unsigned long __user *nmask,
endmask = ~0UL;
}
+ if (maxnode > MAX_NUMNODES && MAX_NUMNODES % BITS_PER_LONG != 0) {
+ unsigned long valid_mask = endmask;
+
+ valid_mask &= ~((1UL << (MAX_NUMNODES % BITS_PER_LONG)) - 1);
+ if (get_user(t, nmask + nlongs - 1))
+ return -EFAULT;
+ if (t & valid_mask)
+ return -EINVAL;
+ }
+
if (copy_from_user(nodes_addr(*nodes), nmask, nlongs*sizeof(unsigned long)))
return -EFAULT;
nodes_addr(*nodes)[nlongs-1] &= endmask;
--
2.15.1
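A minimal userspace sketch of the check added above, reduced to a single-word
nodemask: any bit set at or above MAX_NUMNODES makes the mask invalid.
MAX_NUMNODES, the return convention and the sample masks are assumptions for
the demo.

#include <stdio.h>

#define BITS_PER_LONG   (8 * (int)sizeof(unsigned long))
#define MAX_NUMNODES    4

/* Returns 0 if the mask is acceptable, -1 (EINVAL-like) otherwise. */
static int check_nodemask(unsigned long mask, unsigned long maxnode)
{
    unsigned long valid_mask = ~0UL;

    if (maxnode < (unsigned long)BITS_PER_LONG)
        valid_mask = (1UL << maxnode) - 1;
    /* Strip the bits below MAX_NUMNODES; anything left over is invalid. */
    valid_mask &= ~((1UL << MAX_NUMNODES) - 1);
    return (mask & valid_mask) ? -1 : 0;
}

int main(void)
{
    printf("mask 0x01 -> %d (node 0, valid)\n", check_nodemask(0x01, 64));
    printf("mask 0x10 -> %d (node 4, beyond MAX_NUMNODES)\n",
           check_nodemask(0x10, 64));
    return 0;
}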
From: piaojun <[email protected]>
[ Upstream commit d984187e3a1ad7d12447a7ab2c43ce3717a2b5b3 ]
We should not reuse the dirty bh in jbd2 directly due to the following
situation:
1. When removing extent rec, we will dirty the bhs of extent rec and
truncate log at the same time, and hand them over to jbd2.
2. The bhs are submitted to jbd2 area successfully.
3. The device's write-back thread helps flush the bhs to disk but
encounters a write error due to an abnormal storage link.
4. After a while the storage link becomes normal again. The truncate log
flush worker, triggered by the next space reclaim, finds the dirty bh of
the truncate log, clears its 'BH_Write_EIO' and then sets it uptodate in
__ocfs2_journal_access():
ocfs2_truncate_log_worker
ocfs2_flush_truncate_log
__ocfs2_flush_truncate_log
ocfs2_replay_truncate_records
ocfs2_journal_access_di
__ocfs2_journal_access // here we clear io_error and set 'tl_bh' uptodata.
5. Then jbd2 will flush the bh of the truncate log to disk, but the bh of
the extent rec is still in an error state, and unfortunately nobody will
take care of it.
6. In the end the space of the extent rec was not reduced, but the
truncate log flush worker has given it back to globalalloc. That causes a
duplicate cluster problem which can be identified by fsck.ocfs2.
Sadly we can hardly revert this, so set the fs read-only to avoid ruining
the atomicity and consistency of space reclaim.
Link: http://lkml.kernel.org/r/[email protected]
Fixes: acf8fdbe6afb ("ocfs2: do not BUG if buffer not uptodate in __ocfs2_journal_access")
Signed-off-by: Jun Piao <[email protected]>
Reviewed-by: Yiwen Jiang <[email protected]>
Reviewed-by: Changwei Ge <[email protected]>
Cc: Mark Fasheh <[email protected]>
Cc: Joel Becker <[email protected]>
Cc: Junxiao Bi <[email protected]>
Cc: Joseph Qi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/ocfs2/journal.c | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
index 36304434eacf..e5dcea6cee5f 100644
--- a/fs/ocfs2/journal.c
+++ b/fs/ocfs2/journal.c
@@ -666,23 +666,24 @@ static int __ocfs2_journal_access(handle_t *handle,
/* we can safely remove this assertion after testing. */
if (!buffer_uptodate(bh)) {
mlog(ML_ERROR, "giving me a buffer that's not uptodate!\n");
- mlog(ML_ERROR, "b_blocknr=%llu\n",
- (unsigned long long)bh->b_blocknr);
+ mlog(ML_ERROR, "b_blocknr=%llu, b_state=0x%lx\n",
+ (unsigned long long)bh->b_blocknr, bh->b_state);
lock_buffer(bh);
/*
- * A previous attempt to write this buffer head failed.
- * Nothing we can do but to retry the write and hope for
- * the best.
+ * A previous transaction with a couple of buffer heads fail
+ * to checkpoint, so all the bhs are marked as BH_Write_EIO.
+ * For current transaction, the bh is just among those error
+ * bhs which previous transaction handle. We can't just clear
+ * its BH_Write_EIO and reuse directly, since other bhs are
+ * not written to disk yet and that will cause metadata
+ * inconsistency. So we should set fs read-only to avoid
+ * further damage.
*/
if (buffer_write_io_error(bh) && !buffer_uptodate(bh)) {
- clear_buffer_write_io_error(bh);
- set_buffer_uptodate(bh);
- }
-
- if (!buffer_uptodate(bh)) {
unlock_buffer(bh);
- return -EIO;
+ return ocfs2_error(osb->sb, "A previous attempt to "
+ "write this buffer head failed\n");
}
unlock_buffer(bh);
}
--
2.15.1
From: Eryu Guan <[email protected]>
[ Upstream commit 6b136a24b05c81a24e0b648a4bd938bcd0c4f69e ]
Attributes that only implement .seq_ops are read-only; any write to
them should be rejected. But currently the kernel would crash when
writing to such debugfs entries, e.g.
chmod +w /sys/kernel/debug/block/<dev>/requeue_list
echo 0 > /sys/kernel/debug/block/<dev>/requeue_list
chmod -w /sys/kernel/debug/block/<dev>/requeue_list
Fix it by returning -EPERM in blk_mq_debugfs_write() when writing to
such attributes.
Cc: Ming Lei <[email protected]>
Signed-off-by: Eryu Guan <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
block/blk-mq-debugfs.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index de294d775acf..d95439154556 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -704,7 +704,11 @@ static ssize_t blk_mq_debugfs_write(struct file *file, const char __user *buf,
const struct blk_mq_debugfs_attr *attr = m->private;
void *data = d_inode(file->f_path.dentry->d_parent)->i_private;
- if (!attr->write)
+ /*
+ * Attributes that only implement .seq_ops are read-only and 'attr' is
+ * the same with 'data' in this case.
+ */
+ if (attr == data || !attr->write)
return -EPERM;
return attr->write(data, buf, count, ppos);
--
2.15.1
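The fix above amounts to refusing writes when an attribute has no write
handler. A minimal userspace sketch of that dispatch, with made-up attribute
and handler names (the real code additionally detects the seq_ops-only case
via the attr == data comparison):

#include <stdio.h>
#include <stddef.h>
#include <errno.h>

struct attr {
    const char *name;
    int (*write)(const char *buf);
};

static int do_write(const struct attr *attr, const char *buf)
{
    if (!attr->write)
        return -EPERM;          /* read-only attribute */
    return attr->write(buf);
}

static int state_write(const char *buf)
{
    printf("state <- '%s'\n", buf);
    return 0;
}

int main(void)
{
    struct attr rw = { "state", state_write };
    struct attr ro = { "requeue_list", NULL };

    printf("write rw -> %d\n", do_write(&rw, "run"));
    printf("write ro -> %d\n", do_write(&ro, "0"));
    return 0;
}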
From: Leon Romanovsky <[email protected]>
[ Upstream commit b081808a66345ba725b77ecd8d759bee874cd937 ]
A failure in the XRCD FW deallocation command leaks memory and returns
an error to the user, who can't do anything about it.
This patch changes the behavior to always free the memory and always
return success to the user.
Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Reviewed-by: Majd Dibbiny <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Reviewed-by: Yuval Shaia <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/infiniband/hw/mlx5/qp.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index c4d8cc1c2b1d..e1978d91a2f7 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -4636,13 +4636,10 @@ int mlx5_ib_dealloc_xrcd(struct ib_xrcd *xrcd)
int err;
err = mlx5_core_xrcd_dealloc(dev->mdev, xrcdn);
- if (err) {
+ if (err)
mlx5_ib_warn(dev, "failed to dealloc xrcdn 0x%x\n", xrcdn);
- return err;
- }
kfree(xrcd);
-
return 0;
}
--
2.15.1
From: Jacob Keller <[email protected]>
[ Upstream commit 40339af33c703bacb336493157d43c86a8bf2fed ]
In commit 36777d9fa24c ("i40e: check current configured input set when
adding ntuple filters") some code was added to report the input set
mask for a given filter when reporting it to the user.
This code is necessary so that the reported filter correctly displays
that it is or is not masking certain fields.
Unfortunately the code was incorrect. A development error accidentally
swapped the mask values for the IPv4 addresses with the L4 port numbers.
The port numbers are only 16 bits wide while IPv4 addresses are 32 bits.
Unfortunately we assigned only 16 bits to the IPv4 address masks.
Additionally we assigned the 32-bit value 0xFFFFFFFF to the TCP port
numbers. This second part does not matter as the value would be truncated
to 16 bits regardless, but it is unnecessary.
Fix the reported masks to properly report that the entire field is
masked.
Fixes: 36777d9fa24c ("i40e: check current configured input set when adding ntuple filters")
Signed-off-by: Jacob Keller <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index fc27ba5caa55..ef22793d6a03 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -2588,16 +2588,16 @@ static int i40e_get_ethtool_fdir_entry(struct i40e_pf *pf,
no_input_set:
if (input_set & I40E_L3_SRC_MASK)
- fsp->m_u.tcp_ip4_spec.ip4src = htonl(0xFFFF);
+ fsp->m_u.tcp_ip4_spec.ip4src = htonl(0xFFFFFFFF);
if (input_set & I40E_L3_DST_MASK)
- fsp->m_u.tcp_ip4_spec.ip4dst = htonl(0xFFFF);
+ fsp->m_u.tcp_ip4_spec.ip4dst = htonl(0xFFFFFFFF);
if (input_set & I40E_L4_SRC_MASK)
- fsp->m_u.tcp_ip4_spec.psrc = htons(0xFFFFFFFF);
+ fsp->m_u.tcp_ip4_spec.psrc = htons(0xFFFF);
if (input_set & I40E_L4_DST_MASK)
- fsp->m_u.tcp_ip4_spec.pdst = htons(0xFFFFFFFF);
+ fsp->m_u.tcp_ip4_spec.pdst = htons(0xFFFF);
if (rule->dest_ctl == I40E_FILTER_PROGRAM_DESC_DEST_DROP_PACKET)
fsp->ring_cookie = RX_CLS_FLOW_DISC;
--
2.15.1
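A small userspace illustration of the mask widths discussed above: htonl()
carries the full 32-bit IPv4 mask, while htons() only ever carries 16 bits,
so 0xFFFF is all a port mask needs. The printed values are just for the demo.

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void)
{
    uint32_t ip_mask_short = htonl(0xFFFF);     /* masks only half the address */
    uint32_t ip_mask_full  = htonl(0xFFFFFFFF); /* masks every address bit */
    uint16_t port_mask     = htons(0xFFFF);     /* masks every port bit */

    printf("ip4 mask (wrong) = 0x%08x\n", (unsigned int)ntohl(ip_mask_short));
    printf("ip4 mask (fixed) = 0x%08x\n", (unsigned int)ntohl(ip_mask_full));
    printf("port mask        = 0x%04x\n", (unsigned int)ntohs(port_mask));
    return 0;
}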
From: David Herrmann <[email protected]>
[ Upstream commit 587d8628fb71c3bfae29fb2bbe84c1478c59bac8 ]
This patch prevents the thinkpad_acpi driver from warning about 2 event
codes returned for keyboard palm-detection. No behavioral changes,
other than suppressing the warning in the kernel log. The events are
still forwarded via acpi-netlink channels.
We could, optionally, decide to forward the event through a
input-switch on the tpacpi input device. However, so far no suitable
input-code exists, and no similar drivers report such events. Hence,
leave it an acpi event for now.
Note that the event-codes are named based on empirical studies. On the
ThinkPad X1 5th Gen the sensor can be found underneath the arrow key.
Cc: Matthew Thode <[email protected]>
Signed-off-by: David Herrmann <[email protected]>
Acked-by: Henrique de Moraes Holschuh <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/platform/x86/thinkpad_acpi.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
index 2242d6035d9e..c407d52ef7cf 100644
--- a/drivers/platform/x86/thinkpad_acpi.c
+++ b/drivers/platform/x86/thinkpad_acpi.c
@@ -214,6 +214,10 @@ enum tpacpi_hkey_event_t {
/* AC-related events */
TP_HKEY_EV_AC_CHANGED = 0x6040, /* AC status changed */
+ /* Further user-interface events */
+ TP_HKEY_EV_PALM_DETECTED = 0x60b0, /* palm hoveres keyboard */
+ TP_HKEY_EV_PALM_UNDETECTED = 0x60b1, /* palm removed */
+
/* Misc */
TP_HKEY_EV_RFKILL_CHANGED = 0x7000, /* rfkill switch changed */
};
@@ -3973,6 +3977,12 @@ static bool hotkey_notify_6xxx(const u32 hkey,
*send_acpi_ev = false;
break;
+ case TP_HKEY_EV_PALM_DETECTED:
+ case TP_HKEY_EV_PALM_UNDETECTED:
+ /* palm detected hovering the keyboard, forward to user-space
+ * via netlink for consumption */
+ return true;
+
default:
pr_warn("unknown possible thermal alarm or keyboard event received\n");
known = false;
--
2.15.1
From: David Hildenbrand <[email protected]>
[ Upstream commit b3ecd4aa8632a86428605ab73393d14779019d82 ]
Another VCPU might try to modify the SCB while we are creating the
shadow SCB. In general this is no problem - unless the compiler decides
to not load values once, but e.g. twice.
For us, this is only relevant when checking/working with such values.
E.g. the prefix value, the mso, state of transactional execution and
addresses of satellite blocks.
E.g. if we blindly forward values (e.g. general purpose registers or
execution controls after masking), we don't care.
Leaving unpin_blocks() untouched for now; it will be handled separately.
The worst thing right now that I can see would be a missed prefix
un/remap (mso, prefix, tx) or using wrong guest addresses. Nothing
critical, but let's try to avoid unpredictable behavior.
Signed-off-by: David Hildenbrand <[email protected]>
Message-Id: <[email protected]>
Reviewed-by: Christian Borntraeger <[email protected]>
Acked-by: Cornelia Huck <[email protected]>
Signed-off-by: Christian Borntraeger <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/s390/kvm/vsie.c | 50 +++++++++++++++++++++++++++++++-------------------
1 file changed, 31 insertions(+), 19 deletions(-)
diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
index b18b5652e5c5..a74204db759b 100644
--- a/arch/s390/kvm/vsie.c
+++ b/arch/s390/kvm/vsie.c
@@ -31,7 +31,11 @@ struct vsie_page {
* the same offset as that in struct sie_page!
*/
struct mcck_volatile_info mcck_info; /* 0x0200 */
- /* the pinned originial scb */
+ /*
+ * The pinned original scb. Be aware that other VCPUs can modify
+ * it while we read from it. Values that are used for conditions or
+ * are reused conditionally, should be accessed via READ_ONCE.
+ */
struct kvm_s390_sie_block *scb_o; /* 0x0218 */
/* the shadow gmap in use by the vsie_page */
struct gmap *gmap; /* 0x0220 */
@@ -143,12 +147,13 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
{
struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
struct kvm_s390_sie_block *scb_o = vsie_page->scb_o;
- u32 crycb_addr = scb_o->crycbd & 0x7ffffff8U;
+ const uint32_t crycbd_o = READ_ONCE(scb_o->crycbd);
+ const u32 crycb_addr = crycbd_o & 0x7ffffff8U;
unsigned long *b1, *b2;
u8 ecb3_flags;
scb_s->crycbd = 0;
- if (!(scb_o->crycbd & vcpu->arch.sie_block->crycbd & CRYCB_FORMAT1))
+ if (!(crycbd_o & vcpu->arch.sie_block->crycbd & CRYCB_FORMAT1))
return 0;
/* format-1 is supported with message-security-assist extension 3 */
if (!test_kvm_facility(vcpu->kvm, 76))
@@ -186,12 +191,15 @@ static void prepare_ibc(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
{
struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
struct kvm_s390_sie_block *scb_o = vsie_page->scb_o;
+ /* READ_ONCE does not work on bitfields - use a temporary variable */
+ const uint32_t __new_ibc = scb_o->ibc;
+ const uint32_t new_ibc = READ_ONCE(__new_ibc) & 0x0fffU;
__u64 min_ibc = (sclp.ibc >> 16) & 0x0fffU;
scb_s->ibc = 0;
/* ibc installed in g2 and requested for g3 */
- if (vcpu->kvm->arch.model.ibc && (scb_o->ibc & 0x0fffU)) {
- scb_s->ibc = scb_o->ibc & 0x0fffU;
+ if (vcpu->kvm->arch.model.ibc && new_ibc) {
+ scb_s->ibc = new_ibc;
/* takte care of the minimum ibc level of the machine */
if (scb_s->ibc < min_ibc)
scb_s->ibc = min_ibc;
@@ -256,6 +264,10 @@ static int shadow_scb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
{
struct kvm_s390_sie_block *scb_o = vsie_page->scb_o;
struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
+ /* READ_ONCE does not work on bitfields - use a temporary variable */
+ const uint32_t __new_prefix = scb_o->prefix;
+ const uint32_t new_prefix = READ_ONCE(__new_prefix);
+ const bool wants_tx = READ_ONCE(scb_o->ecb) & ECB_TE;
bool had_tx = scb_s->ecb & ECB_TE;
unsigned long new_mso = 0;
int rc;
@@ -302,14 +314,14 @@ static int shadow_scb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
scb_s->icpua = scb_o->icpua;
if (!(atomic_read(&scb_s->cpuflags) & CPUSTAT_SM))
- new_mso = scb_o->mso & 0xfffffffffff00000UL;
+ new_mso = READ_ONCE(scb_o->mso) & 0xfffffffffff00000UL;
/* if the hva of the prefix changes, we have to remap the prefix */
- if (scb_s->mso != new_mso || scb_s->prefix != scb_o->prefix)
+ if (scb_s->mso != new_mso || scb_s->prefix != new_prefix)
prefix_unmapped(vsie_page);
/* SIE will do mso/msl validity and exception checks for us */
scb_s->msl = scb_o->msl & 0xfffffffffff00000UL;
scb_s->mso = new_mso;
- scb_s->prefix = scb_o->prefix;
+ scb_s->prefix = new_prefix;
/* We have to definetly flush the tlb if this scb never ran */
if (scb_s->ihcpu != 0xffffU)
@@ -321,11 +333,11 @@ static int shadow_scb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
if (test_kvm_cpu_feat(vcpu->kvm, KVM_S390_VM_CPU_FEAT_ESOP))
scb_s->ecb |= scb_o->ecb & ECB_HOSTPROTINT;
/* transactional execution */
- if (test_kvm_facility(vcpu->kvm, 73)) {
+ if (test_kvm_facility(vcpu->kvm, 73) && wants_tx) {
/* remap the prefix is tx is toggled on */
- if ((scb_o->ecb & ECB_TE) && !had_tx)
+ if (!had_tx)
prefix_unmapped(vsie_page);
- scb_s->ecb |= scb_o->ecb & ECB_TE;
+ scb_s->ecb |= ECB_TE;
}
/* SIMD */
if (test_kvm_facility(vcpu->kvm, 129)) {
@@ -544,9 +556,9 @@ static int pin_blocks(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
gpa_t gpa;
int rc = 0;
- gpa = scb_o->scaol & ~0xfUL;
+ gpa = READ_ONCE(scb_o->scaol) & ~0xfUL;
if (test_kvm_cpu_feat(vcpu->kvm, KVM_S390_VM_CPU_FEAT_64BSCAO))
- gpa |= (u64) scb_o->scaoh << 32;
+ gpa |= (u64) READ_ONCE(scb_o->scaoh) << 32;
if (gpa) {
if (!(gpa & ~0x1fffUL))
rc = set_validity_icpt(scb_s, 0x0038U);
@@ -566,7 +578,7 @@ static int pin_blocks(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
scb_s->scaol = (u32)(u64)hpa;
}
- gpa = scb_o->itdba & ~0xffUL;
+ gpa = READ_ONCE(scb_o->itdba) & ~0xffUL;
if (gpa && (scb_s->ecb & ECB_TE)) {
if (!(gpa & ~0x1fffU)) {
rc = set_validity_icpt(scb_s, 0x0080U);
@@ -581,7 +593,7 @@ static int pin_blocks(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
scb_s->itdba = hpa;
}
- gpa = scb_o->gvrd & ~0x1ffUL;
+ gpa = READ_ONCE(scb_o->gvrd) & ~0x1ffUL;
if (gpa && (scb_s->eca & ECA_VX) && !(scb_s->ecd & ECD_HOSTREGMGMT)) {
if (!(gpa & ~0x1fffUL)) {
rc = set_validity_icpt(scb_s, 0x1310U);
@@ -599,7 +611,7 @@ static int pin_blocks(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
scb_s->gvrd = hpa;
}
- gpa = scb_o->riccbd & ~0x3fUL;
+ gpa = READ_ONCE(scb_o->riccbd) & ~0x3fUL;
if (gpa && (scb_s->ecb3 & ECB3_RI)) {
if (!(gpa & ~0x1fffUL)) {
rc = set_validity_icpt(scb_s, 0x0043U);
@@ -617,8 +629,8 @@ static int pin_blocks(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
if ((scb_s->ecb & ECB_GS) && !(scb_s->ecd & ECD_HOSTREGMGMT)) {
unsigned long sdnxc;
- gpa = scb_o->sdnxo & ~0xfUL;
- sdnxc = scb_o->sdnxo & 0xfUL;
+ gpa = READ_ONCE(scb_o->sdnxo) & ~0xfUL;
+ sdnxc = READ_ONCE(scb_o->sdnxo) & 0xfUL;
if (!gpa || !(gpa & ~0x1fffUL)) {
rc = set_validity_icpt(scb_s, 0x10b0U);
goto unpin;
@@ -785,7 +797,7 @@ static void retry_vsie_icpt(struct vsie_page *vsie_page)
static int handle_stfle(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
{
struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
- __u32 fac = vsie_page->scb_o->fac & 0x7ffffff8U;
+ __u32 fac = READ_ONCE(vsie_page->scb_o->fac) & 0x7ffffff8U;
if (fac && test_kvm_facility(vcpu->kvm, 7)) {
retry_vsie_icpt(vsie_page);
--
2.15.1
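The pattern above is: read a field from the shared SCB exactly once into a
local, then use that snapshot for both the condition and the copy. A minimal
userspace sketch, using the common volatile-cast READ_ONCE() idiom (a demo
definition, not the kernel's) and a made-up shared 'prefix' variable:

#include <stdio.h>
#include <stdint.h>

#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

static uint32_t prefix;         /* imagine another thread updating this */

static void shadow_prefix(void)
{
    /* One load: the check and the copy below can never disagree. */
    const uint32_t new_prefix = READ_ONCE(prefix);

    if (new_prefix != 0)
        printf("remapping for prefix 0x%x\n", new_prefix);
    printf("shadow prefix = 0x%x\n", new_prefix);
}

int main(void)
{
    prefix = 0x2000;
    shadow_prefix();
    return 0;
}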
From: Corentin LABBE <[email protected]>
[ Upstream commit 980b4c95e78e4113cb7b9f430f121dab1c814b6c ]
Since CRYPTO_SHA384 does not exist, Kconfig should not select it.
Anyway, all SHA384 stuff is in CRYPTO_SHA512 which is already selected.
Fixes: a21eb94fc4d3 ("crypto: axis - add ARTPEC-6/7 crypto accelerator driver")
Signed-off-by: Corentin Labbe <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/crypto/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index fe33c199fc1a..143f8bc403b9 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -721,7 +721,6 @@ config CRYPTO_DEV_ARTPEC6
select CRYPTO_HASH
select CRYPTO_SHA1
select CRYPTO_SHA256
- select CRYPTO_SHA384
select CRYPTO_SHA512
help
Enables the driver for the on-chip crypto accelerator
--
2.15.1
From: Alan Brady <[email protected]>
[ Upstream commit e0346f9fcb6c636d2f870e6666de8781413f34ea ]
If we receive a link status message from the PF with link up before the
queues are actually enabled, it will trigger a TX hang. This fixes the
issue by ignoring a link up message if the VF state is not yet in the
RUNNING state.
Signed-off-by: Alan Brady <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
.../net/ethernet/intel/i40evf/i40evf_virtchnl.c | 35 ++++++++++++++--------
1 file changed, 23 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c b/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c
index 85876f4fb1fb..46bf11afba08 100644
--- a/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c
+++ b/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c
@@ -937,23 +937,34 @@ void i40evf_virtchnl_completion(struct i40evf_adapter *adapter,
if (v_opcode == VIRTCHNL_OP_EVENT) {
struct virtchnl_pf_event *vpe =
(struct virtchnl_pf_event *)msg;
+ bool link_up = vpe->event_data.link_event.link_status;
switch (vpe->event) {
case VIRTCHNL_EVENT_LINK_CHANGE:
adapter->link_speed =
vpe->event_data.link_event.link_speed;
- if (adapter->link_up !=
- vpe->event_data.link_event.link_status) {
- adapter->link_up =
- vpe->event_data.link_event.link_status;
- if (adapter->link_up) {
- netif_tx_start_all_queues(netdev);
- netif_carrier_on(netdev);
- } else {
- netif_tx_stop_all_queues(netdev);
- netif_carrier_off(netdev);
- }
- i40evf_print_link_message(adapter);
+
+ /* we've already got the right link status, bail */
+ if (adapter->link_up == link_up)
+ break;
+
+ /* If we get link up message and start queues before
+ * our queues are configured it will trigger a TX hang.
+ * In that case, just ignore the link status message,
+ * we'll get another one after we enable queues and
+ * actually prepared to send traffic.
+ */
+ if (link_up && adapter->state != __I40EVF_RUNNING)
+ break;
+
+ adapter->link_up = link_up;
+ if (link_up) {
+ netif_tx_start_all_queues(netdev);
+ netif_carrier_on(netdev);
+ } else {
+ netif_tx_stop_all_queues(netdev);
+ netif_carrier_off(netdev);
}
+ i40evf_print_link_message(adapter);
break;
case VIRTCHNL_EVENT_RESET_IMPENDING:
dev_info(&adapter->pdev->dev, "PF reset warning received\n");
--
2.15.1
From: Goldwyn Rodrigues <[email protected]>
[ Upstream commit 20d59023c5ec4426284af492808bcea1f39787ef ]
We inadvertently set the BIO_TRACE_COMPLETION flag again on the source bio,
but we need to set it on the new split bio instead.
Fixes: fbbaf700e7b1 ("block: trace completion of all bios.")
Signed-off-by: Goldwyn Rodrigues <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
block/bio.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/bio.c b/block/bio.c
index 7f978eac9a7a..6efb23370956 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1893,7 +1893,7 @@ struct bio *bio_split(struct bio *bio, int sectors,
bio_advance(bio, split->bi_iter.bi_size);
if (bio_flagged(bio, BIO_TRACE_COMPLETION))
- bio_set_flag(bio, BIO_TRACE_COMPLETION);
+ bio_set_flag(split, BIO_TRACE_COMPLETION);
return split;
}
--
2.15.1
From: Ngai-Mint Kwan <[email protected]>
[ Upstream commit cf315ea596ec26d7aa542a9ce354990875a920c0 ]
When a VF is under PF VLAN assignment:
ip link set <pf> vf <#> vlan <vid>
This will remove all previous entries in the VLAN table including those
generated by VLAN interfaces created on the VF. The issue arises when
the VF is under PF VLAN assignment and one or more of these VLAN
interfaces of the VF are deleted. When deleting these VLAN interfaces,
the following message will be generated in "dmesg":
failed to kill vid 0081/<vid> for device <vf>
This is because "ndo_vlan_rx_kill_vid" exits with an error.
The handler for this ndo is "fm10k_update_vid". Any call to this
function while under PF VLAN management exits prematurely and thus
generates the failure message.
Additionally, since "fm10k_update_vid" exits prematurely, none of the
VLAN update work is performed. So, even though the actual VLAN interfaces of
the VF are deleted, the active_vlans bitmask is not cleared. When
the VF is no longer under PF VLAN assignment, the driver mistakenly
restores the previous entries of the VLAN table based on an
out-of-sync list of active VLANs.
The solution to this issue involves checking the VLAN update action type
before exiting "fm10k_update_vid". If the VLAN update action type is to
"add", this action will not be permitted while the VF is under PF VLAN
assignment and the VLAN update is abandoned like before.
If the VLAN update action type is to "kill", then we also need to
clear the active_vlans bitmask. However, we don't need to actually
queue any messages to the PF, because the MAC and VLAN tables have
already been cleared, and the PF would silently ignore these requests
anyway.
Signed-off-by: Ngai-Mint Kwan <[email protected]>
Signed-off-by: Jacob Keller <[email protected]>
Tested-by: Krishneil Singh <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/net/ethernet/intel/fm10k/fm10k_netdev.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c b/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
index e69d49d91d67..914258310ddd 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
@@ -815,8 +815,12 @@ static int fm10k_update_vid(struct net_device *netdev, u16 vid, bool set)
if (vid >= VLAN_N_VID)
return -EINVAL;
- /* Verify we have permission to add VLANs */
- if (hw->mac.vlan_override)
+ /* Verify that we have permission to add VLANs. If this is a request
+ * to remove a VLAN, we still want to allow the user to remove the
+ * VLAN device. In that case, we need to clear the bit in the
+ * active_vlans bitmask.
+ */
+ if (set && hw->mac.vlan_override)
return -EACCES;
/* update active_vlans bitmask */
@@ -835,6 +839,12 @@ static int fm10k_update_vid(struct net_device *netdev, u16 vid, bool set)
rx_ring->vid &= ~FM10K_VLAN_CLEAR;
}
+ /* If our VLAN has been overridden, there is no reason to send VLAN
+ * removal requests as they will be silently ignored.
+ */
+ if (hw->mac.vlan_override)
+ return 0;
+
/* Do not remove default VLAN ID related entries from VLAN and MAC
* tables
*/
--
2.15.1
From: Wei Yongjun <[email protected]>
[ Upstream commit e58decc9c51eb61697aba35ba8eda33f4b80552d ]
Fix to return the error code -EINVAL instead of 0 when num_vfs is above
limit_vfs, as done elsewhere in this function.
Fixes: 0dc786219186 ("nfp: handle SR-IOV already enabled when driver is probing")
Signed-off-by: Wei Yongjun <[email protected]>
Acked-by: Jakub Kicinski <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/net/ethernet/netronome/nfp/nfp_main.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_main.c b/drivers/net/ethernet/netronome/nfp/nfp_main.c
index f8fa63b66739..a1a15e0c2245 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_main.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_main.c
@@ -492,6 +492,7 @@ static int nfp_pci_probe(struct pci_dev *pdev,
dev_err(&pdev->dev,
"Error: %d VFs already enabled, but loaded FW can only support %d\n",
pf->num_vfs, pf->limit_vfs);
+ err = -EINVAL;
goto err_fw_unload;
}
--
2.15.1
From: Andi Shyti <[email protected]>
[ Upstream commit cba04cdf437d745fac85220d1d692a9ae23d7004 ]
The interrupt is requested before the device is powered on, and
its value in some cases cannot be relied upon. On some devices an
interrupt is generated as soon as it is requested, before there is
a chance to disable the irq.
Set the IRQ_NOAUTOEN flag on the irq before requesting it.
This patch mutes the error:
stmfts 2-0049: failed to read events: -11
which is sometimes received during boot.
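For reference, a minimal sketch of the pattern used here, assuming a generic
threaded-IRQ driver; my_irq_handler() and my_request_irq() are made-up names,
not the stmfts functions:
#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
/* Placeholder threaded handler, not the stmfts one. */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}
static int my_request_irq(struct device *dev, unsigned int irq, void *priv)
{
	int err;
	/* Keep the line disabled across request_irq(): the device is
	 * still powered off, so anything pending would be spurious. */
	irq_set_status_flags(irq, IRQ_NOAUTOEN);
	err = devm_request_threaded_irq(dev, irq, NULL, my_irq_handler,
					IRQF_ONESHOT, "my-dev", priv);
	if (err)
		return err;
	/* ... power the device on ... */
	/* Only now let interrupts through. */
	enable_irq(irq);
	return 0;
}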
Signed-off-by: Andi Shyti <[email protected]>
Signed-off-by: Dmitry Torokhov <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/input/touchscreen/stmfts.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/drivers/input/touchscreen/stmfts.c b/drivers/input/touchscreen/stmfts.c
index 8c6c6178ec12..025bae3853cc 100644
--- a/drivers/input/touchscreen/stmfts.c
+++ b/drivers/input/touchscreen/stmfts.c
@@ -687,6 +687,14 @@ static int stmfts_probe(struct i2c_client *client,
input_set_drvdata(sdata->input, sdata);
+ /*
+ * stmfts_power_on expects interrupt to be disabled, but
+ * at this point the device is still off and I do not trust
+ * the status of the irq line that can generate some spurious
+ * interrupts. To be on the safe side it's better to not enable
+ * the interrupts during their request.
+ */
+ irq_set_status_flags(client->irq, IRQ_NOAUTOEN);
err = devm_request_threaded_irq(&client->dev, client->irq,
NULL, stmfts_irq_handler,
IRQF_ONESHOT,
@@ -694,9 +702,6 @@ static int stmfts_probe(struct i2c_client *client,
if (err)
return err;
- /* stmfts_power_on expects interrupt to be disabled */
- disable_irq(client->irq);
-
dev_dbg(&client->dev, "initializing ST-Microelectronics FTS...\n");
err = stmfts_power_on(sdata);
--
2.15.1
From: Jeffy Chen <[email protected]>
[ Upstream commit fde7f9dbc71365230eeb8c8ea97ce9b552c8e5bd ]
The rt5514 dsp captures pcm data through spi directly, so we should not
use rockchip-i2s as it's cpu dai like other codecs.
Use dummy_dai for rt5514 dsp dailink to make voice wakeup work again.
Reported-by: Jimmy Cheng-Yi Chiang <[email protected]>
Fixes: 72cfb0f20c75 ("ASoC: rockchip: Use codec of_node and dai_name for rt5514 dsp")
Signed-off-by: Jeffy Chen <[email protected]>
Tested-by: Brian Norris <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
sound/soc/rockchip/rk3399_gru_sound.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/sound/soc/rockchip/rk3399_gru_sound.c b/sound/soc/rockchip/rk3399_gru_sound.c
index 0513fe480353..21ac8d6cce3a 100644
--- a/sound/soc/rockchip/rk3399_gru_sound.c
+++ b/sound/soc/rockchip/rk3399_gru_sound.c
@@ -387,7 +387,8 @@ static const struct snd_soc_dai_link rockchip_dais[] = {
[DAILINK_RT5514_DSP] = {
.name = "RT5514 DSP",
.stream_name = "Wake on Voice",
- .codec_dai_name = "rt5514-dsp-cpu-dai",
+ .codec_name = "snd-soc-dummy",
+ .codec_dai_name = "snd-soc-dummy-dai",
},
};
@@ -432,7 +433,18 @@ static int rockchip_sound_of_parse_dais(struct device *dev,
if (index < 0)
continue;
- np_cpu = (index == DAILINK_CDNDP) ? np_cpu1 : np_cpu0;
+ switch (index) {
+ case DAILINK_CDNDP:
+ np_cpu = np_cpu1;
+ break;
+ case DAILINK_RT5514_DSP:
+ np_cpu = np_codec;
+ break;
+ default:
+ np_cpu = np_cpu0;
+ break;
+ }
+
if (!np_cpu) {
dev_err(dev, "Missing 'rockchip,cpu' for %s\n",
rockchip_dais[index].name);
@@ -442,7 +454,8 @@ static int rockchip_sound_of_parse_dais(struct device *dev,
dai = &card->dai_link[card->num_links++];
*dai = rockchip_dais[index];
- dai->codec_of_node = np_codec;
+ if (!dai->codec_name)
+ dai->codec_of_node = np_codec;
dai->platform_of_node = np_cpu;
dai->cpu_of_node = np_cpu;
}
--
2.15.1
From: Sheng Yong <[email protected]>
[ Upstream commit a9d572c7550044d5b217b5287d99a2e6d34b97b0 ]
When io_bits is set, GCing an encrypted block may hit the following hung task.
Since io_bits requires an aligned block address, f2fs_submit_page_write may
return -EAGAIN if new_blkaddr does not satisfy the io_bits alignment. As a
result, the encrypted page will never be written back.
This patch makes move_data_block aware of the EAGAIN error and cancel the
writeback.
[ 246.751371] INFO: task kworker/u4:4:797 blocked for more than 90 seconds.
[ 246.752423] Not tainted 4.15.0-rc4+ #11
[ 246.754176] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 246.755336] kworker/u4:4 D25448 797 2 0x80000000
[ 246.755597] Workqueue: writeback wb_workfn (flush-7:0)
[ 246.755616] Call Trace:
[ 246.755695] ? __schedule+0x322/0xa90
[ 246.755761] ? blk_init_request_from_bio+0x120/0x120
[ 246.755773] ? pci_mmcfg_check_reserved+0xb0/0xb0
[ 246.755801] ? __radix_tree_create+0x19e/0x200
[ 246.755813] ? delete_node+0x136/0x370
[ 246.755838] schedule+0x43/0xc0
[ 246.755904] io_schedule+0x17/0x40
[ 246.755939] wait_on_page_bit_common+0x17b/0x240
[ 246.755950] ? wake_page_function+0xa0/0xa0
[ 246.755961] ? add_to_page_cache_lru+0x160/0x160
[ 246.755972] ? page_cache_tree_insert+0x170/0x170
[ 246.755983] ? __lru_cache_add+0x96/0xb0
[ 246.756086] __filemap_fdatawait_range+0x14f/0x1c0
[ 246.756097] ? wait_on_page_bit_common+0x240/0x240
[ 246.756120] ? __wake_up_locked_key_bookmark+0x20/0x20
[ 246.756167] ? wait_on_all_pages_writeback+0xc9/0x100
[ 246.756179] ? __remove_ino_entry+0x120/0x120
[ 246.756192] ? wait_woken+0x100/0x100
[ 246.756204] filemap_fdatawait_range+0x9/0x20
[ 246.756216] write_checkpoint+0x18a1/0x1f00
[ 246.756254] ? blk_get_request+0x10/0x10
[ 246.756265] ? cpumask_next_and+0x43/0x60
[ 246.756279] ? f2fs_sync_inode_meta+0x160/0x160
[ 246.756289] ? remove_element.isra.4+0xa0/0xa0
[ 246.756300] ? __put_compound_page+0x40/0x40
[ 246.756310] ? f2fs_sync_fs+0xec/0x1c0
[ 246.756320] ? f2fs_sync_fs+0x120/0x1c0
[ 246.756329] f2fs_sync_fs+0x120/0x1c0
[ 246.756357] ? trace_event_raw_event_f2fs__page+0x260/0x260
[ 246.756393] ? ata_build_rw_tf+0x173/0x410
[ 246.756397] f2fs_balance_fs_bg+0x198/0x390
[ 246.756405] ? drop_inmem_page+0x230/0x230
[ 246.756415] ? ahci_qc_prep+0x1bb/0x2e0
[ 246.756418] ? ahci_qc_issue+0x1df/0x290
[ 246.756422] ? __accumulate_pelt_segments+0x42/0xd0
[ 246.756426] ? f2fs_write_node_pages+0xd1/0x380
[ 246.756429] f2fs_write_node_pages+0xd1/0x380
[ 246.756437] ? sync_node_pages+0x8f0/0x8f0
[ 246.756440] ? update_curr+0x53/0x220
[ 246.756444] ? __accumulate_pelt_segments+0xa2/0xd0
[ 246.756448] ? __update_load_avg_se.isra.39+0x349/0x360
[ 246.756452] ? do_writepages+0x2a/0xa0
[ 246.756456] do_writepages+0x2a/0xa0
[ 246.756460] __writeback_single_inode+0x70/0x490
[ 246.756463] ? check_preempt_wakeup+0x199/0x310
[ 246.756467] writeback_sb_inodes+0x2a2/0x660
[ 246.756471] ? is_empty_dir_inode+0x40/0x40
[ 246.756474] ? __writeback_single_inode+0x490/0x490
[ 246.756477] ? string+0xbf/0xf0
[ 246.756480] ? down_read_trylock+0x35/0x60
[ 246.756484] __writeback_inodes_wb+0x9f/0xf0
[ 246.756488] wb_writeback+0x41d/0x4b0
[ 246.756492] ? writeback_inodes_wb.constprop.55+0x150/0x150
[ 246.756498] ? set_worker_desc+0xf7/0x130
[ 246.756502] ? current_is_workqueue_rescuer+0x60/0x60
[ 246.756511] ? _find_next_bit+0x2c/0xa0
[ 246.756514] ? wb_workfn+0x400/0x5d0
[ 246.756518] wb_workfn+0x400/0x5d0
[ 246.756521] ? finish_task_switch+0xdf/0x2a0
[ 246.756525] ? inode_wait_for_writeback+0x30/0x30
[ 246.756529] process_one_work+0x3a7/0x6f0
[ 246.756533] worker_thread+0x82/0x750
[ 246.756537] kthread+0x16f/0x1c0
[ 246.756541] ? trace_event_raw_event_workqueue_work+0x110/0x110
[ 246.756544] ? kthread_create_worker_on_cpu+0xb0/0xb0
[ 246.756548] ret_from_fork+0x1f/0x30
Signed-off-by: Sheng Yong <[email protected]>
Reviewed-by: Chao Yu <[email protected]>
Signed-off-by: Jaegeuk Kim <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/f2fs/gc.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
index bfe6a8ccc3a0..7fb2729d8ee0 100644
--- a/fs/f2fs/gc.c
+++ b/fs/f2fs/gc.c
@@ -695,7 +695,12 @@ static void move_data_block(struct inode *inode, block_t bidx,
fio.op = REQ_OP_WRITE;
fio.op_flags = REQ_SYNC;
fio.new_blkaddr = newaddr;
- f2fs_submit_page_write(&fio);
+ err = f2fs_submit_page_write(&fio);
+ if (err) {
+ if (PageWriteback(fio.encrypted_page))
+ end_page_writeback(fio.encrypted_page);
+ goto put_page_out;
+ }
f2fs_update_iostat(fio.sbi, FS_GC_DATA_IO, F2FS_BLKSIZE);
--
2.15.1
From: Liu Bo <[email protected]>
[ Upstream commit 7583d8d088ff2c323b1d4f15b191ca2c23d32558 ]
Before rbio_orig_end_io() gets to free the rbio, the rbio may be merged with
more bios from other rbios and rbio->bio_list becomes non-empty;
in that case, these newly merged bios don't get ended properly.
Once unlock_stripe() is done, rbio->bio_list will not be updated any
more and we can call bio_endio() on all queued bios.
This should only happen in error-out cases; the normal paths of recover
and full stripe write have already set RBIO_RMW_LOCKED_BIT to disable
merging before doing IO, so rbio_orig_end_io() called by them doesn't
hit the above issue.
Reported-by: Jérôme Carretero <[email protected]>
Signed-off-by: Liu Bo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/btrfs/raid56.c | 37 +++++++++++++++++++++++++------------
1 file changed, 25 insertions(+), 12 deletions(-)
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index dcab41157899..2e995e565633 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -858,10 +858,17 @@ static void __free_raid_bio(struct btrfs_raid_bio *rbio)
kfree(rbio);
}
-static void free_raid_bio(struct btrfs_raid_bio *rbio)
+static void rbio_endio_bio_list(struct bio *cur, blk_status_t err)
{
- unlock_stripe(rbio);
- __free_raid_bio(rbio);
+ struct bio *next;
+
+ while (cur) {
+ next = cur->bi_next;
+ cur->bi_next = NULL;
+ cur->bi_status = err;
+ bio_endio(cur);
+ cur = next;
+ }
}
/*
@@ -871,20 +878,26 @@ static void free_raid_bio(struct btrfs_raid_bio *rbio)
static void rbio_orig_end_io(struct btrfs_raid_bio *rbio, blk_status_t err)
{
struct bio *cur = bio_list_get(&rbio->bio_list);
- struct bio *next;
+ struct bio *extra;
if (rbio->generic_bio_cnt)
btrfs_bio_counter_sub(rbio->fs_info, rbio->generic_bio_cnt);
- free_raid_bio(rbio);
+ /*
+ * At this moment, rbio->bio_list is empty, however since rbio does not
+ * always have RBIO_RMW_LOCKED_BIT set and rbio is still linked on the
+ * hash list, rbio may be merged with others so that rbio->bio_list
+ * becomes non-empty.
+ * Once unlock_stripe() is done, rbio->bio_list will not be updated any
+ * more and we can call bio_endio() on all queued bios.
+ */
+ unlock_stripe(rbio);
+ extra = bio_list_get(&rbio->bio_list);
+ __free_raid_bio(rbio);
- while (cur) {
- next = cur->bi_next;
- cur->bi_next = NULL;
- cur->bi_status = err;
- bio_endio(cur);
- cur = next;
- }
+ rbio_endio_bio_list(cur, err);
+ if (extra)
+ rbio_endio_bio_list(extra, err);
}
/*
--
2.15.1
From: Liu Bo <[email protected]>
[ Upstream commit 18e83ac75bfe67009c4ddcdd581bba8eb16f4030 ]
This fixes a corner case that is caused by a race of dio write vs dio
read/write.
Here is how the race could happen.
Suppose that no extent map has been loaded into memory yet.
There is a file extent [0, 32K), two jobs are running concurrently
against it, t1 is doing dio write to [8K, 32K) and t2 is doing dio
read from [0, 4K) or [4K, 8K).
t1 goes ahead of t2 and splits em [0, 32K) to em [0K, 8K) and [8K 32K).
------------------------------------------------------
t1 t2
btrfs_get_blocks_direct() btrfs_get_blocks_direct()
-> btrfs_get_extent() -> btrfs_get_extent()
-> lookup_extent_mapping()
-> add_extent_mapping() -> lookup_extent_mapping()
# load [0, 32K)
-> btrfs_new_extent_direct()
-> btrfs_drop_extent_cache()
# split [0, 32K) and
# drop [8K, 32K)
-> add_extent_mapping()
# add [8K, 32K)
-> add_extent_mapping()
# handle -EEXIST when adding
# [0, 32K)
------------------------------------------------------
Here is how t2 (dio read/write) runs into -EEXIST:
a) add_extent_mapping() gets -EEXIST when adding em [0, 32k),
b) search_extent_mapping() then returns [0, 8k) as the existing em;
even though start == existing->start, em is [0, 32k), so
extent_map_end(em) > extent_map_end(existing), i.e. 32k > 8k,
c) it then goes through merge_extent_mapping(), which tries to add a [8k, 8k)
em (with a length of 0) and returns -EEXIST as [8k, 32k) is already in the tree,
d) so btrfs_get_extent() ends up returning -EEXIST to the dio read/write,
which is confusing for applications.
Here are all the possible situations:
1) start < existing->start (start lies before the existing em)
2) start == existing->start (start coincides with the start of the existing em)
3) start > existing->start && start < (existing->start + existing->len)
   (start lies inside the existing em)
4) start >= (existing->start + existing->len)
   (start lies at or beyond the end of the existing em)
As we can see, if start falls within the existing em (front inclusive),
then the existing em should be returned as is; otherwise, we try our best
to merge the candidate em with sibling ems to form a larger em (in order
to reduce the total number of ems). The containment test is illustrated by
the small standalone example below.
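A tiny standalone illustration of that containment test, using the [0, 8K) em
left in the tree in the race above; em_range and start_within_existing() are
hypothetical names, not btrfs code:
#include <stdbool.h>
#include <stdio.h>
struct em_range {
	unsigned long long start;
	unsigned long long len;
};
/* Reuse the existing em whenever 'start' falls within it, front inclusive. */
static bool start_within_existing(unsigned long long start,
				  const struct em_range *existing)
{
	return start >= existing->start &&
	       start < existing->start + existing->len;
}
int main(void)
{
	/* The split em left in the tree in the race above: [0K, 8K). */
	struct em_range existing = { .start = 0, .len = 8 * 1024 };
	printf("start=0K -> reuse existing: %d\n",
	       start_within_existing(0, &existing));
	printf("start=4K -> reuse existing: %d\n",
	       start_within_existing(4 * 1024, &existing));
	/* Past the end of the existing em: fall back to merging. */
	printf("start=8K -> reuse existing: %d\n",
	       start_within_existing(8 * 1024, &existing));
	return 0;
}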
Reported-by: David Vallender <[email protected]>
Signed-off-by: Liu Bo <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: David Sterba <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/btrfs/inode.c | 17 +++--------------
1 file changed, 3 insertions(+), 14 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 59a01a0844c9..bdff397fe9ea 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -7234,19 +7234,12 @@ insert:
* existing will always be non-NULL, since there must be
* extent causing the -EEXIST.
*/
- if (existing->start == em->start &&
- extent_map_end(existing) >= extent_map_end(em) &&
- em->block_start == existing->block_start) {
- /*
- * The existing extent map already encompasses the
- * entire extent map we tried to add.
- */
+ if (start >= existing->start &&
+ start < extent_map_end(existing)) {
free_extent_map(em);
em = existing;
err = 0;
-
- } else if (start >= extent_map_end(existing) ||
- start <= existing->start) {
+ } else {
/*
* The existing extent map is the one nearest to
* the [start, start + len) range which overlaps
@@ -7258,10 +7251,6 @@ insert:
free_extent_map(em);
em = NULL;
}
- } else {
- free_extent_map(em);
- em = existing;
- err = 0;
}
}
write_unlock(&em_tree->lock);
--
2.15.1
From: Parav Pandit <[email protected]>
[ Upstream commit 00db63c128dd3daf38f481371976c24d32678142 ]
If a valid netdevice is not found for RoCE, the GID table should not be
searched with a NULL netdevice.
Doing so causes the search routines to ignore the netdev argument, and they
may match the wrong GID table entry if the netdev is deleted.
Fixes: abae1b71dd37 ("IB/cma: cma_validate_port should verify the port and netdevice")
Signed-off-by: Parav Pandit <[email protected]>
Reviewed-by: Mark Bloch <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/infiniband/core/cma.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 6cae00ecc905..2cd9671366b0 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -624,11 +624,13 @@ static inline int cma_validate_port(struct ib_device *device, u8 port,
if ((dev_type != ARPHRD_INFINIBAND) && rdma_protocol_ib(device, port))
return ret;
- if (dev_type == ARPHRD_ETHER && rdma_protocol_roce(device, port))
+ if (dev_type == ARPHRD_ETHER && rdma_protocol_roce(device, port)) {
ndev = dev_get_by_index(&init_net, bound_if_index);
- else
+ if (!ndev)
+ return ret;
+ } else {
gid_type = IB_GID_TYPE_IB;
-
+ }
ret = ib_find_cached_gid_by_port(device, gid, gid_type, port,
ndev, NULL);
--
2.15.1
From: Liu Bo <[email protected]>
[ Upstream commit 762221f095e3932669093466aaf4b85ed9ad2ac1 ]
The raid6 corruption case is this:
suppose that all disks can be read without problems, but the content
that was read out doesn't match its checksum. Currently, for raid6,
btrfs retries at most twice:
- the 1st retry is to rebuild with all other stripes, which eventually
becomes a raid5 xor rebuild,
- if the 1st retry fails, the 2nd retry deliberately fails parity p so
that a raid6 style rebuild is done.
However, the chances are that another non-parity stripe's content is
also corrupted, so the above retries are not able to return the
correct content.
We've fixed normal reads to rebuild raid6 correctly with more retries
in the patch "Btrfs: make raid6 rebuild retry more" [1]; this change makes
scrub do exactly the same rebuild process (a small standalone example of
the stripe selection follows the link below).
[1]: https://patchwork.kernel.org/patch/10091755/
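A small standalone sketch of the stripe selection this introduces; pick_failb()
is a hypothetical rework of the hunk below, and the 6-stripe layout (4 data
stripes plus p and q) is just an example:
#include <stdio.h>
/* 'faila' is the stripe that already failed verification, 'real_stripes'
 * counts data stripes plus p and q. mirror 1 is the plain read and
 * mirror 2 rebuilds from all other stripes, so only mirror > 2 picks an
 * extra stripe to fail. */
static int pick_failb(int real_stripes, int mirror_num, int faila)
{
	int failb = -1;
	if (mirror_num > 2) {
		failb = real_stripes - (mirror_num - 1);
		if (failb <= faila)
			failb--;
	}
	return failb;
}
int main(void)
{
	/* Example layout: 4 data stripes + p + q; data stripe 1 is bad. */
	int real_stripes = 6, faila = 1, mirror_num;
	for (mirror_num = 3; mirror_num <= real_stripes; mirror_num++)
		printf("mirror %d -> also fail stripe %d\n",
		       mirror_num, pick_failb(real_stripes, mirror_num, faila));
	return 0;
}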
Signed-off-by: Liu Bo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/btrfs/raid56.c | 18 ++++++++++++++----
fs/btrfs/volumes.c | 9 ++++++++-
2 files changed, 22 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index 32b186c5694c..dcab41157899 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -2159,11 +2159,21 @@ int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
}
/*
- * reconstruct from the q stripe if they are
- * asking for mirror 3
+ * Loop retry:
+ * for 'mirror == 2', reconstruct from all other stripes.
+ * for 'mirror_num > 2', select a stripe to fail on every retry.
*/
- if (mirror_num == 3)
- rbio->failb = rbio->real_stripes - 2;
+ if (mirror_num > 2) {
+ /*
+ * 'mirror == 3' is to fail the p stripe and
+ * reconstruct from the q stripe. 'mirror > 3' is to
+ * fail a data stripe and reconstruct from p+q stripe.
+ */
+ rbio->failb = rbio->real_stripes - (mirror_num - 1);
+ ASSERT(rbio->failb > 0);
+ if (rbio->failb <= rbio->faila)
+ rbio->failb--;
+ }
ret = lock_stripe_add(rbio);
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 71b3cd634436..5c72b9d2a885 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -5101,7 +5101,14 @@ int btrfs_num_copies(struct btrfs_fs_info *fs_info, u64 logical, u64 len)
else if (map->type & BTRFS_BLOCK_GROUP_RAID5)
ret = 2;
else if (map->type & BTRFS_BLOCK_GROUP_RAID6)
- ret = 3;
+ /*
+ * There could be two corrupted data stripes, we need
+ * to loop retry in order to rebuild the correct data.
+ *
+ * Fail a stripe at a time on every retry except the
+ * stripe under reconstruction.
+ */
+ ret = map->num_stripes;
else
ret = 1;
free_extent_map(em);
--
2.15.1
From: Anand Jain <[email protected]>
[ Upstream commit 6f794e3c5c8f8fdd3b5bb20d9ded894e685b5bbe ]
It appears from the original commit [1] that there isn't any design-specific
reason not to fail the mount instead of just warning. This
patch changes it to fail the mount.
[1]
commit 319e4d0661e5323c9f9945f0f8fb5905e5fe74c3
btrfs: Enhance super validation check
Fixes: 319e4d0661e5323 ("btrfs: Enhance super validation check")
Signed-off-by: Anand Jain <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/btrfs/disk-io.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 167ce43cabe8..79f0f282a0ef 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -4063,9 +4063,11 @@ static int btrfs_check_super_valid(struct btrfs_fs_info *fs_info)
btrfs_err(fs_info, "no valid FS found");
ret = -EINVAL;
}
- if (btrfs_super_flags(sb) & ~BTRFS_SUPER_FLAG_SUPP)
- btrfs_warn(fs_info, "unrecognized super flag: %llu",
+ if (btrfs_super_flags(sb) & ~BTRFS_SUPER_FLAG_SUPP) {
+ btrfs_err(fs_info, "unrecognized or unsupported super flag: %llu",
btrfs_super_flags(sb) & ~BTRFS_SUPER_FLAG_SUPP);
+ ret = -EINVAL;
+ }
if (btrfs_super_root_level(sb) >= BTRFS_MAX_LEVEL) {
btrfs_err(fs_info, "tree_root level too big: %d >= %d",
btrfs_super_root_level(sb), BTRFS_MAX_LEVEL);
--
2.15.1
From: Wei Yongjun <[email protected]>
[ Upstream commit e749d328b0b450aa78d562fa26a0cd8872325dd9 ]
Fix to return a negative error code from the request_irq() error
handling case instead of 0, as done elsewhere in this function.
Fixes: dce143c3381c ("ipmi/powernv: Convert to irq event interface")
Signed-off-by: Wei Yongjun <[email protected]>
Reviewed-by: Alexey Kardashevskiy <[email protected]>
Signed-off-by: Corey Minyard <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/char/ipmi/ipmi_powernv.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/char/ipmi/ipmi_powernv.c b/drivers/char/ipmi/ipmi_powernv.c
index b338a4becbf8..845efa0f724f 100644
--- a/drivers/char/ipmi/ipmi_powernv.c
+++ b/drivers/char/ipmi/ipmi_powernv.c
@@ -251,8 +251,9 @@ static int ipmi_powernv_probe(struct platform_device *pdev)
ipmi->irq = opal_event_request(prop);
}
- if (request_irq(ipmi->irq, ipmi_opal_event, IRQ_TYPE_LEVEL_HIGH,
- "opal-ipmi", ipmi)) {
+ rc = request_irq(ipmi->irq, ipmi_opal_event, IRQ_TYPE_LEVEL_HIGH,
+ "opal-ipmi", ipmi);
+ if (rc) {
dev_warn(dev, "Unable to request irq\n");
goto err_dispose;
}
--
2.15.1
From: Arnd Bergmann <[email protected]>
[ Upstream commit 96d5eaa9bb74d299508d811d865c2c41b38b0301 ]
While testing with the ARM specific memset() macro removed, I ran into a
compiler warning that shows an old bug:
drivers/scsi/arm/fas216.c: In function 'fas216_rq_sns_done':
drivers/scsi/arm/fas216.c:2014:40: error: argument to 'sizeof' in 'memset' call is the same expression as the destination; did you mean to provide an explicit length? [-Werror=sizeof-pointer-memaccess]
It turns out that the definition of the scsi_cmnd structure changed back
in linux-2.6.25 (sense_buffer went from an embedded array to a pointer),
so now we clear only four bytes (the size of a pointer) instead of 96
(SCSI_SENSE_BUFFERSIZE). I did not check whether we actually need to
initialize the buffer here, but it's clear that if we do, we should use
the correct size.
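A minimal standalone example of this bug class (not driver code): once the
member becomes a pointer, sizeof() silently shrinks to the pointer size:
#include <stdio.h>
#define SENSE_BUFFERSIZE 96
struct cmd_old { unsigned char sense_buffer[SENSE_BUFFERSIZE]; };
struct cmd_new { unsigned char *sense_buffer; };
int main(void)
{
	struct cmd_old o;
	struct cmd_new n;
	/* Array member: sizeof() covers the whole 96-byte buffer. */
	printf("array member:   memset length %zu\n", sizeof(o.sense_buffer));
	/* Pointer member: sizeof() is only 4 or 8, so a memset() based on
	 * it clears just the first few bytes of the pointed-to buffer. */
	printf("pointer member: memset length %zu\n", sizeof(n.sense_buffer));
	return 0;
}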
Fixes: de25deb18016 ("[SCSI] use dynamically allocated sense buffer")
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/scsi/arm/fas216.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/arm/fas216.c b/drivers/scsi/arm/fas216.c
index 24388795ee9a..936e8c735656 100644
--- a/drivers/scsi/arm/fas216.c
+++ b/drivers/scsi/arm/fas216.c
@@ -2011,7 +2011,7 @@ static void fas216_rq_sns_done(FAS216_Info *info, struct scsi_cmnd *SCpnt,
* have valid data in the sense buffer that could
* confuse the higher levels.
*/
- memset(SCpnt->sense_buffer, 0, sizeof(SCpnt->sense_buffer));
+ memset(SCpnt->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
//printk("scsi%d.%c: sense buffer: ", info->host->host_no, '0' + SCpnt->device->id);
//{ int i; for (i = 0; i < 32; i++) printk("%02x ", SCpnt->sense_buffer[i]); printk("\n"); }
/*
--
2.15.1
From: Ulf Magnusson <[email protected]>
[ Upstream commit 0724a7c32a54e3e50d28e19e30c59014f61d4e2c ]
If a 'mainmenu' entry appeared in the Kconfig files, two things would
leak:
- The 'struct property' allocated for the default "Linux Kernel
Configuration" prompt.
- The string for the T_WORD/T_WORD_QUOTE prompt after the
T_MAINMENU token, allocated on the heap in zconf.l.
To fix it, introduce a new 'no_mainmenu_stmt' nonterminal that matches
if there's no 'mainmenu' and adds the default prompt. That means the
prompt only gets allocated once regardless of whether there's a
'mainmenu' statement or not, and managing it becomes simple.
Summary from Valgrind on 'menuconfig' (ARCH=x86) before the fix:
LEAK SUMMARY:
definitely lost: 344,568 bytes in 14,352 blocks
...
Summary after the fix:
LEAK SUMMARY:
definitely lost: 344,440 bytes in 14,350 blocks
...
Signed-off-by: Ulf Magnusson <[email protected]>
Signed-off-by: Masahiro Yamada <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
scripts/kconfig/zconf.y | 33 ++++++++++++++++++++++++---------
1 file changed, 24 insertions(+), 9 deletions(-)
diff --git a/scripts/kconfig/zconf.y b/scripts/kconfig/zconf.y
index c8f396c3b190..20d9caa4be99 100644
--- a/scripts/kconfig/zconf.y
+++ b/scripts/kconfig/zconf.y
@@ -108,7 +108,27 @@ static struct menu *current_menu, *current_entry;
%%
input: nl start | start;
-start: mainmenu_stmt stmt_list | stmt_list;
+start: mainmenu_stmt stmt_list | no_mainmenu_stmt stmt_list;
+
+/* mainmenu entry */
+
+mainmenu_stmt: T_MAINMENU prompt nl
+{
+ menu_add_prompt(P_MENU, $2, NULL);
+};
+
+/* Default main menu, if there's no mainmenu entry */
+
+no_mainmenu_stmt: /* empty */
+{
+ /*
+ * Hack: Keep the main menu title on the heap so we can safely free it
+ * later regardless of whether it comes from the 'prompt' in
+ * mainmenu_stmt or here
+ */
+ menu_add_prompt(P_MENU, strdup("Linux Kernel Configuration"), NULL);
+};
+
stmt_list:
/* empty */
@@ -351,13 +371,6 @@ if_block:
| if_block choice_stmt
;
-/* mainmenu entry */
-
-mainmenu_stmt: T_MAINMENU prompt nl
-{
- menu_add_prompt(P_MENU, $2, NULL);
-};
-
/* menu entry */
menu: T_MENU prompt T_EOL
@@ -502,6 +515,7 @@ word_opt: /* empty */ { $$ = NULL; }
void conf_parse(const char *name)
{
+ const char *tmp;
struct symbol *sym;
int i;
@@ -509,7 +523,6 @@ void conf_parse(const char *name)
sym_init();
_menu_init();
- rootmenu.prompt = menu_add_prompt(P_MENU, "Linux Kernel Configuration", NULL);
if (getenv("ZCONF_DEBUG"))
zconfdebug = 1;
@@ -519,8 +532,10 @@ void conf_parse(const char *name)
if (!modules_sym)
modules_sym = sym_find( "n" );
+ tmp = rootmenu.prompt->text;
rootmenu.prompt->text = _(rootmenu.prompt->text);
rootmenu.prompt->text = sym_expand_string_value(rootmenu.prompt->text);
+ free((char*)tmp);
menu_finalize(&rootmenu);
for_all_symbols(i, sym) {
--
2.15.1
From: Ulf Magnusson <[email protected]>
[ Upstream commit ae7440ef0c8013d68c00dad6900e7cce5311bb1c ]
expr_trans_compare() always allocates and returns a new expression,
giving the following leak outline:
...
*Allocate*
basedep = expr_trans_compare(basedep, E_UNEQUAL, &symbol_no);
...
for (menu = parent->next; menu; menu = menu->next) {
...
*Copy*
dep2 = expr_copy(basedep);
...
*Free copy*
expr_free(dep2);
}
*basedep lost!*
Fix by freeing 'basedep' after the loop.
Summary from Valgrind on 'menuconfig' (ARCH=x86) before the fix:
LEAK SUMMARY:
definitely lost: 344,376 bytes in 14,349 blocks
...
Summary after the fix:
LEAK SUMMARY:
definitely lost: 44,448 bytes in 1,852 blocks
...
Signed-off-by: Ulf Magnusson <[email protected]>
Signed-off-by: Masahiro Yamada <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
scripts/kconfig/menu.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/scripts/kconfig/menu.c b/scripts/kconfig/menu.c
index e9357931b47d..749c2bd5fc51 100644
--- a/scripts/kconfig/menu.c
+++ b/scripts/kconfig/menu.c
@@ -372,6 +372,7 @@ void menu_finalize(struct menu *parent)
menu->parent = parent;
last_menu = menu;
}
+ expr_free(basedep);
if (last_menu) {
parent->list = parent->next;
parent->next = last_menu->next;
--
2.15.1
From: Guenter Roeck <[email protected]>
[ Upstream commit f541c09ebfc61697b586b38c9ebaf4b70defb278 ]
According to all published information, the watchdog disable bit for SB800
compatible controllers is bit 1 of PM register 0x48, not bit 2. For the
most part that doesn't matter in practice, since the bit has to be cleared
to enable watchdog address decoding, which is the default setting, but it
still needs to be fixed.
Cc: Zoltán Böszörményi <[email protected]>
Signed-off-by: Guenter Roeck <[email protected]>
Signed-off-by: Wim Van Sebroeck <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/watchdog/sp5100_tco.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/watchdog/sp5100_tco.h b/drivers/watchdog/sp5100_tco.h
index 1af4dee71337..0e242d1110ce 100644
--- a/drivers/watchdog/sp5100_tco.h
+++ b/drivers/watchdog/sp5100_tco.h
@@ -55,7 +55,7 @@
#define SB800_PM_WATCHDOG_CONFIG 0x4C
#define SB800_PCI_WATCHDOG_DECODE_EN (1 << 0)
-#define SB800_PM_WATCHDOG_DISABLE (1 << 2)
+#define SB800_PM_WATCHDOG_DISABLE (1 << 1)
#define SB800_PM_WATCHDOG_SECOND_RES (3 << 0)
#define SB800_ACPI_MMIO_DECODE_EN (1 << 0)
#define SB800_ACPI_MMIO_SEL (1 << 1)
--
2.15.1
From: Niklas Cassel <[email protected]>
[ Upstream commit 80db6f08b7af93eddc9487535e6150b220262637 ]
Some hardware can operate in either "host" or "endpoint" mode, which means
there can be both a host bridge driver and an endpoint driver for the same
device. Those drivers share a lot of code, so sometimes they live in the
same source file.
The host bridge driver requires CONFIG_PCI=y because it enumerates PCI
devices below the bridge using the PCI core. The endpoint driver does not
require CONFIG_PCI=y because it runs in an embedded kernel on the other
side of the device, e.g., on an adapter card.
pci-dra7xx.c contains both host and endpoint drivers. If we select only
the endpoint driver (CONFIG_PCI=n and CONFIG_PCI_DRA7XX_EP=y), the unneeded
host driver is still compiled. It references pci_irqd_intx_xlate(), which
is not present when CONFIG_PCI=n, which causes this error:
drivers/pci/dwc/pci-dra7xx.c:229:11: error: 'pci_irqd_intx_xlate' undeclared here (not in a function)
Add a dummy pci_irqd_intx_xlate() for the CONFIG_PCI=n case.
[bhelgaas: changelog]
Signed-off-by: Niklas Cassel <[email protected]>
Signed-off-by: Bjorn Helgaas <[email protected]>
Acked-by: Arnd Bergmann <[email protected]>
Acked-by: Lorenzo Pieralisi <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
include/linux/pci.h | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/include/linux/pci.h b/include/linux/pci.h
index d16a7c037ec0..727e309baa5e 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1688,6 +1688,13 @@ static inline int pci_get_new_domain_nr(void) { return -ENOSYS; }
#define dev_is_pf(d) (false)
static inline bool pci_acs_enabled(struct pci_dev *pdev, u16 acs_flags)
{ return false; }
+static inline int pci_irqd_intx_xlate(struct irq_domain *d,
+ struct device_node *node,
+ const u32 *intspec,
+ unsigned int intsize,
+ unsigned long *out_hwirq,
+ unsigned int *out_type)
+{ return -EINVAL; }
#endif /* CONFIG_PCI */
/* Include architecture-dependent settings and functions */
--
2.15.1
From: Maarten ter Huurne <[email protected]>
[ Upstream commit 1f7412e0e2f327fe7dc5a0c2fc36d7b319d05d47 ]
According to config2, the associativity would be 5-ways, but the
documentation states 4-ways, which also matches the documented
L2 cache size of 256 kB.
Signed-off-by: Maarten ter Huurne <[email protected]>
Reviewed-by: James Hogan <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/18488/
Signed-off-by: James Hogan <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/mips/mm/sc-mips.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/mips/mm/sc-mips.c b/arch/mips/mm/sc-mips.c
index 548acb7f8557..394673991bab 100644
--- a/arch/mips/mm/sc-mips.c
+++ b/arch/mips/mm/sc-mips.c
@@ -16,6 +16,7 @@
#include <asm/mmu_context.h>
#include <asm/r4kcache.h>
#include <asm/mips-cps.h>
+#include <asm/bootinfo.h>
/*
* MIPS32/MIPS64 L2 cache handling
@@ -220,6 +221,14 @@ static inline int __init mips_sc_probe(void)
else
return 0;
+ /*
+ * According to config2 it would be 5-ways, but that is contradicted
+ * by all documentation.
+ */
+ if (current_cpu_type() == CPU_JZRISC &&
+ mips_machtype == MACH_INGENIC_JZ4770)
+ c->scache.ways = 4;
+
c->scache.waysize = c->scache.sets * c->scache.linesz;
c->scache.waybit = __ffs(c->scache.waysize);
--
2.15.1
From: Liu Bo <[email protected]>
[ Upstream commit 343e4fc1c60971b0734de26dbbd475d433950982 ]
Setting plug can merge adjacent IOs before dispatching IOs to the disk
driver.
Without a plug, this wouldn't be a problem for single-disk use cases, but
with multiple disks using a raid profile, a large IO can be split into
several IOs of stripe length, and plugging helps bring them together for
each disk so that we can save several disk accesses.
Moreover, fsync issues synchronous writes, so plugging can really take
effect.
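A minimal sketch of the plugging pattern this change applies, assuming a
generic writeback helper; my_flush_range() and my_write_range() are
placeholders, not the btrfs functions:
#include <linux/blkdev.h>
#include <linux/fs.h>
/* Stand-in for the filesystem's range-writeback helper
 * (btrfs_fdatawrite_range() in the hunk below). */
static int my_write_range(struct inode *inode, loff_t start, loff_t end)
{
	return filemap_fdatawrite_range(inode->i_mapping, start, end);
}
static int my_flush_range(struct inode *inode, loff_t start, loff_t end)
{
	struct blk_plug plug;
	int ret;
	/* Queue bios on the per-task plug list so that adjacent IOs can
	 * be merged before being dispatched to the disk driver. */
	blk_start_plug(&plug);
	ret = my_write_range(inode, start, end);
	blk_finish_plug(&plug);
	return ret;
}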
Signed-off-by: Liu Bo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/btrfs/file.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index d564a7049d7f..5690feded0de 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -2018,10 +2018,19 @@ int btrfs_release_file(struct inode *inode, struct file *filp)
static int start_ordered_ops(struct inode *inode, loff_t start, loff_t end)
{
int ret;
+ struct blk_plug plug;
+ /*
+ * This is only called in fsync, which would do synchronous writes, so
+ * a plug can merge adjacent IOs as much as possible. Esp. in case of
+ * multiple disks using raid profile, a large IO can be split to
+ * several segments of stripe length (currently 64K).
+ */
+ blk_start_plug(&plug);
atomic_inc(&BTRFS_I(inode)->sync_writers);
ret = btrfs_fdatawrite_range(inode, start, end);
atomic_dec(&BTRFS_I(inode)->sync_writers);
+ blk_finish_plug(&plug);
return ret;
}
--
2.15.1
From: Paul Cercueil <[email protected]>
[ Upstream commit e6cfa64375d34a6c8c1861868a381013b2d3b921 ]
Previously, clocks with a fixed divider would report their rate
as being the same as that of their parent, independently of the
divider in use. This commit fixes this behaviour.
This went unnoticed as neither the jz4740 nor the jz4780 CGU code
has clocks with fixed dividers yet.
Signed-off-by: Paul Cercueil <[email protected]>
Acked-by: Stephen Boyd <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Maarten ter Huurne <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/18477/
Signed-off-by: James Hogan <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/clk/ingenic/cgu.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/clk/ingenic/cgu.c b/drivers/clk/ingenic/cgu.c
index ab393637f7b0..a2e73a6d60fd 100644
--- a/drivers/clk/ingenic/cgu.c
+++ b/drivers/clk/ingenic/cgu.c
@@ -328,6 +328,8 @@ ingenic_clk_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
div *= clk_info->div.div;
rate /= div;
+ } else if (clk_info->type & CGU_CLK_FIXDIV) {
+ rate /= clk_info->fixdiv.div;
}
return rate;
--
2.15.1
From: Ulf Magnusson <[email protected]>
[ Upstream commit 5b1374b3b3c2fc4f63a398adfa446fb8eff791a4 ]
Only the E_NOT operand and not the E_NOT node itself was freed, due to
accidentally returning too early in expr_free(). Outline of leak:
switch (e->type) {
...
case E_NOT:
expr_free(e->left.expr);
return;
...
}
*Never reached, 'e' leaked*
free(e);
Fix by changing the 'return' to a 'break'.
Summary from Valgrind on 'menuconfig' (ARCH=x86) before the fix:
LEAK SUMMARY:
definitely lost: 44,448 bytes in 1,852 blocks
...
Summary after the fix:
LEAK SUMMARY:
definitely lost: 1,608 bytes in 67 blocks
...
Signed-off-by: Ulf Magnusson <[email protected]>
Signed-off-by: Masahiro Yamada <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
scripts/kconfig/expr.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/kconfig/expr.c b/scripts/kconfig/expr.c
index cbf4996dd9c1..ed29bad1f03a 100644
--- a/scripts/kconfig/expr.c
+++ b/scripts/kconfig/expr.c
@@ -113,7 +113,7 @@ void expr_free(struct expr *e)
break;
case E_NOT:
expr_free(e->left.expr);
- return;
+ break;
case E_EQUAL:
case E_GEQ:
case E_GTH:
--
2.15.1
From: Martin Blumenstingl <[email protected]>
[ Upstream commit 433c6cab9d298687c097f6ee82e49157044dc7c6 ]
Meson8b only supports MPLL2 as clock input. The rate of the MPLL2 clock
set by Odroid-C1's u-boot is close to (but not exactly) 500MHz. The
exact rate is 500002394Hz, which is calculated in
drivers/clk/meson/clk-mpll.c using the following formula:
DIV_ROUND_UP_ULL((u64)parent_rate * SDM_DEN, (SDM_DEN * n2) + sdm)
Odroid-C1's u-boot configures MPLL2 with the following values:
- SDM_DEN = 16384
- SDM = 1638
- N2 = 5
The 250MHz clock (m250_div) inside dwmac-meson8b driver is derived from
the MPLL2 clock. Due to MPLL2 running slightly faster than 500MHz the
common clock framework chooses a divider which is too big to generate
the 250MHz clock (a divider of 2 would be needed, but this is rounded up
to a divider of 3). This breaks the RTL8211F RGMII PHY on Odroid-C1
because it requires a (close to) 125MHz RGMII TX clock (on Gbit speeds,
the IP block internally divides that down to 25MHz on 100Mbit/s
connections and 2.5MHz on 10Mbit/s connections - we don't need any
special configuration for that).
Round the divider to the closest value to prevent this issue on Meson8b.
This means we'll now end up with an RGMII TX clock rate of
125001197Hz (= 125MHz plus 1197Hz), which is close enough to 125MHz.
This has no effect on the Meson GX SoCs since there fclk_div2 is used as
input clock, which has a rate of 1000MHz (and thus is divisible cleanly
to 250MHz and 125MHz).
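For reference, a small standalone calculation that reproduces the numbers
above; the 2550MHz parent rate is an assumption (it is the value that yields
the quoted 500002394Hz), everything else comes straight from the commit
message:
#include <stdio.h>
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
int main(void)
{
	unsigned long long parent_rate = 2550000000ULL;	/* assumed */
	unsigned long long sdm_den = 16384, sdm = 1638, n2 = 5;
	unsigned long long mpll2, div_up, div_closest;
	/* MPLL rate formula quoted from drivers/clk/meson/clk-mpll.c */
	mpll2 = DIV_ROUND_UP(parent_rate * sdm_den, sdm_den * n2 + sdm);
	printf("MPLL2 rate: %llu Hz\n", mpll2);		/* 500002394 */
	/* Divider needed for the 250MHz m250 clock: rounding up picks 3
	 * (too slow), rounding to the closest value picks 2. */
	div_up = DIV_ROUND_UP(mpll2, 250000000ULL);
	div_closest = (mpll2 + 125000000ULL) / 250000000ULL;
	printf("round-up divider: %llu, closest divider: %llu\n",
	       div_up, div_closest);
	return 0;
}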
Fixes: 566e8251625304 ("net: stmmac: add a glue driver for the Amlogic Meson 8b / GXBB DWMAC")
Reported-by: Emiliano Ingrassia <[email protected]>
Signed-off-by: Martin Blumenstingl <[email protected]>
Reviewed-by: Jerome Brunet <[email protected]>
Tested-by: Jerome Brunet <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
index 4404650b32c5..157e12e15f28 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
@@ -144,7 +144,9 @@ static int meson8b_init_clk(struct meson8b_dwmac *dwmac)
dwmac->m250_div.shift = PRG_ETH0_CLK_M250_DIV_SHIFT;
dwmac->m250_div.width = PRG_ETH0_CLK_M250_DIV_WIDTH;
dwmac->m250_div.hw.init = &init;
- dwmac->m250_div.flags = CLK_DIVIDER_ONE_BASED | CLK_DIVIDER_ALLOW_ZERO;
+ dwmac->m250_div.flags = CLK_DIVIDER_ONE_BASED |
+ CLK_DIVIDER_ALLOW_ZERO |
+ CLK_DIVIDER_ROUND_CLOSEST;
dwmac->m250_div_clk = devm_clk_register(dev, &dwmac->m250_div.hw);
if (WARN_ON(IS_ERR(dwmac->m250_div_clk)))
--
2.15.1
From: James Hogan <[email protected]>
[ Upstream commit 5f2483eb2423152445b39f2db59d372f523e664e ]
Make doesn't expand shell style "vmlinuz.{32,ecoff,bin,srec}" to the 4
separate files, so none of these files get cleaned up by make clean.
List the files separately instead.
Fixes: ec3352925b74 ("MIPS: Remove all generated vmlinuz* files on "make clean"")
Signed-off-by: James Hogan <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/18491/
Signed-off-by: Sasha Levin <[email protected]>
---
arch/mips/boot/compressed/Makefile | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/mips/boot/compressed/Makefile b/arch/mips/boot/compressed/Makefile
index c675eece389a..adce180f3ee4 100644
--- a/arch/mips/boot/compressed/Makefile
+++ b/arch/mips/boot/compressed/Makefile
@@ -133,4 +133,8 @@ vmlinuz.srec: vmlinuz
uzImage.bin: vmlinuz.bin FORCE
$(call if_changed,uimage,none)
-clean-files := $(objtree)/vmlinuz $(objtree)/vmlinuz.{32,ecoff,bin,srec}
+clean-files += $(objtree)/vmlinuz
+clean-files += $(objtree)/vmlinuz.32
+clean-files += $(objtree)/vmlinuz.ecoff
+clean-files += $(objtree)/vmlinuz.bin
+clean-files += $(objtree)/vmlinuz.srec
--
2.15.1
From: Jan Chochol <[email protected]>
[ Upstream commit cbebc6ef4fc830f4040d4140bf53484812d5d5d9 ]
Since commit 57e62324e469 ("NFS: Store the legacy idmapper result in the
keyring") nfs_idmap_cache_timeout changed units from jiffies to seconds.
Unfortunately sysctl interface was not updated accordingly.
As an effect, updating /proc/sys/fs/nfs/idmap_cache_timeout with some
value will incorrectly multiply this value by HZ.
Also, reading /proc/sys/fs/nfs/idmap_cache_timeout will show the real value
divided by HZ.
Fixes: 57e62324e469 ("NFS: Store the legacy idmapper result in the keyring")
Signed-off-by: Jan Chochol <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/nfs/nfs4sysctl.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/nfs/nfs4sysctl.c b/fs/nfs/nfs4sysctl.c
index 0d91d84e5822..c394e4447100 100644
--- a/fs/nfs/nfs4sysctl.c
+++ b/fs/nfs/nfs4sysctl.c
@@ -32,7 +32,7 @@ static struct ctl_table nfs4_cb_sysctls[] = {
.data = &nfs_idmap_cache_timeout,
.maxlen = sizeof(int),
.mode = 0644,
- .proc_handler = proc_dointvec_jiffies,
+ .proc_handler = proc_dointvec,
},
{ }
};
--
2.15.1
From: Nicholas Piggin <[email protected]>
[ Upstream commit 4552d128c26e0f0f27a5bd2fadc24092b8f6c1d7 ]
The die() oops path contains a serializing lock to prevent oops
messages from being interleaved. In the case of a system-reset-initiated
oops (e.g., the qemu nmi command), __die was being called, which lacks
that synchronisation, so oops reports could be interleaved across CPUs.
A recent patch 4388c9b3a6ee7 ("powerpc: Do not send system reset
request through the oops path") changed this to __die to avoid
the debugger() call, but there is no real harm in calling it twice
if the first time fell through. So go back to using die() here.
This was observed to fix the problem.
Fixes: 4388c9b3a6ee7 ("powerpc: Do not send system reset request through the oops path")
Signed-off-by: Nicholas Piggin <[email protected]>
Reviewed-by: David Gibson <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/powerpc/kernel/traps.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 13c9dcdcba69..d17007451f62 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -336,7 +336,7 @@ void system_reset_exception(struct pt_regs *regs)
* No debugger or crash dump registered, print logs then
* panic.
*/
- __die("System Reset", regs, SIGABRT);
+ die("System Reset", regs, SIGABRT);
mdelay(2*MSEC_PER_SEC); /* Wait a little while for others to print */
add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
--
2.15.1
From: Sagi Grimberg <[email protected]>
[ Upstream commit 246d8b184c100e8eb6b4e8c88f232c2ed2a4e672 ]
Polling the completion queue directly does not interfere
with the existing polling logic, hence drop the requirement.
Be aware that running ib_process_cq_direct with a non-IB_POLL_DIRECT
CQ may trigger concurrent CQ processing.
This can be used for polling-mode ULPs, as in the short sketch below.
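A minimal sketch of how a polling-mode consumer might drain its CQ after this
change; my_poll_cq() and the budget value are made up for illustration:
#include <rdma/ib_verbs.h>
/* Drain the CQ from the caller's context; the budget of 16 per call is
 * arbitrary. Do not pass -1 unless the number of completions is known
 * to be small (see the updated kernel-doc in the diff below). */
static void my_poll_cq(struct ib_cq *cq)
{
	while (ib_process_cq_direct(cq, 16) > 0)
		;
}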
Cc: Bart Van Assche <[email protected]>
Reported-by: Steve Wise <[email protected]>
Signed-off-by: Sagi Grimberg <[email protected]>
[maxg: added wcs array argument to __ib_process_cq]
Signed-off-by: Max Gurtovoy <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/infiniband/core/cq.c | 23 +++++++++++++----------
1 file changed, 13 insertions(+), 10 deletions(-)
diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c
index f2ae75fa3128..c8c5a5a7f433 100644
--- a/drivers/infiniband/core/cq.c
+++ b/drivers/infiniband/core/cq.c
@@ -25,9 +25,10 @@
#define IB_POLL_FLAGS \
(IB_CQ_NEXT_COMP | IB_CQ_REPORT_MISSED_EVENTS)
-static int __ib_process_cq(struct ib_cq *cq, int budget)
+static int __ib_process_cq(struct ib_cq *cq, int budget, struct ib_wc *poll_wc)
{
int i, n, completed = 0;
+ struct ib_wc *wcs = poll_wc ? : cq->wc;
/*
* budget might be (-1) if the caller does not
@@ -35,9 +36,9 @@ static int __ib_process_cq(struct ib_cq *cq, int budget)
* minimum here.
*/
while ((n = ib_poll_cq(cq, min_t(u32, IB_POLL_BATCH,
- budget - completed), cq->wc)) > 0) {
+ budget - completed), wcs)) > 0) {
for (i = 0; i < n; i++) {
- struct ib_wc *wc = &cq->wc[i];
+ struct ib_wc *wc = &wcs[i];
if (wc->wr_cqe)
wc->wr_cqe->done(cq, wc);
@@ -60,18 +61,20 @@ static int __ib_process_cq(struct ib_cq *cq, int budget)
* @cq: CQ to process
* @budget: number of CQEs to poll for
*
- * This function is used to process all outstanding CQ entries on a
- * %IB_POLL_DIRECT CQ. It does not offload CQ processing to a different
- * context and does not ask for completion interrupts from the HCA.
+ * This function is used to process all outstanding CQ entries.
+ * It does not offload CQ processing to a different context and does
+ * not ask for completion interrupts from the HCA.
+ * Using direct processing on CQ with non IB_POLL_DIRECT type may trigger
+ * concurrent processing.
*
* Note: do not pass -1 as %budget unless it is guaranteed that the number
* of completions that will be processed is small.
*/
int ib_process_cq_direct(struct ib_cq *cq, int budget)
{
- WARN_ON_ONCE(cq->poll_ctx != IB_POLL_DIRECT);
+ struct ib_wc wcs[IB_POLL_BATCH];
- return __ib_process_cq(cq, budget);
+ return __ib_process_cq(cq, budget, wcs);
}
EXPORT_SYMBOL(ib_process_cq_direct);
@@ -85,7 +88,7 @@ static int ib_poll_handler(struct irq_poll *iop, int budget)
struct ib_cq *cq = container_of(iop, struct ib_cq, iop);
int completed;
- completed = __ib_process_cq(cq, budget);
+ completed = __ib_process_cq(cq, budget, NULL);
if (completed < budget) {
irq_poll_complete(&cq->iop);
if (ib_req_notify_cq(cq, IB_POLL_FLAGS) > 0)
@@ -105,7 +108,7 @@ static void ib_cq_poll_work(struct work_struct *work)
struct ib_cq *cq = container_of(work, struct ib_cq, work);
int completed;
- completed = __ib_process_cq(cq, IB_POLL_BUDGET_WORKQUEUE);
+ completed = __ib_process_cq(cq, IB_POLL_BUDGET_WORKQUEUE, NULL);
if (completed >= IB_POLL_BUDGET_WORKQUEUE ||
ib_req_notify_cq(cq, IB_POLL_FLAGS) > 0)
queue_work(ib_comp_wq, &cq->work);
--
2.15.1
From: Thomas Richter <[email protected]>
[ Upstream commit 81fccd6ca507d3b2012eaf1edeb9b1dbf4bd22db ]
In the x86 architecture-dependent part, the function get_cpuid_str() mallocs
a 128-byte buffer, but does not check whether the memory allocation
succeeded.
When the memory allocation fails, the function __get_cpuid() is called with
its first parameter being a NULL pointer. However, this function references
its first parameter and operates on a NULL pointer, which might cause
core dumps.
Signed-off-by: Thomas Richter <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Hendrik Brueckner <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
tools/perf/arch/x86/util/header.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/arch/x86/util/header.c b/tools/perf/arch/x86/util/header.c
index 33027c5e6f92..c6b5204e0280 100644
--- a/tools/perf/arch/x86/util/header.c
+++ b/tools/perf/arch/x86/util/header.c
@@ -70,7 +70,7 @@ get_cpuid_str(void)
{
char *buf = malloc(128);
- if (__get_cpuid(buf, 128, "%s-%u-%X$") < 0) {
+ if (buf && __get_cpuid(buf, 128, "%s-%u-%X$") < 0) {
free(buf);
return NULL;
}
--
2.15.1
From: "Steven Rostedt (VMware)" <[email protected]>
[ Upstream commit 38d70b7ca1769f26c0b79f3c08ff2cc949712b59 ]
When processing %pX in pretty_print(), simplify the logic slightly by
incrementing the ptr to the format string if isalnum(ptr[1]) is true.
This follows the kernel's logic a bit more closely.
Also, this fixes a small bug where %pF was not giving the offset of the
function.
Signed-off-by: Steven Rostedt <[email protected]>
Acked-by: Namhyung Kim <[email protected]>
Cc: Andrew Morton <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
tools/lib/traceevent/event-parse.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
index 7ce724fc0544..9a17bc27296e 100644
--- a/tools/lib/traceevent/event-parse.c
+++ b/tools/lib/traceevent/event-parse.c
@@ -4949,21 +4949,22 @@ static void pretty_print(struct trace_seq *s, void *data, int size, struct event
else
ls = 2;
- if (*(ptr+1) == 'F' || *(ptr+1) == 'f' ||
- *(ptr+1) == 'S' || *(ptr+1) == 's') {
+ if (isalnum(ptr[1]))
ptr++;
+
+ if (*ptr == 'F' || *ptr == 'f' ||
+ *ptr == 'S' || *ptr == 's') {
show_func = *ptr;
- } else if (*(ptr+1) == 'M' || *(ptr+1) == 'm') {
- print_mac_arg(s, *(ptr+1), data, size, event, arg);
- ptr++;
+ } else if (*ptr == 'M' || *ptr == 'm') {
+ print_mac_arg(s, *ptr, data, size, event, arg);
arg = arg->next;
break;
- } else if (*(ptr+1) == 'I' || *(ptr+1) == 'i') {
+ } else if (*ptr == 'I' || *ptr == 'i') {
int n;
- n = print_ip_arg(s, ptr+1, data, size, event, arg);
+ n = print_ip_arg(s, ptr, data, size, event, arg);
if (n > 0) {
- ptr += n;
+ ptr += n - 1;
arg = arg->next;
break;
}
--
2.15.1
From: Arnaldo Carvalho de Melo <[email protected]>
[ Upstream commit 249d98e567e25dd03e015e2d31e1b7b9648f34df ]
When setting the "dwarf" unwinder for a specific event and not
specifying the max-stack, the attr.sample_max_stack ended up using an
uninitialized callchain_param.max_stack. Fix it by using designated
initializers for that callchain_param variable, zeroing all non-explicitly
initialized struct members (see the short sketch after the log output
below).
Here is what happened:
# perf trace -vv --no-syscalls --max-stack 4 -e probe_libc:inet_pton/call-graph=dwarf/ ping -6 -c 1 ::1
callchain: type DWARF
callchain: stack dump size 8192
perf_event_attr:
type 2
size 112
config 0x730
{ sample_period, sample_freq } 1
sample_type IP|TID|TIME|ADDR|CALLCHAIN|CPU|PERIOD|RAW|REGS_USER|STACK_USER|DATA_SRC
exclude_callchain_user 1
{ wakeup_events, wakeup_watermark } 1
sample_regs_user 0xff0fff
sample_stack_user 8192
sample_max_stack 50656
sys_perf_event_open failed, error -75
Value too large for defined data type
# perf trace -vv --no-syscalls --max-stack 4 -e probe_libc:inet_pton/call-graph=dwarf/ ping -6 -c 1 ::1
callchain: type DWARF
callchain: stack dump size 8192
perf_event_attr:
type 2
size 112
config 0x730
sample_type IP|TID|TIME|ADDR|CALLCHAIN|CPU|PERIOD|RAW|REGS_USER|STACK_USER|DATA_SRC
exclude_callchain_user 1
sample_regs_user 0xff0fff
sample_stack_user 8192
sample_max_stack 30448
sys_perf_event_open failed, error -75
Value too large for defined data type
#
Now the attr.sample_max_stack is set to zero and the above works as
expected:
# perf trace --no-syscalls --max-stack 4 -e probe_libc:inet_pton/call-graph=dwarf/ ping -6 -c 1 ::1
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.072 ms
--- ::1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
0.000 probe_libc:inet_pton:(7feb7a998350))
__inet_pton (inlined)
gaih_inet.constprop.7 (/usr/lib64/libc-2.26.so)
__GI_getaddrinfo (inlined)
[0xffffaa39b6108f3f] (/usr/bin/ping)
#
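A standalone illustration (the struct and field names here are made up, not the perf code) of why the designated initializer matters: members left unnamed are zero-initialized, while an uninitialized automatic struct holds whatever was on the stack:
#include <stdio.h>

struct callchain_opts {
    int record_mode;
    unsigned int max_stack;     /* must start at 0, not stack garbage */
};

int main(void)
{
    struct callchain_opts with_init = {
        .record_mode = 1,       /* only this member is named...       */
    };                          /* ...max_stack is implicitly 0       */

    printf("record_mode=%d max_stack=%u\n",
           with_init.record_mode, with_init.max_stack);
    return 0;
}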
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Hendrick Brueckner <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Thomas Richter <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
tools/perf/util/evsel.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 1f6beb3d0c68..4c31a22bbaa0 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -722,14 +722,14 @@ static void apply_config_terms(struct perf_evsel *evsel,
struct perf_evsel_config_term *term;
struct list_head *config_terms = &evsel->config_terms;
struct perf_event_attr *attr = &evsel->attr;
- struct callchain_param param;
+ /* callgraph default */
+ struct callchain_param param = {
+ .record_mode = callchain_param.record_mode,
+ };
u32 dump_size = 0;
int max_stack = 0;
const char *callgraph_buf = NULL;
- /* callgraph default */
- param.record_mode = callchain_param.record_mode;
-
list_for_each_entry(term, config_terms, list) {
switch (term->type) {
case PERF_EVSEL__CONFIG_TERM_PERIOD:
--
2.15.1
From: Shiraz Saleem <[email protected]>
[ Upstream commit 6376e926af1a8661dd1b2e6d0896e07f84a35844 ]
If the application invalidates the MR before the FMR WR, HW parses the
consumer key portion of the stag and returns an invalid stag key
Asynchronous Event (AE) that tears down the QP.
Fix this by zeroing-out the consumer key portion of the allocated stag
returned to application for FMR.
Fixes: ee855d3b93f3 ("RDMA/i40iw: Add base memory management extensions")
Signed-off-by: Shiraz Saleem <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/infiniband/hw/i40iw/i40iw_verbs.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
index 9e7ae7161d2f..b7961f21b555 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
@@ -1656,6 +1656,7 @@ static struct ib_mr *i40iw_alloc_mr(struct ib_pd *pd,
err_code = -EOVERFLOW;
goto err;
}
+ stag &= ~I40IW_CQPSQ_STAG_KEY_MASK;
iwmr->stag = stag;
iwmr->ibmr.rkey = stag;
iwmr->ibmr.lkey = stag;
--
2.15.1
From: Hans de Goede <[email protected]>
[ Upstream commit 4d6bde512a86c32df3a1f289d2b4cd04b17758d1 ]
On some Dell XPS models WMI events of type 0x0000 reporting a keycode of
0xe00c get reported when the brightness of the LCD panel changes.
This leads to us reporting false-positive kbd_led change events to
userspace which in turn leads to the kbd backlight OSD showing when it
should not.
We already read the current keyboard backlight brightness value when
reporting events because the led_classdev_notify_brightness_hw_changed
API requires this. Compare this value to the last known value and filter
out duplicate events, fixing this.
Note that the fixed issue is especially a problem on XPS models with an
ambient light sensor and automatic brightness adjustments turned on;
there this causes the kbd backlight OSD to show all the time.
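A minimal userspace sketch of the filtering idea (hypothetical helper and values, not the driver code): cache the last reported level and swallow events that do not change it:
#include <stdbool.h>
#include <stdio.h>

static int cached_level = -1;

/* Only report a change when the freshly read level differs from the cache. */
static bool level_changed(int new_level)
{
    if (new_level == cached_level)
        return false;                    /* duplicate event, swallow it */
    cached_level = new_level;
    return true;
}

int main(void)
{
    int events[] = { 2, 2, 3, 3, 3, 1 }; /* pretend WMI events (made up) */

    for (unsigned i = 0; i < sizeof(events) / sizeof(events[0]); i++)
        if (level_changed(events[i]))
            printf("notify userspace: level %d\n", events[i]);
    return 0;
}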
BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1514969
Fixes: 9c656b0799 ("platform/x86: dell-*: Call new led hw_changed API ...")
Acked-by: Pali Rohár <[email protected]>
Signed-off-by: Hans de Goede <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/platform/x86/dell-laptop.c | 24 ++++++++++++++++++++++--
1 file changed, 22 insertions(+), 2 deletions(-)
diff --git a/drivers/platform/x86/dell-laptop.c b/drivers/platform/x86/dell-laptop.c
index 7424e53157b0..dd5043a6a114 100644
--- a/drivers/platform/x86/dell-laptop.c
+++ b/drivers/platform/x86/dell-laptop.c
@@ -1177,6 +1177,7 @@ static u8 kbd_previous_mode_bit;
static bool kbd_led_present;
static DEFINE_MUTEX(kbd_led_mutex);
+static enum led_brightness kbd_led_level;
/*
* NOTE: there are three ways to set the keyboard backlight level.
@@ -2020,6 +2021,7 @@ static enum led_brightness kbd_led_level_get(struct led_classdev *led_cdev)
static int kbd_led_level_set(struct led_classdev *led_cdev,
enum led_brightness value)
{
+ enum led_brightness new_value = value;
struct kbd_state state;
struct kbd_state new_state;
u16 num;
@@ -2049,6 +2051,9 @@ static int kbd_led_level_set(struct led_classdev *led_cdev,
}
out:
+ if (ret == 0)
+ kbd_led_level = new_value;
+
mutex_unlock(&kbd_led_mutex);
return ret;
}
@@ -2076,6 +2081,9 @@ static int __init kbd_led_init(struct device *dev)
if (kbd_led.max_brightness)
kbd_led.max_brightness--;
}
+
+ kbd_led_level = kbd_led_level_get(NULL);
+
ret = led_classdev_register(dev, &kbd_led);
if (ret)
kbd_led_present = false;
@@ -2100,13 +2108,25 @@ static void kbd_led_exit(void)
static int dell_laptop_notifier_call(struct notifier_block *nb,
unsigned long action, void *data)
{
+ bool changed = false;
+ enum led_brightness new_kbd_led_level;
+
switch (action) {
case DELL_LAPTOP_KBD_BACKLIGHT_BRIGHTNESS_CHANGED:
if (!kbd_led_present)
break;
- led_classdev_notify_brightness_hw_changed(&kbd_led,
- kbd_led_level_get(&kbd_led));
+ mutex_lock(&kbd_led_mutex);
+ new_kbd_led_level = kbd_led_level_get(&kbd_led);
+ if (kbd_led_level != new_kbd_led_level) {
+ kbd_led_level = new_kbd_led_level;
+ changed = true;
+ }
+ mutex_unlock(&kbd_led_mutex);
+
+ if (changed)
+ led_classdev_notify_brightness_hw_changed(&kbd_led,
+ kbd_led_level);
break;
}
--
2.15.1
From: "Steven Rostedt (VMware)" <[email protected]>
[ Upstream commit dbdda842fe96f8932bae554f0adf463c27c42bc7 ]
This patch implements what I discussed at Kernel Summit. I added
lockdep annotation (hopefully correctly), and it hasn't had any splats
(since I fixed some bugs in the first iterations). It did catch
problems when I had the owner covering too much. But now that the owner
is only set when actively calling the consoles, lockdep has stayed
quiet.
Here's the design again:
I added a "console_owner" which is set to a task that is actively
writing to the consoles. It is *not* the same as the owner of the
console_lock. It is only set when doing the calls to the console
functions. It is protected by a console_owner_lock which is a raw spin
lock.
There is a console_waiter. This is set when there is an active console
owner that is not current, and waiter is not set. This too is protected
by console_owner_lock.
In printk() when it tries to write to the consoles, we have:
if (console_trylock())
        console_unlock();
Now I added an else, which will check if there is an active owner, and
no current waiter. If that is the case, then console_waiter is set, and
the task goes into a spin until it is no longer set.
When the active console owner finishes writing the current message to
the consoles, it grabs the console_owner_lock and sees if there is a
waiter, and clears console_owner.
If there is a waiter, then it breaks out of the loop, clears the waiter
flag (because that will release the waiter from its spin), and exits.
Note, it does *not* release the console semaphore. Because it is a
semaphore, there is no owner. Another task may release it. This means
that the waiter is guaranteed to be the new console owner! Which it
becomes.
Then the waiter calls console_unlock() and continues to write to the
consoles.
If another task comes along and does a printk() it too can become the
new waiter, and we wash rinse and repeat!
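A minimal userspace sketch of this hand-off, as an illustrative model only (pthreads, a POSIX semaphore standing in for console_sem, and C11 atomics; the ids, line counts and helper names are made up). The semaphore has no owner, so the busy printer can leave it "held" for the spinning waiter instead of posting it:
#include <pthread.h>
#include <semaphore.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static sem_t console_sem;                  /* the console lock (no owner)      */
static pthread_mutex_t owner_lock = PTHREAD_MUTEX_INITIALIZER;
static int console_owner;                  /* id of the active printer, 0=none */
static atomic_bool console_waiter;

static void console_flush(int id)
{
    bool waiter = false;

    for (int line = 0; line < 3 && !waiter; line++) {
        pthread_mutex_lock(&owner_lock);
        console_owner = id;                /* actively "calling the consoles"  */
        pthread_mutex_unlock(&owner_lock);

        printf("thread %d prints line %d\n", id, line);

        pthread_mutex_lock(&owner_lock);
        console_owner = 0;
        waiter = atomic_load(&console_waiter);
        pthread_mutex_unlock(&owner_lock);
    }

    if (waiter)
        atomic_store(&console_waiter, false); /* hand-off: waiter now owns console_sem */
    else
        sem_post(&console_sem);               /* nobody waiting, release normally */
}

static void *printer(void *arg)
{
    int id = (int)(long)arg;
    bool spin = false;

    if (sem_trywait(&console_sem) == 0) {
        console_flush(id);
        return NULL;
    }

    pthread_mutex_lock(&owner_lock);
    if (console_owner && console_owner != id && !atomic_load(&console_waiter)) {
        atomic_store(&console_waiter, true);  /* become the (single) waiter */
        spin = true;
    }
    pthread_mutex_unlock(&owner_lock);

    if (spin) {
        while (atomic_load(&console_waiter))
            ;                                 /* owner clears this on hand-off */
        console_flush(id);                    /* we inherited console_sem */
    }
    /* In this toy model a loser that sees no active owner simply gives up;
     * the kernel instead leaves its message in the ring buffer for the
     * current console_sem holder to print. */
    return NULL;
}

int main(void)
{
    pthread_t t[2];

    sem_init(&console_sem, 0, 1);
    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, printer, (void *)(i + 1));
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}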
By Petr Mladek about possible new deadlocks:
The thing is that we move console_sem only to printk() call
that normally calls console_unlock() as well. It means that
the transferred owner should not bring new type of dependencies.
As Steven said somewhere: "If there is a deadlock, it was
there even before."
We could look at it from this side. The possible deadlock would
look like:
CPU0                                    CPU1
console_unlock()
  console_owner = current;
                                        spin_lockA()
                                          printk()
                                            spin = true;
  while (...)
    call_console_drivers()
      spin_lockA()
This would be a deadlock. CPU0 would wait for the lock A.
While CPU1 would own the lock A and would wait for CPU0
to finish calling the console drivers and pass the console_sem
owner.
But if the above is true then the following scenario was
already possible before:
CPU0
spin_lockA()
  printk()
    console_unlock()
      call_console_drivers()
        spin_lockA()
In other words, this deadlock was there even before. Such
deadlocks are prevented by using printk_deferred() in
the sections guarded by the lock A.
By Steven Rostedt:
To demonstrate the issue, this module has been shown to lock up a
system with 4 CPUs and a slow console (like a serial console). It is
also able to lock up an 8 CPU system with only a fast (VGA) console, by
passing in "loops=100". The changes in this commit prevent this module
from locking up the system.
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/sched.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>
#include <linux/hrtimer.h>

static bool stop_testing;
static unsigned int loops = 1;

static void preempt_printk_workfn(struct work_struct *work)
{
        int i;

        while (!READ_ONCE(stop_testing)) {
                for (i = 0; i < loops && !READ_ONCE(stop_testing); i++) {
                        preempt_disable();
                        pr_emerg("%5d%-75s\n", smp_processor_id(),
                                 " XXX NOPREEMPT");
                        preempt_enable();
                }
                msleep(1);
        }
}

static struct work_struct __percpu *works;

static void finish(void)
{
        int cpu;

        WRITE_ONCE(stop_testing, true);
        for_each_online_cpu(cpu)
                flush_work(per_cpu_ptr(works, cpu));
        free_percpu(works);
}

static int __init test_init(void)
{
        int cpu;

        works = alloc_percpu(struct work_struct);
        if (!works)
                return -ENOMEM;

        /*
         * This is just a test module. This will break if you
         * do any CPU hot plugging between loading and
         * unloading the module.
         */
        for_each_online_cpu(cpu) {
                struct work_struct *work = per_cpu_ptr(works, cpu);

                INIT_WORK(work, &preempt_printk_workfn);
                schedule_work_on(cpu, work);
        }

        return 0;
}

static void __exit test_exit(void)
{
        finish();
}

module_param(loops, uint, 0);
module_init(test_init);
module_exit(test_exit);
MODULE_LICENSE("GPL");
Link: http://lkml.kernel.org/r/[email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Cong Wang <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Tetsuo Handa <[email protected]>
Cc: Byungchul Park <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: [email protected]
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
[[email protected]: Commit message about possible deadlocks]
Acked-by: Sergey Senozhatsky <[email protected]>
Signed-off-by: Petr Mladek <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
kernel/printk/printk.c | 108 ++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 107 insertions(+), 1 deletion(-)
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 512f7c2baedd..89c3496975cc 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -86,8 +86,15 @@ EXPORT_SYMBOL_GPL(console_drivers);
static struct lockdep_map console_lock_dep_map = {
.name = "console_lock"
};
+static struct lockdep_map console_owner_dep_map = {
+ .name = "console_owner"
+};
#endif
+static DEFINE_RAW_SPINLOCK(console_owner_lock);
+static struct task_struct *console_owner;
+static bool console_waiter;
+
enum devkmsg_log_bits {
__DEVKMSG_LOG_BIT_ON = 0,
__DEVKMSG_LOG_BIT_OFF,
@@ -1753,8 +1760,56 @@ asmlinkage int vprintk_emit(int facility, int level,
* semaphore. The release will print out buffers and wake up
* /dev/kmsg and syslog() users.
*/
- if (console_trylock())
+ if (console_trylock()) {
console_unlock();
+ } else {
+ struct task_struct *owner = NULL;
+ bool waiter;
+ bool spin = false;
+
+ printk_safe_enter_irqsave(flags);
+
+ raw_spin_lock(&console_owner_lock);
+ owner = READ_ONCE(console_owner);
+ waiter = READ_ONCE(console_waiter);
+ if (!waiter && owner && owner != current) {
+ WRITE_ONCE(console_waiter, true);
+ spin = true;
+ }
+ raw_spin_unlock(&console_owner_lock);
+
+ /*
+ * If there is an active printk() writing to the
+ * consoles, instead of having it write our data too,
+ * see if we can offload that load from the active
+ * printer, and do some printing ourselves.
+ * Go into a spin only if there isn't already a waiter
+ * spinning, and there is an active printer, and
+ * that active printer isn't us (recursive printk?).
+ */
+ if (spin) {
+ /* We spin waiting for the owner to release us */
+ spin_acquire(&console_owner_dep_map, 0, 0, _THIS_IP_);
+ /* Owner will clear console_waiter on hand off */
+ while (READ_ONCE(console_waiter))
+ cpu_relax();
+
+ spin_release(&console_owner_dep_map, 1, _THIS_IP_);
+ printk_safe_exit_irqrestore(flags);
+
+ /*
+ * The owner passed the console lock to us.
+ * Since we did not spin on console lock, annotate
+ * this as a trylock. Otherwise lockdep will
+ * complain.
+ */
+ mutex_acquire(&console_lock_dep_map, 0, 1, _THIS_IP_);
+ console_unlock();
+ printk_safe_enter_irqsave(flags);
+ }
+ printk_safe_exit_irqrestore(flags);
+
+ }
}
return printed_len;
@@ -2141,6 +2196,7 @@ void console_unlock(void)
static u64 seen_seq;
unsigned long flags;
bool wake_klogd = false;
+ bool waiter = false;
bool do_cond_resched, retry;
if (console_suspended) {
@@ -2229,14 +2285,64 @@ skip:
console_seq++;
raw_spin_unlock(&logbuf_lock);
+ /*
+ * While actively printing out messages, if another printk()
+ * were to occur on another CPU, it may wait for this one to
+ * finish. This task can not be preempted if there is a
+ * waiter waiting to take over.
+ */
+ raw_spin_lock(&console_owner_lock);
+ console_owner = current;
+ raw_spin_unlock(&console_owner_lock);
+
+ /* The waiter may spin on us after setting console_owner */
+ spin_acquire(&console_owner_dep_map, 0, 0, _THIS_IP_);
+
stop_critical_timings(); /* don't trace print latency */
call_console_drivers(ext_text, ext_len, text, len);
start_critical_timings();
+
+ raw_spin_lock(&console_owner_lock);
+ waiter = READ_ONCE(console_waiter);
+ console_owner = NULL;
+ raw_spin_unlock(&console_owner_lock);
+
+ /*
+ * If there is a waiter waiting for us, then pass the
+ * rest of the work load over to that waiter.
+ */
+ if (waiter)
+ break;
+
+ /* There was no waiter, and nothing will spin on us here */
+ spin_release(&console_owner_dep_map, 1, _THIS_IP_);
+
printk_safe_exit_irqrestore(flags);
if (do_cond_resched)
cond_resched();
}
+
+ /*
+ * If there is an active waiter waiting on the console_lock.
+ * Pass off the printing to the waiter, and the waiter
+ * will continue printing on its CPU, and when all writing
+ * has finished, the last printer will wake up klogd.
+ */
+ if (waiter) {
+ WRITE_ONCE(console_waiter, false);
+ /* The waiter is now free to continue */
+ spin_release(&console_owner_dep_map, 1, _THIS_IP_);
+ /*
+ * Hand off console_lock to waiter. The waiter will perform
+ * the up(). After this, the waiter is the console_lock owner.
+ */
+ mutex_release(&console_lock_dep_map, 1, _THIS_IP_);
+ printk_safe_exit_irqrestore(flags);
+ /* Note, if waiter is set, logbuf_lock is not held */
+ return;
+ }
+
console_locked = 0;
/* Release the exclusive_console once it is used */
--
2.15.1
From: Robin Murphy <[email protected]>
[ Upstream commit dc98b8480d8a68c2ce9aa28b9f0d714fd258bc0b ]
Removing the early device registration hook overlooked the fact that
it only ran conditionally on a compatible device being present in the
DT. With exynos_iommu_init() now running as an unconditional initcall,
problems arise on non-Exynos systems when other IOMMU drivers find
themselves unable to install their ops on the platform bus, or at worst
the Exynos ops get called with someone else's domain and all hell breaks
loose.
The global ops/cache setup could probably all now be triggered from the
first IOMMU probe, as with dma_dev assignment, but for the time being the
simplest fix is to resurrect the logic from commit a7b67cd5d9af
("iommu/exynos: Play nice in multi-platform builds") to explicitly check
the DT for the presence of an Exynos IOMMU before trying anything.
Fixes: 928055a01b3f ("iommu/exynos: Remove custom platform device registration code")
Signed-off-by: Robin Murphy <[email protected]>
Acked-by: Marek Szyprowski <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/iommu/exynos-iommu.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index 25c2c75f5332..13485a40dd46 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -1344,8 +1344,15 @@ static const struct iommu_ops exynos_iommu_ops = {
static int __init exynos_iommu_init(void)
{
+ struct device_node *np;
int ret;
+ np = of_find_matching_node(NULL, sysmmu_of_match);
+ if (!np)
+ return 0;
+
+ of_node_put(np);
+
lv2table_kmem_cache = kmem_cache_create("exynos-iommu-lv2table",
LV2TABLE_SIZE, LV2TABLE_SIZE, 0, NULL);
if (!lv2table_kmem_cache) {
--
2.15.1
From: Christian Borntraeger <[email protected]>
[ Upstream commit 241e3ec0faf5ab1a0d9b1f6c43eefa919fb9c112 ]
commit a03825bbd0c3 ("KVM: s390: use kvm->created_vcpus") introduced
kvm->created_vcpus to avoid races with the existing kvm->online_vcpus
scheme. One place was "forgotten" and one new place was "added".
Let's fix those.
Reported-by: Halil Pasic <[email protected]>
Signed-off-by: Christian Borntraeger <[email protected]>
Reviewed-by: Halil Pasic <[email protected]>
Reviewed-by: Cornelia Huck <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Fixes: 4e0b1ab72b8a ("KVM: s390: gs support for kvm guests")
Fixes: a03825bbd0c3 ("KVM: s390: use kvm->created_vcpus")
Signed-off-by: Sasha Levin <[email protected]>
---
arch/s390/kvm/kvm-s390.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 0fa3a788dd20..0bce918db11a 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -601,7 +601,7 @@ static int kvm_vm_ioctl_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
case KVM_CAP_S390_GS:
r = -EINVAL;
mutex_lock(&kvm->lock);
- if (atomic_read(&kvm->online_vcpus)) {
+ if (kvm->created_vcpus) {
r = -EBUSY;
} else if (test_facility(133)) {
set_kvm_facility(kvm->arch.model.fac_mask, 133);
@@ -1121,7 +1121,7 @@ static int kvm_s390_set_processor_feat(struct kvm *kvm,
return -EINVAL;
mutex_lock(&kvm->lock);
- if (!atomic_read(&kvm->online_vcpus)) {
+ if (!kvm->created_vcpus) {
bitmap_copy(kvm->arch.cpu_feat, (unsigned long *) data.feat,
KVM_S390_VM_CPU_FEAT_NR_BITS);
ret = 0;
--
2.15.1
From: Dmitry Torokhov <[email protected]>
[ Upstream commit 2bc4298f59d2f15175bb568e2d356b5912d0cdd9 ]
When Synaptics protocol is disabled, we still need to try and detect the
hardware, so we can switch to the SMBus device if SMBus is detected, or we
know that it is a Synaptics device and reset it properly for the bare PS/2
protocol.
Fixes: c378b5119eb0 ("Input: psmouse - factor out common protocol probing code")
Reported-by: Matteo Croce <[email protected]>
Tested-by: Matteo Croce <[email protected]>
Signed-off-by: Dmitry Torokhov <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/input/mouse/psmouse-base.c | 34 +++++++++++++++++++++-------------
1 file changed, 21 insertions(+), 13 deletions(-)
diff --git a/drivers/input/mouse/psmouse-base.c b/drivers/input/mouse/psmouse-base.c
index 6a5649e52eed..8ac9e03c05b4 100644
--- a/drivers/input/mouse/psmouse-base.c
+++ b/drivers/input/mouse/psmouse-base.c
@@ -975,6 +975,21 @@ static void psmouse_apply_defaults(struct psmouse *psmouse)
psmouse->pt_deactivate = NULL;
}
+static bool psmouse_do_detect(int (*detect)(struct psmouse *, bool),
+ struct psmouse *psmouse, bool allow_passthrough,
+ bool set_properties)
+{
+ if (psmouse->ps2dev.serio->id.type == SERIO_PS_PSTHRU &&
+ !allow_passthrough) {
+ return false;
+ }
+
+ if (set_properties)
+ psmouse_apply_defaults(psmouse);
+
+ return detect(psmouse, set_properties) == 0;
+}
+
static bool psmouse_try_protocol(struct psmouse *psmouse,
enum psmouse_type type,
unsigned int *max_proto,
@@ -986,15 +1001,8 @@ static bool psmouse_try_protocol(struct psmouse *psmouse,
if (!proto)
return false;
- if (psmouse->ps2dev.serio->id.type == SERIO_PS_PSTHRU &&
- !proto->try_passthru) {
- return false;
- }
-
- if (set_properties)
- psmouse_apply_defaults(psmouse);
-
- if (proto->detect(psmouse, set_properties) != 0)
+ if (!psmouse_do_detect(proto->detect, psmouse, proto->try_passthru,
+ set_properties))
return false;
if (set_properties && proto->init && init_allowed) {
@@ -1027,8 +1035,8 @@ static int psmouse_extensions(struct psmouse *psmouse,
* Always check for focaltech, this is safe as it uses pnp-id
* matching.
*/
- if (psmouse_try_protocol(psmouse, PSMOUSE_FOCALTECH,
- &max_proto, set_properties, false)) {
+ if (psmouse_do_detect(focaltech_detect,
+ psmouse, false, set_properties)) {
if (max_proto > PSMOUSE_IMEX &&
IS_ENABLED(CONFIG_MOUSE_PS2_FOCALTECH) &&
(!set_properties || focaltech_init(psmouse) == 0)) {
@@ -1074,8 +1082,8 @@ static int psmouse_extensions(struct psmouse *psmouse,
* probing for IntelliMouse.
*/
if (max_proto > PSMOUSE_PS2 &&
- psmouse_try_protocol(psmouse, PSMOUSE_SYNAPTICS, &max_proto,
- set_properties, false)) {
+ psmouse_do_detect(synaptics_detect,
+ psmouse, false, set_properties)) {
synaptics_hardware = true;
if (max_proto > PSMOUSE_IMEX) {
--
2.15.1
From: Masami Hiramatsu <[email protected]>
[ Upstream commit 5e46664703b364434a2cbda3e6988fc24ae0ced5 ]
Fix the multiple-kprobes testcase to pick only text symbols.
kallsyms marks text symbols with " t " or " T ", but the
current testcase picks any symbol whose line contains a 't',
so it also picks data symbols whose names contain 't' (e.g. "str").
Fix it to match only symbol lines containing " t " or " T "
(including the surrounding spaces).
Signed-off-by: Masami Hiramatsu <[email protected]>
Reported-by: Russell King <[email protected]>
Acked-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
tools/testing/selftests/ftrace/test.d/kprobe/multiple_kprobes.tc | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/multiple_kprobes.tc b/tools/testing/selftests/ftrace/test.d/kprobe/multiple_kprobes.tc
index bb16cf91f1b5..e297bd7a2e79 100644
--- a/tools/testing/selftests/ftrace/test.d/kprobe/multiple_kprobes.tc
+++ b/tools/testing/selftests/ftrace/test.d/kprobe/multiple_kprobes.tc
@@ -12,8 +12,8 @@ case `uname -m` in
*) OFFS=0;;
esac
-echo "Setup up to 256 kprobes"
-grep t /proc/kallsyms | cut -f3 -d" " | grep -v .*\\..* | \
+echo "Setup up kprobes on first 256 text symbols"
+grep -i " t " /proc/kallsyms | cut -f3 -d" " | grep -v .*\\..* | \
head -n 256 | while read i; do echo p ${i}+${OFFS} ; done > kprobe_events ||:
echo 1 > events/kprobes/enable
--
2.15.1
From: Anna-Maria Gleixner <[email protected]>
[ Upstream commit 91633eed73a3ac37aaece5c8c1f93a18bae616a9 ]
So far only CLOCK_MONOTONIC and CLOCK_REALTIME were taken into account as
well as HRTIMER_MODE_ABS/REL in the hrtimer_init tracepoint. The query for
detecting the ABS or REL timer modes is not valid anymore; it got broken
by the introduction of HRTIMER_MODE_PINNED.
HRTIMER_MODE_PINNED is not evaluated in the hrtimer_init() call, but for the
sake of completeness print all given modes.
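A tiny userspace analog (made-up enum and helper, not the tracepoint macro itself) of the __print_symbolic() decoding added below, showing why a plain ABS-vs-REL comparison misprints the PINNED modes:
#include <stdio.h>

enum hrtimer_mode { MODE_ABS, MODE_REL, MODE_ABS_PINNED, MODE_REL_PINNED };

static const char *decode_mode(enum hrtimer_mode mode)
{
    switch (mode) {
    case MODE_ABS:        return "ABS";
    case MODE_REL:        return "REL";
    case MODE_ABS_PINNED: return "ABS|PINNED";
    case MODE_REL_PINNED: return "REL|PINNED";
    }
    return "UNKNOWN";
}

int main(void)
{
    /* A single "== ABS" test would have reported this one as "REL". */
    printf("%s\n", decode_mode(MODE_ABS_PINNED));
    return 0;
}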
Signed-off-by: Anna-Maria Gleixner <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
include/trace/events/timer.h | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/include/trace/events/timer.h b/include/trace/events/timer.h
index 16e305e69f34..c6f728037c53 100644
--- a/include/trace/events/timer.h
+++ b/include/trace/events/timer.h
@@ -136,6 +136,20 @@ DEFINE_EVENT(timer_class, timer_cancel,
TP_ARGS(timer)
);
+#define decode_clockid(type) \
+ __print_symbolic(type, \
+ { CLOCK_REALTIME, "CLOCK_REALTIME" }, \
+ { CLOCK_MONOTONIC, "CLOCK_MONOTONIC" }, \
+ { CLOCK_BOOTTIME, "CLOCK_BOOTTIME" }, \
+ { CLOCK_TAI, "CLOCK_TAI" })
+
+#define decode_hrtimer_mode(mode) \
+ __print_symbolic(mode, \
+ { HRTIMER_MODE_ABS, "ABS" }, \
+ { HRTIMER_MODE_REL, "REL" }, \
+ { HRTIMER_MODE_ABS_PINNED, "ABS|PINNED" }, \
+ { HRTIMER_MODE_REL_PINNED, "REL|PINNED" })
+
/**
* hrtimer_init - called when the hrtimer is initialized
* @hrtimer: pointer to struct hrtimer
@@ -162,10 +176,8 @@ TRACE_EVENT(hrtimer_init,
),
TP_printk("hrtimer=%p clockid=%s mode=%s", __entry->hrtimer,
- __entry->clockid == CLOCK_REALTIME ?
- "CLOCK_REALTIME" : "CLOCK_MONOTONIC",
- __entry->mode == HRTIMER_MODE_ABS ?
- "HRTIMER_MODE_ABS" : "HRTIMER_MODE_REL")
+ decode_clockid(__entry->clockid),
+ decode_hrtimer_mode(__entry->mode))
);
/**
--
2.15.1
From: Paul Mackerras <[email protected]>
[ Upstream commit 5855564c8ab2d9cefca7b2933bd19818eb795e40 ]
This adds a register identifier for use with the one_reg interface
to allow the decrementer expiry time to be read and written by
userspace. The decrementer expiry time is in guest timebase units
and is equal to the sum of the decrementer and the guest timebase.
(The expiry time is used rather than the decrementer value itself
because the expiry time is not constantly changing, though the
decrementer value is, while the guest vcpu is not running.)
Without this, a guest vcpu migrated to a new host will see its
decrementer set to some random value. On POWER8 and earlier, the
decrementer is 32 bits wide and counts down at 512MHz, so the
guest vcpu will potentially see no decrementer interrupts for up
to about 4 seconds, which will lead to a stall. With POWER9, the
decrementer is now 56 bits wide, so the stall can be much longer
(up to 2.23 years) and more noticeable.
To help work around the problem in cases where userspace has not been
updated to migrate the decrementer expiry time, we now set the
default decrementer expiry at vcpu creation time to the current time
rather than the maximum possible value. This should mean an
immediate decrementer interrupt when a migrated vcpu starts
running. In cases where the decrementer is 32 bits wide and more
than 4 seconds elapse between the creation of the vcpu and when it
first runs, the decrementer would have wrapped around to positive
values and there may still be a stall - but this is no worse than
the current situation. In the large-decrementer case, we are sure
to get an immediate decrementer interrupt (assuming the time from
vcpu creation to first run is less than 2.23 years) and we thus
avoid a very long stall.
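A back-of-the-envelope sketch (hypothetical numbers, plain 64-bit arithmetic, wrap-around ignored) of why migrating the expiry time rather than the raw decrementer value works:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t host_tb    = 1000000000ULL;  /* current host timebase (made up)   */
    uint64_t tb_offset  = 5000ULL;        /* guest timebase = host_tb + offset */
    uint64_t dec        = 512000000ULL;   /* ~1s of decrementer at 512MHz      */
    uint64_t dec_expiry = (host_tb + tb_offset) + dec;   /* value migrated     */

    /* On the destination, the decrementer is recovered from the expiry time
     * and the current guest timebase, so the guest sees a sane countdown. */
    uint64_t new_host_tb = 1200000000ULL;
    uint64_t new_dec     = dec_expiry - (new_host_tb + tb_offset);

    printf("migrated expiry=%llu restored dec=%llu\n",
           (unsigned long long)dec_expiry, (unsigned long long)new_dec);
    return 0;
}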
Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
Documentation/virtual/kvm/api.txt | 1 +
arch/powerpc/include/uapi/asm/kvm.h | 2 ++
arch/powerpc/kvm/book3s_hv.c | 8 ++++++++
arch/powerpc/kvm/powerpc.c | 2 +-
4 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index e63a35fafef0..0f9089416b4c 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1837,6 +1837,7 @@ registers, find a list below:
PPC | KVM_REG_PPC_DBSR | 32
PPC | KVM_REG_PPC_TIDR | 64
PPC | KVM_REG_PPC_PSSCR | 64
+ PPC | KVM_REG_PPC_DEC_EXPIRY | 64
PPC | KVM_REG_PPC_TM_GPR0 | 64
...
PPC | KVM_REG_PPC_TM_GPR31 | 64
diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
index 61d6049f4c1e..8aaec831053a 100644
--- a/arch/powerpc/include/uapi/asm/kvm.h
+++ b/arch/powerpc/include/uapi/asm/kvm.h
@@ -607,6 +607,8 @@ struct kvm_ppc_rmmu_info {
#define KVM_REG_PPC_TIDR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xbc)
#define KVM_REG_PPC_PSSCR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xbd)
+#define KVM_REG_PPC_DEC_EXPIRY (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xbe)
+
/* Transactional Memory checkpointed state:
* This is all GPRs, all VSX regs and a subset of SPRs
*/
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index f48e3379a18a..e094dc90ff1b 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1497,6 +1497,10 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
case KVM_REG_PPC_ARCH_COMPAT:
*val = get_reg_val(id, vcpu->arch.vcore->arch_compat);
break;
+ case KVM_REG_PPC_DEC_EXPIRY:
+ *val = get_reg_val(id, vcpu->arch.dec_expires +
+ vcpu->arch.vcore->tb_offset);
+ break;
default:
r = -EINVAL;
break;
@@ -1724,6 +1728,10 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
case KVM_REG_PPC_ARCH_COMPAT:
r = kvmppc_set_arch_compat(vcpu, set_reg_val(id, *val));
break;
+ case KVM_REG_PPC_DEC_EXPIRY:
+ vcpu->arch.dec_expires = set_reg_val(id, *val) -
+ vcpu->arch.vcore->tb_offset;
+ break;
default:
r = -EINVAL;
break;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 2b02d51d14d8..ecb45361095b 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -758,7 +758,7 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
hrtimer_init(&vcpu->arch.dec_timer, CLOCK_REALTIME, HRTIMER_MODE_ABS);
vcpu->arch.dec_timer.function = kvmppc_decrementer_wakeup;
- vcpu->arch.dec_expires = ~(u64)0;
+ vcpu->arch.dec_expires = get_tb();
#ifdef CONFIG_KVM_EXIT_TIMING
mutex_init(&vcpu->arch.exit_timing_lock);
--
2.15.1
From: Alex Williamson <[email protected]>
[ Upstream commit aa008206634363ef800fbd5f0262016c9ff81dea ]
The Marvell 9128 is the original device generating bug 42679, from which
many other Marvell DMA alias quirks have been sourced, but we didn't have
positive confirmation of the fix on 9128 until now.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=42679
Link: https://www.spinics.net/lists/kvm/msg161459.html
Reported-by: Binarus <[email protected]>
Tested-by: Binarus <[email protected]>
Signed-off-by: Alex Williamson <[email protected]>
Signed-off-by: Bjorn Helgaas <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/pci/quirks.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 05fadcc4f9d2..75f57aa954f2 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -3879,6 +3879,8 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9120,
quirk_dma_func1_alias);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9123,
quirk_dma_func1_alias);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9128,
+ quirk_dma_func1_alias);
/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c14 */
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9130,
quirk_dma_func1_alias);
--
2.15.1
From: Arnaldo Carvalho de Melo <[email protected]>
[ Upstream commit eabad8c6856f185f876b54c426c2cc69fe0f0a7d ]
When setting up DWARF callchains on specific events, without using
'record' or 'trace' --call-graph, but instead doing it like:
perf trace -e cycles/call-graph=dwarf/
The unwind__prepare_access() call in thread__insert_map() was not being
performed when we process PERF_RECORD_MMAP(2) metadata events,
precluding us from using per-event DWARF callchains; they were handled
only when we asked for all events to be DWARF, using "--call-graph dwarf".
We do it in the PERF_RECORD_MMAP because we have to look at one of the
executable maps to figure out the executable type (64-bit, 32-bit) of
the DSO laid out in that mmap. Also to look at the architecture where
the perf.data file was recorded.
All this probably should be deferred to when we process a sample for
some thread that has callchains, so that we do this processing only for
the threads with samples, not for all of them.
For now, fix using DWARF on specific events.
Before:
# perf trace --no-syscalls -e probe_libc:inet_pton/call-graph=dwarf/ ping -6 -c 1 ::1
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.048 ms
--- ::1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
0.000 probe_libc:inet_pton:(7fe9597bb350))
Problem processing probe_libc:inet_pton callchain, skipping...
#
After:
# perf trace --no-syscalls -e probe_libc:inet_pton/call-graph=dwarf/ ping -6 -c 1 ::1
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.060 ms
--- ::1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
0.000 probe_libc:inet_pton:(7fd4aa930350))
__inet_pton (inlined)
gaih_inet.constprop.7 (/usr/lib64/libc-2.26.so)
__GI_getaddrinfo (inlined)
[0xffffaa804e51af3f] (/usr/bin/ping)
__libc_start_main (/usr/lib64/libc-2.26.so)
[0xffffaa804e51b379] (/usr/bin/ping)
#
# perf trace --call-graph=dwarf --no-syscalls -e probe_libc:inet_pton/call-graph=dwarf/ ping -6 -c 1 ::1
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.057 ms
--- ::1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
0.000 probe_libc:inet_pton:(7f9363b9e350))
__inet_pton (inlined)
gaih_inet.constprop.7 (/usr/lib64/libc-2.26.so)
__GI_getaddrinfo (inlined)
[0xffffa9e8a14e0f3f] (/usr/bin/ping)
__libc_start_main (/usr/lib64/libc-2.26.so)
[0xffffa9e8a14e1379] (/usr/bin/ping)
#
# perf trace --call-graph=fp --no-syscalls -e probe_libc:inet_pton/call-graph=dwarf/ ping -6 -c 1 ::1
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.077 ms
--- ::1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
0.000 probe_libc:inet_pton:(7f4947e1c350))
__inet_pton (inlined)
gaih_inet.constprop.7 (/usr/lib64/libc-2.26.so)
__GI_getaddrinfo (inlined)
[0xffffaa716d88ef3f] (/usr/bin/ping)
__libc_start_main (/usr/lib64/libc-2.26.so)
[0xffffaa716d88f379] (/usr/bin/ping)
#
# perf trace --no-syscalls -e probe_libc:inet_pton/call-graph=fp/ ping -6 -c 1 ::1
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.078 ms
--- ::1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms
0.000 probe_libc:inet_pton:(7fa157696350))
__GI___inet_pton (/usr/lib64/libc-2.26.so)
getaddrinfo (/usr/lib64/libc-2.26.so)
[0xffffa9ba39c74f40] (/usr/bin/ping)
#
Acked-by: Namhyung Kim <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Hendrick Brueckner <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Thomas Richter <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
tools/perf/builtin-c2c.c | 5 +++--
tools/perf/builtin-report.c | 5 +++--
tools/perf/builtin-script.c | 5 +++--
tools/perf/tests/dwarf-unwind.c | 1 +
tools/perf/util/callchain.c | 10 ++++++++++
tools/perf/util/callchain.h | 2 ++
tools/perf/util/unwind-libunwind-local.c | 9 +++------
7 files changed, 25 insertions(+), 12 deletions(-)
diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
index d00aac51130d..3479a1bc7caa 100644
--- a/tools/perf/builtin-c2c.c
+++ b/tools/perf/builtin-c2c.c
@@ -2393,9 +2393,10 @@ static int setup_callchain(struct perf_evlist *evlist)
enum perf_call_graph_mode mode = CALLCHAIN_NONE;
if ((sample_type & PERF_SAMPLE_REGS_USER) &&
- (sample_type & PERF_SAMPLE_STACK_USER))
+ (sample_type & PERF_SAMPLE_STACK_USER)) {
mode = CALLCHAIN_DWARF;
- else if (sample_type & PERF_SAMPLE_BRANCH_STACK)
+ dwarf_callchain_users = true;
+ } else if (sample_type & PERF_SAMPLE_BRANCH_STACK)
mode = CALLCHAIN_LBR;
else if (sample_type & PERF_SAMPLE_CALLCHAIN)
mode = CALLCHAIN_FP;
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index fae4b0340750..78f048f6a2c2 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -312,9 +312,10 @@ static int report__setup_sample_type(struct report *rep)
if (symbol_conf.use_callchain || symbol_conf.cumulate_callchain) {
if ((sample_type & PERF_SAMPLE_REGS_USER) &&
- (sample_type & PERF_SAMPLE_STACK_USER))
+ (sample_type & PERF_SAMPLE_STACK_USER)) {
callchain_param.record_mode = CALLCHAIN_DWARF;
- else if (sample_type & PERF_SAMPLE_BRANCH_STACK)
+ dwarf_callchain_users = true;
+ } else if (sample_type & PERF_SAMPLE_BRANCH_STACK)
callchain_param.record_mode = CALLCHAIN_LBR;
else
callchain_param.record_mode = CALLCHAIN_FP;
diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
index 0fe02758de7d..615fdc63452e 100644
--- a/tools/perf/builtin-script.c
+++ b/tools/perf/builtin-script.c
@@ -2574,9 +2574,10 @@ static void script__setup_sample_type(struct perf_script *script)
if (symbol_conf.use_callchain || symbol_conf.cumulate_callchain) {
if ((sample_type & PERF_SAMPLE_REGS_USER) &&
- (sample_type & PERF_SAMPLE_STACK_USER))
+ (sample_type & PERF_SAMPLE_STACK_USER)) {
callchain_param.record_mode = CALLCHAIN_DWARF;
- else if (sample_type & PERF_SAMPLE_BRANCH_STACK)
+ dwarf_callchain_users = true;
+ } else if (sample_type & PERF_SAMPLE_BRANCH_STACK)
callchain_param.record_mode = CALLCHAIN_LBR;
else
callchain_param.record_mode = CALLCHAIN_FP;
diff --git a/tools/perf/tests/dwarf-unwind.c b/tools/perf/tests/dwarf-unwind.c
index ac40e05bcab4..260418969120 100644
--- a/tools/perf/tests/dwarf-unwind.c
+++ b/tools/perf/tests/dwarf-unwind.c
@@ -173,6 +173,7 @@ int test__dwarf_unwind(struct test *test __maybe_unused, int subtest __maybe_unu
}
callchain_param.record_mode = CALLCHAIN_DWARF;
+ dwarf_callchain_users = true;
if (init_live_machine(machine)) {
pr_err("Could not init machine\n");
diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
index 6031933d811c..146683b1c28d 100644
--- a/tools/perf/util/callchain.c
+++ b/tools/perf/util/callchain.c
@@ -37,6 +37,15 @@ struct callchain_param callchain_param = {
CALLCHAIN_PARAM_DEFAULT
};
+/*
+ * Are there any events usind DWARF callchains?
+ *
+ * I.e.
+ *
+ * -e cycles/call-graph=dwarf/
+ */
+bool dwarf_callchain_users;
+
struct callchain_param callchain_param_default = {
CALLCHAIN_PARAM_DEFAULT
};
@@ -265,6 +274,7 @@ int parse_callchain_record(const char *arg, struct callchain_param *param)
ret = 0;
param->record_mode = CALLCHAIN_DWARF;
param->dump_size = default_stack_dump_size;
+ dwarf_callchain_users = true;
tok = strtok_r(NULL, ",", &saveptr);
if (tok) {
diff --git a/tools/perf/util/callchain.h b/tools/perf/util/callchain.h
index f967aa47d0a1..9ba5903c8d3e 100644
--- a/tools/perf/util/callchain.h
+++ b/tools/perf/util/callchain.h
@@ -89,6 +89,8 @@ enum chain_value {
CCVAL_COUNT,
};
+extern bool dwarf_callchain_users;
+
struct callchain_param {
bool enabled;
enum perf_call_graph_mode record_mode;
diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
index 7a42f703e858..af873044d33a 100644
--- a/tools/perf/util/unwind-libunwind-local.c
+++ b/tools/perf/util/unwind-libunwind-local.c
@@ -631,9 +631,8 @@ static unw_accessors_t accessors = {
static int _unwind__prepare_access(struct thread *thread)
{
- if (callchain_param.record_mode != CALLCHAIN_DWARF)
+ if (!dwarf_callchain_users)
return 0;
-
thread->addr_space = unw_create_addr_space(&accessors, 0);
if (!thread->addr_space) {
pr_err("unwind: Can't create unwind address space.\n");
@@ -646,17 +645,15 @@ static int _unwind__prepare_access(struct thread *thread)
static void _unwind__flush_access(struct thread *thread)
{
- if (callchain_param.record_mode != CALLCHAIN_DWARF)
+ if (!dwarf_callchain_users)
return;
-
unw_flush_cache(thread->addr_space, 0, 0);
}
static void _unwind__finish_access(struct thread *thread)
{
- if (callchain_param.record_mode != CALLCHAIN_DWARF)
+ if (!dwarf_callchain_users)
return;
-
unw_destroy_addr_space(thread->addr_space);
}
--
2.15.1
From: Subash Abhinov Kasiviswanathan <[email protected]>
[ Upstream commit 83f1999caeb14e15df205e80d210699951733287 ]
ipv6_defrag pulls network headers before fragment header. In case of
an error, the netfilter layer is currently dropping these packets.
This results in failure of some IPv6 standards tests which passed on
older kernels due to the netfilter framework using cloning.
The test case run here is a check for ICMPv6 error message replies
when some invalid IPv6 fragments are sent. This specific test case is
listed in https://www.ipv6ready.org/docs/Core_Conformance_Latest.pdf
in the Extension Header Processing Order section.
A packet with unrecognized option Type 11 is sent and the test expects
an ICMP error in line with RFC2460 section 4.2 -
11 - discard the packet and, only if the packet's Destination
Address was not a multicast address, send an ICMP Parameter
Problem, Code 2, message to the packet's Source Address,
pointing to the unrecognized Option Type.
Since netfilter layer now drops all invalid IPv6 frag packets, we no
longer see the ICMP error message and fail the test case.
To fix this, save the transport header. If defrag is unable to process
the packet due to RFC2460, restore the transport header and allow the
packet to be processed by the stack. There is no change for other packet
processing paths.
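A simplified userspace sketch of the save/restore idea (hypothetical types and helper names; the error numbers are just reused from errno.h): remember the offset up front, and for the one error class the stack should handle itself, put it back and report success:
#include <errno.h>
#include <stdio.h>

struct pkt {
    unsigned short transport_header;
};

static int queue_fragment(struct pkt *p)
{
    p->transport_header += 8;      /* pretend parsing moved the offset  */
    return -EPROTO;                /* e.g. length not a multiple of 8   */
}

static int gather(struct pkt *p)
{
    unsigned short saved = p->transport_header;
    int ret = queue_fragment(p);

    if (ret == -EPROTO) {          /* let the stack send the ICMPv6 error */
        p->transport_header = saved;
        ret = 0;
    }
    return ret;
}

int main(void)
{
    struct pkt p = { .transport_header = 40 };
    int ret = gather(&p);

    printf("gather() = %d, transport_header = %u\n", ret, p.transport_header);
    return 0;
}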
Tested by confirming that stack sends an ICMP error when it receives
these packets. Also tested that fragmented ICMP pings succeed.
v1->v2: Instead of cloning always, save the transport_header and
restore it in case of this specific error. Update the title and
commit message accordingly.
Signed-off-by: Subash Abhinov Kasiviswanathan <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
net/ipv6/netfilter/nf_conntrack_reasm.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
index b263bf3a19f7..5edfe66a3d7a 100644
--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
+++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
@@ -230,7 +230,7 @@ static int nf_ct_frag6_queue(struct frag_queue *fq, struct sk_buff *skb,
if ((unsigned int)end > IPV6_MAXPLEN) {
pr_debug("offset is too large.\n");
- return -1;
+ return -EINVAL;
}
ecn = ip6_frag_ecn(ipv6_hdr(skb));
@@ -263,7 +263,7 @@ static int nf_ct_frag6_queue(struct frag_queue *fq, struct sk_buff *skb,
* this case. -DaveM
*/
pr_debug("end of fragment not rounded to 8 bytes.\n");
- return -1;
+ return -EPROTO;
}
if (end > fq->q.len) {
/* Some bits beyond end -> corruption. */
@@ -357,7 +357,7 @@ found:
discard_fq:
inet_frag_kill(&fq->q, &nf_frags);
err:
- return -1;
+ return -EINVAL;
}
/*
@@ -566,6 +566,7 @@ find_prev_fhdr(struct sk_buff *skb, u8 *prevhdrp, int *prevhoff, int *fhoff)
int nf_ct_frag6_gather(struct net *net, struct sk_buff *skb, u32 user)
{
+ u16 savethdr = skb->transport_header;
struct net_device *dev = skb->dev;
int fhoff, nhoff, ret;
struct frag_hdr *fhdr;
@@ -599,8 +600,12 @@ int nf_ct_frag6_gather(struct net *net, struct sk_buff *skb, u32 user)
spin_lock_bh(&fq->q.lock);
- if (nf_ct_frag6_queue(fq, skb, fhdr, nhoff) < 0) {
- ret = -EINVAL;
+ ret = nf_ct_frag6_queue(fq, skb, fhdr, nhoff);
+ if (ret < 0) {
+ if (ret == -EPROTO) {
+ skb->transport_header = savethdr;
+ ret = 0;
+ }
goto out_unlock;
}
--
2.15.1
From: Chuck Lever <[email protected]>
[ Upstream commit d698c4a02ee02053bbebe051322ff427a2dad56a ]
The backchannel code uses rpcrdma_recv_buffer_put to add new reps
to the free rep list. This also decrements rb_recv_count, which
spoofs the receive overrun logic in rpcrdma_buffer_get_rep.
Commit 9b06688bc3b9 ("xprtrdma: Fix additional uses of
spin_lock_irqsave(rb_lock)") replaced the original open-coded
list_add with a call to rpcrdma_recv_buffer_put(), but then a year
later, commit 05c974669ece ("xprtrdma: Fix receive buffer
accounting") added rep accounting to rpcrdma_recv_buffer_put.
It was an oversight to let the backchannel continue to use this
function.
To fix this, let's combine the "add to free list" logic with
rpcrdma_create_rep.
Also, do not allocate RPCRDMA_MAX_BC_REQUESTS rpcrdma_reps in
rpcrdma_buffer_create and then allocate additional rpcrdma_reps in
rpcrdma_bc_setup_reps. Allocating the extra reps during backchannel
set-up is sufficient.
Fixes: 05c974669ece ("xprtrdma: Fix receive buffer accounting")
Signed-off-by: Chuck Lever <[email protected]>
Signed-off-by: Anna Schumaker <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
net/sunrpc/xprtrdma/backchannel.c | 12 ++----------
net/sunrpc/xprtrdma/verbs.c | 32 +++++++++++++++++++-------------
net/sunrpc/xprtrdma/xprt_rdma.h | 2 +-
3 files changed, 22 insertions(+), 24 deletions(-)
diff --git a/net/sunrpc/xprtrdma/backchannel.c b/net/sunrpc/xprtrdma/backchannel.c
index 823a781ec89c..25e3602aa41f 100644
--- a/net/sunrpc/xprtrdma/backchannel.c
+++ b/net/sunrpc/xprtrdma/backchannel.c
@@ -74,21 +74,13 @@ out_fail:
static int rpcrdma_bc_setup_reps(struct rpcrdma_xprt *r_xprt,
unsigned int count)
{
- struct rpcrdma_rep *rep;
int rc = 0;
while (count--) {
- rep = rpcrdma_create_rep(r_xprt);
- if (IS_ERR(rep)) {
- pr_err("RPC: %s: reply buffer alloc failed\n",
- __func__);
- rc = PTR_ERR(rep);
+ rc = rpcrdma_create_rep(r_xprt);
+ if (rc)
break;
- }
-
- rpcrdma_recv_buffer_put(rep);
}
-
return rc;
}
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 9e8e1de19b2e..97b9d4f671ac 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -951,10 +951,17 @@ rpcrdma_create_req(struct rpcrdma_xprt *r_xprt)
return req;
}
-struct rpcrdma_rep *
+/**
+ * rpcrdma_create_rep - Allocate an rpcrdma_rep object
+ * @r_xprt: controlling transport
+ *
+ * Returns 0 on success or a negative errno on failure.
+ */
+int
rpcrdma_create_rep(struct rpcrdma_xprt *r_xprt)
{
struct rpcrdma_create_data_internal *cdata = &r_xprt->rx_data;
+ struct rpcrdma_buffer *buf = &r_xprt->rx_buf;
struct rpcrdma_rep *rep;
int rc;
@@ -979,12 +986,18 @@ rpcrdma_create_rep(struct rpcrdma_xprt *r_xprt)
rep->rr_recv_wr.wr_cqe = &rep->rr_cqe;
rep->rr_recv_wr.sg_list = &rep->rr_rdmabuf->rg_iov;
rep->rr_recv_wr.num_sge = 1;
- return rep;
+
+ spin_lock(&buf->rb_lock);
+ list_add(&rep->rr_list, &buf->rb_recv_bufs);
+ spin_unlock(&buf->rb_lock);
+ return 0;
out_free:
kfree(rep);
out:
- return ERR_PTR(rc);
+ dprintk("RPC: %s: reply buffer %d alloc failed\n",
+ __func__, rc);
+ return rc;
}
int
@@ -1027,17 +1040,10 @@ rpcrdma_buffer_create(struct rpcrdma_xprt *r_xprt)
}
INIT_LIST_HEAD(&buf->rb_recv_bufs);
- for (i = 0; i < buf->rb_max_requests + RPCRDMA_MAX_BC_REQUESTS; i++) {
- struct rpcrdma_rep *rep;
-
- rep = rpcrdma_create_rep(r_xprt);
- if (IS_ERR(rep)) {
- dprintk("RPC: %s: reply buffer %d alloc failed\n",
- __func__, i);
- rc = PTR_ERR(rep);
+ for (i = 0; i <= buf->rb_max_requests; i++) {
+ rc = rpcrdma_create_rep(r_xprt);
+ if (rc)
goto out;
- }
- list_add(&rep->rr_list, &buf->rb_recv_bufs);
}
return 0;
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index e26a97d2f922..fcb0b3227ee1 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -550,8 +550,8 @@ int rpcrdma_ep_post_recv(struct rpcrdma_ia *, struct rpcrdma_rep *);
* Buffer calls - xprtrdma/verbs.c
*/
struct rpcrdma_req *rpcrdma_create_req(struct rpcrdma_xprt *);
-struct rpcrdma_rep *rpcrdma_create_rep(struct rpcrdma_xprt *);
void rpcrdma_destroy_req(struct rpcrdma_req *);
+int rpcrdma_create_rep(struct rpcrdma_xprt *r_xprt);
int rpcrdma_buffer_create(struct rpcrdma_xprt *);
void rpcrdma_buffer_destroy(struct rpcrdma_buffer *);
--
2.15.1
From: Paolo Bonzini <[email protected]>
[ Upstream commit 51776043afa415435c7e4636204fbe4f7edc4501 ]
This ioctl is obsolete (it was used by Xenner as far as I know) but
still let's not break it gratuitously... Its handler is copying
directly into struct kvm. Go through a bounce buffer instead, with
the added benefit that we can actually do something useful with the
flags argument---the previous code was exiting with -EINVAL but still
doing the copy.
This technically is a userspace ABI breakage, but since no one should be
using the ioctl, it's a good occasion to see if someone actually
complains.
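A generic userspace illustration of the bounce-buffer idiom (made-up struct and helper names; memcpy stands in for copy_from_user): copy the untrusted input into a local struct, validate it, and only then commit it, so a rejected request leaves no partial state behind:
#include <errno.h>
#include <stdio.h>
#include <string.h>

struct hvm_config {
    unsigned int flags;
    unsigned long long blob_addr;
};

static struct hvm_config committed;           /* long-lived state */

static int set_config(const void *user_buf)   /* stand-in for the ioctl handler */
{
    struct hvm_config tmp;

    memcpy(&tmp, user_buf, sizeof(tmp));      /* bounce into a local copy   */
    if (tmp.flags)                            /* validate before committing */
        return -EINVAL;
    committed = tmp;
    return 0;
}

int main(void)
{
    struct hvm_config bad  = { .flags = 1, .blob_addr = 0x1000 };
    struct hvm_config good = { .flags = 0, .blob_addr = 0x2000 };

    printf("bad:  %d (committed.blob_addr=%llx)\n", set_config(&bad), committed.blob_addr);
    printf("good: %d (committed.blob_addr=%llx)\n", set_config(&good), committed.blob_addr);
    return 0;
}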
Cc: [email protected]
Cc: Kees Cook <[email protected]>
Cc: Radim Krčmář <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/kvm/x86.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b9afb4784d12..d7728bcd9a3c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4225,13 +4225,14 @@ long kvm_arch_vm_ioctl(struct file *filp,
mutex_unlock(&kvm->lock);
break;
case KVM_XEN_HVM_CONFIG: {
+ struct kvm_xen_hvm_config xhc;
r = -EFAULT;
- if (copy_from_user(&kvm->arch.xen_hvm_config, argp,
- sizeof(struct kvm_xen_hvm_config)))
+ if (copy_from_user(&xhc, argp, sizeof(xhc)))
goto out;
r = -EINVAL;
- if (kvm->arch.xen_hvm_config.flags)
+ if (xhc.flags)
goto out;
+ memcpy(&kvm->arch.xen_hvm_config, &xhc, sizeof(xhc));
r = 0;
break;
}
--
2.15.1
From: Parav Pandit <[email protected]>
[ Upstream commit a6532e7139660c103dda181aa5b2c734aa26ed6c ]
iWARP does not use rdma_ah_attr_type, and for this reason we do not have a
RDMA_AH_ATTR_TYPE_IWARP. rdma_ah_find_type() should not even be called on iWARP
ports, and for clarity it shouldn't have a special test for iWARP.
This changes the result from RDMA_AH_ATTR_TYPE_ROCE to RDMA_AH_ATTR_TYPE_IB
when wrongly called on an iWarp port.
Fixes: 44c58487d51a ("IB/core: Define 'ib' and 'roce' rdma_ah_attr types")
Signed-off-by: Parav Pandit <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
include/rdma/ib_verbs.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 6533aa64f009..a9fae49a1883 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -3766,8 +3766,7 @@ static inline void rdma_ah_set_grh(struct rdma_ah_attr *attr,
static inline enum rdma_ah_attr_type rdma_ah_find_type(struct ib_device *dev,
u32 port_num)
{
- if ((rdma_protocol_roce(dev, port_num)) ||
- (rdma_protocol_iwarp(dev, port_num)))
+ if (rdma_protocol_roce(dev, port_num))
return RDMA_AH_ATTR_TYPE_ROCE;
else if ((rdma_protocol_ib(dev, port_num)) &&
(rdma_cap_opa_ah(dev, port_num)))
--
2.15.1
From: NeilBrown <[email protected]>
[ Upstream commit dce2630c7da73b0634686bca557cc8945cc450c8 ]
There are 2 comments in the NFSv4 code which suggest that
SIGLOST should possibly be sent to a process. In these
cases a lock has been lost.
The current practice is to set NFS_LOCK_LOST so that
read/write returns EIO when a lock is lost.
So change these comments into code which sets NFS_LOCK_LOST.
One case is when lock recovery after apparent server restart
fails with NFS4ERR_DENIED, NFS4ERR_RECLAIM_BAD, or
NFS4ERRO_RECLAIM_CONFLICT. The other case is when a lock
attempt as part of lease recovery fails with NFS4ERR_DENIED.
In an ideal world, these should not happen. However I have
a packet trace showing an NFSv4.1 session getting
NFS4ERR_BADSESSION after an extended network partition. The
NFSv4.1 client treats this like a server reboot until/unless
it gets NFS4ERR_NO_GRACE, in which case it switches over to
"nograce" recovery mode. In this network trace, the client
attempts to recover a lock and the server (incorrectly)
reports NFS4ERR_DENIED rather than NFS4ERR_NO_GRACE. This
leads to the ineffective comment and the client then
continues to write using the OPEN stateid.
Signed-off-by: NeilBrown <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/nfs/nfs4proc.c | 12 ++++++++----
fs/nfs/nfs4state.c | 5 ++++-
2 files changed, 12 insertions(+), 5 deletions(-)
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 2241d52710f7..ae8f43d270d6 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -1885,7 +1885,7 @@ static int nfs4_open_reclaim(struct nfs4_state_owner *sp, struct nfs4_state *sta
return ret;
}
-static int nfs4_handle_delegation_recall_error(struct nfs_server *server, struct nfs4_state *state, const nfs4_stateid *stateid, int err)
+static int nfs4_handle_delegation_recall_error(struct nfs_server *server, struct nfs4_state *state, const nfs4_stateid *stateid, struct file_lock *fl, int err)
{
switch (err) {
default:
@@ -1932,7 +1932,11 @@ static int nfs4_handle_delegation_recall_error(struct nfs_server *server, struct
return -EAGAIN;
case -ENOMEM:
case -NFS4ERR_DENIED:
- /* kill_proc(fl->fl_pid, SIGLOST, 1); */
+ if (fl) {
+ struct nfs4_lock_state *lsp = fl->fl_u.nfs4_fl.owner;
+ if (lsp)
+ set_bit(NFS_LOCK_LOST, &lsp->ls_flags);
+ }
return 0;
}
return err;
@@ -1968,7 +1972,7 @@ int nfs4_open_delegation_recall(struct nfs_open_context *ctx,
err = nfs4_open_recover_helper(opendata, FMODE_READ);
}
nfs4_opendata_put(opendata);
- return nfs4_handle_delegation_recall_error(server, state, stateid, err);
+ return nfs4_handle_delegation_recall_error(server, state, stateid, NULL, err);
}
static void nfs4_open_confirm_prepare(struct rpc_task *task, void *calldata)
@@ -6595,7 +6599,7 @@ int nfs4_lock_delegation_recall(struct file_lock *fl, struct nfs4_state *state,
if (err != 0)
return err;
err = _nfs4_do_setlk(state, F_SETLK, fl, NFS_LOCK_NEW);
- return nfs4_handle_delegation_recall_error(server, state, stateid, err);
+ return nfs4_handle_delegation_recall_error(server, state, stateid, fl, err);
}
struct nfs_release_lockowner_data {
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 0378e2257ca7..45873ed92057 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1447,6 +1447,7 @@ static int nfs4_reclaim_locks(struct nfs4_state *state, const struct nfs4_state_
struct inode *inode = state->inode;
struct nfs_inode *nfsi = NFS_I(inode);
struct file_lock *fl;
+ struct nfs4_lock_state *lsp;
int status = 0;
struct file_lock_context *flctx = inode->i_flctx;
struct list_head *list;
@@ -1487,7 +1488,9 @@ restart:
case -NFS4ERR_DENIED:
case -NFS4ERR_RECLAIM_BAD:
case -NFS4ERR_RECLAIM_CONFLICT:
- /* kill_proc(fl->fl_pid, SIGLOST, 1); */
+ lsp = fl->fl_u.nfs4_fl.owner;
+ if (lsp)
+ set_bit(NFS_LOCK_LOST, &lsp->ls_flags);
status = 0;
}
spin_lock(&flctx->flc_lock);
--
2.15.1
From: Peter Zijlstra <[email protected]>
[ Upstream commit 30c7e5b123673d5e570e238dbada2fb68a87212c ]
Zhang Rui reported that a Surface Pro 4 will fail to boot with
lapic=notscdeadline. Part of the problem is that that machine doesn't have
a PIT.
If, for some reason, the TSC init has to fall back to TSC calibration, it
relies on the PIT to be present.
Allow TSC calibration to reliably fall back to HPET.
The below results in an accurate TSC measurement when forced on an IVB:
tsc: Unable to calibrate against PIT
tsc: No reference (HPET/PMTIMER) available
tsc: Unable to calibrate against PIT
tsc: using HPET reference calibration
tsc: Detected 2792.451 MHz processor
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/include/asm/i8259.h | 5 +++++
arch/x86/kernel/tsc.c | 18 ++++++++++++++++++
2 files changed, 23 insertions(+)
diff --git a/arch/x86/include/asm/i8259.h b/arch/x86/include/asm/i8259.h
index c8376b40e882..5cdcdbd4d892 100644
--- a/arch/x86/include/asm/i8259.h
+++ b/arch/x86/include/asm/i8259.h
@@ -69,6 +69,11 @@ struct legacy_pic {
extern struct legacy_pic *legacy_pic;
extern struct legacy_pic null_legacy_pic;
+static inline bool has_legacy_pic(void)
+{
+ return legacy_pic != &null_legacy_pic;
+}
+
static inline int nr_legacy_irqs(void)
{
return legacy_pic->nr_legacy_irqs;
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 47506567435e..03d8e9204a6a 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -25,6 +25,7 @@
#include <asm/geode.h>
#include <asm/apic.h>
#include <asm/intel-family.h>
+#include <asm/i8259.h>
unsigned int __read_mostly cpu_khz; /* TSC clocks / usec, not used here */
EXPORT_SYMBOL(cpu_khz);
@@ -363,6 +364,20 @@ static unsigned long pit_calibrate_tsc(u32 latch, unsigned long ms, int loopmin)
unsigned long tscmin, tscmax;
int pitcnt;
+ if (!has_legacy_pic()) {
+ /*
+ * Relies on tsc_early_delay_calibrate() to have given us semi
+ * usable udelay(), wait for the same 50ms we would have with
+ * the PIT loop below.
+ */
+ udelay(10 * USEC_PER_MSEC);
+ udelay(10 * USEC_PER_MSEC);
+ udelay(10 * USEC_PER_MSEC);
+ udelay(10 * USEC_PER_MSEC);
+ udelay(10 * USEC_PER_MSEC);
+ return ULONG_MAX;
+ }
+
/* Set the Gate high, disable speaker */
outb((inb(0x61) & ~0x02) | 0x01, 0x61);
@@ -487,6 +502,9 @@ static unsigned long quick_pit_calibrate(void)
u64 tsc, delta;
unsigned long d1, d2;
+ if (!has_legacy_pic())
+ return 0;
+
/* Set the Gate high, disable speaker */
outb((inb(0x61) & ~0x02) | 0x01, 0x61);
--
2.15.1
From: Hans de Goede <[email protected]>
[ Upstream commit e1681599345b8466786b6e54a2db2a00a068a3f3 ]
acpi_lpss_create_device() skips handling LPSS devices which do not have
an MMIO resource in their resource list (typically these devices are
disabled by the firmware). But since the LPSS code does not bind to the
device, acpi_bus_attach() ends up still creating a platform device for
it and the regular platform_driver for the ACPI HID still tries to bind
to it.
This happens e.g. on some boards which do not use the pwm-controller
and have an empty or invalid resource-table for it. Currently this causes
these error messages to get logged:
[ 3.281966] pwm-lpss 80862288:00: invalid resource
[ 3.287098] pwm-lpss: probe of 80862288:00 failed with error -22
This commit stops the undesirable creation of a platform_device for
disabled LPSS devices by setting pnp.type.platform_id to 0. Note that
acpi_scan_attach_handler() also sets pnp.type.platform_id to 0 when there
is a matching handler for the device and that handler has no attach
callback, so we simply behave as a handler without an attach function
in this case.
Signed-off-by: Hans de Goede <[email protected]>
Acked-by: Mika Westerberg <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/acpi/acpi_lpss.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
index 032ae44710e5..a2be3fd2c72b 100644
--- a/drivers/acpi/acpi_lpss.c
+++ b/drivers/acpi/acpi_lpss.c
@@ -465,6 +465,8 @@ static int acpi_lpss_create_device(struct acpi_device *adev,
acpi_dev_free_resource_list(&resource_list);
if (!pdata->mmio_base) {
+ /* Avoid acpi_bus_attach() instantiating a pdev for this dev. */
+ adev->pnp.type.platform_id = 0;
/* Skip the device, but continue the namespace scan. */
ret = 0;
goto err_out;
--
2.15.1
From: Takashi Iwai <[email protected]>
[ Upstream commit c469652bb5e8fb715db7d152f46d33b3740c9b87 ]
The commit ffcd28d88e4f ("ALSA: hda - Select INPUT for Realtek
HD-audio codec") introduced the reverse-selection of CONFIG_INPUT for
the Realtek codec in order to avoid the mess with dependency between
built-in and modules. Later on, we obtained the IS_REACHABLE() macro
exactly for this kind of problem, and now we can remove the INPUT
selection in Kconfig and put IS_REACHABLE(INPUT) in the appropriate
places in the code, so that the driver doesn't need to forcibly select
another subsystem.
Fixes: ffcd28d88e4f ("ALSA: hda - Select INPUT for Realtek HD-audio codec")
Reported-by: Randy Dunlap <[email protected]>
Acked-by: Randy Dunlap <[email protected]> # and build-tested
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
sound/pci/hda/Kconfig | 1 -
sound/pci/hda/patch_realtek.c | 5 +++++
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/sound/pci/hda/Kconfig b/sound/pci/hda/Kconfig
index 7f3b5ed81995..f7a492c382d9 100644
--- a/sound/pci/hda/Kconfig
+++ b/sound/pci/hda/Kconfig
@@ -88,7 +88,6 @@ config SND_HDA_PATCH_LOADER
config SND_HDA_CODEC_REALTEK
tristate "Build Realtek HD-audio codec support"
select SND_HDA_GENERIC
- select INPUT
help
Say Y or M here to include Realtek HD-audio codec support in
snd-hda-intel driver, such as ALC880.
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index e44a9758f2eb..0cf552a0d5c5 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -3721,6 +3721,7 @@ static void alc280_fixup_hp_gpio4(struct hda_codec *codec,
}
}
+#if IS_REACHABLE(INPUT)
static void gpio2_mic_hotkey_event(struct hda_codec *codec,
struct hda_jack_callback *event)
{
@@ -3853,6 +3854,10 @@ static void alc233_fixup_lenovo_line2_mic_hotkey(struct hda_codec *codec,
spec->kb_dev = NULL;
}
}
+#else /* INPUT */
+#define alc280_fixup_hp_gpio2_mic_hotkey NULL
+#define alc233_fixup_lenovo_line2_mic_hotkey NULL
+#endif /* INPUT */
static void alc269_fixup_hp_line1_mic1_led(struct hda_codec *codec,
const struct hda_fixup *fix, int action)
--
2.15.1
From: Jesper Dangaard Brouer <[email protected]>
[ Upstream commit 7110d80d53f472956420cd05a6297f49b558b674 ]
The third parameter to do_install was not used by the $(INSTALL) command.
Fix this by only setting the -m option when the third parameter is supplied.
The use of a third parameter was introduced in commit eb54e522a000 ("bpf:
install libbpf headers on 'make install'").
Without this change, the header files are installed as executable files (755).
Fixes: eb54e522a000 ("bpf: install libbpf headers on 'make install'")
Signed-off-by: Jesper Dangaard Brouer <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
tools/lib/bpf/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
index 4555304dc18e..f02448e86d38 100644
--- a/tools/lib/bpf/Makefile
+++ b/tools/lib/bpf/Makefile
@@ -183,7 +183,7 @@ define do_install
if [ ! -d '$(DESTDIR_SQ)$2' ]; then \
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$2'; \
fi; \
- $(INSTALL) $1 '$(DESTDIR_SQ)$2'
+ $(INSTALL) $1 $(if $3,-m $3,) '$(DESTDIR_SQ)$2'
endef
install_lib: all_cmd
--
2.15.1
From: "Steven Rostedt (VMware)" <[email protected]>
[ Upstream commit d777f8de99b05d399c0e4e51cdce016f26bd971b ]
If a field is a dynamic string, get_field_str() returned just the
offset/size value and not the string. Have it parse the offset/size
correctly to return the actual string. Otherwise filtering fails when
trying to filter fields that are dynamic strings.
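As a standalone sketch of the encoding the patch handles: the stored 32-bit
value packs the payload offset in the low 16 bits and the length in the high
16 bits (names and values below are invented, not libtraceevent code).

#include <stdio.h>
#include <stdint.h>

static void unpack_dyn_field(uint32_t raw, uint32_t *offset, uint32_t *size)
{
        *offset = raw & 0xffff;  /* where the string starts within the record */
        *size = raw >> 16;       /* number of bytes reserved for it */
}

int main(void)
{
        uint32_t off, sz;

        unpack_dyn_field(0x000a0040, &off, &sz);
        printf("offset=%u size=%u\n", off, sz);  /* offset=64 size=10 */
        return 0;
}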
Reported-by: Gopanapalli Pradeep <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
Acked-by: Namhyung Kim <[email protected]>
Cc: Andrew Morton <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
tools/lib/traceevent/parse-filter.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/tools/lib/traceevent/parse-filter.c b/tools/lib/traceevent/parse-filter.c
index 7c214ceb9386..5e10ba796a6f 100644
--- a/tools/lib/traceevent/parse-filter.c
+++ b/tools/lib/traceevent/parse-filter.c
@@ -1879,17 +1879,25 @@ static const char *get_field_str(struct filter_arg *arg, struct pevent_record *r
struct pevent *pevent;
unsigned long long addr;
const char *val = NULL;
+ unsigned int size;
char hex[64];
/* If the field is not a string convert it */
if (arg->str.field->flags & FIELD_IS_STRING) {
val = record->data + arg->str.field->offset;
+ size = arg->str.field->size;
+
+ if (arg->str.field->flags & FIELD_IS_DYNAMIC) {
+ addr = *(unsigned int *)val;
+ val = record->data + (addr & 0xffff);
+ size = addr >> 16;
+ }
/*
* We need to copy the data since we can't be sure the field
* is null terminated.
*/
- if (*(val + arg->str.field->size - 1)) {
+ if (*(val + size - 1)) {
/* copy it */
memcpy(arg->str.buffer, val, arg->str.field->size);
/* the buffer is already NULL terminated */
--
2.15.1
From: mulhern <[email protected]>
[ Upstream commit 9b28a1102efc75d81298198166ead87d643a29ce ]
Fixes:
1. The use of "exceeds" where its opposite, "falls below", was meant.
2. Properly speaking, a table cannot exceed a threshold.
The new wording emphasizes the important point, which is that it is the userspace
daemon's responsibility to check for low free space when a device
is resumed, since it won't get a special event indicating low free
space in that situation.
Signed-off-by: mulhern <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
Documentation/device-mapper/thin-provisioning.txt | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/Documentation/device-mapper/thin-provisioning.txt b/Documentation/device-mapper/thin-provisioning.txt
index 1699a55b7b70..ef639960b272 100644
--- a/Documentation/device-mapper/thin-provisioning.txt
+++ b/Documentation/device-mapper/thin-provisioning.txt
@@ -112,9 +112,11 @@ $low_water_mark is expressed in blocks of size $data_block_size. If
free space on the data device drops below this level then a dm event
will be triggered which a userspace daemon should catch allowing it to
extend the pool device. Only one such event will be sent.
-Resuming a device with a new table itself triggers an event so the
-userspace daemon can use this to detect a situation where a new table
-already exceeds the threshold.
+
+No special event is triggered if a just resumed device's free space is below
+the low water mark. However, resuming a device always triggers an
+event; a userspace daemon should verify that free space exceeds the low
+water mark when handling this event.
A low water mark for the metadata device is maintained in the kernel and
will trigger a dm event if free space on the metadata device drops below
--
2.15.1
From: Peter Xu <[email protected]>
[ Upstream commit 9d2e6505f6d6934e681aed502f566198cb25c74a ]
After commit a1ddcbe93010 ("iommu/vt-d: Pass dmar_domain directly into
iommu_flush_iotlb_psi", 2015-08-12), we have the domain pointer as a
parameter to iommu_flush_iotlb_psi(), so there is no need to fetch it from
the cache again.
More importantly, a NULL pointer dereference bug is reported on RHEL7 (and
it can be reproduced on some old upstream kernels too, e.g., v4.13) by
unplugging a 40G NIC from a VM (hard to test unplug on a real host, but
it should be the same):
https://bugzilla.redhat.com/show_bug.cgi?id=1531367
[ 24.391863] pciehp 0000:00:03.0:pcie004: Slot(0): Attention button pressed
[ 24.393442] pciehp 0000:00:03.0:pcie004: Slot(0): Powering off due to button press
[ 29.721068] i40evf 0000:01:00.0: Unable to send opcode 2 to PF, err I40E_ERR_QUEUE_EMPTY, aq_err OK
[ 29.783557] iommu: Removing device 0000:01:00.0 from group 3
[ 29.784662] BUG: unable to handle kernel NULL pointer dereference at 0000000000000304
[ 29.785817] IP: iommu_flush_iotlb_psi+0xcf/0x120
[ 29.786486] PGD 0
[ 29.786487] P4D 0
[ 29.786812]
[ 29.787390] Oops: 0000 [#1] SMP
[ 29.787876] Modules linked in: ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_ng
[ 29.795371] CPU: 0 PID: 156 Comm: kworker/0:2 Not tainted 4.13.0 #14
[ 29.796366] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.11.0-1.el7 04/01/2014
[ 29.797593] Workqueue: pciehp-0 pciehp_power_thread
[ 29.798328] task: ffff94f5745b4a00 task.stack: ffffb326805ac000
[ 29.799178] RIP: 0010:iommu_flush_iotlb_psi+0xcf/0x120
[ 29.799919] RSP: 0018:ffffb326805afbd0 EFLAGS: 00010086
[ 29.800666] RAX: ffff94f5bc56e800 RBX: 0000000000000000 RCX: 0000000200000025
[ 29.801667] RDX: ffff94f5bc56e000 RSI: 0000000000000082 RDI: 0000000000000000
[ 29.802755] RBP: ffffb326805afbf8 R08: 0000000000000000 R09: ffff94f5bc86bbf0
[ 29.803772] R10: ffffb326805afba8 R11: 00000000000ffdc4 R12: ffff94f5bc86a400
[ 29.804789] R13: 0000000000000000 R14: 00000000ffdc4000 R15: 0000000000000000
[ 29.805792] FS: 0000000000000000(0000) GS:ffff94f5bfc00000(0000) knlGS:0000000000000000
[ 29.806923] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 29.807736] CR2: 0000000000000304 CR3: 000000003499d000 CR4: 00000000000006f0
[ 29.808747] Call Trace:
[ 29.809156] flush_unmaps_timeout+0x126/0x1c0
[ 29.809800] domain_exit+0xd6/0x100
[ 29.810322] device_notifier+0x6b/0x70
[ 29.810902] notifier_call_chain+0x4a/0x70
[ 29.812822] __blocking_notifier_call_chain+0x47/0x60
[ 29.814499] blocking_notifier_call_chain+0x16/0x20
[ 29.816137] device_del+0x233/0x320
[ 29.817588] pci_remove_bus_device+0x6f/0x110
[ 29.819133] pci_stop_and_remove_bus_device+0x1a/0x20
[ 29.820817] pciehp_unconfigure_device+0x7a/0x1d0
[ 29.822434] pciehp_disable_slot+0x52/0xe0
[ 29.823931] pciehp_power_thread+0x8a/0xa0
[ 29.825411] process_one_work+0x18c/0x3a0
[ 29.826875] worker_thread+0x4e/0x3b0
[ 29.828263] kthread+0x109/0x140
[ 29.829564] ? process_one_work+0x3a0/0x3a0
[ 29.831081] ? kthread_park+0x60/0x60
[ 29.832464] ret_from_fork+0x25/0x30
[ 29.833794] Code: 85 ed 74 0b 5b 41 5c 41 5d 41 5e 41 5f 5d c3 49 8b 54 24 60 44 89 f8 0f b6 c4 48 8b 04 c2 48 85 c0 74 49 45 0f b6 ff 4a 8b 3c f8 <80> bf
[ 29.838514] RIP: iommu_flush_iotlb_psi+0xcf/0x120 RSP: ffffb326805afbd0
[ 29.840362] CR2: 0000000000000304
[ 29.841716] ---[ end trace b10ec0d6900868d3 ]---
This patch fixes that problem if applied to v4.13 kernel.
The bug does not exist on latest upstream kernel since it's fixed as a
side effect of commit 13cf01744608 ("iommu/vt-d: Make use of iova
deferred flushing", 2017-08-15). But IMHO it's still good to have this
patch upstream.
CC: Alex Williamson <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
Fixes: a1ddcbe93010 ("iommu/vt-d: Pass dmar_domain directly into iommu_flush_iotlb_psi")
Reviewed-by: Alex Williamson <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/iommu/intel-iommu.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 83f3d4831f94..e8414bcf8390 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1603,8 +1603,7 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
* flush. However, device IOTLB doesn't need to be flushed in this case.
*/
if (!cap_caching_mode(iommu->cap) || !map)
- iommu_flush_dev_iotlb(get_iommu_domain(iommu, did),
- addr, mask);
+ iommu_flush_dev_iotlb(domain, addr, mask);
}
static void iommu_flush_iova(struct iova_domain *iovad)
--
2.15.1
From: Ming Lei <[email protected]>
[ Upstream commit 050af08ffb1b62af69196d61c22a0755f9a3cdbd ]
blk-mq will rerun the queue via RESTART or a dispatch wakeup after one
request is completed, so it is not necessary to wait a random time before
requeuing; we should trust blk-mq to do it.
More importantly, we need to return BLK_STS_RESOURCE to blk-mq so that
dequeuing from the I/O scheduler can be stopped, this results in
improved I/O merging.
Signed-off-by: Ming Lei <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/md/dm-mpath.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 2704a55f8b6e..8b7328666eaa 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -502,8 +502,20 @@ static int multipath_clone_and_map(struct dm_target *ti, struct request *rq,
if (queue_dying) {
atomic_inc(&m->pg_init_in_progress);
activate_or_offline_path(pgpath);
+ return DM_MAPIO_DELAY_REQUEUE;
}
- return DM_MAPIO_DELAY_REQUEUE;
+
+ /*
+ * blk-mq's SCHED_RESTART can cover this requeue, so we
+ * needn't deal with it by DELAY_REQUEUE. More importantly,
+ * we have to return DM_MAPIO_REQUEUE so that blk-mq can
+ * get the queue busy feedback (via BLK_STS_RESOURCE),
+ * otherwise I/O merging can suffer.
+ */
+ if (q->mq_ops)
+ return DM_MAPIO_REQUEUE;
+ else
+ return DM_MAPIO_DELAY_REQUEUE;
}
clone->bio = clone->biotail = NULL;
clone->rq_disk = bdev->bd_disk;
--
2.15.1
From: Maxime Chevallier <[email protected]>
[ Upstream commit 44a5f423e70374e5b42cecd85e78f2d79334e0f2 ]
When performing a read using FIFO mode, the spi controller shifts out
the last 2 bytes that were written in a previous transfer on MOSI.
This undocumented behaviour can cause devices to misinterpret the
transfer, so we explicitly clear the WFIFO before each read.
This behaviour was noticed on EspressoBin.
Signed-off-by: Maxime Chevallier <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/spi/spi-armada-3700.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/spi/spi-armada-3700.c b/drivers/spi/spi-armada-3700.c
index fe3fa1e8517a..4903f15177cf 100644
--- a/drivers/spi/spi-armada-3700.c
+++ b/drivers/spi/spi-armada-3700.c
@@ -624,6 +624,11 @@ static int a3700_spi_transfer_one(struct spi_master *master,
a3700_spi_header_set(a3700_spi);
if (xfer->rx_buf) {
+ /* Clear WFIFO, since it's last 2 bytes are shifted out during
+ * a read operation
+ */
+ spireg_write(a3700_spi, A3700_SPI_DATA_OUT_REG, 0);
+
/* Set read data length */
spireg_write(a3700_spi, A3700_SPI_IF_DIN_CNT_REG,
a3700_spi->buf_len);
--
2.15.1
From: Martin Blumenstingl <[email protected]>
[ Upstream commit fb7d38a70e1d8ffd54f7a7464dcc4889d7e490ad ]
On Meson8b the only valid input clock is MPLL2. The bootloader
configures that to run at 500002394Hz which cannot be divided evenly
down to 125MHz using the m250_div clock. Currently the common clock
framework chooses a m250_div of 2 - with the internal fixed
"divide by 10" this results in a RGMII TX clock of 125001197Hz (120Hz
above the requested 125MHz).
Letting the common clock framework propagate the rate changes up to the
parent of m250_mux allows us to get the best possible clock rate. With
this patch the common clock framework calculates a rate of
very-close-to-250MHz (249999701Hz to be exact) for the MPLL2 clock
(which is the mux input). Dividing that by 2 (which is an internal,
fixed divider for the RGMII TX clock) gives us an RGMII TX clock of
124999850Hz (which is only 150Hz off the requested 125MHz, compared to
1197Hz based on the MPLL2 rate set by u-boot and the Amlogic GPL kernel
sources).
SoCs from the Meson GX series are not affected by this change because
the input clock is FCLK_DIV2 whose rate cannot be changed (which is fine
since it's running at 1GHz, so it's already a multiple of 250MHz and
125MHz).
Fixes: 566e8251625304 ("net: stmmac: add a glue driver for the Amlogic Meson 8b / GXBB DWMAC")
Suggested-by: Jerome Brunet <[email protected]>
Signed-off-by: Martin Blumenstingl <[email protected]>
Reviewed-by: Jerome Brunet <[email protected]>
Tested-by: Jerome Brunet <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
index 157e12e15f28..8be4b32544ef 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
@@ -116,7 +116,7 @@ static int meson8b_init_clk(struct meson8b_dwmac *dwmac)
snprintf(clk_name, sizeof(clk_name), "%s#m250_sel", dev_name(dev));
init.name = clk_name;
init.ops = &clk_mux_ops;
- init.flags = 0;
+ init.flags = CLK_SET_RATE_PARENT;
init.parent_names = mux_parent_names;
init.num_parents = MUX_CLK_NUM_PARENTS;
--
2.15.1
From: Dan Carpenter <[email protected]>
[ Upstream commit 7ad81482cad67cbe1ec808490d1ddfc420c42008 ]
We get the "new_profile_index" value from the mouse device when we're
handling raw events. Smatch taints it as untrusted data and complains
that we need a bounds check. This seems like a reasonable warning;
otherwise there is a small read beyond the end of the array.
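As a standalone sketch of the defensive pattern used below, reject a
device-supplied index before using it to address a fixed-size array (struct
and names are invented, not the driver's real structures).

#include <stdio.h>
#include <stddef.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

struct profile { int cpi_startup_level; };

static struct profile profiles[5];

static int activate_profile(size_t idx)
{
        if (idx >= ARRAY_SIZE(profiles))
                return -1;      /* ignore out-of-range input from the device */
        return profiles[idx].cpi_startup_level;
}

int main(void)
{
        printf("%d\n", activate_profile(7)); /* prints -1: rejected */
        return 0;
}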
Fixes: 0e70f97f257e ("HID: roccat: Add support for Kova[+] mouse")
Signed-off-by: Dan Carpenter <[email protected]>
Acked-by: Silvan Jegen <[email protected]>
Signed-off-by: Jiri Kosina <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/hid/hid-roccat-kovaplus.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/hid/hid-roccat-kovaplus.c b/drivers/hid/hid-roccat-kovaplus.c
index 43617fb28b87..317c9c2c0a7c 100644
--- a/drivers/hid/hid-roccat-kovaplus.c
+++ b/drivers/hid/hid-roccat-kovaplus.c
@@ -37,6 +37,8 @@ static uint kovaplus_convert_event_cpi(uint value)
static void kovaplus_profile_activated(struct kovaplus_device *kovaplus,
uint new_profile_index)
{
+ if (new_profile_index >= ARRAY_SIZE(kovaplus->profile_settings))
+ return;
kovaplus->actual_profile = new_profile_index;
kovaplus->actual_cpi = kovaplus->profile_settings[new_profile_index].cpi_startup_level;
kovaplus->actual_x_sensitivity = kovaplus->profile_settings[new_profile_index].sensitivity_x;
--
2.15.1
From: Nikolay Borisov <[email protected]>
[ Upstream commit 9ea2c7c9da13c9073e371c046cbbc45481ecb459 ]
When modifying a tree whose root is at BTRFS_MAX_LEVEL - 1, the level
variable is going to be 7 (this is the max height of the tree). On the
other hand, btrfs_cow_block is always called with "level + 1" as an index
into the nodes and slots arrays. This leads to an out-of-bounds access.
Admittedly this will be benign, since an OOB access of the nodes array
will likely read the 0th element of the slots array, which in this case
is going to be 0 (since we start CoW at the top of the tree). The OOB
access into the slots array in turn will read the 0th and 1st values of
the locks array, which would both be 0 at the time. However, this benign
behavior relies on the fact that the path being passed hasn't been
initialised; if it has already been used to query a btree, it could
potentially have populated the nodes/slots arrays.
Fix it by explicitly checking if we are at level 7 (the maximum allowed
index in nodes/slots arrays) and explicitly call the CoW routine with
NULL for parent's node/slot.
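A standalone sketch of the indexing hazard (illustrative only; the sizes
mirror the description above, not the real btrfs structures): with arrays
of 8 entries for levels 0..7, "level + 1" is only a valid index while
level < 7, so the top level has to be special-cased.

#include <stdio.h>

#define MAX_LEVEL 8

struct path {
        void *nodes[MAX_LEVEL];
        int slots[MAX_LEVEL];
};

static void cow_block(struct path *p, int level)
{
        if (level == MAX_LEVEL - 1) {
                /* no parent above the root: pass NULL/0 explicitly */
                printf("level %d: parent=NULL slot=0\n", level);
                return;
        }
        printf("level %d: parent slot=%d\n", level, p->slots[level + 1]);
}

int main(void)
{
        struct path p = {0};

        cow_block(&p, 3);
        cow_block(&p, MAX_LEVEL - 1);
        return 0;
}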
Signed-off-by: Nikolay Borisov <[email protected]>
Fixes-coverity-id: 711515
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/btrfs/ctree.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index e2bb2a065741..21cc27509993 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2774,6 +2774,8 @@ again:
* contention with the cow code
*/
if (cow) {
+ bool last_level = (level == (BTRFS_MAX_LEVEL - 1));
+
/*
* if we don't really need to cow this block
* then we don't want to set the path blocking,
@@ -2798,9 +2800,13 @@ again:
}
btrfs_set_path_blocking(p);
- err = btrfs_cow_block(trans, root, b,
- p->nodes[level + 1],
- p->slots[level + 1], &b);
+ if (last_level)
+ err = btrfs_cow_block(trans, root, b, NULL, 0,
+ &b);
+ else
+ err = btrfs_cow_block(trans, root, b,
+ p->nodes[level + 1],
+ p->slots[level + 1], &b);
if (err) {
ret = err;
goto done;
--
2.15.1
From: Prashant Bhole <[email protected]>
[ Upstream commit 783687810e986a15ffbf86c516a1a48ff37f38f7 ]
Bug: BPF programs and maps related to the sockmap tests remain
in memory even after test_maps ends.
This patch fixes it as a short-term workaround (the sockmap
kernel side needs real fixing) by emptying the sockmaps when
the test ends.
Fixes: 6f6d33f3b3d0f ("bpf: selftests add sockmap tests")
Signed-off-by: Prashant Bhole <[email protected]>
[ daniel: Note on workaround. ]
Signed-off-by: Daniel Borkmann <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
tools/testing/selftests/bpf/test_maps.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index 50ce52d2013d..8b9470b5af6d 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -463,7 +463,7 @@ static void test_devmap(int task, void *data)
#define SOCKMAP_VERDICT_PROG "./sockmap_verdict_prog.o"
static void test_sockmap(int tasks, void *data)
{
- int one = 1, map_fd_rx, map_fd_tx, map_fd_break, s, sc, rc;
+ int one = 1, map_fd_rx = 0, map_fd_tx = 0, map_fd_break, s, sc, rc;
struct bpf_map *bpf_map_rx, *bpf_map_tx, *bpf_map_break;
int ports[] = {50200, 50201, 50202, 50204};
int err, i, fd, udp, sfd[6] = {0xdeadbeef};
@@ -868,9 +868,12 @@ static void test_sockmap(int tasks, void *data)
goto out_sockmap;
}
- /* Test map close sockets */
- for (i = 0; i < 6; i++)
+ /* Test map close sockets and empty maps */
+ for (i = 0; i < 6; i++) {
+ bpf_map_delete_elem(map_fd_tx, &i);
+ bpf_map_delete_elem(map_fd_rx, &i);
close(sfd[i]);
+ }
close(fd);
close(map_fd_rx);
bpf_object__close(obj);
@@ -881,8 +884,13 @@ out:
printf("Failed to create sockmap '%i:%s'!\n", i, strerror(errno));
exit(1);
out_sockmap:
- for (i = 0; i < 6; i++)
+ for (i = 0; i < 6; i++) {
+ if (map_fd_tx)
+ bpf_map_delete_elem(map_fd_tx, &i);
+ if (map_fd_rx)
+ bpf_map_delete_elem(map_fd_rx, &i);
close(sfd[i]);
+ }
close(fd);
exit(1);
}
--
2.15.1
From: Avinash Dayanand <[email protected]>
[ Upstream commit 06aa040f039404a0039a5158cd12f41187487a1f ]
When a host disables and enables a PF device, all the associated
VFs are removed and added back in. It also generates a PFR which in turn
resets all the connected VFs. This behaviour is different from that of a
Linux guest on a Linux host. Hence we end up in a situation where there is
a PFR and a device removal at the same time, and the watchdog doesn't have
a clue about this and schedules a reset_task. This patch adds code to
signal to the reset_task that the device is currently being removed.
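As a toy illustration of the "signal another worker via a flag bit" pattern
(plain C11 atomics with invented names; the driver itself uses the kernel's
set_bit/test_bit on the adapter's crit_section word):

#include <stdatomic.h>
#include <stdio.h>

#define IN_REMOVE_TASK 0

static atomic_ulong crit_section;

static void remove_device(void)
{
        atomic_fetch_or(&crit_section, 1UL << IN_REMOVE_TASK);
}

static void reset_task(void)
{
        if (atomic_load(&crit_section) & (1UL << IN_REMOVE_TASK)) {
                puts("device being removed, skipping reset");
                return;
        }
        puts("running reset");
}

int main(void)
{
        reset_task();     /* runs */
        remove_device();
        reset_task();     /* skipped */
        return 0;
}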
Signed-off-by: Avinash Dayanand <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/net/ethernet/intel/i40evf/i40evf.h | 1 +
drivers/net/ethernet/intel/i40evf/i40evf_main.c | 9 ++++++++-
2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/i40evf/i40evf.h b/drivers/net/ethernet/intel/i40evf/i40evf.h
index 82f69031e5cd..2ef32ab1dfae 100644
--- a/drivers/net/ethernet/intel/i40evf/i40evf.h
+++ b/drivers/net/ethernet/intel/i40evf/i40evf.h
@@ -186,6 +186,7 @@ enum i40evf_state_t {
enum i40evf_critical_section_t {
__I40EVF_IN_CRITICAL_TASK, /* cannot be interrupted */
__I40EVF_IN_CLIENT_TASK,
+ __I40EVF_IN_REMOVE_TASK, /* device being removed */
};
/* board specific private data structure */
diff --git a/drivers/net/ethernet/intel/i40evf/i40evf_main.c b/drivers/net/ethernet/intel/i40evf/i40evf_main.c
index 1ccad6f30ebf..048c92a1436a 100644
--- a/drivers/net/ethernet/intel/i40evf/i40evf_main.c
+++ b/drivers/net/ethernet/intel/i40evf/i40evf_main.c
@@ -1834,6 +1834,12 @@ static void i40evf_reset_task(struct work_struct *work)
u32 reg_val;
int i = 0, err;
+ /* When device is being removed it doesn't make sense to run the reset
+ * task, just return in such a case.
+ */
+ if (test_bit(__I40EVF_IN_REMOVE_TASK, &adapter->crit_section))
+ return;
+
while (test_and_set_bit(__I40EVF_IN_CLIENT_TASK,
&adapter->crit_section))
usleep_range(500, 1000);
@@ -3008,7 +3014,8 @@ static void i40evf_remove(struct pci_dev *pdev)
struct i40evf_mac_filter *f, *ftmp;
struct i40e_hw *hw = &adapter->hw;
int err;
-
+ /* Indicate we are in remove and not to run reset_task */
+ set_bit(__I40EVF_IN_REMOVE_TASK, &adapter->crit_section);
cancel_delayed_work_sync(&adapter->init_task);
cancel_work_sync(&adapter->reset_task);
cancel_delayed_work_sync(&adapter->client_task);
--
2.15.1
From: Emil Tantilov <[email protected]>
[ Upstream commit 2bafa8fac19a31ca72ae1a3e48df35f73661dbed ]
commit 2de6aa3a666e ("ixgbe: Add support for padding packet")
uses RXDCTL.RLPML to limit the maximum frame size on Rx when using
build_skb. Unfortunately that register does not work on 82599.
Added an explicit check to avoid setting this register on the 82599 MAC.
Extended the comment related to the setting of RXDCTL.RLPML to better
explain its purpose.
Signed-off-by: Emil Tantilov <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 29f600fd6977..9e30cfeac04b 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -3987,11 +3987,15 @@ void ixgbe_configure_rx_ring(struct ixgbe_adapter *adapter,
rxdctl &= ~0x3FFFFF;
rxdctl |= 0x080420;
#if (PAGE_SIZE < 8192)
- } else {
+ /* RXDCTL.RLPML does not work on 82599 */
+ } else if (hw->mac.type != ixgbe_mac_82599EB) {
rxdctl &= ~(IXGBE_RXDCTL_RLPMLMASK |
IXGBE_RXDCTL_RLPML_EN);
- /* Limit the maximum frame size so we don't overrun the skb */
+ /* Limit the maximum frame size so we don't overrun the skb.
+ * This can happen in SRIOV mode when the MTU of the VF is
+ * higher than the MTU of the PF.
+ */
if (ring_uses_build_skb(ring) &&
!test_bit(__IXGBE_RX_3K_BUFFER, &ring->state))
rxdctl |= IXGBE_MAX_2K_FRAME_BUILD_SKB |
--
2.15.1
From: Mickaël Salaün <[email protected]>
[ Upstream commit c25ef6a5e62fa212d298ce24995ce239f29b5f96 ]
Do not build lib/bpf/bpf.o with this Makefile but use the one from the
library directory. This avoids making a buggy bpf.o file (e.g. with
missing symbols).
This patch is useful if some code (e.g. Landlock tests) needs both the
bpf.o (from tools/lib/bpf) and the bpf_load.o (from samples/bpf).
Signed-off-by: Mickaël Salaün <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
samples/bpf/Makefile | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
index 9b4a66e3363e..c1dc632d4ea4 100644
--- a/samples/bpf/Makefile
+++ b/samples/bpf/Makefile
@@ -179,13 +179,16 @@ LLC ?= llc
CLANG ?= clang
# Trick to allow make to be run from this directory
-all:
+all: $(LIBBPF)
$(MAKE) -C ../../ $(CURDIR)/
clean:
$(MAKE) -C ../../ M=$(CURDIR) clean
@rm -f *~
+$(LIBBPF): FORCE
+ $(MAKE) -C $(dir $@) $(notdir $@)
+
$(obj)/syscall_nrs.s: $(src)/syscall_nrs.c
$(call if_changed_dep,cc_s_c)
--
2.15.1
From: Michael Bringmann <[email protected]>
[ Upstream commit ea05ba7c559c8e5a5946c3a94a2a266e9a6680a6 ]
This patch fixes some problems encountered at runtime with
configurations that support memory-less nodes, or that hot-add CPUs
into nodes that are memoryless during system execution after boot. The
problems of interest include:
* Nodes known to powerpc to be memoryless at boot, but to have CPUs in
them are allowed to be 'possible' and 'online'. Memory allocations
for those nodes are taken from another node that does have memory
until and if memory is hot-added to the node.
* Nodes which have no resources assigned at boot, but which may still
be referenced subsequently by affinity or associativity attributes,
are kept in the list of 'possible' nodes for powerpc. Hot-add of
memory or CPUs to the system can reference these nodes and bring
them online instead of redirecting the references to one of the set
of nodes known to have memory at boot.
Note that this software operates under the context of CPU hotplug. We
are not doing memory hotplug in this code, but rather updating the
kernel's CPU topology (i.e. arch_update_cpu_topology /
numa_update_cpu_topology). We are initializing a node that may be used
by CPUs or memory before it can be referenced as invalid by a CPU
hotplug operation. CPU hotplug operations are protected by a range of
APIs including cpu_maps_update_begin/cpu_maps_update_done,
cpus_read/write_lock / cpus_read/write_unlock, device locks, and more.
Memory hotplug operations, including try_online_node, are protected by
mem_hotplug_begin/mem_hotplug_done, device locks, and more. In the
case of CPUs being hot-added to a previously memoryless node, the
try_online_node operation occurs wholly within the CPU locks with no
overlap. Using HMC hot-add/hot-remove operations, we have been able to
add CPUs to and remove CPUs from any possible node without failures. HMC
operations involve a degree of self-serialization, though.
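A simplified standalone sketch of the nid-selection fallback described
above (all names and the node table are stand-ins): prefer the node
reported by firmware, otherwise fall back to the first node known to
have memory.

#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 8

static bool node_possible[MAX_NODES] = { true, true, false, false };
static int first_online_node = 0;

static int pick_node(int reported_nid)
{
        if (reported_nid < 0 || reported_nid >= MAX_NODES ||
            !node_possible[reported_nid])
                return first_online_node;
        return reported_nid;
}

int main(void)
{
        printf("%d\n", pick_node(1));   /* 1: usable as reported */
        printf("%d\n", pick_node(5));   /* 0: falls back */
        printf("%d\n", pick_node(-1));  /* 0: falls back */
        return 0;
}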
Signed-off-by: Michael Bringmann <[email protected]>
Reviewed-by: Nathan Fontenot <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/powerpc/mm/numa.c | 47 +++++++++++++++++++++++++++++++++++++----------
1 file changed, 37 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 0356c6cceff7..9fead0796364 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -546,7 +546,7 @@ static int numa_setup_cpu(unsigned long lcpu)
nid = of_node_to_nid_single(cpu);
out_present:
- if (nid < 0 || !node_online(nid))
+ if (nid < 0 || !node_possible(nid))
nid = first_online_node;
map_cpu_to_node(lcpu, nid);
@@ -905,10 +905,8 @@ static void __init find_possible_nodes(void)
goto out;
for (i = 0; i < numnodes; i++) {
- if (!node_possible(i)) {
- setup_node_data(i, 0, 0);
+ if (!node_possible(i))
node_set(i, node_possible_map);
- }
}
out:
@@ -1277,6 +1275,40 @@ static long vphn_get_associativity(unsigned long cpu,
return rc;
}
+static inline int find_and_online_cpu_nid(int cpu)
+{
+ __be32 associativity[VPHN_ASSOC_BUFSIZE] = {0};
+ int new_nid;
+
+ /* Use associativity from first thread for all siblings */
+ vphn_get_associativity(cpu, associativity);
+ new_nid = associativity_to_nid(associativity);
+ if (new_nid < 0 || !node_possible(new_nid))
+ new_nid = first_online_node;
+
+ if (NODE_DATA(new_nid) == NULL) {
+#ifdef CONFIG_MEMORY_HOTPLUG
+ /*
+ * Need to ensure that NODE_DATA is initialized for a node from
+ * available memory (see memblock_alloc_try_nid). If unable to
+ * init the node, then default to nearest node that has memory
+ * installed.
+ */
+ if (try_online_node(new_nid))
+ new_nid = first_online_node;
+#else
+ /*
+ * Default to using the nearest node that has memory installed.
+ * Otherwise, it would be necessary to patch the kernel MM code
+ * to deal with more memoryless-node error conditions.
+ */
+ new_nid = first_online_node;
+#endif
+ }
+
+ return new_nid;
+}
+
/*
* Update the CPU maps and sysfs entries for a single CPU when its NUMA
* characteristics change. This function doesn't perform any locking and is
@@ -1344,7 +1376,6 @@ int numa_update_cpu_topology(bool cpus_locked)
{
unsigned int cpu, sibling, changed = 0;
struct topology_update_data *updates, *ud;
- __be32 associativity[VPHN_ASSOC_BUFSIZE] = {0};
cpumask_t updated_cpus;
struct device *dev;
int weight, new_nid, i = 0;
@@ -1379,11 +1410,7 @@ int numa_update_cpu_topology(bool cpus_locked)
continue;
}
- /* Use associativity from first thread for all siblings */
- vphn_get_associativity(cpu, associativity);
- new_nid = associativity_to_nid(associativity);
- if (new_nid < 0 || !node_online(new_nid))
- new_nid = first_online_node;
+ new_nid = find_and_online_cpu_nid(cpu);
if (new_nid == numa_cpu_lookup_table[cpu]) {
cpumask_andnot(&cpu_associativity_changes_mask,
--
2.15.1
From: Dmitry Vyukov <[email protected]>
[ Upstream commit 1e98ffea5a8935ec040ab72299e349cb44b8defd ]
Several netfilter matches and targets put kernel pointers into
info objects, but don't set usersize in descriptors.
This leads to kernel pointer leaks if a match/target is set
and then read back to userspace.
Properly set usersize for these matches/targets.
Found with manual code inspection.
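A standalone sketch of why .usersize matters (made-up struct; the kernel
copies via its own xt helpers rather than a bare memcpy): only the bytes
before the kernel-private members should ever be copied back to userspace,
and offsetof() of the first private member gives exactly that size.

#include <stdio.h>
#include <stddef.h>
#include <string.h>

struct match_info {
        unsigned int rate;     /* configured from userspace */
        unsigned int burst;    /* configured from userspace */
        void *master;          /* kernel-internal pointer, must not leak */
};

int main(void)
{
        struct match_info info = { .rate = 5, .burst = 10, .master = &info };
        unsigned char out[sizeof(info)];
        size_t usersize = offsetof(struct match_info, master);

        memset(out, 0, sizeof(out));
        memcpy(out, &info, usersize);  /* copy only the user-visible prefix */

        printf("copied %zu of %zu bytes\n", usersize, sizeof(info));
        return 0;
}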
Fixes: ec2318904965 ("xtables: extend matches and targets with .usersize")
Signed-off-by: Dmitry Vyukov <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
net/netfilter/xt_IDLETIMER.c | 1 +
net/netfilter/xt_LED.c | 1 +
net/netfilter/xt_limit.c | 3 +--
net/netfilter/xt_nfacct.c | 1 +
net/netfilter/xt_statistic.c | 1 +
5 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/net/netfilter/xt_IDLETIMER.c b/net/netfilter/xt_IDLETIMER.c
index bb5d6a058fb7..1141f08810b6 100644
--- a/net/netfilter/xt_IDLETIMER.c
+++ b/net/netfilter/xt_IDLETIMER.c
@@ -256,6 +256,7 @@ static struct xt_target idletimer_tg __read_mostly = {
.family = NFPROTO_UNSPEC,
.target = idletimer_tg_target,
.targetsize = sizeof(struct idletimer_tg_info),
+ .usersize = offsetof(struct idletimer_tg_info, timer),
.checkentry = idletimer_tg_checkentry,
.destroy = idletimer_tg_destroy,
.me = THIS_MODULE,
diff --git a/net/netfilter/xt_LED.c b/net/netfilter/xt_LED.c
index 0858fe17e14a..2d1c5c169a26 100644
--- a/net/netfilter/xt_LED.c
+++ b/net/netfilter/xt_LED.c
@@ -198,6 +198,7 @@ static struct xt_target led_tg_reg __read_mostly = {
.family = NFPROTO_UNSPEC,
.target = led_tg,
.targetsize = sizeof(struct xt_led_info),
+ .usersize = offsetof(struct xt_led_info, internal_data),
.checkentry = led_tg_check,
.destroy = led_tg_destroy,
.me = THIS_MODULE,
diff --git a/net/netfilter/xt_limit.c b/net/netfilter/xt_limit.c
index d27b5f1ea619..61403b77361c 100644
--- a/net/netfilter/xt_limit.c
+++ b/net/netfilter/xt_limit.c
@@ -193,9 +193,8 @@ static struct xt_match limit_mt_reg __read_mostly = {
.compatsize = sizeof(struct compat_xt_rateinfo),
.compat_from_user = limit_mt_compat_from_user,
.compat_to_user = limit_mt_compat_to_user,
-#else
- .usersize = offsetof(struct xt_rateinfo, prev),
#endif
+ .usersize = offsetof(struct xt_rateinfo, prev),
.me = THIS_MODULE,
};
diff --git a/net/netfilter/xt_nfacct.c b/net/netfilter/xt_nfacct.c
index cc0518fe598e..6f92d25590a8 100644
--- a/net/netfilter/xt_nfacct.c
+++ b/net/netfilter/xt_nfacct.c
@@ -62,6 +62,7 @@ static struct xt_match nfacct_mt_reg __read_mostly = {
.match = nfacct_mt,
.destroy = nfacct_mt_destroy,
.matchsize = sizeof(struct xt_nfacct_match_info),
+ .usersize = offsetof(struct xt_nfacct_match_info, nfacct),
.me = THIS_MODULE,
};
diff --git a/net/netfilter/xt_statistic.c b/net/netfilter/xt_statistic.c
index 11de55e7a868..8710fdba2ae2 100644
--- a/net/netfilter/xt_statistic.c
+++ b/net/netfilter/xt_statistic.c
@@ -84,6 +84,7 @@ static struct xt_match xt_statistic_mt_reg __read_mostly = {
.checkentry = statistic_mt_check,
.destroy = statistic_mt_destroy,
.matchsize = sizeof(struct xt_statistic_info),
+ .usersize = offsetof(struct xt_statistic_info, master),
.me = THIS_MODULE,
};
--
2.15.1
From: "Kirill A. Shutemov" <[email protected]>
[ Upstream commit c58f0bb77ed8bf93dfdde762b01cb67eebbdfc29 ]
Patch series "Do not lose dirty bit on THP pages", v4.
Vlastimil noted that pmdp_invalidate() is not atomic and we can lose
dirty and access bits if CPU sets them after pmdp dereference, but
before set_pmd_at().
The bug can lead to data loss, but the race window is tiny and I haven't
seen any reports suggesting that it happens in reality. So I don't
think it is worth sending it to stable.
Unfortunately, there's no way to address the issue in a generic way. We
need to fix all architectures that support THP one-by-one.
All architectures that support THP have to provide an atomic
pmdp_invalidate() that returns the previous value.
If the generic implementation of pmdp_invalidate() is used, the
architecture needs to provide an atomic pmdp_establish().
pmdp_establish() is not used outside the generic implementation of
pmdp_invalidate() so far, but I think this can change in the future.
This patch (of 12):
This is an implementation of pmdp_establish() that is only suitable for
an architecture that doesn't have hardware dirty/accessed bits. In this
case we can't race with CPU which sets these bits and non-atomic
approach is fine.
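Not the kernel implementation, just a small C11 sketch of the "swap in a
new value and return the old one" loop that an atomic pmdp_establish()
provides, so bits set concurrently by hardware are not lost; the retry
loop mirrors a cmpxchg-based variant.

#include <stdatomic.h>
#include <stdio.h>
#include <stdint.h>

static uint64_t establish(_Atomic uint64_t *slot, uint64_t new_val)
{
        uint64_t old = atomic_load(slot);

        /* retry until we swap against the value we actually observed */
        while (!atomic_compare_exchange_weak(slot, &old, new_val))
                ;
        return old;  /* caller still sees any bits set concurrently */
}

int main(void)
{
        _Atomic uint64_t pmd = 0x83;   /* pretend entry with some flag bits */
        uint64_t prev = establish(&pmd, 0x00);

        printf("previous entry: 0x%llx\n", (unsigned long long)prev);
        return 0;
}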
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Kirill A. Shutemov <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Aneesh Kumar K.V <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: David Daney <[email protected]>
Cc: David Miller <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Nitin Gupta <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vineet Gupta <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
include/asm-generic/pgtable.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 77b891a8f191..2142bceaeb75 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -309,6 +309,21 @@ extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
#endif
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+/*
+ * This is an implementation of pmdp_establish() that is only suitable for an
+ * architecture that doesn't have hardware dirty/accessed bits. In this case we
+ * can't race with CPU which sets these bits and non-atomic aproach is fine.
+ */
+static inline pmd_t generic_pmdp_establish(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp, pmd_t pmd)
+{
+ pmd_t old_pmd = *pmdp;
+ set_pmd_at(vma->vm_mm, address, pmdp, pmd);
+ return old_pmd;
+}
+#endif
+
#ifndef __HAVE_ARCH_PMDP_INVALIDATE
extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp);
--
2.15.1
From: Arnd Bergmann <[email protected]>
[ Upstream commit 328008a72d38b5bde6491e463405c34a81a65d3e ]
The declaration for swsusp_arch_resume marks it as 'asmlinkage', but the
definition in x86-32 does not, and it fails to include the header with the
declaration. This leads to a warning when building with
link-time-optimizations:
kernel/power/power.h:108:23: error: type of 'swsusp_arch_resume' does not match original declaration [-Werror=lto-type-mismatch]
extern asmlinkage int swsusp_arch_resume(void);
^
arch/x86/power/hibernate_32.c:148:0: note: 'swsusp_arch_resume' was previously declared here
int swsusp_arch_resume(void)
This moves the declaration into a globally visible header file and fixes up
both x86 definitions to match it.
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Nicolas Pitre <[email protected]>
Cc: [email protected]
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Bart Van Assche <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/power/hibernate_32.c | 2 +-
arch/x86/power/hibernate_64.c | 2 +-
include/linux/suspend.h | 2 ++
kernel/power/power.h | 3 ---
4 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/arch/x86/power/hibernate_32.c b/arch/x86/power/hibernate_32.c
index c35fdb585c68..afc4ed7b1578 100644
--- a/arch/x86/power/hibernate_32.c
+++ b/arch/x86/power/hibernate_32.c
@@ -145,7 +145,7 @@ static inline void resume_init_first_level_page_table(pgd_t *pg_dir)
#endif
}
-int swsusp_arch_resume(void)
+asmlinkage int swsusp_arch_resume(void)
{
int error;
diff --git a/arch/x86/power/hibernate_64.c b/arch/x86/power/hibernate_64.c
index f910c514438f..0ef5e5204968 100644
--- a/arch/x86/power/hibernate_64.c
+++ b/arch/x86/power/hibernate_64.c
@@ -174,7 +174,7 @@ out:
return 0;
}
-int swsusp_arch_resume(void)
+asmlinkage int swsusp_arch_resume(void)
{
int error;
diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index d60b0f5c38d5..8544357d92d0 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -384,6 +384,8 @@ extern int swsusp_page_is_forbidden(struct page *);
extern void swsusp_set_page_free(struct page *);
extern void swsusp_unset_page_free(struct page *);
extern unsigned long get_safe_page(gfp_t gfp_mask);
+extern asmlinkage int swsusp_arch_suspend(void);
+extern asmlinkage int swsusp_arch_resume(void);
extern void hibernation_set_ops(const struct platform_hibernation_ops *ops);
extern int hibernate(void);
diff --git a/kernel/power/power.h b/kernel/power/power.h
index f29cd178df90..9e58bdc8a562 100644
--- a/kernel/power/power.h
+++ b/kernel/power/power.h
@@ -104,9 +104,6 @@ extern int in_suspend;
extern dev_t swsusp_resume_device;
extern sector_t swsusp_resume_block;
-extern asmlinkage int swsusp_arch_suspend(void);
-extern asmlinkage int swsusp_arch_resume(void);
-
extern int create_basic_memory_bitmaps(void);
extern void free_basic_memory_bitmaps(void);
extern int hibernate_preallocate_memory(void);
--
2.15.1
From: Chen Yu <[email protected]>
[ Upstream commit ba1edb9a5125a617d612f98eead14b9b84e75c3a ]
The following warning was triggered after resumed from S3 -
if all the nonboot CPUs were put offline before suspend:
[ 1840.329515] unchecked MSR access error: RDMSR from 0x771 at rIP: 0xffffffff86061e3a (native_read_msr+0xa/0x30)
[ 1840.329516] Call Trace:
[ 1840.329521] __rdmsr_on_cpu+0x33/0x50
[ 1840.329525] generic_exec_single+0x81/0xb0
[ 1840.329527] smp_call_function_single+0xd2/0x100
[ 1840.329530] ? acpi_ds_result_pop+0xdd/0xf2
[ 1840.329532] ? acpi_ds_create_operand+0x215/0x23c
[ 1840.329534] rdmsrl_on_cpu+0x57/0x80
[ 1840.329536] ? cpumask_next+0x1b/0x20
[ 1840.329538] ? rdmsrl_on_cpu+0x57/0x80
[ 1840.329541] intel_pstate_update_perf_limits+0xf3/0x220
[ 1840.329544] ? notifier_call_chain+0x4a/0x70
[ 1840.329546] intel_pstate_set_policy+0x4e/0x150
[ 1840.329548] cpufreq_set_policy+0xcd/0x2f0
[ 1840.329550] cpufreq_update_policy+0xb2/0x130
[ 1840.329552] ? cpufreq_update_policy+0x130/0x130
[ 1840.329556] acpi_processor_ppc_has_changed+0x65/0x80
[ 1840.329558] acpi_processor_notify+0x80/0x100
[ 1840.329561] acpi_ev_notify_dispatch+0x44/0x5c
[ 1840.329563] acpi_os_execute_deferred+0x14/0x20
[ 1840.329565] process_one_work+0x193/0x3c0
[ 1840.329567] worker_thread+0x35/0x3b0
[ 1840.329569] kthread+0x125/0x140
[ 1840.329571] ? process_one_work+0x3c0/0x3c0
[ 1840.329572] ? kthread_park+0x60/0x60
[ 1840.329575] ? do_syscall_64+0x67/0x180
[ 1840.329577] ret_from_fork+0x25/0x30
[ 1840.329585] unchecked MSR access error: WRMSR to 0x774 (tried to write 0x0000000000000000) at rIP: 0xffffffff86061f78 (native_write_msr+0x8/0x30)
[ 1840.329586] Call Trace:
[ 1840.329587] __wrmsr_on_cpu+0x37/0x40
[ 1840.329589] generic_exec_single+0x81/0xb0
[ 1840.329592] smp_call_function_single+0xd2/0x100
[ 1840.329594] ? acpi_ds_create_operand+0x215/0x23c
[ 1840.329595] ? cpumask_next+0x1b/0x20
[ 1840.329597] wrmsrl_on_cpu+0x57/0x70
[ 1840.329598] ? rdmsrl_on_cpu+0x57/0x80
[ 1840.329599] ? wrmsrl_on_cpu+0x57/0x70
[ 1840.329602] intel_pstate_hwp_set+0xd3/0x150
[ 1840.329604] intel_pstate_set_policy+0x119/0x150
[ 1840.329606] cpufreq_set_policy+0xcd/0x2f0
[ 1840.329607] cpufreq_update_policy+0xb2/0x130
[ 1840.329610] ? cpufreq_update_policy+0x130/0x130
[ 1840.329613] acpi_processor_ppc_has_changed+0x65/0x80
[ 1840.329615] acpi_processor_notify+0x80/0x100
[ 1840.329617] acpi_ev_notify_dispatch+0x44/0x5c
[ 1840.329619] acpi_os_execute_deferred+0x14/0x20
[ 1840.329620] process_one_work+0x193/0x3c0
[ 1840.329622] worker_thread+0x35/0x3b0
[ 1840.329624] kthread+0x125/0x140
[ 1840.329625] ? process_one_work+0x3c0/0x3c0
[ 1840.329626] ? kthread_park+0x60/0x60
[ 1840.329628] ? do_syscall_64+0x67/0x180
[ 1840.329631] ret_from_fork+0x25/0x30
This is because, if there's only one online CPU, the (package-wide)
MSR_PM_ENABLE cannot be re-enabled after resume, since
intel_pstate_hwp_enable() is only invoked in the APs' online path
after resume - if no AP is online, HWP remains disabled after resume
(the BIOS has disabled it in S3). If a _PPC change notification that
touches an HWP register then arrives during this stage, the warning
is triggered.
Since we don't call acpi_processor_register_performance() when
HWP is enabled, pr->performance will be NULL. When it is NULL we
don't need to do the _PPC change notification.
Reported-by: Doug Smythies <[email protected]>
Suggested-by: Srinivas Pandruvada <[email protected]>
Signed-off-by: Yu Chen <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/acpi/processor_perflib.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/acpi/processor_perflib.c b/drivers/acpi/processor_perflib.c
index 18b72eec3507..c7cf48ad5cb9 100644
--- a/drivers/acpi/processor_perflib.c
+++ b/drivers/acpi/processor_perflib.c
@@ -159,7 +159,7 @@ void acpi_processor_ppc_has_changed(struct acpi_processor *pr, int event_flag)
{
int ret;
- if (ignore_ppc) {
+ if (ignore_ppc || !pr->performance) {
/*
* Only when it is notification event, the _OST object
* will be evaluated. Otherwise it is skipped.
--
2.15.1
From: Matt Redfearn <[email protected]>
[ Upstream commit 0cde5b44a30f1daaef1c34e08191239dc63271c4 ]
When commit b27311e1cace ("MIPS: TXx9: Add RBTX4939 board support")
added board support for the RBTX4939, it added a call to
led_classdev_register even if the LED class is built as a module.
Built-in arch code cannot call module code directly like this. Commit
b33b44073734 ("MIPS: TXX9: use IS_ENABLED() macro") subsequently
changed the inclusion of this code to a single check that
CONFIG_LEDS_CLASS is either builtin or a module, but the same issue
remains.
This leads to MIPS allmodconfig builds failing when CONFIG_MACH_TX49XX=y
is set:
arch/mips/txx9/rbtx4939/setup.o: In function `rbtx4939_led_probe':
setup.c:(.init.text+0xc0): undefined reference to `of_led_classdev_register'
make: *** [Makefile:999: vmlinux] Error 1
Fix this by using the IS_BUILTIN() macro instead.
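As a simplified, standalone stand-in for the kconfig.h macros (assuming a
CONFIG_LEDS_CLASS=m build; the real kernel macros use a token-pasting
trick rather than these #defines): "enabled" covers both =y and =m, while
"builtin" covers =y only, which is why built-in arch code must test the
latter before calling into the class.

#include <stdio.h>

/* pretend kconfig output: built as a module */
#define CONFIG_LEDS_CLASS_MODULE 1
/* #define CONFIG_LEDS_CLASS 1 */   /* would be defined if built-in */

#ifdef CONFIG_LEDS_CLASS
#define LEDS_BUILTIN 1
#else
#define LEDS_BUILTIN 0
#endif

#if defined(CONFIG_LEDS_CLASS) || defined(CONFIG_LEDS_CLASS_MODULE)
#define LEDS_ENABLED 1
#else
#define LEDS_ENABLED 0
#endif

int main(void)
{
        /* with =m: enabled but not built-in, so built-in arch code must
         * not call into it directly */
        printf("enabled=%d builtin=%d\n", LEDS_ENABLED, LEDS_BUILTIN);
        return 0;
}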
Fixes: b27311e1cace ("MIPS: TXx9: Add RBTX4939 board support")
Signed-off-by: Matt Redfearn <[email protected]>
Reviewed-by: James Hogan <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/18544/
Signed-off-by: James Hogan <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/mips/txx9/rbtx4939/setup.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/mips/txx9/rbtx4939/setup.c b/arch/mips/txx9/rbtx4939/setup.c
index 8b937300fb7f..fd26fadc8617 100644
--- a/arch/mips/txx9/rbtx4939/setup.c
+++ b/arch/mips/txx9/rbtx4939/setup.c
@@ -186,7 +186,7 @@ static void __init rbtx4939_update_ioc_pen(void)
#define RBTX4939_MAX_7SEGLEDS 8
-#if IS_ENABLED(CONFIG_LEDS_CLASS)
+#if IS_BUILTIN(CONFIG_LEDS_CLASS)
static u8 led_val[RBTX4939_MAX_7SEGLEDS];
struct rbtx4939_led_data {
struct led_classdev cdev;
@@ -261,7 +261,7 @@ static inline void rbtx4939_led_setup(void)
static void __rbtx4939_7segled_putc(unsigned int pos, unsigned char val)
{
-#if IS_ENABLED(CONFIG_LEDS_CLASS)
+#if IS_BUILTIN(CONFIG_LEDS_CLASS)
unsigned long flags;
local_irq_save(flags);
/* bit7: reserved for LED class */
--
2.15.1
From: "[email protected]" <[email protected]>
[ Upstream commit 7ac0c332f96bb9688560726f5e80c097ed8de59a ]
This patch fixes the following Smatch warning:
drivers/scsi/qla2xxx/qla_init.c:130 qla2x00_async_iocb_timeout() error: we previously assumed 'fcport' could be null (see line 107)
Fixes: 5c25d451163c ("scsi: qla2xxx: Fix NULL pointer access for fcport structure")
Reported by: Dan Carpenter <[email protected]>
Signed-off-by: Quinn Tran <[email protected]>
Signed-off-by: Himanshu Madhani <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/scsi/qla2xxx/qla_init.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
index 2300c02ab5e6..e24f57946a17 100644
--- a/drivers/scsi/qla2xxx/qla_init.c
+++ b/drivers/scsi/qla2xxx/qla_init.c
@@ -115,6 +115,8 @@ qla2x00_async_iocb_timeout(void *data)
switch (sp->type) {
case SRB_LOGIN_CMD:
+ if (!fcport)
+ break;
/* Retry as needed. */
lio->u.logio.data[0] = MBS_COMMAND_ERROR;
lio->u.logio.data[1] = lio->u.logio.flags & SRB_LOGIN_RETRIED ?
@@ -128,6 +130,8 @@ qla2x00_async_iocb_timeout(void *data)
qla24xx_handle_plogi_done_event(fcport->vha, &ea);
break;
case SRB_LOGOUT_CMD:
+ if (!fcport)
+ break;
qlt_logo_completion_handler(fcport, QLA_FUNCTION_TIMEOUT);
break;
case SRB_CT_PTHRU_CMD:
--
2.15.1
From: James Hogan <[email protected]>
[ Upstream commit 9a9ab3078e2744a1a55163cfaec73a5798aae33e ]
We now have a platform (Ranchu) in the "generic" platform which matches
based on the FDT compatible string using mips_machine_is_compatible();
however, that function doesn't stop at a blank struct
of_device_id::compatible, as that is an array in the struct, not a
pointer to a string.
Fix the loop completion to check the first byte of the compatible array
rather than the address of the compatible array in the struct.
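A small standalone demonstration of the bug class (invented struct and
strings, plain C): for a char array member, the member itself always has a
non-NULL address, so looping on "m->compatible" never terminates at a blank
sentinel, whereas "m->compatible[0]" does.

#include <stdio.h>

struct id { char compatible[32]; };

static const struct id table[] = {
        { "vendor,board-a" },
        { "vendor,board-b" },
        { "" },                 /* blank sentinel terminates the list */
};

int main(void)
{
        const struct id *m;
        int n = 0;

        for (m = table; m->compatible[0]; m++)
                n++;

        printf("%d entries before sentinel\n", n);  /* 2 */
        return 0;
}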
Fixes: eed0eabd12ef ("MIPS: generic: Introduce generic DT-based board support")
Signed-off-by: James Hogan <[email protected]>
Reviewed-by: Paul Burton <[email protected]>
Reviewed-by: Matt Redfearn <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/18580/
Signed-off-by: Sasha Levin <[email protected]>
---
arch/mips/include/asm/machine.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/mips/include/asm/machine.h b/arch/mips/include/asm/machine.h
index e0d9b373d415..f83879dadd1e 100644
--- a/arch/mips/include/asm/machine.h
+++ b/arch/mips/include/asm/machine.h
@@ -52,7 +52,7 @@ mips_machine_is_compatible(const struct mips_machine *mach, const void *fdt)
if (!mach->matches)
return NULL;
- for (match = mach->matches; match->compatible; match++) {
+ for (match = mach->matches; match->compatible[0]; match++) {
if (fdt_node_check_compatible(fdt, 0, match->compatible) == 0)
return match;
}
--
2.15.1
From: Nitin Gupta <[email protected]>
[ Upstream commit a8e654f01cb725d0bfd741ebca1bf4c9337969cc ]
It's required to avoid losing dirty and accessed bits.
[[email protected]: add a `do' to the do-while loop]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Nitin Gupta <[email protected]>
Signed-off-by: Kirill A. Shutemov <[email protected]>
Cc: David Miller <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/sparc/include/asm/pgtable_64.h | 2 +-
arch/sparc/mm/tlb.c | 23 ++++++++++++++++++-----
2 files changed, 19 insertions(+), 6 deletions(-)
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index fd9d9bac7cfa..79c3bdaaa0b4 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -980,7 +980,7 @@ void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
pmd_t *pmd);
#define __HAVE_ARCH_PMDP_INVALIDATE
-extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+extern pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp);
#define __HAVE_ARCH_PGTABLE_DEPOSIT
diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
index 4ae86bc0d35c..847ddffbf38a 100644
--- a/arch/sparc/mm/tlb.c
+++ b/arch/sparc/mm/tlb.c
@@ -219,17 +219,28 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
}
}
+static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp, pmd_t pmd)
+{
+ pmd_t old;
+
+ do {
+ old = *pmdp;
+ } while (cmpxchg64(&pmdp->pmd, old.pmd, pmd.pmd) != old.pmd);
+
+ return old;
+}
+
/*
* This routine is only called when splitting a THP
*/
-void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp)
{
- pmd_t entry = *pmdp;
-
- pmd_val(entry) &= ~_PAGE_VALID;
+ pmd_t old, entry;
- set_pmd_at(vma->vm_mm, address, pmdp, entry);
+ entry = __pmd(pmd_val(*pmdp) & ~_PAGE_VALID);
+ old = pmdp_establish(vma, address, pmdp, entry);
flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
/*
@@ -240,6 +251,8 @@ void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
if ((pmd_val(entry) & _PAGE_PMD_HUGE) &&
!is_huge_zero_page(pmd_page(entry)))
(vma->vm_mm)->context.thp_pte_count--;
+
+ return old;
}
void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
--
2.15.1
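The pmdp_establish() helper added above uses a compare-and-swap loop so that
the caller gets back exactly the value it replaced, including any dirty or
accessed bits set between the load and the store; a plain load followed by a
store could silently drop those updates. Below is a minimal user-space
approximation of that pattern, using C11 atomics in place of the kernel's
cmpxchg64() and with made-up bit positions, purely as a sketch:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define FAKE_PAGE_VALID	(1ULL << 0)	/* illustrative bit positions only */
#define FAKE_PAGE_DIRTY	(1ULL << 1)

/* Atomically install newval and return whatever value it replaced. */
static uint64_t establish(_Atomic uint64_t *pmdp, uint64_t newval)
{
	uint64_t old = atomic_load(pmdp);

	/* On failure the CAS reloads "old" with the current value; retry. */
	while (!atomic_compare_exchange_weak(pmdp, &old, newval))
		;

	return old;
}

int main(void)
{
	_Atomic uint64_t pmd = FAKE_PAGE_VALID | FAKE_PAGE_DIRTY;

	/* Mimic pmdp_invalidate(): clear the valid bit, keep the rest. */
	uint64_t entry = atomic_load(&pmd) & ~FAKE_PAGE_VALID;
	uint64_t old = establish(&pmd, entry);

	/* The caller still sees the dirty bit that was set before the swap. */
	printf("old had dirty bit: %s\n", (old & FAKE_PAGE_DIRTY) ? "yes" : "no");
	return 0;
}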
From: Corinna Vinschen <[email protected]>
[ Upstream commit 177132df5e45b134c147f419f567a3b56aafaf2b ]
Before libvirt modifies the MAC address and vlan tag for an SRIOV VF
for use by a virtual machine (either using vfio device assignment or
macvtap passthru mode), it saves the current MAC address and vlan tag
so that it can reset them to their original value when the guest is
done. Libvirt can't leave the VF MAC set to the value used by the
now-defunct guest since it may be started again later using a
different VF, but it certainly shouldn't just pick any random value,
either. So it saves the state of everything prior to using the VF, and
resets it to that.
The igb driver initializes the MAC addresses of all VFs to
00:00:00:00:00:00, and reports that when asked (via an RTM_GETLINK
netlink message, also visible in the list of VFs in the output of "ip
link show"). But when libvirt attempts to restore the MAC address back
to 00:00:00:00:00:00 (using an RTM_SETLINK netlink message) the kernel
responds with "Invalid argument".
Forbidding a reset back to the original value leaves the VF MAC at the
value set for the now-defunct virtual machine. Especially on a system
with NetworkManager enabled, this has very bad consequences, since
NetworkManager forces all interfaces to be IFF_UP all the time - if
the same virtual machine is restarted using a different VF (or even on
a different host), there will be multiple interfaces watching for
traffic with the same MAC address.
To allow libvirt to revert to the original state, we need a way to
remove the administrative set MAC on a VF, to allow normal host
operation again, and to reset/overwrite the VF MAC via VF netdev.
This patch implements the outlined scenario by allowing the VF MAC to
be set to 00:00:00:00:00:00 via RTM_SETLINK on the PF.
igb_ndo_set_vf_mac resets the IGB_VF_FLAG_PF_SET_MAC flag to 0,
so it's possible to reset the VF MAC back to the original value via
the VF netdev.
Note: Recent patches to libvirt allow for a workaround if the NIC
isn't capable of resetting the administrative MAC back to all 0, but
in theory the NIC should allow resetting the MAC in the first place.
Signed-off-by: Corinna Vinschen <[email protected]>
Tested-by: Aaron Brown <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/net/ethernet/intel/igb/igb_main.c | 42 +++++++++++++++++++++++--------
1 file changed, 31 insertions(+), 11 deletions(-)
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index d1a44a84c97e..6ca580cdfd84 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -8373,7 +8373,8 @@ static void igb_rar_set_index(struct igb_adapter *adapter, u32 index)
/* Indicate to hardware the Address is Valid. */
if (adapter->mac_table[index].state & IGB_MAC_STATE_IN_USE) {
- rar_high |= E1000_RAH_AV;
+ if (is_valid_ether_addr(addr))
+ rar_high |= E1000_RAH_AV;
if (hw->mac.type == e1000_82575)
rar_high |= E1000_RAH_POOL_1 *
@@ -8411,17 +8412,36 @@ static int igb_set_vf_mac(struct igb_adapter *adapter,
static int igb_ndo_set_vf_mac(struct net_device *netdev, int vf, u8 *mac)
{
struct igb_adapter *adapter = netdev_priv(netdev);
- if (!is_valid_ether_addr(mac) || (vf >= adapter->vfs_allocated_count))
+
+ if (vf >= adapter->vfs_allocated_count)
+ return -EINVAL;
+
+ /* Setting the VF MAC to 0 reverts the IGB_VF_FLAG_PF_SET_MAC
+ * flag and allows to overwrite the MAC via VF netdev. This
+ * is necessary to allow libvirt a way to restore the original
+ * MAC after unbinding vfio-pci and reloading igbvf after shutting
+ * down a VM.
+ */
+ if (is_zero_ether_addr(mac)) {
+ adapter->vf_data[vf].flags &= ~IGB_VF_FLAG_PF_SET_MAC;
+ dev_info(&adapter->pdev->dev,
+ "remove administratively set MAC on VF %d\n",
+ vf);
+ } else if (is_valid_ether_addr(mac)) {
+ adapter->vf_data[vf].flags |= IGB_VF_FLAG_PF_SET_MAC;
+ dev_info(&adapter->pdev->dev, "setting MAC %pM on VF %d\n",
+ mac, vf);
+ dev_info(&adapter->pdev->dev,
+ "Reload the VF driver to make this change effective.");
+ /* Generate additional warning if PF is down */
+ if (test_bit(__IGB_DOWN, &adapter->state)) {
+ dev_warn(&adapter->pdev->dev,
+ "The VF MAC address has been set, but the PF device is not up.\n");
+ dev_warn(&adapter->pdev->dev,
+ "Bring the PF device up before attempting to use the VF device.\n");
+ }
+ } else {
return -EINVAL;
- adapter->vf_data[vf].flags |= IGB_VF_FLAG_PF_SET_MAC;
- dev_info(&adapter->pdev->dev, "setting MAC %pM on VF %d\n", mac, vf);
- dev_info(&adapter->pdev->dev,
- "Reload the VF driver to make this change effective.");
- if (test_bit(__IGB_DOWN, &adapter->state)) {
- dev_warn(&adapter->pdev->dev,
- "The VF MAC address has been set, but the PF device is not up.\n");
- dev_warn(&adapter->pdev->dev,
- "Bring the PF device up before attempting to use the VF device.\n");
}
return igb_set_vf_mac(adapter, vf, mac);
}
--
2.15.1
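The rewritten igb_ndo_set_vf_mac() above distinguishes three cases: an
all-zero address clears the administratively-set flag, a valid unicast
address sets it, and anything else is rejected. A rough user-space
re-implementation of just that classification, with simplified stand-ins
for the kernel's is_zero_ether_addr() and is_valid_ether_addr() helpers,
might look like this:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ETH_ALEN 6

/* Simplified stand-ins for the etherdevice.h helpers (illustrative only). */
static bool addr_is_zero(const uint8_t *addr)
{
	static const uint8_t zero[ETH_ALEN];
	return memcmp(addr, zero, ETH_ALEN) == 0;
}

static bool addr_is_valid(const uint8_t *addr)
{
	/* Valid == not multicast (low bit of first octet) and not all-zero. */
	return !(addr[0] & 1) && !addr_is_zero(addr);
}

/* Mirror of the three-way decision in igb_ndo_set_vf_mac(). */
static int classify(const uint8_t *mac)
{
	if (addr_is_zero(mac))
		return 0;	/* clear the PF-set flag; VF may pick its own MAC */
	if (addr_is_valid(mac))
		return 1;	/* set the PF-set flag and program the address */
	return -1;		/* -EINVAL in the driver */
}

int main(void)
{
	const uint8_t zero[ETH_ALEN] = { 0 };
	const uint8_t unicast[ETH_ALEN] = { 0x52, 0x54, 0x00, 0x12, 0x34, 0x56 };
	const uint8_t multicast[ETH_ALEN] = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 };

	printf("zero:      %d\n", classify(zero));	/* 0 */
	printf("unicast:   %d\n", classify(unicast));	/* 1 */
	printf("multicast: %d\n", classify(multicast));	/* -1 */
	return 0;
}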
From: Dan Carpenter <[email protected]>
[ Upstream commit 123af9043e93cb6f235207d260d50f832cdb5439 ]
The loop timeout doesn't work because it's a post-op and ends with "tmo"
set to -1. I changed it from a post-op to a pre-op and I changed the
starting value from 5 to 6 so we still iterate 5 times. I left the
other loop as it was because its starting value is a large number.
Fixes: b3c70c9ea62a ("ASoC: Alchemy AC97C/I2SC audio support")
Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
sound/soc/au1x/ac97c.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/sound/soc/au1x/ac97c.c b/sound/soc/au1x/ac97c.c
index 29a97d52e8ad..66d6c52e7761 100644
--- a/sound/soc/au1x/ac97c.c
+++ b/sound/soc/au1x/ac97c.c
@@ -91,8 +91,8 @@ static unsigned short au1xac97c_ac97_read(struct snd_ac97 *ac97,
do {
mutex_lock(&ctx->lock);
- tmo = 5;
- while ((RD(ctx, AC97_STATUS) & STAT_CP) && tmo--)
+ tmo = 6;
+ while ((RD(ctx, AC97_STATUS) & STAT_CP) && --tmo)
udelay(21); /* wait an ac97 frame time */
if (!tmo) {
pr_debug("ac97rd timeout #1\n");
@@ -105,7 +105,7 @@ static unsigned short au1xac97c_ac97_read(struct snd_ac97 *ac97,
* poll, Forrest, poll...
*/
tmo = 0x10000;
- while ((RD(ctx, AC97_STATUS) & STAT_CP) && tmo--)
+ while ((RD(ctx, AC97_STATUS) & STAT_CP) && --tmo)
asm volatile ("nop");
data = RD(ctx, AC97_CMDRESP);
--
2.15.1
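The off-by-one that the patch above fixes is easy to reproduce outside the
driver: with a post-decrement, a timed-out loop exits with tmo at -1, so the
driver's later "if (!tmo)" check never fires, while a pre-decrement starting
one higher exits with tmo at 0 after the same five iterations. A small
stand-alone sketch, where the busy() stub stands in for the STAT_CP status
poll:

#include <stdbool.h>
#include <stdio.h>

static bool busy(void)
{
	return true;	/* simulate a device that never clears STAT_CP */
}

int main(void)
{
	int tmo;

	tmo = 5;
	while (busy() && tmo--)
		;	/* udelay(21) in the driver */
	printf("post-op: tmo = %d, timeout detected: %s\n",
	       tmo, !tmo ? "yes" : "no");	/* tmo = -1, "no" */

	tmo = 6;
	while (busy() && --tmo)
		;
	printf("pre-op:  tmo = %d, timeout detected: %s\n",
	       tmo, !tmo ? "yes" : "no");	/* tmo = 0, "yes" */

	return 0;
}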
On Mon 2018-04-09 00:19:53, Sasha Levin wrote:
> From: "Steven Rostedt (VMware)" <[email protected]>
>
> [ Upstream commit dbdda842fe96f8932bae554f0adf463c27c42bc7 ]
>
> This patch implements what I discussed in Kernel Summit. I added
> lockdep annotation (hopefully correctly), and it hasn't had any splats
> (since I fixed some bugs in the first iterations). It did catch
> problems when I had the owner covering too much. But now that the owner
> is only set when actively calling the consoles, lockdep has stayed
> quiet.
Same here. I do not think that this is material for a stable backport.
More details can be found in my reply to the patch for 4.15, see
https://lkml.kernel.org/r/[email protected]
Best Regards,
Petr
PS: I wonder how much time you give people to react before releasing
this. The number of autosel mails is increasing and I am involved
in only a very small number of them. I wonder whether other people
get overwhelmed by this.
Hi Sasha,
please consider taking a small fix for this one (also useful for 4.15):
commit d3b9e8ad425cfd5b9116732e057f1b48e4d3bcb8
Author: Max Gurtovoy <[email protected]>
Date: Mon Mar 5 20:09:48 2018 +0200
RDMA/core: Reduce poll batch for direct cq polling
Fix warning limit for kernel stack consumption:
drivers/infiniband/core/cq.c: In function 'ib_process_cq_direct':
drivers/infiniband/core/cq.c:78:1: error: the frame size of 1032 bytes
is larger than 1024 bytes [-Werror=frame-larger-than=]
Using smaller ib_wc array on the stack brings us comfortably below that
limit again.
Fixes: 246d8b184c10 ("IB/cq: Don't force IB_POLL_DIRECT poll
context for ib_process_cq_direct")
Reported-by: Arnd Bergmann <[email protected]>
Reviewed-by: Sergey Gorenko <[email protected]>
Signed-off-by: Max Gurtovoy <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Reviewed-by: Bart Van Assche <[email protected]>
Acked-by: Arnd Bergmann <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
-Max.
On 4/9/2018 3:20 AM, Sasha Levin wrote:
> From: Sagi Grimberg <[email protected]>
>
> [ Upstream commit 246d8b184c100e8eb6b4e8c88f232c2ed2a4e672 ]
>
> polling the completion queue directly does not interfere
> with the existing polling logic, hence drop the requirement.
> Be aware that running ib_process_cq_direct with non IB_POLL_DIRECT
> CQ may trigger concurrent CQ processing.
>
> This can be used for polling mode ULPs.
>
> Cc: Bart Van Assche <[email protected]>
> Reported-by: Steve Wise <[email protected]>
> Signed-off-by: Sagi Grimberg <[email protected]>
> [maxg: added wcs array argument to __ib_process_cq]
> Signed-off-by: Max Gurtovoy <[email protected]>
> Signed-off-by: Doug Ledford <[email protected]>
> Signed-off-by: Sasha Levin <[email protected]>
> ---
> drivers/infiniband/core/cq.c | 23 +++++++++++++----------
> 1 file changed, 13 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c
> index f2ae75fa3128..c8c5a5a7f433 100644
> --- a/drivers/infiniband/core/cq.c
> +++ b/drivers/infiniband/core/cq.c
> @@ -25,9 +25,10 @@
> #define IB_POLL_FLAGS \
> (IB_CQ_NEXT_COMP | IB_CQ_REPORT_MISSED_EVENTS)
>
> -static int __ib_process_cq(struct ib_cq *cq, int budget)
> +static int __ib_process_cq(struct ib_cq *cq, int budget, struct ib_wc *poll_wc)
> {
> int i, n, completed = 0;
> + struct ib_wc *wcs = poll_wc ? : cq->wc;
>
> /*
> * budget might be (-1) if the caller does not
> @@ -35,9 +36,9 @@ static int __ib_process_cq(struct ib_cq *cq, int budget)
> * minimum here.
> */
> while ((n = ib_poll_cq(cq, min_t(u32, IB_POLL_BATCH,
> - budget - completed), cq->wc)) > 0) {
> + budget - completed), wcs)) > 0) {
> for (i = 0; i < n; i++) {
> - struct ib_wc *wc = &cq->wc[i];
> + struct ib_wc *wc = &wcs[i];
>
> if (wc->wr_cqe)
> wc->wr_cqe->done(cq, wc);
> @@ -60,18 +61,20 @@ static int __ib_process_cq(struct ib_cq *cq, int budget)
> * @cq: CQ to process
> * @budget: number of CQEs to poll for
> *
> - * This function is used to process all outstanding CQ entries on a
> - * %IB_POLL_DIRECT CQ. It does not offload CQ processing to a different
> - * context and does not ask for completion interrupts from the HCA.
> + * This function is used to process all outstanding CQ entries.
> + * It does not offload CQ processing to a different context and does
> + * not ask for completion interrupts from the HCA.
> + * Using direct processing on CQ with non IB_POLL_DIRECT type may trigger
> + * concurrent processing.
> *
> * Note: do not pass -1 as %budget unless it is guaranteed that the number
> * of completions that will be processed is small.
> */
> int ib_process_cq_direct(struct ib_cq *cq, int budget)
> {
> - WARN_ON_ONCE(cq->poll_ctx != IB_POLL_DIRECT);
> + struct ib_wc wcs[IB_POLL_BATCH];
>
> - return __ib_process_cq(cq, budget);
> + return __ib_process_cq(cq, budget, wcs);
> }
> EXPORT_SYMBOL(ib_process_cq_direct);
>
> @@ -85,7 +88,7 @@ static int ib_poll_handler(struct irq_poll *iop, int budget)
> struct ib_cq *cq = container_of(iop, struct ib_cq, iop);
> int completed;
>
> - completed = __ib_process_cq(cq, budget);
> + completed = __ib_process_cq(cq, budget, NULL);
> if (completed < budget) {
> irq_poll_complete(&cq->iop);
> if (ib_req_notify_cq(cq, IB_POLL_FLAGS) > 0)
> @@ -105,7 +108,7 @@ static void ib_cq_poll_work(struct work_struct *work)
> struct ib_cq *cq = container_of(work, struct ib_cq, work);
> int completed;
>
> - completed = __ib_process_cq(cq, IB_POLL_BUDGET_WORKQUEUE);
> + completed = __ib_process_cq(cq, IB_POLL_BUDGET_WORKQUEUE, NULL);
> if (completed >= IB_POLL_BUDGET_WORKQUEUE ||
> ib_req_notify_cq(cq, IB_POLL_FLAGS) > 0)
> queue_work(ib_comp_wq, &cq->work);
>
Hi Sasha,
On Mon, Apr 09, 2018 at 12:20:18AM +0000, Sasha Levin wrote:
> From: James Hogan <[email protected]>
>
> [ Upstream commit 5f2483eb2423152445b39f2db59d372f523e664e ]
>
> Make doesn't expand shell style "vmlinuz.{32,ecoff,bin,srec}" to the 4
> separate files, so none of these files get cleaned up by make clean.
> List the files separately instead.
>
> Fixes: ec3352925b74 ("MIPS: Remove all generated vmlinuz* files on "make clean"")
> Signed-off-by: James Hogan <[email protected]>
> Cc: Ralf Baechle <[email protected]>
> Cc: [email protected]
> Patchwork: https://patchwork.linux-mips.org/patch/18491/
> Signed-off-by: Sasha Levin <[email protected]>
Perhaps you're already on top of it, but this would appear to be equally
relevant to the older stable branches too?
Thanks
James
On Mon, Apr 09, 2018 at 12:20:20AM +0000, Sasha Levin wrote:
> From: Maarten ter Huurne <[email protected]>
>
> [ Upstream commit 1f7412e0e2f327fe7dc5a0c2fc36d7b319d05d47 ]
>
> According to config2, the associativity would be 5-ways, but the
> documentation states 4-ways, which also matches the documented
> L2 cache size of 256 kB.
JZ4770 support is new in 4.16, so no need for this to be backported.
More likely it'll just break the build due to references to
MACH_INGENIC_JZ4770.
Cheers
James
On Mon, Apr 09, 2018 at 10:22:46AM +0200, Petr Mladek wrote:
>PS: I wonder how much time you give people to react before releasing
>this. The number of autosel mails is increasing and I am involved
>only in very small amount of them. I wonder if some other people
>gets overwhelmed by this.
My review cycle gives at least a week, and there's usually another week
until Greg releases them.
I know it's a lot of mails, but in reality it's a lot of commits that
should go in -stable.
Would a different format for review make it easier?
Grabbed it for both 4.14 and 4.15, thanks Max!
On Mon, Apr 09, 2018 at 07:21:29PM +0300, Max Gurtovoy wrote:
>Hi Sasha,
>please consider taking a small fix for this one (also useful for 4.15):
>
>commit d3b9e8ad425cfd5b9116732e057f1b48e4d3bcb8
>Author: Max Gurtovoy <[email protected]>
>Date: Mon Mar 5 20:09:48 2018 +0200
>
> RDMA/core: Reduce poll batch for direct cq polling
>
> Fix warning limit for kernel stack consumption:
>
> drivers/infiniband/core/cq.c: In function 'ib_process_cq_direct':
> drivers/infiniband/core/cq.c:78:1: error: the frame size of 1032 bytes
> is larger than 1024 bytes [-Werror=frame-larger-than=]
>
> Using smaller ib_wc array on the stack brings us comfortably below that
> limit again.
>
> Fixes: 246d8b184c10 ("IB/cq: Don't force IB_POLL_DIRECT poll
>context for ib_process_cq_direct")
> Reported-by: Arnd Bergmann <[email protected]>
> Reviewed-by: Sergey Gorenko <[email protected]>
> Signed-off-by: Max Gurtovoy <[email protected]>
> Signed-off-by: Leon Romanovsky <[email protected]>
> Reviewed-by: Bart Van Assche <[email protected]>
> Acked-by: Arnd Bergmann <[email protected]>
> Signed-off-by: Jason Gunthorpe <[email protected]>
>
>
>-Max.
>
>
>On 4/9/2018 3:20 AM, Sasha Levin wrote:
>>From: Sagi Grimberg <[email protected]>
>>
>>[ Upstream commit 246d8b184c100e8eb6b4e8c88f232c2ed2a4e672 ]
>>
>>polling the completion queue directly does not interfere
>>with the existing polling logic, hence drop the requirement.
>>Be aware that running ib_process_cq_direct with non IB_POLL_DIRECT
>>CQ may trigger concurrent CQ processing.
>>
>>This can be used for polling mode ULPs.
>>
>>Cc: Bart Van Assche <[email protected]>
>>Reported-by: Steve Wise <[email protected]>
>>Signed-off-by: Sagi Grimberg <[email protected]>
>>[maxg: added wcs array argument to __ib_process_cq]
>>Signed-off-by: Max Gurtovoy <[email protected]>
>>Signed-off-by: Doug Ledford <[email protected]>
>>Signed-off-by: Sasha Levin <[email protected]>
>>---
>> drivers/infiniband/core/cq.c | 23 +++++++++++++----------
>> 1 file changed, 13 insertions(+), 10 deletions(-)
>>
>>diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c
>>index f2ae75fa3128..c8c5a5a7f433 100644
>>--- a/drivers/infiniband/core/cq.c
>>+++ b/drivers/infiniband/core/cq.c
>>@@ -25,9 +25,10 @@
>> #define IB_POLL_FLAGS \
>> (IB_CQ_NEXT_COMP | IB_CQ_REPORT_MISSED_EVENTS)
>>-static int __ib_process_cq(struct ib_cq *cq, int budget)
>>+static int __ib_process_cq(struct ib_cq *cq, int budget, struct ib_wc *poll_wc)
>> {
>> int i, n, completed = 0;
>>+ struct ib_wc *wcs = poll_wc ? : cq->wc;
>> /*
>> * budget might be (-1) if the caller does not
>>@@ -35,9 +36,9 @@ static int __ib_process_cq(struct ib_cq *cq, int budget)
>> * minimum here.
>> */
>> while ((n = ib_poll_cq(cq, min_t(u32, IB_POLL_BATCH,
>>- budget - completed), cq->wc)) > 0) {
>>+ budget - completed), wcs)) > 0) {
>> for (i = 0; i < n; i++) {
>>- struct ib_wc *wc = &cq->wc[i];
>>+ struct ib_wc *wc = &wcs[i];
>> if (wc->wr_cqe)
>> wc->wr_cqe->done(cq, wc);
>>@@ -60,18 +61,20 @@ static int __ib_process_cq(struct ib_cq *cq, int budget)
>> * @cq: CQ to process
>> * @budget: number of CQEs to poll for
>> *
>>- * This function is used to process all outstanding CQ entries on a
>>- * %IB_POLL_DIRECT CQ. It does not offload CQ processing to a different
>>- * context and does not ask for completion interrupts from the HCA.
>>+ * This function is used to process all outstanding CQ entries.
>>+ * It does not offload CQ processing to a different context and does
>>+ * not ask for completion interrupts from the HCA.
>>+ * Using direct processing on CQ with non IB_POLL_DIRECT type may trigger
>>+ * concurrent processing.
>> *
>> * Note: do not pass -1 as %budget unless it is guaranteed that the number
>> * of completions that will be processed is small.
>> */
>> int ib_process_cq_direct(struct ib_cq *cq, int budget)
>> {
>>- WARN_ON_ONCE(cq->poll_ctx != IB_POLL_DIRECT);
>>+ struct ib_wc wcs[IB_POLL_BATCH];
>>- return __ib_process_cq(cq, budget);
>>+ return __ib_process_cq(cq, budget, wcs);
>> }
>> EXPORT_SYMBOL(ib_process_cq_direct);
>>@@ -85,7 +88,7 @@ static int ib_poll_handler(struct irq_poll *iop, int budget)
>> struct ib_cq *cq = container_of(iop, struct ib_cq, iop);
>> int completed;
>>- completed = __ib_process_cq(cq, budget);
>>+ completed = __ib_process_cq(cq, budget, NULL);
>> if (completed < budget) {
>> irq_poll_complete(&cq->iop);
>> if (ib_req_notify_cq(cq, IB_POLL_FLAGS) > 0)
>>@@ -105,7 +108,7 @@ static void ib_cq_poll_work(struct work_struct *work)
>> struct ib_cq *cq = container_of(work, struct ib_cq, work);
>> int completed;
>>- completed = __ib_process_cq(cq, IB_POLL_BUDGET_WORKQUEUE);
>>+ completed = __ib_process_cq(cq, IB_POLL_BUDGET_WORKQUEUE, NULL);
>> if (completed >= IB_POLL_BUDGET_WORKQUEUE ||
>> ib_req_notify_cq(cq, IB_POLL_FLAGS) > 0)
>> queue_work(ib_comp_wq, &cq->work);
>>
On Mon, Apr 09, 2018 at 09:08:38PM +0100, James Hogan wrote:
>On Mon, Apr 09, 2018 at 12:20:20AM +0000, Sasha Levin wrote:
>> From: Maarten ter Huurne <[email protected]>
>>
>> [ Upstream commit 1f7412e0e2f327fe7dc5a0c2fc36d7b319d05d47 ]
>>
>> According to config2, the associativity would be 5-ways, but the
>> documentation states 4-ways, which also matches the documented
>> L2 cache size of 256 kB.
>
>JZ4770 support is new in 4.16, so no need for this to be backported.
>More likely it'll just break the build due to references to
>MACH_INGENIC_JZ4770.
>
>Cheers
>James
Now removed, thanks James!
On Sun, 15 Apr 2018 14:42:51 +0000
Sasha Levin <[email protected]> wrote:
> On Mon, Apr 09, 2018 at 10:22:46AM +0200, Petr Mladek wrote:
> >PS: I wonder how much time you give people to react before releasing
> >this. The number of autosel mails is increasing and I am involved
> >only in very small amount of them. I wonder if some other people
> >gets overwhelmed by this.
>
> My review cycle gives at least a week, and there's usually another week
> until Greg releases them.
>
> I know it's a lot of mails, but in reality it's a lot of commits that
> should go in -stable.
>
> Would a different format for review would make it easier?
I wonder if the "AUTOSEL" patches should at least have an "ack-by" from
someone before they are pulled in. Otherwise there may be some subtle
issues that can find their way into stable releases.
-- Steve
On Mon, Apr 16, 2018 at 6:30 AM, Steven Rostedt <[email protected]> wrote:
>
> I wonder if the "AUTOSEL" patches should at least have an "ack-by" from
> someone before they are pulled in. Otherwise there may be some subtle
> issues that can find their way into stable releases.
I don't know about anybody else, but I get so many of the patch-bot
patches for stable etc that I will *not* reply to normal cases. Only
if there's some issue with a patch will I reply.
I probably do get more than most, but still - requiring active
participation for the steady flow of normal stable patches is almost
pointless.
Just look at the subject line of this thread. The numbers are so big
that you almost need exponential notation for them.
Linus
On Mon 2018-04-16 08:18:09, Linus Torvalds wrote:
> On Mon, Apr 16, 2018 at 6:30 AM, Steven Rostedt <[email protected]> wrote:
> >
> > I wonder if the "AUTOSEL" patches should at least have an "ack-by" from
> > someone before they are pulled in. Otherwise there may be some subtle
> > issues that can find their way into stable releases.
>
> I don't know about anybody else, but I get so many of the patch-bot
> patches for stable etc that I will *not* reply to normal cases. Only
> if there's some issue with a patch will I reply.
>
> I probably do get more than most, but still - requiring active
> participation for the steady flow of normal stable patches is almost
> pointless.
>
> Just look at the subject line of this thread. The numbers are so big
> that you almost need exponential notation for them.
The question is whether we need that many stable patches. Autosel seems to be
picking up race conditions in LED state and W+X page fixes... I'd
really like to see fewer stable patches.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Mon, 16 Apr 2018 08:18:09 -0700
Linus Torvalds <[email protected]> wrote:
> On Mon, Apr 16, 2018 at 6:30 AM, Steven Rostedt <[email protected]> wrote:
> >
> > I wonder if the "AUTOSEL" patches should at least have an "ack-by" from
> > someone before they are pulled in. Otherwise there may be some subtle
> > issues that can find their way into stable releases.
>
> I don't know about anybody else, but I get so many of the patch-bot
> patches for stable etc that I will *not* reply to normal cases. Only
> if there's some issue with a patch will I reply.
>
> I probably do get more than most, but still - requiring active
> participation for the steady flow of normal stable patches is almost
> pointless.
>
> Just look at the subject line of this thread. The numbers are so big
> that you almost need exponential notation for them.
>
I'm worried about just backporting patches that nobody actually looked
at. Is someone going through and vetting that these should definitely
be added to stable? I would like to have some trusted human (who doesn't
even need to be the author or maintainer of the patch) look at all
the patches before they are applied.
I would say anything more than a trivial patch would require an author or
sub-maintainer ack. Look at this patch: I don't think it should go to
stable, even though it does fix issues. But the fix is for systems
already having issues, and this keeps printk from making things worse.
The fix has side effects that other commits have addressed, and if this
patch gets backported, those other ones must too.
Maybe I was too strong by saying all patches should be acked, but
anything more than buffer overflows and off-by-one errors probably
requires a bit more vetting by a human than just pulling in all patches
that a bot flags to be backported.
-- Steve
On Mon, Apr 16, 2018 at 08:18:09AM -0700, Linus Torvalds wrote:
>On Mon, Apr 16, 2018 at 6:30 AM, Steven Rostedt <[email protected]> wrote:
>>
>> I wonder if the "AUTOSEL" patches should at least have an "ack-by" from
>> someone before they are pulled in. Otherwise there may be some subtle
>> issues that can find their way into stable releases.
>
>I don't know about anybody else, but I get so many of the patch-bot
>patches for stable etc that I will *not* reply to normal cases. Only
>if there's some issue with a patch will I reply.
>
>I probably do get more than most, but still - requiring active
>participation for the steady flow of normal stable patches is almost
>pointless.
>
>Just look at the subject line of this thread. The numbers are so big
>that you almost need exponential notation for them.
>
> Linus
I would be more than happy to make this an opt-in process on my end, but
given the responses I've been seeing from folks so far I doubt it'll
work for many people. Humans don't scale :)
There are a few statistics that suggest that the current workflow is
"good enough":
1. The rejection rate (commits fixed or reverted) for
AUTOSEL commits is similar to (actually smaller than) that of
commits tagged for -stable.
2. Human response rate on review requests is higher than the
rate Greg is getting with his review mails. This is somewhat
expected, but it shows that people do what Linus does and reply
just when they see something wrong.
I also think that using mailing lists for these is exposing the
limitations of mailing lists. It's hard to go through the number of
patches AUTOSEL is generating this way, but right now we don't have a
better alternative.
On Mon, Apr 16, 2018 at 05:30:31PM +0200, Pavel Machek wrote:
>On Mon 2018-04-16 08:18:09, Linus Torvalds wrote:
>> On Mon, Apr 16, 2018 at 6:30 AM, Steven Rostedt <[email protected]> wrote:
>> >
>> > I wonder if the "AUTOSEL" patches should at least have an "ack-by" from
>> > someone before they are pulled in. Otherwise there may be some subtle
>> > issues that can find their way into stable releases.
>>
>> I don't know about anybody else, but I get so many of the patch-bot
>> patches for stable etc that I will *not* reply to normal cases. Only
>> if there's some issue with a patch will I reply.
>>
>> I probably do get more than most, but still - requiring active
>> participation for the steady flow of normal stable patches is almost
>> pointless.
>>
>> Just look at the subject line of this thread. The numbers are so big
>> that you almost need exponential notation for them.
>
>Question is if we need that many stable patches? Autosel seems to be
>picking up race conditions in LED state and W+X page fixes... I'd
>really like to see less stable patches.
Why? Given that the kernel keeps seeing more and more lines of code in
each new release, tools around the kernel keep evolving (new fuzzers,
testing suites, etc), and code gets more eyes, this guarantees that
you'll see more and more stable patches for each release as well.
Is there a reason not to take LED fixes if they fix a bug and don't
cause a regression? Sure, we can draw some arbitrary line, maybe
designate some subsystems that are more "important" than others, but
what's the point?
On Mon, Apr 16, 2018 at 11:36:29AM -0400, Steven Rostedt wrote:
>On Mon, 16 Apr 2018 08:18:09 -0700
>Linus Torvalds <[email protected]> wrote:
>
>> On Mon, Apr 16, 2018 at 6:30 AM, Steven Rostedt <[email protected]> wrote:
>> >
>> > I wonder if the "AUTOSEL" patches should at least have an "ack-by" from
>> > someone before they are pulled in. Otherwise there may be some subtle
>> > issues that can find their way into stable releases.
>>
>> I don't know about anybody else, but I get so many of the patch-bot
>> patches for stable etc that I will *not* reply to normal cases. Only
>> if there's some issue with a patch will I reply.
>>
>> I probably do get more than most, but still - requiring active
>> participation for the steady flow of normal stable patches is almost
>> pointless.
>>
>> Just look at the subject line of this thread. The numbers are so big
>> that you almost need exponential notation for them.
>>
>
>I'm worried about just backporting patches that nobody actually looked
>at. Is someone going through and vetting that these should definitely
>be added to stable. I would like to have some trusted human (doesn't
>even need to be the author or maintainer of the patch) to look at all
>the patches before they are applied.
I do go through every single commit sent this way and review it.
Sometimes things slip by, but it's not a fully automatic process.
Let's look at this patch as a concrete example: the only reason,
according to the stable rules, that it shouldn't go in -stable is that
it's longer than 100 lines.
Otherwise, it fixes a bug, it doesn't introduce any new features, it's
upstream, and so on. It had some fixes that went upstream as well?
Great, let's grab those as well.
>I would say anything more than a trivial patch would require author or
>sub maintainer ack. Look at this patch, I don't think it should go to
>stable, even though it does fix issues. But the fix is for systems
>already having issues, and this keeps printk from making things worse.
>The fix has side effects that other commits have addressed, and if this
>patch gets backported, those other ones must too.
Sure, let's get those patches in as well.
One of the things Greg is pushing strongly for is "bug compatibility":
we want the kernel to behave the same way between mainline and stable.
If the code is broken, it should be broken in the same way.
If anything, after this discussion I'd recommend that we take this patch
and its follow-up fixes...
>Maybe I was too strong by saying all patches should be acked, but
>anything more than buffer overflows and off by one errors probably
>require a bit more vetting by a human than to just pull in all patches
>that a bot flags to be backported.
If anyone wants to give me a hand with going through these I'd be more
than happy to. I know that Ben Hutchings is looking at the ones that
land in 4.4 carefully. It's always good to have more than 1 set of eyes!
On Mon 2018-04-16 15:50:34, Sasha Levin wrote:
> On Mon, Apr 16, 2018 at 05:30:31PM +0200, Pavel Machek wrote:
> >On Mon 2018-04-16 08:18:09, Linus Torvalds wrote:
> >> On Mon, Apr 16, 2018 at 6:30 AM, Steven Rostedt <[email protected]> wrote:
> >> >
> >> > I wonder if the "AUTOSEL" patches should at least have an "ack-by" from
> >> > someone before they are pulled in. Otherwise there may be some subtle
> >> > issues that can find their way into stable releases.
> >>
> >> I don't know about anybody else, but I get so many of the patch-bot
> >> patches for stable etc that I will *not* reply to normal cases. Only
> >> if there's some issue with a patch will I reply.
> >>
> >> I probably do get more than most, but still - requiring active
> >> participation for the steady flow of normal stable patches is almost
> >> pointless.
> >>
> >> Just look at the subject line of this thread. The numbers are so big
> >> that you almost need exponential notation for them.
> >
> >Question is if we need that many stable patches? Autosel seems to be
> >picking up race conditions in LED state and W+X page fixes... I'd
> >really like to see less stable patches.
>
> Why? Given that the kernel keeps seeing more and more lines of code in
> each new release, tools around the kernel keep evolving (new fuzzers,
> testing suites, etc), and code gets more eyes, this guarantees that
> you'll see more and more stable patches for each release as well.
>
> Is there a reason not to take LED fixes if they fix a bug and don't
> cause a regression? Sure, we can draw some arbitrary line, maybe
> designate some subsystems that are more "important" than others, but
> what's the point?
There's a tradeoff.
You want to fix serious bugs in stable, and you really don't want
regressions in stable. And ... stable not having 1000s of patches
would be nice, too.
That means you want to ignore not-so-serious bugs, because the benefit of
fixing them is lower than the risk of regressions. I believe bugs that
do not bother anyone should _not_ be fixed in stable.
That was the case with the LED patch. Yes, the commit fixed a bug, but it
introduced regressions that were fixed by subsequent patches.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Mon 2018-04-16 16:02:03, Sasha Levin wrote:
> On Mon, Apr 16, 2018 at 11:36:29AM -0400, Steven Rostedt wrote:
> >On Mon, 16 Apr 2018 08:18:09 -0700
> >Linus Torvalds <[email protected]> wrote:
> >
> >> On Mon, Apr 16, 2018 at 6:30 AM, Steven Rostedt <[email protected]> wrote:
> >> >
> >> > I wonder if the "AUTOSEL" patches should at least have an "ack-by" from
> >> > someone before they are pulled in. Otherwise there may be some subtle
> >> > issues that can find their way into stable releases.
> >>
> >> I don't know about anybody else, but I get so many of the patch-bot
> >> patches for stable etc that I will *not* reply to normal cases. Only
> >> if there's some issue with a patch will I reply.
> >>
> >> I probably do get more than most, but still - requiring active
> >> participation for the steady flow of normal stable patches is almost
> >> pointless.
> >>
> >> Just look at the subject line of this thread. The numbers are so big
> >> that you almost need exponential notation for them.
> >>
> >
> >I'm worried about just backporting patches that nobody actually looked
> >at. Is someone going through and vetting that these should definitely
> >be added to stable. I would like to have some trusted human (doesn't
> >even need to be the author or maintainer of the patch) to look at all
> >the patches before they are applied.
>
> I do go through every single commit sent this way and review it.
> Sometimes things slip by, but it's not a fully automatic process.
>
> Let's look at this patch as a concrete example: the only reason,
> according to the stable rules, that it shouldn't go in -stable is that
> it's longer than 100 lines.
>
> Otherwise, it fixes a bug, it doesn't introduce any new features, it's
> upstream, and so on. It had some fixes that went upstream as well?
> Great, let's grab those as well.
>
> >I would say anything more than a trivial patch would require author or
> >sub maintainer ack. Look at this patch, I don't think it should go to
> >stable, even though it does fix issues. But the fix is for systems
> >already having issues, and this keeps printk from making things worse.
> >The fix has side effects that other commits have addressed, and if this
> >patch gets backported, those other ones must too.
>
> Sure, let's get those patches in as well.
>
> One of the things Greg is pushing strongly for is "bug compatibility":
> we want the kernel to behave the same way between mainline and stable.
> If the code is broken, it should be broken in the same way.
Maybe Greg should be Cced on this conversation?
Anyway, I don't think "bug compatibility" is a good goal.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Mon, 16 Apr 2018 16:02:03 +0000
Sasha Levin <[email protected]> wrote:
> One of the things Greg is pushing strongly for is "bug compatibility":
> we want the kernel to behave the same way between mainline and stable.
> If the code is broken, it should be broken in the same way.
Wait! What does that mean? What's the purpose of stable if it is as
broken as mainline?
-- Steve
On Mon, Apr 16, 2018 at 06:06:08PM +0200, Pavel Machek wrote:
>On Mon 2018-04-16 15:50:34, Sasha Levin wrote:
>> On Mon, Apr 16, 2018 at 05:30:31PM +0200, Pavel Machek wrote:
>> >On Mon 2018-04-16 08:18:09, Linus Torvalds wrote:
>> >> On Mon, Apr 16, 2018 at 6:30 AM, Steven Rostedt <[email protected]> wrote:
>> >> >
>> >> > I wonder if the "AUTOSEL" patches should at least have an "ack-by" from
>> >> > someone before they are pulled in. Otherwise there may be some subtle
>> >> > issues that can find their way into stable releases.
>> >>
>> >> I don't know about anybody else, but I get so many of the patch-bot
>> >> patches for stable etc that I will *not* reply to normal cases. Only
>> >> if there's some issue with a patch will I reply.
>> >>
>> >> I probably do get more than most, but still - requiring active
>> >> participation for the steady flow of normal stable patches is almost
>> >> pointless.
>> >>
>> >> Just look at the subject line of this thread. The numbers are so big
>> >> that you almost need exponential notation for them.
>> >
>> >Question is if we need that many stable patches? Autosel seems to be
>> >picking up race conditions in LED state and W+X page fixes... I'd
>> >really like to see less stable patches.
>>
>> Why? Given that the kernel keeps seeing more and more lines of code in
>> each new release, tools around the kernel keep evolving (new fuzzers,
>> testing suites, etc), and code gets more eyes, this guarantees that
>> you'll see more and more stable patches for each release as well.
>>
>> Is there a reason not to take LED fixes if they fix a bug and don't
>> cause a regression? Sure, we can draw some arbitrary line, maybe
>> designate some subsystems that are more "important" than others, but
>> what's the point?
>
>There's a tradeoff.
>
>You want to fix serious bugs in stable, and you really don't want
>regressions in stable. And ... stable not having 1000s of patches
>would be nice, too.
I don't think we should use a number cap here, but rather look at the
regression rate: how many patches broke something?
Since the rate we're seeing now with AUTOSEL is similar to what we were
seeing before AUTOSEL, what's the problem it's causing?
>That means you want to ignore not-so-serious bugs, because benefit of
>fixing them is lower than risk of the regressions. I believe bugs that
>do not bother anyone should _not_ be fixed in stable.
>
>That was case of the LED patch. Yes, the commit fixed bug, but it
>introduced regressions that were fixed by subsequent patches.
How do you know if a bug bothers someone?
If a user is annoyed by a LED issue, is he expected to triage the bug,
report it on LKML and patiently wait for the appropriate patch to be
backported?
On Mon, Apr 16, 2018 at 12:12:24PM -0400, Steven Rostedt wrote:
>On Mon, 16 Apr 2018 16:02:03 +0000
>Sasha Levin <[email protected]> wrote:
>
>> One of the things Greg is pushing strongly for is "bug compatibility":
>> we want the kernel to behave the same way between mainline and stable.
>> If the code is broken, it should be broken in the same way.
>
>Wait! What does that mean? What's the purpose of stable if it is as
>broken as mainline?
This just means that if there is a fix that went in mainline, and the
fix is broken somehow, we'd rather take the broken fix than not.
In this scenario, *something* will be broken, it's just a matter of
what. We'd rather have the same thing broken between mainline and
stable.
On Mon, 16 Apr 2018 18:06:08 +0200
Pavel Machek <[email protected]> wrote:
> That means you want to ignore not-so-serious bugs, because benefit of
> fixing them is lower than risk of the regressions. I believe bugs that
> do not bother anyone should _not_ be fixed in stable.
>
> That was case of the LED patch. Yes, the commit fixed bug, but it
> introduced regressions that were fixed by subsequent patches.
I agree. I would disagree that the patch this thread is on should go to
stable. What's the point of stable if it introduces regressions by
backporting bug fixes for non major bugs.
Every fix I make I consider labeling it for stable. The ones I don't, I
feel the bug fix is not worth the risk of added regressions.
I worry that people will get lazy and stop marking commits for stable
(or even thinking about it) because they know that there's a bot that
will pull it for them. That thought crossed my mind. Why do I want to
label anything stable if a bot will probably catch it? Then I could
just wait till the bot posts it before I even think about stable.
-- Steve
On Mon, 16 Apr 2018 16:14:15 +0000
Sasha Levin <[email protected]> wrote:
> Since the rate we're seeing now with AUTOSEL is similar to what we were
> seeing before AUTOSEL, what's the problem it's causing?
Does that mean we just doubled the rate of regressions? That's the
problem.
>
> How do you know if a bug bothers someone?
>
> If a user is annoyed by a LED issue, is he expected to triage the bug,
> report it on LKML and patiently wait for the appropriate patch to be
> backported?
Yes.
-- Steve
On Mon, Apr 16, 2018 at 12:20:19PM -0400, Steven Rostedt wrote:
>On Mon, 16 Apr 2018 18:06:08 +0200
>Pavel Machek <[email protected]> wrote:
>
>> That means you want to ignore not-so-serious bugs, because benefit of
>> fixing them is lower than risk of the regressions. I believe bugs that
>> do not bother anyone should _not_ be fixed in stable.
>>
>> That was case of the LED patch. Yes, the commit fixed bug, but it
>> introduced regressions that were fixed by subsequent patches.
>
>I agree. I would disagree that the patch this thread is on should go to
>stable. What's the point of stable if it introduces regressions by
>backporting bug fixes for non major bugs.
One such reason is that users will then hit the regression when they
upgrade to the next -stable version anyway.
>Every fix I make I consider labeling it for stable. The ones I don't, I
>feel the bug fix is not worth the risk of added regressions.
>
>I worry that people will get lazy and stop marking commits for stable
>(or even thinking about it) because they know that there's a bot that
>will pull it for them. That thought crossed my mind. Why do I want to
>label anything stable if a bot will probably catch it. Then I could
>just wait till the bot posts it before I even think about stable.
People are already "lazy". You are actually an exception for marking your
commits.
Yes, folks will chime in with "sure, I mark my patches too!", but if you
look at the entire committer pool in the kernel you'll see that most
don't bother with this to begin with.
> >> Is there a reason not to take LED fixes if they fix a bug and don't
> >> cause a regression? Sure, we can draw some arbitrary line, maybe
> >> designate some subsystems that are more "important" than others, but
> >> what's the point?
> >
> >There's a tradeoff.
> >
> >You want to fix serious bugs in stable, and you really don't want
> >regressions in stable. And ... stable not having 1000s of patches
> >would be nice, too.
>
> I don't think we should use a number cap here, but rather look at the
> regression rate: how many patches broke something?
>
> Since the rate we're seeing now with AUTOSEL is similar to what we were
> seeing before AUTOSEL, what's the problem it's causing?
Regression rate should not be the only criterion.
More patches mean a bigger chance that customers' patches will
conflict with something in -stable, for example.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Mon, 16 Apr 2018 16:19:14 +0000
Sasha Levin <[email protected]> wrote:
> >Wait! What does that mean? What's the purpose of stable if it is as
> >broken as mainline?
>
> This just means that if there is a fix that went in mainline, and the
> fix is broken somehow, we'd rather take the broken fix than not.
>
> In this scenario, *something* will be broken, it's just a matter of
> what. We'd rather have the same thing broken between mainline and
> stable.
Honestly, I think that removes all value of the stable series. I
remember when the stable series were first created. People were saying
that it wouldn't even get to more than 5 versions, because the bar for
backporting was supposed to be very high. Today it's just a fork of the
kernel at a given version. No more features, but we will be OK with
regressions. I'm struggling to see what the benefit of it is supposed to
be.
-- Steve
On Mon, Apr 16, 2018 at 12:22:44PM -0400, Steven Rostedt wrote:
>On Mon, 16 Apr 2018 16:14:15 +0000
>Sasha Levin <[email protected]> wrote:
>
>> Since the rate we're seeing now with AUTOSEL is similar to what we were
>> seeing before AUTOSEL, what's the problem it's causing?
>
>Does that mean we just doubled the rate of regressions? That's the
>problem.
No, the rate stayed the same :)
If before ~2% of stable commits were buggy, this is still the case with
AUTOSEL.
>>
>> How do you know if a bug bothers someone?
>>
>> If a user is annoyed by a LED issue, is he expected to triage the bug,
>> report it on LKML and patiently wait for the appropriate patch to be
>> backported?
>
>Yes.
I'm honestly not sure how to respond.
Let me ask my wife (who is happy using Linux as a regular desktop user)
how comfortable she would be with triaging kernel bugs...
On Mon, Apr 16, 2018 at 12:30:19PM -0400, Steven Rostedt wrote:
>On Mon, 16 Apr 2018 16:19:14 +0000
>Sasha Levin <[email protected]> wrote:
>
>> >Wait! What does that mean? What's the purpose of stable if it is as
>> >broken as mainline?
>>
>> This just means that if there is a fix that went in mainline, and the
>> fix is broken somehow, we'd rather take the broken fix than not.
>>
>> In this scenario, *something* will be broken, it's just a matter of
>> what. We'd rather have the same thing broken between mainline and
>> stable.
>
>Honestly, I think that removes all value of the stable series. I
>remember when the stable series were first created. People were saying
>that it wouldn't even get to more than 5 versions, because the bar for
>backporting was suppose to be very high. Today it's just a fork of the
>kernel at a given version. No more features, but we will be OK with
>regressions. I'm struggling to see what the benefit of it is suppose to
>be?
It's not "OK with regressions".
Let's look at a hypothetical example: You have a 4.15.1 kernel that has
a broken printf() behaviour, so that calling:
pr_err("%d", 5)
would print:
"Microsoft Rulez"
Bad, right? So you went ahead and fixed it, and now it prints "5" as you
might expect. But alas, with your patch, running:
pr_err("%s", "hi!")
would show a cat picture for 5 seconds.
Should we take your patch in -stable or not? If we don't, we're stuck
with the original issue while the mainline kernel will behave
differently, but if we do - we introduce a new regression.
So it's not the case that a -stable kernel will have *more* regression,
it'll just have similar ones to mainline.
On Mon 2018-04-16 16:28:00, Sasha Levin wrote:
> On Mon, Apr 16, 2018 at 12:20:19PM -0400, Steven Rostedt wrote:
> >On Mon, 16 Apr 2018 18:06:08 +0200
> >Pavel Machek <[email protected]> wrote:
> >
> >> That means you want to ignore not-so-serious bugs, because benefit of
> >> fixing them is lower than risk of the regressions. I believe bugs that
> >> do not bother anyone should _not_ be fixed in stable.
> >>
> >> That was case of the LED patch. Yes, the commit fixed bug, but it
> >> introduced regressions that were fixed by subsequent patches.
> >
> >I agree. I would disagree that the patch this thread is on should go to
> >stable. What's the point of stable if it introduces regressions by
> >backporting bug fixes for non major bugs.
>
> One such reason is that users will then hit the regression when they
> upgrade to the next -stable version anyways.
Well, yes, testing is required when moving from 4.14 to 4.15. But
testing should not be required when moving from 4.14.5 to 4.14.6.
> >Every fix I make I consider labeling it for stable. The ones I don't, I
> >feel the bug fix is not worth the risk of added regressions.
> >
> >I worry that people will get lazy and stop marking commits for stable
> >(or even thinking about it) because they know that there's a bot that
> >will pull it for them. That thought crossed my mind. Why do I want to
> >label anything stable if a bot will probably catch it. Then I could
> >just wait till the bot posts it before I even think about stable.
>
> People are already "lazy". You are actually an exception for marking your
> commits.
>
> Yes, folks will chime in with "sure, I mark my patches too!", but if you
> look at the entire committer pool in the kernel you'll see that most
> don't bother with this to begin with.
So you take everything and put it into stable? I don't think that's a
solution.
If you are worried about people not putting enough "Stable: " tags in
their commits, perhaps you can write them emails "hey, I think this
should go to stable, do you agree"? You should get people marking
their commits themselves pretty quickly...
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Mon, Apr 16, 2018 at 06:28:50PM +0200, Pavel Machek wrote:
>
>> >> Is there a reason not to take LED fixes if they fix a bug and don't
>> >> cause a regression? Sure, we can draw some arbitrary line, maybe
>> >> designate some subsystems that are more "important" than others, but
>> >> what's the point?
>> >
>> >There's a tradeoff.
>> >
>> >You want to fix serious bugs in stable, and you really don't want
>> >regressions in stable. And ... stable not having 1000s of patches
>> >would be nice, too.
>>
>> I don't think we should use a number cap here, but rather look at the
>> regression rate: how many patches broke something?
>>
>> Since the rate we're seeing now with AUTOSEL is similar to what we were
>> seeing before AUTOSEL, what's the problem it's causing?
>
>Regression rate should not be the only criteria.
>
>More patches mean bigger chance customer's patches will have a
>conflict with something in -stable, for example.
Out of tree patches can't be a consideration here. There are no
guarantees for out of tree code, ever.
On Mon 2018-04-16 16:39:20, Sasha Levin wrote:
> On Mon, Apr 16, 2018 at 06:28:50PM +0200, Pavel Machek wrote:
> >
> >> >> Is there a reason not to take LED fixes if they fix a bug and don't
> >> >> cause a regression? Sure, we can draw some arbitrary line, maybe
> >> >> designate some subsystems that are more "important" than others, but
> >> >> what's the point?
> >> >
> >> >There's a tradeoff.
> >> >
> >> >You want to fix serious bugs in stable, and you really don't want
> >> >regressions in stable. And ... stable not having 1000s of patches
> >> >would be nice, too.
> >>
> >> I don't think we should use a number cap here, but rather look at the
> >> regression rate: how many patches broke something?
> >>
> >> Since the rate we're seeing now with AUTOSEL is similar to what we were
> >> seeing before AUTOSEL, what's the problem it's causing?
> >
> >Regression rate should not be the only criteria.
> >
> >More patches mean bigger chance customer's patches will have a
> >conflict with something in -stable, for example.
>
> Out of tree patches can't be a consideration here. There are no
> guarantees for out of tree code, ever.
Out of tree code is not a consideration for mainline, agreed. Stable
should be different.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Mon, Apr 16, 2018 at 06:39:53PM +0200, Pavel Machek wrote:
>On Mon 2018-04-16 16:28:00, Sasha Levin wrote:
>> On Mon, Apr 16, 2018 at 12:20:19PM -0400, Steven Rostedt wrote:
>> >On Mon, 16 Apr 2018 18:06:08 +0200
>> >Pavel Machek <[email protected]> wrote:
>> >
>> >> That means you want to ignore not-so-serious bugs, because benefit of
>> >> fixing them is lower than risk of the regressions. I believe bugs that
>> >> do not bother anyone should _not_ be fixed in stable.
>> >>
>> >> That was case of the LED patch. Yes, the commit fixed bug, but it
>> >> introduced regressions that were fixed by subsequent patches.
>> >
>> >I agree. I would disagree that the patch this thread is on should go to
>> >stable. What's the point of stable if it introduces regressions by
>> >backporting bug fixes for non major bugs.
>>
>> One such reason is that users will then hit the regression when they
>> upgrade to the next -stable version anyways.
>
>Well, yes, testing is required when moving from 4.14 to 4.15. But
>testing should not be required when moving from 4.14.5 to 4.14.6.
You always have to test, even without the AUTOSEL stuff. The rejection
rate was 2% even before AUTOSEL, so there was always a chance of
regression when upgrading minor stable versions.
>> >Every fix I make I consider labeling it for stable. The ones I don't, I
>> >feel the bug fix is not worth the risk of added regressions.
>> >
>> >I worry that people will get lazy and stop marking commits for stable
>> >(or even thinking about it) because they know that there's a bot that
>> >will pull it for them. That thought crossed my mind. Why do I want to
>> >label anything stable if a bot will probably catch it. Then I could
>> >just wait till the bot posts it before I even think about stable.
>>
>> People are already "lazy". You are actually an exception for marking your
>> commits.
>>
>> Yes, folks will chime in with "sure, I mark my patches too!", but if you
>> look at the entire committer pool in the kernel you'll see that most
>> don't bother with this to begin with.
>
>So you take everything and put it into stable? I don't think that's a
>solution.
I don't think I ever said that I want to put *everything* in -stable.
>If you are worried about people not putting enough "Stable: " tags in
>their commits, perhaps you can write them emails "hey, I think this
>should go to stable, do you agree"? You should get people marking
>their commits themselves pretty quickly...
Greg has been doing this for years, ask him how that worked out for him.
On Mon, Apr 16, 2018 at 06:42:30PM +0200, Pavel Machek wrote:
>On Mon 2018-04-16 16:39:20, Sasha Levin wrote:
>> On Mon, Apr 16, 2018 at 06:28:50PM +0200, Pavel Machek wrote:
>> >
>> >> >> Is there a reason not to take LED fixes if they fix a bug and don't
>> >> >> cause a regression? Sure, we can draw some arbitrary line, maybe
>> >> >> designate some subsystems that are more "important" than others, but
>> >> >> what's the point?
>> >> >
>> >> >There's a tradeoff.
>> >> >
>> >> >You want to fix serious bugs in stable, and you really don't want
>> >> >regressions in stable. And ... stable not having 1000s of patches
>> >> >would be nice, too.
>> >>
>> >> I don't think we should use a number cap here, but rather look at the
>> >> regression rate: how many patches broke something?
>> >>
>> >> Since the rate we're seeing now with AUTOSEL is similar to what we were
>> >> seeing before AUTOSEL, what's the problem it's causing?
>> >
>> >Regression rate should not be the only criteria.
>> >
>> >More patches mean bigger chance customer's patches will have a
>> >conflict with something in -stable, for example.
>>
>> Out of tree patches can't be a consideration here. There are no
>> guarantees for out of tree code, ever.
>
>Out of tree code is not consideration for mainline, agreed. Stable
>should be different.
This is a discussion we could have in the right forum, but FYI stable
doesn't even guarantee KABI compatibility between minor versions at this
point.
On Mon, 16 Apr 2018 16:31:09 +0000
Sasha Levin <[email protected]> wrote:
> On Mon, Apr 16, 2018 at 12:22:44PM -0400, Steven Rostedt wrote:
> >On Mon, 16 Apr 2018 16:14:15 +0000
> >Sasha Levin <[email protected]> wrote:
> >
> >> Since the rate we're seeing now with AUTOSEL is similar to what we were
> >> seeing before AUTOSEL, what's the problem it's causing?
> >
> >Does that mean we just doubled the rate of regressions? That's the
> >problem.
>
> No, the rate stayed the same :)
>
> If before ~2% of stable commits were buggy, this is still the case with
> AUTOSEL.
Sorry, I didn't mean "rate", I meant "number". If the rate stayed the
same, that means the number increased.
>
> >>
> >> How do you know if a bug bothers someone?
> >>
> >> If a user is annoyed by a LED issue, is he expected to triage the bug,
> >> report it on LKML and patiently wait for the appropriate patch to be
> >> backported?
> >
> >Yes.
>
> I'm honestly not sure how to respond.
>
> Let me ask my wife (who is happy using Linux as a regular desktop user)
> how comfortable she would be with triaging kernel bugs...
That's really up to the distribution, not the main kernel stable. Does
she download and compile the kernels herself? Does she use LEDs?
The point is, stable is to keep what was working continuing to work.
If we don't care about introducing a regression, and just want to keep
regressions the same as mainline, why not just go to mainline? That way
you can also get the new features? Mainline already has the mantra to
not break user space. When I work on new features, I sometimes stumble
on bugs with the current features. And some of those fixes require a
rewrite. It was "good enough" before, but every so often could cause a
bug that the new feature would trigger more often. Do we back port that
rewrite? Do we backport fixes to old code that are more likely to be
triggered by new features?
Ideally, we should be working on getting to no regressions to stable.
-- Steve
On Mon, Apr 16, 2018 at 12:47:11PM -0400, Steven Rostedt wrote:
>On Mon, 16 Apr 2018 16:31:09 +0000
>Sasha Levin <[email protected]> wrote:
>
>> On Mon, Apr 16, 2018 at 12:22:44PM -0400, Steven Rostedt wrote:
>> >On Mon, 16 Apr 2018 16:14:15 +0000
>> >Sasha Levin <[email protected]> wrote:
>> >
>> >> Since the rate we're seeing now with AUTOSEL is similar to what we were
>> >> seeing before AUTOSEL, what's the problem it's causing?
>> >
>> >Does that mean we just doubled the rate of regressions? That's the
>> >problem.
>>
>> No, the rate stayed the same :)
>>
>> If before ~2% of stable commits were buggy, this is still the case with
>> AUTOSEL.
>
>Sorry, I didn't mean "rate" I meant "number". If the rate stayed the
>same, that means the number increased.
Indeed, just like the number of regressions in mainline has increased
over time.
>>
>> >>
>> >> How do you know if a bug bothers someone?
>> >>
>> >> If a user is annoyed by a LED issue, is he expected to triage the bug,
>> >> report it on LKML and patiently wait for the appropriate patch to be
>> >> backported?
>> >
>> >Yes.
>>
>> I'm honestly not sure how to respond.
>>
>> Let me ask my wife (who is happy using Linux as a regular desktop user)
>> how comfortable she would be with triaging kernel bugs...
>
>That's really up to the distribution, not the main kernel stable. Does
>she download and compile the kernels herself? Does she use LEDs?
>
>The point is, stable is to keep what was working continued working.
>If we don't care about introducing a regression, and just want to keep
>regressions the same as mainline, why not just go to mainline? That way
>you can also get the new features? Mainline already has the mantra to
>not break user space. When I work on new features, I sometimes stumble
>on bugs with the current features. And some of those fixes require a
>rewrite. It was "good enough" before, but every so often could cause a
>bug that the new feature would trigger more often. Do we back port that
>rewrite? Do we backport fixes to old code that are more likely to be
>triggered by new features?
>
>Ideally, we should be working on getting to no regressions to stable.
This is exactly what we're doing.
If a fix for a bug in -stable introduces a different regression,
should we take it or not?
On Mon, 16 Apr 2018 16:43:13 +0000
Sasha Levin <[email protected]> wrote:
> >If you are worried about people not putting enough "Stable: " tags in
> >their commits, perhaps you can write them emails "hey, I think this
> >should go to stable, do you agree"? You should get people marking
> >their commits themselves pretty quickly...
>
> Greg has been doing this for years, ask him how that worked out for him.
Then he shouldn't pull in the fix. Let it be broken. As soon as someone
complains about it being broken, then bug the maintainer again. "Hey,
this is broken in 4.x, and this looks like the fix for it. Do you
agree?"
I agree that some patches don't need this discussion. Things that are
obvious. Off-by-one and stack-overflow and other bugs like that. Or
another common bug is error paths that don't release locks. These
should just be backported. But subtle fixes like this thread should
default to (not backport unless someone complains or the
author/maintainer acks it).
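(As an aside for readers outside kernel development, here is a minimal
userspace sketch of the "error path that doesn't release a lock" class
of bug mentioned above, together with the obvious fix; the function and
type names are hypothetical and not taken from any real patch:)

#include <errno.h>
#include <pthread.h>

static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;

static int prepare(void) { return -EIO; }   /* pretend the hardware failed */
static void commit(void) { }

/* Buggy version: the early error return leaks dev_lock. */
int do_thing_buggy(void)
{
	int ret;

	pthread_mutex_lock(&dev_lock);
	ret = prepare();
	if (ret)
		return ret;             /* BUG: dev_lock is never released */
	commit();
	pthread_mutex_unlock(&dev_lock);
	return 0;
}

/* Fixed version: every exit path drops the lock. */
int do_thing_fixed(void)
{
	int ret;

	pthread_mutex_lock(&dev_lock);
	ret = prepare();
	if (!ret)
		commit();
	pthread_mutex_unlock(&dev_lock);
	return ret;
}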
-- Steve
On Mon 2018-04-16 16:45:16, Sasha Levin wrote:
> On Mon, Apr 16, 2018 at 06:42:30PM +0200, Pavel Machek wrote:
> >On Mon 2018-04-16 16:39:20, Sasha Levin wrote:
> >> On Mon, Apr 16, 2018 at 06:28:50PM +0200, Pavel Machek wrote:
> >> >
> >> >> >> Is there a reason not to take LED fixes if they fix a bug and don't
> >> >> >> cause a regression? Sure, we can draw some arbitrary line, maybe
> >> >> >> designate some subsystems that are more "important" than others, but
> >> >> >> what's the point?
> >> >> >
> >> >> >There's a tradeoff.
> >> >> >
> >> >> >You want to fix serious bugs in stable, and you really don't want
> >> >> >regressions in stable. And ... stable not having 1000s of patches
> >> >> >would be nice, too.
> >> >>
> >> >> I don't think we should use a number cap here, but rather look at the
> >> >> regression rate: how many patches broke something?
> >> >>
> >> >> Since the rate we're seeing now with AUTOSEL is similar to what we were
> >> >> seeing before AUTOSEL, what's the problem it's causing?
> >> >
> >> >Regression rate should not be the only criteria.
> >> >
> >> >More patches mean bigger chance customer's patches will have a
> >> >conflict with something in -stable, for example.
> >>
> >> Out of tree patches can't be a consideration here. There are no
> >> guarantees for out of tree code, ever.
> >
> >Out of tree code is not consideration for mainline, agreed. Stable
> >should be different.
>
> This is a discussion we could have in the right forum, but FYI stable
> doesn't even guarantee KABI compatibility between minor versions at this
> point.
Stable should be useful base for distributions. They carry out of tree
patches, and yes, you should try to make their lives easy.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Mon 2018-04-16 12:53:07, Steven Rostedt wrote:
> On Mon, 16 Apr 2018 16:43:13 +0000
> Sasha Levin <[email protected]> wrote:
>
> > >If you are worried about people not putting enough "Stable: " tags in
> > >their commits, perhaps you can write them emails "hey, I think this
> > >should go to stable, do you agree"? You should get people marking
> > >their commits themselves pretty quickly...
> >
> > Greg has been doing this for years, ask him how that worked out for him.
>
> Then he shouldn't pull in the fix. Let it be broken. As soon as someone
> complains about it being broken, then bug the maintainer again. "Hey,
> this is broken in 4.x, and this looks like the fix for it. Do you
> agree?"
>
> I agree that some patches don't need this discussion. Things that are
> obvious. Off-by-one and stack-overflow and other bugs like that. Or
> another common bug is error paths that don't release locks. These
> should just be backported. But subtle fixes like this thread should
> default to (not backport unless someone complains or the
> author/maintainer acks it).
Agreed. And it scares me we are even discussing this.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Hi!
> >> Let me ask my wife (who is happy using Linux as a regular desktop user)
> >> how comfortable she would be with triaging kernel bugs...
> >
> >That's really up to the distribution, not the main kernel stable. Does
> >she download and compile the kernels herself? Does she use LEDs?
> >
> >The point is, stable is to keep what was working continued working.
> >If we don't care about introducing a regression, and just want to keep
> >regressions the same as mainline, why not just go to mainline? That way
> >you can also get the new features? Mainline already has the mantra to
> >not break user space. When I work on new features, I sometimes stumble
> >on bugs with the current features. And some of those fixes require a
> >rewrite. It was "good enough" before, but every so often could cause a
> >bug that the new feature would trigger more often. Do we back port that
> >rewrite? Do we backport fixes to old code that are more likely to be
> >triggered by new features?
> >
> >Ideally, we should be working on getting to no regressions to stable.
>
> This is exactly what we're doing.
>
> If a fix for a bug in -stable introduces a different regression,
> should we take it or not?
If a fix for a bug introduces a regression, would you call it "obviously
correct"?
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Hi!
> How do you know if a bug bothers someone?
>
> If a user is annoyed by a LED issue, is he expected to triage the bug,
> report it on LKML and patiently wait for the appropriate patch to be
> backported?
If the user is annoyed by a LED issue, you are actually expected to
tell him that it is not going to be fixed, because it is not on the list:
- It must fix a problem that causes a build error (but not for things
marked CONFIG_BROKEN), an oops, a hang, data corruption, a real
security issue, or some "oh, that's not good" issue. In short,
something critical.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Mon 2018-04-16 16:37:56, Sasha Levin wrote:
> On Mon, Apr 16, 2018 at 12:30:19PM -0400, Steven Rostedt wrote:
> >On Mon, 16 Apr 2018 16:19:14 +0000
> >Sasha Levin <[email protected]> wrote:
> >
> >> >Wait! What does that mean? What's the purpose of stable if it is as
> >> >broken as mainline?
> >>
> >> This just means that if there is a fix that went in mainline, and the
> >> fix is broken somehow, we'd rather take the broken fix than not.
> >>
> >> In this scenario, *something* will be broken, it's just a matter of
> >> what. We'd rather have the same thing broken between mainline and
> >> stable.
> >
> >Honestly, I think that removes all value of the stable series. I
> >remember when the stable series were first created. People were saying
> >that it wouldn't even get to more than 5 versions, because the bar for
> >backporting was suppose to be very high. Today it's just a fork of the
> >kernel at a given version. No more features, but we will be OK with
> >regressions. I'm struggling to see what the benefit of it is suppose to
> >be?
>
> It's not "OK with regressions".
>
> Let's look at a hypothetical example: You have a 4.15.1 kernel that has
> a broken printf() behaviour so that when you:
>
> pr_err("%d", 5)
>
> Would print:
>
> "Microsoft Rulez"
>
> Bad, right? So you went ahead and fixed it, and now it prints "5" as you
> might expect. But alas, with your patch, running:
>
> pr_err("%s", "hi!")
>
> Would show a cat picture for 5 seconds.
>
> Should we take your patch in -stable or not? If we don't, we're stuck
> with the original issue while the mainline kernel will behave
> differently, but if we do - we introduce a new regression.
Of course not.
- It must be obviously correct and tested.
If it introduces a new bug, it is not correct, and certainly not
obviously correct.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Mon, Apr 16, 2018 at 12:53:07PM -0400, Steven Rostedt wrote:
>On Mon, 16 Apr 2018 16:43:13 +0000
>Sasha Levin <[email protected]> wrote:
>
>> >If you are worried about people not putting enough "Stable: " tags in
>> >their commits, perhaps you can write them emails "hey, I think this
>> >should go to stable, do you agree"? You should get people marking
>> >their commits themselves pretty quickly...
>>
>> Greg has been doing this for years, ask him how that worked out for him.
>
>Then he shouldn't pull in the fix. Let it be broken. As soon as someone
>complains about it being broken, then bug the maintainer again. "Hey,
>this is broken in 4.x, and this looks like the fix for it. Do you
>agree?"
If that process worked, I would also get an ACK/NACK on every AUTOSEL
request I'm sending.
What usually happens with customer reported issues is that either
they're just told to upgrade to the latest kernel (where the bug is
fixed), or, if the distro team can't get them to do that, the team goes hunting
for that bug and just picks the fix into their kernel tree without ever
telling -stable.
I had a project to get all the fixes Canonical had in their trees that
were not in -stable. We're talking hundreds of patches here.
>I agree that some patches don't need this discussion. Things that are
>obvious. Off-by-one and stack-overflow and other bugs like that. Or
>another common bug is error paths that don't release locks. These
>should just be backported. But subtle fixes like this thread should
>default to (not backport unless someone complains or the
>author/maintainer acks it).
Let's play a "be the -stable maintainer" game. Would you take any
of the following commits?
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit?id=fc90441e728aa461a8ed1cfede08b0b9efef43fb
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit?id=a918d2bcea6aab6e671bfb0901cbecc3cf68fca1
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit?id=b1999fa6e8145305a6c8bda30ea20783717708e6
On Mon, Apr 16, 2018 at 07:05:01PM +0200, Pavel Machek wrote:
>Hi!
>
>> How do you know if a bug bothers someone?
>>
>> If a user is annoyed by a LED issue, is he expected to triage the bug,
>> report it on LKML and patiently wait for the appropriate patch to be
>> backported?
>
>If the user is annoyed by a LED issue, you are actually expected to
>tell him that it is not going to be fixed, because it is not on the list:
>
> - It must fix a problem that causes a build error (but not for things
> marked CONFIG_BROKEN), an oops, a hang, data corruption, a real
> security issue, or some "oh, that's not good" issue. In short,
> something critical.
So if a user is operating a nuclear power plant, and has 2 leds: green
one that says "All OK!" and a red one saying "NUCLEAR MELTDOWN!", and
once in a blue moon a race condition is causing the red one to go on and
cause panic in the little province he lives in, we should tell that user
to fuck off?
LEDs may not be critical for you, but they can be critical for someone
else. Think of all the different users we have and the wildly different
ways they use the kernel.
On Mon, Apr 16, 2018 at 07:06:04PM +0200, Pavel Machek wrote:
>On Mon 2018-04-16 16:37:56, Sasha Levin wrote:
>> On Mon, Apr 16, 2018 at 12:30:19PM -0400, Steven Rostedt wrote:
>> >On Mon, 16 Apr 2018 16:19:14 +0000
>> >Sasha Levin <[email protected]> wrote:
>> >
>> >> >Wait! What does that mean? What's the purpose of stable if it is as
>> >> >broken as mainline?
>> >>
>> >> This just means that if there is a fix that went in mainline, and the
>> >> fix is broken somehow, we'd rather take the broken fix than not.
>> >>
>> >> In this scenario, *something* will be broken, it's just a matter of
>> >> what. We'd rather have the same thing broken between mainline and
>> >> stable.
>> >
>> >Honestly, I think that removes all value of the stable series. I
>> >remember when the stable series were first created. People were saying
>> >that it wouldn't even get to more than 5 versions, because the bar for
>> >backporting was suppose to be very high. Today it's just a fork of the
>> >kernel at a given version. No more features, but we will be OK with
>> >regressions. I'm struggling to see what the benefit of it is suppose to
>> >be?
>>
>> It's not "OK with regressions".
>>
>> Let's look at a hypothetical example: You have a 4.15.1 kernel that has
>> a broken printf() behaviour so that when you:
>>
>> pr_err("%d", 5)
>>
>> Would print:
>>
>> "Microsoft Rulez"
>>
>> Bad, right? So you went ahead and fixed it, and now it prints "5" as you
>> might expect. But alas, with your patch, running:
>>
>> pr_err("%s", "hi!")
>>
>> Would show a cat picture for 5 seconds.
>>
>> Should we take your patch in -stable or not? If we don't, we're stuck
>> with the original issue while the mainline kernel will behave
>> differently, but if we do - we introduce a new regression.
>
>Of course not.
>
>- It must be obviously correct and tested.
>
>If it introduces new bug, it is not correct, and certainly not
>obviously correct.
As you might have noticed, we don't strictly follow the rules.
Take a look at the whole PTI story as an example. It's way more than 100
lines, it's not obviously correct, it fixed more than one thing, and so
on, and yet it went in -stable!
Would you argue we shouldn't have backported PTI to -stable?
On Mon, 16 Apr 2018 17:09:38 +0000
Sasha Levin <[email protected]> wrote:
> Let's play a "be the -stable maintainer" game. Would you take any
> of the following commits?
>
> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit?id=fc90441e728aa461a8ed1cfede08b0b9efef43fb
No, not automatically, or without someone from KVM letting me know what
side-effects that may have. Not stopping on a breakpoint is not that
critical, it may be a bit annoying. I would ask the KVM maintainers if
they feel it's critical enough for backporting, but without hearing
from them, I would leave it be.
> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit?id=a918d2bcea6aab6e671bfb0901cbecc3cf68fca1
Sure. Even if it has a subtle regression, that's a critical bug being
fixed.
> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit?id=b1999fa6e8145305a6c8bda30ea20783717708e6
I would consider unlocking a mutex that one didn't lock a critical bug,
so yes.
Again, things that deal with locking or buffer overflows, I would take
the fix, as those are critical. But other behavior issues where it's
not critical, I would leave be unless told further by someone else.
-- Steve
On Mon, Apr 16, 2018 at 01:33:21PM -0400, Steven Rostedt wrote:
>On Mon, 16 Apr 2018 17:09:38 +0000
>Sasha Levin <[email protected]> wrote:
>
>> Let's play a "be the -stable maintainer" game. Would you take any
>> of the following commits?
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit?id=fc90441e728aa461a8ed1cfede08b0b9efef43fb
>
>No, not automatically, or without someone from KVM letting me know what
>side-effects that may have. Not stopping on a breakpoint is not that
>critical, it may be a bit annoying. I would ask the KVM maintainers if
>they feel it's critical enough for backporting, but without hearing
>from them, I would leave it be.
Fair enough.
>> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit?id=a918d2bcea6aab6e671bfb0901cbecc3cf68fca1
>
>Sure. Even if it has a subtle regression, that's a critical bug being
>fixed.
This was later reverted, in -stable:
"""
Commit d63c7dd5bcb9 ("ipr: Fix out-of-bounds null overwrite") removed
the end of line handling when storing the update_fw sysfs attribute.
This changed the userspace API because it started refusing writes
terminated by a line feed, which broke the update tools we already have.
"""
>> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit?id=b1999fa6e8145305a6c8bda30ea20783717708e6
>
>I would consider unlocking a mutex that one didn't lock a critical bug,
>so yes.
>
>Again, things that deal with locking or buffer overflows, I would take
>the fix, as those are critical. But other behavior issues where it's
>not critical, I would leave be unless told further by someone else.
This too, was reverted:
"""
It causes run-time breakage in the 4.4-stable tree and more patches are
needed to be applied first before this one in order to resolve the
issue.
"""
This is how fun it is reviewing AUTOSEL commits :)
Even the small "trivial", "obviously correct" patches have room for
errors for various reasons.
Also note that all of these patches were tagged for stable and actually
ended up in at least one tree.
This is why I'm basing a lot of my decision making on the rejection rate.
If the AUTOSEL process does the job as well as the "regular"
process did before, why push it back?
On Mon, 16 Apr 2018 17:16:10 +0000
Sasha Levin <[email protected]> wrote:
> So if a user is operating a nuclear power plant, and has 2 leds: green
> one that says "All OK!" and a red one saying "NUCLEAR MELTDOWN!", and
> once in a blue moon a race condition is causing the red one to go on and
> cause panic in the little province he lives in, we should tell that user
> to fuck off?
>
> LEDs may not be critical for you, but they can be critical for someone
> else. Think of all the different users we have and the wildly different
> ways they use the kernel.
We can point them to the fix and have them backport it. Or they should
ask their distribution to backport it.
Hopefully they tested the kernel they are using for something like
that, and only want critical fixes. What happens if they take the next
stable assuming that it has critical fixes only, and this fix causes a
regression that shows "ALL OK!" when it wasn't?
Basically, I'd rather have stable be more bug compatible with the version
it is based on, with only critical fixes (things that will cause an
oops), than try to be bug compatible with mainline, as then we get
into a state where things are a Frankenstein of the stable base version
and mainline. I could say, "Yeah this feature works better on this
4.x version of the kernel" and not worry about "4.x.y" versions having
it better.
-- Steve
On Mon, Apr 16, 2018 at 01:44:23PM -0400, Steven Rostedt wrote:
>On Mon, 16 Apr 2018 17:16:10 +0000
>Sasha Levin <[email protected]> wrote:
>
>
>> So if a user is operating a nuclear power plant, and has 2 leds: green
>> one that says "All OK!" and a red one saying "NUCLEAR MELTDOWN!", and
>> once in a blue moon a race condition is causing the red one to go on and
>> cause panic in the little province he lives in, we should tell that user
>> to fuck off?
>>
>> LEDs may not be critical for you, but they can be critical for someone
>> else. Think of all the different users we have and the wildly different
>> ways they use the kernel.
>
>We can point them to the fix and have them backport it. Or they should
>ask their distribution to backport it.
It may work in your subsystem, but it really doesn't work this way with
the kernel.
Let me share a concrete example with you: there's a vfs bug that's a
pain to reproduce going around. It was originally reported on
CoreOS/AWS:
https://github.com/coreos/bugs/issues/2356
But our customers reported to us that they're hitting this issue too.
We couldn't reproduce it, and the call trace indicated it may be a
memory corruption. We could, however, confirm with the customers that the
latest mainline fixes the issue.
Given that we couldn't reproduce it, and neither of us is a fs/ expert,
we sent a mail to LKML, just like you suggested doing:
https://lkml.org/lkml/2018/3/2/1038
But unlike what you said, no one pointed us to the fix, even though the
issue was fixed on mainline. Heck, no one engaged in any meaningful
conversation about the bug.
I really think that we have different views as to how well the whole
"let me shoot a mail to LKML" process works, which leads to different
views on -stable.
>Hopefully they tested the kernel they are using for something like
>that, and only want critical fixes. What happens if they take the next
>stable assuming that it has critical fixes only, and this fix causes a
>regression that creates the "ALL OK!" when it wasn't.
>
>Basically, I rather have stable be more bug compatible with the version
>it is based on with only critical fixes (things that will cause an
>oops) than to try to be bug compatible with mainline, as then we get
>into a state where things are a frankenstein of the stable base version
>and mainline. I could say, "Yeah this feature works better on this
>4.x version of the kernel" and not worry about "4.x.y" versions having
>it better.
This is how things used to work, right? Look at Red Hat kernels, for
example: they'd stick with a kernel for tens of years, doing the tiniest
fixes only when customers complained, and encouraging users to upgrade
only when the kernel would go EoL, and when customers couldn't do that
because they were too locked in to that kernel version.
Red Hat still supports 2.6.9.
I thought we agreed that this is bad? We wanted users to be closer to
mainline, and we can't do it without bringing -stable closer to mainline
as well.
On Mon, 16 Apr 2018 17:42:38 +0000
Sasha Levin <[email protected]> wrote:
> >> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit?id=a918d2bcea6aab6e671bfb0901cbecc3cf68fca1
> >
> >Sure. Even if it has a subtle regression, that's a critical bug being
> >fixed.
>
> This was later reverted, in -stable:
>
> """
> Commit d63c7dd5bcb9 ("ipr: Fix out-of-bounds null overwrite") removed
> the end of line handling when storing the update_fw sysfs attribute.
> This changed the userspace API because it started refusing writes
> terminated by a line feed, which broke the update tools we already have.
> """
I hope it wasn't reverted. It did fix a critical bug.
The problem is that it only fixed a critical bug, but didn't go far
enough to keep the bug fix from breaking API. I see this as two bugs
being fixed. Even though the second bug was "caused" by the first fix,
the first fix was still necessary. The second bug was relying on broken
code. This hasn't changed my position on that patch being
backported. I would not even mark this as a regression. I would say the
original code was broken too much, and fixing part of it just
revealed another broken part.
>
> >> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit?id=b1999fa6e8145305a6c8bda30ea20783717708e6
> >
> >I would consider unlocking a mutex that one didn't lock a critical bug,
> >so yes.
> >
> >Again, things that deal with locking or buffer overflows, I would take
> >the fix, as those are critical. But other behavior issues where it's
> >not critical, I would leave be unless told further by someone else.
>
> This too, was reverted:
>
> """
> It causes run-time breakage in the 4.4-stable tree and more patches are
> needed to be applied first before this one in order to resolve the
> issue.
> """
It wasn't reverted in mainline. Looks like there were some subtle issues
with the different stable versions. Perhaps the "Fixes:" tag was wrong.
>
> This is how fun it is reviewing AUTOSEL commits :)
>
> Even the small "trivial", "obviously correct" patches have room for
> errors for various reasons.
And that's fine. Any code written can have bugs in it. That's just a
given. Which pushes for why we should be extremely picky about what we
backport.
>
> Also note that all of these patches were tagged for stable and actually
> ended up in at least one tree.
>
> This is why I'm basing a lot of my decision making on the rejection rate.
> If the AUTOSEL process does the job well enough as the "regular"
> process did before, why push it back?
Because I think we are adding too many patches to stable. And
automating it may just make things worse. Your examples above back my
argument more than they refute it. If people can't determine what is
"obviously correct" how is automation going to do any better?
-- Steve
On Mon, Apr 16, 2018 at 11:26 AM, Steven Rostedt <[email protected]> wrote:
>
> The problem is that it only fixed a critical bug, but didn't go far
> enough to keep the bug fix from breaking API.
An API breakage that gets noticed *is* a critical bug.
You can't call something else critical and then say "but it broken API".
Seriously. Why do I even have to mention this?
If you break user workflows, NOTHING ELSE MATTERS.
Even security is secondary to "people don't use the end result,
because it doesn't work for them any more".
Really.
Stop with this idiotic "only API". Breaking user space is just about
the only thing that really matters. The rest is "small matter of
implementation".
Linus
On Mon, 16 Apr 2018 18:17:17 +0000
Sasha Levin <[email protected]> wrote:
> I thought we agreed that this is bad? We wanted users to be closer to
> mainline, and we can't do it without bringing -stable closer to mainline
> as well.
I guess the question comes down to, what do the users of stable kernels
want? For my machines, I always stay one or two releases behind
mainline. Right now my kernels are on 4.15.x, and will probably jump to
4.16.x the next time I upgrade my machines. I'm fine with something
breaking every so often as long as it's not data corruption (although I
have lots of backups of my systems in case that happens, just a PITA to
fix it). I only hit bugs on these boxes probably once a year at most in
doing so. But I mostly do what other kernel developers do and that
means the bugs I would mostly hit, other developers hit before their
code is released.
Thus, if stable users are fine with being regression compatible with
mainline, then I'm fine with it too.
-- Steve
On Mon, Apr 16, 2018 at 02:26:53PM -0400, Steven Rostedt wrote:
>On Mon, 16 Apr 2018 17:42:38 +0000
>Sasha Levin <[email protected]> wrote:
>> Also note that all of these patches were tagged for stable and actually
>> ended up in at least one tree.
>>
>> This is why I'm basing a lot of my decision making on the rejection rate.
>> If the AUTOSEL process does the job well enough as the "regular"
>> process did before, why push it back?
>
>Because I think we are adding too many patches to stable. And
>automating it may just make things worse. Your examples above back my
>argument more than they refute it. If people can't determine what is
>"obviously correct" how is automation going to do any better?
I don't understand that statement; it sounds illogical to me.
If I were to tell you that I have a crack team of 10 kernel hackers who
dig through all mainline commits to find commits that should be
backported to stable, and they do it with fewer mistakes than
authors/maintainers make when they tag their own commits, would I get the
same level of objection?
On the correctness side, I have another effort to improve the quality of
testing -stable commits get, but this is somewhat unrelated to the whole
automatic selection process.
On Mon, 16 Apr 2018 11:30:06 -0700
Linus Torvalds <[email protected]> wrote:
> On Mon, Apr 16, 2018 at 11:26 AM, Steven Rostedt <[email protected]> wrote:
> >
> > The problem is that it only fixed a critical bug, but didn't go far
> > enough to keep the bug fix from breaking API.
>
> An API breakage that gets noticed *is* a critical bug.
I totally agree with you. You misunderstood what I wrote.
I said there were two bugs here. The first bug was a possible bad
memory access. That needed to be fixed. The problem was that fixing
it broke the API. But that's because the original code was broken,
and the API behavior relied on that broken code to work right. I never said the second
bug fix should not have been backported. I even said that the first bug
"didn't go far enough".
I hope the answer was not to revert the fix and put back the possible
bad memory access just to keep the API. I hope it was to backport the second bug
fix, which keeps the first fix but also fixes the API breakage.
Yes, an API breakage is something I would label as critical to be
backported.
-- Steve
On Mon, Apr 16, 2018 at 11:41 AM, Steven Rostedt <[email protected]> wrote:
>
>I never said the second
> bug fix should not have been backported. I even said that the first bug
> "didn't go far enough".
You're still not getting it.
The "didn't go far enough" means that the bug fix is *BUGGY*. It needs
to be reverted.
> I hope the answer was not to revert the fix and put back the possible
> bad memory access just to keep the API.
But that very much *IS* the answer. If there isn't a fix for the ABI
breakage, then the first bugfix needs to be reverted.
Really. There is no such thing as "but the fix was more important than
the bug it introduced".
This is why we started with the whole "actively revert things that
introduce regressions". Because people always kept claiming that "but
but I fixed a worse bug, and it's better to fix the worse bug even if
it then introduces another problem, because the other problem is
lesser".
NO.
We're better off making *no* progress, than making "unsteady progress".
Really. Seriously.
If you cannot fix a bug without introducing another one, don't do it.
Don't do kernel development.
The whole mentality you show is NOT ACCEPTABLE.
So the *only* answer is: "fix the bug _and_ keep the API". There is
no other choice.
The whole "I fixed one problem but introduced another" is not how we
work. You should damn well know that. There are no excuses.
And yes, sometimes that means jumping through hoops. But that's what
it takes to keep users happy.
Linus
On Mon, 16 Apr 2018 18:35:44 +0000
Sasha Levin <[email protected]> wrote:
> If I were to tell you that I have a crack team of 10 kernel hackers who
> dig through all mainline commits to find commits that should be
> backported to stable, and they do it with less mistakes than
> authors/maintainers make when they tag their own commits, would I get the
> same level of objection?
Probably ;-)
I've been struggling with my own stable tags, and been thinking that I
too suffer from tagging too much for stable, because there's code I
fix, and think "hmm, this could have some unwanted side effects". I'm
actually worried that my own fixes could cause an API breakage that I'm
unaware of.
What I'm saying is, I think we should start looking at fixes that fix
bugs we consider critical. Those being:
* off-by-one
* memory overflow
* locking mismatch
* API regressions
For my sub-system
* wrong data coming out
Which can be a critical issue. Wrong data is worse than no data. But
then, there are times a bug will produce no data, and considering
what it is, and how much of an effort it takes to fix it, I may or may
not label "no data" issues for stable. The cases where I enable
something with a bunch of parameters, and because of some mishandling
of the parameter it just screws up totally (where it's obvious that it
screwed up), I only mark those for stable if it doesn't require a
redesign of the code to fix it. There's been some cases where a
redesign was required, and I didn't mark it for stable.
The fixes for tracing that I don't usually tag for stable are the ones
where doing complex tracing simply doesn't work and produces no data or
errors incorrectly. Depending on how complex the fix is, I mark it for
stable; otherwise, I think the fix is more likely to break something
else that is more common than this hardly ever used feature.
The fact that nobody noticed, or hasn't complained about it usually
plays a lot in that decision. If someone complained to me about
breakage, I'm more likely to label it for stable. But if I discover it
myself, as I probably use the tracing system differently than others as
I wrote the code, then I don't usually mark it.
-- Steve
On Mon, Apr 16, 2018 at 11:52 AM, Linus Torvalds
<[email protected]> wrote:
>
> We're better off making *no* progress, than making "unsteady progress".
>
> Really. Seriously.
Side note: the original impetus for this was our suspend/resume mess.
It went on for *YEARS*, and it was absolutely chock-full of exactly
this "I fixed the worse problem, and introduced another one".
There's a reason I'm a hardliner on the regression issue. We've been
there, we've done that.
The whole "two steps forwards, one step back" mentality may work
really well if you're doing line dancing.
BUT WE ARE NOT LINE DANCING. We do kernel development.
Absolutely NOTHING else is more important than the "no regressions"
rule. NOTHING.
And just since everybody always tries to weasel about this: the only
regressions that matter are the ones that people notice in real loads.
So if you write a test-case that tests that "system call number 345
returns -ENOSYS", and we add a new system call, and you say "hey, you
regressed my system call test", that's not a regression. That's just a
"change in behavior".
It becomes a regression only if there are people using tools or
workflows that actually depend on it. So if it turns out (for example)
that Firefox had some really odd bug, and the intent was to do system
call 123, but a typo had caused it to do system call 345 instead, and
another bug meant that the end result worked fine as long as system
call 345 returned ENOSYS, then the addition of that system call
actually does turn into a regression.
See? Even adding a system call can be a regression, because what
matters is not behavior per se, but users _depending_ on some specific
behavior.
Linus
On Mon, Apr 16, 2018 at 11:52 AM, Linus Torvalds
<[email protected]> wrote:
>
> And yes, sometimes that means jumping through hoops. But that's what
> it takes to keep users happy.
The example of "jumping through hoops" I tend to give is the pipe "packet mode".
The kernel actually has a magic pipe mode for "packet buffers", so
that if you do two small writes, the other side of the pipe can
actually say "I want to read not the pipe buffers, but the individual
packets".
Why would we do that? That's not how pipes work! If you want to send
and receive messages, use a socket, for chrissake! A pipe is just a
stream of bytes - as the elder Gods of Unix decreed!
But it turns out that we added the notion of a packetized pipe writer,
and you can actually access it in user space by setting the O_DIRECT
flag (so after you do the "pipe()" system call, do a fcntl(SETFL,
O_DIRECT) on it).
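(For concreteness, a minimal userspace sketch of that packetized-pipe
behaviour; it assumes a kernel new enough to have the packet mode Linus
describes here, and most error handling is omitted:)

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fds[2];
	char buf[64];
	ssize_t n;

	if (pipe(fds) < 0)
		return 1;

	/* Switch the write end of the pipe into packet mode. */
	if (fcntl(fds[1], F_SETFL, O_DIRECT) < 0)
		perror("fcntl(F_SETFL, O_DIRECT)");

	/* Two small writes become two distinct packets... */
	(void)write(fds[1], "first", 5);
	(void)write(fds[1], "second", 6);

	/* ...and each read() returns at most one packet, not a byte stream. */
	n = read(fds[0], buf, sizeof(buf));
	printf("first read returned %zd bytes\n", n);   /* 5, not 11 */
	n = read(fds[0], buf, sizeof(buf));
	printf("second read returned %zd bytes\n", n);  /* 6 */

	return 0;
}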
Absolutely nobody uses it, afaik, because you'd be crazy to do it.
What would be the point? sockets work, and are portable.
So why do we have it?
We have it for one single user: autofs. The way you talk to the kernel
side of things is with a magic pipe that you get at mount time, and
the user-space autofs daemon might be 32-bit even if the kernel is
64-bit, and we had a horrible ABI mistake which meant that sending the
binary data over that pipe had different format for a 32-bit autofsd.
And systemd and automount had different workarounds (or lack
thereof) for the ABI issue.
So the whole "ok, allow people to send packets, and read them as
packets" came about purely because the ABI was broken, and there was
no other way to fix things.
See 64f371bc3107 ("autofs: make the autofsv5 packet file descriptor
use a packetized pipe") for (some) of the sad details.
That commit kind of makes it sound like it's a nice solution that just
takes advantage of the packetized pipes. Nice and clean fix, right?
No. The packetized pipes exist in the first place _purely_ to make
that nice solution possible. It's literally just a "this allows us to
be ABI compatible with two different users that were confused about
the compatibility issue we had due to a broken binary structure format
across x86-32 and x86-64".
See commit 9883035ae7ed ("pipes: add a "packetized pipe" mode for
writing") for the other side of that.
All this just because _we_ made a mistake in our ABI, and then real
life users started using that mistake, including one user that
literally *knew* about the mistake and worked around it and started
depending on the fact t hat our compatibility mode was buggy because
of it.
So it was a bug in our ABI. But since people depended on the bug, the
bug was a feature, and needed to be kept around. In this case by
adding a totally new and unrelated feature, and using that new feature
to make those old users happy. The whole "set packetized mode on the
autofs pipe" is all done transparently inside the kernel, and
automount never knew or needed to care that we started to use a
packetized pipe to get the data it sent us in the chunks it intended.
Linus
On Mon, 16 Apr 2018 11:52:48 -0700
Linus Torvalds <[email protected]> wrote:
> On Mon, Apr 16, 2018 at 11:41 AM, Steven Rostedt <[email protected]> wrote:
> >
> >I never said the second
> > bug fix should not have been backported. I even said that the first bug
> > "didn't go far enough".
>
> You're still not getting it.
>
> The "didn't go far enough" means that the bug fix is *BUGGY*. It needs
> to be reverted.
It wasn't reverted. Look at the code in question.
Commit d63c7dd5bcb
+++ b/drivers/scsi/ipr.c
@@ -4003,13 +4003,12 @@ static ssize_t ipr_store_update_fw(struct device *dev,
struct ipr_sglist *sglist;
char fname[100];
char *src;
- int len, result, dnld_size;
+ int result, dnld_size;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
- len = snprintf(fname, 99, "%s", buf);
- fname[len-1] = '\0';
+ snprintf(fname, sizeof(fname), "%s", buf);
if (request_firmware(&fw_entry, fname, &ioa_cfg->pdev->dev)) {
dev_err(&ioa_cfg->pdev->dev, "Firmware file %s not found\n", fname);
The bug is that len returned by snprintf() can be much larger than 100.
That fname[len-1] = '\0' can allow a user to decide where to write
zeros.
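(To make that concrete, a minimal userspace sketch of the snprintf()
return-value pitfall; this is an illustration, not the driver code
itself:)

#include <stdio.h>
#include <string.h>

int main(void)
{
	char fname[100];
	char buf[300];          /* simulate a sysfs write longer than fname */
	int len;

	memset(buf, 'A', sizeof(buf) - 1);
	buf[sizeof(buf) - 1] = '\0';

	/*
	 * snprintf() returns the length the formatted string *would* have
	 * had, not how many bytes were actually stored.  Here len is 299,
	 * even though only 98 characters plus a NUL fit into fname.
	 */
	len = snprintf(fname, 99, "%s", buf);
	printf("len = %d\n", len);

	/*
	 * The buggy pattern: fname[len - 1] = '\0' would write far outside
	 * the 100-byte array -- the out-of-bounds null overwrite that the
	 * first ipr fix removed.  (Left commented out on purpose.)
	 */
	/* fname[len - 1] = '\0'; */

	return 0;
}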
That patch never got reverted in mainline. It was fixed with this:
Commit 21b81716c6bf
--- a/drivers/scsi/ipr.c
+++ b/drivers/scsi/ipr.c
@@ -4002,6 +4002,7 @@ static ssize_t ipr_store_update_fw(struct device *dev,
struct ipr_sglist *sglist;
char fname[100];
char *src;
+ char *endline;
int result, dnld_size;
if (!capable(CAP_SYS_ADMIN))
@@ -4009,6 +4010,10 @@ static ssize_t ipr_store_update_fw(struct device *dev,
snprintf(fname, sizeof(fname), "%s", buf);
+ endline = strchr(fname, '\n');
+ if (endline)
+ *endline = '\0';
+
if (request_firmware(&fw_entry, fname, &ioa_cfg->pdev->dev)) {
dev_err(&ioa_cfg->pdev->dev, "Firmware file %s not found\n", fname);
return -EIO;
>
> > I hope the answer was not to revert the fix and put back the possible
> > bad memory access just to keep the API.
>
> But that very must *IS* the answer. If there isn't a fix for the ABI
> breakage, then the first bugfix needs to be reverted.
It wasn't reverted and that was my point. It just wasn't a complete
fix. And I'm saying that once the API breakage became apparent, the
second fix should have been backported as well.
I'm not saying that we should allow API breakage to fix a critical bug.
I'm saying that the API breakage was really a secondary bug that needed
to be addressed. My point is the first fix was NOT reverted!
>
> Really. There is no such thing as "but the fix was more important than
> the bug it introduced".
I'm not saying that.
>
> This is why we started with the whole "actively revert things that
> introduce regressions". Because people always kept claiming that "but
> but I fixed a worse bug, and it's better to fix the worse bug even if
> it then introduces another problem, because the other problem is
> lesser".
>
> NO.
Right, but the fix to the API was also trivial. I don't understand why
you are arguing with me. I agree with you. I'm talking about this
specific instance. Where a bug was fixed, and the API breakage was
another fix that needed to be backported.
Are you saying if code could allow userspace to write zeros anywhere in
memory, that we should keep it to allow API compatibility?
>
> We're better off making *no* progress, than making "unsteady progress".
>
> Really. Seriously.
>
> If you cannot fix a bug without introducing another one, don't do it.
> Don't do kernel development.
Um, I think that's impossible, as the example shows. Not many people
would have caught that the original fix would cause another bug. That
requirement would pretty much keep everyone from ever doing any kernel
development.
>
> The whole mentality you show is NOT ACCEPTABLE.
>
> So the *only* answer is: "fix the bug _and_ keep the API". There is
> no other choice.
I agree. But that wasn't the question.
>
> The whole "I fixed one problem but introduced another" is not how we
> work. You should damn well know that. There are no excuses.
>
> And yes, sometimes that means jumping through hoops. But that's what
> it takes to keep users happy.
I'm talking about the given example of a simple memory bug that caused
a very subtle breakage of API, which had another trivial fix that
should be backported. I'm not sure that's what you were talking about.
-- Steve
On Mon, Apr 16, 2018 at 12:24 PM, Steven Rostedt <[email protected]> wrote:
>
> Right, but the fix to the API was also trivial. I don't understand why
> you are arguing with me. I agree with you. I'm talking about this
> specific instance. Where a bug was fixed, and the API breakage was
> another fix that needed to be backported.
Fair enough. Were you there when the report of breakage came in?
Because *my* argument is that reverting something that causes problems
is simply *never* the wrong answer.
If you know of the fix, fine. But clearly people DID NOT KNOW. So
reverting was the right choice.
Linus
On Mon, 16 Apr 2018 12:00:08 -0700
Linus Torvalds <[email protected]> wrote:
> > On Mon, Apr 16, 2018 at 11:52 AM, Linus Torvalds
> > <[email protected]> wrote:
> > >
> > > We're better off making *no* progress, than making "unsteady progress".
> > >
> > > Really. Seriously.
[ me inserted: ]
> On Mon, 16 Apr 2018 3:24:29 PM, Steven Rostedt <[email protected]> wrote:
>
> > I'm talking about the given example of a simple memory bug that caused
> > a very subtle breakage of API, which had another trivial fix that
> > should be backported. I'm not sure that's what you were talking about.
>
> Side note: the original impetus for this was our suspend/resume mess.
> It went on for *YEARS*, and it was absolutely chock-full of exactly
> this "I fixed the worse problem, and introduced another one".
What you are talking about here isn't what I was talking about above ;-)
-- Steve
On Mon, Apr 16, 2018 at 12:28 PM, Linus Torvalds
<[email protected]> wrote:
>
> If you know of the fix, fine. But clearly people DID NOT KNOW. So
> reverting was the right choice.
.. and this is obviously different in stable and in mainline.
For example, I start reverting very aggressively only at the end of a
release. If I get a bisected bug report in the last week, I generally
revert without much argument, unless the author of the patch has an
immediate fix.
In contrast, during the merge window and the early rc's, I'm perfectly
happy to say "ok, let's see if somebody can fix this" and not really
consider a revert.
But the -stable tree?
Seriously, what do you expect them to do if they get a report that a
commit they added to the stable tree regresses?
"Revert first, ask questions later" is definitely a very sane model there.
Linus
On Mon, 16 Apr 2018 12:28:21 -0700
Linus Torvalds <[email protected]> wrote:
> On Mon, Apr 16, 2018 at 12:24 PM, Steven Rostedt <[email protected]> wrote:
> >
> > Right, but the fix to the API was also trivial. I don't understand why
> > you are arguing with me. I agree with you. I'm talking about this
> > specific instance. Where a bug was fixed, and the API breakage was
> > another fix that needed to be backported.
>
> Fair enough. Were you there when the report of breakage came in?
No I wasn't.
>
> Because *my* argument is that reverting something that causes problems
> is simply *never* the wrong answer.
>
> If you know of the fix, fine. But clearly people DID NOT KNOW. So
> reverting was the right choice.
But I don't see in the git history that this was ever reverted. My reply
saying that "I hope it wasn't reverted", was a response for it being
reverted in stable, not mainline. Considering that the original bug
would allow userspace to write zeros anywhere in memory, I would have
definitely worked on finding why the API breakage happened and fixing
it properly before putting such a large hole back into the kernel.
I'm assuming that may have been what happened because the commit was
never reverted in your tree, and if I was responsible for that code, I
would be up all night looking for an API fix to make sure the original
fix isn't reverted.
-- Steve
On Mon, Apr 16, 2018 at 12:38 PM, Steven Rostedt <[email protected]> wrote:
>
> But I don't see in the git history that this was ever reverted. My reply
> saying that "I hope it wasn't reverted", was a response for it being
> reverted in stable, not mainline.
See my other email.
If you're a stable maintainer, and you get a report of a commit that
causes problems, your first reaction probably really should just be
"revert it".
You can always re-apply it later, but a patch that causes problems is
absolutely very much suspect, and automatically should make any stable
maintainer go "that needs much more analysis".
Sure, hopefully automation finds the fix too (ie commit 21b81716c6bf
"ipr: Fix regression when loading firmware") in mainline.
It did have the proper "fixes" tag, so it should hopefully have been
easy to find by the automation that stable people use.
But at the same time, I still maintain that "just revert it" is
rather likely the right solution for stable. If it had a bug once,
maybe it shouldn't have been applied in the first place.
The author can then get notified by the other stable automation, and
at that point argue for "yeah, it was buggy, but together with this
other fix it's really important".
But even when that is the case, I really don't see that the author
should complain about it being reverted. Because it's *such* a
no-brainer in stable.
Linus
On Mon, 16 Apr 2018 12:31:09 -0700
Linus Torvalds <[email protected]> wrote:
>
> But the -stable tree?
>
> Seriously, what do you expect them to do if they get a report that a
> commit they added to the stable tree regresses?
>
> "Revert first, ask questions later" is definitely a very sane model there.
The topic of our discussion is what to backport, and how likely it is
to cause regressions. I'm arguing that the bar for backporting
should be raised, and that only "critical" fixes should be backported.
Sasha pointed this bug fix as an example, and asked me if I would
backport it under my conditions. I said yes. He then said "it was
reverted", pointing me to the commit that fixed it. That confused
me. When I looked further, I noticed that it wasn't reverted, and since
he pointed me to the API fix, I said "I hope it wasn't reverted"
meaning I hope they backported the obvious API fix and didn't just
revert the original fix.
-- Steve
On Mon, 16 Apr 2018 12:55:46 -0700
Linus Torvalds <[email protected]> wrote:
> On Mon, Apr 16, 2018 at 12:38 PM, Steven Rostedt <[email protected]> wrote:
> >
> > But I don't see in the git history that this was ever reverted. My reply
> > saying that "I hope it wasn't reverted", was a response for it being
> > reverted in stable, not mainline.
>
> See my other email.
Already replied.
>
> If you're a stable maintainer, and you get a report of a commit that
> causes problems, your first reaction probably really should just be
> "revert it".
>
> You can always re-apply it later, but a patch that causes problems is
> absolutely very much suspect, and automatically should make any stable
> maintainer go "that needs much more analysis".
>
> Sure, hopefully automation finds the fix too (ie commit 21b81716c6bf
> "ipr: Fix regression when loading firmware") in mainline.
>
> It did have the proper "fixes" tag, so it should hopefully have been
> easy to find by the automation that stable people use.
>
> But at the same time, I still maintain that "just revert it" is
> rather likely the right solution for stable. If it had a bug once,
> maybe it shouldn't have been applied in the first place.
>
> The author can then get notified by the other stable automation, and
> at that point argue for "yeah, it was buggy, but together with this
> other fix it's really important".
>
> But even when that is the case, I really don't see that the author
> should complain about it being reverted. Because it's *such* a
> no-brainer in stable.
But this is going way off topic to what we were discussing. The
discussion is about what gets backported. Is automating the process
going to make stable better? Or is it likely to add more regressions.
Sasha's response has been that his automated process has the same rate
of regressions as what gets tagged by authors. My argument is that
perhaps authors should tag less to stable.
-- Steve
On Mon, Apr 16, 2018 at 1:02 PM, Steven Rostedt <[email protected]> wrote:
>
> But this is going way off topic to what we were discussing. The
> discussion is about what gets backported. Is automating the process
> going to make stable better? Or is it likely to add more regressions.
>
> Sasha's response has been that his automated process has the same rate
> of regressions as what gets tagged by authors. My argument is that
> perhaps authors should tag less to stable.
The ones who should matter most for that discussion are the distros,
since they are the actual users of stable (as well as the people doing
the work, of course - ie Sasha and Greg and the rest of the stable
gang).
And I suspect that they actually do want all the noise, and all the
stuff that isn't "critical". That's often the _easy_ choice. It's the
stuff that I suspect the stable maintainers go "this I don't even have
to think about", because it's a new driver ID or something.
Because the bulk of stable tends to be driver updates, afaik. Which
distros very much tend to want.
Will developers think that their patches matter so much that they
should go to stable? Yes they will. Will they overtag as a result?
Probably. But the reverse likely also happens, where people simply
don't think about stable at all, and just want to fix a bug.
In many ways "Fixes" is likely a better thing to check for in stable
backports, but that doesn't always exist either.
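(For reference, the "Fixes" convention being discussed is a trailer in
the commit message; the example below is reconstructed for illustration
from the ipr commits cited earlier in this thread, not quoted verbatim:)

"""
ipr: Fix regression when loading firmware
...
Fixes: d63c7dd5bcb9 ("ipr: Fix out-of-bounds null overwrite")
Cc: <[email protected]>
"""

A "Fixes:" line names the commit that introduced the bug, which is what
lets stable maintainers (or their automation) find follow-up fixes for
patches they have already backported.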
And just judging by the amount of stable email I get - and by how
excited _I_ would be about stable work, I think "automated process" is
simply not an option. It's a _requirement_. You'd go completely crazy
if you didn't automate 99% of all the stable work.
So can you trust the "Cc: stable" as being perfect? Hell no. But
what's your alternative? Manually selecting things for stable? Asking
the developers separately?
Because "criticality" definitely isn't what determines it. If it was,
we'd never add driver ID's etc to stable - they're clearly not
"critical".
Yet it feels like sometimes those driver things are the _bulk_
of it, and it is usually fairly safe (not quite as obviously safe as
you'd think, because a driver ID addition has occasionally meant not
just "now it's supported", but instead "now the generic driver doesn't
trigger for it any more", so it can actually break things).
So I think - and _hope_ - that 99% of stable should be the
non-critical stuff that people don't even need to think about.
The critical stuff is hopefully a tiny tiny percentage.
Linus
On Mon, 16 Apr 2018, Linus Torvalds wrote:
> The ones who should matter most for that discussion is the distros,
> since they are the actual users of stable (as well as the people doing
> the work, of course - ie Sasha and Greg and the rest of the stable
> gang).
>
> And I suspect that they actually do want all the noise, and all the
> stuff that isn't "critical". That's often the _easy_ choice. It's the
> stuff that I suspect the stable maintainers go "this I don't even have
> to think about", because it's a new driver ID or something.
So I am a maintainer of the SUSE enterprise kernel, and I can tell you I
personally really don't want all the noise, simply because
(a) no one asked us to distribute it (if they did, we would do so)
(b) the risk of regressions
We've always been very cautious about what is coming from stable, and
actually filtering out patches we actively don't want for one reason or
another.
And yes, there is also a history of regressions caused by stable updates
that were painful for us ... I brought this up multiple times at
ksummit-discuss@ over the past years.
So with the upcoming release, we've actually abandoned stable and are
relying more on auto-processing the Fixes: tag.
That is not perfect either way (it doesn't cover everything, and we can
miss complex semantic dependencies between patches even though they
"apply"), but we didn't base our decision this time on aligning our
schedule with stable, and so far we don't seem to be suffering. And we
have a much better overview of / control over what is landing in our enterprise
tree (of course this is all shepherded by machinery around processing the
Fixes: tag, which then has to be *actively* acted upon, depending
on a case-by-case human assessment of how critical it actually is).
> Because the bulk of stable tends to be driver updates, afaik. Which
> distros very much tend to want.
For "community" distros (like Fedora, openSUSE), perhaps, yeah.
For "enterprise" kernels, quite frankly, we much rather get the driver
updates/backports from the respective HW vedndors we're cooperating with,
as they have actually tested and verified the backport on the metal.
> The critical stuff is hopefully a tiny tiny percentage.
But quite frankly, that's the only thing we as a distro *really* want -- to
save our users from hitting the critical issues with all the consequences
(data loss, unbootable systems, etc). All the rest we can easily handle on
a reactive basis, which heavily depends on the userbase spectrum (and
that's probably completely different for each -stable tree consumer
anyway).
This is my POV as a distro kernel maintainer, but the mileage of others
can / will definitely vary, of course.
Thanks,
--
Jiri Kosina
SUSE Labs
On Mon, Apr 16, 2018 at 10:17:17PM +0200, Jiri Kosina wrote:
>On Mon, 16 Apr 2018, Sasha Levin wrote:
>
>> So if a user is operating a nuclear power plant, and has 2 leds: green
>> one that says "All OK!" and a red one saying "NUCLEAR MELTDOWN!", and
>> once in a blue moon a race condition is causing the red one to go on and
>> cause panic in the little province he lives in, we should tell that user
>> to fuck off?
>>
>> LEDs may not be critical for you, but they can be critical for someone
>> else. Think of all the different users we have and the wildly different
>> ways they use the kernel.
>
>I am pretty sure that for almost every fix there is a person on a planet
>that'd rate it "critical". We can't really use this as an argument for
>inclusion of code into -stable, as that'd mean that -stable and Linus'
So I think that Linus's claim that users come first applies here as
well. If there's a user that cares about a particular feature being
broken, then we go ahead and fix his bug rather than ignoring him.
>tree would have to be basically the same.
Basically the same minus all new features/subsystems/arch/etc. But yes,
ideally we'd want all bugfixes that go in mainline. Why not?
Instead of keeping bug fixes out, we need to work on improving our
testing story. Instead of ignoring that "person that'd rate it critical"
we should add his usecase into our testing matrix.
On Mon, 16 Apr 2018, Sasha Levin wrote:
> So I think that Linus's claim that users come first applies here as
> well. If there's a user that cares about a particular feature being
> broken, then we go ahead and fix his bug rather than ignoring him.
So one extreme is fixing -stable *iff* users actually do report an issue.
The other extreme is backporting everything that looks like a
potential fix of "something" (according to some arbitrary metric),
pro-actively.
The former violates the "users first" rule, the latter has a very, very
high risk of regressions.
So this whole debate is about finding a compromise.
My gut feeling always was that the statement in
Documentation/process/stable-kernel-rules.rst
is very reasonable, but making the process way more "aggressive" when
backporting patches is breaking much of its original spirit for me.
--
Jiri Kosina
SUSE Labs
On Mon, 16 Apr 2018, Sasha Levin wrote:
> So if a user is operating a nuclear power plant, and has 2 leds: green
> one that says "All OK!" and a red one saying "NUCLEAR MELTDOWN!", and
> once in a blue moon a race condition is causing the red one to go on and
> cause panic in the little province he lives in, we should tell that user
> to fuck off?
>
> LEDs may not be critical for you, but they can be critical for someone
> else. Think of all the different users we have and the wildly different
> ways they use the kernel.
I am pretty sure that for almost every fix there is a person on a planet
that'd rate it "critical". We can't really use this as an argument for
inclusion of code into -stable, as that'd mean that -stable and Linus'
tree would have to be basically the same.
--
Jiri Kosina
SUSE Labs
On Mon, Apr 16, 2018 at 10:43:28PM +0200, Jiri Kosina wrote:
>On Mon, 16 Apr 2018, Sasha Levin wrote:
>
>> So I think that Linus's claim that users come first applies here as
>> well. If there's a user that cares about a particular feature being
>> broken, then we go ahead and fix his bug rather than ignoring him.
>
>So one extreme is fixing -stable *iff* users actually do report an issue.
>
>The other extreme is backporting everything that looks like a
>potential fix of "something" (according to some arbitrary metric),
>pro-actively.
>
>The former violates the "users first" rule, the latter has a very, very
>high risk of regressions.
>
>So this whole debate is about finding a compromise.
>
>My gut feeling always was that the statement in
>
> Documentation/process/stable-kernel-rules.rst
>
>is very reasonable, but making the process way more "aggressive" when
>backporting patches is breaking much of its original spirit for me.
I agree that as an enterprise distro taking everything from -stable
isn't the best idea. Ideally you'd want to be close to the first
extreme you've mentioned and only take commits if customers are asking
you to do so.
I think that the rule we're trying to agree upon is the "It must fix
a real bug that bothers people".
I think that we can agree that it's impossible to expect every single
Linux user to go on LKML and complain about a bug he encountered, so the
rule quickly becomes "It must fix a real bug that can bother people".
My "aggressiveness" comes from the whole "bother" part: it doesn't have
to be critical, it doesn't have to cause data corruption, it doesn't
have to be a security issue. It's enough that the bug actually affects a
user in a way he didn't expect it to (if a user doesn't have
expectations, it would fall under the "This could be a problem..."
exception).
We can go into a discussion about what exactly "bothering" is, but on
the flip side, the whole -stable tag is just a way for folks to indicate
they want a given patch reviewed for stable, it's not actually a
guarantee of whether the patch will go into -stable or not.
On Mon, 16 Apr 2018 13:17:24 -0700
Linus Torvalds <[email protected]> wrote:
> On Mon, Apr 16, 2018 at 1:02 PM, Steven Rostedt <[email protected]> wrote:
> >
> > But this is going way off topic to what we were discussing. The
> > discussion is about what gets backported. Is automating the process
> > going to make stable better? Or is it likely to add more regressions.
> >
> > Sasha's response has been that his automated process has the same rate
> > of regressions as what gets tagged by authors. My argument is that
> > perhaps authors should tag less to stable.
>
> The ones who should matter most for that discussion is the distros,
> since they are the actual users of stable (as well as the people doing
> the work, of course - ie Sasha and Greg and the rest of the stable
> gang).
That was actually my final conclusion before we started our
discussion ;-)
http://lkml.kernel.org/r/[email protected]
>
> And I suspect that they actually do want all the noise, and all the
> stuff that isn't "critical". That's often the _easy_ choice. It's the
> stuff that I suspect the stable maintainers go "this I don't even have
> to think about", because it's a new driver ID or something.
Although Red Hat doesn't base off of the stable kernel. At least it
didn't when I was there. They may look at the stable kernel, but they
make their own decisions.
If we want the distros to use stable as the base, it should be the
least common factor among them. Otherwise, if stable includes commits
that a distro would rather not backport, then they won't use stable.
>
> Because the bulk of stable tends to be driver updates, afaik. Which
> distros very much tend to want.
>
> Will developers think that their patches matter so much that they
> should go to stable? Yes they will. Will they overtag as a result?
> Probably. But the reverse likely also happens, where people simply
> don't think about stable at all, and just want to fix a bug.
>
> In many ways "Fixes" is likely a better thing to check for in stable
> backports, but that doesn't always exist either.
>
> And just judging by the amount of stable email I get - and by how
> excited _I_ would be about stable work, I think "automated process" is
> simply not an option. It's a _requirement_. You'd go completely crazy
> if you didn't automate 99% of all the stable work.
>
> So can you trust the "Cc: stable" as being perfect? Hell no. But
> what's your alternative? Manually selecting things for stable? Asking
> the developers separately?
>
> Because "criticality" definitely isn't what determines it. If it was,
> we'd never add driver ID's etc to stable - they're clearly not
> "critical".
True. But I believe the driver IDs were given the "exception".
>
> Yet it feels like sometimes those driver things are the _bulk_
> of it, and they are usually fairly safe (not quite as obviously safe as
> you'd think, because a driver ID addition has occasionally meant not
> just "now it's supported", but instead "now the generic driver doesn't
> trigger for it any more", so it can actually break things).
>
> So I think - and _hope_ - that 99% of stable should be the
> non-critical stuff that people don't even need to think about.
>
> The critical stuff is hopefully a tiny tiny percentage.
Well, I'm not sure that's really the case.
$ git log --oneline v4.14.33..v4.14.34 | head -20
ffebeb0d7c37 Linux 4.14.34
fdae5b620566 net/mlx4_core: Fix memory leak while delete slave's resources
9fdeb33e1913 vhost_net: add missing lock nesting notation
8c316b625705 team: move dev_mc_sync after master_upper_dev_link in team_port_add
233ba28e1862 route: check sysctl_fib_multipath_use_neigh earlier than hash
2f8aa659d4c0 vhost: validate log when IOTLB is enabled
72b880f43990 net/mlx5e: Fix traffic being dropped on VF representor
9408bceb0649 net/mlx4_en: Fix mixed PFC and Global pause user control requests
477c73abf26a strparser: Fix sign of err codes
1c71bfe84deb net/sched: fix NULL dereference on the error path of tcf_skbmod_init()
a19024a3f343 net/sched: fix NULL dereference in the error path of tunnel_key_init()
e096c8bf4fb8 net/mlx5e: Sync netdev vxlan ports at open
baab1f0c4885 net/mlx5e: Don't override vport admin link state in switchdev mode
1ec7966ab7db ipv6: sr: fix seg6 encap performances with TSO enabled
e52a45bb392f nfp: use full 40 bits of the NSP buffer address
ddf79878f1e0 net/mlx5e: Fix memory usage issues in offloading TC flows
9282181c1cc5 net/mlx5e: Avoid using the ipv6 stub in the TC offload neigh update path
b9c6ddda3805 vti6: better validate user provided tunnel names
109dce20c6ed ip6_tunnel: better validate user provided tunnel names
72363c63b070 ip6_gre: better validate user provided tunnel names
The majority of those appear to be on the critical side.
-- Steve
On Mon, 16 Apr 2018, Sasha Levin wrote:
> I agree that as an enterprise distro taking everything from -stable
> isn't the best idea. Ideally you'd want to be close to the first
> extreme you've mentioned and only take commits if customers are asking
> you to do so.
>
> I think that the rule we're trying to agree upon is the "It must fix
> a real bug that bothers people".
>
> I think that we can agree that it's impossible to expect every single
> Linux user to go on LKML and complain about a bug he encountered, so the
> rule quickly becomes "It must fix a real bug that can bother people".
So is there a reason why stable couldn't become some hybrid-form union of
- really critical issues (data corruption, boot issues, severe security
issues) taken from bleeding edge upstream
- [reviewed] cherry-picks of functional fixes from major distro kernels
(based on that very -stable release), as that's apparently what people
are hitting in the real world with that particular kernel
?
Thanks,
--
Jiri Kosina
SUSE Labs
On Mon, Apr 16, 2018 at 11:28:44PM +0200, Jiri Kosina wrote:
> On Mon, 16 Apr 2018, Sasha Levin wrote:
>
> > I agree that as an enterprise distro taking everything from -stable
> > isn't the best idea. Ideally you'd want to be close to the first
> > extreme you've mentioned and only take commits if customers are asking
> > you to do so.
> >
> > I think that the rule we're trying to agree upon is the "It must fix
> > a real bug that bothers people".
> >
> > I think that we can agree that it's impossible to expect every single
> > Linux user to go on LKML and complain about a bug he encountered, so the
> > rule quickly becomes "It must fix a real bug that can bother people".
>
> So is there a reason why stable couldn't become some hybrid-form union of
>
> - really critical issues (data corruption, boot issues, severe security
> issues) taken from bleeding edge upstream
> - [reviewed] cherry-picks of functional fixes from major distro kernels
> (based on that very -stable release), as that's apparently what people
> are hitting in the real world with that particular kernel
It already is that :)
The problem Sasha is trying to solve here is that for many subsystems,
maintainers do not mark patches for stable at all. So real bugfixes
that do hit people are not getting to those kernels, which force the
distros to do extra work to triage a bug, dig through upstream kernels,
find and apply the patch.
By identifying the patches that should have been marked for stable,
based on the ways that the changelog text is written and the logic in
the patch itself, we circumvent that extra annoyance of users hitting
problems and complaining, or ignoring them and hoping they go away if
they reboot.
I've been doing this "by hand" for many years now, with no complaints so
far. Sasha has taken it to the next level as I don't scale and has
started to automate it using some really nice tools. That's all, this
isn't crazy new features being backported, it's just patches that are
obviously fixes being added to the stable tree.
Yes, sometimes those fixes need additional fixes, and that's fine,
normal stable-marked patches need that all the time. I don't see anyone
complaining about that, right?
So nothing "new" is happening here, EXCEPT we are actually starting to
get a better kernel-wide coverage for stable fixes, which we have not
had in the past. That's a good thing! The number of patches applied to
stable is still a very very very tiny % compared to mainline, so nothing
new is happening here.
Oh, and if you do want to complain about huge new features being
backported, look at the mess that Spectre and Meltdown has caused in the
stable trees. I don't see anyone complaining about those massive
changes :)
thanks,
greg k-h
On Mon, Apr 16, 2018 at 07:00:10PM +0200, Pavel Machek wrote:
> Hi!
>
> > >> Let me ask my wife (who is happy using Linux as a regular desktop user)
> > >> how comfortable she would be with triaging kernel bugs...
> > >
> > >That's really up to the distribution, not the main kernel stable. Does
> > >she download and compile the kernels herself? Does she use LEDs?
> > >
> > >The point is, stable is to keep what was working continued working.
> > >If we don't care about introducing a regression, and just want to keep
> > >regressions the same as mainline, why not just go to mainline? That way
> > >you can also get the new features? Mainline already has the mantra to
> > >not break user space. When I work on new features, I sometimes stumble
> > >on bugs with the current features. And some of those fixes require a
> > >rewrite. It was "good enough" before, but every so often could cause a
> > >bug that the new feature would trigger more often. Do we back port that
> > >rewrite? Do we backport fixes to old code that are more likely to be
> > >triggered by new features?
> > >
> > >Ideally, we should be working on getting to no regressions to stable.
> >
> > This is exactly what we're doing.
> >
> > If a fix for a bug in -stable introduces a different regression,
> > should we take it or not?
>
> If a fix for bug introduces regression, would you call it "obviously
> correct"?
I honestly can't believe you all are arguing about this. We backport
bugfixes to the stable tree. If those fixes also are buggy we either
apply the fix for that problem that ended up in Linus's tree, or we
revert the patch. If the fix is not in Linus's tree, sometimes we leave
the "bug" in stable for a bit to apply some pressure on the
developer/maintainer to get it fixed in Linus's tree (that's what I mean
by being "bug compatible".)
This is exactly what we have been doing for over a decade now, why are
people suddenly getting upset?
Oh, I know why, suddenly subsystems that never were taking the time to
mark patches for stable are getting patches backported and are getting
nervous. The simple way to stop that from happening is to PROPERLY MARK
PATCHES FOR STABLE IN THE FIRST PLACE!
If you do that, then, no "automated" patches will get selected as you
already handled them all. Or if there are some automated patches
picked, you can easily NAK them (like xfs does as they know better than
everyone else, and honestly, I trust them, and don't run xfs myself), or
do like what I do when it happens to me and go "hey, nice, I missed that
one!"
There, problem solved, if you do that, no more worrying by you at all,
and this thread can properly die.
ugh,
greg k-h
On Mon, Apr 16, 2018 at 06:54:51PM +0200, Pavel Machek wrote:
> On Mon 2018-04-16 16:45:16, Sasha Levin wrote:
> > On Mon, Apr 16, 2018 at 06:42:30PM +0200, Pavel Machek wrote:
> > >On Mon 2018-04-16 16:39:20, Sasha Levin wrote:
> > >> On Mon, Apr 16, 2018 at 06:28:50PM +0200, Pavel Machek wrote:
> > >> >
> > >> >> >> Is there a reason not to take LED fixes if they fix a bug and don't
> > >> >> >> cause a regression? Sure, we can draw some arbitrary line, maybe
> > >> >> >> designate some subsystems that are more "important" than others, but
> > >> >> >> what's the point?
> > >> >> >
> > >> >> >There's a tradeoff.
> > >> >> >
> > >> >> >You want to fix serious bugs in stable, and you really don't want
> > >> >> >regressions in stable. And ... stable not having 1000s of patches
> > >> >> >would be nice, too.
> > >> >>
> > >> >> I don't think we should use a number cap here, but rather look at the
> > >> >> regression rate: how many patches broke something?
> > >> >>
> > >> >> Since the rate we're seeing now with AUTOSEL is similar to what we were
> > >> >> seeing before AUTOSEL, what's the problem it's causing?
> > >> >
> > >> >Regression rate should not be the only criteria.
> > >> >
> > >> >More patches mean bigger chance customer's patches will have a
> > >> >conflict with something in -stable, for example.
> > >>
> > >> Out of tree patches can't be a consideration here. There are no
> > >> guarantees for out of tree code, ever.
> > >
> > >Out of tree code is not consideration for mainline, agreed. Stable
> > >should be different.
> >
> > This is a discussion we could have in the right forum, but FYI stable
> > doesn't even guarantee KABI compatibility between minor versions at this
> > point.
>
> Stable should be a useful base for distributions. They carry out of tree
> patches, and yes, you should try to make their lives easy.
How do you know I already don't do that?
But no, in the end, it's not my job to make their life easier if they
are off in their own corner never providing me feedback or help. For
those companies/distros/SoCs that do provide that feedback, I am glad to
work with them.
As proof of that, there are at least 3 "major" SoC vendors that have
been merging every one of the stable releases into their internal trees
for the past 6+ months now. I get reports when they do the merge and
test, and so far, we have only had 1 regression. And that regression
was because that SoC vendor forgot to upstream a patch that they had in
their internal tree (i.e. they fixed it a while ago but forgot to tell
anyone else, nothing we can do about that.)
So if you are a distro/company/whatever that takes stable releases, and
have run into problems, please let me know, and I will be glad to work
with you.
If you are not that, then please don't attempt to speak for them...
thanks,
greg k-h
On Tue 17-04-18 12:39:36, Greg KH wrote:
> On Mon, Apr 16, 2018 at 11:28:44PM +0200, Jiri Kosina wrote:
> > On Mon, 16 Apr 2018, Sasha Levin wrote:
> >
> > > I agree that as an enterprise distro taking everything from -stable
> > > isn't the best idea. Ideally you'd want to be close to the first
> > > extreme you've mentioned and only take commits if customers are asking
> > > you to do so.
> > >
> > > I think that the rule we're trying to agree upon is the "It must fix
> > > a real bug that bothers people".
> > >
> > > I think that we can agree that it's impossible to expect every single
> > > Linux user to go on LKML and complain about a bug he encountered, so the
> > > rule quickly becomes "It must fix a real bug that can bother people".
> >
> > So is there a reason why stable couldn't become some hybrid-form union of
> >
> > - really critical issues (data corruption, boot issues, severe security
> > issues) taken from bleeding edge upstream
> > - [reviewed] cherry-picks of functional fixes from major distro kernels
> > (based on that very -stable release), as that's apparently what people
> > are hitting in the real world with that particular kernel
>
> It already is that :)
>
> The problem Sasha is trying to solve here is that for many subsystems,
> maintainers do not mark patches for stable at all.
The way he is trying to do that is just wrong. Generate pressure on
those subsystems by referring to bug reports and unhappy users and I am
pretty sure they will try harder... You cannot solve the problem by
bypassing them without having a deep understanding of the specific
subsystem. Once you have it, just make sure you are part of the review
process and make sure to mark patches before they are merged.
> So real bugfixes
> that do hit people are not getting to those kernels, which force the
> distros to do extra work to triage a bug, dig through upstream kernels,
> find and apply the patch.
I would say that this is the primary role of the distro. To hide the
jungle of the upstream work and provide the additional layer of bug filtering
and forwarding them in the right direction.
> By identifying the patches that should have been marked for stable,
> based on the ways that the changelog text is written and the logic in
> the patch itself, we circumvent that extra annoyance of users hitting
> problems and complaining, or ignoring them and hoping they go away if
> they reboot.
Well, but this is a two-edged sword. You are not only backporting obvious
bug fixes but also pulling many patches out of the context they were
merged in, and double-checking that all the assumptions are still true is a
non-trivial task to do. I am still not convinced any script or AI can do
that right now.
> I've been doing this "by hand" for many years now, with no complaints so
> far.
Really? I remember quite some complaints about broken stable releases and
also many discussions at KS about how the current workflow doesn't really
work for some users (e.g. distributions).
> Sasha has taken it to the next level as I don't scale and has
> started to automate it using some really nice tools. That's all, this
> isn't crazy new features being backported, it's just patches that are
> obviously fixes being added to the stable tree.
I have yet to see a tool which can recognize an "obvious fix".
Seriously! Matching keywords in the changelog and some pattern
recognition in the diff _can_ help a lot with pre-filtering, but there is
still human interaction needed to do sanity checking.
And that really requires deep subsystem knowledge. I really fail to see
how that can work without the relevant people's involvement. Pretending that
you can do stable without maintainers will simply not work IMNHO.
> Yes, sometimes those fixes need additional fixes, and that's fine,
> normal stable-marked patches need that all the time. I don't see anyone
> complaining about that, right?
>
> So nothing "new" is happening here, EXCEPT we are actually starting to
> get a better kernel-wide coverage for stable fixes, which we have not
> had in the past. That's a good thing! The number of patches applied to
> stable is still a very very very tiny % compared to mainline, so nothing
> new is happening here.
yes I do agree, the stable process is not very much different from the
past, and I would tend to consider both processes broken because they
explicitly try to avoid maintainers, which is just wrong.
> Oh, and if you do want to complain about huge new features being
> backported, look at the mess that Spectre and Meltdown has caused in the
> stable trees. I don't see anyone complaining about those massive
> changes :)
Are you serious? Are you going to compare the biggest PITA that the
community had to undergo because of HW issues with random pattern
matching in changelogs/diffs? Come on!
--
Michal Hocko
SUSE Labs
On Tue, 17 Apr 2018, Greg KH wrote:
> It already is that :)
I have a question: I guess a stable team has an idea who they are
preparing the tree for, IOW who is the target consumer. Who is it?
Certainly it's not major distros, as both RH and SUSE already stated that
they are either not basing off the stable kernel (only cherry-pick fixes
from it) (RH), or are quite far in the process of moving away from stable
tree towards combination of what RH is doing + semi-automated evaluation
of Fixes: tag (SUSE).
If the target audience is somewhere else, that's perfectly fine, but then
it'd have to be stated explicitly I guess.
I can't speak for RH, but for us (at least me personally), the pace of
patches flowing into -stable is way too high for us to keep control of
what is landing in our tree.
In some sense, stability should be equivalent to "minimal necessary amount
of super-critical changes". That's not what this "let's backport
proactively almost everything that has the word 'fixes' in the changelog" (I'm
exaggerating a bit, of course) seems to be about.
Again, the rules stated out in
Documentation/process/stable-kernel-rules.rst
are very nice, and are exactly something at least we would be very happy
about. They have the nice hidden assumption in them that someone actually
has to actively invest human brain power to think about the fix, its
consequences, prerequisites, etc. Not just doing a big dump of all
commits that "might fix something".
How many of the actual patches flowing into -stable would satisfy those
criteria these days?
IOW, I'm pretty sure our users are much happier with us supplying them
reactive fixes than pro-active uncertainty.
> The problem Sasha is trying to solve here is that for many subsystems,
> maintainers do not mark patches for stable at all.
The pressure on those subsystems should be coming from unhappy users
(be it end-users or vendors redistributing the tree) of the stable
tree, who would be complaining about missing fixes for those subsystems.
Is this actually happening? Where?
> Oh, and if you do want to complain about huge new features being
> backported, look at the mess that Spectre and Meltdown has caused in the
> stable trees. I don't see anyone complaining about those massive
> changes :)
Umm, sorry, how is this related?
There simply was no other way, and I took it for given that this is seen
by everybody involved as an absolute exception, due to the nature of the
issue and of the massive changes that were needed.
Thanks,
--
Jiri Kosina
SUSE Labs
On Mon 16-04-18 17:23:30, Sasha Levin wrote:
> On Mon, Apr 16, 2018 at 07:06:04PM +0200, Pavel Machek wrote:
> >On Mon 2018-04-16 16:37:56, Sasha Levin wrote:
> >> On Mon, Apr 16, 2018 at 12:30:19PM -0400, Steven Rostedt wrote:
> >> >On Mon, 16 Apr 2018 16:19:14 +0000
> >> >Sasha Levin <[email protected]> wrote:
> >> >
> >> >> >Wait! What does that mean? What's the purpose of stable if it is as
> >> >> >broken as mainline?
> >> >>
> >> >> This just means that if there is a fix that went in mainline, and the
> >> >> fix is broken somehow, we'd rather take the broken fix than not.
> >> >>
> >> >> In this scenario, *something* will be broken, it's just a matter of
> >> >> what. We'd rather have the same thing broken between mainline and
> >> >> stable.
> >> >
> >> >Honestly, I think that removes all value of the stable series. I
> >> >remember when the stable series were first created. People were saying
> >> >that it wouldn't even get to more than 5 versions, because the bar for
> >> >backporting was supposed to be very high. Today it's just a fork of the
> >> >kernel at a given version. No more features, but we will be OK with
> >> >regressions. I'm struggling to see what the benefit of it is supposed to
> >> >be?
> >>
> >> It's not "OK with regressions".
> >>
> >> Let's look at a hypothetical example: You have a 4.15.1 kernel that has
> >> a broken printf() behaviour so that when you:
> >>
> >> pr_err("%d", 5)
> >>
> >> Would print:
> >>
> >> "Microsoft Rulez"
> >>
> >> Bad, right? So you went ahead and fixed it, and now it prints "5" as you
> >> might expect. But alas, with your patch, running:
> >>
> >> pr_err("%s", "hi!")
> >>
> >> Would show a cat picture for 5 seconds.
> >>
> >> Should we take your patch in -stable or not? If we don't, we're stuck
> >> with the original issue while the mainline kernel will behave
> >> differently, but if we do - we introduce a new regression.
> >
> >Of course not.
> >
> >- It must be obviously correct and tested.
> >
> >If it introduces new bug, it is not correct, and certainly not
> >obviously correct.
>
> As you might have noticed, we don't strictly follow the rules.
>
> Take a look at the whole PTI story as an example. It's way more than 100
> lines, it's not obviously correct, it fixed more than 1 thing, and so
> on, and yet it went in -stable!
>
> Would you argue we shouldn't have backported PTI to -stable?
So I agree with that being backported. But I think this nicely demonstrates
a point some people are trying to make in this thread. We do take fixes
with a high risk of regression if they fix a serious enough issue. Also we do
take fixes to non-serious stuff (such as the addition of a device ID) if the
chances of regression are really low.
So IMHO the metric for including the fix is not solely "how annoying to
the user this can be" but rather something like:
score = (how annoying the bug is) * ((1 / (chance of regression due to
including this)) - 1)^3
(constants are somewhat arbitrary and subject to tuning ;). Now both the
'annoying' and 'regression chance' parts are subjective and sometimes
difficult to estimate, so don't take the formula too seriously, but it
demonstrates the point. I think we all agree we want to fix annoying stuff
and we don't want regressions. But you need to somehow weigh this over your
expected userbase - and this is where your argument "but someone might be
annoyed by LEDs not working so let's include it" has problems - it should
rather be "is the annoyance of non-working LEDs over the expected user base
high enough to risk a regression due to this patch for someone in the
expected user base"? The answer to this second question is not clear at all
to a casual reviewer, and that's why IMHO we have the CC stable tag, as the
maintainer is supposed to have at least a bit better clue.
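For illustration only, a tiny sketch of that heuristic in Python; the
function name, the input scales and the example numbers below are made up,
just like the constants in the formula itself:

    def backport_score(annoyance, regression_chance):
        # annoyance: subjective scale (say 0..10)
        # regression_chance: subjective estimate in (0, 1)
        return annoyance * ((1.0 / regression_chance) - 1.0) ** 3

    # a device ID addition: barely annoying to lack, almost no regression risk
    print(backport_score(annoyance=1, regression_chance=0.001))  # ~1e9 -> take it
    # a risky rework of a mildly annoying corner case
    print(backport_score(annoyance=3, regression_chance=0.3))    # ~38 -> much weaker case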
Another point I wanted to make is that if the chance a patch causes a
regression is about 2%, as you said somewhere else in the thread, then by
adding 20 patches that "may fix a bug that is annoying for someone" you've
just increased the chance there's a regression in the release by 34%. And
this is not just a math game, this also roughly matches our real experience
with maintaining our enterprise kernels. Do 20 "maybe" fixes outweigh such a
regression chance? And I also note that for a regression to get reported, so
that it gets included in your 2% estimate of the per-patch regression rate,
someone must be bothered enough by it to triage it and send an email
somewhere, so it already falls into the "serious" category for me.
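A minimal sketch of the arithmetic behind that estimate, assuming the
per-patch regression probabilities are independent (the 0.05% figure below
is only for comparison with the much lower rate claimed elsewhere in the
thread):

    def release_regression_chance(per_patch_p, n_patches):
        # chance that at least one of n independently selected patches regresses
        return 1.0 - (1.0 - per_patch_p) ** n_patches

    print(release_regression_chance(0.02, 20))    # ~0.33, roughly one release in three
    print(release_regression_chance(0.0005, 20))  # ~0.01 with a 0.05% per-patch rate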
So these are the reasons why I think that merging tons of patches into
stable isn't actually very good.
Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR
On Tue 17-04-18 14:24:54, Petr Mladek wrote:
[...]
> Back to the trend. Last week I got autosel mails even for
> patches that were still being discussed, had issues, and
> were far from upstream:
>
> https://lkml.kernel.org/r/DM5PR2101MB1032AB19B489D46B717B50D4FBBB0@DM5PR2101MB1032.namprd21.prod.outlook.com
> https://lkml.kernel.org/r/DM5PR2101MB10327FA0A7E0D2C901E33B79FBBB0@DM5PR2101MB1032.namprd21.prod.outlook.com
>
> It might be a good idea if the mail asked to add Fixes: tag
> or stable mailing list. But the mail suggested to add the
> unfinished patch into stable branch directly (even before
> upstreaming?).
Well, I think that poking subsystems which ignore stable trees with such
emails early during review might be quite helpful. Maybe people start
marking for stable and we do not need the guessing later. I wouldn't
bother poking those who are known to mark stable patches though.
--
Michal Hocko
SUSE Labs
On Tue 2018-04-17 12:46:37, Greg KH wrote:
> Oh, I know why, suddenly subsystems that never were taking the time to
> mark patches for stable are getting patches backported and are getting
> nervous.
Yes, I am getting nervous because of this. The number of printk fixes
nominated for stable is increasing exponentially (just my feeling)
during the last few months.
The problem is that I want to be responsible and think about possible
regressions. Sometimes it requires checking the state of the
particular kernel release. The older the code base, the more complicated
the decision is.
You might argue that backporting the fixes helps to get the same code
in all supported code bases. But it is not true. It never will be
the same.
Anyway, in the past the "automatically" nominated printk fixes
were trivial. They did not cause harm. But they also were not
worth it, IMHO. They fixed corner cases that were there for ages.
Most of these fixes were found by code review when working on
a feature. They were not backed by bug reports.
Last week, autosel nominated a pretty non-trivial patch (which started
this thread). It partly solved a problem we have tried to fix for the
last few years.
On one side, this was an annoying problem that motivated several
people to spend a lot of time on it. This might be a motivation
for a backport.
On the other hand, it took many years to get somewhere. The main
problem was the fear of regressions. We fixed/improved many things
in the meantime. It shows that the problem really is not trivial.
The same is true for the fix. We did our best to avoid regressions.
But it does not mean that there are none. Also it does not mean
that it will really give better results in all situations.
I really do not see a reason to hurry and backport this to
the older kernel releases. It means spreading the fix but also
eventual problems. It is easy to miss a dependent patch.
The less trivial the fix, the more potential problems there are.
Back to the trend. Last week I got autosel mails even for
patches that were still being discussed, had issues, and
were far from upstream:
https://lkml.kernel.org/r/DM5PR2101MB1032AB19B489D46B717B50D4FBBB0@DM5PR2101MB1032.namprd21.prod.outlook.com
https://lkml.kernel.org/r/DM5PR2101MB10327FA0A7E0D2C901E33B79FBBB0@DM5PR2101MB1032.namprd21.prod.outlook.com
It might be a good idea if the mail asked to add Fixes: tag
or stable mailing list. But the mail suggested to add the
unfinished patch into stable branch directly (even before
upstreaming?).
Now, there are only a handful of printk patches in each
release, so it is still doable. I just do not understand
how other maintainers, from much busier subsystems,
could cope with this trend.
In other words, if you want to automate patch nomination,
you might need to automate patch review as well. Or you need
to keep the patch rate low. This might mean nominating
only important and rather trivial fixes.
Best Regards,
Petr
On Tue, Apr 17, 2018 at 01:41:44PM +0200, Jan Kara wrote:
>On Mon 16-04-18 17:23:30, Sasha Levin wrote:
>> On Mon, Apr 16, 2018 at 07:06:04PM +0200, Pavel Machek wrote:
>> >On Mon 2018-04-16 16:37:56, Sasha Levin wrote:
>> >> On Mon, Apr 16, 2018 at 12:30:19PM -0400, Steven Rostedt wrote:
>> >> >On Mon, 16 Apr 2018 16:19:14 +0000
>> >> >Sasha Levin <[email protected]> wrote:
>> >> >
>> >> >> >Wait! What does that mean? What's the purpose of stable if it is as
>> >> >> >broken as mainline?
>> >> >>
>> >> >> This just means that if there is a fix that went in mainline, and the
>> >> >> fix is broken somehow, we'd rather take the broken fix than not.
>> >> >>
>> >> >> In this scenario, *something* will be broken, it's just a matter of
>> >> >> what. We'd rather have the same thing broken between mainline and
>> >> >> stable.
>> >> >
>> >> >Honestly, I think that removes all value of the stable series. I
>> >> >remember when the stable series were first created. People were saying
>> >> >that it wouldn't even get to more than 5 versions, because the bar for
>> >> >backporting was supposed to be very high. Today it's just a fork of the
>> >> >kernel at a given version. No more features, but we will be OK with
>> >> >regressions. I'm struggling to see what the benefit of it is supposed to
>> >> >be?
>> >>
>> >> It's not "OK with regressions".
>> >>
>> >> Let's look at a hypothetical example: You have a 4.15.1 kernel that has
>> >> a broken printf() behaviour so that when you:
>> >>
>> >> pr_err("%d", 5)
>> >>
>> >> Would print:
>> >>
>> >> "Microsoft Rulez"
>> >>
>> >> Bad, right? So you went ahead and fixed it, and now it prints "5" as you
>> >> might expect. But alas, with your patch, running:
>> >>
>> >> pr_err("%s", "hi!")
>> >>
>> >> Would show a cat picture for 5 seconds.
>> >>
>> >> Should we take your patch in -stable or not? If we don't, we're stuck
>> >> with the original issue while the mainline kernel will behave
>> >> differently, but if we do - we introduce a new regression.
>> >
>> >Of course not.
>> >
>> >- It must be obviously correct and tested.
>> >
>> >If it introduces new bug, it is not correct, and certainly not
>> >obviously correct.
>>
>> As you might have noticed, we don't strictly follow the rules.
>>
>> Take a look at the whole PTI story as an example. It's way more than 100
>> lines, it's not obviously correct, it fixed more than 1 thing, and so
>> on, and yet it went in -stable!
>>
>> Would you argue we shouldn't have backported PTI to -stable?
>
>So I agree with that being backported. But I think this nicely demonstrates
>a point some people are trying to make in this thread. We do take fixes
>with a high risk of regression if they fix a serious enough issue. Also we do
>take fixes to non-serious stuff (such as the addition of a device ID) if the
>chances of regression are really low.
>
>So IMHO the metric for including the fix is not solely "how annoying to
>user this can be" but rather something like:
>
>score = (how annoying the bug is) * ((1 / (chance of regression due to
> including this)) - 1)^3
>
>(constants are somewhat arbitrary subject to tuning ;). Now both 'annoying'
>and 'regression chance' parts are subjective and sometimes difficult to
>estimate so don't take the formula too seriously but it demonstrates the
>point. I think we all agree we want to fix annoying stuff and we don't want
>regressions. But you need to somehow weight this over your expected
>userbase - and this is where your argument "but someone might be annoyed by
>LEDs not working so let's include it" has problems - it should rather be
>"is the annoyance of non-working leds over expected user base high enough
>to risk a regression due to this patch for someone in the expected user
>base"? The answer to this second question is not clear at all to a casual
>reviewer and that's why we IMHO have CC stable tag as maintainer is
>supposed to have at least a bit better clue.
We may be able to guesstimate the 'regression chance', but there's no
way we can guess the 'annoyance' one. There are so many different use
cases that we just can't even guess how many people would get "annoyed"
by something.
Even regression chance is tricky, look at the commits I've linked
earlier in the thread. Even the most trivial looking commits that end up
in stable have a chance for regression.
>Another point I wanted to make is that if chance a patch causes a
>regression is about 2% as you said somewhere else in a thread, then by
>adding 20 patches that "may fix a bug that is annoying for someone" you've
>just increased a chance there's a regression in the release by 34%. And
So I've said that the rejection rate is less than 2%. This includes
all commits that I have proposed for -stable, but didn't end up being
included in -stable.
This includes commits that the author/maintainers NACKed, commits that
didn't do anything on older kernels, commits that were buggy but were
caught before the kernel was released, commits that failed to build on
an arch I didn't test it on originally and so on.
After thousands of merged AUTOSEL patches I can count the number of
times a commit has caused a regression and had to be removed on one
hand.
>this is not just a math game, this also roughly matches a real experience
>with maintaining our enterprise kernels. Do 20 "maybe" fixes outweigh such
>regression chance? And I also note that for a regression to get reported so
>that it gets included into your 2% estimate of a patch regression rate,
>someone must be bothered enough by it to triage it and send an email
>somewhere so that already falls into a category of "serious" stuff to me.
It is indeed a numbers game, but the regression rate isn't 2%, it's
closer to 0.05%.
On Tue, Apr 17, 2018 at 02:49:24PM +0200, Michal Hocko wrote:
>On Tue 17-04-18 14:24:54, Petr Mladek wrote:
>[...]
>> Back to the trend. Last week I got autosel mails even for
>> patches that were still being discussed, had issues, and
>> were far from upstream:
>>
>> https://lkml.kernel.org/r/DM5PR2101MB1032AB19B489D46B717B50D4FBBB0@DM5PR2101MB1032.namprd21.prod.outlook.com
>> https://lkml.kernel.org/r/DM5PR2101MB10327FA0A7E0D2C901E33B79FBBB0@DM5PR2101MB1032.namprd21.prod.outlook.com
>>
>> It might be a good idea if the mail asked to add Fixes: tag
>> or stable mailing list. But the mail suggested to add the
>> unfinished patch into stable branch directly (even before
>> upstreaming?).
>
>Well, I think that poking subsystems which ignore stable trees with such
>emails early during review might be quite helpful. Maybe people start
>marking for stable and we do not need the guessing later. I wouldn't
>bother poking those who are known to mark stable patches though.
Yup, mm/ needs far less poking than XFS (for example).
What makes mm/ so good about this is that it's a rather small set of
devs who are good at marking things for stable. As long as the commit
came from one of these "core" mm/ folks it's almost guaranteed to have
proper stable tags.
But mm/ commits don't come only from these people. Here's a concrete
example we can discuss:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c61611f70958d86f659bca25c02ae69413747a8d
This was merged in a few days ago, and seems relevant for older kernel
trees as well. Should it not have a stable tag?
On Tue, Apr 17, 2018 at 02:24:54PM +0200, Petr Mladek wrote:
>Back to the trend. Last week I got autosel mails even for
>patches that were still being discussed, had issues, and
>were far from upstream:
>
> https://lkml.kernel.org/r/DM5PR2101MB1032AB19B489D46B717B50D4FBBB0@DM5PR2101MB1032.namprd21.prod.outlook.com
> https://lkml.kernel.org/r/DM5PR2101MB10327FA0A7E0D2C901E33B79FBBB0@DM5PR2101MB1032.namprd21.prod.outlook.com
>
>It might be a good idea if the mail asked to add Fixes: tag
>or stable mailing list. But the mail suggested to add the
>unfinished patch into stable branch directly (even before
>upstreaming?).
I obviously didn't suggest that this patch would go in -stable before
it's upstream.
I've started doing those because some folks can't be arsed to reply to a
review request for a patch that is months old. I found that if I send
these mails while the discussion is still going on I'd get a much better
response rate from people.
If you think any of these patches should go into stable, there are two
ways to go about it:
- You end up adding the -stable tag yourself, and it would follow the
usual route where Greg picks it up.
- You reply to that mail, and the patch would wait in a list until my
script notices it made it upstream, at which point it would get
queued for stable.
>Now, there are only hand full of printk patches in each
>release, so it is still doable. I just do not understand
>how other maintainers, from much more busy subsystems,
>could cope with this trend.
>
>In other words, if you want to automate patch nomination,
>you might need to automate patch review as well. Or you need
>to keep the patch rate low. This might mean nominating
>only important and rather trivial fixes.
I also have an effort to help review the patches. See what I'm working
on for the xfs folks:
https://lkml.org/lkml/2018/3/29/1113
Where in addition to build tests I'd also run each commit, for each
stable kernel, through a set of xfstests and provide the results along
with the mail.
So yes, I'm aware that the volume of patches is huge, but there's not
much I can do about it because it's just a subset of the kernel's patch
volume and since the kernel gets more and more patches each release, the
volume of stable commits is bound to grow as well.
On Tue, Apr 17, 2018 at 01:07:17PM +0200, Michal Hocko wrote:
>On Tue 17-04-18 12:39:36, Greg KH wrote:
>> On Mon, Apr 16, 2018 at 11:28:44PM +0200, Jiri Kosina wrote:
>> > On Mon, 16 Apr 2018, Sasha Levin wrote:
>> >
>> > > I agree that as an enterprise distro taking everything from -stable
>> > > isn't the best idea. Ideally you'd want to be close to the first
>> > > extreme you've mentioned and only take commits if customers are asking
>> > > you to do so.
>> > >
>> > > I think that the rule we're trying to agree upon is the "It must fix
>> > > a real bug that bothers people".
>> > >
>> > > I think that we can agree that it's impossible to expect every single
>> > > Linux user to go on LKML and complain about a bug he encountered, so the
>> > > rule quickly becomes "It must fix a real bug that can bother people".
>> >
>> > So is there a reason why stable couldn't become some hybrid-form union of
>> >
>> > - really critical issues (data corruption, boot issues, severe security
>> > issues) taken from bleeding edge upstream
>> > - [reviewed] cherry-picks of functional fixes from major distro kernels
>> > (based on that very -stable release), as that's apparently what people
>> > are hitting in the real world with that particular kernel
>>
>> It already is that :)
>>
>> The problem Sasha is trying to solve here is that for many subsystems,
>> maintainers do not mark patches for stable at all.
>
>The way he is trying to do that is just wrong. Generate a pressure on
>those subsystems by referring to bug reports and unhappy users and I am
>pretty sure they will try harder... You cannot solve the problem by
>bypassing them without having deep understanding of the specific
>subsystem. Once you have it, just make sure you are part of the review
>process and make sure to mark patches before they are merged.
I think we just don't agree on how we should "pressure".
Look at the discussion I had with the XFS folks who just don't want to
deal with this -stable thing because they have too much work upstream.
There wasn't a single patch in -stable coming from XFS for the past 6+
months. I'm aware of more than one way to corrupt an XFS volume for any
distro that uses a kernel older than 4.15.
Sure, please buy them a beer at LSF/MM (I'll pay) and ask them to be
better about it, but I don't see this changing.
The solution to this, in my opinion, is to automate the whole selection
and review process. We do selection using AI, and we run every possible
test that's relevant to that subsystem.
At which point, the amount of work a human needs to do to review a patch
shrinks into something far more manageable for some maintainers.
>> So real bugfixes
>> that do hit people are not getting to those kernels, which force the
>> distros to do extra work to triage a bug, dig through upstream kernels,
>> find and apply the patch.
>
>I would say that this is the primary role of the distro. To hide the
>jungle of the upstream work and provide the additional layer of bug filtering
>and forwarding them in the right direction.
More often than triaging, you'll just be asked to upgrade to the latest
version. What sort of user experience does that provide?
[snip]
>> So nothing "new" is happening here, EXCEPT we are actually starting to
>> get a better kernel-wide coverage for stable fixes, which we have not
>> had in the past. That's a good thing! The number of patches applied to
>> stable is still a very very very tiny % compared to mainline, so nothing
>> new is happening here.
>
>yes I do agree, the stable process is not very much different from the
>past, and I would tend to consider both processes broken because they
>explicitly try to avoid maintainers, which is just wrong.
Avoid maintainers?! We send so much "spam" trying to get maintainers
more involved in the process. How is that avoiding them?
If you're a maintainer who has specific requirements for the -stable
flow, or you have any automated testing you'd like to be run on these
commits, or you want these mails to come in a different format, or
pretty much anything else at all just shoot me a mail!
It's been almost impossible to get maintainers involved in this process.
We don't sneak anything past maintainers, there are multiple mails over
multiple weeks for each commit that would go in. You don't have to
review it right away either, just reply with "please don't merge until
I'm done reviewing" and it'll get removed from the queue.
>> Oh, and if you do want to complain about huge new features being
>> backported, look at the mess that Spectre and Meltdown has caused in the
>> stable trees. I don't see anyone complaining about those massive
>> changes :)
>
>Are you serious? Are you going to compare the biggest PITA that the
>community had to undergo because of HW issues with random pattern
>matching in changelogs/diffs? Come on!
HW issues are irrelevant here. You had a bug that allowed arbitrary
kernel memory access. I can easily list quite a few commits that are
not tagged for stable and that fix exactly the same thing.
On Tue, 17 Apr 2018 14:04:36 +0000
Sasha Levin <[email protected]> wrote:
> The solution to this, in my opinion, is to automate the whole selection
> and review process. We do selection using AI, and we run every possible
> test that's relevant to that subsystem.
>
> At which point, the amount of work a human needs to do to review a patch
> shrinks into something far more manageable for some maintainers.
I guess the real question is, who are the stable kernels for? Is it just
a place to look at to see what distros should think about? A superset
of what distros would take. Then distros would have a nice place to
look to find what patches they should look at. But the stable tree
itself won't be used. But it's not being used today by major distros
(Red Hat and SuSE). Debian may be using it, but that's because the
stable maintainer for its kernels is also the Debian maintainer.
Who are the customers of the stable trees? They are the ones that
should be determining the "equation" for what goes into it.
Personally, I use stable as a one off from mainline. Like I mentioned
in another email. I'm currently on 4.15.x and will probably move to
4.16.x next. Unless there's some critical bug announcement, I update my
machines once a month. I originally just used mainline, but that was a
bit too unstable for my work machines ;-)
-- Steve
On Tue 17-04-18 13:39:33, Sasha Levin wrote:
[...]
> But mm/ commits don't come only from these people. Here's a concrete
> example we can discuss:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c61611f70958d86f659bca25c02ae69413747a8d
I would be really careful, because that requires auditing all callers to
make sure they comply with the change. This is just _too_ easy to backport
without noticing a failure. Now consider the other side. Is there any
real bug report backing this? This behavior has been like that for quite some
time, but I do not remember any actual bug report and the changelog
doesn't mention one either. It is about a theoretical problem.
So if this was to be merged to stable then the changelog should contain
a big fat warning about the existing users and how they should be
checked.
Besides that, I can see Reviewed-by: akpm, and Andrew is usually very
careful about stable backports, so there probably _was_ a reason to
exclude stable.
--
Michal Hocko
SUSE Labs
On Tue, Apr 17, 2018 at 10:15:02AM -0400, Steven Rostedt wrote:
> On Tue, 17 Apr 2018 14:04:36 +0000
> Sasha Levin <[email protected]> wrote:
>
> > The solution to this, in my opinion, is to automate the whole selection
> > and review process. We do selection using AI, and we run every possible
> > test that's relevant to that subsystem.
> >
> > At which point, the amount of work a human needs to do to review a patch
> > shrinks into something far more manageable for some maintainers.
>
> I guess the real question is, who are the stable kernels for? Is it just
> a place to look at to see what distros should think about. A superset
> of what distros would take. Then distros would have a nice place to
> look to find what patches they should look at. But the stable tree
> itself wont be used. But it's not being used today by major distros
> (Red Hat and SuSE). Debian may be using it, but that's because the
> stable maintainer for its kernels is also the Debian maintainer.
>
> Who are the customers of the stable trees? They are the ones that
> should be determining the "equation" for what goes into it.
The "customers" of the stable trees are anyone who uses Linux.
Right now, it's estimated that only about 1/3 of the kernels running out
there, at best, are an "enterprise" kernel/distro. 2/3 of the world
either run a kernel.org-based release + their own patches, or Debian.
And Debian piggy-backs on the stable kernel releases pretty regularly.
So the majority of the Linux users out there are who we are doing this
for. Those that do not pay for a company to sift through things and
only cherry-pick what they want to pick out (hint, they almost always
miss things; some do this better than others...)
That's who this is all for, which is why we are doing our best to keep
on top of the avalanche of patches going into upstream to get the needed
fixes (both security and "normal" fixes) out to users as soon as
possible.
So again, if you are a subsystem maintainer, tag your patches for
stable. If you do not, you will get automated emails asking you about
patches that should be applied (like the one that started this thread).
If you want to just have us ignore your subsystem entirely, we will be
glad to do so, and we will tell the world to not use your subsystem if
at all possible (see previous comments about xfs, and I would argue IB
right now...)
> Personally, I use stable as a one off from mainline. Like I mentioned
> in another email. I'm currently on 4.15.x and will probably move to
> 4.16.x next. Unless there's some critical bug announcement, I update my
> machines once a month. I originally just used mainline, but that was a
> bit too unstable for my work machines ;-)
That's great, you are a user of these trees then. So you benefit
directly, along with everyone else who relies on them.
And again, I'm working with the SoC vendors to directly incorporate
these trees into their device trees, and I've already seen some devices
in the wild push out updated 4.4.y kernels a few weeks after they are
released. That's the end goal here, to have the world's devices in a
much more secure and stable shape by relying on these kernels.
thanks,
greg k-h
On Tue 17-04-18 14:04:36, Sasha Levin wrote:
> On Tue, Apr 17, 2018 at 01:07:17PM +0200, Michal Hocko wrote:
> >On Tue 17-04-18 12:39:36, Greg KH wrote:
> >> On Mon, Apr 16, 2018 at 11:28:44PM +0200, Jiri Kosina wrote:
> >> > On Mon, 16 Apr 2018, Sasha Levin wrote:
> >> >
> >> > > I agree that as an enterprise distro taking everything from -stable
> >> > > isn't the best idea. Ideally you'd want to be close to the first
> >> > > extreme you've mentioned and only take commits if customers are asking
> >> > > you to do so.
> >> > >
> >> > > I think that the rule we're trying to agree upon is the "It must fix
> >> > > a real bug that bothers people".
> >> > >
> >> > > I think that we can agree that it's impossible to expect every single
> >> > > Linux user to go on LKML and complain about a bug he encountered, so the
> >> > > rule quickly becomes "It must fix a real bug that can bother people".
> >> >
> >> > So is there a reason why stable couldn't become some hybrid-form union of
> >> >
> >> > - really critical issues (data corruption, boot issues, severe security
> >> > issues) taken from bleeding edge upstream
> >> > - [reviewed] cherry-picks of functional fixes from major distro kernels
> >> > (based on that very -stable release), as that's apparently what people
> >> > are hitting in the real world with that particular kernel
> >>
> >> It already is that :)
> >>
> >> The problem Sasha is trying to solve here is that for many subsystems,
> >> maintainers do not mark patches for stable at all.
> >
> >The way he is trying to do that is just wrong. Generate pressure on
> >those subsystems by referring to bug reports and unhappy users and I am
> >pretty sure they will try harder... You cannot solve the problem by
> >bypassing them without having a deep understanding of the specific
> >subsystem. Once you have it, just make sure you are part of the review
> >process and make sure to mark patches before they are merged.
>
> I think we just don't agree on how we should "pressure".
>
> Look at the discussion I had with the XFS folks who just don't want to
> deal with this -stable thing because they have too much work upstream.
So do you really think that you or any script can decide without them? My
recollection from that discussion was quite the opposite. Dave was quite
clear that most fixes are quite hard to evaluate and that most of them
are simply not worth the risk of a backport.
> There wasn't a single patch in -stable coming from XFS for the past 6+
> months. I'm aware of more than one way to corrupt an XFS volume for any
> distro that uses a kernel older than 4.15.
Then try to poke/bribe somebody to have it fixed. But applying
_something_ is just not a solution. You should also evaluate whether "I
am able to corrupt" is something that "people see in the wild". Sure
there are zillions of bugs hidden in the large code base like the
kernel. People just do not tend to hit them and this will likely not
change very much in the future.
> Sure, please buy them a beer at LSF/MM (I'll pay) and ask them to be
> better about it, but I don't see this changing.
I can surely have one or two and discuss this. I am pretty sure xfs guys
are not going to pretend older kernels do not exist.
> The solution to this, in my opinion, is to automate the whole selection
> and review process. We do selection using AI, and we run every possible
> test that's relevant to that subsystem.
>
> At which point, the amount of work a human needs to do to review a patch
> shrinks into something far more managable for some maintainers.
I really disagree. I am pretty sure maintainers are very well aware of
how important each patch is. Some do not care about stable, and I agree you
should poke those. But some have really good reasons not to throw many
patches in that direction, because they do not feel the patches are
important enough.
Remember, this is not about numbers. More is not always better.
> >> So real bugfixes
> >> that do hit people are not getting to those kernels, which force the
> >> distros to do extra work to triage a bug, dig through upstream kernels,
> >> find and apply the patch.
> >
> >I would say that this is the primary role of the distro. To hide the
> >jungle of the upstream work and provide the additional value of bug
> >filtering and of forwarding them in the right direction.
>
> More often than triaging, you'll just be asked to upgrade to the latest
> version. What sort of user experience does that provide?
>
> [snip]
>
> >> So nothing "new" is happening here, EXCEPT we are actually starting to
> >> get a better kernel-wide coverage for stable fixes, which we have not
> >> had in the past. That's a good thing! The number of patches applied to
> >> stable is still a very very very tiny % compared to mainline, so nothing
> >> new is happening here.
> >
> >yes, I do agree, the stable process is not very much different from the
> >past, and I would call both processes broken because they explicitly try
> >to avoid maintainers, which is just wrong.
>
> Avoid maintainers?! We send so much "spam" trying to get maintainers
> more involved in the process. How is that avoiding them?
Just read what you wrote again. I am pretty sure AUTOSEL is on the filter
list of many people. We have a good volume of email traffic already, and
seeing more automated mail just doesn't help. At all!
> If you're a maintainer who has specific requirements for the -stable
> flow, or you have any automated testing you'd like to be run on these
> commits, or you want these mails to come in a different format, or
> pretty much anything else at all just shoot me a mail!
>
> It's been almost impossible to get maintainers involved in this process.
The whole stable history was about not bothering maintainers, and here is
the result.
> We don't sneak anything past maintainers, there are multiple mails over
> multiple weeks for each commit that would go in. You don't have to
> review it right away either, just reply with "please don't merge until
> I'm done reviewing" and it'll get removed from the queue.
I am not talking about sneaking or pushing things behind anyone's back. I
am just saying that you cannot do this without the direct involvement of
maintainers. If they do not respond to bug reports, shout at them, and I am
pretty sure that those subsystems will come under bigger pressure to find
their way to selecting _important_ fixes for users who are not running the
bleeding edge, because those users _matter_ as well (maybe even more,
because they are a much larger group).
> >> Oh, and if you do want to complain about huge new features being
> >> backported, look at the mess that Spectre and Meltdown has caused in the
> >> stable trees. I don't see anyone complaining about those massive
> >> changes :)
> >
> >Are you serious? Are you going to compare the biggest PITA that the
> >community had to undergo because of HW issues with random pattern
> >matching in changelog/diffs? Come on!
>
> HW Issues are irrelevant here. You had a bug that allowed arbitrary
> kernel memory access. I can easily list quite a few commits, that are
> not tagged for stable, that fix exactly the same thing.
Those are important fixes and if you are aware of them then you should
be involving the respective maintainer. I haven't heard about _any_
maintainer who would refuse to help.
--
Michal Hocko
SUSE Labs
On Tue, Apr 17, 2018 at 04:22:46PM +0200, Michal Hocko wrote:
>On Tue 17-04-18 13:39:33, Sasha Levin wrote:
>[...]
>> But mm/ commits don't come only from these people. Here's a concrete
>> example we can discuss:
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c61611f70958d86f659bca25c02ae69413747a8d
>
>I would be really careful. Because that requires auditing all callers to
>be compliant with the change. This is just _too_ easy to backport
>without noticing a failure. Now consider the other side. Is there any
>real bug report backing this? This behavior was like that for quite some
>time but I do not remember any actual bug report and the changelog
>doesn't mention one either. It is about a theoretical problem.
https://lkml.org/lkml/2018/3/19/430
There's even a fun little reproducer that allowed me to confirm it's an
issue (at least) on 4.15.
Heck, it might even qualify as a CVE.
>So if this was to be merged to stable then the changelog should contain
>a big fat warning about the existing users and how they should be
>checked.
So what I'm asking is why *wasn't* it sent to stable? Yes, it requires
additional work backporting this, but what I'm saying is that this
didn't happen at all.
>Besides that, I can see Reviewed-by: akpm, and Andrew is usually very
>careful about stable backports, so there probably _was_ a reason to
>exclude stable.
>--
>Michal Hocko
>SUSE Labs
On Tue, Apr 17, 2018 at 04:36:31PM +0200, Michal Hocko wrote:
>On Tue 17-04-18 14:04:36, Sasha Levin wrote:
>> On Tue, Apr 17, 2018 at 01:07:17PM +0200, Michal Hocko wrote:
>> >On Tue 17-04-18 12:39:36, Greg KH wrote:
>> >> On Mon, Apr 16, 2018 at 11:28:44PM +0200, Jiri Kosina wrote:
>> >> > On Mon, 16 Apr 2018, Sasha Levin wrote:
>> >> >
>> >> > > I agree that as an enterprise distro taking everything from -stable
>> >> > > isn't the best idea. Ideally you'd want to be close to the first
>> >> > > extreme you've mentioned and only take commits if customers are asking
>> >> > > you to do so.
>> >> > >
>> >> > > I think that the rule we're trying to agree upon is the "It must fix
>> >> > > a real bug that bothers people".
>> >> > >
>> >> > > I think that we can agree that it's impossible to expect every single
>> >> > > Linux user to go on LKML and complain about a bug he encountered, so the
>> >> > > rule quickly becomes "It must fix a real bug that can bother people".
>> >> >
>> >> > So is there a reason why stable couldn't become some hybrid-form union of
>> >> >
>> >> > - really critical issues (data corruption, boot issues, severe security
>> >> > issues) taken from bleeding edge upstream
>> >> > - [reviewed] cherry-picks of functional fixes from major distro kernels
>> >> > (based on that very -stable release), as that's apparently what people
>> >> > are hitting in the real world with that particular kernel
>> >>
>> >> It already is that :)
>> >>
>> >> The problem Sasha is trying to solve here is that for many subsystems,
>> >> maintainers do not mark patches for stable at all.
>> >
>> >The way he is trying to do that is just wrong. Generate pressure on
>> >those subsystems by referring to bug reports and unhappy users and I am
>> >pretty sure they will try harder... You cannot solve the problem by
>> >bypassing them without having a deep understanding of the specific
>> >subsystem. Once you have it, just make sure you are part of the review
>> >process and make sure to mark patches before they are merged.
>>
>> I think we just don't agree on how we should "pressure".
>>
>> Look at the discussion I had with the XFS folks who just don't want to
>> deal with this -stable thing because they have too much work upstream.
>
>So do you really think that you or any script can decide without them? My
>recollection from that discussion was quite the opposite. Dave was quite
>clear that most fixes are quite hard to evaluate and that most of them
>are simply not worth the risk of a backport.
No, *some* fixes are hard, not most.
I'm not trying to decide for them, I'm trying to help them decide.
>> There wasn't a single patch in -stable coming from XFS for the past 6+
>> months. I'm aware of more than one way to corrupt an XFS volume for any
>> distro that uses a kernel older than 4.15.
>
>Then try to poke/bribe somebody to have it fixed. But applying
>_something_ is just not a solution. You should also evaluate whether "I
>am able to corrupt" is something that "people see in the wild". Sure
>there are zillions of bugs hidden in the large code base like the
>kernel. People just do not tend to hit them and this will likely not
>change very much in the future.
We can't ignore bugs just because people don't notice them.
Data corruption bugs in particular are a pain to report as well; the
corruption might have happened months before, and there's not much to
report at that point.
There are quite a few bug classes like that.
>> Sure, please buy them a beer at LSF/MM (I'll pay) and ask them to be
>> better about it, but I don't see this changing.
>
>I can surely have one or two and discuss this. I am pretty sure xfs guys
>are not going to pretend older kernels do not exist.
>
>> The solution to this, in my opinion, is to automate the whole selection
>> and review process. We do selection using AI, and we run every possible
>> test that's relevant to that subsystem.
>>
>> At which point, the amount of work a human needs to do to review a patch
>> shrinks into something far more managable for some maintainers.
>
>I really disagree. I am pretty sure maintainers are very well aware of
>how important each patch is. Some do not care about stable, and I agree you
>should poke those. But some have really good reasons not to throw many
>patches in that direction, because they do not feel the patches are
>important enough.
>
>Remember, this is not about numbers. More is not always better.
So what is "important"? Look at the XFS issues: they were important
enough to get fixed upstream and to have an appropriate test added to
xfstests.
Why didn't they go back to -stable?
>> >> So real bugfixes
>> >> that do hit people are not getting to those kernels, which force the
>> >> distros to do extra work to triage a bug, dig through upstream kernels,
>> >> find and apply the patch.
>> >
>> >I would say that this is the primary role of the distro. To hide the
>> >jungle of the upstream work and provide the additional value of bug
>> >filtering and of forwarding them in the right direction.
>>
>> More often than triaging, you'll just be asked to upgrade to the latest
>> version. What sort of user experience does that provide?
>>
>> [snip]
>>
>> >> So nothing "new" is happening here, EXCEPT we are actually starting to
>> >> get a better kernel-wide coverage for stable fixes, which we have not
>> >> had in the past. That's a good thing! The number of patches applied to
>> >> stable is still a very very very tiny % compared to mainline, so nothing
>> >> new is happening here.
>> >
>> >yes, I do agree, the stable process is not very much different from the
>> >past, and I would call both processes broken because they explicitly try
>> >to avoid maintainers, which is just wrong.
>>
>> Avoid maintainers?! We send so much "spam" trying to get maintainers
>> more involved in the process. How is that avoiding them?
>
>Just read what you wrote again. I am pretty sure AUTOSEL is on the filter
>list of many people. We have a good volume of email traffic already, and
>seeing more automated mail just doesn't help. At all!
>
>> If you're a maintainer who has specific requirements for the -stable
>> flow, or you have any automated testing you'd like to be run on these
>> commits, or you want these mails to come in a different format, or
>> pretty much anything else at all just shoot me a mail!
>>
>> It's been almost impossible to get maintainers involved in this process.
>
>The whole stable history was about not bothering maintainers, and here is
>the result.
>
>> We don't sneak anything past maintainers, there are multiple mails over
>> multiple weeks for each commit that would go in. You don't have to
>> review it right away either, just reply with "please don't merge until
>> I'm done reviewing" and it'll get removed from the queue.
>
>I am not talking about sneaking or pushing things behind anyone's back. I
>am just saying that you cannot do this without the direct involvement of
>maintainers. If they do not respond to bug reports, shout at them, and I am
>pretty sure that those subsystems will come under bigger pressure to find
>their way to selecting _important_ fixes for users who are not running the
>bleeding edge, because those users _matter_ as well (maybe even more,
>because they are a much larger group).
>
>> >> Oh, and if you do want to complain about huge new features being
>> >> backported, look at the mess that Spectre and Meltdown has caused in the
>> >> stable trees. I don't see anyone complaining about those massive
>> >> changes :)
>> >
>> >Are you serious? Are you going to compare the biggest PITA that the
>> >community had to undergo because of HW issues with random pattern
>> >matching in changelog/diffs? Come on!
>>
>> HW Issues are irrelevant here. You had a bug that allowed arbitrary
>> kernel memory access. I can easily list quite a few commits, that are
>> not tagged for stable, that fix exactly the same thing.
>
>Those are important fixes and if you are aware of them then you should
>be involving the respective maintainer. I haven't heard about _any_
>maintainer who would refuse to help.
Let's do it this way: let's assume my AUTOSEL project is bad and I'll
get rid of it tomorrow.
How do I get the XFS folks to send their stuff to -stable? (we have
quite a few customers who use XFS)
How do I get the KVM folks to be more consistent about tagging patches
for -stable? (we support nested KVM!)
How do I get people who are not even aware of how the -stable process
works to tag their commits properly? (there's quite a long tail of authors
who send one important bugfix and then disappear forever)
We can agree that just asking them nicely doesn't work: Greg has been
poking maintainers for years, the -stable project got a bunch of
publicity, and the instructions for including a patch in -stable are
pretty straightforward.
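(The mechanics really are just one line in the commit message's sign-off
area, e.g. something like:

	Cc: stable@vger.kernel.org # 4.14+

with the version annotation being optional.)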
You're saying that AUTOSEL doesn't work, so let's ignore that too.
How should we proceed?
On Tue 17-04-18 13:31:51, Sasha Levin wrote:
> On Tue, Apr 17, 2018 at 01:41:44PM +0200, Jan Kara wrote:
> >On Mon 16-04-18 17:23:30, Sasha Levin wrote:
> >> On Mon, Apr 16, 2018 at 07:06:04PM +0200, Pavel Machek wrote:
> >> >On Mon 2018-04-16 16:37:56, Sasha Levin wrote:
> >> >> On Mon, Apr 16, 2018 at 12:30:19PM -0400, Steven Rostedt wrote:
> >> >> >On Mon, 16 Apr 2018 16:19:14 +0000
> >> >> >Sasha Levin <[email protected]> wrote:
> >> >> >
> >> >> >> >Wait! What does that mean? What's the purpose of stable if it is as
> >> >> >> >broken as mainline?
> >> >> >>
> >> >> >> This just means that if there is a fix that went in mainline, and the
> >> >> >> fix is broken somehow, we'd rather take the broken fix than not.
> >> >> >>
> >> >> >> In this scenario, *something* will be broken, it's just a matter of
> >> >> >> what. We'd rather have the same thing broken between mainline and
> >> >> >> stable.
> >> >> >
> >> >> >Honestly, I think that removes all value of the stable series. I
> >> >> >remember when the stable series were first created. People were saying
> >> >> >that it wouldn't even get to more than 5 versions, because the bar for
> >> >> >backporting was supposed to be very high. Today it's just a fork of the
> >> >> >kernel at a given version. No more features, but we will be OK with
> >> >> >regressions. I'm struggling to see what the benefit of it is supposed to
> >> >> >be?
> >> >>
> >> >> It's not "OK with regressions".
> >> >>
> >> >> Let's look at a hypothetical example: You have a 4.15.1 kernel that has
> >> >> a broken printf() behaviour so that when you:
> >> >>
> >> >> pr_err("%d", 5)
> >> >>
> >> >> Would print:
> >> >>
> >> >> "Microsoft Rulez"
> >> >>
> >> >> Bad, right? So you went ahead and fixed it, and now it prints "5" as you
> >> >> might expect. But alas, with your patch, running:
> >> >>
> >> >> pr_err("%s", "hi!")
> >> >>
> >> >> Would show a cat picture for 5 seconds.
> >> >>
> >> >> Should we take your patch in -stable or not? If we don't, we're stuck
> >> >> with the original issue while the mainline kernel will behave
> >> >> differently, but if we do - we introduce a new regression.
> >> >
> >> >Of course not.
> >> >
> >> >- It must be obviously correct and tested.
> >> >
> >> >If it introduces new bug, it is not correct, and certainly not
> >> >obviously correct.
> >>
> >> As you might have noticed, we don't strictly follow the rules.
> >>
> >> Take a look at the whole PTI story as an example. It's way more than 100
> >> lines, it's not obviously correct, it fixed more than 1 thing, and so
> >> on, and yet it went in -stable!
> >>
> >> Would you argue we shouldn't have backported PTI to -stable?
> >
> >So I agree with that being backported. But I think this nicely demostrates
> >a point some people are trying to make in this thread. We do take fixes
> >with high risk or regression if they fix serious enough issue. Also we do
> >take fixes to non-serious stuff (such as addition of device ID) if the
> >chances of regression are really low.
> >
> >So IMHO the metric for including the fix is not solely "how annoying this
> >can be to the user" but rather something like:
> >
> >score = (how annoying the bug is) * ((1 / (chance of regression due to
> > including this)) - 1)^3
> >
> >(the constants are somewhat arbitrary and subject to tuning ;). Now both
> >the 'annoying' and 'regression chance' parts are subjective and sometimes
> >difficult to estimate, so don't take the formula too seriously, but it
> >demonstrates the point. I think we all agree we want to fix annoying stuff
> >and we don't want regressions. But you need to somehow weigh this over
> >your expected userbase - and this is where your argument "but someone
> >might be annoyed by LEDs not working so let's include it" has problems -
> >it should rather be "is the annoyance of non-working LEDs over the
> >expected user base high enough to risk a regression due to this patch for
> >someone in that same user base"? The answer to this second question is
> >not clear at all to a casual reviewer, and that's why IMHO we have the
> >CC: stable tag, as the maintainer is supposed to have at least a bit
> >better clue.
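Purely to make the shape of that heuristic concrete, here is a minimal
sketch in C with made-up numbers; the function name and the inputs are
illustrative only, and the constants are as arbitrary as in the formula
above:

	/* score a candidate backport: higher means more worth taking */
	static double stable_score(double annoyance, double regression_chance)
	{
		double safety = 1.0 / regression_chance - 1.0;

		return annoyance * safety * safety * safety;
	}

	/*
	 * e.g. annoyance 5, 1% regression chance:  5 * 99^3 ~= 4.9 million
	 *      annoyance 5, 10% regression chance: 5 * 9^3   =     3645
	 * so a modest increase in regression risk dwarfs any gain in
	 * annoyance fixed, which is what the cubic term is meant to express.
	 */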
>
> We may be able to guesstimate the 'regression chance', but there's no
> way we can guess the 'annoyance' once. There are so many different use
> cases that we just can't even guess how many people would get "annoyed"
> by something.
As a maintainer, I hope I have a reasonable idea of what the common use
cases for my subsystem are. Those are what I cater to when estimating
'annoyance'. Sure, I don't know all of the use cases, so people doing
unusual stuff hit more bugs and have to report them to get fixes included
in -stable. But for me this is a preferable tradeoff over the risk of
regression, so this is the rule I use when tagging for stable. Now, I'm not
a -stable maintainer, and I fully agree with the "those who do the work
decide" principle, so pick whatever patches you think are appropriate; I
just wanted to explain why I don't think more patches in stable are
necessarily good.
> Even regression chance is tricky, look at the commits I've linked
> earlier in the thread. Even the most trivial looking commits that end up
> in stable have a chance for regression.
Sure, you can never be certain and I think people (including me)
underestimate the chance of regressions for "trivial" patches. But you just
estimate a chance, you may be lucky, you may not...
> >Another point I wanted to make is that if chance a patch causes a
> >regression is about 2% as you said somewhere else in a thread, then by
> >adding 20 patches that "may fix a bug that is annoying for someone" you've
> >just increased a chance there's a regression in the release by 34%. And
>
> So I've said that the rejection rate is less than 2%. This includes
> all commits that I have proposed for -stable, but didn't end up being
> included in -stable.
>
> This includes commits that the author/maintainers NACKed, commits that
> didn't do anything on older kernels, commits that were buggy but were
> caught before the kernel was released, commits that failed to build on
> an arch I didn't test it on originally and so on.
>
> After thousands of merged AUTOSEL patches I can count the number of
> times a commit has caused a regression and had to be removed on one
> hand.
>
> >this is not just a math game, this also roughly matches a real experience
> >with maintaining our enterprise kernels. Do 20 "maybe" fixes outweigh such
> >regression chance? And I also note that for a regression to get reported so
> >that it gets included into your 2% estimate of a patch regression rate,
> >someone must be bothered enough by it to triage it and send an email
> >somewhere so that already falls into a category of "serious" stuff to me.
>
> It is indeed a numbers game, but the regression rate isn't 2%, it's
> closer to 0.05%.
Honestly, I think 0.05% is too optimistic :) Quick grepping of the 4.14
stable tree suggests some 13 commits were reverted from stable due to bugs.
That's some 0.4%, and that doesn't count fixes that were applied to
fix other regressions.
But the actual numbers don't really matter that much; in principle, the
more patches you add, the higher the chance of a regression. You can't
change that, so you had better have a good reason to include a patch...
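For reference, the compounding arithmetic behind the earlier 2%-per-patch
example is simply (assuming independent, equally likely regressions):

	P(at least one regression in 20 patches) = 1 - (1 - 0.02)^20
	                                        ~= 1 - 0.67 ~= 33%

and 13 reverts at roughly 0.4% implies a sample on the order of 3000+
stable commits in 4.14.x.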
Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR
On Tue, 17 Apr 2018, Sasha Levin wrote:
> How do I get the XFS folks to send their stuff to -stable? (we have
> quite a few customers who use XFS)
If XFS (or *any* other subsystem) doesn't have enough manpower of upstream
maintainers to deal with stable, we just have to accept that and find an
answer to that.
If XFS folks claim that they don't have enough mental capacity to
create/verify XFS backports, I totally don't see how any kind of AI would
have.
If your business relies on XFS (and so does ours, BTW) or any other
subsystem that doesn't have enough manpower to care for stable, the proper
solution (and contribution) would be just bringing more people into the
XFS community.
To put it simply -- I don't think the simple lack of actual human
brainpower can be reasonably resolved in other way than bringing more of
it in.
Thanks,
--
Jiri Kosina
SUSE Labs
On Tue, Apr 17, 2018 at 05:55:49PM +0200, Jan Kara wrote:
>On Tue 17-04-18 13:31:51, Sasha Levin wrote:
>> We may be able to guesstimate the 'regression chance', but there's no
>> way we can guess the 'annoyance' one. There are so many different use
>> cases that we just can't even guess how many people would get "annoyed"
>> by something.
>
>As a maintainer, I hope I have a reasonable idea of what the common use
>cases for my subsystem are. Those are what I cater to when estimating
>'annoyance'. Sure, I don't know all of the use cases, so people doing
>unusual stuff hit more bugs and have to report them to get fixes included
>in -stable. But for me this is a preferable tradeoff over the risk of
>regression, so this is the rule I use when tagging for stable. Now, I'm not
>a -stable maintainer, and I fully agree with the "those who do the work
>decide" principle, so pick whatever patches you think are appropriate; I
>just wanted to explain why I don't think more patches in stable are
>necessarily good.
The AUTOSEL story is different for subsystems that don't do -stable, and
subsystems that are actually doing the work (like yourself).
I'm not trying to override active maintainers, I'm trying to help them
make decisions.
The AUTOSEL bot will attempt to apply any patch it deems suitable for
-stable on all -stable branches, find possible dependencies, build them,
and run any tests that you might deem necessary.
You would be able to start your analysis without "wasting" time on doing
a bunch of "manual labor".
There's a big difference between subsystems like yours and most of the
rest of the kernel.
>> Even regression chance is tricky, look at the commits I've linked
>> earlier in the thread. Even the most trivial looking commits that end up
>> in stable have a chance for regression.
>
>Sure, you can never be certain and I think people (including me)
>underestimate the chance of regressions for "trivial" patches. But you just
>estimate a chance, you may be lucky, you may not...
>
>> >Another point I wanted to make is that if chance a patch causes a
>> >regression is about 2% as you said somewhere else in a thread, then by
>> >adding 20 patches that "may fix a bug that is annoying for someone" you've
>> >just increased a chance there's a regression in the release by 34%. And
>>
>> So I've said that the rejection rate is less than 2%. This includes
>> all commits that I have proposed for -stable, but didn't end up being
>> included in -stable.
>>
>> This includes commits that the author/maintainers NACKed, commits that
>> didn't do anything on older kernels, commits that were buggy but were
>> caught before the kernel was released, commits that failed to build on
>> an arch I didn't test it on originally and so on.
>>
>> After thousands of merged AUTOSEL patches I can count the number of
>> times a commit has caused a regression and had to be removed on one
>> hand.
>>
>> >this is not just a math game, this also roughly matches a real experience
>> >with maintaining our enterprise kernels. Do 20 "maybe" fixes outweight such
>> >regression chance? And I also note that for a regression to get reported so
>> >that it gets included into your 2% estimate of a patch regression rate,
>> >someone must be bothered enough by it to triage it and send an email
>> >somewhere so that already falls into a category of "serious" stuff to me.
>>
>> It is indeed a numbers game, but the regression rate isn't 2%, it's
>> closer to 0.05%.
>
>Honestly, I think 0.05% is too optimistic :) Quick grepping of the 4.14
>stable tree suggests some 13 commits were reverted from stable due to bugs.
>That's some 0.4%, and that doesn't count fixes that were applied to
>fix other regressions.
0.05% is for commits that were merged in stable but later fixed or
reverted because they introduced a regression. By grepping for reverts
you also include things such as:
- Reverts of commits that were in the corresponding mainline tree
- Reverts of commits that didn't introduce regressions
>But the actual numbers don't really matter that much; in principle, the
>more patches you add, the higher the chance of a regression. You can't
>change that, so you had better have a good reason to include a patch...
You increase the chance of regressions, but you also increase the chance
of fixing bugs that affect users.
My claim is that the chance to fix bugs increases far more significantly
than the chance to introduce regressions.
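To put rough numbers on that claim, using the figures discussed above
(and assuming, for the sake of argument, that every backport addresses at
least one real bug):

	per 1000 backports:
	  regressions introduced at 0.05%:  ~0.5
	  regressions introduced at 0.4%:   ~4
	  bug fixes shipped:                up to ~1000

Even at the pessimistic rate, the fixes outnumber the regressions by
orders of magnitude.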
On Tue, 2018-04-17 at 17:52 +0200, Jiri Kosina wrote:
> On Tue, 17 Apr 2018, Sasha Levin wrote:
>
> > How do I get the XFS folks to send their stuff to -stable? (we have
> > quite a few customers who use XFS)
>
> If XFS (or *any* other subsystem) doesn't have enough manpower of upstream
> maintainers to deal with stable, we just have to accept that and find an
> answer to that.
>
> If XFS folks claim that they don't have enough mental capacity to
> create/verify XFS backports, I totally don't see how any kind of AI would
> have.
>
> If your business relies on XFS (and so does ours, BTW) or any other
> subsystem that doesn't have enough manpower to care for stable, the proper
> solution (and contribution) would be just bringing more people into the
> XFS community.
>
> To put it simply -- I don't think the simple lack of actual human
> brainpower can be reasonably resolved in other way than bringing more of
> it in.
Not to worry... soon enough it'll be submitting properly massaged
backports of the stuff it submitted upstream :)
-Mike
On Tue, Apr 17, 2018 at 05:52:30PM +0200, Jiri Kosina wrote:
>On Tue, 17 Apr 2018, Sasha Levin wrote:
>
>> How do I get the XFS folks to send their stuff to -stable? (we have
>> quite a few customers who use XFS)
>
>If XFS (or *any* other subsystem) doesn't have enough manpower of upstream
>maintainers to deal with stable, we just have to accept that and find an
>answer to that.
This is exactly what I'm doing. Many subsystems don't have enough
manpower to deal with -stable, so I'm trying to help.
>If XFS folks claim that they don't have enough mental capacity to
>create/verify XFS backports, I totally don't see how any kind of AI would
>have.
Because creating backports is not all about mental capacity!
A lot of time gets wasted on going through the list of commits,
backporting each of those commits into every -stable tree we have,
building it, running tests, etc.
So it's not all about pure mental capacity, but more about the per-patch
time it takes to get -stable done.
If I can cut down on that by suggesting a list of commits and doing builds
and tests, what's the problem?
>If your business relies on XFS (and so does ours, BTW) or any other
>subsystem that doesn't have enough manpower to care for stable, the proper
>solution (and contribution) would be just bringing more people into the
>XFS community.
Microsoft's business relies on quite a few kernel subsystems. While we
try to bring more people into the kernel community (we're hiring!), as you
might know, it's not easy to find kernel folks.
So just "get more people" isn't a good solution. It doesn't scale
either.
>To put it simply -- I don't think the simple lack of actual human
>brainpower can be reasonably resolved in other way than bringing more of
>it in.
>
>Thanks,
>
>--
>Jiri Kosina
>SUSE Labs
>
On Tue 17-04-18 16:19:35, Sasha Levin wrote:
> On Tue, Apr 17, 2018 at 05:55:49PM +0200, Jan Kara wrote:
> >> Even regression chance is tricky, look at the commits I've linked
> >> earlier in the thread. Even the most trivial looking commits that end up
> >> in stable have a chance for regression.
> >
> >Sure, you can never be certain and I think people (including me)
> >underestimate the chance of regressions for "trivial" patches. But you just
> >estimate a chance, you may be lucky, you may not...
> >
> >> >Another point I wanted to make is that if chance a patch causes a
> >> >regression is about 2% as you said somewhere else in a thread, then by
> >> >adding 20 patches that "may fix a bug that is annoying for someone" you've
> >> >just increased a chance there's a regression in the release by 34%. And
> >>
> >> So I've said that the rejection rate is less than 2%. This includes
> >> all commits that I have proposed for -stable, but didn't end up being
> >> included in -stable.
> >>
> >> This includes commits that the author/maintainers NACKed, commits that
> >> didn't do anything on older kernels, commits that were buggy but were
> >> caught before the kernel was released, commits that failed to build on
> >> an arch I didn't test it on originally and so on.
> >>
> >> After thousands of merged AUTOSEL patches I can count the number of
> >> times a commit has caused a regression and had to be removed on one
> >> hand.
> >>
> >> >this is not just a math game, this also roughly matches a real experience
> >> >with maintaining our enterprise kernels. Do 20 "maybe" fixes outweight such
> >> >regression chance? And I also note that for a regression to get reported so
> >> >that it gets included into your 2% estimate of a patch regression rate,
> >> >someone must be bothered enough by it to triage it and send an email
> >> >somewhere so that already falls into a category of "serious" stuff to me.
> >>
> >> It is indeed a numbers game, but the regression rate isn't 2%, it's
> >> closer to 0.05%.
> >
> >Honestly, I think 0.05% is too optimistic :) Quick grepping of the 4.14
> >stable tree suggests some 13 commits were reverted from stable due to bugs.
> >That's some 0.4%, and that doesn't count fixes that were applied to
> >fix other regressions.
>
> 0.05% is for commits that were merged in stable but later fixed or
> reverted because they introduced a regression. By grepping for reverts
> you also include things such as:
>
> - Reverts of commits that were in the corresponding mainline tree
> - Reverts of commits that didn't introduce regressions
Actually I was careful enough to include only commits that got merged as
part of the stable process into 4.14.x but got later reverted in 4.14.y.
That's where the 0.4% number came from. So I believe all of those cases
(13 in absolute numbers) were user visible regressions during the stable
process.
Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR
On Tue 17-04-18 14:36:44, Sasha Levin wrote:
> On Tue, Apr 17, 2018 at 04:22:46PM +0200, Michal Hocko wrote:
> >On Tue 17-04-18 13:39:33, Sasha Levin wrote:
> >[...]
> >> But mm/ commits don't come only from these people. Here's a concrete
> >> example we can discuss:
> >>
> >> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c61611f70958d86f659bca25c02ae69413747a8d
> >
> >I would be really careful. Because that requires auditing all callers to
> >be compliant with the change. This is just _too_ easy to backport
> >without noticing a failure. Now consider the other side. Is there any
> >real bug report backing this? This behavior was like that for quite some
> >time but I do not remember any actual bug report and the changelog
> >doesn't mention one either. It is about a theoretical problem.
>
> https://lkml.org/lkml/2018/3/19/430
>
> There's even a fun little reproducer that allowed me to confirm it's an
> issue (at least) on 4.15.
>
> Heck, it might even qualify as a CVE.
>
> >So if this was to be merged to stable then the changelog should contain
> >a big fat warning about the existing users and how they should be
> >checked.
>
> So what I'm asking is why *wasn't* it sent to stable? Yes, it requires
> additional work backporting this, but what I'm saying is that this
> didn't happen at all.
Do not ask me. I wasn't involved. But I would _guess_ that the original
bug is not all that serious, because it requires some specific privileges
and it is quite unlikely that somebody privileged would want to shoot
themselves in the foot. But this is just my wild guess.
Anyway, I am pretty sure that if the triggering BUG were serious enough,
then it would be much safer to remove it for stable backports.
--
Michal Hocko
SUSE Labs
On Tue, Apr 17, 2018 at 07:57:54PM +0200, Jan Kara wrote:
>Actually I was careful enough to include only commits that got merged as
>part of the stable process into 4.14.x but got later reverted in 4.14.y.
>That's where the 0.4% number came from. So I believe all of those cases
>(13 in absolute numbers) were user visible regressions during the stable
>process.
I looked at them, and there are 2 things in play here:
- Quite a few of those reverts are because of the PTI work. I'm not
sure how we treat it, but yes - it skews statistics here.
- 2 of them were reverts for device tree changes for a device that
didn't exist in 4.14, and shouldn't have had any user visible
changes.
On Tue 2018-04-17 13:45:59, Sasha Levin wrote:
> On Tue, Apr 17, 2018 at 02:24:54PM +0200, Petr Mladek wrote:
> >Back to the trend. Last week I got autosel mails even for
> >patches that were still being discussed, had issues, and
> >were far from upstream:
> >
> > https://lkml.kernel.org/r/DM5PR2101MB1032AB19B489D46B717B50D4FBBB0@DM5PR2101MB1032.namprd21.prod.outlook.com
> > https://lkml.kernel.org/r/DM5PR2101MB10327FA0A7E0D2C901E33B79FBBB0@DM5PR2101MB1032.namprd21.prod.outlook.com
> >
> >It might be a good idea if the mail asked to add a Fixes: tag
> >or CC the stable mailing list. But the mail suggested adding the
> >unfinished patch into the stable branch directly (even before
> >upstreaming?).
>
> I obviously didn't suggest that this patch will go in -stable before
> it's upstream.
>
> I've started doing those because some folks can't be arsed to reply to a
> review request for a patch that is months old. I found that if I send
> these mails while the discussion is still going on I'd get a much better
> response rate from people.
I see. It makes sense.
> If you think any of these patches should go in stable, there are two
> ways to go about it:
>
> - You end up adding the -stable tag yourself, and it would follow the
> usual route where Greg picks it up.
> - You reply to that mail, and the patch would wait in a list until my
> script notices it made it upstream, at which point it would get
> queued for stable.
It would be great if the options were described in the mail.
I wonder if it would make sense to also add a tag that would
say that the commit is not suitable for stable. It might
help both sides. The maintainers would be able to share
their opinion and eventually reduce the mail from autosel.
You would get feedback that maintainers have considered
the patch for stable. It might even be useful for
teaching the AI.
> >Now, there are only a handful of printk patches in each
> >release, so it is still doable. I just do not understand
> >how other maintainers, from much busier subsystems,
> >could cope with this trend.
>
> So yes, I'm aware that the volume of patches is huge, but there's not
> much I can do about it because it's just a subset of the kernel's patch
> volume and since the kernel gets more and more patches each release, the
> volume of stable commits is bound to grow as well.
Yes, but the growth in stable is currently much faster than the growth
in mainline. It might be fine if it were caused just by engaging
subsystems that have ignored stable so far, but I am not sure that is
the case. Also, I am not sure about your plans.
Anyway, I am surprised that patches can go into stable
so easily (no response -> accepted), while it is pretty
hard to get through the review process for mainline.
Of course, many patches go into mainline without review
as well. But the difference is that they are pushed by
people who are familiar with and responsible for the affected
area.
I can understand the pain. There are surely people that
do not care about stable, because it takes time, it is hard
to make decisions, flashbacks to the old code are painful,
etc. Well, this is the reason why the maintenance support
is and should be limited.
Anyway, I think that it cannot be done reasonably without
maintainers. You should be careful so that even the currently
cooperating maintainers do not start considering autosel
mails as spam. (It is not my case; printk is a small thing.
But I could imagine that it might stop being bearable
in bigger subsystems, as is already the case with xfs.)
Best Regards,
Petr
On 16-04-2018 at 19:19, Sasha Levin wrote:
> On Mon, Apr 16, 2018 at 12:12:24PM -0400, Steven Rostedt wrote:
>> On Mon, 16 Apr 2018 16:02:03 +0000
>> Sasha Levin <[email protected]> wrote:
>>
>>> One of the things Greg is pushing strongly for is "bug compatibility":
>>> we want the kernel to behave the same way between mainline and stable.
>>> If the code is broken, it should be broken in the same way.
>>
>> Wait! What does that mean? What's the purpose of stable if it is as
>> broken as mainline?
>
> This just means that if there is a fix that went in mainline, and the
> fix is broken somehow, we'd rather take the broken fix than not.
>
> In this scenario, *something* will be broken, it's just a matter of
> what. We'd rather have the same thing broken between mainline and
> stable.
>
Yeah, but _intentionally_ breaking existing setups to stay "bug
compatible" _is_ a _regression_ you _really_ _don't_ want in a stable
supported distro. Because end-users don't care about upstream breaking
stuff... it's the distro that takes the heat for that...
Something "already broken" is not a regression...
As a distro maintainer, that means one now has to review _every_ patch
that carries "AUTOSEL", follow all the mail threads that come up about
it, then track if it landed in the -stable queue, and read every response
and possible objection to all patches in the -stable queue a second time
around... then check if it still got included in the final stable point
release and then either revert them in the distro kernel or go track down
all the follow-up fixes needed...
Just to avoid being "bug compatible with master"
--
Thomas
On Thu, Apr 19, 2018 at 02:41:33PM +0300, Thomas Backlund wrote:
> Den 16-04-2018 kl. 19:19, skrev Sasha Levin:
> > On Mon, Apr 16, 2018 at 12:12:24PM -0400, Steven Rostedt wrote:
> > > On Mon, 16 Apr 2018 16:02:03 +0000
> > > Sasha Levin <[email protected]> wrote:
> > >
> > > > One of the things Greg is pushing strongly for is "bug compatibility":
> > > > we want the kernel to behave the same way between mainline and stable.
> > > > If the code is broken, it should be broken in the same way.
> > >
> > > Wait! What does that mean? What's the purpose of stable if it is as
> > > broken as mainline?
> >
> > This just means that if there is a fix that went in mainline, and the
> > fix is broken somehow, we'd rather take the broken fix than not.
> >
> > In this scenario, *something* will be broken, it's just a matter of
> > what. We'd rather have the same thing broken between mainline and
> > stable.
> >
>
> Yeah, but _intentionally_ breaking existing setups to stay "bug compatible"
> _is_ a _regression_ you _really_ _dont_ want in a stable
> supported distro. Because end-users dont care about upstream breaking
> stuff... its the distro that takes the heat for that...
>
> Something "already broken" is not a regression...
>
> As distro maintainer that means one now have to review _every_ patch that
> carries "AUTOSEL", follow all the mail threads that comes up about it, then
> track if it landed in -stable queue, and read every response and possible
> objection to all patches in the -stable queue a second time around... then
> check if it still got included in final stable point relase and then either
> revert them in distro kernel or go track down all the follow-up fixes
> needed...
>
> Just to avoid being "bug compatible with master"
I've done this "bug compatible" "breakage" more than the AUTOSEL stuff
has in the past, so you had better also be reviewing all of my normal
commits as well :)
Anyway, we are trying not to do this, but it does, and will,
occasionally happen. Look, we just did that for one platform for
4.9.94! And the key to all of this is good testing, which we are now
doing, and hopefully you are also doing as well.
thanks,
greg k-h
On Thu 19-04-18 15:59:43, Greg KH wrote:
> On Thu, Apr 19, 2018 at 02:41:33PM +0300, Thomas Backlund wrote:
> > Den 16-04-2018 kl. 19:19, skrev Sasha Levin:
> > > On Mon, Apr 16, 2018 at 12:12:24PM -0400, Steven Rostedt wrote:
> > > > On Mon, 16 Apr 2018 16:02:03 +0000
> > > > Sasha Levin <[email protected]> wrote:
> > > >
> > > > > One of the things Greg is pushing strongly for is "bug compatibility":
> > > > > we want the kernel to behave the same way between mainline and stable.
> > > > > If the code is broken, it should be broken in the same way.
> > > >
> > > > Wait! What does that mean? What's the purpose of stable if it is as
> > > > broken as mainline?
> > >
> > > This just means that if there is a fix that went in mainline, and the
> > > fix is broken somehow, we'd rather take the broken fix than not.
> > >
> > > In this scenario, *something* will be broken, it's just a matter of
> > > what. We'd rather have the same thing broken between mainline and
> > > stable.
> > >
> >
> > Yeah, but _intentionally_ breaking existing setups to stay "bug compatible"
> > _is_ a _regression_ you _really_ _dont_ want in a stable
> > supported distro. Because end-users dont care about upstream breaking
> > stuff... its the distro that takes the heat for that...
> >
> > Something "already broken" is not a regression...
> >
> > As distro maintainer that means one now have to review _every_ patch that
> > carries "AUTOSEL", follow all the mail threads that comes up about it, then
> > track if it landed in -stable queue, and read every response and possible
> > objection to all patches in the -stable queue a second time around... then
> > check if it still got included in final stable point relase and then either
> > revert them in distro kernel or go track down all the follow-up fixes
> > needed...
> >
> > Just to avoid being "bug compatible with master"
>
> I've done this "bug compatible" "breakage" more than the AUTOSEL stuff
> has in the past, so you had better also be reviewing all of my normal
> commits as well :)
>
> Anyway, we are trying not to do this, but it does, and will,
> occasionally happen.
Sure, that's understood. So this was just a misunderstanding. Sasha's
original comment really sounded like "bug compatibility" with current
master is desirable, and that made me go "Are you serious?" as well...
Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR
On Thu, Apr 19, 2018 at 04:05:45PM +0200, Jan Kara wrote:
> On Thu 19-04-18 15:59:43, Greg KH wrote:
> > On Thu, Apr 19, 2018 at 02:41:33PM +0300, Thomas Backlund wrote:
> > > Den 16-04-2018 kl. 19:19, skrev Sasha Levin:
> > > > On Mon, Apr 16, 2018 at 12:12:24PM -0400, Steven Rostedt wrote:
> > > > > On Mon, 16 Apr 2018 16:02:03 +0000
> > > > > Sasha Levin <[email protected]> wrote:
> > > > >
> > > > > > One of the things Greg is pushing strongly for is "bug compatibility":
> > > > > > we want the kernel to behave the same way between mainline and stable.
> > > > > > If the code is broken, it should be broken in the same way.
> > > > >
> > > > > Wait! What does that mean? What's the purpose of stable if it is as
> > > > > broken as mainline?
> > > >
> > > > This just means that if there is a fix that went in mainline, and the
> > > > fix is broken somehow, we'd rather take the broken fix than not.
> > > >
> > > > In this scenario, *something* will be broken, it's just a matter of
> > > > what. We'd rather have the same thing broken between mainline and
> > > > stable.
> > > >
> > >
> > > Yeah, but _intentionally_ breaking existing setups to stay "bug compatible"
> > > _is_ a _regression_ you _really_ _dont_ want in a stable
> > > supported distro. Because end-users dont care about upstream breaking
> > > stuff... its the distro that takes the heat for that...
> > >
> > > Something "already broken" is not a regression...
> > >
> > > As distro maintainer that means one now have to review _every_ patch that
> > > carries "AUTOSEL", follow all the mail threads that comes up about it, then
> > > track if it landed in -stable queue, and read every response and possible
> > > objection to all patches in the -stable queue a second time around... then
> > > check if it still got included in final stable point relase and then either
> > > revert them in distro kernel or go track down all the follow-up fixes
> > > needed...
> > >
> > > Just to avoid being "bug compatible with master"
> >
> > I've done this "bug compatible" "breakage" more than the AUTOSEL stuff
> > has in the past, so you had better also be reviewing all of my normal
> > commits as well :)
> >
> > Anyway, we are trying not to do this, but it does, and will,
> > occasionally happen.
>
> Sure, that's understood. So this was just misunderstanding. Sasha's
> original comment really sounded like "bug compatibility" with current
> master is desirable and that made me go "Are you serious?" as well...
As I said before in this thread, yes, sometimes I do this on purpose.
As a specific example, see a recent bluetooth patch that caused a
regression on some chromebook devices. The chromeos developers
rightfully complained, and I left the commit in there to provide the
needed "leverage" on the upstream developers to fix this properly.
Otherwise, if I had reverted the stable patch, then when people move to a
newer kernel version, things would break and no one would remember why.
I also wrote a long response as to _why_ I do this, and even though it
does happen, why it is still worth taking the stable updates. Please
see the archives for the full details. I don't want to duplicate this
again here.
thanks,
greg k-h
On 19.04.2018 at 16:59, Greg KH wrote:
> On Thu, Apr 19, 2018 at 02:41:33PM +0300, Thomas Backlund wrote:
>> Den 16-04-2018 kl. 19:19, skrev Sasha Levin:
>>> On Mon, Apr 16, 2018 at 12:12:24PM -0400, Steven Rostedt wrote:
>>>> On Mon, 16 Apr 2018 16:02:03 +0000
>>>> Sasha Levin <[email protected]> wrote:
>>>>
>>>>> One of the things Greg is pushing strongly for is "bug compatibility":
>>>>> we want the kernel to behave the same way between mainline and stable.
>>>>> If the code is broken, it should be broken in the same way.
>>>>
>>>> Wait! What does that mean? What's the purpose of stable if it is as
>>>> broken as mainline?
>>>
>>> This just means that if there is a fix that went in mainline, and the
>>> fix is broken somehow, we'd rather take the broken fix than not.
>>>
>>> In this scenario, *something* will be broken, it's just a matter of
>>> what. We'd rather have the same thing broken between mainline and
>>> stable.
>>>
>>
>> Yeah, but _intentionally_ breaking existing setups to stay "bug compatible"
>> _is_ a _regression_ you _really_ _dont_ want in a stable
>> supported distro. Because end-users dont care about upstream breaking
>> stuff... its the distro that takes the heat for that...
>>
>> Something "already broken" is not a regression...
>>
>> As distro maintainer that means one now have to review _every_ patch that
>> carries "AUTOSEL", follow all the mail threads that comes up about it, then
>> track if it landed in -stable queue, and read every response and possible
>> objection to all patches in the -stable queue a second time around... then
>> check if it still got included in final stable point relase and then either
>> revert them in distro kernel or go track down all the follow-up fixes
>> needed...
>>
>> Just to avoid being "bug compatible with master"
>
> I've done this "bug compatible" "breakage" more than the AUTOSEL stuff
> has in the past, so you had better also be reviewing all of my normal
> commits as well :)
>
Yeah, I do... and same goes there ... if there is a known issue, then
same procedure... Either revert, or try to track down fixes...
> Anyway, we are trying not to do this, but it does, and will,
> occasionally happen. Look, we just did that for one platform for
> 4.9.94! And the key to all of this is good testing, which we are now
> doing, and hopefully you are also doing as well.
Yeah, but having to test stuff with known breakages is no fun, so we try
to avoid that
--
Thomas
On Thu, Apr 19, 2018 at 06:04:26PM +0300, Thomas Backlund wrote:
>Den 19.04.2018 kl. 16:59, skrev Greg KH:
>>Anyway, we are trying not to do this, but it does, and will,
>>occasionally happen. Look, we just did that for one platform for
>>4.9.94! And the key to all of this is good testing, which we are now
>>doing, and hopefully you are also doing as well.
>
>Yeah, but having to test stuff with known breakages is no fun, so we
>try to avoid that
Known breakages are easier to deal with than unknown ones :)
I think that "bug compatibility" is basically a policy on *which*
regressions you'll see vs *if* you'll see a regression.
We'll never pull in a commit that introduces a bug but doesn't fix
another one, right? So if you have to deal with a regression anyway,
might as well deal with a regression that is also seen on mainline, so
that when you upgrade your stable kernel you'll keep the same set of
regressions to deal with.
On 19.04.2018 at 17:22, Greg KH wrote:
> On Thu, Apr 19, 2018 at 04:05:45PM +0200, Jan Kara wrote:
>> On Thu 19-04-18 15:59:43, Greg KH wrote:
>>> On Thu, Apr 19, 2018 at 02:41:33PM +0300, Thomas Backlund wrote:
>>>> Den 16-04-2018 kl. 19:19, skrev Sasha Levin:
>>>>> On Mon, Apr 16, 2018 at 12:12:24PM -0400, Steven Rostedt wrote:
>>>>>> On Mon, 16 Apr 2018 16:02:03 +0000
>>>>>> Sasha Levin <[email protected]> wrote:
>>>>>>
>>>>>>> One of the things Greg is pushing strongly for is "bug compatibility":
>>>>>>> we want the kernel to behave the same way between mainline and stable.
>>>>>>> If the code is broken, it should be broken in the same way.
>>>>>>
>>>>>> Wait! What does that mean? What's the purpose of stable if it is as
>>>>>> broken as mainline?
>>>>>
>>>>> This just means that if there is a fix that went in mainline, and the
>>>>> fix is broken somehow, we'd rather take the broken fix than not.
>>>>>
>>>>> In this scenario, *something* will be broken, it's just a matter of
>>>>> what. We'd rather have the same thing broken between mainline and
>>>>> stable.
>>>>>
>>>>
>>>> Yeah, but _intentionally_ breaking existing setups to stay "bug compatible"
>>>> _is_ a _regression_ you _really_ _dont_ want in a stable
>>>> supported distro. Because end-users dont care about upstream breaking
>>>> stuff... its the distro that takes the heat for that...
>>>>
>>>> Something "already broken" is not a regression...
>>>>
>>>> As distro maintainer that means one now have to review _every_ patch that
>>>> carries "AUTOSEL", follow all the mail threads that comes up about it, then
>>>> track if it landed in -stable queue, and read every response and possible
>>>> objection to all patches in the -stable queue a second time around... then
>>>> check if it still got included in final stable point relase and then either
>>>> revert them in distro kernel or go track down all the follow-up fixes
>>>> needed...
>>>>
>>>> Just to avoid being "bug compatible with master"
>>>
>>> I've done this "bug compatible" "breakage" more than the AUTOSEL stuff
>>> has in the past, so you had better also be reviewing all of my normal
>>> commits as well :)
>>>
>>> Anyway, we are trying not to do this, but it does, and will,
>>> occasionally happen.
>>
>> Sure, that's understood. So this was just misunderstanding. Sasha's
>> original comment really sounded like "bug compatibility" with current
>> master is desirable and that made me go "Are you serious?" as well...
>
> As I said before in this thread, yes, sometimes I do this on purpose.
>
And I guess this is the one that gives people the feeling that
"stable is not as stable as it used to be" ...
> As a specific example, see a recent bluetooth patch that caused a
> regression on some chromebook devices. The chromeos developers
> rightfully complained, and I left the commit in there to provide the
> needed "leverage" on the upstream developers to fix this properly.
> Otherwise, if I had reverted the stable patch, then when people move to a
> newer kernel version, things would break and no one would remember why.
I do understand what you are trying to do...
But with my distro hat on I have to treat things differently (and I don't
think I'm alone in doing it this way...)
Known breakages get reverted even before they hit QA, so end-users won't
see the issue at all...
So the only ones to see the issue are those building with the latest
upstream without their own patches applied...
>
> I also wrote a long response as to _why_ I do this, and even though it
> does happen, why it still is worth taking the stable updates. Please
> see the archives for the full details. I don't want to duplicate this
> again here.
And we do use latest stable (with some delay as I don't want to overload
QA & end users with a new kernel every week :))
We just revert known broken (or add known fixes) before releasing them
to our users
--
Thomas
On Thu, Apr 19, 2018 at 06:16:26PM +0300, Thomas Backlund wrote:
> On 19.04.2018 at 17:22, Greg KH wrote:
> > On Thu, Apr 19, 2018 at 04:05:45PM +0200, Jan Kara wrote:
> > > On Thu 19-04-18 15:59:43, Greg KH wrote:
> > > > On Thu, Apr 19, 2018 at 02:41:33PM +0300, Thomas Backlund wrote:
> > > > > On 16-04-2018 at 19:19, Sasha Levin wrote:
> > > > > > On Mon, Apr 16, 2018 at 12:12:24PM -0400, Steven Rostedt wrote:
> > > > > > > On Mon, 16 Apr 2018 16:02:03 +0000
> > > > > > > Sasha Levin <[email protected]> wrote:
> > > > > > >
> > > > > > > > One of the things Greg is pushing strongly for is "bug compatibility":
> > > > > > > > we want the kernel to behave the same way between mainline and stable.
> > > > > > > > If the code is broken, it should be broken in the same way.
> > > > > > >
> > > > > > > Wait! What does that mean? What's the purpose of stable if it is as
> > > > > > > broken as mainline?
> > > > > >
> > > > > > This just means that if there is a fix that went in mainline, and the
> > > > > > fix is broken somehow, we'd rather take the broken fix than not.
> > > > > >
> > > > > > In this scenario, *something* will be broken, it's just a matter of
> > > > > > what. We'd rather have the same thing broken between mainline and
> > > > > > stable.
> > > > > >
> > > > >
> > > > > Yeah, but _intentionally_ breaking existing setups to stay "bug compatible"
> > > > > _is_ a _regression_ you _really_ _don't_ want in a stable
> > > > > supported distro. Because end-users don't care about upstream breaking
> > > > > stuff... it's the distro that takes the heat for that...
> > > > >
> > > > > Something "already broken" is not a regression...
> > > > >
> > > > > As a distro maintainer, that means one now has to review _every_ patch that
> > > > > carries "AUTOSEL", follow all the mail threads that come up about it, then
> > > > > track if it landed in the -stable queue, and read every response and possible
> > > > > objection to all patches in the -stable queue a second time around... then
> > > > > check if it still got included in the final stable point release and then either
> > > > > revert them in the distro kernel or go track down all the follow-up fixes
> > > > > needed...
> > > > >
> > > > > Just to avoid being "bug compatible with master"
> > > >
> > > > I've done this "bug compatible" "breakage" more than the AUTOSEL stuff
> > > > has in the past, so you had better also be reviewing all of my normal
> > > > commits as well :)
> > > >
> > > > Anyway, we are trying not to do this, but it does, and will,
> > > > occasionally happen.
> > >
> > > Sure, that's understood. So this was just misunderstanding. Sasha's
> > > original comment really sounded like "bug compatibility" with current
> > > master is desirable and that made me go "Are you serious?" as well...
> >
> > As I said before in this thread, yes, sometimes I do this on purpose.
> >
>
> And I guess this is the one that gives people the feeling that
> "stable is not as stable as it used to be" ...
It's always been this way, it's just that no one noticed :)
> > As a specific example, see a recent bluetooth patch that caused a
> > regression on some chromebook devices. The chromeos developers
> > rightfully complained, and I left the commit in there to provide the
> > needed "leverage" on the upstream developers to fix this properly.
> > Otherwise if I had reverted the stable patch, when people move to a
> > newer kernel version, things break, and no one remembers why.
>
> I do understand what you are trying to do...
>
> But with my distro hat on I have to treat things differently (and I don't
> think I'm alone in doing it this way...)
>
> Known breakages get reverted even before they hit QA, so end users won't see
> the issue at all...
>
> So the only ones to see the issue are those building with latest upstream
> without their own patches applied...
>
> >
> > I also wrote a long response as to _why_ I do this, and even though it
> > does happen, why it still is worth taking the stable updates. Please
> > see the archives for the full details. I don't want to duplicate this
> > again here.
>
> And we do use latest stable (with some delay as I don't want to overload QA &
> end users with a new kernel every week :))
You need to automate your QA :)
> We just revert known broken (or add known fixes) before releasing them to
> our users
That's great, and is what you should be doing, nothing wrong there.
thanks,
greg k-h
On 19.04.2018 at 18:09, Sasha Levin wrote:
> On Thu, Apr 19, 2018 at 06:04:26PM +0300, Thomas Backlund wrote:
>> On 19.04.2018 at 16:59, Greg KH wrote:
>>> Anyway, we are trying not to do this, but it does, and will,
>>> occasionally happen. Look, we just did that for one platform for
>>> 4.9.94! And the key to all of this is good testing, which we are now
>>> doing, and hopefully you are also doing as well.
>>
>> Yeah, but having to test stuff with known breakages is no fun, so we
>> try to avoid that
>
> Known breakages are easier to deal with than unknown ones :)
Well, if a system worked before the update, but not after...
Guess which one we want...
>
> I think that that "bug compatibility" is basically a policy on *which*
> regressions you'll see vs *if* you'll see a regression.
>
No. Intentionally breaking known working code in a stable branch is
never ok.
As I said before... something that never worked is not a regression,
but breaking a previously working setup is...
That goes for security fixes too... there is not much point in a
security fix if it basically turns into a local DoS when the system
stops working...
People will just boot older code and there you have it...
> We'll never pull in a commit that introduces a bug but doesn't fix
> another one, right? So if you have to deal with a regression anyway,
> might as well deal with a regression that is also seen on mainline, so
> that when you upgrade your stable kernel you'll keep the same set of
> regressions to deal with.
>
Here I actually like the comment Linus posted about API breakage earlier
in this thread...
<quote>
If you break user workflows, NOTHING ELSE MATTERS.
Even security is secondary to "people don't use the end result,
because it doesn't work for them any more".
</quote>
_This_ same statement should be acknowledged / enforced in stable trees
too IMHO...
Because this is what will happen...
Simple logic... if it does not work, the end user will boot an earlier
kernel... missing "all the good fixes" (including security ones) just
because one fix is bad.
For example, in this AUTOSEL round there are 161 fixes, of which the end user
never gets the 160 "supposedly good ones" when one is "bad"...
How is that a "good thing"?
And trying to tell those that get hit "this will force upstream to fix
it faster, so you get a working setup in some days/weeks/months..." is
not going to work...
Heh, this even reminds me that this is just as annoying as when MS
started to "bundle monthly security updates" and you get 95% installed
just to realize that the last 5% does not work (or install at all) and
you have to roll back to something working, thus missing the needed
security fixes...
Same flawed logic...
Thankfully we as distro maintainers can avoid some of the breakage for
our end users...
--
Thomas
On 19.04.2018 at 18:57, Greg KH wrote:
> On Thu, Apr 19, 2018 at 06:16:26PM +0300, Thomas Backlund wrote:
>> On 19.04.2018 at 17:22, Greg KH wrote:
>>> On Thu, Apr 19, 2018 at 04:05:45PM +0200, Jan Kara wrote:
>>>> On Thu 19-04-18 15:59:43, Greg KH wrote:
>>>>> On Thu, Apr 19, 2018 at 02:41:33PM +0300, Thomas Backlund wrote:
>>>>>> On 16-04-2018 at 19:19, Sasha Levin wrote:
>>>>>>> On Mon, Apr 16, 2018 at 12:12:24PM -0400, Steven Rostedt wrote:
>>>>>>>> On Mon, 16 Apr 2018 16:02:03 +0000
>>>>>>>> Sasha Levin <[email protected]> wrote:
>>>>>>>>
>>>>>>>>> One of the things Greg is pushing strongly for is "bug compatibility":
>>>>>>>>> we want the kernel to behave the same way between mainline and stable.
>>>>>>>>> If the code is broken, it should be broken in the same way.
>>>>>>>>
>>>>>>>> Wait! What does that mean? What's the purpose of stable if it is as
>>>>>>>> broken as mainline?
>>>>>>>
>>>>>>> This just means that if there is a fix that went in mainline, and the
>>>>>>> fix is broken somehow, we'd rather take the broken fix than not.
>>>>>>>
>>>>>>> In this scenario, *something* will be broken, it's just a matter of
>>>>>>> what. We'd rather have the same thing broken between mainline and
>>>>>>> stable.
>>>>>>>
>>>>>>
>>>>>> Yeah, but _intentionally_ breaking existing setups to stay "bug compatible"
>>>>>> _is_ a _regression_ you _really_ _don't_ want in a stable
>>>>>> supported distro. Because end-users don't care about upstream breaking
>>>>>> stuff... it's the distro that takes the heat for that...
>>>>>>
>>>>>> Something "already broken" is not a regression...
>>>>>>
>>>>>> As a distro maintainer, that means one now has to review _every_ patch that
>>>>>> carries "AUTOSEL", follow all the mail threads that come up about it, then
>>>>>> track if it landed in the -stable queue, and read every response and possible
>>>>>> objection to all patches in the -stable queue a second time around... then
>>>>>> check if it still got included in the final stable point release and then either
>>>>>> revert them in the distro kernel or go track down all the follow-up fixes
>>>>>> needed...
>>>>>>
>>>>>> Just to avoid being "bug compatible with master"
>>>>>
>>>>> I've done this "bug compatible" "breakage" more than the AUTOSEL stuff
>>>>> has in the past, so you had better also be reviewing all of my normal
>>>>> commits as well :)
>>>>>
>>>>> Anyway, we are trying not to do this, but it does, and will,
>>>>> occasionally happen.
>>>>
>>>> Sure, that's understood. So this was just misunderstanding. Sasha's
>>>> original comment really sounded like "bug compatibility" with current
>>>> master is desirable and that made me go "Are you serious?" as well...
>>>
>>> As I said before in this thread, yes, sometimes I do this on purpose.
>>>
>>
>> And I guess this is the one that gives people the feeling that
>> "stable is not as stable as it used to be" ...
>
> It's always been this way, it's just that no one noticed :)
>
:)
>>> As a specific example, see a recent bluetooth patch that caused a
>>> regression on some chromebook devices. The chromeos developers
>>> rightfully complained, and I left the commit in there to provide the
>>> needed "leverage" on the upstream developers to fix this properly.
>>> Otherwise if I had reverted the stable patch, when people move to a
>>> newer kernel version, things break, and no one remembers why.
>>
>> I do understand what you are trying to do...
>>
>> But with my distro hat on I have to treat things differently (and I don't
>> think I'm alone in doing it this way...)
>>
>> Known breakages get reverted even before they hit QA, so end users won't see
>> the issue at all...
>>
>> So the only ones to see the issue are those building with latest upstream
>> without their own patches applied...
>>
>>>
>>> I also wrote a long response as to _why_ I do this, and even though it
>>> does happen, why it still is worth taking the stable updates. Please
>>> see the archives for the full details. I don't want to duplicate this
>>> again here.
>>
>> And we do use latest stable (with some delay as I don't want to overload QA &
>> end users with a new kernel every week :))
>
> You need to automate your QA :)
>
Yeah, some can be automated... but that means having a lot of different
hw to test on... emulators/VMs can only test so much...
Users who are part of QA test on a variety of hw with various installs/setups
that expose fun things with some hw :)
>> We just revert known broken (or add known fixes) before releasing them to
>> our users
>
> That's great, and is what you should be doing, nothing wrong there.
>
> thanks,
>
> greg k-h
>
--
Thomas
On Thu, Apr 19, 2018 at 04:22:22PM +0200, Greg KH wrote:
> On Thu, Apr 19, 2018 at 04:05:45PM +0200, Jan Kara wrote:
> > On Thu 19-04-18 15:59:43, Greg KH wrote:
> > > On Thu, Apr 19, 2018 at 02:41:33PM +0300, Thomas Backlund wrote:
> > > > On 16-04-2018 at 19:19, Sasha Levin wrote:
> > > > > On Mon, Apr 16, 2018 at 12:12:24PM -0400, Steven Rostedt wrote:
> > > > > > On Mon, 16 Apr 2018 16:02:03 +0000
> > > > > > Sasha Levin <[email protected]> wrote:
> > > > > >
> > > > > > > One of the things Greg is pushing strongly for is "bug compatibility":
> > > > > > > we want the kernel to behave the same way between mainline and stable.
> > > > > > > If the code is broken, it should be broken in the same way.
> > > > > >
> > > > > > Wait! What does that mean? What's the purpose of stable if it is as
> > > > > > broken as mainline?
> > > > >
> > > > > This just means that if there is a fix that went in mainline, and the
> > > > > fix is broken somehow, we'd rather take the broken fix than not.
> > > > >
> > > > > In this scenario, *something* will be broken, it's just a matter of
> > > > > what. We'd rather have the same thing broken between mainline and
> > > > > stable.
> > > > >
> > > >
> > > > Yeah, but _intentionally_ breaking existing setups to stay "bug compatible"
> > > > _is_ a _regression_ you _really_ _don't_ want in a stable
> > > > supported distro. Because end-users don't care about upstream breaking
> > > > stuff... it's the distro that takes the heat for that...
> > > >
> > > > Something "already broken" is not a regression...
> > > >
> > > > As a distro maintainer, that means one now has to review _every_ patch that
> > > > carries "AUTOSEL", follow all the mail threads that come up about it, then
> > > > track if it landed in the -stable queue, and read every response and possible
> > > > objection to all patches in the -stable queue a second time around... then
> > > > check if it still got included in the final stable point release and then either
> > > > revert them in the distro kernel or go track down all the follow-up fixes
> > > > needed...
> > > >
> > > > Just to avoid being "bug compatible with master"
> > >
> > > I've done this "bug compatible" "breakage" more than the AUTOSEL stuff
> > > has in the past, so you had better also be reviewing all of my normal
> > > commits as well :)
> > >
> > > Anyway, we are trying not to do this, but it does, and will,
> > > occasionally happen.
> >
> > Sure, that's understood. So this was just misunderstanding. Sasha's
> > original comment really sounded like "bug compatibility" with current
> > master is desirable and that made me go "Are you serious?" as well...
>
> As I said before in this thread, yes, sometimes I do this on purpose.
>
> As a specific example, see a recent bluetooth patch that caused a
> regression on some chromebook devices. The chromeos developers
> rightfully complained, and I left the commit in there to provide the
> needed "leverage" on the upstream developers to fix this properly.
> Otherwise if I had reverted the stable patch, when people move to a
> newer kernel version, things break, and no one remembers why.
>
> I also wrote a long response as to _why_ I do this, and even though it
> does happen, why it still is worth taking the stable updates. Please
> see the archives for the full details. I don't want to duplicate this
> again here.
And to be more specific, let's always take this on a case-by-case basis.
The last time this happened was the bluetooth bug and it was a fix for a
reported problem, but then the fix caused a regression so upstream
reverted it and I reverted it in the stable trees. No matter what I
chose to do, someone would be upset so I followed what upstream did.
thanks,
greg k-h
Hi!
> >- It must be obviously correct and tested.
> >
> >If it introduces a new bug, it is not correct, and certainly not
> >obviously correct.
>
> As you might have noticed, we don't strictly follow the rules.
Yes, I noticed. And what I'm saying is that perhaps you should follow
the rules more strictly.
> Take a look at the whole PTI story as an example. It's way more than 100
> lines, it's not obviously correct, it fixed more than 1 thing, and so
> on, and yet it went in -stable!
>
> Would you argue we shouldn't have backported PTI to -stable?
Actually, I was surprised with PTI going to stable. That was clearly
against the rules. Maybe the security bug was ugly enough to warrant
that.
But please don't use it as an argument for applying any random
patches...
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Tue 2018-04-17 16:19:35, Sasha Levin wrote:
> On Tue, Apr 17, 2018 at 05:55:49PM +0200, Jan Kara wrote:
> >On Tue 17-04-18 13:31:51, Sasha Levin wrote:
> >> We may be able to guesstimate the 'regression chance', but there's no
> >> way we can guess the 'annoyance' one. There are so many different use
> >> cases that we just can't even guess how many people would get "annoyed"
> >> by something.
> >
> >As a maintainer, I hope I have a reasonable idea what the common use cases
> >for my subsystem are. Those I cater to when estimating 'annoyance'. Sure I don't
> >know all of the use cases so people doing unusual stuff hit more bugs and
> >have to report them to get fixes included in -stable. But for me this is a
> >preferable tradeoff over the risk of regression so this is the rule I use
> >when tagging for stable. Now I'm not a -stable maintainer and I fully agree
> >with the "those who do the work decide" principle so pick whatever patches you
> >think are appropriate, I just wanted to explain why I don't think more patches
> >in stable are necessarily good.
>
> The AUTOSEL story is different for subsystems that don't do -stable, and
> subsystems that are actually doing the work (like yourself).
>
> I'm not trying to override active maintainers, I'm trying to help them
> make decisions.
Ok, cool. Can you exclude LED subsystem, Hibernation and Nokia N900
stuff from autosel work?
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Mon 2018-04-16 21:18:47, Sasha Levin wrote:
> On Mon, Apr 16, 2018 at 10:43:28PM +0200, Jiri Kosina wrote:
> >On Mon, 16 Apr 2018, Sasha Levin wrote:
> >
> >> So I think that Linus's claim that users come first applies here as
> >> well. If there's a user that cares about a particular feature being
> >> broken, then we go ahead and fix his bug rather than ignoring him.
> >
> >So one extreme is fixing -stable *iff* users actually do report an issue.
> >
> >The other extreme is backporting everything that potentially looks like a
> >potential fix of "something" (according to some arbitrary metric),
> >pro-actively.
> >
> >The former violates the "users first" rule, the latter has a very, very
> >high risk of regressions.
> >
> >So this whole debate is about finding a compromise.
> >
> >My gut feeling always was that the statement in
> >
> > Documentation/process/stable-kernel-rules.rst
> >
> >is very reasonable, but making the process way more "aggressive" when
> >backporting patches is breaking much of its original spirit for me.
>
> I agree that as an enterprise distro taking everything from -stable
> isn't the best idea. Ideally you'd want to be close to the first
The original purpose of -stable was "to be common base of enterprise
distros" and our documentation still says it is.
> I think that we can agree that it's impossible to expect every single
> Linux user to go on LKML and complain about a bug he encountered, so the
> rule quickly becomes "It must fix a real bug that can bother
> people".
I think you are playing dangerous word games.
> My "aggressiveness" comes from the whole "bother" part: it doesn't have
> to be critical, it doesn't have to cause data corruption, it doesn't
> have to be a security issue. It's enough that the bug actually affects a
> user in a way he didn't expect it to (if a user doesn't have
> expectations, it would fall under the "This could be a problem..."
> exception.)
And it seems the documentation says you should be less aggressive, and
the world tells you they expect you to be less aggressive. So maybe that's
what you should do?
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Tue 2018-04-17 16:06:29, Sasha Levin wrote:
> On Tue, Apr 17, 2018 at 05:52:30PM +0200, Jiri Kosina wrote:
> >On Tue, 17 Apr 2018, Sasha Levin wrote:
> >
> >> How do I get the XFS folks to send their stuff to -stable? (we have
> >> quite a few customers who use XFS)
> >
> >If XFS (or *any* other subsystem) doesn't have enough manpower of upstream
> >maintainers to deal with stable, we just have to accept that and find an
> >answer to that.
>
> This is exactly what I'm doing. Many subsystems don't have enough
> manpower to deal with -stable, so I'm trying to help.
...and the torrent of spam from the AUTOSEL subsystem actually makes
that worse.
And when you are told a particular fix to LEDs is not that important
after all, you start arguing about nuclear power plants (without
really knowing how critical subsystems work).
If you want cooperation with maintainers to work, the rules need to be
clear, first. They are documented, so follow them. If you think the rules
are wrong, let's talk about changing the rules; but arguing "every bug
is important because someone may be hitting it" is not ok.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Thu, May 03, 2018 at 12:04:41PM +0200, Pavel Machek wrote:
>On Tue 2018-04-17 16:06:29, Sasha Levin wrote:
>> On Tue, Apr 17, 2018 at 05:52:30PM +0200, Jiri Kosina wrote:
>> >On Tue, 17 Apr 2018, Sasha Levin wrote:
>> >
>> >> How do I get the XFS folks to send their stuff to -stable? (we have
>> >> quite a few customers who use XFS)
>> >
>> >If XFS (or *any* other subsystem) doesn't have enough manpower of upstream
>> >maintainers to deal with stable, we just have to accept that and find an
>> >answer to that.
>>
>> This is exactly what I'm doing. Many subsystems don't have enough
>> manpower to deal with -stable, so I'm trying to help.
>
>...and the torrent of spam from the AUTOSEL subsystem actually makes
>that worse.
>
>And when you are told a particular fix to LEDs is not that important
>after all, you start arguing about nuclear power plants (without
>really knowing how critical subsystems work).
Obviously your knowledge far surpasses mine.
>If you want cooperation with maintainers to work, the rules need to be
>clear, first. They are documented, so follow them. If you think the rules
>are wrong, let's talk about changing the rules; but arguing "every bug
>is important because someone may be hitting it" is not ok.
I'm sorry but you're just unfamiliar with the process. I'd point out
that all my AUTOSEL commits go through Greg, who wrote the rules, and
accepts my patches.
The rules are there as a guideline to allow us to not take certain
patches; they're not there as a strict set of rules we must follow at
all times.
On Thu, May 03, 2018 at 11:47:24AM +0200, Pavel Machek wrote:
>On Mon 2018-04-16 21:18:47, Sasha Levin wrote:
>> On Mon, Apr 16, 2018 at 10:43:28PM +0200, Jiri Kosina wrote:
>> >On Mon, 16 Apr 2018, Sasha Levin wrote:
>> >
>> >> So I think that Linus's claim that users come first applies here as
>> >> well. If there's a user that cares about a particular feature being
>> >> broken, then we go ahead and fix his bug rather than ignoring him.
>> >
>> >So one extreme is fixing -stable *iff* users actually do report an issue.
>> >
>> >The other extreme is backporting everything that potentially looks like a
>> >potential fix of "something" (according to some arbitrary metric),
>> >pro-actively.
>> >
>> >The former violates the "users first" rule, the latter has a very, very
>> >high risk of regressions.
>> >
>> >So this whole debate is about finding a compromise.
>> >
>> >My gut feeling always was that the statement in
>> >
>> > Documentation/process/stable-kernel-rules.rst
>> >
>> >is very reasonable, but making the process way more "aggressive" when
>> >backporting patches is breaking much of its original spirit for me.
>>
>> I agree that as an enterprise distro taking everything from -stable
>> isn't the best idea. Ideally you'd want to be close to the first
>
>The original purpose of -stable was "to be common base of enterprise
>distros" and our documentation still says it is.
I guess that the world changes?
At this point calling enterprise distros a niche wouldn't be too far
from the truth. Furthermore, some enterprise distros (as stated
earlier in this thread) don't even follow -stable anymore and cherry
pick their own commits.
So no, the main driving force behind -stable is not traditional
enterprise distributions.
>> I think that we can agree that it's impossible to expect every single
>> Linux user to go on LKML and complain about a bug he encountered, so the
>> rule quickly becomes "It must fix a real bug that can bother
>> people".
>
>I think you are playing dangerous word games.
>
>> My "aggressiveness" comes from the whole "bother" part: it doesn't have
>> to be critical, it doesn't have to cause data corruption, it doesn't
>> have to be a security issue. It's enough that the bug actually affects a
>> user in a way he didn't expect it to (if a user doesn't have
>> expectations, it would fall under the "This could be a problem..."
>> exception.)
>
>And it seems the documentation says you should be less aggressive, and
>the world tells you they expect you to be less aggressive. So maybe that's
>what you should do?
Who is this "world" you're referring to?
On Thu, May 03, 2018 at 11:32:15AM +0200, Pavel Machek wrote:
>Hi!
>
>> >- It must be obviously correct and tested.
>> >
>> >If it introduces a new bug, it is not correct, and certainly not
>> >obviously correct.
>>
>> As you might have noticed, we don't strictly follow the rules.
>
>Yes, I noticed. And what I'm saying is that perhaps you should follow
>the rules more strictly.
Again, as was stated many times by Greg and others, the rules are not
there to be strictly followed.
>> Take a look at the whole PTI story as an example. It's way more than 100
>> lines, it's not obviously correct, it fixed more than 1 thing, and so
>> on, and yet it went in -stable!
>>
>> Would you argue we shouldn't have backported PTI to -stable?
>
>Actually, I was surprised with PTI going to stable. That was clearly
>against the rules. Maybe the security bug was ugly enough to warrant
>that.
>
>But please don't use it as an argument for applying any random
>patches...
How about this: if a -stable maintainer has concerns with how I follow
the -stable rules, he's more than welcome to reject my patches. Sounds
like a plan?
On Thu, May 03, 2018 at 11:36:51AM +0200, Pavel Machek wrote:
>On Tue 2018-04-17 16:19:35, Sasha Levin wrote:
>> On Tue, Apr 17, 2018 at 05:55:49PM +0200, Jan Kara wrote:
>> >On Tue 17-04-18 13:31:51, Sasha Levin wrote:
>> >> We may be able to guesstimate the 'regression chance', but there's no
>> >> way we can guess the 'annoyance' one. There are so many different use
>> >> cases that we just can't even guess how many people would get "annoyed"
>> >> by something.
>> >
>> >As a maintainer, I hope I have a reasonable idea what the common use cases
>> >for my subsystem are. Those I cater to when estimating 'annoyance'. Sure I don't
>> >know all of the use cases so people doing unusual stuff hit more bugs and
>> >have to report them to get fixes included in -stable. But for me this is a
>> >preferable tradeoff over the risk of regression so this is the rule I use
>> >when tagging for stable. Now I'm not a -stable maintainer and I fully agree
>> >with the "those who do the work decide" principle so pick whatever patches you
>> >think are appropriate, I just wanted to explain why I don't think more patches
>> >in stable are necessarily good.
>>
>> The AUTOSEL story is different for subsystems that don't do -stable, and
>> subsystems that are actually doing the work (like yourself).
>>
>> I'm not trying to override active maintainers, I'm trying to help them
>> make decisions.
>
>Ok, cool. Can you exclude LED subsystem, Hibernation and Nokia N900
>stuff from autosel work?
Curiosity got me, and I had to see what these subsystems do as far as
stable commits go:
$ git log --oneline --grep 'stable@vger' --since="01-01-2016" kernel/power drivers/leds drivers/media/i2c/et8ek8 drivers/media/i2c/ad5820.c arch/x86/kernel/acpi/ | wc -l
7
Which surprised me a bit: maybe leds is indeed mostly fine, but
hibernation is definitely tricky; I've been stung by it a few times...
So why not pick something an actual user reported, and see how that was
dealt with?
Googling first showed this:
https://bugzilla.kernel.org/show_bug.cgi?id=97201
Which was fixed by:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=bdbc98abb3aa323f6323b11db39c740e6f8fc5b1
But that's not in any -stable tree. Hmm.. ok..
Next one on google was:
https://bugzilla.kernel.org/show_bug.cgi?id=117971
Which, in turn, was fixed by:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5b3f249c94ce1f46bacd9814385b0ee2d1ae52f3
Oh look at that, it's not in -stable either...
So seeing how you have concerns with my selection of -stable commits,
maybe you could explain to me why these commits didn't end up in
-stable?
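(As an aside, a quick way to check whether a given mainline fix ever landed in
a stable branch is to search that branch's history for the upstream SHA, since
stable backports normally record it in their changelog as "commit <sha>
upstream." or "[ Upstream commit <sha> ]". A minimal sketch, assuming a local
clone of the stable tree; the linux-4.14.y branch name is only an example:
$ git log --oneline --grep 'bdbc98abb3aa' linux-4.14.y
$ git log --oneline --grep '5b3f249c94ce' linux-4.14.y
No output from either command almost certainly means the fix was never picked
up for that branch.)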