This is the start of the stable review cycle for the 5.11.8 release.
There are 31 patches in this series, all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Sun, 21 Mar 2021 12:17:37 +0000.
Anything received after that time might be too late.
The whole patch series can be found in one patch at:
https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.11.8-rc1.gz
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.11.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <[email protected]>
Linux 5.11.8-rc1
Ard Biesheuvel <[email protected]>
crypto: x86/aes-ni-xts - use direct calls to and 4-way stride
Florian Fainelli <[email protected]>
net: dsa: b53: Support setting learning on port
J. Bruce Fields <[email protected]>
Revert "nfsd4: a client's own opens needn't prevent delegations"
J. Bruce Fields <[email protected]>
Revert "nfsd4: remove check_conflicting_opens warning"
Amir Goldstein <[email protected]>
fuse: fix live lock in fuse_iget()
Nicolas Morey-Chaisemartin <[email protected]>
RDMA/srp: Fix support for unpopulated and unbalanced NUMA nodes
Vladimir Murzin <[email protected]>
arm64: Unconditionally set virtual cpu id registers
Piotr Krysiuk <[email protected]>
bpf, selftests: Fix up some test_verifier cases for unprivileged
Piotr Krysiuk <[email protected]>
bpf: Add sanity check for upper ptr_limit
Piotr Krysiuk <[email protected]>
bpf: Simplify alu_limit masking for pointer arithmetic
Piotr Krysiuk <[email protected]>
bpf: Fix off-by-one for area size in creating mask to left
Piotr Krysiuk <[email protected]>
bpf: Prohibit alu ops for pointer types not defining ptr_limit
Bob Peterson <[email protected]>
gfs2: bypass signal_our_withdraw if no journal
Bob Peterson <[email protected]>
gfs2: move freeze glock outside the make_fs_rw and _ro functions
Bob Peterson <[email protected]>
gfs2: Add common helper for holding and releasing the freeze glock
Frieder Schrempf <[email protected]>
regulator: pca9450: Clear PRESET_EN bit to fix BUCK1/2/3 voltage setting
Frieder Schrempf <[email protected]>
regulator: pca9450: Enable system reset on WDOG_B assertion
Frieder Schrempf <[email protected]>
regulator: pca9450: Add SD_VSEL GPIO for LDO5
Jia-Ju Bai <[email protected]>
net: bonding: fix error return code of bond_neigh_init()
Andy Shevchenko <[email protected]>
gpiolib: Read "gpio-line-names" from a firmware node
Jens Axboe <[email protected]>
io_uring: clear IOCB_WAITQ for non -EIOCBQUEUED return
Pavel Begunkov <[email protected]>
io_uring: simplify do_read return parsing
Jens Axboe <[email protected]>
io_uring: don't keep looping for more events if we can't flush overflow
Pavel Begunkov <[email protected]>
io_uring: refactor io_cqring_wait
Pavel Begunkov <[email protected]>
io_uring: refactor scheduling in io_cqring_wait
Florian Westphal <[email protected]>
mptcp: dispose initial struct socket when its subflow is closed
Florian Westphal <[email protected]>
mptcp: pm: add lockdep assertions
Geliang Tang <[email protected]>
mptcp: send ack for every add_addr
Sean Christopherson <[email protected]>
KVM: x86/mmu: Set SPTE_AD_WRPROT_ONLY_MASK if and only if PML is enabled
Sean Christopherson <[email protected]>
KVM: x86/mmu: Expand on the comment in kvm_vcpu_ad_need_write_protect()
Jens Axboe <[email protected]>
io_uring: don't attempt IO reissue from the ring exit path
-------------
Diffstat:
Makefile | 4 +-
arch/arm64/include/asm/el2_setup.h | 4 +-
arch/x86/crypto/aesni-intel_asm.S | 115 +++++++++++++--------
arch/x86/crypto/aesni-intel_glue.c | 25 +++--
arch/x86/kvm/mmu/mmu_internal.h | 13 ++-
drivers/gpio/gpiolib.c | 12 +--
drivers/infiniband/ulp/srp/ib_srp.c | 110 ++++++++------------
drivers/net/bonding/bond_main.c | 8 +-
drivers/net/dsa/b53/b53_common.c | 18 ++++
drivers/net/dsa/b53/b53_regs.h | 1 +
drivers/net/dsa/bcm_sf2.c | 15 +--
drivers/regulator/pca9450-regulator.c | 30 ++++++
fs/fuse/fuse_i.h | 1 +
fs/gfs2/ops_fstype.c | 33 +++---
fs/gfs2/recovery.c | 8 +-
fs/gfs2/super.c | 45 +-------
fs/gfs2/util.c | 58 +++++++++--
fs/gfs2/util.h | 3 +
fs/io_uring.c | 84 ++++++++-------
fs/locks.c | 3 -
fs/nfsd/nfs4state.c | 53 +++-------
include/linux/regulator/pca9450.h | 10 ++
kernel/bpf/verifier.c | 33 +++---
net/mptcp/pm.c | 5 +-
net/mptcp/pm_netlink.c | 23 +++--
net/mptcp/protocol.c | 20 +++-
net/mptcp/protocol.h | 5 +
.../selftests/bpf/verifier/bounds_deduction.c | 27 +++--
tools/testing/selftests/bpf/verifier/map_ptr.c | 4 +
tools/testing/selftests/bpf/verifier/unpriv.c | 15 ++-
.../selftests/bpf/verifier/value_ptr_arith.c | 23 ++++-
31 files changed, 472 insertions(+), 336 deletions(-)
From: Frieder Schrempf <[email protected]>
[ Upstream commit 98b94b6e38ca0c4eeb29949c656f6a315000c23e ]
The driver uses the DVS registers PCA9450_REG_BUCKxOUT_DVS0 to set the
voltage for the buck regulators 1, 2 and 3. This has no effect as the
PRESET_EN bit is set by default and therefore the preset values are used
instead, which are set to 850 mV.
To fix this, we clear the PRESET_EN bit at initialization time.
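As a quick sanity check of the 850 mV figure (a sketch only; it assumes the
usual 600 mV base and 12.5 mV step of the BUCK1/2/3 DVS range, which matches
the BUCK1OUT_DVS0_DEFAULT of 0x14 visible in the header below):

    /* illustrative arithmetic, not part of the patch */
    unsigned int sel = 0x14;                  /* BUCK1OUT_DVS0_DEFAULT    */
    unsigned int uV  = 600000 + sel * 12500;  /* = 850000 uV, i.e. 850 mV */

Once PRESET_EN is cleared, the selector written to the BUCKxOUT_DVS0
registers takes effect instead of that preset.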
Fixes: 0935ff5f1f0a ("regulator: pca9450: add pca9450 pmic driver")
Cc: <[email protected]>
Signed-off-by: Frieder Schrempf <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/regulator/pca9450-regulator.c | 8 ++++++++
include/linux/regulator/pca9450.h | 3 +++
2 files changed, 11 insertions(+)
diff --git a/drivers/regulator/pca9450-regulator.c b/drivers/regulator/pca9450-regulator.c
index 833d398c6aa2..d38109cc3a01 100644
--- a/drivers/regulator/pca9450-regulator.c
+++ b/drivers/regulator/pca9450-regulator.c
@@ -797,6 +797,14 @@ static int pca9450_i2c_probe(struct i2c_client *i2c,
return ret;
}
+ /* Clear PRESET_EN bit in BUCK123_DVS to use DVS registers */
+ ret = regmap_clear_bits(pca9450->regmap, PCA9450_REG_BUCK123_DVS,
+ BUCK123_PRESET_EN);
+ if (ret) {
+ dev_err(&i2c->dev, "Failed to clear PRESET_EN bit: %d\n", ret);
+ return ret;
+ }
+
/* Set reset behavior on assertion of WDOG_B signal */
ret = regmap_update_bits(pca9450->regmap, PCA9450_REG_RESET_CTRL,
WDOG_B_CFG_MASK, WDOG_B_CFG_COLD_LDO12);
diff --git a/include/linux/regulator/pca9450.h b/include/linux/regulator/pca9450.h
index ccdb5320a240..71902f41c919 100644
--- a/include/linux/regulator/pca9450.h
+++ b/include/linux/regulator/pca9450.h
@@ -147,6 +147,9 @@ enum {
#define BUCK6_FPWM 0x04
#define BUCK6_ENMODE_MASK 0x03
+/* PCA9450_REG_BUCK123_PRESET_EN bit */
+#define BUCK123_PRESET_EN 0x80
+
/* PCA9450_BUCK1OUT_DVS0 bits */
#define BUCK1OUT_DVS0_MASK 0x7F
#define BUCK1OUT_DVS0_DEFAULT 0x14
--
2.30.1
From: Jens Axboe <[email protected]>
[ Upstream commit ca0a26511c679a797f86589894a4523db36d833e ]
It doesn't make sense to wait for more events to come in if we can't
even flush the overflow we already have to the ring. Return -EBUSY for
that condition, just like we do for attempts to submit with overflow
pending.
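For the caller-visible effect, a minimal sketch (assuming the -EBUSY from
io_cqring_wait() propagates out of io_uring_enter(2) when no SQEs were
submitted; the helper names here are hypothetical):

    /* wait for one CQE; on -EBUSY, drain the CQ so the overflow can flush */
    int ret = sys_io_uring_enter(ring_fd, 0, 1, IORING_ENTER_GETEVENTS, NULL);
    if (ret == -EBUSY)
            reap_cqes(ring);   /* hypothetical: empty the CQ ring, retry */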
Cc: [email protected] # 5.11
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/io_uring.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 7621978e9fc8..cab380a337e4 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1823,18 +1823,22 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
return all_flushed;
}
-static void io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
+static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
struct task_struct *tsk,
struct files_struct *files)
{
+ bool ret = true;
+
if (test_bit(0, &ctx->cq_check_overflow)) {
/* iopoll syncs against uring_lock, not completion_lock */
if (ctx->flags & IORING_SETUP_IOPOLL)
mutex_lock(&ctx->uring_lock);
- __io_cqring_overflow_flush(ctx, force, tsk, files);
+ ret = __io_cqring_overflow_flush(ctx, force, tsk, files);
if (ctx->flags & IORING_SETUP_IOPOLL)
mutex_unlock(&ctx->uring_lock);
}
+
+ return ret;
}
static void __io_cqring_fill_event(struct io_kiocb *req, long res, long cflags)
@@ -7280,11 +7284,16 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts);
trace_io_uring_cqring_wait(ctx, min_events);
do {
- io_cqring_overflow_flush(ctx, false, NULL, NULL);
+ /* if we can't even flush overflow, don't wait for more */
+ if (!io_cqring_overflow_flush(ctx, false, NULL, NULL)) {
+ ret = -EBUSY;
+ break;
+ }
prepare_to_wait_exclusive(&ctx->wait, &iowq.wq,
TASK_INTERRUPTIBLE);
ret = io_cqring_wait_schedule(ctx, &iowq, &timeout);
finish_wait(&ctx->wait, &iowq.wq);
+ cond_resched();
} while (ret > 0);
restore_saved_sigmask_unless(ret == -EINTR);
--
2.30.1
From: Vladimir Murzin <[email protected]>
Commit 78869f0f0552 ("arm64: Extract parts of el2_setup into a macro")
reorganized the EL2 setup in such a way that the virtual CPU ID registers
are set only in nVHE mode, yet they used to be (and need to be) set
irrespective of VHE support.
Fixes: 78869f0f0552 ("arm64: Extract parts of el2_setup into a macro")
Signed-off-by: Vladimir Murzin <[email protected]>
Acked-by: Will Deacon <[email protected]>
Reviewed-by: Marc Zyngier <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/el2_setup.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -111,7 +111,7 @@
.endm
/* Virtual CPU ID registers */
-.macro __init_el2_nvhe_idregs
+.macro __init_el2_idregs
mrs x0, midr_el1
mrs x1, mpidr_el1
msr vpidr_el2, x0
@@ -163,6 +163,7 @@
__init_el2_stage2
__init_el2_gicv3
__init_el2_hstr
+ __init_el2_idregs
/*
* When VHE is not in use, early init of EL2 needs to be done here.
@@ -171,7 +172,6 @@
* will be done via the _EL1 system register aliases in __cpu_setup.
*/
.ifeqs "\mode", "nvhe"
- __init_el2_nvhe_idregs
__init_el2_nvhe_cptr
__init_el2_nvhe_sve
__init_el2_nvhe_prepare_eret
From: J. Bruce Fields <[email protected]>
commit 6ee65a773096ab3f39d9b00311ac983be5bdeb7c upstream.
This reverts commit 94415b06eb8aed13481646026dc995f04a3a534a.
That commit claimed to allow a client to get a read delegation when it
was the only writer. Actually it allowed a client to get a read
delegation when *any* client has a write open!
The main problem is that it's depending on nfs4_clnt_odstate structures
that are actually only maintained for pnfs exports.
This causes clients to miss writes performed by other clients, even when
there have been intervening closes and opens, violating close-to-open
cache consistency.
We can do this a different way, but first we should just revert this.
I've added pynfs 4.1 test DELEG19 to test for this, as I should have
done originally!
Cc: [email protected]
Reported-by: Timo Rothenpieler <[email protected]>
Signed-off-by: J. Bruce Fields <[email protected]>
Signed-off-by: Chuck Lever <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/locks.c | 3 --
fs/nfsd/nfs4state.c | 54 +++++++++++++---------------------------------------
2 files changed, 14 insertions(+), 43 deletions(-)
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -1808,9 +1808,6 @@ check_conflicting_open(struct file *filp
if (flags & FL_LAYOUT)
return 0;
- if (flags & FL_DELEG)
- /* We leave these checks to the caller. */
- return 0;
if (arg == F_RDLCK)
return inode_is_open_for_write(inode) ? -EAGAIN : 0;
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -4945,32 +4945,6 @@ static struct file_lock *nfs4_alloc_init
return fl;
}
-static int nfsd4_check_conflicting_opens(struct nfs4_client *clp,
- struct nfs4_file *fp)
-{
- struct nfs4_clnt_odstate *co;
- struct file *f = fp->fi_deleg_file->nf_file;
- struct inode *ino = locks_inode(f);
- int writes = atomic_read(&ino->i_writecount);
-
- if (fp->fi_fds[O_WRONLY])
- writes--;
- if (fp->fi_fds[O_RDWR])
- writes--;
- WARN_ON_ONCE(writes < 0);
- if (writes > 0)
- return -EAGAIN;
- spin_lock(&fp->fi_lock);
- list_for_each_entry(co, &fp->fi_clnt_odstate, co_perfile) {
- if (co->co_client != clp) {
- spin_unlock(&fp->fi_lock);
- return -EAGAIN;
- }
- }
- spin_unlock(&fp->fi_lock);
- return 0;
-}
-
static struct nfs4_delegation *
nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
struct nfs4_file *fp, struct nfs4_clnt_odstate *odstate)
@@ -4990,12 +4964,9 @@ nfs4_set_delegation(struct nfs4_client *
nf = find_readable_file(fp);
if (!nf) {
- /*
- * We probably could attempt another open and get a read
- * delegation, but for now, don't bother until the
- * client actually sends us one.
- */
- return ERR_PTR(-EAGAIN);
+ /* We should always have a readable file here */
+ WARN_ON_ONCE(1);
+ return ERR_PTR(-EBADF);
}
spin_lock(&state_lock);
spin_lock(&fp->fi_lock);
@@ -5025,19 +4996,11 @@ nfs4_set_delegation(struct nfs4_client *
if (!fl)
goto out_clnt_odstate;
- status = nfsd4_check_conflicting_opens(clp, fp);
- if (status) {
- locks_free_lock(fl);
- goto out_clnt_odstate;
- }
status = vfs_setlease(fp->fi_deleg_file->nf_file, fl->fl_type, &fl, NULL);
if (fl)
locks_free_lock(fl);
if (status)
goto out_clnt_odstate;
- status = nfsd4_check_conflicting_opens(clp, fp);
- if (status)
- goto out_clnt_odstate;
spin_lock(&state_lock);
spin_lock(&fp->fi_lock);
@@ -5119,6 +5082,17 @@ nfs4_open_delegation(struct svc_fh *fh,
goto out_no_deleg;
if (!cb_up || !(oo->oo_flags & NFS4_OO_CONFIRMED))
goto out_no_deleg;
+ /*
+ * Also, if the file was opened for write or
+ * create, there's a good chance the client's
+ * about to write to it, resulting in an
+ * immediate recall (since we don't support
+ * write delegations):
+ */
+ if (open->op_share_access & NFS4_SHARE_ACCESS_WRITE)
+ goto out_no_deleg;
+ if (open->op_create == NFS4_OPEN_CREATE)
+ goto out_no_deleg;
break;
default:
goto out_no_deleg;
From: Ard Biesheuvel <[email protected]>
commit 86ad60a65f29dd862a11c22bb4b5be28d6c5cef1 upstream.
The XTS asm helper arrangement is a bit odd: the 8-way stride helper
consists of back-to-back calls to the 4-way core transforms, which
are called indirectly, based on a boolean that indicates whether we
are performing encryption or decryption.
Given how costly indirect calls are on x86, let's switch to direct
calls, and given how the 8-way stride doesn't really add anything
substantial, use a 4-way stride instead, and make the asm core
routine deal with any multiple of 4 blocks. Since 512 byte sectors
or 4 KB blocks are the typical quantities XTS operates on, increase
the stride exported to the glue helper to 512 bytes as well.
As a result, the number of indirect calls is reduced from 3 per 64 bytes
of in/output to 1 per 512 bytes of in/output, which produces a 65% speedup
when operating on 1 KB blocks (measured on an Intel(R) Core(TM) i7-8650U CPU).
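Working the quoted rates through a 4 KB request (plain arithmetic, for
illustration): 3 indirect calls per 64 bytes is 3 * 4096 / 64 = 192 indirect
calls before the change, versus 4096 / 512 = 8 after it; each 512-byte
stride is 32 AES blocks, which the asm routine now walks 4 blocks (64 bytes)
per loop iteration.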
Fixes: 9697fa39efd3f ("x86/retpoline/crypto: Convert crypto assembler indirect jumps")
Tested-by: Eric Biggers <[email protected]> # x86_64
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/x86/crypto/aesni-intel_asm.S | 115 ++++++++++++++++++++++---------------
arch/x86/crypto/aesni-intel_glue.c | 25 ++++----
2 files changed, 84 insertions(+), 56 deletions(-)
--- a/arch/x86/crypto/aesni-intel_asm.S
+++ b/arch/x86/crypto/aesni-intel_asm.S
@@ -2715,25 +2715,18 @@ SYM_FUNC_END(aesni_ctr_enc)
pxor CTR, IV;
/*
- * void aesni_xts_crypt8(const struct crypto_aes_ctx *ctx, u8 *dst,
- * const u8 *src, bool enc, le128 *iv)
+ * void aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *dst,
+ * const u8 *src, unsigned int len, le128 *iv)
*/
-SYM_FUNC_START(aesni_xts_crypt8)
+SYM_FUNC_START(aesni_xts_encrypt)
FRAME_BEGIN
- testb %cl, %cl
- movl $0, %ecx
- movl $240, %r10d
- leaq _aesni_enc4, %r11
- leaq _aesni_dec4, %rax
- cmovel %r10d, %ecx
- cmoveq %rax, %r11
movdqa .Lgf128mul_x_ble_mask, GF128MUL_MASK
movups (IVP), IV
mov 480(KEYP), KLEN
- addq %rcx, KEYP
+.Lxts_enc_loop4:
movdqa IV, STATE1
movdqu 0x00(INP), INC
pxor INC, STATE1
@@ -2757,71 +2750,103 @@ SYM_FUNC_START(aesni_xts_crypt8)
pxor INC, STATE4
movdqu IV, 0x30(OUTP)
- CALL_NOSPEC r11
+ call _aesni_enc4
movdqu 0x00(OUTP), INC
pxor INC, STATE1
movdqu STATE1, 0x00(OUTP)
- _aesni_gf128mul_x_ble()
- movdqa IV, STATE1
- movdqu 0x40(INP), INC
- pxor INC, STATE1
- movdqu IV, 0x40(OUTP)
-
movdqu 0x10(OUTP), INC
pxor INC, STATE2
movdqu STATE2, 0x10(OUTP)
- _aesni_gf128mul_x_ble()
- movdqa IV, STATE2
- movdqu 0x50(INP), INC
- pxor INC, STATE2
- movdqu IV, 0x50(OUTP)
-
movdqu 0x20(OUTP), INC
pxor INC, STATE3
movdqu STATE3, 0x20(OUTP)
- _aesni_gf128mul_x_ble()
- movdqa IV, STATE3
- movdqu 0x60(INP), INC
- pxor INC, STATE3
- movdqu IV, 0x60(OUTP)
-
movdqu 0x30(OUTP), INC
pxor INC, STATE4
movdqu STATE4, 0x30(OUTP)
_aesni_gf128mul_x_ble()
- movdqa IV, STATE4
- movdqu 0x70(INP), INC
- pxor INC, STATE4
- movdqu IV, 0x70(OUTP)
- _aesni_gf128mul_x_ble()
+ add $64, INP
+ add $64, OUTP
+ sub $64, LEN
+ ja .Lxts_enc_loop4
+
movups IV, (IVP)
- CALL_NOSPEC r11
+ FRAME_END
+ ret
+SYM_FUNC_END(aesni_xts_encrypt)
+
+/*
+ * void aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *dst,
+ * const u8 *src, unsigned int len, le128 *iv)
+ */
+SYM_FUNC_START(aesni_xts_decrypt)
+ FRAME_BEGIN
+
+ movdqa .Lgf128mul_x_ble_mask, GF128MUL_MASK
+ movups (IVP), IV
+
+ mov 480(KEYP), KLEN
+ add $240, KEYP
+
+.Lxts_dec_loop4:
+ movdqa IV, STATE1
+ movdqu 0x00(INP), INC
+ pxor INC, STATE1
+ movdqu IV, 0x00(OUTP)
+
+ _aesni_gf128mul_x_ble()
+ movdqa IV, STATE2
+ movdqu 0x10(INP), INC
+ pxor INC, STATE2
+ movdqu IV, 0x10(OUTP)
+
+ _aesni_gf128mul_x_ble()
+ movdqa IV, STATE3
+ movdqu 0x20(INP), INC
+ pxor INC, STATE3
+ movdqu IV, 0x20(OUTP)
+
+ _aesni_gf128mul_x_ble()
+ movdqa IV, STATE4
+ movdqu 0x30(INP), INC
+ pxor INC, STATE4
+ movdqu IV, 0x30(OUTP)
+
+ call _aesni_dec4
- movdqu 0x40(OUTP), INC
+ movdqu 0x00(OUTP), INC
pxor INC, STATE1
- movdqu STATE1, 0x40(OUTP)
+ movdqu STATE1, 0x00(OUTP)
- movdqu 0x50(OUTP), INC
+ movdqu 0x10(OUTP), INC
pxor INC, STATE2
- movdqu STATE2, 0x50(OUTP)
+ movdqu STATE2, 0x10(OUTP)
- movdqu 0x60(OUTP), INC
+ movdqu 0x20(OUTP), INC
pxor INC, STATE3
- movdqu STATE3, 0x60(OUTP)
+ movdqu STATE3, 0x20(OUTP)
- movdqu 0x70(OUTP), INC
+ movdqu 0x30(OUTP), INC
pxor INC, STATE4
- movdqu STATE4, 0x70(OUTP)
+ movdqu STATE4, 0x30(OUTP)
+
+ _aesni_gf128mul_x_ble()
+
+ add $64, INP
+ add $64, OUTP
+ sub $64, LEN
+ ja .Lxts_dec_loop4
+
+ movups IV, (IVP)
FRAME_END
ret
-SYM_FUNC_END(aesni_xts_crypt8)
+SYM_FUNC_END(aesni_xts_decrypt)
#endif
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -97,6 +97,12 @@ asmlinkage void aesni_cbc_dec(struct cry
#define AVX_GEN2_OPTSIZE 640
#define AVX_GEN4_OPTSIZE 4096
+asmlinkage void aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *out,
+ const u8 *in, unsigned int len, u8 *iv);
+
+asmlinkage void aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out,
+ const u8 *in, unsigned int len, u8 *iv);
+
#ifdef CONFIG_X86_64
static void (*aesni_ctr_enc_tfm)(struct crypto_aes_ctx *ctx, u8 *out,
@@ -104,9 +110,6 @@ static void (*aesni_ctr_enc_tfm)(struct
asmlinkage void aesni_ctr_enc(struct crypto_aes_ctx *ctx, u8 *out,
const u8 *in, unsigned int len, u8 *iv);
-asmlinkage void aesni_xts_crypt8(const struct crypto_aes_ctx *ctx, u8 *out,
- const u8 *in, bool enc, le128 *iv);
-
/* asmlinkage void aesni_gcm_enc()
* void *ctx, AES Key schedule. Starts on a 16 byte boundary.
* struct gcm_context_data. May be uninitialized.
@@ -547,14 +550,14 @@ static void aesni_xts_dec(const void *ct
glue_xts_crypt_128bit_one(ctx, dst, src, iv, aesni_dec);
}
-static void aesni_xts_enc8(const void *ctx, u8 *dst, const u8 *src, le128 *iv)
+static void aesni_xts_enc32(const void *ctx, u8 *dst, const u8 *src, le128 *iv)
{
- aesni_xts_crypt8(ctx, dst, src, true, iv);
+ aesni_xts_encrypt(ctx, dst, src, 32 * AES_BLOCK_SIZE, (u8 *)iv);
}
-static void aesni_xts_dec8(const void *ctx, u8 *dst, const u8 *src, le128 *iv)
+static void aesni_xts_dec32(const void *ctx, u8 *dst, const u8 *src, le128 *iv)
{
- aesni_xts_crypt8(ctx, dst, src, false, iv);
+ aesni_xts_decrypt(ctx, dst, src, 32 * AES_BLOCK_SIZE, (u8 *)iv);
}
static const struct common_glue_ctx aesni_enc_xts = {
@@ -562,8 +565,8 @@ static const struct common_glue_ctx aesn
.fpu_blocks_limit = 1,
.funcs = { {
- .num_blocks = 8,
- .fn_u = { .xts = aesni_xts_enc8 }
+ .num_blocks = 32,
+ .fn_u = { .xts = aesni_xts_enc32 }
}, {
.num_blocks = 1,
.fn_u = { .xts = aesni_xts_enc }
@@ -575,8 +578,8 @@ static const struct common_glue_ctx aesn
.fpu_blocks_limit = 1,
.funcs = { {
- .num_blocks = 8,
- .fn_u = { .xts = aesni_xts_dec8 }
+ .num_blocks = 32,
+ .fn_u = { .xts = aesni_xts_dec32 }
}, {
.num_blocks = 1,
.fn_u = { .xts = aesni_xts_dec }
From: Bob Peterson <[email protected]>
[ Upstream commit c77b52c0a137994ad796f44544c802b0b766e496 ]
Many places in the gfs2 code queued and dequeued the freeze glock.
Almost all of them acquire it in SHARED mode, and need to specify the
same LM_FLAG_NOEXP and GL_EXACT flags.
This patch adds common helper functions gfs2_freeze_lock and gfs2_freeze_unlock
to make the code more readable, and to prepare for the next patch.
Signed-off-by: Bob Peterson <[email protected]>
Signed-off-by: Andreas Gruenbacher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/gfs2/ops_fstype.c | 6 ++----
fs/gfs2/recovery.c | 8 +++-----
fs/gfs2/super.c | 42 ++++++++++++++----------------------------
fs/gfs2/util.c | 25 +++++++++++++++++++++++++
fs/gfs2/util.h | 3 +++
5 files changed, 47 insertions(+), 37 deletions(-)
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index 61fce59cb4d3..4ee56f5e93cb 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -1198,14 +1198,12 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
if (sb_rdonly(sb)) {
struct gfs2_holder freeze_gh;
- error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED,
- LM_FLAG_NOEXP | GL_EXACT,
- &freeze_gh);
+ error = gfs2_freeze_lock(sdp, &freeze_gh, 0);
if (error) {
fs_err(sdp, "can't make FS RO: %d\n", error);
goto fail_per_node;
}
- gfs2_glock_dq_uninit(&freeze_gh);
+ gfs2_freeze_unlock(&freeze_gh);
} else {
error = gfs2_make_fs_rw(sdp);
if (error) {
diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c
index a3c1911862f0..8f9c6480a5df 100644
--- a/fs/gfs2/recovery.c
+++ b/fs/gfs2/recovery.c
@@ -470,9 +470,7 @@ void gfs2_recover_func(struct work_struct *work)
/* Acquire a shared hold on the freeze lock */
- error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED,
- LM_FLAG_NOEXP | LM_FLAG_PRIORITY |
- GL_EXACT, &thaw_gh);
+ error = gfs2_freeze_lock(sdp, &thaw_gh, LM_FLAG_PRIORITY);
if (error)
goto fail_gunlock_ji;
@@ -524,7 +522,7 @@ void gfs2_recover_func(struct work_struct *work)
clean_journal(jd, &head);
up_read(&sdp->sd_log_flush_lock);
- gfs2_glock_dq_uninit(&thaw_gh);
+ gfs2_freeze_unlock(&thaw_gh);
t_rep = ktime_get();
fs_info(sdp, "jid=%u: Journal replayed in %lldms [jlck:%lldms, "
"jhead:%lldms, tlck:%lldms, replay:%lldms]\n",
@@ -546,7 +544,7 @@ void gfs2_recover_func(struct work_struct *work)
goto done;
fail_gunlock_thaw:
- gfs2_glock_dq_uninit(&thaw_gh);
+ gfs2_freeze_unlock(&thaw_gh);
fail_gunlock_ji:
if (jlocked) {
gfs2_glock_dq_uninit(&ji_gh);
diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
index 2f56acc41c04..ea312a94ce69 100644
--- a/fs/gfs2/super.c
+++ b/fs/gfs2/super.c
@@ -173,9 +173,7 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
if (error)
return error;
- error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED,
- LM_FLAG_NOEXP | GL_EXACT,
- &freeze_gh);
+ error = gfs2_freeze_lock(sdp, &freeze_gh, 0);
if (error)
goto fail_threads;
@@ -205,12 +203,12 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
set_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
- gfs2_glock_dq_uninit(&freeze_gh);
+ gfs2_freeze_unlock(&freeze_gh);
return 0;
fail:
- gfs2_glock_dq_uninit(&freeze_gh);
+ gfs2_freeze_unlock(&freeze_gh);
fail_threads:
if (sdp->sd_quotad_process)
kthread_stop(sdp->sd_quotad_process);
@@ -452,7 +450,7 @@ static int gfs2_lock_fs_check_clean(struct gfs2_sbd *sdp)
}
if (error)
- gfs2_glock_dq_uninit(&sdp->sd_freeze_gh);
+ gfs2_freeze_unlock(&sdp->sd_freeze_gh);
out:
while (!list_empty(&list)) {
@@ -616,21 +614,12 @@ int gfs2_make_fs_ro(struct gfs2_sbd *sdp)
gfs2_holder_mark_uninitialized(&freeze_gh);
if (sdp->sd_freeze_gl &&
!gfs2_glock_is_locked_by_me(sdp->sd_freeze_gl)) {
- if (!log_write_allowed) {
- error = gfs2_glock_nq_init(sdp->sd_freeze_gl,
- LM_ST_SHARED, LM_FLAG_TRY |
- LM_FLAG_NOEXP | GL_EXACT,
- &freeze_gh);
- if (error == GLR_TRYFAILED)
- error = 0;
- } else {
- error = gfs2_glock_nq_init(sdp->sd_freeze_gl,
- LM_ST_SHARED,
- LM_FLAG_NOEXP | GL_EXACT,
- &freeze_gh);
- if (error && !gfs2_withdrawn(sdp))
- return error;
- }
+ error = gfs2_freeze_lock(sdp, &freeze_gh,
+ log_write_allowed ? 0 : LM_FLAG_TRY);
+ if (error == GLR_TRYFAILED)
+ error = 0;
+ if (error && !gfs2_withdrawn(sdp))
+ return error;
}
gfs2_flush_delete_work(sdp);
@@ -661,8 +650,7 @@ int gfs2_make_fs_ro(struct gfs2_sbd *sdp)
atomic_read(&sdp->sd_reserving_log) == 0,
HZ * 5);
}
- if (gfs2_holder_initialized(&freeze_gh))
- gfs2_glock_dq_uninit(&freeze_gh);
+ gfs2_freeze_unlock(&freeze_gh);
gfs2_quota_cleanup(sdp);
@@ -772,10 +760,8 @@ void gfs2_freeze_func(struct work_struct *work)
struct super_block *sb = sdp->sd_vfs;
atomic_inc(&sb->s_active);
- error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED,
- LM_FLAG_NOEXP | GL_EXACT, &freeze_gh);
+ error = gfs2_freeze_lock(sdp, &freeze_gh, 0);
if (error) {
- fs_info(sdp, "GFS2: couldn't get freeze lock : %d\n", error);
gfs2_assert_withdraw(sdp, 0);
} else {
atomic_set(&sdp->sd_freeze_state, SFS_UNFROZEN);
@@ -785,7 +771,7 @@ void gfs2_freeze_func(struct work_struct *work)
error);
gfs2_assert_withdraw(sdp, 0);
}
- gfs2_glock_dq_uninit(&freeze_gh);
+ gfs2_freeze_unlock(&freeze_gh);
}
deactivate_super(sb);
clear_bit_unlock(SDF_FS_FROZEN, &sdp->sd_flags);
@@ -853,7 +839,7 @@ static int gfs2_unfreeze(struct super_block *sb)
return 0;
}
- gfs2_glock_dq_uninit(&sdp->sd_freeze_gh);
+ gfs2_freeze_unlock(&sdp->sd_freeze_gh);
mutex_unlock(&sdp->sd_freeze_mutex);
return wait_on_bit(&sdp->sd_flags, SDF_FS_FROZEN, TASK_INTERRUPTIBLE);
}
diff --git a/fs/gfs2/util.c b/fs/gfs2/util.c
index 574bea29f21e..e6c93e811c3e 100644
--- a/fs/gfs2/util.c
+++ b/fs/gfs2/util.c
@@ -91,6 +91,31 @@ int check_journal_clean(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
return error;
}
+/**
+ * gfs2_freeze_lock - hold the freeze glock
+ * @sdp: the superblock
+ * @freeze_gh: pointer to the requested holder
+ * @caller_flags: any additional flags needed by the caller
+ */
+int gfs2_freeze_lock(struct gfs2_sbd *sdp, struct gfs2_holder *freeze_gh,
+ int caller_flags)
+{
+ int flags = LM_FLAG_NOEXP | GL_EXACT | caller_flags;
+ int error;
+
+ error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED, flags,
+ freeze_gh);
+ if (error && error != GLR_TRYFAILED)
+ fs_err(sdp, "can't lock the freeze lock: %d\n", error);
+ return error;
+}
+
+void gfs2_freeze_unlock(struct gfs2_holder *freeze_gh)
+{
+ if (gfs2_holder_initialized(freeze_gh))
+ gfs2_glock_dq_uninit(freeze_gh);
+}
+
static void signal_our_withdraw(struct gfs2_sbd *sdp)
{
struct gfs2_glock *live_gl = sdp->sd_live_gh.gh_gl;
diff --git a/fs/gfs2/util.h b/fs/gfs2/util.h
index a4443dd8a94b..69e1a0ae5a4d 100644
--- a/fs/gfs2/util.h
+++ b/fs/gfs2/util.h
@@ -149,6 +149,9 @@ int gfs2_io_error_i(struct gfs2_sbd *sdp, const char *function,
extern int check_journal_clean(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
bool verbose);
+extern int gfs2_freeze_lock(struct gfs2_sbd *sdp,
+ struct gfs2_holder *freeze_gh, int caller_flags);
+extern void gfs2_freeze_unlock(struct gfs2_holder *freeze_gh);
#define gfs2_io_error(sdp) \
gfs2_io_error_i((sdp), __func__, __FILE__, __LINE__)
--
2.30.1
From: Amir Goldstein <[email protected]>
commit 775c5033a0d164622d9d10dd0f0a5531639ed3ed upstream.
Commit 5d069dbe8aaf ("fuse: fix bad inode") replaced make_bad_inode()
in fuse_iget() with a private implementation fuse_make_bad().
The private implementation fails to remove the bad inode from inode
cache, so the retry loop with iget5_locked() finds the same bad inode
and marks it bad forever.
kmsg snip:
[ ] rcu: INFO: rcu_sched self-detected stall on CPU
...
[ ] ? bit_wait_io+0x50/0x50
[ ] ? fuse_init_file_inode+0x70/0x70
[ ] ? find_inode.isra.32+0x60/0xb0
[ ] ? fuse_init_file_inode+0x70/0x70
[ ] ilookup5_nowait+0x65/0x90
[ ] ? fuse_init_file_inode+0x70/0x70
[ ] ilookup5.part.36+0x2e/0x80
[ ] ? fuse_init_file_inode+0x70/0x70
[ ] ? fuse_inode_eq+0x20/0x20
[ ] iget5_locked+0x21/0x80
[ ] ? fuse_inode_eq+0x20/0x20
[ ] fuse_iget+0x96/0x1b0
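To make the live lock concrete, here is a sketch of the fuse_iget() retry
loop this interacts with (paraphrased, so treat the details as illustrative
rather than exact 5.10+ code):

    retry:
        inode = iget5_locked(sb, nodeid, fuse_inode_eq, fuse_inode_set, &nodeid);
        ...
        if (fuse_stale_inode(inode, generation, attr)) {
                fuse_make_bad(inode); /* without remove_inode_hash(), the */
                iput(inode);          /* inode stays hashed, so the next  */
                goto retry;           /* iget5_locked() finds it again    */
        }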
Fixes: 5d069dbe8aaf ("fuse: fix bad inode")
Cc: [email protected] # 5.10+
Signed-off-by: Amir Goldstein <[email protected]>
Signed-off-by: Miklos Szeredi <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/fuse/fuse_i.h | 1 +
1 file changed, 1 insertion(+)
--- a/fs/fuse/fuse_i.h
+++ b/fs/fuse/fuse_i.h
@@ -863,6 +863,7 @@ static inline u64 fuse_get_attr_version(
static inline void fuse_make_bad(struct inode *inode)
{
+ remove_inode_hash(inode);
set_bit(FUSE_I_BAD, &get_fuse_inode(inode)->state);
}
From: Bob Peterson <[email protected]>
[ Upstream commit 96b1454f2e8ede4c619fde405a1bb4e9ba8d218e ]
Before this patch, sister functions gfs2_make_fs_rw and gfs2_make_fs_ro locked
(held) the freeze glock by calling gfs2_freeze_lock and gfs2_freeze_unlock.
The problem is, not all the callers of gfs2_make_fs_ro should be doing this.
The three callers of gfs2_make_fs_ro are: remount (gfs2_reconfigure),
signal_our_withdraw, and unmount (gfs2_put_super). But when unmounting the
file system we can get into the following circular lock dependency:
deactivate_super
down_write(&s->s_umount); <-------------------------------------- s_umount
deactivate_locked_super
gfs2_kill_sb
kill_block_super
generic_shutdown_super
gfs2_put_super
gfs2_make_fs_ro
gfs2_glock_nq_init sd_freeze_gl
freeze_go_sync
if (freeze glock in SH)
freeze_super (vfs)
down_write(&sb->s_umount); <------- s_umount
This patch moves the hold of the freeze glock outside the two sister rw/ro
functions to their callers, but it doesn't request the glock from
gfs2_put_super, thus eliminating the circular dependency.
Signed-off-by: Bob Peterson <[email protected]>
Signed-off-by: Andreas Gruenbacher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/gfs2/ops_fstype.c | 31 +++++++++++++++++--------------
fs/gfs2/super.c | 23 -----------------------
fs/gfs2/util.c | 18 ++++++++++++++++--
3 files changed, 33 insertions(+), 39 deletions(-)
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index 4ee56f5e93cb..f2c6bbe5cdb8 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -1084,6 +1084,7 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
int silent = fc->sb_flags & SB_SILENT;
struct gfs2_sbd *sdp;
struct gfs2_holder mount_gh;
+ struct gfs2_holder freeze_gh;
int error;
sdp = init_sbd(sb);
@@ -1195,23 +1196,18 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
goto fail_per_node;
}
- if (sb_rdonly(sb)) {
- struct gfs2_holder freeze_gh;
+ error = gfs2_freeze_lock(sdp, &freeze_gh, 0);
+ if (error)
+ goto fail_per_node;
- error = gfs2_freeze_lock(sdp, &freeze_gh, 0);
- if (error) {
- fs_err(sdp, "can't make FS RO: %d\n", error);
- goto fail_per_node;
- }
- gfs2_freeze_unlock(&freeze_gh);
- } else {
+ if (!sb_rdonly(sb))
error = gfs2_make_fs_rw(sdp);
- if (error) {
- fs_err(sdp, "can't make FS RW: %d\n", error);
- goto fail_per_node;
- }
- }
+ gfs2_freeze_unlock(&freeze_gh);
+ if (error) {
+ fs_err(sdp, "can't make FS RW: %d\n", error);
+ goto fail_per_node;
+ }
gfs2_glock_dq_uninit(&mount_gh);
gfs2_online_uevent(sdp);
return 0;
@@ -1512,6 +1508,12 @@ static int gfs2_reconfigure(struct fs_context *fc)
fc->sb_flags |= SB_RDONLY;
if ((sb->s_flags ^ fc->sb_flags) & SB_RDONLY) {
+ struct gfs2_holder freeze_gh;
+
+ error = gfs2_freeze_lock(sdp, &freeze_gh, 0);
+ if (error)
+ return -EINVAL;
+
if (fc->sb_flags & SB_RDONLY) {
error = gfs2_make_fs_ro(sdp);
if (error)
@@ -1521,6 +1523,7 @@ static int gfs2_reconfigure(struct fs_context *fc)
if (error)
errorfc(fc, "unable to remount read-write");
}
+ gfs2_freeze_unlock(&freeze_gh);
}
sdp->sd_args = *newargs;
diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
index ea312a94ce69..754ea2a137b4 100644
--- a/fs/gfs2/super.c
+++ b/fs/gfs2/super.c
@@ -165,7 +165,6 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
{
struct gfs2_inode *ip = GFS2_I(sdp->sd_jdesc->jd_inode);
struct gfs2_glock *j_gl = ip->i_gl;
- struct gfs2_holder freeze_gh;
struct gfs2_log_header_host head;
int error;
@@ -173,10 +172,6 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
if (error)
return error;
- error = gfs2_freeze_lock(sdp, &freeze_gh, 0);
- if (error)
- goto fail_threads;
-
j_gl->gl_ops->go_inval(j_gl, DIO_METADATA);
if (gfs2_withdrawn(sdp)) {
error = -EIO;
@@ -203,13 +198,9 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
set_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
- gfs2_freeze_unlock(&freeze_gh);
-
return 0;
fail:
- gfs2_freeze_unlock(&freeze_gh);
-fail_threads:
if (sdp->sd_quotad_process)
kthread_stop(sdp->sd_quotad_process);
sdp->sd_quotad_process = NULL;
@@ -607,21 +598,9 @@ static void gfs2_dirty_inode(struct inode *inode, int flags)
int gfs2_make_fs_ro(struct gfs2_sbd *sdp)
{
- struct gfs2_holder freeze_gh;
int error = 0;
int log_write_allowed = test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
- gfs2_holder_mark_uninitialized(&freeze_gh);
- if (sdp->sd_freeze_gl &&
- !gfs2_glock_is_locked_by_me(sdp->sd_freeze_gl)) {
- error = gfs2_freeze_lock(sdp, &freeze_gh,
- log_write_allowed ? 0 : LM_FLAG_TRY);
- if (error == GLR_TRYFAILED)
- error = 0;
- if (error && !gfs2_withdrawn(sdp))
- return error;
- }
-
gfs2_flush_delete_work(sdp);
if (!log_write_allowed && current == sdp->sd_quotad_process)
fs_warn(sdp, "The quotad daemon is withdrawing.\n");
@@ -650,8 +629,6 @@ int gfs2_make_fs_ro(struct gfs2_sbd *sdp)
atomic_read(&sdp->sd_reserving_log) == 0,
HZ * 5);
}
- gfs2_freeze_unlock(&freeze_gh);
-
gfs2_quota_cleanup(sdp);
if (!log_write_allowed)
diff --git a/fs/gfs2/util.c b/fs/gfs2/util.c
index e6c93e811c3e..8d3c670c990f 100644
--- a/fs/gfs2/util.c
+++ b/fs/gfs2/util.c
@@ -123,6 +123,7 @@ static void signal_our_withdraw(struct gfs2_sbd *sdp)
struct gfs2_inode *ip = GFS2_I(inode);
struct gfs2_glock *i_gl = ip->i_gl;
u64 no_formal_ino = ip->i_no_formal_ino;
+ int log_write_allowed = test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
int ret = 0;
int tries;
@@ -143,8 +144,21 @@ static void signal_our_withdraw(struct gfs2_sbd *sdp)
* therefore we need to clear SDF_JOURNAL_LIVE manually.
*/
clear_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
- if (!sb_rdonly(sdp->sd_vfs))
- ret = gfs2_make_fs_ro(sdp);
+ if (!sb_rdonly(sdp->sd_vfs)) {
+ struct gfs2_holder freeze_gh;
+
+ gfs2_holder_mark_uninitialized(&freeze_gh);
+ if (sdp->sd_freeze_gl &&
+ !gfs2_glock_is_locked_by_me(sdp->sd_freeze_gl)) {
+ ret = gfs2_freeze_lock(sdp, &freeze_gh,
+ log_write_allowed ? 0 : LM_FLAG_TRY);
+ if (ret == GLR_TRYFAILED)
+ ret = 0;
+ }
+ if (!ret)
+ ret = gfs2_make_fs_ro(sdp);
+ gfs2_freeze_unlock(&freeze_gh);
+ }
if (sdp->sd_lockstruct.ls_ops->lm_lock == NULL) { /* lock_nolock */
if (!ret)
--
2.30.1
From: Bob Peterson <[email protected]>
[ Upstream commit d5bf630f355d8c532bef2347cf90e8ae60a5f1bd ]
Before this patch, function signal_our_withdraw referenced the journal
inode immediately. But corrupt file systems may have invalid journals, in
which case the attempt to read the journal in will withdraw, and the
resulting call into signal_our_withdraw would dereference a NULL pointer.
This patch adds a check to signal_our_withdraw so that if the journal
has not yet been initialized, it simply returns and does the old-style
withdraw.
Thanks to Andy Price for his analysis.
Reported-by: [email protected]
Fixes: 601ef0d52e96 ("gfs2: Force withdraw to replay journals and wait for it to finish")
Signed-off-by: Bob Peterson <[email protected]>
Signed-off-by: Andreas Gruenbacher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/gfs2/util.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/fs/gfs2/util.c b/fs/gfs2/util.c
index 8d3c670c990f..dc4985429cf2 100644
--- a/fs/gfs2/util.c
+++ b/fs/gfs2/util.c
@@ -119,17 +119,22 @@ void gfs2_freeze_unlock(struct gfs2_holder *freeze_gh)
static void signal_our_withdraw(struct gfs2_sbd *sdp)
{
struct gfs2_glock *live_gl = sdp->sd_live_gh.gh_gl;
- struct inode *inode = sdp->sd_jdesc->jd_inode;
- struct gfs2_inode *ip = GFS2_I(inode);
- struct gfs2_glock *i_gl = ip->i_gl;
- u64 no_formal_ino = ip->i_no_formal_ino;
+ struct inode *inode;
+ struct gfs2_inode *ip;
+ struct gfs2_glock *i_gl;
+ u64 no_formal_ino;
int log_write_allowed = test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
int ret = 0;
int tries;
- if (test_bit(SDF_NORECOVERY, &sdp->sd_flags))
+ if (test_bit(SDF_NORECOVERY, &sdp->sd_flags) || !sdp->sd_jdesc)
return;
+ inode = sdp->sd_jdesc->jd_inode;
+ ip = GFS2_I(inode);
+ i_gl = ip->i_gl;
+ no_formal_ino = ip->i_no_formal_ino;
+
/* Prevent any glock dq until withdraw recovery is complete */
set_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags);
/*
--
2.30.1
From: Piotr Krysiuk <[email protected]>
commit b5871dca250cd391885218b99cc015aca1a51aea upstream.
Instead of having the mov32 with aux->alu_limit - 1 immediate, move this
operation to retrieve_ptr_limit() instead to simplify the logic and to
allow for subsequent sanity boundary checks inside retrieve_ptr_limit().
This avoids a future situation where, at the time of the verifier masking
rewrite, we would run into an underflow that would not sign extend due to
the nature of the mov32 instruction.
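For reference, a rough C paraphrase of the masking sequence the verifier
emits after this change (a sketch of the sub/or/neg/arsh/and rewrite, not
the actual BPF instructions):

    u64 ax = aux->alu_limit;   /* was: alu_limit - 1, via a 32-bit mov  */
    ax -= off;                 /* sign bit set iff off > alu_limit      */
    ax |= off;                 /* sign bit also set iff off is negative */
    ax = -ax;
    ax = (s64)ax >> 63;        /* all-ones iff off is in [0, alu_limit] */
    off &= ax;                 /* out-of-range offsets collapse to 0    */

With the -1 folded into retrieve_ptr_limit(), an alu_limit of 0 can now be
caught there by a sanity check (see "bpf: Add sanity check for upper
ptr_limit" in this series) instead of underflowing the 32-bit mov.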
Signed-off-by: Piotr Krysiuk <[email protected]>
Co-developed-by: Daniel Borkmann <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
kernel/bpf/verifier.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5398,16 +5398,16 @@ static int retrieve_ptr_limit(const stru
*/
off = ptr_reg->off + ptr_reg->var_off.value;
if (mask_to_left)
- *ptr_limit = MAX_BPF_STACK + off + 1;
+ *ptr_limit = MAX_BPF_STACK + off;
else
- *ptr_limit = -off;
+ *ptr_limit = -off - 1;
return 0;
case PTR_TO_MAP_VALUE:
if (mask_to_left) {
- *ptr_limit = ptr_reg->umax_value + ptr_reg->off + 1;
+ *ptr_limit = ptr_reg->umax_value + ptr_reg->off;
} else {
off = ptr_reg->smin_value + ptr_reg->off;
- *ptr_limit = ptr_reg->map_ptr->value_size - off;
+ *ptr_limit = ptr_reg->map_ptr->value_size - off - 1;
}
return 0;
default:
@@ -11083,7 +11083,7 @@ static int fixup_bpf_calls(struct bpf_ve
off_reg = issrc ? insn->src_reg : insn->dst_reg;
if (isneg)
*patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
- *patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit - 1);
+ *patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
*patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);
*patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);
*patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
From: Florian Fainelli <[email protected]>
commit f9b3827ee66cfcf297d0acd6ecf33653a5f297ef upstream.
Add support for setting the learning attribute on a port, and make sure
that the standalone ports start up with learning disabled.
We can remove the code in bcm_sf2 that configured the ports' learning
attribute, because we want the standalone ports to have learning disabled
by default, and port 7 cannot be bridged, so its learning attribute will
not change past its initial configuration.
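The convention the new helper below relies on: B53_DIS_LEARNING carries one
disable bit per port, so a set bit means learning is off. A hypothetical
before/after for port 1 joining a bridge:

    u16 reg = 0x008f;   /* example state: learning off on ports 0-3 and 7 */
    reg &= ~BIT(1);     /* b53_br_join() enables learning: reg == 0x008d  */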
Signed-off-by: Florian Fainelli <[email protected]>
Reviewed-by: Vladimir Oltean <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/dsa/b53/b53_common.c | 18 ++++++++++++++++++
drivers/net/dsa/b53/b53_regs.h | 1 +
drivers/net/dsa/bcm_sf2.c | 15 +--------------
3 files changed, 20 insertions(+), 14 deletions(-)
--- a/drivers/net/dsa/b53/b53_common.c
+++ b/drivers/net/dsa/b53/b53_common.c
@@ -510,6 +510,19 @@ void b53_imp_vlan_setup(struct dsa_switc
}
EXPORT_SYMBOL(b53_imp_vlan_setup);
+static void b53_port_set_learning(struct b53_device *dev, int port,
+ bool learning)
+{
+ u16 reg;
+
+ b53_read16(dev, B53_CTRL_PAGE, B53_DIS_LEARNING, &reg);
+ if (learning)
+ reg &= ~BIT(port);
+ else
+ reg |= BIT(port);
+ b53_write16(dev, B53_CTRL_PAGE, B53_DIS_LEARNING, reg);
+}
+
int b53_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy)
{
struct b53_device *dev = ds->priv;
@@ -523,6 +536,7 @@ int b53_enable_port(struct dsa_switch *d
cpu_port = dsa_to_port(ds, port)->cpu_dp->index;
b53_br_egress_floods(ds, port, true, true);
+ b53_port_set_learning(dev, port, false);
if (dev->ops->irq_enable)
ret = dev->ops->irq_enable(dev, port);
@@ -656,6 +670,7 @@ static void b53_enable_cpu_port(struct b
b53_brcm_hdr_setup(dev->ds, port);
b53_br_egress_floods(dev->ds, port, true, true);
+ b53_port_set_learning(dev, port, false);
}
static void b53_enable_mib(struct b53_device *dev)
@@ -1839,6 +1854,8 @@ int b53_br_join(struct dsa_switch *ds, i
b53_write16(dev, B53_PVLAN_PAGE, B53_PVLAN_PORT_MASK(port), pvlan);
dev->ports[port].vlan_ctl_mask = pvlan;
+ b53_port_set_learning(dev, port, true);
+
return 0;
}
EXPORT_SYMBOL(b53_br_join);
@@ -1886,6 +1903,7 @@ void b53_br_leave(struct dsa_switch *ds,
vl->untag |= BIT(port) | BIT(cpu_port);
b53_set_vlan_entry(dev, pvid, vl);
}
+ b53_port_set_learning(dev, port, false);
}
EXPORT_SYMBOL(b53_br_leave);
--- a/drivers/net/dsa/b53/b53_regs.h
+++ b/drivers/net/dsa/b53/b53_regs.h
@@ -115,6 +115,7 @@
#define B53_UC_FLOOD_MASK 0x32
#define B53_MC_FLOOD_MASK 0x34
#define B53_IPMC_FLOOD_MASK 0x36
+#define B53_DIS_LEARNING 0x3c
/*
* Override Ports 0-7 State on devices with xMII interfaces (8 bit)
--- a/drivers/net/dsa/bcm_sf2.c
+++ b/drivers/net/dsa/bcm_sf2.c
@@ -222,23 +222,10 @@ static int bcm_sf2_port_setup(struct dsa
reg &= ~P_TXQ_PSM_VDD(port);
core_writel(priv, reg, CORE_MEM_PSM_VDD_CTRL);
- /* Enable learning */
- reg = core_readl(priv, CORE_DIS_LEARN);
- reg &= ~BIT(port);
- core_writel(priv, reg, CORE_DIS_LEARN);
-
/* Enable Broadcom tags for that port if requested */
- if (priv->brcm_tag_mask & BIT(port)) {
+ if (priv->brcm_tag_mask & BIT(port))
b53_brcm_hdr_setup(ds, port);
- /* Disable learning on ASP port */
- if (port == 7) {
- reg = core_readl(priv, CORE_DIS_LEARN);
- reg |= BIT(port);
- core_writel(priv, reg, CORE_DIS_LEARN);
- }
- }
-
/* Configure Traffic Class to QoS mapping, allow each priority to map
* to a different queue number
*/
From: Piotr Krysiuk <[email protected]>
commit 0a13e3537ea67452d549a6a80da3776d6b7dedb3 upstream.
Fix up test_verifier error messages for the case where the original error
message changed, or for the case where pointer alu errors differ between
privileged and unprivileged tests. Also, add alternative tests for keeping
coverage of the original verifier rejection error message (fp alu), and
newly reject map_ptr += rX where rX == 0 given we now forbid alu on these
types for unprivileged. All test_verifier cases pass after the change. The
test case fixups were kept separate to ease backporting of core changes.
Signed-off-by: Piotr Krysiuk <[email protected]>
Co-developed-by: Daniel Borkmann <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
tools/testing/selftests/bpf/verifier/bounds_deduction.c | 27 +++++++++++-----
tools/testing/selftests/bpf/verifier/map_ptr.c | 4 ++
tools/testing/selftests/bpf/verifier/unpriv.c | 15 ++++++++
tools/testing/selftests/bpf/verifier/value_ptr_arith.c | 23 +++++++++++++
4 files changed, 59 insertions(+), 10 deletions(-)
--- a/tools/testing/selftests/bpf/verifier/bounds_deduction.c
+++ b/tools/testing/selftests/bpf/verifier/bounds_deduction.c
@@ -6,8 +6,9 @@
BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
- .result = REJECT,
+ .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
.errstr = "R0 tried to subtract pointer from scalar",
+ .result = REJECT,
},
{
"check deducing bounds from const, 2",
@@ -20,6 +21,8 @@
BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
BPF_EXIT_INSN(),
},
+ .errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
+ .result_unpriv = REJECT,
.result = ACCEPT,
.retval = 1,
},
@@ -31,8 +34,9 @@
BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
- .result = REJECT,
+ .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
.errstr = "R0 tried to subtract pointer from scalar",
+ .result = REJECT,
},
{
"check deducing bounds from const, 4",
@@ -45,6 +49,8 @@
BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
BPF_EXIT_INSN(),
},
+ .errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
+ .result_unpriv = REJECT,
.result = ACCEPT,
},
{
@@ -55,8 +61,9 @@
BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
- .result = REJECT,
+ .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
.errstr = "R0 tried to subtract pointer from scalar",
+ .result = REJECT,
},
{
"check deducing bounds from const, 6",
@@ -67,8 +74,9 @@
BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
- .result = REJECT,
+ .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
.errstr = "R0 tried to subtract pointer from scalar",
+ .result = REJECT,
},
{
"check deducing bounds from const, 7",
@@ -80,8 +88,9 @@
offsetof(struct __sk_buff, mark)),
BPF_EXIT_INSN(),
},
- .result = REJECT,
+ .errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
.errstr = "dereference of modified ctx ptr",
+ .result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
@@ -94,8 +103,9 @@
offsetof(struct __sk_buff, mark)),
BPF_EXIT_INSN(),
},
- .result = REJECT,
+ .errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
.errstr = "dereference of modified ctx ptr",
+ .result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
@@ -106,8 +116,9 @@
BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
- .result = REJECT,
+ .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
.errstr = "R0 tried to subtract pointer from scalar",
+ .result = REJECT,
},
{
"check deducing bounds from const, 10",
@@ -119,6 +130,6 @@
BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
- .result = REJECT,
.errstr = "math between ctx pointer and register with unbounded min value is not allowed",
+ .result = REJECT,
},
--- a/tools/testing/selftests/bpf/verifier/map_ptr.c
+++ b/tools/testing/selftests/bpf/verifier/map_ptr.c
@@ -75,6 +75,8 @@
BPF_EXIT_INSN(),
},
.fixup_map_hash_16b = { 4 },
+ .result_unpriv = REJECT,
+ .errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
.result = ACCEPT,
},
{
@@ -91,5 +93,7 @@
BPF_EXIT_INSN(),
},
.fixup_map_hash_16b = { 4 },
+ .result_unpriv = REJECT,
+ .errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
.result = ACCEPT,
},
--- a/tools/testing/selftests/bpf/verifier/unpriv.c
+++ b/tools/testing/selftests/bpf/verifier/unpriv.c
@@ -496,7 +496,7 @@
.result = ACCEPT,
},
{
- "unpriv: adding of fp",
+ "unpriv: adding of fp, reg",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_MOV64_IMM(BPF_REG_1, 0),
@@ -504,6 +504,19 @@
BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
BPF_EXIT_INSN(),
},
+ .errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
+ .result_unpriv = REJECT,
+ .result = ACCEPT,
+},
+{
+ "unpriv: adding of fp, imm",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0),
+ BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
+ BPF_EXIT_INSN(),
+ },
.errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
.result_unpriv = REJECT,
.result = ACCEPT,
--- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+++ b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
@@ -169,7 +169,7 @@
.fixup_map_array_48b = { 1 },
.result = ACCEPT,
.result_unpriv = REJECT,
- .errstr_unpriv = "R2 tried to add from different maps or paths",
+ .errstr_unpriv = "R2 tried to add from different maps, paths, or prohibited types",
.retval = 0,
},
{
@@ -517,6 +517,27 @@
.retval = 0xabcdef12,
},
{
+ "map access: value_ptr += N, value_ptr -= N known scalar",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+ BPF_MOV32_IMM(BPF_REG_1, 0x12345678),
+ BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2),
+ BPF_MOV64_IMM(BPF_REG_1, 2),
+ BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+ BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map_array_48b = { 3 },
+ .result = ACCEPT,
+ .retval = 0x12345678,
+},
+{
"map access: unknown scalar += value_ptr, 1",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
From: Frieder Schrempf <[email protected]>
[ Upstream commit f7684f5a048febd2a7bc98ee81d6dce52f7268b8 ]
By default the PCA9450 doesn't handle the assertion of the WDOG_B
signal, but this is required to guarantee that things like software
resets triggered by the watchdog work reliably.
As we don't want to rely on the bootloader to enable this, we tell
the PMIC to issue a cold reset in case the WDOG_B signal is
asserted (WDOG_B_CFG = 10), just as the NXP U-Boot code does.
Signed-off-by: Frieder Schrempf <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/regulator/pca9450-regulator.c | 8 ++++++++
include/linux/regulator/pca9450.h | 7 +++++++
2 files changed, 15 insertions(+)
diff --git a/drivers/regulator/pca9450-regulator.c b/drivers/regulator/pca9450-regulator.c
index 1bba8fdcb7b7..833d398c6aa2 100644
--- a/drivers/regulator/pca9450-regulator.c
+++ b/drivers/regulator/pca9450-regulator.c
@@ -797,6 +797,14 @@ static int pca9450_i2c_probe(struct i2c_client *i2c,
return ret;
}
+ /* Set reset behavior on assertion of WDOG_B signal */
+ ret = regmap_update_bits(pca9450->regmap, PCA9450_REG_RESET_CTRL,
+ WDOG_B_CFG_MASK, WDOG_B_CFG_COLD_LDO12);
+ if (ret) {
+ dev_err(&i2c->dev, "Failed to set WDOG_B reset behavior\n");
+ return ret;
+ }
+
/*
* The driver uses the LDO5CTRL_H register to control the LDO5 regulator.
* This is only valid if the SD_VSEL input of the PMIC is high. Let's
diff --git a/include/linux/regulator/pca9450.h b/include/linux/regulator/pca9450.h
index 1bbd3014f906..ccdb5320a240 100644
--- a/include/linux/regulator/pca9450.h
+++ b/include/linux/regulator/pca9450.h
@@ -216,4 +216,11 @@ enum {
#define IRQ_THERM_105 0x02
#define IRQ_THERM_125 0x01
+/* PCA9450_REG_RESET_CTRL bits */
+#define WDOG_B_CFG_MASK 0xC0
+#define WDOG_B_CFG_NONE 0x00
+#define WDOG_B_CFG_WARM 0x40
+#define WDOG_B_CFG_COLD_LDO12 0x80
+#define WDOG_B_CFG_COLD 0xC0
+
#endif /* __LINUX_REG_PCA9450_H__ */
--
2.30.1
From: Geliang Tang <[email protected]>
[ Upstream commit b5a7acd3bd63c7430c98d7f66d0aa457c9ccde30 ]
This patch changes the conditions for sending an ACK for the ADD_ADDR:
send an ACK packet for any ADD_ADDR, not just when IPv6 addresses or port
numbers are included.
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/139
Acked-by: Paolo Abeni <[email protected]>
Signed-off-by: Geliang Tang <[email protected]>
Signed-off-by: Mat Martineau <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
net/mptcp/pm.c | 3 +--
net/mptcp/pm_netlink.c | 10 ++++------
2 files changed, 5 insertions(+), 8 deletions(-)
diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
index da2ed576f289..5463d7c8c931 100644
--- a/net/mptcp/pm.c
+++ b/net/mptcp/pm.c
@@ -188,8 +188,7 @@ void mptcp_pm_add_addr_received(struct mptcp_sock *msk,
void mptcp_pm_add_addr_send_ack(struct mptcp_sock *msk)
{
- if (!mptcp_pm_should_add_signal_ipv6(msk) &&
- !mptcp_pm_should_add_signal_port(msk))
+ if (!mptcp_pm_should_add_signal(msk))
return;
mptcp_pm_schedule_work(msk, MPTCP_PM_ADD_ADDR_SEND_ACK);
diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
index a6d983d80576..b81ce0ea1f8b 100644
--- a/net/mptcp/pm_netlink.c
+++ b/net/mptcp/pm_netlink.c
@@ -408,8 +408,7 @@ void mptcp_pm_nl_add_addr_send_ack(struct mptcp_sock *msk)
{
struct mptcp_subflow_context *subflow;
- if (!mptcp_pm_should_add_signal_ipv6(msk) &&
- !mptcp_pm_should_add_signal_port(msk))
+ if (!mptcp_pm_should_add_signal(msk))
return;
__mptcp_flush_join_list(msk);
@@ -419,10 +418,9 @@ void mptcp_pm_nl_add_addr_send_ack(struct mptcp_sock *msk)
u8 add_addr;
spin_unlock_bh(&msk->pm.lock);
- if (mptcp_pm_should_add_signal_ipv6(msk))
- pr_debug("send ack for add_addr6");
- if (mptcp_pm_should_add_signal_port(msk))
- pr_debug("send ack for add_addr_port");
+ pr_debug("send ack for add_addr%s%s",
+ mptcp_pm_should_add_signal_ipv6(msk) ? " [ipv6]" : "",
+ mptcp_pm_should_add_signal_port(msk) ? " [port]" : "");
lock_sock(ssk);
tcp_send_ack(ssk);
--
2.30.1
From: Florian Westphal <[email protected]>
[ Upstream commit 3abc05d9ef6fe989706b679e1e6371d6360d3db4 ]
Add a few assertions to make sure functions are called with the needed
locks held.
Two functions gain might_sleep annotations because they contain
conditional calls to functions that sleep.
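A minimal sketch of the annotation styles added below (the helper name is
invented for illustration):

    static void example_pm_helper(struct mptcp_sock *msk)
    {
            msk_owned_by_me(msk);               /* msk socket lock held */
            lockdep_assert_held(&msk->pm.lock); /* PM spinlock held     */
            might_sleep(); /* may sleep, even if only on some paths     */
    }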
Signed-off-by: Florian Westphal <[email protected]>
Signed-off-by: Mat Martineau <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
net/mptcp/pm.c | 2 ++
net/mptcp/pm_netlink.c | 13 +++++++++++++
net/mptcp/protocol.c | 4 ++++
net/mptcp/protocol.h | 5 +++++
4 files changed, 24 insertions(+)
diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
index 5463d7c8c931..1c01c3bcbf5a 100644
--- a/net/mptcp/pm.c
+++ b/net/mptcp/pm.c
@@ -20,6 +20,8 @@ int mptcp_pm_announce_addr(struct mptcp_sock *msk,
pr_debug("msk=%p, local_id=%d", msk, addr->id);
+ lockdep_assert_held(&msk->pm.lock);
+
if (add_addr) {
pr_warn("addr_signal error, add_addr=%d", add_addr);
return -EINVAL;
diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
index b81ce0ea1f8b..71c41b948861 100644
--- a/net/mptcp/pm_netlink.c
+++ b/net/mptcp/pm_netlink.c
@@ -134,6 +134,8 @@ select_local_address(const struct pm_nl_pernet *pernet,
{
struct mptcp_pm_addr_entry *entry, *ret = NULL;
+ msk_owned_by_me(msk);
+
rcu_read_lock();
__mptcp_flush_join_list(msk);
list_for_each_entry_rcu(entry, &pernet->local_addr_list, list) {
@@ -191,6 +193,8 @@ lookup_anno_list_by_saddr(struct mptcp_sock *msk,
{
struct mptcp_pm_add_entry *entry;
+ lockdep_assert_held(&msk->pm.lock);
+
list_for_each_entry(entry, &msk->pm.anno_list, list) {
if (addresses_equal(&entry->addr, addr, false))
return entry;
@@ -266,6 +270,8 @@ static bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk,
struct sock *sk = (struct sock *)msk;
struct net *net = sock_net(sk);
+ lockdep_assert_held(&msk->pm.lock);
+
if (lookup_anno_list_by_saddr(msk, &entry->addr))
return false;
@@ -408,6 +414,9 @@ void mptcp_pm_nl_add_addr_send_ack(struct mptcp_sock *msk)
{
struct mptcp_subflow_context *subflow;
+ msk_owned_by_me(msk);
+ lockdep_assert_held(&msk->pm.lock);
+
if (!mptcp_pm_should_add_signal(msk))
return;
@@ -443,6 +452,8 @@ void mptcp_pm_nl_rm_addr_received(struct mptcp_sock *msk)
pr_debug("address rm_id %d", msk->pm.rm_id);
+ msk_owned_by_me(msk);
+
if (!msk->pm.rm_id)
return;
@@ -478,6 +489,8 @@ void mptcp_pm_nl_rm_subflow_received(struct mptcp_sock *msk, u8 rm_id)
pr_debug("subflow rm_id %d", rm_id);
+ msk_owned_by_me(msk);
+
if (!rm_id)
return;
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 056846eb2e5b..64b8a49652ae 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2186,6 +2186,8 @@ static void __mptcp_close_subflow(struct mptcp_sock *msk)
{
struct mptcp_subflow_context *subflow, *tmp;
+ might_sleep();
+
list_for_each_entry_safe(subflow, tmp, &msk->conn_list, node) {
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
@@ -2529,6 +2531,8 @@ static void __mptcp_destroy_sock(struct sock *sk)
pr_debug("msk=%p", msk);
+ might_sleep();
+
/* dispose the ancillatory tcp socket, if any */
if (msk->subflow) {
iput(SOCK_INODE(msk->subflow));
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 18fef4273bdc..c374345ad134 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -286,6 +286,11 @@ struct mptcp_sock {
#define mptcp_for_each_subflow(__msk, __subflow) \
list_for_each_entry(__subflow, &((__msk)->conn_list), node)
+static inline void msk_owned_by_me(const struct mptcp_sock *msk)
+{
+ sock_owned_by_me((const struct sock *)msk);
+}
+
static inline struct mptcp_sock *mptcp_sk(const struct sock *sk)
{
return (struct mptcp_sock *)sk;
--
2.30.1
From: Nicolas Morey-Chaisemartin <[email protected]>
commit 2b5715fc17386a6223490d5b8f08d031999b0c0b upstream.
The current code computes a number of channels per SRP target and spreads
them equally across all online NUMA nodes. Each channel is then assigned
a CPU within this node.
In the case of unbalanced, or even unpopulated, nodes, some channels do
not get a CPU assigned and thus never get connected. This causes the
SRP connection to fail.
This patch solves the issue by rewriting channel computation and
allocation:
- Drop channel to node/CPU association as it had no real effect on
  locality but added unnecessary complexity.
- Tweak the number of channels allocated to reduce CPU contention when
  possible:
  - Up to one channel per CPU (instead of up to 4 per node)
  - At least 4 channels per node, unless the ch_count module parameter
    is used (see the worked example below).
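As a worked example with hypothetical numbers (not taken from the
patch): on a host with 2 online NUMA nodes, 64 online CPUs and an HCA
exposing 16 completion vectors, and ch_count left unset, the hunk below
computes

        target->ch_count = min(max(4 * 2, 16), 64);   /* = 16 channels */

and each channel is then spread across the vectors with

        ch->comp_vector = ch_idx % ibdev->num_comp_vectors;

independent of CPU and node topology, so unpopulated or unbalanced
nodes can no longer leave channels without a CPU.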
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Nicolas Morey-Chaisemartin <[email protected]>
Reviewed-by: Bart Van Assche <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
Cc: Yi Zhang <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/infiniband/ulp/srp/ib_srp.c | 116 ++++++++++++++----------------------
1 file changed, 48 insertions(+), 68 deletions(-)
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -3628,7 +3628,7 @@ static ssize_t srp_create_target(struct
struct srp_rdma_ch *ch;
struct srp_device *srp_dev = host->srp_dev;
struct ib_device *ibdev = srp_dev->dev;
- int ret, node_idx, node, cpu, i;
+ int ret, i, ch_idx;
unsigned int max_sectors_per_mr, mr_per_cmd = 0;
bool multich = false;
uint32_t max_iu_len;
@@ -3753,81 +3753,61 @@ static ssize_t srp_create_target(struct
goto out;
ret = -ENOMEM;
- if (target->ch_count == 0)
+ if (target->ch_count == 0) {
target->ch_count =
- max_t(unsigned int, num_online_nodes(),
- min(ch_count ?:
- min(4 * num_online_nodes(),
- ibdev->num_comp_vectors),
- num_online_cpus()));
+ min(ch_count ?:
+ max(4 * num_online_nodes(),
+ ibdev->num_comp_vectors),
+ num_online_cpus());
+ }
+
target->ch = kcalloc(target->ch_count, sizeof(*target->ch),
GFP_KERNEL);
if (!target->ch)
goto out;
- node_idx = 0;
- for_each_online_node(node) {
- const int ch_start = (node_idx * target->ch_count /
- num_online_nodes());
- const int ch_end = ((node_idx + 1) * target->ch_count /
- num_online_nodes());
- const int cv_start = node_idx * ibdev->num_comp_vectors /
- num_online_nodes();
- const int cv_end = (node_idx + 1) * ibdev->num_comp_vectors /
- num_online_nodes();
- int cpu_idx = 0;
-
- for_each_online_cpu(cpu) {
- if (cpu_to_node(cpu) != node)
- continue;
- if (ch_start + cpu_idx >= ch_end)
- continue;
- ch = &target->ch[ch_start + cpu_idx];
- ch->target = target;
- ch->comp_vector = cv_start == cv_end ? cv_start :
- cv_start + cpu_idx % (cv_end - cv_start);
- spin_lock_init(&ch->lock);
- INIT_LIST_HEAD(&ch->free_tx);
- ret = srp_new_cm_id(ch);
- if (ret)
- goto err_disconnect;
-
- ret = srp_create_ch_ib(ch);
- if (ret)
- goto err_disconnect;
-
- ret = srp_alloc_req_data(ch);
- if (ret)
- goto err_disconnect;
-
- ret = srp_connect_ch(ch, max_iu_len, multich);
- if (ret) {
- char dst[64];
-
- if (target->using_rdma_cm)
- snprintf(dst, sizeof(dst), "%pIS",
- &target->rdma_cm.dst);
- else
- snprintf(dst, sizeof(dst), "%pI6",
- target->ib_cm.orig_dgid.raw);
- shost_printk(KERN_ERR, target->scsi_host,
- PFX "Connection %d/%d to %s failed\n",
- ch_start + cpu_idx,
- target->ch_count, dst);
- if (node_idx == 0 && cpu_idx == 0) {
- goto free_ch;
- } else {
- srp_free_ch_ib(target, ch);
- srp_free_req_data(target, ch);
- target->ch_count = ch - target->ch;
- goto connected;
- }
+ for (ch_idx = 0; ch_idx < target->ch_count; ++ch_idx) {
+ ch = &target->ch[ch_idx];
+ ch->target = target;
+ ch->comp_vector = ch_idx % ibdev->num_comp_vectors;
+ spin_lock_init(&ch->lock);
+ INIT_LIST_HEAD(&ch->free_tx);
+ ret = srp_new_cm_id(ch);
+ if (ret)
+ goto err_disconnect;
+
+ ret = srp_create_ch_ib(ch);
+ if (ret)
+ goto err_disconnect;
+
+ ret = srp_alloc_req_data(ch);
+ if (ret)
+ goto err_disconnect;
+
+ ret = srp_connect_ch(ch, max_iu_len, multich);
+ if (ret) {
+ char dst[64];
+
+ if (target->using_rdma_cm)
+ snprintf(dst, sizeof(dst), "%pIS",
+ &target->rdma_cm.dst);
+ else
+ snprintf(dst, sizeof(dst), "%pI6",
+ target->ib_cm.orig_dgid.raw);
+ shost_printk(KERN_ERR, target->scsi_host,
+ PFX "Connection %d/%d to %s failed\n",
+ ch_idx,
+ target->ch_count, dst);
+ if (ch_idx == 0) {
+ goto free_ch;
+ } else {
+ srp_free_ch_ib(target, ch);
+ srp_free_req_data(target, ch);
+ target->ch_count = ch - target->ch;
+ goto connected;
}
-
- multich = true;
- cpu_idx++;
}
- node_idx++;
+ multich = true;
}
connected:
From: J. Bruce Fields <[email protected]>
commit 4aa5e002034f0701c3335379fd6c22d7f3338cce upstream.
This reverts commit 50747dd5e47b "nfsd4: remove check_conflicting_opens
warning", as a prerequisite for reverting 94415b06eb8a, which has a
serious bug.
Cc: [email protected]
Signed-off-by: J. Bruce Fields <[email protected]>
Signed-off-by: Chuck Lever <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/nfsd/nfs4state.c | 1 +
1 file changed, 1 insertion(+)
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -4957,6 +4957,7 @@ static int nfsd4_check_conflicting_opens
writes--;
if (fp->fi_fds[O_RDWR])
writes--;
+ WARN_ON_ONCE(writes < 0);
if (writes > 0)
return -EAGAIN;
spin_lock(&fp->fi_lock);
From: Piotr Krysiuk <[email protected]>
commit f232326f6966cf2a1d1db7bc917a4ce5f9f55f76 upstream.
The purpose of this patch is to streamline error propagation and, in
particular, to propagate retrieve_ptr_limit() errors for pointer types
that do not define a ptr_limit, so that register-based alu ops against
these types can be rejected.
The main rationale is that a gap has been identified by Piotr in the
existing protection against speculatively out-of-bounds loads: for
example, in the case of ctx pointers, unprivileged programs can still
perform pointer arithmetic. This can be abused to execute speculatively
out-of-bounds loads without restrictions and thus extract contents of
kernel memory.
Fix this by rejecting unprivileged programs that attempt any pointer
arithmetic on unprotected pointer types. The two affected ones are
pointer to ctx as well as pointer to map. Field access through a
modified ctx pointer is rejected at a later point in the verifier, and
7c6967326267 ("bpf: Permit map_ptr arithmetic with opcode add and
offset 0") is only relevant for root-only use cases. The risk of
unprivileged program breakage is considered very low.
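For illustration, a hypothetical unprivileged program along the
following lines (sketched with the insn macros from
include/linux/filter.h; not taken from the patch or the selftests) is
now rejected, because PTR_TO_CTX defines no ptr_limit:

        struct bpf_insn prog[] = {
                BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),   /* r2 = ctx */
                BPF_MOV64_IMM(BPF_REG_3, 8),           /* r3 = 8 */
                /* register-based add on a ctx pointer: the
                 * retrieve_ptr_limit() error is now propagated
                 * instead of being ignored, which previously let
                 * the op through unsanitized
                 */
                BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
                BPF_MOV64_IMM(BPF_REG_0, 0),
                BPF_EXIT_INSN(),
        };

Immediate-operand alu is unaffected, as sanitation is skipped for
BPF_K source operands.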
Fixes: 7c6967326267 ("bpf: Permit map_ptr arithmetic with opcode add and offset 0")
Fixes: b2157399cc98 ("bpf: prevent out-of-bounds speculation")
Signed-off-by: Piotr Krysiuk <[email protected]>
Co-developed-by: Daniel Borkmann <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
kernel/bpf/verifier.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5462,6 +5462,7 @@ static int sanitize_ptr_alu(struct bpf_v
u32 alu_state, alu_limit;
struct bpf_reg_state tmp;
bool ret;
+ int err;
if (can_skip_alu_sanitation(env, insn))
return 0;
@@ -5477,10 +5478,13 @@ static int sanitize_ptr_alu(struct bpf_v
alu_state |= ptr_is_dst_reg ?
BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
- if (retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg))
- return 0;
- if (update_alu_sanitation_state(aux, alu_state, alu_limit))
- return -EACCES;
+ err = retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg);
+ if (err < 0)
+ return err;
+
+ err = update_alu_sanitation_state(aux, alu_state, alu_limit);
+ if (err < 0)
+ return err;
do_sim:
/* Simulate and find potential out-of-bounds access under
* speculative execution from truncation as a result of
@@ -5596,7 +5600,7 @@ static int adjust_ptr_min_max_vals(struc
case BPF_ADD:
ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
if (ret < 0) {
- verbose(env, "R%d tried to add from different maps or paths\n", dst);
+ verbose(env, "R%d tried to add from different maps, paths, or prohibited types\n", dst);
return ret;
}
/* We can take a fixed offset as long as it doesn't overflow
@@ -5651,7 +5655,7 @@ static int adjust_ptr_min_max_vals(struc
case BPF_SUB:
ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
if (ret < 0) {
- verbose(env, "R%d tried to sub from different maps or paths\n", dst);
+ verbose(env, "R%d tried to sub from different maps, paths, or prohibited types\n", dst);
return ret;
}
if (dst_reg == off_reg) {
From: Piotr Krysiuk <[email protected]>
commit 1b1597e64e1a610c7a96710fc4717158e98a08b3 upstream.
Given we know the maximum possible value of ptr_limit at the time it is
retrieved, add basic assertions so that the verifier can bail out and
reject the program if anything looks odd. Nothing has triggered this so
far, but it also does not hurt to have these checks.
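As a rough illustration with the constants involved (MAX_BPF_STACK is
512; values derived from the hunks below, not spelled out in them):

        /* PTR_TO_STACK, masking left: a pointer at fp (off == 0)
         * gives ptr_limit = 512, which passes the max = 512 + 1
         * check; any computed limit >= 513 now fails with -ERANGE.
         * PTR_TO_MAP_VALUE: the limit must stay below value_size.
         */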
Signed-off-by: Piotr Krysiuk <[email protected]>
Co-developed-by: Daniel Borkmann <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
kernel/bpf/verifier.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5389,10 +5389,14 @@ static int retrieve_ptr_limit(const stru
{
bool mask_to_left = (opcode == BPF_ADD && off_is_neg) ||
(opcode == BPF_SUB && !off_is_neg);
- u32 off;
+ u32 off, max;
switch (ptr_reg->type) {
case PTR_TO_STACK:
+ /* Offset 0 is out-of-bounds, but acceptable start for the
+ * left direction, see BPF_REG_FP.
+ */
+ max = MAX_BPF_STACK + mask_to_left;
/* Indirect variable offset stack access is prohibited in
* unprivileged mode so it's not handled here.
*/
@@ -5401,15 +5405,16 @@ static int retrieve_ptr_limit(const stru
*ptr_limit = MAX_BPF_STACK + off;
else
*ptr_limit = -off - 1;
- return 0;
+ return *ptr_limit >= max ? -ERANGE : 0;
case PTR_TO_MAP_VALUE:
+ max = ptr_reg->map_ptr->value_size;
if (mask_to_left) {
*ptr_limit = ptr_reg->umax_value + ptr_reg->off;
} else {
off = ptr_reg->smin_value + ptr_reg->off;
*ptr_limit = ptr_reg->map_ptr->value_size - off - 1;
}
- return 0;
+ return *ptr_limit >= max ? -ERANGE : 0;
default:
return -EINVAL;
}
From: Piotr Krysiuk <[email protected]>
commit 10d2bb2e6b1d8c4576c56a748f697dbeb8388899 upstream.
retrieve_ptr_limit() computes the ptr_limit for registers with stack and
map_value type. ptr_limit is the size of the memory area that is still
valid / in-bounds from the point of the current position and direction
of the operation (add / sub). This size will later be used for masking
the operation such that attempting out-of-bounds access in the speculative
domain is redirected to remain within the bounds of the current map value.
When masking to the right the size is correct, however, when masking to
the left, the size is off-by-one which would lead to an incorrect mask
and thus incorrect arithmetic operation in the non-speculative domain.
Piotr found that if the resulting alu_limit value is zero, the
BPF_MOV32_IMM() from the fixup_bpf_calls() rewrite ends up loading
0xffffffff into AX instead of sign-extending to the full 64-bit range.
As a result, this allows executing speculatively out-of-bounds loads
against a 4GB window of address space and thus extracting the contents
of kernel memory via a side channel.
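As a worked example of the corner case (offsets chosen for
illustration, following the description above): take a stack pointer at
the lowest valid slot, i.e. off = -MAX_BPF_STACK = -512, masked to the
left:

        /* before: ptr_limit = MAX_BPF_STACK + off     = 0
         *         the fixup then emits BPF_MOV32_IMM(AX, 0 - 1); the
         *         32-bit immediate is zero- rather than sign-extended,
         *         so AX = 0x00000000ffffffff, a ~4GB mask
         * after:  ptr_limit = MAX_BPF_STACK + off + 1 = 1
         */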
Fixes: 979d63d50c0c ("bpf: prevent out of bounds speculation on pointer arithmetic")
Signed-off-by: Piotr Krysiuk <[email protected]>
Co-developed-by: Daniel Borkmann <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
kernel/bpf/verifier.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5398,13 +5398,13 @@ static int retrieve_ptr_limit(const stru
*/
off = ptr_reg->off + ptr_reg->var_off.value;
if (mask_to_left)
- *ptr_limit = MAX_BPF_STACK + off;
+ *ptr_limit = MAX_BPF_STACK + off + 1;
else
*ptr_limit = -off;
return 0;
case PTR_TO_MAP_VALUE:
if (mask_to_left) {
- *ptr_limit = ptr_reg->umax_value + ptr_reg->off;
+ *ptr_limit = ptr_reg->umax_value + ptr_reg->off + 1;
} else {
off = ptr_reg->smin_value + ptr_reg->off;
*ptr_limit = ptr_reg->map_ptr->value_size - off;
On Fri, 19 Mar 2021 at 17:51, Greg Kroah-Hartman
<[email protected]> wrote:
>
> This is the start of the stable review cycle for the 5.11.8 release.
> There are 31 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Sun, 21 Mar 2021 12:17:37 +0000.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.11.8-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.11.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
Results from Linaro’s test farm.
No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing <[email protected]>
Summary
------------------------------------------------------------------------
kernel: 5.11.8-rc1
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-5.11.y
git commit: 48a0708a31ceced042f5acd1d6a225a2fb66ebf3
git describe: v5.11.7-32-g48a0708a31ce
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.11.y/build/v5.11.7-32-g48a0708a31ce
No regressions (compared to build v5.11.7)
No fixes (compared to build v5.11.7)
Ran 66943 total tests in the following environments and test suites.
Environments
--------------
- arc
- arm
- arm64
- dragonboard-410c
- hi6220-hikey
- i386
- juno-64k_page_size
- juno-r2
- juno-r2-compat
- juno-r2-kasan
- mips
- nxp-ls2088
- nxp-ls2088-64k_page_size
- parisc
- powerpc
- qemu-arm-clang
- qemu-arm-debug
- qemu-arm64-clang
- qemu-arm64-debug
- qemu-arm64-kasan
- qemu-i386-clang
- qemu-i386-debug
- qemu-x86_64-clang
- qemu-x86_64-debug
- qemu-x86_64-kasan
- qemu-x86_64-kcsan
- qemu_arm
- qemu_arm64
- qemu_arm64-compat
- qemu_i386
- qemu_x86_64
- qemu_x86_64-compat
- riscv
- s390
- sh
- sparc
- x15
- x86
- x86-kasan
- x86_64
Test Suites
-----------
* build
* linux-log-parser
* install-android-platform-tools-r2600
* kselftest-android
* kselftest-capabilities
* kselftest-cgroup
* kselftest-clone3
* kselftest-core
* kselftest-cpu-hotplug
* kselftest-cpufreq
* kselftest-intel_pstate
* kselftest-kvm
* kselftest-livepatch
* kselftest-lkdtm
* kselftest-net
* kselftest-netfilter
* kselftest-nsfs
* kselftest-ptrace
* kselftest-rseq
* kselftest-rtc
* kselftest-seccomp
* kselftest-sigaltstack
* kselftest-size
* kselftest-splice
* kselftest-static_keys
* kselftest-sync
* kselftest-sysctl
* kselftest-tc-testing
* libhugetlbfs
* ltp-dio-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-io-tests
* ltp-mm-tests
* ltp-nptl-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* ltp-syscalls-tests
* ltp-tracing-tests
* v4l2-compliance
* fwts
* kselftest-bpf
* ltp-cap_bounds-tests
* ltp-containers-tests
* ltp-cpuhotplug-tests
* ltp-crypto-tests
* ltp-cve-tests
* ltp-ipc-tests
* network-basic-tests
* kselftest-
* kselftest-efivarfs
* kselftest-filesystems
* kselftest-firmware
* kselftest-fpu
* kselftest-futex
* kselftest-gpio
* kselftest-ipc
* kselftest-ir
* kselftest-kcmp
* kselftest-kexec
* kselftest-lib
* kselftest-membarrier
* kselftest-memfd
* kselftest-memory-hotplug
* kselftest-mincore
* kselftest-mount
* kselftest-mqueue
* kselftest-openat2
* kselftest-pid_namespace
* kselftest-pidfd
* kselftest-proc
* kselftest-pstore
* kselftest-timens
* kselftest-timers
* kselftest-tmpfs
* kselftest-tpm2
* kselftest-user
* kselftest-vm
* kselftest-x86
* kselftest-zram
* ltp-commands-tests
* ltp-controllers-tests
* ltp-math-tests
* ltp-open-posix-tests
* kvm-unit-tests
* rcutorture
* kunit
* kselftest-vsyscall-mode-native-
* kselftest-vsyscall-mode-none-
* perf
* ssuite
--
Linaro LKFT
https://lkft.linaro.org
On Fri, Mar 19, 2021 at 01:18:54PM +0100, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 5.11.8 release.
> There are 31 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Sun, 21 Mar 2021 12:17:37 +0000.
> Anything received after that time might be too late.
>
Build results:
total: 155 pass: 155 fail: 0
Qemu test results:
total: 437 pass: 437 fail: 0
Tested-by: Guenter Roeck <[email protected]>
Guenter
On Sat, Mar 20, 2021 at 01:08:52AM +0530, Naresh Kamboju wrote:
> On Fri, 19 Mar 2021 at 17:51, Greg Kroah-Hartman
> <[email protected]> wrote:
> >
> > This is the start of the stable review cycle for the 5.11.8 release.
> > There are 31 patches in this series, all will be posted as a response
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> >
> > Responses should be made by Sun, 21 Mar 2021 12:17:37 +0000.
> > Anything received after that time might be too late.
> >
> > The whole patch series can be found in one patch at:
> > https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.11.8-rc1.gz
> > or in the git tree and branch at:
> > git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.11.y
> > and the diffstat can be found below.
> >
> > thanks,
> >
> > greg k-h
>
> Results from Linaro’s test farm.
> No regressions on arm64, arm, x86_64, and i386.
>
> Tested-by: Linux Kernel Functional Testing <[email protected]>
thanks for testing them all.
greg k-h