2024-05-13 12:55:54

by Haakon Bugge

[permalink] [raw]
Subject: [PATCH 0/6] rds: rdma: Add ability to force GFP_NOIO

This series enables RDS and the RDMA stack to be used as a block I/O
device. This is to support a filesystem on top of a raw block device
which uses RDS and the RDMA stack as the network transport layer.

Under intense memory pressure, we get memory reclaims. Assume the
filesystem reclaims memory, goes to the raw block device, which calls
into RDS, which calls the RDMA stack. Now, if regular GFP_KERNEL
allocations in RDS or the RDMA stack require reclaims to be fulfilled,
we end up in a circular dependency.

We break this circular dependency by:

1. Force all allocations in RDS and the relevant RDMA stack to use
GFP_NOIO, by means of a parenthetic use of
memalloc_noio_{save,restore} on all relevant entry points.

2. Make sure work-queues inherit current->flags
wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
work-queue inherits the same flag(s).

Håkon Bugge (6):
workqueue: Inherit NOIO and NOFS alloc flags
rds: Brute force GFP_NOIO
RDMA/cma: Brute force GFP_NOIO
RDMA/cm: Brute force GFP_NOIO
RDMA/mlx5: Brute force GFP_NOIO
net/mlx5: Brute force GFP_NOIO

drivers/infiniband/core/cm.c | 15 ++++-
drivers/infiniband/core/cma.c | 20 ++++++-
drivers/infiniband/hw/mlx5/main.c | 22 +++++--
.../net/ethernet/mellanox/mlx5/core/main.c | 14 ++++-
include/linux/workqueue.h | 2 +
kernel/workqueue.c | 17 ++++++
net/rds/af_rds.c | 60 ++++++++++++++++++-
7 files changed, 138 insertions(+), 12 deletions(-)

--
2.39.3



2024-05-13 12:56:12

by Haakon Bugge

[permalink] [raw]
Subject: [PATCH 4/6] RDMA/cm: Brute force GFP_NOIO

In ib_cm_init(), we call memalloc_noio_{save,restore} in a parenthetic
fashion when enabled by the module parameter force_noio.

This is in order to conditionally enable ib_cm to work in alignment with block
I/O devices. Any work queued later on work-queues created during
module initialization will inherit the PF_MEMALLOC_{NOIO,NOFS}
flag(s), due to commit ("workqueue: Inherit NOIO and NOFS alloc
flags").

We do this in order to enable ULPs using the RDMA stack to be used as
a network block I/O device. This is to support a filesystem on top of a
raw block device which uses said ULP(s) and the RDMA stack as the
network transport layer.

Under intense memory pressure, we get memory reclaims. Assume the
filesystem reclaims memory, goes to the raw block device, which calls
into the ULP in question, which calls the RDMA stack. Now, if regular
GFP_KERNEL allocations in ULP or the RDMA stack require reclaims to be
fulfilled, we end up in a circular dependency.

We break this circular dependency by:

1. Force all allocations in the ULP and the relevant RDMA stack to use
GFP_NOIO, by means of a parenthetic use of
memalloc_noio_{save,restore} on all relevant entry points.

2. Make sure work-queues inherit current->flags
wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
work-queue inherits the same flag(s).

Signed-off-by: Håkon Bugge <[email protected]>
---
drivers/infiniband/core/cm.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 07fb8d3c037f0..767eec38eb57d 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -22,6 +22,7 @@
#include <linux/workqueue.h>
#include <linux/kdev_t.h>
#include <linux/etherdevice.h>
+#include <linux/sched/mm.h>

#include <rdma/ib_cache.h>
#include <rdma/ib_cm.h>
@@ -35,6 +36,11 @@ MODULE_DESCRIPTION("InfiniBand CM");
MODULE_LICENSE("Dual BSD/GPL");

#define CM_DESTROY_ID_WAIT_TIMEOUT 10000 /* msecs */
+
+static bool cm_force_noio;
+module_param_named(force_noio, cm_force_noio, bool, 0444);
+MODULE_PARM_DESC(force_noio, "Force the use of GFP_NOIO (Y/N)");
+
static const char * const ibcm_rej_reason_strs[] = {
[IB_CM_REJ_NO_QP] = "no QP",
[IB_CM_REJ_NO_EEC] = "no EEC",
@@ -4504,6 +4510,10 @@ static void cm_remove_one(struct ib_device *ib_device, void *client_data)
static int __init ib_cm_init(void)
{
int ret;
+ unsigned int noio_flags;
+
+ if (cm_force_noio)
+ noio_flags = memalloc_noio_save();

INIT_LIST_HEAD(&cm.device_list);
rwlock_init(&cm.device_lock);
@@ -4527,10 +4537,13 @@ static int __init ib_cm_init(void)
if (ret)
goto error3;

- return 0;
+ goto error2;
error3:
destroy_workqueue(cm.wq);
error2:
+ if (cm_force_noio)
+ memalloc_noio_restore(noio_flags);
+
return ret;
}

--
2.39.3


2024-05-13 12:56:13

by Haakon Bugge

[permalink] [raw]
Subject: [PATCH 6/6] net/mlx5: Brute force GFP_NOIO

In mlx5_init(), we call memalloc_noio_{save,restore} in a parenthetic
fashion when enabled by the module parameter force_noio.

This is in order to conditionally enable mlx5_core to work in alignment
with block I/O devices. Any work queued later on work-queues created during
module initialization will inherit the PF_MEMALLOC_{NOIO,NOFS}
flag(s), due to commit ("workqueue: Inherit NOIO and NOFS alloc
flags").

We do this in order to enable ULPs using the RDMA stack and the
mlx5_core driver to be used as a network block I/O device. This is to
support a filesystem on top of a raw block device which uses said
ULP(s) and the RDMA stack as the network transport layer.

Under intense memory pressure, we get memory reclaims. Assume the
filesystem reclaims memory, goes to the raw block device, which calls
into the ULP in question, which calls the RDMA stack. Now, if regular
GFP_KERNEL allocations in ULP or the RDMA stack require reclaims to be
fulfilled, we end up in a circular dependency.

We break this circular dependency by:

1. Force all allocations in the ULP and the relevant RDMA stack to use
GFP_NOIO, by means of a parenthetic use of
memalloc_noio_{save,restore} on all relevant entry points.

2. Make sure work-queues inherit current->flags
wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
work-queue inherits the same flag(s).

Signed-off-by: Håkon Bugge <[email protected]>
---
drivers/net/ethernet/mellanox/mlx5/core/main.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 331ce47f51a17..aa1bf8bb5d15c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -48,6 +48,7 @@
#include <linux/mlx5/vport.h>
#include <linux/version.h>
#include <net/devlink.h>
+#include <linux/sched/mm.h>
#include "mlx5_core.h"
#include "lib/eq.h"
#include "fs_core.h"
@@ -87,6 +88,10 @@ static unsigned int prof_sel = MLX5_DEFAULT_PROF;
module_param_named(prof_sel, prof_sel, uint, 0444);
MODULE_PARM_DESC(prof_sel, "profile selector. Valid range 0 - 2");

+static bool mlx5_core_force_noio;
+module_param_named(force_noio, mlx5_core_force_noio, bool, 0444);
+MODULE_PARM_DESC(force_noio, "Force the use of GFP_NOIO (Y/N)");
+
static u32 sw_owner_id[4];
#define MAX_SW_VHCA_ID (BIT(__mlx5_bit_sz(cmd_hca_cap_2, sw_vhca_id)) - 1)
static DEFINE_IDA(sw_vhca_ida);
@@ -2312,8 +2317,12 @@ static void mlx5_core_verify_params(void)

static int __init mlx5_init(void)
{
+ unsigned int noio_flags;
int err;

+ if (mlx5_core_force_noio)
+ noio_flags = memalloc_noio_save();
+
WARN_ONCE(strcmp(MLX5_ADEV_NAME, KBUILD_MODNAME),
"mlx5_core name not in sync with kernel module name");

@@ -2334,7 +2343,7 @@ static int __init mlx5_init(void)
if (err)
goto err_pci;

- return 0;
+ goto out;

err_pci:
mlx5_sf_driver_unregister();
@@ -2342,6 +2351,9 @@ static int __init mlx5_init(void)
mlx5e_cleanup();
err_debug:
mlx5_unregister_debugfs();
+out:
+ if (mlx5_core_force_noio)
+ memalloc_noio_restore(noio_flags);
return err;
}

--
2.39.3


2024-05-13 12:56:45

by Haakon Bugge

[permalink] [raw]
Subject: [PATCH 2/6] rds: Brute force GFP_NOIO

For most entry points to RDS, we call memalloc_noio_{save,restore} in
a parenthetic fashion when enabled by the module parameter force_noio.

We skip the calls to memalloc_noio_{save,restore} in rds_ioctl(), as
no memory allocations are executed in this function or its callees.

The reason we execute memalloc_noio_{save,restore} in rds_poll(), is
due to the following call chain:

rds_poll()
poll_wait()
__pollwait()
poll_get_entry()
__get_free_page(GFP_KERNEL)

The function rds_setsockopt() allocates memory in its callees
rds_get_mr() and rds_get_mr_for_dest(). Hence, we need
memalloc_noio_{save,restore} in rds_setsockopt().

In rds_getsockopt(), we have rds_info_getsockopt() that allocates
memory. Hence, we need memalloc_noio_{save,restore} in
rds_getsockopt().

All of the above is in order to conditionally enable RDS to become a block
I/O device.

Signed-off-by: Håkon Bugge <[email protected]>
---
net/rds/af_rds.c | 60 +++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 57 insertions(+), 3 deletions(-)

diff --git a/net/rds/af_rds.c b/net/rds/af_rds.c
index 8435a20968ef5..a89d192aabc0b 100644
--- a/net/rds/af_rds.c
+++ b/net/rds/af_rds.c
@@ -37,10 +37,16 @@
#include <linux/in.h>
#include <linux/ipv6.h>
#include <linux/poll.h>
+#include <linux/sched/mm.h>
#include <net/sock.h>

#include "rds.h"

+bool rds_force_noio;
+EXPORT_SYMBOL(rds_force_noio);
+module_param_named(force_noio, rds_force_noio, bool, 0444);
+MODULE_PARM_DESC(force_noio, "Force the use of GFP_NOIO (Y/N)");
+
/* this is just used for stats gathering :/ */
static DEFINE_SPINLOCK(rds_sock_lock);
static unsigned long rds_sock_count;
@@ -60,6 +66,10 @@ static int rds_release(struct socket *sock)
{
struct sock *sk = sock->sk;
struct rds_sock *rs;
+ unsigned int noio_flags;
+
+ if (rds_force_noio)
+ noio_flags = memalloc_noio_save();

if (!sk)
goto out;
@@ -90,6 +100,8 @@ static int rds_release(struct socket *sock)
sock->sk = NULL;
sock_put(sk);
out:
+ if (rds_force_noio)
+ memalloc_noio_restore(noio_flags);
return 0;
}

@@ -214,9 +226,13 @@ static __poll_t rds_poll(struct file *file, struct socket *sock,
{
struct sock *sk = sock->sk;
struct rds_sock *rs = rds_sk_to_rs(sk);
+ unsigned int noio_flags;
__poll_t mask = 0;
unsigned long flags;

+ if (rds_force_noio)
+ noio_flags = memalloc_noio_save();
+
poll_wait(file, sk_sleep(sk), wait);

if (rs->rs_seen_congestion)
@@ -249,6 +265,8 @@ static __poll_t rds_poll(struct file *file, struct socket *sock,
if (mask)
rs->rs_seen_congestion = 0;

+ if (rds_force_noio)
+ memalloc_noio_restore(noio_flags);
return mask;
}

@@ -294,8 +312,12 @@ static int rds_cancel_sent_to(struct rds_sock *rs, sockptr_t optval, int len)
{
struct sockaddr_in6 sin6;
struct sockaddr_in sin;
+ unsigned int noio_flags;
int ret = 0;

+ if (rds_force_noio)
+ noio_flags = memalloc_noio_save();
+
/* racing with another thread binding seems ok here */
if (ipv6_addr_any(&rs->rs_bound_addr)) {
ret = -ENOTCONN; /* XXX not a great errno */
@@ -324,6 +346,8 @@ static int rds_cancel_sent_to(struct rds_sock *rs, sockptr_t optval, int len)

rds_send_drop_to(rs, &sin6);
out:
+ if (rds_force_noio)
+ noio_flags = memalloc_noio_save();
return ret;
}

@@ -485,8 +509,12 @@ static int rds_getsockopt(struct socket *sock, int level, int optname,
{
struct rds_sock *rs = rds_sk_to_rs(sock->sk);
int ret = -ENOPROTOOPT, len;
+ unsigned int noio_flags;
int trans;

+ if (rds_force_noio)
+ noio_flags = memalloc_noio_save();
+
if (level != SOL_RDS)
goto out;

@@ -529,6 +557,8 @@ static int rds_getsockopt(struct socket *sock, int level, int optname,
}

out:
+ if (rds_force_noio)
+ memalloc_noio_restore(noio_flags);
return ret;

}
@@ -538,12 +568,16 @@ static int rds_connect(struct socket *sock, struct sockaddr *uaddr,
{
struct sock *sk = sock->sk;
struct sockaddr_in *sin;
+ unsigned int noio_flags;
struct rds_sock *rs = rds_sk_to_rs(sk);
int ret = 0;

if (addr_len < offsetofend(struct sockaddr, sa_family))
return -EINVAL;

+ if (rds_force_noio)
+ noio_flags = memalloc_noio_save();
+
lock_sock(sk);

switch (uaddr->sa_family) {
@@ -626,6 +660,8 @@ static int rds_connect(struct socket *sock, struct sockaddr *uaddr,
}

release_sock(sk);
+ if (rds_force_noio)
+ memalloc_noio_restore(noio_flags);
return ret;
}

@@ -697,16 +733,28 @@ static int __rds_create(struct socket *sock, struct sock *sk, int protocol)
static int rds_create(struct net *net, struct socket *sock, int protocol,
int kern)
{
+ unsigned int noio_flags;
struct sock *sk;
+ int ret;

if (sock->type != SOCK_SEQPACKET || protocol)
return -ESOCKTNOSUPPORT;

+ if (rds_force_noio)
+ noio_flags = memalloc_noio_save();
+
sk = sk_alloc(net, AF_RDS, GFP_KERNEL, &rds_proto, kern);
- if (!sk)
- return -ENOMEM;
+ if (!sk) {
+ ret = -ENOMEM;
+ goto out;
+ }

- return __rds_create(sock, sk, protocol);
+ ret = __rds_create(sock, sk, protocol);
+out:
+ if (rds_force_noio)
+ memalloc_noio_restore(noio_flags);
+
+ return ret;
}

void rds_sock_addref(struct rds_sock *rs)
@@ -895,8 +943,12 @@ u32 rds_gen_num;

static int __init rds_init(void)
{
+ unsigned int noio_flags;
int ret;

+ if (rds_force_noio)
+ noio_flags = memalloc_noio_save();
+
net_get_random_once(&rds_gen_num, sizeof(rds_gen_num));

ret = rds_bind_lock_init();
@@ -947,6 +999,8 @@ static int __init rds_init(void)
out_bind:
rds_bind_lock_destroy();
out:
+ if (rds_force_noio)
+ memalloc_noio_restore(noio_flags);
return ret;
}
module_init(rds_init);
--
2.39.3


2024-05-13 12:59:06

by Haakon Bugge

[permalink] [raw]
Subject: [PATCH 5/6] RDMA/mlx5: Brute force GFP_NOIO

In mlx5_ib_init(), we call memalloc_noio_{save,restore} in a parenthetic
fashion when enabled by the module parameter force_noio.

This is in order to conditionally enable mlx5_ib to work in alignment
with block I/O devices. Any work queued later on work-queues created during
module initialization will inherit the PF_MEMALLOC_{NOIO,NOFS}
flag(s), due to commit ("workqueue: Inherit NOIO and NOFS alloc
flags").

We do this in order to enable ULPs using the RDMA stack and the
mlx5_ib driver to be used as a network block I/O device. This is to
support a filesystem on top of a raw block device which uses said
ULP(s) and the RDMA stack as the network transport layer.

Under intense memory pressure, we get memory reclaims. Assume the
filesystem reclaims memory, goes to the raw block device, which calls
into the ULP in question, which calls the RDMA stack. Now, if regular
GFP_KERNEL allocations in ULP or the RDMA stack require reclaims to be
fulfilled, we end up in a circular dependency.

We break this circular dependency by:

1. Force all allocations in the ULP and the relevant RDMA stack to use
GFP_NOIO, by means of a parenthetic use of
memalloc_noio_{save,restore} on all relevant entry points.

2. Make sure work-queues inherit current->flags
wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
work-queue inherits the same flag(s).

Signed-off-by: Håkon Bugge <[email protected]>
---
drivers/infiniband/hw/mlx5/main.c | 22 ++++++++++++++++++----
1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index c2b557e642906..a424d518538ed 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -56,6 +56,10 @@ MODULE_AUTHOR("Eli Cohen <[email protected]>");
MODULE_DESCRIPTION("Mellanox 5th generation network adapters (ConnectX series) IB driver");
MODULE_LICENSE("Dual BSD/GPL");

+static bool mlx5_ib_force_noio;
+module_param_named(force_noio, mlx5_ib_force_noio, bool, 0444);
+MODULE_PARM_DESC(force_noio, "Force the use of GFP_NOIO (Y/N)");
+
struct mlx5_ib_event_work {
struct work_struct work;
union {
@@ -4489,16 +4493,23 @@ static struct auxiliary_driver mlx5r_driver = {

static int __init mlx5_ib_init(void)
{
+ unsigned int noio_flags;
int ret;

+ if (mlx5_ib_force_noio)
+ noio_flags = memalloc_noio_save();
+
xlt_emergency_page = (void *)__get_free_page(GFP_KERNEL);
- if (!xlt_emergency_page)
- return -ENOMEM;
+ if (!xlt_emergency_page) {
+ ret = -ENOMEM;
+ goto out;
+ }

mlx5_ib_event_wq = alloc_ordered_workqueue("mlx5_ib_event_wq", 0);
if (!mlx5_ib_event_wq) {
free_page((unsigned long)xlt_emergency_page);
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto out;
}

ret = mlx5_ib_qp_event_init();
@@ -4515,7 +4526,7 @@ static int __init mlx5_ib_init(void)
ret = auxiliary_driver_register(&mlx5r_driver);
if (ret)
goto drv_err;
- return 0;
+ goto out;

drv_err:
auxiliary_driver_unregister(&mlx5r_mp_driver);
@@ -4526,6 +4537,9 @@ static int __init mlx5_ib_init(void)
qp_event_err:
destroy_workqueue(mlx5_ib_event_wq);
free_page((unsigned long)xlt_emergency_page);
+out:
+ if (mlx5_ib_force_noio)
+ memalloc_noio_restore(noio_flags);
return ret;
}

--
2.39.3


2024-05-13 18:05:12

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH 2/6] rds: Brute force GFP_NOIO

Hi Håkon,

kernel test robot noticed the following build warnings:

[auto build test WARNING on tj-wq/for-next]
[also build test WARNING on rdma/for-next net/main net-next/main linus/master v6.9 next-20240513]
[cannot apply to horms-ipvs/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/H-kon-Bugge/workqueue-Inherit-NOIO-and-NOFS-alloc-flags/20240513-205927
base: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-next
patch link: https://lore.kernel.org/r/20240513125346.764076-3-haakon.bugge%40oracle.com
patch subject: [PATCH 2/6] rds: Brute force GFP_NOIO
config: s390-defconfig (https://download.01.org/0day-ci/archive/20240514/[email protected]/config)
compiler: clang version 19.0.0git (https://github.com/llvm/llvm-project b910bebc300dafb30569cecc3017b446ea8eafa0)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240514/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

In file included from net/rds/af_rds.c:33:
In file included from include/linux/module.h:19:
In file included from include/linux/elf.h:6:
In file included from arch/s390/include/asm/elf.h:173:
In file included from arch/s390/include/asm/mmu_context.h:11:
In file included from arch/s390/include/asm/pgalloc.h:18:
In file included from include/linux/mm.h:2208:
include/linux/vmstat.h:508:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
508 | return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
| ~~~~~~~~~~~~~~~~~~~~~ ^
509 | item];
| ~~~~
include/linux/vmstat.h:515:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
515 | return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
| ~~~~~~~~~~~~~~~~~~~~~ ^
516 | NR_VM_NUMA_EVENT_ITEMS +
| ~~~~~~~~~~~~~~~~~~~~~~
include/linux/vmstat.h:522:36: warning: arithmetic between different enumeration types ('enum node_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
522 | return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_"
| ~~~~~~~~~~~ ^ ~~~
include/linux/vmstat.h:527:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
527 | return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
| ~~~~~~~~~~~~~~~~~~~~~ ^
528 | NR_VM_NUMA_EVENT_ITEMS +
| ~~~~~~~~~~~~~~~~~~~~~~
include/linux/vmstat.h:536:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
536 | return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
| ~~~~~~~~~~~~~~~~~~~~~ ^
537 | NR_VM_NUMA_EVENT_ITEMS +
| ~~~~~~~~~~~~~~~~~~~~~~
In file included from net/rds/af_rds.c:38:
In file included from include/linux/ipv6.h:101:
In file included from include/linux/tcp.h:17:
In file included from include/linux/skbuff.h:28:
In file included from include/linux/dma-mapping.h:11:
In file included from include/linux/scatterlist.h:9:
In file included from arch/s390/include/asm/io.h:78:
include/asm-generic/io.h:547:31: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
547 | val = __raw_readb(PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:560:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
560 | val = __le16_to_cpu((__le16 __force)__raw_readw(PCI_IOBASE + addr));
| ~~~~~~~~~~ ^
include/uapi/linux/byteorder/big_endian.h:37:59: note: expanded from macro '__le16_to_cpu'
37 | #define __le16_to_cpu(x) __swab16((__force __u16)(__le16)(x))
| ^
include/uapi/linux/swab.h:102:54: note: expanded from macro '__swab16'
102 | #define __swab16(x) (__u16)__builtin_bswap16((__u16)(x))
| ^
In file included from net/rds/af_rds.c:38:
In file included from include/linux/ipv6.h:101:
In file included from include/linux/tcp.h:17:
In file included from include/linux/skbuff.h:28:
In file included from include/linux/dma-mapping.h:11:
In file included from include/linux/scatterlist.h:9:
In file included from arch/s390/include/asm/io.h:78:
include/asm-generic/io.h:573:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
573 | val = __le32_to_cpu((__le32 __force)__raw_readl(PCI_IOBASE + addr));
| ~~~~~~~~~~ ^
include/uapi/linux/byteorder/big_endian.h:35:59: note: expanded from macro '__le32_to_cpu'
35 | #define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
| ^
include/uapi/linux/swab.h:115:54: note: expanded from macro '__swab32'
115 | #define __swab32(x) (__u32)__builtin_bswap32((__u32)(x))
| ^
In file included from net/rds/af_rds.c:38:
In file included from include/linux/ipv6.h:101:
In file included from include/linux/tcp.h:17:
In file included from include/linux/skbuff.h:28:
In file included from include/linux/dma-mapping.h:11:
In file included from include/linux/scatterlist.h:9:
In file included from arch/s390/include/asm/io.h:78:
include/asm-generic/io.h:584:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
584 | __raw_writeb(value, PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:594:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
594 | __raw_writew((u16 __force)cpu_to_le16(value), PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:604:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
604 | __raw_writel((u32 __force)cpu_to_le32(value), PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:692:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
692 | readsb(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:700:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
700 | readsw(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:708:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
708 | readsl(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:717:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
717 | writesb(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:726:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
726 | writesw(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:735:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
735 | writesl(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
>> net/rds/af_rds.c:315:15: warning: variable 'noio_flags' set but not used [-Wunused-but-set-variable]
315 | unsigned int noio_flags;
| ^
18 warnings generated.


vim +/noio_flags +315 net/rds/af_rds.c

310
311 static int rds_cancel_sent_to(struct rds_sock *rs, sockptr_t optval, int len)
312 {
313 struct sockaddr_in6 sin6;
314 struct sockaddr_in sin;
> 315 unsigned int noio_flags;
316 int ret = 0;
317
318 if (rds_force_noio)
319 noio_flags = memalloc_noio_save();
320
321 /* racing with another thread binding seems ok here */
322 if (ipv6_addr_any(&rs->rs_bound_addr)) {
323 ret = -ENOTCONN; /* XXX not a great errno */
324 goto out;
325 }
326
327 if (len < sizeof(struct sockaddr_in)) {
328 ret = -EINVAL;
329 goto out;
330 } else if (len < sizeof(struct sockaddr_in6)) {
331 /* Assume IPv4 */
332 if (copy_from_sockptr(&sin, optval,
333 sizeof(struct sockaddr_in))) {
334 ret = -EFAULT;
335 goto out;
336 }
337 ipv6_addr_set_v4mapped(sin.sin_addr.s_addr, &sin6.sin6_addr);
338 sin6.sin6_port = sin.sin_port;
339 } else {
340 if (copy_from_sockptr(&sin6, optval,
341 sizeof(struct sockaddr_in6))) {
342 ret = -EFAULT;
343 goto out;
344 }
345 }
346
347 rds_send_drop_to(rs, &sin6);
348 out:
349 if (rds_force_noio)
350 noio_flags = memalloc_noio_save();
351 return ret;
352 }
353

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2024-05-13 18:14:44

by Simon Horman

[permalink] [raw]
Subject: Re: [PATCH 2/6] rds: Brute force GFP_NOIO

On Mon, May 13, 2024 at 02:53:42PM +0200, Håkon Bugge wrote:
> For most entry points to RDS, we call memalloc_noio_{save,restore} in
> a parenthetic fashion when enabled by the module parameter force_noio.
>
> We skip the calls to memalloc_noio_{save,restore} in rds_ioctl(), as
> no memory allocations are executed in this function or its callees.
>
> The reason we execute memalloc_noio_{save,restore} in rds_poll(), is
> due to the following call chain:
>
> rds_poll()
> poll_wait()
> __pollwait()
> poll_get_entry()
> __get_free_page(GFP_KERNEL)
>
> The function rds_setsockopt() allocates memory in its callee's
> rds_get_mr() and rds_get_mr_for_dest(). Hence, we need
> memalloc_noio_{save,restore} in rds_setsockopt().
>
> In rds_getsockopt(), we have rds_info_getsockopt() that allocates
> memory. Hence, we need memalloc_noio_{save,restore} in
> rds_getsockopt().
>
> All the above, in order to conditionally enable RDS to become a block I/O
> device.
>
> Signed-off-by: Håkon Bugge <[email protected]>

Hi Håkon,

Some minor feedback from my side.

> ---
> net/rds/af_rds.c | 60 +++++++++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 57 insertions(+), 3 deletions(-)
>
> diff --git a/net/rds/af_rds.c b/net/rds/af_rds.c
> index 8435a20968ef5..a89d192aabc0b 100644
> --- a/net/rds/af_rds.c
> +++ b/net/rds/af_rds.c
> @@ -37,10 +37,16 @@
> #include <linux/in.h>
> #include <linux/ipv6.h>
> #include <linux/poll.h>
> +#include <linux/sched/mm.h>
> #include <net/sock.h>
>
> #include "rds.h"
>
> +bool rds_force_noio;
> +EXPORT_SYMBOL(rds_force_noio);

rds_force_noio seems to be only used within this file.
I wonder if it should it be static and not EXPORTed?

Flagged by Sparse.

> +module_param_named(force_noio, rds_force_noio, bool, 0444);
> +MODULE_PARM_DESC(force_noio, "Force the use of GFP_NOIO (Y/N)");
> +
> /* this is just used for stats gathering :/ */
> static DEFINE_SPINLOCK(rds_sock_lock);
> static unsigned long rds_sock_count;
> @@ -60,6 +66,10 @@ static int rds_release(struct socket *sock)
> {
> struct sock *sk = sock->sk;
> struct rds_sock *rs;
> + unsigned int noio_flags;

Please consider using reverse xmas tree order - longest line to shortest -
for local variable declarations in Networking code.

This tool can be of assistance: https://github.com/ecree-solarflare/xmastree

> +
> + if (rds_force_noio)
> + noio_flags = memalloc_noio_save();
>
> if (!sk)
> goto out;

..

> @@ -324,6 +346,8 @@ static int rds_cancel_sent_to(struct rds_sock *rs, sockptr_t optval, int len)
>
> rds_send_drop_to(rs, &sin6);
> out:
> + if (rds_force_noio)
> + noio_flags = memalloc_noio_save();

noio_flags appears to be set but otherwise unused in this function.

Flagged by W=1 builds.

> return ret;
> }
>

..

2024-05-13 23:03:39

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 0/6] rds: rdma: Add ability to force GFP_NOIO

On Mon, May 13, 2024 at 02:53:40PM +0200, Håkon Bugge wrote:
> This series enables RDS and the RDMA stack to be used as a block I/O
> device. This to support a filesystem on top of a raw block device
> which uses RDS and the RDMA stack as the network transport layer.
>
> Under intense memory pressure, we get memory reclaims. Assume the
> filesystem reclaims memory, goes to the raw block device, which calls
> into RDS, which calls the RDMA stack. Now, if regular GFP_KERNEL
> allocations in RDS or the RDMA stack require reclaims to be fulfilled,
> we end up in a circular dependency.
>
> We break this circular dependency by:
>
> 1. Force all allocations in RDS and the relevant RDMA stack to use
> GFP_NOIO, by means of a parenthetic use of
> memalloc_noio_{save,restore} on all relevant entry points.

I didn't see an obvious explanation why each of these changes was
necessary. I expected this:

> 2. Make sure work-queues inherits current->flags
> wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
> work-queue inherits the same flag(s).

To broadly capture everything and understood this was the general plan
from the MM side instead of direct annotation?

So, can you explain in each case why it needs an explicit change?

And further, is there any validation of this? There is some lockdep
tracking of reclaim, I feel like it should be more robustly hooked up
in RDMA if we expect this to really work..

Jason

2024-05-14 08:54:22

by Zhu Yanjun

[permalink] [raw]
Subject: Re: [PATCH 0/6] rds: rdma: Add ability to force GFP_NOIO

On 13.05.24 14:53, Håkon Bugge wrote:
> This series enables RDS and the RDMA stack to be used as a block I/O
> device. This to support a filesystem on top of a raw block device

This is to support a filesystem ... ?

> which uses RDS and the RDMA stack as the network transport layer.
>
> Under intense memory pressure, we get memory reclaims. Assume the
> filesystem reclaims memory, goes to the raw block device, which calls
> into RDS, which calls the RDMA stack. Now, if regular GFP_KERNEL
> allocations in RDS or the RDMA stack require reclaims to be fulfilled,
> we end up in a circular dependency.
>
> We break this circular dependency by:
>
> 1. Force all allocations in RDS and the relevant RDMA stack to use
> GFP_NOIO, by means of a parenthetic use of
> memalloc_noio_{save,restore} on all relevant entry points.
>
> 2. Make sure work-queues inherits current->flags
> wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
> work-queue inherits the same flag(s).
>
> Håkon Bugge (6):
> workqueue: Inherit NOIO and NOFS alloc flags
> rds: Brute force GFP_NOIO
> RDMA/cma: Brute force GFP_NOIO
> RDMA/cm: Brute force GFP_NOIO
> RDMA/mlx5: Brute force GFP_NOIO
> net/mlx5: Brute force GFP_NOIO
>
> drivers/infiniband/core/cm.c | 15 ++++-
> drivers/infiniband/core/cma.c | 20 ++++++-
> drivers/infiniband/hw/mlx5/main.c | 22 +++++--
> .../net/ethernet/mellanox/mlx5/core/main.c | 14 ++++-
> include/linux/workqueue.h | 2 +
> kernel/workqueue.c | 17 ++++++
> net/rds/af_rds.c | 60 ++++++++++++++++++-
> 7 files changed, 138 insertions(+), 12 deletions(-)
>
> --
> 2.39.3
>


2024-05-14 12:04:51

by Zhu Yanjun

[permalink] [raw]
Subject: Re: [PATCH 0/6] rds: rdma: Add ability to force GFP_NOIO



On 14.05.24 10:53, Zhu Yanjun wrote:
> On 13.05.24 14:53, Håkon Bugge wrote:
>> This series enables RDS and the RDMA stack to be used as a block I/O
>> device. This to support a filesystem on top of a raw block device
>
> This is to support a filesystem ... ?

Sorry, my bad. I mean, normally RDS is used as a communication
protocol between Oracle databases. Now in this patch series, it seems
that RDS acts as a communication protocol to support a filesystem. So I
am curious which filesystem RDS is supporting?

Thanks a lot.
Zhu Yanjun

>
>> which uses RDS and the RDMA stack as the network transport layer.
>>
>> Under intense memory pressure, we get memory reclaims. Assume the
>> filesystem reclaims memory, goes to the raw block device, which calls
>> into RDS, which calls the RDMA stack. Now, if regular GFP_KERNEL
>> allocations in RDS or the RDMA stack require reclaims to be fulfilled,
>> we end up in a circular dependency.
>>
>> We break this circular dependency by:
>>
>> 1. Force all allocations in RDS and the relevant RDMA stack to use
>>     GFP_NOIO, by means of a parenthetic use of
>>     memalloc_noio_{save,restore} on all relevant entry points.
>>
>> 2. Make sure work-queues inherits current->flags
>>     wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
>>     work-queue inherits the same flag(s).
>>
>> Håkon Bugge (6):
>>    workqueue: Inherit NOIO and NOFS alloc flags
>>    rds: Brute force GFP_NOIO
>>    RDMA/cma: Brute force GFP_NOIO
>>    RDMA/cm: Brute force GFP_NOIO
>>    RDMA/mlx5: Brute force GFP_NOIO
>>    net/mlx5: Brute force GFP_NOIO
>>
>>   drivers/infiniband/core/cm.c                  | 15 ++++-
>>   drivers/infiniband/core/cma.c                 | 20 ++++++-
>>   drivers/infiniband/hw/mlx5/main.c             | 22 +++++--
>>   .../net/ethernet/mellanox/mlx5/core/main.c    | 14 ++++-
>>   include/linux/workqueue.h                     |  2 +
>>   kernel/workqueue.c                            | 17 ++++++
>>   net/rds/af_rds.c                              | 60 ++++++++++++++++++-
>>   7 files changed, 138 insertions(+), 12 deletions(-)
>>
>> --
>> 2.39.3
>>
>

--
Best

2024-05-14 13:33:12

by Haakon Bugge

Subject: Re: [PATCH 2/6] rds: Brute force GFP_NOIO

> On Mon, May 13, 2024 at 02:53:42PM +0200, Håkon Bugge wrote:
> For most entry points to RDS, we call memalloc_noio_{save,restore} in
> a parenthetic fashion when enabled by the module parameter force_noio.
>
> We skip the calls to memalloc_noio_{save,restore} in rds_ioctl(), as
> no memory allocations are executed in this function or its callees.
>
> The reason we execute memalloc_noio_{save,restore} in rds_poll(), is
> due to the following call chain:
>
> rds_poll()
> poll_wait()
> __pollwait()
> poll_get_entry()
> __get_free_page(GFP_KERNEL)
>
> The function rds_setsockopt() allocates memory in its callee's
> rds_get_mr() and rds_get_mr_for_dest(). Hence, we need
> memalloc_noio_{save,restore} in rds_setsockopt().
>
> In rds_getsockopt(), we have rds_info_getsockopt() that allocates
> memory. Hence, we need memalloc_noio_{save,restore} in
> rds_getsockopt().
>
> All the above, in order to conditionally enable RDS to become a block I/O
> device.
>
> Signed-off-by: Håkon Bugge <[email protected]>
>
> Hi Håkon,
>
> Some minor feedback from my side.
>
> ---
> net/rds/af_rds.c | 60 +++++++++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 57 insertions(+), 3 deletions(-)
>
> diff --git a/net/rds/af_rds.c b/net/rds/af_rds.c
> index 8435a20968ef5..a89d192aabc0b 100644
> --- a/net/rds/af_rds.c
> +++ b/net/rds/af_rds.c
> @@ -37,10 +37,16 @@
> #include <linux/in.h>
> #include <linux/ipv6.h>
> #include <linux/poll.h>
> +#include <linux/sched/mm.h>
> #include <net/sock.h>
>
> #include "rds.h"
>
> +bool rds_force_noio;
> +EXPORT_SYMBOL(rds_force_noio);
>
> rds_force_noio seems to be only used within this file.
> I wonder if it should it be static and not EXPORTed?
>
> Flagged by Sparse.

Hi Simon,

You are quite right. I had an earlier version where the symbol was used in several files, but in this version, static is the right choice. Fixed in v2.

> +module_param_named(force_noio, rds_force_noio, bool, 0444);
> +MODULE_PARM_DESC(force_noio, "Force the use of GFP_NOIO (Y/N)");
> +
> /* this is just used for stats gathering :/ */
> static DEFINE_SPINLOCK(rds_sock_lock);
> static unsigned long rds_sock_count;
> @@ -60,6 +66,10 @@ static int rds_release(struct socket *sock)
> {
> struct sock *sk = sock->sk;
> struct rds_sock *rs;
> + unsigned int noio_flags;
>
> Please consider using reverse xmas tree order - longest line to shortest -
> for local variable declarations in Networking code.
>
> This tool can be of assistance: https://github.com/ecree-solarflare/xmastree

Will fix.

>
> +
> + if (rds_force_noio)
> + noio_flags = memalloc_noio_save();
>
> if (!sk)
> goto out;
>
> ...
>
> @@ -324,6 +346,8 @@ static int rds_cancel_sent_to(struct rds_sock *rs, sockptr_t optval, int len)
>
> rds_send_drop_to(rs, &sin6);
> out:
> + if (rds_force_noio)
> + noio_flags = memalloc_noio_save();
>
> noio_flags appears to be set but otherwise unused in this function.

Bummer, a copy/paste error. This should be the restore call. Fixed in v2. I will add W=1 to my builds in the future :-)


Thxs, Håkon

2024-05-14 18:21:07

by Haakon Bugge

Subject: Re: [PATCH 0/6] rds: rdma: Add ability to force GFP_NOIO

Hi Jason,


> On 14 May 2024, at 01:03, Jason Gunthorpe <[email protected]> wrote:
>
> On Mon, May 13, 2024 at 02:53:40PM +0200, Håkon Bugge wrote:
>> This series enables RDS and the RDMA stack to be used as a block I/O
>> device. This to support a filesystem on top of a raw block device
>> which uses RDS and the RDMA stack as the network transport layer.
>>
>> Under intense memory pressure, we get memory reclaims. Assume the
>> filesystem reclaims memory, goes to the raw block device, which calls
>> into RDS, which calls the RDMA stack. Now, if regular GFP_KERNEL
>> allocations in RDS or the RDMA stack require reclaims to be fulfilled,
>> we end up in a circular dependency.
>>
>> We break this circular dependency by:
>>
>> 1. Force all allocations in RDS and the relevant RDMA stack to use
>> GFP_NOIO, by means of a parenthetic use of
>> memalloc_noio_{save,restore} on all relevant entry points.
>
> I didn't see an obvious explanation why each of these changes was
> necessary. I expected this:
>
>> 2. Make sure work-queues inherits current->flags
>> wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
>> work-queue inherits the same flag(s).

When the modules initialize, item 2. does not help unless PF_MEMALLOC_NOIO is already set in current->flags, which it most probably is not, e.g. when the module is loaded by modprobe. That is why we have these steps in all five modules. During module initialization, work queues are allocated in all the mentioned modules. Therefore, the module initialization functions need the parenthetic use of memalloc_noio_{save,restore}.

> To broadly capture everything and understood this was the general plan
> from the MM side instead of direct annotation?
>
> So, can you explain in each case why it needs an explicit change?

I hope my comment above explains this.

> And further, is there any validation of this? There is some lockdep
> tracking of reclaim, I feel like it should be more robustly hooked up
> in RDMA if we expect this to really work..

Oracle is about to launch a product using this series, so the techniques used have been thoroughly validated, although on an older kernel version.


Thxs, Håkon

2024-05-14 18:34:26

by Haakon Bugge

Subject: Re: [PATCH 0/6] rds: rdma: Add ability to force GFP_NOIO

Hi Yanjun,


> On 14 May 2024, at 14:02, Zhu Yanjun <[email protected]> wrote:
>
>
>
> On 14.05.24 10:53, Zhu Yanjun wrote:
>> On 13.05.24 14:53, Håkon Bugge wrote:
>>> This series enables RDS and the RDMA stack to be used as a block I/O
>>> device. This to support a filesystem on top of a raw block device
>> This is to support a filesystem ... ?
>
> Sorry. my bad. I mean, normally rds is used to act as a communication protocol between Oracle databases. Now in this patch series, it seems that rds acts as a communication protocol to support a filesystem. So I am curious which filesystem that rds is supporting?

The peer here is a file-server which acts as a block device, what Oracle calls a cell-server. The initiator here is actually using XFS over an Oracle in-kernel pseudo-volume block device.


Thxs, Håkon

2024-05-15 10:25:41

by Zhu Yanjun

Subject: Re: [PATCH 0/6] rds: rdma: Add ability to force GFP_NOIO

在 2024/5/14 20:32, Haakon Bugge 写道:
> Hi Yanjun,
>
>
>> On 14 May 2024, at 14:02, Zhu Yanjun <[email protected]> wrote:
>>
>>
>>
>> On 14.05.24 10:53, Zhu Yanjun wrote:
>>> On 13.05.24 14:53, Håkon Bugge wrote:
>>>> This series enables RDS and the RDMA stack to be used as a block I/O
>>>> device. This to support a filesystem on top of a raw block device
>>> This is to support a filesystem ... ?
>>
>> Sorry. my bad. I mean, normally rds is used to act as a communication protocol between Oracle databases. Now in this patch series, it seems that rds acts as a communication protocol to support a filesystem. So I am curious which filesystem that rds is supporting?
>
> The peer here is a file-server which acts a block device. What Oracle calls a cell-server. The initiator here, is actually using XFS over an Oracle in-kernel pseudo-volume block device.

Thanks Haakon.
There is a link about GFP_NOFS and GFP_NOIO,
https://lore.kernel.org/linux-fsdevel/[email protected]/.

I am not sure whether you have read this link. In it, the author
presents his ideas about GFP_NOFS and GFP_NOIO.

"
My interest in this is that I'd like to get rid of the FGP_NOFS flag.
It'd also be good to get rid of the __GFP_FS flag since there's always
demand for more GFP flags. I have a git branch with some work in this
area, so there's a certain amount of conference-driven development going
on here too.

We could mutatis mutandi for GFP_NOIO, memalloc_noio_save/restore,
__GFP_IO, etc, so maybe the block people are also interested. I haven't
looked into that in any detail though. I guess we'll see what interest
this topic gains.
"

Anyway, good luck!

Zhu Yanjun

>
>
> Thxs, Håkon
>


2024-05-17 17:31:05

by Jason Gunthorpe

Subject: Re: [PATCH 0/6] rds: rdma: Add ability to force GFP_NOIO

On Tue, May 14, 2024 at 06:19:53PM +0000, Haakon Bugge wrote:
> Hi Jason,
>
>
> > On 14 May 2024, at 01:03, Jason Gunthorpe <[email protected]> wrote:
> >
> > On Mon, May 13, 2024 at 02:53:40PM +0200, Håkon Bugge wrote:
> >> This series enables RDS and the RDMA stack to be used as a block I/O
> >> device. This to support a filesystem on top of a raw block device
> >> which uses RDS and the RDMA stack as the network transport layer.
> >>
> >> Under intense memory pressure, we get memory reclaims. Assume the
> >> filesystem reclaims memory, goes to the raw block device, which calls
> >> into RDS, which calls the RDMA stack. Now, if regular GFP_KERNEL
> >> allocations in RDS or the RDMA stack require reclaims to be fulfilled,
> >> we end up in a circular dependency.
> >>
> >> We break this circular dependency by:
> >>
> >> 1. Force all allocations in RDS and the relevant RDMA stack to use
> >> GFP_NOIO, by means of a parenthetic use of
> >> memalloc_noio_{save,restore} on all relevant entry points.
> >
> > I didn't see an obvious explanation why each of these changes was
> > necessary. I expected this:
> >
> >> 2. Make sure work-queues inherits current->flags
> >> wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
> >> work-queue inherits the same flag(s).
>
> When the modules initialize, it does not help to have 2., unless
> PF_MEMALLOC_NOIO is set in current->flags. That is most probably not
> set, e.g. considering modprobe. That is why we have these steps in
> all the five modules. During module initialization, work queues are
> allocated in all mentioned modules. Therefore, the module
> initialization functions need the paranthetic use of
> memalloc_noio_{save,restore}.

And why would I need these work queues to have NOIO? They are never
called under a filesystem.

You need to explain, in every single case, how something in a NOIO
context becomes entangled with the unrelated thing you are tagging NOIO.

Historically, when we've tried to do this, we gave up because the entire
subsystem ended up being NOIO.

> > And further, is there any validation of this? There is some lockdep
> > tracking of reclaim, I feel like it should be more robustly hooked up
> > in RDMA if we expect this to really work..
>
> Oracle is about to launch a product using this series, so the
> techniques used have been thoroughly validated, allthough on an
> older kernel version.

That doesn't really help keep it working. I want to see some kind of
lockdep scheme to enforce this that can validate without ever
triggering reclaim.

Jason