2015-04-02 13:51:06

by Sowmini Varadhan

Subject: [PATCH 0/6] RDS: RDS core bug fixes

This patch-series updates the RDS core and rds-tcp modules with
some bug fixes that were originally authored by Andy Grover,
Zach Brown, and Chris Mason.


Sowmini Varadhan (6):
RDS: Re-add pf/sol access via sysctl
RDS: only use passive connections when addresses match
rds: check for excessive looping in rds_send_xmit
RDS: make sure not to loop forever inside rds_send_xmit
RDS: rds_send_xmit is called under a spinlock, let's not do a
cond_resched()
RDS: don't trust the LL_SEND_FULL bit

net/rds/connection.c | 3 +-
net/rds/rds.h | 1 +
net/rds/send.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-----
net/rds/sysctl.c | 20 +++++++++++++++++
net/rds/threads.c | 2 +
5 files changed, 76 insertions(+), 7 deletions(-)


2015-04-02 13:56:07

by Sowmini Varadhan

Subject: [PATCH 1/6] RDS: Re-add pf/sol access via sysctl

Although RDS now has an official PF_RDS value, existing software
expects to discover it through the rds sysctls. We need to maintain
these for now, for backwards compatibility.

Original patch by Andy Grover

Signed-off-by: Sowmini Varadhan <[email protected]>
Reviewed-by: Ajaykumar Hotchandani <[email protected]>
---
net/rds/sysctl.c | 20 ++++++++++++++++++++
1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/net/rds/sysctl.c b/net/rds/sysctl.c
index c173f69..5fbe9e6 100644
--- a/net/rds/sysctl.c
+++ b/net/rds/sysctl.c
@@ -49,6 +49,12 @@ unsigned int rds_sysctl_max_unacked_bytes = (16 << 20);

unsigned int rds_sysctl_ping_enable = 1;

+/* We have official values, but must maintain the sysctl interface for existing
+ * software that expects to find these values here.
+ */
+static int rds_sysctl_pf_rds = PF_RDS;
+static int rds_sysctl_sol_rds = SOL_RDS;
+
static struct ctl_table rds_sysctl_rds_table[] = {
{
.procname = "reconnect_min_delay_ms",
@@ -69,6 +75,20 @@ static struct ctl_table rds_sysctl_rds_table[] = {
.extra2 = &rds_sysctl_reconnect_max,
},
{
+ .procname = "pf_rds",
+ .data = &rds_sysctl_pf_rds,
+ .maxlen = sizeof(int),
+ .mode = 0444,
+ .proc_handler = proc_dointvec,
+ },
+ {
+ .procname = "sol_rds",
+ .data = &rds_sysctl_sol_rds,
+ .maxlen = sizeof(int),
+ .mode = 0444,
+ .proc_handler = proc_dointvec,
+ },
+ {
.procname = "max_unacked_packets",
.data = &rds_sysctl_max_unacked_packets,
.maxlen = sizeof(int),
--
1.7.1

2015-04-02 13:52:29

by Sowmini Varadhan

Subject: [PATCH 2/6] RDS: only use passive connections when addresses match

Passive connections were added for the case where one loopback IB
connection between identical addresses needs another connection to store
the second QP. Unfortunately, they were also created in the case where
the addresses differ and we already have both QPs.

This led to a message reordering bug.

- two different IB interfaces and addresses on a machine: A and B
- traffic is sent from A to B
- connection from A-B is created, connect request sent
- listening accepts connect request, B-A is created
- traffic flows, next_rx is incremented
- unacked messages exist on the retrans list
- connection A-B is shut down, new connect request sent
- listen sees existing loopback B-A, creates new passive B-A
- retrans messages are sent and delivered because of 0 next_rx

The problem is that the second connection request saw the previously
existing parent connection. Instead of using it, and using the existing
next_rx_seq state for the traffic between those IPs, it mistakenly
thought that it had to create a passive connection.

We fix this by only using passive connections in the special case where
laddr and faddr match. In this case we'll only ever have one parent
sending connection requests, and one passive connection created when the
listening path sees the existing parent connection that initiated the
request.

Original patch by Zach Brown

Signed-off-by: Sowmini Varadhan <[email protected]>
Reviewed-by: Ajaykumar Hotchandani <[email protected]>
---
net/rds/connection.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/net/rds/connection.c b/net/rds/connection.c
index 378c3a6..7952a5b 100644
--- a/net/rds/connection.c
+++ b/net/rds/connection.c
@@ -130,7 +130,7 @@ static struct rds_connection *__rds_conn_create(__be32 laddr, __be32 faddr,
rcu_read_lock();
conn = rds_conn_lookup(head, laddr, faddr, trans);
if (conn && conn->c_loopback && conn->c_trans != &rds_loop_transport &&
- !is_outgoing) {
+ laddr == faddr && !is_outgoing) {
/* This is a looped back IB connection, and we're
* called by the code handling the incoming connect.
* We need a second connection object into which we
--
1.7.1

2015-04-02 13:51:43

by Sowmini Varadhan

Subject: [PATCH 3/6] rds: check for excessive looping in rds_send_xmit

Original patch by Andy Grover

Signed-off-by: Sowmini Varadhan <[email protected]>
---
net/rds/send.c | 13 +++++++++++++
1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/net/rds/send.c b/net/rds/send.c
index 44672be..b1741a2 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -140,6 +140,7 @@ int rds_send_xmit(struct rds_connection *conn)
struct scatterlist *sg;
int ret = 0;
LIST_HEAD(to_be_dropped);
+ int same_rm = 0;

restart:

@@ -177,6 +178,18 @@ int rds_send_xmit(struct rds_connection *conn)

rm = conn->c_xmit_rm;

+ if (!rm) {
+ same_rm = 0;
+ } else {
+ same_rm++;
+ if (same_rm >= 4096) {
+ printk_ratelimited(KERN_ERR "RDS: Stuck rm\n");
+ cond_resched();
+ ret = -EAGAIN;
+ break;
+ }
+ }
+
/*
* If between sending messages, we can send a pending congestion
* map update.
--
1.7.1

2015-04-02 13:52:24

by Sowmini Varadhan

Subject: [PATCH 4/6] RDS: make sure not to loop forever inside rds_send_xmit

If a determined set of concurrent senders keeps the send queue full,
we can loop forever inside rds_send_xmit. This fix has two parts.

First, we drop out of the while(1) loop after we've processed a
large batch of messages.

Second, we add a generation number that gets bumped each time the
xmit bit lock is acquired. If someone else has jumped in and
made progress in the queue, we skip our goto restart.

Original patch by Chris Mason.

Signed-off-by: Sowmini Varadhan <[email protected]>
Reviewed-by: Ajaykumar Hotchandani <[email protected]>
---
net/rds/connection.c | 1 +
net/rds/rds.h | 1 +
net/rds/send.c | 47 +++++++++++++++++++++++++++++++++++++++++++----
3 files changed, 45 insertions(+), 4 deletions(-)

diff --git a/net/rds/connection.c b/net/rds/connection.c
index 7952a5b..14f0413 100644
--- a/net/rds/connection.c
+++ b/net/rds/connection.c
@@ -193,6 +193,7 @@ static struct rds_connection *__rds_conn_create(__be32 laddr, __be32 faddr,
}

atomic_set(&conn->c_state, RDS_CONN_DOWN);
+ conn->c_send_gen = 0;
conn->c_reconnect_jiffies = 0;
INIT_DELAYED_WORK(&conn->c_send_w, rds_send_worker);
INIT_DELAYED_WORK(&conn->c_recv_w, rds_recv_worker);
diff --git a/net/rds/rds.h b/net/rds/rds.h
index c3f2855..0d41155 100644
--- a/net/rds/rds.h
+++ b/net/rds/rds.h
@@ -110,6 +110,7 @@ struct rds_connection {
void *c_transport_data;

atomic_t c_state;
+ unsigned long c_send_gen;
unsigned long c_flags;
unsigned long c_reconnect_jiffies;
struct delayed_work c_send_w;
diff --git a/net/rds/send.c b/net/rds/send.c
index b1741a2..aec3f9d 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -141,9 +141,13 @@ int rds_send_xmit(struct rds_connection *conn)
int ret = 0;
LIST_HEAD(to_be_dropped);
int same_rm = 0;
+ int batch_count;
+ unsigned long send_gen = 0;

restart:

+ batch_count = 0;
+
/*
* sendmsg calls here after having queued its message on the send
* queue. We only have one task feeding the connection at a time. If
@@ -158,6 +162,17 @@ int rds_send_xmit(struct rds_connection *conn)
}

/*
+ * we record the send generation after doing the xmit acquire.
+ * if someone else manages to jump in and do some work, we'll use
+ * this to avoid a goto restart farther down.
+ *
+ * we don't need a lock because the counter is only incremented
+ * while we have the in_xmit bit held.
+ */
+ conn->c_send_gen++;
+ send_gen = conn->c_send_gen;
+
+ /*
* rds_conn_shutdown() sets the conn state and then tests RDS_IN_XMIT,
* we do the opposite to avoid races.
*/
@@ -215,6 +230,16 @@ int rds_send_xmit(struct rds_connection *conn)
if (!rm) {
unsigned int len;

+ batch_count++;
+
+ /* we want to process as big a batch as we can, but
+ * we also want to avoid softlockups. If we've been
+ * through a lot of messages, lets back off and see
+ * if anyone else jumps in
+ */
+ if (batch_count >= 1024)
+ goto over_batch;
+
spin_lock_irqsave(&conn->c_lock, flags);

if (!list_empty(&conn->c_send_queue)) {
@@ -370,9 +395,9 @@ int rds_send_xmit(struct rds_connection *conn)
}
}

+over_batch:
if (conn->c_trans->xmit_complete)
conn->c_trans->xmit_complete(conn);
-
release_in_xmit(conn);

/* Nuke any messages we decided not to retransmit. */
@@ -393,12 +418,26 @@ int rds_send_xmit(struct rds_connection *conn)
* If the transport cannot continue (i.e ret != 0), then it must
* call us when more room is available, such as from the tx
* completion handler.
+ *
+ * We have an extra generation check here so that if someone manages
+ * to jump in after our release_in_xmit, we'll see that they have done
+ * some work and we will skip our goto
*/
if (ret == 0) {
smp_mb();
- if (!list_empty(&conn->c_send_queue)) {
- rds_stats_inc(s_send_lock_queue_raced);
- goto restart;
+ if (!list_empty(&conn->c_send_queue) &&
+ send_gen == conn->c_send_gen) {
+ cond_resched();
+ /* repeat our check after the resched in case
+ * someone else was kind enough to empty or process
+ * the queue
+ */
+ smp_mb();
+ if (!list_empty(&conn->c_send_queue) &&
+ send_gen == conn->c_send_gen) {
+ rds_stats_inc(s_send_lock_queue_raced);
+ goto restart;
+ }
}
}
out:
--
1.7.1

2015-04-02 13:51:12

by Sowmini Varadhan

Subject: [PATCH 5/6] RDS: rds_send_xmit is called under a spinlock, let's not do a cond_resched()

Original patch by Chris Mason

Signed-off-by: Sowmini Varadhan <[email protected]>
---
net/rds/send.c | 14 ++------------
1 files changed, 2 insertions(+), 12 deletions(-)

diff --git a/net/rds/send.c b/net/rds/send.c
index aec3f9d..23135a8 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -199,7 +199,6 @@ int rds_send_xmit(struct rds_connection *conn)
same_rm++;
if (same_rm >= 4096) {
printk_ratelimited(KERN_ERR "RDS: Stuck rm\n");
- cond_resched();
ret = -EAGAIN;
break;
}
@@ -427,17 +426,8 @@ int rds_send_xmit(struct rds_connection *conn)
smp_mb();
if (!list_empty(&conn->c_send_queue) &&
send_gen == conn->c_send_gen) {
- cond_resched();
- /* repeat our check after the resched in case
- * someone else was kind enough to empty or process
- * the queue
- */
- smp_mb();
- if (!list_empty(&conn->c_send_queue) &&
- send_gen == conn->c_send_gen) {
- rds_stats_inc(s_send_lock_queue_raced);
- goto restart;
- }
+ rds_stats_inc(s_send_lock_queue_raced);
+ goto restart;
}
}
out:
--
1.7.1

2015-04-02 13:56:13

by Sowmini Varadhan

Subject: [PATCH 6/6] RDS: don't trust the LL_SEND_FULL bit

We are seeing connections stuck with the LL_SEND_FULL bit set and
never cleared. This changes RDS to stop trusting the LL_SEND_FULL
bit and to kick krdsd whenever we see -ENOMEM from the ring
allocation code.

Original patch by Chris Mason

Signed-off-by: Sowmini Varadhan <[email protected]>
Reviewed-by: Ajaykumar Hotchandani <[email protected]>
---
net/rds/send.c | 11 +++++++----
net/rds/threads.c | 2 ++
2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/net/rds/send.c b/net/rds/send.c
index 23135a8..9d9c90c 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -1108,8 +1108,10 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
*/
rds_stats_inc(s_send_queued);

- if (!test_bit(RDS_LL_SEND_FULL, &conn->c_flags))
- rds_send_xmit(conn);
+ ret = rds_send_xmit(conn);
+ if (ret == -ENOMEM || ret == -EAGAIN)
+ queue_delayed_work(rds_wq, &conn->c_send_w, 1);
+

rds_message_put(rm);
return payload_len;
@@ -1165,8 +1167,9 @@ rds_send_pong(struct rds_connection *conn, __be16 dport)
rds_stats_inc(s_send_queued);
rds_stats_inc(s_send_pong);

- if (!test_bit(RDS_LL_SEND_FULL, &conn->c_flags))
- queue_delayed_work(rds_wq, &conn->c_send_w, 0);
+ ret = rds_send_xmit(conn);
+ if (ret == -ENOMEM || ret == -EAGAIN)
+ queue_delayed_work(rds_wq, &conn->c_send_w, 1);

rds_message_put(rm);
return 0;
diff --git a/net/rds/threads.c b/net/rds/threads.c
index dc2402e..454aa6d 100644
--- a/net/rds/threads.c
+++ b/net/rds/threads.c
@@ -162,7 +162,9 @@ void rds_send_worker(struct work_struct *work)
int ret;

if (rds_conn_state(conn) == RDS_CONN_UP) {
+ clear_bit(RDS_LL_SEND_FULL, &conn->c_flags);
ret = rds_send_xmit(conn);
+ cond_resched();
rdsdebug("conn %p ret %d\n", conn, ret);
switch (ret) {
case -EAGAIN:
--
1.7.1

2015-04-02 14:06:31

by Sergei Shtylyov

Subject: Re: [PATCH 6/6] RDS: don't trust the LL_SEND_FULL bit

Hello.

On 4/2/2015 4:50 PM, Sowmini Varadhan wrote:

> We are seeing connections stuck with the LL_SEND_FULL bit getting
> set and never cleared. This changes RDS to stop trusting the
> LL_SEND_FULL bit and kick krdsd after any time we
> see -ENOMEM from the ring allocation code.

> Original patch by Chris Mason

> Signed-off-by: Sowmini Varadhan <[email protected]>
> Reviewed-by: Ajaykumar Hotchandani <[email protected]>
> ---
> net/rds/send.c | 11 +++++++----
> net/rds/threads.c | 2 ++
> 2 files changed, 9 insertions(+), 4 deletions(-)

> diff --git a/net/rds/send.c b/net/rds/send.c
> index 23135a8..9d9c90c 100644
> --- a/net/rds/send.c
> +++ b/net/rds/send.c
> @@ -1108,8 +1108,10 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
> */
> rds_stats_inc(s_send_queued);
>
> - if (!test_bit(RDS_LL_SEND_FULL, &conn->c_flags))
> - rds_send_xmit(conn);
> + ret = rds_send_xmit(conn);
> + if (ret == -ENOMEM || ret == -EAGAIN)
> + queue_delayed_work(rds_wq, &conn->c_send_w, 1);
> +

There's no need for the extra blank line here.

>
> rds_message_put(rm);
> return payload_len;
[...]

WBR, Sergei