2015-04-07 20:38:38

by Sowmini Varadhan

Subject: [PATCH v4 0/3] RDS: RDS-core fixes

This patch-series updates the RDS core and rds-tcp modules with
some bug fixes that were originally authored by Andy Grover,
Zach Brown, and Chris Mason.

v2: addressed a code review comment from Sergei Shtylyov
v3: DaveM comments:
- dropped patches 3, 5 for "heuristic" changes in rds_send_xmit().
Investigation into the root-cause of these IB-triggered changes
produced the feedback: "I don't remember seeing "RDS: Stuck RM"
message in last 1-1.5 years and checking with other folks. It may very
well be some old workaround for stale connection for which long term
fix is already made and this part of code not exercised anymore."

Any such fixes, *if* they are needed, can/should be done in the
IB-specific RDS transport modules.

- similarly dropped the LL_SEND_FULL patch (patch 6 in v2 set)

v4: Documentation/networking/rds.txt contains incorrect references
to "missing sysctl values for pf_rds and sol_rds in mainline".
The sysctl values were never needed in mainline, so fix the
documentation.

Sowmini Varadhan (3):
Document AF_RDS, PF_RDS and SOL_RDS correctly.
RDS: only use passive connections when addresses match
RDS: make sure not to loop forever inside rds_send_xmit

Documentation/networking/rds.txt | 9 ++++-----
net/rds/connection.c | 3 ++-
net/rds/rds.h | 1 +
net/rds/send.c | 33 +++++++++++++++++++++++++++++++--
4 files changed, 38 insertions(+), 8 deletions(-)


2015-04-07 20:38:40

by Sowmini Varadhan

Subject: [PATCH v4 1/3] Document AF_RDS, PF_RDS and SOL_RDS correctly.

AF_RDS, PF_RDS and SOL_RDS are available in header files,
and there is no need to get their values from /proc. Document
this correctly.

Fixes: 0c5f9b8830aa ("RDS: Documentation")

Signed-off-by: Sowmini Varadhan <[email protected]>
---
v4: this change replaces the v3 modifications to sysctl.c

Documentation/networking/rds.txt | 9 ++++-----
1 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/Documentation/networking/rds.txt b/Documentation/networking/rds.txt
index c67077c..e1a3d59 100644
--- a/Documentation/networking/rds.txt
+++ b/Documentation/networking/rds.txt
@@ -62,11 +62,10 @@ Socket Interface
================

AF_RDS, PF_RDS, SOL_RDS
- These constants haven't been assigned yet, because RDS isn't in
- mainline yet. Currently, the kernel module assigns some constant
- and publishes it to user space through two sysctl files
- /proc/sys/net/rds/pf_rds
- /proc/sys/net/rds/sol_rds
+ AF_RDS and PF_RDS are the domain type to be used with socket(2)
+ to create RDS sockets. SOL_RDS is the socket-level to be used
+ with setsockopt(2) and getsockopt(2) for RDS specific socket
+ options.

fd = socket(PF_RDS, SOCK_SEQPACKET, 0);
This creates a new, unbound RDS socket.
--
1.7.1
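
As an aside, a minimal userspace sketch of the usage the corrected
documentation describes, assuming a kernel with RDS support and the
constants exported via <sys/socket.h> and <linux/rds.h>; the loopback
address and service port 4000 are arbitrary illustration values, and
error handling is kept minimal:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/rds.h>		/* SOL_RDS, RDS_CANCEL_SENT_TO */

int main(void)
{
	/* AF_RDS/PF_RDS come straight from the headers; no /proc lookup */
	int fd = socket(PF_RDS, SOCK_SEQPACKET, 0);
	if (fd < 0) {
		perror("socket(PF_RDS)");
		return 1;
	}

	/* an RDS socket binds to an IPv4 address plus a 16-bit port */
	struct sockaddr_in sin;
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	sin.sin_port = htons(4000);
	if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
		perror("bind");

	/* SOL_RDS is the level for RDS-specific socket options, for
	 * example cancelling pending sends to a given destination */
	if (setsockopt(fd, SOL_RDS, RDS_CANCEL_SENT_TO,
		       &sin, sizeof(sin)) < 0)
		perror("setsockopt(SOL_RDS)");

	close(fd);
	return 0;
}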

2015-04-07 20:38:42

by Sowmini Varadhan

Subject: [PATCH v4 2/3] RDS: only use passive connections when addresses match

Passive connections were added for the case where one loopback IB
connection between identical addresses needs another connection to store
the second QP. Unfortunately, they were also created in the case where
the addresses differ and we already have both QPs.

This led to a message-reordering bug.

- two different IB interfaces and addresses on a machine: A and B
- traffic is sent from A to B
- connection from A-B is created, connect request sent
- listening accepts connect request, B-A is created
- traffic flows, next_rx is incremented
- unacked messages exist on the retrans list
- connection A-B is shut down, new connect request sent
- listen sees existing loopback B-A, creates new passive B-A
- retrans messages are sent and delivered because of 0 next_rx

The problem is that the second connection request saw the previously
existing parent connection. Instead of using it, and using the existing
next_rx_seq state for the traffic between those IPs, it mistakenly
thought that it had to create a passive connection.

We fix this by only using passive connections in the special case where
laddr and faddr match. In this case we'll only ever have one parent
sending connection requests and one passive connection created as the
listening path sees the existing parent connection which initiated the
request.

Original patch by Zach Brown

Signed-off-by: Sowmini Varadhan <[email protected]>
Cc: Ajaykumar Hotchandani <[email protected]>
---
net/rds/connection.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/net/rds/connection.c b/net/rds/connection.c
index 378c3a6..7952a5b 100644
--- a/net/rds/connection.c
+++ b/net/rds/connection.c
@@ -130,7 +130,7 @@ static struct rds_connection *__rds_conn_create(__be32 laddr, __be32 faddr,
rcu_read_lock();
conn = rds_conn_lookup(head, laddr, faddr, trans);
if (conn && conn->c_loopback && conn->c_trans != &rds_loop_transport &&
- !is_outgoing) {
+ laddr == faddr && !is_outgoing) {
/* This is a looped back IB connection, and we're
* called by the code handling the incoming connect.
* We need a second connection object into which we
--
1.7.1
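
To see the heart of the fix out of context, a condensed, hypothetical
model of the decision in __rds_conn_create() after this change:
lookup() and alloc_conn() stand in for rds_conn_lookup() and the real
allocation path, and the rds_loop_transport check is elided:

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t be32;		/* stand-in for the kernel's __be32 */

struct rds_connection {
	be32 c_laddr, c_faddr;
	bool c_loopback;
};

/* hypothetical stand-ins for rds_conn_lookup() and the alloc path */
struct rds_connection *lookup(be32 laddr, be32 faddr);
struct rds_connection *alloc_conn(be32 laddr, be32 faddr);

struct rds_connection *conn_create(be32 laddr, be32 faddr,
				   bool is_outgoing)
{
	struct rds_connection *conn = lookup(laddr, faddr);

	/* Before the fix this test lacked "laddr == faddr", so an
	 * incoming connect between two different local addresses
	 * wrongly bypassed the existing parent (and its next_rx state)
	 * and built a fresh passive connection. */
	if (conn && conn->c_loopback && laddr == faddr && !is_outgoing)
		conn = NULL;	/* force a second, passive connection
				 * to hold the second QP */

	if (!conn)
		conn = alloc_conn(laddr, faddr);
	return conn;
}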

2015-04-07 20:39:36

by Sowmini Varadhan

Subject: [PATCH v4 3/3] RDS: make sure not to loop forever inside rds_send_xmit

If a determined set of concurrent senders keeps the send queue full,
we can loop forever inside rds_send_xmit. This fix has two parts.

First, we drop out of the while(1) loop after we've processed a
large batch of messages.

Second, we add a generation number that gets bumped each time the
xmit bit lock is acquired. If someone else has jumped in and
made progress in the queue, we skip our goto restart.

Original patch by Chris Mason.

Signed-off-by: Sowmini Varadhan <[email protected]>
Cc: Ajaykumar Hotchandani <[email protected]>
---
net/rds/connection.c | 1 +
net/rds/rds.h | 1 +
net/rds/send.c | 33 +++++++++++++++++++++++++++++++--
3 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/net/rds/connection.c b/net/rds/connection.c
index 7952a5b..14f0413 100644
--- a/net/rds/connection.c
+++ b/net/rds/connection.c
@@ -193,6 +193,7 @@ static struct rds_connection *__rds_conn_create(__be32 laddr, __be32 faddr,
}

atomic_set(&conn->c_state, RDS_CONN_DOWN);
+ conn->c_send_gen = 0;
conn->c_reconnect_jiffies = 0;
INIT_DELAYED_WORK(&conn->c_send_w, rds_send_worker);
INIT_DELAYED_WORK(&conn->c_recv_w, rds_recv_worker);
diff --git a/net/rds/rds.h b/net/rds/rds.h
index c3f2855..0d41155 100644
--- a/net/rds/rds.h
+++ b/net/rds/rds.h
@@ -110,6 +110,7 @@ struct rds_connection {
void *c_transport_data;

atomic_t c_state;
+ unsigned long c_send_gen;
unsigned long c_flags;
unsigned long c_reconnect_jiffies;
struct delayed_work c_send_w;
diff --git a/net/rds/send.c b/net/rds/send.c
index 44672be..b02f4da 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -140,8 +140,11 @@ int rds_send_xmit(struct rds_connection *conn)
struct scatterlist *sg;
int ret = 0;
LIST_HEAD(to_be_dropped);
+ int batch_count;
+ unsigned long send_gen = 0;

restart:
+ batch_count = 0;

/*
* sendmsg calls here after having queued its message on the send
@@ -157,6 +160,17 @@ int rds_send_xmit(struct rds_connection *conn)
}

/*
+ * we record the send generation after doing the xmit acquire.
+ * if someone else manages to jump in and do some work, we'll use
+ * this to avoid a goto restart farther down.
+ *
+ * we don't need a lock because the counter is only incremented
+ * while we have the in_xmit bit held.
+ */
+ conn->c_send_gen++;
+ send_gen = conn->c_send_gen;
+
+ /*
* rds_conn_shutdown() sets the conn state and then tests RDS_IN_XMIT,
* we do the opposite to avoid races.
*/
@@ -202,6 +216,16 @@ int rds_send_xmit(struct rds_connection *conn)
if (!rm) {
unsigned int len;

+ batch_count++;
+
+ /* we want to process as big a batch as we can, but
+ * we also want to avoid softlockups. If we've been
+ * through a lot of messages, lets back off and see
+ * if anyone else jumps in
+ */
+ if (batch_count >= 1024)
+ goto over_batch;
+
spin_lock_irqsave(&conn->c_lock, flags);

if (!list_empty(&conn->c_send_queue)) {
@@ -357,9 +381,9 @@ int rds_send_xmit(struct rds_connection *conn)
}
}

+over_batch:
if (conn->c_trans->xmit_complete)
conn->c_trans->xmit_complete(conn);
-
release_in_xmit(conn);

/* Nuke any messages we decided not to retransmit. */
@@ -380,10 +404,15 @@ int rds_send_xmit(struct rds_connection *conn)
* If the transport cannot continue (i.e ret != 0), then it must
* call us when more room is available, such as from the tx
* completion handler.
+ *
+ * We have an extra generation check here so that if someone manages
+ * to jump in after our release_in_xmit, we'll see that they have done
+ * some work and we will skip our goto
*/
if (ret == 0) {
smp_mb();
- if (!list_empty(&conn->c_send_queue)) {
+ if (!list_empty(&conn->c_send_queue) &&
+ send_gen == conn->c_send_gen) {
rds_stats_inc(s_send_lock_queue_raced);
goto restart;
}
--
1.7.1
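
For readers outside the RDS code, a condensed standalone C11 sketch of
the pattern this patch introduces: atomic_flag models the RDS_IN_XMIT
bit taken by acquire_in_xmit()/release_in_xmit(), and queue_nonempty()
and drain_queue() are hypothetical helpers:

#include <stdatomic.h>
#include <stdbool.h>

struct conn {
	atomic_flag in_xmit;	/* models the RDS_IN_XMIT bit lock */
	unsigned long send_gen;	/* only written while in_xmit is held */
};

bool queue_nonempty(struct conn *c);	/* hypothetical */
int drain_queue(struct conn *c);	/* hypothetical: sends a bounded
					 * batch (~1024 messages) and
					 * returns nonzero if the
					 * transport filled up */

void send_xmit(struct conn *c)
{
	unsigned long my_gen;
	int ret;

restart:
	/* only one thread transmits at a time; losers return and trust
	 * the current holder (or a later caller) to make progress */
	if (atomic_flag_test_and_set(&c->in_xmit))
		return;

	/* bump the generation while holding the bit; a plain increment
	 * suffices because only the bit holder ever writes it */
	my_gen = ++c->send_gen;

	ret = drain_queue(c);	/* bounded, to avoid softlockups */

	atomic_flag_clear(&c->in_xmit);

	/* restart only if work remains *and* nobody else got in after
	 * our release: a changed generation means another sender has
	 * taken (or finished) a pass, so the queued work is theirs */
	if (ret == 0 && queue_nonempty(c) && my_gen == c->send_gen)
		goto restart;
}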

2015-04-07 21:26:15

by David Miller

Subject: Re: [PATCH v4 3/3] RDS: make sure not to loop forever inside rds_send_xmit

From: Sowmini Varadhan <[email protected]>
Date: Tue, 7 Apr 2015 16:38:04 -0400

> @@ -157,6 +160,17 @@ int rds_send_xmit(struct rds_connection *conn)
> }
>
> /*
> + * we record the send generation after doing the xmit acquire.
> + * if someone else manages to jump in and do some work, we'll use
> + * this to avoid a goto restart farther down.
> + *
> + * we don't need a lock because the counter is only incremented
> + * while we have the in_xmit bit held.
> + */
> + conn->c_send_gen++;
> + send_gen = conn->c_send_gen;

This increment does need to either be changed to be an atomic_t
or covered by a lock.

Otherwise two concurrent callers can both try to increment it at
the same time, and it only effectively increments once. That's
corrupted state and will break all of the new logic added here.

Still very unhappy with this patch series submission, as I still find
new problems every time I look at these changes. Are you evaluating
them and double checking all of the claims in the commit log message
and comments, and logic in these, or are you just passing them off
upstream after testing and leaving the checking to people like me?
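
The lost-update concern raised here is easy to demonstrate in
isolation; a self-contained sketch (two threads bumping a plain long
next to a C11 atomic; compile with -pthread, and the iteration count
is arbitrary):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static long plain_gen;		/* unsynchronized counter */
static atomic_long atomic_gen;	/* same counter, done atomically */

static void *bump(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000000; i++) {
		plain_gen++;			 /* can lose increments */
		atomic_fetch_add(&atomic_gen, 1); /* never does */
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, bump, NULL);
	pthread_create(&b, NULL, bump, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* plain typically prints less than 2000000; atomic is exact */
	printf("plain=%ld atomic=%ld\n",
	       plain_gen, atomic_load(&atomic_gen));
	return 0;
}

As the follow-ups below note, in the patch as posted the increment is
in fact serialized by acquire_in_xmit(), so the real defect is the
"we don't need a lock" wording in the comment.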

2015-04-07 21:56:09

by Sowmini Varadhan

Subject: Re: [PATCH v4 3/3] RDS: make sure not to loop forever inside rds_send_xmit

On (04/07/15 17:26), David Miller wrote:
> > /*
> > + * we record the send generation after doing the xmit acquire.
> > + * if someone else manages to jump in and do some work, we'll use
> > + * this to avoid a goto restart farther down.
> > + *
> > + * we don't need a lock because the counter is only incremented
> > + * while we have the in_xmit bit held.
> > + */
> > + conn->c_send_gen++;
> > + send_gen = conn->c_send_gen;
>
> This increment does need to either be changed to be an atomic_t
> or covered by a lock.
>
> Otherwise two concurrent callers can both try to increment it at
> the same time, and it only effectively increments once. That's
> corrupted state and will break all of the new logic added here.
I'm afraid I don't follow what race condition you are seeing. Prior
to this line, the "acquire_in_xmit" check would have only allowed
one thread to successfully increment c_send_gen, right? What did I
miss?


> Still very unhappy with this patch series submission, as I still find
> new problems every time I look at these changes. Are you evaluating
> them and double checking all of the claims in the commit log message
> and comments, and logic in these, or are you just passing them off
> upstream after testing and leaving the checking to people like me?

I am sorry about the aggravation this is causing. But I'm trying to
do the right thing here and fix someone else's lapse: these fixes
were recognized as upstream issues but never properly committed.

Thus all I'm left with is visual inspection, plus running whatever
regression tests we have for these commits.

--Sowmini

[I had to truncate the recipient list, because it does not make it
past the rds-devel moderator otherwise.]

2015-04-07 22:27:41

by David Miller

Subject: Re: [PATCH v4 3/3] RDS: make sure not to loop forever inside rds_send_xmit

From: Sowmini Varadhan <[email protected]>
Date: Tue, 7 Apr 2015 17:56:05 -0400

> On (04/07/15 17:26), David Miller wrote:
>> > /*
>> > + * we record the send generation after doing the xmit acquire.
>> > + * if someone else manages to jump in and do some work, we'll use
>> > + * this to avoid a goto restart farther down.
>> > + *
>> > + * we don't need a lock because the counter is only incremented
>> > + * while we have the in_xmit bit held.
>> > + */
>> > + conn->c_send_gen++;
>> > + send_gen = conn->c_send_gen;
>>
>> This increment does need to either be changed to be an atomic_t
>> or covered by a lock.
>>
>> Otherwise two concurrent callers can both try to increment it at
>> the same time, and it only effectively increments once. That's
>> corrupted state and will break all of the new logic added here.
> I'm afraid I dont follow what race condiiton you are seeing? Prior
> to this line, the "acquire_in_xmit" check would have only allowed
> one thread to successfully increment c_send_gen, right? What did I
> miss?

Then there is locking protecting this counter increment, and the
comment is wrong.