2012-08-15 19:46:18

by Sage Weil

Subject: regression with poll(2)?

I'm experiencing a stall with Ceph daemons communicating over TCP that
occurs reliably with 3.6-rc1 (and linus/master) but not 3.5. The basic
situation is:

- the socket is two processes communicating over TCP on the same host, e.g.

tcp 0 2164849 10.214.132.38:6801 10.214.132.38:51729 ESTABLISHED

- one end writes a bunch of data in
- the other end consumes data, but at some point stalls.
- reads are nonblocking, e.g.

int got = ::recv( sd, buf, len, MSG_DONTWAIT );

and between those calls we wait with (a fuller, self-contained sketch of
this loop appears after the list below)

struct pollfd pfd;
short evmask;
pfd.fd = sd;
pfd.events = POLLIN;
#if defined(__linux__)
pfd.events |= POLLRDHUP;
#endif

if (poll(&pfd, 1, msgr->timeout) <= 0)
return -1;

- in my case the timeout is ~15 minutes. At that point it errors out,
and the daemons reconnect and continue for a while until hitting this
again.

- at the time of the stall, the reading process is blocked on that
poll(2) call. There are a bunch of threads sitting in poll(2), some of
them stalled and some not, but they all have stacks like

[<ffffffff8118f6f9>] poll_schedule_timeout+0x49/0x70
[<ffffffff81190baf>] do_sys_poll+0x35f/0x4c0
[<ffffffff81190deb>] sys_poll+0x6b/0x100
[<ffffffff8163d369>] system_call_fastpath+0x16/0x1b

- you'll note that the netstat output shows data queued:

tcp 0 1163264 10.214.132.36:6807 10.214.132.36:41738 ESTABLISHED
tcp 0 1622016 10.214.132.36:41738 10.214.132.36:6807 ESTABLISHED

etc.
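
(To make the read path concrete, here is a self-contained sketch of the
wait-then-read loop described above. It is illustrative only: read_some()
and wait_for_data() are not the actual Ceph messenger functions, and the
timeout plumbing is simplified.)

#define _GNU_SOURCE           /* for POLLRDHUP */
#include <errno.h>
#include <poll.h>
#include <sys/socket.h>

/* Wait until the socket is readable (or hung up), up to timeout_ms. */
static int wait_for_data(int sd, int timeout_ms)
{
    struct pollfd pfd;
    pfd.fd = sd;
    pfd.events = POLLIN;
#if defined(__linux__)
    pfd.events |= POLLRDHUP;
#endif
    /* <= 0 covers both timeout (0) and error (-1) */
    if (poll(&pfd, 1, timeout_ms) <= 0)
        return -1;
    return 0;
}

/* Nonblocking read; poll between attempts, as in the snippet above. */
static ssize_t read_some(int sd, char *buf, size_t len, int timeout_ms)
{
    for (;;) {
        ssize_t got = recv(sd, buf, len, MSG_DONTWAIT);
        if (got >= 0)
            return got;          /* data, or 0 on orderly shutdown */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;           /* hard error */
        if (wait_for_data(sd, timeout_ms) < 0)
            return -1;           /* timeout: caller tears down and reconnects */
    }
}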

Is this a known regression? Or might I be misusing the API? What
information would help track it down?

Thanks!
sage


2012-08-15 20:54:51

by Atchley, Scott

Subject: Re: regression with poll(2)?

On Aug 15, 2012, at 3:46 PM, Sage Weil wrote:

> I'm experiencing a stall with Ceph daemons communicating over TCP that
> occurs reliably with 3.6-rc1 (and linus/master) but not 3.5. The basic
> situation is:
>
> - the socket is two processes communicating over TCP on the same host, e.g.
>
> <SNIP>


Sage,

Do you see the same behavior when using two hosts (i.e. not loopback)? If different, how much data is in the pipe in the localhost case?

Scott

2012-08-15 21:04:02

by Sage Weil

Subject: Re: regression with poll(2)?

On Wed, 15 Aug 2012, Atchley, Scott wrote:
> On Aug 15, 2012, at 3:46 PM, Sage Weil wrote:
>
> > <SNIP>
>
>
> Sage,
>
> Do you see the same behavior when using two hosts (i.e. not loopback)? If different, how much data is in the pipe in the localhost case?

I have only seen it in the loopback case, and have independently diagnosed
it a half dozen or so times now.

:/
sage


>
> Scott

2012-08-19 18:49:39

by Sage Weil

Subject: Re: regression with poll(2)

I've bisected and identified this commit:

netvm: propagate page->pfmemalloc to skb

The skb->pfmemalloc flag gets set to true iff the PFMEMALLOC reserves
were used during the slab allocation of data in __alloc_skb. If the
packet is fragmented, it is possible that pages will be allocated from the
PFMEMALLOC reserve without propagating this information to the skb. This
patch propagates page->pfmemalloc from pages allocated for fragments to
the skb.

Signed-off-by: Mel Gorman <[email protected]>
Acked-by: David S. Miller <[email protected]>
Cc: Neil Brown <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Mike Christie <[email protected]>
Cc: Eric B Munson <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Christoph Lameter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
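
(For context, the propagation that commit adds happens when page fragments
are attached to an skb. The sketch below is reconstructed from the
changelog, not quoted from the diff.)

#include <linux/mm_types.h>
#include <linux/skbuff.h>

/* Sketch: when a page fragment is attached to an skb, carry the page's
 * pfmemalloc marker over to the skb so the receive path can tell the
 * data came from the emergency reserves. Illustrative, not the diff. */
static inline void skb_propagate_pfmemalloc_sketch(struct page *page,
                                                   struct sk_buff *skb)
{
    if (page->pfmemalloc)
        skb->pfmemalloc = true;
}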

I've retested several times and confirmed that this change leads to the
breakage, and also confirmed that reverting it on top of -rc1 also fixes
the problem.

I've also added some additional instrumentation to my code and confirmed
that the process is blocking on poll(2) while netstat is reporting
data available on the socket.
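
(A check along these lines helps distinguish two cases: bytes genuinely
pending in the reader's receive queue, which would indict poll(2) itself,
and bytes stuck in the peer's send queue, which points at delivery.
FIONREAD is standard; the helper itself is hypothetical, not the actual
instrumentation.)

#include <stdio.h>
#include <sys/ioctl.h>

/* Hypothetical helper: after poll(2) times out, ask the kernel how many
 * bytes sit unread in this socket's receive queue. Zero here, with
 * netstat showing a large Send-Q on the peer, means the data was never
 * delivered rather than poll() missing it. */
static void report_pending(int sd)
{
    int avail = 0;
    if (ioctl(sd, FIONREAD, &avail) == 0)
        fprintf(stderr, "fd %d: %d bytes pending in recv queue\n", sd, avail);
}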

What can I do to help track this down?

Thanks!
sage


On Wed, 15 Aug 2012, Sage Weil wrote:

> I'm experiencing a stall with Ceph daemons communicating over TCP that
> occurs reliably with 3.6-rc1 (and linus/master) but not 3.5. The basic
> situation is:
>
> <SNIP>

2012-08-20 08:08:15

by Eric Dumazet

Subject: Re: regression with poll(2)

On Sun, 2012-08-19 at 11:49 -0700, Sage Weil wrote:
> I've bisected and identified this commit:
>
> netvm: propagate page->pfmemalloc to skb
>
> <SNIP>
>
> > - you'll note that the netstat output shows data queued:
> >
> > tcp 0 1163264 10.214.132.36:6807 10.214.132.36:41738 ESTABLISHED
> > tcp 0 1622016 10.214.132.36:41738 10.214.132.36:6807 ESTABLISHED
> >

In this netstat output, we can see some data in the output queues, but no
data in the receive queues, so poll() itself is OK.

Some TCP frames are not being properly delivered, even after a retransmit.

(To see useful stats/counters: ss -emoi dst 10.214.132.36)

For loopback transmits, skbs are taken from the output queue, cloned, and
fed to the local stack.

If they have the pfmemalloc bit set, they won't be delivered to normal
sockets; they are dropped instead.

tcp_sendmsg() seems to be able to queue skbs with pfmemalloc set to
true, and this makes no sense to me.
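
(The drop described above happens in the socket receive path; the gate is
along the lines of this sketch, modeled on the sk_filter()-style test
rather than quoted from the kernel.)

#include <linux/skbuff.h>
#include <net/sock.h>

/* Sketch of the receive-path gate: skbs built from pfmemalloc
 * (emergency-reserve) pages may only be delivered to sockets flagged
 * SOCK_MEMALLOC, i.e. sockets that are themselves helping to free
 * memory; for everyone else the packet is dropped. */
static int pfmemalloc_deliver_ok(struct sock *sk, struct sk_buff *skb)
{
    if (skb_pfmemalloc(skb) && !sock_flag(sk, SOCK_MEMALLOC))
        return -ENOMEM;   /* ordinary socket, reserve-backed skb: drop */
    return 0;
}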


2012-08-20 09:10:41

by Mel Gorman

Subject: Re: regression with poll(2)

On Sun, Aug 19, 2012 at 11:49:31AM -0700, Sage Weil wrote:
> I've bisected and identified this commit:
>
> netvm: propagate page->pfmemalloc to skb
>
> <SNIP>

Ok, thanks.

> I've retested several times and confirmed that this change leads to the
> breakage, and also confirmed that reverting it on top of -rc1 also fixes
> the problem.
>
> I've also added some additional instrumentation to my code and confirmed
> that the process is blocking on poll(2) while netstat is reporting
> data available on the socket.
>
> What can I do to help track this down?
>

Can the following patch be tested please? It is reported to fix an fio
regression that may be similar to what you are experiencing but has not
been picked up yet.

---8<---
From: Alex Shi <[email protected]>
Subject: [PATCH] mm: correct page->pfmemalloc to fix deactivate_slab regression

commit cfd19c5a9ec ("mm: only set page->pfmemalloc when
ALLOC_NO_WATERMARKS was used") tried to narrow down the page->pfmemalloc
setting, but it missed some places where pfmemalloc should be set.

So, in __slab_alloc, the mismatch between pfmemalloc and
ALLOC_NO_WATERMARKS causes an incorrect deactivate_slab() on our core2
server:

64.73% fio [kernel.kallsyms] [k] _raw_spin_lock
|
--- _raw_spin_lock
|
|---0.34%-- deactivate_slab
| __slab_alloc
| kmem_cache_alloc
| |

That causes a 40% regression in our fio sync write performance.

This patch moves the check into get_page_from_freelist(), which resolves
the issue.

Signed-off-by: Alex Shi <[email protected]>
---
mm/page_alloc.c | 21 +++++++++++----------
1 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 009ac28..07f1924 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1928,6 +1928,17 @@ this_zone_full:
zlc_active = 0;
goto zonelist_scan;
}
+
+ if (page)
+ /*
+ * page->pfmemalloc is set when ALLOC_NO_WATERMARKS was
+ * necessary to allocate the page. The expectation is
+ * that the caller is taking steps that will free more
+ * memory. The caller should avoid the page being used
+ * for !PFMEMALLOC purposes.
+ */
+ page->pfmemalloc = !!(alloc_flags & ALLOC_NO_WATERMARKS);
+
return page;
}

@@ -2389,14 +2400,6 @@ rebalance:
zonelist, high_zoneidx, nodemask,
preferred_zone, migratetype);
if (page) {
- /*
- * page->pfmemalloc is set when ALLOC_NO_WATERMARKS was
- * necessary to allocate the page. The expectation is
- * that the caller is taking steps that will free more
- * memory. The caller should avoid the page being used
- * for !PFMEMALLOC purposes.
- */
- page->pfmemalloc = true;
goto got_pg;
}
}
@@ -2569,8 +2572,6 @@ retry_cpuset:
page = __alloc_pages_slowpath(gfp_mask, order,
zonelist, high_zoneidx, nodemask,
preferred_zone, migratetype);
- else
- page->pfmemalloc = false;

trace_mm_page_alloc(page, order, gfp_mask, migratetype);

--
1.7.5.4

2012-08-20 09:33:16

by Eric Dumazet

Subject: Re: regression with poll(2)

On Mon, 2012-08-20 at 10:04 +0100, Mel Gorman wrote:

> Can the following patch be tested please? It is reported to fix an fio
> regression that may be similar to what you are experiencing but has not
> been picked up yet.
>
> -

This seems to help here.

Boot your machine with "mem=768M" or a bit less depending on your setup,
and try a netperf.

-> before patch :

# netperf
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
localhost.localdomain () port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec

87380 16384 16384 14.00 6.05

-> after patch :

Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec

87380 16384 16384 10.00 18509.73

2012-08-20 16:55:05

by Sage Weil

Subject: Re: regression with poll(2)

On Mon, 20 Aug 2012, Mel Gorman wrote:
> On Sun, Aug 19, 2012 at 11:49:31AM -0700, Sage Weil wrote:
> > <SNIP>
>
> Can the following patch be tested please? It is reported to fix an fio
> regression that may be similar to what you are experiencing but has not
> been picked up yet.

This patch appears to resolve things for me as well, at least after a
couple of passes. I'll let you know if I see any further problems come up
with more testing.

Thanks!
sage



2012-08-20 17:02:31

by Linus Torvalds

Subject: Re: regression with poll(2)

On Mon, Aug 20, 2012 at 2:04 AM, Mel Gorman <[email protected]> wrote:
>
> Can the following patch be tested please? It is reported to fix an fio
> regression that may be similar to what you are experiencing but has not
> been picked up yet.

Andrew, is this in your queue, or should I take this directly, or
what? It seems to fix the problem for Eric and Sage, at least.

Linus

2012-08-20 23:21:07

by Andrew Morton

Subject: Re: regression with poll(2)

On Mon, 20 Aug 2012 11:30:59 +0200
Eric Dumazet <[email protected]> wrote:

> On Mon, 2012-08-20 at 10:04 +0100, Mel Gorman wrote:
>
> > Can the following patch be tested please? It is reported to fix an fio
> > regression that may be similar to what you are experiencing but has not
> > been picked up yet.
> >
> > -
>
> This seems to help here.
>
> Boot your machine with "mem=768M" or a bit less depending on your setup,
> and try a netperf.
>
> -> before patch :
>
> # netperf
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> localhost.localdomain () port 0 AF_INET
> Recv Send Send
> Socket Socket Message Elapsed
> Size Size Size Time Throughput
> bytes bytes bytes secs. 10^6bits/sec
>
> 87380 16384 16384 14.00 6.05
>
> -> after patch :
>
> Recv Send Send
> Socket Socket Message Elapsed
> Size Size Size Time Throughput
> bytes bytes bytes secs. 10^6bits/sec
>
> 87380 16384 16384 10.00 18509.73

"seem to help"? Was previous performance fully restored?

2012-08-21 05:16:19

by Eric Dumazet

Subject: Re: regression with poll(2)

On Mon, 2012-08-20 at 16:20 -0700, Andrew Morton wrote:
> > This seems to help here.
> >
> > <SNIP>
>
> "seem to help"? Was previous performance fully restored?

I did some tests this morning on my HP Z600, and got the same numbers as
3.5.0.

Of course, it's a bit difficult to say for sure, because there is no
CONFIG_PFMEMALLOC option to toggle in order to test the real impact.


2012-08-21 07:11:13

by Mel Gorman

Subject: Re: regression with poll(2)

On Mon, Aug 20, 2012 at 09:54:59AM -0700, Sage Weil wrote:
> > <SNIP>
> >
> > > I've retested several times and confirmed that this change leads to the
> > > breakage, and also confirmed that reverting it on top of -rc1 also fixes
> > > the problem.
> > >
> > > I've also added some additional instrumentation to my code and confirmed
> > > that the process is blocking on poll(2) while netstat is reporting
> > > data available on the socket.
> > >
> > > What can I do to help track this down?
> > >
> >
> > Can the following patch be tested please? It is reported to fix an fio
> > regression that may be similar to what you are experiencing but has not
> > been picked up yet.
>
> This patch appears to resolve things for me as well, at least after a
> couple of passes. I'll let you know if I see any further problems come up
> with more testing.
>

Thanks very much Sage.

--
Mel Gorman
SUSE Labs

2012-08-21 15:55:58

by Andrew Morton

Subject: Re: regression with poll(2)

On Mon, 20 Aug 2012 10:02:05 -0700 Linus Torvalds <[email protected]> wrote:

> On Mon, Aug 20, 2012 at 2:04 AM, Mel Gorman <[email protected]> wrote:
> >
> > Can the following patch be tested please? It is reported to fix an fio
> > regression that may be similar to what you are experiencing but has not
> > been picked up yet.
>
> Andrew, is this in your queue, or should I take this directly, or
> what? It seems to fix the problem for Eric and Sage, at least.

Yes, I have a copy queued:


From: Alex Shi <[email protected]>
Subject: mm: correct page->pfmemalloc to fix deactivate_slab regression

cfd19c5a9ec ("mm: only set page->pfmemalloc when ALLOC_NO_WATERMARKS was
used") tried to narrow down the page->pfmemalloc setting, but it missed
some places where pfmemalloc should be set.

So, in __slab_alloc, the mismatch between pfmemalloc and
ALLOC_NO_WATERMARKS causes an incorrect deactivate_slab() on our core2
server:

64.73% fio [kernel.kallsyms] [k] _raw_spin_lock
|
--- _raw_spin_lock
|
|---0.34%-- deactivate_slab
| __slab_alloc
| kmem_cache_alloc
| |

That causes our fio sync write performance to have a 40% regression.

Move the check into get_page_from_freelist(), which resolves this issue.

Signed-off-by: Alex Shi <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Cc: David Miller <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Tested-by: Eric Dumazet <[email protected]>
Tested-by: Sage Weil <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---

mm/page_alloc.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)

diff -puN mm/page_alloc.c~mm-correct-page-pfmemalloc-to-fix-deactivate_slab-regression mm/page_alloc.c
--- a/mm/page_alloc.c~mm-correct-page-pfmemalloc-to-fix-deactivate_slab-regression
+++ a/mm/page_alloc.c
@@ -1928,6 +1928,17 @@ this_zone_full:
zlc_active = 0;
goto zonelist_scan;
}
+
+ if (page)
+ /*
+ * page->pfmemalloc is set when ALLOC_NO_WATERMARKS was
+ * necessary to allocate the page. The expectation is
+ * that the caller is taking steps that will free more
+ * memory. The caller should avoid the page being used
+ * for !PFMEMALLOC purposes.
+ */
+ page->pfmemalloc = !!(alloc_flags & ALLOC_NO_WATERMARKS);
+
return page;
}

@@ -2389,14 +2400,6 @@ rebalance:
zonelist, high_zoneidx, nodemask,
preferred_zone, migratetype);
if (page) {
- /*
- * page->pfmemalloc is set when ALLOC_NO_WATERMARKS was
- * necessary to allocate the page. The expectation is
- * that the caller is taking steps that will free more
- * memory. The caller should avoid the page being used
- * for !PFMEMALLOC purposes.
- */
- page->pfmemalloc = true;
goto got_pg;
}
}
@@ -2569,8 +2572,6 @@ retry_cpuset:
page = __alloc_pages_slowpath(gfp_mask, order,
zonelist, high_zoneidx, nodemask,
preferred_zone, migratetype);
- else
- page->pfmemalloc = false;

trace_mm_page_alloc(page, order, gfp_mask, migratetype);

_