From: Linus Torvalds
Date: Mon, 14 Aug 2017 19:52:16 -0700
Subject: Re: [PATCH 1/2] sched/wait: Break up long wake list walk
To: Andi Kleen
Cc: Tim Chen, Peter Zijlstra, Ingo Molnar, Kan Liang, Andrew Morton, Johannes Weiner, Jan Kara, linux-mm, Linux Kernel Mailing List

On Mon, Aug 14, 2017 at 7:27 PM, Andi Kleen wrote:
>
> We could try it and it may even help in this case and it may
> be a good idea in any case on such a system, but:
>
> - Even with a large hash table it might be that by chance all CPUs
>   will be queued up on the same page
> - There are a lot of other wait queues in the kernel and they all
>   could run into a similar problem
> - I suspect it's even possible to construct it from user space
>   as a kind of DoS attack

Maybe. Which is why I didn't NAK the patch outright. But I don't think it's the solution for the scalability issue you guys found. It's just a workaround, and it's likely a bad one at that.

> Now in one case (on a smaller system) we debugged we had
>
> - 4S system with 208 logical threads
> - during the test the wait queue length was 3700 entries.
> - the last CPUs queued had to wait roughly 0.8s
>
> This gives a budget of roughly 1us per wake up.

I'm not at all convinced that follows. When bad scaling happens, you often end up hitting quadratic (or worse) behavior. So if you are able to fix the scaling by some fixed amount, it's possible that almost _all_ the problems just go away.

The real issue is that "3700 entries" part. What was it that actually triggered them?

In particular, if it's just a hashing issue, and we can trivially just make the hash table be bigger (256 entries is *tiny*) then the whole thing goes away.

Which is why I really want to hear what happens if you just change PAGE_WAIT_TABLE_BITS to 16. The right fix would be to just make it scale by memory, but before we even do that, let's just look at what happens when you increase the size the stupid way.

Maybe those 3700 entries will just shrink down to 14 entries because the hash just works fine and 256 entries was just much much too small when you have hundreds of thousands of threads or whatever.

But it is *also* possible that it's actually all waiting on the exact same page, and there's some way to do a thundering herd on the page lock bit, for example. But then it would be really good to hear what it is that triggers that.

The thing is, the reason we perform well on many loads in the kernel is that I have *always* pushed back against bad workarounds. We do *not* do lock back-off in our locks, for example, because I told people that lock contention gets fixed by not contending, not by trying to act better when things have already become bad.

This is the same issue.
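(For the record, the hashing I'm talking about is roughly this, in mm/filemap.c -- modulo details, it's a small fixed array of wait queue heads that every page in the system hashes into:

  #include <linux/wait.h>
  #include <linux/hash.h>

  #define PAGE_WAIT_TABLE_BITS 8
  #define PAGE_WAIT_TABLE_SIZE (1 << PAGE_WAIT_TABLE_BITS)

  /* 256 wait queue heads shared by every page in the system */
  static wait_queue_head_t page_wait_table[PAGE_WAIT_TABLE_SIZE] __cacheline_aligned;

  static wait_queue_head_t *page_waitqueue(struct page *page)
  {
          /* all waiters for a given page land in one of those buckets */
          return &page_wait_table[hash_ptr(page, PAGE_WAIT_TABLE_BITS)];
  }

so the "just try 16 bits" experiment is literally a one-line change that takes the table from 256 to 64k buckets.)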
We don't "fix" things by papering over some symptom. We try to fix the _actual_ underlying problem. Maybe there is some caller that can simply be rewritten. Maybe we can do other tricks than just make the wait tables bigger. But we should not say "3700 entries is ok, let's just make that sh*t be interruptible". That is what the patch does now, and that is why I dislike the patch. So I _am_ NAK'ing the patch if nobody is willing to even try alternatives. Because a band-aid is ok for "some theoretical worst-case behavior". But a band-aid is *not* ok for "we can't even be bothered to try to figure out the right thing, so we're just adding this hack and leaving it". Linus