From: Linus Torvalds
Date: Wed, 30 Nov 2016 10:14:50 -0800
Subject: Re: 4.8.8 kernel trigger OOM killer repeatedly when I have lots of RAM that should be free
To: Marc MERLIN, Kent Overstreet, Tejun Heo, Jens Axboe
Cc: Michal Hocko, Vlastimil Babka, linux-mm, LKML, Joonsoo Kim, Greg Kroah-Hartman

On Wed, Nov 30, 2016 at 9:47 AM, Marc MERLIN wrote:
>
> I gave it some thought again, and I think it is exactly the nasty
> situation you described.
> bcache takes I/O quickly while sending it to the SSD cache. The SSD
> fills up, and now bcache can't handle I/O as quickly and has to stall
> until the SSD has been flushed to the spinning rust drives.
> This is actually exactly the same as filling up the cache on a USB key
> and then waiting for the slow writes to flash, is it not?

It does sound like you might be hitting exactly the same kind of
situation, yes.

And the fact that you have dm-crypt running too just makes things pile
up more. All those IOs end up being slowed down by the scheduling too.

Anyway, none of this seems new per se. I'm adding Kent and Jens to the
cc (Tejun already was), in the hope that maybe they have some idea how
to control the nasty worst-case behavior wrt that workqueue "lockup"
(it's not really a lockup - it looks like it's just hundreds of
workqueues all waiting for IO to complete, together with much too deep
IO queues).

I think it's the traditional "throughput is much easier to measure and
improve" situation, where making the queues big helps some throughput
numbers, but ends up causing chaos when things go south. And I think
your NMI watchdog then turns the "system is no longer responsive"
condition into an actual kernel panic.

> With your dirty ratio workaround, I was able to re-enable bcache and
> have it not fall over, but only barely. I recorded over a hundred
> workqueues in flight during the copy at some point (just not enough
> to actually kill the kernel this time).
>
> I've started a bcache followup on this here:
> http://marc.info/?l=linux-bcache&m=148052441423532&w=2
> http://marc.info/?l=linux-bcache&m=148052620524162&w=2
>
> A full backtrace showing the pileup of requests is here:
> http://marc.info/?l=linux-bcache&m=147949497808483&w=2
>
> and there:
> http://pastebin.com/rJ5RKUVm
> (2 different ones, but mostly the same result)

Tejun/Kent - is there any way to just limit the workqueue depth for
bcache? Because that depth really isn't helping, and things *will* time
out and cause these problems when you have hundreds of IOs queued on a
disk that likely sustains only ~100 write IOPS.
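Something along these lines is what I mean (completely untested, and
the workqueue name and the cap of 16 are made up for illustration - I
don't know off-hand how bcache actually sets up its workqueues):

#include <linux/init.h>
#include <linux/workqueue.h>

/*
 * alloc_workqueue() takes a max_active argument; passing 0 means "use
 * the default" (WQ_DFL_ACTIVE, currently 256 concurrent work items,
 * per CPU for bound workqueues).  Passing a small number instead
 * bounds how many work items can be executing at once, rather than
 * letting hundreds of them stack up behind a ~100-IOPS disk.
 */
static struct workqueue_struct *example_wb_wq;

static int __init example_init(void)
{
	/* cap at 16 concurrent work items instead of the 256 default */
	example_wb_wq = alloc_workqueue("example_wb", WQ_MEM_RECLAIM, 16);
	if (!example_wb_wq)
		return -ENOMEM;
	return 0;
}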
And I really wonder if we should do the "big hammer" approach to the
dirty limits on non-HIGHMEM machines too (approximate the
"vm_highmem_is_dirtyable" case by just limiting
global_dirtyable_memory() to 1GB). That would make the default dirty
limits be about 100/200MB for soft/hard throttling (the default
dirty_background_ratio of 10% and dirty_ratio of 20% applied to that
1GB), which really is much more reasonable than gigabytes and gigabytes
of dirty data.

Of course, no way do we do that during rc7..

                  Linus

[Attachment: patch.diff]

 mm/page-writeback.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 439cc63ad903..26ecbdecb815 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -352,6 +352,10 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
 #endif
 }
 
+/* Limit dirtyable memory to 1GB */
+#define PAGES_IN_GB(x) ((x) << (30 - PAGE_SHIFT))
+#define MAX_DIRTYABLE_LOWMEM_PAGES PAGES_IN_GB(1)
+
 /**
  * global_dirtyable_memory - number of globally dirtyable pages
  *
@@ -373,8 +377,11 @@ static unsigned long global_dirtyable_memory(void)
 	x += global_node_page_state(NR_INACTIVE_FILE);
 	x += global_node_page_state(NR_ACTIVE_FILE);
 
-	if (!vm_highmem_is_dirtyable)
+	if (!vm_highmem_is_dirtyable) {
 		x -= highmem_dirtyable_memory(x);
+		if (x > MAX_DIRTYABLE_LOWMEM_PAGES)
+			x = MAX_DIRTYABLE_LOWMEM_PAGES;
+	}
 
 	return x + 1;	/* Ensure that we never return 0 */
 }
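(For reference, the 100/200MB numbers above are just the default 10%
and 20% ratios applied to the 1GB cap from this patch. A trivial
userspace sketch of that arithmetic - not kernel code, and assuming
the default vm.dirty_background_ratio=10 and vm.dirty_ratio=20:)

#include <stdio.h>

int main(void)
{
	/* the 1GB cap on dirtyable memory from the patch above */
	const unsigned long dirtyable_mb = 1024;
	/* assumed defaults: vm.dirty_background_ratio and vm.dirty_ratio */
	const unsigned long background_ratio = 10;	/* soft throttling */
	const unsigned long dirty_ratio = 20;		/* hard throttling */

	printf("background (soft) limit: %lu MB\n",
	       dirtyable_mb * background_ratio / 100);	/* prints 102 */
	printf("dirty (hard) limit:      %lu MB\n",
	       dirtyable_mb * dirty_ratio / 100);	/* prints 204 */
	return 0;
}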