Date: Mon, 24 Jul 2017 00:03:32 -0700 (PDT)
From: Hugh Dickins
To: Michal Hocko
cc: Hugh Dickins, Andrew Morton, Mel Gorman, Tetsuo Handa, Rik van Riel,
    Johannes Weiner, Vlastimil Babka, linux-mm@kvack.org, LKML
Subject: Re: [PATCH] mm, vmscan: do not loop on too_many_isolated for ever
In-Reply-To: <20170720132225.GI9058@dhcp22.suse.cz>
References: <20170710074842.23175-1-mhocko@kernel.org> <20170720132225.GI9058@dhcp22.suse.cz>

On Thu, 20 Jul 2017, Michal Hocko wrote:

> On Wed 19-07-17 18:54:40, Hugh Dickins wrote:
> [...]
> > You probably won't welcome getting into alternatives at this late stage;
> > but after hacking around it one way or another because of its pointless
> > lockups, I lost patience with that too_many_isolated() loop a few months
> > back (on realizing the enormous number of pages that may be isolated via
> > migrate_pages(2)), and we've been running nicely since with something like:
> >
> > 	bool got_mutex = false;
> >
> > 	if (unlikely(too_many_isolated(pgdat, file, sc))) {
> > 		if (mutex_lock_killable(&pgdat->too_many_isolated))
> > 			return SWAP_CLUSTER_MAX;
> > 		got_mutex = true;
> > 	}
> > 	...
> > 	if (got_mutex)
> > 		mutex_unlock(&pgdat->too_many_isolated);
> >
> > Using a mutex to provide the intended throttling, without an infinite
> > loop or an arbitrary delay; and without having to worry (as we often did)
> > about whether those numbers in too_many_isolated() are really appropriate.
> > No premature OOMs complained of yet.
> >
> > But that was on a different kernel, and there I did have to make sure
> > that PF_MEMALLOC always prevented us from nesting: I'm not certain of
> > that in the current kernel (but do remember Johannes changing the memcg
> > end to make it use PF_MEMALLOC too). I offer the preview above, to see
> > if you're interested in that alternative: if you are, then I'll go ahead
> > and make it into an actual patch against v4.13-rc.
>
> I would rather get rid of any additional locking here and my ultimate
> goal is to make throttling at the page allocator layer rather than
> inside the reclaim.

Fair enough, I'm certainly in no hurry to send the patch, but thought it
worth mentioning.

Hugh