Date: Wed, 10 Oct 2018 14:00:55 -0700 (PDT)
From: David Rientjes
To: Andrea Arcangeli
cc: Mel Gorman, Michal Hocko, Andrew Morton, Vlastimil Babka,
    Andrea Arcangeli, Zi Yan, Stefan Priebe - Profihost AG,
    "Kirill A. Shutemov", linux-mm@kvack.org, LKML, Stable tree
Subject: Re: [PATCH 1/2] mm: thp: relax __GFP_THISNODE for MADV_HUGEPAGE mappings
In-Reply-To: <20181009225147.GD9307@redhat.com>
References: <20180925120326.24392-1-mhocko@kernel.org> <20180925120326.24392-2-mhocko@kernel.org> <20181005073854.GB6931@suse.de> <20181005232155.GA2298@redhat.com> <20181009094825.GC6931@suse.de> <20181009225147.GD9307@redhat.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)

On Tue, 9 Oct 2018, Andrea Arcangeli wrote:

> On Tue, Oct 09, 2018 at 03:17:30PM -0700, David Rientjes wrote:
> > causes workloads to severely regress both in fault and access latency when
> > we know that direct reclaim is unlikely to make direct compaction free an
> > entire pageblock. It's more likely than not that the reclaim was
> > pointless and the allocation will still fail.
>
> How do you know that? If all RAM is full of filesystem cache, but it's
> not heavily fragmented by slab or other unmovable objects, compaction
> will succeed every single time after reclaim frees 2M of cache like
> it's asked to do.
>
> reclaim succeeds every time, compaction then succeeds every time.
>
> Not doing reclaim after COMPACT_SKIPPED is returned simply makes
> compaction unable to compact memory once all nodes are filled by
> filesystem cache.
>

For reclaim to assist memory compaction, given compaction's current
implementation, the freeing scanner that starts at the end of the zone
must be able to find the reclaimed pages as migration targets, and
compaction must be able to use those targets to make an entire
pageblock free.

In such low-on-memory conditions, when a node is fully saturated, it is
much less likely that memory compaction can free an entire pageblock
even if the freeing scanner does find the now-free pages.  More likely,
we have unmovable pages sitting in MIGRATE_MOVABLE pageblocks, because
the allocator allows fallback to pageblocks of other migratetypes in
order to return node-local memory (affinity matters for kernel memory
just as it matters for thp) rather than falling back to remote memory.
This has caused us significant pain: 1.5GB of slab, for example, spread
over 100GB of pageblocks once the node has become saturated.

So reclaim is not always "pointless" as you point out, but at a minimum
it should only be attempted if memory compaction could free an entire
pageblock once it has free memory to migrate to.  It is much harder to
make sure that these freed pages can actually be used by the freeing
scanner.  Based on how memory compaction is implemented, I do not think
any guarantee can be made that reclaim will ever succeed in allowing it
to make order-9 memory available, unfortunately.
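To make the scanner geometry concrete, here is a toy userspace model of
the two compaction scanners.  This is only an illustration, not
mm/compaction.c: the page states, sizes, and layout below are all
invented for the example.  It shows both failure modes: a page that
reclaim frees behind the migration scanner is never found as a
migration target, and a single unmovable page pins its entire pageblock
no matter how much is reclaimed.

/* toy_compact.c: a userspace cartoon of mm/compaction.c's two scanners.
 * Everything here (states, sizes, layout) is invented for illustration. */
#include <stdio.h>

enum pstate { FREE, MOVABLE, UNMOVABLE };

#define NPAGES          32
#define PAGES_PER_BLOCK 8	/* stand-in for pageblock_nr_pages */

int main(void)
{
	enum pstate zone[NPAGES];
	int migrate;			/* migration scanner: walks up from zone start */
	int target = NPAGES - 1;	/* freeing scanner: walks down from zone end */
	int block, pfn;

	/* A saturated node: movable cache everywhere, one slab-like
	 * unmovable page stranded in each of the two low pageblocks,
	 * and a couple of free pages left at the top of the zone. */
	for (pfn = 0; pfn < NPAGES; pfn++)
		zone[pfn] = MOVABLE;
	zone[3] = UNMOVABLE;
	zone[12] = UNMOVABLE;
	zone[NPAGES - 2] = zone[NPAGES - 1] = FREE;

	/* Reclaim frees a page the migration scanner has already passed:
	 * the freeing scanner, walking down, will never reach it. */
	zone[1] = FREE;
	migrate = 2;

	/* Scanners walk toward each other; compaction gives up when
	 * they meet, without ever looking behind either of them. */
	while (migrate < target) {
		if (zone[migrate] == MOVABLE) {
			while (target > migrate && zone[target] != FREE)
				target--;
			if (target <= migrate)
				break;			/* out of targets */
			zone[target] = MOVABLE;		/* "migrate" it */
			zone[migrate] = FREE;
		}
		migrate++;
	}

	/* Did any whole pageblock end up free?  One unmovable page makes
	 * that impossible for its block no matter how much was reclaimed. */
	for (block = 0; block < NPAGES / PAGES_PER_BLOCK; block++) {
		int nfree = 0;
		for (pfn = 0; pfn < PAGES_PER_BLOCK; pfn++)
			nfree += (zone[block * PAGES_PER_BLOCK + pfn] == FREE);
		printf("block %d: %d/%d free%s\n", block, nfree, PAGES_PER_BLOCK,
		       nfree == PAGES_PER_BLOCK ? "  <- order-9 candidate" : "");
	}
	return 0;
}

Running it, no block ever reaches 8/8 free: the reclaimed page at pfn 1
is wasted, and the unmovable pages pin their blocks.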
> > If memory compaction were patched such that it can report that it could
> > successfully free a page of the specified order if there were free pages
> > at the end of the zone it could migrate to, reclaim might be helpful. But
> > with the current implementation, I don't think that is reliably possible.
> > These free pages could easily be skipped over by the migration scanner
> > because of the presence of slab pages, for example, and unavailable to the
> > freeing scanner.
>
> Yes there's one case where reclaim is "pointless", but it happens once
> and then COMPACT_DEFERRED is returned and __GFP_NORETRY will skip
> reclaim then.
>
> So you're right when we hit fragmentation there's one and only one
> "pointless" reclaim invocation. And immediately after we also
> exponentially backoff on the compaction invocations with the
> compaction deferred logic.
>

This assumes that every time we get COMPACT_SKIPPED, simply freeing
memory will let compaction succeed, and that is definitely not the case
given how memory compaction is implemented: compaction_alloc() needs to
find the freed memory, and the migration scanner needs to empty an
entire pageblock.  The migration scanner doesn't even look ahead to see
whether that is possible before it starts migrating pages; it is
limited to COMPACT_CLUSTER_MAX pages at a time.

The scenario we have is this: compaction returns COMPACT_SKIPPED;
reclaim expensively tries to free memory by thrashing the local node;
the compaction migration scanner has already passed over the now-freed
pages, so they are inaccessible to it; when they are accessible, the
migration scanner migrates memory to the newly freed pages but still
fails to make a pageblock free; loop.  My contention is that the second
step is only justified if we can guarantee that the freed memory will
be useful to compaction and that compaction can free an entire
pageblock for the hugepage if it can migrate.  Neither of those can be
determined based on the current implementation.

> > I'd appreciate if Andrea can test this patch, have a rebuttal that we
> > should still remove __GFP_THISNODE because we don't care about locality as
> > much as forming a hugepage, we can make that change, and then merge this
> > instead of causing such massive fault and access latencies.
>
> I can certainly test, but from source review I'm already convinced
> it'll solve fine the "pathological THP allocation behavior", no
> argument about that. It's certainly better and more correct your patch
> than the current upstream (no security issues with lack of permissions
> for __GFP_THISNODE anymore either).
>
> I expect your patch will run 100% equivalent to __GFP_COMPACT_ONLY
> alternative I posted, for our testcase that hit into the "pathological
> THP allocation behavior".
>
> Your patch encodes __GFP_COMPACT_ONLY into the __GFP_NORETRY semantics
> and hardcodes the __GFP_COMPACT_ONLY for all orders = HPAGE_PMD_SIZE
> no matter which is the caller.
>
> As opposed I let the caller choose and left __GFP_NORETRY semantics
> alone and orthogonal to the __GFP_COMPACT_ONLY semantics. I think
> letting the caller decide instead of hardcoding it for order 9 is
> better, because __GFP_COMPACT_ONLY made sense to be set only if
> __GFP_THISNODE was also set by the caller.
>

I've hardcoded it directly for pageblock_order because compaction works
over pageblocks and we lack the two crucial pieces of information,
stated above, that determine whether direct reclaim could possibly be
useful.  (It is more correctly implemented as order >= pageblock_order
as opposed to order == pageblock_order.)

> If a driver does an order 9 allocation with __GFP_THISNODE not set,
> your patch will prevent it to allocate remote THP if all remote nodes
> are full of cache (which is a reasonable common assumption as more THP
> are allocated over time eating in all free memory).

Iff it's using __GFP_NORETRY, yes; the allocation and remote access
latency that is incurred is the same as for thp.
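Concretely, the policy my patch encodes amounts to something like the
sketch below.  This is not the literal hunk against mm/page_alloc.c:
the flag bit, the order constant, and the function name are all
invented here for illustration.

#include <stdbool.h>
#include <stdio.h>

#define __GFP_NORETRY	(1u << 12)	/* illustrative value only */
#define PAGEBLOCK_ORDER	9		/* 2MB pageblocks with 4KB pages */

enum compact_result { COMPACT_SKIPPED, COMPACT_DEFERRED, COMPACT_SUCCESS };

/* After compaction bails with COMPACT_SKIPPED, is a reclaim pass plus
 * another compaction attempt worthwhile?  For __GFP_NORETRY requests of
 * at least a pageblock, no: reclaim is unlikely to hand the freeing
 * scanner pages from which a whole pageblock can be assembled. */
static bool reclaim_worthwhile(enum compact_result result,
			       unsigned int order, unsigned int gfp_mask)
{
	if (result != COMPACT_SKIPPED)
		return false;	/* deferred or succeeded: reclaim isn't the fix */
	return !((gfp_mask & __GFP_NORETRY) && order >= PAGEBLOCK_ORDER);
}

int main(void)
{
	/* order-9 request with __GFP_NORETRY (thp or driver): no reclaim */
	printf("__GFP_NORETRY:  %d\n",
	       reclaim_worthwhile(COMPACT_SKIPPED, 9, __GFP_NORETRY));
	/* same order without __GFP_NORETRY: reclaim and retry compaction */
	printf("no __GFP_NORETRY: %d\n",
	       reclaim_worthwhile(COMPACT_SKIPPED, 9, 0));
	return 0;
}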
Hugetlbfs doesn't use __GFP_NORETRY, it uses __GFP_RETRY_MAYFAIL, so it
will still attempt to reclaim memory after my patch.

My patch assumes that, for pageblock_order-sized allocations with
__GFP_NORETRY, a single call to reclaim followed by another attempt at
compaction is not beneficial: reclaim is unlikely to free an entire
pageblock, and compaction may not be able to access the freed memory
anyway.  It is based on how memory compaction is implemented rather
than on any special heuristic.  I won't argue if you gate this logic
behind __GFP_COMPACT_ONLY, but I think thp allocations should always
use __GFP_NORETRY given compaction's implementation.

I see removing __GFP_THISNODE as a separate discussion: if, after my
patch (perhaps with a modification for __GFP_COMPACT_ONLY on top of
it), you still see unacceptable fault latency and can show that the
access latency to a remotely allocated hugepage is better on some
platform that isn't Haswell, Naples, or Rome, we can address that.  But
it will probably require more work than simply unsetting
__GFP_THISNODE, because it will depend on the latency to certain remote
nodes over others.
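Putting the callers discussed in this mail side by side with the toy
policy from the sketch above (again, the bit values are invented for
illustration and the real masks are built elsewhere in mm/, not here):

#include <stdbool.h>
#include <stdio.h>

#define __GFP_NORETRY		(1u << 12)
#define __GFP_RETRY_MAYFAIL	(1u << 13)
#define PAGEBLOCK_ORDER		9

/* same policy as the sketch above */
static bool reclaims_after_skipped(unsigned int gfp, unsigned int order)
{
	return !((gfp & __GFP_NORETRY) && order >= PAGEBLOCK_ORDER);
}

int main(void)
{
	struct { const char *who; unsigned int gfp; } cases[] = {
		{ "thp fault (__GFP_NORETRY)",       __GFP_NORETRY },
		{ "driver order-9 (__GFP_NORETRY)",  __GFP_NORETRY },
		{ "hugetlbfs (__GFP_RETRY_MAYFAIL)", __GFP_RETRY_MAYFAIL },
	};
	unsigned int i;

	for (i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
		printf("%-34s reclaim after COMPACT_SKIPPED: %s\n",
		       cases[i].who,
		       reclaims_after_skipped(cases[i].gfp, 9) ? "yes" : "no");
	return 0;
}

That is, with my patch only the __GFP_RETRY_MAYFAIL caller still
reclaims after COMPACT_SKIPPED at order 9.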