Date: Mon, 3 Dec 2018 12:26:28 -0800 (PST)
From: David Rientjes
To: Andrea Arcangeli
cc: Michal Hocko, Linus Torvalds, ying.huang@intel.com,
    s.priebe@profihost.ag, mgorman@techsingularity.net,
    Linux List Kernel Mailing, alex.williamson@redhat.com, lkp@01.org,
    kirill@shutemov.name, Andrew Morton, zi.yan@cs.rutgers.edu,
    Vlastimil Babka
Subject: Re: [LKP] [mm] ac5b2c1891: vm-scalability.throughput -61.3% regression
In-Reply-To: <20181203192344.GA2986@redhat.com>
References: <20181127205737.GI16136@redhat.com>
    <87tvk1yjkp.fsf@yhuang-dev.intel.com>
    <20181203181456.GK31738@dhcp22.suse.cz>
    <20181203183050.GL31738@dhcp22.suse.cz>
    <20181203185954.GM31738@dhcp22.suse.cz>
    <20181203192344.GA2986@redhat.com>
On Mon, 3 Dec 2018, Andrea Arcangeli wrote:

> It's trivial to reproduce the badness by running a memhog process that
> allocates more than the RAM of 1 NUMA node, under the defrag=always
> setting (or by changing memhog to use MADV_HUGEPAGE), and it'll create
> swap storms despite 75% of the RAM being completely free on a 4 node NUMA
> system (or 50% of RAM free on a 2 node system), etc.
>
> How can it be ok to push the system into gigabytes of swap by default,
> without any special capability, despite 50% - 75% or more of the RAM
> being free? That's the downside of the __GFP_THISNODE optimization.
>

The swap storm is the issue that is being addressed.  If your remote
memory is as low as your local memory, the patch to clear __GFP_THISNODE
has done nothing to fix it: you still get swap storms, and memory
compaction can still fail if the per-zone freeing scanner cannot utilize
the reclaimed memory.

Recall that I measured this patch to clear __GFP_THISNODE as causing a
40% increase in allocation latency for fragmented remote memory on
Haswell.  It makes the problem much, much worse.

> __GFP_THISNODE helps increase NUMA locality if your app can fit in a
> single node, which is the common case for David's workload.  But if his
> workload would more often than not fail to fit in a single node, he
> would also run into an unacceptable slowdown because of __GFP_THISNODE.
>

Which is why I have suggested that we not do direct reclaim: the page
allocator implementation expects all thp page fault allocations to have
__GFP_NORETRY set, because no amount of reclaim can be shown to be useful
to the memory compaction freeing scanner if it is iterated over by the
migration scanner.

> I think there's lots of room for improvement for the future, but in my
> view __GFP_THISNODE as it was implemented was an incomplete hack that
> opened the door for bad VM corner cases that should not happen.
>

__GFP_THISNODE is used specifically because of the remote access latency
increase that is encountered if you fault remote hugepages rather than
local pages of the native page size.
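
As an aside, for anyone who wants to see the behaviour Andrea describes
above, a minimal user-space sketch along those lines follows.  It is
illustrative only, not the actual memhog tool: the size argument and the
explicit MADV_HUGEPAGE call are assumptions for the sketch.  The madvise()
call makes the THP fault path do direct reclaim/compaction even when
/sys/kernel/mm/transparent_hugepage/defrag is left at its default; with
defrag=always it can be dropped, matching Andrea's parenthetical.

/*
 * thp-thrash.c: minimal sketch of the reproducer described above (an
 * illustration, not the actual memhog tool).  Build with:
 *     gcc -O2 -o thp-thrash thp-thrash.c
 * and run with a size somewhat larger than the RAM of one NUMA node, e.g.
 *     ./thp-thrash $((48 * 1024 * 1024 * 1024))
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(int argc, char **argv)
{
	unsigned long long size;
	char *buf;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <bytes to touch>\n", argv[0]);
		return 1;
	}
	size = strtoull(argv[1], NULL, 0);

	buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * Opt this range into THP so the hugepage fault path (and, with
	 * __GFP_THISNODE, the local-node reclaim/compaction it triggers)
	 * is taken even when the global defrag setting is not "always".
	 */
	if (madvise(buf, size, MADV_HUGEPAGE))
		perror("madvise(MADV_HUGEPAGE)");

	/* Fault in every page; watch vmstat and swap usage while this runs. */
	memset(buf, 0xa5, size);

	return 0;
}

With a size just above one node's RAM, reclaim activity and growing swap
usage should be visible in vmstat even though the other nodes stay mostly
free, which is the behaviour under discussion.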