Date: Fri, 1 Feb 2019 14:17:33 +0000
From: Mel Gorman
To: Andrea Arcangeli
Cc: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Peter Xu, Blake Caldwell,
	Mike Rapoport, Mike Kravetz, Michal Hocko, Vlastimil Babka,
	David Rientjes
Subject: Re: [LSF/MM TOPIC] NUMA remote THP vs NUMA local non-THP under
 MADV_HUGEPAGE
Message-ID: <20190201141733.GC4926@suse.de>
References: <20190129234058.GH31695@redhat.com>
In-Reply-To: <20190129234058.GH31695@redhat.com>

On Tue, Jan 29, 2019 at 06:40:58PM -0500, Andrea Arcangeli wrote:
> I posted some benchmark results showing that, for tasks without strong
> NUMA locality, the __GFP_THISNODE logic is not guaranteed to be
> optimal (and here of course I mean even if we ignore the large
> slowdown from swap storms at allocation time that might be caused by
> __GFP_THISNODE). The results also show that NUMA-remote THP helps
> intra-socket as well as inter-socket.
>
> https://lkml.kernel.org/r/20181210044916.GC24097@redhat.com
> https://lkml.kernel.org/r/20181212104418.GE1130@redhat.com
>
> The following seems to be the interim conclusion, and I happen to be
> in agreement with Michal and Mel:
>
> https://lkml.kernel.org/r/20181212095051.GO1286@dhcp22.suse.cz
> https://lkml.kernel.org/r/20181212170016.GG1130@redhat.com
>
> Hopefully this specific issue will be hot-fixed before April (we had
> to hot-fix it in the enterprise kernels to stop the 3-year-old
> regression from breaking large workloads that can't fit in a single
> NUMA node, and I assume other enterprise distributions will follow
> suit), but whatever hot-fix lands will likely leave ample margin for
> discussion on what we can do better to optimize the decision between
> local non-THP and remote THP under MADV_HUGEPAGE.
>
> It is clear that the __GFP_THISNODE forced by the current code
> provides some minor advantage to apps using MADV_HUGEPAGE that can
> fit in a single NUMA node, but we should try to achieve that without
> major disadvantages to apps that can't fit in a single NUMA node.
>
> For example, it was mentioned that if local compaction fails we could
> allocate readily available, already-free local 4k, provided the
> watermarks still allow local 4k allocations without invoking reclaim,
> before invoking compaction on remote nodes. The same can be repeated
> at a second level with intra-socket non-THP memory before invoking
> compaction inter-socket. However, we can't do things like that with
> the current page allocator workflow. It's possible that a larger
> change is required than just sending a single gfp bitflag down to the
> page allocator, one that creates an implicit MPOL_LOCAL binding to
> make it behave like the obsolete numa/zone reclaim behavior, but
> weirdly applied only to THP allocations.
>
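To make the ordering you describe concrete, below is a toy model of
that fallback. To be clear, this is only a sketch of the idea, not
page allocator code: the node layout, the per-node flags and every
name in it are invented purely for illustration.

/*
 * Toy model only: nothing below is kernel code.  A hypothetical
 * two-socket box with nodes 0-1 on one socket and 2-3 on the other;
 * the task faults on node 0 under MADV_HUGEPAGE.
 */
#include <stdbool.h>
#include <stdio.h>

struct node {
	int id;
	bool compaction_can_make_thp;	/* would compaction yield 2M?  */
	bool free_4k_above_watermark;	/* free 4k without reclaim?    */
};

static struct node nodes[] = {
	{ 0, false, false },	/* local: fragmented and low on memory */
	{ 1, false, true  },	/* intra-socket: free 4k, fragmented   */
	{ 2, true,  true  },	/* inter-socket: compaction would work */
	{ 3, false, true  },
};

static bool try_thp(const struct node *n)
{
	if (n->compaction_can_make_thp)
		printf("THP on node %d via compaction\n", n->id);
	return n->compaction_can_make_thp;
}

static bool try_4k(const struct node *n)
{
	if (n->free_4k_above_watermark)
		printf("4k on node %d, no reclaim\n", n->id);
	return n->free_4k_above_watermark;
}

int main(void)
{
	static const int intra_socket[] = { 1 };
	static const int inter_socket[] = { 2, 3 };
	size_t i;

	/* 1. Local node first: THP via compaction, then free 4k. */
	if (try_thp(&nodes[0]) || try_4k(&nodes[0]))
		return 0;

	/* 2. The same pair repeated at the intra-socket level. */
	for (i = 0; i < sizeof(intra_socket) / sizeof(*intra_socket); i++)
		if (try_thp(&nodes[intra_socket[i]]))
			return 0;
	for (i = 0; i < sizeof(intra_socket) / sizeof(*intra_socket); i++)
		if (try_4k(&nodes[intra_socket[i]]))
			return 0;

	/* 3. Only now pay for compaction on the remote socket. */
	for (i = 0; i < sizeof(inter_socket) / sizeof(*inter_socket); i++)
		if (try_thp(&nodes[inter_socket[i]]))
			return 0;

	/* 4. A real kernel would enter the reclaim slowpath here. */
	printf("no page allocated without reclaim\n");
	return 1;
}

In this made-up layout the fault is satisfied by a 4k page on the
intra-socket node 1, so the more expensive inter-socket compaction
pass is never reached; the point is only the ordering, not the
heuristics a real kernel would use to decide it.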
I would also be interested in discussing this topic. My activity is
mostly compaction-related, but I believe it will evolve into something
that returns saner data to the page allocator. That should make it a
bit easier to detect when local compaction fails, and easier to
improve the page allocator workflow without throwing another workload
under a bus.

-- 
Mel Gorman
SUSE Labs