Date: Wed, 5 Dec 2018 10:15:12 +0000
From: Mel Gorman <mgorman@techsingularity.net>
To: David Rientjes
Cc: Michal Hocko, Linus Torvalds, Andrea Arcangeli, ying.huang@intel.com,
	s.priebe@profihost.ag, Linux List Kernel Mailing,
	alex.williamson@redhat.com, lkp@01.org, kirill@shutemov.name,
	Andrew Morton, zi.yan@cs.rutgers.edu, Vlastimil Babka
Subject: Re: [patch 0/2 for-4.20] mm, thp: fix remote access and allocation regressions
Message-ID: <20181205101512.GY23260@techsingularity.net>
References: <20181204073850.GW31738@dhcp22.suse.cz>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Dec 04, 2018 at 02:25:54PM -0800, David Rientjes wrote:
> On Tue, 4 Dec 2018, Michal Hocko wrote:
>
> > > This fixes a 13.9% remote memory access regression and a 40% remote
> > > memory allocation regression on Haswell when the local node is fragmented
> > > for hugepage-sized pages and memory is being faulted with either the thp
> > > defrag setting of "always" or has been madvised with MADV_HUGEPAGE.
> > >
> > > The usecase that initially identified this issue was binaries that mremap
> > > their .text segment to be backed by transparent hugepages on startup.
> > > They do mmap(), madvise(MADV_HUGEPAGE), memcpy(), and mremap().
> >
> > Do you have something you can share so that other people can play with it
> > and try to reproduce?
>
> This is a single MADV_HUGEPAGE usecase; there is nothing special about it.
> It would be the same as if you did mmap(), madvise(MADV_HUGEPAGE), and
> faulted the memory with a fragmented local node, and then measured the
> remote access latency to the remote hugepage that occurs without setting
> __GFP_THISNODE. You can also measure the remote allocation latency by
> fragmenting the entire system and then faulting.

I'll make the same point as before: the form the fragmentation takes
matters, as do the types of pages that are resident and whether they are
active or not. That affects the amount of work the system does as well as
the overall success rate of operations (be it reclaim, THP allocation,
compaction, whatever). This is why a reproduction case that is
representative of the problem you're facing on the real workload would
have been helpful, because then any alternative proposal could have taken
your workload into account during testing.

-- 
Mel Gorman
SUSE Labs