Date: Mon, 3 Dec 2018 19:14:56 +0100
From: Michal Hocko
To: Linus Torvalds
Cc: ying.huang@intel.com, Andrea Arcangeli, s.priebe@profihost.ag,
    mgorman@techsingularity.net, Linux List Kernel Mailing,
    alex.williamson@redhat.com, lkp@01.org, rientjes@google.com,
    kirill@shutemov.name, Andrew Morton, zi.yan@cs.rutgers.edu,
    Vlastimil Babka
Subject: Re: [LKP] [mm] ac5b2c1891: vm-scalability.throughput -61.3% regression
Message-ID: <20181203181456.GK31738@dhcp22.suse.cz>
References: <20181127062503.GH6163@shao2-debian>
    <20181127205737.GI16136@redhat.com>
    <87tvk1yjkp.fsf@yhuang-dev.intel.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon 03-12-18 10:01:18, Linus Torvalds
wrote:
> On Wed, Nov 28, 2018 at 8:48 AM Linus Torvalds wrote:
> >
> > On Tue, Nov 27, 2018 at 7:20 PM Huang, Ying wrote:
> > >
> > > In general, memory allocation fairness among processes should be a good
> > > thing. So I think the report should have been a "performance
> > > improvement" instead of "performance regression".
> >
> > Hey, when you put it that way...
> >
> > Let's ignore this issue for now, and see if it shows up in some real
> > workload and people complain.
>
> Well, David Rientjes points out that it *does* cause real problems in
> real workloads, so it's not just this benchmark run that shows the
> issue.

The thing is that there is no universal win here. There are two
different types of workloads and we cannot satisfy both. This has been
discussed at length during the review process.

David's workload makes some assumptions about MADV_HUGEPAGE NUMA
placement. There are other workloads, like KVM setups, which do not
really require that, and those are the ones which regressed. The
prevailing consensus was that NUMA placement is not really implied by
MADV_HUGEPAGE, because a) this has never been documented or intended
behavior, and b) it is not a universal win (basically the same as
node/zone_reclaim, which used to be on by default on some NUMA setups).

Reverting the patch would regress another class of workloads. As we
cannot satisfy both, I believe we should make the API clear and favor
the more relaxed workloads. Those with special requirements should have
a proper API to reflect that (this is our general NUMA policy pattern
already).
--
Michal Hocko
SUSE Labs