Date: Wed, 17 Apr 2019 11:29:33 -0600
From: Keith Busch
To: Yang Shi
Cc: Michal Hocko, Dave Hansen, mgorman@techsingularity.net, riel@surriel.com,
    hannes@cmpxchg.org, akpm@linux-foundation.org, dan.j.williams@intel.com,
    fengguang.wu@intel.com, fan.du@intel.com, ying.huang@intel.com,
    ziy@nvidia.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [v2 RFC PATCH 0/9] Another Approach to Use PMEM as NUMA Node
Message-ID: <20190417172932.GA6176@localhost.localdomain>

On Wed, Apr 17, 2019 at 10:26:05AM -0700, Yang Shi wrote:
> On 4/17/19 9:39 AM, Michal Hocko wrote:
> > On Wed 17-04-19 09:37:39, Keith Busch wrote:
> > > On Wed, Apr 17, 2019 at 05:39:23PM +0200, Michal Hocko wrote:
> > > > On Wed 17-04-19 09:23:46, Keith Busch wrote:
> > > > > On Wed, Apr 17, 2019 at 11:23:18AM +0200, Michal Hocko wrote:
> > > > > > On Tue 16-04-19 14:22:33, Dave Hansen wrote:
> > > > > > > Keith Busch had a set of patches to let you specify the demotion
> > > > > > > order via sysfs for fun. The rules we came up with were:
> > > > > > I am not a fan of any sysfs "fun"
> > > > > I'm not hung up on the user-facing interface, but there should be
> > > > > some way for a user to decide whether a memory node is or is not a
> > > > > migrate target, right?
> > > > Why? Or to put it differently, why do we have to start with a user
> > > > interface at this stage when we barely have any real use cases out
> > > > there?
> > > The use case is an alternative to swap, right? The user has to decide
> > > which storage is the swap target, so this operates in the same spirit.
> > I do not follow. If you use rebalancing you can still deplete the memory
> > and end up in swap storage. If you want to reclaim/swap rather than
> > rebalance, then you do not enable rebalancing (via node_reclaim or a
> > similar mechanism).
>
> I'm a little bit confused. Do you mean we should just *not* do
> reclaim/swap in rebalancing mode? If rebalancing is on, node_reclaim
> just moves pages between nodes, and kswapd or direct reclaim then takes
> care of swap?
>
> If so, node reclaim on a PMEM node may rebalance pages back to a DRAM
> node? Should this be allowed?
>
> I think both Keith and I intended to treat PMEM as a tier in the reclaim
> hierarchy. Reclaim should push inactive pages down to PMEM, and then to
> swap, so PMEM is a kind of "terminal" node. That is why he introduced a
> sysfs-defined target node and I introduced N_CPU_MEM.

Yeah, I think Yang and I view "demotion" as a separate feature from NUMA
rebalancing.
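To make the "terminal node" idea above concrete, here is a minimal userspace
sketch of the per-page decision being discussed. It is only an illustration
of the direction of the thread, not code from either patch set; the node
layout, the demotion_target[] map, and the page struct are invented for the
example.

/*
 * Toy model only -- not kernel code and not from either patch set.
 * One demotion hop: DRAM nodes push cold pages to a PMEM node, PMEM
 * nodes are "terminal" and fall back to ordinary swap-based reclaim.
 */
#include <stdio.h>

#define NUMA_NO_NODE  (-1)
#define MAX_NUMNODES  4

/* Invented topology: nodes 0,1 are DRAM, nodes 2,3 are PMEM. */
static const int demotion_target[MAX_NUMNODES] = {
        [0] = 2,            /* DRAM node 0 demotes to PMEM node 2 */
        [1] = 3,            /* DRAM node 1 demotes to PMEM node 3 */
        [2] = NUMA_NO_NODE, /* PMEM node 2 is terminal            */
        [3] = NUMA_NO_NODE, /* PMEM node 3 is terminal            */
};

struct page { int nid; };

/* Reclaim decision for one inactive page. */
static void reclaim_page(struct page *page)
{
        int target = demotion_target[page->nid];

        if (target != NUMA_NO_NODE) {
                printf("node %d: demote page to node %d\n", page->nid, target);
                page->nid = target;     /* "migrate" the page */
        } else {
                printf("node %d: terminal tier, swap the page out\n", page->nid);
        }
}

int main(void)
{
        struct page p = { .nid = 0 };

        reclaim_page(&p);   /* first pass: DRAM -> PMEM demotion */
        reclaim_page(&p);   /* second pass: PMEM is terminal -> swap */
        return 0;
}

In the actual proposals this decision would of course live in the kernel
reclaim path; the difference being debated is mostly where the demotion
target for each node comes from.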
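And a similarly rough sketch of the other half of that difference: deriving
the demotion order from a node state like the proposed N_CPU_MEM (nodes that
have both CPUs and memory) rather than configuring it through sysfs. The
bitmask layout and the "first memory-only node wins" choice are made up for
the example; a real implementation would presumably use NUMA distances.

/*
 * Toy model of deriving a demotion order from an N_CPU_MEM-style mask.
 * Not kernel code; node layout is invented for illustration.
 */
#include <stdio.h>

#define NUMA_NO_NODE  (-1)
#define MAX_NUMNODES  4

static const unsigned long node_has_cpu    = 0x3UL; /* nodes 0,1: DRAM */
static const unsigned long node_has_memory = 0xFUL; /* nodes 0-3       */

static int node_isset(int nid, unsigned long mask)
{
        return !!(mask & (1UL << nid));
}

int main(void)
{
        /* N_CPU_MEM as the intersection of the two existing states. */
        unsigned long n_cpu_mem = node_has_cpu & node_has_memory;
        int demotion_target[MAX_NUMNODES];

        for (int nid = 0; nid < MAX_NUMNODES; nid++) {
                demotion_target[nid] = NUMA_NO_NODE;

                if (!node_isset(nid, n_cpu_mem))
                        continue;       /* memory-only node: terminal tier */

                /* Pick the first memory-only node as the demotion target;
                 * a real implementation would pick the nearest one. */
                for (int t = 0; t < MAX_NUMNODES; t++) {
                        if (node_isset(t, node_has_memory) &&
                            !node_isset(t, n_cpu_mem)) {
                                demotion_target[nid] = t;
                                break;
                        }
                }
        }

        for (int nid = 0; nid < MAX_NUMNODES; nid++)
                printf("node %d -> demotion target %d\n",
                       nid, demotion_target[nid]);
        return 0;
}

Either way, memory-only nodes end up with no further demotion target of
their own, which is what makes them "terminal" in the sense used above.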