MIME-Version: 1.0
References: <20220413092206.73974-1-jvgediya@linux.ibm.com>
 <6365983a8fbd8c325bb18959c51e9417fd821c91.camel@intel.com>
 <610ccaad03f168440ce765ae5570634f3b77555e.camel@intel.com>
In-Reply-To: <610ccaad03f168440ce765ae5570634f3b77555e.camel@intel.com>
From: Wei Xu
Date: Thu, 21 Apr 2022 21:46:14 -0700
Subject: Re: [PATCH v2 0/5] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
To: "ying.huang@intel.com"
Cc: Yang Shi, Jagdish Gediya, Linux MM, Linux Kernel Mailing List,
 Andrew Morton, "Aneesh Kumar K.V", Baolin Wang, Dave Hansen,
 Dan Williams, Greg Thelen
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 21, 2022 at 5:58 PM ying.huang@intel.com wrote:
>
> On Thu, 2022-04-21 at 11:26 -0700, Wei Xu wrote:
> > On Thu, Apr 21, 2022 at 12:45 AM ying.huang@intel.com wrote:
> > >
> > > On Thu, 2022-04-21 at 00:29 -0700, Wei Xu wrote:
> > > > On Thu, Apr 21, 2022 at 12:08 AM ying.huang@intel.com wrote:
> > > > >
> > > > > On Wed, 2022-04-20 at 23:49 -0700, Wei Xu wrote:
> > > > > > On Wed, Apr 20, 2022 at 11:24 PM ying.huang@intel.com wrote:
> > > > > > >
> > > > > > > On Wed, 2022-04-20 at 22:41 -0700, Wei Xu wrote:
> > > > > > > > On Wed, Apr 20, 2022 at 8:12 PM Yang Shi wrote:
> > > > > > > > >
> > > > > > > > > On Thu, Apr 14, 2022 at 12:00 AM ying.huang@intel.com wrote:
> > > > > > > > > >
> > > > > > > > > > On Wed, 2022-04-13 at 14:52 +0530, Jagdish Gediya wrote:
> > > > > > > > > > > Current implementation to find the demotion targets works
> > > > > > > > > > > based on node state N_MEMORY, however some systems may have
> > > > > > > > > > > dram only memory numa node which are N_MEMORY but not the
> > > > > > > > > > > right choices as demotion targets.
> > > > > > > > > > >
> > > > > > > > > > > This patch series introduces the new node state
> > > > > > > > > > > N_DEMOTION_TARGETS, which is used to distinguish the nodes which
> > > > > > > > > > > can be used as demotion targets, node_states[N_DEMOTION_TARGETS]
> > > > > > > > > > > is used to hold the list of nodes which can be used as demotion
> > > > > > > > > > > targets, support is also added to set the demotion target
> > > > > > > > > > > list from user space so that default behavior can be overridden.
> > > > > > > > > >
> > > > > > > > > > It appears that your proposed user space interface cannot solve all
> > > > > > > > > > problems.
> > > > > > > > > > For example, for system as follows,
> > > > > > > > > >
> > > > > > > > > > Node 0 & 2 are cpu + dram nodes and node 1 are slow memory node near
> > > > > > > > > > node 0,
> > > > > > > > > >
> > > > > > > > > > available: 3 nodes (0-2)
> > > > > > > > > > node 0 cpus: 0 1
> > > > > > > > > > node 0 size: n MB
> > > > > > > > > > node 0 free: n MB
> > > > > > > > > > node 1 cpus:
> > > > > > > > > > node 1 size: n MB
> > > > > > > > > > node 1 free: n MB
> > > > > > > > > > node 2 cpus: 2 3
> > > > > > > > > > node 2 size: n MB
> > > > > > > > > > node 2 free: n MB
> > > > > > > > > > node distances:
> > > > > > > > > > node   0   1   2
> > > > > > > > > >   0:  10  40  20
> > > > > > > > > >   1:  40  10  80
> > > > > > > > > >   2:  20  80  10
> > > > > > > > > >
> > > > > > > > > > Demotion order 1:
> > > > > > > > > >
> > > > > > > > > > node    demotion_target
> > > > > > > > > >  0              1
> > > > > > > > > >  1              X
> > > > > > > > > >  2              X
> > > > > > > > > >
> > > > > > > > > > Demotion order 2:
> > > > > > > > > >
> > > > > > > > > > node    demotion_target
> > > > > > > > > >  0              1
> > > > > > > > > >  1              X
> > > > > > > > > >  2              1
> > > > > > > > > >
> > > > > > > > > > The demotion order 1 is preferred if we want to reduce cross-socket
> > > > > > > > > > traffic. While the demotion order 2 is preferred if we want to take
> > > > > > > > > > full advantage of the slow memory node. We can take any choice as
> > > > > > > > > > automatic-generated order, while make the other choice possible via user
> > > > > > > > > > space overridden.
> > > > > > > > > >
> > > > > > > > > > I don't know how to implement this via your proposed user space
> > > > > > > > > > interface. How about the following user space interface?
> > > > > > > > > >
> > > > > > > > > > 1. Add a file "demotion_order_override" in
> > > > > > > > > > /sys/devices/system/node/
> > > > > > > > > >
> > > > > > > > > > 2.
> > > > > > > > > > When read, "1" is output if the demotion order of the system has been
> > > > > > > > > > overridden; "0" is output if not.
> > > > > > > > > >
> > > > > > > > > > 3. When write "1", the demotion order of the system will become the
> > > > > > > > > > overridden mode. When write "0", the demotion order of the system will
> > > > > > > > > > become the automatic mode and the demotion order will be re-generated.
> > > > > > > > > >
> > > > > > > > > > 4. Add a file "demotion_targets" for each node in
> > > > > > > > > > /sys/devices/system/node/nodeX/
> > > > > > > > > >
> > > > > > > > > > 5. When read, the demotion targets of nodeX will be output.
> > > > > > > > > >
> > > > > > > > > > 6. When write a node list to the file, the demotion targets of nodeX
> > > > > > > > > > will be set to the written nodes. And the demotion order of the system
> > > > > > > > > > will become the overridden mode.
> > > > > > > > >
> > > > > > > > > TBH I don't think having override demotion targets in userspace is
> > > > > > > > > quite useful in real life for now (it might become useful in the
> > > > > > > > > future, I can't tell). Imagine you manage hundred thousands of
> > > > > > > > > machines, which may come from different vendors, have different
> > > > > > > > > generations of hardware, have different versions of firmware, it would
> > > > > > > > > be a nightmare for the users to configure the demotion targets
> > > > > > > > > properly. So it would be great to have the kernel properly configure
> > > > > > > > > it *without* intervening from the users.
> > > > > > > > >
> > > > > > > > > So we should pick up a proper default policy and stick with that
> > > > > > > > > policy unless it doesn't work well for the most workloads. I do
> > > > > > > > > understand it is hard to make everyone happy.
> > > > > > > > > My proposal is having
> > > > > > > > > every node in the fast tier has a demotion target (at least one) if
> > > > > > > > > the slow tier exists sounds like a reasonable default policy. I think
> > > > > > > > > this is also the current implementation.
> > > > > > > >
> > > > > > > > This is reasonable. I agree that with a decent default policy,
> > > > > > >
> > > > > > > I agree that a decent default policy is important. As that was enhanced
> > > > > > > in [1/5] of this patchset.
> > > > > > >
> > > > > > > > the
> > > > > > > > overriding of per-node demotion targets can be deferred. The most
> > > > > > > > important problem here is that we should allow the configurations
> > > > > > > > where memory-only nodes are not used as demotion targets, which this
> > > > > > > > patch set has already addressed.
> > > > > > >
> > > > > > > Do you mean the user space interface proposed by [3/5] of this patchset?
> > > > > >
> > > > > > Yes.
> > > > > >
> > > > > > > IMHO, if we want to add a user space interface, I think that it should
> > > > > > > be powerful enough to address all existing issues and some potential
> > > > > > > future issues, so that it can be stable. I don't think it's a good idea
> > > > > > > to define a partial user space interface that works only for a specific
> > > > > > > use case and cannot be extended for other use cases.
> > > > > >
> > > > > > I actually think that they can be viewed as two separate problems: one
> > > > > > is to define which nodes can be used as demotion targets (this patch
> > > > > > set), and the other is how to initialize the per-node demotion path
> > > > > > (node_demotion[]). We don't have to solve both problems at the same
> > > > > > time.
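[Editor's note: the trade-off between the two candidate demotion orders in the 3-node example earlier in this thread can be made concrete with a small sketch. This is illustrative userspace code only, not kernel code; the policy functions and names are invented for the example.]

```python
# Derive the two candidate demotion orders discussed in this thread from
# the example 3-node distance matrix. Node 1 is the CPU-less slow memory
# node near node 0; nodes 0 and 2 are cpu + dram nodes.

DIST = [
    [10, 40, 20],
    [40, 10, 80],
    [20, 80, 10],
]
CPU_NODES = {0, 2}
SLOW_NODES = {1}

def order_1():
    """Demotion order 1: a CPU node demotes to a slow node only if it is
    that slow node's nearest CPU node (reduces cross-socket traffic)."""
    targets = {}
    for n in CPU_NODES:
        best = min(SLOW_NODES, key=lambda s: DIST[n][s])
        nearest_cpu = min(CPU_NODES, key=lambda c: DIST[best][c])
        # None corresponds to "X" (no demotion target) in the tables above.
        targets[n] = best if nearest_cpu == n else None
    return targets

def order_2():
    """Demotion order 2: every CPU node demotes to its nearest slow node
    (takes full advantage of the slow memory node)."""
    return {n: min(SLOW_NODES, key=lambda s: DIST[n][s]) for n in CPU_NODES}
```

Running both against the example matrix reproduces the two tables: order 1 gives 0 -> 1 with node 2 having no target, while order 2 sends both CPU nodes to node 1. Which one the kernel should auto-generate is exactly the policy question debated here.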
> > > > > >
> > > > > > If we decide to go with a per-node demotion path customization
> > > > > > interface to indirectly set N_DEMOTION_TARGETS, I'd prefer that there
> > > > > > is a single global control to turn off all demotion targets (for the
> > > > > > machines that don't use memory-only nodes for demotion).
> > > > >
> > > > > There's one already. In commit 20b51af15e01 ("mm/migrate: add sysfs
> > > > > interface to enable reclaim migration"), a sysfs interface
> > > > >
> > > > >   /sys/kernel/mm/numa/demotion_enabled
> > > > >
> > > > > is added to turn off all demotion targets.
> > > >
> > > > IIUC, this sysfs interface only turns off demotion-in-reclaim. It
> > > > will be even cleaner if we have an easy way to clear node_demotion[]
> > > > and N_DEMOTION_TARGETS so that the userspace (post-boot agent, not
> > > > init scripts) can know that the machine doesn't even have memory
> > > > tiering hardware enabled.
> > >
> > > What is the difference? Now we have no interface to show demotion
> > > targets of a node. That is in-kernel only. What is memory tiering
> > > hardware? The Optane PMEM? Some information for it is available via
> > > ACPI HMAT table.
> > >
> > > Except demotion-in-reclaim, what else do you care about?
> >
> > There is a difference: one is to indicate the availability of the
> > memory tiering hardware and the other is to indicate whether
> > transparent kernel-driven demotion from the reclaim path is activated.
> > With /sys/devices/system/node/demote_targets or the per-node demotion
> > target interface, the userspace can figure out the memory tiering
> > topology abstracted by the kernel. It is possible to use
> > application-guided demotion without having to enable reclaim-based
> > demotion in the kernel. Logically it is also cleaner to me to
> > decouple the tiering node representation from the actual demotion
> > mechanism enablement.
>
> I am confused here.
> It appears that you need a way to expose the
> automatic generated demotion order from kernel to user space interface.
> We can talk about that if you really need it.
>
> But [2-5/5] of this patchset is to override the automatic generated
> demotion order from user space to kernel interface.

As a side effect of allowing user space to override the default set of
demotion target nodes, this patch set also provides a sysfs interface
that lets userspace read which nodes are currently designated as demotion
targets. The initialization of demotion targets is expected to complete
during boot (either by the kernel or via an init script). After that,
userspace processes (e.g. a proactive tiering daemon or tiering-aware
applications) can query this sysfs interface to learn whether any tiering
nodes are present and act accordingly.

It would be even better to expose the per-node demotion order
(node_demotion[]) via sysfs (e.g.
/sys/devices/system/node/nodeX/demotion_targets, as you have suggested).
It can be read-only until there are good use cases that require
overriding the per-node demotion order.
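[Editor's note: to illustrate the userspace workflow discussed above, here is a sketch of how a post-boot agent might combine the existing /sys/kernel/mm/numa/demotion_enabled knob with the per-node demotion_targets file. The per-node file is only a proposal in this thread, not an existing kernel interface, and the accepted boolean spellings and helper names are assumptions for illustration.]

```python
import re

DEMOTION_ENABLED = "/sys/kernel/mm/numa/demotion_enabled"

def parse_bool_attr(text: str) -> bool:
    """Parse a kernel boolean sysfs value; "1"/"0", "true"/"false" and
    "y"/"n" are the usual accepted spellings (an assumption here)."""
    value = text.strip().lower()
    if value in ("1", "y", "yes", "true"):
        return True
    if value in ("0", "n", "no", "false"):
        return False
    raise ValueError(f"unexpected boolean attribute value: {text!r}")

def parse_nodelist(text: str) -> set:
    """Parse a kernel nodelist string such as "1-2,4" into a set of node
    ids. An empty string means the node has no demotion targets."""
    nodes = set()
    text = text.strip()
    if not text:
        return nodes
    for part in text.split(","):
        m = re.fullmatch(r"(\d+)(?:-(\d+))?", part.strip())
        if m is None:
            raise ValueError(f"bad nodelist component: {part!r}")
        lo = int(m.group(1))
        hi = int(m.group(2)) if m.group(2) else lo
        nodes.update(range(lo, hi + 1))
    return nodes

def node_demotion_targets(node: int) -> set:
    """Read the (proposed, hypothetical) per-node demotion_targets file."""
    path = f"/sys/devices/system/node/node{node}/demotion_targets"
    with open(path) as f:
        return parse_nodelist(f.read())

def reclaim_demotion_enabled(path: str = DEMOTION_ENABLED) -> bool:
    """Return True if reclaim-based demotion is currently enabled."""
    with open(path) as f:
        return parse_bool_attr(f.read())
```

An agent could then treat "some node has a non-empty target set" as "tiering nodes are present" independently of whether reclaim-based demotion is turned on, which is exactly the decoupling argued for above.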