From: Wei Xu
Date: Thu, 21 Apr 2022 00:29:37 -0700
Subject: Re: [PATCH v2 0/5] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
To: "ying.huang@intel.com"
Cc: Yang Shi, Jagdish Gediya, Linux MM, Linux Kernel Mailing List,
 Andrew Morton, "Aneesh Kumar K.V", Baolin Wang, Dave Hansen,
 Dan Williams, Greg Thelen
References: <20220413092206.73974-1-jvgediya@linux.ibm.com>
 <6365983a8fbd8c325bb18959c51e9417fd821c91.camel@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 21, 2022 at 12:08 AM ying.huang@intel.com wrote:
>
> On Wed, 2022-04-20 at 23:49 -0700, Wei Xu wrote:
> > On Wed, Apr 20, 2022 at 11:24 PM ying.huang@intel.com wrote:
> > >
> > > On Wed, 2022-04-20 at 22:41 -0700, Wei Xu wrote:
> > > > On Wed, Apr 20, 2022 at 8:12 PM Yang Shi wrote:
> > > > >
> > > > > On Thu, Apr 14, 2022 at 12:00 AM ying.huang@intel.com wrote:
> > > > > >
> > > > > > On Wed, 2022-04-13 at 14:52 +0530, Jagdish Gediya wrote:
> > > > > > > The current implementation finds the demotion targets based
> > > > > > > on the node state N_MEMORY. However, some systems may have
> > > > > > > DRAM-only NUMA nodes, which are N_MEMORY but not the right
> > > > > > > choice as demotion targets.
> > > > > > >
> > > > > > > This patch series introduces the new node state
> > > > > > > N_DEMOTION_TARGETS, which is used to distinguish the nodes
> > > > > > > that can be used as demotion targets;
> > > > > > > node_states[N_DEMOTION_TARGETS] holds the list of such
> > > > > > > nodes. Support is also added to set the demotion target
> > > > > > > list from user space so that the default behavior can be
> > > > > > > overridden.
> > > > > >
> > > > > > It appears that your proposed user space interface cannot
> > > > > > solve all problems. For example, consider a system as
> > > > > > follows, where nodes 0 and 2 are CPU + DRAM nodes and node 1
> > > > > > is a slow memory node near node 0:
> > > > > >
> > > > > > available: 3 nodes (0-2)
> > > > > > node 0 cpus: 0 1
> > > > > > node 0 size: n MB
> > > > > > node 0 free: n MB
> > > > > > node 1 cpus:
> > > > > > node 1 size: n MB
> > > > > > node 1 free: n MB
> > > > > > node 2 cpus: 2 3
> > > > > > node 2 size: n MB
> > > > > > node 2 free: n MB
> > > > > > node distances:
> > > > > > node   0   1   2
> > > > > >   0:  10  40  20
> > > > > >   1:  40  10  80
> > > > > >   2:  20  80  10
> > > > > >
> > > > > > Demotion order 1:
> > > > > >
> > > > > > node    demotion_target
> > > > > >    0    1
> > > > > >    1    X
> > > > > >    2    X
> > > > > >
> > > > > > Demotion order 2:
> > > > > >
> > > > > > node    demotion_target
> > > > > >    0    1
> > > > > >    1    X
> > > > > >    2    1
> > > > > >
> > > > > > Demotion order 1 is preferred if we want to reduce
> > > > > > cross-socket traffic, while demotion order 2 is preferred if
> > > > > > we want to take full advantage of the slow memory node. We
> > > > > > can take either choice as the automatically generated order
> > > > > > and make the other possible via a user space override.
> > > > > >
> > > > > > I don't know how to implement this via your proposed user
> > > > > > space interface. How about the following one?
> > > > > >
> > > > > > 1. Add a file "demotion_order_override" in
> > > > > >    /sys/devices/system/node/
> > > > > >
> > > > > > 2. When read, "1" is output if the demotion order of the
> > > > > >    system has been overridden; "0" is output if not.
> > > > > >
> > > > > > 3. When "1" is written, the demotion order of the system
> > > > > >    enters the overridden mode. When "0" is written, it
> > > > > >    returns to the automatic mode and the demotion order is
> > > > > >    re-generated.
> > > > > >
> > > > > > 4. Add a file "demotion_targets" for each node in
> > > > > >    /sys/devices/system/node/nodeX/
> > > > > >
> > > > > > 5. When read, the demotion targets of nodeX are output.
> > > > > >
> > > > > > 6. When a node list is written to the file, the demotion
> > > > > >    targets of nodeX are set to the written nodes, and the
> > > > > >    demotion order of the system enters the overridden mode.
> > > > >
> > > > > TBH I don't think overriding demotion targets from userspace
> > > > > is quite useful in real life for now (it might become useful
> > > > > in the future, I can't tell). Imagine you manage hundreds of
> > > > > thousands of machines, which may come from different vendors,
> > > > > have different generations of hardware, and run different
> > > > > versions of firmware: it would be a nightmare for users to
> > > > > configure the demotion targets properly. So it would be great
> > > > > to have the kernel configure them properly *without*
> > > > > intervention from the users.
> > > > >
> > > > > So we should pick a proper default policy and stick with it
> > > > > unless it doesn't work well for most workloads. I do
> > > > > understand it is hard to make everyone happy. My proposal,
> > > > > that every node in the fast tier has at least one demotion
> > > > > target if a slow tier exists, sounds like a reasonable default
> > > > > policy. I think this is also the current implementation.
> > > >
> > > > This is reasonable. I agree that with a decent default policy,
> > >
> > > I agree that a decent default policy is important. It was
> > > enhanced in [1/5] of this patchset.
> > >
> > > > the overriding of per-node demotion targets can be deferred.
> > > > The most important problem here is that we should allow
> > > > configurations where memory-only nodes are not used as demotion
> > > > targets, which this patch set has already addressed.
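[Editorial aside, not part of the thread: the two candidate demotion orders in Huang Ying's example above can be derived from the distance matrix alone. The sketch below is illustrative user-space Python, not kernel code; the function names, tier sets, and tie-breaking rule are assumptions made for the illustration.]

```python
# Distance matrix and tiers from the example above: nodes 0 and 2 are
# CPU + DRAM ("fast") nodes, node 1 is a memory-only slow node near node 0.
DIST = {
    0: {0: 10, 1: 40, 2: 20},
    1: {0: 40, 1: 10, 2: 80},
    2: {0: 20, 1: 80, 2: 10},
}
FAST_NODES = {0, 2}
SLOW_NODES = {1}

def order_local_only(dist, fast, slow):
    """Demotion order 1: a fast node keeps a demotion target only if it
    is at least as close to that slow node as every other fast node is,
    which avoids cross-socket demotion traffic."""
    targets = {}
    for f in sorted(fast):
        best = min(slow, key=lambda s: dist[f][s])
        if all(dist[f][best] <= dist[g][best] for g in fast if g != f):
            targets[f] = best
        else:
            targets[f] = None  # "X": no demotion target
    return targets

def order_nearest(dist, fast, slow):
    """Demotion order 2: every fast node demotes to its nearest slow
    node, which maximizes use of the slow memory node."""
    return {f: min(slow, key=lambda s: dist[f][s]) for f in sorted(fast)}

# Reproduces the two tables in the example (None stands for "X"):
assert order_local_only(DIST, FAST_NODES, SLOW_NODES) == {0: 1, 2: None}
assert order_nearest(DIST, FAST_NODES, SLOW_NODES) == {0: 1, 2: 1}
```

Either policy is a defensible automatic default; the disagreement in the thread is only about which one the kernel should generate and how the other should remain reachable from user space.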
> > >
> > > Do you mean the user space interface proposed by [3/5] of this
> > > patchset?
> >
> > Yes.
> >
> > > IMHO, if we want to add a user space interface, I think that it
> > > should be powerful enough to address all existing issues and some
> > > potential future issues, so that it can be stable. I don't think
> > > it's a good idea to define a partial user space interface that
> > > works only for a specific use case and cannot be extended to
> > > other use cases.
> >
> > I actually think that these can be viewed as two separate problems:
> > one is to define which nodes can be used as demotion targets (this
> > patch set), and the other is how to initialize the per-node
> > demotion path (node_demotion[]). We don't have to solve both
> > problems at the same time.
> >
> > If we decide to go with a per-node demotion path customization
> > interface to indirectly set N_DEMOTION_TARGETS, I'd prefer that
> > there be a single global control to turn off all demotion targets
> > (for machines that don't use memory-only nodes for demotion).
>
> There's one already. In commit 20b51af15e01 ("mm/migrate: add sysfs
> interface to enable reclaim migration"), a sysfs interface
>
>   /sys/kernel/mm/numa/demotion_enabled
>
> is added to turn off all demotion targets.

IIUC, this sysfs interface only turns off demotion-in-reclaim. It
would be even cleaner if we had an easy way to clear node_demotion[]
and N_DEMOTION_TARGETS, so that userspace (a post-boot agent, not init
scripts) can know that the machine doesn't even have memory tiering
hardware enabled.

> Best Regards,
> Huang, Ying
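[Editorial aside, not part of the thread: the demotion_enabled knob discussed above can be probed from user space. A minimal sketch, assuming the knob lives at the path named in the thread; the accepted boolean spellings are an assumption made to stay tolerant of kernel-version differences.]

```python
from pathlib import Path

def demotion_enabled(path="/sys/kernel/mm/numa/demotion_enabled"):
    """Probe the reclaim-demotion knob added by commit 20b51af15e01.

    Returns True/False for the knob state, or None when the file is
    absent (older kernel, or demotion support not built in).
    """
    p = Path(path)
    if not p.exists():
        return None
    # The kernel reports a boolean; accept both "true"/"false" and
    # "1"/"0" spellings (an assumption, to tolerate version differences).
    return p.read_text().strip().lower() in ("true", "1", "y", "yes")
```

Note that, per Wei Xu's point above, this only reflects demotion-in-reclaim; it says nothing about whether node_demotion[] or N_DEMOTION_TARGETS is populated.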