Subject: Re: [PATCH v2 0/5] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
From: "ying.huang@intel.com"
To: Wei Xu
Cc: Jagdish Gediya, Yang Shi, Dave Hansen, Dan Williams, Davidlohr Bueso, Linux MM, Linux Kernel Mailing List, Andrew Morton, "Aneesh Kumar K.V", Baolin Wang, Greg Thelen, Michal Hocko, Brice Goglin, Tim C Chen
Date: Wed, 27 Apr 2022 15:11:08 +0800
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 2022-04-25 at 09:56 -0700, Wei Xu wrote:
> On Sat, Apr 23, 2022 at 8:02 PM ying.huang@intel.com
> wrote:
> >
> > Hi, All,
> >
> > On Fri, 2022-04-22 at 16:30
> > +0530, Jagdish Gediya wrote:
> >
> > [snip]
> >
> > > I think it is necessary to either have per-node demotion targets configuration or the user space interface supported by this patch series. As we don't have clear consensus on how the user interface should look, we can defer the per-node demotion target set interface to the future, until the real need arises.
> > >
> > > The current patch series sets N_DEMOTION_TARGETS from the dax device kmem driver; it may be possible that some memory node desired as a demotion target is not detected in the system from the dax-device kmem probe path.
> > >
> > > It is also possible that some of the dax-devices are not preferred as demotion targets, e.g. HBM. For such devices, the node shouldn't be set to N_DEMOTION_TARGETS. In the future, support should be added to distinguish such dax-devices and not mark them as N_DEMOTION_TARGETS from the kernel, but for now this user space interface will be useful to avoid such devices as demotion targets.
> > >
> > > We can add a read-only interface to view per-node demotion targets from /sys/devices/system/node/nodeX/demotion_targets, remove the duplicated /sys/kernel/mm/numa/demotion_target interface, and instead make /sys/devices/system/node/demotion_targets writable.
> > >
> > > Huang, Wei, Yang,
> > > What do you suggest?
> >
> > We cannot remove a kernel ABI in practice, so we need to make it right the first time. Let's try to collect some information for the kernel ABI definition.
> >
> > The below is just a starting point; please add your requirements.
> >
> > 1. Jagdish has some machines with DRAM-only NUMA nodes, but they don't want to use those as demotion targets. I don't think this is an issue in practice for now, because demote-in-reclaim is disabled by default.
> >
> > 2.
> > For machines with PMEM installed in only 1 of 2 sockets, for example, where nodes 0 and 2 are CPU + DRAM nodes and node 1 is a slow memory node near node 0:
> >
> > available: 3 nodes (0-2)
> > node 0 cpus: 0 1
> > node 0 size: n MB
> > node 0 free: n MB
> > node 1 cpus:
> > node 1 size: n MB
> > node 1 free: n MB
> > node 2 cpus: 2 3
> > node 2 size: n MB
> > node 2 free: n MB
> > node distances:
> > node   0   1   2
> >   0:  10  40  20
> >   1:  40  10  80
> >   2:  20  80  10
> >
> > We have 2 choices:
> >
> > a)
> > node    demotion targets
> > 0       1
> > 2       1
> >
> > b)
> > node    demotion targets
> > 0       1
> > 2       X
> >
> > a) is good to take advantage of PMEM; b) is good to reduce cross-socket traffic. Both are OK as the default configuration, but some users may prefer the other one. So we need a user space ABI to override the default configuration.

> I think 2(a) should be the system-wide configuration and 2(b) can be achieved with NUMA mempolicy (which needs to be added to demotion).

Unfortunately, some NUMA mempolicy information isn't available at demotion time; for example, the mempolicy enforced via set_mempolicy() is per-thread. But I think that cpusets can work for demotion.

> In general, we can view the demotion order in a way similar to the allocation fallback order (after all, if we don't demote or demotion lags behind, the allocations will go to these demotion target nodes according to the allocation fallback order anyway). If we initialize the demotion order in that way (i.e. every node can demote to any node in the next tier, and the priority of the target nodes is sorted for each source node), we don't need a per-node demotion order override from the userspace. What we need is to specify which nodes should be in each tier and support NUMA mempolicy in demotion.

This sounds interesting. "Tier" sounds like a natural and general concept for these memory types; it's attractive to use it for the user space interface too.
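(A hypothetical user-space sketch, not kernel code: the two default choices a) and b) from the example above can be derived from the SLIT distance matrix by sorting slow-node candidates per source node, optionally filtering out cross-socket targets. The distance threshold is an assumption made purely for illustration.)

```python
# Hypothetical sketch (not kernel code): deriving the two default
# demotion-target choices from the example SLIT distance matrix.
# Nodes 0 and 2 are CPU+DRAM nodes; node 1 is the slow (PMEM) node.

distance = {
    0: {0: 10, 1: 40, 2: 20},
    1: {0: 40, 1: 10, 2: 80},
    2: {0: 20, 1: 80, 2: 10},
}
cpu_nodes = [0, 2]
slow_nodes = [1]
CROSS_SOCKET_DISTANCE = 80  # assumed threshold, for illustration only

def demotion_targets(allow_cross_socket):
    """Choice a): demote to the nearest slow node, even across sockets.
    Choice b): only demote within the socket; no target is 'X' (None)."""
    targets = {}
    for n in cpu_nodes:
        # Sort candidate targets by distance, like allocation fallback order.
        candidates = sorted(slow_nodes, key=lambda t: distance[n][t])
        if not allow_cross_socket:
            candidates = [t for t in candidates
                          if distance[n][t] < CROSS_SOCKET_DISTANCE]
        targets[n] = candidates[0] if candidates else None
    return targets

print(demotion_targets(True))    # choice a): {0: 1, 2: 1}
print(demotion_targets(False))   # choice b): {0: 1, 2: None}
```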
For example, we may use that for mem_cgroup limits of a specific memory type (tier).

And if we take a look at N_DEMOTION_TARGETS again from the "tier" point of view, the nodes are divided into 2 classes via N_DEMOTION_TARGETS:

- The nodes without N_DEMOTION_TARGETS are top tier (or tier 0).
- The nodes with N_DEMOTION_TARGETS are non-top tier (or tier 1, 2, 3, ...).

So, another possibility is to fit N_DEMOTION_TARGETS and its overriding into the "tier" concept too: !N_DEMOTION_TARGETS == TIER0.

- All nodes start with TIER0.
- TIER0 can be cleared for some nodes, via e.g. the kmem driver.

The TIER0 node list can be read or overridden by the user space via the following interface:

  /sys/devices/system/node/tier0

In the future, if we want to customize more tiers, we can add tier1, tier2, tier3, ...; for now, we can add just tier0. That is, the interface is extensible in the future compared with .../node/demote_targets. This isn't as flexible as the writable per-node demotion targets, but it may be enough for most requirements?

Best Regards,
Huang, Ying

> Cross-socket demotion should not be too big a problem in practice because we can optimize the code to do the demotion from the local CPU node (i.e. local writes to the target node and remote reads from the source node). The bigger issue is cross-socket memory access onto the demoted pages from the applications, which is why NUMA mempolicy is important here.
>
> > 3. For machines with HBM (High Bandwidth Memory), as in
> >
> > https://lore.kernel.org/lkml/39cbe02a-d309-443d-54c9-678a0799342d@gmail.com/
> >
> > > [1] local DDR = 10, remote DDR = 20, local HBM = 31, remote HBM = 41
> >
> > Although HBM has better performance than DDR, in the ACPI SLIT its distance to the CPU is longer. We need to provide a way to fix this; the user space ABI is one way. The desired result will be to use local DDR as the demotion target of local HBM.
> >
> > Best Regards,
> > Huang, Ying
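(The tier0 semantics proposed in this message — all nodes start in tier 0, a driver such as dax/kmem clears tier 0 for slow-memory nodes, and user space can override the resulting list — can be modeled with a small sketch. This is a hypothetical model of the *proposed* sysfs ABI, not existing kernel behavior; node numbers follow the two-socket PMEM example earlier in the thread.)

```python
# Hypothetical model of the *proposed* tier0 semantics (not an
# existing kernel ABI). Nodes 0/2 are CPU+DRAM, node 1 is PMEM,
# as in the two-socket example earlier in the thread.

all_nodes = {0, 1, 2}

# All nodes start in TIER0; the kmem driver clears TIER0 for the
# slow-memory nodes it registers (here, node 1).
tier0 = set(all_nodes)
tier0 -= {1}

# !N_DEMOTION_TARGETS == TIER0, so the demotion targets are the rest.
demotion_targets = all_nodes - tier0
assert demotion_targets == {1}

# User-space override via the proposed /sys/devices/system/node/tier0
# file, e.g. writing "0" to keep only node 0 in the top tier:
tier0 = {0}
demotion_targets = all_nodes - tier0
print(sorted(demotion_targets))   # [1, 2]
```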