Date: Wed, 1 Jul 2020 12:45:17 -0700 (PDT)
From: David Rientjes
To: Yang Shi
Cc: Dave Hansen, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    kbusch@kernel.org, ying.huang@intel.com, dan.j.williams@intel.com
Subject: Re: [RFC][PATCH 3/8] mm/vmscan: Attempt to migrate page in lieu of discard
In-Reply-To: <33028a57-24fd-e618-7d89-5f35a35a6314@linux.alibaba.com>
References: <20200629234503.749E5340@viggo.jf.intel.com>
 <20200629234509.8F89C4EF@viggo.jf.intel.com>
 <039a5704-4468-f662-d660-668071842ca3@linux.alibaba.com>
 <33028a57-24fd-e618-7d89-5f35a35a6314@linux.alibaba.com>

On Wed, 1 Jul 2020, Yang Shi wrote:

> > We can do this if we consider pmem not to be a separate memory tier from
> > the system perspective, however, but rather the socket perspective.  In
> > other words, a node can only demote to a series of exclusive pmem ranges
> > and promote to the same series of ranges in reverse order.  So DRAM node 0
> > can only demote to PMEM node 2 while DRAM node 1 can only demote to PMEM
> > node 3 -- a pmem range cannot be demoted to, or promoted from, more than
> > one DRAM node.
> >
> > This naturally takes care of mbind() and cpuset.mems if we consider pmem
> > just to be slower volatile memory and we don't need to deal with the
> > latency concerns of cross socket migration.  A user page will never be
> > demoted to a pmem range across the socket and will never be promoted to a
> > different DRAM node that it doesn't have access to.
>
> But I don't see too much benefit to limit the migration target to the
> so-called *paired* pmem node.  IMHO it is fine to migrate to a remote (on a
> different socket) pmem node since even the cross socket access should be
> much faster than refault or swap from disk.
>

Hi Yang,

Right, but any eventual promotion path would allow this to subvert the
user mempolicy or cpuset.mems if the demoted memory is eventually promoted
to a DRAM node on its socket.  We've discussed not having the ability to
map from the demoted page back to either of these contexts, and it becomes
even more difficult for shared memory.  With a 1:1 relationship we have
page_to_nid() and page_zone(), so we can always find the appropriate
demotion or promotion node for a given page (a rough sketch of what I mean
is appended at the end of this mail).

Do we lose anything with the strict 1:1 relationship between DRAM and PMEM
nodes?  It seems much simpler in terms of implementation and is more
intuitive.

> I think using pmem as a node is more natural than zone and less intrusive
> since we can just reuse all the numa APIs.  If we treat pmem as a new zone
> I think the implementation may be more intrusive and complicated (i.e. it
> would need a new gfp flag) and the user can't control the memory placement.
>

This is an important decision to make; I'm not sure that we actually
*want* all of these NUMA APIs :)  If my memory is demoted, I can simply do
migrate_pages() back to DRAM and cause other memory to be demoted in its
place.  Things like MPOL_INTERLEAVE over nodes {0,1,2} don't make sense.
Kswapd for a DRAM node putting demotion pressure on a PMEM node, which in
turn puts the PMEM node's kswapd under pressure to reclaim it, serves
*only* to spend unnecessary cpu cycles.

Users could control memory placement through a new mempolicy flag, which I
think is needed anyway for explicit allocation policies for PMEM nodes.
Consider if PMEM is a zone instead, so that it has the natural 1:1
relationship with DRAM: your system then only has nodes {0,1} as today,
there is no new NUMA topology to consider, and a mempolicy flag
MPOL_F_TOPTIER specifies that memory must be allocated from ZONE_MOVABLE
or ZONE_NORMAL (and I can then mlock() if I want to disable demotion under
memory pressure).
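
To make the 1:1 pairing concrete, here is a very rough sketch of what I
have in mind, assuming a two socket machine where DRAM node 0 pairs with
PMEM node 2 and DRAM node 1 with PMEM node 3.  None of these names exist
today: node_demotion[], establish_demotion_pairs() and
next_demotion_node() are made up purely for illustration.

#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/nodemask.h>

/*
 * Illustrative sketch only: exactly one demotion target per DRAM node,
 * so reclaim finds the target (and promotion finds the reverse) with a
 * single O(1) lookup keyed by page_to_nid().
 */
static int node_demotion[MAX_NUMNODES] __read_mostly = {
	[0 ... MAX_NUMNODES - 1] = NUMA_NO_NODE
};

/* Boot-time pairing, e.g. derived from the platform's HMAT/SRAT. */
static void __init establish_demotion_pairs(void)
{
	node_demotion[0] = 2;	/* DRAM node 0 <-> PMEM node 2 */
	node_demotion[1] = 3;	/* DRAM node 1 <-> PMEM node 3 */
}

/* Where may this page be demoted to?  NUMA_NO_NODE means "nowhere". */
static int next_demotion_node(struct page *page)
{
	return node_demotion[page_to_nid(page)];
}

Because each pairing is exclusive, a page demoted from node 0 can only
ever be promoted back to node 0, so mbind() and cpuset.mems are never
silently widened.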
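
This is also the kind of thing I mean about not necessarily wanting every
NUMA API: with PMEM exposed as ordinary nodes, nothing stops a task from
simply undoing the kernel's demotion decisions.  A minimal userspace
example with libnuma (the node numbers, 2 for PMEM and 0 for DRAM, are
assumptions for this sketch; build with -lnuma):

#include <numa.h>
#include <numaif.h>
#include <stdio.h>

int main(void)
{
	struct bitmask *from = numa_allocate_nodemask();
	struct bitmask *to   = numa_allocate_nodemask();

	numa_bitmask_setbit(from, 2);	/* assumed PMEM node */
	numa_bitmask_setbit(to, 0);	/* assumed DRAM node */

	/*
	 * Pull all of this task's pages off PMEM and back to DRAM,
	 * forcing other memory to be demoted in their place if DRAM
	 * is under pressure.
	 */
	if (numa_migrate_pages(0 /* self */, from, to) < 0)
		perror("migrate_pages");

	numa_free_nodemask(from);
	numa_free_nodemask(to);
	return 0;
}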
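
For comparison, this is roughly how the zone approach could look from
userspace.  To be clear, MPOL_F_TOPTIER does not exist and the bit value
below is made up; the point is only that the nodemask stays {0,1} as it is
today and the new mode flag restricts placement to the top tier, with
mlock() opting out of demotion:

#include <numaif.h>
#include <sys/mman.h>

#ifndef MPOL_F_TOPTIER
#define MPOL_F_TOPTIER	(1 << 13)	/* hypothetical mode flag, value made up */
#endif

/* Keep [addr, addr + len) on DRAM (ZONE_NORMAL/ZONE_MOVABLE) only. */
static int pin_to_dram(void *addr, unsigned long len)
{
	unsigned long nodemask = (1UL << 0) | (1UL << 1);	/* DRAM nodes 0 and 1 */

	if (mbind(addr, len, MPOL_BIND | MPOL_F_TOPTIER,
		  &nodemask, sizeof(nodemask) * 8, MPOL_MF_MOVE))
		return -1;

	/* ...and optionally forbid demotion under memory pressure. */
	return mlock(addr, len);
}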