Date: Tue, 26 Mar 2019 14:58:37 +0100
From: Michal Hocko
To: Yang Shi
Cc: mgorman@techsingularity.net, riel@surriel.com, hannes@cmpxchg.org,
    akpm@linux-foundation.org, dave.hansen@intel.com, keith.busch@intel.com,
    dan.j.williams@intel.com, fengguang.wu@intel.com, fan.du@intel.com,
    ying.huang@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 0/10] Another Approach to Use PMEM as NUMA Node
Message-ID: <20190326135837.GP28406@dhcp22.suse.cz>
References: <1553316275-21985-1-git-send-email-yang.shi@linux.alibaba.com>
In-Reply-To: <1553316275-21985-1-git-send-email-yang.shi@linux.alibaba.com>

On Sat 23-03-19 12:44:25, Yang Shi wrote:
> With Dave Hansen's patches merged into Linus's tree
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c221c0b0308fd01d9fb33a16f64d2fd95f8830a4
>
> PMEM can now be hot-plugged as a NUMA node. But how to use PMEM as a
> NUMA node effectively and efficiently is still an open question.
>
> There have been a couple of proposals posted on the mailing list [1] [2].
>
> This patchset tries a different approach from proposal [1] to use PMEM
> as NUMA nodes.
>
> The approach is designed to follow the principles below:
>
> 1. Use PMEM as a normal NUMA node: no special gfp flag, zone, zonelist,
> etc.
>
> 2. DRAM first/by default. No surprise to existing applications and
> default runs. PMEM will not be allocated unless its node is specified
> explicitly by NUMA policy. Some applications may not be very sensitive
> to memory latency, so they can be placed on PMEM nodes and have their
> hot pages promoted to DRAM gradually.

Why are you pushing yourself into the corner right at the beginning? If
PMEM is exported as a regular NUMA node then the only difference should
be performance characteristics (modulo durability, which shouldn't play
any role in this particular case, right?). Applications which are
already sensitive to memory access latency should be using proper
binding already. Some NUMA topologies already have quite large
interconnect penalties. So this doesn't sound like an argument to me,
TBH.
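Just to illustrate what I mean by proper binding that applications can
do today, a minimal libnuma sketch (node 2 below merely stands in for a
PMEM node, the size is arbitrary, and it needs to be linked with -lnuma):

#include <stdio.h>
#include <string.h>
#include <numa.h>               /* libnuma */

int main(void)
{
        size_t size = 64UL << 20;       /* 64MB, arbitrary */
        int pmem_node = 2;              /* stand-in for the PMEM node id */
        void *buf;

        if (numa_available() < 0) {
                fprintf(stderr, "no NUMA support on this system\n");
                return 1;
        }

        /*
         * Make node-bound allocations use a strict MPOL_BIND instead of
         * the default preferred policy, so there is no silent fallback.
         */
        numa_set_bind_policy(1);

        buf = numa_alloc_onnode(size, pmem_node);
        if (!buf) {
                fprintf(stderr, "allocation on node %d failed\n", pmem_node);
                return 1;
        }

        memset(buf, 0, size);           /* fault the pages in on that node */
        numa_free(buf, size);
        return 0;
}

Nothing PMEM specific is needed here; a PMEM node is just another node
id.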
> 5. Control memory allocation and hot/cold page promotion/demotion on a
> per-VMA basis.

What does that mean? Anon vs. file backed memory?

[...]

> 2. Introduce a new mempolicy, called MPOL_HYBRID, to keep the other
> mempolicy semantics intact. We would like to have memory placement
> control at per-process or even per-VMA granularity, so a mempolicy
> sounds more reasonable than madvise. The new mempolicy is mainly used
> for launching processes on PMEM nodes and then migrating hot pages to
> DRAM nodes via NUMA balancing. MPOL_BIND could bind to PMEM nodes too,
> but migrating to DRAM nodes would just break its semantics.
> MPOL_PREFERRED can't constrain the allocation to PMEM nodes. So it
> sounds like a new mempolicy is needed to fulfill the use case.

The above restriction pushes you to invent an API which is not really
trivial to get right, and it already seems quite artificial to me.
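Just for the record, the existing mbind(2) interface already expresses
the binding part (a minimal sketch; node 2 below merely stands in for a
PMEM node, the mapping size is arbitrary, and the mbind() wrapper comes
from -lnuma):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <numaif.h>             /* mbind(), MPOL_* */

int main(void)
{
        size_t len = 16UL << 20;        /* 16MB, arbitrary */
        int pmem_node = 2;              /* stand-in for the PMEM node id */
        unsigned long nodemask = 1UL << pmem_node;
        void *addr;

        addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (addr == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /*
         * MPOL_BIND: pages faulted in for this range may only come from
         * the nodes in the mask, with no fallback to other nodes.
         * MPOL_PREFERRED in its place would merely prefer the node and
         * silently fall back to DRAM, which is the limitation the cover
         * letter refers to.
         */
        if (mbind(addr, len, MPOL_BIND, &nodemask,
                  sizeof(nodemask) * 8, 0)) {
                perror("mbind");
                return 1;
        }

        memset(addr, 0, len);           /* allocate the pages on the bound node */
        munmap(addr, len);
        return 0;
}

So the allocation side is already covered by MPOL_BIND; the contentious
part is only the promotion to DRAM behind the application's back.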
> 3. The new mempolicy would promote pages to DRAM via NUMA balancing.
> IMHO, I don't think the kernel is a good place to implement a
> sophisticated hot/cold page detection algorithm, due to the complexity
> and overhead. But the kernel should have such a capability. NUMA
> balancing sounds like a good starting point.

This is what the kernel does all the time. We call it memory reclaim.

> 4. Promote twice-faulted pages. Use PG_promote to track whether a page
> has been faulted twice. This is an optimization to NUMA balancing to
> reduce migration thrashing and the overhead of migrating from PMEM.

I am sorry, but page flags are an extremely scarce resource and a new
flag is extremely hard to get. On the other hand, we already have
use-twice detection for mapped page cache (see page_check_references).
I believe we can generalize that to anon pages as well.

> 5. When DRAM is under memory pressure, demote pages to PMEM via the
> page reclaim path. This is quite similar to the other proposals. Then
> NUMA balancing will promote the page back to DRAM as long as the page
> is referenced again. But the promotion/demotion still assumes two-tier
> main memory. And the demotion may break mempolicy.

Yes, this sounds like a good idea to me ;)

> 6. Anonymous pages only for the time being, since NUMA balancing can't
> promote unmapped page cache.

As long as nvdimm access is faster than regular storage, using any node
(including a pmem one) should be OK.
-- 
Michal Hocko
SUSE Labs