Date: Thu, 10 Jan 2019 11:25:56 -0500
From: Jerome Glisse
To: Michal Hocko
Cc: Fengguang Wu, Andrew Morton, Linux Memory Management List, kvm@vger.kernel.org, LKML, Fan Du, Yao Yuan, Peng Dong, Huang Ying, Liu Jingqi, Dong Eddie, Dave Hansen, Zhang Yi, Dan Williams, Mel Gorman, Andrea Arcangeli
Subject: Re: [RFC][PATCH v2 00/21] PMEM NUMA node and hotness accounting/migration
Message-ID: <20190110162556.GC4394@redhat.com>
In-Reply-To: <20181228195224.GY16738@dhcp22.suse.cz>

On Fri, Dec 28, 2018 at 08:52:24PM +0100, Michal Hocko wrote:
> [Ccing Mel and Andrea]
>
> On Fri 28-12-18 21:31:11, Wu Fengguang wrote:
> > > > > I haven't looked at the implementation yet but if you are proposing a
> > > > > special cased zone lists then this is something CDM (Coherent Device
> > > > > Memory) was trying to do two years ago and there was quite some
> > > > > skepticism in the approach.
> > > >
> > > > It looks we are pretty different than CDM. :)
> > > > We creating new NUMA nodes rather than CDM's new ZONE.
> > > > The zonelists modification is just to make PMEM nodes more separated.
> > >
> > > Yes, this is exactly what CDM was after. Have a zone which is not
> > > reachable without explicit request AFAIR. So no, I do not think you are
> > > too different, you just use a different terminology ;)
> >
> > Got it. OK.. The fall back zonelists patch does need more thoughts.
> >
> > In long term POV, Linux should be prepared for multi-level memory.
> > Then there will arise the need to "allocate from this level memory".
> > So it looks good to have separated zonelists for each level of memory.
>
> Well, I do not have a good answer for you here. We do not have good
> experiences with those systems, I am afraid. NUMA is with us for more
> than a decade yet our APIs are coarse to say the least and broken at so
> many times as well. Starting a new API just based on PMEM sounds like a
> ticket to another disaster to me.
>
> I would like to see solid arguments why the current model of numa nodes
> with fallback in distances order cannot be used for those new
> technologies in the beginning and develop something better based on our
> experiences that we gain on the way.

I see several issues with distance. First, it fully abstracts away the
underlying topology, and that can be a problem. For instance, if you have
memory with different characteristics in the same node, like persistent
memory attached to some CPU, then it might be faster for that CPU to
access that persistent memory, as it has a dedicated link to it, than to
access some other remote memory for which the CPU might have to share the
link with other CPUs or devices. Second, distance is no longer easy to
compute when you are not asking what the fastest memory for CPU-N is, but
rather what the fastest memory for CPU-N and device-M is, i.e. when you
are trying to find the best memory for a group of CPUs/devices. The
answer can change drastically depending on the members of the group.
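To make the group case concrete, here is a minimal userspace sketch
(illustrative only; it assumes libnuma and its numa_distance() /
numa_node_of_cpu() helpers, the CPU group and the "minimize worst-case
distance" heuristic are hypothetical choices of mine): even with nothing
but the existing distance table, picking memory for a set of CPUs is
already an explicit search over nodes, and nothing in that table captures
a device that shares a link with those CPUs.

/* Build with: gcc group_node.c -lnuma */
#include <numa.h>
#include <stdio.h>

/* Pick the node that minimizes the worst-case distance from any CPU in
 * the group. With only the SLIT-style distances this is the best we can
 * do; per-link contention or device placement is simply not visible. */
static int best_node_for_group(const int *cpus, int ncpus)
{
        int best = -1, best_worst = 1 << 30;

        for (int node = 0; node <= numa_max_node(); node++) {
                if (!numa_bitmask_isbitset(numa_all_nodes_ptr, node))
                        continue; /* skip holes in the node numbering */
                int worst = 0;
                for (int i = 0; i < ncpus; i++) {
                        int n = numa_node_of_cpu(cpus[i]);
                        if (n < 0)
                                continue; /* unknown CPU, ignore it */
                        int d = numa_distance(n, node);
                        if (d > worst)
                                worst = d;
                }
                if (worst < best_worst) {
                        best_worst = worst;
                        best = node;
                }
        }
        return best;
}

int main(void)
{
        if (numa_available() < 0) {
                fprintf(stderr, "no NUMA support\n");
                return 1;
        }
        int group[] = { 0, 1 }; /* hypothetical group of CPUs */
        printf("best node for group: %d\n",
               best_node_for_group(group, 2));
        return 0;
}

Change the membership of group[] and the winning node can flip, which is
the point: the answer is a property of the group, not of any single CPU.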
Some advanced programmers already do graph matching, i.e. they match the
graph of their program's dataset/computation against the topology graph
of the computer they run on to determine the best placement for both
threads and memory.

> I would be especially interested about a possibility of the memory
> migration idea during a memory pressure and relying on numa balancing to
> resort the locality on demand rather than hiding certain NUMA nodes or
> zones from the allocator and expose them only to the userspace.

For device memory we have more things to think about, like:
 - memory not accessible by the CPU
 - non cache coherent memory (yet still useful in some cases if the
   application explicitly asks for it)
 - device drivers that want to keep full control over memory, as older
   applications like graphics on GPUs do need contiguous physical memory
   and other tight control over physical memory placement

So if we are talking about something to replace NUMA, I would really like
for it to be inclusive of device memory (which can itself be a hierarchy
of different memories with different characteristics).

Note that I do believe the proposed NUMA solution is useful now. But for
a new API it would be good to allow things like device memory.

This is a good topic to discuss during the next LSF/MM.

Cheers,
Jérôme