Date: Thu, 6 Jul 2017 19:08:04 -0400
From: Jerome Glisse <jglisse@redhat.com>
To: Ross Zwisler
Cc: linux-kernel@vger.kernel.org, "Anaczkowski, Lukasz", "Box, David E",
	"Kogut, Jaroslaw", "Lahtinen, Joonas", "Moore, Robert",
	"Nachimuthu, Murugasamy", "Odzioba, Lukasz", "Rafael J. Wysocki",
	"Rafael J. Wysocki", "Schmauss, Erik", "Verma, Vishal L",
	"Zheng, Lv", Andrew Morton, Dan Williams, Dave Hansen,
	Greg Kroah-Hartman, Len Brown, Tim Chen, devel@acpica.org,
	linux-acpi@vger.kernel.org, linux-mm@kvack.org,
	linux-nvdimm@lists.01.org
Subject: Re: [RFC v2 0/5] surface heterogeneous memory performance information
Message-ID: <20170706230803.GE2919@redhat.com>
References: <20170706215233.11329-1-ross.zwisler@linux.intel.com>
In-Reply-To: <20170706215233.11329-1-ross.zwisler@linux.intel.com>

On Thu, Jul 06, 2017 at 03:52:28PM -0600, Ross Zwisler wrote:

[...]

>
> ==== Next steps ====
>
> There is still a lot of work to be done on this series, but the overall
> goal of this RFC is to gather feedback on which of the two options we
> should pursue, or whether some third option is preferred. After that is
> done and we have a solid direction we can add support for ACPI hot add,
> test more complex configurations, etc.
>
> So, for applications that need to differentiate between memory ranges based
> on their performance, what option would work best for you? Is the local
> (initiator,target) performance provided by patch 5 enough, or do you
> require performance information for all possible (initiator,target)
> pairings?

Am I right in assuming that HBM or any faster memory will be relatively
small (1GB - 8GB, maybe 16GB?) and of a fixed amount (i.e. the size will
depend on the exact CPU model you have)?

If so, I am wondering whether we should restrict NUMA placement policy
for such nodes to the vma level only, and forbid any policy that would
prefer those nodes globally at the thread/process level. This would keep
a process-wide policy from exhausting this smaller pool of memory.

The drawback of doing so is that existing applications would not benefit
from it. So workloads where it is acceptable to exhaust such memory would
not benefit until their applications are updated.

This is definitely not something that impacts this patchset. I am just
thinking about this at large, and I believe that NUMA might need to
evolve slightly to better handle memory hierarchies.

Cheers,
Jérôme
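
P.S. As a rough illustration of the per-vma versus process-wide policy
distinction above, here is a minimal userspace sketch using the existing
mbind()/set_mempolicy() interfaces (numaif.h, link with -lnuma). The choice
of node 1 as the stand-in HBM node, the buffer size, and the use of
MPOL_BIND are illustrative assumptions only, not anything defined by this
patchset.

/* Per-vma vs process-wide NUMA policy sketch; build with: gcc -lnuma */
#include <numaif.h>
#include <sys/mman.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	size_t len = 64UL << 20;	/* 64 MB working buffer */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Per-vma policy: bind only this mapping to (hypothetical) HBM
	 * node 1. The rest of the process keeps the default policy, so a
	 * task-wide allocation pattern cannot exhaust the small fast pool
	 * through this policy. */
	unsigned long hbm_mask = 1UL << 1;	/* nodemask with node 1 set */
	if (mbind(buf, len, MPOL_BIND, &hbm_mask, sizeof(hbm_mask) * 8, 0))
		perror("mbind");

	/* Process-wide alternative (the kind of global preference the mail
	 * suggests restricting for small HBM nodes): every future
	 * allocation of this task would prefer node 1.
	 *
	 * set_mempolicy(MPOL_PREFERRED, &hbm_mask, sizeof(hbm_mask) * 8);
	 */

	memset(buf, 0, len);	/* touch the pages so they get placed */
	munmap(buf, len);
	return 0;
}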