Date: Thu, 17 Jan 2019 12:47:51 -0700
From: Keith Busch
To: Jonathan Cameron
Cc: "linux-kernel@vger.kernel.org", "linux-acpi@vger.kernel.org",
 "linux-mm@kvack.org", Greg Kroah-Hartman, Rafael Wysocki,
 "Hansen, Dave", "Williams, Dan J", "linuxarm@huawei.com"
Subject: Re: [PATCHv4 00/13] Heterogeneuos memory node attributes
Message-ID: <20190117194751.GE31543@localhost.localdomain>
References: <20190116175804.30196-1-keith.busch@intel.com>
 <20190117181835.000034ab@huawei.com>
In-Reply-To: <20190117181835.000034ab@huawei.com>

On Thu, Jan 17, 2019 at 10:18:35AM -0800, Jonathan Cameron wrote:
> I've been having a play with various hand constructed HMAT tables to allow
> me to try breaking them in all sorts of ways.
> 
> Mostly working as expected.
> 
> Two places I am so far unsure on...
> 
> 1. Concept of 'best' is not implemented in a consistent fashion.
> 
> I don't agree with the logic to match on 'best' because it can give some counter
> intuitive sets of target nodes.
> 
> For my simple test case we have both the latency and bandwidth specified (using
> access as I'm lazy and it saves typing).
> 
> Rather than matching when both are the best value, we match when _any_ of the
> measurements is the 'best' for the type of measurement.
> 
> A simple system with a high bandwidth interconnect between two SoCs
> might well have identical bandwidths to memory connected to each node, but
> much worse latency to the remote one. Another simple case would be DDR and
> SCM on roughly the same memory controller. Bandwidths are likely to be equal,
> latencies very different.
> 
> Right now we get both nodes in the list of 'best' ones because the bandwidths
> are equal, which is far from ideal. It also means we are presenting one value
> for both latency and bandwidth, misrepresenting the ones where it doesn't apply.
> 
> If we aren't going to specify that both must be "best", then I think we should
> separate the bandwidth and latency classes, requiring userspace to check
> both if they want the best combination of latency and bandwidth. I'm also
> happy enough (having not thought about it much) to have one class where the
> 'best' is the value sorted first on best latency and then on best bandwidth.

Okay, I see what you mean. I must admit my test environment doesn't have
nodes with the same bandwidth but different latency, so we may get the
wrong information with the HMAT parsing in this series. I'll look into
fixing that and consider your suggestions.

> 2. Handling of memory only nodes - that might have a device attached - _PXM
> 
> This is a common situation in CCIX for example where you have an accelerator
> with coherent memory homed at it. Looks like a pci device in a domain with
> the memory. Right now you can't actually do this as _PXM isn't processed
> for pci devices, but we'll get that fixed (broken Threadripper firmwares
> meant it got reverted last cycle).
> 
> In my case I have 4 nodes with cpu and memory (0,1,2,3) and 2 memory only (4,5).
> Memory only are longer latency and lower bandwidth.
> 
> Now
> ls /sys/bus/node/devices/node0/class0/
> ...
> 
> initiator0
> target0
> target4
> target5
> 
> read_bandwidth = 15000
> read_latency = 10000
> 
> These two values (and their paired write values) are correct for initiator0 to target0
> but completely wrong for initiator0 to target4 or target5.

Hm, this wasn't intended to tell us performance for the initiator's targets.
The performance data here is when you access node0's memory target from a
node in its initiator_list, or one of the symlinked initiatorX's.
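To make that concrete with the numbers from your example (just a sketch; the
listing and values are copied from your mail rather than from a real machine):

  $ ls /sys/devices/system/node/node0/class0/
  initiator0  read_bandwidth  read_latency  target0  target4  target5  ...
  $ cat /sys/devices/system/node/node0/class0/read_latency
  10000
  $ cat /sys/devices/system/node/node0/class0/read_bandwidth
  15000

That is meant to read as "accessing node0's memory from initiator0 costs
10000 / 15000", not "accessing each of node0's targets from node0".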
If you want to see the performance attributes for accessing
initiator0->target4, you can check:

  /sys/devices/system/node/node0/class0/target4/class0/read_bandwidth

> This occurs because we loop over the targets looking for the best values and
> set the relevant bit in t->p_nodes based on that. These memory only nodes have
> a best value that happens to be equal from all the initiators. The issue is it
> isn't the one reported in the node0/class0.
> 
> Also if we look in
> /sys/bus/node/devices/node4/class0 there are no targets listed (there are the expected
> 4 initiators 0-3).
> 
> I'm not sure what the intended behavior would be in this case.

You mentioned that node 4 is a memory-only node, so it can't have any
targets, right?
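With the topology you describe I would expect the node4 side to look roughly
like this (again only a sketch; I haven't reproduced your table locally, and
the latency/bandwidth values would be whatever your HMAT gives for the
memory-only nodes):

  $ ls /sys/devices/system/node/node4/class0/
  initiator0  initiator1  initiator2  initiator3  read_bandwidth  read_latency  ...

No targetN links appear because node4 has no CPUs and never initiates
accesses; it only ever shows up as one of the other nodes' targets.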