From: Keith Busch
To: linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org, linux-mm@kvack.org
Cc: Greg Kroah-Hartman, Rafael Wysocki, Dave Hansen, Dan Williams, Keith Busch
Subject: [PATCHv2 03/12] node: Link memory nodes to their compute nodes
Date: Mon, 10 Dec 2018 18:03:01 -0700
Message-Id: <20181211010310.8551-4-keith.busch@intel.com>
In-Reply-To: <20181211010310.8551-1-keith.busch@intel.com>
References: <20181211010310.8551-1-keith.busch@intel.com>
List-ID: linux-kernel@vger.kernel.org

Memory-only
nodes will often have affinity to a compute node that can initiate memory
access. Platforms may have various ways to express that locality. The nodes
that initiate memory requests closest to a memory target node may be
considered to have 'primary' access.

In preparation for these systems, provide a way for the kernel to link a
target memory node to its primary initiator compute nodes with symlinks to
each other. Also add memory target and initiator node masks showing the
same relationship.

The following example shows the node's new sysfs hierarchy for memory node
'Y' with primary access from compute node 'X':

  # ls -l /sys/devices/system/node/nodeX/primary_target*
  /sys/devices/system/node/nodeX/primary_targetY -> ../nodeY

  # ls -l /sys/devices/system/node/nodeY/primary_initiator*
  /sys/devices/system/node/nodeY/primary_initiatorX -> ../nodeX

A single initiator may have primary access to multiple memory targets, and
a target may likewise have primary access from multiple initiators. The
following demonstrates how this may look for a theoretical system with 8
memory nodes and 2 compute nodes:

  # cat /sys/devices/system/node/node0/primary_mem_nodelist
  0,2,4,6
  # cat /sys/devices/system/node/node1/primary_mem_nodelist
  1,3,5,7

And then going the other way, to identify the cpu node lists of a node's
primary targets:

  # cat /sys/devices/system/node/node0/primary_target*/primary_cpu_nodelist | tr "\n" ","
  0,0,0,0
  # cat /sys/devices/system/node/node1/primary_target*/primary_cpu_nodelist | tr "\n" ","
  1,1,1,1

As an example of what you may be able to do with this, say we have a PCIe
storage device, /dev/nvme0n1, attached to a particular node, and we want to
run IO to it using only CPUs and memory that share primary access.
The following shell script is one way to achieve that goal:

  #!/bin/bash
  DEV_NODE=/sys/devices/system/node/node$(cat /sys/block/nvme0n1/device/device/numa_node)
  numactl --membind=$(cat ${DEV_NODE}/primary_mem_nodelist) \
          --cpunodebind=$(cat ${DEV_NODE}/primary_cpu_nodelist) \
          -- fio --filename=/dev/nvme0n1 --bs=4k --name=access-test

Signed-off-by: Keith Busch <keith.busch@intel.com>
---
 drivers/base/node.c  | 85 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/node.h |  4 +++
 2 files changed, 89 insertions(+)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 86d6cd92ce3d..50412ce3fd7d 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -56,6 +56,46 @@ static inline ssize_t node_read_cpulist(struct device *dev,
 	return node_read_cpumap(dev, true, buf);
 }
 
+static ssize_t node_read_nodemap(nodemask_t *mask, bool list, char *buf)
+{
+	return list ? scnprintf(buf, PAGE_SIZE - 1, "%*pbl\n",
+				nodemask_pr_args(mask)) :
+		      scnprintf(buf, PAGE_SIZE - 1, "%*pb\n",
+				nodemask_pr_args(mask));
+}
+
+static ssize_t primary_mem_nodelist_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct node *n = to_node(dev);
+	return node_read_nodemap(&n->primary_mem_nodes, true, buf);
+}
+
+static ssize_t primary_mem_nodemask_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct node *n = to_node(dev);
+	return node_read_nodemap(&n->primary_mem_nodes, false, buf);
+}
+
+static ssize_t primary_cpu_nodelist_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct node *n = to_node(dev);
+	return node_read_nodemap(&n->primary_cpu_nodes, true, buf);
+}
+
+static ssize_t primary_cpu_nodemask_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct node *n = to_node(dev);
+	return node_read_nodemap(&n->primary_cpu_nodes, false, buf);
+}
+
+static DEVICE_ATTR_RO(primary_mem_nodelist);
+static DEVICE_ATTR_RO(primary_mem_nodemask);
+static DEVICE_ATTR_RO(primary_cpu_nodemask);
+static DEVICE_ATTR_RO(primary_cpu_nodelist);
 
 static DEVICE_ATTR(cpumap,  S_IRUGO, node_read_cpumask, NULL);
 static DEVICE_ATTR(cpulist, S_IRUGO, node_read_cpulist, NULL);
@@ -240,6 +280,10 @@ static struct attribute *node_dev_attrs[] = {
 	&dev_attr_numastat.attr,
 	&dev_attr_distance.attr,
 	&dev_attr_vmstat.attr,
+	&dev_attr_primary_mem_nodelist.attr,
+	&dev_attr_primary_mem_nodemask.attr,
+	&dev_attr_primary_cpu_nodemask.attr,
+	&dev_attr_primary_cpu_nodelist.attr,
 	NULL
 };
 ATTRIBUTE_GROUPS(node_dev);
@@ -372,6 +416,42 @@ int register_cpu_under_node(unsigned int cpu, unsigned int nid)
 				 kobject_name(&node_devices[nid]->dev.kobj));
 }
 
+int register_memory_node_under_compute_node(unsigned int m, unsigned int p)
+{
+	struct node *init, *targ;
+	char initiator[28];	/* "primary_initiator4294967295\0" */
+	char target[25];	/* "primary_target4294967295\0" */
+	int ret;
+
+	if (!node_online(p) || !node_online(m))
+		return -ENODEV;
+	if (m == p)
+		return 0;
+
+	snprintf(initiator, sizeof(initiator), "primary_initiator%d", p);
+	snprintf(target, sizeof(target), "primary_target%d", m);
+
+	init = node_devices[p];
+	targ = node_devices[m];
+
+	ret = sysfs_create_link(&init->dev.kobj, &targ->dev.kobj, target);
+	if (ret)
+		return ret;
+
+	ret = sysfs_create_link(&targ->dev.kobj, &init->dev.kobj, initiator);
+	if (ret)
+		goto err;
+
+	node_set(m, init->primary_mem_nodes);
+	node_set(p, targ->primary_cpu_nodes);
+
+	return 0;
+ err:
+	/* remove the first link by the name it was created with */
+	sysfs_remove_link(&init->dev.kobj, target);
+	return ret;
+}
+
 int unregister_cpu_under_node(unsigned int cpu, unsigned int nid)
 {
 	struct device *obj;
@@ -580,6 +660,11 @@ int __register_one_node(int nid)
 			register_cpu_under_node(cpu, nid);
 	}
 
+	if (node_state(nid, N_MEMORY))
+		node_set(nid, node_devices[nid]->primary_mem_nodes);
+	if (node_state(nid, N_CPU))
+		node_set(nid, node_devices[nid]->primary_cpu_nodes);
+
 	/* initialize work queue for memory hot plug */
 	init_node_hugetlb_work(nid);
 
diff --git a/include/linux/node.h b/include/linux/node.h
index 257bb3d6d014..3d06de045cbf 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -21,6 +21,8 @@
 struct node {
 	struct device	dev;
+	nodemask_t	primary_mem_nodes;
+	nodemask_t	primary_cpu_nodes;
 
 #if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HUGETLBFS)
 	struct work_struct	node_work;
@@ -75,6 +77,8 @@ extern int register_mem_sect_under_node(struct memory_block *mem_blk,
 extern int unregister_mem_sect_under_nodes(struct memory_block *mem_blk,
 					   unsigned long phys_index);
 
+extern int register_memory_node_under_compute_node(unsigned int m, unsigned int p);
+
 #ifdef CONFIG_HUGETLBFS
 extern void register_hugetlbfs_with_node(node_registration_func_t doregister,
 					 node_registration_func_t unregister);
-- 
2.14.4