Date: Wed, 6 Feb 2019 16:47:24 +0000
From: Jonathan Cameron
To: Keith Busch
CC: linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org, linux-mm@kvack.org,
 Greg Kroah-Hartman, Rafael Wysocki, "Hansen, Dave", "Williams, Dan J"
Subject: Re: [PATCHv5 04/10] node: Link memory nodes to their compute nodes
Message-ID: <20190206164724.000019c5@huawei.com>
In-Reply-To: <20190206161227.GH28064@localhost.localdomain>
References: <20190124230724.10022-1-keith.busch@intel.com>
 <20190124230724.10022-5-keith.busch@intel.com>
 <20190206122635.00005d37@huawei.com>
 <20190206161227.GH28064@localhost.localdomain>
Organization: Huawei
List-ID: linux-kernel@vger.kernel.org
On Wed, 6 Feb 2019 09:12:27 -0700
Keith Busch wrote:

> On Wed, Feb 06, 2019 at 04:26:35AM -0800, Jonathan Cameron wrote:
> > On Thu, 24 Jan 2019 16:07:18 -0700
> > Keith Busch wrote:
> > > +What:		/sys/devices/system/node/nodeX/accessY/initiators/
> > > +Date:		December 2018
> > > +Contact:	Keith Busch
> > > +Description:
> > > +		The directory containing symlinks to memory initiator
> > > +		nodes that have class "Y" access to this target node's
> > > +		memory. CPUs and other memory initiators in nodes not in
> > > +		the list accessing this node's memory may have different
> > > +		performance.
> >
> > Also seems to contain the characteristics of those accesses.
>
> Right, but not in this patch. I will append the description for access
> characteristics in the patch that adds those attributes.
>
> > > + * This function will export node relationships linking which memory
> > > + * initiator nodes can access memory targets at a given ranked access
> > > + * class.
> > > + */
> > > +int register_memory_node_under_compute_node(unsigned int mem_nid,
> > > +					    unsigned int cpu_nid,
> > > +					    unsigned access)
> > > +{
> > > +	struct node *init_node, *targ_node;
> > > +	struct node_access_nodes *initiator, *target;
> > > +	int ret;
> > > +
> > > +	if (!node_online(cpu_nid) || !node_online(mem_nid))
> > > +		return -ENODEV;
> >
> > What do we do under memory/node hotplug? More than likely that will
> > apply in such systems (it does in mine for starters).
> > Clearly to do the full story we would need _HMT support etc, but
> > we can do the prebaked version by having HMAT entries for nodes
> > that aren't online yet (like we do for SRAT).
> >
> > Perhaps one for a follow-up patch set. However, I'd like a
> > pr_info to indicate that the node is listed but not online currently.
>
> Yes, hot plug is planned to follow on to this series.
>
> > > +
> > > +	init_node = node_devices[cpu_nid];
> > > +	targ_node = node_devices[mem_nid];
> > > +	initiator = node_init_node_access(init_node, access);
> > > +	target = node_init_node_access(targ_node, access);
> > > +	if (!initiator || !target)
> > > +		return -ENOMEM;
> >
> > If one of these fails and the other doesn't, and the one that succeeded
> > did an init, don't we end up leaking a device here? I'd expect this
> > function not to leave things hanging if it hits an error; it should
> > unwind anything it has done. It has been added to the list so
> > could be cleaned up later, but I'm not seeing that code.
>
> These only get cleaned up when the node is removed.
>
> The initiator-target relationship is many-to-many, so we don't want to
> free it just because we couldn't allocate its pairing node. The
> existing one may still be paired to others we were able to allocate.

Reference count them?  We have lots of paths that can result in creation,
any of which might need cleaning up.  Sounds like a classic case for
reference counts.
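For what it's worth, a rough userspace sketch of the pattern I have in
mind; struct and function names here are illustrative only, not the
actual kernel structures (in-kernel this would presumably be a kref):

```c
/*
 * Sketch: a shared access node that is created on first use and freed
 * only when the last user drops its reference. Names are hypothetical.
 */
#include <stdlib.h>

struct access_node {
	unsigned int nid;
	unsigned int refcount;
};

/* Take a reference, creating the shared node on first use. */
struct access_node *access_node_get(struct access_node **slot,
				    unsigned int nid)
{
	if (!*slot) {
		*slot = calloc(1, sizeof(**slot));
		if (!*slot)
			return NULL;
		(*slot)->nid = nid;
	}
	(*slot)->refcount++;
	return *slot;
}

/* Drop a reference; the final put frees the node and clears the slot. */
void access_node_put(struct access_node **slot)
{
	if (*slot && --(*slot)->refcount == 0) {
		free(*slot);
		*slot = NULL;
	}
}
```

Each registration path would take a reference and each teardown path
(including the -ENOMEM unwind above) would do a put, so a node shared by
several initiator-target pairings survives until the last pairing goes
away, while a half-constructed pairing can still be unwound safely.

Jonathan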