From: Rob Herring
Date: Fri, 26 Jan 2018 09:34:59 -0600
Subject: Re: [PATCH v2] of: use hash based search in of_find_node_by_phandle
To: Chintan Pandya
Cc: Rasmus Villemoes, Frank Rowand,
    "open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS",
    linux-kernel@vger.kernel.org, linux-arm-msm
References: <1516955496-17236-1-git-send-email-cpandya@codeaurora.org>

On Fri, Jan 26, 2018 at 9:14 AM, Chintan Pandya wrote:
>
>> I'm probably missing something obvious, but: aren't phandles in
>> practice small consecutive integers assigned by dtc? If so, why not
>> just have a smallish static array mapping the small phandle values
>> directly to device nodes, instead of adding a pointer to every
>> struct device_node? Or one could determine the size of the array
>> dynamically (largest seen phandle value, capping at something
>> sensible, e.g. 1024).

I do have some concerns that this is a bit fragile and dependent on
dtc's implementation. However, I guess we already kind of are with
overlay phandles.

> I hadn't noticed this earlier! If the following is known or true, we
> can avoid using a hash table and save the per-device_node hlist_node.
>
> 1. How do we know the max phandle value without traversing the tree
> once? In my case, the max is 1263.

We already have to know it for overlays. Plus unflattening has to
handle phandles specially already, so it would be easy to do there if
we aren't already. Then the question is what to do with overlays. For
now, we can probably assume there are too few phandles to matter.

> 2. Although I haven't observed a collision, is every device_node
> associated with a unique phandle value?

Yes, phandles must be unique.
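For illustration, a direct-mapped table along the lines Rasmus
describes might look roughly like the sketch below. This is only a
sketch, not code from any posted patch: the names (phandle_map,
of_find_node_by_phandle_slow) are made up, and locking against
concurrent tree updates is omitted.

#define PHANDLE_MAP_SIZE	1024	/* cap suggested above */

/* One slot per small phandle value; NULL means "not mapped". */
static struct device_node *phandle_map[PHANDLE_MAP_SIZE];

/* Called for each node during unflattening, once its phandle is set. */
static void phandle_map_add(struct device_node *np)
{
	if (np->phandle && np->phandle < PHANDLE_MAP_SIZE)
		phandle_map[np->phandle] = np;
}

struct device_node *of_find_node_by_phandle(phandle handle)
{
	struct device_node *np = NULL;

	if (!handle)
		return NULL;

	if (handle < PHANDLE_MAP_SIZE)
		np = phandle_map[handle];
	if (!np)
		np = of_find_node_by_phandle_slow(handle); /* raw tree walk, no ref taken */

	return of_node_get(np);
}

Lookup for in-range phandles is then a single array index, with the
existing whole-tree walk kept only as a fallback for values above the
cap.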
>> In either case, one would still need to keep the code doing the
>> whole-tree traversal for handling large phandle values, but I think
>> the above should make lookup O(1) in most cases.

> I would refrain from doing this because it makes the API inconsistent
> in terms of the time taken for different nodes. I've seen people
> change the placement of their devices in the DT, which changes the
> time taken in the of_* APIs for them while also affecting others.

Who cares. It's the total time that matters. It's obviously not a
problem until you have 1000s of lookups, as no one cared until
recently (though you aren't the first with a hash table lookup).

>> Alternatively, one could just count the number of nodes with a
>> phandle, allocate an array of that many pointers (so the memory use
>> is certainly no more than if adding a pointer to each device_node),
>> and sort it by phandle, so one can do lookups using a binary search.
>>
>> Rasmus

> This is certainly doable if the current approach is not welcomed due
> to the addition of an hlist_node in device_node.

I certainly prefer an out-of-band approach, as that's easier to turn
off if we want to save memory. Still, I'd like to see some data on a
cache-based approach and reasons why that won't work.

Rob
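For comparison, the sorted-array alternative could look roughly like
this (again just a sketch with made-up names; phandle_sorted would be
built and sorted once during unflattening, and locking is omitted):

/* Every node that has a phandle, sorted by ascending phandle. */
static struct device_node **phandle_sorted;
static unsigned int phandle_count;

struct device_node *of_find_node_by_phandle(phandle handle)
{
	unsigned int lo = 0, hi = phandle_count;

	while (lo < hi) {
		unsigned int mid = lo + (hi - lo) / 2;

		if (phandle_sorted[mid]->phandle == handle)
			return of_node_get(phandle_sorted[mid]);
		if (phandle_sorted[mid]->phandle < handle)
			lo = mid + 1;
		else
			hi = mid;
	}
	return NULL;
}

Memory use is one pointer per node that actually has a phandle, and
each lookup is O(log n). A small "last result" cache in front of
either scheme would be one way to get data on the cache-based approach
mentioned above.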