From: Rob Herring
Date: Thu, 1 Feb 2018 08:24:02 -0600
Subject: Re: [PATCH] of: cache phandle nodes to decrease cost of of_find_node_by_phandle()
To: Frank Rowand
Cc: Chintan Pandya, "open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS", linux-kernel@vger.kernel.org

On Wed, Jan 31, 2018 at 3:43 PM, Frank Rowand wrote:
> On 01/31/18 12:05, frowand.list@gmail.com wrote:
>> From: Frank Rowand
>>
>> Create a cache of the nodes that contain a phandle property. Use this
>> cache to find the node for a given phandle value instead of scanning
>> the devicetree to find the node. If the phandle value is not found
>> in the cache, of_find_node_by_phandle() will fall back to the tree
>> scan algorithm.
>>
>> The cache is initialized in of_core_init().
>>
>> The cache is freed via a late_initcall_sync().
>>
>> Signed-off-by: Frank Rowand
>> ---
>>
>> Some of_find_node_by_phandle() calls may occur before the cache is
>> initialized or after it is freed. For example, for the qualcomm
>> qcom-apq8074-dragonboard, 11 calls occur before the initialization
>> and 80 occur after the cache is freed (out of 516 total calls.)
>>
>>  drivers/of/base.c       | 85 ++++++++++++++++++++++++++++++++++++++++++++++---
>>  drivers/of/of_private.h |  5 +++
>>  drivers/of/resolver.c   | 21 ------------
>>  3 files changed, 86 insertions(+), 25 deletions(-)
>
> Some observations....
>
> The size of the cache for a normal device tree would be a couple of
> words of overhead for the cache, plus one pointer per devicetree node
> that contains a phandle property. This will be less space than
> would be used by adding a hash field to each device node.
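[Editor's note: the lookup scheme the patch describes — an O(1) array lookup keyed by phandle value, falling back to a full tree scan on a miss — can be sketched in plain C. All names below (`find_node_by_phandle`, `scan_for_phandle`, the list standing in for the tree) are simplified stand-ins for illustration, not the actual drivers/of/base.c code, and the sketch populates the cache lazily whereas the real patch prepopulates it in of_core_init().]

```c
#include <stddef.h>

/* Simplified stand-in for the kernel's struct device_node. */
struct device_node {
        unsigned int phandle;
        const char *name;
        struct device_node *next;   /* flat list standing in for the tree */
};

#define CACHE_SIZE 8                /* capped, as the patch description notes */

struct device_node *tree_head;
struct device_node *phandle_cache[CACHE_SIZE];
unsigned int slow_scans;            /* counts fallback scans, for the demo */

/* Slow path: linear walk, standing in for the full devicetree scan. */
struct device_node *scan_for_phandle(unsigned int phandle)
{
        struct device_node *np;

        slow_scans++;
        for (np = tree_head; np; np = np->next)
                if (np->phandle == phandle)
                        return np;
        return NULL;
}

/* Fast path first, then fall back to the scan on a miss. */
struct device_node *find_node_by_phandle(unsigned int phandle)
{
        struct device_node *np = NULL;

        if (phandle && phandle < CACHE_SIZE)
                np = phandle_cache[phandle];    /* O(1) hit */
        if (!np) {
                np = scan_for_phandle(phandle); /* miss: full scan */
                if (np && phandle < CACHE_SIZE)
                        phandle_cache[phandle] = np;
        }
        return np;
}
```

A repeat lookup of the same phandle then costs one array read instead of a tree walk, which is the point of the cache; phandle values at or above the cap simply always take the slow path.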
> It is also less space than was used by the older algorithm (long gone)
> that added a linked list through the nodes that contained a
> phandle property.
>
> This is assuming that the values of the phandle properties are
> the default ones created by the dtc compiler. In the case
> where a very large phandle property value is hand-coded in
> a devicetree source, the size of the cache is capped at one
> entry per node. In this case, a little bit of space will be
> wasted -- but this is just a sanity fallback, it should not
> be encountered, and can be fixed by fixing the devicetree
> source.

I don't think we should rely on how dtc allocates phandles. dtc is not
the only source of DeviceTrees. If we could do that, then let's make
them have some known flag in the upper byte so we have some hint for
phandle values. 2^24 phandles should be enough for anyone. (TM)

Your cache size is also going to balloon if the dtb was built with '-@'.

Since you walk the tree for every phandle, it is conceivable that you
could make things slower.

Freeing after boot is nice, but if someone has lots of modules or large
overlays, this doesn't help them at all.

There are still more tweaks we could do with a cache-based (i.e. can
miss) approach. We could have an access count or some most-recently-used
list to avoid evicting frequently accessed phandles (your data tends to
indicate that would help). We could have cache sets. And so far, no one
has explained why a bigger cache got slower.

Or we could do something decentralized and make the frequent callers
cache their phandles.

Rob
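[Editor's note: Rob's "cache sets" plus access-count idea could look roughly like the following. This is purely a hedged sketch with made-up names, not kernel code: a small set-associative table in which the least-used way of a set is evicted, so a hot phandle is not displaced by a colliding cold one.]

```c
#include <stddef.h>

/* Illustrative set-associative phandle cache. Set index is the low
 * bits of the phandle; within a set, eviction picks the way with the
 * smallest crude access count. */

#define SETS 16           /* power of two, so (phandle & (SETS-1)) works */
#define WAYS  2

struct cache_entry {
        unsigned int phandle;   /* 0 = empty slot */
        void *node;
        unsigned int uses;      /* access count used for eviction */
};

struct cache_entry cache_sets[SETS][WAYS];

void *cache_lookup(unsigned int phandle)
{
        struct cache_entry *set = cache_sets[phandle & (SETS - 1)];
        int w;

        for (w = 0; w < WAYS; w++)
                if (set[w].phandle == phandle) {
                        set[w].uses++;          /* mark as hot */
                        return set[w].node;
                }
        return NULL;                            /* miss: caller scans tree */
}

void cache_insert(unsigned int phandle, void *node)
{
        struct cache_entry *set = cache_sets[phandle & (SETS - 1)];
        int w, victim = 0;

        for (w = 0; w < WAYS; w++) {
                if (!set[w].phandle) {          /* prefer an empty way */
                        victim = w;
                        break;
                }
                if (set[w].uses < set[victim].uses)
                        victim = w;             /* else evict least-used */
        }
        set[victim].phandle = phandle;
        set[victim].node = node;
        set[victim].uses = 1;
}
```

With two ways per set, two phandles that collide on the low bits can coexist, and a frequently looked-up entry accumulates `uses` and survives eviction, which is the behavior the access-count suggestion is after.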