From: Rob Herring
Date: Wed, 11 Dec 2019 17:48:54 -0600
Subject: Re: [PATCH] of: Rework and simplify phandle cache to use a fixed size
To: devicetree@vger.kernel.org, Frank Rowand
Cc: linux-kernel@vger.kernel.org, Sebastian Andrzej Siewior, Michael Ellerman, Segher Boessenkool
In-Reply-To: <20191211232345.24810-1-robh@kernel.org>

On Wed, Dec 11, 2019 at 5:23 PM Rob Herring wrote:
>
> The phandle cache was added to speed up of_find_node_by_phandle() by
> avoiding walking the whole DT to find a matching phandle. The
> implementation has several shortcomings:
>
>  - The cache is designed to work on a linear set of phandle values.
>    This is true for dtc generated DTs, but not for other cases such as
>    Power.
>  - The cache isn't enabled until of_core_init() and a typical system
>    may see hundreds of calls to of_find_node_by_phandle() before that
>    point.
>  - The cache is freed and re-allocated when the number of phandles
>    changes.
>  - It takes a raw spinlock around a memory allocation which breaks on
>    RT.
>
> Change the implementation to a fixed size and use hash_32() as the
> cache index. This greatly simplifies the implementation. It avoids
> the need for any re-alloc of the cache and taking a reference on nodes
> in the cache.
> We only have a single source of removing cache entries,
> which is of_detach_node().
>
> Using hash_32() removes any assumption on phandle values, improving
> the hit rate for non-linear phandle values. The effect on linear values
> using hash_32() is about a 10% collision rate. The chances of thrashing
> on colliding values seem to be low.
>
> To compare performance, I used a RK3399 board which is a pretty typical
> system. I found that just measuring boot time as done previously is
> noisy and may be impacted by other things. Also bringing up secondary
> cores causes some issues with measuring, so I booted with 'nr_cpus=1'.
> With no caching, calls to of_find_node_by_phandle() take about 20124 us
> for 1248 calls. There are an additional 288 calls before timekeeping is
> up. Using the average time per hit/miss with the cache, we can calculate
> these calls to take 690 us (277 hit / 11 miss) with a 128 entry cache
> and 13319 us with no cache or an uninitialized cache.
>
> Comparing the 3 implementations, the time spent in
> of_find_node_by_phandle() is:
>
> no cache:        20124 us (+ 13319 us)
> 128 entry cache:  5134 us (+ 690 us)
> current cache:     819 us (+ 13319 us)
>
> We could move the allocation of the cache earlier to improve the
> current cache, but that just further complicates the situation as it
> needs to be after slab is up, so we can't do it when unflattening (which
> uses memblock).
>
> Reported-by: Sebastian Andrzej Siewior
> Cc: Michael Ellerman
> Cc: Segher Boessenkool
> Cc: Frank Rowand
> Signed-off-by: Rob Herring
> ---
>  drivers/of/base.c       | 133 ++++++++--------------------
>  drivers/of/dynamic.c    |   2 +-
>  drivers/of/of_private.h |   4 +-
>  drivers/of/overlay.c    |  10 ---
>  4 files changed, 28 insertions(+), 121 deletions(-)

[...]

> -	if (phandle_cache) {
> -		if (phandle_cache[masked_handle] &&
> -		    handle == phandle_cache[masked_handle]->phandle)
> -			np = phandle_cache[masked_handle];
> -		if (np && of_node_check_flag(np, OF_DETACHED)) {
> -			WARN_ON(1); /* did not uncache np on node removal */
> -			of_node_put(np);
> -			phandle_cache[masked_handle] = NULL;
> -			np = NULL;
> -		}
> +	if (phandle_cache[handle_hash] &&
> +	    handle == phandle_cache[handle_hash]->phandle)
> +		np = phandle_cache[handle_hash];
> +	if (np && of_node_check_flag(np, OF_DETACHED)) {
> +		WARN_ON(1); /* did not uncache np on node removal */

BTW, I don't think this check is even valid. If we failed to detach and
remove the node from the cache, then we could be accessing np after
freeing it.

Rob
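
For reference, a minimal sketch of the fixed-size, hash_32()-indexed lookup
the commit message describes (illustrative only, condensed from the idea
rather than copied from the diff; the table size and hash helper name are
assumptions, and the questionable OF_DETACHED check is left out):

#include <linux/hash.h>
#include <linux/of.h>
#include <linux/spinlock.h>

/* 128 entries, matching the cache size used for the numbers above. */
#define OF_PHANDLE_CACHE_BITS	7
#define OF_PHANDLE_CACHE_SZ	BIT(OF_PHANDLE_CACHE_BITS)

static struct device_node *phandle_cache[OF_PHANDLE_CACHE_SZ];

static u32 of_phandle_cache_hash(phandle handle)
{
	return hash_32(handle, OF_PHANDLE_CACHE_BITS);
}

struct device_node *of_find_node_by_phandle(phandle handle)
{
	struct device_node *np = NULL;
	unsigned long flags;
	u32 handle_hash;

	if (!handle)
		return NULL;

	handle_hash = of_phandle_cache_hash(handle);

	raw_spin_lock_irqsave(&devtree_lock, flags);

	/* Fast path: the slot already holds a node with this phandle. */
	if (phandle_cache[handle_hash] &&
	    handle == phandle_cache[handle_hash]->phandle)
		np = phandle_cache[handle_hash];

	/* Slow path: walk all nodes and repopulate the slot on a match. */
	if (!np) {
		for_each_of_allnodes(np)
			if (np->phandle == handle &&
			    !of_node_check_flag(np, OF_DETACHED)) {
				phandle_cache[handle_hash] = np;
				break;
			}
	}

	of_node_get(np);
	raw_spin_unlock_irqrestore(&devtree_lock, flags);

	return np;
}

Because the table is statically sized and entries hold no reference, there
is nothing to allocate or free at runtime, which is what removes both the
re-alloc and the allocation under the raw spinlock.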
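
And the flip side, the single place entries get dropped. Again a sketch
only (the helper name and the exact call site in of_detach_node() are my
assumptions): the detach path clears the slot under devtree_lock before the
node can go away, which is exactly the step that must not be skipped if the
stale-pointer problem above is to be avoided.

/* Sketch: called from of_detach_node() with devtree_lock held. */
static void of_phandle_cache_inv_entry(phandle handle)
{
	u32 handle_hash;
	struct device_node *np;

	if (!handle)
		return;

	handle_hash = of_phandle_cache_hash(handle);

	np = phandle_cache[handle_hash];
	if (np && handle == np->phandle)
		phandle_cache[handle_hash] = NULL;
}

If of_detach_node() ever returned without doing this, phandle_cache[] would
keep pointing at memory that may be freed once the last reference is
dropped, and a later lookup hitting that slot would be a use-after-free --
hence the point that warning and "cleaning up" at lookup time is already
too late.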