Subject: Re: [PATCH v2 01/29] nvmem: add support for cell lookups
To: Bartosz Golaszewski
Cc: Boris Brezillon, Andrew Lunn, linux-doc, Sekhar Nori,
 Bartosz Golaszewski, linux-i2c, Mauro Carvalho Chehab, Rob Herring,
 Florian Fainelli, Kevin Hilman, Richard Weinberger, Russell King,
 Marek Vasut, Paolo Abeni, Dan Carpenter, Grygorii Strashko,
 David Lechner, Arnd Bergmann, Sven Van Asbroeck,
 "open list:MEMORY TECHNOLOGY...", Linux-OMAP, Linux ARM,
 Ivan Khoronzhuk, Greg Kroah-Hartman, Jonathan Corbet,
 Linux Kernel Mailing List, Lukas Wunner, Naren, netdev, Alban Bedel,
 Andrew Morton, Brian Norris, David Woodhouse,
 David S. Miller
References: <20180810080526.27207-1-brgl@bgdev.pl>
 <20180810080526.27207-2-brgl@bgdev.pl>
 <20180824170848.29594318@bbrezillon>
 <20180824152740.GD27483@lunn.ch>
 <20180825082722.567e8c9a@bbrezillon>
 <20180827110055.122988d0@bbrezillon>
 <8cb75723-dc87-f127-2aab-54dd0b08eee8@linaro.org>
From: Srinivas Kandagatla
Message-ID: <916e3e89-82b3-0d52-2b77-4374261a9d0f@linaro.org>
Date: Tue, 28 Aug 2018 14:45:17 +0100
X-Mailing-List: linux-kernel@vger.kernel.org

On 28/08/18 12:56, Bartosz Golaszewski wrote:
> 2018-08-28 12:15 GMT+02:00 Srinivas Kandagatla:
>>
>> On 27/08/18 14:37, Bartosz Golaszewski wrote:
>>>
>>> I didn't notice it before but there's a global list of nvmem cells
>>
>> Bit of history here.
>>
>> The global list of nvmem_cell is there to assist non-device-tree based
>> cell lookups. These cell entries come as part of the non-DT provider's
>> nvmem_config.
>>
>> All the device tree based cell lookups happen dynamically on
>> request/demand, and all the cell definitions come from DT.
>>
>
> Makes perfect sense.
>
>> As of today NVMEM supports both the DT and non-DT use cases; this is
>> much simpler.
>>
>> The non-DT cases have various consumer use cases:
>>
>> 1> The consumer is aware of the provider name and cell details.
>>    This is probably the simple use case where it can just use the
>>    device-based APIs.
>>
>> 2> The consumer is not aware of the provider name, only of the cell name.
>>    This is the case where the global list of cells is used.
>>
>
> I would like to support an additional use case here: the provider is
> generic and is not aware of its cells at all. Since the only way of
> defining nvmem cells is through DT or nvmem_config, we lack a way to
> allow machine code to define cells without the provider code being
> aware.

The machine driver should be able to do nvmem_device_get() followed by
nvmem_add_cells().

Currently this adds to the global cell list, which is exactly like doing
it via nvmem_config.
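Roughly like the untested sketch below; the provider name "some-eeprom",
the cell name and the offsets are made up, and it assumes nvmem_add_cells()
is exported so machine code may call it:

/*
 * Untested sketch: a board file defining a cell on a provider that is
 * itself cell-unaware.  Provider name, cell name and offsets are
 * placeholders; assumes nvmem_add_cells() is available to machine code.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/nvmem-consumer.h>
#include <linux/nvmem-provider.h>

static const struct nvmem_cell_info board_mac_cell = {
	.name	= "mac-address",
	.offset	= 0x40,
	.bytes	= 6,
};

static int board_add_nvmem_cells(struct device *dev)
{
	struct nvmem_device *nvmem;
	int ret;

	/* Look the provider up by name; no DT involved. */
	nvmem = nvmem_device_get(dev, "some-eeprom");
	if (IS_ERR(nvmem))
		return PTR_ERR(nvmem);

	/* Attach the board-specific cell to that provider. */
	ret = nvmem_add_cells(nvmem, &board_mac_cell, 1);

	nvmem_device_put(nvmem);
	return ret;
}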
>
>>> with each cell referencing its owner nvmem device. I'm wondering if
>>> this isn't some kind of inversion of ownership. Shouldn't each nvmem
>>> device have a separate list of nvmem cells owned by it? What happens
>>
>> This is mainly done for the use case where the consumer has no idea of
>> the provider name or any other details.
>>
>
> It doesn't need to know the provider details, but in most subsystems
> the core code associates such resources by dev_id and optional con_id
> as Boris already said.
>

If dev_id here refers to the provider dev_id, then we already do that via
the nvmem device APIs; the exception is the global cell list, which makes
dev_id optional.

>> The first thing a non-DT user should do is use the "NVMEM device based
>> consumer APIs".
>>
>> Ex: first get a handle to the nvmem device using its nvmem provider name
>> by calling nvmem_device_get(), then use the nvmem_device_cell_read/write()
>> APIs.
>>
>> Also I am not 100% sure how maintaining a cell list per nvmem provider
>> would help the intended purpose of the global list?
>>
>
> It would fix the use case where the consumer wants to use
> nvmem_cell_get(dev, name) and two nvmem providers would have a cell
> with the same name.

There is no code to enforce duplicate checks, so this would only decrease
the chances rather than fix the problem completely. I guess this is the
same problem: finding a cell by name without a dev_id would still be an
issue. I am not too concerned about this ATM.

However, the idea of having cells per provider does sound good to me. We
should also maintain a list of providers in the core, as a lookup for the
cases where dev_id is NULL.

I did hack up a patch in case you want to try it; it is only compile-tested.

---------------------------------->cut<-------------------------------
Author: Srinivas Kandagatla
Date:   Tue Aug 28 13:46:21 2018 +0100

    nvmem: core: maintain per provider cell list

    Having a global cell list could be an issue in cases where the cell-id
    is the same across multiple providers. Making the cell list specific
    to the provider avoids such issues by allowing additional checks while
    adding cells.

    Signed-off-by: Srinivas Kandagatla

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index aa1657831b70..29da603f2fa4 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -40,6 +40,8 @@ struct nvmem_device {
 	struct device		*base_dev;
 	nvmem_reg_read_t	reg_read;
 	nvmem_reg_write_t	reg_write;
+	struct list_head	node;
+	struct list_head	cells;
 	void *priv;
 };
 
@@ -57,9 +59,7 @@ struct nvmem_cell {
 
 static DEFINE_MUTEX(nvmem_mutex);
 static DEFINE_IDA(nvmem_ida);
-
-static LIST_HEAD(nvmem_cells);
-static DEFINE_MUTEX(nvmem_cells_mutex);
+static LIST_HEAD(nvmem_devices);
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 static struct lock_class_key eeprom_lock_key;
@@ -284,26 +284,28 @@ static struct nvmem_device *of_nvmem_find(struct device_node *nvmem_np)
 
 static struct nvmem_cell *nvmem_find_cell(const char *cell_id)
 {
-	struct nvmem_cell *p;
+	struct nvmem_device *d;
 
-	mutex_lock(&nvmem_cells_mutex);
-
-	list_for_each_entry(p, &nvmem_cells, node)
-		if (!strcmp(p->name, cell_id)) {
-			mutex_unlock(&nvmem_cells_mutex);
-			return p;
-		}
+	mutex_lock(&nvmem_mutex);
+	list_for_each_entry(d, &nvmem_devices, node) {
+		struct nvmem_cell *p;
+		list_for_each_entry(p, &d->cells, node)
+			if (!strcmp(p->name, cell_id)) {
+				mutex_unlock(&nvmem_mutex);
+				return p;
+			}
+	}
 
-	mutex_unlock(&nvmem_cells_mutex);
+	mutex_unlock(&nvmem_mutex);
 
 	return NULL;
 }
 
 static void nvmem_cell_drop(struct nvmem_cell *cell)
 {
-	mutex_lock(&nvmem_cells_mutex);
+	mutex_lock(&nvmem_mutex);
 	list_del(&cell->node);
-	mutex_unlock(&nvmem_cells_mutex);
+	mutex_unlock(&nvmem_mutex);
 
 	kfree(cell);
 }
@@ -312,18 +314,18 @@ static void nvmem_device_remove_all_cells(const struct nvmem_device *nvmem)
 	struct nvmem_cell *cell;
 	struct list_head *p, *n;
 
-	list_for_each_safe(p, n, &nvmem_cells) {
+	list_for_each_safe(p, n, &nvmem->cells) {
 		cell = list_entry(p, struct nvmem_cell, node);
 		if (cell->nvmem == nvmem)
 			nvmem_cell_drop(cell);
 	}
 }
 
-static void nvmem_cell_add(struct nvmem_cell *cell)
+static void nvmem_cell_add(struct nvmem_device *nvmem, struct nvmem_cell *cell)
 {
-	mutex_lock(&nvmem_cells_mutex);
-	list_add_tail(&cell->node, &nvmem_cells);
-	mutex_unlock(&nvmem_cells_mutex);
+	mutex_lock(&nvmem_mutex);
+	list_add_tail(&cell->node, &nvmem->cells);
+	mutex_unlock(&nvmem_mutex);
 }
 
 static int nvmem_cell_info_to_nvmem_cell(struct nvmem_device *nvmem,
@@ -385,7 +387,7 @@ int nvmem_add_cells(struct nvmem_device *nvmem,
 			goto err;
 		}
 
-		nvmem_cell_add(cells[i]);
+		nvmem_cell_add(nvmem, cells[i]);
 	}
 
 	/* remove tmp array */
@@ -519,6 +521,10 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
 	if (config->cells)
 		nvmem_add_cells(nvmem, config->cells, config->ncells);
 
+	mutex_lock(&nvmem_mutex);
+	list_add_tail(&nvmem->node, &nvmem_devices);
+	mutex_unlock(&nvmem_mutex);
+
 	return nvmem;
 
 err_device_del:
@@ -544,6 +550,8 @@ int nvmem_unregister(struct nvmem_device *nvmem)
 		mutex_unlock(&nvmem_mutex);
 		return -EBUSY;
 	}
+
+	list_del(&nvmem->node);
 	mutex_unlock(&nvmem_mutex);
 
 	if (nvmem->flags & FLAG_COMPAT)
@@ -899,7 +907,7 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np,
 		goto err_sanity;
 	}
 
-	nvmem_cell_add(cell);
+	nvmem_cell_add(nvmem, cell);
 
 	return cell;
---------------------------------->cut<-------------------------------

>
> Next we could add a way to associate dev_ids with nvmem cells.
>
>>> if we have two nvmem providers with the same names for cells? I'm
>>
>> Yes, it would return the first instance.. which is a known issue.
>> I am not really sure this is a big problem as of today, but I am open to
>> any better suggestions!
>>
>
> Yes, I would like to rework nvmem a bit. I don't see any non-DT users
> defining nvmem-cells using nvmem_config. I think that what we need is
> a way of specifying cell config outside of nvmem providers in some
> kind of structures. These tables would reference the provider by name
> and define the cells. Then we would have an additional lookup
> structure which would associate the consumer (by dev_id and con_id,
> where dev_id could optionally be NULL and where we would fall back to
> using con_id only) and the nvmem provider + cell together. Similarly
> to how GPIO consumers are associated with the gpiochip and hwnum. How
> does it sound?

Yes, sounds good. Correct me if I am wrong: you should be able to add the
new cells using struct nvmem_cell_info and add them to a particular
provider using nvmem_add_cells(). That sounds like exactly what
nvmem_add_lookup_table() would look like.

We should add a new nvmem_device_cell_get(nvmem, con_id) which would
return an nvmem cell specific to that provider. This cell can then be
used by the machine driver to read/write.
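To make the idea concrete, something along these lines; none of it exists
yet, the structure layout and names are only placeholders for the proposal:

/*
 * Illustrative sketch only: nothing below exists in the kernel as of this
 * thread.  The structure fields and function names are placeholders for
 * the proposal discussed above (machine-code cell definitions plus a
 * lookup tying a consumer dev_id/con_id to a provider's cell).
 */
#include <linux/list.h>
#include <linux/nvmem-consumer.h>
#include <linux/nvmem-provider.h>

struct nvmem_cell_lookup {
	const char	*nvmem_name;	/* provider the cell lives on */
	const char	*cell_name;	/* cell within that provider */
	const char	*dev_id;	/* consumer device, NULL = match any */
	const char	*con_id;	/* id the consumer passes to *_cell_get() */
	struct list_head node;
};

/* Machine code would register its table with the nvmem core. */
void nvmem_add_lookup_table(struct nvmem_cell_lookup *table, size_t nentries);

/*
 * Provider-scoped variant of nvmem_cell_get(): search only the cells
 * attached to @nvmem instead of the global cell list.
 */
struct nvmem_cell *nvmem_device_cell_get(struct nvmem_device *nvmem,
					 const char *con_id);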
>>>
>>> BTW: of_nvmem_cell_get() seems to always allocate an nvmem_cell
>>> instance even if the cell for this node was already added to the nvmem
>>> device.
>>
>> I hope you got the reason why of_nvmem_cell_get() always allocates a new
>> instance for every get!!
>
> I admit I didn't test it, but just from reading the code it seems like
> in nvmem_cell_get() for DT-users we'll always get to
> of_nvmem_cell_get() and in there we always end up calling line 873:
> cell = kzalloc(sizeof(*cell), GFP_KERNEL);
>

That is correct: this cell is created when we do a get and released when
we do a put().

thanks,
srini