From: Bartosz Golaszewski
Date: Mon, 27 Aug 2018 15:37:23 +0200
Subject: Re: [PATCH v2 01/29] nvmem: add support for cell lookups
To: Boris Brezillon
Cc: Andrew Lunn, linux-doc, Sekhar Nori, Bartosz Golaszewski,
    Srinivas Kandagatla, linux-i2c, Mauro Carvalho Chehab, Rob Herring,
    Florian Fainelli, Kevin Hilman, Richard Weinberger, Russell King,
    Marek Vasut, Paolo Abeni, Dan Carpenter, Grygorii Strashko,
    David Lechner, Arnd Bergmann, Sven Van Asbroeck,
    "open list:MEMORY TECHNOLOGY...", Linux-OMAP, Linux ARM,
    Ivan Khoronzhuk, Greg Kroah-Hartman, Jonathan Corbet,
    Linux Kernel Mailing List, Lukas Wunner, Naren, netdev, Alban Bedel,
    Andrew Morton, Brian Norris, David Woodhouse, "David S. Miller"
In-Reply-To: <20180827110055.122988d0@bbrezillon>
References: <20180810080526.27207-1-brgl@bgdev.pl>
    <20180810080526.27207-2-brgl@bgdev.pl>
    <20180824170848.29594318@bbrezillon>
    <20180824152740.GD27483@lunn.ch>
    <20180825082722.567e8c9a@bbrezillon>
    <20180827110055.122988d0@bbrezillon>
X-Mailing-List: linux-kernel@vger.kernel.org

2018-08-27 11:00 GMT+02:00 Boris Brezillon:
> On Mon, 27 Aug 2018 10:56:29 +0200
> Bartosz Golaszewski wrote:
>
>> 2018-08-25 8:27 GMT+02:00 Boris Brezillon:
>> > On Fri, 24 Aug 2018 17:27:40 +0200
>> > Andrew Lunn wrote:
>> >
>> >> On Fri, Aug 24, 2018 at 05:08:48PM +0200, Boris Brezillon wrote:
>> >> > Hi Bartosz,
>> >> >
>> >> > On Fri, 10 Aug 2018 10:04:58 +0200
>> >> > Bartosz Golaszewski wrote:
>> >> >
>> >> > > +struct nvmem_cell_lookup {
>> >> > > +	struct nvmem_cell_info info;
>> >> > > +	struct list_head list;
>> >> > > +	const char *nvmem_name;
>> >> > > +};
>> >> >
>> >> > Hm, maybe I don't get it right, but this looks suspicious. Usually
>> >> > the consumer lookup table is there to attach device-specific names
>> >> > to external resources.
>> >> >
>> >> > So what I'd expect here is:
>> >> >
>> >> > struct nvmem_cell_lookup {
>> >> > 	/* The nvmem device name. */
>> >> > 	const char *nvmem_name;
>> >> >
>> >> > 	/* The nvmem cell name. */
>> >> > 	const char *nvmem_cell_name;
>> >> >
>> >> > 	/*
>> >> > 	 * The local resource name. Basically what you have in the
>> >> > 	 * nvmem-cell-names prop.
>> >> > 	 */
>> >> > 	const char *conid;
>> >> > };
>> >> >
>> >> > struct nvmem_cell_lookup_table {
>> >> > 	struct list_head list;
>> >> >
>> >> > 	/* ID of the consumer device. */
>> >> > 	const char *devid;
>> >> >
>> >> > 	/* Array of cell lookup entries. */
>> >> > 	unsigned int ncells;
>> >> > 	const struct nvmem_cell_lookup *cells;
>> >> > };
>> >> >
>> >> > Looks like your nvmem_cell_lookup is more something used to attach
>> >> > cells to an nvmem device, which is the NVMEM provider's
>> >> > responsibility, not the consumer's.
>> >>
>> >> Hi Boris,
>> >>
>> >> There are cases where there is not a clear provider/consumer split. I
>> >> have an x86 platform with a few at24 EEPROMs on it.
>> >> It uses an off-the-shelf Kontron module placed on a custom carrier
>> >> board. One of the EEPROMs contains the hardware variant information.
>> >> Once I know the variant, I need to instantiate other I2C, SPI and
>> >> MDIO devices, all using platform devices, since this is x86 and no
>> >> DT is available.
>> >>
>> >> So the first thing my x86 platform device does is instantiate the
>> >> first I2C device for the AT24. Once the EEPROM pops into existence, I
>> >> need to add nvmem cells onto it. So at that point the x86 platform
>> >> driver is playing the provider role. Once the cells are added, I can
>> >> then use the nvmem consumer interfaces to get the contents of the
>> >> cell, run a checksum, and instantiate the other devices.
>> >>
>> >> I wish the embedded world was all DT, but the reality is that it is
>> >> not :-(
>> >
>> > Actually, I'm not questioning the need for this feature (being able
>> > to attach NVMEM cells to an NVMEM device on a platform that does not
>> > use DT). What I'm saying is that this functionality is
>> > provider-related, not consumer-related. Also, I wonder if defining
>> > such NVMEM cells shouldn't go through the provider driver instead of
>> > being passed directly to the NVMEM layer, because nvmem_config
>> > already has fields to pass cells at registration time. Plus, the
>> > name of the NVMEM cell device is sometimes created dynamically and
>> > can be hard to guess at platform_device registration time.
>> >
>>
>> In my use case the provider is the at24 EEPROM driver. This is where
>> nvmem_config lives, but I can't imagine a correct and clean way of
>> passing this cell config to the driver from board files without adding
>> new, ugly fields to platform_data, which this very series is trying to
>> remove. This is why this cell config should live in machine code.
>
> Okay.
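[Editorial aside: the "cells defined from machine code" idea being argued for above can be sketched in plain userspace C. Everything below is a hypothetical mock-up, not the kernel API under review; the struct layouts and the names nvmem_add_cells()/nvmem_find_cell() are illustrative assumptions only.]

```c
/*
 * Hypothetical sketch: board (machine) code defines a cell layout
 * for a named nvmem provider and attaches it once the provider
 * exists, keeping the cells on a per-device list. Illustrative
 * only -- not the real kernel interface.
 */
#include <stddef.h>
#include <string.h>

struct nvmem_cell_info {
	const char *name;
	unsigned int offset;	/* byte offset into the device */
	unsigned int bytes;	/* cell size in bytes */
};

struct nvmem_device {
	const char *name;
	size_t ncells;
	const struct nvmem_cell_info *cells;	/* owned by this device */
};

/* Cell layout a board file would define for its variant EEPROM. */
static const struct nvmem_cell_info board_cells[] = {
	{ .name = "hw-variant", .offset = 0x00, .bytes = 4 },
	{ .name = "serial",     .offset = 0x10, .bytes = 8 },
};

/* Machine code attaches its cell table when the provider appears. */
static void nvmem_add_cells(struct nvmem_device *nvmem,
			    const struct nvmem_cell_info *cells,
			    size_t ncells)
{
	nvmem->cells = cells;
	nvmem->ncells = ncells;
}

/* Look a cell up on one specific device, not on a global list. */
static const struct nvmem_cell_info *
nvmem_find_cell(const struct nvmem_device *nvmem, const char *name)
{
	size_t i;

	for (i = 0; i < nvmem->ncells; i++)
		if (!strcmp(nvmem->cells[i].name, name))
			return &nvmem->cells[i];
	return NULL;
}
```

Because the cells hang off the device rather than off platform_data, the at24 driver itself never has to know about them, which is the separation Bartosz is asking for.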
>
>> > I also think non-DT consumers will need a way to reference existing
>> > NVMEM cells, but this consumer-oriented nvmem cell lookup table
>> > should look like the gpio or pwm lookup tables (basically what I
>> > proposed in my previous email).
>>
>> How about introducing two new interfaces to nvmem: one for defining
>> nvmem cells from machine code and a second for connecting these cells
>> with devices?
>
> Yes, that's basically what I was suggesting: move what you've done in
> nvmem-provider.h (maybe rename some of the structs to make it clear
> that this is about defining cells, not referencing existing ones), and
> add a new consumer interface (based on what other subsystems do) in
> nvmem-consumer.h.
>
> This way you have both things clearly separated, and if a driver is
> both a consumer and a provider, you just have to include both headers.
>
> Regards,
>
> Boris

I didn't notice it before, but there's a global list of nvmem cells,
with each cell referencing its owner nvmem device. I'm wondering if
this isn't some kind of inversion of ownership. Shouldn't each nvmem
device have a separate list of the nvmem cells it owns? What happens if
we have two nvmem providers using the same names for their cells? I'm
asking because dev_id-based lookup doesn't make sense if internally
nvmem_cell_get_from_list() doesn't care about any device names (it
takes only the cell_id as its argument).

This doesn't cause any trouble now, since there are no users defining
cells in nvmem_config - there are only DT users - but this must be
clarified before I can advance with correctly implementing nvmem
lookups.

BTW: of_nvmem_cell_get() seems to always allocate an nvmem_cell
instance, even if the cell for this node was already added to the nvmem
device.

Bart
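[Editorial aside: the consumer-side lookup Boris describes, modeled on the GPIO/PWM lookup tables, could look roughly like the sketch below. This is a hypothetical userspace mock-up; all names (devid, conid, nvmem_lookup_cell(), the table contents) are assumptions, not the eventual nvmem-consumer.h interface.]

```c
/*
 * Hypothetical sketch of a consumer-oriented lookup table in the
 * style of gpiod/pwm lookup tables. Board code registers one table
 * per consumer device; the core resolves a (devid, conid) pair to
 * a cell on a named provider. Illustrative only.
 */
#include <stddef.h>
#include <string.h>

struct nvmem_cell_lookup {
	const char *nvmem_name;		/* provider device name */
	const char *nvmem_cell_name;	/* cell name on that provider */
	const char *conid;		/* local (consumer-side) name */
};

struct nvmem_cell_lookup_table {
	const char *devid;		/* consumer device name */
	size_t ncells;
	const struct nvmem_cell_lookup *cells;
};

/* Example board-file table for a hypothetical "board-detect" device. */
static const struct nvmem_cell_lookup board_lookups[] = {
	{ "at24-0", "hw-variant",  "variant" },
	{ "at24-0", "mac-address", "mac" },
};

static const struct nvmem_cell_lookup_table board_table = {
	.devid	= "board-detect",
	.ncells	= 2,
	.cells	= board_lookups,
};

/*
 * Resolve a (devid, conid) pair to the provider cell name. Keying
 * the search on both the consumer and the provider name is what
 * avoids clashes when two providers define identically named cells,
 * which is exactly the ambiguity a single global cell list has.
 */
static const char *
nvmem_lookup_cell(const struct nvmem_cell_lookup_table *table,
		  const char *devid, const char *conid)
{
	size_t i;

	if (strcmp(table->devid, devid))
		return NULL;
	for (i = 0; i < table->ncells; i++)
		if (!strcmp(table->cells[i].conid, conid))
			return table->cells[i].nvmem_cell_name;
	return NULL;
}
```

A consumer would then ask for its local name only ("variant"), and the table, not a global cell list, decides which provider and which cell that maps to.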