Date: Mon, 27 Aug 2018 16:01:31 +0200
From: Boris Brezillon
To: Bartosz Golaszewski
Cc: Andrew Lunn, linux-doc, Sekhar Nori, Bartosz Golaszewski,
 Srinivas Kandagatla, linux-i2c, Mauro Carvalho Chehab, Rob Herring,
 Florian Fainelli, Kevin Hilman, Richard Weinberger, Russell King,
 Marek Vasut, Paolo Abeni, Dan Carpenter, Grygorii Strashko,
 David Lechner, Arnd Bergmann, Sven Van Asbroeck,
 "open list:MEMORY TECHNOLOGY...",
 Linux-OMAP, Linux ARM, Ivan Khoronzhuk, Greg Kroah-Hartman,
 Jonathan Corbet, Linux Kernel Mailing List, Lukas Wunner, Naren,
 netdev, Alban Bedel, Andrew Morton, Brian Norris, David Woodhouse,
 "David S . Miller"
Subject: Re: [PATCH v2 01/29] nvmem: add support for cell lookups
Message-ID: <20180827160131.58143824@bbrezillon>
In-Reply-To:
References: <20180810080526.27207-1-brgl@bgdev.pl>
 <20180810080526.27207-2-brgl@bgdev.pl>
 <20180824170848.29594318@bbrezillon>
 <20180824152740.GD27483@lunn.ch>
 <20180825082722.567e8c9a@bbrezillon>
 <20180827110055.122988d0@bbrezillon>

On Mon, 27 Aug 2018 15:37:23 +0200
Bartosz Golaszewski wrote:

> 2018-08-27 11:00 GMT+02:00 Boris Brezillon:
> > On Mon, 27 Aug 2018 10:56:29 +0200
> > Bartosz Golaszewski wrote:
> >
> >> 2018-08-25 8:27 GMT+02:00 Boris Brezillon:
> >> > On Fri, 24 Aug 2018 17:27:40 +0200
> >> > Andrew Lunn wrote:
> >> >
> >> >> On Fri, Aug 24, 2018 at 05:08:48PM +0200, Boris Brezillon wrote:
> >> >> > Hi Bartosz,
> >> >> >
> >> >> > On Fri, 10 Aug 2018 10:04:58 +0200
> >> >> > Bartosz Golaszewski wrote:
> >> >> >
> >> >> > > +struct nvmem_cell_lookup {
> >> >> > > +	struct nvmem_cell_info	info;
> >> >> > > +	struct list_head	list;
> >> >> > > +	const char		*nvmem_name;
> >> >> > > +};
> >> >> >
> >> >> > Hm, maybe I don't get it right, but this looks suspicious. Usually
> >> >> > the consumer lookup table is there to attach device-specific names
> >> >> > to external resources.
> >> >> >
> >> >> > So what I'd expect here is:
> >> >> >
> >> >> > struct nvmem_cell_lookup {
> >> >> >	/* The nvmem device name.
> >> >> >	 */
> >> >> >	const char *nvmem_name;
> >> >> >
> >> >> >	/* The nvmem cell name */
> >> >> >	const char *nvmem_cell_name;
> >> >> >
> >> >> >	/*
> >> >> >	 * The local resource name. Basically what you have in the
> >> >> >	 * nvmem-cell-names prop.
> >> >> >	 */
> >> >> >	const char *conid;
> >> >> > };
> >> >> >
> >> >> > struct nvmem_cell_lookup_table {
> >> >> >	struct list_head list;
> >> >> >
> >> >> >	/* ID of the consumer device. */
> >> >> >	const char *devid;
> >> >> >
> >> >> >	/* Array of cell lookup entries. */
> >> >> >	unsigned int ncells;
> >> >> >	const struct nvmem_cell_lookup *cells;
> >> >> > };
> >> >> >
> >> >> > Looks like your nvmem_cell_lookup is more something used to attach
> >> >> > cells to an nvmem device, which is the NVMEM provider's
> >> >> > responsibility, not the consumer's.
> >> >>
> >> >> Hi Boris,
> >> >>
> >> >> There are cases where there is no clear provider/consumer split. I
> >> >> have an x86 platform with a few at24 EEPROMs on it. It uses an
> >> >> off-the-shelf Kontron module placed on a custom carrier board. One
> >> >> of the EEPROMs contains the hardware variant information. Once I
> >> >> know the variant, I need to instantiate other I2C, SPI, and MDIO
> >> >> devices, all using platform devices, since this is x86 and no DT
> >> >> is available.
> >> >>
> >> >> So the first thing my x86 platform driver does is instantiate the
> >> >> first I2C device for the AT24. Once the EEPROM pops into existence,
> >> >> I need to add nvmem cells onto it. So at that point, the x86
> >> >> platform driver is playing the provider role. Once the cells are
> >> >> added, I can then use the nvmem consumer interfaces to get the
> >> >> contents of the cell, run a checksum, and instantiate the other
> >> >> devices.
> >> >>
> >> >> I wish the embedded world was all DT, but the reality is that it
> >> >> is not :-(
> >> >
> >> > Actually, I'm not questioning the need for this feature (being able
> >> > to attach NVMEM cells to an NVMEM device on a platform that does not
> >> > use DT). What I'm saying is that this functionality is
> >> > provider-related, not consumer-related. Also, I wonder whether
> >> > defining such NVMEM cells shouldn't go through the provider driver
> >> > instead of being passed directly to the NVMEM layer, because
> >> > nvmem_config already has fields to pass cells at registration time;
> >> > plus, the name of the NVMEM device is sometimes created dynamically
> >> > and can be hard to guess at platform_device registration time.
> >> >
> >>
> >> In my use case the provider is the at24 EEPROM driver. This is where
> >> the nvmem_config lives, but I can't imagine a correct and clean way
> >> of passing this cell config to the driver from board files without
> >> adding new, ugly fields to platform_data, which this very series is
> >> trying to remove. This is why this cell config should live in machine
> >> code.
> >
> > Okay.
> >
> >> > I also think non-DT consumers will need a way to reference existing
> >> > NVMEM cells, but this consumer-oriented nvmem cell lookup table
> >> > should look like the gpio or pwm lookup tables (basically what I
> >> > proposed in my previous email).
> >>
> >> How about introducing two new interfaces to nvmem: one for defining
> >> nvmem cells from machine code and a second for connecting these
> >> cells with devices?
> >
> > Yes, that's basically what I was suggesting: move what you've done
> > into nvmem-provider.h (maybe rename some of the structs to make it
> > clear that this is about defining cells, not referencing existing
> > ones), and add a new consumer interface (based on what other
> > subsystems do) in nvmem-consumer.h.
> >
> > This way you have both things clearly separated, and if a driver is
> > both a consumer and a provider, you'll just have to include both
> > headers.
> >
> > Regards,
> >
> > Boris
>
> I didn't notice it before, but there's a global list of nvmem cells,
> with each cell referencing its owner nvmem device. I'm wondering if
> this isn't some kind of inversion of ownership. Shouldn't each nvmem
> device have a separate list of the nvmem cells it owns? What happens
> if we have two nvmem providers with the same names for their cells?
> I'm asking because a dev_id-based lookup doesn't make sense if,
> internally, nvmem_cell_get_from_list() doesn't care about any device
> names (it takes only the cell_id as an argument).
>
> This doesn't cause any trouble now, since there are no users defining
> cells in nvmem_config - there are only DT users - but this must be
> clarified before I can advance with correctly implementing nvmem
> lookups.
>
> BTW: of_nvmem_cell_get() seems to always allocate an nvmem_cell
> instance even if the cell for this node was already added to the
> nvmem device.

Yep, I don't know whether that's done on purpose, but it's weird. I'd
expect cells to be instantiated at NVMEM registration time (and stored
in a list attached to the device) and then, anytime someone calls
nvmem_cell_get(), you would search that list for a match.