Subject: Re: [PATCH v2 06/29] mtd: Add support for reading MTD devices via the nvmem API
From: Srinivas Kandagatla
To: Boris Brezillon, Alban
Cc: Bartosz Golaszewski, Jonathan Corbet, Sekhar Nori, Kevin Hilman,
    Russell King, Arnd Bergmann, Greg Kroah-Hartman, David Woodhouse,
    Brian Norris, Marek Vasut, Richard Weinberger, Grygorii Strashko,
    David S. Miller, Naren, Mauro Carvalho Chehab, Andrew Morton,
    Lukas Wunner, Dan Carpenter, Florian Fainelli, Ivan Khoronzhuk,
    Sven Van Asbroeck, Paolo Abeni, Rob Herring, David Lechner,
    Andrew Lunn, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-i2c@vger.kernel.org,
    linux-mtd@lists.infradead.org, linux-omap@vger.kernel.org,
    netdev@vger.kernel.org, Bartosz Golaszewski
Date: Mon, 20 Aug 2018 11:43:34 +0100
Message-ID: <5b8c30b8-41e1-d59e-542b-fef6c6469ff0@linaro.org>
In-Reply-To: <20180819184609.6dcdbb9a@bbrezillon>
References: <20180810080526.27207-1-brgl@bgdev.pl>
 <20180810080526.27207-7-brgl@bgdev.pl>
 <20180817182720.6a6e5e8e@bbrezillon>
 <20180819133106.0420df5f@tock>
 <20180819184609.6dcdbb9a@bbrezillon>
X-Mailing-List: linux-kernel@vger.kernel.org

Thanks, Boris, for looking into this in more detail.

On 19/08/18 17:46, Boris Brezillon wrote:
> On Sun, 19 Aug 2018 13:31:06 +0200
> Alban wrote:
>
>> On Fri, 17 Aug 2018 18:27:20 +0200
>> Boris Brezillon wrote:
>>
>>> Hi Bartosz,
>>>
>>> On Fri, 10 Aug 2018 10:05:03 +0200
>>> Bartosz Golaszewski wrote:
>>>
>>>> From: Alban Bedel
>>>>
>>>> Allow drivers that use the nvmem API to read data stored on MTD
>>>> devices. For this, the MTD devices are registered as read-only
>>>> NVMEM providers.
>>>>
>>>> Signed-off-by: Alban Bedel
>>>> [Bartosz:
>>>>   - use the managed variant of nvmem_register(),
>>>>   - set the nvmem name]
>>>> Signed-off-by: Bartosz Golaszewski
>>>
>>> What happened to the two other patches of Alban's series? I'd really
>>> like the DT case to be handled/agreed on in the same patchset, but
>>> IIRC Alban and Srinivas disagreed on how it should be represented.
>>> I hope this time we'll come to an agreement, because the MTD <-> NVMEM
>>> glue has been floating around for quite some time...
>>
>> Those other patches were meant to fix what I consider a fundamental
>> flaw in the generic NVMEM bindings, but we couldn't agree on that
>> point. Bartosz later contacted me to take over this series, and I
>> suggested simply changing the MTD NVMEM binding to use a compatible
>> string on the NVMEM cells as an alternative way to fix the clash with
>> the old-style MTD partitions.
>>
>> However, none of this has any impact on the code needed to add NVMEM
>> support to MTD, so the above patch didn't change at all.
>
> It does have an impact on the supported binding, though.
> nvmem->dev.of_node is automatically set to mtd->dev.of_node, which
> means people will be able to define their NVMEM cells directly under
> the MTD device node and reference them from other nodes (even if it's
> not documented), and, as you said, that conflicts with the old MTD
> partition bindings. So we'd better agree on this binding before
> merging this patch.

Yes, I agree with you!

> I see several options:
>
> 1/ Provide a way to tell the NVMEM framework not to use parent->of_node
>    even if it is != NULL. This way we really don't support defining
>    NVMEM cells in the DT, and we also don't support referencing the
>    nvmem device using a phandle.

The other options look much better than this one!
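For anyone who hasn't looked at the glue yet, the behaviour all of these
options are trying to control is roughly the one below. This is only my
simplified sketch of the proposed MTD glue plus what I understand the
current nvmem core to do; mtd_nvmem_add() and mtd_nvmem_reg_read() are
names I made up for illustration (not necessarily the ones in the
patch), and the of_node line in the comment is a paraphrase of the core,
not a quote:

	#include <linux/err.h>
	#include <linux/mtd/mtd.h>
	#include <linux/nvmem-provider.h>

	/* Assumed read helper: just forwards to mtd_read(). */
	static int mtd_nvmem_reg_read(void *priv, unsigned int offset,
				      void *val, size_t bytes)
	{
		struct mtd_info *mtd = priv;
		size_t retlen;

		return mtd_read(mtd, offset, bytes, &retlen, val);
	}

	/* Simplified sketch only -- not a quote of the patch. */
	static int mtd_nvmem_add(struct mtd_info *mtd)
	{
		struct nvmem_device *nvmem;
		struct nvmem_config config = {
			.dev = &mtd->dev,
			.name = mtd->name,	/* "set the nvmem name" */
			.read_only = true,
			.reg_read = mtd_nvmem_reg_read,
			.priv = mtd,
			.size = mtd->size,
			.word_size = 1,
			.stride = 1,
		};

		/*
		 * The managed variant registers the provider, and the core
		 * then effectively does:
		 *
		 *	nvmem->dev.of_node = config->dev->of_node;
		 *
		 * so every child node of the MTD node becomes a candidate
		 * nvmem cell -- including old-style partition subnodes,
		 * which is exactly the clash discussed above.
		 */
		nvmem = devm_nvmem_register(&mtd->dev, &config);
		return PTR_ERR_OR_ZERO(nvmem);
	}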
> 2/ Define a new binding where all nvmem cells are placed in an
>    "nvmem" subnode (just like we have the "partitions" subnode for
>    partitions), and then add a config->of_node field so that the
>    nvmem provider can explicitly specify the DT node representing the
>    nvmem device. We'd also need to set this field to ERR_PTR(-ENOENT)
>    when the node does not exist, so that the nvmem framework knows it
>    should not assign nvmem->dev.of_node from parent->of_node.

This one looks promising. One question, though: do we expect nvmem
cells to live inside any of the partitions, or are nvmem cells only
valid for the unpartitioned area? I'm sure the nvmem cells will end up
in multiple partitions, so is it OK for some parts of a partition to be
described in a separate subnode? I'd like this case to be considered
too.

> 3/ Only declare partitions as nvmem providers. This would solve the
>    problem we have with partitions defined in the DT, since defining
>    sub-partitions in the DT is not (yet?) supported and partition
>    nodes are supposed to be leaf nodes. Still, I'm not a big fan of
>    this solution because it would prevent us from supporting
>    sub-partitions if we ever want/need to.

This one is going to come back at some point, so it's better we don't
pick an option that rules it out.

> 4/ Add an ->of_xlate() hook that would be called, if present, by the
>    framework instead of using the default parsing we have right now.

This looks much cleaner! We could hook that up under
__nvmem_device_get() to do the translation.

> 5/ Tell the nvmem framework the name of the subnode containing the
>    nvmem cell definitions (if NULL, that means cells are defined
>    directly under the nvmem provider node). We would set it to
>    "nvmem-cells" (or whatever you like) for the MTD case.

Option 2 looks better than this.

> There are probably other options (some were proposed by Alban and
> Srinivas already), but I'd like to get this sorted out before we
> merge this patch.
>
> Alban, Srinivas, any opinion?

Overall, I'm still not able to clearly visualize how the MTD bindings
with nvmem cells would look in both the partitioned and unpartitioned
use cases; an example DT would be nice here!

Option 4 looks like the most generic solution to me; maybe we should
try it once the bindings on the MTD side w.r.t. nvmem cells are
decided.

Thanks,
Srini
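P.S. To make option 4 a bit more concrete, here is a purely
hypothetical sketch of what such a hook could look like on the MTD
side. Nothing below exists today: the hook type, the "nvmem-cells"
subnode name and mtd_nvmem_of_xlate() are all invented for
illustration, and the final shape would obviously depend on the
binding we agree on:

	#include <linux/nvmem-consumer.h>	/* struct nvmem_cell_info */
	#include <linux/of.h>

	/*
	 * Hypothetical provider hook (does NOT exist in the nvmem API
	 * today): given the DT node of a referenced cell, let the provider
	 * translate it into an (offset, length) pair instead of relying on
	 * the core's default parsing under nvmem->dev.of_node.
	 */
	typedef int (*nvmem_of_xlate_t)(void *priv,
					const struct device_node *cell_np,
					struct nvmem_cell_info *info);

	/*
	 * Invented MTD-side implementation: only accept cells that sit
	 * under an "nvmem-cells" subnode of the MTD/partition node, so
	 * old-style partition subnodes are never mistaken for cells.
	 */
	static int mtd_nvmem_of_xlate(void *priv,
				      const struct device_node *cell_np,
				      struct nvmem_cell_info *info)
	{
		u32 reg[2];
		int ret;

		if (!cell_np->parent ||
		    of_node_cmp(cell_np->parent->name, "nvmem-cells"))
			return -EINVAL;

		ret = of_property_read_u32_array(cell_np, "reg", reg, 2);
		if (ret)
			return ret;

		info->name = cell_np->name;
		info->offset = reg[0];
		info->bytes = reg[1];
		return 0;
	}

If the core called such a hook when resolving cells (e.g. somewhere
around __nvmem_device_get()/of_nvmem_cell_get(), as suggested above)
instead of blindly treating children of the provider's of_node as
cells, the clash with the old MTD partition binding would go away
regardless of where that of_node points.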