Date: Fri, 21 Jul 2023 14:03:08 +0100
From: Daniel Golle
To: Greg Kroah-Hartman
Cc: Christoph Hellwig, Jens Axboe, Ulf Hansson, Miquel Raynal,
	Richard Weinberger, Vignesh Raghavendra, Dave Chinner,
	Matthew Wilcox, Thomas Weißschuh, Jan Kara, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter,
	Hannes Reinecke, Jack Wang, Florian Fainelli, Yeqi Fu,
	Avri Altman, Hans de Goede, Ye Bin, Rafał Miłecki,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: Re: [RFC PATCH 6/6] block: implement NVMEM provider
Message-ID:
References: <2023072128-shadow-system-1903@gregkh> <2023072106-partly-thank-8657@gregkh>
In-Reply-To: <2023072106-partly-thank-8657@gregkh>

On Fri, Jul 21, 2023 at 01:39:18PM +0200, Greg Kroah-Hartman wrote:
> On Fri, Jul 21, 2023 at 12:30:10PM +0100, Daniel Golle wrote:
> > On Fri, Jul 21, 2023 at 01:11:40PM +0200, Greg Kroah-Hartman wrote:
> > > On Fri, Jul 21, 2023 at 11:40:51AM +0100, Daniel Golle wrote:
> > > > On Thu, Jul 20, 2023 at 11:31:06PM -0700, Christoph Hellwig wrote:
> > > > > On Thu, Jul 20, 2023 at 05:02:32PM +0100, Daniel Golle wrote:
> > > > > > On Thu, Jul 20, 2023 at 12:04:43AM -0700, Christoph Hellwig wrote:
> > > > > > > The layering here is exactly the wrong way around. This block device
> > > > > > > as nvmem provider has no business sitting in the block layer and being
> > > > > > > keyed off the gendisk registration. Instead you should create a new
> > > > > > > nvmem backend that opens the block device as needed if it fits your
> > > > > > > OF description, without any changes to the core block layer.
> > > > > >
> > > > > > Ok. I will use a class_interface instead.
> > > > >
> > > > > I'm not sure a class_interface makes much sense here. Why does the
> > > > > block layer even need to know about you using a device as nvmem provider?
> > > >
> > > > It doesn't. But it has to notify the nvmem-providing driver about the
> > > > addition of new block devices. This is what I'm using class_interface
> > > > for, simply to hook into .add_dev of the block_class.
> > >
> > > Why is this single type of block device special to require this, yet all
> > > others do not? Encoding this into the block layer feels like a huge
> > > layering violation to me, why not do it how all other block drivers do
> > > it instead?
> >
> > I was thinking of this as a generic solution in no way tied to one specific
> > type of block device. *Any* internal block device which can be used to
> > boot from should also be usable as an NVMEM provider, imho.
>
> Define "internal" :)

Exactly, it could be anything: MMC, SATA, even USB. As long as there is
a device tree node associated with the device, it can be referenced as
an NVMEM provider, and that's what this series tries to implement.

> And that's all up to the boot process in userspace, the kernel doesn't
> care about this.

So let's look at any random embedded Linux router or Wi-Fi AP: typically
we have some kind of NAND or NOR flash memory with a hard-coded
partitioning, dedicating different areas of the flash to the bootloader,
the bootloader environment, factory data (such as MAC addresses and
Wi-Fi EEPROMs), a Linux kernel, a root filesystem, and some way to store
user data. Things are not that different when using an eMMC instead of
NOR flash; from the perspective of the OS the main difference is just
that the eMMC is larger and represented as a block device, and typically
MBR or GPT partitioning is used.

> > > > > As far as I can tell your provider should layer entirely above the
> > > > > block layer and not have to be integrated with it.
> > > >
> > > > My approach using class_interface doesn't require any changes to be
> > > > made to existing block code. However, it does use block_class. If
> > > > you see any other good option to implement matching and usage of
> > > > block devices by in-kernel users, please let me know.
> > >
> > > Do not use block_class, again, that should only be for the block core to
> > > touch. Individual block drivers should never be poking around in it.
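[For readers following along, a rough illustration of the class_interface
approach described above. This is not code from the series under
discussion; all identifiers except the driver-core API are made up, and
the .add_dev prototype is the one used by kernels of this era.]

```c
/* Illustrative sketch only: a class_interface registered against
 * block_class gets its .add_dev callback invoked once for every
 * already-registered device (cold-plug) and again for each device
 * added later (hot-plug). Note block_class also contains partition
 * devices, so a real implementation would have to filter those. */
#include <linux/device.h>
#include <linux/blkdev.h>

static int blk_nvmem_add_dev(struct device *dev)
{
	struct gendisk *disk = dev_to_disk(dev);

	/* here one would look at the disk's device tree node and, if it
	 * is described as an NVMEM provider, register it as such */
	pr_info("blk-nvmem: saw disk %s\n", disk->disk_name);
	return 0;
}

static struct class_interface blk_nvmem_interface = {
	.class   = &block_class,
	.add_dev = blk_nvmem_add_dev,
};

static int __init blk_nvmem_init(void)
{
	return class_interface_register(&blk_nvmem_interface);
}
```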
> >
> > Do I have any other options to coldplug and be notified about newly
> > added block devices, so the block-device-consuming driver can know
> > about them?
>
> What other options do you need?

Either calls for adding/removing the NVMEM devices need to be added to
block/genhd.c and block/partitions/core.c, and if that's not acceptable
(see below), then we'd need something like what register_mtd_user() is
for MTD: a way for a driver to cold-plug existing block devices and be
notified about newly added ones.

> > This is not a rhetorical question, I've been looking for other ways
> > and haven't found anything better than class_find_device or
> > class_interface.
>
> Never use that, sorry, that's not for a driver to touch.
>
> > Using those also prevents blk-nvmem from being built as
> > a module, so I'd really like to find alternatives.
> > E.g. for MTD we got struct mtd_notifier and register_mtd_user().
>
> Your storage/hardware driver should be the thing that "finds block
> devices" and registers them with the block class core, right? After
> that, what matters?

Well, I'd say: make the device available as an NVMEM provider, as it
could be needed e.g. for the Ethernet driver to know the MAC address to
be used. This is how it is done for MTD devices here:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/mtd/mtdcore.c#n723
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/mtd/mtdcore.c#n804

... and then used, for example, here:

https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/tree/arch/arm64/boot/dts/marvell/armada-3720-gl-mv1000.dts#n227

The idea to use a class_interface at all is a result of the criticism by
Christoph Hellwig that the series requires changes to be made to the
block subsystem, i.e. calls to register and unregister the NVMEM
provider. Yet another way would obviously be to let the
block-device-providing driver (e.g. drivers/mmc/core/block.c) register
the NVMEM provider, but that would then have to be re-implemented for
every relevant block driver, which also seems wrong.
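
[For comparison, the existing MTD consumer mechanism referred to above.
The mtd_notifier/register_mtd_user() API is real; the callback names are
illustrative. register_mtd_user() calls .add for every already-present
MTD device as well as for all devices added afterwards, which is exactly
the cold-plug + hot-plug behaviour being asked for on the block side.]

```c
/* Sketch of an MTD consumer using the notifier API from
 * drivers/mtd/mtdcore.c; callback names are made up. */
#include <linux/init.h>
#include <linux/mtd/mtd.h>

static void example_mtd_add(struct mtd_info *mtd)
{
	pr_info("mtd user: add %s\n", mtd->name);
}

static void example_mtd_remove(struct mtd_info *mtd)
{
	pr_info("mtd user: remove %s\n", mtd->name);
}

static struct mtd_notifier example_notifier = {
	.add    = example_mtd_add,
	.remove = example_mtd_remove,
};

static int __init example_init(void)
{
	register_mtd_user(&example_notifier);
	return 0;
}

static void __exit example_exit(void)
{
	unregister_mtd_user(&example_notifier);
}
```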