2024-03-05 20:59:27

by Daniel Golle

[permalink] [raw]
Subject: [RFC PATCH v2 0/8] nvmem: add block device NVMEM provider

On embedded devices using an eMMC it is common that one or more (hw/sw)
partitions on the eMMC are used to store MAC addresses and Wi-Fi
calibration EEPROM data.

Implement an NVMEM provider backed by block devices as typically the
NVMEM framework is used to have kernel drivers read and use binary data
from EEPROMs, efuses, flash memory (MTD), ...
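
For reference, this is the kind of consumer-side code the provider is
meant to serve; a minimal sketch using the existing NVMEM consumer API
(the device and the "mac-address" cell name are just placeholders):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/nvmem-consumer.h>
#include <linux/slab.h>
#include <linux/string.h>

/* Sketch only: how e.g. an Ethernet driver typically picks up a MAC
 * address from an NVMEM cell, regardless of which provider backs it. */
static int example_read_mac(struct device *dev, u8 mac[6])
{
        struct nvmem_cell *cell;
        size_t len;
        u8 *buf;

        cell = nvmem_cell_get(dev, "mac-address");      /* placeholder cell name */
        if (IS_ERR(cell))
                return PTR_ERR(cell);

        buf = nvmem_cell_read(cell, &len);
        nvmem_cell_put(cell);
        if (IS_ERR(buf))
                return PTR_ERR(buf);

        if (len >= 6)
                memcpy(mac, buf, 6);
        kfree(buf);

        return len >= 6 ? 0 : -EINVAL;
}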

In order to be able to reference hardware partitions on an eMMC, add code
to bind each hardware partition to a specific firmware subnode.

This series is meant to open the discussion on how exactly the device
tree schema for block devices and partitions should look, and even
whether using the block layer to back the NVMEM device is the way to go
at all -- to me it seemed to be a good solution because it will be
reusable e.g. for (normal, software GPT or MBR) partitions of an NVMe SSD.

This series was previously submitted on July 19th 2023[1] and most of
the basic idea has not changed since.

However, the recent introduction of bdev_file_open_by_dev() allows us to
get rid of most uses of block layer internals, which supposedly was the
main objection raised by Christoph Hellwig back then.
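
To illustrate what that buys us, here is a rough sketch (not the literal
patch code, which lives in block/blk-nvmem.c) of how the provider's
reg_read callback can now be written purely on top of the file-based
block device API, opening the backing device per read for simplicity:

#include <linux/blkdev.h>
#include <linux/err.h>
#include <linux/file.h>
#include <linux/fs.h>

static int blk_nvmem_reg_read_sketch(void *priv, unsigned int offset,
                                     void *val, size_t bytes)
{
        /* assume the backing block device was stashed as priv at registration */
        struct block_device *bdev = priv;
        struct file *bdev_file;
        loff_t pos = offset;
        ssize_t ret;

        bdev_file = bdev_file_open_by_dev(bdev->bd_dev, BLK_OPEN_READ,
                                          NULL, NULL);
        if (IS_ERR(bdev_file))
                return PTR_ERR(bdev_file);

        ret = kernel_read(bdev_file, val, bytes, &pos);
        fput(bdev_file);

        if (ret < 0)
                return ret;

        return ret == bytes ? 0 : -EIO;
}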

Most of the other comments received on the first RFC have also
been addressed; however, what remains is the use of class_interface
(lacking an alternative way to get notifications about addition or
removal of block devices from the system). As this has been criticized
in the past I'm specifically interested in suggestions on how to solve
this in another way -- ideally without having to implement a whole new
way for in-kernel notifications of appearing or disappearing block
devices...
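
For reference, the class_interface hookup in question looks roughly like
this (sketch only, callback bodies omitted); add_dev()/remove_dev() fire
for every device on block_class -- whole disks as well as partitions,
both for devices already present at registration time and for ones
added later:

#include <linux/blkdev.h>
#include <linux/device/class.h>
#include <linux/module.h>

static int blk_nvmem_add_dev(struct device *dev)
{
        /* check for GENHD_FL_NVMEM / an assigned fwnode here and
         * register an nvmem_device for matching disks */
        return 0;
}

static void blk_nvmem_remove_dev(struct device *dev)
{
        /* unregister the corresponding nvmem_device, if any */
}

static struct class_interface blk_nvmem_interface = {
        .class      = &block_class,
        .add_dev    = blk_nvmem_add_dev,
        .remove_dev = blk_nvmem_remove_dev,
};

static int __init blk_nvmem_init(void)
{
        return class_interface_register(&blk_nvmem_interface);
}
module_init(blk_nvmem_init);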

And, in a way just like in the case of MTD and UBI, I believe acting as
an NVMEM provider *is* functionality which belongs to the block layer
itself and, unlike e.g. filesystems, is inconvenient to implement
elsewhere.

[1]: https://patchwork.kernel.org/project/linux-block/list/?series=767565

Daniel Golle (8):
dt-bindings: block: add basic bindings for block devices
block: partitions: populate fwnode
block: add new genhd flag GENHD_FL_NVMEM
block: implement NVMEM provider
dt-bindings: mmc: mmc-card: add block device nodes
mmc: core: set card fwnode_handle
mmc: block: set fwnode of disk devices
mmc: block: set GENHD_FL_NVMEM

.../bindings/block/block-device.yaml | 22 +++
.../devicetree/bindings/block/partition.yaml | 51 +++++
.../devicetree/bindings/block/partitions.yaml | 20 ++
.../devicetree/bindings/mmc/mmc-card.yaml | 45 +++++
block/Kconfig | 9 +
block/Makefile | 1 +
block/blk-nvmem.c | 175 ++++++++++++++++++
block/partitions/core.c | 41 ++++
drivers/mmc/core/block.c | 8 +
drivers/mmc/core/bus.c | 2 +
include/linux/blkdev.h | 2 +
11 files changed, 376 insertions(+)
create mode 100644 Documentation/devicetree/bindings/block/block-device.yaml
create mode 100644 Documentation/devicetree/bindings/block/partition.yaml
create mode 100644 Documentation/devicetree/bindings/block/partitions.yaml
create mode 100644 block/blk-nvmem.c

--
2.44.0


2024-03-12 12:32:23

by Ulf Hansson

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/8] nvmem: add block device NVMEM provider

On Tue, 5 Mar 2024 at 21:23, Daniel Golle <[email protected]> wrote:
>
> On embedded devices using an eMMC it is common that one or more (hw/sw)
> partitions on the eMMC are used to store MAC addresses and Wi-Fi
> calibration EEPROM data.
>
> Implement an NVMEM provider backed by block devices as typically the
> NVMEM framework is used to have kernel drivers read and use binary data
> from EEPROMs, efuses, flash memory (MTD), ...
>
> In order to be able to reference hardware partitions on an eMMC, add code
> to bind each hardware partition to a specific firmware subnode.
>
> This series is meant to open the discussion on how exactly the device
> tree schema for block devices and partitions should look, and even
> whether using the block layer to back the NVMEM device is the way to go
> at all -- to me it seemed to be a good solution because it will be
> reusable e.g. for (normal, software GPT or MBR) partitions of an NVMe SSD.
>
> This series was previously submitted on July 19th 2023[1] and most of
> the basic idea has not changed since.
>
> However, the recent introduction of bdev_file_open_by_dev() allows us to
> get rid of most uses of block layer internals, which supposedly was the
> main objection raised by Christoph Hellwig back then.
>
> Most of the other comments received on the first RFC have also
> been addressed; however, what remains is the use of class_interface
> (lacking an alternative way to get notifications about addition or
> removal of block devices from the system). As this has been criticized
> in the past I'm specifically interested in suggestions on how to solve
> this in another way -- ideally without having to implement a whole new
> way for in-kernel notifications of appearing or disappearing block
> devices...
>
> And, in a way just like in the case of MTD and UBI, I believe acting as
> an NVMEM provider *is* functionality which belongs to the block layer
> itself and, unlike e.g. filesystems, is inconvenient to implement
> elsewhere.

I don't object to the above, however to keep things scalable at the
block device driver level, such as the MMC subsystem, I think we
should avoid having *any* knowledge about the binary format at these
kinds of lower levels.

Even if most of the NVMEM format is managed elsewhere, the support for
NVMEM partitions seems to be dealt with from the MMC subsystem too.
Why can't NVMEM partitions be managed the usual way via the MBR/GPT?

[...]

Kind regards
Uffe

2024-03-12 12:32:31

by Daniel Golle

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/8] nvmem: add block device NVMEM provider

Hi Ulf,

On Tue, Mar 12, 2024 at 01:22:49PM +0100, Ulf Hansson wrote:
> On Tue, 5 Mar 2024 at 21:23, Daniel Golle <[email protected]> wrote:
> >
> > On embedded devices using an eMMC it is common that one or more (hw/sw)
> > partitions on the eMMC are used to store MAC addresses and Wi-Fi
> > calibration EEPROM data.
> >
> > Implement an NVMEM provider backed by block devices as typically the
> > NVMEM framework is used to have kernel drivers read and use binary data
> > from EEPROMs, efuses, flash memory (MTD), ...
> >
> > In order to be able to reference hardware partitions on an eMMC, add code
> > to bind each hardware partition to a specific firmware subnode.
> >
> > This series is meant to open the discussion on how exactly the device
> > tree schema for block devices and partitions should look, and even
> > whether using the block layer to back the NVMEM device is the way to go
> > at all -- to me it seemed to be a good solution because it will be
> > reusable e.g. for (normal, software GPT or MBR) partitions of an NVMe SSD.
> >
> > This series was previously submitted on July 19th 2023[1] and most of
> > the basic idea has not changed since.
> >
> > However, the recent introduction of bdev_file_open_by_dev() allows us to
> > get rid of most uses of block layer internals, which supposedly was the
> > main objection raised by Christoph Hellwig back then.
> >
> > Most of the other comments received on the first RFC have also
> > been addressed; however, what remains is the use of class_interface
> > (lacking an alternative way to get notifications about addition or
> > removal of block devices from the system). As this has been criticized
> > in the past I'm specifically interested in suggestions on how to solve
> > this in another way -- ideally without having to implement a whole new
> > way for in-kernel notifications of appearing or disappearing block
> > devices...
> >
> > And, in a way just like in the case of MTD and UBI, I believe acting as
> > an NVMEM provider *is* functionality which belongs to the block layer
> > itself and, unlike e.g. filesystems, is inconvenient to implement
> > elsewhere.
>
> I don't object to the above, however to keep things scalable at the
> block device driver level, such as the MMC subsystem, I think we
> should avoid having *any* knowledge about the binary format at these
> kinds of lower levels.
>
> Even if most of the NVMEM format is managed elsewhere, the support for
> NVMEM partitions seems to be dealt with from the MMC subsystem too.

In an earlier iteration of this RFC it was requested to make NVMEM
support opt-in (instead of opt-out for mtdblock and ubiblock, which
already got their own NVMEM provider implementation).
Hence at least a change to opt-in for NVMEM support is required in the
MMC subsystem, together with making sure that MMC devices have their
fwnode assigned.
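
Concretely, the MMC-side opt-in boils down to something like this
(illustration only: GENHD_FL_NVMEM is the flag added by this series,
while the helper name and its exact call site in
drivers/mmc/core/block.c are assumptions of mine):

#include <linux/blkdev.h>
#include <linux/device.h>
#include <linux/mmc/card.h>

static void mmc_blk_nvmem_optin_sketch(struct mmc_card *card,
                                       struct gendisk *disk)
{
        /* propagate the card's firmware node so blk-nvmem can match DT subnodes */
        device_set_node(disk_to_dev(disk), card->dev.fwnode);

        /* opt this disk in to the block layer NVMEM provider */
        disk->flags |= GENHD_FL_NVMEM;
}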

> Why can't NVMEM partitions be managed the usual way via the MBR/GPT?

Absolutely, maybe my wording was not clear, but that's exactly what
I'm suggesting here. There are no added parsers nor any knowledge
about binary formats in this patchset.

Or did I misunderstand your comment?

2024-03-12 12:58:32

by Ulf Hansson

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/8] nvmem: add block device NVMEM provider

On Tue, 12 Mar 2024 at 13:30, Daniel Golle <[email protected]> wrote:
>
> Hi Ulf,
>
> On Tue, Mar 12, 2024 at 01:22:49PM +0100, Ulf Hansson wrote:
> > On Tue, 5 Mar 2024 at 21:23, Daniel Golle <[email protected]> wrote:
> > >
> > > On embedded devices using an eMMC it is common that one or more (hw/sw)
> > > partitions on the eMMC are used to store MAC addresses and Wi-Fi
> > > calibration EEPROM data.
> > >
> > > Implement an NVMEM provider backed by block devices as typically the
> > > NVMEM framework is used to have kernel drivers read and use binary data
> > > from EEPROMs, efuses, flash memory (MTD), ...
> > >
> > > In order to be able to reference hardware partitions on an eMMC, add code
> > > to bind each hardware partition to a specific firmware subnode.
> > >
> > > This series is meant to open the discussion on how exactly the device
> > > tree schema for block devices and partitions should look, and even
> > > whether using the block layer to back the NVMEM device is the way to go
> > > at all -- to me it seemed to be a good solution because it will be
> > > reusable e.g. for (normal, software GPT or MBR) partitions of an NVMe SSD.
> > >
> > > This series was previously submitted on July 19th 2023[1] and most of
> > > the basic idea has not changed since.
> > >
> > > However, the recent introduction of bdev_file_open_by_dev() allows us to
> > > get rid of most uses of block layer internals, which supposedly was the
> > > main objection raised by Christoph Hellwig back then.
> > >
> > > Most of the other comments received on the first RFC have also
> > > been addressed; however, what remains is the use of class_interface
> > > (lacking an alternative way to get notifications about addition or
> > > removal of block devices from the system). As this has been criticized
> > > in the past I'm specifically interested in suggestions on how to solve
> > > this in another way -- ideally without having to implement a whole new
> > > way for in-kernel notifications of appearing or disappearing block
> > > devices...
> > >
> > > And, in a way just like in the case of MTD and UBI, I believe acting as
> > > an NVMEM provider *is* functionality which belongs to the block layer
> > > itself and, unlike e.g. filesystems, is inconvenient to implement
> > > elsewhere.
> >
> > I don't object to the above, however to keep things scalable at the
> > block device driver level, such as the MMC subsystem, I think we
> > should avoid having *any* knowledge about the binary format at these
> > kinds of lower levels.
> >
> > Even if most of the NVMEM format is managed elsewhere, the support for
> > NVMEM partitions seems to be dealt with from the MMC subsystem too.
>
> In an earlier iteration of this RFC it was requested to make NVMEM
> support opt-in (instead of opt-out for mtdblock and ubiblock, which
> already got their own NVMEM provider implementation).
> Hence at least a change to opt-in for NVMEM support is required in the
> MMC subsystem, together with making sure that MMC devices have their
> fwnode assigned.

So, the NVMEM support needs to be turned on (opt-in) for each and
every block device driver?

It's not a big deal for me - and I would be happy to apply such a
change. On the other hand, it is just some binary data that is stored
on the flash; why should MMC have to opt-in or opt-out at all? It
should be the upper layers that decide what to store on the flash, not
the MMC subsystem, if you get my point.

>
> > Why can't NVMEM partitions be managed the usual way via the MBR/GPT?
>
> Absolutely, maybe my wording was not clear, but that's exactly what
> I'm suggesting here. There are no added parsers nor any knowledge
> about binary formats in this patchset.

Right, but there are new DT bindings added in the $subject series that
allow us to describe NVMEM partitions for an eMMC. Why isn't that
parsed from the MBR/GPT, etc, rather than encoded in DT?

>
> Or did I misunderstand your comment?

Maybe. I am just trying to understand this, so apologies if you find
my questions silly. :-)

Kind regards
Uffe

2024-03-12 13:12:59

by Daniel Golle

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/8] nvmem: add block device NVMEM provider

On Tue, Mar 12, 2024 at 01:57:39PM +0100, Ulf Hansson wrote:
> On Tue, 12 Mar 2024 at 13:30, Daniel Golle <[email protected]> wrote:
> >
> > Hi Ulf,
> >
> > On Tue, Mar 12, 2024 at 01:22:49PM +0100, Ulf Hansson wrote:
> > > On Tue, 5 Mar 2024 at 21:23, Daniel Golle <[email protected]> wrote:
> > > >
> > > > On embedded devices using an eMMC it is common that one or more (hw/sw)
> > > > partitions on the eMMC are used to store MAC addresses and Wi-Fi
> > > > calibration EEPROM data.
> > > >
> > > > Implement an NVMEM provider backed by block devices as typically the
> > > > NVMEM framework is used to have kernel drivers read and use binary data
> > > > from EEPROMs, efuses, flash memory (MTD), ...
> > > >
> > > > In order to be able to reference hardware partitions on an eMMC, add code
> > > > to bind each hardware partition to a specific firmware subnode.
> > > >
> > > > This series is meant to open the discussion on how exactly the device
> > > > tree schema for block devices and partitions should look, and even
> > > > whether using the block layer to back the NVMEM device is the way to go
> > > > at all -- to me it seemed to be a good solution because it will be
> > > > reusable e.g. for (normal, software GPT or MBR) partitions of an NVMe SSD.
> > > >
> > > > This series was previously submitted on July 19th 2023[1] and most of
> > > > the basic idea has not changed since.
> > > >
> > > > However, the recent introduction of bdev_file_open_by_dev() allows us to
> > > > get rid of most uses of block layer internals, which supposedly was the
> > > > main objection raised by Christoph Hellwig back then.
> > > >
> > > > Most of the other comments received on the first RFC have also
> > > > been addressed; however, what remains is the use of class_interface
> > > > (lacking an alternative way to get notifications about addition or
> > > > removal of block devices from the system). As this has been criticized
> > > > in the past I'm specifically interested in suggestions on how to solve
> > > > this in another way -- ideally without having to implement a whole new
> > > > way for in-kernel notifications of appearing or disappearing block
> > > > devices...
> > > >
> > > > And, in a way just like in the case of MTD and UBI, I believe acting as
> > > > an NVMEM provider *is* functionality which belongs to the block layer
> > > > itself and, unlike e.g. filesystems, is inconvenient to implement
> > > > elsewhere.
> > >
> > > I don't object to the above, however to keep things scalable at the
> > > block device driver level, such as the MMC subsystem, I think we
> > > should avoid having *any* knowledge about the binary format at these
> > > kinds of lower levels.
> > >
> > > Even if most of the NVMEM format is managed elsewhere, the support for
> > > NVMEM partitions seems to be dealt with from the MMC subsystem too.
> >
> > In an earlier iteration of this RFC it was requested to make NVMEM
> > support opt-in (instead of opt-out for mtdblock and ubiblock, which
> > already got their own NVMEM provider implementation).
> > Hence at least a change to opt-in for NVMEM support is required in the
> > MMC subsystem, together with making sure that MMC devices have their
> > fwnode assigned.
>
> So, the NVMEM support needs to be turned on (opt-in) for each and
> every block device driver?
>
> It's not a big deal for me - and I would be happy to apply such a
> change. On the other hand, it is just some binary data that is stored
> on the flash; why should MMC have to opt-in or opt-out at all? It
> should be the upper layers that decide what to store on the flash, not
> the MMC subsystem, if you get my point.
>

I agree, and that's exactly how I originally wrote it. However, in the
first round of review it was requested to do it that way (i.e. opt-in
for each subsystem; rather than opt-out for subsystems already
providing NVMEM in another way, such as MTD or UBI), see here:

https://patchwork.kernel.org/comment/25432948/

> >
> > > Why can't NVMEM partitions be managed the usual way via the MBR/GPT?
> >
> > Absolutely, maybe my wording was not clear, but that's exactly what
> > I'm suggesting here. There are no added parsers nor any knowledge
> > about binary formats in this patchset.
>
> Right, but there are new DT bindings added in the $subject series that
> allow us to describe NVMEM partitions for an eMMC. Why isn't that
> parsed from the MBR/GPT, etc, rather than encoded in DT?

The added dt-bindings merely allow us to **identify** the partition by
its PARTNAME, PARTNO or PARTUUID, so we can reference it in DT.
We'd still rely on MBR or GPT to do the actual parsing of the on-disk
format.
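
To make that concrete, this is roughly the kind of snippet I have in
mind -- note that the node and property names below are only meant to
illustrate the idea and are not taken verbatim from the bindings; the
authoritative schema is in the YAML files of this series:

/* illustrative device tree sketch, names are approximations */
&mmc0 {                                 /* placeholder label */
        card@0 {
                compatible = "mmc-card";
                reg = <0>;

                block-device {
                        partitions {
                                factory: partition-factory {
                                        partname = "factory";   /* match by GPT partition name */

                                        nvmem-layout {
                                                compatible = "fixed-layout";
                                                #address-cells = <1>;
                                                #size-cells = <1>;

                                                macaddr_factory: mac-address@4 {
                                                        reg = <0x4 0x6>;
                                                };
                                        };
                                };
                        };
                };
        };
};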

>
> >
> > Or did I misunderstand your comment?
>
> Maybe. I am just trying to understand this, so apologies if you find
> my questions silly. :-)

Let's make sure we are all on the same page and that everything is fully
understood by everyone. We all have to bear the noise, but I guess
that's ok ;)


Cheers


Daniel

2024-03-13 10:20:35

by Ulf Hansson

[permalink] [raw]
Subject: Re: [RFC PATCH v2 0/8] nvmem: add block device NVMEM provider

On Tue, 12 Mar 2024 at 14:12, Daniel Golle <[email protected]> wrote:
>
> On Tue, Mar 12, 2024 at 01:57:39PM +0100, Ulf Hansson wrote:
> > On Tue, 12 Mar 2024 at 13:30, Daniel Golle <[email protected]> wrote:
> > >
> > > Hi Ulf,
> > >
> > > On Tue, Mar 12, 2024 at 01:22:49PM +0100, Ulf Hansson wrote:
> > > > On Tue, 5 Mar 2024 at 21:23, Daniel Golle <[email protected]> wrote:
> > > > >
> > > > > On embedded devices using an eMMC it is common that one or more (hw/sw)
> > > > > partitions on the eMMC are used to store MAC addresses and Wi-Fi
> > > > > calibration EEPROM data.
> > > > >
> > > > > Implement an NVMEM provider backed by block devices as typically the
> > > > > NVMEM framework is used to have kernel drivers read and use binary data
> > > > > from EEPROMs, efuses, flash memory (MTD), ...
> > > > >
> > > > > In order to be able to reference hardware partitions on an eMMC, add code
> > > > > to bind each hardware partition to a specific firmware subnode.
> > > > >
> > > > > This series is meant to open the discussion on how exactly the device
> > > > > tree schema for block devices and partitions should look, and even
> > > > > whether using the block layer to back the NVMEM device is the way to go
> > > > > at all -- to me it seemed to be a good solution because it will be
> > > > > reusable e.g. for (normal, software GPT or MBR) partitions of an NVMe SSD.
> > > > >
> > > > > This series was previously submitted on July 19th 2023[1] and most of
> > > > > the basic idea has not changed since.
> > > > >
> > > > > However, the recent introduction of bdev_file_open_by_dev() allows us to
> > > > > get rid of most uses of block layer internals, which supposedly was the
> > > > > main objection raised by Christoph Hellwig back then.
> > > > >
> > > > > Most of the other comments received on the first RFC have also
> > > > > been addressed; however, what remains is the use of class_interface
> > > > > (lacking an alternative way to get notifications about addition or
> > > > > removal of block devices from the system). As this has been criticized
> > > > > in the past I'm specifically interested in suggestions on how to solve
> > > > > this in another way -- ideally without having to implement a whole new
> > > > > way for in-kernel notifications of appearing or disappearing block
> > > > > devices...
> > > > >
> > > > > And, in a way just like in the case of MTD and UBI, I believe acting as
> > > > > an NVMEM provider *is* functionality which belongs to the block layer
> > > > > itself and, unlike e.g. filesystems, is inconvenient to implement
> > > > > elsewhere.
> > > >
> > > > I don't object to the above, however to keep things scalable at the
> > > > block device driver level, such as the MMC subsystem, I think we
> > > > should avoid having *any* knowledge about the binary format at these
> > > > kinds of lower levels.
> > > >
> > > > Even if most of the NVMEM format is managed elsewhere, the support for
> > > > NVMEM partitions seems to be dealt with from the MMC subsystem too.
> > >
> > > In an earlier iteration of this RFC it was requested to make NVMEM
> > > support opt-in (instead of opt-out for mtdblock and ubiblock, which
> > > already got their own NVMEM provider implementation).
> > > Hence at least a change to opt-in for NVMEM support is required in the
> > > MMC subsystem, together with making sure that MMC devices have their
> > > fwnode assigned.
> >
> > So, the NVMEM support needs to be turned on (opt-in) for each and
> > every block device driver?
> >
> > It's not a big deal for me - and I would be happy to apply such a
> > change. On the other hand, it is just some binary data that is stored
> > on the flash; why should MMC have to opt-in or opt-out at all? It
> > should be the upper layers that decide what to store on the flash, not
> > the MMC subsystem, if you get my point.
> >
>
> I agree, and that's exactly how I originally wrote it. However, in the
> first round of review it was requested to do it that way (i.e. opt-in
> for each subsystem; rather than opt-out for subsystems already
> providing NVMEM in another way, such as MTD or UBI), see here:
>
> https://patchwork.kernel.org/comment/25432948/

Okay, got it, thanks!

>
> > >
> > > > Why can't NVMEM partitions be managed the usual way via the MBR/GPT?
> > >
> > > Absolutely, maybe my wording was not clear, but that's exactly what
> > > I'm suggesting here. There are no added parsers nor any knowledge
> > > about binary formats in this patchset.
> >
> > Right, but there are new DT bindings added in the $subject series that
> > allow us to describe NVMEM partitions for an eMMC. Why isn't that
> > parsed from the MBR/GPT, etc, rather than encoded in DT?
>
> The added dt-bindings merely allow us to **identify** the partition by
> its PARTNAME, PARTNO or PARTUUID, so we can reference it in DT.
> We'd still rely on MBR or GPT to do the actual parsing of the on-disk
> format.

Thanks for clarifying!

So, it looks like this all relies on what DT maintainers think then.

[...]

Kind regards
Uffe