From: Hyesoo Yu <[email protected]>
Document devicetree binding for chunk heap on dma heap framework
Signed-off-by: Hyesoo Yu <[email protected]>
Signed-off-by: Minchan Kim <[email protected]>
---
.../bindings/dma-buf/chunk_heap.yaml | 52 +++++++++++++++++++
1 file changed, 52 insertions(+)
create mode 100644 Documentation/devicetree/bindings/dma-buf/chunk_heap.yaml
diff --git a/Documentation/devicetree/bindings/dma-buf/chunk_heap.yaml b/Documentation/devicetree/bindings/dma-buf/chunk_heap.yaml
new file mode 100644
index 000000000000..f382bee02778
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma-buf/chunk_heap.yaml
@@ -0,0 +1,52 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/dma-buf/chunk_heap.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Device tree binding for chunk heap on the DMA heap framework
+
+maintainers:
+ - Sumit Semwal <[email protected]>
+
+description: |
+ The chunk heap is backed by the Contiguous Memory Allocator (CMA) and
+ allocates buffers that are made up of a list of fixed-size chunks
+ taken from CMA. Chunk sizes are configured when the heaps are created.
+
+properties:
+ compatible:
+ enum:
+ - dma_heap,chunk
+
+ memory-region:
+ maxItems: 1
+
+ alignment:
+ maxItems: 1
+
+required:
+ - compatible
+ - memory-region
+ - alignment
+
+additionalProperties: false
+
+examples:
+ - |
+ reserved-memory {
+ #address-cells = <2>;
+ #size-cells = <1>;
+
+ chunk_memory: chunk_memory {
+ compatible = "shared-dma-pool";
+ reusable;
+ size = <0x10000000>;
+ };
+ };
+
+ chunk_default_heap: chunk_default_heap {
+ compatible = "dma_heap,chunk";
+ memory-region = <&chunk_memory>;
+ alignment = <0x10000>;
+ };
--
2.29.2.299.gdc1121823c-goog
On Tue, Nov 17, 2020 at 10:19 AM Minchan Kim <[email protected]> wrote:
>
> From: Hyesoo Yu <[email protected]>
>
> Document devicetree binding for chunk heap on dma heap framework
>
> Signed-off-by: Hyesoo Yu <[email protected]>
> Signed-off-by: Minchan Kim <[email protected]>
> ---
> .../bindings/dma-buf/chunk_heap.yaml | 52 +++++++++++++++++++
> 1 file changed, 52 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/dma-buf/chunk_heap.yaml
>
> diff --git a/Documentation/devicetree/bindings/dma-buf/chunk_heap.yaml b/Documentation/devicetree/bindings/dma-buf/chunk_heap.yaml
> new file mode 100644
> index 000000000000..f382bee02778
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/dma-buf/chunk_heap.yaml
> @@ -0,0 +1,52 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/dma-buf/chunk_heap.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Device tree binding for chunk heap on the DMA heap framework
> +
> +maintainers:
> + - Sumit Semwal <[email protected]>
> +
> +description: |
> + The chunk heap is backed by the Contiguous Memory Allocator (CMA) and
> + allocates buffers that are made up of a list of fixed-size chunks
> + taken from CMA. Chunk sizes are configured when the heaps are created.
> +
> +properties:
> + compatible:
> + enum:
> + - dma_heap,chunk
> +
> + memory-region:
> + maxItems: 1
> +
> + alignment:
> + maxItems: 1
> +
> +required:
> + - compatible
> + - memory-region
> + - alignment
> +
> +additionalProperties: false
> +
> +examples:
> + - |
> + reserved-memory {
> + #address-cells = <2>;
> + #size-cells = <1>;
> +
> + chunk_memory: chunk_memory {
> + compatible = "shared-dma-pool";
> + reusable;
> + size = <0x10000000>;
> + };
> + };
> +
> + chunk_default_heap: chunk_default_heap {
> + compatible = "dma_heap,chunk";
> + memory-region = <&chunk_memory>;
> + alignment = <0x10000>;
> + };
So I suspect Rob will push back on this as he has for other dt
bindings related to ion/dmabuf heaps (I tried to push a similar
solution to exporting multiple CMA areas via dmabuf heaps).
The proposal he seemed to like best was having an in-kernel function
that a driver would call to initialize the heap (associated with the
CMA region the driver is interested in). Similar to Kunihiko Hayashi's
patch here:
- https://lore.kernel.org/lkml/[email protected]/
The one sticking point for that patch (which I think is a good one),
is that we don't have any in-tree users, so it couldn't be merged yet.
A similar approach might be good here, but again we probably need to
have at least one in-tree user which could call such a registration
function.
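As a rough illustration of that approach (a sketch only: dma_heap_add_cma() comes from Kunihiko Hayashi's proposed patch, not an upstream API, and its exact signature may differ; the driver itself is hypothetical):

```c
/* Sketch, not upstream code: dma_heap_add_cma() is from Kunihiko
 * Hayashi's proposed patch; the probe function and driver are
 * hypothetical.
 */
static int example_display_probe(struct platform_device *pdev)
{
	struct cma *cma;
	int ret;

	/* Wire up the memory-region phandle from this device's DT node. */
	ret = of_reserved_mem_device_init(&pdev->dev);
	if (ret)
		return ret;

	/* dev_get_cma_area() now returns the device's dedicated CMA
	 * area rather than the default one.
	 */
	cma = dev_get_cma_area(&pdev->dev);
	if (!cma)
		return -ENODEV;

	/* Export the region as a dmabuf heap on the device's behalf. */
	return dma_heap_add_cma(cma);
}
```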
thanks
-john
On Wed, Nov 18, 2020 at 5:22 PM Hyesoo Yu <[email protected]> wrote:
>
> On Tue, Nov 17, 2020 at 07:00:54PM -0800, John Stultz wrote:
> > So I suspect Rob will push back on this as he has for other dt
> > bindings related to ion/dmabuf heaps (I tried to push a similar
> > solution to exporting multiple CMA areas via dmabuf heaps).
> >
> > The proposal he seemed to like best was having an in-kernel function
> > that a driver would call to initialize the heap (associated with the
> > CMA region the driver is interested in). Similar to Kunihiko Hayashi's
> > patch here:
> > - https://lore.kernel.org/lkml/[email protected]/
> >
> > The one sticking point for that patch (which I think is a good one),
> > is that we don't have any in-tree users, so it couldn't be merged yet.
> >
> > A similar approach might be good here, but again we probably need to
> > have at least one in-tree user which could call such a registration
> > function.
>
> Thanks for your review.
>
> The chunk heap is not intended for device-specific reserved memory or a specific driver.
> It is similar to the system heap, but it only collects high-order pages, using a specific CMA area for performance.
So, yes I agree, the chunk heap isn't device specific. It's just that
the CMA regions usually are tied to devices.
The main objection to this style of solution has been due to the fact
that the DTS is supposed to describe the physical hardware (in an OS
agnostic way), rather than define configuration info for Linux
software drivers.
Obviously this can be quibbled about, as the normal way of tying
devices to CMA has some assumptions of what the driver will use that
region for, rather than somehow representing a physical tie between a
memory reservation and a device. Nonetheless, Rob has been hesitant to
take any sort of ION/DmaBuf Heap DT devices, and has been more
interested in some device having the memory reservation reference and
the driver for that doing the registration.
> It seems strange to have an in-tree user that registers the chunk heap.
> (Wouldn't it be just as strange for some user to register the system heap?)
Well, as there's no reservation/configuration needed, the system heap
can register itself.
The CMA heap currently only registers the default CMA heap, as we
didn't want to expose all CMA regions and there's otherwise no way to
pick and choose.
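For comparison, the default-CMA registration in the upstream CMA heap looks roughly like this (paraphrased from drivers/dma-buf/heaps/cma_heap.c of that era; __add_cma_heap() is its internal helper):

```c
/* Paraphrased sketch of the upstream default-CMA-only registration;
 * see drivers/dma-buf/heaps/cma_heap.c for the real code.
 */
static int add_default_cma_heap(void)
{
	/* NULL device means "no specific device": the default CMA area. */
	struct cma *default_cma = dev_get_cma_area(NULL);
	int ret = 0;

	/* Only the default area is exported; other CMA regions stay
	 * hidden, since there is no mechanism to pick and choose.
	 */
	if (default_cma)
		ret = __add_cma_heap(default_cma, NULL);

	return ret;
}
module_init(add_default_cma_heap);
```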
> Is there a reason to use the dma-heap framework to add a CMA area for a specific device?
>
> Even if some in-tree user registers a dma-heap with a CMA area, the buffers could be allocated in userland and shared among other devices.
> For exclusive access, I guess, the device doesn't need to register a dma-heap for the CMA area.
>
It's not really about exclusive access. More just that if you want to
bind a memory reservation/region (cma or otherwise), at least for DTS,
it needs to bind with some device in DT.
Then the device driver can register that region with a heap driver.
This avoids adding new Linux-specific software bindings to DT. It
becomes a driver implementation detail instead. The primary user of
the heap type would probably be a practical pick (ie the display or
isp driver).
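Concretely, instead of a standalone heap node like the one in the patch above, the reservation would hang off whichever device primarily uses it. A hypothetical DTS fragment (node names and the compatible string are illustrative only):

```dts
/* Hypothetical illustration: the CMA reservation is referenced by the
 * display device that primarily uses it; no dma-heap node appears in DT.
 */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <1>;

	display_cma: display_memory {
		compatible = "shared-dma-pool";
		reusable;
		size = <0x10000000>;
	};
};

display {
	compatible = "vendor,display";
	memory-region = <&display_cma>;
	/* The display driver registers this region with the dmabuf
	 * heap code at probe time; the heap never appears in DT.
	 */
};
```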
The other potential solution Rob has suggested is that we create some
tag for the memory reservation (ie: like we do with cma: "reusable"),
which can be used to register the region to a heap. But this has the
problem that each tag has to be well defined and map to a known heap.
thanks
-john
On Wed, Nov 18, 2020 at 07:19:07PM -0800, John Stultz wrote:
> On Wed, Nov 18, 2020 at 5:22 PM Hyesoo Yu <[email protected]> wrote:
> >
> > On Tue, Nov 17, 2020 at 07:00:54PM -0800, John Stultz wrote:
> > > So I suspect Rob will push back on this as he has for other dt
> > > bindings related to ion/dmabuf heaps (I tried to push a similar
> > > solution to exporting multiple CMA areas via dmabuf heaps).
> > >
> > > The proposal he seemed to like best was having an in-kernel function
> > > that a driver would call to initialize the heap (associated with the
> > > CMA region the driver is interested in). Similar to Kunihiko Hayashi's
> > > patch here:
> > > - https://lore.kernel.org/lkml/[email protected]/
> > >
> > > The one sticking point for that patch (which I think is a good one),
> > > is that we don't have any in-tree users, so it couldn't be merged yet.
> > >
> > > A similar approach might be good here, but again we probably need to
> > > have at least one in-tree user which could call such a registration
> > > function.
> >
> > Thanks for your review.
> >
> > The chunk heap is not intended for device-specific reserved memory or a specific driver.
> > It is similar to the system heap, but it only collects high-order pages, using a specific CMA area for performance.
>
> So, yes I agree, the chunk heap isn't device specific. It's just that
> the CMA regions usually are tied to devices.
>
> The main objection to this style of solution has been due to the fact
> that the DTS is supposed to describe the physical hardware (in an OS
> agnostic way), rather than define configuration info for Linux
> software drivers.
>
> Obviously this can be quibbled about, as the normal way of tying
> devices to CMA has some assumptions of what the driver will use that
> region for, rather than somehow representing a physical tie between a
> memory reservation and a device. Nonetheless, Rob has been hesitant to
> take any sort of ION/DmaBuf Heap DT devices, and has been more
> interested in some device having the memory reservation reference and
> the driver for that doing the registration.
>
> > It seems strange to have an in-tree user that registers the chunk heap.
> > (Wouldn't it be just as strange for some user to register the system heap?)
>
> Well, as there's no reservation/configuration needed, the system heap
> can register itself.
>
> The CMA heap currently only registers the default CMA heap, as we
> didn't want to expose all CMA regions and there's otherwise no way to
> pick and choose.
Yup.
dma-buf really needs a way to make an exclusive CMA area. Otherwise, the
default CMA area would be shared among drivers and easily introduce
fragmentation, since we can't control the other drivers. In that respect,
I don't think the current cma-heap works if userspace needs big memory chunks.
The problem here is that there is no in-kernel user to bind the specific
CMA area to, because the owner will be some arbitrary userspace process
using the dma-buf interface.
>
> > Is there a reason to use the dma-heap framework to add a CMA area for a specific device?
> >
> > Even if some in-tree user registers a dma-heap with a CMA area, the buffers could be allocated in userland and shared among other devices.
> > For exclusive access, I guess, the device doesn't need to register a dma-heap for the CMA area.
> >
>
> It's not really about exclusive access. More just that if you want to
> bind a memory reservation/region (cma or otherwise), at least for DTS,
> it needs to bind with some device in DT.
>
> Then the device driver can register that region with a heap driver.
> This avoids adding new Linux-specific software bindings to DT. It
> becomes a driver implementation detail instead. The primary user of
> the heap type would probably be a practical pick (ie the display or
> isp driver).
If that's the only solution, we could create some dummy driver that has
only a module_init and bind it from there, but I don't think that's a
good idea.
>
> The other potential solution Rob has suggested is that we create some
> tag for the memory reservation (ie: like we do with cma: "reusable"),
> which can be used to register the region to a heap. But this has the
> problem that each tag has to be well defined and map to a known heap.
Do you think that's the only way to make progress on this feature?
If so, could you elaborate a bit more, or are there other ideas from
the dma-buf folks?
On Thu, Dec 10, 2020 at 12:15:15AM -0800, John Stultz wrote:
> On Wed, Dec 9, 2020 at 3:53 PM Minchan Kim <[email protected]> wrote:
> > On Wed, Nov 18, 2020 at 07:19:07PM -0800, John Stultz wrote:
> > > The CMA heap currently only registers the default CMA heap, as we
> > > didn't want to expose all CMA regions and there's otherwise no way to
> > > pick and choose.
> >
> > Yup.
> >
> > dma-buf really needs a way to make an exclusive CMA area. Otherwise, the
> > default CMA area would be shared among drivers and easily introduce
> > fragmentation, since we can't control the other drivers. In that respect,
> > I don't think the current cma-heap works if userspace needs big memory chunks.
>
> Yes, the default CMA region is not always optimal.
>
> That's why I was hopeful for Kunihiko Hayashi's patch to allow for
> exposing specific cma regions:
> https://lore.kernel.org/lkml/[email protected]/
>
> I think it would be a good solution, but all we need is *some* driver
> which can be considered the primary user/owner of the cma region which
> would then explicitly export it via the dmabuf heaps.
>
> > The problem here is that there is no in-kernel user to bind the specific
> > CMA area to, because the owner will be some arbitrary userspace process
> > using the dma-buf interface.
>
> Well, while I agree that conceptually the dmabuf heaps allow
> allocations for multi-device pipelines, and thus are not tied to
> specific devices, I do think that the memory types exposed are likely
> to have specific devices/drivers in the pipeline that they matter most
> to. So I don't see a big issue with an in-kernel driver registering a
> specific CMA region as a dmabuf heap.
Then I'm worried that we'd spread dma_heap_add_cma() out to too many
drivers, since the kernel doesn't know how userspace will use it.
For example, system 1 could have a device A-B-C pipeline, so they add
the call to the A driver. Later, system 2 could have a device B-C-D
pipeline, so they add dma_heap_add_cma() to the B driver.
>
> > > > Is there a reason to use the dma-heap framework to add a CMA area for a specific device?
> > > >
> > > > Even if some in-tree user registers a dma-heap with a CMA area, the buffers could be allocated in userland and shared among other devices.
> > > > For exclusive access, I guess, the device doesn't need to register a dma-heap for the CMA area.
> > > >
> > >
> > > It's not really about exclusive access. More just that if you want to
> > > bind a memory reservation/region (cma or otherwise), at least for DTS,
> > > it needs to bind with some device in DT.
> > >
> > > Then the device driver can register that region with a heap driver.
> > > This avoids adding new Linux-specific software bindings to DT. It
> > > becomes a driver implementation detail instead. The primary user of
> > > the heap type would probably be a practical pick (ie the display or
> > > isp driver).
> >
> > If that's the only solution, we could create some dummy driver that has
> > only a module_init and bind it from there, but I don't think that's a
> > good idea.
>
> Yea, an un-upstreamable dummy driver is maybe what it devolves to in
> the worst case. But I suspect it would be cleaner for a display or ISP
> driver that benefits most from the heap type to add the reserved
> memory reference to their DT node, and on init for them to register
> the region with the dmabuf heap code.
As I mentioned above, it could be the display driver at the moment, but
it could be a different driver next time. If I've missed your point,
let me know.
Thanks for the review, John.
>
>
> > > The other potential solution Rob has suggested is that we create some
> > > tag for the memory reservation (ie: like we do with cma: "reusable"),
> > > which can be used to register the region to a heap. But this has the
> > > problem that each tag has to be well defined and map to a known heap.
> >
> > Do you think that's the only way to make progress on this feature?
> > If so, could you elaborate a bit more, or are there other ideas from
> > the dma-buf folks?
>
> I'm skeptical of that DT tag approach working out, as we'd need a new
> DT binding for the new tag name, and we'd have to do so for each new
> heap type that needs this (so non-default cma, your chunk heap,
> whatever other similar heap types that use reserved regions folks come
> up with). Having *some* driver take ownership for the reserved region
> and register it with the appropriate heap type seems much
> cleaner/flexible and avoids mucking with the DT ABI.
>
> thanks
> -john
On Wed, Dec 9, 2020 at 3:53 PM Minchan Kim <[email protected]> wrote:
> On Wed, Nov 18, 2020 at 07:19:07PM -0800, John Stultz wrote:
> > The CMA heap currently only registers the default CMA heap, as we
> > didn't want to expose all CMA regions and there's otherwise no way to
> > pick and choose.
>
> Yup.
>
> dma-buf really needs a way to make an exclusive CMA area. Otherwise, the
> default CMA area would be shared among drivers and easily introduce
> fragmentation, since we can't control the other drivers. In that respect,
> I don't think the current cma-heap works if userspace needs big memory chunks.
Yes, the default CMA region is not always optimal.
That's why I was hopeful for Kunihiko Hayashi's patch to allow for
exposing specific cma regions:
https://lore.kernel.org/lkml/[email protected]/
I think it would be a good solution, but all we need is *some* driver
which can be considered the primary user/owner of the cma region which
would then explicitly export it via the dmabuf heaps.
> The problem here is that there is no in-kernel user to bind the specific
> CMA area to, because the owner will be some arbitrary userspace process
> using the dma-buf interface.
Well, while I agree that conceptually the dmabuf heaps allow
allocations for multi-device pipelines, and thus are not tied to
specific devices, I do think that the memory types exposed are likely
to have specific devices/drivers in the pipeline that they matter most
to. So I don't see a big issue with an in-kernel driver registering a
specific CMA region as a dmabuf heap.
> > > Is there a reason to use the dma-heap framework to add a CMA area for a specific device?
> > >
> > > Even if some in-tree user registers a dma-heap with a CMA area, the buffers could be allocated in userland and shared among other devices.
> > > For exclusive access, I guess, the device doesn't need to register a dma-heap for the CMA area.
> > >
> >
> > It's not really about exclusive access. More just that if you want to
> > bind a memory reservation/region (cma or otherwise), at least for DTS,
> > it needs to bind with some device in DT.
> >
> > Then the device driver can register that region with a heap driver.
> > This avoids adding new Linux-specific software bindings to DT. It
> > becomes a driver implementation detail instead. The primary user of
> > the heap type would probably be a practical pick (ie the display or
> > isp driver).
>
> If that's the only solution, we could create some dummy driver that has
> only a module_init and bind it from there, but I don't think that's a
> good idea.
Yea, an un-upstreamable dummy driver is maybe what it devolves to in
the worst case. But I suspect it would be cleaner for a display or ISP
driver that benefits most from the heap type to add the reserved
memory reference to their DT node, and on init for them to register
the region with the dmabuf heap code.
> > The other potential solution Rob has suggested is that we create some
> > tag for the memory reservation (ie: like we do with cma: "reusable"),
> > which can be used to register the region to a heap. But this has the
> > problem that each tag has to be well defined and map to a known heap.
>
> Do you think that's the only way to make progress on this feature?
> If so, could you elaborate a bit more, or are there other ideas from
> the dma-buf folks?
I'm skeptical of that DT tag approach working out, as we'd need a new
DT binding for the new tag name, and we'd have to do so for each new
heap type that needs this (so non-default cma, your chunk heap,
whatever other similar heap types that use reserved regions folks come
up with). Having *some* driver take ownership for the reserved region
and register it with the appropriate heap type seems much
cleaner/flexible and avoids mucking with the DT ABI.
thanks
-john
On Thu, Dec 10, 2020 at 8:06 AM Minchan Kim <[email protected]> wrote:
> On Thu, Dec 10, 2020 at 12:15:15AM -0800, John Stultz wrote:
> > Well, while I agree that conceptually the dmabuf heaps allow
> > allocations for multi-device pipelines, and thus are not tied to
> > specific devices, I do think that the memory types exposed are likely
> > to have specific devices/drivers in the pipeline that they matter most
> > to. So I don't see a big issue with an in-kernel driver registering a
> > specific CMA region as a dmabuf heap.
>
> > Then I'm worried that we'd spread dma_heap_add_cma() out to too many
> > drivers, since the kernel doesn't know how userspace will use it.
> > For example, system 1 could have a device A-B-C pipeline, so they add
> > the call to the A driver. Later, system 2 could have a device B-C-D
> > pipeline, so they add dma_heap_add_cma() to the B driver.
I'm not sure I see this as a major issue? If the drivers add it based
on the dt memory reference, those will be configured to not add
duplicate heaps (and even so the heap driver can also ensure we don't
try to add a heap twice).
> > Yea, an un-upstreamable dummy driver is maybe what it devolves to in
> > the worst case. But I suspect it would be cleaner for a display or ISP
> > driver that benefits most from the heap type to add the reserved
> > memory reference to their DT node, and on init for them to register
> > the region with the dmabuf heap code.
>
> As I mentioned above, it could be the display driver at the moment, but
> it could be a different driver next time. If I've missed your point, let me know.
>
I guess I just don't see potentially having the registration calls
added to multiple drivers as a big problem.
Ideally, yes, I'd probably rather see a DT node that would allow the
heap driver to register specified regions, but that's been NACKed
multiple times. Given that, having hooks in device drivers to export
the region seems to me like the next best approach, as it avoids DT
ABI (and if it ends up being a bad approach, it's not something we
have to keep).
The bigger problem right now is not that there are too many places the
registration call would be made from, but that there aren't upstream
drivers which I'm aware of where it would currently make sense to add
specific dma_heap_add_cma() registration hooks to. We need an
upstream user of Kunihiko Hayashi's patch.
thanks
-john
On Thu, Dec 10, 2020 at 02:40:38PM -0800, John Stultz wrote:
> On Thu, Dec 10, 2020 at 8:06 AM Minchan Kim <[email protected]> wrote:
> > On Thu, Dec 10, 2020 at 12:15:15AM -0800, John Stultz wrote:
> > > Well, while I agree that conceptually the dmabuf heaps allow
> > > allocations for multi-device pipelines, and thus are not tied to
> > > specific devices, I do think that the memory types exposed are likely
> > > to have specific devices/drivers in the pipeline that they matter most
> > > to. So I don't see a big issue with an in-kernel driver registering a
> > > specific CMA region as a dmabuf heap.
> >
> > Then I'm worried that we'd spread dma_heap_add_cma() out to too many
> > drivers, since the kernel doesn't know how userspace will use it.
> > For example, system 1 could have a device A-B-C pipeline, so they add
> > the call to the A driver. Later, system 2 could have a device B-C-D
> > pipeline, so they add dma_heap_add_cma() to the B driver.
>
> I'm not sure I see this as a major issue? If the drivers add it based
> on the dt memory reference, those will be configured to not add
> duplicate heaps (and even so the heap driver can also ensure we don't
> try to add a heap twice).
Sure, it works without any problem, but from a design point of view it
looks weird to me that some random driver in a specific scenario should
carry the heap registration, and that we then propagate the heap
registration logic into other drivers day by day. We avoid a direct DT
binding to dmabuf, but it seems like a bad design. To me, it sounds
like tossing the DT-to-anonymous-dmabuf binding problem over to the
device drivers.
>
> > > Yea, an un-upstreamable dummy driver is maybe what it devolves to in
> > > the worst case. But I suspect it would be cleaner for a display or ISP
> > > driver that benefits most from the heap type to add the reserved
> > > memory reference to their DT node, and on init for them to register
> > > the region with the dmabuf heap code.
> >
> > As I mentioned above, it could be the display driver at the moment, but
> > it could be a different driver next time. If I've missed your point, let me know.
> >
>
> I guess I just don't see potentially having the registration calls
> added to multiple drivers as a big problem.
>
> Ideally, yes, I'd probably rather see a DT node that would allow the
> heap driver to register specified regions, but that's been NACKed
> multiple times. Given that, having hooks in device drivers to export
> the region seems to me like the next best approach, as it avoids DT
> ABI ( if ends up its a bad approach, its not something we have to
> keep).
>
> The bigger problem right now is not that there are too many places the
> registration call would be made from, but that there aren't upstream
> drivers which I'm aware of where it would currently make sense to add
> specific dma_heap_add_cma() registration hooks to. We need an
> upstream user of Kunihiko Hayashi's patch.
Hmm, if that's the only way to proceed, Hyesoo, do you have any
specific driver in mind to bind the CMA area to?