2024-01-31 12:26:34

by Will Deacon

Subject: [PATCH v2 2/3] swiotlb: Enforce page alignment in swiotlb_alloc()

When allocating pages from a restricted DMA pool in swiotlb_alloc(),
the buffer address is blindly converted to a 'struct page *' that is
returned to the caller. In the unlikely event of an allocation bug,
page-unaligned addresses are not detected and slots can silently be
double-allocated.

Add a simple check of the buffer alignment in swiotlb_alloc() to make
debugging a little easier if something has gone wonky.

Cc: Christoph Hellwig <[email protected]>
Cc: Marek Szyprowski <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Petr Tesarik <[email protected]>
Cc: Dexuan Cui <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
---
kernel/dma/swiotlb.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 56cc08b1fbd6..4485f216e620 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1642,6 +1642,12 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
 		return NULL;

 	tlb_addr = slot_addr(pool->start, index);
+	if (unlikely(!PAGE_ALIGNED(tlb_addr))) {
+		dev_WARN_ONCE(dev, 1, "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
+			      &tlb_addr);
+		swiotlb_release_slots(dev, tlb_addr);
+		return NULL;
+	}

 	return pfn_to_page(PFN_DOWN(tlb_addr));
 }
--
2.43.0.429.g432eaa2c6b-goog

2024-01-31 15:14:32

by Robin Murphy

Subject: Re: [PATCH v2 2/3] swiotlb: Enforce page alignment in swiotlb_alloc()

On 31/01/2024 12:25 pm, Will Deacon wrote:
> When allocating pages from a restricted DMA pool in swiotlb_alloc(),
> the buffer address is blindly converted to a 'struct page *' that is
> returned to the caller. In the unlikely event of an allocation bug,
> page-unaligned addresses are not detected and slots can silently be
> double-allocated.
>
> Add a simple check of the buffer alignment in swiotlb_alloc() to make
> debugging a little easier if something has gone wonky.
>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Marek Szyprowski <[email protected]>
> Cc: Robin Murphy <[email protected]>
> Cc: Petr Tesarik <[email protected]>
> Cc: Dexuan Cui <[email protected]>
> Signed-off-by: Will Deacon <[email protected]>
> ---
> kernel/dma/swiotlb.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 56cc08b1fbd6..4485f216e620 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -1642,6 +1642,12 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
> return NULL;
>
> tlb_addr = slot_addr(pool->start, index);
> + if (unlikely(!PAGE_ALIGNED(tlb_addr))) {
> + dev_WARN_ONCE(dev, 1, "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
> + &tlb_addr);

Nit: if there's cause for another respin, I'd be inclined to use a
straightforward "if (WARN_ON(...))" here - this condition should
represent SWIOTLB itself going badly wrong, which isn't really
attributable to whatever device happened to be involved in the call.

Cheers,
Robin.

> + swiotlb_release_slots(dev, tlb_addr);
> + return NULL;
> + }
>
> return pfn_to_page(PFN_DOWN(tlb_addr));
> }

2024-02-01 12:48:33

by Will Deacon

Subject: Re: [PATCH v2 2/3] swiotlb: Enforce page alignment in swiotlb_alloc()

On Wed, Jan 31, 2024 at 03:14:18PM +0000, Robin Murphy wrote:
> On 31/01/2024 12:25 pm, Will Deacon wrote:
> > When allocating pages from a restricted DMA pool in swiotlb_alloc(),
> > the buffer address is blindly converted to a 'struct page *' that is
> > returned to the caller. In the unlikely event of an allocation bug,
> > page-unaligned addresses are not detected and slots can silently be
> > double-allocated.
> >
> > Add a simple check of the buffer alignment in swiotlb_alloc() to make
> > debugging a little easier if something has gone wonky.
> >
> > Cc: Christoph Hellwig <[email protected]>
> > Cc: Marek Szyprowski <[email protected]>
> > Cc: Robin Murphy <[email protected]>
> > Cc: Petr Tesarik <[email protected]>
> > Cc: Dexuan Cui <[email protected]>
> > Signed-off-by: Will Deacon <[email protected]>
> > ---
> > kernel/dma/swiotlb.c | 6 ++++++
> > 1 file changed, 6 insertions(+)
> >
> > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > index 56cc08b1fbd6..4485f216e620 100644
> > --- a/kernel/dma/swiotlb.c
> > +++ b/kernel/dma/swiotlb.c
> > @@ -1642,6 +1642,12 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
> > return NULL;
> > tlb_addr = slot_addr(pool->start, index);
> > + if (unlikely(!PAGE_ALIGNED(tlb_addr))) {
> > + dev_WARN_ONCE(dev, 1, "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
> > + &tlb_addr);
>
> Nit: if there's cause for another respin, I'd be inclined to use a
> straightforward "if (WARN_ON(...))" here - this condition should represent
> SWIOTLB itself going badly wrong, which isn't really attributable to
> whatever device happened to be involved in the call.

Well, there'll definitely be a v3 thanks to my idiotic dropping of the
'continue' statement when I reworked the searching loop for v2.

However, given that we're returning NULL, I think printing the device is
helpful as we're likely to cause some horrible error (e.g. probe failure)
in the caller and then it will be obvious why that happened from looking
at the logs. So I'd prefer to keep it unless you insist.

Cheers,

Will

2024-02-01 13:56:03

by Robin Murphy

Subject: Re: [PATCH v2 2/3] swiotlb: Enforce page alignment in swiotlb_alloc()

On 01/02/2024 12:48 pm, Will Deacon wrote:
> On Wed, Jan 31, 2024 at 03:14:18PM +0000, Robin Murphy wrote:
>> On 31/01/2024 12:25 pm, Will Deacon wrote:
>>> When allocating pages from a restricted DMA pool in swiotlb_alloc(),
>>> the buffer address is blindly converted to a 'struct page *' that is
>>> returned to the caller. In the unlikely event of an allocation bug,
>>> page-unaligned addresses are not detected and slots can silently be
>>> double-allocated.
>>>
>>> Add a simple check of the buffer alignment in swiotlb_alloc() to make
>>> debugging a little easier if something has gone wonky.
>>>
>>> Cc: Christoph Hellwig <[email protected]>
>>> Cc: Marek Szyprowski <[email protected]>
>>> Cc: Robin Murphy <[email protected]>
>>> Cc: Petr Tesarik <[email protected]>
>>> Cc: Dexuan Cui <[email protected]>
>>> Signed-off-by: Will Deacon <[email protected]>
>>> ---
>>> kernel/dma/swiotlb.c | 6 ++++++
>>> 1 file changed, 6 insertions(+)
>>>
>>> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
>>> index 56cc08b1fbd6..4485f216e620 100644
>>> --- a/kernel/dma/swiotlb.c
>>> +++ b/kernel/dma/swiotlb.c
>>> @@ -1642,6 +1642,12 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
>>> return NULL;
>>> tlb_addr = slot_addr(pool->start, index);
>>> + if (unlikely(!PAGE_ALIGNED(tlb_addr))) {
>>> + dev_WARN_ONCE(dev, 1, "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
>>> + &tlb_addr);
>>
>> Nit: if there's cause for another respin, I'd be inclined to use a
>> straightforward "if (WARN_ON(...))" here - this condition should represent
>> SWIOTLB itself going badly wrong, which isn't really attributable to
>> whatever device happened to be involved in the call.
>
> Well, there'll definitely be a v3 thanks to my idiotic dropping of the
> 'continue' statement when I reworked the searching loop for v2.
>
> However, given that we're returning NULL, I think printing the device is
> helpful as we're likely to cause some horrible error (e.g. probe failure)
> in the caller and then it will be obvious why that happened from looking
> at the logs. So I'd prefer to keep it unless you insist.

No, helping to clarify any knock-on effects sounds like a reasonable
justification to me - I hadn't really considered that angle. I'd still
be inclined to condense it down to "if (dev_WARN_ONCE(...))" to minimise
redundancy, but that's really just a matter of personal preference.
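
For illustration, that condensed form might look roughly like this
(untested, keeping the message and cleanup from the patch unchanged):

	if (dev_WARN_ONCE(dev, !PAGE_ALIGNED(tlb_addr),
			  "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
			  &tlb_addr)) {
		swiotlb_release_slots(dev, tlb_addr);
		return NULL;
	}

dev_WARN_ONCE() expands to WARN_ONCE(), which evaluates to the condition,
so it can sit directly in the if() expression.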

Cheers,
Robin.