2019-11-27 14:41:32

by Christoph Hellwig

Subject: make dma_addressing_limited work for memory encryption setups

Hi all,

this little series fixes dma_addressing_limited to return true for
systems that use bounce buffers due to memory encryption.


2019-11-27 14:42:20

by Christoph Hellwig

Subject: [PATCH 2/2] dma-mapping: force unencrypted devices are always addressing limited

Devices that are forced to DMA through unencrypted bounce buffers
need to be treated as if they are addressing limited.

Signed-off-by: Christoph Hellwig <[email protected]>
---
kernel/dma/mapping.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 1dbe6d725962..f6c35b53d996 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -416,6 +416,8 @@ EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
  */
 bool dma_addressing_limited(struct device *dev)
 {
+	if (force_dma_unencrypted(dev))
+		return true;
 	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
 		dma_get_required_mask(dev);
 }
--
2.20.1

2019-11-27 14:44:57

by Christoph Hellwig

Subject: [PATCH 1/2] dma-mapping: move dma_addressing_limited out of line

This function isn't used in the fast path, and moving it out of line
will reduce include clutter with the next change.

Signed-off-by: Christoph Hellwig <[email protected]>
---
include/linux/dma-mapping.h | 14 +-------------
kernel/dma/mapping.c | 15 +++++++++++++++
2 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index c4d8741264bd..94ef74ecd18a 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -687,19 +687,7 @@ static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
 	return dma_set_mask_and_coherent(dev, mask);
 }
 
-/**
- * dma_addressing_limited - return if the device is addressing limited
- * @dev: device to check
- *
- * Return %true if the device's DMA mask is too small to address all memory in
- * the system, else %false. Lack of addressing bits is the prime reason for
- * bounce buffering, but might not be the only one.
- */
-static inline bool dma_addressing_limited(struct device *dev)
-{
-	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
-		dma_get_required_mask(dev);
-}
+bool dma_addressing_limited(struct device *dev);
 
 #ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 12ff766ec1fa..1dbe6d725962 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -405,3 +405,18 @@ unsigned long dma_get_merge_boundary(struct device *dev)
 	return ops->get_merge_boundary(dev);
 }
 EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
+
+/**
+ * dma_addressing_limited - return if the device is addressing limited
+ * @dev: device to check
+ *
+ * Return %true if the device's DMA mask is too small to address all memory in
+ * the system, else %false. Lack of addressing bits is the prime reason for
+ * bounce buffering, but might not be the only one.
+ */
+bool dma_addressing_limited(struct device *dev)
+{
+	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
+		dma_get_required_mask(dev);
+}
+EXPORT_SYMBOL_GPL(dma_addressing_limited);
--
2.20.1

2019-11-27 17:14:53

by Matthew Wilcox

Subject: Re: [PATCH 1/2] dma-mapping: move dma_addressing_limited out of line

On Wed, Nov 27, 2019 at 03:40:05PM +0100, Christoph Hellwig wrote:
> +/**
> + * dma_addressing_limited - return if the device is addressing limited
> + * @dev: device to check
> + *
> + * Return %true if the device's DMA mask is too small to address all memory in

Could I trouble you to use a : after Return? That turns it into its
own section rather than making it part of the generic description.
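For reference, the kernel-doc convention Matthew is asking for turns the sentence into a dedicated section; a sketch of the suggested form (not the committed comment) would be:

```c
/**
 * dma_addressing_limited - return if the device is addressing limited
 * @dev: device to check
 *
 * Return: %true if the device's DMA mask is too small to address all
 * memory in the system, else %false. Lack of addressing bits is the
 * prime reason for bounce buffering, but might not be the only one.
 */
```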

2019-11-27 18:24:27

by Thomas Hellstrom

Subject: Re: [PATCH 2/2] dma-mapping: force unencrypted devices are always addressing limited

Hi,

On Wed, 2019-11-27 at 15:40 +0100, Christoph Hellwig wrote:
> Devices that are forced to DMA through unencrypted bounce buffers
> need to be treated as if they are addressing limited.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> kernel/dma/mapping.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index 1dbe6d725962..f6c35b53d996 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -416,6 +416,8 @@ EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
>   */
>  bool dma_addressing_limited(struct device *dev)
>  {
> +	if (force_dma_unencrypted(dev))
> +		return true;
>  	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
>  		dma_get_required_mask(dev);
>  }

Any chance to have the case

(swiotlb_force == SWIOTLB_FORCE)

also included?

Otherwise for the series

Reviewed-by: Thomas Hellström <[email protected]>



2019-11-28 07:55:35

by Christoph Hellwig

Subject: Re: [PATCH 2/2] dma-mapping: force unencrypted devices are always addressing limited

On Wed, Nov 27, 2019 at 06:22:57PM +0000, Thomas Hellstrom wrote:
> > bool dma_addressing_limited(struct device *dev)
> > {
> > +	if (force_dma_unencrypted(dev))
> > +		return true;
> > 	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
> > 		dma_get_required_mask(dev);
> > }
>
> Any chance to have the case
>
> (swiotlb_force == SWIOTLB_FORCE)
>
> also included?

We have a hard time handling that in generic code. Do we have any
good use case for SWIOTLB_FORCE now that we have force_dma_unencrypted?
I'd love to be able to get rid of it.

2019-11-28 08:05:59

by Thomas Hellstrom

Subject: Re: [PATCH 2/2] dma-mapping: force unencrypted devices are always addressing limited

On 11/28/19 8:51 AM, [email protected] wrote:
> On Wed, Nov 27, 2019 at 06:22:57PM +0000, Thomas Hellstrom wrote:
>>> bool dma_addressing_limited(struct device *dev)
>>> {
>>> +	if (force_dma_unencrypted(dev))
>>> +		return true;
>>> 	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
>>> 		dma_get_required_mask(dev);
>>> }
>> Any chance to have the case
>>
>> (swiotlb_force == SWIOTLB_FORCE)
>>
>> also included?
> We have a hard time handling that in generic code. Do we have any
> good use case for SWIOTLB_FORCE now that we have force_dma_unencrypted?
> I'd love to be able to get rid of it.
>
IIRC the justification for it is debugging. Drivers that don't do
syncing correctly or make incorrect assumptions about the initialization
of DMA memory will not work properly when SWIOTLB is forced. We recently
found a vmw_pvscsi device flaw that way...

/Thomas



2019-11-28 15:40:33

by Christoph Hellwig

Subject: Re: [PATCH 2/2] dma-mapping: force unencrypted devices are always addressing limited

On Thu, Nov 28, 2019 at 08:02:16AM +0000, Thomas Hellstrom wrote:
> > We have a hard time handling that in generic code. Do we have any
> > good use case for SWIOTLB_FORCE now that we have force_dma_unencrypted?
> > I'd love to be able to get rid of it.
> >
> IIRC the justification for it is debugging. Drivers that don't do
> syncing correctly or make incorrect assumptions about the initialization
> of DMA memory will not work properly when SWIOTLB is forced. We recently
> found a vmw_pvscsi device flaw that way...

Ok, I guess debugging is reasonable. Although that means I need to
respin this quite a bit, as I now need a callout to dma_direct.
I'll respin it in the next few days.