On Fri, 2020-09-04 at 23:25 +0800, Hillf Danton wrote:
> The DMA buffer allocated is always cleared in DMA core and this is
> making DMA_ATTR_NO_KERNEL_MAPPING non-special.
>
> Fixes: d98849aff879 ("dma-direct: handle DMA_ATTR_NO_KERNEL_MAPPING in common code")
> Cc: Kees Cook <[email protected]>
> Cc: Matthew Wilcox <[email protected]>
> Cc: Marek Szyprowski <[email protected]>
> Cc: James Bottomley <[email protected]>
> Signed-off-by: Hillf Danton <[email protected]>
> ---
>
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -178,9 +178,17 @@ void *dma_direct_alloc_pages(struct devi
>
> if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> !force_dma_unencrypted(dev)) {
> + int i;
> +
> 		/* remove any dirty cache lines on the kernel alias */
> if (!PageHighMem(page))
> arch_dma_prep_coherent(page, size);
> +
> + for (i = 0; i < size/PAGE_SIZE; i++) {
> + ret = kmap_atomic(page + i);
> + memset(ret, 0, PAGE_SIZE);
> + kunmap_atomic(ret);

This is massively expensive on PARISC and likely other VIPT/VIVT
architectures. What's the reason for clearing it? This could also be
really inefficient even on PIPT architectures if the memory is remote
to the device.
If we really have to do this, it should likely be done in the arch or
driver hooks, because there are potentially more efficient ways to do
it when we know how the architecture behaves.
James