From: Walter Wu <walter-zh.wu@mediatek.com>
To: Christoph Hellwig, Marek Szyprowski, Robin Murphy, Matthias Brugger,
	Ard Biesheuvel, Andrew Morton
Cc: wsd_upstream, Walter Wu
Subject: [PATCH v2] dma-direct: improve DMA_ATTR_NO_KERNEL_MAPPING
Date: Thu, 4 Nov 2021 10:32:21 +0800
Message-ID: <20211104023221.16391-1-walter-zh.wu@mediatek.com>
X-Mailer: git-send-email 2.18.0
MIME-Version: 1.0
Content-Type: text/plain
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

When a buffer is allocated from DMA coherent memory with
DMA_ATTR_NO_KERNEL_MAPPING, its kernel mapping still exists. Callers
that pass DMA_ATTR_NO_KERNEL_MAPPING already accept that they cannot
rely on a kernel mapping, and actually removing the kernel mapping
brings two improvements:

a) Security. In some cases we do not want the allocated buffer to be
   readable through CPU speculative execution. This patch extends
   DMA_ATTR_NO_KERNEL_MAPPING to remove the pages from the kernel
   linear mapping so the CPU cannot read them.

b) Debugging. If the buffer is mapped into user space and is only
   meant to be accessed there, nothing should touch it from kernel
   space. With the kernel mapping removed, any kernel-space access
   faults, so we can see who tried to access it.

This only works if the memory is mapped at page granularity in the
linear region, so for now it is only supported on arm64.

Signed-off-by: Walter Wu <walter-zh.wu@mediatek.com>
Suggested-by: Christoph Hellwig
Suggested-by: Ard Biesheuvel
Cc: Christoph Hellwig
Cc: Marek Szyprowski
Cc: Robin Murphy
Cc: Matthias Brugger
Cc: Ard Biesheuvel
Cc: Andrew Morton
---
v2:
1. Reword the commit message and fix the mapping removal for arm64.
2. Fix a build error on x86.
---
 include/linux/set_memory.h |  5 +++++
 kernel/dma/direct.c        | 13 +++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index f36be5166c19..6c7d1683339c 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -7,11 +7,16 @@
 
 #ifdef CONFIG_ARCH_HAS_SET_MEMORY
 #include <asm/set_memory.h>
+
+#ifndef CONFIG_RODATA_FULL_DEFAULT_ENABLED
+static inline int set_memory_valid(unsigned long addr, int numpages, int enable) { return 0; }
+#endif
 #else
 static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
+static inline int set_memory_valid(unsigned long addr, int numpages, int enable) { return 0; }
 #endif
 
 #ifndef CONFIG_ARCH_HAS_SET_DIRECT_MAP
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 4c6c5e0635e3..d5d03b51b708 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -155,6 +155,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	struct page *page;
 	void *ret;
 	int err;
+	unsigned long kaddr;
 
 	size = PAGE_ALIGN(size);
 	if (attrs & DMA_ATTR_NO_WARN)
@@ -169,6 +170,11 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		if (!PageHighMem(page))
 			arch_dma_prep_coherent(page, size);
 		*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
+		if (IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED)) {
+			kaddr = (unsigned long)phys_to_virt(dma_to_phys(dev, *dma_handle));
+			/* remove the pages from the kernel mapping on arm64 */
+			set_memory_valid(kaddr, size >> PAGE_SHIFT, 0);
+		}
 		/* return the page pointer as the opaque cookie */
 		return page;
 	}
@@ -275,9 +281,16 @@ void dma_direct_free(struct device *dev, size_t size,
 		void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
 {
 	unsigned int page_order = get_order(size);
+	unsigned long kaddr;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
 	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
+		if (IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED)) {
+			size = PAGE_ALIGN(size);
+			kaddr = (unsigned long)phys_to_virt(dma_to_phys(dev, dma_addr));
+			/* restore the kernel mapping of the pages on arm64 */
+			set_memory_valid(kaddr, size >> PAGE_SHIFT, 1);
+		}
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
-- 
2.18.0
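
For context, a minimal sketch (not part of the patch) of how a driver might
use the attribute from the allocation side. The device pointer, buffer size
and function names (my_dev, MY_BUF_SIZE, my_alloc_no_mapping, ...) are made
up for illustration; only dma_alloc_attrs()/dma_free_attrs() and
DMA_ATTR_NO_KERNEL_MAPPING are existing kernel API.

#include <linux/dma-mapping.h>
#include <linux/sizes.h>

#define MY_BUF_SIZE	SZ_64K

static void *buf_cookie;	/* opaque struct page cookie, never dereferenced */
static dma_addr_t buf_dma;

static int my_alloc_no_mapping(struct device *my_dev)
{
	/*
	 * With DMA_ATTR_NO_KERNEL_MAPPING the return value is an opaque
	 * cookie, not a kernel virtual address; it may only be handed back
	 * to dma_free_attrs() or mapped into user space, never dereferenced.
	 */
	buf_cookie = dma_alloc_attrs(my_dev, MY_BUF_SIZE, &buf_dma,
				     GFP_KERNEL, DMA_ATTR_NO_KERNEL_MAPPING);
	if (!buf_cookie)
		return -ENOMEM;

	/*
	 * With this patch (arm64 with CONFIG_RODATA_FULL_DEFAULT_ENABLED),
	 * the pages behind buf_dma are also dropped from the linear map, so
	 * a stray or speculative kernel access faults instead of silently
	 * reading the buffer.
	 */
	return 0;
}

static void my_free_no_mapping(struct device *my_dev)
{
	/* dma_direct_free() restores the linear mapping before freeing */
	dma_free_attrs(my_dev, MY_BUF_SIZE, buf_cookie, buf_dma,
		       DMA_ATTR_NO_KERNEL_MAPPING);
}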