2020-12-03 06:32:38

by Mike Rapoport

Subject: [PATCH v14 00/10] mm: introduce memfd_secret system call to create "secret" memory areas

From: Mike Rapoport <[email protected]>

Hi,

@Andrew, this is based on v5.10-rc2-mmotm-2020-11-07-21-40; I can rebase on
current mmotm if you prefer.

This is an implementation of "secret" mappings backed by a file descriptor.

The file descriptor backing secret memory mappings is created using a
dedicated memfd_secret system call. The desired protection mode for the
memory is configured using the flags parameter of the system call. An
mmap() of the file descriptor created with memfd_secret() will create a
"secret" memory mapping. The pages in that mapping will be marked as not
present in the direct map and will be present only in the page table of the
owning mm.
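
For illustration, a minimal usage sketch (error handling omitted; this
mirrors the example in patch 05):

    fd = memfd_secret(0);
    ftruncate(fd, MAP_SIZE);
    ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);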

Although Linux userspace mappings are normally protected from other users,
such secret mappings are useful for environments where a hostile tenant may
try to trick the kernel into giving it access to other tenants' mappings.

Additionally, in the future the secret mappings may be used as a means to
protect guest memory in a virtual machine host.

To demonstrate secret memory usage, we've created a userspace library:

https://git.kernel.org/pub/scm/linux/kernel/git/jejb/secret-memory-preloader.git

that does two things: first, it acts as a preloader for OpenSSL,
redirecting all OPENSSL_malloc() calls to secret memory so that any secret
keys are automatically protected this way; second, it exposes the API to
users who need it directly. We anticipate that many of the use cases will
be like the OpenSSL one: toolkits that deal with secret keys already have
special handling for that memory to try to give it greater protection, so
this could simply be plugged into the toolkits without any need to modify
user applications.
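
As a concrete (hypothetical) invocation, assuming the library builds as
libsecret-memory-preloader.so, preloading could look like:

    LD_PRELOAD=/path/to/libsecret-memory-preloader.so openssl ...

The library path and name here are illustrative; see the repository above
for the actual build and usage instructions.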

Hiding secret memory mappings behind an anonymous file allows (ab)use of
the page cache for tracking pages allocated for the "secret" mappings, as
well as the use of address_space_operations for, e.g., page migration
callbacks.

The anonymous file may also be used implicitly, like hugetlb files, to
implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
ABIs in the future.

To limit fragmentation of the direct map to splitting only PUD-size pages,
I've added an amortizing cache of PMD-size pages to each file descriptor
that is used as an allocation pool for the secret memory areas.
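
For example, on x86-64 with 4K base pages this means the pool hands out 2M
(PMD-size) chunks, while splits of the direct map are limited to breaking
1G (PUD-size) pages into 2M pages.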

As the memory allocated by secretmem becomes unmovable, we use CMA to back
the large page caches so that the page allocator won't be surprised by a
failing attempt to migrate these pages.

v14:
* Finally s/mod_node_page_state/mod_lruvec_page_state/

v13: https://lore.kernel.org/lkml/[email protected]
* Added Reviewed-by, thanks Catalin and David
* s/mod_node_page_state/mod_lruvec_page_state/ as Shakeel suggested

v12: https://lore.kernel.org/lkml/[email protected]
* Add detection of whether set_direct_map has actual effect on arm64 and bail
out of CMA allocation for secretmem and the memfd_secret() syscall if pages
would not be removed from the direct map

v11: https://lore.kernel.org/lkml/[email protected]
* Drop support for uncached mappings

v10: https://lore.kernel.org/lkml/[email protected]
* Drop changes to arm64 compatibility layer
* Add Roman's Ack for memcg accounting

Older history:
v9: https://lore.kernel.org/lkml/[email protected]
v8: https://lore.kernel.org/lkml/[email protected]
v7: https://lore.kernel.org/lkml/[email protected]
v6: https://lore.kernel.org/lkml/[email protected]
v5: https://lore.kernel.org/lkml/[email protected]
v4: https://lore.kernel.org/lkml/[email protected]
v3: https://lore.kernel.org/lkml/[email protected]
v2: https://lore.kernel.org/lkml/[email protected]
v1: https://lore.kernel.org/lkml/[email protected]

Mike Rapoport (10):
mm: add definition of PMD_PAGE_ORDER
mmap: make mlock_future_check() global
set_memory: allow set_direct_map_*_noflush() for multiple pages
set_memory: allow querying whether set_direct_map_*() is actually enabled
mm: introduce memfd_secret system call to create "secret" memory areas
secretmem: use PMD-size pages to amortize direct map fragmentation
secretmem: add memcg accounting
PM: hibernate: disable when there are active secretmem users
arch, mm: wire up memfd_secret system call where relevant
secretmem: test: add basic selftest for memfd_secret(2)

arch/arm64/include/asm/Kbuild | 1 -
arch/arm64/include/asm/cacheflush.h | 6 -
arch/arm64/include/asm/set_memory.h | 17 +
arch/arm64/include/uapi/asm/unistd.h | 1 +
arch/arm64/kernel/machine_kexec.c | 1 +
arch/arm64/mm/mmu.c | 6 +-
arch/arm64/mm/pageattr.c | 23 +-
arch/riscv/include/asm/set_memory.h | 4 +-
arch/riscv/include/asm/unistd.h | 1 +
arch/riscv/mm/pageattr.c | 8 +-
arch/x86/Kconfig | 2 +-
arch/x86/entry/syscalls/syscall_32.tbl | 1 +
arch/x86/entry/syscalls/syscall_64.tbl | 1 +
arch/x86/include/asm/set_memory.h | 4 +-
arch/x86/mm/pat/set_memory.c | 8 +-
fs/dax.c | 11 +-
include/linux/pgtable.h | 3 +
include/linux/secretmem.h | 30 ++
include/linux/set_memory.h | 16 +-
include/linux/syscalls.h | 1 +
include/uapi/asm-generic/unistd.h | 6 +-
include/uapi/linux/magic.h | 1 +
kernel/power/hibernate.c | 5 +-
kernel/power/snapshot.c | 4 +-
kernel/sys_ni.c | 2 +
mm/Kconfig | 5 +
mm/Makefile | 1 +
mm/filemap.c | 3 +-
mm/gup.c | 10 +
mm/internal.h | 3 +
mm/mmap.c | 5 +-
mm/secretmem.c | 439 ++++++++++++++++++++++
mm/vmalloc.c | 5 +-
scripts/checksyscalls.sh | 4 +
tools/testing/selftests/vm/.gitignore | 1 +
tools/testing/selftests/vm/Makefile | 3 +-
tools/testing/selftests/vm/memfd_secret.c | 298 +++++++++++++++
tools/testing/selftests/vm/run_vmtests | 17 +
38 files changed, 906 insertions(+), 51 deletions(-)
create mode 100644 arch/arm64/include/asm/set_memory.h
create mode 100644 include/linux/secretmem.h
create mode 100644 mm/secretmem.c
create mode 100644 tools/testing/selftests/vm/memfd_secret.c


base-commit: 9f8ce377d420db12b19d6a4f636fecbd88a725a5
--
2.28.0


2020-12-03 06:32:46

by Mike Rapoport

Subject: [PATCH v14 01/10] mm: add definition of PMD_PAGE_ORDER

From: Mike Rapoport <[email protected]>

The definition of PMD_PAGE_ORDER, denoting the number of base pages in a
second-level leaf page, is already used by DAX and may be handy in other
cases as well.

Several architectures already define PMD_ORDER as the order of a
second-level page table, so to avoid conflicts with those definitions use
the name PMD_PAGE_ORDER and update DAX accordingly.
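
For example, with 4K base pages (PAGE_SHIFT == 12) and PMD_SHIFT == 21, as
on x86-64, PMD_PAGE_ORDER is 21 - 12 = 9, i.e. a second-level leaf page
covers 512 base pages.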

Signed-off-by: Mike Rapoport <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
---
fs/dax.c | 11 ++++-------
include/linux/pgtable.h | 3 +++
2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 26d5dcd2d69e..0f109eb16196 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -49,9 +49,6 @@ static inline unsigned int pe_order(enum page_entry_size pe_size)
#define PG_PMD_COLOUR ((PMD_SIZE >> PAGE_SHIFT) - 1)
#define PG_PMD_NR (PMD_SIZE >> PAGE_SHIFT)

-/* The order of a PMD entry */
-#define PMD_ORDER (PMD_SHIFT - PAGE_SHIFT)
-
static wait_queue_head_t wait_table[DAX_WAIT_TABLE_ENTRIES];

static int __init init_dax_wait_table(void)
@@ -98,7 +95,7 @@ static bool dax_is_locked(void *entry)
static unsigned int dax_entry_order(void *entry)
{
if (xa_to_value(entry) & DAX_PMD)
- return PMD_ORDER;
+ return PMD_PAGE_ORDER;
return 0;
}

@@ -1470,7 +1467,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
{
struct vm_area_struct *vma = vmf->vma;
struct address_space *mapping = vma->vm_file->f_mapping;
- XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_ORDER);
+ XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_PAGE_ORDER);
unsigned long pmd_addr = vmf->address & PMD_MASK;
bool write = vmf->flags & FAULT_FLAG_WRITE;
bool sync;
@@ -1529,7 +1526,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
* entry is already in the array, for instance), it will return
* VM_FAULT_FALLBACK.
*/
- entry = grab_mapping_entry(&xas, mapping, PMD_ORDER);
+ entry = grab_mapping_entry(&xas, mapping, PMD_PAGE_ORDER);
if (xa_is_internal(entry)) {
result = xa_to_internal(entry);
goto fallback;
@@ -1695,7 +1692,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
if (order == 0)
ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
#ifdef CONFIG_FS_DAX_PMD
- else if (order == PMD_ORDER)
+ else if (order == PMD_PAGE_ORDER)
ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE);
#endif
else
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 71125a4676c4..7f718b8dc789 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -28,6 +28,9 @@
#define USER_PGTABLES_CEILING 0UL
#endif

+/* Number of base pages in a second level leaf page */
+#define PMD_PAGE_ORDER (PMD_SHIFT - PAGE_SHIFT)
+
/*
* A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
*
--
2.28.0

2020-12-03 06:33:07

by Mike Rapoport

Subject: [PATCH v14 02/10] mmap: make mlock_future_check() global

From: Mike Rapoport <[email protected]>

It will be used by the upcoming secret memory implementation.
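
As a sketch, the upcoming caller (the secretmem mmap() path added in patch
05) will use it like this:

    if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
            return -EAGAIN;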

Signed-off-by: Mike Rapoport <[email protected]>
---
mm/internal.h | 3 +++
mm/mmap.c | 5 ++---
2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index c43ccdddb0f6..ae146a260b14 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -348,6 +348,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
extern void mlock_vma_page(struct page *page);
extern unsigned int munlock_vma_page(struct page *page);

+extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+ unsigned long len);
+
/*
* Clear the page's PageMlocked(). This can be useful in a situation where
* we want to unconditionally remove a page from the pagecache -- e.g.,
diff --git a/mm/mmap.c b/mm/mmap.c
index 61f72b09d990..c481f088bd50 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1348,9 +1348,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
return hint;
}

-static inline int mlock_future_check(struct mm_struct *mm,
- unsigned long flags,
- unsigned long len)
+int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+ unsigned long len)
{
unsigned long locked, lock_limit;

--
2.28.0

2020-12-03 06:33:38

by Mike Rapoport

Subject: [PATCH v14 04/10] set_memory: allow querying whether set_direct_map_*() is actually enabled

From: Mike Rapoport <[email protected]>

On arm64, the set_direct_map_*() functions may return 0 without actually
changing the linear map. This behaviour can be controlled using kernel
parameters, so we need a way to determine at runtime whether calls to
set_direct_map_invalid_noflush() and set_direct_map_default_noflush() have
any effect.

Extend the set_memory API with a can_set_direct_map() function that allows
checking whether calling set_direct_map_*() will actually change the page
table, replace several occurrences of open-coded checks in arm64 with the
new function, and provide a generic stub for architectures that always
modify page tables upon calls to the set_direct_map APIs.
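
As a sketch of the caller-side pattern (this is how the arm64 conversions
below use it):

    if (!can_set_direct_map())
            return 0;       /* the linear map will not be changed */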

Signed-off-by: Mike Rapoport <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
---
arch/arm64/include/asm/Kbuild | 1 -
arch/arm64/include/asm/cacheflush.h | 6 ------
arch/arm64/include/asm/set_memory.h | 17 +++++++++++++++++
arch/arm64/kernel/machine_kexec.c | 1 +
arch/arm64/mm/mmu.c | 6 +++---
arch/arm64/mm/pageattr.c | 13 +++++++++----
include/linux/set_memory.h | 12 ++++++++++++
7 files changed, 42 insertions(+), 14 deletions(-)
create mode 100644 arch/arm64/include/asm/set_memory.h

diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
index ff9cbb631212..4306136ef329 100644
--- a/arch/arm64/include/asm/Kbuild
+++ b/arch/arm64/include/asm/Kbuild
@@ -4,5 +4,4 @@ generic-y += local64.h
generic-y += mcs_spinlock.h
generic-y += qrwlock.h
generic-y += qspinlock.h
-generic-y += set_memory.h
generic-y += user.h
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index d3598419a284..b1bdf83a73db 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -136,12 +136,6 @@ static __always_inline void __flush_icache_all(void)
dsb(ish);
}

-int set_memory_valid(unsigned long addr, int numpages, int enable);
-
-int set_direct_map_invalid_noflush(struct page *page, int numpages);
-int set_direct_map_default_noflush(struct page *page, int numpages);
-bool kernel_page_present(struct page *page);
-
#include <asm-generic/cacheflush.h>

#endif /* __ASM_CACHEFLUSH_H */
diff --git a/arch/arm64/include/asm/set_memory.h b/arch/arm64/include/asm/set_memory.h
new file mode 100644
index 000000000000..ecb6b0f449ab
--- /dev/null
+++ b/arch/arm64/include/asm/set_memory.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef _ASM_ARM64_SET_MEMORY_H
+#define _ASM_ARM64_SET_MEMORY_H
+
+#include <asm-generic/set_memory.h>
+
+bool can_set_direct_map(void);
+#define can_set_direct_map can_set_direct_map
+
+int set_memory_valid(unsigned long addr, int numpages, int enable);
+
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
+bool kernel_page_present(struct page *page);
+
+#endif /* _ASM_ARM64_SET_MEMORY_H */
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index a0b144cfaea7..0cbc50c4fa5a 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -11,6 +11,7 @@
#include <linux/kernel.h>
#include <linux/kexec.h>
#include <linux/page-flags.h>
+#include <linux/set_memory.h>
#include <linux/smp.h>

#include <asm/cacheflush.h>
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 86be6d1a78ab..aa5ec08cb902 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -22,6 +22,7 @@
#include <linux/io.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
+#include <linux/set_memory.h>

#include <asm/barrier.h>
#include <asm/cputype.h>
@@ -477,7 +478,7 @@ static void __init map_mem(pgd_t *pgdp)
int flags = 0;
u64 i;

- if (rodata_full || debug_pagealloc_enabled())
+ if (can_set_direct_map())
flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;

/*
@@ -1453,8 +1454,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
* KFENCE requires linear map to be mapped at page granularity, so that
* it is possible to protect/unprotect single pages in the KFENCE pool.
*/
- if (rodata_full || debug_pagealloc_enabled() ||
- IS_ENABLED(CONFIG_KFENCE))
+ if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;

__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index b53ef37bf95a..d505172265b0 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -19,6 +19,11 @@ struct page_change_data {

bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED);

+bool can_set_direct_map(void)
+{
+ return rodata_full || debug_pagealloc_enabled();
+}
+
static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
{
struct page_change_data *cdata = data;
@@ -156,7 +161,7 @@ int set_direct_map_invalid_noflush(struct page *page, int numpages)
};
unsigned long size = PAGE_SIZE * numpages;

- if (!debug_pagealloc_enabled() && !rodata_full)
+ if (!can_set_direct_map())
return 0;

return apply_to_page_range(&init_mm,
@@ -172,7 +177,7 @@ int set_direct_map_default_noflush(struct page *page, int numpages)
};
unsigned long size = PAGE_SIZE * numpages;

- if (!debug_pagealloc_enabled() && !rodata_full)
+ if (!can_set_direct_map())
return 0;

return apply_to_page_range(&init_mm,
@@ -183,7 +188,7 @@ int set_direct_map_default_noflush(struct page *page, int numpages)
#ifdef CONFIG_DEBUG_PAGEALLOC
void __kernel_map_pages(struct page *page, int numpages, int enable)
{
- if (!debug_pagealloc_enabled() && !rodata_full)
+ if (!can_set_direct_map())
return;

set_memory_valid((unsigned long)page_address(page), numpages, enable);
@@ -208,7 +213,7 @@ bool kernel_page_present(struct page *page)
pte_t *ptep;
unsigned long addr = (unsigned long)page_address(page);

- if (!debug_pagealloc_enabled() && !rodata_full)
+ if (!can_set_direct_map())
return true;

pgdp = pgd_offset_k(addr);
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index c650f82db813..7b4b6626032d 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -28,7 +28,19 @@ static inline bool kernel_page_present(struct page *page)
{
return true;
}
+#else /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
+/*
+ * Some architectures, e.g. ARM64, can disable direct map modifications at
+ * boot time. Let them override this query.
+ */
+#ifndef can_set_direct_map
+static inline bool can_set_direct_map(void)
+{
+ return true;
+}
+#define can_set_direct_map can_set_direct_map
#endif
+#endif /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */

#ifndef set_mce_nospec
static inline int set_mce_nospec(unsigned long pfn, bool unmap)
--
2.28.0

2020-12-03 06:34:03

by Mike Rapoport

Subject: [PATCH v14 06/10] secretmem: use PMD-size pages to amortize direct map fragmentation

From: Mike Rapoport <[email protected]>

Removing a PAGE_SIZE page from the direct map every time such a page is
allocated for a secret memory mapping will cause severe fragmentation of
the direct map. This fragmentation can be reduced by using PMD-size pages
as a pool for small pages for secret memory mappings.

Add a gen_pool per secretmem inode and lazily populate this pool with
PMD-size pages.

As pages allocated by secretmem become unmovable, use CMA to back the large
page caches so that the page allocator won't be surprised by a failing
attempt to migrate these pages.

The CMA area used by secretmem is controlled by the "secretmem=" kernel
parameter. This allows explicit control over the memory available for
secretmem and provides a hard upper limit on secretmem consumption.
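
For example, booting with the kernel command line below reserves 256M of
CMA for secretmem; the size is parsed with memparse(), so the usual K/M/G
suffixes apply:

    secretmem=256M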

Signed-off-by: Mike Rapoport <[email protected]>
---
mm/Kconfig | 2 +
mm/secretmem.c | 152 ++++++++++++++++++++++++++++++++++++++++++-------
2 files changed, 135 insertions(+), 19 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index d8d170fa5210..e0e789398421 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -886,5 +886,7 @@ config MAPPING_DIRTY_HELPERS

config SECRETMEM
def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
+ select GENERIC_ALLOCATOR
+ select CMA

endmenu
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 781aaaca8c70..52a900a135a5 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -7,12 +7,15 @@

#include <linux/mm.h>
#include <linux/fs.h>
+#include <linux/cma.h>
#include <linux/mount.h>
#include <linux/memfd.h>
#include <linux/bitops.h>
#include <linux/printk.h>
#include <linux/pagemap.h>
+#include <linux/genalloc.h>
#include <linux/syscalls.h>
+#include <linux/memblock.h>
#include <linux/pseudo_fs.h>
#include <linux/secretmem.h>
#include <linux/set_memory.h>
@@ -35,25 +38,80 @@
#define SECRETMEM_FLAGS_MASK SECRETMEM_MODE_MASK

struct secretmem_ctx {
+ struct gen_pool *pool;
unsigned int mode;
};

-static struct page *secretmem_alloc_page(gfp_t gfp)
+static struct cma *secretmem_cma;
+
+static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
{
+ unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
+ struct gen_pool *pool = ctx->pool;
+ unsigned long addr;
+ struct page *page;
+ int err;
+
+ page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE, gfp & __GFP_NOWARN);
+ if (!page)
+ return -ENOMEM;
+
+ err = set_direct_map_invalid_noflush(page, nr_pages);
+ if (err)
+ goto err_cma_release;
+
+ addr = (unsigned long)page_address(page);
+ err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
+ if (err)
+ goto err_set_direct_map;
+
+ flush_tlb_kernel_range(addr, addr + PMD_SIZE);
+
+ return 0;
+
+err_set_direct_map:
/*
- * FIXME: use a cache of large pages to reduce the direct map
- * fragmentation
+ * If a split of a PUD-size page was required, it already happened
+ * when we marked the pages invalid, which guarantees that this call
+ * won't fail
*/
- return alloc_page(gfp);
+ set_direct_map_default_noflush(page, nr_pages);
+err_cma_release:
+ cma_release(secretmem_cma, page, nr_pages);
+ return err;
+}
+
+static struct page *secretmem_alloc_page(struct secretmem_ctx *ctx,
+ gfp_t gfp)
+{
+ struct gen_pool *pool = ctx->pool;
+ unsigned long addr;
+ struct page *page;
+ int err;
+
+ if (gen_pool_avail(pool) < PAGE_SIZE) {
+ err = secretmem_pool_increase(ctx, gfp);
+ if (err)
+ return NULL;
+ }
+
+ addr = gen_pool_alloc(pool, PAGE_SIZE);
+ if (!addr)
+ return NULL;
+
+ page = virt_to_page(addr);
+ get_page(page);
+
+ return page;
}

static vm_fault_t secretmem_fault(struct vm_fault *vmf)
{
+ struct secretmem_ctx *ctx = vmf->vma->vm_file->private_data;
struct address_space *mapping = vmf->vma->vm_file->f_mapping;
struct inode *inode = file_inode(vmf->vma->vm_file);
pgoff_t offset = vmf->pgoff;
vm_fault_t ret = 0;
- unsigned long addr;
struct page *page;
int err;

@@ -62,8 +120,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)

page = find_get_page(mapping, offset);
if (!page) {
-
- page = secretmem_alloc_page(vmf->gfp_mask);
+ page = secretmem_alloc_page(ctx, vmf->gfp_mask);
if (!page)
return vmf_error(-ENOMEM);

@@ -71,14 +128,8 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
if (unlikely(err))
goto err_put_page;

- err = set_direct_map_invalid_noflush(page, 1);
- if (err)
- goto err_del_page_cache;
-
- addr = (unsigned long)page_address(page);
- flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
-
__SetPageUptodate(page);
+ set_page_private(page, (unsigned long)ctx);

ret = VM_FAULT_LOCKED;
}
@@ -86,8 +137,6 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
vmf->page = page;
return ret;

-err_del_page_cache:
- delete_from_page_cache(page);
err_put_page:
put_page(page);
return vmf_error(err);
@@ -136,8 +185,11 @@ static int secretmem_migratepage(struct address_space *mapping,

static void secretmem_freepage(struct page *page)
{
- set_direct_map_default_noflush(page, 1);
- clear_highpage(page);
+ unsigned long addr = (unsigned long)page_address(page);
+ struct secretmem_ctx *ctx = (struct secretmem_ctx *)page_private(page);
+ struct gen_pool *pool = ctx->pool;
+
+ gen_pool_free(pool, addr, PAGE_SIZE);
}

static const struct address_space_operations secretmem_aops = {
@@ -172,13 +224,18 @@ static struct file *secretmem_file_create(unsigned long flags)
if (!ctx)
goto err_free_inode;

+ ctx->pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
+ if (!ctx->pool)
+ goto err_free_ctx;
+
file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
O_RDWR, &secretmem_fops);
if (IS_ERR(file))
- goto err_free_ctx;
+ goto err_free_pool;

mapping_set_unevictable(inode->i_mapping);

+ inode->i_private = ctx;
inode->i_mapping->private_data = ctx;
inode->i_mapping->a_ops = &secretmem_aops;

@@ -192,6 +249,8 @@ static struct file *secretmem_file_create(unsigned long flags)

return file;

+err_free_pool:
+ gen_pool_destroy(ctx->pool);
err_free_ctx:
kfree(ctx);
err_free_inode:
@@ -210,6 +269,9 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
return -EINVAL;

+ if (!secretmem_cma)
+ return -ENOMEM;
+
fd = get_unused_fd_flags(flags & O_CLOEXEC);
if (fd < 0)
return fd;
@@ -230,11 +292,37 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
return err;
}

+static void secretmem_cleanup_chunk(struct gen_pool *pool,
+ struct gen_pool_chunk *chunk, void *data)
+{
+ unsigned long start = chunk->start_addr;
+ unsigned long end = chunk->end_addr;
+ struct page *page = virt_to_page(start);
+ unsigned long nr_pages = (end - start + 1) / PAGE_SIZE;
+ int i;
+
+ set_direct_map_default_noflush(page, nr_pages);
+
+ for (i = 0; i < nr_pages; i++)
+ clear_highpage(page + i);
+
+ cma_release(secretmem_cma, page, nr_pages);
+}
+
+static void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
+{
+ struct gen_pool *pool = ctx->pool;
+
+ gen_pool_for_each_chunk(pool, secretmem_cleanup_chunk, ctx);
+ gen_pool_destroy(pool);
+}
+
static void secretmem_evict_inode(struct inode *inode)
{
struct secretmem_ctx *ctx = inode->i_private;

truncate_inode_pages_final(&inode->i_data);
+ secretmem_cleanup_pool(ctx);
clear_inode(inode);
kfree(ctx);
}
@@ -271,3 +359,29 @@ static int secretmem_init(void)
return ret;
}
fs_initcall(secretmem_init);
+
+static int __init secretmem_setup(char *str)
+{
+ phys_addr_t align = PMD_SIZE;
+ unsigned long reserved_size;
+ int err;
+
+ reserved_size = memparse(str, NULL);
+ if (!reserved_size)
+ return 0;
+
+ if (reserved_size * 2 > PUD_SIZE)
+ align = PUD_SIZE;
+
+ err = cma_declare_contiguous(0, reserved_size, 0, align, 0, false,
+ "secretmem", &secretmem_cma);
+ if (err) {
+ pr_err("failed to create CMA: %d\n", err);
+ return err;
+ }
+
+ pr_info("reserved %luM\n", reserved_size >> 20);
+
+ return 0;
+}
+__setup("secretmem=", secretmem_setup);
--
2.28.0

2020-12-03 06:34:30

by Mike Rapoport

Subject: [PATCH v14 03/10] set_memory: allow set_direct_map_*_noflush() for multiple pages

From: Mike Rapoport <[email protected]>

The underlying implementations of set_direct_map_invalid_noflush() and
set_direct_map_default_noflush() allow updating multiple contiguous pages
at once.

Add a numpages parameter to set_direct_map_*_noflush() to expose this
ability through these APIs.
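
A short sketch of the API change (numpages counts contiguous pages starting
at page):

    /* before: one page per call */
    set_direct_map_invalid_noflush(page);
    /* after: a contiguous range in one call */
    set_direct_map_invalid_noflush(page, numpages);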

Signed-off-by: Mike Rapoport <[email protected]>
Acked-by: Catalin Marinas <[email protected]> # arm64
---
arch/arm64/include/asm/cacheflush.h | 4 ++--
arch/arm64/mm/pageattr.c | 10 ++++++----
arch/riscv/include/asm/set_memory.h | 4 ++--
arch/riscv/mm/pageattr.c | 8 ++++----
arch/x86/include/asm/set_memory.h | 4 ++--
arch/x86/mm/pat/set_memory.c | 8 ++++----
include/linux/set_memory.h | 4 ++--
kernel/power/snapshot.c | 4 ++--
mm/vmalloc.c | 5 +++--
9 files changed, 27 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 45217f21f1fe..d3598419a284 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -138,8 +138,8 @@ static __always_inline void __flush_icache_all(void)

int set_memory_valid(unsigned long addr, int numpages, int enable);

-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
bool kernel_page_present(struct page *page);

#include <asm-generic/cacheflush.h>
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 92eccaf595c8..b53ef37bf95a 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,34 +148,36 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
__pgprot(PTE_VALID));
}

-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
{
struct page_change_data data = {
.set_mask = __pgprot(0),
.clear_mask = __pgprot(PTE_VALID),
};
+ unsigned long size = PAGE_SIZE * numpages;

if (!debug_pagealloc_enabled() && !rodata_full)
return 0;

return apply_to_page_range(&init_mm,
(unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ size, change_page_range, &data);
}

-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
{
struct page_change_data data = {
.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
.clear_mask = __pgprot(PTE_RDONLY),
};
+ unsigned long size = PAGE_SIZE * numpages;

if (!debug_pagealloc_enabled() && !rodata_full)
return 0;

return apply_to_page_range(&init_mm,
(unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ size, change_page_range, &data);
}

#ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
index d690b08dff2a..92b9bb26bf5e 100644
--- a/arch/riscv/include/asm/set_memory.h
+++ b/arch/riscv/include/asm/set_memory.h
@@ -22,8 +22,8 @@ static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
#endif

-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
bool kernel_page_present(struct page *page);

#endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 87ba5a68bbb8..0454f2d052c4 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -150,11 +150,11 @@ int set_memory_nx(unsigned long addr, int numpages)
return __set_memory(addr, numpages, __pgprot(0), __pgprot(_PAGE_EXEC));
}

-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
{
int ret;
unsigned long start = (unsigned long)page_address(page);
- unsigned long end = start + PAGE_SIZE;
+ unsigned long end = start + PAGE_SIZE * numpages;
struct pageattr_masks masks = {
.set_mask = __pgprot(0),
.clear_mask = __pgprot(_PAGE_PRESENT)
@@ -167,11 +167,11 @@ int set_direct_map_invalid_noflush(struct page *page)
return ret;
}

-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
{
int ret;
unsigned long start = (unsigned long)page_address(page);
- unsigned long end = start + PAGE_SIZE;
+ unsigned long end = start + PAGE_SIZE * numpages;
struct pageattr_masks masks = {
.set_mask = PAGE_KERNEL,
.clear_mask = __pgprot(0)
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 4352f08bfbb5..6224cb291f6c 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -80,8 +80,8 @@ int set_pages_wb(struct page *page, int numpages);
int set_pages_ro(struct page *page, int numpages);
int set_pages_rw(struct page *page, int numpages);

-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
bool kernel_page_present(struct page *page);

extern int kernel_set_to_readonly;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 16f878c26667..d157fd617c99 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2184,14 +2184,14 @@ static int __set_pages_np(struct page *page, int numpages)
return __change_page_attr_set_clr(&cpa, 0);
}

-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
{
- return __set_pages_np(page, 1);
+ return __set_pages_np(page, numpages);
}

-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
{
- return __set_pages_p(page, 1);
+ return __set_pages_p(page, numpages);
}

#ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index fe1aa4e54680..c650f82db813 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -15,11 +15,11 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
#endif

#ifndef CONFIG_ARCH_HAS_SET_DIRECT_MAP
-static inline int set_direct_map_invalid_noflush(struct page *page)
+static inline int set_direct_map_invalid_noflush(struct page *page, int numpages)
{
return 0;
}
-static inline int set_direct_map_default_noflush(struct page *page)
+static inline int set_direct_map_default_noflush(struct page *page, int numpages)
{
return 0;
}
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 069576704c57..d40bb6666735 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -89,9 +89,9 @@ static inline void hibernate_map_page(struct page *page, int enable)
* changes and this will no longer be the case.
*/
if (enable)
- ret = set_direct_map_default_noflush(page);
+ ret = set_direct_map_default_noflush(page, 1);
else
- ret = set_direct_map_invalid_noflush(page);
+ ret = set_direct_map_invalid_noflush(page, 1);

if (ret) {
pr_warn_once("Failed to remap page\n");
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d7075ad340aa..7e903524e002 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2179,13 +2179,14 @@ struct vm_struct *remove_vm_area(const void *addr)
}

static inline void set_area_direct_map(const struct vm_struct *area,
- int (*set_direct_map)(struct page *page))
+ int (*set_direct_map)(struct page *page,
+ int numpages))
{
int i;

for (i = 0; i < area->nr_pages; i++)
if (page_address(area->pages[i]))
- set_direct_map(area->pages[i]);
+ set_direct_map(area->pages[i], 1);
}

/* Handle removing and resetting vm mappings related to the vm_struct. */
--
2.28.0

2020-12-03 06:34:49

by Mike Rapoport

Subject: [PATCH v14 08/10] PM: hibernate: disable when there are active secretmem users

From: Mike Rapoport <[email protected]>

It is unsafe to allow saving of secretmem areas to the hibernation snapshot
as they would be visible after resume, which would essentially defeat the
purpose of secret memory mappings.

Prevent hibernation whenever there are active secret memory users.
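
The resulting check (shown in full in the diff below) is:

    return nohibernate == 0 &&
           !security_locked_down(LOCKDOWN_HIBERNATION) &&
           !secretmem_active();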

Signed-off-by: Mike Rapoport <[email protected]>
---
include/linux/secretmem.h | 6 ++++++
kernel/power/hibernate.c | 5 ++++-
mm/secretmem.c | 15 +++++++++++++++
3 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index 70e7db9f94fe..907a6734059c 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -6,6 +6,7 @@

bool vma_is_secretmem(struct vm_area_struct *vma);
bool page_is_secretmem(struct page *page);
+bool secretmem_active(void);

#else

@@ -19,6 +20,11 @@ static inline bool page_is_secretmem(struct page *page)
return false;
}

+static inline bool secretmem_active(void)
+{
+ return false;
+}
+
#endif /* CONFIG_SECRETMEM */

#endif /* _LINUX_SECRETMEM_H */
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index da0b41914177..559acef3fddb 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -31,6 +31,7 @@
#include <linux/genhd.h>
#include <linux/ktime.h>
#include <linux/security.h>
+#include <linux/secretmem.h>
#include <trace/events/power.h>

#include "power.h"
@@ -81,7 +82,9 @@ void hibernate_release(void)

bool hibernation_available(void)
{
- return nohibernate == 0 && !security_locked_down(LOCKDOWN_HIBERNATION);
+ return nohibernate == 0 &&
+ !security_locked_down(LOCKDOWN_HIBERNATION) &&
+ !secretmem_active();
}

/**
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 2390901d3ff7..7236f4d9458a 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -45,6 +45,13 @@ struct secretmem_ctx {

static struct cma *secretmem_cma;

+static atomic_t secretmem_users;
+
+bool secretmem_active(void)
+{
+ return !!atomic_read(&secretmem_users);
+}
+
static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
{
int err;
@@ -179,6 +186,12 @@ static const struct vm_operations_struct secretmem_vm_ops = {
.fault = secretmem_fault,
};

+static int secretmem_release(struct inode *inode, struct file *file)
+{
+ atomic_dec(&secretmem_users);
+ return 0;
+}
+
static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
{
unsigned long len = vma->vm_end - vma->vm_start;
@@ -201,6 +214,7 @@ bool vma_is_secretmem(struct vm_area_struct *vma)
}

static const struct file_operations secretmem_fops = {
+ .release = secretmem_release,
.mmap = secretmem_mmap,
};

@@ -318,6 +332,7 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
file->f_flags |= O_LARGEFILE;

fd_install(fd, file);
+ atomic_inc(&secretmem_users);
return fd;

err_put_fd:
--
2.28.0

2020-12-03 06:34:59

by Mike Rapoport

Subject: [PATCH v14 05/10] mm: introduce memfd_secret system call to create "secret" memory areas

From: Mike Rapoport <[email protected]>

Introduce "memfd_secret" system call with the ability to create memory
areas visible only in the context of the owning process and not mapped not
only to other processes but in the kernel page tables as well.

The user creates a file descriptor using the memfd_secret() system call.
The memory areas created by mmap() calls on this file descriptor will be
unmapped from the kernel direct map and will be mapped only in the page
table of the owning mm.

The secret memory remains accessible in the process context using uaccess
primitives, but it is not accessible using direct/linear map addresses.

Functions in the follow_page()/get_user_page() family will refuse to return
a page that belongs to the secret memory area.

A page that was part of a secret memory area is cleared when it is
freed.

The following example demonstrates creation of a secret mapping (error
handling is omitted):

fd = memfd_secret(0);
ftruncate(fd, MAP_SIZE);
ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

Signed-off-by: Mike Rapoport <[email protected]>
Acked-by: Hagen Paul Pfeifer <[email protected]>
---
arch/x86/Kconfig | 2 +-
include/linux/secretmem.h | 24 ++++
include/uapi/linux/magic.h | 1 +
kernel/sys_ni.c | 2 +
mm/Kconfig | 3 +
mm/Makefile | 1 +
mm/gup.c | 10 ++
mm/secretmem.c | 273 +++++++++++++++++++++++++++++++++++++
8 files changed, 315 insertions(+), 1 deletion(-)
create mode 100644 include/linux/secretmem.h
create mode 100644 mm/secretmem.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 34d5fb82f674..7d781fea79c2 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -41,7 +41,7 @@ config FORCE_DYNAMIC_FTRACE
in order to test the non static function tracing in the
generic code, as other architectures still use it. But we
only need to keep it around for x86_64. No need to keep it
- for x86_32. For x86_32, force DYNAMIC_FTRACE.
+ for x86_32. For x86_32, force DYNAMIC_FTRACE.
#
# Arch settings
#
diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
new file mode 100644
index 000000000000..70e7db9f94fe
--- /dev/null
+++ b/include/linux/secretmem.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _LINUX_SECRETMEM_H
+#define _LINUX_SECRETMEM_H
+
+#ifdef CONFIG_SECRETMEM
+
+bool vma_is_secretmem(struct vm_area_struct *vma);
+bool page_is_secretmem(struct page *page);
+
+#else
+
+static inline bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+ return false;
+}
+
+static inline bool page_is_secretmem(struct page *page)
+{
+ return false;
+}
+
+#endif /* CONFIG_SECRETMEM */
+
+#endif /* _LINUX_SECRETMEM_H */
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index f3956fc11de6..35687dcb1a42 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -97,5 +97,6 @@
#define DEVMEM_MAGIC 0x454d444d /* "DMEM" */
#define Z3FOLD_MAGIC 0x33
#define PPC_CMM_MAGIC 0xc7571590
+#define SECRETMEM_MAGIC 0x5345434d /* "SECM" */

#endif /* __LINUX_MAGIC_H__ */
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 2dd6cbb8cabc..805fd7a668be 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -353,6 +353,8 @@ COND_SYSCALL(pkey_mprotect);
COND_SYSCALL(pkey_alloc);
COND_SYSCALL(pkey_free);

+/* memfd_secret */
+COND_SYSCALL(memfd_secret);

/*
* Architecture specific weak syscall entries.
diff --git a/mm/Kconfig b/mm/Kconfig
index c89c5444924b..d8d170fa5210 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -884,4 +884,7 @@ config ARCH_HAS_HUGEPD
config MAPPING_DIRTY_HELPERS
bool

+config SECRETMEM
+ def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
+
endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 6eeb4b29efb8..dfda14c48a75 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -121,3 +121,4 @@ obj-$(CONFIG_MEMFD_CREATE) += memfd.o
obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o
+obj-$(CONFIG_SECRETMEM) += secretmem.o
diff --git a/mm/gup.c b/mm/gup.c
index 5ec98de1e5de..71164fa83114 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -10,6 +10,7 @@
#include <linux/rmap.h>
#include <linux/swap.h>
#include <linux/swapops.h>
+#include <linux/secretmem.h>

#include <linux/sched/signal.h>
#include <linux/rwsem.h>
@@ -793,6 +794,9 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
struct follow_page_context ctx = { NULL };
struct page *page;

+ if (vma_is_secretmem(vma))
+ return NULL;
+
page = follow_page_mask(vma, address, foll_flags, &ctx);
if (ctx.pgmap)
put_dev_pagemap(ctx.pgmap);
@@ -923,6 +927,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
if (gup_flags & FOLL_ANON && !vma_is_anonymous(vma))
return -EFAULT;

+ if (vma_is_secretmem(vma))
+ return -EFAULT;
+
if (write) {
if (!(vm_flags & VM_WRITE)) {
if (!(gup_flags & FOLL_FORCE))
@@ -2196,6 +2203,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
page = pte_page(pte);

+ if (page_is_secretmem(page))
+ goto pte_unmap;
+
head = try_grab_compound_head(page, 1, flags);
if (!head)
goto pte_unmap;
diff --git a/mm/secretmem.c b/mm/secretmem.c
new file mode 100644
index 000000000000..781aaaca8c70
--- /dev/null
+++ b/mm/secretmem.c
@@ -0,0 +1,273 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2020
+ *
+ * Author: Mike Rapoport <[email protected]>
+ */
+
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/mount.h>
+#include <linux/memfd.h>
+#include <linux/bitops.h>
+#include <linux/printk.h>
+#include <linux/pagemap.h>
+#include <linux/syscalls.h>
+#include <linux/pseudo_fs.h>
+#include <linux/secretmem.h>
+#include <linux/set_memory.h>
+#include <linux/sched/signal.h>
+
+#include <uapi/linux/magic.h>
+
+#include <asm/tlbflush.h>
+
+#include "internal.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "secretmem: " fmt
+
+/*
+ * Define mode and flag masks to allow validation of the system call
+ * parameters.
+ */
+#define SECRETMEM_MODE_MASK (0x0)
+#define SECRETMEM_FLAGS_MASK SECRETMEM_MODE_MASK
+
+struct secretmem_ctx {
+ unsigned int mode;
+};
+
+static struct page *secretmem_alloc_page(gfp_t gfp)
+{
+ /*
+ * FIXME: use a cache of large pages to reduce the direct map
+ * fragmentation
+ */
+ return alloc_page(gfp);
+}
+
+static vm_fault_t secretmem_fault(struct vm_fault *vmf)
+{
+ struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+ struct inode *inode = file_inode(vmf->vma->vm_file);
+ pgoff_t offset = vmf->pgoff;
+ vm_fault_t ret = 0;
+ unsigned long addr;
+ struct page *page;
+ int err;
+
+ if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
+ return vmf_error(-EINVAL);
+
+ page = find_get_page(mapping, offset);
+ if (!page) {
+
+ page = secretmem_alloc_page(vmf->gfp_mask);
+ if (!page)
+ return vmf_error(-ENOMEM);
+
+ err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
+ if (unlikely(err))
+ goto err_put_page;
+
+ err = set_direct_map_invalid_noflush(page, 1);
+ if (err)
+ goto err_del_page_cache;
+
+ addr = (unsigned long)page_address(page);
+ flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+ __SetPageUptodate(page);
+
+ ret = VM_FAULT_LOCKED;
+ }
+
+ vmf->page = page;
+ return ret;
+
+err_del_page_cache:
+ delete_from_page_cache(page);
+err_put_page:
+ put_page(page);
+ return vmf_error(err);
+}
+
+static const struct vm_operations_struct secretmem_vm_ops = {
+ .fault = secretmem_fault,
+};
+
+static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ unsigned long len = vma->vm_end - vma->vm_start;
+
+ if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
+ return -EINVAL;
+
+ if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
+ return -EAGAIN;
+
+ vma->vm_ops = &secretmem_vm_ops;
+ vma->vm_flags |= VM_LOCKED;
+
+ return 0;
+}
+
+bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+ return vma->vm_ops == &secretmem_vm_ops;
+}
+
+static const struct file_operations secretmem_fops = {
+ .mmap = secretmem_mmap,
+};
+
+static bool secretmem_isolate_page(struct page *page, isolate_mode_t mode)
+{
+ return false;
+}
+
+static int secretmem_migratepage(struct address_space *mapping,
+ struct page *newpage, struct page *page,
+ enum migrate_mode mode)
+{
+ return -EBUSY;
+}
+
+static void secretmem_freepage(struct page *page)
+{
+ set_direct_map_default_noflush(page, 1);
+ clear_highpage(page);
+}
+
+static const struct address_space_operations secretmem_aops = {
+ .freepage = secretmem_freepage,
+ .migratepage = secretmem_migratepage,
+ .isolate_page = secretmem_isolate_page,
+};
+
+bool page_is_secretmem(struct page *page)
+{
+ struct address_space *mapping = page_mapping(page);
+
+ if (!mapping)
+ return false;
+
+ return mapping->a_ops == &secretmem_aops;
+}
+
+static struct vfsmount *secretmem_mnt;
+
+static struct file *secretmem_file_create(unsigned long flags)
+{
+ struct file *file = ERR_PTR(-ENOMEM);
+ struct secretmem_ctx *ctx;
+ struct inode *inode;
+
+ inode = alloc_anon_inode(secretmem_mnt->mnt_sb);
+ if (IS_ERR(inode))
+ return ERR_CAST(inode);
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (!ctx)
+ goto err_free_inode;
+
+ file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
+ O_RDWR, &secretmem_fops);
+ if (IS_ERR(file))
+ goto err_free_ctx;
+
+ mapping_set_unevictable(inode->i_mapping);
+
+ inode->i_mapping->private_data = ctx;
+ inode->i_mapping->a_ops = &secretmem_aops;
+
+ /* pretend we are a normal file with zero size */
+ inode->i_mode |= S_IFREG;
+ inode->i_size = 0;
+
+ file->private_data = ctx;
+
+ ctx->mode = flags & SECRETMEM_MODE_MASK;
+
+ return file;
+
+err_free_ctx:
+ kfree(ctx);
+err_free_inode:
+ iput(inode);
+ return file;
+}
+
+SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
+{
+ struct file *file;
+ int fd, err;
+
+ /* make sure local flags do not conflict with global fcntl.h */
+ BUILD_BUG_ON(SECRETMEM_FLAGS_MASK & O_CLOEXEC);
+
+ if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
+ return -EINVAL;
+
+ fd = get_unused_fd_flags(flags & O_CLOEXEC);
+ if (fd < 0)
+ return fd;
+
+ file = secretmem_file_create(flags);
+ if (IS_ERR(file)) {
+ err = PTR_ERR(file);
+ goto err_put_fd;
+ }
+
+ file->f_flags |= O_LARGEFILE;
+
+ fd_install(fd, file);
+ return fd;
+
+err_put_fd:
+ put_unused_fd(fd);
+ return err;
+}
+
+static void secretmem_evict_inode(struct inode *inode)
+{
+ struct secretmem_ctx *ctx = inode->i_private;
+
+ truncate_inode_pages_final(&inode->i_data);
+ clear_inode(inode);
+ kfree(ctx);
+}
+
+static const struct super_operations secretmem_super_ops = {
+ .evict_inode = secretmem_evict_inode,
+};
+
+static int secretmem_init_fs_context(struct fs_context *fc)
+{
+ struct pseudo_fs_context *ctx = init_pseudo(fc, SECRETMEM_MAGIC);
+
+ if (!ctx)
+ return -ENOMEM;
+ ctx->ops = &secretmem_super_ops;
+
+ return 0;
+}
+
+static struct file_system_type secretmem_fs = {
+ .name = "secretmem",
+ .init_fs_context = secretmem_init_fs_context,
+ .kill_sb = kill_anon_super,
+};
+
+static int secretmem_init(void)
+{
+ int ret = 0;
+
+ secretmem_mnt = kern_mount(&secretmem_fs);
+ if (IS_ERR(secretmem_mnt))
+ ret = PTR_ERR(secretmem_mnt);
+
+ return ret;
+}
+fs_initcall(secretmem_init);
--
2.28.0

2020-12-03 06:35:22

by Mike Rapoport

Subject: [PATCH v14 07/10] secretmem: add memcg accounting

From: Mike Rapoport <[email protected]>

Account memory consumed by secretmem to memcg. The accounting is updated
when the memory is actually allocated and freed.

Signed-off-by: Mike Rapoport <[email protected]>
Acked-by: Roman Gushchin <[email protected]>
---
mm/filemap.c | 3 ++-
mm/secretmem.c | 36 +++++++++++++++++++++++++++++++++++-
2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 249cf489f5df..cf7f1dc9f4b8 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -42,6 +42,7 @@
#include <linux/psi.h>
#include <linux/ramfs.h>
#include <linux/page_idle.h>
+#include <linux/secretmem.h>
#include "internal.h"

#define CREATE_TRACE_POINTS
@@ -844,7 +845,7 @@ static noinline int __add_to_page_cache_locked(struct page *page,
page->mapping = mapping;
page->index = offset;

- if (!huge) {
+ if (!huge && !page_is_secretmem(page)) {
error = mem_cgroup_charge(page, current->mm, gfp);
if (error)
goto error;
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 52a900a135a5..2390901d3ff7 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -18,6 +18,7 @@
#include <linux/memblock.h>
#include <linux/pseudo_fs.h>
#include <linux/secretmem.h>
+#include <linux/memcontrol.h>
#include <linux/set_memory.h>
#include <linux/sched/signal.h>

@@ -44,6 +45,32 @@ struct secretmem_ctx {

static struct cma *secretmem_cma;

+static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
+{
+ int err;
+
+ err = memcg_kmem_charge_page(page, gfp, order);
+ if (err)
+ return err;
+
+ /*
+ * secretmem caches are unreclaimable kernel allocations, so treat
+ * them as unreclaimable slab memory for VM statistics purposes
+ */
+ mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+ PAGE_SIZE << order);
+
+ return 0;
+}
+
+static void secretmem_unaccount_pages(struct page *page, int order)
+{
+
+ mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+ -PAGE_SIZE << order);
+ memcg_kmem_uncharge_page(page, order);
+}
+
static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
{
unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
@@ -56,10 +83,14 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
if (!page)
return -ENOMEM;

- err = set_direct_map_invalid_noflush(page, nr_pages);
+ err = secretmem_account_pages(page, gfp, PMD_PAGE_ORDER);
if (err)
goto err_cma_release;

+ err = set_direct_map_invalid_noflush(page, nr_pages);
+ if (err)
+ goto err_memcg_uncharge;
+
addr = (unsigned long)page_address(page);
err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
if (err)
@@ -76,6 +107,8 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
* won't fail
*/
set_direct_map_default_noflush(page, nr_pages);
+err_memcg_uncharge:
+ secretmem_unaccount_pages(page, PMD_PAGE_ORDER);
err_cma_release:
cma_release(secretmem_cma, page, nr_pages);
return err;
@@ -302,6 +335,7 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool,
int i;

set_direct_map_default_noflush(page, nr_pages);
+ secretmem_unaccount_pages(page, PMD_PAGE_ORDER);

for (i = 0; i < nr_pages; i++)
clear_highpage(page + i);
--
2.28.0

2020-12-03 06:36:30

by Mike Rapoport

Subject: [PATCH v14 10/10] secretmem: test: add basic selftest for memfd_secret(2)

From: Mike Rapoport <[email protected]>

The test verifies that a file descriptor created with memfd_secret does
not allow read/write operations, that secret memory mappings respect
RLIMIT_MEMLOCK, and that remote accesses to the secret memory with
process_vm_read() and ptrace() fail.
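
To build and run the test standalone (a sketch of the usual selftests
flow; the test is also wired into run_vmtests below):

    $ cd tools/testing/selftests/vm
    $ make
    $ ./memfd_secret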

Signed-off-by: Mike Rapoport <[email protected]>
---
tools/testing/selftests/vm/.gitignore | 1 +
tools/testing/selftests/vm/Makefile | 3 +-
tools/testing/selftests/vm/memfd_secret.c | 298 ++++++++++++++++++++++
tools/testing/selftests/vm/run_vmtests | 17 ++
4 files changed, 318 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/vm/memfd_secret.c

diff --git a/tools/testing/selftests/vm/.gitignore b/tools/testing/selftests/vm/.gitignore
index 9a35c3f6a557..c8deddc81e7a 100644
--- a/tools/testing/selftests/vm/.gitignore
+++ b/tools/testing/selftests/vm/.gitignore
@@ -21,4 +21,5 @@ va_128TBswitch
map_fixed_noreplace
write_to_hugetlbfs
hmm-tests
+memfd_secret
local_config.*
diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
index 62fb15f286ee..9ab98946fbf2 100644
--- a/tools/testing/selftests/vm/Makefile
+++ b/tools/testing/selftests/vm/Makefile
@@ -34,6 +34,7 @@ TEST_GEN_FILES += khugepaged
TEST_GEN_FILES += map_fixed_noreplace
TEST_GEN_FILES += map_hugetlb
TEST_GEN_FILES += map_populate
+TEST_GEN_FILES += memfd_secret
TEST_GEN_FILES += mlock-random-test
TEST_GEN_FILES += mlock2-tests
TEST_GEN_FILES += mremap_dontunmap
@@ -129,7 +130,7 @@ warn_32bit_failure:
endif
endif

-$(OUTPUT)/mlock-random-test: LDLIBS += -lcap
+$(OUTPUT)/mlock-random-test $(OUTPUT)/memfd_secret: LDLIBS += -lcap

$(OUTPUT)/gup_test: ../../../../mm/gup_test.h

diff --git a/tools/testing/selftests/vm/memfd_secret.c b/tools/testing/selftests/vm/memfd_secret.c
new file mode 100644
index 000000000000..79578dfd13e6
--- /dev/null
+++ b/tools/testing/selftests/vm/memfd_secret.c
@@ -0,0 +1,298 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2020
+ *
+ * Author: Mike Rapoport <[email protected]>
+ */
+
+#define _GNU_SOURCE
+#include <sys/uio.h>
+#include <sys/mman.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+#include <sys/ptrace.h>
+#include <sys/syscall.h>
+#include <sys/resource.h>
+#include <sys/capability.h>
+
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <errno.h>
+#include <stdio.h>
+
+#include "../kselftest.h"
+
+#define fail(fmt, ...) ksft_test_result_fail(fmt, ##__VA_ARGS__)
+#define pass(fmt, ...) ksft_test_result_pass(fmt, ##__VA_ARGS__)
+#define skip(fmt, ...) ksft_test_result_skip(fmt, ##__VA_ARGS__)
+
+#ifdef __NR_memfd_secret
+
+#include <linux/secretmem.h>
+
+#define PATTERN 0x55
+
+static const int prot = PROT_READ | PROT_WRITE;
+static const int mode = MAP_SHARED;
+
+static unsigned long page_size;
+static unsigned long mlock_limit_cur;
+static unsigned long mlock_limit_max;
+
+static int memfd_secret(unsigned long flags)
+{
+ return syscall(__NR_memfd_secret, flags);
+}
+
+static void test_file_apis(int fd)
+{
+ char buf[64];
+
+ if ((read(fd, buf, sizeof(buf)) >= 0) ||
+ (write(fd, buf, sizeof(buf)) >= 0) ||
+ (pread(fd, buf, sizeof(buf), 0) >= 0) ||
+ (pwrite(fd, buf, sizeof(buf), 0) >= 0))
+ fail("unexpected file IO\n");
+ else
+ pass("file IO is blocked as expected\n");
+}
+
+static void test_mlock_limit(int fd)
+{
+ size_t len;
+ char *mem;
+
+ len = mlock_limit_cur;
+ mem = mmap(NULL, len, prot, mode, fd, 0);
+ if (mem == MAP_FAILED) {
+ fail("unable to mmap secret memory\n");
+ return;
+ }
+ munmap(mem, len);
+
+ len = mlock_limit_max * 2;
+ mem = mmap(NULL, len, prot, mode, fd, 0);
+ if (mem != MAP_FAILED) {
+ fail("unexpected mlock limit violation\n");
+ munmap(mem, len);
+ return;
+ }
+
+ pass("mlock limit is respected\n");
+}
+
+static void try_process_vm_read(int fd, int pipefd[2])
+{
+ struct iovec liov, riov;
+ char buf[64];
+ char *mem;
+
+ if (read(pipefd[0], &mem, sizeof(mem)) < 0) {
+ fail("pipe write: %s\n", strerror(errno));
+ exit(KSFT_FAIL);
+ }
+
+ liov.iov_len = riov.iov_len = sizeof(buf);
+ liov.iov_base = buf;
+ riov.iov_base = mem;
+
+ if (process_vm_readv(getppid(), &liov, 1, &riov, 1, 0) < 0) {
+ if (errno == ENOSYS)
+ exit(KSFT_SKIP);
+ exit(KSFT_PASS);
+ }
+
+ exit(KSFT_FAIL);
+}
+
+static void try_ptrace(int fd, int pipefd[2])
+{
+ pid_t ppid = getppid();
+ int status;
+ char *mem;
+ long ret;
+
+ if (read(pipefd[0], &mem, sizeof(mem)) < 0) {
+ perror("pipe write");
+ exit(KSFT_FAIL);
+ }
+
+ ret = ptrace(PTRACE_ATTACH, ppid, 0, 0);
+ if (ret) {
+ perror("ptrace_attach");
+ exit(KSFT_FAIL);
+ }
+
+ ret = waitpid(ppid, &status, WUNTRACED);
+ if ((ret != ppid) || !(WIFSTOPPED(status))) {
+ fprintf(stderr, "weird waitppid result %ld stat %x\n",
+ ret, status);
+ exit(KSFT_FAIL);
+ }
+
+ if (ptrace(PTRACE_PEEKDATA, ppid, mem, 0))
+ exit(KSFT_PASS);
+
+ exit(KSFT_FAIL);
+}
+
+static void check_child_status(pid_t pid, const char *name)
+{
+ int status;
+
+ waitpid(pid, &status, 0);
+
+ if (WIFEXITED(status) && WEXITSTATUS(status) == KSFT_SKIP) {
+ skip("%s is not supported\n", name);
+ return;
+ }
+
+ if ((WIFEXITED(status) && WEXITSTATUS(status) == KSFT_PASS) ||
+ WIFSIGNALED(status)) {
+ pass("%s is blocked as expected\n", name);
+ return;
+ }
+
+ fail("%s: unexpected memory access\n", name);
+}
+
+static void test_remote_access(int fd, const char *name,
+ void (*func)(int fd, int pipefd[2]))
+{
+ int pipefd[2];
+ pid_t pid;
+ char *mem;
+
+ if (pipe(pipefd)) {
+ fail("pipe failed: %s\n", strerror(errno));
+ return;
+ }
+
+ pid = fork();
+ if (pid < 0) {
+ fail("fork failed: %s\n", strerror(errno));
+ return;
+ }
+
+ if (pid == 0) {
+ func(fd, pipefd);
+ return;
+ }
+
+ mem = mmap(NULL, page_size, prot, mode, fd, 0);
+ if (mem == MAP_FAILED) {
+ fail("Unable to mmap secret memory\n");
+ return;
+ }
+
+ ftruncate(fd, page_size);
+ memset(mem, PATTERN, page_size);
+
+ if (write(pipefd[1], &mem, sizeof(mem)) < 0) {
+ fail("pipe write: %s\n", strerror(errno));
+ return;
+ }
+
+ check_child_status(pid, name);
+}
+
+static void test_process_vm_read(int fd)
+{
+ test_remote_access(fd, "process_vm_read", try_process_vm_read);
+}
+
+static void test_ptrace(int fd)
+{
+ test_remote_access(fd, "ptrace", try_ptrace);
+}
+
+static int set_cap_limits(rlim_t max)
+{
+ struct rlimit new;
+ cap_t cap = cap_init();
+
+ new.rlim_cur = max;
+ new.rlim_max = max;
+ if (setrlimit(RLIMIT_MEMLOCK, &new)) {
+ perror("setrlimit() returns error");
+ return -1;
+ }
+
+ /* drop capabilities including CAP_IPC_LOCK */
+ if (cap_set_proc(cap)) {
+ perror("cap_set_proc() returns error");
+ return -2;
+ }
+
+ return 0;
+}
+
+static void prepare(void)
+{
+ struct rlimit rlim;
+
+ page_size = sysconf(_SC_PAGE_SIZE);
+ if (!page_size)
+ ksft_exit_fail_msg("Failed to get page size %s\n",
+ strerror(errno));
+
+ if (getrlimit(RLIMIT_MEMLOCK, &rlim))
+ ksft_exit_fail_msg("Unable to detect mlock limit: %s\n",
+ strerror(errno));
+
+ mlock_limit_cur = rlim.rlim_cur;
+ mlock_limit_max = rlim.rlim_max;
+
+ printf("page_size: %ld, mlock.soft: %ld, mlock.hard: %ld\n",
+ page_size, mlock_limit_cur, mlock_limit_max);
+
+ if (page_size > mlock_limit_cur)
+ mlock_limit_cur = page_size;
+ if (page_size > mlock_limit_max)
+ mlock_limit_max = page_size;
+
+ if (set_cap_limits(mlock_limit_max))
+ ksft_exit_fail_msg("Unable to set mlock limit: %s\n",
+ strerror(errno));
+}
+
+#define NUM_TESTS 4
+
+int main(int argc, char *argv[])
+{
+ int fd;
+
+ prepare();
+
+ ksft_print_header();
+ ksft_set_plan(NUM_TESTS);
+
+ fd = memfd_secret(0);
+ if (fd < 0) {
+ if (errno == ENOSYS)
+ ksft_exit_skip("memfd_secret is not supported\n");
+ else
+ ksft_exit_fail_msg("memfd_secret failed: %s\n",
+ strerror(errno));
+ }
+
+ test_mlock_limit(fd);
+ test_file_apis(fd);
+ test_process_vm_read(fd);
+ test_ptrace(fd);
+
+ close(fd);
+
+ ksft_exit(!ksft_get_fail_cnt());
+}
+
+#else /* __NR_memfd_secret */
+
+int main(int argc, char *argv[])
+{
+ printf("skip: skipping memfd_secret test (missing __NR_memfd_secret)\n");
+ return KSFT_SKIP;
+}
+
+#endif /* __NR_memfd_secret */
diff --git a/tools/testing/selftests/vm/run_vmtests b/tools/testing/selftests/vm/run_vmtests
index e953f3cd9664..95a67382f132 100755
--- a/tools/testing/selftests/vm/run_vmtests
+++ b/tools/testing/selftests/vm/run_vmtests
@@ -346,4 +346,19 @@ else
exitcode=1
fi

+echo "running memfd_secret test"
+echo "------------------------------------"
+./memfd_secret
+ret_val=$?
+
+if [ $ret_val -eq 0 ]; then
+ echo "[PASS]"
+elif [ $ret_val -eq $ksft_skip ]; then
+ echo "[SKIP]"
+ exitcode=$ksft_skip
+else
+ echo "[FAIL]"
+ exitcode=1
+fi
+
exit $exitcode
--
2.28.0
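
For orientation, the usage pattern the selftest above exercises boils down
to a handful of calls. A minimal sketch (error handling reduced to the bare
minimum; it assumes headers from a kernel carrying this series, since
__NR_memfd_secret is defined only there):

#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	long page_size = sysconf(_SC_PAGE_SIZE);
	char *secret;
	int fd;

	/* create the file descriptor backing the secret mapping */
	fd = syscall(__NR_memfd_secret, 0);
	if (fd < 0)
		return 1;

	/* size the backing file before touching the mapping */
	if (ftruncate(fd, page_size) < 0)
		return 1;

	/* faulted-in pages are removed from the kernel direct map */
	secret = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
		      MAP_SHARED, fd, 0);
	if (secret == MAP_FAILED)
		return 1;

	memcpy(secret, "key material", 13);	/* visible only to this mm */

	munmap(secret, page_size);
	close(fd);
	return 0;
}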

2020-12-03 06:37:04

by Mike Rapoport

[permalink] [raw]
Subject: [PATCH v14 09/10] arch, mm: wire up memfd_secret system call where relevant

From: Mike Rapoport <[email protected]>

Wire up memfd_secret system call on architectures that define
ARCH_HAS_SET_DIRECT_MAP, namely arm64, risc-v and x86.

Signed-off-by: Mike Rapoport <[email protected]>
Acked-by: Palmer Dabbelt <[email protected]>
Acked-by: Arnd Bergmann <[email protected]>
---
arch/arm64/include/uapi/asm/unistd.h | 1 +
arch/riscv/include/asm/unistd.h | 1 +
arch/x86/entry/syscalls/syscall_32.tbl | 1 +
arch/x86/entry/syscalls/syscall_64.tbl | 1 +
include/linux/syscalls.h | 1 +
include/uapi/asm-generic/unistd.h | 6 +++++-
mm/secretmem.c | 3 +++
scripts/checksyscalls.sh | 4 ++++
8 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/uapi/asm/unistd.h b/arch/arm64/include/uapi/asm/unistd.h
index f83a70e07df8..ce2ee8f1e361 100644
--- a/arch/arm64/include/uapi/asm/unistd.h
+++ b/arch/arm64/include/uapi/asm/unistd.h
@@ -20,5 +20,6 @@
#define __ARCH_WANT_SET_GET_RLIMIT
#define __ARCH_WANT_TIME32_SYSCALLS
#define __ARCH_WANT_SYS_CLONE3
+#define __ARCH_WANT_MEMFD_SECRET

#include <asm-generic/unistd.h>
diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
index 977ee6181dab..6c316093a1e5 100644
--- a/arch/riscv/include/asm/unistd.h
+++ b/arch/riscv/include/asm/unistd.h
@@ -9,6 +9,7 @@
*/

#define __ARCH_WANT_SYS_CLONE
+#define __ARCH_WANT_MEMFD_SECRET

#include <uapi/asm/unistd.h>

diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index c52ab1c4a755..109e6681b8fa 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -446,3 +446,4 @@
439 i386 faccessat2 sys_faccessat2
440 i386 process_madvise sys_process_madvise
441 i386 watch_mount sys_watch_mount
+442 i386 memfd_secret sys_memfd_secret
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index f3270a9ef467..742cf17d7725 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -363,6 +363,7 @@
439 common faccessat2 sys_faccessat2
440 common process_madvise sys_process_madvise
441 common watch_mount sys_watch_mount
+442 common memfd_secret sys_memfd_secret

#
# Due to a historical design error, certain syscalls are numbered differently
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 6d55324363ab..f9d93fbf9b69 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -1010,6 +1010,7 @@ asmlinkage long sys_pidfd_send_signal(int pidfd, int sig,
asmlinkage long sys_pidfd_getfd(int pidfd, int fd, unsigned int flags);
asmlinkage long sys_watch_mount(int dfd, const char __user *path,
unsigned int at_flags, int watch_fd, int watch_id);
+asmlinkage long sys_memfd_secret(unsigned long flags);

/*
* Architecture-specific system calls
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 5df46517260e..51151888f330 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -861,9 +861,13 @@ __SYSCALL(__NR_faccessat2, sys_faccessat2)
__SYSCALL(__NR_process_madvise, sys_process_madvise)
#define __NR_watch_mount 441
__SYSCALL(__NR_watch_mount, sys_watch_mount)
+#ifdef __ARCH_WANT_MEMFD_SECRET
+#define __NR_memfd_secret 442
+__SYSCALL(__NR_memfd_secret, sys_memfd_secret)
+#endif

#undef __NR_syscalls
-#define __NR_syscalls 442
+#define __NR_syscalls 443

/*
* 32 bit systems traditionally used different
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 7236f4d9458a..b8a32954ac68 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -415,6 +415,9 @@ static int __init secretmem_setup(char *str)
unsigned long reserved_size;
int err;

+ if (!can_set_direct_map())
+ return 0;
+
reserved_size = memparse(str, NULL);
if (!reserved_size)
return 0;
diff --git a/scripts/checksyscalls.sh b/scripts/checksyscalls.sh
index a18b47695f55..b7609958ee36 100755
--- a/scripts/checksyscalls.sh
+++ b/scripts/checksyscalls.sh
@@ -40,6 +40,10 @@ cat << EOF
#define __IGNORE_setrlimit /* setrlimit */
#endif

+#ifndef __ARCH_WANT_MEMFD_SECRET
+#define __IGNORE_memfd_secret
+#endif
+
/* Missing flags argument */
#define __IGNORE_renameat /* renameat2 */

--
2.28.0

2020-12-03 15:52:28

by Shakeel Butt

[permalink] [raw]
Subject: Re: [PATCH v14 07/10] secretmem: add memcg accounting

On Wed, Dec 2, 2020 at 10:31 PM Mike Rapoport <[email protected]> wrote:
>
> From: Mike Rapoport <[email protected]>
>
> Account memory consumed by secretmem to memcg. The accounting is updated
> when the memory is actually allocated and freed.
>
> Signed-off-by: Mike Rapoport <[email protected]>
> Acked-by: Roman Gushchin <[email protected]>

Reviewed-by: Shakeel Butt <[email protected]>

2020-12-03 23:40:00

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH v14 04/10] set_memory: allow querying whether set_direct_map_*() is actually enabled

On Thu, 3 Dec 2020 08:29:43 +0200 Mike Rapoport <[email protected]> wrote:

> From: Mike Rapoport <[email protected]>
>
> On arm64, set_direct_map_*() functions may return 0 without actually
> changing the linear map. This behaviour can be controlled using kernel
> parameters, so we need a way to determine at runtime whether calls to
> set_direct_map_invalid_noflush() and set_direct_map_default_noflush() have
> any effect.
>
> Extend set_memory API with can_set_direct_map() function that allows
> checking if calling set_direct_map_*() will actually change the page table,
> replace several occurrences of open coded checks in arm64 with the new
> function and provide a generic stub for architectures that always modify
> page tables upon calls to set_direct_map APIs.
>
> ...
>
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -22,6 +22,7 @@
> #include <linux/io.h>
> #include <linux/mm.h>
> #include <linux/vmalloc.h>
> +#include <linux/set_memory.h>
>
> #include <asm/barrier.h>
> #include <asm/cputype.h>
> @@ -477,7 +478,7 @@ static void __init map_mem(pgd_t *pgdp)
> int flags = 0;
> u64 i;
>
> - if (rodata_full || debug_pagealloc_enabled())
> + if (can_set_direct_map())
> flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;

Changes in -next turned this into

if (can_set_direct_map() || crash_mem_map)


2020-12-03 23:42:44

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH v14 09/10] arch, mm: wire up memfd_secret system call where relevant

On Thu, 3 Dec 2020 08:29:48 +0200 Mike Rapoport <[email protected]> wrote:

> From: Mike Rapoport <[email protected]>
>
> Wire up memfd_secret system call on architectures that define
> ARCH_HAS_SET_DIRECT_MAP, namely arm64, risc-v and x86.
>
> ...
>
> --- a/include/uapi/asm-generic/unistd.h
> +++ b/include/uapi/asm-generic/unistd.h
> @@ -861,9 +861,13 @@ __SYSCALL(__NR_faccessat2, sys_faccessat2)
> __SYSCALL(__NR_process_madvise, sys_process_madvise)
> #define __NR_watch_mount 441
> __SYSCALL(__NR_watch_mount, sys_watch_mount)
> +#ifdef __ARCH_WANT_MEMFD_SECRET
> +#define __NR_memfd_secret 442
> +__SYSCALL(__NR_memfd_secret, sys_memfd_secret)
> +#endif

Why do we add the ifdef? Can't we simply define the syscall on all
architectures and let sys_ni do its thing?
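
For readers unfamiliar with the fallback Andrew refers to: kernel/sys_ni.c
keeps a catch-all stub, and COND_SYSCALL() weakly aliases any unwired
syscall to it. A rough sketch of the mechanism (the memfd_secret line is
illustrative, not a quote of the tree):

/* sketch of the sys_ni fallback (cf. kernel/sys_ni.c) */
asmlinkage long sys_ni_syscall(void)
{
	return -ENOSYS;		/* what userspace sees */
}

/*
 * COND_SYSCALL() makes sys_memfd_secret a weak alias of the stub
 * above, so a kernel that does not wire the syscall up still links
 * and simply returns -ENOSYS at runtime.
 */
COND_SYSCALL(memfd_secret);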

2020-12-06 11:33:58

by Mike Rapoport

[permalink] [raw]
Subject: Re: [PATCH v14 04/10] set_memory: allow querying whether set_direct_map_*() is actually enabled

On Thu, Dec 03, 2020 at 03:36:10PM -0800, Andrew Morton wrote:
> On Thu, 3 Dec 2020 08:29:43 +0200 Mike Rapoport <[email protected]> wrote:
>
> > From: Mike Rapoport <[email protected]>
> >
> > On arm64, set_direct_map_*() functions may return 0 without actually
> > changing the linear map. This behaviour can be controlled using kernel
> > parameters, so we need a way to determine at runtime whether calls to
> > set_direct_map_invalid_noflush() and set_direct_map_default_noflush() have
> > any effect.
> >
> > Extend set_memory API with can_set_direct_map() function that allows
> > checking if calling set_direct_map_*() will actually change the page table,
> > replace several occurrences of open coded checks in arm64 with the new
> > function and provide a generic stub for architectures that always modify
> > page tables upon calls to set_direct_map APIs.
> >
> > ...
> >
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -22,6 +22,7 @@
> > #include <linux/io.h>
> > #include <linux/mm.h>
> > #include <linux/vmalloc.h>
> > +#include <linux/set_memory.h>
> >
> > #include <asm/barrier.h>
> > #include <asm/cputype.h>
> > @@ -477,7 +478,7 @@ static void __init map_mem(pgd_t *pgdp)
> > int flags = 0;
> > u64 i;
> >
> > - if (rodata_full || debug_pagealloc_enabled())
> > + if (can_set_direct_map())
> > flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> Changes in -next turned this into
>
> if (can_set_direct_map() || crash_mem_map)

Thanks for updating!

--
Sincerely yours,
Mike.

2020-12-06 11:36:45

by Mike Rapoport

[permalink] [raw]
Subject: Re: [PATCH v14 09/10] arch, mm: wire up memfd_secret system call where relevant

On Thu, Dec 03, 2020 at 03:39:16PM -0800, Andrew Morton wrote:
> On Thu, 3 Dec 2020 08:29:48 +0200 Mike Rapoport <[email protected]> wrote:
>
> > From: Mike Rapoport <[email protected]>
> >
> > Wire up memfd_secret system call on architectures that define
> > ARCH_HAS_SET_DIRECT_MAP, namely arm64, risc-v and x86.
> >
> > ...
> >
> > --- a/include/uapi/asm-generic/unistd.h
> > +++ b/include/uapi/asm-generic/unistd.h
> > @@ -861,9 +861,13 @@ __SYSCALL(__NR_faccessat2, sys_faccessat2)
> > __SYSCALL(__NR_process_madvise, sys_process_madvise)
> > #define __NR_watch_mount 441
> > __SYSCALL(__NR_watch_mount, sys_watch_mount)
> > +#ifdef __ARCH_WANT_MEMFD_SECRET
> > +#define __NR_memfd_secret 442
> > +__SYSCALL(__NR_memfd_secret, sys_memfd_secret)
> > +#endif
>
> Why do we add the ifdef? Can't we simply define the syscall on all
> architectures and let sys_ni do its thing?

I quite blindly copied it from clone3. I agree there is no real need for
it and sys_ni handles this just fine.

--
Sincerely yours,
Mike.

2020-12-07 14:50:35

by Qian Cai

[permalink] [raw]
Subject: Re: [PATCH v14 09/10] arch, mm: wire up memfd_secret system call where relevant

On Thu, 2020-12-03 at 08:29 +0200, Mike Rapoport wrote:
> From: Mike Rapoport <[email protected]>
>
> Wire up memfd_secret system call on architectures that define
> ARCH_HAS_SET_DIRECT_MAP, namely arm64, risc-v and x86.
>
> Signed-off-by: Mike Rapoport <[email protected]>
> Acked-by: Palmer Dabbelt <[email protected]>
> Acked-by: Arnd Bergmann <[email protected]>
> ---
> arch/arm64/include/uapi/asm/unistd.h | 1 +
> arch/riscv/include/asm/unistd.h | 1 +
> arch/x86/entry/syscalls/syscall_32.tbl | 1 +
> arch/x86/entry/syscalls/syscall_64.tbl | 1 +
> include/linux/syscalls.h | 1 +
> include/uapi/asm-generic/unistd.h | 6 +++++-
> mm/secretmem.c | 3 +++
> scripts/checksyscalls.sh | 4 ++++
> 8 files changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/uapi/asm/unistd.h b/arch/arm64/include/uapi/asm/unistd.h
> index f83a70e07df8..ce2ee8f1e361 100644
> --- a/arch/arm64/include/uapi/asm/unistd.h
> +++ b/arch/arm64/include/uapi/asm/unistd.h
> @@ -20,5 +20,6 @@
> #define __ARCH_WANT_SET_GET_RLIMIT
> #define __ARCH_WANT_TIME32_SYSCALLS
> #define __ARCH_WANT_SYS_CLONE3
> +#define __ARCH_WANT_MEMFD_SECRET
>
> #include <asm-generic/unistd.h>
> diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
> index 977ee6181dab..6c316093a1e5 100644
> --- a/arch/riscv/include/asm/unistd.h
> +++ b/arch/riscv/include/asm/unistd.h
> @@ -9,6 +9,7 @@
> */
>
> #define __ARCH_WANT_SYS_CLONE
> +#define __ARCH_WANT_MEMFD_SECRET
>
> #include <uapi/asm/unistd.h>
>
> diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
> index c52ab1c4a755..109e6681b8fa 100644
> --- a/arch/x86/entry/syscalls/syscall_32.tbl
> +++ b/arch/x86/entry/syscalls/syscall_32.tbl
> @@ -446,3 +446,4 @@
> 439 i386 faccessat2 sys_faccessat2
> 440 i386 process_madvise sys_process_madvise
> 441 i386 watch_mount sys_watch_mount
> +442 i386 memfd_secret sys_memfd_secret
> diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
> index f3270a9ef467..742cf17d7725 100644
> --- a/arch/x86/entry/syscalls/syscall_64.tbl
> +++ b/arch/x86/entry/syscalls/syscall_64.tbl
> @@ -363,6 +363,7 @@
> 439 common faccessat2 sys_faccessat2
> 440 common process_madvise sys_process_madvise
> 441 common watch_mount sys_watch_mount
> +442 common memfd_secret sys_memfd_secret
>
> #
> # Due to a historical design error, certain syscalls are numbered differently
> diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
> index 6d55324363ab..f9d93fbf9b69 100644
> --- a/include/linux/syscalls.h
> +++ b/include/linux/syscalls.h
> @@ -1010,6 +1010,7 @@ asmlinkage long sys_pidfd_send_signal(int pidfd, int sig,
> asmlinkage long sys_pidfd_getfd(int pidfd, int fd, unsigned int flags);
> asmlinkage long sys_watch_mount(int dfd, const char __user *path,
> unsigned int at_flags, int watch_fd, int watch_id);
> +asmlinkage long sys_memfd_secret(unsigned long flags);
>
> /*
> * Architecture-specific system calls
> diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
> index 5df46517260e..51151888f330 100644
> --- a/include/uapi/asm-generic/unistd.h
> +++ b/include/uapi/asm-generic/unistd.h
> @@ -861,9 +861,13 @@ __SYSCALL(__NR_faccessat2, sys_faccessat2)
> __SYSCALL(__NR_process_madvise, sys_process_madvise)
> #define __NR_watch_mount 441
> __SYSCALL(__NR_watch_mount, sys_watch_mount)
> +#ifdef __ARCH_WANT_MEMFD_SECRET
> +#define __NR_memfd_secret 442
> +__SYSCALL(__NR_memfd_secret, sys_memfd_secret)
> +#endif

I can't see where it was defined for arm64 after Andrew apparently
deleted the above chunk. Thus, we get a warning with this .config:

https://cailca.coding.net/public/linux/mm/git/files/master/arm64.config

<stdin>:1539:2: warning: #warning syscall memfd_secret not implemented [-Wcpp]

>
> #undef __NR_syscalls
> -#define __NR_syscalls 442
> +#define __NR_syscalls 443
>
> /*
> * 32 bit systems traditionally used different
> diff --git a/mm/secretmem.c b/mm/secretmem.c
> index 7236f4d9458a..b8a32954ac68 100644
> --- a/mm/secretmem.c
> +++ b/mm/secretmem.c
> @@ -415,6 +415,9 @@ static int __init secretmem_setup(char *str)
> unsigned long reserved_size;
> int err;
>
> + if (!can_set_direct_map())
> + return 0;
> +
> reserved_size = memparse(str, NULL);
> if (!reserved_size)
> return 0;
> diff --git a/scripts/checksyscalls.sh b/scripts/checksyscalls.sh
> index a18b47695f55..b7609958ee36 100755
> --- a/scripts/checksyscalls.sh
> +++ b/scripts/checksyscalls.sh
> @@ -40,6 +40,10 @@ cat << EOF
> #define __IGNORE_setrlimit /* setrlimit */
> #endif
>
> +#ifndef __ARCH_WANT_MEMFD_SECRET
> +#define __IGNORE_memfd_secret
> +#endif
> +
> /* Missing flags argument */
> #define __IGNORE_renameat /* renameat2 */
>
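
The warning Qian quotes comes from scripts/checksyscalls.sh, which
generates a preprocessor check of roughly this shape for every syscall and
feeds it through cpp (a sketch of the generated check, not the script
itself):

/* per-syscall check generated by scripts/checksyscalls.sh (sketch) */
#if !defined(__NR_memfd_secret) && !defined(__IGNORE_memfd_secret)
#warning syscall memfd_secret not implemented
#endif

Without the __ARCH_WANT_MEMFD_SECRET-gated __NR_memfd_secret definition,
neither macro is defined on arm64 and the #warning fires.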

2020-12-07 16:04:55

by Mike Rapoport

[permalink] [raw]
Subject: Re: [PATCH v14 09/10] arch, mm: wire up memfd_secret system call where relevant

On Mon, Dec 07, 2020 at 09:45:59AM -0500, Qian Cai wrote:
> On Thu, 2020-12-03 at 08:29 +0200, Mike Rapoport wrote:

...

> > diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
> > index 6d55324363ab..f9d93fbf9b69 100644
> > --- a/include/linux/syscalls.h
> > +++ b/include/linux/syscalls.h
> > @@ -1010,6 +1010,7 @@ asmlinkage long sys_pidfd_send_signal(int pidfd, int sig,
> > asmlinkage long sys_pidfd_getfd(int pidfd, int fd, unsigned int flags);
> > asmlinkage long sys_watch_mount(int dfd, const char __user *path,
> > unsigned int at_flags, int watch_fd, int watch_id);
> > +asmlinkage long sys_memfd_secret(unsigned long flags);
> >
> > /*
> > * Architecture-specific system calls
> > diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
> > index 5df46517260e..51151888f330 100644
> > --- a/include/uapi/asm-generic/unistd.h
> > +++ b/include/uapi/asm-generic/unistd.h
> > @@ -861,9 +861,13 @@ __SYSCALL(__NR_faccessat2, sys_faccessat2)
> > __SYSCALL(__NR_process_madvise, sys_process_madvise)
> > #define __NR_watch_mount 441
> > __SYSCALL(__NR_watch_mount, sys_watch_mount)
> > +#ifdef __ARCH_WANT_MEMFD_SECRET
> > +#define __NR_memfd_secret 442
> > +__SYSCALL(__NR_memfd_secret, sys_memfd_secret)
> > +#endif
>
> I can't see where it was defined for arm64 after Andrew apparently
> deleted the above chunk. Thus, we get a warning with this .config:
>
> https://cailca.coding.net/public/linux/mm/git/files/master/arm64.config
>
> <stdin>:1539:2: warning: #warning syscall memfd_secret not implemented [-Wcpp]

I was under the impression that Andrew only removed the #ifdef...

Andrew, can you please restore syscall definition for memfd_secret() in
include/uapi/asm-generic/unistd.h?

> >
> > #undef __NR_syscalls
> > -#define __NR_syscalls 442
> > +#define __NR_syscalls 443
> >
> > /*
> > * 32 bit systems traditionally used different
> > diff --git a/mm/secretmem.c b/mm/secretmem.c
> > index 7236f4d9458a..b8a32954ac68 100644
> > --- a/mm/secretmem.c
> > +++ b/mm/secretmem.c
> > @@ -415,6 +415,9 @@ static int __init secretmem_setup(char *str)
> > unsigned long reserved_size;
> > int err;
> >
> > + if (!can_set_direct_map())
> > + return 0;
> > +
> > reserved_size = memparse(str, NULL);
> > if (!reserved_size)
> > return 0;
> > diff --git a/scripts/checksyscalls.sh b/scripts/checksyscalls.sh
> > index a18b47695f55..b7609958ee36 100755
> > --- a/scripts/checksyscalls.sh
> > +++ b/scripts/checksyscalls.sh
> > @@ -40,6 +40,10 @@ cat << EOF
> > #define __IGNORE_setrlimit /* setrlimit */
> > #endif
> >
> > +#ifndef __ARCH_WANT_MEMFD_SECRET
> > +#define __IGNORE_memfd_secret
> > +#endif
> > +
> > /* Missing flags argument */
> > #define __IGNORE_renameat /* renameat2 */
> >
>

--
Sincerely yours,
Mike.

2020-12-08 04:40:49

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH v14 09/10] arch, mm: wire up memfd_secret system call where relevant

On Mon, 7 Dec 2020 18:00:06 +0200 Mike Rapoport <[email protected]> wrote:

> >
> > I can't see where it was defined for arm64 after Andrew apparently
> > deleted the above chunk. Thus, we get a warning with this .config:
> >
> > https://cailca.coding.net/public/linux/mm/git/files/master/arm64.config
> >
> > <stdin>:1539:2: warning: #warning syscall memfd_secret not implemented [-Wcpp]
>
> I was under the impression that Andrew only removed the #ifdef...
>
> Andrew, can you please restore syscall definition for memfd_secret() in
> include/uapi/asm-generic/unistd.h?
>

urgh, OK, that seems to have got lost in the (moderate amount of)
conflict resolution.

--- a/include/uapi/asm-generic/unistd.h~arch-mm-wire-up-memfd_secret-system-call-were-relevant-fix
+++ a/include/uapi/asm-generic/unistd.h
@@ -863,9 +863,13 @@ __SYSCALL(__NR_process_madvise, sys_proc
__SYSCALL(__NR_watch_mount, sys_watch_mount)
#define __NR_epoll_pwait2 442
__SC_COMP(__NR_epoll_pwait2, sys_epoll_pwait2, compat_sys_epoll_pwait2)
+#ifdef __ARCH_WANT_MEMFD_SECRET
+#define __NR_memfd_secret 443
+__SYSCALL(__NR_memfd_secret, sys_memfd_secret)
+#endif

#undef __NR_syscalls
-#define __NR_syscalls 443
+#define __NR_syscalls 444

/*
* 32 bit systems traditionally used different
_

2020-12-13 14:25:45

by John Hubbard

[permalink] [raw]
Subject: Re: [PATCH v14 10/10] secretmem: test: add basic selftest for memfd_secret(2)

On 12/2/20 10:29 PM, Mike Rapoport wrote:
> From: Mike Rapoport <[email protected]>
...
> +#include "../kselftest.h"
> +
> +#define fail(fmt, ...) ksft_test_result_fail(fmt, ##__VA_ARGS__)
> +#define pass(fmt, ...) ksft_test_result_pass(fmt, ##__VA_ARGS__)
> +#define skip(fmt, ...) ksft_test_result_skip(fmt, ##__VA_ARGS__)
> +
> +#ifdef __NR_memfd_secret
> +
> +#include <linux/secretmem.h>

Hi Mike,

Say, when I tried this out from today's linux-next, I had to delete the
above line. In other words, the following was required in order to build:

diff --git a/tools/testing/selftests/vm/memfd_secret.c b/tools/testing/selftests/vm/memfd_secret.c
index 79578dfd13e6..c878c2b841fc 100644
--- a/tools/testing/selftests/vm/memfd_secret.c
+++ b/tools/testing/selftests/vm/memfd_secret.c
@@ -29,8 +29,6 @@

#ifdef __NR_memfd_secret

-#include <linux/secretmem.h>
-
#define PATTERN 0x55

static const int prot = PROT_READ | PROT_WRITE;


...and that makes sense to me, because:

a) secretmem.h is not in the uapi, which this selftests/vm build system
expects (it runs "make headers_install" for us, which is *not* going
to pick up items in the kernel include dirs), and

b) There is nothing in secretmem.h that this test uses, anyway! Just these:

bool vma_is_secretmem(struct vm_area_struct *vma);
bool page_is_secretmem(struct page *page);
bool secretmem_active(void);


...or am I just Doing It Wrong? :)

thanks,
--
John Hubbard
NVIDIA

2020-12-14 04:06:05

by Mike Rapoport

[permalink] [raw]
Subject: Re: [PATCH v14 10/10] secretmem: test: add basic selftest for memfd_secret(2)

Hi John,

On Fri, Dec 11, 2020 at 10:16:23PM -0800, John Hubbard wrote:
> On 12/2/20 10:29 PM, Mike Rapoport wrote:
> > From: Mike Rapoport <[email protected]>
> ...
> > +#include "../kselftest.h"
> > +
> > +#define fail(fmt, ...) ksft_test_result_fail(fmt, ##__VA_ARGS__)
> > +#define pass(fmt, ...) ksft_test_result_pass(fmt, ##__VA_ARGS__)
> > +#define skip(fmt, ...) ksft_test_result_skip(fmt, ##__VA_ARGS__)
> > +
> > +#ifdef __NR_memfd_secret
> > +
> > +#include <linux/secretmem.h>
>
> Hi Mike,
>
> Say, when I tried this out from today's linux-next, I had to delete the
> above line. In other words, the following was required in order to build:
>
> diff --git a/tools/testing/selftests/vm/memfd_secret.c b/tools/testing/selftests/vm/memfd_secret.c
> index 79578dfd13e6..c878c2b841fc 100644
> --- a/tools/testing/selftests/vm/memfd_secret.c
> +++ b/tools/testing/selftests/vm/memfd_secret.c
> @@ -29,8 +29,6 @@
>
> #ifdef __NR_memfd_secret
>
> -#include <linux/secretmem.h>
> -
> #define PATTERN 0x55
>
> static const int prot = PROT_READ | PROT_WRITE;
>
>
> ...and that makes sense to me, because:
>
> a) secretmem.h is not in the uapi, which this selftests/vm build system
> expects (it runs "make headers_install" for us, which is *not* going
> to pick up items in the kernel include dirs), and
>
> b) There is nothing in secretmem.h that this test uses, anyway! Just these:
>
> bool vma_is_secretmem(struct vm_area_struct *vma);
> bool page_is_secretmem(struct page *page);
> bool secretmem_active(void);
>
>
> ...or am I just Doing It Wrong? :)

You are perfectly right, I had a stale header in uapi from the previous
versions, and the include in the test remained from then.

@Andrew, can you please add the hunk above as a fixup?

> thanks,
> --
> John Hubbard
> NVIDIA

--
Sincerely yours,
Mike.

2021-01-19 20:31:49

by Matthew Wilcox

[permalink] [raw]
Subject: Re: [PATCH v14 05/10] mm: introduce memfd_secret system call to create "secret" memory areas

On Thu, Dec 03, 2020 at 08:29:44AM +0200, Mike Rapoport wrote:
> +static vm_fault_t secretmem_fault(struct vm_fault *vmf)
> +{
> + struct address_space *mapping = vmf->vma->vm_file->f_mapping;
> + struct inode *inode = file_inode(vmf->vma->vm_file);
> + pgoff_t offset = vmf->pgoff;
> + vm_fault_t ret = 0;
> + unsigned long addr;
> + struct page *page;
> + int err;
> +
> + if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
> + return vmf_error(-EINVAL);
> +
> + page = find_get_page(mapping, offset);
> + if (!page) {
> +
> + page = secretmem_alloc_page(vmf->gfp_mask);
> + if (!page)
> + return vmf_error(-ENOMEM);

Just use VM_FAULT_OOM directly.

> + err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
> + if (unlikely(err))
> + goto err_put_page;

What if the error is EEXIST because somebody else raced with you to add
a new page to the page cache?

> + err = set_direct_map_invalid_noflush(page, 1);
> + if (err)
> + goto err_del_page_cache;

Does this work correctly if somebody else has a reference to the page
in the meantime?

> + addr = (unsigned long)page_address(page);
> + flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> +
> + __SetPageUptodate(page);

Once you've added it to the cache, somebody else can come along and try
to lock it. They will set PageWaiter. Now you call __SetPageUptodate
and wipe out their PageWaiter bit. So you won't wake them up when you
unlock.

You can call __SetPageUptodate before adding it to the page cache,
but once it's visible to another thread, you can't do that.

> + ret = VM_FAULT_LOCKED;
> + }
> +
> + vmf->page = page;

You're supposed to return the page locked, so use find_lock_page() instead
of find_get_page().

> + return ret;
> +
> +err_del_page_cache:
> + delete_from_page_cache(page);
> +err_put_page:
> + put_page(page);
> + return vmf_error(err);
> +}
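
A sketch of the ordering Matthew asks for here, reusing the identifiers
from the posted patch (a hypothetical rework, not the code as submitted):
mark the page uptodate while it is still private to the faulting thread,
then publish it:

	page = secretmem_alloc_page(vmf->gfp_mask);
	if (!page)
		return VM_FAULT_OOM;

	/* safe: no other thread can see the page yet */
	__SetPageUptodate(page);

	err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
	if (unlikely(err))
		goto err_put_page;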

2021-01-20 15:15:40

by Mike Rapoport

[permalink] [raw]
Subject: Re: [PATCH v14 05/10] mm: introduce memfd_secret system call to create "secret" memory areas

On Tue, Jan 19, 2021 at 08:22:13PM +0000, Matthew Wilcox wrote:
> On Thu, Dec 03, 2020 at 08:29:44AM +0200, Mike Rapoport wrote:
> > +static vm_fault_t secretmem_fault(struct vm_fault *vmf)
> > +{
> > + struct address_space *mapping = vmf->vma->vm_file->f_mapping;
> > + struct inode *inode = file_inode(vmf->vma->vm_file);
> > + pgoff_t offset = vmf->pgoff;
> > + vm_fault_t ret = 0;
> > + unsigned long addr;
> > + struct page *page;
> > + int err;
> > +
> > + if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
> > + return vmf_error(-EINVAL);
> > +
> > + page = find_get_page(mapping, offset);
> > + if (!page) {
> > +
> > + page = secretmem_alloc_page(vmf->gfp_mask);
> > + if (!page)
> > + return vmf_error(-ENOMEM);
>
> Just use VM_FAULT_OOM directly.

Ok.

> > + err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
> > + if (unlikely(err))
> > + goto err_put_page;
>
> What if the error is EEXIST because somebody else raced with you to add
> a new page to the page cache?

Right, for -EEXIST I need a retry here, thanks.

> > + err = set_direct_map_invalid_noflush(page, 1);
> > + if (err)
> > + goto err_del_page_cache;
>
> Does this work correctly if somebody else has a reference to the page
> in the meantime?

Yes, it does. If somebody else won the race, that page was already dropped from the
direct map and this call would be essentially a nop. And anyway, the very
next patch changes the way pages are removed from the direct map ;-)

> > + addr = (unsigned long)page_address(page);
> > + flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> > +
> > + __SetPageUptodate(page);
>
> Once you've added it to the cache, somebody else can come along and try
> to lock it. They will set PageWaiter. Now you call __SetPageUptodate
> and wipe out their PageWaiter bit. So you won't wake them up when you
> unlock.
>
> You can call __SetPageUptodate before adding it to the page cache,
> but once it's visible to another thread, you can't do that.

Will fix.

> > + ret = VM_FAULT_LOCKED;
> > + }
> > +
> > + vmf->page = page;
>
> You're supposed to return the page locked, so use find_lock_page() instead
> of find_get_page().

Ok/

> > + return ret;
> > +
> > +err_del_page_cache:
> > + delete_from_page_cache(page);
> > +err_put_page:
> > + put_page(page);
> > + return vmf_error(err);
> > +}

--
Sincerely yours,
Mike.

2021-01-20 16:11:26

by Matthew Wilcox

[permalink] [raw]
Subject: Re: [PATCH v14 05/10] mm: introduce memfd_secret system call to create "secret" memory areas

On Wed, Jan 20, 2021 at 05:05:10PM +0200, Mike Rapoport wrote:
> On Tue, Jan 19, 2021 at 08:22:13PM +0000, Matthew Wilcox wrote:
> > On Thu, Dec 03, 2020 at 08:29:44AM +0200, Mike Rapoport wrote:
> > > +static vm_fault_t secretmem_fault(struct vm_fault *vmf)
> > > +{
> > > + struct address_space *mapping = vmf->vma->vm_file->f_mapping;
> > > + struct inode *inode = file_inode(vmf->vma->vm_file);
> > > + pgoff_t offset = vmf->pgoff;
> > > + vm_fault_t ret = 0;
> > > + unsigned long addr;
> > > + struct page *page;
> > > + int err;
> > > +
> > > + if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
> > > + return vmf_error(-EINVAL);
> > > +
> > > + page = find_get_page(mapping, offset);
> > > + if (!page) {
> > > +
> > > + page = secretmem_alloc_page(vmf->gfp_mask);
> > > + if (!page)
> > > + return vmf_error(-ENOMEM);
> >
> > Just use VM_FAULT_OOM directly.
>
> Ok.
>
> > > + err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
> > > + if (unlikely(err))
> > > + goto err_put_page;
> >
> > What if the error is EEXIST because somebody else raced with you to add
> > a new page to the page cache?
>
> Right, for -EEXIST I need a retry here, thanks.
>
> > > + err = set_direct_map_invalid_noflush(page, 1);
> > > + if (err)
> > > + goto err_del_page_cache;
> >
> > Does this work correctly if somebody else has a reference to the page
> > in the meantime?
>
> > Yes, it does. If somebody else won the race, that page was already dropped from the
> direct map and this call would be essentially a nop. And anyway, the very
> next patch changes the way pages are removed from the direct map ;-)

What I'm thinking is:

thread A page faults
doesn't find page
allocates page
adds page to page cache
thread B page faults
does find page in page cache
set direct map invalid fails
deletes from page cache
... ?

2021-01-20 17:10:25

by Mike Rapoport

[permalink] [raw]
Subject: Re: [PATCH v14 05/10] mm: introduce memfd_secret system call to create "secret" memory areas

On Wed, Jan 20, 2021 at 04:02:10PM +0000, Matthew Wilcox wrote:
> On Wed, Jan 20, 2021 at 05:05:10PM +0200, Mike Rapoport wrote:
> > On Tue, Jan 19, 2021 at 08:22:13PM +0000, Matthew Wilcox wrote:
> > > On Thu, Dec 03, 2020 at 08:29:44AM +0200, Mike Rapoport wrote:
> > > > +static vm_fault_t secretmem_fault(struct vm_fault *vmf)
> > > > +{
> > > > + struct address_space *mapping = vmf->vma->vm_file->f_mapping;
> > > > + struct inode *inode = file_inode(vmf->vma->vm_file);
> > > > + pgoff_t offset = vmf->pgoff;
> > > > + vm_fault_t ret = 0;
> > > > + unsigned long addr;
> > > > + struct page *page;
> > > > + int err;
> > > > +
> > > > + if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
> > > > + return vmf_error(-EINVAL);
> > > > +
> > > > + page = find_get_page(mapping, offset);
> > > > + if (!page) {
> > > > +
> > > > + page = secretmem_alloc_page(vmf->gfp_mask);
> > > > + if (!page)
> > > > + return vmf_error(-ENOMEM);
> > >
> > > Just use VM_FAULT_OOM directly.
> >
> > Ok.
> >
> > > > + err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
> > > > + if (unlikely(err))
> > > > + goto err_put_page;
> > >
> > > What if the error is EEXIST because somebody else raced with you to add
> > > a new page to the page cache?
> >
> > Right, for -EEXIST I need a retry here, thanks.
> >
> > > > + err = set_direct_map_invalid_noflush(page, 1);
> > > > + if (err)
> > > > + goto err_del_page_cache;
> > >
> > > Does this work correctly if somebody else has a reference to the page
> > > in the meantime?
> >
> > Yes, it does. If somebody else won the race, that page was already dropped from the
> > direct map and this call would be essentially a nop. And anyway, the very
> > next patch changes the way pages are removed from the direct map ;-)
>
> What I'm thinking is:
>
> thread A page faults
> doesn't find page
> allocates page
> adds page to page cache
> thread B page faults
> does find page in page cache
> set direct map invalid fails
> deletes from page cache
> ... ?

Hmm, this is not nice indeed...

--
Sincerely yours,
Mike.
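
Picking up the race sketched above: the direction this review points to is
to take the page locked with find_lock_page(), set the uptodate bit before
publishing the page, and retry the lookup when add_to_page_cache() returns
-EEXIST. A condensed sketch of that shape (the i_size check from the patch
is elided), as an illustration of the feedback rather than the final
upstream code:

static vm_fault_t secretmem_fault(struct vm_fault *vmf)
{
	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
	pgoff_t offset = vmf->pgoff;
	struct page *page;
	int err;

retry:
	/* find_lock_page() returns the page locked, as .fault expects */
	page = find_lock_page(mapping, offset);
	if (!page) {
		page = secretmem_alloc_page(vmf->gfp_mask);
		if (!page)
			return VM_FAULT_OOM;

		/* before the page becomes visible to other threads */
		__SetPageUptodate(page);

		err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
		if (unlikely(err)) {
			put_page(page);
			if (err == -EEXIST)
				/* lost the race: use the winner's page */
				goto retry;
			return vmf_error(err);
		}

		/* direct map removal and TLB flush as in the posted patch */
	}

	vmf->page = page;
	return VM_FAULT_LOCKED;
}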