2021-03-04 23:24:59

by Mike Rapoport

Subject: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas

From: Mike Rapoport <[email protected]>

Hi,

@Andrew, this is based on v5.12-rc1, I can rebase whatever way you prefer.

This is an implementation of "secret" mappings backed by a file descriptor.

The file descriptor backing secret memory mappings is created using a
dedicated memfd_secret system call. The desired protection mode for the
memory is configured using the flags parameter of the system call. The mmap()
of the file descriptor created with memfd_secret() will create a "secret"
memory mapping. The pages in that mapping will be marked as not present in
the direct map and will be present only in the page table of the owning mm.

Although normally Linux userspace mappings are protected from other users,
such secret mappings are useful for environments where a hostile tenant is
trying to trick the kernel into giving them access to other tenants'
mappings.

Additionally, in the future the secret mappings may be used as a means to
protect guest memory in a virtual machine host.

For demonstration of secret memory usage we've created a userspace library

https://git.kernel.org/pub/scm/linux/kernel/git/jejb/secret-memory-preloader.git

that does two things: first, it acts as a preloader for OpenSSL,
redirecting all the OPENSSL_malloc calls to secret memory so that any
secret keys are automatically protected this way; second, it exposes the
API to users who need it. We anticipate that a lot of the use cases would
be like the OpenSSL one: many toolkits that deal with secret keys already
have special handling for that memory to try to give it greater
protection, so this would simply be pluggable into the toolkits without
any need for user application modification.

Hiding secret memory mappings behind an anonymous file allows usage of
the page cache for tracking pages allocated for the "secret" mappings as
well as using address_space_operations for e.g. page migration callbacks.

The anonymous file may also be used implicitly, like hugetlb files, to
implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
ABIs in the future.

Removing pages from the direct map may cause fragmentation of the direct
map on architectures that use large pages to map physical memory, which
affects system performance. However, the original Kconfig text for
CONFIG_DIRECT_GBPAGES said that gigabyte pages in the direct map "... can
improve the kernel's performance a tiny bit ..." (commit 00d1c5e05736
("x86: add gbpages switches")) and the recent report [1] showed that "...
although 1G mappings are a good default choice, there is no compelling
evidence that it must be the only choice". Hence, it is sufficient to have
secretmem disabled by default with the ability of a system administrator to
enable it at boot time.

In addition, there is also a long term goal to improve management of the
direct map.

[1] https://lore.kernel.org/linux-mm/[email protected]/

v18:
* rebase on v5.12-rc1
* merge kfence fix into the original patch
* massage commit message of the patch introducing the memfd_secret syscall

v17: https://lore.kernel.org/lkml/[email protected]
* Remove pool of large pages backing secretmem allocations, per Michal Hocko
* Add secretmem pages to unevictable LRU, per Michal Hocko
* Use GFP_HIGHUSER as secretmem mapping mask, per Michal Hocko
* Make secretmem an opt-in feature that is disabled by default

v16: https://lore.kernel.org/lkml/[email protected]
* Fix memory leak introduced in v15
* Clean the data left from previous page user before handing the page to
the userspace

v15: https://lore.kernel.org/lkml/[email protected]
* Add riscv/Kconfig update to disable set_memory operations for nommu
builds (patch 3)
* Update the code around add_to_page_cache() per Matthew's comments
(patches 6,7)
* Add fixups for build/checkpatch errors discovered by CI systems

v14: https://lore.kernel.org/lkml/[email protected]
* Finally s/mod_node_page_state/mod_lruvec_page_state/

v13: https://lore.kernel.org/lkml/[email protected]
* Added Reviewed-by, thanks Catalin and David
* s/mod_node_page_state/mod_lruvec_page_state/ as Shakeel suggested

Older history:
v12: https://lore.kernel.org/lkml/[email protected]
v11: https://lore.kernel.org/lkml/[email protected]
v10: https://lore.kernel.org/lkml/[email protected]
v9: https://lore.kernel.org/lkml/[email protected]
v8: https://lore.kernel.org/lkml/[email protected]
v7: https://lore.kernel.org/lkml/[email protected]
v6: https://lore.kernel.org/lkml/[email protected]
v5: https://lore.kernel.org/lkml/[email protected]
v4: https://lore.kernel.org/lkml/[email protected]
v3: https://lore.kernel.org/lkml/[email protected]
v2: https://lore.kernel.org/lkml/[email protected]
v1: https://lore.kernel.org/lkml/[email protected]
rfc-v2: https://lore.kernel.org/lkml/[email protected]/
rfc-v1: https://lore.kernel.org/lkml/20200130162340.GA14232@rapoport-lnx/
rfc-v0: https://lore.kernel.org/lkml/[email protected]/

Mike Rapoport (9):
mm: add definition of PMD_PAGE_ORDER
mmap: make mlock_future_check() global
riscv/Kconfig: make direct map manipulation options depend on MMU
set_memory: allow set_direct_map_*_noflush() for multiple pages
set_memory: allow querying whether set_direct_map_*() is actually enabled
mm: introduce memfd_secret system call to create "secret" memory areas
PM: hibernate: disable when there are active secretmem users
arch, mm: wire up memfd_secret system call where relevant
secretmem: test: add basic selftest for memfd_secret(2)

arch/arm64/include/asm/Kbuild | 1 -
arch/arm64/include/asm/cacheflush.h | 6 -
arch/arm64/include/asm/kfence.h | 2 +-
arch/arm64/include/asm/set_memory.h | 17 ++
arch/arm64/include/uapi/asm/unistd.h | 1 +
arch/arm64/kernel/machine_kexec.c | 1 +
arch/arm64/mm/mmu.c | 6 +-
arch/arm64/mm/pageattr.c | 23 +-
arch/riscv/Kconfig | 4 +-
arch/riscv/include/asm/set_memory.h | 4 +-
arch/riscv/include/asm/unistd.h | 1 +
arch/riscv/mm/pageattr.c | 8 +-
arch/x86/entry/syscalls/syscall_32.tbl | 1 +
arch/x86/entry/syscalls/syscall_64.tbl | 1 +
arch/x86/include/asm/set_memory.h | 4 +-
arch/x86/mm/pat/set_memory.c | 8 +-
fs/dax.c | 11 +-
include/linux/pgtable.h | 3 +
include/linux/secretmem.h | 30 +++
include/linux/set_memory.h | 16 +-
include/linux/syscalls.h | 1 +
include/uapi/asm-generic/unistd.h | 6 +-
include/uapi/linux/magic.h | 1 +
kernel/power/hibernate.c | 5 +-
kernel/power/snapshot.c | 4 +-
kernel/sys_ni.c | 2 +
mm/Kconfig | 3 +
mm/Makefile | 1 +
mm/gup.c | 10 +
mm/internal.h | 3 +
mm/mlock.c | 3 +-
mm/mmap.c | 5 +-
mm/secretmem.c | 261 +++++++++++++++++++
mm/vmalloc.c | 5 +-
scripts/checksyscalls.sh | 4 +
tools/testing/selftests/vm/.gitignore | 1 +
tools/testing/selftests/vm/Makefile | 3 +-
tools/testing/selftests/vm/memfd_secret.c | 296 ++++++++++++++++++++++
tools/testing/selftests/vm/run_vmtests.sh | 17 ++
39 files changed, 726 insertions(+), 53 deletions(-)
create mode 100644 arch/arm64/include/asm/set_memory.h
create mode 100644 include/linux/secretmem.h
create mode 100644 mm/secretmem.c
create mode 100644 tools/testing/selftests/vm/memfd_secret.c

--
2.28.0


2021-05-05 19:09:36

by Andrew Morton

Subject: Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas

On Wed, 3 Mar 2021 18:22:00 +0200 Mike Rapoport <[email protected]> wrote:

> This is an implementation of "secret" mappings backed by a file descriptor.
>
> The file descriptor backing secret memory mappings is created using a
> dedicated memfd_secret system call The desired protection mode for the
> memory is configured using flags parameter of the system call. The mmap()
> of the file descriptor created with memfd_secret() will create a "secret"
> memory mapping. The pages in that mapping will be marked as not present in
> the direct map and will be present only in the page table of the owning mm.
>
> Although normally Linux userspace mappings are protected from other users,
> such secret mappings are useful for environments where a hostile tenant is
> trying to trick the kernel into giving them access to other tenants
> mappings.

I continue to struggle with this and I don't recall seeing much
enthusiasm from others. Perhaps we're all missing the value point and
some additional selling is needed.

Am I correct in understanding that the overall direction here is to
protect keys (and perhaps other things) from kernel bugs? That if the
kernel was bug-free then there would be no need for this feature? If
so, that's a bit sad. But realistic I guess.

Is this intended to protect keys/etc after the attacker has gained the
ability to run arbitrary kernel-mode code? If so, that seems
optimistic, doesn't it?

I think that a very complete description of the threats which this
feature addresses would be helpful.

2021-05-06 15:31:51

by James Bottomley

Subject: Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas

On Wed, 2021-05-05 at 12:08 -0700, Andrew Morton wrote:
> On Wed, 3 Mar 2021 18:22:00 +0200 Mike Rapoport <[email protected]>
> wrote:
>
> > This is an implementation of "secret" mappings backed by a file
> > descriptor.
> >
> > The file descriptor backing secret memory mappings is created using
> > a dedicated memfd_secret system call The desired protection mode
> > for the memory is configured using flags parameter of the system
> > call. The mmap() of the file descriptor created with memfd_secret()
> > will create a "secret" memory mapping. The pages in that mapping
> > will be marked as not present in the direct map and will be present
> > only in the page table of the owning mm.
> >
> > Although normally Linux userspace mappings are protected from other
> > users, such secret mappings are useful for environments where a
> > hostile tenant is trying to trick the kernel into giving them
> > access to other tenants mappings.
>
> I continue to struggle with this and I don't recall seeing much
> enthusiasm from others. Perhaps we're all missing the value point
> and some additional selling is needed.
>
> Am I correct in understanding that the overall direction here is to
> protect keys (and perhaps other things) from kernel bugs? That if
> the kernel was bug-free then there would be no need for this
> feature? If so, that's a bit sad. But realistic I guess.

Secret memory really serves several purposes. The "increase the level
of difficulty of secret exfiltration" you describe. And, as you say,
if the kernel were bug free this wouldn't be necessary.

But also:

1. Memory safety for user space code. Once the secret memory is
allocated, the user can't accidentally pass it into the kernel to be
transmitted somewhere.
2. It also serves as a basis for context protection of virtual
machines, but other groups are working on this aspect, and it is
broadly similar to the secret exfiltration from the kernel problem.

>
> Is this intended to protect keys/etc after the attacker has gained
> the ability to run arbitrary kernel-mode code? If so, that seems
> optimistic, doesn't it?

Not exactly: there are many types of kernel attack, but mostly the
attacker either manages to effect a privilege escalation to root or
gets the ability to run a ROP gadget. The object of this code is to be
completely secure against root trying to extract the secret (somewhat
similar to the lockdown idea), thus defeating privilege escalation and
providing "sufficient" protection against ROP gadgets.

The ROP gadget thing needs more explanation: the usual defeatist
approach is to say that once the attacker gains the stack, they can do
anything because they can find enough ROP gadgets to be turing
complete. However, in the real world, given the kernel stack size
limit and address space layout randomization making finding gadgets
really hard, usually the attacker gets one or at most two gadgets to
string together. Not having any in-kernel primitive for accessing
secret memory means the one gadget ROP attack can't work. Since the
only way to access secret memory is to reconstruct the missing mapping
entry, the attacker has to recover the physical page and insert a PTE
pointing to it in the kernel and then retrieve the contents. That
takes at least three gadgets which is a level of difficulty beyond most
standard attacks.

> I think that a very complete description of the threats which this
> feature addresses would be helpful.

It's designed to protect against three different threats:

1. Detection of user secret memory mismanagement
2. Significant protection against privilege escalation
3. Enhanced protection (in conjunction with all the other in-kernel
attack prevention systems) against ROP attacks.

Do you want us to add this to one of the patch descriptions?

James


2021-05-06 16:47:54

by David Hildenbrand

Subject: Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas

On 06.05.21 17:26, James Bottomley wrote:
> On Wed, 2021-05-05 at 12:08 -0700, Andrew Morton wrote:
>> On Wed, 3 Mar 2021 18:22:00 +0200 Mike Rapoport <[email protected]>
>> wrote:
>>
>>> This is an implementation of "secret" mappings backed by a file
>>> descriptor.
>>>
>>> The file descriptor backing secret memory mappings is created using
>>> a dedicated memfd_secret system call The desired protection mode
>>> for the memory is configured using flags parameter of the system
>>> call. The mmap() of the file descriptor created with memfd_secret()
>>> will create a "secret" memory mapping. The pages in that mapping
>>> will be marked as not present in the direct map and will be present
>>> only in the page table of the owning mm.
>>>
>>> Although normally Linux userspace mappings are protected from other
>>> users, such secret mappings are useful for environments where a
>>> hostile tenant is trying to trick the kernel into giving them
>>> access to other tenants mappings.
>>
>> I continue to struggle with this and I don't recall seeing much
>> enthusiasm from others. Perhaps we're all missing the value point
>> and some additional selling is needed.
>>
>> Am I correct in understanding that the overall direction here is to
>> protect keys (and perhaps other things) from kernel bugs? That if
>> the kernel was bug-free then there would be no need for this
>> feature? If so, that's a bit sad. But realistic I guess.
>
> Secret memory really serves several purposes. The "increase the level
> of difficulty of secret exfiltration" you describe. And, as you say,
> if the kernel were bug free this wouldn't be necessary.
>
> But also:
>
> 1. Memory safety for use space code. Once the secret memory is
> allocated, the user can't accidentally pass it into the kernel to be
> transmitted somewhere.

That's an interesting point I didn't realize so far.

> 2. It also serves as a basis for context protection of virtual
> machines, but other groups are working on this aspect, and it is
> broadly similar to the secret exfiltration from the kernel problem.
>

I was wondering if this also helps against CPU microcode issues like
Spectre and friends.

>>
>> Is this intended to protect keys/etc after the attacker has gained
>> the ability to run arbitrary kernel-mode code? If so, that seems
>> optimistic, doesn't it?
>
> Not exactly: there are many types of kernel attack, but mostly the
> attacker either manages to effect a privilege escalation to root or
> gets the ability to run a ROP gadget. The object of this code is to be
> completely secure against root trying to extract the secret (some what
> similar to the lockdown idea), thus defeating privilege escalation and
> to provide "sufficient" protection against ROP gadget.

What stops "root" from mapping /dev/mem and reading that memory?

IOW, would we want to enforce "CONFIG_STRICT_DEVMEM" with CONFIG_SECRETMEM?


Also, there is a way to still read that memory when root by

1. Having kdump active (which would often be the case, but maybe not to
dump user pages)
2. Triggering a kernel crash (easy via proc as root)
3. Waiting for the reboot after kdump created the dump and then reading
the content from disk.

Or, as an attacker, load a custom kexec() kernel and read memory from
the new environment. Of course, the latter two are advanced mechanisms,
but they are possible when root. We might be able to mitigate, for
example, by zeroing out secretmem pages before booting into the kexec
kernel, if we care :)

--
Thanks,

David / dhildenb

2021-05-06 17:07:22

by James Bottomley

Subject: Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas

On Thu, 2021-05-06 at 18:45 +0200, David Hildenbrand wrote:
> On 06.05.21 17:26, James Bottomley wrote:
> > On Wed, 2021-05-05 at 12:08 -0700, Andrew Morton wrote:
> > > On Wed, 3 Mar 2021 18:22:00 +0200 Mike Rapoport <[email protected]
> > > >
> > > wrote:
> > >
> > > > This is an implementation of "secret" mappings backed by a file
> > > > descriptor.
> > > >
> > > > The file descriptor backing secret memory mappings is created
> > > > using a dedicated memfd_secret system call The desired
> > > > protection mode for the memory is configured using flags
> > > > parameter of the system call. The mmap() of the file descriptor
> > > > created with memfd_secret() will create a "secret" memory
> > > > mapping. The pages in that mapping will be marked as not
> > > > present in the direct map and will be present only in the page
> > > > table of the owning mm.
> > > >
> > > > Although normally Linux userspace mappings are protected from
> > > > other users, such secret mappings are useful for environments
> > > > where a hostile tenant is trying to trick the kernel into
> > > > giving them access to other tenants mappings.
> > >
> > > I continue to struggle with this and I don't recall seeing much
> > > enthusiasm from others. Perhaps we're all missing the value
> > > point and some additional selling is needed.
> > >
> > > Am I correct in understanding that the overall direction here is
> > > to protect keys (and perhaps other things) from kernel
> > > bugs? That if the kernel was bug-free then there would be no
> > > need for this feature? If so, that's a bit sad. But realistic I
> > > guess.
> >
> > Secret memory really serves several purposes. The "increase the
> > level of difficulty of secret exfiltration" you describe. And, as
> > you say, if the kernel were bug free this wouldn't be necessary.
> >
> > But also:
> >
> > 1. Memory safety for use space code. Once the secret memory is
> > allocated, the user can't accidentally pass it into the
> > kernel to be
> > transmitted somewhere.
>
> That's an interesting point I didn't realize so far.
>
> > 2. It also serves as a basis for context protection of virtual
> > machines, but other groups are working on this aspect, and
> > it is
> > broadly similar to the secret exfiltration from the kernel
> > problem.
> >
>
> I was wondering if this also helps against CPU microcode issues like
> spectre and friends.

It can for VMs, but not really for the user space secret memory use
cases ... the in-kernel mitigations already present are much more
effective.

>
> > > Is this intended to protect keys/etc after the attacker has
> > > gained the ability to run arbitrary kernel-mode code? If so,
> > > that seems optimistic, doesn't it?
> >
> > Not exactly: there are many types of kernel attack, but mostly the
> > attacker either manages to effect a privilege escalation to root or
> > gets the ability to run a ROP gadget. The object of this code is
> > to be completely secure against root trying to extract the secret
> > (some what similar to the lockdown idea), thus defeating privilege
> > escalation and to provide "sufficient" protection against ROP
> > gadget.
>
> What stops "root" from mapping /dev/mem and reading that memory?

/dev/mem uses the direct map for the copy at least for read/write, so
it gets a fault in the same way root trying to use ptrace does. I
think we've protected mmap, but Mike would know that better than I.

> IOW, would we want to enforce "CONFIG_STRICT_DEVMEM" with
> CONFIG_SECRETMEM?

Unless there's a corner case I haven't thought of, I don't think it
adds much. However, doing a full lockdown on a public system where
users want to use secret memory is best practice I think (except I
think you want it to be the full secure boot lockdown to close all the
root holes).

> Also, there is a way to still read that memory when root by
>
> 1. Having kdump active (which would often be the case, but maybe not
> to dump user pages )
> 2. Triggering a kernel crash (easy via proc as root)
> 3. Waiting for the reboot after kump() created the dump and then
> reading the content from disk.

Anything that can leave physical memory intact but boot to a kernel
where the missing direct map entry is restored could theoretically
extract the secret. However, it's not exactly going to be a stealthy
extraction ...

> Or, as an attacker, load a custom kexec() kernel and read memory
> from the new environment. Of course, the latter two are advanced
> mechanisms, but they are possible when root. We might be able to
> mitigate, for example, by zeroing out secretmem pages before booting
> into the kexec kernel, if we care :)

I think we could handle it by marking the region, yes, and a zero on
shutdown might be useful ... it would prevent all warm reboot type
attacks.

James

2021-05-06 17:27:19

by David Hildenbrand

Subject: Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas

>>>> Is this intended to protect keys/etc after the attacker has
>>>> gained the ability to run arbitrary kernel-mode code? If so,
>>>> that seems optimistic, doesn't it?
>>>
>>> Not exactly: there are many types of kernel attack, but mostly the
>>> attacker either manages to effect a privilege escalation to root or
>>> gets the ability to run a ROP gadget. The object of this code is
>>> to be completely secure against root trying to extract the secret
>>> (some what similar to the lockdown idea), thus defeating privilege
>>> escalation and to provide "sufficient" protection against ROP
>>> gadget.
>>
>> What stops "root" from mapping /dev/mem and reading that memory?
>
> /dev/mem uses the direct map for the copy at least for read/write, so
> it gets a fault in the same way root trying to use ptrace does. I
> think we've protected mmap, but Mike would know that better than I.
>

I'm more concerned about the mmap case -> remap_pfn_range(). Anybody
going via the VMA shouldn't see the struct page, at least when
vm_normal_page() is properly used; so you cannot detect secretmem
memory mapped via /dev/mem reliably. At least that's my theory :)

[...]

>> Also, there is a way to still read that memory when root by
>>
>> 1. Having kdump active (which would often be the case, but maybe not
>> to dump user pages )
>> 2. Triggering a kernel crash (easy via proc as root)
>> 3. Waiting for the reboot after kump() created the dump and then
>> reading the content from disk.
>
> Anything that can leave physical memory intact but boot to a kernel
> where the missing direct map entry is restored could theoretically
> extract the secret. However, it's not exactly going to be a stealthy
> extraction ...
>
>> Or, as an attacker, load a custom kexec() kernel and read memory
>> from the new environment. Of course, the latter two are advanced
>> mechanisms, but they are possible when root. We might be able to
>> mitigate, for example, by zeroing out secretmem pages before booting
>> into the kexec kernel, if we care :)
>
> I think we could handle it by marking the region, yes, and a zero on
> shutdown might be useful ... it would prevent all warm reboot type
> attacks.

Right. But I guess when you're actually root, you can just write a
kernel module to extract the information you need (unless we have signed
modules, so it could be harder/impossible).

--
Thanks,

David / dhildenb

2021-05-06 17:35:57

by Kees Cook

Subject: Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas

On Thu, May 06, 2021 at 08:26:41AM -0700, James Bottomley wrote:
> On Wed, 2021-05-05 at 12:08 -0700, Andrew Morton wrote:
> > On Wed, 3 Mar 2021 18:22:00 +0200 Mike Rapoport <[email protected]>
> > wrote:
> >
> > > This is an implementation of "secret" mappings backed by a file
> > > descriptor.

tl;dr: I like this series. I think there are a number of clarifications
needed, though. See below.

> > >
> > > The file descriptor backing secret memory mappings is created using
> > > a dedicated memfd_secret system call The desired protection mode
> > > for the memory is configured using flags parameter of the system
> > > call. The mmap() of the file descriptor created with memfd_secret()
> > > will create a "secret" memory mapping. The pages in that mapping
> > > will be marked as not present in the direct map and will be present
> > > only in the page table of the owning mm.
> > >
> > > Although normally Linux userspace mappings are protected from other
> > > users, such secret mappings are useful for environments where a
> > > hostile tenant is trying to trick the kernel into giving them
> > > access to other tenants mappings.
> >
> > I continue to struggle with this and I don't recall seeing much
> > enthusiasm from others. Perhaps we're all missing the value point
> > and some additional selling is needed.
> >
> > Am I correct in understanding that the overall direction here is to
> > protect keys (and perhaps other things) from kernel bugs? That if
> > the kernel was bug-free then there would be no need for this
> > feature? If so, that's a bit sad. But realistic I guess.
>
> Secret memory really serves several purposes. The "increase the level
> of difficulty of secret exfiltration" you describe. And, as you say,
> if the kernel were bug free this wouldn't be necessary.
>
> But also:
>
> 1. Memory safety for user space code. Once the secret memory is
> allocated, the user can't accidentally pass it into the kernel to be
> transmitted somewhere.

In my first read through, I didn't see how cross-userspace operations
were blocked, but it looks like it's the various gup paths where
{vma,page}_is_secretmem() is called. (Thank you for the self-test! That
helped me follow along.) I think this access pattern should be more
clearly spelled out in the cover letter (i.e. "This will block things
like process_vm_readv()").

I like the results (inaccessible outside the process), though I suspect
this will absolutely melt gdb or other ptracers that try to see into
the memory. Don't get me wrong, I'm a big fan of such concepts[0], but
I see nothing in the cover letter about it (e.g. the effects on "ptrace"
or "gdb" are not mentioned.)

There is also a risk here of this becoming a forensics nightmare:
userspace malware will just download their entire executable region
into a memfd_secret region. Can we, perhaps, disallow mmap/mprotect
with PROT_EXEC when vma_is_secretmem()? The OpenSSL example, for
example, certainly doesn't need PROT_EXEC.

What's happening with O_CLOEXEC in this code? I don't see that mentioned
in the cover letter either. Why is it disallowed? That seems a strange
limitation for something trying to avoid leaking secrets into other
processes.

And just so I'm sure I understand: if a vma_is_secretmem() check is
missed in future mm code evolutions, it seems there is nothing to block
the kernel from accessing the contents directly through copy_from_user()
via the userspace virtual address, yes?

> 2. It also serves as a basis for context protection of virtual
> machines, but other groups are working on this aspect, and it is
> broadly similar to the secret exfiltration from the kernel problem.
>
> >
> > Is this intended to protect keys/etc after the attacker has gained
> > the ability to run arbitrary kernel-mode code? If so, that seems
> > optimistic, doesn't it?
>
> Not exactly: there are many types of kernel attack, but mostly the
> attacker either manages to effect a privilege escalation to root or
> gets the ability to run a ROP gadget. The object of this code is to be
> completely secure against root trying to extract the secret (some what
> similar to the lockdown idea), thus defeating privilege escalation and
> to provide "sufficient" protection against ROP gadgets.
>
> The ROP gadget thing needs more explanation: the usual defeatist
> approach is to say that once the attacker gains the stack, they can do
> anything because they can find enough ROP gadgets to be turing
> complete. However, in the real world, given the kernel stack size
> limit and address space layout randomization making finding gadgets
> really hard, usually the attacker gets one or at most two gadgets to
> string together. Not having any in-kernel primitive for accessing
> secret memory means the one gadget ROP attack can't work. Since the
> only way to access secret memory is to reconstruct the missing mapping
> entry, the attacker has to recover the physical page and insert a PTE
> pointing to it in the kernel and then retrieve the contents. That
> takes at least three gadgets which is a level of difficulty beyond most
> standard attacks.

As for protecting against exploited kernel flaws I also see benefits
here. While the kernel is already blocked from directly reading contents
from userspace virtual addresses (i.e. SMAP), this feature does help by
blocking the kernel from directly reading contents via the direct map
alias. (i.e. this feature is a specialized version of XPFO[1], which
tried to do this for ALL user memory.) So in that regard, yes, this has
value in the sense that to perform exfiltration, an attacker would need
a significant level of control over kernel execution or over page table
contents.

Sufficient control over PTE allocation and positioning is possible
without kernel execution control[3], and "only" having an arbitrary
write primitive can lead to direct PTE control. Because of this, it
would be nice to have page tables strongly protected[2] in the kernel.
They remain a viable "data only" attack given a sufficiently "capable"
write flaw.

I would argue that page table entries are a more important asset to
protect than userspace secrets, but given the difficulties with XPFO
and the not-yet-available PKS I can understand starting here. It does,
absolutely, narrow the ways exploits must be written to exfiltrate secret
contents. (We are starting to now constrict[4] many attack methods
into attacking the page table itself, which is good in the sense that
protecting page tables will be a big win, and bad in the sense that
focusing attack research on page tables means we're going to see some
very powerful attacks.)

> > I think that a very complete description of the threats which this
> > feature addresses would be helpful.
>
> It's designed to protect against three different threats:
>
> 1. Detection of user secret memory mismanagement

I would say "cross-process secret userspace memory exposures" (via a
number of common interfaces by blocking it at the GUP level).

> 2. significant protection against privilege escalation

I don't see how this series protects against privilege escalation. (It
protects against exfiltration.) Maybe you mean include this in the first
bullet point (i.e. "cross-process secret userspace memory exposures,
even in the face of privileged processes")?

> 3. enhanced protection (in conjunction with all the other in-kernel
> attack prevention systems) against ROP attacks.

Same here, I don't see it preventing ROP, but I see it making "simple"
ROP insufficient to perform exfiltration.

-Kees

[0] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/security/yama/yama_lsm.c?h=v5.12#n410
[1] https://lore.kernel.org/linux-mm/[email protected]/
[2] https://lore.kernel.org/lkml/[email protected]/
[3] https://googleprojectzero.blogspot.com/2015/03/exploiting-dram-rowhammer-bug-to-gain.html
[4] https://git.kernel.org/linus/cf68fffb66d60d96209446bfc4a15291dc5a5d41

--
Kees Cook

2021-05-06 19:54:30

by James Bottomley

Subject: Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas

On Thu, 2021-05-06 at 10:33 -0700, Kees Cook wrote:
> On Thu, May 06, 2021 at 08:26:41AM -0700, James Bottomley wrote:
[...]
> > 1. Memory safety for user space code. Once the secret memory is
> > allocated, the user can't accidentally pass it into the
> > kernel to be
> > transmitted somewhere.
>
> In my first read through, I didn't see how cross-userspace operations
> were blocked, but it looks like it's the various gup paths where
> {vma,page}_is_secretmem() is called. (Thank you for the self-test!
> That helped me follow along.) I think this access pattern should be
> more clearly spelled out in the cover letter (i.e. "This will block
> things like process_vm_readv()").

I'm sure Mike can add it.

> I like the results (inaccessible outside the process), though I
> suspect this will absolutely melt gdb or other ptracers that try to
> see into the memory.

I wouldn't say "melt" ... one of the demos we did at FOSDEM was using
gdb/ptrace to extract secrets and then showing it couldn't be done if
secret memory was used. You can still trace the execution of the
process (and thus you could extract the secret as it's processed in
registers, for instance) but you just can't extract the actual secret
memory contents ... that's a fairly limited and well defined
restriction.

> Don't get me wrong, I'm a big fan of such concepts[0], but I see
> nothing in the cover letter about it (e.g. the effects on "ptrace" or
> "gdb" are not mentioned.)

Sure, but we thought "secret" covered it. It wouldn't be secret if
gdb/ptrace from another process could see it.

> There is also a risk here of this becoming a forensics nightmare:
> userspace malware will just download their entire executable region
> into a memfd_secret region. Can we, perhaps, disallow mmap/mprotect
> with PROT_EXEC when vma_is_secretmem()? The OpenSSL example, for
> example, certainly doesn't need PROT_EXEC.

I think disallowing PROT_EXEC is a great enhancement.

> What's happening with O_CLOEXEC in this code? I don't see that
> mentioned in the cover letter either. Why is it disallowed? That
> seems a strange limitation for something trying to avoid leaking
> secrets into other processes.

I actually thought we forced it, so I'll let Mike address this. I
think allowing it is great, so the secret memory isn't inherited by
children, but I can see use cases where a process would want its child
to inherit the secrets.

> And just so I'm sure I understand: if a vma_is_secretmem() check is
> missed in future mm code evolutions, it seems there is nothing to
> block the kernel from accessing the contents directly through
> copy_from_user() via the userspace virtual address, yes?

Technically, no, because copy_from_user() goes via the userspace page
tables, which do have access.

> > 2. It also serves as a basis for context protection of virtual
> > machines, but other groups are working on this aspect, and it
> > is
> > broadly similar to the secret exfiltration from the kernel
> > problem.
> >
> > > Is this intended to protect keys/etc after the attacker has
> > > gained the ability to run arbitrary kernel-mode code? If so,
> > > that seems optimistic, doesn't it?
> >
> > Not exactly: there are many types of kernel attack, but mostly the
> > attacker either manages to effect a privilege escalation to root or
> > gets the ability to run a ROP gadget. The object of this code is
> > to be completely secure against root trying to extract the secret
> > (somewhat similar to the lockdown idea), thus defeating privilege
> > escalation and to provide "sufficient" protection against ROP
> > gadgets.
> >
> > The ROP gadget thing needs more explanation: the usual defeatist
> > approach is to say that once the attacker gains the stack, they can
> > do anything because they can find enough ROP gadgets to be turing
> > complete. However, in the real world, given the kernel stack size
> > limit and address space layout randomization making finding gadgets
> > really hard, usually the attacker gets one or at most two gadgets
> > to string together. Not having any in-kernel primitive for
> > accessing secret memory means the one gadget ROP attack can't
> > work. Since the only way to access secret memory is to reconstruct
> > the missing mapping entry, the attacker has to recover the physical
> > page and insert a PTE pointing to it in the kernel and then
> > retrieve the contents. That takes at least three gadgets which is
> > a level of difficulty beyond most standard attacks.
>
> As for protecting against exploited kernel flaws I also see benefits
> here. While the kernel is already blocked from directly reading
> contents from userspace virtual addresses (i.e. SMAP), this feature
> does help by blocking the kernel from directly reading contents via
> the direct map alias. (i.e. this feature is a specialized version of
> XPFO[1], which tried to do this for ALL user memory.) So in that
> regard, yes, this has value in the sense that to perform
> exfiltration, an attacker would need a significant level of control
> over kernel execution or over page table contents.
>
> Sufficient control over PTE allocation and positioning is possible
> without kernel execution control[3], and "only" having an arbitrary
> write primitive can lead to direct PTE control. Because of this, it
> would be nice to have page tables strongly protected[2] in the
> kernel. They remain a viable "data only" attack given a sufficiently
> "capable" write flaw.

Right, but this is on the radar of several people and when fixed will
strengthen the value of secret memory.

> I would argue that page table entries are a more important asset to
> protect than userspace secrets, but given the difficulties with XPFO
> and the not-yet-available PKS I can understand starting here. It
> does, absolutely, narrow the ways exploits must be written to
> exfiltrate secret contents. (We are starting to now constrict[4] many
> attack methods into attacking the page table itself, which is good in
> the sense that protecting page tables will be a big win, and bad in
> the sense that focusing attack research on page tables means we're
> going to see some very powerful attacks.)
>
> > > I think that a very complete description of the threats which
> > > this feature addresses would be helpful.
> >
> > It's designed to protect against three different threats:
> >
> > 1. Detection of user secret memory mismanagement
>
> I would say "cross-process secret userspace memory exposures" (via a
> number of common interfaces by blocking it at the GUP level).
>
> > 2. significant protection against privilege escalation
>
> I don't see how this series protects against privilege escalation.
> (It protects against exfiltration.) Maybe you mean include this in
> the first bullet point (i.e. "cross-process secret userspace memory
> exposures, even in the face of privileged processes")?

It doesn't prevent privilege escalation from happening in the first
place, but once the escalation has happened it protects against
exfiltration by the newly minted root attacker.

> > 3. enhanced protection (in conjunction with all the other in-
> > kernel
> > attack prevention systems) against ROP attacks.
>
> Same here, I don't see it preventing ROP, but I see it making
> "simple" ROP insufficient to perform exfiltration.

Right, that's why I call it "enhanced protection". With ROP the design
goal is to take exfiltration beyond the simple, and require increasing
complexity in the attack ... the usual security whack-a-mole approach
... in the hope that script kiddies get bored by the level of
difficulty and move on to something easier.

James


2021-05-07 03:46:41

by Nick Kossifidis

Subject: Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas

On 2021-05-06 20:05, James Bottomley wrote:
> On Thu, 2021-05-06 at 18:45 +0200, David Hildenbrand wrote:
>>
>> Also, there is a way to still read that memory when root by
>>
>> 1. Having kdump active (which would often be the case, but maybe not
>> to dump user pages )
>> 2. Triggering a kernel crash (easy via proc as root)
> 3. Waiting for the reboot after kdump() created the dump and then
>> reading the content from disk.
>
> Anything that can leave physical memory intact but boot to a kernel
> where the missing direct map entry is restored could theoretically
> extract the secret. However, it's not exactly going to be a stealthy
> extraction ...
>
>> Or, as an attacker, load a custom kexec() kernel and read memory
>> from the new environment. Of course, the latter two are advanced
>> mechanisms, but they are possible when root. We might be able to
>> mitigate, for example, by zeroing out secretmem pages before booting
>> into the kexec kernel, if we care :)
>
> I think we could handle it by marking the region, yes, and a zero on
> shutdown might be useful ... it would prevent all warm reboot type
> attacks.
>

I had similar concerns about recovering secrets with kdump, and
considered cleaning up keyrings before jumping to the new kernel. The
problem is we can't provide guarantees in that case: once the kernel
has crashed and we are on our way to run the crashkernel, we can't be
sure we can reliably zero out anything, and the more code we add to
that path the riskier it gets. However, during reboot/normal kexec()
we should do some cleanup; it makes sense, and secretmem can indeed be
useful in that case. Regarding loading custom kexec() kernels, we
mitigate this with the kexec file-based API, where we can verify the
signature of the loaded kimage (assuming the system runs a kernel
provided by a trusted 3rd party and we've maintained a chain of trust
since booting).

2021-05-07 08:01:05

by David Hildenbrand

Subject: Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas

On 07.05.21 01:16, Nick Kossifidis wrote:
> On 2021-05-06 20:05, James Bottomley wrote:
>> On Thu, 2021-05-06 at 18:45 +0200, David Hildenbrand wrote:
>>>
>>> Also, there is a way to still read that memory when root by
>>>
>>> 1. Having kdump active (which would often be the case, but maybe not
>>> to dump user pages )
>>> 2. Triggering a kernel crash (easy via proc as root)
>>> 3. Waiting for the reboot after kdump() created the dump and then
>>> reading the content from disk.
>>
>> Anything that can leave physical memory intact but boot to a kernel
>> where the missing direct map entry is restored could theoretically
>> extract the secret. However, it's not exactly going to be a stealthy
>> extraction ...
>>
>>> Or, as an attacker, load a custom kexec() kernel and read memory
>>> from the new environment. Of course, the latter two are advanced
>>> mechanisms, but they are possible when root. We might be able to
>>> mitigate, for example, by zeroing out secretmem pages before booting
>>> into the kexec kernel, if we care :)
>>
>> I think we could handle it by marking the region, yes, and a zero on
>> shutdown might be useful ... it would prevent all warm reboot type
>> attacks.
>>
>
> I had similar concerns about recovering secrets with kdump, and
> considered cleaning up keyrings before jumping to the new kernel. The
> problem is we can't provide guarantees in that case, once the kernel has
> crashed and we are on our way to run crashkernel, we can't be sure we
> can reliably zero-out anything, the more code we add to that path the

Well, I think it depends. Assume we do the following

1) Zero out any secretmem pages when handing them back to the buddy.
(alternative: init_on_free=1) -- if not already done, I didn't check the
code.

2) On kdump(), zero out all allocated secretmem. It'd be easier if we'd
just allocated from a fixed physical memory area; otherwise we have to
walk process page tables or use a PFN walker. And zeroing out secretmem
pages without a direct mapping is a different challenge.

Now, during 2) it can happen that

a) We crash in our clearing code (e.g., something is seriously messed
up) and fail to start the kdump kernel. That's actually good, instead of
leaking data we fail hard.

b) We don't find all secretmem pages, for example, because process page
tables are messed up or something messed up our memmap (if we'd use that
to identify secretmem pages via a PFN walker somehow)


But for the simple cases (e.g., malicious root tries to crash the kernel
via /proc/sysrq-trigger) both a) and b) wouldn't apply.

Obviously, an admin who wants to mitigate right now would disable
kdump completely, meaning any attempt to load a crashkernel would fail
and it could not be enabled again for that kernel (also not via a
cmdline an attacker could modify to reboot into a system with the
option for a crashkernel). Disabling kdump in the kernel when
secretmem pages are allocated is one approach, although sub-optimal.
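For reference, that kind of one-way disable already exists as a sysctl; the fragment below is my own illustration (the file path is arbitrary), not something proposed in this thread:

```
# /etc/sysctl.d/99-no-kexec.conf (illustrative path)
# kexec_load_disabled is a one-way switch: once set to 1 it cannot be
# cleared again until reboot, so an attacker who gains root later
# cannot load a kexec/crash kernel to read leftover memory.
kernel.kexec_load_disabled = 1
```

This blocks kexec_load()/kexec_file_load() entirely, so it also disables kdump, which is exactly the sub-optimal trade-off described above.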

> more risky it gets. However during reboot/normal kexec() we should do
> some cleanup, it makes sense and secretmem can indeed be useful in that
> case. Regarding loading custom kexec() kernels, we mitigate this with
> the kexec file-based API where we can verify the signature of the loaded
> kimage (assuming the system runs a kernel provided by a trusted 3rd
> party and we've maintained a chain of trust since booting).

For example in VMs (like QEMU), we often don't clear physical memory
during a reboot. So if an attacker manages to load a kernel that you can
trick into reading random physical memory areas, we can leak secretmem
data I think.

And there might be ways to achieve that just using the cmdline, not
necessarily loading a different kernel. For example if you limit the
kernel footprint ("mem=256M") and disable strict iomem checking
("iomem=relaxed") you can just extract that memory via
/dev/mem, if I am not wrong.

So as an attacker, modify the (grub) cmdline to "mem=256M
iomem=relaxed", reboot, and read all memory via /dev/mem.
Or load a signed kexec kernel with that cmdline and boot into it.

Interesting problem :)

--
Thanks,

David / dhildenb

2021-05-08 00:01:34

by Kees Cook

Subject: Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas

On Thu, May 06, 2021 at 11:47:47AM -0700, James Bottomley wrote:
> On Thu, 2021-05-06 at 10:33 -0700, Kees Cook wrote:
> > On Thu, May 06, 2021 at 08:26:41AM -0700, James Bottomley wrote:
> [...]
> > > > I think that a very complete description of the threats which
> > > > this feature addresses would be helpful.
> > >
> > > It's designed to protect against three different threats:
> > >
> > > 1. Detection of user secret memory mismanagement
> >
> > I would say "cross-process secret userspace memory exposures" (via a
> > number of common interfaces by blocking it at the GUP level).
> >
> > > 2. significant protection against privilege escalation
> >
> > I don't see how this series protects against privilege escalation.
> > (It protects against exfiltration.) Maybe you mean include this in
> > the first bullet point (i.e. "cross-process secret userspace memory
> > exposures, even in the face of privileged processes")?
>
> It doesn't prevent privilege escalation from happening in the first
> place, but once the escalation has happened it protects against
> exfiltration by the newly minted root attacker.

So, after thinking a bit more about this, I don't think there is
protection here against privileged execution. This feature kind of helps
against cross-process read/write attempts, but it doesn't help with
sufficiently privileged (i.e. ptraced) execution, since we can just ask
the process itself to do the reading:

$ gdb ./memfd_secret
...
ready: 0x7ffff7ffb000
Breakpoint 1, ...
(gdb) compile code unsigned long addr = 0x7ffff7ffb000UL; printf("%016lx\n", *((unsigned long *)addr));
5555555555555555

And since process_vm_readv() requires PTRACE_ATTACH, there's very little
difference in effort between process_vm_readv() and the above.

So, what other paths through GUP exist that aren't covered by
PTRACE_ATTACH? And if none, then should this actually just be done by
setting the process undumpable? (This is already what things like gnupg
do.)

So, the user-space side of this doesn't seem to really help. The kernel
side protection is interesting for kernel read/write flaws, though, in
the sense that the process is likely not being attacked from "current",
so a kernel-side attack would need to either walk the page tables and
create new ones, or spawn a new userspace process to do the ptracing.

So, while I like the idea of this stuff, and I see how it provides
certain coverages, I'm curious to learn more about the threat model to
make sure it's actually providing meaningful hurdles to attacks.

--
Kees Cook

2021-05-10 18:12:16

by Mike Rapoport

Subject: Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas

On Thu, May 06, 2021 at 11:47:47AM -0700, James Bottomley wrote:
> On Thu, 2021-05-06 at 10:33 -0700, Kees Cook wrote:
> > On Thu, May 06, 2021 at 08:26:41AM -0700, James Bottomley wrote:
>
> > What's happening with O_CLOEXEC in this code? I don't see that
> > mentioned in the cover letter either. Why is it disallowed? That
> > seems a strange limitation for something trying to avoid leaking
> > secrets into other processes.
>
> I actually thought we forced it, so I'll let Mike address this. I
> think allowing it is great, so the secret memory isn't inherited by
> children, but I can see use cases where a process would want its child
> to inherit the secrets.

We do not enforce O_CLOEXEC, but if the user explicitly requested O_CLOEXEC
it would be passed to get_unused_fd_flags().

--
Sincerely yours,
Mike.