2022-11-25 22:22:10

by Jann Horn

Subject: [PATCH v3 1/3] mm/khugepaged: Take the right locks for page table retraction

Page table walks on address ranges mapped by VMAs can be done under the mmap
lock, the lock of an anon_vma attached to the VMA, or the lock of the VMA's
address_space. Only one of these needs to be held, and it does not need to
be held in exclusive mode.

Under those circumstances, the rules for concurrent access to page table
entries are:

- Terminal page table entries (entries that don't point to another page
  table) can be arbitrarily changed under the page table lock, with the
  exception that they always need to be consistent for hardware page table
  walks and lockless_pages_from_mm(). This includes that they can be
  changed into non-terminal entries.
- Non-terminal page table entries (which point to another page table)
  cannot be modified; readers are allowed to READ_ONCE() an entry, verify
  that it is non-terminal, and then assume that its value will stay as-is.

Retracting a page table involves modifying a non-terminal entry, so
page-table-level locks are insufficient to protect against concurrent
page table traversal; it requires taking all the higher-level locks under
which it is possible to start a page walk in the relevant range in
exclusive mode.
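
To make that concrete, here is a rough sketch (illustrative only, not part
of this patch; the helper name is invented, the locking primitives are the
existing ones) of the locks a retraction in a VMA's range has to hold:

#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/rmap.h>

/*
 * Sketch: take every lock under which a page table walk of @vma's range
 * can start, in exclusive mode. Afterwards only hardware walks and
 * lockless_pages_from_mm() can still reach the page tables.
 */
static void lock_vma_for_retract(struct vm_area_struct *vma)
{
	mmap_assert_write_locked(vma->vm_mm);	/* caller already holds this */
	if (vma->vm_file)
		i_mmap_lock_write(vma->vm_file->f_mapping);
	if (vma->anon_vma)
		anon_vma_lock_write(vma->anon_vma);	/* takes anon_vma->root->rwsem */
}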

The collapse_huge_page() path for anonymous THP already follows this rule,
but the shmem/file THP path was getting it wrong, making it possible for
concurrent rmap-based operations to cause corruption.

Cc: [email protected]
Fixes: 27e1f8273113 ("khugepaged: enable collapse pmd for pte-mapped THP")
Signed-off-by: Jann Horn <[email protected]>
---
mm/khugepaged.c | 55 +++++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 51 insertions(+), 4 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 4734315f79407..674b111a24fa7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1384,16 +1384,37 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
return SCAN_SUCCEED;
}

+/*
+ * A note about locking:
+ * Trying to take the page table spinlocks would be useless here because those
+ * are only used to synchronize:
+ *
+ * - modifying terminal entries (ones that point to a data page, not to another
+ * page table)
+ * - installing *new* non-terminal entries
+ *
+ * Instead, we need roughly the same kind of protection as free_pgtables() or
+ * mm_take_all_locks() (but only for a single VMA):
+ * The mmap lock together with this VMA's rmap locks covers all paths towards
+ * the page table entries we're messing with here, except for hardware page
+ * table walks and lockless_pages_from_mm().
+ */
static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long addr, pmd_t *pmdp)
{
- spinlock_t *ptl;
pmd_t pmd;

mmap_assert_write_locked(mm);
- ptl = pmd_lock(vma->vm_mm, pmdp);
+ if (vma->vm_file)
+ lockdep_assert_held_write(&vma->vm_file->f_mapping->i_mmap_rwsem);
+ /*
+ * All anon_vmas attached to the VMA have the same root and are
+ * therefore locked by the same lock.
+ */
+ if (vma->anon_vma)
+ lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
+
pmd = pmdp_collapse_flush(vma, addr, pmdp);
- spin_unlock(ptl);
mm_dec_nr_ptes(mm);
page_table_check_pte_clear_range(mm, addr, pmd);
pte_free(mm, pmd_pgtable(pmd));
@@ -1444,6 +1465,14 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
if (!hugepage_vma_check(vma, vma->vm_flags, false, false, false))
return SCAN_VMA_CHECK;

+ /*
+ * Symmetry with retract_page_tables(): Exclude MAP_PRIVATE mappings
+ * that got written to. Without this, we'd have to also lock the
+ * anon_vma if one exists.
+ */
+ if (vma->anon_vma)
+ return SCAN_VMA_CHECK;
+
/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
if (userfaultfd_wp(vma))
return SCAN_PTE_UFFD_WP;
@@ -1477,6 +1506,20 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
goto drop_hpage;
}

+ /*
+ * We need to lock the mapping so that from here on, only GUP-fast and
+ * hardware page walks can access the parts of the page tables that
+ * we're operating on.
+ * See collapse_and_free_pmd().
+ */
+ i_mmap_lock_write(vma->vm_file->f_mapping);
+
+ /*
+ * This spinlock should be unnecessary: Nobody else should be accessing
+ * the page tables under spinlock protection here, only
+ * lockless_pages_from_mm() and the hardware page walker can access page
+ * tables while all the high-level locks are held in write mode.
+ */
start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
result = SCAN_FAIL;

@@ -1531,6 +1574,8 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
/* step 4: remove pte entries */
collapse_and_free_pmd(mm, vma, haddr, pmd);

+ i_mmap_unlock_write(vma->vm_file->f_mapping);
+
maybe_install_pmd:
/* step 5: install pmd entry */
result = install_pmd
@@ -1544,6 +1589,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,

abort:
pte_unmap_unlock(start_pte, ptl);
+ i_mmap_unlock_write(vma->vm_file->f_mapping);
goto drop_hpage;
}

@@ -1600,7 +1646,8 @@ static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
* An alternative would be drop the check, but check that page
* table is clear before calling pmdp_collapse_flush() under
* ptl. It has higher chance to recover THP for the VMA, but
- * has higher cost too.
+ * has higher cost too. It would also probably require locking
+ * the anon_vma.
*/
if (vma->anon_vma) {
result = SCAN_PAGE_ANON;

base-commit: eb7081409f94a9a8608593d0fb63a1aa3d6f95d8
--
2.38.1.584.g0f3c55d4c2-goog


2022-11-25 22:30:56

by Jann Horn

Subject: [PATCH v3 3/3] mm/khugepaged: Invoke MMU notifiers in shmem/file collapse paths

Any codepath that zaps page table entries must invoke MMU notifiers to
ensure that secondary MMUs (like KVM) don't keep accessing pages which
aren't mapped anymore. Secondary MMUs don't hold their own references to
pages that are mirrored over, so failing to notify them can lead to page
use-after-free.
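
As an illustration of that rule (a sketch only, with an invented function
name; the real change is in the hunk below), the general shape such a zap
path needs is:

#include <linux/mm.h>
#include <linux/mmu_notifier.h>

/*
 * Sketch: bracket the clearing of page table entries in [start, end)
 * with an MMU notifier range so that secondary MMUs such as KVM tear
 * down their mappings of the affected pages.
 */
static void zap_range_notified(struct mm_struct *mm, unsigned long start,
			       unsigned long end)
{
	struct mmu_notifier_range range;

	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
				start, end);
	mmu_notifier_invalidate_range_start(&range);
	/* ... clear the entries and flush the TLB here ... */
	mmu_notifier_invalidate_range_end(&range);
}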

I'm marking this as addressing an issue introduced in commit f3f0e1d2150b
("khugepaged: add support of collapse for tmpfs/shmem pages"), but most of
the security impact of this only came in commit 27e1f8273113 ("khugepaged:
enable collapse pmd for pte-mapped THP"), which actually omitted flushes
for the removal of present PTEs, not just for the removal of empty page
tables.

Cc: [email protected]
Fixes: f3f0e1d2150b ("khugepaged: add support of collapse for tmpfs/shmem pages")
Signed-off-by: Jann Horn <[email protected]>
---
mm/khugepaged.c | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index c3d3ce596bff7..49eb4b4981d88 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1404,6 +1404,7 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
unsigned long addr, pmd_t *pmdp)
{
pmd_t pmd;
+ struct mmu_notifier_range range;

mmap_assert_write_locked(mm);
if (vma->vm_file)
@@ -1415,8 +1416,12 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
if (vma->anon_vma)
lockdep_assert_held_write(&vma->anon_vma->root->rwsem);

+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm, addr,
+ addr + HPAGE_PMD_SIZE);
+ mmu_notifier_invalidate_range_start(&range);
pmd = pmdp_collapse_flush(vma, addr, pmdp);
tlb_remove_table_sync_one();
+ mmu_notifier_invalidate_range_end(&range);
mm_dec_nr_ptes(mm);
page_table_check_pte_clear_range(mm, addr, pmd);
pte_free(mm, pmd_pgtable(pmd));
--
2.38.1.584.g0f3c55d4c2-goog

2022-11-28 14:09:27

by David Hildenbrand

Subject: Re: [PATCH v3 1/3] mm/khugepaged: Take the right locks for page table retraction

On 25.11.22 22:37, Jann Horn wrote:
> pagetable walks on address ranges mapped by VMAs can be done under the mmap
> lock, the lock of an anon_vma attached to the VMA, or the lock of the VMA's
> address_space. Only one of these needs to be held, and it does not need to
> be held in exclusive mode.
>
> Under those circumstances, the rules for concurrent access to page table
> entries are:
>
> - Terminal page table entries (entries that don't point to another page
> table) can be arbitrarily changed under the page table lock, with the
> exception that they always need to be consistent for
> hardware page table walks and lockless_pages_from_mm().
> This includes that they can be changed into non-terminal entries.
> - Non-terminal page table entries (which point to another page table)
> can not be modified; readers are allowed to READ_ONCE() an entry, verify
> that it is non-terminal, and then assume that its value will stay as-is.
>
> Retracting a page table involves modifying a non-terminal entry, so
> page-table-level locks are insufficient to protect against concurrent
> page table traversal; it requires taking all the higher-level locks under
> which it is possible to start a page walk in the relevant range in
> exclusive mode.
>
> The collapse_huge_page() path for anonymous THP already follows this rule,
> but the shmem/file THP path was getting it wrong, making it possible for
> concurrent rmap-based operations to cause corruption.

This sounds sane and correct to me. No expert on file-THP, though.

For anon-THP it's the mmap lock and the rmap locks. I assume the only
difference for file-THP is that the rmap lock is actually the mapping
lock. Looking at rmap_walk_file(), that seems to be the case.
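
(For reference, the read side boils down to roughly this, heavily
simplified:)

#include <linux/fs.h>
#include <linux/mm.h>

/*
 * Heavily simplified sketch of the file rmap walk's locking: the walker
 * only takes i_mmap_rwsem in read mode, which is why the retraction side
 * has to take it in write mode to exclude the walk.
 */
static void file_rmap_walk_sketch(struct address_space *mapping,
				  pgoff_t pgoff_start, pgoff_t pgoff_end)
{
	struct vm_area_struct *vma;

	i_mmap_lock_read(mapping);
	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff_start, pgoff_end) {
		/* walk this VMA's page tables, taking the PTE lock per table */
	}
	i_mmap_unlock_read(mapping);
}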


I wish at least PTE table removal could be done easier ... I already
experimented some time ago with some ideas (e.g., lock in PMD table
memmap) but it's all far from trivial and space in the memmap is rare.

--
Thanks,

David / dhildenb

2022-11-28 18:12:40

by David Hildenbrand

Subject: Re: [PATCH v3 1/3] mm/khugepaged: Take the right locks for page table retraction

On 28.11.22 18:28, Jann Horn wrote:
> On Mon, Nov 28, 2022 at 2:53 PM David Hildenbrand <[email protected]> wrote:
>> On 25.11.22 22:37, Jann Horn wrote:
>>> pagetable walks on address ranges mapped by VMAs can be done under the mmap
>>> lock, the lock of an anon_vma attached to the VMA, or the lock of the VMA's
>>> address_space. Only one of these needs to be held, and it does not need to
>>> be held in exclusive mode.
>>>
>>> Under those circumstances, the rules for concurrent access to page table
>>> entries are:
>>>
>>> - Terminal page table entries (entries that don't point to another page
>>> table) can be arbitrarily changed under the page table lock, with the
>>> exception that they always need to be consistent for
>>> hardware page table walks and lockless_pages_from_mm().
>>> This includes that they can be changed into non-terminal entries.
>>> - Non-terminal page table entries (which point to another page table)
>>> can not be modified; readers are allowed to READ_ONCE() an entry, verify
>>> that it is non-terminal, and then assume that its value will stay as-is.
>>>
>>> Retracting a page table involves modifying a non-terminal entry, so
>>> page-table-level locks are insufficient to protect against concurrent
>>> page table traversal; it requires taking all the higher-level locks under
>>> which it is possible to start a page walk in the relevant range in
>>> exclusive mode.
>>>
>>> The collapse_huge_page() path for anonymous THP already follows this rule,
>>> but the shmem/file THP path was getting it wrong, making it possible for
>>> concurrent rmap-based operations to cause corruption.
>>
>> This sounds sane and correct to me. No expert on file-THP, though.
>>
>> For anon-THP it's the mmap lock and the rmap locks. I assume the only
>> difference for file-THP is that the rmap lock is actually the mapping
>> lock. Looking at rmap_walk_file(), that seems to be the case.
>
> Yeah. You can also have private file VMAs that are associated with
> both a mapping and a set of anon_vmas, and in that case you would need
> to lock the mmap, the mapping, and the anon_vma root; but the file THP
> code in khugepaged instead just bails on file VMAs with an anon_vma.

Right, that's my understanding as well.

>
>> I wish at least PTE table removal could be done easier ... I already
>> experimented some time ago with some ideas (e.g., lock in PMD table
>> memmap) but it's all far from trivial and space in the memmap is rare.
>
> Because you want it to be faster? Is that for the THP usecase or something else?

Page table reclaim and page table migration, where you might only have
limited context and wouldn't want to take all these expensive locks in
write mode (IOW, you wouldn't want to care about them at all).

Feel free to add my

Acked-by: David Hildenbrand <[email protected]>

--
Thanks,

David / dhildenb

2022-11-28 18:34:23

by Jann Horn

Subject: Re: [PATCH v3 1/3] mm/khugepaged: Take the right locks for page table retraction

On Mon, Nov 28, 2022 at 2:53 PM David Hildenbrand <[email protected]> wrote:
> On 25.11.22 22:37, Jann Horn wrote:
> > pagetable walks on address ranges mapped by VMAs can be done under the mmap
> > lock, the lock of an anon_vma attached to the VMA, or the lock of the VMA's
> > address_space. Only one of these needs to be held, and it does not need to
> > be held in exclusive mode.
> >
> > Under those circumstances, the rules for concurrent access to page table
> > entries are:
> >
> > - Terminal page table entries (entries that don't point to another page
> > table) can be arbitrarily changed under the page table lock, with the
> > exception that they always need to be consistent for
> > hardware page table walks and lockless_pages_from_mm().
> > This includes that they can be changed into non-terminal entries.
> > - Non-terminal page table entries (which point to another page table)
> > can not be modified; readers are allowed to READ_ONCE() an entry, verify
> > that it is non-terminal, and then assume that its value will stay as-is.
> >
> > Retracting a page table involves modifying a non-terminal entry, so
> > page-table-level locks are insufficient to protect against concurrent
> > page table traversal; it requires taking all the higher-level locks under
> > which it is possible to start a page walk in the relevant range in
> > exclusive mode.
> >
> > The collapse_huge_page() path for anonymous THP already follows this rule,
> > but the shmem/file THP path was getting it wrong, making it possible for
> > concurrent rmap-based operations to cause corruption.
>
> This sounds sane and correct to me. No expert on file-THP, though.
>
> For anon-THP it's the mmap lock and the rmap locks. I assume the only
> difference for file-THP is that the rmap lock is actually the mapping
> lock. Looking at rmap_walk_file(), that seems to be the case.

Yeah. You can also have private file VMAs that are associated with
both a mapping and a set of anon_vmas, and in that case you would need
to lock the mmap, the mapping, and the anon_vma root; but the file THP
code in khugepaged instead just bails on file VMAs with an anon_vma.
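
(For illustration only, not something the series does: the full lock set
for such a VMA, taken in the same order as mm_take_all_locks(), would look
roughly like this.)

#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/rmap.h>

/*
 * Hypothetical sketch: lock a MAP_PRIVATE file VMA that already has an
 * anon_vma for page table retraction. khugepaged sidesteps this by
 * bailing out when vma->anon_vma is set.
 */
static void lock_private_file_vma(struct mm_struct *mm,
				  struct vm_area_struct *vma)
{
	mmap_write_lock(mm);
	i_mmap_lock_write(vma->vm_file->f_mapping);
	anon_vma_lock_write(vma->anon_vma);
}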

> I wish at least PTE table removal could be done easier ... I already
> experimented some time ago with some ideas (e.g., lock in PMD table
> memmap) but it's all far from trivial and space in the memmap is rare.

Because you want it to be faster? Is that for the THP usecase or something else?

2022-11-28 18:35:53

by David Hildenbrand

Subject: Re: [PATCH v3 3/3] mm/khugepaged: Invoke MMU notifiers in shmem/file collapse paths

On 28.11.22 18:57, Jann Horn wrote:
> On Mon, Nov 28, 2022 at 6:37 PM David Hildenbrand <[email protected]> wrote:
>>
>> On 25.11.22 22:37, Jann Horn wrote:
>>> Any codepath that zaps page table entries must invoke MMU notifiers to
>>> ensure that secondary MMUs (like KVM) don't keep accessing pages which
>>> aren't mapped anymore. Secondary MMUs don't hold their own references to
>>> pages that are mirrored over, so failing to notify them can lead to page
>>> use-after-free.
>>>
>>> I'm marking this as addressing an issue introduced in commit f3f0e1d2150b
>>> ("khugepaged: add support of collapse for tmpfs/shmem pages"), but most of
>>> the security impact of this only came in commit 27e1f8273113 ("khugepaged:
>>> enable collapse pmd for pte-mapped THP"), which actually omitted flushes
>>> for the removal of present PTEs, not just for the removal of empty page
>>> tables.
>>>
>>> Cc: [email protected]
>>> Fixes: f3f0e1d2150b ("khugepaged: add support of collapse for tmpfs/shmem pages")
>>
>> I'm curious, do you have a working reproducer for this?
>
> You're on the CC list of my bug report to [email protected]
> with title "khugepaged races with rmap-based zap, races with GUP-fast,
> and fails to call MMU notifiers". That has an attached reproducer
> thp_ro_no_notify_kvm.c that is able to read PAGE_POISON out of freed
> file THP pages through KVM.
>

Ah, the mail from early October, thanks (drowning in mail).

You're amazingly skilled at writing reproducers.

--
Thanks,

David / dhildenb

2022-11-28 19:02:23

by David Hildenbrand

Subject: Re: [PATCH v3 3/3] mm/khugepaged: Invoke MMU notifiers in shmem/file collapse paths

On 25.11.22 22:37, Jann Horn wrote:
> Any codepath that zaps page table entries must invoke MMU notifiers to
> ensure that secondary MMUs (like KVM) don't keep accessing pages which
> aren't mapped anymore. Secondary MMUs don't hold their own references to
> pages that are mirrored over, so failing to notify them can lead to page
> use-after-free.
>
> I'm marking this as addressing an issue introduced in commit f3f0e1d2150b
> ("khugepaged: add support of collapse for tmpfs/shmem pages"), but most of
> the security impact of this only came in commit 27e1f8273113 ("khugepaged:
> enable collapse pmd for pte-mapped THP"), which actually omitted flushes
> for the removal of present PTEs, not just for the removal of empty page
> tables.
>
> Cc: [email protected]
> Fixes: f3f0e1d2150b ("khugepaged: add support of collapse for tmpfs/shmem pages")

I'm curious, do you have a working reproducer for this?

Change looks sane on quick glimpse.

--
Thanks,

David / dhildenb

2022-11-28 19:23:35

by Jann Horn

Subject: Re: [PATCH v3 3/3] mm/khugepaged: Invoke MMU notifiers in shmem/file collapse paths

On Mon, Nov 28, 2022 at 6:37 PM David Hildenbrand <[email protected]> wrote:
>
> On 25.11.22 22:37, Jann Horn wrote:
> > Any codepath that zaps page table entries must invoke MMU notifiers to
> > ensure that secondary MMUs (like KVM) don't keep accessing pages which
> > aren't mapped anymore. Secondary MMUs don't hold their own references to
> > pages that are mirrored over, so failing to notify them can lead to page
> > use-after-free.
> >
> > I'm marking this as addressing an issue introduced in commit f3f0e1d2150b
> > ("khugepaged: add support of collapse for tmpfs/shmem pages"), but most of
> > the security impact of this only came in commit 27e1f8273113 ("khugepaged:
> > enable collapse pmd for pte-mapped THP"), which actually omitted flushes
> > for the removal of present PTEs, not just for the removal of empty page
> > tables.
> >
> > Cc: [email protected]
> > Fixes: f3f0e1d2150b ("khugepaged: add support of collapse for tmpfs/shmem pages")
>
> I'm curious, do you have a working reproducer for this?

You're on the CC list of my bug report to [email protected]
with title "khugepaged races with rmap-based zap, races with GUP-fast,
and fails to call MMU notifiers". That has an attached reproducer
thp_ro_no_notify_kvm.c that is able to read PAGE_POISON out of freed
file THP pages through KVM.