2024-03-18 01:38:31

by zhaoyang.huang

Subject: reply: [PATCH] mm: fix a race scenario in folio_isolate_lru

>
>
>On Sun, Mar 17, 2024 at 12:07:40PM +0800, Zhaoyang Huang wrote:
>> Could it be this scenario, where folio comes from pte(thread 0), local
>> fbatch(thread 1) and page cache(thread 2) concurrently and proceed
>> intermixed without lock's protection? Actually, IMO, thread 1 also
>> could see the folio with refcnt==1 since it doesn't care if the page
>> is on the page cache or not.
>>
>> madivise_cold_and_pageout does no explicit folio_get thing since the
>> folio comes from pte which implies it has one refcnt from pagecache
>
>Mmm, no. It's implicit, but madvise_cold_or_pageout_pte_range()
>does guarantee that the folio has at least one refcount.
>
>Since we get the folio from vm_normal_folio(vma, addr, ptent); we know that
>there is at least one mapcount on the folio. refcount is always >= mapcount.
>Since we hold pte_offset_map_lock(), we know that mapcount (and therefore
>refcount) cannot be decremented until we call pte_unmap_unlock(), which we
>don't do until we have called folio_isolate_lru().
>
>Good try though, took me a few minutes of looking at it to convince myself that
>it was safe.
>
>Something to bear in mind is that if the race you outline is real, failing to hold a
>refcount on the folio leaves the caller susceptible to the
>VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio); if the other thread calls
>folio_put().
Resending the chart via Outlook.
I think the problem relies on a special and rare timing; I will list the steps below in timing sequence.

1. thread 0 calls folio_isolate_lru with refcnt == 1
2. thread 1 calls release_pages with refcnt == 2. (IMO, it could be 1, as release_pages doesn't care whether the folio is used by the page cache or an fs)
3. thread 2 decreases refcnt to 1 by calling filemap_free_folio. (as I mentioned in 2, thread 2 is not mandatory here)
4. thread 1 calls folio_put_testzero and passes. (lruvec->lock has not been taken here)
5. thread 0 clears the folio's PG_lru by calling folio_test_clear_lru. The folio_get that follows does not help here.
6. thread 1 fails folio_test_lru and leaves the folio on the LRU.
7. thread 1 wrongly adds the folio to pages_to_free, which could break the LRU list and make the next folio hit list_del corruption (list_del_invalid)

#thread 0 (madivise_cold_and_pageout): refcnt == 1 (represents the page cache)
#thread 1 (lru_add_drain->fbatch_release_pages): refcnt == 2 (the extra one represents the LRU)
#thread 2 (read_pages->filemap_remove_folios): folio comes from the page cache

thread 0: folio_isolate_lru
thread 1: release_pages
thread 2: filemap_free_folio -> refcnt == 1 (drops the page-cache reference)
thread 0: folio_test_clear_lru  <folio's PG_lru gone>
thread 1: folio_put_testzero == true
thread 0: folio_get
thread 1: folio_test_lru == false  <no lruvec_del_folio>
thread 1: list_add(folio->lru, pages_to_free)
          // the current folio will break the LRU's integrity since it has not been deleted from the list
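
For reference, the part of release_pages() that thread 1 runs through looks roughly like this. This is a simplified sketch from memory of the ~v6.8 code (the name release_pages_sketch and the dropped hugetlb/zone-device/batching details are mine), shown only to make steps 4, 6 and 7 above concrete:

/* simplified sketch of the per-folio freeing path in release_pages() */
static void release_pages_sketch(struct folio **folios, int nr)
{
	LIST_HEAD(pages_to_free);
	struct lruvec *lruvec = NULL;
	unsigned long flags = 0;
	int i;

	for (i = 0; i < nr; i++) {
		struct folio *folio = folios[i];

		/* step 4: drop one reference, lruvec->lru_lock not held yet */
		if (!folio_put_testzero(folio))
			continue;

		/*
		 * step 6: if thread 0 has already cleared PG_lru, this test
		 * fails and lruvec_del_folio() is skipped
		 */
		if (folio_test_lru(folio)) {
			lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
							     &flags);
			lruvec_del_folio(lruvec, folio);
			__folio_clear_lru_flags(folio);
		}

		/*
		 * step 7: the folio goes to the free list even though it may
		 * still be linked into the LRU list
		 */
		list_add(&folio->lru, &pages_to_free);
	}
	if (lruvec)
		unlock_page_lruvec_irqrestore(lruvec, flags);

	free_unref_page_list(&pages_to_free);
}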
>
>I can't understand any of the scenarios you outline below.
>Please try again without relying on indentation.
>
>> #thread 0(madivise_cold_and_pageout) #1
>> (lru_add_drain->fbatch_release_pages)
>> #2(read_pages->filemap_remove_folios)
>> refcnt == 1(represent page cache)
>>
>> refcnt==2(another one represent LRU)
>> folio comes from page cache
>> folio_isolate_lru
>> release_pages
>> filemap_free_folio
>>
>>
>> refcnt==1(decrease the one of page
>cache)
>>
>> folio_put_testzero == true
>>
>> <No lruvec_del_folio>
>>
>> list_add(folio->lru, pages_to_free) //current folio will break LRU's
>> integrity since it has not been deleted
>>
>> In case of gmail's wrap, split above chart to two parts
>>
>> #thread 0(madivise_cold_and_pageout) #1
>> (lru_add_drain->fbatch_release_pages)
>> refcnt == 1(represent page cache)
>>
>> refcnt==2(another one represent LRU)
>> folio_isolate_lru
>release_pages
>>
>> folio_put_testzero == true
>>
>> <No lruvec_del_folio>
>>
>> list_add(folio->lru, pages_to_free)
>>
>> //current folio will break LRU's integrity since it has not been
>> deleted
>>
>> #1 (lru_add_drain->fbatch_release_pages)
>> #2(read_pages->filemap_remove_folios)
>> refcnt==2(another one represent LRU)
>> folio comes from page cache
>> release_pages
>> filemap_free_folio
>>
>> refcnt==1(decrease the one of page
>cache)
>> folio_put_testzero == true <No lruvec_del_folio>
>> list_add(folio->lru, pages_to_free) //current folio will break LRU's
>> integrity since it has not been deleted
>> >
>> > > #0 folio_isolate_lru #1 release_pages
>> > > BUG_ON(!folio_refcnt)
>> > > if
>(folio_put_testzero())
>> > > folio_get(folio)
>> > > if (folio_test_clear_lru())


2024-03-18 04:46:10

by Matthew Wilcox

Subject: Re: reply: [PATCH] mm: fix a race scenario in folio_isolate_lru

On Mon, Mar 18, 2024 at 01:37:04AM +0000, 黄朝阳 (Zhaoyang Huang) wrote:
> >On Sun, Mar 17, 2024 at 12:07:40PM +0800, Zhaoyang Huang wrote:
> >> Could it be this scenario, where folio comes from pte(thread 0), local
> >> fbatch(thread 1) and page cache(thread 2) concurrently and proceed
> >> intermixed without lock's protection? Actually, IMO, thread 1 also
> >> could see the folio with refcnt==1 since it doesn't care if the page
> >> is on the page cache or not.
> >>
> >> madivise_cold_and_pageout does no explicit folio_get thing since the
> >> folio comes from pte which implies it has one refcnt from pagecache
> >
> >Mmm, no. It's implicit, but madvise_cold_or_pageout_pte_range()
> >does guarantee that the folio has at least one refcount.
> >
> >Since we get the folio from vm_normal_folio(vma, addr, ptent); we know that
> >there is at least one mapcount on the folio. refcount is always >= mapcount.
> >Since we hold pte_offset_map_lock(), we know that mapcount (and therefore
> >refcount) cannot be decremented until we call pte_unmap_unlock(), which we
> >don't do until we have called folio_isolate_lru().
> >
> >Good try though, took me a few minutes of looking at it to convince myself that
> >it was safe.
> >
> >Something to bear in mind is that if the race you outline is real, failing to hold a
> >refcount on the folio leaves the caller susceptible to the
> >VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio); if the other thread calls
> >folio_put().
> Resend the chart via outlook.
> I think the problem rely on an special timing which is rare, I would like to list them below in timing sequence.
>
> 1. thread 0 calls folio_isolate_lru with refcnt == 1

(i assume you mean refcnt == 2 here, otherwise none of this makes sense)

> 2. thread 1 calls release_pages with refcnt == 2.(IMO, it could be 1 as release_pages doesn't care if the folio is used by page cache or fs)
> 3. thread 2 decrease refcnt to 1 by calling filemap_free_folio.(as I mentioned in 2, thread 2 is not mandatary here)
> 4. thread 1 calls folio_put_testzero and pass.(lruvec->lock has not been take here)

But there's already a bug here.

Rearrange the order of this:

2. thread 1 calls release_pages with refcount == 2 (decreasing refcount to 1)
3. thread 2 decrease refcount to 0 by calling filemap_free_folio
1. thread 0 calls folio_isolate_lru() and hits the BUG().
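
To make that concrete, folio_isolate_lru() looks roughly like this (a simplified sketch from memory of the ~v6.8 code, not a verbatim copy); the VM_BUG_ON_FOLIO() on entry is what fires once the refcount has already gone to zero:

/* simplified sketch of folio_isolate_lru() */
bool folio_isolate_lru(struct folio *folio)
{
	bool ret = false;

	/* fires if the caller failed to guarantee at least one reference */
	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);

	if (folio_test_clear_lru(folio)) {
		struct lruvec *lruvec;

		/* this extra reference is only taken after PG_lru is cleared */
		folio_get(folio);
		lruvec = folio_lruvec_lock_irq(folio);
		lruvec_del_folio(lruvec, folio);
		unlock_page_lruvec_irq(lruvec);
		ret = true;
	}

	return ret;
}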

> 5. thread 0 clear folio's PG_lru by calling folio_test_clear_lru. The folio_get behind has no meaning there.
> 6. thread 1 failed in folio_test_lru and leave the folio on the LRU.
> 7. thread 1 add folio to pages_to_free wrongly which could break the LRU's->list and will have next folio experience list_del_invalid
>
> #thread 0(madivise_cold_and_pageout) #1(lru_add_drain->fbatch_release_pages) #2(read_pages->filemap_remove_folios)
> refcnt == 1(represent page cache) refcnt==2(another one represent LRU) folio comes from page cache

This is still illegible. Try it this way:

Thread 0                              Thread 1                  Thread 2
madvise_cold_or_pageout_pte_range
                                      lru_add_drain
                                      fbatch_release_pages
                                                                read_pages
                                                                filemap_remove_folio

Some accuracy in your report would also be appreciated. There's no
function called madivise_cold_and_pageout, nor is there a function called
filemap_remove_folios(). It's a little detail, but it's annoying for
me to try to find which function you're actually referring to. I have
to guess, and it puts me in a bad mood.

At any rate, these three functions cannot do what you're proposing.
In read_pages(), when we call filemap_remove_folio(), the folio in
question will not have the uptodate flag set, so can never have been
put in the page tables, so cannot be found by madvise().

Also, as I said in my earlier email, madvise_cold_or_pageout_pte_range()
does guarantee that the refcount on the folio is held and can never
decrease to zero while folio_isolate_lru() is running. So that's two
ways this scenario cannot happen.
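
The shape of that guarantee is roughly the following (a heavily condensed sketch; the function name pte_scan_sketch and the reduced body are mine, the real logic lives in madvise_cold_or_pageout_pte_range()):

/* condensed sketch of the pte-lock bracket that pins the refcount */
static void pte_scan_sketch(struct vm_area_struct *vma, pmd_t *pmd,
			    unsigned long addr, unsigned long end,
			    struct list_head *folio_list)
{
	struct mm_struct *mm = vma->vm_mm;
	pte_t *start_pte, *pte;
	spinlock_t *ptl;

	start_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	if (!start_pte)
		return;

	for (; addr < end; pte++, addr += PAGE_SIZE) {
		pte_t ptent = ptep_get(pte);
		struct folio *folio;

		if (!pte_present(ptent))
			continue;

		/*
		 * The pte maps the folio, so mapcount >= 1 and therefore
		 * refcount >= 1; neither can drop while ptl is held.
		 */
		folio = vm_normal_folio(vma, addr, ptent);
		if (!folio)
			continue;

		if (folio_isolate_lru(folio))
			list_add(&folio->lru, folio_list);
	}

	pte_unmap_unlock(start_pte, ptl);	/* only now can the refcount drop */
}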


2024-03-18 06:16:21

by Zhaoyang Huang

Subject: Re: reply: [PATCH] mm: fix a race scenario in folio_isolate_lru

On Mon, Mar 18, 2024 at 11:28 AM Matthew Wilcox <[email protected]> wrote:
>
> On Mon, Mar 18, 2024 at 01:37:04AM +0000, 黄朝阳 (Zhaoyang Huang) wrote:
> > >On Sun, Mar 17, 2024 at 12:07:40PM +0800, Zhaoyang Huang wrote:
> > >> Could it be this scenario, where folio comes from pte(thread 0), local
> > >> fbatch(thread 1) and page cache(thread 2) concurrently and proceed
> > >> intermixed without lock's protection? Actually, IMO, thread 1 also
> > >> could see the folio with refcnt==1 since it doesn't care if the page
> > >> is on the page cache or not.
> > >>
> > >> madivise_cold_and_pageout does no explicit folio_get thing since the
> > >> folio comes from pte which implies it has one refcnt from pagecache
> > >
> > >Mmm, no. It's implicit, but madvise_cold_or_pageout_pte_range()
> > >does guarantee that the folio has at least one refcount.
> > >
> > >Since we get the folio from vm_normal_folio(vma, addr, ptent); we know that
> > >there is at least one mapcount on the folio. refcount is always >= mapcount.
> > >Since we hold pte_offset_map_lock(), we know that mapcount (and therefore
> > >refcount) cannot be decremented until we call pte_unmap_unlock(), which we
> > >don't do until we have called folio_isolate_lru().
> > >
> > >Good try though, took me a few minutes of looking at it to convince myself that
> > >it was safe.
> > >
> > >Something to bear in mind is that if the race you outline is real, failing to hold a
> > >refcount on the folio leaves the caller susceptible to the
> > >VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio); if the other thread calls
> > >folio_put().
> > Resend the chart via outlook.
> > I think the problem rely on an special timing which is rare, I would like to list them below in timing sequence.
> >
> > 1. thread 0 calls folio_isolate_lru with refcnt == 1
>
> (i assume you mean refcnt == 2 here, otherwise none of this makes sense)
>
> > 2. thread 1 calls release_pages with refcnt == 2.(IMO, it could be 1 as release_pages doesn't care if the folio is used by page cache or fs)
> > 3. thread 2 decrease refcnt to 1 by calling filemap_free_folio.(as I mentioned in 2, thread 2 is not mandatary here)
> > 4. thread 1 calls folio_put_testzero and pass.(lruvec->lock has not been take here)
>
> But there's already a bug here.
>
> Rearrange the order of this:
>
> 2. thread 1 calls release_pages with refcount == 2 (decreasing refcount to 1)
> 3. thread 2 decrease refcount to 0 by calling filemap_free_folio
> 1. thread 0 calls folio_isolate_lru() and hits the BUG().
>
> > 5. thread 0 clear folio's PG_lru by calling folio_test_clear_lru. The folio_get behind has no meaning there.
> > 6. thread 1 failed in folio_test_lru and leave the folio on the LRU.
> > 7. thread 1 add folio to pages_to_free wrongly which could break the LRU's->list and will have next folio experience list_del_invalid
> >
> > #thread 0(madivise_cold_and_pageout) #1(lru_add_drain->fbatch_release_pages) #2(read_pages->filemap_remove_folios)
> > refcnt == 1(represent page cache) refcnt==2(another one represent LRU) folio comes from page cache
>
> This is still illegible. Try it this way:
>
> Thread 0 Thread 1 Thread 2
> madvise_cold_or_pageout_pte_range
> lru_add_drain
> fbatch_release_pages
> read_pages
> filemap_remove_folio
Thread 0                              Thread 1                       Thread 2
madvise_cold_or_pageout_pte_range
                                      truncate_inode_pages_range
                                      fbatch_release_pages
                                                                     truncate_inode_pages_range
                                                                     filemap_remove_folio
Sorry for the confusion. I have rearranged the timing chart above
according to the real panic's stack trace. Threads 1 and 2 both come
from truncate_inode_pages_range (I think thread 2 (read_pages) is not
mandatory here, since threads 0 and 1 could be relying on the same refcnt == 1).
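
For context, the batched loop in truncate_inode_pages_range() that I believe threads 1 and 2 are running looks roughly like this (simplified from memory of the ~v6.8 code; exceptional-entry and partial-folio handling omitted):

	/* condensed sketch of the batched part of truncate_inode_pages_range() */
	struct folio_batch fbatch;
	pgoff_t indices[PAGEVEC_SIZE];
	pgoff_t index = start;
	int i;

	folio_batch_init(&fbatch);
	while (index < end &&
	       find_lock_entries(mapping, &index, end - 1, &fbatch, indices)) {
		truncate_folio_batch_exceptionals(mapping, &fbatch, indices);
		for (i = 0; i < folio_batch_count(&fbatch); i++)
			truncate_cleanup_folio(fbatch.folios[i]);
		/*
		 * The page-cache reference is dropped here (or via
		 * filemap_remove_folio() in the later per-folio loop).
		 */
		delete_from_page_cache_batch(mapping, &fbatch);
		for (i = 0; i < folio_batch_count(&fbatch); i++)
			folio_unlock(fbatch.folios[i]);
		/* The batch's references are dropped here -> release_pages() */
		folio_batch_release(&fbatch);
		cond_resched();
	}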
>
> Some accuracy in your report would also be appreciated. There's no
> function called madivise_cold_and_pageout, nor is there a function called
> filemap_remove_folios(). It's a little detail, but it's annoying for
> me to try to find which function you're actually referring to. I have
> to guess, and it puts me in a bad mood.
>
> At any rate, these three functions cannot do what you're proposing.
> In read_page(), when we call filemap_remove_folio(), the folio in
> question will not have the uptodate flag set, so can never have been
> put in the page tables, so cannot be found by madvise().
>
> Also, as I said in my earlier email, madvise_cold_or_pageout_pte_range()
> does guarantee that the refcount on the folio is held and can never
> decrease to zero while folio_isolate_lru() is running. So that's two
> ways this scenario cannot happen.
The madvise_xxx part came from my presumption, which doesn't have any
proof. Whereas truncate_inode_pages_range looks like it only cares
about the page-cache refcnt, via folio_put_testzero, without noticing
any task's VM state. Furthermore, I notice that move_folios_to_lru is
safe, as it runs while holding lruvec->lock.
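
What I mean about move_folios_to_lru() being safe is roughly this shape (condensed sketch from memory, with the unevictable/large-folio handling left out; the caller in shrink_inactive_list()/shrink_active_list() already holds lruvec->lru_lock):

/* condensed sketch of move_folios_to_lru(); lruvec->lru_lock held on entry */
static void move_folios_to_lru_sketch(struct lruvec *lruvec,
				      struct list_head *list)
{
	LIST_HEAD(folios_to_free);

	while (!list_empty(list)) {
		struct folio *folio = lru_to_folio(list);

		VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
		list_del(&folio->lru);

		/* PG_lru is set while lruvec->lru_lock is held */
		folio_set_lru(folio);

		if (unlikely(folio_put_testzero(folio))) {
			/* last reference gone: undo the flag and queue the
			 * folio for freeing, still under the lock */
			__folio_clear_lru_flags(folio);
			list_add(&folio->lru, &folios_to_free);
			continue;
		}

		lruvec_add_folio(lruvec, folio);
	}

	/* hand the to-be-freed folios back to the caller */
	list_splice(&folios_to_free, list);
}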
>

2024-03-18 08:01:40

by Zhaoyang Huang

Subject: Re: reply: [PATCH] mm: fix a race scenario in folio_isolate_lru

On Mon, Mar 18, 2024 at 2:15 PM Zhaoyang Huang <huangzhaoyang@gmail.com> wrote:
>
> On Mon, Mar 18, 2024 at 11:28 AM Matthew Wilcox <[email protected]> wrote:
> >
> > On Mon, Mar 18, 2024 at 01:37:04AM +0000, 黄朝阳 (Zhaoyang Huang) wrote:
> > > >On Sun, Mar 17, 2024 at 12:07:40PM +0800, Zhaoyang Huang wrote:
> > > >> Could it be this scenario, where folio comes from pte(thread 0), local
> > > >> fbatch(thread 1) and page cache(thread 2) concurrently and proceed
> > > >> intermixed without lock's protection? Actually, IMO, thread 1 also
> > > >> could see the folio with refcnt==1 since it doesn't care if the page
> > > >> is on the page cache or not.
> > > >>
> > > >> madivise_cold_and_pageout does no explicit folio_get thing since the
> > > >> folio comes from pte which implies it has one refcnt from pagecache
> > > >
> > > >Mmm, no. It's implicit, but madvise_cold_or_pageout_pte_range()
> > > >does guarantee that the folio has at least one refcount.
> > > >
> > > >Since we get the folio from vm_normal_folio(vma, addr, ptent); we know that
> > > >there is at least one mapcount on the folio. refcount is always >= mapcount.
> > > >Since we hold pte_offset_map_lock(), we know that mapcount (and therefore
> > > >refcount) cannot be decremented until we call pte_unmap_unlock(), which we
> > > >don't do until we have called folio_isolate_lru().
> > > >
> > > >Good try though, took me a few minutes of looking at it to convince myself that
> > > >it was safe.
> > > >
> > > >Something to bear in mind is that if the race you outline is real, failing to hold a
> > > >refcount on the folio leaves the caller susceptible to the
> > > >VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio); if the other thread calls
> > > >folio_put().
> > > Resend the chart via outlook.
> > > I think the problem rely on an special timing which is rare, I would like to list them below in timing sequence.
> > >
> > > 1. thread 0 calls folio_isolate_lru with refcnt == 1
> >
> > (i assume you mean refcnt == 2 here, otherwise none of this makes sense)
> >
> > > 2. thread 1 calls release_pages with refcnt == 2.(IMO, it could be 1 as release_pages doesn't care if the folio is used by page cache or fs)
> > > 3. thread 2 decrease refcnt to 1 by calling filemap_free_folio.(as I mentioned in 2, thread 2 is not mandatary here)
> > > 4. thread 1 calls folio_put_testzero and pass.(lruvec->lock has not been take here)
> >
> > But there's already a bug here.
> >
> > Rearrange the order of this:
> >
> > 2. thread 1 calls release_pages with refcount == 2 (decreasing refcount to 1)
> > 3. thread 2 decrease refcount to 0 by calling filemap_free_folio
> > 1. thread 0 calls folio_isolate_lru() and hits the BUG().
> >
> > > 5. thread 0 clear folio's PG_lru by calling folio_test_clear_lru. The folio_get behind has no meaning there.
> > > 6. thread 1 failed in folio_test_lru and leave the folio on the LRU.
> > > 7. thread 1 add folio to pages_to_free wrongly which could break the LRU's->list and will have next folio experience list_del_invalid
> > >
> > > #thread 0(madivise_cold_and_pageout) #1(lru_add_drain->fbatch_release_pages) #2(read_pages->filemap_remove_folios)
> > > refcnt == 1(represent page cache) refcnt==2(another one represent LRU) folio comes from page cache
> >
> > This is still illegible. Try it this way:
> >
> > Thread 0 Thread 1 Thread 2
> > madvise_cold_or_pageout_pte_range
> > lru_add_drain
> > fbatch_release_pages
> > read_pages
> > filemap_remove_folio
> Thread 0 Thread 1 Thread 2
> madvise_cold_or_pageout_pte_range
> truncate_inode_pages_range
> fbatch_release_pages
> truncate_inode_pages_range
> filemap_remove_folio
> Sorry for the confusion. Rearrange the timing chart like above
> according to the real panic's stacktrace. Thread 1&2 are all from
> truncate_inode_pages_range(I think thread2(read_pages) is not
> mandatory here as thread 0&1 could rely on the same refcnt==1).
> >
> > Some accuracy in your report would also be appreciated. There's no
> > function called madivise_cold_and_pageout, nor is there a function called
> > filemap_remove_folios(). It's a little detail, but it's annoying for
> > me to try to find which function you're actually referring to. I have
> > to guess, and it puts me in a bad mood.
> >
> > At any rate, these three functions cannot do what you're proposing.
> > In read_page(), when we call filemap_remove_folio(), the folio in
> > question will not have the uptodate flag set, so can never have been
> > put in the page tables, so cannot be found by madvise().
> >
> > Also, as I said in my earlier email, madvise_cold_or_pageout_pte_range()
> > does guarantee that the refcount on the folio is held and can never
> > decrease to zero while folio_isolate_lru() is running. So that's two
> > ways this scenario cannot happen.
> The madivse_xxx comes from my presumption which has any proof.
> Whereas, It looks like truncate_inode_pages_range just cares about
> page cache refcnt by folio_put_testzero without noticing any task's VM
> stuff. Furthermore, I notice that move_folios_to_lru is safe as it
> runs with holding lruvec->lock.
> >
BTW, I think we need to protect all
folio_test_clear_lru/folio_test_lru calls by moving them under
lruvec->lock, in functions such as __page_cache_release and
folio_activate. Otherwise, there is always a race window between
testing PG_lru and the actions that follow.
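
Something like the following is the direction I have in mind for __page_cache_release() (an illustrative, untested sketch only; the name __page_cache_release_sketch is mine):

/*
 * illustrative sketch of the proposal: do the PG_lru test-and-clear only
 * while lruvec->lru_lock is held, so it cannot race with
 * folio_isolate_lru()/release_pages() clearing or testing the flag
 */
static void __page_cache_release_sketch(struct folio *folio)
{
	if (folio_test_lru(folio)) {
		struct lruvec *lruvec;
		unsigned long flags;

		lruvec = folio_lruvec_lock_irqsave(folio, &flags);
		/* re-check under the lock instead of trusting the test above */
		if (folio_test_clear_lru(folio))
			lruvec_del_folio(lruvec, folio);
		unlock_page_lruvec_irqrestore(lruvec, flags);
	}
}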