2022-02-01 20:44:27

by David Hildenbrand

Subject: [PATCH v3 3/9] mm: slightly clarify KSM logic in do_swap_page()

Let's make it clearer that KSM might have to copy a page only
in case we have a page in the swapcache, not if we allocated a fresh
page and bypassed the swapcache. While at it, add a comment explaining why
this is usually necessary and merge the two swapcache conditions.

Signed-off-by: David Hildenbrand <[email protected]>
---
mm/memory.c | 38 +++++++++++++++++++++++---------------
1 file changed, 23 insertions(+), 15 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 923165b4c27e..3c91294cca98 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3615,21 +3615,29 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
goto out_release;
}

- /*
- * Make sure try_to_free_swap or reuse_swap_page or swapoff did not
- * release the swapcache from under us. The page pin, and pte_same
- * test below, are not enough to exclude that. Even if it is still
- * swapcache, we need to check that the page's swap has not changed.
- */
- if (unlikely((!PageSwapCache(page) ||
- page_private(page) != entry.val)) && swapcache)
- goto out_page;
-
- page = ksm_might_need_to_copy(page, vma, vmf->address);
- if (unlikely(!page)) {
- ret = VM_FAULT_OOM;
- page = swapcache;
- goto out_page;
+ if (swapcache) {
+ /*
+ * Make sure try_to_free_swap or reuse_swap_page or swapoff did
+ * not release the swapcache from under us. The page pin, and
+ * pte_same test below, are not enough to exclude that. Even if
+ * it is still swapcache, we need to check that the page's swap
+ * has not changed.
+ */
+ if (unlikely(!PageSwapCache(page) ||
+ page_private(page) != entry.val))
+ goto out_page;
+
+ /*
+ * KSM sometimes has to copy on read faults, for example, if
+ * page->index of !PageKSM() pages would be nonlinear inside the
+ * anon VMA -- PageKSM() is lost on actual swapout.
+ */
+ page = ksm_might_need_to_copy(page, vma, vmf->address);
+ if (unlikely(!page)) {
+ ret = VM_FAULT_OOM;
+ page = swapcache;
+ goto out_page;
+ }
}

cgroup_throttle_swaprate(page, GFP_KERNEL);
--
2.34.1
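
[Archive note: the restructured flow above can be modeled as the following
standalone C sketch. The struct, enum, and helper names here
(`handle_swap_fault()`, `ksm_maybe_copy()`, etc.) are illustrative stubs, not
the real kernel API; the point is only the control flow the patch introduces:
the revalidation check and the KSM copy both apply solely to the swapcache
path, while freshly allocated pages that bypassed the swapcache skip both.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for struct page; only the fields the sketch needs. */
struct page {
	bool in_swapcache;       /* models PageSwapCache(page) */
	unsigned long swap_val;  /* models page_private(page) */
};

enum fault_result { HANDLED, RETRY, OOM };

/*
 * Stub for ksm_might_need_to_copy(): returns the (possibly copied) page,
 * or NULL if allocating the copy failed.
 */
static struct page *ksm_maybe_copy(struct page *page, bool alloc_fails)
{
	return alloc_fails ? NULL : page;
}

static enum fault_result handle_swap_fault(struct page *page,
					   struct page *swapcache,
					   unsigned long entry_val,
					   bool ksm_alloc_fails)
{
	if (swapcache) {
		/*
		 * Revalidate: the swapcache could have been released from
		 * under us; the page pin alone is not enough to exclude that.
		 */
		if (!page->in_swapcache || page->swap_val != entry_val)
			return RETRY;
		/* KSM may have to copy even on read faults. */
		page = ksm_maybe_copy(page, ksm_alloc_fails);
		if (!page)
			return OOM;
	}
	/* A fresh page that bypassed the swapcache skips both steps. */
	return HANDLED;
}
```

Running the four interesting cases (valid swapcache page, stale swap entry,
KSM copy allocation failure, swapcache bypass) exercises each branch of the
new `if (swapcache)` block.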


2022-03-09 18:52:56

by Yang Shi

Subject: Re: [PATCH v3 3/9] mm: slightly clarify KSM logic in do_swap_page()

On Mon, Jan 31, 2022 at 8:33 AM David Hildenbrand <[email protected]> wrote:
>
> Let's make it clearer that KSM might have to copy a page only
> in case we have a page in the swapcache, not if we allocated a fresh
> page and bypassed the swapcache. While at it, add a comment explaining why
> this is usually necessary and merge the two swapcache conditions.
>
> Signed-off-by: David Hildenbrand <[email protected]>
> ---
> mm/memory.c | 38 +++++++++++++++++++++++---------------
> 1 file changed, 23 insertions(+), 15 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 923165b4c27e..3c91294cca98 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3615,21 +3615,29 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> goto out_release;
> }
>
> - /*
> - * Make sure try_to_free_swap or reuse_swap_page or swapoff did not

We could remove the reference to "reuse_swap_page", right?

> - * release the swapcache from under us. The page pin, and pte_same
> - * test below, are not enough to exclude that. Even if it is still
> - * swapcache, we need to check that the page's swap has not changed.
> - */
> - if (unlikely((!PageSwapCache(page) ||
> - page_private(page) != entry.val)) && swapcache)
> - goto out_page;
> -
> - page = ksm_might_need_to_copy(page, vma, vmf->address);
> - if (unlikely(!page)) {
> - ret = VM_FAULT_OOM;
> - page = swapcache;
> - goto out_page;
> + if (swapcache) {
> + /*
> + * Make sure try_to_free_swap or reuse_swap_page or swapoff did
> + * not release the swapcache from under us. The page pin, and
> + * pte_same test below, are not enough to exclude that. Even if
> + * it is still swapcache, we need to check that the page's swap
> + * has not changed.
> + */
> + if (unlikely(!PageSwapCache(page) ||
> + page_private(page) != entry.val))
> + goto out_page;
> +
> + /*
> + * KSM sometimes has to copy on read faults, for example, if
> + * page->index of !PageKSM() pages would be nonlinear inside the
> + * anon VMA -- PageKSM() is lost on actual swapout.
> + */
> + page = ksm_might_need_to_copy(page, vma, vmf->address);
> + if (unlikely(!page)) {
> + ret = VM_FAULT_OOM;
> + page = swapcache;
> + goto out_page;
> + }
> }
>
> cgroup_throttle_swaprate(page, GFP_KERNEL);
> --
> 2.34.1
>

2022-03-09 19:02:36

by Vlastimil Babka

Subject: Re: [PATCH v3 3/9] mm: slightly clarify KSM logic in do_swap_page()

On 1/31/22 17:29, David Hildenbrand wrote:
> Let's make it clearer that KSM might have to copy a page only
> in case we have a page in the swapcache, not if we allocated a fresh
> page and bypassed the swapcache. While at it, add a comment explaining why
> this is usually necessary and merge the two swapcache conditions.
>
> Signed-off-by: David Hildenbrand <[email protected]>

Acked-by: Vlastimil Babka <[email protected]>


> ---
> mm/memory.c | 38 +++++++++++++++++++++++---------------
> 1 file changed, 23 insertions(+), 15 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 923165b4c27e..3c91294cca98 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3615,21 +3615,29 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> goto out_release;
> }
>
> - /*
> - * Make sure try_to_free_swap or reuse_swap_page or swapoff did not
> - * release the swapcache from under us. The page pin, and pte_same
> - * test below, are not enough to exclude that. Even if it is still
> - * swapcache, we need to check that the page's swap has not changed.
> - */
> - if (unlikely((!PageSwapCache(page) ||
> - page_private(page) != entry.val)) && swapcache)
> - goto out_page;
> -
> - page = ksm_might_need_to_copy(page, vma, vmf->address);
> - if (unlikely(!page)) {
> - ret = VM_FAULT_OOM;
> - page = swapcache;
> - goto out_page;
> + if (swapcache) {
> + /*
> + * Make sure try_to_free_swap or reuse_swap_page or swapoff did
> + * not release the swapcache from under us. The page pin, and
> + * pte_same test below, are not enough to exclude that. Even if
> + * it is still swapcache, we need to check that the page's swap
> + * has not changed.
> + */
> + if (unlikely(!PageSwapCache(page) ||
> + page_private(page) != entry.val))
> + goto out_page;
> +
> + /*
> + * KSM sometimes has to copy on read faults, for example, if
> + * page->index of !PageKSM() pages would be nonlinear inside the
> + * anon VMA -- PageKSM() is lost on actual swapout.
> + */
> + page = ksm_might_need_to_copy(page, vma, vmf->address);
> + if (unlikely(!page)) {
> + ret = VM_FAULT_OOM;
> + page = swapcache;
> + goto out_page;
> + }
> }
>
> cgroup_throttle_swaprate(page, GFP_KERNEL);

2022-03-09 21:24:44

by David Hildenbrand

Subject: Re: [PATCH v3 3/9] mm: slightly clarify KSM logic in do_swap_page()

On 09.03.22 19:48, Yang Shi wrote:
> On Mon, Jan 31, 2022 at 8:33 AM David Hildenbrand <[email protected]> wrote:
>>
>> Let's make it clearer that KSM might have to copy a page only
>> in case we have a page in the swapcache, not if we allocated a fresh
>> page and bypassed the swapcache. While at it, add a comment explaining why
>> this is usually necessary and merge the two swapcache conditions.
>>
>> Signed-off-by: David Hildenbrand <[email protected]>
>> ---
>> mm/memory.c | 38 +++++++++++++++++++++++---------------
>> 1 file changed, 23 insertions(+), 15 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 923165b4c27e..3c91294cca98 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -3615,21 +3615,29 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>> goto out_release;
>> }
>>
>> - /*
>> - * Make sure try_to_free_swap or reuse_swap_page or swapoff did not
>
> We could remove the reference to "reuse_swap_page", right?
>
Yes, I noticed this a couple of days ago as well and already have a
patch prepared for that ("mm: adjust stale comment in do_swap_page()
mentioning reuse_swap_page()" at
https://github.com/davidhildenbrand/linux/commits/cow_fixes_part_3)

If Andrew wants, we can fix that up directly before sending upstream or
I'll simply include that patch when sending out part2 v2.

(I want to avoid sending another series just for this)

Thanks!

--
Thanks,

David / dhildenb

2022-03-09 21:57:09

by Andrew Morton

Subject: Re: [PATCH v3 3/9] mm: slightly clarify KSM logic in do_swap_page()

On Wed, 9 Mar 2022 20:15:54 +0100 David Hildenbrand <[email protected]> wrote:

> On 09.03.22 19:48, Yang Shi wrote:
> > On Mon, Jan 31, 2022 at 8:33 AM David Hildenbrand <[email protected]> wrote:
> >>
> >> Let's make it clearer that KSM might have to copy a page only
> >> in case we have a page in the swapcache, not if we allocated a fresh
> >> page and bypassed the swapcache. While at it, add a comment explaining why
> >> this is usually necessary and merge the two swapcache conditions.
> >>
> >> Signed-off-by: David Hildenbrand <[email protected]>
> >> ---
> >> mm/memory.c | 38 +++++++++++++++++++++++---------------
> >> 1 file changed, 23 insertions(+), 15 deletions(-)
> >>
> >> diff --git a/mm/memory.c b/mm/memory.c
> >> index 923165b4c27e..3c91294cca98 100644
> >> --- a/mm/memory.c
> >> +++ b/mm/memory.c
> >> @@ -3615,21 +3615,29 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >> goto out_release;
> >> }
> >>
> >> - /*
> >> - * Make sure try_to_free_swap or reuse_swap_page or swapoff did not
> >
> > We could remove the reference to "reuse_swap_page", right?
> >
> Yes, I noticed this a couple of days ago as well and already have a
> patch prepared for that ("mm: adjust stale comment in do_swap_page()
> mentioning reuse_swap_page()" at
> https://github.com/davidhildenbrand/linux/commits/cow_fixes_part_3)
>
> If Andrew wants, we can fix that up directly before sending upstream or
> I'll simply include that patch when sending out part2 v2.
>
> (I want to avoid sending another series just for this)

Thanks, I did this. The same change plus gratuitous comment reflowing.

--- a/mm/memory.c~mm-slightly-clarify-ksm-logic-in-do_swap_page-fix
+++ a/mm/memory.c
@@ -3609,11 +3609,11 @@ vm_fault_t do_swap_page(struct vm_fault

if (swapcache) {
/*
- * Make sure try_to_free_swap or reuse_swap_page or swapoff did
- * not release the swapcache from under us. The page pin, and
- * pte_same test below, are not enough to exclude that. Even if
- * it is still swapcache, we need to check that the page's swap
- * has not changed.
+ * Make sure try_to_free_swap or swapoff did not release the
+ * swapcache from under us. The page pin, and pte_same test
+ * below, are not enough to exclude that. Even if it is still
+ * swapcache, we need to check that the page's swap has not
+ * changed.
*/
if (unlikely(!PageSwapCache(page) ||
page_private(page) != entry.val))
_