2019-06-27 05:16:41

by Pingfan Liu

Subject: [PATCHv5] mm/gup: speed up check_and_migrate_cma_pages() on huge page

Both hugetlb and thp pages reside on the same migration type of
pageblock, since they are allocated from the same free_list[]. Based
on this fact, it is enough to check a single subpage to decide the
migration type of the whole huge page. This saves (2M/4K - 1) = 511
loop iterations per pmd_huge page on x86, with similar savings on
other archs.

Furthermore, when executing isolate_huge_page(), it avoids taking the
global hugetlb_lock many times, and spares pointless remove/add cycles
on the local linked list cma_page_list.
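
As an illustration, here is a standalone userspace sketch of the step
arithmetic (not part of the patch; it hard-codes a 2MB x86 huge page,
i.e. compound_order(head) == 9, so 512 subpages):

#include <stdio.h>

/* stand-in for 1 << compound_order(head) on a 2MB x86 huge page */
#define HPAGE_SUBPAGES	(1UL << 9)

int main(void)
{
	/* possible values of pages[i] - head when gup starts mid-page */
	unsigned long offsets[] = { 0, 1, 511 };
	unsigned long j;

	for (j = 0; j < sizeof(offsets) / sizeof(offsets[0]); j++) {
		unsigned long step = HPAGE_SUBPAGES - offsets[j];

		/* one pass of the new loop now covers 'step' subpages */
		printf("tail offset %3lu -> step %3lu (previously %3lu iterations)\n",
		       offsets[j], step, step);
	}
	return 0;
}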

Signed-off-by: Pingfan Liu <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Ira Weiny <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: "Aneesh Kumar K.V" <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Keith Busch <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: [email protected]
---
v3 -> v4: fix a C operator precedence issue
v4 -> v5: drop the PageCompound() check and improve the comments
mm/gup.c | 23 +++++++++++++++--------
1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index ddde097..1deaad2 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1336,25 +1336,30 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
struct vm_area_struct **vmas,
unsigned int gup_flags)
{
- long i;
+ long i, step;
bool drain_allow = true;
bool migrate_allow = true;
LIST_HEAD(cma_page_list);

check_again:
- for (i = 0; i < nr_pages; i++) {
+ for (i = 0; i < nr_pages;) {
+
+ struct page *head = compound_head(pages[i]);
+
+ /*
+ * gup may start from a tail page. Advance the step by the
+ * remaining part of the compound page.
+ */
+ step = (1 << compound_order(head)) - (pages[i] - head);
/*
* If we get a page from the CMA zone, since we are going to
* be pinning these entries, we might as well move them out
* of the CMA zone if possible.
*/
- if (is_migrate_cma_page(pages[i])) {
-
- struct page *head = compound_head(pages[i]);
-
- if (PageHuge(head)) {
+ if (is_migrate_cma_page(head)) {
+ if (PageHuge(head))
isolate_huge_page(head, &cma_page_list);
- } else {
+ else {
if (!PageLRU(head) && drain_allow) {
lru_add_drain_all();
drain_allow = false;
@@ -1369,6 +1374,8 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
}
}
}
+
+ i += step;
}

if (!list_empty(&cma_page_list)) {
--
2.7.5


2019-06-27 23:25:36

by Andrew Morton

Subject: Re: [PATCHv5] mm/gup: speed up check_and_migrate_cma_pages() on huge page

On Thu, 27 Jun 2019 13:15:45 +0800 Pingfan Liu <[email protected]> wrote:

> Both hugetlb and thp pages reside on the same migration type of
> pageblock, since they are allocated from the same free_list[]. Based
> on this fact, it is enough to check a single subpage to decide the
> migration type of the whole huge page. This saves (2M/4K - 1) = 511
> loop iterations per pmd_huge page on x86, with similar savings on
> other archs.
>
> Furthermore, when executing isolate_huge_page(), it avoids taking the
> global hugetlb_lock many times, and spares pointless remove/add cycles
> on the local linked list cma_page_list.
>

Thanks, looks good to me. Have any timing measurements been taken?

> ...
>
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1336,25 +1336,30 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> struct vm_area_struct **vmas,
> unsigned int gup_flags)
> {
> - long i;
> + long i, step;

I'll make these variables unsigned long - to match nr_pages and because
we have no need for them to be negative.
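
The fold-in would presumably be just this one-liner on top of the patch
(shown for clarity; not taken from a posted fixup):

--- a/mm/gup.c
+++ b/mm/gup.c
@@ ... @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
- long i, step;
+ unsigned long i, step;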

> ...

2019-06-27 23:39:03

by Ira Weiny

Subject: Re: [PATCHv5] mm/gup: speed up check_and_migrate_cma_pages() on huge page

On Thu, Jun 27, 2019 at 01:15:45PM +0800, Pingfan Liu wrote:
> Both hugetlb and thp pages reside on the same migration type of
> pageblock, since they are allocated from the same free_list[]. Based
> on this fact, it is enough to check a single subpage to decide the
> migration type of the whole huge page. This saves (2M/4K - 1) = 511
> loop iterations per pmd_huge page on x86, with similar savings on
> other archs.
>
> Furthermore, when executing isolate_huge_page(), it avoids taking the
> global hugetlb_lock many times, and spares pointless remove/add cycles
> on the local linked list cma_page_list.
>
> Signed-off-by: Pingfan Liu <[email protected]>

Reviewed-by: Ira Weiny <[email protected]>

> Cc: Andrew Morton <[email protected]>
> Cc: Ira Weiny <[email protected]>
> Cc: Mike Rapoport <[email protected]>
> Cc: "Kirill A. Shutemov" <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: John Hubbard <[email protected]>
> Cc: "Aneesh Kumar K.V" <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Keith Busch <[email protected]>
> Cc: Mike Kravetz <[email protected]>
> Cc: [email protected]
> ---
> v3 -> v4: fix a C operator precedence issue
> v4 -> v5: drop the PageCompound() check and improve the comments
> mm/gup.c | 23 +++++++++++++++--------
> 1 file changed, 15 insertions(+), 8 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index ddde097..1deaad2 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1336,25 +1336,30 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> struct vm_area_struct **vmas,
> unsigned int gup_flags)
> {
> - long i;
> + long i, step;
> bool drain_allow = true;
> bool migrate_allow = true;
> LIST_HEAD(cma_page_list);
>
> check_again:
> - for (i = 0; i < nr_pages; i++) {
> + for (i = 0; i < nr_pages;) {
> +
> + struct page *head = compound_head(pages[i]);
> +
> + /*
> + * gup may start from a tail page. Advance the step by the
> + * remaining part of the compound page.
> + */
> + step = (1 << compound_order(head)) - (pages[i] - head);
> /*
> * If we get a page from the CMA zone, since we are going to
> * be pinning these entries, we might as well move them out
> * of the CMA zone if possible.
> */
> - if (is_migrate_cma_page(pages[i])) {
> -
> - struct page *head = compound_head(pages[i]);
> -
> - if (PageHuge(head)) {
> + if (is_migrate_cma_page(head)) {
> + if (PageHuge(head))
> isolate_huge_page(head, &cma_page_list);
> - } else {
> + else {
> if (!PageLRU(head) && drain_allow) {
> lru_add_drain_all();
> drain_allow = false;
> @@ -1369,6 +1374,8 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> }
> }
> }
> +
> + i += step;
> }
>
> if (!list_empty(&cma_page_list)) {
> --
> 2.7.5
>

2019-06-28 04:00:49

by Pingfan Liu

Subject: Re: [PATCHv5] mm/gup: speed up check_and_migrate_cma_pages() on huge page

On Fri, Jun 28, 2019 at 7:25 AM Andrew Morton <[email protected]> wrote:
>
> On Thu, 27 Jun 2019 13:15:45 +0800 Pingfan Liu <[email protected]> wrote:
>
> > Both hugetlb and thp pages reside on the same migration type of
> > pageblock, since they are allocated from the same free_list[]. Based
> > on this fact, it is enough to check a single subpage to decide the
> > migration type of the whole huge page. This saves (2M/4K - 1) = 511
> > loop iterations per pmd_huge page on x86, with similar savings on
> > other archs.
> >
> > Furthermore, when executing isolate_huge_page(), it avoids taking the
> > global hugetlb_lock many times, and spares pointless remove/add cycles
> > on the local linked list cma_page_list.
> >
>
> Thanks, looks good to me. Have any timing measurements been taken?
Not yet. It is a little hard to force huge pages to be allocated from
the CMA area. Should I provide the measurements?
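
For reference, one way to take such a measurement without special
hardware might be io_uring fixed-buffer registration over a hugetlb
mapping: registered buffers are pinned long-term, which should route
through check_and_migrate_cma_pages(). A rough sketch follows; nothing
in it forces the pages into CMA, the sizes and file name are
illustrative, and it needs a preallocated hugetlb pool plus a raised
RLIMIT_MEMLOCK or root:

/* gcc -o pin_bench pin_bench.c -luring */
#include <liburing.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/uio.h>
#include <time.h>

#define BUF_SIZE	(64UL << 20)	/* 32 x 2MB huge pages */

int main(void)
{
	struct io_uring ring;
	struct iovec iov;
	struct timespec t0, t1;
	int ret;

	/* needs hugetlb pages reserved, e.g. via vm.nr_hugepages */
	void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}
	memset(buf, 0, BUF_SIZE);	/* fault the huge pages in */

	ret = io_uring_queue_init(8, &ring, 0);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %s\n", strerror(-ret));
		return 1;
	}

	iov.iov_base = buf;
	iov.iov_len = BUF_SIZE;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	/* long-term pin of all 32 huge pages via gup */
	ret = io_uring_register_buffers(&ring, &iov, 1);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	if (ret < 0)
		fprintf(stderr, "register_buffers: %s\n", strerror(-ret));

	printf("register_buffers took %.3f ms\n",
	       (t1.tv_sec - t0.tv_sec) * 1e3 +
	       (t1.tv_nsec - t0.tv_nsec) / 1e6);

	io_uring_queue_exit(&ring);
	munmap(buf, BUF_SIZE);
	return 0;
}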
>
> > ...
> >
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -1336,25 +1336,30 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> > struct vm_area_struct **vmas,
> > unsigned int gup_flags)
> > {
> > - long i;
> > + long i, step;
>
> I'll make these variables unsigned long - to match nr_pages and because
> we have no need for them to be negative.
OK, will fix it.

Thanks,
Pingfan
>
> > ...