2022-06-15 18:27:55

by Yang Shi

Subject: [mm-unstable v4 PATCH 0/7] Cleanup transhuge_xxx helpers


v4: * Consolidated the transhuge_vma_size_ok() helper proposed in the
earlier versions into transhuge_vma_suitable(), per Zach.
* Fixed the regression introduced by patch 3/7, per Zach and Miaohe.
* Reworded the comment for transhuge_vma_suitable(), per Zach.
* Removed khugepaged_enter() per Miaohe.
* More comments for hugepage_vma_check(), per Zach.
* Squashed patch 4/7 (mm: khugepaged: use transhuge_vma_suitable replace open-code)
in the earlier version into patch 2/7 of this version.
* Minor correction to the doc about THPeligible (patch 7/7), so the
total number of patches is kept at 7.
v3: * Addressed the review comment from Willy
v2: * Rebased to the latest mm-unstable
* Fixed potential regression for smaps's THPeligible

This series is a follow-up to the discussion about cleaning up the transhuge_xxx
helpers at https://lore.kernel.org/linux-mm/[email protected]/.

THP has a bunch of helpers that do VMA sanity checks for different paths. They
perform similar checks at most callsites and carry a lot of duplicated code,
and it is confusing which helper should be used under which conditions.

This series reorganizes and cleans up the code so that all the checks can be
consolidated into hugepage_vma_check().

transhuge_vma_enabled(), transparent_hugepage_active() and
__transparent_hugepage_enabled() are all killed by this series.


Yang Shi (7):
mm: khugepaged: check THP flag in hugepage_vma_check()
mm: thp: consolidate vma size check to transhuge_vma_suitable
mm: khugepaged: better comments for anon vma check in hugepage_vma_revalidate
mm: thp: kill transparent_hugepage_active()
mm: thp: kill __transparent_hugepage_enabled()
mm: khugepaged: reorg some khugepaged helpers
doc: proc: fix the description to THPeligible

 Documentation/filesystems/proc.rst |  4 ++-
 fs/proc/task_mmu.c                 |  2 +-
 include/linux/huge_mm.h            | 75 +++++++++++++++++++------------------------------------
 include/linux/khugepaged.h         | 30 ----------------------
 mm/huge_memory.c                   | 81 +++++++++++++++++++++++++++++++++++++++++++++++++++--------
 mm/khugepaged.c                    | 84 +++++++++++++++++++-------------------------------------------
 mm/memory.c                        |  7 ++++--
 7 files changed, 130 insertions(+), 153 deletions(-)


2022-06-15 18:28:15

by Yang Shi

Subject: [v4 PATCH 1/7] mm: khugepaged: check THP flag in hugepage_vma_check()

Currently the THP flag check in hugepage_vma_check() will fall through if
the flag is NEVER and VM_HUGEPAGE is set. This is not a problem for now
since all the callers either check the flag beforehand or can't be invoked
if the flag is NEVER.

However, a following patch will call hugepage_vma_check() in more
places, for example, the page fault path, so this flag must be checked in
hugepage_vma_check() itself.
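
To illustrate, here is a minimal userspace sketch of the fall-through
(not kernel code: the flag bit positions and the VM_HUGEPAGE value are
illustrative stand-ins, and the helpers are cut-down copies of the
khugepaged macros):

#include <stdbool.h>
#include <stdio.h>

#define TRANSPARENT_HUGEPAGE_FLAG          0
#define TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG 1
#define VM_HUGEPAGE                        (1UL << 2)

static unsigned long transparent_hugepage_flags;	/* 0 means "never" */

static bool khugepaged_enabled(void)
{
	return transparent_hugepage_flags &
		((1 << TRANSPARENT_HUGEPAGE_FLAG) |
		 (1 << TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG));
}

static bool khugepaged_always(void)
{
	return transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_FLAG);
}

/* Cut-down model of the tail of hugepage_vma_check(). */
static bool hugepage_vma_check(unsigned long vm_flags, bool with_fix)
{
	if (with_fix && !khugepaged_enabled())
		return false;	/* the check this patch adds */

	/* THP settings require madvise. */
	if (!(vm_flags & VM_HUGEPAGE) && !khugepaged_always())
		return false;

	return true;	/* vma treated as THP-eligible */
}

int main(void)
{
	transparent_hugepage_flags = 0;	/* THP set to "never" */
	/* A VM_HUGEPAGE vma skips the madvise test, so without the new
	 * check it wrongly passes: prints 1, then 0 with the fix. */
	printf("without fix: %d\n", hugepage_vma_check(VM_HUGEPAGE, false));
	printf("with fix:    %d\n", hugepage_vma_check(VM_HUGEPAGE, true));
	return 0;
}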

Reviewed-by: Zach O'Keefe <[email protected]>
Reviewed-by: Miaohe Lin <[email protected]>
Signed-off-by: Yang Shi <[email protected]>
---
mm/khugepaged.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 476d79360101..b1dab94c0f1e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -458,6 +458,9 @@ bool hugepage_vma_check(struct vm_area_struct *vma,
 	if (shmem_file(vma->vm_file))
 		return shmem_huge_enabled(vma);
 
+	if (!khugepaged_enabled())
+		return false;
+
 	/* THP settings require madvise. */
 	if (!(vm_flags & VM_HUGEPAGE) && !khugepaged_always())
 		return false;
--
2.26.3

2022-06-15 18:30:46

by Yang Shi

Subject: [v4 PATCH 6/7] mm: khugepaged: reorg some khugepaged helpers

The khugepaged_{enabled|always|req_madv} helpers are not khugepaged-only
anymore, so move them to huge_mm.h and rename them to hugepage_flags_xxx.
Remove khugepaged_req_madv since it has no users.

Also move khugepaged_defrag to khugepaged.c since its only caller is in
that file; it doesn't have to live in a header.
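
As a sanity check on the rename, the following standalone sketch
(userspace; the bit positions are illustrative stand-ins, not the kernel
enum, while the two macro bodies are copied from this patch) shows the
semantics the new names keep: hugepage_flags_enabled() is true for both
the "always" and "madvise" modes, hugepage_flags_always() only for
"always".

#include <stdio.h>

#define TRANSPARENT_HUGEPAGE_FLAG          0
#define TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG 1

static unsigned long transparent_hugepage_flags;

#define hugepage_flags_enabled()				\
	(transparent_hugepage_flags &				\
	 ((1<<TRANSPARENT_HUGEPAGE_FLAG) |			\
	  (1<<TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG)))
#define hugepage_flags_always()					\
	(transparent_hugepage_flags &				\
	 (1<<TRANSPARENT_HUGEPAGE_FLAG))

int main(void)
{
	const char *modes[] = { "never", "always", "madvise" };
	unsigned long vals[] = {
		0,
		1 << TRANSPARENT_HUGEPAGE_FLAG,
		1 << TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
	};

	/* Prints: never 0/0, always 1/1, madvise 1/0. */
	for (int i = 0; i < 3; i++) {
		transparent_hugepage_flags = vals[i];
		printf("%-8s enabled=%d always=%d\n", modes[i],
		       !!hugepage_flags_enabled(),
		       !!hugepage_flags_always());
	}
	return 0;
}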

Signed-off-by: Yang Shi <[email protected]>
---
 include/linux/huge_mm.h    |  8 ++++++++
 include/linux/khugepaged.h | 14 --------------
 mm/huge_memory.c           |  4 ++--
 mm/khugepaged.c            | 18 +++++++++++-------
 4 files changed, 21 insertions(+), 23 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 9d97d7ee6234..4cf546af7d97 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -116,6 +116,14 @@ extern struct kobj_attribute shmem_enabled_attr;
 
 extern unsigned long transparent_hugepage_flags;
 
+#define hugepage_flags_enabled()				\
+	(transparent_hugepage_flags &				\
+	 ((1<<TRANSPARENT_HUGEPAGE_FLAG) |			\
+	  (1<<TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG)))
+#define hugepage_flags_always()					\
+	(transparent_hugepage_flags &				\
+	 (1<<TRANSPARENT_HUGEPAGE_FLAG))
+
 /*
  * Do the below checks:
  * - For file vma, check if the linear page offset of vma is
diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index ea5fd4c398f7..384f034ae947 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -24,20 +24,6 @@ static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
 }
 #endif
 
-#define khugepaged_enabled()					\
-	(transparent_hugepage_flags &				\
-	 ((1<<TRANSPARENT_HUGEPAGE_FLAG) |			\
-	  (1<<TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG)))
-#define khugepaged_always()					\
-	(transparent_hugepage_flags &				\
-	 (1<<TRANSPARENT_HUGEPAGE_FLAG))
-#define khugepaged_req_madv()					\
-	(transparent_hugepage_flags &				\
-	 (1<<TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG))
-#define khugepaged_defrag()					\
-	(transparent_hugepage_flags &				\
-	 (1<<TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG))
-
 static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
 	if (test_bit(MMF_VM_HUGEPAGE, &oldmm->flags))
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d0c37d99917b..0f2cce2d7041 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -117,11 +117,11 @@ bool hugepage_vma_check(struct vm_area_struct *vma,
 	if (!in_pf && shmem_file(vma->vm_file))
 		return shmem_huge_enabled(vma);
 
-	if (!khugepaged_enabled())
+	if (!hugepage_flags_enabled())
 		return false;
 
 	/* THP settings require madvise. */
-	if (!(vm_flags & VM_HUGEPAGE) && !khugepaged_always())
+	if (!(vm_flags & VM_HUGEPAGE) && !hugepage_flags_always())
 		return false;
 
 	/* Only regular file is valid */
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2a676f37c921..d8ebb60aae36 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -472,7 +472,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
 			  unsigned long vm_flags)
 {
 	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
-	    khugepaged_enabled()) {
+	    hugepage_flags_enabled()) {
 		if (hugepage_vma_check(vma, vm_flags, false, false))
 			__khugepaged_enter(vma->vm_mm);
 	}
@@ -763,6 +763,10 @@ static bool khugepaged_scan_abort(int nid)
 	return false;
 }
 
+#define khugepaged_defrag()					\
+	(transparent_hugepage_flags &				\
+	 (1<<TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG))
+
 /* Defrag for khugepaged will enter direct reclaim/compaction if necessary */
 static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
 {
@@ -860,7 +864,7 @@ static struct page *khugepaged_alloc_hugepage(bool *wait)
 			khugepaged_alloc_sleep();
 		} else
 			count_vm_event(THP_COLLAPSE_ALLOC);
-	} while (unlikely(!hpage) && likely(khugepaged_enabled()));
+	} while (unlikely(!hpage) && likely(hugepage_flags_enabled()));
 
 	return hpage;
 }
@@ -2186,7 +2190,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 static int khugepaged_has_work(void)
 {
 	return !list_empty(&khugepaged_scan.mm_head) &&
-		khugepaged_enabled();
+		hugepage_flags_enabled();
 }
 
 static int khugepaged_wait_event(void)
@@ -2251,7 +2255,7 @@ static void khugepaged_wait_work(void)
 		return;
 	}
 
-	if (khugepaged_enabled())
+	if (hugepage_flags_enabled())
 		wait_event_freezable(khugepaged_wait, khugepaged_wait_event());
 }
 
@@ -2282,7 +2286,7 @@ static void set_recommended_min_free_kbytes(void)
 	int nr_zones = 0;
 	unsigned long recommended_min;
 
-	if (!khugepaged_enabled()) {
+	if (!hugepage_flags_enabled()) {
 		calculate_min_free_kbytes();
 		goto update_wmarks;
 	}
@@ -2332,7 +2336,7 @@ int start_stop_khugepaged(void)
 	int err = 0;
 
 	mutex_lock(&khugepaged_mutex);
-	if (khugepaged_enabled()) {
+	if (hugepage_flags_enabled()) {
 		if (!khugepaged_thread)
 			khugepaged_thread = kthread_run(khugepaged, NULL,
 							"khugepaged");
@@ -2358,7 +2362,7 @@ int start_stop_khugepaged(void)
 void khugepaged_min_free_kbytes_update(void)
 {
 	mutex_lock(&khugepaged_mutex);
-	if (khugepaged_enabled() && khugepaged_thread)
+	if (hugepage_flags_enabled() && khugepaged_thread)
 		set_recommended_min_free_kbytes();
 	mutex_unlock(&khugepaged_mutex);
 }
--
2.26.3

2022-06-16 00:51:24

by Zach O'Keefe

Subject: Re: [v4 PATCH 6/7] mm: khugepaged: reorg some khugepaged helpers

On 15 Jun 10:29, Yang Shi wrote:
> The khugepaged_{enabled|always|req_madv} helpers are not khugepaged-only
> anymore, so move them to huge_mm.h and rename them to hugepage_flags_xxx.
> Remove khugepaged_req_madv since it has no users.
>
> Also move khugepaged_defrag to khugepaged.c since its only caller is in
> that file; it doesn't have to live in a header.
>
> Signed-off-by: Yang Shi <[email protected]>
> ---
> [... diff snipped; identical to the patch above ...]
>

Reviewed-by: Zach O'Keefe <[email protected]>