2018-05-23 08:28:02

by Huang, Ying

Subject: [PATCH -mm -V3 00/21] mm, THP, swap: Swapout/swapin THP in one piece

From: Huang Ying <[email protected]>

Hi, Andrew, could you help me to check whether the overall design is
reasonable?

Hi, Hugh, Shaohua, Minchan and Rik, could you help me to review the
swap part of the patchset? Especially [02/21], [03/21], [04/21],
[05/21], [06/21], [07/21], [08/21], [09/21], [10/21], [11/21],
[12/21], [20/21].

Hi, Andrea and Kirill, could you help me to review the THP part of the
patchset? Especially [01/21], [07/21], [09/21], [11/21], [13/21],
[15/21], [16/21], [17/21], [18/21], [19/21], [20/21], [21/21].

Hi, Johannes and Michal, could you help me to review the cgroup part
of the patchset? Especially [14/21].

And for all, any comment is welcome!

This patchset is based on the 2018-05-18 head of mmotm/master.

This is the final step of the THP (Transparent Huge Page) swap
optimization. After the first and second steps, splitting of the huge
page is delayed from almost the first step of swapout until after
swapout has finished. In this step, we avoid splitting the THP for
swapout and swap the THP out and in in one piece.

We tested the patchset with the vm-scalability benchmark swap-w-seq
test case, with 16 processes. Each process allocates a large anonymous
memory range and writes it from beginning to end for 8 rounds. The
first round triggers swapout, while the remaining rounds trigger both
swapin and swapout. The test was done on a Xeon E5 v3 system; the swap
device used is a RAM-simulated PMEM (persistent memory) device. The
test results are as follows,

   base (±%stddev)      %change     optimized (±%stddev)
  -----------------    ---------   ----------------------
   1417897 ±  2%         +992.8%    15494673               vm-scalability.throughput
   1020489 ±  4%        +1091.2%    12156349               vmstat.swap.si
   1255093 ±  3%         +940.3%    13056114               vmstat.swap.so
   1259769 ±  7%        +1818.3%    24166779               meminfo.AnonHugePages
  28021761                -10.7%    25018848 ±  2%         meminfo.AnonPages
  64080064 ±  4%          -95.6%     2787565 ± 33%         interrupts.CAL:Function_call_interrupts
     13.91 ±  5%          -13.8         0.10 ± 27%         perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath

Here, the benchmark score (bytes written per second) improved by
992.8%, and the swapout/swapin throughput improved by 1008% (from
about 2.17GB/s to 24.04GB/s). The performance difference is huge. In
the base kernel, the THP is swapped out and split during the first
round of writing, so the remaining rounds see only normal page swapin
and swapout. In the optimized kernel, the THP is kept after the first
swapout, so THP swapin and swapout are used in the remaining rounds.
This shows the key benefit of swapping the THP out/in in one piece:
the THP is kept instead of being split. The meminfo data confirms
this: in the base kernel only 4.5% of the anonymous pages are THP
during the test, while in the optimized kernel 96.6% are. The TLB
flushing IPIs (shown as interrupts.CAL:Function_call_interrupts) are
reduced by 95.6%, and the cycles spent on spinlocks drop from 13.9% to
0.1%. These are performance benefits of THP swapout/swapin too.

Below is a description of all steps of the THP swap optimization.

Recently, the performance of storage devices has improved so quickly
that we cannot saturate the device bandwidth with a single logical CPU
when doing page swapping, even on a high-end server machine, because
storage performance has improved faster than that of a single logical
CPU. It seems that this trend will not change in the near future. On
the other hand, THP becomes more and more popular because of increased
memory sizes. So it becomes necessary to optimize THP swap
performance.

The advantages of swapping a THP out/in in one piece include:

- Batch various swap operations for the THP. Many operations need to
  be done only once per THP instead of once per normal page, for
  example allocating/freeing the swap space, writing/reading the swap
  space, flushing the TLB, handling the page fault, etc. This greatly
  improves the performance of THP swap.

- THP swap space reads/writes become large sequential IO (2M on
  x86_64). This is particularly helpful for swapin, which is usually
  4k random IO. This improves the performance of THP swap too.

- It helps memory fragmentation, especially when THP is heavily used
  by applications. THP-order pages will be freed up after THP swapout.

- It improves THP utilization on systems with swap turned on, because
  khugepaged collapses normal pages into THP quite slowly. If the THP
  is split during swapout, it takes quite a long time for the normal
  pages to be collapsed back into a THP after being swapped in. High
  THP utilization also helps the efficiency of page-based memory
  management.

There are some concerns regarding THP swapin, mainly because the
enlarged read/write IO size (for swapout/swapin) may put more overhead
on the storage device. To deal with that, THP swapin is turned on only
when necessary. A new sysfs interface,
/sys/kernel/mm/transparent_hugepage/swapin_enabled, is added to
configure it. It uses the usual "always/never/madvise" logic: turned
on globally, turned off globally, or turned on only for VMAs with
MADV_HUGEPAGE.
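
For illustration only (not part of the patchset): with swapin_enabled
set to "madvise", a VMA has to be marked with MADV_HUGEPAGE for THP
swapin to be used on it. A minimal userspace sketch:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stddef.h>

int main(void)
{
        size_t len = 1UL << 30;         /* 1GB anonymous region */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                return 1;
        /*
         * Opt this VMA into THP, and thus into THP swapin when
         * /sys/kernel/mm/transparent_hugepage/swapin_enabled is set
         * to "madvise".
         */
        madvise(p, len, MADV_HUGEPAGE);
        /* ... touch the memory, let it be swapped out and back in ... */
        return 0;
}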

Changelog
---------

v3:

- Rebased on 5/18 HEAD of mmotm/master

- Fixed a build bug, Thanks 0-Day!

v2:

- Fixed several build bugs, Thanks 0-Day!

- Improved documentation as suggested by Randy Dunlap.

- Fixed several bugs in reading huge swap cluster


2018-05-23 08:27:26

by Huang, Ying

Subject: [PATCH -mm -V3 04/21] mm, THP, swap: Support PMD swap mapping in swapcache_free_cluster()

From: Huang Ying <[email protected]>

Previously, during swapout, all PMD page mappings were split and
replaced with PTE swap mappings, and when the SWAP_HAS_CACHE flag was
cleared for the huge swap cluster in swapcache_free_cluster(), the
huge swap cluster was split. Now, during swapout, the PMD page mapping
is changed to a PMD swap mapping instead. So when clearing the
SWAP_HAS_CACHE flag, the huge swap cluster is only split if the PMD
swap mapping count is 0; otherwise it is kept as a huge swap cluster,
so that the THP can be swapped in as a whole later.

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
mm/swapfile.c | 41 ++++++++++++++++++++++++++++++-----------
1 file changed, 30 insertions(+), 11 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 075048032383..8dbc0f9b2f90 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -514,6 +514,18 @@ static void dec_cluster_info_page(struct swap_info_struct *p,
free_cluster(p, idx);
}

+#ifdef CONFIG_THP_SWAP
+static inline int cluster_swapcount(struct swap_cluster_info *ci)
+{
+ if (!ci || !cluster_is_huge(ci))
+ return 0;
+
+ return cluster_count(ci) - SWAPFILE_CLUSTER;
+}
+#else
+#define cluster_swapcount(ci) 0
+#endif
+
/*
* It's possible scan_swap_map() uses a free cluster in the middle of free
* cluster list. Avoiding such abuse to avoid list corruption.
@@ -905,6 +917,7 @@ static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
struct swap_cluster_info *ci;

ci = lock_cluster(si, offset);
+ memset(si->swap_map + offset, 0, SWAPFILE_CLUSTER);
cluster_set_count_flag(ci, 0, 0);
free_cluster(si, idx);
unlock_cluster(ci);
@@ -1288,24 +1301,30 @@ static void swapcache_free_cluster(swp_entry_t entry)

ci = lock_cluster(si, offset);
VM_BUG_ON(!cluster_is_huge(ci));
+ VM_BUG_ON(!is_cluster_offset(offset));
+ VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER);
map = si->swap_map + offset;
- for (i = 0; i < SWAPFILE_CLUSTER; i++) {
- val = map[i];
- VM_BUG_ON(!(val & SWAP_HAS_CACHE));
- if (val == SWAP_HAS_CACHE)
- free_entries++;
+ if (!cluster_swapcount(ci)) {
+ for (i = 0; i < SWAPFILE_CLUSTER; i++) {
+ val = map[i];
+ VM_BUG_ON(!(val & SWAP_HAS_CACHE));
+ if (val == SWAP_HAS_CACHE)
+ free_entries++;
+ }
+ if (free_entries != SWAPFILE_CLUSTER)
+ cluster_clear_huge(ci);
}
if (!free_entries) {
- for (i = 0; i < SWAPFILE_CLUSTER; i++)
- map[i] &= ~SWAP_HAS_CACHE;
+ for (i = 0; i < SWAPFILE_CLUSTER; i++) {
+ val = map[i];
+ VM_BUG_ON(!(val & SWAP_HAS_CACHE) ||
+ val == SWAP_HAS_CACHE);
+ map[i] = val & ~SWAP_HAS_CACHE;
+ }
}
- cluster_clear_huge(ci);
unlock_cluster(ci);
if (free_entries == SWAPFILE_CLUSTER) {
spin_lock(&si->lock);
- ci = lock_cluster(si, offset);
- memset(map, 0, SWAPFILE_CLUSTER);
- unlock_cluster(ci);
mem_cgroup_uncharge_swap(entry, SWAPFILE_CLUSTER);
swap_free_cluster(si, idx);
spin_unlock(&si->lock);
--
2.16.1


2018-05-23 08:27:52

by Huang, Ying

Subject: [PATCH -mm -V3 16/21] mm, THP, swap: Free PMD swap mapping when zap_huge_pmd()

From: Huang Ying <[email protected]>

For a PMD swap mapping, zap_huge_pmd() clears the PMD and calls
free_swap_and_cache() to decrease the swap reference count and
possibly free or split the huge swap cluster and the THP in the swap
cache.

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
mm/huge_memory.c | 32 +++++++++++++++++++++-----------
1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 01fdd59fe6d4..e057b966ea68 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2007,7 +2007,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
spin_unlock(ptl);
if (is_huge_zero_pmd(orig_pmd))
tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
- } else if (is_huge_zero_pmd(orig_pmd)) {
+ } else if (pmd_present(orig_pmd) && is_huge_zero_pmd(orig_pmd)) {
zap_deposited_table(tlb->mm, pmd);
spin_unlock(ptl);
tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
@@ -2020,17 +2020,27 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
page_remove_rmap(page, true);
VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
VM_BUG_ON_PAGE(!PageHead(page), page);
- } else if (thp_migration_supported()) {
- swp_entry_t entry;
-
- VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
- entry = pmd_to_swp_entry(orig_pmd);
- page = pfn_to_page(swp_offset(entry));
+ } else {
+ swp_entry_t entry = pmd_to_swp_entry(orig_pmd);
+
+ if (thp_migration_supported() &&
+ is_migration_entry(entry))
+ page = pfn_to_page(swp_offset(entry));
+ else if (thp_swap_supported() &&
+ !non_swap_entry(entry))
+ free_swap_and_cache(entry, true);
+ else {
+ WARN_ONCE(1,
+"Non present huge pmd without pmd migration or swap enabled!");
+ goto unlock;
+ }
flush_needed = 0;
- } else
- WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
+ }

- if (PageAnon(page)) {
+ if (!page) {
+ zap_deposited_table(tlb->mm, pmd);
+ add_mm_counter(tlb->mm, MM_SWAPENTS, -HPAGE_PMD_NR);
+ } else if (PageAnon(page)) {
zap_deposited_table(tlb->mm, pmd);
add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
} else {
@@ -2038,7 +2048,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
zap_deposited_table(tlb->mm, pmd);
add_mm_counter(tlb->mm, MM_FILEPAGES, -HPAGE_PMD_NR);
}
-
+unlock:
spin_unlock(ptl);
if (flush_needed)
tlb_remove_page_size(tlb, page, HPAGE_PMD_SIZE);
--
2.16.1


2018-05-23 08:28:01

by Huang, Ying

Subject: [PATCH -mm -V3 07/21] mm, THP, swap: Support PMD swap mapping in split_swap_cluster()

From: Huang Ying <[email protected]>

When splitting a THP in the swap cache, or when failing to allocate a
THP while swapping in a huge swap cluster, the huge swap cluster will
be split. In addition to clearing the huge flag of the swap cluster,
the PMD swap mapping count recorded in cluster_count() will be set to
0. But the PMD swap mappings themselves are not touched, because it
can be hard to find them all. When a PMD swap mapping is operated on
later, it will be found that the huge swap cluster has been split, and
the PMD swap mapping will be split at that time.

Unless splitting a THP in the swap cache (specified via the "force"
parameter), split_swap_cluster() will return -EEXIST if the
SWAP_HAS_CACHE flag is set in swap_map[offset], because this indicates
there is a THP corresponding to this huge swap cluster, and it isn't
desirable to split that THP.

When splitting a THP in the swap cache, the call to
split_swap_cluster() is moved to before unlocking the sub-pages, so
that all sub-pages are kept locked from when the THP is split until
the huge swap cluster is split. This makes the code much easier to
reason about.

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
include/linux/swap.h | 4 ++--
mm/huge_memory.c | 18 ++++++++++++------
mm/swapfile.c | 45 ++++++++++++++++++++++++++++++---------------
3 files changed, 44 insertions(+), 23 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index bb9de2cb952a..878f132dabc0 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -617,10 +617,10 @@ static inline swp_entry_t get_swap_page(struct page *page)
#endif /* CONFIG_SWAP */

#ifdef CONFIG_THP_SWAP
-extern int split_swap_cluster(swp_entry_t entry);
+extern int split_swap_cluster(swp_entry_t entry, bool force);
extern int split_swap_cluster_map(swp_entry_t entry);
#else
-static inline int split_swap_cluster(swp_entry_t entry)
+static inline int split_swap_cluster(swp_entry_t entry, bool force)
{
return 0;
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 84d5d8ff869e..e363e13f6751 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2502,6 +2502,17 @@ static void __split_huge_page(struct page *page, struct list_head *list,

unfreeze_page(head);

+ /*
+ * Split swap cluster before unlocking sub-pages. So all
+ * sub-pages will be kept locked from THP has been split to
+ * swap cluster is split.
+ */
+ if (PageSwapCache(head)) {
+ swp_entry_t entry = { .val = page_private(head) };
+
+ split_swap_cluster(entry, true);
+ }
+
for (i = 0; i < HPAGE_PMD_NR; i++) {
struct page *subpage = head + i;
if (subpage == page)
@@ -2728,12 +2739,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
__dec_node_page_state(page, NR_SHMEM_THPS);
spin_unlock(&pgdata->split_queue_lock);
__split_huge_page(page, list, flags);
- if (PageSwapCache(head)) {
- swp_entry_t entry = { .val = page_private(head) };
-
- ret = split_swap_cluster(entry);
- } else
- ret = 0;
+ ret = 0;
} else {
if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
pr_alert("total_mapcount: %u, page_count(): %u\n",
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 05f53c4c0cfe..1e723d3a9a6f 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1414,21 +1414,6 @@ static void swapcache_free_cluster(swp_entry_t entry)
}
}
}
-
-int split_swap_cluster(swp_entry_t entry)
-{
- struct swap_info_struct *si;
- struct swap_cluster_info *ci;
- unsigned long offset = swp_offset(entry);
-
- si = _swap_info_get(entry);
- if (!si)
- return -EBUSY;
- ci = lock_cluster(si, offset);
- cluster_clear_huge(ci);
- unlock_cluster(ci);
- return 0;
-}
#else
static inline void swapcache_free_cluster(swp_entry_t entry)
{
@@ -4072,6 +4057,36 @@ int split_swap_cluster_map(swp_entry_t entry)
unlock_cluster(ci);
return 0;
}
+
+int split_swap_cluster(swp_entry_t entry, bool force)
+{
+ struct swap_info_struct *si;
+ struct swap_cluster_info *ci;
+ unsigned long offset = swp_offset(entry);
+ int ret = 0;
+
+ si = get_swap_device(entry);
+ if (!si)
+ return -EINVAL;
+ ci = lock_cluster(si, offset);
+ /* The swap cluster has been split by someone else */
+ if (!cluster_is_huge(ci))
+ goto out;
+ VM_BUG_ON(!is_cluster_offset(offset));
+ VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER);
+ /* If not forced, don't split swap cluster has swap cache */
+ if (!force && si->swap_map[offset] & SWAP_HAS_CACHE) {
+ ret = -EEXIST;
+ goto out;
+ }
+ cluster_set_count(ci, SWAPFILE_CLUSTER);
+ cluster_clear_huge(ci);
+
+out:
+ unlock_cluster(ci);
+ put_swap_device(si);
+ return ret;
+}
#endif

static int __init swapfile_init(void)
--
2.16.1


2018-05-23 08:28:31

by Huang, Ying

Subject: [PATCH -mm -V3 10/21] mm, THP, swap: Support to count THP swapin and its fallback

From: Huang Ying <[email protected]>

Two new /proc/vmstat fields, "thp_swpin" and "thp_swpin_fallback",
are added to count swapping a THP in from the swap device as a whole
and falling back to normal page swapin, respectively.
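
For reference (illustration only, not part of this patch), the new
counters can be read from /proc/vmstat like any other vm event:

#include <stdio.h>
#include <string.h>

int main(void)
{
        char line[128];
        FILE *f = fopen("/proc/vmstat", "r");

        if (!f)
                return 1;
        /* Print thp_swpin and thp_swpin_fallback (they share this prefix). */
        while (fgets(line, sizeof(line), f))
                if (!strncmp(line, "thp_swpin", 9))
                        fputs(line, stdout);
        fclose(f);
        return 0;
}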

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
Documentation/vm/transhuge.rst | 10 +++++++++-
include/linux/vm_event_item.h | 2 ++
mm/huge_memory.c | 4 +++-
mm/page_io.c | 15 ++++++++++++---
mm/vmstat.c | 2 ++
5 files changed, 28 insertions(+), 5 deletions(-)

diff --git a/Documentation/vm/transhuge.rst b/Documentation/vm/transhuge.rst
index 2c6867fca6ff..a87b1d880cd4 100644
--- a/Documentation/vm/transhuge.rst
+++ b/Documentation/vm/transhuge.rst
@@ -360,10 +360,18 @@ thp_swpout
piece without splitting.

thp_swpout_fallback
- is incremented if a huge page has to be split before swapout.
+ is incremented if a huge page is split before swapout.
Usually because failed to allocate some continuous swap space
for the huge page.

+thp_swpin
+ is incremented every time a huge page is swapped in in one piece
+ without splitting.
+
+thp_swpin_fallback
+ is incremented if a huge page is split during swapin. Usually
+ this is because a huge page could not be allocated.
+
As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in ``/proc/vmstat`` to help
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 5c7f010676a7..7b438548a78e 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -88,6 +88,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
THP_ZERO_PAGE_ALLOC_FAILED,
THP_SWPOUT,
THP_SWPOUT_FALLBACK,
+ THP_SWPIN,
+ THP_SWPIN_FALLBACK,
#endif
#ifdef CONFIG_MEMORY_BALLOON
BALLOON_INFLATE,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8303fa021c42..c2437914c632 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1664,8 +1664,10 @@ int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
/* swapoff occurs under us */
} else if (ret == -EINVAL)
ret = 0;
- else
+ else {
+ count_vm_event(THP_SWPIN_FALLBACK);
goto fallback;
+ }
}
delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
goto out;
diff --git a/mm/page_io.c b/mm/page_io.c
index b41cf9644585..96277058681e 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -347,6 +347,15 @@ int __swap_writepage(struct page *page, struct writeback_control *wbc,
return ret;
}

+static inline void count_swpin_vm_event(struct page *page)
+{
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ if (unlikely(PageTransHuge(page)))
+ count_vm_event(THP_SWPIN);
+#endif
+ count_vm_events(PSWPIN, hpage_nr_pages(page));
+}
+
int swap_readpage(struct page *page, bool synchronous)
{
struct bio *bio;
@@ -370,7 +379,7 @@ int swap_readpage(struct page *page, bool synchronous)

ret = mapping->a_ops->readpage(swap_file, page);
if (!ret)
- count_vm_event(PSWPIN);
+ count_swpin_vm_event(page);
return ret;
}

@@ -381,7 +390,7 @@ int swap_readpage(struct page *page, bool synchronous)
unlock_page(page);
}

- count_vm_event(PSWPIN);
+ count_swpin_vm_event(page);
return 0;
}

@@ -400,7 +409,7 @@ int swap_readpage(struct page *page, bool synchronous)
get_task_struct(current);
bio->bi_private = current;
bio_set_op_attrs(bio, REQ_OP_READ, 0);
- count_vm_event(PSWPIN);
+ count_swpin_vm_event(page);
bio_get(bio);
qc = submit_bio(bio);
while (synchronous) {
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 75eda9c2b260..259c7bddbb6e 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1263,6 +1263,8 @@ const char * const vmstat_text[] = {
"thp_zero_page_alloc_failed",
"thp_swpout",
"thp_swpout_fallback",
+ "thp_swpin",
+ "thp_swpin_fallback",
#endif
#ifdef CONFIG_MEMORY_BALLOON
"balloon_inflate",
--
2.16.1


2018-05-23 08:28:33

by Huang, Ying

Subject: [PATCH -mm -V3 21/21] mm, THP: Avoid to split THP when reclaim MADV_FREE THP

From: Huang Ying <[email protected]>

Previously, to reclaim a MADV_FREE THP, the THP was split first, then
each sub-page was reclaimed. This wastes cycles splitting the THP and
unmapping and freeing each sub-page, and splits the THP even if it has
been written to since MADV_FREE. We had to do this because MADV_FREE
THP reclaim shares the same try_to_unmap() call with swap, and swap
needed to split the PMD page mapping at that time. Now that swap can
process PMD mappings, it is easy to avoid splitting the THP when a
MADV_FREE THP is reclaimed.

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
mm/huge_memory.c | 41 ++++++++++++++++++++++++++++++++---------
mm/vmscan.c | 3 ++-
2 files changed, 34 insertions(+), 10 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 88984e95b9b2..2d68a8f65531 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1671,6 +1671,15 @@ int do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
return 0;
}

+static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
+{
+ pgtable_t pgtable;
+
+ pgtable = pgtable_trans_huge_withdraw(mm, pmd);
+ pte_free(mm, pgtable);
+ mm_dec_nr_ptes(mm);
+}
+
#ifdef CONFIG_THP_SWAP
void __split_huge_swap_pmd(struct vm_area_struct *vma,
unsigned long haddr,
@@ -1885,6 +1894,28 @@ bool set_pmd_swap_entry(struct page_vma_mapped_walk *pvmw, struct page *page,
pmd_t swp_pmd;
swp_entry_t entry = { .val = page_private(page) };

+ if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
+ WARN_ON_ONCE(1);
+ return false;
+ }
+
+ /* MADV_FREE page check */
+ if (!PageSwapBacked(page)) {
+ if (!PageDirty(page)) {
+ zap_deposited_table(mm, pvmw->pmd);
+ add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+ goto out_remove_rmap;
+ }
+
+ /*
+ * If the page was redirtied, it cannot be
+ * discarded. Remap the page to page table.
+ */
+ set_pmd_at(mm, address, pvmw->pmd, pmdval);
+ SetPageSwapBacked(page);
+ return false;
+ }
+
if (swap_duplicate(&entry, true) < 0) {
set_pmd_at(mm, address, pvmw->pmd, pmdval);
return false;
@@ -1902,21 +1933,13 @@ bool set_pmd_swap_entry(struct page_vma_mapped_walk *pvmw, struct page *page,
swp_pmd = pmd_swp_mksoft_dirty(swp_pmd);
set_pmd_at(mm, address, pvmw->pmd, swp_pmd);

+out_remove_rmap:
page_remove_rmap(page, true);
put_page(page);
return true;
}
#endif

-static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
-{
- pgtable_t pgtable;
-
- pgtable = pgtable_trans_huge_withdraw(mm, pmd);
- pte_free(mm, pgtable);
- mm_dec_nr_ptes(mm);
-}
-
/*
* Return true if we do MADV_FREE successfully on entire pmd page.
* Otherwise, return false.
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9f46047d4dee..1b89552523f6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1137,7 +1137,8 @@ static unsigned long shrink_page_list(struct list_head *page_list,
/* Adding to swap updated mapping */
mapping = page_mapping(page);
}
- } else if (unlikely(PageTransHuge(page))) {
+ } else if (unlikely(PageTransHuge(page)) &&
+ (!thp_swap_supported() || !PageAnon(page))) {
/* Split file THP */
if (split_huge_page_to_list(page, page_list))
goto keep_locked;
--
2.16.1


2018-05-23 08:28:45

by Huang, Ying

Subject: [PATCH -mm -V3 13/21] mm, THP, swap: Support PMD swap mapping in madvise_free()

From: Huang Ying <[email protected]>

When madvise_free() finds a PMD swap mapping, if only part of the
huge swap cluster is operated on, the PMD swap mapping will be split
and processing falls back to PTE swap mappings. Otherwise, if the
whole huge swap cluster is operated on, free_swap_and_cache() will be
called to decrease the PMD swap mapping count and possibly free the
swap space and the THP in the swap cache too.

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
mm/huge_memory.c | 50 +++++++++++++++++++++++++++++++++++---------------
mm/madvise.c | 2 +-
2 files changed, 36 insertions(+), 16 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 668d77cec14d..a8af2ddc578a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1842,6 +1842,15 @@ static inline void __split_huge_swap_pmd(struct vm_area_struct *vma,
}
#endif

+static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
+{
+ pgtable_t pgtable;
+
+ pgtable = pgtable_trans_huge_withdraw(mm, pmd);
+ pte_free(mm, pgtable);
+ mm_dec_nr_ptes(mm);
+}
+
/*
* Return true if we do MADV_FREE successfully on entire pmd page.
* Otherwise, return false.
@@ -1862,15 +1871,35 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
goto out_unlocked;

orig_pmd = *pmd;
- if (is_huge_zero_pmd(orig_pmd))
- goto out;
-
if (unlikely(!pmd_present(orig_pmd))) {
- VM_BUG_ON(thp_migration_supported() &&
- !is_pmd_migration_entry(orig_pmd));
- goto out;
+ swp_entry_t entry = pmd_to_swp_entry(orig_pmd);
+
+ if (is_migration_entry(entry)) {
+ VM_BUG_ON(!thp_migration_supported());
+ goto out;
+ } else if (thp_swap_supported() && !non_swap_entry(entry)) {
+ /* If part of THP is discarded */
+ if (next - addr != HPAGE_PMD_SIZE) {
+ unsigned long haddr = addr & HPAGE_PMD_MASK;
+
+ __split_huge_swap_pmd(vma, haddr, pmd);
+ goto out;
+ }
+ free_swap_and_cache(entry, true);
+ pmd_clear(pmd);
+ zap_deposited_table(mm, pmd);
+ if (current->mm == mm)
+ sync_mm_rss(mm);
+ add_mm_counter(mm, MM_SWAPENTS, -HPAGE_PMD_NR);
+ ret = true;
+ goto out;
+ } else
+ VM_BUG_ON(1);
}

+ if (is_huge_zero_pmd(orig_pmd))
+ goto out;
+
page = pmd_page(orig_pmd);
/*
* If other processes are mapping this page, we couldn't discard
@@ -1916,15 +1945,6 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
return ret;
}

-static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
-{
- pgtable_t pgtable;
-
- pgtable = pgtable_trans_huge_withdraw(mm, pmd);
- pte_free(mm, pgtable);
- mm_dec_nr_ptes(mm);
-}
-
int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
pmd_t *pmd, unsigned long addr)
{
diff --git a/mm/madvise.c b/mm/madvise.c
index d180000c626b..e03e85a20fb4 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -321,7 +321,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
unsigned long next;

next = pmd_addr_end(addr, end);
- if (pmd_trans_huge(*pmd))
+ if (pmd_trans_huge(*pmd) || is_swap_pmd(*pmd))
if (madvise_free_huge_pmd(tlb, vma, pmd, addr, next))
goto next;

--
2.16.1


2018-05-23 08:29:01

by Huang, Ying

Subject: [PATCH -mm -V3 19/21] mm, THP, swap: Support PMD swap mapping in common path

From: Huang Ying <[email protected]>

The original code handles only PMD migration entries; it is revised
here to support PMD swap mappings as well.

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
fs/proc/task_mmu.c | 8 ++++----
mm/gup.c | 34 ++++++++++++++++++++++------------
mm/huge_memory.c | 6 +++---
mm/mempolicy.c | 2 +-
4 files changed, 30 insertions(+), 20 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 597969db9e90..ddb6a2832f86 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -978,7 +978,7 @@ static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
pmd = pmd_clear_soft_dirty(pmd);

set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
- } else if (is_migration_entry(pmd_to_swp_entry(pmd))) {
+ } else if (is_swap_pmd(pmd)) {
pmd = pmd_swp_clear_soft_dirty(pmd);
set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
}
@@ -1309,7 +1309,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
frame = pmd_pfn(pmd) +
((addr & ~PMD_MASK) >> PAGE_SHIFT);
}
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+#if defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) || defined(CONFIG_THP_SWAP)
else if (is_swap_pmd(pmd)) {
swp_entry_t entry = pmd_to_swp_entry(pmd);
unsigned long offset;
@@ -1323,8 +1323,8 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
flags |= PM_SWAP;
if (pmd_swp_soft_dirty(pmd))
flags |= PM_SOFT_DIRTY;
- VM_BUG_ON(!is_pmd_migration_entry(pmd));
- page = migration_entry_to_page(entry);
+ if (is_pmd_migration_entry(pmd))
+ page = migration_entry_to_page(entry);
}
#endif

diff --git a/mm/gup.c b/mm/gup.c
index b70d7ba7cc13..84ba4ad8120d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -216,6 +216,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
spinlock_t *ptl;
struct page *page;
struct mm_struct *mm = vma->vm_mm;
+ swp_entry_t entry;

pmd = pmd_offset(pudp, address);
/*
@@ -243,18 +244,21 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
if (!pmd_present(pmdval)) {
if (likely(!(flags & FOLL_MIGRATION)))
return no_page_table(vma, flags);
- VM_BUG_ON(thp_migration_supported() &&
- !is_pmd_migration_entry(pmdval));
- if (is_pmd_migration_entry(pmdval))
+ entry = pmd_to_swp_entry(pmdval);
+ if (thp_migration_supported() && is_migration_entry(entry)) {
pmd_migration_entry_wait(mm, pmd);
- pmdval = READ_ONCE(*pmd);
- /*
- * MADV_DONTNEED may convert the pmd to null because
- * mmap_sem is held in read mode
- */
- if (pmd_none(pmdval))
+ pmdval = READ_ONCE(*pmd);
+ /*
+ * MADV_DONTNEED may convert the pmd to null because
+ * mmap_sem is held in read mode
+ */
+ if (pmd_none(pmdval))
+ return no_page_table(vma, flags);
+ goto retry;
+ }
+ if (thp_swap_supported() && !non_swap_entry(entry))
return no_page_table(vma, flags);
- goto retry;
+ VM_BUG_ON(1);
}
if (pmd_devmap(pmdval)) {
ptl = pmd_lock(mm, pmd);
@@ -276,11 +280,17 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
return no_page_table(vma, flags);
}
if (unlikely(!pmd_present(*pmd))) {
+ entry = pmd_to_swp_entry(*pmd);
spin_unlock(ptl);
if (likely(!(flags & FOLL_MIGRATION)))
return no_page_table(vma, flags);
- pmd_migration_entry_wait(mm, pmd);
- goto retry_locked;
+ if (thp_migration_supported() && is_migration_entry(entry)) {
+ pmd_migration_entry_wait(mm, pmd);
+ goto retry_locked;
+ }
+ if (thp_swap_supported() && !non_swap_entry(entry))
+ return no_page_table(vma, flags);
+ VM_BUG_ON(1);
}
if (unlikely(!pmd_trans_huge(*pmd))) {
spin_unlock(ptl);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e057b966ea68..e80d03c2412a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2074,7 +2074,7 @@ static inline int pmd_move_must_withdraw(spinlock_t *new_pmd_ptl,
static pmd_t move_soft_dirty_pmd(pmd_t pmd)
{
#ifdef CONFIG_MEM_SOFT_DIRTY
- if (unlikely(is_pmd_migration_entry(pmd)))
+ if (unlikely(is_swap_pmd(pmd)))
pmd = pmd_swp_mksoft_dirty(pmd);
else if (pmd_present(pmd))
pmd = pmd_mksoft_dirty(pmd);
@@ -2160,11 +2160,11 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
preserve_write = prot_numa && pmd_write(*pmd);
ret = 1;

-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+#if defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) || defined(CONFIG_THP_SWAP)
if (is_swap_pmd(*pmd)) {
swp_entry_t entry = pmd_to_swp_entry(*pmd);

- VM_BUG_ON(!is_pmd_migration_entry(*pmd));
+ VM_BUG_ON(!thp_swap_supported() && !is_migration_entry(entry));
if (is_write_migration_entry(entry)) {
pmd_t newpmd;
/*
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 9ac49ef17b4e..180d7c08f6cc 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -436,7 +436,7 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
struct queue_pages *qp = walk->private;
unsigned long flags;

- if (unlikely(is_pmd_migration_entry(*pmd))) {
+ if (unlikely(is_swap_pmd(*pmd))) {
ret = 1;
goto unlock;
}
--
2.16.1


2018-05-23 08:29:14

by Huang, Ying

Subject: [PATCH -mm -V3 20/21] mm, THP, swap: create PMD swap mapping when unmap the THP

From: Huang Ying <[email protected]>

This is the final step of the THP swapin support. When reclaiming an
anonymous THP, after allocating the huge swap cluster and adding the
THP into the swap cache, the PMD page mapping is changed to a mapping
to the swap space. Previously, the PMD page mapping was split before
being changed. In this patch, the unmap code is enhanced not to split
the PMD mapping, but to create a PMD swap mapping to replace it
instead. So later, when the SWAP_HAS_CACHE flag is cleared in the last
step of swapout, the huge swap cluster is kept instead of being split,
and on swapin the huge swap cluster is read as a whole into a THP.
That is, the THP is not split during swapout/swapin. This eliminates
the overhead of splitting/collapsing and reduces the page fault count,
etc. More importantly, THP utilization is improved greatly; that is,
many more THPs are kept when swapping is used, so that we can take
full advantage of THP, including its high swapout/swapin performance.

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
include/linux/huge_mm.h | 11 +++++++++++
mm/huge_memory.c | 30 ++++++++++++++++++++++++++++++
mm/rmap.c | 43 +++++++++++++++++++++++++++++++++++++++++--
mm/vmscan.c | 6 +-----
4 files changed, 83 insertions(+), 7 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5001c28b3d18..d03fcddcc42d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -404,6 +404,8 @@ static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

+struct page_vma_mapped_walk;
+
#ifdef CONFIG_THP_SWAP
extern void __split_huge_swap_pmd(struct vm_area_struct *vma,
unsigned long haddr,
@@ -411,6 +413,8 @@ extern void __split_huge_swap_pmd(struct vm_area_struct *vma,
extern int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
unsigned long address, pmd_t orig_pmd);
extern int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd);
+extern bool set_pmd_swap_entry(struct page_vma_mapped_walk *pvmw,
+ struct page *page, unsigned long address, pmd_t pmdval);

static inline bool transparent_hugepage_swapin_enabled(
struct vm_area_struct *vma)
@@ -452,6 +456,13 @@ static inline int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
return 0;
}

+static inline bool set_pmd_swap_entry(struct page_vma_mapped_walk *pvmw,
+ struct page *page, unsigned long address,
+ pmd_t pmdval)
+{
+ return false;
+}
+
static inline bool transparent_hugepage_swapin_enabled(
struct vm_area_struct *vma)
{
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e80d03c2412a..88984e95b9b2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1876,6 +1876,36 @@ int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
count_vm_event(THP_SWPIN_FALLBACK);
goto fallback;
}
+
+bool set_pmd_swap_entry(struct page_vma_mapped_walk *pvmw, struct page *page,
+ unsigned long address, pmd_t pmdval)
+{
+ struct vm_area_struct *vma = pvmw->vma;
+ struct mm_struct *mm = vma->vm_mm;
+ pmd_t swp_pmd;
+ swp_entry_t entry = { .val = page_private(page) };
+
+ if (swap_duplicate(&entry, true) < 0) {
+ set_pmd_at(mm, address, pvmw->pmd, pmdval);
+ return false;
+ }
+ if (list_empty(&mm->mmlist)) {
+ spin_lock(&mmlist_lock);
+ if (list_empty(&mm->mmlist))
+ list_add(&mm->mmlist, &init_mm.mmlist);
+ spin_unlock(&mmlist_lock);
+ }
+ add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+ add_mm_counter(mm, MM_SWAPENTS, HPAGE_PMD_NR);
+ swp_pmd = swp_entry_to_pmd(entry);
+ if (pmd_soft_dirty(pmdval))
+ swp_pmd = pmd_swp_mksoft_dirty(swp_pmd);
+ set_pmd_at(mm, address, pvmw->pmd, swp_pmd);
+
+ page_remove_rmap(page, true);
+ put_page(page);
+ return true;
+}
#endif

static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
diff --git a/mm/rmap.c b/mm/rmap.c
index 5f45d6325c40..4861b1a86e2a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1402,12 +1402,51 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
continue;
}

+ address = pvmw.address;
+
+#ifdef CONFIG_THP_SWAP
+ /* PMD-mapped THP swap entry */
+ if (thp_swap_supported() && !pvmw.pte && PageAnon(page)) {
+ pmd_t pmdval;
+
+ VM_BUG_ON_PAGE(PageHuge(page) ||
+ !PageTransCompound(page), page);
+
+ flush_cache_range(vma, address,
+ address + HPAGE_PMD_SIZE);
+ mmu_notifier_invalidate_range_start(mm, address,
+ address + HPAGE_PMD_SIZE);
+ if (should_defer_flush(mm, flags)) {
+ /* check comments for PTE below */
+ pmdval = pmdp_huge_get_and_clear(mm, address,
+ pvmw.pmd);
+ set_tlb_ubc_flush_pending(mm,
+ pmd_dirty(pmdval));
+ } else
+ pmdval = pmdp_huge_clear_flush(vma, address,
+ pvmw.pmd);
+
+ /*
+ * Move the dirty bit to the page. Now the pmd
+ * is gone.
+ */
+ if (pmd_dirty(pmdval))
+ set_page_dirty(page);
+
+ /* Update high watermark before we lower rss */
+ update_hiwater_rss(mm);
+
+ ret = set_pmd_swap_entry(&pvmw, page, address, pmdval);
+ mmu_notifier_invalidate_range_end(mm, address,
+ address + HPAGE_PMD_SIZE);
+ continue;
+ }
+#endif
+
/* Unexpected PMD-mapped THP? */
VM_BUG_ON_PAGE(!pvmw.pte, page);

subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
- address = pvmw.address;
-

if (IS_ENABLED(CONFIG_MIGRATION) &&
(flags & TTU_MIGRATION) &&
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 50055d72f294..9f46047d4dee 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1148,11 +1148,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
* processes. Try to unmap it here.
*/
if (page_mapped(page)) {
- enum ttu_flags flags = ttu_flags | TTU_BATCH_FLUSH;
-
- if (unlikely(PageTransHuge(page)))
- flags |= TTU_SPLIT_HUGE_PMD;
- if (!try_to_unmap(page, flags)) {
+ if (!try_to_unmap(page, ttu_flags | TTU_BATCH_FLUSH)) {
nr_unmap_fail++;
goto activate_locked;
}
--
2.16.1


2018-05-23 08:29:33

by Huang, Ying

Subject: [PATCH -mm -V3 18/21] mm, THP, swap: Support PMD swap mapping in mincore()

From: Huang Ying <[email protected]>

During mincore(), for a PMD swap mapping, the swap cache will be
looked up. If the resulting page isn't a compound page, the PMD swap
mapping will be split and processing falls back to PTE swap mappings.
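
For context (illustration only, not part of this patch): mincore()
fills one byte per page of the queried range; the change below is
about how those bytes are computed when the range is covered by a PMD
swap mapping. A minimal userspace sketch of the call:

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
        long page = sysconf(_SC_PAGESIZE);
        size_t len = 16 * page;
        unsigned char vec[16];          /* one byte per page */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        size_t i;

        if (p == MAP_FAILED || mincore(p, len, vec))
                return 1;
        for (i = 0; i < 16; i++)
                printf("page %zu: %s\n", i,
                       (vec[i] & 1) ? "in core" : "not in core");
        return 0;
}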

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
mm/mincore.c | 37 +++++++++++++++++++++++++++++++------
1 file changed, 31 insertions(+), 6 deletions(-)

diff --git a/mm/mincore.c b/mm/mincore.c
index a66f2052c7b1..897dd2c187e8 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -48,7 +48,8 @@ static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
* and is up to date; i.e. that no page-in operation would be required
* at this time if an application were to map and access this page.
*/
-static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff)
+static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff,
+ bool *compound)
{
unsigned char present = 0;
struct page *page;
@@ -86,6 +87,8 @@ static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff)
#endif
if (page) {
present = PageUptodate(page);
+ if (compound)
+ *compound = PageCompound(page);
put_page(page);
}

@@ -103,7 +106,8 @@ static int __mincore_unmapped_range(unsigned long addr, unsigned long end,

pgoff = linear_page_index(vma, addr);
for (i = 0; i < nr; i++, pgoff++)
- vec[i] = mincore_page(vma->vm_file->f_mapping, pgoff);
+ vec[i] = mincore_page(vma->vm_file->f_mapping,
+ pgoff, NULL);
} else {
for (i = 0; i < nr; i++)
vec[i] = 0;
@@ -127,14 +131,36 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
pte_t *ptep;
unsigned char *vec = walk->private;
int nr = (end - addr) >> PAGE_SHIFT;
+ swp_entry_t entry;

ptl = pmd_trans_huge_lock(pmd, vma);
if (ptl) {
- memset(vec, 1, nr);
+ unsigned char val = 1;
+ bool compound;
+
+ if (thp_swap_supported() && is_swap_pmd(*pmd)) {
+ entry = pmd_to_swp_entry(*pmd);
+ if (!non_swap_entry(entry)) {
+ val = mincore_page(swap_address_space(entry),
+ swp_offset(entry),
+ &compound);
+ /*
+ * The huge swap cluster has been
+ * split under us
+ */
+ if (!compound) {
+ __split_huge_swap_pmd(vma, addr, pmd);
+ spin_unlock(ptl);
+ goto fallback;
+ }
+ }
+ }
+ memset(vec, val, nr);
spin_unlock(ptl);
goto out;
}

+fallback:
if (pmd_trans_unstable(pmd)) {
__mincore_unmapped_range(addr, end, vma, vec);
goto out;
@@ -150,8 +176,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
else if (pte_present(pte))
*vec = 1;
else { /* pte is a swap entry */
- swp_entry_t entry = pte_to_swp_entry(pte);
-
+ entry = pte_to_swp_entry(pte);
if (non_swap_entry(entry)) {
/*
* migration or hwpoison entries are always
@@ -161,7 +186,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
} else {
#ifdef CONFIG_SWAP
*vec = mincore_page(swap_address_space(entry),
- swp_offset(entry));
+ swp_offset(entry), NULL);
#else
WARN_ON(1);
*vec = 1;
--
2.16.1


2018-05-23 08:29:51

by Huang, Ying

Subject: [PATCH -mm -V3 17/21] mm, THP, swap: Support PMD swap mapping for MADV_WILLNEED

From: Huang Ying <[email protected]>

During MADV_WILLNEED, for a PMD swap mapping, if THP swapin is
enabled for the VMA, the whole swap cluster will be swapped in.
Otherwise, the huge swap cluster and the PMD swap mapping will be
split, and processing falls back to PTE swap mappings.
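
For illustration only (not part of this patch), MADV_WILLNEED is the
userspace hint that reaches this path; applied to a range that is
currently swapped out, it asks the kernel to start reading it back in:

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/*
 * Illustration only: ask the kernel to prefetch a (possibly
 * swapped-out) anonymous range.  With a PMD swap mapping and THP
 * swapin enabled for the VMA, the whole huge swap cluster is read.
 */
static inline int prefetch_range(void *addr, size_t len)
{
        return madvise(addr, len, MADV_WILLNEED);
}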

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
mm/madvise.c | 26 ++++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index e03e85a20fb4..44a0a62f4848 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -196,14 +196,36 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
pte_t *orig_pte;
struct vm_area_struct *vma = walk->private;
unsigned long index;
+ swp_entry_t entry;
+ struct page *page;
+ pmd_t pmdval;
+
+ pmdval = *pmd;
+ if (thp_swap_supported() && is_swap_pmd(pmdval) &&
+ !is_pmd_migration_entry(pmdval)) {
+ entry = pmd_to_swp_entry(pmdval);
+ if (!transparent_hugepage_swapin_enabled(vma)) {
+ if (!split_swap_cluster(entry, false))
+ split_huge_swap_pmd(vma, pmd, start, pmdval);
+ } else {
+ page = read_swap_cache_async(entry,
+ GFP_HIGHUSER_MOVABLE,
+ vma, start, false);
+ /* The swap cluster has been split under us */
+ if (page) {
+ if (!PageTransHuge(page))
+ split_huge_swap_pmd(vma, pmd, start,
+ pmdval);
+ put_page(page);
+ }
+ }
+ }

if (pmd_none_or_trans_huge_or_clear_bad(pmd))
return 0;

for (index = start; index != end; index += PAGE_SIZE) {
pte_t pte;
- swp_entry_t entry;
- struct page *page;
spinlock_t *ptl;

orig_pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
--
2.16.1


2018-05-23 08:31:31

by Huang, Ying

Subject: [PATCH -mm -V3 14/21] mm, cgroup, THP, swap: Support to move swap account for PMD swap mapping

From: Huang Ying <[email protected]>

Previously, the huge swap cluster was split after the THP was swapped
out. Now, to support swapping the THP in as a whole, the huge swap
cluster will not be split after the THP is reclaimed. So in memcg, we
need to move the swap accounting for PMD swap mappings in the
process's page table.

When the page table is scanned while moving the memcg charge, PMD swap
mappings are identified, and mem_cgroup_move_swap_account() and its
callees are revised to move the accounting for the whole huge swap
cluster. If the swap cluster mapped by the PMD has already been split,
the PMD swap mapping will be split and processing falls back to PTEs,
because the swap slots of the swap cluster may have been swapped in or
moved to another cgroup already.

Because there is no way to prevent a huge swap cluster from being
split except when it has the SWAP_HAS_CACHE flag set, it is possible
for the huge swap cluster to be split, and the charge for the swap
slots inside to be changed, between the point where we check the PMD
swap mapping and the huge swap cluster and the point where we commit
the charge moving. But the race window is so small that we just
ignore the race.

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
include/linux/huge_mm.h | 9 +++
include/linux/swap.h | 6 ++
include/linux/swap_cgroup.h | 3 +-
mm/huge_memory.c | 12 +---
mm/memcontrol.c | 138 +++++++++++++++++++++++++++++++++++---------
mm/swap_cgroup.c | 45 ++++++++++++---
mm/swapfile.c | 12 ++++
7 files changed, 180 insertions(+), 45 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 4e299e327720..5001c28b3d18 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -405,6 +405,9 @@ static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

#ifdef CONFIG_THP_SWAP
+extern void __split_huge_swap_pmd(struct vm_area_struct *vma,
+ unsigned long haddr,
+ pmd_t *pmd);
extern int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
unsigned long address, pmd_t orig_pmd);
extern int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd);
@@ -432,6 +435,12 @@ static inline bool transparent_hugepage_swapin_enabled(
return false;
}
#else /* CONFIG_THP_SWAP */
+static inline void __split_huge_swap_pmd(struct vm_area_struct *vma,
+ unsigned long haddr,
+ pmd_t *pmd)
+{
+}
+
static inline int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
unsigned long address, pmd_t orig_pmd)
{
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 5832a750baed..88677acdcff6 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -628,6 +628,7 @@ static inline swp_entry_t get_swap_page(struct page *page)
#ifdef CONFIG_THP_SWAP
extern int split_swap_cluster(swp_entry_t entry, bool force);
extern int split_swap_cluster_map(swp_entry_t entry);
+extern bool in_huge_swap_cluster(swp_entry_t entry);
#else
static inline int split_swap_cluster(swp_entry_t entry, bool force)
{
@@ -638,6 +639,11 @@ static inline int split_swap_cluster_map(swp_entry_t entry)
{
return 0;
}
+
+static inline bool in_huge_swap_cluster(swp_entry_t entry)
+{
+ return false;
+}
#endif

#ifdef CONFIG_MEMCG
diff --git a/include/linux/swap_cgroup.h b/include/linux/swap_cgroup.h
index a12dd1c3966c..c40fb52b0563 100644
--- a/include/linux/swap_cgroup.h
+++ b/include/linux/swap_cgroup.h
@@ -7,7 +7,8 @@
#ifdef CONFIG_MEMCG_SWAP

extern unsigned short swap_cgroup_cmpxchg(swp_entry_t ent,
- unsigned short old, unsigned short new);
+ unsigned short old, unsigned short new,
+ unsigned int nr_ents);
extern unsigned short swap_cgroup_record(swp_entry_t ent, unsigned short id,
unsigned int nr_ents);
extern unsigned short lookup_swap_cgroup_id(swp_entry_t ent);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a8af2ddc578a..c4eb7737b313 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1630,9 +1630,9 @@ int do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
}

#ifdef CONFIG_THP_SWAP
-static void __split_huge_swap_pmd(struct vm_area_struct *vma,
- unsigned long haddr,
- pmd_t *pmd)
+void __split_huge_swap_pmd(struct vm_area_struct *vma,
+ unsigned long haddr,
+ pmd_t *pmd)
{
struct mm_struct *mm = vma->vm_mm;
pgtable_t pgtable;
@@ -1834,12 +1834,6 @@ int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
count_vm_event(THP_SWPIN_FALLBACK);
goto fallback;
}
-#else
-static inline void __split_huge_swap_pmd(struct vm_area_struct *vma,
- unsigned long haddr,
- pmd_t *pmd)
-{
-}
#endif

static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index bdb8028c806c..45095f71201d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2413,9 +2413,10 @@ void mem_cgroup_split_huge_fixup(struct page *head)
#ifdef CONFIG_MEMCG_SWAP
/**
* mem_cgroup_move_swap_account - move swap charge and swap_cgroup's record.
- * @entry: swap entry to be moved
+ * @entry: the first swap entry to be moved
* @from: mem_cgroup which the entry is moved from
* @to: mem_cgroup which the entry is moved to
+ * @nr_ents: number of swap entries
*
* It succeeds only when the swap_cgroup's record for this entry is the same
* as the mem_cgroup's id of @from.
@@ -2426,23 +2427,27 @@ void mem_cgroup_split_huge_fixup(struct page *head)
* both res and memsw, and called css_get().
*/
static int mem_cgroup_move_swap_account(swp_entry_t entry,
- struct mem_cgroup *from, struct mem_cgroup *to)
+ struct mem_cgroup *from,
+ struct mem_cgroup *to,
+ unsigned int nr_ents)
{
unsigned short old_id, new_id;

old_id = mem_cgroup_id(from);
new_id = mem_cgroup_id(to);

- if (swap_cgroup_cmpxchg(entry, old_id, new_id) == old_id) {
- mod_memcg_state(from, MEMCG_SWAP, -1);
- mod_memcg_state(to, MEMCG_SWAP, 1);
+ if (swap_cgroup_cmpxchg(entry, old_id, new_id, nr_ents) == old_id) {
+ mod_memcg_state(from, MEMCG_SWAP, -nr_ents);
+ mod_memcg_state(to, MEMCG_SWAP, nr_ents);
return 0;
}
return -EINVAL;
}
#else
static inline int mem_cgroup_move_swap_account(swp_entry_t entry,
- struct mem_cgroup *from, struct mem_cgroup *to)
+ struct mem_cgroup *from,
+ struct mem_cgroup *to,
+ unsigned int nr_ents)
{
return -EINVAL;
}
@@ -4601,6 +4606,7 @@ enum mc_target_type {
MC_TARGET_PAGE,
MC_TARGET_SWAP,
MC_TARGET_DEVICE,
+ MC_TARGET_FALLBACK,
};

static struct page *mc_handle_present_pte(struct vm_area_struct *vma,
@@ -4667,6 +4673,34 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
}
#endif

+#ifdef CONFIG_THP_SWAP
+static struct page *mc_handle_swap_pmd(struct vm_area_struct *vma,
+ pmd_t pmd, swp_entry_t *entry)
+{
+ struct page *page = NULL;
+ swp_entry_t ent = pmd_to_swp_entry(pmd);
+
+ if (!(mc.flags & MOVE_ANON) || non_swap_entry(ent))
+ return NULL;
+
+ /*
+ * Because lookup_swap_cache() updates some statistics counter,
+ * we call find_get_page() with swapper_space directly.
+ */
+ page = find_get_page(swap_address_space(ent), swp_offset(ent));
+ if (do_memsw_account())
+ entry->val = ent.val;
+
+ return page;
+}
+#else
+static struct page *mc_handle_swap_pmd(struct vm_area_struct *vma,
+ pmd_t pmd, swp_entry_t *entry)
+{
+ return NULL;
+}
+#endif
+
static struct page *mc_handle_file_pte(struct vm_area_struct *vma,
unsigned long addr, pte_t ptent, swp_entry_t *entry)
{
@@ -4855,7 +4889,9 @@ static enum mc_target_type get_mctgt_type(struct vm_area_struct *vma,
* There is a swap entry and a page doesn't exist or isn't charged.
* But we cannot move a tail-page in a THP.
*/
- if (ent.val && !ret && (!page || !PageTransCompound(page)) &&
+ if (ent.val && !ret &&
+ ((page && !PageTransCompound(page)) ||
+ (!page && !in_huge_swap_cluster(ent))) &&
mem_cgroup_id(mc.from) == lookup_swap_cgroup_id(ent)) {
ret = MC_TARGET_SWAP;
if (target)
@@ -4866,37 +4902,65 @@ static enum mc_target_type get_mctgt_type(struct vm_area_struct *vma,

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/*
- * We don't consider PMD mapped swapping or file mapped pages because THP does
- * not support them for now.
- * Caller should make sure that pmd_trans_huge(pmd) is true.
+ * We don't consider file mapped pages because THP does not support
+ * them for now.
*/
static enum mc_target_type get_mctgt_type_thp(struct vm_area_struct *vma,
- unsigned long addr, pmd_t pmd, union mc_target *target)
+ unsigned long addr, pmd_t *pmdp, union mc_target *target)
{
+ pmd_t pmd = *pmdp;
struct page *page = NULL;
enum mc_target_type ret = MC_TARGET_NONE;
+ swp_entry_t ent = { .val = 0 };

if (unlikely(is_swap_pmd(pmd))) {
- VM_BUG_ON(thp_migration_supported() &&
- !is_pmd_migration_entry(pmd));
- return ret;
+ if (is_pmd_migration_entry(pmd)) {
+ VM_BUG_ON(!thp_migration_supported());
+ return ret;
+ }
+ page = mc_handle_swap_pmd(vma, pmd, &ent);
+ /* The swap cluster has been split under us */
+ if ((page && !PageTransHuge(page)) ||
+ (!page && ent.val && !in_huge_swap_cluster(ent))) {
+ __split_huge_swap_pmd(vma, addr, pmdp);
+ ret = MC_TARGET_FALLBACK;
+ goto out;
+ }
+ } else {
+ page = pmd_page(pmd);
+ get_page(page);
}
- page = pmd_page(pmd);
- VM_BUG_ON_PAGE(!page || !PageHead(page), page);
+ VM_BUG_ON_PAGE(page && !PageHead(page), page);
if (!(mc.flags & MOVE_ANON))
- return ret;
- if (page->mem_cgroup == mc.from) {
+ goto out;
+ if (!page && !ent.val)
+ goto out;
+ if (page && page->mem_cgroup == mc.from) {
ret = MC_TARGET_PAGE;
if (target) {
get_page(page);
target->page = page;
}
}
+ if (ent.val && !ret && !page &&
+ mem_cgroup_id(mc.from) == lookup_swap_cgroup_id(ent)) {
+ /*
+ * It is possible for the huge swap cluster to be
+ * split by others and the charge is changed, but the
+ * race window is small.
+ */
+ ret = MC_TARGET_SWAP;
+ if (target)
+ target->ent = ent;
+ }
+out:
+ if (page)
+ put_page(page);
return ret;
}
#else
static inline enum mc_target_type get_mctgt_type_thp(struct vm_area_struct *vma,
- unsigned long addr, pmd_t pmd, union mc_target *target)
+ unsigned long addr, pmd_t *pmdp, union mc_target *target)
{
return MC_TARGET_NONE;
}
@@ -4909,6 +4973,7 @@ static int mem_cgroup_count_precharge_pte_range(pmd_t *pmd,
struct vm_area_struct *vma = walk->vma;
pte_t *pte;
spinlock_t *ptl;
+ int ret;

ptl = pmd_trans_huge_lock(pmd, vma);
if (ptl) {
@@ -4917,12 +4982,16 @@ static int mem_cgroup_count_precharge_pte_range(pmd_t *pmd,
* support transparent huge page with MEMORY_DEVICE_PUBLIC or
* MEMORY_DEVICE_PRIVATE but this might change.
*/
- if (get_mctgt_type_thp(vma, addr, *pmd, NULL) == MC_TARGET_PAGE)
- mc.precharge += HPAGE_PMD_NR;
+ ret = get_mctgt_type_thp(vma, addr, pmd, NULL);
spin_unlock(ptl);
+ if (ret == MC_TARGET_FALLBACK)
+ goto fallback;
+ if (ret)
+ mc.precharge += HPAGE_PMD_NR;
return 0;
}

+fallback:
if (pmd_trans_unstable(pmd))
return 0;
pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
@@ -5114,6 +5183,7 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
enum mc_target_type target_type;
union mc_target target;
struct page *page;
+ swp_entry_t ent;

ptl = pmd_trans_huge_lock(pmd, vma);
if (ptl) {
@@ -5121,8 +5191,9 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
spin_unlock(ptl);
return 0;
}
- target_type = get_mctgt_type_thp(vma, addr, *pmd, &target);
- if (target_type == MC_TARGET_PAGE) {
+ target_type = get_mctgt_type_thp(vma, addr, pmd, &target);
+ switch (target_type) {
+ case MC_TARGET_PAGE:
page = target.page;
if (!isolate_lru_page(page)) {
if (!mem_cgroup_move_account(page, true,
@@ -5133,7 +5204,8 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
putback_lru_page(page);
}
put_page(page);
- } else if (target_type == MC_TARGET_DEVICE) {
+ break;
+ case MC_TARGET_DEVICE:
page = target.page;
if (!mem_cgroup_move_account(page, true,
mc.from, mc.to)) {
@@ -5141,9 +5213,21 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
mc.moved_charge += HPAGE_PMD_NR;
}
put_page(page);
+ break;
+ case MC_TARGET_SWAP:
+ ent = target.ent;
+ if (!mem_cgroup_move_swap_account(ent, mc.from, mc.to,
+ HPAGE_PMD_NR)) {
+ mc.precharge -= HPAGE_PMD_NR;
+ mc.moved_swap += HPAGE_PMD_NR;
+ }
+ break;
+ default:
+ break;
}
spin_unlock(ptl);
- return 0;
+ if (target_type != MC_TARGET_FALLBACK)
+ return 0;
}

if (pmd_trans_unstable(pmd))
@@ -5153,7 +5237,6 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
for (; addr != end; addr += PAGE_SIZE) {
pte_t ptent = *(pte++);
bool device = false;
- swp_entry_t ent;

if (!mc.precharge)
break;
@@ -5187,7 +5270,8 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
break;
case MC_TARGET_SWAP:
ent = target.ent;
- if (!mem_cgroup_move_swap_account(ent, mc.from, mc.to)) {
+ if (!mem_cgroup_move_swap_account(ent, mc.from,
+ mc.to, 1)) {
mc.precharge--;
/* we fixup refcnts and charges later. */
mc.moved_swap++;
diff --git a/mm/swap_cgroup.c b/mm/swap_cgroup.c
index 45affaef3bc6..ccc08e88962a 100644
--- a/mm/swap_cgroup.c
+++ b/mm/swap_cgroup.c
@@ -87,29 +87,58 @@ static struct swap_cgroup *lookup_swap_cgroup(swp_entry_t ent,

/**
* swap_cgroup_cmpxchg - cmpxchg mem_cgroup's id for this swp_entry.
- * @ent: swap entry to be cmpxchged
+ * @ent: the first swap entry to be cmpxchged
* @old: old id
* @new: new id
+ * @nr_ents: number of swap entries
*
* Returns old id at success, 0 at failure.
* (There is no mem_cgroup using 0 as its id)
*/
unsigned short swap_cgroup_cmpxchg(swp_entry_t ent,
- unsigned short old, unsigned short new)
+ unsigned short old, unsigned short new,
+ unsigned int nr_ents)
{
struct swap_cgroup_ctrl *ctrl;
- struct swap_cgroup *sc;
+ struct swap_cgroup *sc_start, *sc;
unsigned long flags;
unsigned short retval;
+ pgoff_t offset_start = swp_offset(ent), offset;
+ pgoff_t end = offset_start + nr_ents;

- sc = lookup_swap_cgroup(ent, &ctrl);
+ sc_start = lookup_swap_cgroup(ent, &ctrl);

spin_lock_irqsave(&ctrl->lock, flags);
- retval = sc->id;
- if (retval == old)
+ sc = sc_start;
+ offset = offset_start;
+ for (;;) {
+ if (sc->id != old) {
+ retval = 0;
+ goto out;
+ }
+ offset++;
+ if (offset == end)
+ break;
+ if (offset % SC_PER_PAGE)
+ sc++;
+ else
+ sc = __lookup_swap_cgroup(ctrl, offset);
+ }
+
+ sc = sc_start;
+ offset = offset_start;
+ for (;;) {
sc->id = new;
- else
- retval = 0;
+ offset++;
+ if (offset == end)
+ break;
+ if (offset % SC_PER_PAGE)
+ sc++;
+ else
+ sc = __lookup_swap_cgroup(ctrl, offset);
+ }
+ retval = old;
+out:
spin_unlock_irqrestore(&ctrl->lock, flags);
return retval;
}
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 77b2ddd37d9b..ae8df3291f07 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1687,6 +1687,18 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,

return map_swapcount;
}
+
+bool in_huge_swap_cluster(swp_entry_t entry)
+{
+ struct swap_info_struct *si;
+ struct swap_cluster_info *ci;
+
+ si = _swap_info_get(entry);
+ if (!si || !si->cluster_info)
+ return false;
+ ci = si->cluster_info + swp_offset(entry) / SWAPFILE_CLUSTER;
+ return cluster_is_huge(ci);
+}
#else
#define swap_page_trans_huge_swapped(si, entry) swap_swapcount(si, entry, NULL)
#define page_swapped(page) (page_swapcount(page) != 0)
--
2.16.1


2018-05-23 08:31:59

by Huang, Ying

[permalink] [raw]
Subject: [PATCH -mm -V3 15/21] mm, THP, swap: Support to copy PMD swap mapping when fork()

From: Huang Ying <[email protected]>

During fork, the page table needs to be copied from parent to child.
A PMD swap mapping needs to be copied too, and the swap reference
count needs to be increased.

When the huge swap cluster has been split already, we need to split
the PMD swap mapping and fall back to PTE copying.

When swap count continuation fails to allocate a page with GFP_ATOMIC,
we need to drop the page table locks and try again with GFP_KERNEL.
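
As a rough sketch (not the exact code; see the copy_huge_pmd() hunk
below), the new error handling for copying a PMD swap mapping is:

    /* Sketch only: simplified from the copy_huge_pmd() hunk below. */
    ret = swap_duplicate(&entry, true);  /* duplicate the PMD swap mapping */
    if (!ret) {
            /* install the copied PMD swap mapping in the child */
    } else if (ret == -ENOTDIR) {
            /* the huge swap cluster has been split: split the PMD
             * swap mapping and fall back to copying PTEs
             */
    } else if (ret == -ENOMEM) {
            /* swap count continuation needs memory: drop both page
             * table locks, call add_swap_count_continuation(entry,
             * GFP_KERNEL), then retry from the beginning
             */
    }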

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
mm/huge_memory.c | 72 ++++++++++++++++++++++++++++++++++++++++++++------------
1 file changed, 57 insertions(+), 15 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c4eb7737b313..01fdd59fe6d4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -941,6 +941,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
if (unlikely(!pgtable))
goto out;

+retry:
dst_ptl = pmd_lock(dst_mm, dst_pmd);
src_ptl = pmd_lockptr(src_mm, src_pmd);
spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
@@ -948,26 +949,67 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
ret = -EAGAIN;
pmd = *src_pmd;

-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
if (unlikely(is_swap_pmd(pmd))) {
swp_entry_t entry = pmd_to_swp_entry(pmd);

- VM_BUG_ON(!is_pmd_migration_entry(pmd));
- if (is_write_migration_entry(entry)) {
- make_migration_entry_read(&entry);
- pmd = swp_entry_to_pmd(entry);
- if (pmd_swp_soft_dirty(*src_pmd))
- pmd = pmd_swp_mksoft_dirty(pmd);
- set_pmd_at(src_mm, addr, src_pmd, pmd);
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+ if (is_migration_entry(entry)) {
+ if (is_write_migration_entry(entry)) {
+ make_migration_entry_read(&entry);
+ pmd = swp_entry_to_pmd(entry);
+ if (pmd_swp_soft_dirty(*src_pmd))
+ pmd = pmd_swp_mksoft_dirty(pmd);
+ set_pmd_at(src_mm, addr, src_pmd, pmd);
+ }
+ add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ mm_inc_nr_ptes(dst_mm);
+ pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
+ set_pmd_at(dst_mm, addr, dst_pmd, pmd);
+ ret = 0;
+ goto out_unlock;
}
- add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
- mm_inc_nr_ptes(dst_mm);
- pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
- set_pmd_at(dst_mm, addr, dst_pmd, pmd);
- ret = 0;
- goto out_unlock;
- }
#endif
+ if (thp_swap_supported() && !non_swap_entry(entry)) {
+ ret = swap_duplicate(&entry, true);
+ if (!ret) {
+ add_mm_counter(dst_mm, MM_SWAPENTS,
+ HPAGE_PMD_NR);
+ mm_inc_nr_ptes(dst_mm);
+ pgtable_trans_huge_deposit(dst_mm, dst_pmd,
+ pgtable);
+ set_pmd_at(dst_mm, addr, dst_pmd, pmd);
+ /* make sure dst_mm is on swapoff's mmlist. */
+ if (unlikely(list_empty(&dst_mm->mmlist))) {
+ spin_lock(&mmlist_lock);
+ if (list_empty(&dst_mm->mmlist))
+ list_add(&dst_mm->mmlist,
+ &src_mm->mmlist);
+ spin_unlock(&mmlist_lock);
+ }
+ } else if (ret == -ENOTDIR) {
+ /*
+ * The swap cluster has been split, split the
+ * pmd map now
+ */
+ __split_huge_swap_pmd(vma, addr, src_pmd);
+ pte_free(dst_mm, pgtable);
+ } else if (ret == -ENOMEM) {
+ spin_unlock(src_ptl);
+ spin_unlock(dst_ptl);
+ ret = add_swap_count_continuation(entry,
+ GFP_KERNEL);
+ if (ret < 0) {
+ ret = -ENOMEM;
+ pte_free(dst_mm, pgtable);
+ goto out;
+ }
+ goto retry;
+ } else
+ VM_BUG_ON(1);
+ goto out_unlock;
+ }
+ VM_BUG_ON(1);
+ }

if (unlikely(!pmd_trans_huge(pmd))) {
pte_free(dst_mm, pgtable);
--
2.16.1


2018-05-23 08:32:19

by Huang, Ying

[permalink] [raw]
Subject: [PATCH -mm -V3 12/21] mm, THP, swap: Support PMD swap mapping in swapoff

From: Huang Ying <[email protected]>

During swapoff, for a huge swap cluster, we need to allocate a THP,
read the cluster's contents into the THP, and unuse the PMD and PTE
swap mappings that refer to it. If allocating a THP fails, the huge
swap cluster will be split.

During unuse, if it is found that the swap cluster mapped by a PMD
swap mapping has been split already, we will split the PMD swap
mapping and unuse the PTEs.
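
The per-PMD decision in unuse_pmd_range() can be sketched as follows
(simplified from the hunk below; error handling omitted):

    /* Sketch only: unusing one PMD swap mapping during swapoff. */
    if (is_swap_pmd(*pmd) && pmd_same_as_swp(*pmd, swp_entry_to_pmd(entry))) {
            if (!PageTransCompound(page)) {
                    /* huge cluster already split: split the PMD swap
                     * mapping, then unuse the PTEs one by one
                     */
                    split_huge_swap_pmd(vma, pmd, addr, *pmd);
                    unuse_pte_range(vma, pmd, addr, next, entry, page);
            } else {
                    /* map the whole THP with a huge PMD and free the
                     * huge swap cluster
                     */
                    unuse_pmd(vma, pmd, addr, entry, page);
            }
    }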

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
include/asm-generic/pgtable.h | 15 ++------
include/linux/huge_mm.h | 8 ++++
mm/huge_memory.c | 4 +-
mm/swapfile.c | 86 ++++++++++++++++++++++++++++++++++++++++++-
4 files changed, 98 insertions(+), 15 deletions(-)

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index bb8354981a36..caa381962cd2 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -931,22 +931,13 @@ static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
barrier();
#endif
/*
- * !pmd_present() checks for pmd migration entries
- *
- * The complete check uses is_pmd_migration_entry() in linux/swapops.h
- * But using that requires moving current function and pmd_trans_unstable()
- * to linux/swapops.h to resovle dependency, which is too much code move.
- *
- * !pmd_present() is equivalent to is_pmd_migration_entry() currently,
- * because !pmd_present() pages can only be under migration not swapped
- * out.
- *
- * pmd_none() is preseved for future condition checks on pmd migration
+	 * pmd_none() is preserved for future condition checks on pmd swap
* entries and not confusing with this function name, although it is
* redundant with !pmd_present().
*/
if (pmd_none(pmdval) || pmd_trans_huge(pmdval) ||
- (IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION) && !pmd_present(pmdval)))
+ ((IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION) ||
+ IS_ENABLED(CONFIG_THP_SWAP)) && !pmd_present(pmdval)))
return 1;
if (unlikely(pmd_bad(pmdval))) {
pmd_clear_bad(pmd);
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 1cfd43047f0d..4e299e327720 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -405,6 +405,8 @@ static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

#ifdef CONFIG_THP_SWAP
+extern int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ unsigned long address, pmd_t orig_pmd);
extern int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd);

static inline bool transparent_hugepage_swapin_enabled(
@@ -430,6 +432,12 @@ static inline bool transparent_hugepage_swapin_enabled(
return false;
}
#else /* CONFIG_THP_SWAP */
+static inline int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ unsigned long address, pmd_t orig_pmd)
+{
+ return 0;
+}
+
static inline int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
{
return 0;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3fd129a21f2e..668d77cec14d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1663,8 +1663,8 @@ static void __split_huge_swap_pmd(struct vm_area_struct *vma,
pmd_populate(mm, pmd, pgtable);
}

-static int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
- unsigned long address, pmd_t orig_pmd)
+int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ unsigned long address, pmd_t orig_pmd)
{
struct mm_struct *mm = vma->vm_mm;
spinlock_t *ptl;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 1a62fbc13381..77b2ddd37d9b 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1937,6 +1937,11 @@ static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
return pte_same(pte_swp_clear_soft_dirty(pte), swp_pte);
}

+static inline int pmd_same_as_swp(pmd_t pmd, pmd_t swp_pmd)
+{
+ return pmd_same(pmd_swp_clear_soft_dirty(pmd), swp_pmd);
+}
+
/*
* No need to decide whether this PTE shares the swap entry with others,
* just let do_wp_page work it out if a write is requested later - to
@@ -1998,6 +2003,57 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
return ret;
}

+#ifdef CONFIG_THP_SWAP
+static int unuse_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ unsigned long addr, swp_entry_t entry, struct page *page)
+{
+ struct mem_cgroup *memcg;
+ struct swap_info_struct *si;
+ spinlock_t *ptl;
+ int ret = 1;
+
+ if (mem_cgroup_try_charge(page, vma->vm_mm, GFP_KERNEL,
+ &memcg, true)) {
+ ret = -ENOMEM;
+ goto out_nolock;
+ }
+
+ ptl = pmd_lock(vma->vm_mm, pmd);
+ if (unlikely(!pmd_same_as_swp(*pmd, swp_entry_to_pmd(entry)))) {
+ mem_cgroup_cancel_charge(page, memcg, true);
+ ret = 0;
+ goto out;
+ }
+
+ add_mm_counter(vma->vm_mm, MM_SWAPENTS, -HPAGE_PMD_NR);
+ add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ get_page(page);
+ set_pmd_at(vma->vm_mm, addr, pmd,
+ pmd_mkold(mk_huge_pmd(page, vma->vm_page_prot)));
+ page_add_anon_rmap(page, vma, addr, true);
+ mem_cgroup_commit_charge(page, memcg, true, true);
+ si = _swap_info_get(entry);
+ if (si)
+ swap_free_cluster(si, entry);
+ /*
+ * Move the page to the active list so it is not
+ * immediately swapped out again after swapon.
+ */
+ activate_page(page);
+out:
+ spin_unlock(ptl);
+out_nolock:
+ return ret;
+}
+#else
+static inline int unuse_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ unsigned long addr, swp_entry_t entry,
+ struct page *page)
+{
+ return 0;
+}
+#endif
+
static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
unsigned long addr, unsigned long end,
swp_entry_t entry, struct page *page)
@@ -2038,7 +2094,7 @@ static inline int unuse_pmd_range(struct vm_area_struct *vma, pud_t *pud,
unsigned long addr, unsigned long end,
swp_entry_t entry, struct page *page)
{
- pmd_t *pmd;
+ pmd_t swp_pmd = swp_entry_to_pmd(entry), *pmd, orig_pmd;
unsigned long next;
int ret;

@@ -2046,6 +2102,24 @@ static inline int unuse_pmd_range(struct vm_area_struct *vma, pud_t *pud,
do {
cond_resched();
next = pmd_addr_end(addr, end);
+ orig_pmd = *pmd;
+ if (thp_swap_supported() && is_swap_pmd(orig_pmd)) {
+ if (likely(!pmd_same_as_swp(orig_pmd, swp_pmd)))
+ continue;
+ /* Huge cluster has been split already */
+ if (!PageTransCompound(page)) {
+ ret = split_huge_swap_pmd(vma, pmd,
+ addr, orig_pmd);
+ if (ret)
+ return ret;
+ ret = unuse_pte_range(vma, pmd, addr,
+ next, entry, page);
+ } else
+ ret = unuse_pmd(vma, pmd, addr, entry, page);
+ if (ret)
+ return ret;
+ continue;
+ }
if (pmd_none_or_trans_huge_or_clear_bad(pmd))
continue;
ret = unuse_pte_range(vma, pmd, addr, next, entry, page);
@@ -2210,6 +2284,7 @@ int try_to_unuse(unsigned int type, bool frontswap,
* to prevent compiler doing
* something odd.
*/
+ struct swap_cluster_info *ci = NULL;
unsigned char swcount;
struct page *page;
swp_entry_t entry;
@@ -2239,6 +2314,7 @@ int try_to_unuse(unsigned int type, bool frontswap,
* there are races when an instance of an entry might be missed.
*/
while ((i = find_next_to_unuse(si, i, frontswap)) != 0) {
+retry:
if (signal_pending(current)) {
retval = -EINTR;
break;
@@ -2250,6 +2326,8 @@ int try_to_unuse(unsigned int type, bool frontswap,
* page and read the swap into it.
*/
swap_map = &si->swap_map[i];
+ if (si->cluster_info)
+ ci = si->cluster_info + i / SWAPFILE_CLUSTER;
entry = swp_entry(type, i);
page = read_swap_cache_async(entry,
GFP_HIGHUSER_MOVABLE, NULL, 0, false);
@@ -2270,6 +2348,12 @@ int try_to_unuse(unsigned int type, bool frontswap,
*/
if (!swcount || swcount == SWAP_MAP_BAD)
continue;
+ /* Split huge cluster if failed to allocate huge page */
+ if (thp_swap_supported() && cluster_is_huge(ci)) {
+ retval = split_swap_cluster(entry, false);
+ if (!retval || retval == -EEXIST)
+ goto retry;
+ }
retval = -ENOMEM;
break;
}
--
2.16.1


2018-05-23 08:32:33

by Huang, Ying

[permalink] [raw]
Subject: [PATCH -mm -V3 09/21] mm, THP, swap: Swapin a THP as a whole

From: Huang Ying <[email protected]>

With this patch, when the page fault handler finds a PMD swap mapping,
it will swap in a THP as a whole. This avoids the overhead of
splitting/collapsing before/after the THP swapping, and improves swap
performance greatly because of the reduced page fault count, etc.

do_huge_pmd_swap_page() is added in the patch to implement this. It
is similar to do_swap_page() for normal page swapin.

If allocating a THP fails, the huge swap cluster and the PMD swap
mapping will be split to fall back to normal page swapin.

If the huge swap cluster has been split already, the PMD swap mapping
will be split to fall back to normal page swapin.
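
The dispatch added to the page fault handler can be sketched as
follows (simplified from the mm/memory.c hunk below):

    /* Sketch only: handling a swap PMD in __handle_mm_fault(). */
    if (unlikely(is_swap_pmd(orig_pmd))) {
            if (thp_migration_supported() &&
                is_pmd_migration_entry(orig_pmd)) {
                    pmd_migration_entry_wait(mm, vmf.pmd);
                    return 0;
            }
            /* PMD swap mapping: try to swap in the THP as a whole */
            ret = do_huge_pmd_swap_page(&vmf, orig_pmd);
            if (!(ret & VM_FAULT_FALLBACK))
                    return ret;
            /* VM_FAULT_FALLBACK: the PMD swap mapping has been split,
             * fall through to normal PTE level swapin
             */
    }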

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
include/linux/huge_mm.h | 9 +++
include/linux/swap.h | 9 +++
mm/huge_memory.c | 170 ++++++++++++++++++++++++++++++++++++++++++++++++
mm/memory.c | 16 +++--
4 files changed, 198 insertions(+), 6 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 0dbfbe34b01a..f5348d072351 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -402,4 +402,13 @@ static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

+#ifdef CONFIG_THP_SWAP
+extern int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd);
+#else /* CONFIG_THP_SWAP */
+static inline int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
+{
+ return 0;
+}
+#endif /* CONFIG_THP_SWAP */
+
#endif /* _LINUX_HUGE_MM_H */
diff --git a/include/linux/swap.h b/include/linux/swap.h
index d2e017dd7bbd..5832a750baed 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -560,6 +560,15 @@ static inline struct page *lookup_swap_cache(swp_entry_t swp,
return NULL;
}

+static inline struct page *read_swap_cache_async(swp_entry_t swp,
+						 gfp_t gfp_mask,
+ struct vm_area_struct *vma,
+ unsigned long addr,
+ bool do_poll)
+{
+ return NULL;
+}
+
static inline int add_to_swap(struct page *page)
{
return 0;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3975d824b4ed..8303fa021c42 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -33,6 +33,8 @@
#include <linux/page_idle.h>
#include <linux/shmem_fs.h>
#include <linux/oom.h>
+#include <linux/delayacct.h>
+#include <linux/swap.h>

#include <asm/tlb.h>
#include <asm/pgalloc.h>
@@ -1609,6 +1611,174 @@ static void __split_huge_swap_pmd(struct vm_area_struct *vma,
smp_wmb(); /* make pte visible before pmd */
pmd_populate(mm, pmd, pgtable);
}
+
+static int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ unsigned long address, pmd_t orig_pmd)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ spinlock_t *ptl;
+ int ret = 0;
+
+ ptl = pmd_lock(mm, pmd);
+ if (pmd_same(*pmd, orig_pmd))
+ __split_huge_swap_pmd(vma, address & HPAGE_PMD_MASK, pmd);
+ else
+ ret = -ENOENT;
+ spin_unlock(ptl);
+
+ return ret;
+}
+
+int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
+{
+ struct page *page;
+ struct mem_cgroup *memcg;
+ struct vm_area_struct *vma = vmf->vma;
+ unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
+ swp_entry_t entry;
+ pmd_t pmd;
+ int i, locked, exclusive = 0, ret = 0;
+
+ entry = pmd_to_swp_entry(orig_pmd);
+ VM_BUG_ON(non_swap_entry(entry));
+ delayacct_set_flag(DELAYACCT_PF_SWAPIN);
+retry:
+ page = lookup_swap_cache(entry, NULL, vmf->address);
+ if (!page) {
+ page = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE, vma,
+ haddr, false);
+ if (!page) {
+ /*
+ * Back out if somebody else faulted in this pmd
+ * while we released the pmd lock.
+ */
+ if (likely(pmd_same(*vmf->pmd, orig_pmd))) {
+ ret = split_swap_cluster(entry, false);
+ /*
+ * Retry if somebody else swap in the swap
+ * entry
+ */
+ if (ret == -EEXIST) {
+ ret = 0;
+ goto retry;
+ /* swapoff occurs under us */
+ } else if (ret == -EINVAL)
+ ret = 0;
+ else
+ goto fallback;
+ }
+ delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+ goto out;
+ }
+
+ /* Had to read the page from swap area: Major fault */
+ ret = VM_FAULT_MAJOR;
+ count_vm_event(PGMAJFAULT);
+ count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
+ } else if (!PageTransCompound(page))
+ goto fallback;
+
+ locked = lock_page_or_retry(page, vma->vm_mm, vmf->flags);
+
+ delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+ if (!locked) {
+ ret |= VM_FAULT_RETRY;
+ goto out_release;
+ }
+
+ /*
+ * Make sure try_to_free_swap or reuse_swap_page or swapoff did not
+ * release the swapcache from under us. The page pin, and pmd_same
+ * test below, are not enough to exclude that. Even if it is still
+ * swapcache, we need to check that the page's swap has not changed.
+ */
+ if (unlikely(!PageSwapCache(page) || page_private(page) != entry.val))
+ goto out_page;
+
+ if (mem_cgroup_try_charge(page, vma->vm_mm, GFP_KERNEL,
+ &memcg, true)) {
+ ret = VM_FAULT_OOM;
+ goto out_page;
+ }
+
+ /*
+ * Back out if somebody else already faulted in this pmd.
+ */
+ vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd);
+ spin_lock(vmf->ptl);
+ if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
+ goto out_nomap;
+
+ if (unlikely(!PageUptodate(page))) {
+ ret = VM_FAULT_SIGBUS;
+ goto out_nomap;
+ }
+
+ /*
+ * The page isn't present yet, go ahead with the fault.
+ *
+ * Be careful about the sequence of operations here.
+ * To get its accounting right, reuse_swap_page() must be called
+ * while the page is counted on swap but not yet in mapcount i.e.
+ * before page_add_anon_rmap() and swap_free(); try_to_free_swap()
+ * must be called after the swap_free(), or it will never succeed.
+ */
+
+ add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ add_mm_counter(vma->vm_mm, MM_SWAPENTS, -HPAGE_PMD_NR);
+ pmd = mk_huge_pmd(page, vma->vm_page_prot);
+ if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
+ pmd = maybe_pmd_mkwrite(pmd_mkdirty(pmd), vma);
+ vmf->flags &= ~FAULT_FLAG_WRITE;
+ ret |= VM_FAULT_WRITE;
+ exclusive = RMAP_EXCLUSIVE;
+ }
+ for (i = 0; i < HPAGE_PMD_NR; i++)
+ flush_icache_page(vma, page + i);
+ if (pmd_swp_soft_dirty(orig_pmd))
+ pmd = pmd_mksoft_dirty(pmd);
+ do_page_add_anon_rmap(page, vma, haddr,
+ exclusive | RMAP_COMPOUND);
+ mem_cgroup_commit_charge(page, memcg, true, true);
+ activate_page(page);
+ set_pmd_at(vma->vm_mm, haddr, vmf->pmd, pmd);
+
+ swap_free(entry, true);
+ if (mem_cgroup_swap_full(page) ||
+ (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
+ try_to_free_swap(page);
+ unlock_page(page);
+
+ if (vmf->flags & FAULT_FLAG_WRITE) {
+ ret |= do_huge_pmd_wp_page(vmf, pmd);
+ if (ret & VM_FAULT_ERROR)
+ ret &= VM_FAULT_ERROR;
+ goto out;
+ }
+
+ /* No need to invalidate - it was non-present before */
+ update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
+ spin_unlock(vmf->ptl);
+out:
+ return ret;
+out_nomap:
+ mem_cgroup_cancel_charge(page, memcg, true);
+ spin_unlock(vmf->ptl);
+out_page:
+ unlock_page(page);
+out_release:
+ put_page(page);
+ return ret;
+fallback:
+ delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+ if (!split_huge_swap_pmd(vmf->vma, vmf->pmd, vmf->address, orig_pmd))
+ ret = VM_FAULT_FALLBACK;
+ else
+ ret = 0;
+ if (page)
+ put_page(page);
+ return ret;
+}
#else
static inline void __split_huge_swap_pmd(struct vm_area_struct *vma,
unsigned long haddr,
diff --git a/mm/memory.c b/mm/memory.c
index f8a4336dd015..a8b85fb333c7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4073,13 +4073,17 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,

barrier();
if (unlikely(is_swap_pmd(orig_pmd))) {
- VM_BUG_ON(thp_migration_supported() &&
- !is_pmd_migration_entry(orig_pmd));
- if (is_pmd_migration_entry(orig_pmd))
+ if (thp_migration_supported() &&
+ is_pmd_migration_entry(orig_pmd)) {
pmd_migration_entry_wait(mm, vmf.pmd);
- return 0;
- }
- if (pmd_trans_huge(orig_pmd) || pmd_devmap(orig_pmd)) {
+ return 0;
+ } else if (thp_swap_supported()) {
+ ret = do_huge_pmd_swap_page(&vmf, orig_pmd);
+ if (!(ret & VM_FAULT_FALLBACK))
+ return ret;
+ } else
+ VM_BUG_ON(1);
+ } else if (pmd_trans_huge(orig_pmd) || pmd_devmap(orig_pmd)) {
if (pmd_protnone(orig_pmd) && vma_is_accessible(vma))
return do_huge_pmd_numa_page(&vmf, orig_pmd);

--
2.16.1


2018-05-23 08:33:09

by Huang, Ying

[permalink] [raw]
Subject: [PATCH -mm -V3 11/21] mm, THP, swap: Add sysfs interface to configure THP swapin

From: Huang Ying <[email protected]>

Swapping in a THP as a whole isn't desirable in some situations. For
example, for a random access pattern, swapping in a THP as a whole
will inflate the read I/O greatly. So a sysfs interface,
/sys/kernel/mm/transparent_hugepage/swapin_enabled, is added to
configure it. The following three options are provided:

- always: THP swapin will be enabled always

- madvise: THP swapin will be enabled only for VMAs with the
  VM_HUGEPAGE flag set.

- never: THP swapin will be disabled always

The default configuration is: madvise.

During page fault, if a PMD swap mapping is found and THP swapin is
disabled, the huge swap cluster and the PMD swap mapping will be split
and we fall back to normal page swapin.
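
For example, with the default "madvise" policy, a program that wants
its anonymous region to be swapped in as THPs could mark the region
with MADV_HUGEPAGE. The following userspace sketch only illustrates
this; the region size and variable names are arbitrary:

    #include <sys/mman.h>
    #include <stdio.h>

    #define LEN (64UL << 20)        /* a 64MB anonymous region */

    int main(void)
    {
            void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (p == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }
            /* With swapin_enabled set to "madvise" (the default), only
             * regions marked MADV_HUGEPAGE are swapped in as THPs.
             */
            if (madvise(p, LEN, MADV_HUGEPAGE))
                    perror("madvise");
            /* ... touch the memory; it may later be swapped out and
             * swapped back in as a whole THP ...
             */
            return 0;
    }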

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
Documentation/vm/transhuge.rst | 21 ++++++++++
include/linux/huge_mm.h | 31 +++++++++++++++
mm/huge_memory.c | 89 +++++++++++++++++++++++++++++++++---------
3 files changed, 123 insertions(+), 18 deletions(-)

diff --git a/Documentation/vm/transhuge.rst b/Documentation/vm/transhuge.rst
index a87b1d880cd4..d727706cffc3 100644
--- a/Documentation/vm/transhuge.rst
+++ b/Documentation/vm/transhuge.rst
@@ -163,6 +163,27 @@ Some userspace (such as a test program, or an optimized memory allocation

cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size

+A transparent hugepage may be swapped out and swapped in in one piece
+without splitting. This improves the utility of transparent hugepages
+but may inflate the swap read/write I/O too. So whether to swap in a
+transparent hugepage in one piece can be configured as follows.
+
+ echo always >/sys/kernel/mm/transparent_hugepage/swapin_enabled
+ echo madvise >/sys/kernel/mm/transparent_hugepage/swapin_enabled
+ echo never >/sys/kernel/mm/transparent_hugepage/swapin_enabled
+
+always
+ Attempt to allocate a transparent huge page and read it from
+ swap space in one piece every time.
+
+never
+	Always split the huge swap cluster and the PMD swap mapping,
+	and swap in only the faulting normal page.
+
+madvise
+ Only swapin the transparent huge page in one piece for
+ MADV_HUGEPAGE madvise regions.
+
khugepaged will be automatically started when
transparent_hugepage/enabled is set to "always" or "madvise, and it'll
be automatically shutdown if it's set to "never".
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index f5348d072351..1cfd43047f0d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -62,6 +62,8 @@ enum transparent_hugepage_flag {
#ifdef CONFIG_DEBUG_VM
TRANSPARENT_HUGEPAGE_DEBUG_COW_FLAG,
#endif
+ TRANSPARENT_HUGEPAGE_SWAPIN_FLAG,
+ TRANSPARENT_HUGEPAGE_SWAPIN_REQ_MADV_FLAG,
};

struct kobject;
@@ -404,11 +406,40 @@ static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)

#ifdef CONFIG_THP_SWAP
extern int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd);
+
+static inline bool transparent_hugepage_swapin_enabled(
+ struct vm_area_struct *vma)
+{
+ if (vma->vm_flags & VM_NOHUGEPAGE)
+ return false;
+
+ if (is_vma_temporary_stack(vma))
+ return false;
+
+ if (test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
+ return false;
+
+ if (transparent_hugepage_flags &
+ (1 << TRANSPARENT_HUGEPAGE_SWAPIN_FLAG))
+ return true;
+
+ if (transparent_hugepage_flags &
+ (1 << TRANSPARENT_HUGEPAGE_SWAPIN_REQ_MADV_FLAG))
+ return !!(vma->vm_flags & VM_HUGEPAGE);
+
+ return false;
+}
#else /* CONFIG_THP_SWAP */
static inline int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
{
return 0;
}
+
+static inline bool transparent_hugepage_swapin_enabled(
+ struct vm_area_struct *vma)
+{
+ return false;
+}
#endif /* CONFIG_THP_SWAP */

#endif /* _LINUX_HUGE_MM_H */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c2437914c632..3fd129a21f2e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -57,7 +57,8 @@ unsigned long transparent_hugepage_flags __read_mostly =
#endif
(1<<TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG)|
(1<<TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG)|
- (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG);
+ (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG)|
+ (1<<TRANSPARENT_HUGEPAGE_SWAPIN_REQ_MADV_FLAG);

static struct shrinker deferred_split_shrinker;

@@ -316,6 +317,53 @@ static struct kobj_attribute debug_cow_attr =
__ATTR(debug_cow, 0644, debug_cow_show, debug_cow_store);
#endif /* CONFIG_DEBUG_VM */

+#ifdef CONFIG_THP_SWAP
+static ssize_t swapin_enabled_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ if (test_bit(TRANSPARENT_HUGEPAGE_SWAPIN_FLAG,
+ &transparent_hugepage_flags))
+ return sprintf(buf, "[always] madvise never\n");
+ else if (test_bit(TRANSPARENT_HUGEPAGE_SWAPIN_REQ_MADV_FLAG,
+ &transparent_hugepage_flags))
+ return sprintf(buf, "always [madvise] never\n");
+ else
+ return sprintf(buf, "always madvise [never]\n");
+}
+
+static ssize_t swapin_enabled_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ ssize_t ret = count;
+
+ if (!memcmp("always", buf,
+ min(sizeof("always")-1, count))) {
+ clear_bit(TRANSPARENT_HUGEPAGE_SWAPIN_REQ_MADV_FLAG,
+ &transparent_hugepage_flags);
+ set_bit(TRANSPARENT_HUGEPAGE_SWAPIN_FLAG,
+ &transparent_hugepage_flags);
+ } else if (!memcmp("madvise", buf,
+ min(sizeof("madvise")-1, count))) {
+ clear_bit(TRANSPARENT_HUGEPAGE_SWAPIN_FLAG,
+ &transparent_hugepage_flags);
+ set_bit(TRANSPARENT_HUGEPAGE_SWAPIN_REQ_MADV_FLAG,
+ &transparent_hugepage_flags);
+ } else if (!memcmp("never", buf,
+ min(sizeof("never")-1, count))) {
+ clear_bit(TRANSPARENT_HUGEPAGE_SWAPIN_FLAG,
+ &transparent_hugepage_flags);
+ clear_bit(TRANSPARENT_HUGEPAGE_SWAPIN_REQ_MADV_FLAG,
+ &transparent_hugepage_flags);
+ } else
+ ret = -EINVAL;
+
+ return ret;
+}
+static struct kobj_attribute swapin_enabled_attr =
+ __ATTR(swapin_enabled, 0644, swapin_enabled_show, swapin_enabled_store);
+#endif /* CONFIG_THP_SWAP */
+
static struct attribute *hugepage_attr[] = {
&enabled_attr.attr,
&defrag_attr.attr,
@@ -326,6 +374,9 @@ static struct attribute *hugepage_attr[] = {
#endif
#ifdef CONFIG_DEBUG_VM
&debug_cow_attr.attr,
+#endif
+#ifdef CONFIG_THP_SWAP
+ &swapin_enabled_attr.attr,
#endif
NULL,
};
@@ -1645,6 +1696,9 @@ int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
retry:
page = lookup_swap_cache(entry, NULL, vmf->address);
if (!page) {
+ if (!transparent_hugepage_swapin_enabled(vma))
+ goto split;
+
page = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE, vma,
haddr, false);
if (!page) {
@@ -1652,23 +1706,8 @@ int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
* Back out if somebody else faulted in this pmd
* while we released the pmd lock.
*/
- if (likely(pmd_same(*vmf->pmd, orig_pmd))) {
- ret = split_swap_cluster(entry, false);
- /*
- * Retry if somebody else swap in the swap
- * entry
- */
- if (ret == -EEXIST) {
- ret = 0;
- goto retry;
- /* swapoff occurs under us */
- } else if (ret == -EINVAL)
- ret = 0;
- else {
- count_vm_event(THP_SWPIN_FALLBACK);
- goto fallback;
- }
- }
+ if (likely(pmd_same(*vmf->pmd, orig_pmd)))
+ goto split;
delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
goto out;
}
@@ -1780,6 +1819,20 @@ int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
if (page)
put_page(page);
return ret;
+split:
+ ret = split_swap_cluster(entry, false);
+ /* Retry if somebody else swap in the swap entry */
+ if (ret == -EEXIST) {
+ ret = 0;
+ goto retry;
+ }
+ /* swapoff occurs under us */
+ if (ret == -EINVAL) {
+ delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+ return 0;
+ }
+ count_vm_event(THP_SWPIN_FALLBACK);
+ goto fallback;
}
#else
static inline void __split_huge_swap_pmd(struct vm_area_struct *vma,
--
2.16.1


2018-05-23 08:33:26

by Huang, Ying

[permalink] [raw]
Subject: [PATCH -mm -V3 02/21] mm, THP, swap: Make CONFIG_THP_SWAP depends on CONFIG_SWAP

From: Huang Ying <[email protected]>

It's unreasonable to optimize swapping for THP without basic swapping
support. And without this dependency, build errors will occur when
THP_SWAP functions are defined in swapfile.c and called elsewhere.

The Kconfig help text is updated too to reflect the latest progress.

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
mm/Kconfig | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index ce95491abd6a..cee958bb6002 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -420,10 +420,9 @@ config ARCH_WANTS_THP_SWAP

config THP_SWAP
def_bool y
- depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP
+ depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP && SWAP
help
Swap transparent huge pages in one piece, without splitting.
- XXX: For now this only does clustered swap space allocation.

For selection by architectures with reasonable THP sizes.

--
2.16.1


2018-05-23 08:33:39

by Huang, Ying

[permalink] [raw]
Subject: [PATCH -mm -V3 06/21] mm, THP, swap: Support PMD swap mapping when splitting huge PMD

From: Huang Ying <[email protected]>

A huge PMD needs to be split when zapping a part of the PMD mapping,
etc. If the PMD mapping is a swap mapping, we need to split it too.
This patch implements the support for this. This is similar to
splitting a PMD page mapping, except that we need to decrease the PMD
swap mapping count of the huge swap cluster too. If the PMD swap
mapping count becomes 0, the huge swap cluster will be split.

Note: is_huge_zero_pmd() and pmd_page() don't work well with a swap
PMD, so a pmd_present() check is done before calling them.
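
Conceptually, splitting a PMD swap mapping just replaces the single
PMD swap entry with HPAGE_PMD_NR PTE swap entries covering consecutive
swap offsets, roughly (simplified from __split_huge_swap_pmd() in the
hunk below):

    /* Sketch only: entry is the swap entry of the first sub-page. */
    for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE, entry.val++) {
            pte_t *pte = pte_offset_map(&_pmd, haddr);
            pte_t ptent = swp_entry_to_pte(entry);

            if (soft_dirty)
                    ptent = pte_swp_mksoft_dirty(ptent);
            set_pte_at(mm, haddr, pte, ptent);
            pte_unmap(pte);
    }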

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
include/linux/swap.h | 6 ++++++
mm/huge_memory.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++-----
mm/swapfile.c | 28 +++++++++++++++++++++++++
3 files changed, 87 insertions(+), 5 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 7ed2c727c9b6..bb9de2cb952a 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -618,11 +618,17 @@ static inline swp_entry_t get_swap_page(struct page *page)

#ifdef CONFIG_THP_SWAP
extern int split_swap_cluster(swp_entry_t entry);
+extern int split_swap_cluster_map(swp_entry_t entry);
#else
static inline int split_swap_cluster(swp_entry_t entry)
{
return 0;
}
+
+static inline int split_swap_cluster_map(swp_entry_t entry)
+{
+ return 0;
+}
#endif

#ifdef CONFIG_MEMCG
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e9177363fe2e..84d5d8ff869e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1602,6 +1602,47 @@ int do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
return 0;
}

+#ifdef CONFIG_THP_SWAP
+static void __split_huge_swap_pmd(struct vm_area_struct *vma,
+ unsigned long haddr,
+ pmd_t *pmd)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ pgtable_t pgtable;
+ pmd_t _pmd;
+ swp_entry_t entry;
+ int i, soft_dirty;
+
+ entry = pmd_to_swp_entry(*pmd);
+ soft_dirty = pmd_soft_dirty(*pmd);
+
+ split_swap_cluster_map(entry);
+
+ pgtable = pgtable_trans_huge_withdraw(mm, pmd);
+ pmd_populate(mm, &_pmd, pgtable);
+
+ for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE, entry.val++) {
+ pte_t *pte, ptent;
+
+ pte = pte_offset_map(&_pmd, haddr);
+ VM_BUG_ON(!pte_none(*pte));
+ ptent = swp_entry_to_pte(entry);
+ if (soft_dirty)
+ ptent = pte_swp_mksoft_dirty(ptent);
+ set_pte_at(mm, haddr, pte, ptent);
+ pte_unmap(pte);
+ }
+ smp_wmb(); /* make pte visible before pmd */
+ pmd_populate(mm, pmd, pgtable);
+}
+#else
+static inline void __split_huge_swap_pmd(struct vm_area_struct *vma,
+ unsigned long haddr,
+ pmd_t *pmd)
+{
+}
+#endif
+
/*
* Return true if we do MADV_FREE successfully on entire pmd page.
* Otherwise, return false.
@@ -2068,7 +2109,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
- VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd)
+ VM_BUG_ON(!is_swap_pmd(*pmd) && !pmd_trans_huge(*pmd)
&& !pmd_devmap(*pmd));

count_vm_event(THP_SPLIT_PMD);
@@ -2090,8 +2131,11 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
put_page(page);
add_mm_counter(mm, MM_FILEPAGES, -HPAGE_PMD_NR);
return;
- } else if (is_huge_zero_pmd(*pmd)) {
+ } else if (pmd_present(*pmd) && is_huge_zero_pmd(*pmd)) {
/*
+		 * is_huge_zero_pmd() may return true for a PMD swap
+		 * entry, so check pmd_present() first.
+ *
* FIXME: Do we want to invalidate secondary mmu by calling
* mmu_notifier_invalidate_range() see comments below inside
* __split_huge_pmd() ?
@@ -2134,6 +2178,9 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
page = pfn_to_page(swp_offset(entry));
} else
#endif
+ if (thp_swap_supported() && is_swap_pmd(old_pmd))
+ return __split_huge_swap_pmd(vma, haddr, pmd);
+ else
page = pmd_page(old_pmd);
VM_BUG_ON_PAGE(!page_count(page), page);
page_ref_add(page, HPAGE_PMD_NR - 1);
@@ -2225,14 +2272,15 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
* pmd against. Otherwise we can end up replacing wrong page.
*/
VM_BUG_ON(freeze && !page);
- if (page && page != pmd_page(*pmd))
- goto out;
+ /* pmd_page() should be called only if pmd_present() */
+ if (page && (!pmd_present(*pmd) || page != pmd_page(*pmd)))
+ goto out;

if (pmd_trans_huge(*pmd)) {
page = pmd_page(*pmd);
if (PageMlocked(page))
clear_page_mlock(page);
- } else if (!(pmd_devmap(*pmd) || is_pmd_migration_entry(*pmd)))
+ } else if (!(pmd_devmap(*pmd) || is_swap_pmd(*pmd)))
goto out;
__split_huge_pmd_locked(vma, pmd, haddr, freeze);
out:
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 46117d6913ad..05f53c4c0cfe 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -4046,6 +4046,34 @@ static void free_swap_count_continuations(struct swap_info_struct *si)
}
}

+#ifdef CONFIG_THP_SWAP
+/* The corresponding page table shouldn't be changed under us */
+int split_swap_cluster_map(swp_entry_t entry)
+{
+ struct swap_info_struct *si;
+ struct swap_cluster_info *ci;
+ unsigned long offset = swp_offset(entry);
+
+ VM_BUG_ON(!is_cluster_offset(offset));
+ si = _swap_info_get(entry);
+ if (!si)
+ return -EBUSY;
+ ci = lock_cluster(si, offset);
+ /* The swap cluster has been split by someone else */
+ if (!cluster_is_huge(ci))
+ goto out;
+ cluster_set_count(ci, cluster_count(ci) - 1);
+ VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER);
+ if (cluster_count(ci) == SWAPFILE_CLUSTER &&
+ !(si->swap_map[offset] & SWAP_HAS_CACHE))
+ cluster_clear_huge(ci);
+
+out:
+ unlock_cluster(ci);
+ return 0;
+}
+#endif
+
static int __init swapfile_init(void)
{
int nid;
--
2.16.1


2018-05-23 08:33:49

by Huang, Ying

[permalink] [raw]
Subject: [PATCH -mm -V3 01/21] mm, THP, swap: Enable PMD swap operations for CONFIG_THP_SWAP

From: Huang Ying <[email protected]>

Previously, the PMD swap operations were only enabled for
CONFIG_ARCH_ENABLE_THP_MIGRATION, because they were only used by the
THP migration support. We will support PMD swap mappings to huge swap
clusters and swapping in THPs as a whole. That will be enabled via
CONFIG_THP_SWAP and needs these PMD swap operations. So enable the
PMD swap operations for CONFIG_THP_SWAP too.

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
arch/x86/include/asm/pgtable.h | 2 +-
include/asm-generic/pgtable.h | 2 +-
include/linux/swapops.h | 44 ++++++++++++++++++++++--------------------
3 files changed, 25 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index f1633de5a675..0dff03085197 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1224,7 +1224,7 @@ static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
return pte_clear_flags(pte, _PAGE_SWP_SOFT_DIRTY);
}

-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+#if defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) || defined(CONFIG_THP_SWAP)
static inline pmd_t pmd_swp_mksoft_dirty(pmd_t pmd)
{
return pmd_set_flags(pmd, _PAGE_SWP_SOFT_DIRTY);
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index f59639afaa39..bb8354981a36 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -675,7 +675,7 @@ static inline void ptep_modify_prot_commit(struct mm_struct *mm,
#endif

#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
-#ifndef CONFIG_ARCH_ENABLE_THP_MIGRATION
+#if !defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) && !defined(CONFIG_THP_SWAP)
static inline pmd_t pmd_swp_mksoft_dirty(pmd_t pmd)
{
return pmd;
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 1d3877c39a00..f1be5a52f5c8 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -258,17 +258,7 @@ static inline int is_write_migration_entry(swp_entry_t entry)

#endif

-struct page_vma_mapped_walk;
-
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
-extern void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
- struct page *page);
-
-extern void remove_migration_pmd(struct page_vma_mapped_walk *pvmw,
- struct page *new);
-
-extern void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd);
-
+#if defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) || defined(CONFIG_THP_SWAP)
static inline swp_entry_t pmd_to_swp_entry(pmd_t pmd)
{
swp_entry_t arch_entry;
@@ -286,6 +276,28 @@ static inline pmd_t swp_entry_to_pmd(swp_entry_t entry)
arch_entry = __swp_entry(swp_type(entry), swp_offset(entry));
return __swp_entry_to_pmd(arch_entry);
}
+#else
+static inline swp_entry_t pmd_to_swp_entry(pmd_t pmd)
+{
+ return swp_entry(0, 0);
+}
+
+static inline pmd_t swp_entry_to_pmd(swp_entry_t entry)
+{
+ return __pmd(0);
+}
+#endif
+
+struct page_vma_mapped_walk;
+
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+extern void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
+ struct page *page);
+
+extern void remove_migration_pmd(struct page_vma_mapped_walk *pvmw,
+ struct page *new);
+
+extern void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd);

static inline int is_pmd_migration_entry(pmd_t pmd)
{
@@ -306,16 +318,6 @@ static inline void remove_migration_pmd(struct page_vma_mapped_walk *pvmw,

static inline void pmd_migration_entry_wait(struct mm_struct *m, pmd_t *p) { }

-static inline swp_entry_t pmd_to_swp_entry(pmd_t pmd)
-{
- return swp_entry(0, 0);
-}
-
-static inline pmd_t swp_entry_to_pmd(swp_entry_t entry)
-{
- return __pmd(0);
-}
-
static inline int is_pmd_migration_entry(pmd_t pmd)
{
return 0;
--
2.16.1


2018-05-23 08:34:14

by Huang, Ying

[permalink] [raw]
Subject: [PATCH -mm -V3 08/21] mm, THP, swap: Support to read a huge swap cluster for swapin a THP

From: Huang Ying <[email protected]>

To swap in a THP as a whole, we need to read a huge swap cluster from
the swap device. This patch revises __read_swap_cache_async() and its
callers and callees to support this. If __read_swap_cache_async()
finds that the swap cluster of the specified swap entry is huge, it
will try to allocate a THP and add it into the swap cache, so that the
contents of the huge swap cluster can later be read into the THP.
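
One detail worth noting: when the cluster is huge, the THP is added to
the swap cache at the swap entry aligned to the huge cluster boundary,
not at the entry being faulted on. The alignment is simply (as in the
mm/swap_state.c hunk below):

    /* Sketch only: align the swap entry to the start of the huge cluster. */
    hentry = swp_entry(swp_type(entry),
                       round_down(swp_offset(entry), HPAGE_PMD_NR));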

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
include/linux/huge_mm.h | 38 ++++++++++++++++++++++++
include/linux/swap.h | 4 +--
mm/huge_memory.c | 26 -----------------
mm/swap_state.c | 77 ++++++++++++++++++++++++++++++++++---------------
mm/swapfile.c | 11 ++++---
5 files changed, 100 insertions(+), 56 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 0d0cfddbf4b7..0dbfbe34b01a 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -250,6 +250,39 @@ static inline bool thp_migration_supported(void)
return IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION);
}

+/*
+ * always: directly stall for all thp allocations
+ * defer: wake kswapd and fail if not immediately available
+ * defer+madvise: wake kswapd and directly stall for MADV_HUGEPAGE, otherwise
+ * fail if not immediately available
+ * madvise: directly stall for MADV_HUGEPAGE, otherwise fail if not immediately
+ * available
+ * never: never stall for any thp allocation
+ */
+static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
+{
+ bool vma_madvised;
+
+ if (!vma)
+ return GFP_TRANSHUGE_LIGHT;
+ vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG,
+ &transparent_hugepage_flags))
+ return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY);
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG,
+ &transparent_hugepage_flags))
+ return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM;
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG,
+ &transparent_hugepage_flags))
+ return GFP_TRANSHUGE_LIGHT |
+ (vma_madvised ? __GFP_DIRECT_RECLAIM :
+ __GFP_KSWAPD_RECLAIM);
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG,
+ &transparent_hugepage_flags))
+ return GFP_TRANSHUGE_LIGHT |
+ (vma_madvised ? __GFP_DIRECT_RECLAIM : 0);
+ return GFP_TRANSHUGE_LIGHT;
+}
#else /* CONFIG_TRANSPARENT_HUGEPAGE */
#define HPAGE_PMD_SHIFT ({ BUILD_BUG(); 0; })
#define HPAGE_PMD_MASK ({ BUILD_BUG(); 0; })
@@ -362,6 +395,11 @@ static inline bool thp_migration_supported(void)
{
return false;
}
+
+static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
+{
+ return 0;
+}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

#endif /* _LINUX_HUGE_MM_H */
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 878f132dabc0..d2e017dd7bbd 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -462,7 +462,7 @@ extern sector_t map_swap_page(struct page *, struct block_device **);
extern sector_t swapdev_block(int, pgoff_t);
extern int page_swapcount(struct page *);
extern int __swap_count(swp_entry_t entry);
-extern int __swp_swapcount(swp_entry_t entry);
+extern int __swp_swapcount(swp_entry_t entry, bool *huge_cluster);
extern int swp_swapcount(swp_entry_t entry);
extern struct swap_info_struct *page_swap_info(struct page *);
extern struct swap_info_struct *swp_swap_info(swp_entry_t entry);
@@ -589,7 +589,7 @@ static inline int __swap_count(swp_entry_t entry)
return 0;
}

-static inline int __swp_swapcount(swp_entry_t entry)
+static inline int __swp_swapcount(swp_entry_t entry, bool *huge_cluster)
{
return 0;
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e363e13f6751..3975d824b4ed 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -620,32 +620,6 @@ static int __do_huge_pmd_anonymous_page(struct vm_fault *vmf, struct page *page,

}

-/*
- * always: directly stall for all thp allocations
- * defer: wake kswapd and fail if not immediately available
- * defer+madvise: wake kswapd and directly stall for MADV_HUGEPAGE, otherwise
- * fail if not immediately available
- * madvise: directly stall for MADV_HUGEPAGE, otherwise fail if not immediately
- * available
- * never: never stall for any thp allocation
- */
-static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
-{
- const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);
-
- if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags))
- return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY);
- if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags))
- return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM;
- if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags))
- return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM :
- __GFP_KSWAPD_RECLAIM);
- if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags))
- return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM :
- 0);
- return GFP_TRANSHUGE_LIGHT;
-}
-
/* Caller must hold page table lock. */
static bool set_huge_zero_page(pgtable_t pgtable, struct mm_struct *mm,
struct vm_area_struct *vma, unsigned long haddr, pmd_t *pmd,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index c6b3eab73fde..59b37a84fbd7 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -386,6 +386,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
struct page *found_page = NULL, *new_page = NULL;
struct swap_info_struct *si;
int err;
+ bool huge_cluster = false;
+ swp_entry_t hentry;
+
*new_page_allocated = false;

do {
@@ -411,14 +414,32 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
* as SWAP_HAS_CACHE. That's done in later part of code or
* else swap_off will be aborted if we return NULL.
*/
- if (!__swp_swapcount(entry) && swap_slot_cache_enabled)
+ if (!__swp_swapcount(entry, &huge_cluster) &&
+ swap_slot_cache_enabled)
break;

/*
* Get a new page to read into from swap.
*/
- if (!new_page) {
- new_page = alloc_page_vma(gfp_mask, vma, addr);
+ if (!new_page ||
+ (thp_swap_supported() &&
+ !!PageTransCompound(new_page) != huge_cluster)) {
+ if (new_page)
+ put_page(new_page);
+ if (thp_swap_supported() && huge_cluster) {
+ gfp_t gfp = alloc_hugepage_direct_gfpmask(vma);
+
+ new_page = alloc_hugepage_vma(gfp, vma,
+ addr, HPAGE_PMD_ORDER);
+ if (new_page)
+ prep_transhuge_page(new_page);
+ hentry = swp_entry(swp_type(entry),
+ round_down(swp_offset(entry),
+ HPAGE_PMD_NR));
+ } else {
+ new_page = alloc_page_vma(gfp_mask, vma, addr);
+ hentry = entry;
+ }
if (!new_page)
break; /* Out of memory */
}
@@ -426,33 +447,37 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
/*
* call radix_tree_preload() while we can wait.
*/
- err = radix_tree_maybe_preload(gfp_mask & GFP_KERNEL);
+ err = radix_tree_maybe_preload_order(gfp_mask & GFP_KERNEL,
+ compound_order(new_page));
if (err)
break;

/*
* Swap entry may have been freed since our caller observed it.
*/
- err = swapcache_prepare(entry);
- if (err == -EEXIST) {
- radix_tree_preload_end();
- /*
- * We might race against get_swap_page() and stumble
- * across a SWAP_HAS_CACHE swap_map entry whose page
- * has not been brought into the swapcache yet.
- */
- cond_resched();
- continue;
- }
- if (err) { /* swp entry is obsolete ? */
+ err = swapcache_prepare(hentry, huge_cluster);
+ if (err) {
radix_tree_preload_end();
- break;
+ if (err == -EEXIST) {
+ /*
+ * We might race against get_swap_page() and
+ * stumble across a SWAP_HAS_CACHE swap_map
+ * entry whose page has not been brought into
+ * the swapcache yet.
+ */
+ cond_resched();
+ continue;
+ } else if (err == -ENOTDIR) {
+ /* huge swap cluster is split under us */
+ continue;
+ } else /* swp entry is obsolete ? */
+ break;
}

/* May fail (-ENOMEM) if radix-tree node allocation failed. */
__SetPageLocked(new_page);
__SetPageSwapBacked(new_page);
- err = __add_to_swap_cache(new_page, entry);
+ err = __add_to_swap_cache(new_page, hentry);
if (likely(!err)) {
radix_tree_preload_end();
/*
@@ -460,6 +485,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
*/
lru_cache_add_anon(new_page);
*new_page_allocated = true;
+ if (thp_swap_supported() && huge_cluster)
+ new_page += swp_offset(entry) &
+ (HPAGE_PMD_NR - 1);
return new_page;
}
radix_tree_preload_end();
@@ -468,7 +496,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
* add_to_swap_cache() doesn't return -EEXIST, so we can safely
* clear SWAP_HAS_CACHE flag.
*/
- put_swap_page(new_page, entry);
+ put_swap_page(new_page, hentry);
} while (err != -ENOMEM);

if (new_page)
@@ -490,7 +518,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
vma, addr, &page_was_allocated);

if (page_was_allocated)
- swap_readpage(retpage, do_poll);
+ swap_readpage(compound_head(retpage), do_poll);

return retpage;
}
@@ -609,8 +637,9 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
if (!page)
continue;
if (page_allocated) {
- swap_readpage(page, false);
- if (offset != entry_offset) {
+ swap_readpage(compound_head(page), false);
+ if (offset != entry_offset &&
+ !PageTransCompound(page)) {
SetPageReadahead(page);
count_vm_event(SWAP_RA);
}
@@ -771,8 +800,8 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
if (!page)
continue;
if (page_allocated) {
- swap_readpage(page, false);
- if (i != ra_info.offset) {
+ swap_readpage(compound_head(page), false);
+ if (i != ra_info.offset && !PageTransCompound(page)) {
SetPageReadahead(page);
count_vm_event(SWAP_RA);
}
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 1e723d3a9a6f..1a62fbc13381 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1501,7 +1501,8 @@ int __swap_count(swp_entry_t entry)
return count;
}

-static int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry)
+static int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry,
+ bool *huge_cluster)
{
int count = 0;
pgoff_t offset = swp_offset(entry);
@@ -1509,6 +1510,8 @@ static int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry)

ci = lock_cluster_or_swap_info(si, offset);
count = swap_count(si->swap_map[offset]);
+ if (huge_cluster && ci)
+ *huge_cluster = cluster_is_huge(ci);
unlock_cluster_or_swap_info(si, ci);
return count;
}
@@ -1518,14 +1521,14 @@ static int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry)
* This does not give an exact answer when swap count is continued,
* but does include the high COUNT_CONTINUED flag to allow for that.
*/
-int __swp_swapcount(swp_entry_t entry)
+int __swp_swapcount(swp_entry_t entry, bool *huge_cluster)
{
int count = 0;
struct swap_info_struct *si;

si = get_swap_device(entry);
if (si) {
- count = swap_swapcount(si, entry);
+ count = swap_swapcount(si, entry, huge_cluster);
put_swap_device(si);
}
return count;
@@ -1685,7 +1688,7 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
return map_swapcount;
}
#else
-#define swap_page_trans_huge_swapped(si, entry) swap_swapcount(si, entry)
+#define swap_page_trans_huge_swapped(si, entry) swap_swapcount(si, entry, NULL)
#define page_swapped(page) (page_swapcount(page) != 0)

static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
--
2.16.1


2018-05-23 08:34:30

by Huang, Ying

[permalink] [raw]
Subject: [PATCH -mm -V3 05/21] mm, THP, swap: Support PMD swap mapping in free_swap_and_cache()/swap_free()

From: Huang Ying <[email protected]>

When a PMD swap mapping is removed from a huge swap cluster, for
example when unmapping a memory range mapped with a PMD swap mapping,
free_swap_and_cache() is called to decrease the reference count of
the huge swap cluster. free_swap_and_cache() may also free or split
the huge swap cluster, and free the corresponding THP in the swap
cache if necessary. swap_free() is similar and shares most of its
implementation with free_swap_and_cache(). This patch revises
free_swap_and_cache() and swap_free() to implement this.

If the swap cluster has already been split, for example because a THP
could not be allocated during swapin, we simply decrease the reference
count of each swap slot by one.

Otherwise, we decrease the reference count of each swap slot by one
and also decrease the PMD swap mapping count in cluster_count(). When
the corresponding THP isn't in the swap cache, the huge swap cluster
is split if the PMD swap mapping count becomes 0, and freed if all
swap counts become 0. When the corresponding THP is in the swap cache
and every swap_map[offset] == SWAP_HAS_CACHE, we try to delete the
THP from the swap cache, which frees both the THP and the huge swap
cluster.
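
For illustration only, here is a rough userspace model of the logic
above (not kernel code: the structure and helper names are invented,
locking, the swap cache itself and error handling are not modeled, and
SWAPFILE_CLUSTER == 512 assumes HPAGE_PMD_NR on x86_64):

#include <stdbool.h>
#include <stdio.h>

#define SWAPFILE_CLUSTER 512    /* HPAGE_PMD_NR on x86_64 */
#define SWAP_HAS_CACHE   0x40

struct cluster_model {
        bool huge;              /* huge flag of the cluster */
        unsigned int count;     /* SWAPFILE_CLUSTER + PMD swap mapping count */
        unsigned char swap_map[SWAPFILE_CLUSTER]; /* per-slot count, may carry SWAP_HAS_CACHE */
};

/* Drop one PMD swap mapping reference, roughly the cluster case of swap_free(). */
static void drop_pmd_mapping(struct cluster_model *c, bool thp_in_swap_cache)
{
        unsigned int zero = 0, cache_only = 0;
        int i;

        for (i = 0; i < SWAPFILE_CLUSTER; i++) {
                c->swap_map[i]--;       /* one reference less per swap slot */
                if (c->swap_map[i] == 0)
                        zero++;
                else if (c->swap_map[i] == SWAP_HAS_CACHE)
                        cache_only++;
        }
        if (!c->huge)                   /* cluster was split earlier: nothing more to do */
                return;
        c->count--;                     /* one PMD swap mapping less */
        if (!thp_in_swap_cache) {
                if (zero == SWAPFILE_CLUSTER)
                        printf("all swap counts zero: free the huge swap cluster\n");
                else if (c->count == SWAPFILE_CLUSTER)
                        c->huge = false; /* no PMD swap mapping left: split the cluster */
        } else if (cache_only == SWAPFILE_CLUSTER) {
                printf("only SWAP_HAS_CACHE left: try to delete the THP from the swap cache\n");
        }
}

int main(void)
{
        struct cluster_model c = { .huge = true, .count = SWAPFILE_CLUSTER + 1 };
        int i;

        for (i = 0; i < SWAPFILE_CLUSTER; i++)
                c.swap_map[i] = 1;      /* one PMD swap mapping, THP not in swap cache */
        drop_pmd_mapping(&c, false);    /* e.g. the mapped range is unmapped */
        printf("huge=%d count=%u swap_map[0]=%u\n", c.huge, c.count, c.swap_map[0]);
        return 0;
}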

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
arch/s390/mm/pgtable.c | 2 +-
include/linux/swap.h | 9 ++--
kernel/power/swap.c | 4 +-
mm/madvise.c | 2 +-
mm/memory.c | 4 +-
mm/shmem.c | 6 +--
mm/swapfile.c | 114 +++++++++++++++++++++++++++++++++++++++++++------
7 files changed, 116 insertions(+), 25 deletions(-)

diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index 301e466e4263..3079a23eef75 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -646,7 +646,7 @@ static void ptep_zap_swap_entry(struct mm_struct *mm, swp_entry_t entry)

dec_mm_counter(mm, mm_counter(page));
}
- free_swap_and_cache(entry);
+ free_swap_and_cache(entry, false);
}

void ptep_zap_unused(struct mm_struct *mm, unsigned long addr,
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 57aa655ab27d..7ed2c727c9b6 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -453,9 +453,9 @@ extern int add_swap_count_continuation(swp_entry_t, gfp_t);
extern void swap_shmem_alloc(swp_entry_t);
extern int swap_duplicate(swp_entry_t *entry, bool cluster);
extern int swapcache_prepare(swp_entry_t entry, bool cluster);
-extern void swap_free(swp_entry_t);
+extern void swap_free(swp_entry_t entry, bool cluster);
extern void swapcache_free_entries(swp_entry_t *entries, int n);
-extern int free_swap_and_cache(swp_entry_t);
+extern int free_swap_and_cache(swp_entry_t entry, bool cluster);
extern int swap_type_of(dev_t, sector_t, struct block_device **);
extern unsigned int count_swap_pages(int, int);
extern sector_t map_swap_page(struct page *, struct block_device **);
@@ -509,7 +509,8 @@ static inline void show_swap_cache_info(void)
{
}

-#define free_swap_and_cache(e) ({(is_migration_entry(e) || is_device_private_entry(e));})
+#define free_swap_and_cache(e, c) \
+ ({(is_migration_entry(e) || is_device_private_entry(e)); })
#define swapcache_prepare(e, c) \
({(is_migration_entry(e) || is_device_private_entry(e)); })

@@ -527,7 +528,7 @@ static inline int swap_duplicate(swp_entry_t *swp, bool cluster)
return 0;
}

-static inline void swap_free(swp_entry_t swp)
+static inline void swap_free(swp_entry_t swp, bool cluster)
{
}

diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index 1efcb5b0c3ed..f8b4d6df73fd 100644
--- a/kernel/power/swap.c
+++ b/kernel/power/swap.c
@@ -182,7 +182,7 @@ sector_t alloc_swapdev_block(int swap)
offset = swp_offset(get_swap_page_of_type(swap));
if (offset) {
if (swsusp_extents_insert(offset))
- swap_free(swp_entry(swap, offset));
+ swap_free(swp_entry(swap, offset), false);
else
return swapdev_block(swap, offset);
}
@@ -206,7 +206,7 @@ void free_all_swap_pages(int swap)
ext = rb_entry(node, struct swsusp_extent, node);
rb_erase(node, &swsusp_extents);
for (offset = ext->start; offset <= ext->end; offset++)
- swap_free(swp_entry(swap, offset));
+ swap_free(swp_entry(swap, offset), false);

kfree(ext);
}
diff --git a/mm/madvise.c b/mm/madvise.c
index 4d3c922ea1a1..d180000c626b 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -349,7 +349,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
if (non_swap_entry(entry))
continue;
nr_swap--;
- free_swap_and_cache(entry);
+ free_swap_and_cache(entry, false);
pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
continue;
}
diff --git a/mm/memory.c b/mm/memory.c
index 88eed4c87f1a..f8a4336dd015 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1376,7 +1376,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
page = migration_entry_to_page(entry);
rss[mm_counter(page)]--;
}
- if (unlikely(!free_swap_and_cache(entry)))
+ if (unlikely(!free_swap_and_cache(entry, false)))
print_bad_pte(vma, addr, ptent, NULL);
pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
} while (pte++, addr += PAGE_SIZE, addr != end);
@@ -3060,7 +3060,7 @@ int do_swap_page(struct vm_fault *vmf)
}
set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);

- swap_free(entry);
+ swap_free(entry, false);
if (mem_cgroup_swap_full(page) ||
(vma->vm_flags & VM_LOCKED) || PageMlocked(page))
try_to_free_swap(page);
diff --git a/mm/shmem.c b/mm/shmem.c
index 4d8a6f8e571f..df5bc74f1cb9 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -676,7 +676,7 @@ static int shmem_free_swap(struct address_space *mapping,
xa_unlock_irq(&mapping->i_pages);
if (old != radswap)
return -ENOENT;
- free_swap_and_cache(radix_to_swp_entry(radswap));
+ free_swap_and_cache(radix_to_swp_entry(radswap), false);
return 0;
}

@@ -1211,7 +1211,7 @@ static int shmem_unuse_inode(struct shmem_inode_info *info,
spin_lock_irq(&info->lock);
info->swapped--;
spin_unlock_irq(&info->lock);
- swap_free(swap);
+ swap_free(swap, false);
}
}
return error;
@@ -1750,7 +1750,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,

delete_from_swap_cache(page);
set_page_dirty(page);
- swap_free(swap);
+ swap_free(swap, false);

} else {
if (vma && userfaultfd_missing(vma)) {
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 8dbc0f9b2f90..46117d6913ad 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -885,7 +885,7 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
}

#ifdef CONFIG_THP_SWAP
-static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
+static int __swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
{
unsigned long idx;
struct swap_cluster_info *ci;
@@ -911,7 +911,7 @@ static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
return 1;
}

-static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
+static void __swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
{
unsigned long offset = idx * SWAPFILE_CLUSTER;
struct swap_cluster_info *ci;
@@ -924,11 +924,15 @@ static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
swap_range_free(si, offset, SWAPFILE_CLUSTER);
}
#else
-static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
+static int __swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
{
VM_WARN_ON_ONCE(1);
return 0;
}
+
+static void __swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
+{
+}
#endif /* CONFIG_THP_SWAP */

static unsigned long scan_swap_map(struct swap_info_struct *si,
@@ -996,7 +1000,7 @@ int get_swap_pages(int n_goal, bool cluster, swp_entry_t swp_entries[])
}
if (cluster) {
if (!(si->flags & SWP_FILE))
- n_ret = swap_alloc_cluster(si, swp_entries);
+ n_ret = __swap_alloc_cluster(si, swp_entries);
} else
n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
n_goal, swp_entries);
@@ -1215,8 +1219,10 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
count = SWAP_MAP_MAX | COUNT_CONTINUED;
else
count = SWAP_MAP_MAX;
- } else
+ } else {
+ VM_BUG_ON(!count);
count--;
+ }
}

usage = count | has_cache;
@@ -1255,17 +1261,90 @@ static void swap_entry_free(struct swap_info_struct *p, swp_entry_t entry)
swap_range_free(p, offset, 1);
}

+#ifdef CONFIG_THP_SWAP
+static unsigned char swap_free_cluster(struct swap_info_struct *si,
+ swp_entry_t entry)
+{
+ struct swap_cluster_info *ci;
+ unsigned long offset = swp_offset(entry);
+ unsigned int count, i, free_entries = 0, cache_only = 0;
+ unsigned char *map, ret = 1;
+
+ ci = lock_cluster(si, offset);
+ VM_BUG_ON(!is_cluster_offset(offset));
+ /* Cluster has been split, free each swap entry in the cluster */
+ if (!cluster_is_huge(ci)) {
+ unlock_cluster(ci);
+ for (i = 0; i < SWAPFILE_CLUSTER; i++, entry.val++) {
+ if (!__swap_entry_free(si, entry, 1)) {
+ free_entries++;
+ free_swap_slot(entry);
+ }
+ }
+ return !(free_entries == SWAPFILE_CLUSTER);
+ }
+ count = cluster_count(ci) - 1;
+ VM_BUG_ON(count < SWAPFILE_CLUSTER);
+ cluster_set_count(ci, count);
+ map = si->swap_map + offset;
+ for (i = 0; i < SWAPFILE_CLUSTER; i++) {
+ if (map[i] == 1) {
+ map[i] = SWAP_MAP_BAD;
+ free_entries++;
+ } else if (__swap_entry_free_locked(si, ci, offset + i, 1) ==
+ SWAP_HAS_CACHE)
+ cache_only++;
+ }
+ VM_BUG_ON(free_entries && (count != SWAPFILE_CLUSTER ||
+ (map[0] & SWAP_HAS_CACHE)));
+ if (free_entries == SWAPFILE_CLUSTER)
+ memset(map, SWAP_HAS_CACHE, SWAPFILE_CLUSTER);
+ else if (!cluster_swapcount(ci) && !(map[0] & SWAP_HAS_CACHE))
+ cluster_clear_huge(ci);
+ unlock_cluster(ci);
+ if (free_entries == SWAPFILE_CLUSTER) {
+ spin_lock(&si->lock);
+ mem_cgroup_uncharge_swap(entry, SWAPFILE_CLUSTER);
+ __swap_free_cluster(si, offset / SWAPFILE_CLUSTER);
+ spin_unlock(&si->lock);
+ ret = 0;
+ } else if (free_entries) {
+ ci = lock_cluster(si, offset);
+ for (i = 0; i < SWAPFILE_CLUSTER; i++, entry.val++) {
+ if (map[i] == SWAP_MAP_BAD) {
+ map[i] = SWAP_HAS_CACHE;
+ unlock_cluster(ci);
+ free_swap_slot(entry);
+ ci = lock_cluster(si, offset);
+ }
+ }
+ unlock_cluster(ci);
+ } else if (cache_only == SWAPFILE_CLUSTER)
+ ret = SWAP_HAS_CACHE;
+
+ return ret;
+}
+#else
+static inline unsigned char swap_free_cluster(struct swap_info_struct *si,
+ swp_entry_t entry)
+{
+ return 0;
+}
+#endif
+
/*
* Caller has made sure that the swap device corresponding to entry
* is still around or has not been recycled.
*/
-void swap_free(swp_entry_t entry)
+void swap_free(swp_entry_t entry, bool cluster)
{
struct swap_info_struct *p;

p = _swap_info_get(entry);
if (p) {
- if (!__swap_entry_free(p, entry, 1))
+ if (thp_swap_supported() && cluster)
+ swap_free_cluster(p, entry);
+ else if (!__swap_entry_free(p, entry, 1))
free_swap_slot(entry);
}
}
@@ -1326,7 +1405,7 @@ static void swapcache_free_cluster(swp_entry_t entry)
if (free_entries == SWAPFILE_CLUSTER) {
spin_lock(&si->lock);
mem_cgroup_uncharge_swap(entry, SWAPFILE_CLUSTER);
- swap_free_cluster(si, idx);
+ __swap_free_cluster(si, idx);
spin_unlock(&si->lock);
} else if (free_entries) {
for (i = 0; i < SWAPFILE_CLUSTER; i++, entry.val++) {
@@ -1730,7 +1809,7 @@ int try_to_free_swap(struct page *page)
* Free the swap entry like above, but also try to
* free the page cache entry if it is the last user.
*/
-int free_swap_and_cache(swp_entry_t entry)
+int free_swap_and_cache(swp_entry_t entry, bool cluster)
{
struct swap_info_struct *p;
struct page *page = NULL;
@@ -1741,7 +1820,8 @@ int free_swap_and_cache(swp_entry_t entry)

p = _swap_info_get(entry);
if (p) {
- count = __swap_entry_free(p, entry, 1);
+ count = cluster ? swap_free_cluster(p, entry) :
+ __swap_entry_free(p, entry, 1);
if (count == SWAP_HAS_CACHE &&
!swap_page_trans_huge_swapped(p, entry)) {
page = find_get_page(swap_address_space(entry),
@@ -1750,7 +1830,7 @@ int free_swap_and_cache(swp_entry_t entry)
put_page(page);
page = NULL;
}
- } else if (!count)
+ } else if (!count && !cluster)
free_swap_slot(entry);
}
if (page) {
@@ -1914,7 +1994,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
}
set_pte_at(vma->vm_mm, addr, pte,
pte_mkold(mk_pte(page, vma->vm_page_prot)));
- swap_free(entry);
+ swap_free(entry, false);
/*
* Move the page to the active list so it is not
* immediately swapped out again after swapon.
@@ -2353,6 +2433,16 @@ int try_to_unuse(unsigned int type, bool frontswap,
}

mmput(start_mm);
+
+ /*
+ * Swap entries may be marked SWAP_MAP_BAD temporarily in
+ * swap_free_cluster() before they are really freed.
+ * find_next_to_unuse() will skip these swap entries, which is
+ * OK. But we need to wait until they are really freed.
+ */
+ while (!retval && READ_ONCE(si->inuse_pages))
+ schedule_timeout_uninterruptible(1);
+
return retval;
}

--
2.16.1


2018-05-23 08:34:36

by Huang, Ying

[permalink] [raw]
Subject: [PATCH -mm -V3 03/21] mm, THP, swap: Support PMD swap mapping in swap_duplicate()

From: Huang Ying <[email protected]>

To support swapping in a THP as a whole, we need to create PMD swap
mappings during swapout and maintain the PMD swap mapping count. This
patch implements support for increasing the PMD swap mapping count
(for swapout, fork, etc.) and for setting the SWAP_HAS_CACHE flag (for
swapin, etc.) on a huge swap cluster in the swap_duplicate() function
family. Although it implements only part of the design of swap
reference counting with PMD swap mappings, the whole design is
described below to make the patch and the overall picture easier to
understand.

A huge swap cluster is used to hold the contents of a swapped-out
THP. After swapout, a PMD page mapping of the THP becomes a PMD swap
mapping to the huge swap cluster via a swap entry in the PMD, while a
PTE page mapping of a subpage of the THP becomes a PTE swap mapping to
a swap slot in the huge swap cluster via a swap entry in the PTE.

If there is no PMD swap mapping and the corresponding THP is removed
from the page cache (reclaimed), the huge swap cluster will be split
and become a normal swap cluster.

The count (cluster_count()) of the huge swap cluster is
SWAPFILE_CLUSTER (= HPAGE_PMD_NR) + the PMD swap mapping count.
Because all swap slots in the huge swap cluster are mapped by PTE or
PMD, or have the SWAP_HAS_CACHE bit set, the usage count of the swap
cluster is HPAGE_PMD_NR. The PMD swap mapping count is recorded on
top of that to make it easy to determine whether there are remaining
PMD swap mappings.

The count in swap_map[offset] is the sum of the PTE and PMD swap
mapping counts. This means that when we increase the PMD swap mapping
count, we need to increase swap_map[offset] for all swap slots inside
the swap cluster. An alternative would be to make swap_map[offset]
record the PTE swap map count only, given that the PMD swap mapping
count is already recorded in the count of the huge swap cluster. But
then swap_map[offset] would have to be increased when splitting the
PMD swap mapping, which may fail because of the memory allocation for
swap count continuation and is hard to deal with. So we choose the
current solution.

The PMD swap mapping to a huge swap cluster may be split, for example
when part of the PMD mapping is unmapped. That is easy, because only
the count of the huge swap cluster needs to be changed. When the last
PMD swap mapping is gone and SWAP_HAS_CACHE is unset, we split the
huge swap cluster (clear the huge flag). This makes it easy to reason
about the cluster state.

A huge swap cluster will be split when the THP in the swap cache is
split, when allocating a THP during swapin fails, etc. But when
splitting the huge swap cluster, we will not try to split all PMD swap
mappings, because sometimes we don't have enough information available
for that. Later, when the PMD swap mapping is duplicated, swapped in,
etc., the PMD swap mapping will be split and fall back to the PTE
operation.

When a THP is added to the swap cache, the SWAP_HAS_CACHE flag is set
in swap_map[offset] of all swap slots inside the huge swap cluster
backing the THP. This huge swap cluster will not be split unless the
THP is split, even if its PMD swap mapping count drops to 0. Later,
when the THP is removed from the swap cache, the SWAP_HAS_CACHE flag
is cleared in swap_map[offset] of all swap slots inside the huge swap
cluster, and the huge swap cluster is split if its PMD swap mapping
count is 0.
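
For illustration only, the counting scheme above can be modeled in a
few lines of userspace C (not kernel code: the names are made up, and
SWAPFILE_CLUSTER == 512 assumes HPAGE_PMD_NR on x86_64). The invariant
is cluster_count == SWAPFILE_CLUSTER + PMD swap mapping count:

#include <assert.h>
#include <stdio.h>

#define SWAPFILE_CLUSTER 512    /* HPAGE_PMD_NR on x86_64 */

struct huge_cluster_model {
        unsigned int cluster_count;     /* SWAPFILE_CLUSTER + PMD swap mapping count */
        unsigned char swap_map[SWAPFILE_CLUSTER]; /* per-slot PTE + PMD swap mapping count */
};

/* Swapout of a PMD mapped THP: one PMD swap mapping, every slot counted once. */
static void model_swapout(struct huge_cluster_model *c)
{
        int i;

        c->cluster_count = SWAPFILE_CLUSTER + 1;
        for (i = 0; i < SWAPFILE_CLUSTER; i++)
                c->swap_map[i] = 1;
}

/* Duplicating the PMD swap mapping (e.g. fork()): all slots and the cluster count go up. */
static void model_pmd_swap_duplicate(struct huge_cluster_model *c)
{
        int i;

        for (i = 0; i < SWAPFILE_CLUSTER; i++)
                c->swap_map[i] += 1;
        c->cluster_count += 1;
}

static unsigned int model_pmd_map_count(const struct huge_cluster_model *c)
{
        return c->cluster_count - SWAPFILE_CLUSTER;
}

int main(void)
{
        struct huge_cluster_model c;

        model_swapout(&c);
        model_pmd_swap_duplicate(&c);
        assert(model_pmd_map_count(&c) == 2);   /* parent + child after fork() */
        assert(c.swap_map[0] == 2);             /* swap_map is the sum of PTE and PMD counts */
        printf("PMD swap mappings: %u\n", model_pmd_map_count(&c));
        return 0;
}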

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
---
include/linux/huge_mm.h | 5 +
include/linux/swap.h | 9 +-
mm/memory.c | 2 +-
mm/rmap.c | 2 +-
mm/swapfile.c | 287 +++++++++++++++++++++++++++++++++---------------
5 files changed, 213 insertions(+), 92 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index a8a126259bc4..0d0cfddbf4b7 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -79,6 +79,11 @@ extern struct kobj_attribute shmem_enabled_attr;
#define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
#define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)

+static inline bool thp_swap_supported(void)
+{
+ return IS_ENABLED(CONFIG_THP_SWAP);
+}
+
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define HPAGE_PMD_SHIFT PMD_SHIFT
#define HPAGE_PMD_SIZE ((1UL) << HPAGE_PMD_SHIFT)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index f73eafcaf4e9..57aa655ab27d 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -451,8 +451,8 @@ extern swp_entry_t get_swap_page_of_type(int);
extern int get_swap_pages(int n, bool cluster, swp_entry_t swp_entries[]);
extern int add_swap_count_continuation(swp_entry_t, gfp_t);
extern void swap_shmem_alloc(swp_entry_t);
-extern int swap_duplicate(swp_entry_t);
-extern int swapcache_prepare(swp_entry_t);
+extern int swap_duplicate(swp_entry_t *entry, bool cluster);
+extern int swapcache_prepare(swp_entry_t entry, bool cluster);
extern void swap_free(swp_entry_t);
extern void swapcache_free_entries(swp_entry_t *entries, int n);
extern int free_swap_and_cache(swp_entry_t);
@@ -510,7 +510,8 @@ static inline void show_swap_cache_info(void)
}

#define free_swap_and_cache(e) ({(is_migration_entry(e) || is_device_private_entry(e));})
-#define swapcache_prepare(e) ({(is_migration_entry(e) || is_device_private_entry(e));})
+#define swapcache_prepare(e, c) \
+ ({(is_migration_entry(e) || is_device_private_entry(e)); })

static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_mask)
{
@@ -521,7 +522,7 @@ static inline void swap_shmem_alloc(swp_entry_t swp)
{
}

-static inline int swap_duplicate(swp_entry_t swp)
+static inline int swap_duplicate(swp_entry_t *swp, bool cluster)
{
return 0;
}
diff --git a/mm/memory.c b/mm/memory.c
index 14578158ed20..88eed4c87f1a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -951,7 +951,7 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
swp_entry_t entry = pte_to_swp_entry(pte);

if (likely(!non_swap_entry(entry))) {
- if (swap_duplicate(entry) < 0)
+ if (swap_duplicate(&entry, false) < 0)
return entry.val;

/* make sure dst_mm is on swapoff's mmlist. */
diff --git a/mm/rmap.c b/mm/rmap.c
index 6db729dc4c50..5f45d6325c40 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1556,7 +1556,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
break;
}

- if (swap_duplicate(entry) < 0) {
+ if (swap_duplicate(&entry, false) < 0) {
set_pte_at(mm, address, pvmw.pte, pteval);
ret = false;
page_vma_mapped_walk_done(&pvmw);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 744d66666f82..075048032383 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -49,6 +49,9 @@ static bool swap_count_continued(struct swap_info_struct *, pgoff_t,
unsigned char);
static void free_swap_count_continuations(struct swap_info_struct *);
static sector_t map_swap_entry(swp_entry_t, struct block_device**);
+static int add_swap_count_continuation_locked(struct swap_info_struct *si,
+ unsigned long offset,
+ struct page *page);

DEFINE_SPINLOCK(swap_lock);
static unsigned int nr_swapfiles;
@@ -319,6 +322,11 @@ static inline void unlock_cluster_or_swap_info(struct swap_info_struct *si,
spin_unlock(&si->lock);
}

+static inline bool is_cluster_offset(unsigned long offset)
+{
+ return !(offset % SWAPFILE_CLUSTER);
+}
+
static inline bool cluster_list_empty(struct swap_cluster_list *list)
{
return cluster_is_null(&list->head);
@@ -1166,16 +1174,14 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
return NULL;
}

-static unsigned char __swap_entry_free(struct swap_info_struct *p,
- swp_entry_t entry, unsigned char usage)
+static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
+ struct swap_cluster_info *ci,
+ unsigned long offset,
+ unsigned char usage)
{
- struct swap_cluster_info *ci;
- unsigned long offset = swp_offset(entry);
unsigned char count;
unsigned char has_cache;

- ci = lock_cluster_or_swap_info(p, offset);
-
count = p->swap_map[offset];

has_cache = count & SWAP_HAS_CACHE;
@@ -1203,6 +1209,17 @@ static unsigned char __swap_entry_free(struct swap_info_struct *p,
usage = count | has_cache;
p->swap_map[offset] = usage ? : SWAP_HAS_CACHE;

+ return usage;
+}
+
+static unsigned char __swap_entry_free(struct swap_info_struct *p,
+ swp_entry_t entry, unsigned char usage)
+{
+ struct swap_cluster_info *ci;
+ unsigned long offset = swp_offset(entry);
+
+ ci = lock_cluster_or_swap_info(p, offset);
+ usage = __swap_entry_free_locked(p, ci, offset, usage);
unlock_cluster_or_swap_info(p, ci);

return usage;
@@ -3449,32 +3466,12 @@ void si_swapinfo(struct sysinfo *val)
spin_unlock(&swap_lock);
}

-/*
- * Verify that a swap entry is valid and increment its swap map count.
- *
- * Returns error code in following case.
- * - success -> 0
- * - swp_entry is invalid -> EINVAL
- * - swp_entry is migration entry -> EINVAL
- * - swap-cache reference is requested but there is already one. -> EEXIST
- * - swap-cache reference is requested but the entry is not used. -> ENOENT
- * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
- */
-static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+static int __swap_duplicate_locked(struct swap_info_struct *p,
+ unsigned long offset, unsigned char usage)
{
- struct swap_info_struct *p;
- struct swap_cluster_info *ci;
- unsigned long offset;
unsigned char count;
unsigned char has_cache;
- int err = -EINVAL;
-
- p = get_swap_device(entry);
- if (!p)
- goto out;
-
- offset = swp_offset(entry);
- ci = lock_cluster_or_swap_info(p, offset);
+ int err = 0;

count = p->swap_map[offset];

@@ -3484,12 +3481,11 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
*/
if (unlikely(swap_count(count) == SWAP_MAP_BAD)) {
err = -ENOENT;
- goto unlock_out;
+ goto out;
}

has_cache = count & SWAP_HAS_CACHE;
count &= ~SWAP_HAS_CACHE;
- err = 0;

if (usage == SWAP_HAS_CACHE) {

@@ -3516,11 +3512,39 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)

p->swap_map[offset] = count | has_cache;

-unlock_out:
+out:
+ return err;
+}
+
+/*
+ * Verify that a swap entry is valid and increment its swap map count.
+ *
+ * Returns error code in following case.
+ * - success -> 0
+ * - swp_entry is invalid -> EINVAL
+ * - swp_entry is migration entry -> EINVAL
+ * - swap-cache reference is requested but there is already one. -> EEXIST
+ * - swap-cache reference is requested but the entry is not used. -> ENOENT
+ * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
+ */
+static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+{
+ struct swap_info_struct *p;
+ struct swap_cluster_info *ci;
+ unsigned long offset;
+ int err = -EINVAL;
+
+ p = get_swap_device(entry);
+ if (!p)
+ goto out;
+
+ offset = swp_offset(entry);
+ ci = lock_cluster_or_swap_info(p, offset);
+ err = __swap_duplicate_locked(p, offset, usage);
unlock_cluster_or_swap_info(p, ci);
+
+ put_swap_device(p);
out:
- if (p)
- put_swap_device(p);
return err;
}

@@ -3533,6 +3557,81 @@ void swap_shmem_alloc(swp_entry_t entry)
__swap_duplicate(entry, SWAP_MAP_SHMEM);
}

+#ifdef CONFIG_THP_SWAP
+static int __swap_duplicate_cluster(swp_entry_t *entry, unsigned char usage)
+{
+ struct swap_info_struct *si;
+ struct swap_cluster_info *ci;
+ unsigned long offset;
+ unsigned char *map;
+ int i, err = 0;
+
+ si = get_swap_device(*entry);
+ if (!si) {
+ err = -EINVAL;
+ goto out;
+ }
+ offset = swp_offset(*entry);
+ ci = lock_cluster(si, offset);
+ if (cluster_is_free(ci)) {
+ err = -ENOENT;
+ goto unlock;
+ }
+ if (!cluster_is_huge(ci)) {
+ err = -ENOTDIR;
+ goto unlock;
+ }
+ VM_BUG_ON(!is_cluster_offset(offset));
+ VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER);
+ map = si->swap_map + offset;
+ if (usage == SWAP_HAS_CACHE) {
+ if (map[0] & SWAP_HAS_CACHE) {
+ err = -EEXIST;
+ goto unlock;
+ }
+ for (i = 0; i < SWAPFILE_CLUSTER; i++) {
+ VM_BUG_ON(map[i] & SWAP_HAS_CACHE);
+ map[i] |= SWAP_HAS_CACHE;
+ }
+ } else {
+ for (i = 0; i < SWAPFILE_CLUSTER; i++) {
+retry:
+ err = __swap_duplicate_locked(si, offset + i, 1);
+ if (err == -ENOMEM) {
+ struct page *page;
+
+ page = alloc_page(GFP_ATOMIC | __GFP_HIGHMEM);
+ err = add_swap_count_continuation_locked(
+ si, offset + i, page);
+ if (err) {
+ *entry = swp_entry(si->type, offset+i);
+ goto undup;
+ }
+ goto retry;
+ } else if (err)
+ goto undup;
+ }
+ cluster_set_count(ci, cluster_count(ci) + 1);
+ }
+unlock:
+ unlock_cluster(ci);
+ put_swap_device(si);
+out:
+ return err;
+undup:
+ for (i--; i >= 0; i--)
+ __swap_entry_free_locked(
+ si, ci, offset + i, 1);
+ goto unlock;
+}
+#else
+static inline int __swap_duplicate_cluster(swp_entry_t *entry,
+ unsigned char usage)
+{
+ return 0;
+}
+#endif
+
/*
* Increase reference count of swap entry by 1.
* Returns 0 for success, or -ENOMEM if a swap_count_continuation is required
@@ -3540,12 +3639,15 @@ void swap_shmem_alloc(swp_entry_t entry)
* if __swap_duplicate() fails for another reason (-EINVAL or -ENOENT), which
* might occur if a page table entry has got corrupted.
*/
-int swap_duplicate(swp_entry_t entry)
+int swap_duplicate(swp_entry_t *entry, bool cluster)
{
int err = 0;

- while (!err && __swap_duplicate(entry, 1) == -ENOMEM)
- err = add_swap_count_continuation(entry, GFP_ATOMIC);
+ if (thp_swap_supported() && cluster)
+ return __swap_duplicate_cluster(entry, 1);
+
+ while (!err && __swap_duplicate(*entry, 1) == -ENOMEM)
+ err = add_swap_count_continuation(*entry, GFP_ATOMIC);
return err;
}

@@ -3557,9 +3659,12 @@ int swap_duplicate(swp_entry_t entry)
* -EBUSY means there is a swap cache.
* Note: return code is different from swap_duplicate().
*/
-int swapcache_prepare(swp_entry_t entry)
+int swapcache_prepare(swp_entry_t entry, bool cluster)
{
- return __swap_duplicate(entry, SWAP_HAS_CACHE);
+ if (thp_swap_supported() && cluster)
+ return __swap_duplicate_cluster(&entry, SWAP_HAS_CACHE);
+ else
+ return __swap_duplicate(entry, SWAP_HAS_CACHE);
}

struct swap_info_struct *swp_swap_info(swp_entry_t entry)
@@ -3589,51 +3694,13 @@ pgoff_t __page_file_index(struct page *page)
}
EXPORT_SYMBOL_GPL(__page_file_index);

-/*
- * add_swap_count_continuation - called when a swap count is duplicated
- * beyond SWAP_MAP_MAX, it allocates a new page and links that to the entry's
- * page of the original vmalloc'ed swap_map, to hold the continuation count
- * (for that entry and for its neighbouring PAGE_SIZE swap entries). Called
- * again when count is duplicated beyond SWAP_MAP_MAX * SWAP_CONT_MAX, etc.
- *
- * These continuation pages are seldom referenced: the common paths all work
- * on the original swap_map, only referring to a continuation page when the
- * low "digit" of a count is incremented or decremented through SWAP_MAP_MAX.
- *
- * add_swap_count_continuation(, GFP_ATOMIC) can be called while holding
- * page table locks; if it fails, add_swap_count_continuation(, GFP_KERNEL)
- * can be called after dropping locks.
- */
-int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
+static int add_swap_count_continuation_locked(struct swap_info_struct *si,
+ unsigned long offset,
+ struct page *page)
{
- struct swap_info_struct *si;
- struct swap_cluster_info *ci;
struct page *head;
- struct page *page;
struct page *list_page;
- pgoff_t offset;
unsigned char count;
- int ret = 0;
-
- /*
- * When debugging, it's easier to use __GFP_ZERO here; but it's better
- * for latency not to zero a page while GFP_ATOMIC and holding locks.
- */
- page = alloc_page(gfp_mask | __GFP_HIGHMEM);
-
- si = get_swap_device(entry);
- if (!si) {
- /*
- * An acceptable race has occurred since the failing
- * __swap_duplicate(): the swap device may be swapoff
- */
- goto outer;
- }
- spin_lock(&si->lock);
-
- offset = swp_offset(entry);
-
- ci = lock_cluster(si, offset);

count = si->swap_map[offset] & ~SWAP_HAS_CACHE;

@@ -3643,13 +3710,11 @@ int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
* will race to add swap count continuation: we need to avoid
* over-provisioning.
*/
- goto out;
+ return 0;
}

- if (!page) {
- ret = -ENOMEM;
- goto out;
- }
+ if (!page)
+ return -ENOMEM;

/*
* We are fortunate that although vmalloc_to_page uses pte_offset_map,
@@ -3697,7 +3762,57 @@ int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
page = NULL; /* now it's attached, don't free it */
out_unlock_cont:
spin_unlock(&si->cont_lock);
-out:
+ if (page)
+ __free_page(page);
+ return 0;
+}
+
+/*
+ * add_swap_count_continuation - called when a swap count is duplicated
+ * beyond SWAP_MAP_MAX, it allocates a new page and links that to the entry's
+ * page of the original vmalloc'ed swap_map, to hold the continuation count
+ * (for that entry and for its neighbouring PAGE_SIZE swap entries). Called
+ * again when count is duplicated beyond SWAP_MAP_MAX * SWAP_CONT_MAX, etc.
+ *
+ * These continuation pages are seldom referenced: the common paths all work
+ * on the original swap_map, only referring to a continuation page when the
+ * low "digit" of a count is incremented or decremented through SWAP_MAP_MAX.
+ *
+ * add_swap_count_continuation(, GFP_ATOMIC) can be called while holding
+ * page table locks; if it fails, add_swap_count_continuation(, GFP_KERNEL)
+ * can be called after dropping locks.
+ */
+int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
+{
+ struct swap_info_struct *si;
+ struct swap_cluster_info *ci;
+ struct page *page;
+ unsigned long offset;
+ int ret = 0;
+
+ /*
+ * When debugging, it's easier to use __GFP_ZERO here; but it's better
+ * for latency not to zero a page while GFP_ATOMIC and holding locks.
+ */
+ page = alloc_page(gfp_mask | __GFP_HIGHMEM);
+
+ si = get_swap_device(entry);
+ if (!si) {
+ /*
+ * An acceptable race has occurred since the failing
+ * __swap_duplicate(): the swap device may be swapoff
+ */
+ goto outer;
+ }
+ spin_lock(&si->lock);
+
+ offset = swp_offset(entry);
+
+ ci = lock_cluster(si, offset);
+
+ ret = add_swap_count_continuation_locked(si, offset, page);
+ page = NULL;
+
unlock_cluster(ci);
spin_unlock(&si->lock);
put_swap_device(si);
--
2.16.1


2018-06-01 06:15:25

by Naoya Horiguchi

[permalink] [raw]
Subject: Re: [PATCH -mm -V3 00/21] mm, THP, swap: Swapout/swapin THP in one piece

On Wed, May 23, 2018 at 04:26:04PM +0800, Huang, Ying wrote:
> From: Huang Ying <[email protected]>
>
> Hi, Andrew, could you help me to check whether the overall design is
> reasonable?
>
> Hi, Hugh, Shaohua, Minchan and Rik, could you help me to review the
> swap part of the patchset? Especially [02/21], [03/21], [04/21],
> [05/21], [06/21], [07/21], [08/21], [09/21], [10/21], [11/21],
> [12/21], [20/21].
>
> Hi, Andrea and Kirill, could you help me to review the THP part of the
> patchset? Especially [01/21], [07/21], [09/21], [11/21], [13/21],
> [15/21], [16/21], [17/21], [18/21], [19/21], [20/21], [21/21].
>
> Hi, Johannes and Michal, could you help me to review the cgroup part
> of the patchset? Especially [14/21].
>
> And for all, Any comment is welcome!

Hi Ying Huang,
I've read through this series and found no issues.

It seems that THP swapout never happens if swap devices are backed by
rotational storage. I guess that's because this feature depends on the
swap cluster searching algorithm, which only supports non-rotational
storage.

I think this limitation is OK because non-rotational storage is a
better fit for a swap device (most future users will use it). But I
think it's better to document the limitation somewhere, because the
swap cluster is an in-kernel concept and we can't assume that end
users know about it.

Thanks,
Naoya Horiguchi

>
> This patchset is based on the 2018-05-18 head of mmotm/master.
>
> This is the final step of THP (Transparent Huge Page) swap
> optimization. After the first and second step, the splitting huge
> page is delayed from almost the first step of swapout to after swapout
> has been finished. In this step, we avoid splitting THP for swapout
> and swapout/swapin the THP in one piece.
>
> We tested the patchset with vm-scalability benchmark swap-w-seq test
> case, with 16 processes. The test case forks 16 processes. Each
> process allocates large anonymous memory range, and writes it from
> begin to end for 8 rounds. The first round will swapout, while the
> remaining rounds will swapin and swapout. The test is done on a Xeon
> E5 v3 system, the swap device used is a RAM simulated PMEM (persistent
> memory) device. The test result is as follow,
>
> base optimized
> ---------------- --------------------------
> %stddev %change %stddev
> \ | \
> 1417897 ± 2% +992.8% 15494673 vm-scalability.throughput
> 1020489 ± 4% +1091.2% 12156349 vmstat.swap.si
> 1255093 ± 3% +940.3% 13056114 vmstat.swap.so
> 1259769 ± 7% +1818.3% 24166779 meminfo.AnonHugePages
> 28021761 -10.7% 25018848 ± 2% meminfo.AnonPages
> 64080064 ± 4% -95.6% 2787565 ± 33% interrupts.CAL:Function_call_interrupts
> 13.91 ± 5% -13.8 0.10 ± 27% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
>
> Where, the score of benchmark (bytes written per second) improved
> 992.8%. The swapout/swapin throughput improved 1008% (from about
> 2.17GB/s to 24.04GB/s). The performance difference is huge. In base
> kernel, for the first round of writing, the THP is swapout and split,
> so in the remaining rounds, there is only normal page swapin and
> swapout. While in optimized kernel, the THP is kept after first
> swapout, so THP swapin and swapout is used in the remaining rounds.
> This shows the key benefit to swapout/swapin THP in one piece, the THP
> will be kept instead of being split. meminfo information verified
> this, in base kernel only 4.5% of anonymous page are THP during the
> test, while in optimized kernel, that is 96.6%. The TLB flushing IPI
> (represented as interrupts.CAL:Function_call_interrupts) reduced
> 95.6%, while cycles for spinlock reduced from 13.9% to 0.1%. These
> are performance benefit of THP swapout/swapin too.
>
> Below is the description for all steps of THP swap optimization.
>
> Recently, the performance of the storage devices improved so fast that
> we cannot saturate the disk bandwidth with single logical CPU when do
> page swapping even on a high-end server machine. Because the
> performance of the storage device improved faster than that of single
> logical CPU. And it seems that the trend will not change in the near
> future. On the other hand, the THP becomes more and more popular
> because of increased memory size. So it becomes necessary to optimize
> THP swap performance.
>
> The advantages to swapout/swapin a THP in one piece include:
>
> - Batch various swap operations for the THP. Many operations need to
> be done once per THP instead of per normal page, for example,
> allocating/freeing the swap space, writing/reading the swap space,
> flushing TLB, page fault, etc. This will improve the performance of
> the THP swap greatly.
>
> - The THP swap space read/write will be large sequential IO (2M on
> x86_64). It is particularly helpful for the swapin, which are
> usually 4k random IO. This will improve the performance of the THP
> swap too.
>
> - It will help the memory fragmentation, especially when the THP is
> heavily used by the applications. The THP order pages will be free
> up after THP swapout.
>
> - It will improve the THP utilization on the system with the swap
> turned on. Because the speed for khugepaged to collapse the normal
> pages into the THP is quite slow. After the THP is split during the
> swapout, it will take quite long time for the normal pages to
> collapse back into the THP after being swapin. The high THP
> utilization helps the efficiency of the page based memory management
> too.
>
> There are some concerns regarding THP swapin, mainly because possible
> enlarged read/write IO size (for swapout/swapin) may put more overhead
> on the storage device. To deal with that, the THP swapin is turned on
> only when necessary. A new sysfs interface:
> /sys/kernel/mm/transparent_hugepage/swapin_enabled is added to
> configure it. It uses "always/never/madvise" logic, to be turned on
> globally, turned off globally, or turned on only for VMA with
> MADV_HUGEPAGE, etc.
>
> Changelog
> ---------
>
> v3:
>
> - Rebased on 5/18 HEAD of mmotm/master
>
> - Fixed a build bug, Thanks 0-Day!
>
> v2:
>
> - Fixed several build bugs, Thanks 0-Day!
>
> - Improved documentation as suggested by Randy Dunlap.
>
> - Fixed several bugs in reading huge swap cluster
>

2018-06-01 07:05:02

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH -mm -V3 00/21] mm, THP, swap: Swapout/swapin THP in one piece

Naoya Horiguchi <[email protected]> writes:

> On Wed, May 23, 2018 at 04:26:04PM +0800, Huang, Ying wrote:
>> From: Huang Ying <[email protected]>
>>
>> Hi, Andrew, could you help me to check whether the overall design is
>> reasonable?
>>
>> Hi, Hugh, Shaohua, Minchan and Rik, could you help me to review the
>> swap part of the patchset? Especially [02/21], [03/21], [04/21],
>> [05/21], [06/21], [07/21], [08/21], [09/21], [10/21], [11/21],
>> [12/21], [20/21].
>>
>> Hi, Andrea and Kirill, could you help me to review the THP part of the
>> patchset? Especially [01/21], [07/21], [09/21], [11/21], [13/21],
>> [15/21], [16/21], [17/21], [18/21], [19/21], [20/21], [21/21].
>>
>> Hi, Johannes and Michal, could you help me to review the cgroup part
>> of the patchset? Especially [14/21].
>>
>> And for all, Any comment is welcome!
>
> Hi Ying Huang,
> I've read through this series and found no issues.

Thanks a lot for your review!

> It seems that THP swapout never happens if swap devices are backed by
> rotational storage. I guess that's because this feature depends on the
> swap cluster searching algorithm, which only supports non-rotational
> storage.
>
> I think this limitation is OK because non-rotational storage is a
> better fit for a swap device (most future users will use it). But I
> think it's better to document the limitation somewhere, because the
> swap cluster is an in-kernel concept and we can't assume that end
> users know about it.

Yes. I will try to document it somewhere.

Best Regards,
Huang, Ying

> Thanks,
> Naoya Horiguchi
>
>>
>> This patchset is based on the 2018-05-18 head of mmotm/master.
>>
>> This is the final step of THP (Transparent Huge Page) swap
>> optimization. After the first and second step, the splitting huge
>> page is delayed from almost the first step of swapout to after swapout
>> has been finished. In this step, we avoid splitting THP for swapout
>> and swapout/swapin the THP in one piece.
>>
>> We tested the patchset with vm-scalability benchmark swap-w-seq test
>> case, with 16 processes. The test case forks 16 processes. Each
>> process allocates large anonymous memory range, and writes it from
>> begin to end for 8 rounds. The first round will swapout, while the
>> remaining rounds will swapin and swapout. The test is done on a Xeon
>> E5 v3 system, the swap device used is a RAM simulated PMEM (persistent
>> memory) device. The test result is as follow,
>>
>> base optimized
>> ---------------- --------------------------
>> %stddev %change %stddev
>> \ | \
>> 1417897 ± 2% +992.8% 15494673 vm-scalability.throughput
>> 1020489 ± 4% +1091.2% 12156349 vmstat.swap.si
>> 1255093 ± 3% +940.3% 13056114 vmstat.swap.so
>> 1259769 ± 7% +1818.3% 24166779 meminfo.AnonHugePages
>> 28021761 -10.7% 25018848 ± 2% meminfo.AnonPages
>> 64080064 ± 4% -95.6% 2787565 ± 33% interrupts.CAL:Function_call_interrupts
>> 13.91 ± 5% -13.8 0.10 ± 27% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
>>
>> Where, the score of benchmark (bytes written per second) improved
>> 992.8%. The swapout/swapin throughput improved 1008% (from about
>> 2.17GB/s to 24.04GB/s). The performance difference is huge. In base
>> kernel, for the first round of writing, the THP is swapout and split,
>> so in the remaining rounds, there is only normal page swapin and
>> swapout. While in optimized kernel, the THP is kept after first
>> swapout, so THP swapin and swapout is used in the remaining rounds.
>> This shows the key benefit to swapout/swapin THP in one piece, the THP
>> will be kept instead of being split. meminfo information verified
>> this, in base kernel only 4.5% of anonymous page are THP during the
>> test, while in optimized kernel, that is 96.6%. The TLB flushing IPI
>> (represented as interrupts.CAL:Function_call_interrupts) reduced
>> 95.6%, while cycles for spinlock reduced from 13.9% to 0.1%. These
>> are performance benefit of THP swapout/swapin too.
>>
>> Below is the description for all steps of THP swap optimization.
>>
>> Recently, the performance of the storage devices improved so fast that
>> we cannot saturate the disk bandwidth with single logical CPU when do
>> page swapping even on a high-end server machine. Because the
>> performance of the storage device improved faster than that of single
>> logical CPU. And it seems that the trend will not change in the near
>> future. On the other hand, the THP becomes more and more popular
>> because of increased memory size. So it becomes necessary to optimize
>> THP swap performance.
>>
>> The advantages to swapout/swapin a THP in one piece include:
>>
>> - Batch various swap operations for the THP. Many operations need to
>> be done once per THP instead of per normal page, for example,
>> allocating/freeing the swap space, writing/reading the swap space,
>> flushing TLB, page fault, etc. This will improve the performance of
>> the THP swap greatly.
>>
>> - The THP swap space read/write will be large sequential IO (2M on
>> x86_64). It is particularly helpful for the swapin, which are
>> usually 4k random IO. This will improve the performance of the THP
>> swap too.
>>
>> - It will help the memory fragmentation, especially when the THP is
>> heavily used by the applications. The THP order pages will be free
>> up after THP swapout.
>>
>> - It will improve the THP utilization on the system with the swap
>> turned on. Because the speed for khugepaged to collapse the normal
>> pages into the THP is quite slow. After the THP is split during the
>> swapout, it will take quite long time for the normal pages to
>> collapse back into the THP after being swapin. The high THP
>> utilization helps the efficiency of the page based memory management
>> too.
>>
>> There are some concerns regarding THP swapin, mainly because possible
>> enlarged read/write IO size (for swapout/swapin) may put more overhead
>> on the storage device. To deal with that, the THP swapin is turned on
>> only when necessary. A new sysfs interface:
>> /sys/kernel/mm/transparent_hugepage/swapin_enabled is added to
>> configure it. It uses "always/never/madvise" logic, to be turned on
>> globally, turned off globally, or turned on only for VMA with
>> MADV_HUGEPAGE, etc.
>>
>> Changelog
>> ---------
>>
>> v3:
>>
>> - Rebased on 5/18 HEAD of mmotm/master
>>
>> - Fixed a build bug, Thanks 0-Day!
>>
>> v2:
>>
>> - Fixed several build bugs, Thanks 0-Day!
>>
>> - Improved documentation as suggested by Randy Dunlap.
>>
>> - Fixed several bugs in reading huge swap cluster
>>

2018-06-04 18:07:54

by Daniel Jordan

[permalink] [raw]
Subject: Re: [PATCH -mm -V3 00/21] mm, THP, swap: Swapout/swapin THP in one piece

On Wed, May 23, 2018 at 04:26:04PM +0800, Huang, Ying wrote:
> And for all, Any comment is welcome!
>
> This patchset is based on the 2018-05-18 head of mmotm/master.

Trying to review this, but it doesn't apply to mmotm-2018-05-18-16-44;
git fails on patch 10:

Applying: mm, THP, swap: Support to count THP swapin and its fallback
error: Documentation/vm/transhuge.rst: does not exist in index
Patch failed at 0010 mm, THP, swap: Support to count THP swapin and its fallback

Sure enough, this tag has Documentation/vm/transhuge.txt but not the .rst
version. Was this the tag you meant? If so, did you pull in some of Mike
Rapoport's doc changes on top?

> base optimized
> ---------------- --------------------------
> %stddev %change %stddev
> \ | \
> 1417897 ± 2% +992.8% 15494673 vm-scalability.throughput
> 1020489 ± 4% +1091.2% 12156349 vmstat.swap.si
> 1255093 ± 3% +940.3% 13056114 vmstat.swap.so
> 1259769 ± 7% +1818.3% 24166779 meminfo.AnonHugePages
> 28021761 -10.7% 25018848 ± 2% meminfo.AnonPages
> 64080064 ± 4% -95.6% 2787565 ± 33% interrupts.CAL:Function_call_interrupts
> 13.91 ± 5% -13.8 0.10 ± 27% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
>
...snip...
> test, while in optimized kernel, that is 96.6%. The TLB flushing IPI
> (represented as interrupts.CAL:Function_call_interrupts) reduced
> 95.6%, while cycles for spinlock reduced from 13.9% to 0.1%. These
> are performance benefit of THP swapout/swapin too.

Which spinlocks are we spending less time on?

2018-06-05 04:30:53

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH -mm -V3 00/21] mm, THP, swap: Swapout/swapin THP in one piece

Daniel Jordan <[email protected]> writes:

> On Wed, May 23, 2018 at 04:26:04PM +0800, Huang, Ying wrote:
>> And for all, Any comment is welcome!
>>
>> This patchset is based on the 2018-05-18 head of mmotm/master.
>
> Trying to review this and it doesn't apply to mmotm-2018-05-18-16-44. git
> fails on patch 10:
>
> Applying: mm, THP, swap: Support to count THP swapin and its fallback
> error: Documentation/vm/transhuge.rst: does not exist in index
> Patch failed at 0010 mm, THP, swap: Support to count THP swapin and its fallback
>
> Sure enough, this tag has Documentation/vm/transhuge.txt but not the .rst
> version. Was this the tag you meant? If so did you pull in some of Mike
> Rapoport's doc changes on top?

I use the mmotm tree at

git://git.cmpxchg.org/linux-mmotm.git

Maybe you are using the other one?

>> base optimized
>> ---------------- --------------------------
>> %stddev %change %stddev
>> \ | \
>> 1417897 2% +992.8% 15494673 vm-scalability.throughput
>> 1020489 4% +1091.2% 12156349 vmstat.swap.si
>> 1255093 3% +940.3% 13056114 vmstat.swap.so
>> 1259769 7% +1818.3% 24166779 meminfo.AnonHugePages
>> 28021761 -10.7% 25018848 2% meminfo.AnonPages
>> 64080064 4% -95.6% 2787565 33% interrupts.CAL:Function_call_interrupts
>> 13.91 5% -13.8 0.10 27% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
>>
> ...snip...
>> test, while in optimized kernel, that is 96.6%. The TLB flushing IPI
>> (represented as interrupts.CAL:Function_call_interrupts) reduced
>> 95.6%, while cycles for spinlock reduced from 13.9% to 0.1%. These
>> are performance benefit of THP swapout/swapin too.
>
> Which spinlocks are we spending less time on?

"perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.mem_cgroup_commit_charge.do_swap_page.__handle_mm_fault": 4.39,
"perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.drain_pages_zone.drain_pages": 1.53,
"perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_slowpath.__alloc_pages_nodemask": 1.34,
"perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.swapcache_free_entries.free_swap_slot.do_swap_page": 1.02,
"perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_node_memcg.shrink_node": 0.61,
"perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_active_list.shrink_node_memcg.shrink_node": 0.54,

Best Regards,
Huang, Ying

2018-06-05 16:40:33

by Daniel Jordan

[permalink] [raw]
Subject: Re: [PATCH -mm -V3 00/21] mm, THP, swap: Swapout/swapin THP in one piece

On Tue, Jun 05, 2018 at 12:30:13PM +0800, Huang, Ying wrote:
> Daniel Jordan <[email protected]> writes:
>
> > On Wed, May 23, 2018 at 04:26:04PM +0800, Huang, Ying wrote:
> >> And for all, Any comment is welcome!
> >>
> >> This patchset is based on the 2018-05-18 head of mmotm/master.
> >
> > Trying to review this and it doesn't apply to mmotm-2018-05-18-16-44. git
> > fails on patch 10:
> >
> > Applying: mm, THP, swap: Support to count THP swapin and its fallback
> > error: Documentation/vm/transhuge.rst: does not exist in index
> > Patch failed at 0010 mm, THP, swap: Support to count THP swapin and its fallback
> >
> > Sure enough, this tag has Documentation/vm/transhuge.txt but not the .rst
> > version. Was this the tag you meant? If so did you pull in some of Mike
> > Rapoport's doc changes on top?
>
> I use the mmotm tree at
>
> git://git.cmpxchg.org/linux-mmotm.git
>
> Maybe you are using the other one?

Yes I was, and I didn't know about this other tree, thanks! Working my way
through your changes now.

>
> >> base optimized
> >> ---------------- --------------------------
> >> %stddev %change %stddev
> >> \ | \
> >> 1417897 2% +992.8% 15494673 vm-scalability.throughput
> >> 1020489 4% +1091.2% 12156349 vmstat.swap.si
> >> 1255093 3% +940.3% 13056114 vmstat.swap.so
> >> 1259769 7% +1818.3% 24166779 meminfo.AnonHugePages
> >> 28021761 -10.7% 25018848 2% meminfo.AnonPages
> >> 64080064 4% -95.6% 2787565 33% interrupts.CAL:Function_call_interrupts
> >> 13.91 5% -13.8 0.10 27% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
> >>
> > ...snip...
> >> test, while in optimized kernel, that is 96.6%. The TLB flushing IPI
> >> (represented as interrupts.CAL:Function_call_interrupts) reduced
> >> 95.6%, while cycles for spinlock reduced from 13.9% to 0.1%. These
> >> are performance benefit of THP swapout/swapin too.
> >
> > Which spinlocks are we spending less time on?
>
> "perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.mem_cgroup_commit_charge.do_swap_page.__handle_mm_fault": 4.39,
> "perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.drain_pages_zone.drain_pages": 1.53,
> "perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_slowpath.__alloc_pages_nodemask": 1.34,
> "perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.swapcache_free_entries.free_swap_slot.do_swap_page": 1.02,
> "perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_node_memcg.shrink_node": 0.61,
> "perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_active_list.shrink_node_memcg.shrink_node": 0.54,

Nice, seems like lru_lock followed by zone->lock are the main improvements.

2018-06-11 20:54:21

by Daniel Jordan

[permalink] [raw]
Subject: Re: [PATCH -mm -V3 03/21] mm, THP, swap: Support PMD swap mapping in swap_duplicate()

Hi,

The series up to and including this patch doesn't build. For this patch we
need:

diff --git a/mm/swap_state.c b/mm/swap_state.c
index c6b3eab73fde..2f2d07627113 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -433,7 +433,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
/*
* Swap entry may have been freed since our caller observed it.
*/
- err = swapcache_prepare(entry);
+ err = swapcache_prepare(entry, false);
if (err == -EEXIST) {
radix_tree_preload_end();
/*


On Wed, May 23, 2018 at 04:26:07PM +0800, Huang, Ying wrote:
> @@ -3516,11 +3512,39 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)

Two comments about this part of __swap_duplicate as long as you're moving it to
another function:

} else if (count || has_cache) {

if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX) /* #1 */
count += usage;
else if ((count & ~COUNT_CONTINUED) > SWAP_MAP_MAX) /* #2 */
err = -EINVAL;

#1: __swap_duplicate_locked might use

VM_BUG_ON(usage != SWAP_HAS_CACHE && usage != 1);

to document the unstated assumption that usage is 1 (otherwise count could
overflow).

#2: We've masked off SWAP_HAS_CACHE and COUNT_CONTINUED, and already checked
for SWAP_MAP_BAD, so I think condition #2 always fails and can just be removed.
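
To make the overflow concern concrete, here is a tiny standalone model
of the usage == 1 increment path (not kernel code; the flag values are
copied from the kernel's swap definitions, everything else is
simplified):

#include <assert.h>
#include <stdio.h>

#define SWAP_HAS_CACHE  0x40    /* swap slot is cached */
#define COUNT_CONTINUED 0x80    /* count continued in a continuation page */
#define SWAP_MAP_MAX    0x3e    /* largest count held in the map byte itself */

/* Model of the count update; returns 0, or -1 when a continuation would be needed. */
static int dup_locked_model(unsigned char *slot, unsigned char usage)
{
        unsigned char count, has_cache;

        /* the suggested guard: usage must be 1 or SWAP_HAS_CACHE */
        assert(usage == 1 || usage == SWAP_HAS_CACHE);

        count = *slot;
        has_cache = count & SWAP_HAS_CACHE;
        count &= ~SWAP_HAS_CACHE;

        if (usage == SWAP_HAS_CACHE) {
                assert(!has_cache);     /* -EEXIST in the kernel */
                has_cache = SWAP_HAS_CACHE;
        } else if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX) {
                count += usage;         /* with usage == 1 this never jumps past SWAP_MAP_MAX */
        } else {
                return -1;              /* kernel would return -ENOMEM and add a continuation */
        }

        *slot = count | has_cache;
        return 0;
}

int main(void)
{
        unsigned char slot = 1;         /* one existing mapping */

        assert(dup_locked_model(&slot, 1) == 0);
        assert(dup_locked_model(&slot, SWAP_HAS_CACHE) == 0);
        printf("slot = %#x\n", slot);   /* 0x42: count 2 plus SWAP_HAS_CACHE */
        return 0;
}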

> +#ifdef CONFIG_THP_SWAP
> +static int __swap_duplicate_cluster(swp_entry_t *entry, unsigned char usage)
...
> + } else {
> + for (i = 0; i < SWAPFILE_CLUSTER; i++) {
> +retry:
> + err = __swap_duplicate_locked(si, offset + i, 1);

I guess usage is assumed to be 1 at this point (__swap_duplicate_locked makes
the same assumption). Maybe make this explicit with

err = __swap_duplicate_locked(si, offset + i, usage);

, use 'usage' in cluster_set_count and __swap_entry_free too, and then
earlier have a

VM_BUG_ON(usage != SWAP_HAS_CACHE && usage != 1);

?

> +#else
> +static inline int __swap_duplicate_cluster(swp_entry_t *entry,

This doesn't need inline.


Not related to your changes, but while we're here, the comment with
SWAP_HAS_CONT in swap_count() could be deleted: I don't think there ever was a
SWAP_HAS_CONT.

The rest looks ok up to this point.

2018-06-12 01:25:07

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH -mm -V3 03/21] mm, THP, swap: Support PMD swap mapping in swap_duplicate()

Hi, Daniel,

Thanks for your effort to review this series.

Daniel Jordan <[email protected]> writes:

> Hi,
>
> The series up to and including this patch doesn't build. For this patch we
> need:
>
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index c6b3eab73fde..2f2d07627113 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -433,7 +433,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> /*
> * Swap entry may have been freed since our caller observed it.
> */
> - err = swapcache_prepare(entry);
> + err = swapcache_prepare(entry, false);
> if (err == -EEXIST) {
> radix_tree_preload_end();
> /*

Thanks for pointing this out! Will change in the next version.

>
> On Wed, May 23, 2018 at 04:26:07PM +0800, Huang, Ying wrote:
>> @@ -3516,11 +3512,39 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
>
> Two comments about this part of __swap_duplicate as long as you're moving it to
> another function:
>
> } else if (count || has_cache) {
>
> if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX) /* #1 */
> count += usage;
> else if ((count & ~COUNT_CONTINUED) > SWAP_MAP_MAX) /* #2 */
> err = -EINVAL;
>
> #1: __swap_duplicate_locked might use
>
> VM_BUG_ON(usage != SWAP_HAS_CACHE && usage != 1);
>
> to document the unstated assumption that usage is 1 (otherwise count could
> overflow).

Sounds good. Will do this.

> #2: We've masked off SWAP_HAS_CACHE and COUNT_CONTINUED, and already checked
> for SWAP_MAP_BAD, so I think condition #2 always fails and can just be removed.

I think this is used to catch software bugs. For example,
SWAP_MAP_SHMEM will yield true here.

>> +#ifdef CONFIG_THP_SWAP
>> +static int __swap_duplicate_cluster(swp_entry_t *entry, unsigned char usage)
> ...
>> + } else {
>> + for (i = 0; i < SWAPFILE_CLUSTER; i++) {
>> +retry:
>> + err = __swap_duplicate_locked(si, offset + i, 1);
>
> I guess usage is assumed to be 1 at this point (__swap_duplicate_locked makes
> the same assumption). Maybe make this explicit with
>
> err = __swap_duplicate_locked(si, offset + i, usage);
>
> , use 'usage' in cluster_set_count and __swap_entry_free too, and then
> earlier have a
>
> VM_BUG_ON(usage != SWAP_HAS_CACHE && usage != 1);
>
> ?

Yes. I will fix this. And we can just check it in
__swap_duplicate_locked() and all these will be covered.

>> +#else
>> +static inline int __swap_duplicate_cluster(swp_entry_t *entry,
>
> This doesn't need inline.

Why not? This is just a one line stub.

> Not related to your changes, but while we're here, the comment with
> SWAP_HAS_CONT in swap_count() could be deleted: I don't think there ever was a
> SWAP_HAS_CONT.

Yes. We should correct this. Since this should go in a separate patch,
would you mind submitting a patch to fix it?

> The rest looks ok up to this point.

Thanks!

Best Regards,
Huang, Ying

2018-06-12 03:17:55

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH -mm -V3 03/21] mm, THP, swap: Support PMD swap mapping in swap_duplicate()

"Huang, Ying" <[email protected]> writes:
>> On Wed, May 23, 2018 at 04:26:07PM +0800, Huang, Ying wrote:
>>> @@ -3516,11 +3512,39 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
>>
>> Two comments about this part of __swap_duplicate as long as you're moving it to
>> another function:
>>
>> } else if (count || has_cache) {
>>
>> if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX) /* #1 */
>> count += usage;
>> else if ((count & ~COUNT_CONTINUED) > SWAP_MAP_MAX) /* #2 */
>> err = -EINVAL;
>>
>> #1: __swap_duplicate_locked might use
>>
>> VM_BUG_ON(usage != SWAP_HAS_CACHE && usage != 1);
>>
>> to document the unstated assumption that usage is 1 (otherwise count could
>> overflow).
>
> Sounds good. Will do this.

I found that the usage parameter of __swap_duplicate() could be SWAP_MAP_SHMEM too.
We can improve the parameter checking, but that does not appear to belong
to this series.

Best Regards,
Huang, Ying

2018-06-12 12:06:00

by Daniel Jordan

[permalink] [raw]
Subject: Re: [PATCH -mm -V3 03/21] mm, THP, swap: Support PMD swap mapping in swap_duplicate()

On Tue, Jun 12, 2018 at 09:23:19AM +0800, Huang, Ying wrote:
> Daniel Jordan <[email protected]> writes:
> > #2: We've masked off SWAP_HAS_CACHE and COUNT_CONTINUED, and already checked
> > for SWAP_MAP_BAD, so I think condition #2 always fails and can just be removed.
>
> I think this is used to catch software bugs. For example,
> SWAP_MAP_SHMEM will yield true here.

So it does! And so __swap_duplicate returns -EINVAL in that case, which
swap_shmem_alloc just ignores. Confusing, and an explicit check for
SWAP_MAP_SHMEM would be cleaner, but why fix what isn't broken.
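
To spell out the arithmetic (constant values assumed from <linux/swap.h>),
a quick check one can compile and run:

#include <stdio.h>

#define SWAP_HAS_CACHE	0x40
#define COUNT_CONTINUED	0x80
#define SWAP_MAP_MAX	0x3e
#define SWAP_MAP_BAD	0x3f
#define SWAP_MAP_SHMEM	0xbf

int main(void)
{
	unsigned char count = SWAP_MAP_SHMEM;

	/* Not caught by the earlier SWAP_MAP_BAD check ... */
	printf("%d\n", (count & ~SWAP_HAS_CACHE) == SWAP_MAP_BAD);	/* 0 */

	count &= ~SWAP_HAS_CACHE;	/* as __swap_duplicate does */

	/* ... but condition #2 fires, so the caller gets -EINVAL. */
	printf("%d\n", (count & ~COUNT_CONTINUED) > SWAP_MAP_MAX);	/* 1 */
	return 0;
}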

>
> >> +#ifdef CONFIG_THP_SWAP
> >> +static int __swap_duplicate_cluster(swp_entry_t *entry, unsigned char usage)
> > ...
> >> + } else {
> >> + for (i = 0; i < SWAPFILE_CLUSTER; i++) {
> >> +retry:
> >> + err = __swap_duplicate_locked(si, offset + i, 1);
> >
> > I guess usage is assumed to be 1 at this point (__swap_duplicate_locked makes
> > the same assumption). Maybe make this explicit with
> >
> > err = __swap_duplicate_locked(si, offset + i, usage);
> >
> > , use 'usage' in cluster_set_count and __swap_entry_free too, and then
> > earlier have a
> >
> > VM_BUG_ON(usage != SWAP_HAS_CACHE && usage != 1);
> >
> > ?
>
> Yes. I will fix this. And we can just check it in
> __swap_duplicate_locked() and all these will be covered.

I'll respond to your other mail.

> > Not related to your changes, but while we're here, the comment with
> > SWAP_HAS_CONT in swap_count() could be deleted: I don't think there ever was a
> > SWAP_HAS_CONT.
>
> Yes. We should correct this. Since this should go in a separate patch,
> would you mind submitting a patch to fix it?

Sure, I'll do that.

Daniel

2018-06-12 12:08:10

by Daniel Jordan

[permalink] [raw]
Subject: Re: [PATCH -mm -V3 03/21] mm, THP, swap: Support PMD swap mapping in swap_duplicate()

On Tue, Jun 12, 2018 at 11:15:28AM +0800, Huang, Ying wrote:
> "Huang, Ying" <[email protected]> writes:
> >> On Wed, May 23, 2018 at 04:26:07PM +0800, Huang, Ying wrote:
> >>> @@ -3516,11 +3512,39 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
> >>
> >> Two comments about this part of __swap_duplicate as long as you're moving it to
> >> another function:
> >>
> >> } else if (count || has_cache) {
> >>
> >> if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX) /* #1 */
> >> count += usage;
> >> else if ((count & ~COUNT_CONTINUED) > SWAP_MAP_MAX) /* #2 */
> >> err = -EINVAL;
> >>
> >> #1: __swap_duplicate_locked might use
> >>
> >> VM_BUG_ON(usage != SWAP_HAS_CACHE && usage != 1);
> >>
> >> to document the unstated assumption that usage is 1 (otherwise count could
> >> overflow).
> >
> > Sounds good. Will do this.
>
> I found that the usage parameter of __swap_duplicate() could be SWAP_MAP_SHMEM too.
> We can improve the parameter checking, but that does not appear to belong
> to this series.

Fair enough, I'll see about adding this along with the other patch I'm sending.

Daniel

2018-06-12 21:46:24

by Daniel Jordan

[permalink] [raw]
Subject: Re: [PATCH -mm -V3 03/21] mm, THP, swap: Support PMD swap mapping in swap_duplicate()

On Tue, Jun 12, 2018 at 09:23:19AM +0800, Huang, Ying wrote:
> Daniel Jordan <[email protected]> writes:
> >> +#else
> >> +static inline int __swap_duplicate_cluster(swp_entry_t *entry,
> >
> > This doesn't need inline.
>
> Why not? This is just a one line stub.

Forgot to respond to this. The compiler will likely choose to optimize out
calls to an empty function like this. Checking, this is indeed what it does in
this case on my machine, with or without inline.
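
It is easy to reproduce outside the kernel with a made-up stub, e.g.:

/* stub.c: build with "gcc -O2 -c stub.c" and inspect "objdump -d stub.o".
 * On x86-64, caller() compiles down to "xor eax,eax; ret" whether or not
 * stub() is marked inline -- the compiler inlines the trivial static
 * function on its own. */
static int stub(void *entry, unsigned char usage)
{
	return 0;
}

int caller(void *entry)
{
	return stub(entry, 1);
}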


By the way, when building without CONFIG_THP_SWAP, we get

linux/mm/swapfile.c:933:13: warning: ‘__swap_free_cluster’ defined but not used [-Wunused-function]
static void __swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
^~~~~~~~~~~~~~~~~~~

2018-06-13 01:27:37

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH -mm -V3 03/21] mm, THP, swap: Support PMD swap mapping in swap_duplicate()

Daniel Jordan <[email protected]> writes:

> On Tue, Jun 12, 2018 at 09:23:19AM +0800, Huang, Ying wrote:
>> Daniel Jordan <[email protected]> writes:
>> >> +#else
>> >> +static inline int __swap_duplicate_cluster(swp_entry_t *entry,
>> >
>> > This doesn't need inline.
>>
>> Why not? This is just a one line stub.
>
> Forgot to respond to this. The compiler will likely choose to optimize out
> calls to an empty function like this. Checking, this is indeed what it does in
> this case on my machine, with or without inline.

Yes. I believe a decent compiler will inline the function anyway.
And it does no harm to keep "inline" too, yes?

> By the way, when building without CONFIG_THP_SWAP, we get
>
> linux/mm/swapfile.c:933:13: warning: ‘__swap_free_cluster’ defined but not used [-Wunused-function]
> static void __swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
> ^~~~~~~~~~~~~~~~~~~

Thanks! I will fix this. I don't know why 0-Day didn't catch this.
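
Probably the fix will be something along the lines of keeping the helper
under the same #ifdef as its callers. A toy user-space model of that
pattern, with FEATURE standing in for CONFIG_THP_SWAP (whether the series
ends up doing exactly this is to be decided):

/* unused.c: if helper() were defined unconditionally while its only
 * caller sits under #ifdef FEATURE, "gcc -Wall -c unused.c" (without
 * -DFEATURE) would warn "helper defined but not used".  Keeping the
 * definition under the same guard, as below, builds cleanly both with
 * and without -DFEATURE. */

#ifdef FEATURE
static int helper(int x)
{
	return x * 2;
}

int feature_entry(int x)
{
	return helper(x);
}
#endif

int always_built(void)
{
	return 0;
}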

Best Regards,
Huang, Ying

2018-06-13 11:51:11

by Daniel Jordan

[permalink] [raw]
Subject: Re: [PATCH -mm -V3 03/21] mm, THP, swap: Support PMD swap mapping in swap_duplicate()

On Wed, Jun 13, 2018 at 09:26:54AM +0800, Huang, Ying wrote:
> Daniel Jordan <[email protected]> writes:
>
> > On Tue, Jun 12, 2018 at 09:23:19AM +0800, Huang, Ying wrote:
> >> Daniel Jordan <[email protected]> writes:
> >> >> +#else
> >> >> +static inline int __swap_duplicate_cluster(swp_entry_t *entry,
> >> >
> >> > This doesn't need inline.
> >>
> >> Why not? This is just a one line stub.
> >
> > Forgot to respond to this. The compiler will likely choose to optimize out
> > calls to an empty function like this. Checking, this is indeed what it does in
> > this case on my machine, with or without inline.
>
> Yes. I believe a decent compiler will inline the function anyway.
> And it does no harm to keep "inline" too, yes?

Right, it does no harm; it's just a matter of style.