From: Huang Ying <[email protected]>
Hi, Andrew, could you help me to check whether the overall design is
reasonable?
Hi, Hugh, Shaohua, Minchan and Rik, could you help me to review the
swap part of the patchset? Especially [02/21], [03/21], [04/21],
[05/21], [06/21], [07/21], [08/21], [09/21], [10/21], [11/21],
[12/21], [20/21].
Hi, Andrea and Kirill, could you help me to review the THP part of the
patchset? Especially [01/21], [07/21], [09/21], [11/21], [13/21],
[15/21], [16/21], [17/21], [18/21], [19/21], [20/21], [21/21].
Hi, Johannes and Michal, could you help me to review the cgroup part
of the patchset? Especially [14/21].
And for all, any comments are welcome!
This patchset is based on the 2018-06-14 head of mmotm/master.
This is the final step of the THP (Transparent Huge Page) swap
optimization.  After the first and second steps, the splitting of the
huge page is delayed from almost the first step of swapout until after
swapout has finished.  In this step, we avoid splitting the THP for
swapout entirely and swap the THP out and in as one piece.
We tested the patchset with the vm-scalability benchmark swap-w-seq
test case, with 16 processes.  The test case forks 16 processes.  Each
process allocates a large anonymous memory range and writes it from
beginning to end for 8 rounds.  The first round will swap out, while
the remaining rounds will swap in and swap out.  The test was done on
a Xeon E5 v3 system; the swap device used is a RAM simulated PMEM
(persistent memory) device.  The test result is as follows,
             base                    optimized
        ----------------      --------------------------
             %stddev  %change          %stddev
                 \        |                \
     1417897 ±  2%   +992.8%   15494673        vm-scalability.throughput
     1020489 ±  4%  +1091.2%   12156349        vmstat.swap.si
     1255093 ±  3%   +940.3%   13056114        vmstat.swap.so
     1259769 ±  7%  +1818.3%   24166779        meminfo.AnonHugePages
    28021761         -10.7%   25018848 ±  2%   meminfo.AnonPages
    64080064 ±  4%   -95.6%    2787565 ± 33%   interrupts.CAL:Function_call_interrupts
       13.91 ±  5%   -13.8        0.10 ± 27%   perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
Here, the benchmark score (bytes written per second) improved by
992.8%.  The swapout/swapin throughput improved by 1008% (from about
2.17GB/s to 24.04GB/s).  The performance difference is huge.  In the
base kernel, for the first round of writing, the THP is swapped out
and split, so in the remaining rounds there is only normal page swapin
and swapout.  In the optimized kernel, the THP is kept after the first
swapout, so THP swapin and swapout are used in the remaining rounds.
This shows the key benefit of swapping a THP out/in as one piece: the
THP will be kept instead of being split.  The meminfo data verifies
this: in the base kernel only 4.5% of anonymous pages are THP during
the test, while in the optimized kernel 96.6% are.  The TLB flushing
IPIs (represented as interrupts.CAL:Function_call_interrupts) were
reduced by 95.6%, while cycles spent in spinlocks were reduced from
13.9% to 0.1%.  These are performance benefits of THP swapout/swapin
too.
Below is the description for all steps of THP swap optimization.
Recently, the performance of storage devices has improved so fast that
we cannot saturate the disk bandwidth with a single logical CPU when
doing page swapping, even on a high-end server machine, because the
performance of the storage device has improved faster than that of a
single logical CPU.  And it seems that the trend will not change in
the near future.  On the other hand, THP has become more and more
popular because of increased memory sizes.  So it becomes necessary to
optimize THP swap performance.
The advantages of swapping out/in a THP in one piece include:
- Batch various swap operations for the THP.  Many operations need to
  be done once per THP instead of once per normal page, for example,
  allocating/freeing the swap space, writing/reading the swap space,
  flushing the TLB, page faults, etc.  This will improve the
  performance of THP swap greatly.
- The THP swap space read/write will be large sequential IO (2M on
  x86_64).  It is particularly helpful for swapin, which is usually
  4k random IO.  This will improve the performance of THP swap too.
- It will help reduce memory fragmentation, especially when the THP is
  heavily used by applications.  THP order pages will be freed up
  after THP swapout.
- It will improve THP utilization on systems with swap turned on,
  because the speed at which khugepaged collapses normal pages into a
  THP is quite slow.  After the THP is split during swapout, it will
  take quite a long time for the normal pages to collapse back into a
  THP after being swapped in.  High THP utilization helps the
  efficiency of page based memory management too.
There are some concerns regarding THP swapin, mainly because the
possibly enlarged read/write IO size (for swapout/swapin) may put more
overhead on the storage device.  To deal with that, THP swapin is
turned on only when necessary.  A new sysfs interface:
/sys/kernel/mm/transparent_hugepage/swapin_enabled is added to
configure it.  It uses the "always/never/madvise" logic to be turned
on globally, turned off globally, or turned on only for VMAs with
MADV_HUGEPAGE, etc.
Changelog
---------
v4:
- Rebased on 6/14 HEAD of mmotm/master
- Fixed one build bug and several coding style issues, Thanks Daniel Jordan
v3:
- Rebased on 5/18 HEAD of mmotm/master
- Fixed a build bug, Thanks 0-Day!
v2:
- Fixed several build bugs, Thanks 0-Day!
- Improved documentation as suggested by Randy Dunlap.
- Fixed several bugs in reading huge swap cluster
From: Huang Ying <[email protected]>
It's unreasonable to optimize swapping for THP without basic swapping
support, and doing so causes build errors when THP_SWAP functions are
defined in swapfile.c and called elsewhere.
The comments are fixed too, to reflect the latest progress.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
mm/Kconfig | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/Kconfig b/mm/Kconfig
index ce95491abd6a..cee958bb6002 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -420,10 +420,9 @@ config ARCH_WANTS_THP_SWAP
config THP_SWAP
def_bool y
- depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP
+ depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP && SWAP
help
Swap transparent huge pages in one piece, without splitting.
- XXX: For now this only does clustered swap space allocation.
For selection by architectures with reasonable THP sizes.
--
2.16.4
From: Huang Ying <[email protected]>
During fork, the page table needs to be copied from parent to child.
A PMD swap mapping needs to be copied too, and the swap reference
count needs to be increased.
When the huge swap cluster has already been split, we need to split
the PMD swap mapping and fall back to PTE copying.
When swap count continuation fails to allocate a page with GFP_ATOMIC,
we need to unlock the spinlock and try again with GFP_KERNEL.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
mm/huge_memory.c | 72 ++++++++++++++++++++++++++++++++++++++++++++------------
1 file changed, 57 insertions(+), 15 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index adad32d54ff2..38c247a38f67 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -941,6 +941,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
if (unlikely(!pgtable))
goto out;
+retry:
dst_ptl = pmd_lock(dst_mm, dst_pmd);
src_ptl = pmd_lockptr(src_mm, src_pmd);
spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
@@ -948,26 +949,67 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
ret = -EAGAIN;
pmd = *src_pmd;
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
if (unlikely(is_swap_pmd(pmd))) {
swp_entry_t entry = pmd_to_swp_entry(pmd);
- VM_BUG_ON(!is_pmd_migration_entry(pmd));
- if (is_write_migration_entry(entry)) {
- make_migration_entry_read(&entry);
- pmd = swp_entry_to_pmd(entry);
- if (pmd_swp_soft_dirty(*src_pmd))
- pmd = pmd_swp_mksoft_dirty(pmd);
- set_pmd_at(src_mm, addr, src_pmd, pmd);
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+ if (is_migration_entry(entry)) {
+ if (is_write_migration_entry(entry)) {
+ make_migration_entry_read(&entry);
+ pmd = swp_entry_to_pmd(entry);
+ if (pmd_swp_soft_dirty(*src_pmd))
+ pmd = pmd_swp_mksoft_dirty(pmd);
+ set_pmd_at(src_mm, addr, src_pmd, pmd);
+ }
+ add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ mm_inc_nr_ptes(dst_mm);
+ pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
+ set_pmd_at(dst_mm, addr, dst_pmd, pmd);
+ ret = 0;
+ goto out_unlock;
}
- add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
- mm_inc_nr_ptes(dst_mm);
- pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
- set_pmd_at(dst_mm, addr, dst_pmd, pmd);
- ret = 0;
- goto out_unlock;
- }
#endif
+ if (thp_swap_supported() && !non_swap_entry(entry)) {
+ ret = swap_duplicate(&entry, true);
+ if (!ret) {
+ add_mm_counter(dst_mm, MM_SWAPENTS,
+ HPAGE_PMD_NR);
+ mm_inc_nr_ptes(dst_mm);
+ pgtable_trans_huge_deposit(dst_mm, dst_pmd,
+ pgtable);
+ set_pmd_at(dst_mm, addr, dst_pmd, pmd);
+ /* make sure dst_mm is on swapoff's mmlist. */
+ if (unlikely(list_empty(&dst_mm->mmlist))) {
+ spin_lock(&mmlist_lock);
+ if (list_empty(&dst_mm->mmlist))
+ list_add(&dst_mm->mmlist,
+ &src_mm->mmlist);
+ spin_unlock(&mmlist_lock);
+ }
+ } else if (ret == -ENOTDIR) {
+ /*
+ * The swap cluster has been split, split the
+ * pmd map now
+ */
+ __split_huge_swap_pmd(vma, addr, src_pmd);
+ pte_free(dst_mm, pgtable);
+ } else if (ret == -ENOMEM) {
+ spin_unlock(src_ptl);
+ spin_unlock(dst_ptl);
+ ret = add_swap_count_continuation(entry,
+ GFP_KERNEL);
+ if (ret < 0) {
+ ret = -ENOMEM;
+ pte_free(dst_mm, pgtable);
+ goto out;
+ }
+ goto retry;
+ } else
+ VM_BUG_ON(1);
+ goto out_unlock;
+ }
+ VM_BUG_ON(1);
+ }
if (unlikely(!pmd_trans_huge(pmd))) {
pte_free(dst_mm, pgtable);
--
2.16.4
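The lock-drop-and-retry protocol described in the changelog above (on
-ENOMEM, release the page table locks, allocate the swap count
continuation page with GFP_KERNEL, then retry from the top) can be
sketched as a small userspace model.  This is only an illustration of
the control flow, not kernel code; the mock functions and their names
are invented for the sketch:

```c
#include <assert.h>
#include <errno.h>

/* Mock swap_duplicate(): fails this many times, as GFP_ATOMIC would */
static int atomic_failures;

static int swap_duplicate_mock(void)
{
	if (atomic_failures > 0) {
		atomic_failures--;
		return -ENOMEM;	/* continuation page needed, cannot sleep */
	}
	return 0;
}

/* Mock add_swap_count_continuation() with GFP_KERNEL: may sleep */
static int add_continuation_mock(void)
{
	return 0;
}

/*
 * Shape of the copy path: take the page table locks, try to duplicate
 * the swap entry; on -ENOMEM drop the locks, allocate the continuation
 * page while sleeping is allowed, and retry from the top.
 */
static int copy_pmd_swap_mapping(void)
{
	int ret;
retry:
	/* dst_ptl and src_ptl are taken here in the kernel */
	ret = swap_duplicate_mock();
	if (ret == -ENOMEM) {
		/* locks are dropped here before sleeping */
		if (add_continuation_mock() < 0)
			return -ENOMEM;
		goto retry;
	}
	return ret;
}
```

The point of the pattern is that the allocation happens with no
spinlocks held, so GFP_KERNEL may sleep safely before retrying.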
From: Huang Ying <[email protected]>
During mincore(), for a PMD swap mapping, the swap cache will be
looked up.  If the resulting page isn't a compound page, the PMD swap
mapping will be split, falling back to PTE swap mapping processing.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
mm/mincore.c | 37 +++++++++++++++++++++++++++++++------
1 file changed, 31 insertions(+), 6 deletions(-)
diff --git a/mm/mincore.c b/mm/mincore.c
index a66f2052c7b1..897dd2c187e8 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -48,7 +48,8 @@ static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
* and is up to date; i.e. that no page-in operation would be required
* at this time if an application were to map and access this page.
*/
-static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff)
+static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff,
+ bool *compound)
{
unsigned char present = 0;
struct page *page;
@@ -86,6 +87,8 @@ static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff)
#endif
if (page) {
present = PageUptodate(page);
+ if (compound)
+ *compound = PageCompound(page);
put_page(page);
}
@@ -103,7 +106,8 @@ static int __mincore_unmapped_range(unsigned long addr, unsigned long end,
pgoff = linear_page_index(vma, addr);
for (i = 0; i < nr; i++, pgoff++)
- vec[i] = mincore_page(vma->vm_file->f_mapping, pgoff);
+ vec[i] = mincore_page(vma->vm_file->f_mapping,
+ pgoff, NULL);
} else {
for (i = 0; i < nr; i++)
vec[i] = 0;
@@ -127,14 +131,36 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
pte_t *ptep;
unsigned char *vec = walk->private;
int nr = (end - addr) >> PAGE_SHIFT;
+ swp_entry_t entry;
ptl = pmd_trans_huge_lock(pmd, vma);
if (ptl) {
- memset(vec, 1, nr);
+ unsigned char val = 1;
+ bool compound;
+
+ if (thp_swap_supported() && is_swap_pmd(*pmd)) {
+ entry = pmd_to_swp_entry(*pmd);
+ if (!non_swap_entry(entry)) {
+ val = mincore_page(swap_address_space(entry),
+ swp_offset(entry),
+ &compound);
+ /*
+ * The huge swap cluster has been
+ * split under us
+ */
+ if (!compound) {
+ __split_huge_swap_pmd(vma, addr, pmd);
+ spin_unlock(ptl);
+ goto fallback;
+ }
+ }
+ }
+ memset(vec, val, nr);
spin_unlock(ptl);
goto out;
}
+fallback:
if (pmd_trans_unstable(pmd)) {
__mincore_unmapped_range(addr, end, vma, vec);
goto out;
@@ -150,8 +176,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
else if (pte_present(pte))
*vec = 1;
else { /* pte is a swap entry */
- swp_entry_t entry = pte_to_swp_entry(pte);
-
+ entry = pte_to_swp_entry(pte);
if (non_swap_entry(entry)) {
/*
* migration or hwpoison entries are always
@@ -161,7 +186,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
} else {
#ifdef CONFIG_SWAP
*vec = mincore_page(swap_address_space(entry),
- swp_offset(entry));
+ swp_offset(entry), NULL);
#else
WARN_ON(1);
*vec = 1;
--
2.16.4
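The mincore() decision above can be modeled compactly: a PMD swap
mapping may be reported in one shot only while the backing huge swap
cluster is intact, i.e. the page found in swap cache is still
compound.  This is a simplified userspace sketch (the struct and enum
names are invented, and the kernel's page-absent handling is folded
into the FILL_WHOLE_RANGE case):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified swap-cache lookup result for the PMD's swap entry */
struct cache_page {
	bool present;	/* page found and up to date in swap cache */
	bool compound;	/* page is still a THP (huge cluster intact) */
};

enum mincore_action {
	FILL_WHOLE_RANGE,	/* report all slots of the PMD in one shot */
	SPLIT_AND_FALLBACK,	/* split the PMD swap mapping, walk PTEs */
};

/*
 * Decision mirrored from mincore_pte_range(): if the cached page is no
 * longer compound, the huge swap cluster was split under us, so the
 * PMD swap mapping must be split and the range walked per PTE.
 */
static enum mincore_action pmd_swap_mincore(struct cache_page page)
{
	if (page.present && !page.compound)
		return SPLIT_AND_FALLBACK;
	return FILL_WHOLE_RANGE;
}
```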
From: Huang Ying <[email protected]>
Previously, during swapout, all PMD page mappings would be split and
replaced with PTE swap mappings.  And when clearing the SWAP_HAS_CACHE
flag for the huge swap cluster in swapcache_free_cluster(), the huge
swap cluster would be split.  Now, during swapout, the PMD page
mapping will be changed to a PMD swap mapping.  So when clearing the
SWAP_HAS_CACHE flag, the huge swap cluster will only be split if the
PMD swap mapping count is 0; otherwise, it will be kept as a huge
swap cluster, so that we can swap in the THP as a whole later.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
mm/swapfile.c | 41 ++++++++++++++++++++++++++++++-----------
1 file changed, 30 insertions(+), 11 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 48e2c54385ee..36f4b6451360 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -514,6 +514,18 @@ static void dec_cluster_info_page(struct swap_info_struct *p,
free_cluster(p, idx);
}
+#ifdef CONFIG_THP_SWAP
+static inline int cluster_swapcount(struct swap_cluster_info *ci)
+{
+ if (!ci || !cluster_is_huge(ci))
+ return 0;
+
+ return cluster_count(ci) - SWAPFILE_CLUSTER;
+}
+#else
+#define cluster_swapcount(ci) 0
+#endif
+
/*
* It's possible scan_swap_map() uses a free cluster in the middle of free
* cluster list. Avoiding such abuse to avoid list corruption.
@@ -905,6 +917,7 @@ static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
struct swap_cluster_info *ci;
ci = lock_cluster(si, offset);
+ memset(si->swap_map + offset, 0, SWAPFILE_CLUSTER);
cluster_set_count_flag(ci, 0, 0);
free_cluster(si, idx);
unlock_cluster(ci);
@@ -1288,24 +1301,30 @@ static void swapcache_free_cluster(swp_entry_t entry)
ci = lock_cluster(si, offset);
VM_BUG_ON(!cluster_is_huge(ci));
+ VM_BUG_ON(!is_cluster_offset(offset));
+ VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER);
map = si->swap_map + offset;
- for (i = 0; i < SWAPFILE_CLUSTER; i++) {
- val = map[i];
- VM_BUG_ON(!(val & SWAP_HAS_CACHE));
- if (val == SWAP_HAS_CACHE)
- free_entries++;
+ if (!cluster_swapcount(ci)) {
+ for (i = 0; i < SWAPFILE_CLUSTER; i++) {
+ val = map[i];
+ VM_BUG_ON(!(val & SWAP_HAS_CACHE));
+ if (val == SWAP_HAS_CACHE)
+ free_entries++;
+ }
+ if (free_entries != SWAPFILE_CLUSTER)
+ cluster_clear_huge(ci);
}
if (!free_entries) {
- for (i = 0; i < SWAPFILE_CLUSTER; i++)
- map[i] &= ~SWAP_HAS_CACHE;
+ for (i = 0; i < SWAPFILE_CLUSTER; i++) {
+ val = map[i];
+ VM_BUG_ON(!(val & SWAP_HAS_CACHE) ||
+ val == SWAP_HAS_CACHE);
+ map[i] = val & ~SWAP_HAS_CACHE;
+ }
}
- cluster_clear_huge(ci);
unlock_cluster(ci);
if (free_entries == SWAPFILE_CLUSTER) {
spin_lock(&si->lock);
- ci = lock_cluster(si, offset);
- memset(map, 0, SWAPFILE_CLUSTER);
- unlock_cluster(ci);
mem_cgroup_uncharge_swap(entry, SWAPFILE_CLUSTER);
swap_free_cluster(si, idx);
spin_unlock(&si->lock);
--
2.16.4
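The accounting rule behind cluster_swapcount() above (each of the
cluster's swap entries contributes one base reference, so any count
above SWAPFILE_CLUSTER comes from PMD swap mappings) can be checked
with a tiny userspace model.  The struct below is an invented
stand-in, not the kernel's swap_cluster_info:

```c
#include <assert.h>
#include <stdbool.h>

#define SWAPFILE_CLUSTER 512	/* 2M / 4k on x86_64 */

/* Simplified stand-in for struct swap_cluster_info */
struct cluster {
	int count;	/* total references held on the cluster */
	bool is_huge;	/* cluster still backs a THP in one piece */
};

/*
 * Number of PMD swap mappings into this cluster: the excess of the
 * total count over the SWAPFILE_CLUSTER base references.
 */
static int cluster_swapcount(const struct cluster *ci)
{
	if (!ci || !ci->is_huge)
		return 0;
	return ci->count - SWAPFILE_CLUSTER;
}

/* Split the huge cluster when dropping SWAP_HAS_CACHE? */
static bool should_split(const struct cluster *ci)
{
	return cluster_swapcount(ci) == 0;
}
```

A cluster with count 512 has no PMD swap mappings left and may be
split; a count of 514 means two PMD swap mappings still reference it.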
From: Huang Ying <[email protected]>
Swapping in a THP as a whole isn't desirable in some situations.  For
example, for a random access pattern, swapping in a THP as a whole
will inflate the read IO greatly.  So a sysfs interface:
/sys/kernel/mm/transparent_hugepage/swapin_enabled is added to
configure it.  Three options, as follows, are provided,
- always: THP swapin will always be enabled
- madvise: THP swapin will be enabled only for VMAs with the
  VM_HUGEPAGE flag set.
- never: THP swapin will always be disabled
The default configuration is: madvise.
During a page fault, if a PMD swap mapping is found and THP swapin is
disabled, the huge swap cluster and the PMD swap mapping will be
split, falling back to normal page swapin.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
Documentation/admin-guide/mm/transhuge.rst | 21 +++++++
include/linux/huge_mm.h | 31 +++++++++++
mm/huge_memory.c | 89 ++++++++++++++++++++++++------
3 files changed, 123 insertions(+), 18 deletions(-)
diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 85e33f785fd7..23aefb17101c 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -160,6 +160,27 @@ Some userspace (such as a test program, or an optimized memory allocation
cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size
+A transparent hugepage may be swapped out and swapped in as one piece
+without splitting.  This improves the utility of transparent hugepages
+but may inflate read/write IO too.  So whether to swap in transparent
+hugepages in one piece can be configured as follows.
+
+ echo always >/sys/kernel/mm/transparent_hugepage/swapin_enabled
+ echo madvise >/sys/kernel/mm/transparent_hugepage/swapin_enabled
+ echo never >/sys/kernel/mm/transparent_hugepage/swapin_enabled
+
+always
+ Attempt to allocate a transparent huge page and read it from
+ swap space in one piece every time.
+
+never
+ Always split the swap space and PMD swap mapping and swapin
+ the fault normal page during swapin.
+
+madvise
+ Only swapin the transparent huge page in one piece for
+ MADV_HUGEPAGE madvise regions.
+
khugepaged will be automatically started when
transparent_hugepage/enabled is set to "always" or "madvise, and it'll
be automatically shutdown if it's set to "never".
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 42117b75de2d..7931fa888f11 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -63,6 +63,8 @@ enum transparent_hugepage_flag {
#ifdef CONFIG_DEBUG_VM
TRANSPARENT_HUGEPAGE_DEBUG_COW_FLAG,
#endif
+ TRANSPARENT_HUGEPAGE_SWAPIN_FLAG,
+ TRANSPARENT_HUGEPAGE_SWAPIN_REQ_MADV_FLAG,
};
struct kobject;
@@ -405,11 +407,40 @@ static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
#ifdef CONFIG_THP_SWAP
extern int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd);
+
+static inline bool transparent_hugepage_swapin_enabled(
+ struct vm_area_struct *vma)
+{
+ if (vma->vm_flags & VM_NOHUGEPAGE)
+ return false;
+
+ if (is_vma_temporary_stack(vma))
+ return false;
+
+ if (test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
+ return false;
+
+ if (transparent_hugepage_flags &
+ (1 << TRANSPARENT_HUGEPAGE_SWAPIN_FLAG))
+ return true;
+
+ if (transparent_hugepage_flags &
+ (1 << TRANSPARENT_HUGEPAGE_SWAPIN_REQ_MADV_FLAG))
+ return !!(vma->vm_flags & VM_HUGEPAGE);
+
+ return false;
+}
#else /* CONFIG_THP_SWAP */
static inline int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
{
return 0;
}
+
+static inline bool transparent_hugepage_swapin_enabled(
+ struct vm_area_struct *vma)
+{
+ return false;
+}
#endif /* CONFIG_THP_SWAP */
#endif /* _LINUX_HUGE_MM_H */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8d49a11e83ee..da42d1cdc26a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -57,7 +57,8 @@ unsigned long transparent_hugepage_flags __read_mostly =
#endif
(1<<TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG)|
(1<<TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG)|
- (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG);
+ (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG)|
+ (1<<TRANSPARENT_HUGEPAGE_SWAPIN_REQ_MADV_FLAG);
static struct shrinker deferred_split_shrinker;
@@ -316,6 +317,53 @@ static struct kobj_attribute debug_cow_attr =
__ATTR(debug_cow, 0644, debug_cow_show, debug_cow_store);
#endif /* CONFIG_DEBUG_VM */
+#ifdef CONFIG_THP_SWAP
+static ssize_t swapin_enabled_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ if (test_bit(TRANSPARENT_HUGEPAGE_SWAPIN_FLAG,
+ &transparent_hugepage_flags))
+ return sprintf(buf, "[always] madvise never\n");
+ else if (test_bit(TRANSPARENT_HUGEPAGE_SWAPIN_REQ_MADV_FLAG,
+ &transparent_hugepage_flags))
+ return sprintf(buf, "always [madvise] never\n");
+ else
+ return sprintf(buf, "always madvise [never]\n");
+}
+
+static ssize_t swapin_enabled_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ ssize_t ret = count;
+
+ if (!memcmp("always", buf,
+ min(sizeof("always")-1, count))) {
+ clear_bit(TRANSPARENT_HUGEPAGE_SWAPIN_REQ_MADV_FLAG,
+ &transparent_hugepage_flags);
+ set_bit(TRANSPARENT_HUGEPAGE_SWAPIN_FLAG,
+ &transparent_hugepage_flags);
+ } else if (!memcmp("madvise", buf,
+ min(sizeof("madvise")-1, count))) {
+ clear_bit(TRANSPARENT_HUGEPAGE_SWAPIN_FLAG,
+ &transparent_hugepage_flags);
+ set_bit(TRANSPARENT_HUGEPAGE_SWAPIN_REQ_MADV_FLAG,
+ &transparent_hugepage_flags);
+ } else if (!memcmp("never", buf,
+ min(sizeof("never")-1, count))) {
+ clear_bit(TRANSPARENT_HUGEPAGE_SWAPIN_FLAG,
+ &transparent_hugepage_flags);
+ clear_bit(TRANSPARENT_HUGEPAGE_SWAPIN_REQ_MADV_FLAG,
+ &transparent_hugepage_flags);
+ } else
+ ret = -EINVAL;
+
+ return ret;
+}
+static struct kobj_attribute swapin_enabled_attr =
+ __ATTR(swapin_enabled, 0644, swapin_enabled_show, swapin_enabled_store);
+#endif /* CONFIG_THP_SWAP */
+
static struct attribute *hugepage_attr[] = {
&enabled_attr.attr,
&defrag_attr.attr,
@@ -326,6 +374,9 @@ static struct attribute *hugepage_attr[] = {
#endif
#ifdef CONFIG_DEBUG_VM
&debug_cow_attr.attr,
+#endif
+#ifdef CONFIG_THP_SWAP
+ &swapin_enabled_attr.attr,
#endif
NULL,
};
@@ -1645,6 +1696,9 @@ int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
retry:
page = lookup_swap_cache(entry, NULL, vmf->address);
if (!page) {
+ if (!transparent_hugepage_swapin_enabled(vma))
+ goto split;
+
page = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE, vma,
haddr, false);
if (!page) {
@@ -1652,23 +1706,8 @@ int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
* Back out if somebody else faulted in this pmd
* while we released the pmd lock.
*/
- if (likely(pmd_same(*vmf->pmd, orig_pmd))) {
- ret = split_swap_cluster(entry, false);
- /*
- * Retry if somebody else swap in the swap
- * entry
- */
- if (ret == -EEXIST) {
- ret = 0;
- goto retry;
- /* swapoff occurs under us */
- } else if (ret == -EINVAL)
- ret = 0;
- else {
- count_vm_event(THP_SWPIN_FALLBACK);
- goto fallback;
- }
- }
+ if (likely(pmd_same(*vmf->pmd, orig_pmd)))
+ goto split;
delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
goto out;
}
@@ -1780,6 +1819,20 @@ int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
if (page)
put_page(page);
return ret;
+split:
+ ret = split_swap_cluster(entry, false);
+ /* Retry if somebody else swap in the swap entry */
+ if (ret == -EEXIST) {
+ ret = 0;
+ goto retry;
+ }
+ /* swapoff occurs under us */
+ if (ret == -EINVAL) {
+ delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+ return 0;
+ }
+ count_vm_event(THP_SWPIN_FALLBACK);
+ goto fallback;
}
#else
static inline void __split_huge_swap_pmd(struct vm_area_struct *vma,
--
2.16.4
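The tri-state policy implemented by the patch above (two flag bits,
"always" and "madvise", with "never" meaning both clear, combined with
the per-VMA checks) can be modeled in userspace.  The enum values and
function names below are invented for the sketch; only the decision
logic mirrors the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Userspace model of the two flag bits behind swapin_enabled */
enum { SWAPIN_ALWAYS = 1 << 0, SWAPIN_MADVISE = 1 << 1 };

/* Parse a value written to the sysfs file; -1 on bad input */
static int swapin_store(const char *buf, unsigned int *flags)
{
	if (!strncmp(buf, "always", 6))
		*flags = SWAPIN_ALWAYS;
	else if (!strncmp(buf, "madvise", 7))
		*flags = SWAPIN_MADVISE;
	else if (!strncmp(buf, "never", 5))
		*flags = 0;	/* "never": both bits clear */
	else
		return -1;
	return 0;
}

/* Swap in a THP in one piece for this VMA? */
static bool swapin_enabled(unsigned int flags, bool vma_hugepage,
			   bool vma_nohugepage)
{
	if (vma_nohugepage)	/* VM_NOHUGEPAGE always wins */
		return false;
	if (flags & SWAPIN_ALWAYS)
		return true;
	if (flags & SWAPIN_MADVISE)
		return vma_hugepage;	/* only MADV_HUGEPAGE VMAs */
	return false;
}
```

Note the mutual exclusion: storing one mode clears the other bit, so
at most one of the two flags is ever set, matching the three sysfs
states.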
From: Huang Ying <[email protected]>
The original code is only for PMD migration entries; it is revised to
support PMD swap mappings as well.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
fs/proc/task_mmu.c | 8 ++++----
mm/gup.c | 34 ++++++++++++++++++++++------------
mm/huge_memory.c | 6 +++---
mm/mempolicy.c | 2 +-
4 files changed, 30 insertions(+), 20 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index e9679016271f..afcf6ac57219 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -978,7 +978,7 @@ static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
pmd = pmd_clear_soft_dirty(pmd);
set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
- } else if (is_migration_entry(pmd_to_swp_entry(pmd))) {
+ } else if (is_swap_pmd(pmd)) {
pmd = pmd_swp_clear_soft_dirty(pmd);
set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
}
@@ -1309,7 +1309,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
frame = pmd_pfn(pmd) +
((addr & ~PMD_MASK) >> PAGE_SHIFT);
}
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+#if defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) || defined(CONFIG_THP_SWAP)
else if (is_swap_pmd(pmd)) {
swp_entry_t entry = pmd_to_swp_entry(pmd);
unsigned long offset;
@@ -1323,8 +1323,8 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
flags |= PM_SWAP;
if (pmd_swp_soft_dirty(pmd))
flags |= PM_SOFT_DIRTY;
- VM_BUG_ON(!is_pmd_migration_entry(pmd));
- page = migration_entry_to_page(entry);
+ if (is_pmd_migration_entry(pmd))
+ page = migration_entry_to_page(entry);
}
#endif
diff --git a/mm/gup.c b/mm/gup.c
index b70d7ba7cc13..84ba4ad8120d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -216,6 +216,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
spinlock_t *ptl;
struct page *page;
struct mm_struct *mm = vma->vm_mm;
+ swp_entry_t entry;
pmd = pmd_offset(pudp, address);
/*
@@ -243,18 +244,21 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
if (!pmd_present(pmdval)) {
if (likely(!(flags & FOLL_MIGRATION)))
return no_page_table(vma, flags);
- VM_BUG_ON(thp_migration_supported() &&
- !is_pmd_migration_entry(pmdval));
- if (is_pmd_migration_entry(pmdval))
+ entry = pmd_to_swp_entry(pmdval);
+ if (thp_migration_supported() && is_migration_entry(entry)) {
pmd_migration_entry_wait(mm, pmd);
- pmdval = READ_ONCE(*pmd);
- /*
- * MADV_DONTNEED may convert the pmd to null because
- * mmap_sem is held in read mode
- */
- if (pmd_none(pmdval))
+ pmdval = READ_ONCE(*pmd);
+ /*
+ * MADV_DONTNEED may convert the pmd to null because
+ * mmap_sem is held in read mode
+ */
+ if (pmd_none(pmdval))
+ return no_page_table(vma, flags);
+ goto retry;
+ }
+ if (thp_swap_supported() && !non_swap_entry(entry))
return no_page_table(vma, flags);
- goto retry;
+ VM_BUG_ON(1);
}
if (pmd_devmap(pmdval)) {
ptl = pmd_lock(mm, pmd);
@@ -276,11 +280,17 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
return no_page_table(vma, flags);
}
if (unlikely(!pmd_present(*pmd))) {
+ entry = pmd_to_swp_entry(*pmd);
spin_unlock(ptl);
if (likely(!(flags & FOLL_MIGRATION)))
return no_page_table(vma, flags);
- pmd_migration_entry_wait(mm, pmd);
- goto retry_locked;
+ if (thp_migration_supported() && is_migration_entry(entry)) {
+ pmd_migration_entry_wait(mm, pmd);
+ goto retry_locked;
+ }
+ if (thp_swap_supported() && !non_swap_entry(entry))
+ return no_page_table(vma, flags);
+ VM_BUG_ON(1);
}
if (unlikely(!pmd_trans_huge(*pmd))) {
spin_unlock(ptl);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6b9ca1c14500..e50adc6b59b2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2074,7 +2074,7 @@ static inline int pmd_move_must_withdraw(spinlock_t *new_pmd_ptl,
static pmd_t move_soft_dirty_pmd(pmd_t pmd)
{
#ifdef CONFIG_MEM_SOFT_DIRTY
- if (unlikely(is_pmd_migration_entry(pmd)))
+ if (unlikely(is_swap_pmd(pmd)))
pmd = pmd_swp_mksoft_dirty(pmd);
else if (pmd_present(pmd))
pmd = pmd_mksoft_dirty(pmd);
@@ -2160,11 +2160,11 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
preserve_write = prot_numa && pmd_write(*pmd);
ret = 1;
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+#if defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) || defined(CONFIG_THP_SWAP)
if (is_swap_pmd(*pmd)) {
swp_entry_t entry = pmd_to_swp_entry(*pmd);
- VM_BUG_ON(!is_pmd_migration_entry(*pmd));
+ VM_BUG_ON(!thp_swap_supported() && !is_migration_entry(entry));
if (is_write_migration_entry(entry)) {
pmd_t newpmd;
/*
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 9ac49ef17b4e..180d7c08f6cc 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -436,7 +436,7 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
struct queue_pages *qp = walk->private;
unsigned long flags;
- if (unlikely(is_pmd_migration_entry(*pmd))) {
+ if (unlikely(is_swap_pmd(*pmd))) {
ret = 1;
goto unlock;
}
--
2.16.4
From: Huang Ying <[email protected]>
For a PMD swap mapping, zap_huge_pmd() will clear the PMD and call
free_swap_and_cache() to decrease the swap reference count and maybe
free or split the huge swap cluster and the THP in swap cache.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
mm/huge_memory.c | 32 +++++++++++++++++++++-----------
1 file changed, 21 insertions(+), 11 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 38c247a38f67..6b9ca1c14500 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2007,7 +2007,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
spin_unlock(ptl);
if (is_huge_zero_pmd(orig_pmd))
tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
- } else if (is_huge_zero_pmd(orig_pmd)) {
+ } else if (pmd_present(orig_pmd) && is_huge_zero_pmd(orig_pmd)) {
zap_deposited_table(tlb->mm, pmd);
spin_unlock(ptl);
tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
@@ -2020,17 +2020,27 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
page_remove_rmap(page, true);
VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
VM_BUG_ON_PAGE(!PageHead(page), page);
- } else if (thp_migration_supported()) {
- swp_entry_t entry;
-
- VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
- entry = pmd_to_swp_entry(orig_pmd);
- page = pfn_to_page(swp_offset(entry));
+ } else {
+ swp_entry_t entry = pmd_to_swp_entry(orig_pmd);
+
+ if (thp_migration_supported() &&
+ is_migration_entry(entry))
+ page = pfn_to_page(swp_offset(entry));
+ else if (thp_swap_supported() &&
+ !non_swap_entry(entry))
+ free_swap_and_cache(entry, true);
+ else {
+ WARN_ONCE(1,
+"Non present huge pmd without pmd migration or swap enabled!");
+ goto unlock;
+ }
flush_needed = 0;
- } else
- WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
+ }
- if (PageAnon(page)) {
+ if (!page) {
+ zap_deposited_table(tlb->mm, pmd);
+ add_mm_counter(tlb->mm, MM_SWAPENTS, -HPAGE_PMD_NR);
+ } else if (PageAnon(page)) {
zap_deposited_table(tlb->mm, pmd);
add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
} else {
@@ -2038,7 +2048,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
zap_deposited_table(tlb->mm, pmd);
add_mm_counter(tlb->mm, MM_FILEPAGES, -HPAGE_PMD_NR);
}
-
+unlock:
spin_unlock(ptl);
if (flush_needed)
tlb_remove_page_size(tlb, page, HPAGE_PMD_SIZE);
--
2.16.4
From: Huang Ying <[email protected]>
Previously, to reclaim a MADV_FREE THP, the THP was split first, then
each sub-page was reclaimed. This wastes cycles splitting the THP and
unmapping and freeing each sub-page, and splits the THP even if it has
been written to since MADV_FREE. We had to do this because MADV_FREE
THP reclaim shares the same try_to_unmap() call with swap, while swap
needed to split the PMD page mapping at that time. Now that swap can
process a PMD mapping, it is easy to avoid splitting the THP when a
MADV_FREE THP is reclaimed.
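The MADV_FREE checks this patch adds to set_pmd_swap_entry() can be modeled in userspace as follows; `reclaim_anon_thp` and the `LF_*` names are illustrative, not kernel APIs, and only the decision is sketched:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model only; these names are not kernel APIs. */
enum lazyfree_result { LF_BUG, LF_DISCARD, LF_REDIRTIED, LF_SWAP };

/*
 * Models the MADV_FREE path in set_pmd_swap_entry(): swap-backed and
 * swap-cache state must agree; a lazily-freed (non swap-backed) THP
 * is discarded if still clean, or remapped if it was written to since
 * MADV_FREE; a swap-backed THP proceeds to get a PMD swap mapping.
 */
static enum lazyfree_result reclaim_anon_thp(bool swap_backed,
					     bool swap_cache, bool dirty)
{
	if (swap_backed != swap_cache)
		return LF_BUG;		/* inconsistent state, WARN */
	if (!swap_backed)
		return dirty ? LF_REDIRTIED : LF_DISCARD;
	return LF_SWAP;
}
```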
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
mm/huge_memory.c | 41 ++++++++++++++++++++++++++++++++---------
mm/vmscan.c | 3 ++-
2 files changed, 34 insertions(+), 10 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 195f24040b41..7c6edd897f15 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1671,6 +1671,15 @@ int do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
return 0;
}
+static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
+{
+ pgtable_t pgtable;
+
+ pgtable = pgtable_trans_huge_withdraw(mm, pmd);
+ pte_free(mm, pgtable);
+ mm_dec_nr_ptes(mm);
+}
+
#ifdef CONFIG_THP_SWAP
void __split_huge_swap_pmd(struct vm_area_struct *vma,
unsigned long haddr,
@@ -1885,6 +1894,28 @@ bool set_pmd_swap_entry(struct page_vma_mapped_walk *pvmw, struct page *page,
pmd_t swp_pmd;
swp_entry_t entry = { .val = page_private(page) };
+ if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
+ WARN_ON_ONCE(1);
+ return false;
+ }
+
+ /* MADV_FREE page check */
+ if (!PageSwapBacked(page)) {
+ if (!PageDirty(page)) {
+ zap_deposited_table(mm, pvmw->pmd);
+ add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+ goto out_remove_rmap;
+ }
+
+		/*
+		 * If the page was redirtied, it cannot be
+		 * discarded. Remap the page into the page table.
+		 */
+ set_pmd_at(mm, address, pvmw->pmd, pmdval);
+ SetPageSwapBacked(page);
+ return false;
+ }
+
if (swap_duplicate(&entry, true) < 0) {
set_pmd_at(mm, address, pvmw->pmd, pmdval);
return false;
@@ -1902,21 +1933,13 @@ bool set_pmd_swap_entry(struct page_vma_mapped_walk *pvmw, struct page *page,
swp_pmd = pmd_swp_mksoft_dirty(swp_pmd);
set_pmd_at(mm, address, pvmw->pmd, swp_pmd);
+out_remove_rmap:
page_remove_rmap(page, true);
put_page(page);
return true;
}
#endif
-static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
-{
- pgtable_t pgtable;
-
- pgtable = pgtable_trans_huge_withdraw(mm, pmd);
- pte_free(mm, pgtable);
- mm_dec_nr_ptes(mm);
-}
-
/*
* Return true if we do MADV_FREE successfully on entire pmd page.
* Otherwise, return false.
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 891d3c7b8f21..80d0150dd028 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1137,7 +1137,8 @@ static unsigned long shrink_page_list(struct list_head *page_list,
/* Adding to swap updated mapping */
mapping = page_mapping(page);
}
- } else if (unlikely(PageTransHuge(page))) {
+ } else if (unlikely(PageTransHuge(page)) &&
+ (!thp_swap_supported() || !PageAnon(page))) {
/* Split file THP */
if (split_huge_page_to_list(page, page_list))
goto keep_locked;
--
2.16.4
From: Huang Ying <[email protected]>
This is the final step of the THP swapin support. When reclaiming an
anonymous THP, after allocating the huge swap cluster and adding the
THP to the swap cache, the PMD page mapping will be changed to a
mapping to the swap space. Previously, the PMD page mapping was split
before being changed. In this patch, the unmap code is enhanced not
to split the PMD mapping, but to create a PMD swap mapping to replace
it instead. So later, when the SWAP_HAS_CACHE flag is cleared in the
last step of swapout, the huge swap cluster will be kept instead of
being split, and when swapping in, the huge swap cluster will be read
as a whole into a THP. That is, the THP will not be split during
swapout/swapin. This eliminates the overhead of splitting/collapsing
and reduces the page fault count, etc. But more importantly, the
utilization of THP is improved greatly, that is, many more THPs will
be kept when swapping is used, so that we can take full advantage of
THP, including its high performance for swapout/swapin.
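The bookkeeping in set_pmd_swap_entry() can be sketched with a small userspace model; `mm_model`, `set_pmd_swap_entry_model`, and `TOY_HPAGE_PMD_NR` are illustrative names, not kernel APIs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_HPAGE_PMD_NR 512	/* sub-pages per 2MB huge page on x86-64 */

/* Illustrative model of an mm's counters and one PMD slot. */
struct mm_model {
	long anon_pages;	/* MM_ANONPAGES */
	long swap_ents;		/* MM_SWAPENTS */
	uint64_t pmd;
};

/*
 * Models the success/failure paths of set_pmd_swap_entry(): on
 * success the PMD is replaced by a swap PMD and HPAGE_PMD_NR pages
 * move from the anon counter to the swap-entry counter; if
 * swap_duplicate() fails, the original PMD is restored unchanged.
 */
static bool set_pmd_swap_entry_model(struct mm_model *mm, uint64_t orig_pmd,
				     uint64_t swp_pmd, bool dup_ok)
{
	if (!dup_ok) {
		mm->pmd = orig_pmd;	/* put the mapping back */
		return false;
	}
	mm->anon_pages -= TOY_HPAGE_PMD_NR;
	mm->swap_ents += TOY_HPAGE_PMD_NR;
	mm->pmd = swp_pmd;		/* PMD now maps swap space */
	return true;
}
```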
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
include/linux/huge_mm.h | 11 +++++++++++
mm/huge_memory.c | 30 ++++++++++++++++++++++++++++++
mm/rmap.c | 43 +++++++++++++++++++++++++++++++++++++++++--
mm/vmscan.c | 6 +-----
4 files changed, 83 insertions(+), 7 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 8e706590fbc1..28e46f078e73 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -405,6 +405,8 @@ static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+struct page_vma_mapped_walk;
+
#ifdef CONFIG_THP_SWAP
extern void __split_huge_swap_pmd(struct vm_area_struct *vma,
unsigned long haddr,
@@ -412,6 +414,8 @@ extern void __split_huge_swap_pmd(struct vm_area_struct *vma,
extern int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
unsigned long address, pmd_t orig_pmd);
extern int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd);
+extern bool set_pmd_swap_entry(struct page_vma_mapped_walk *pvmw,
+ struct page *page, unsigned long address, pmd_t pmdval);
static inline bool transparent_hugepage_swapin_enabled(
struct vm_area_struct *vma)
@@ -453,6 +457,13 @@ static inline int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
return 0;
}
+static inline bool set_pmd_swap_entry(struct page_vma_mapped_walk *pvmw,
+ struct page *page, unsigned long address,
+ pmd_t pmdval)
+{
+ return false;
+}
+
static inline bool transparent_hugepage_swapin_enabled(
struct vm_area_struct *vma)
{
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e50adc6b59b2..195f24040b41 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1876,6 +1876,36 @@ int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
count_vm_event(THP_SWPIN_FALLBACK);
goto fallback;
}
+
+bool set_pmd_swap_entry(struct page_vma_mapped_walk *pvmw, struct page *page,
+ unsigned long address, pmd_t pmdval)
+{
+ struct vm_area_struct *vma = pvmw->vma;
+ struct mm_struct *mm = vma->vm_mm;
+ pmd_t swp_pmd;
+ swp_entry_t entry = { .val = page_private(page) };
+
+ if (swap_duplicate(&entry, true) < 0) {
+ set_pmd_at(mm, address, pvmw->pmd, pmdval);
+ return false;
+ }
+ if (list_empty(&mm->mmlist)) {
+ spin_lock(&mmlist_lock);
+ if (list_empty(&mm->mmlist))
+ list_add(&mm->mmlist, &init_mm.mmlist);
+ spin_unlock(&mmlist_lock);
+ }
+ add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+ add_mm_counter(mm, MM_SWAPENTS, HPAGE_PMD_NR);
+ swp_pmd = swp_entry_to_pmd(entry);
+ if (pmd_soft_dirty(pmdval))
+ swp_pmd = pmd_swp_mksoft_dirty(swp_pmd);
+ set_pmd_at(mm, address, pvmw->pmd, swp_pmd);
+
+ page_remove_rmap(page, true);
+ put_page(page);
+ return true;
+}
#endif
static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
diff --git a/mm/rmap.c b/mm/rmap.c
index 5f45d6325c40..4861b1a86e2a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1402,12 +1402,51 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
continue;
}
+ address = pvmw.address;
+
+#ifdef CONFIG_THP_SWAP
+ /* PMD-mapped THP swap entry */
+ if (thp_swap_supported() && !pvmw.pte && PageAnon(page)) {
+ pmd_t pmdval;
+
+ VM_BUG_ON_PAGE(PageHuge(page) ||
+ !PageTransCompound(page), page);
+
+ flush_cache_range(vma, address,
+ address + HPAGE_PMD_SIZE);
+ mmu_notifier_invalidate_range_start(mm, address,
+ address + HPAGE_PMD_SIZE);
+ if (should_defer_flush(mm, flags)) {
+ /* check comments for PTE below */
+ pmdval = pmdp_huge_get_and_clear(mm, address,
+ pvmw.pmd);
+ set_tlb_ubc_flush_pending(mm,
+ pmd_dirty(pmdval));
+ } else
+ pmdval = pmdp_huge_clear_flush(vma, address,
+ pvmw.pmd);
+
+ /*
+ * Move the dirty bit to the page. Now the pmd
+ * is gone.
+ */
+ if (pmd_dirty(pmdval))
+ set_page_dirty(page);
+
+ /* Update high watermark before we lower rss */
+ update_hiwater_rss(mm);
+
+ ret = set_pmd_swap_entry(&pvmw, page, address, pmdval);
+ mmu_notifier_invalidate_range_end(mm, address,
+ address + HPAGE_PMD_SIZE);
+ continue;
+ }
+#endif
+
/* Unexpected PMD-mapped THP? */
VM_BUG_ON_PAGE(!pvmw.pte, page);
subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
- address = pvmw.address;
-
if (IS_ENABLED(CONFIG_MIGRATION) &&
(flags & TTU_MIGRATION) &&
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 03822f86f288..891d3c7b8f21 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1148,11 +1148,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
* processes. Try to unmap it here.
*/
if (page_mapped(page)) {
- enum ttu_flags flags = ttu_flags | TTU_BATCH_FLUSH;
-
- if (unlikely(PageTransHuge(page)))
- flags |= TTU_SPLIT_HUGE_PMD;
- if (!try_to_unmap(page, flags)) {
+ if (!try_to_unmap(page, ttu_flags | TTU_BATCH_FLUSH)) {
nr_unmap_fail++;
goto activate_locked;
}
--
2.16.4
From: Huang Ying <[email protected]>
Previously, the huge swap cluster was split after the THP was swapped
out. Now, to support swapping in the THP as a whole, the huge swap
cluster will not be split after the THP is reclaimed. So in memcg, we
need to move the swap account for PMD swap mappings in the process's
page table.
When the page table is scanned while moving the memcg charge, PMD
swap mappings will be identified. And mem_cgroup_move_swap_account()
and its callees are revised to move the account for the whole huge
swap cluster. If the swap cluster mapped by the PMD has been split,
the PMD swap mapping will be split and processing falls back to PTEs,
because the swap slots of the swap cluster may have been swapped in
or moved to another cgroup already.
Because there is no way to prevent a huge swap cluster from being
split except when it has the SWAP_HAS_CACHE flag set, it is possible
for the huge swap cluster to be split and the charge for the swap
slots inside it to be changed after we check the PMD swap mapping and
the huge swap cluster but before we commit the charge moving. But the
race window is so small that we just ignore the race.
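The all-or-nothing semantics of the extended swap_cgroup_cmpxchg() over a range of entries can be modeled in userspace; `range_cmpxchg` is an illustrative name, and the real function additionally walks swap_cgroup pages and holds the ctrl lock:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Userspace model of swap_cgroup_cmpxchg() over nr_ents entries:
 * first verify every slot in [start, start + nr) holds 'old', then
 * overwrite them all with 'new'. Returns 'old' on success and 0 on
 * failure, in which case no slot has been modified.
 */
static unsigned short range_cmpxchg(unsigned short *ids, size_t start,
				    size_t nr, unsigned short old,
				    unsigned short new)
{
	size_t i;

	for (i = start; i < start + nr; i++)
		if (ids[i] != old)
			return 0;	/* first pass: nothing written yet */
	for (i = start; i < start + nr; i++)
		ids[i] = new;		/* second pass: commit */
	return old;
}
```

Two passes under one lock are what make the operation atomic with respect to other swap_cgroup updates, matching the structure of the patched function.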
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
include/linux/huge_mm.h | 9 +++
include/linux/swap.h | 6 ++
include/linux/swap_cgroup.h | 3 +-
mm/huge_memory.c | 12 +---
mm/memcontrol.c | 138 +++++++++++++++++++++++++++++++++++---------
mm/swap_cgroup.c | 45 ++++++++++++---
mm/swapfile.c | 12 ++++
7 files changed, 180 insertions(+), 45 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index bc92c2944756..8e706590fbc1 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -406,6 +406,9 @@ static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#ifdef CONFIG_THP_SWAP
+extern void __split_huge_swap_pmd(struct vm_area_struct *vma,
+ unsigned long haddr,
+ pmd_t *pmd);
extern int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
unsigned long address, pmd_t orig_pmd);
extern int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd);
@@ -433,6 +436,12 @@ static inline bool transparent_hugepage_swapin_enabled(
return false;
}
#else /* CONFIG_THP_SWAP */
+static inline void __split_huge_swap_pmd(struct vm_area_struct *vma,
+ unsigned long haddr,
+ pmd_t *pmd)
+{
+}
+
static inline int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
unsigned long address, pmd_t orig_pmd)
{
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 5832a750baed..88677acdcff6 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -628,6 +628,7 @@ static inline swp_entry_t get_swap_page(struct page *page)
#ifdef CONFIG_THP_SWAP
extern int split_swap_cluster(swp_entry_t entry, bool force);
extern int split_swap_cluster_map(swp_entry_t entry);
+extern bool in_huge_swap_cluster(swp_entry_t entry);
#else
static inline int split_swap_cluster(swp_entry_t entry, bool force)
{
@@ -638,6 +639,11 @@ static inline int split_swap_cluster_map(swp_entry_t entry)
{
return 0;
}
+
+static inline bool in_huge_swap_cluster(swp_entry_t entry)
+{
+ return false;
+}
#endif
#ifdef CONFIG_MEMCG
diff --git a/include/linux/swap_cgroup.h b/include/linux/swap_cgroup.h
index a12dd1c3966c..c40fb52b0563 100644
--- a/include/linux/swap_cgroup.h
+++ b/include/linux/swap_cgroup.h
@@ -7,7 +7,8 @@
#ifdef CONFIG_MEMCG_SWAP
extern unsigned short swap_cgroup_cmpxchg(swp_entry_t ent,
- unsigned short old, unsigned short new);
+ unsigned short old, unsigned short new,
+ unsigned int nr_ents);
extern unsigned short swap_cgroup_record(swp_entry_t ent, unsigned short id,
unsigned int nr_ents);
extern unsigned short lookup_swap_cgroup_id(swp_entry_t ent);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fe946ca09b28..adad32d54ff2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1630,9 +1630,9 @@ int do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
}
#ifdef CONFIG_THP_SWAP
-static void __split_huge_swap_pmd(struct vm_area_struct *vma,
- unsigned long haddr,
- pmd_t *pmd)
+void __split_huge_swap_pmd(struct vm_area_struct *vma,
+ unsigned long haddr,
+ pmd_t *pmd)
{
struct mm_struct *mm = vma->vm_mm;
pgtable_t pgtable;
@@ -1834,12 +1834,6 @@ int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
count_vm_event(THP_SWPIN_FALLBACK);
goto fallback;
}
-#else
-static inline void __split_huge_swap_pmd(struct vm_area_struct *vma,
- unsigned long haddr,
- pmd_t *pmd)
-{
-}
#endif
static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 02e77c88967a..13392dbc7c9d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2408,9 +2408,10 @@ void mem_cgroup_split_huge_fixup(struct page *head)
#ifdef CONFIG_MEMCG_SWAP
/**
* mem_cgroup_move_swap_account - move swap charge and swap_cgroup's record.
- * @entry: swap entry to be moved
+ * @entry: the first swap entry to be moved
* @from: mem_cgroup which the entry is moved from
* @to: mem_cgroup which the entry is moved to
+ * @nr_ents: number of swap entries
*
* It succeeds only when the swap_cgroup's record for this entry is the same
* as the mem_cgroup's id of @from.
@@ -2421,23 +2422,27 @@ void mem_cgroup_split_huge_fixup(struct page *head)
* both res and memsw, and called css_get().
*/
static int mem_cgroup_move_swap_account(swp_entry_t entry,
- struct mem_cgroup *from, struct mem_cgroup *to)
+ struct mem_cgroup *from,
+ struct mem_cgroup *to,
+ unsigned int nr_ents)
{
unsigned short old_id, new_id;
old_id = mem_cgroup_id(from);
new_id = mem_cgroup_id(to);
- if (swap_cgroup_cmpxchg(entry, old_id, new_id) == old_id) {
- mod_memcg_state(from, MEMCG_SWAP, -1);
- mod_memcg_state(to, MEMCG_SWAP, 1);
+ if (swap_cgroup_cmpxchg(entry, old_id, new_id, nr_ents) == old_id) {
+ mod_memcg_state(from, MEMCG_SWAP, -nr_ents);
+ mod_memcg_state(to, MEMCG_SWAP, nr_ents);
return 0;
}
return -EINVAL;
}
#else
static inline int mem_cgroup_move_swap_account(swp_entry_t entry,
- struct mem_cgroup *from, struct mem_cgroup *to)
+ struct mem_cgroup *from,
+ struct mem_cgroup *to,
+ unsigned int nr_ents)
{
return -EINVAL;
}
@@ -4596,6 +4601,7 @@ enum mc_target_type {
MC_TARGET_PAGE,
MC_TARGET_SWAP,
MC_TARGET_DEVICE,
+ MC_TARGET_FALLBACK,
};
static struct page *mc_handle_present_pte(struct vm_area_struct *vma,
@@ -4662,6 +4668,34 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
}
#endif
+#ifdef CONFIG_THP_SWAP
+static struct page *mc_handle_swap_pmd(struct vm_area_struct *vma,
+ pmd_t pmd, swp_entry_t *entry)
+{
+ struct page *page = NULL;
+ swp_entry_t ent = pmd_to_swp_entry(pmd);
+
+ if (!(mc.flags & MOVE_ANON) || non_swap_entry(ent))
+ return NULL;
+
+ /*
+ * Because lookup_swap_cache() updates some statistics counter,
+ * we call find_get_page() with swapper_space directly.
+ */
+ page = find_get_page(swap_address_space(ent), swp_offset(ent));
+ if (do_memsw_account())
+ entry->val = ent.val;
+
+ return page;
+}
+#else
+static struct page *mc_handle_swap_pmd(struct vm_area_struct *vma,
+ pmd_t pmd, swp_entry_t *entry)
+{
+ return NULL;
+}
+#endif
+
static struct page *mc_handle_file_pte(struct vm_area_struct *vma,
unsigned long addr, pte_t ptent, swp_entry_t *entry)
{
@@ -4850,7 +4884,9 @@ static enum mc_target_type get_mctgt_type(struct vm_area_struct *vma,
* There is a swap entry and a page doesn't exist or isn't charged.
* But we cannot move a tail-page in a THP.
*/
- if (ent.val && !ret && (!page || !PageTransCompound(page)) &&
+ if (ent.val && !ret &&
+ ((page && !PageTransCompound(page)) ||
+ (!page && !in_huge_swap_cluster(ent))) &&
mem_cgroup_id(mc.from) == lookup_swap_cgroup_id(ent)) {
ret = MC_TARGET_SWAP;
if (target)
@@ -4861,37 +4897,65 @@ static enum mc_target_type get_mctgt_type(struct vm_area_struct *vma,
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/*
- * We don't consider PMD mapped swapping or file mapped pages because THP does
- * not support them for now.
- * Caller should make sure that pmd_trans_huge(pmd) is true.
+ * We don't consider file mapped pages because THP does not support
+ * them for now.
*/
static enum mc_target_type get_mctgt_type_thp(struct vm_area_struct *vma,
- unsigned long addr, pmd_t pmd, union mc_target *target)
+ unsigned long addr, pmd_t *pmdp, union mc_target *target)
{
+ pmd_t pmd = *pmdp;
struct page *page = NULL;
enum mc_target_type ret = MC_TARGET_NONE;
+ swp_entry_t ent = { .val = 0 };
if (unlikely(is_swap_pmd(pmd))) {
- VM_BUG_ON(thp_migration_supported() &&
- !is_pmd_migration_entry(pmd));
- return ret;
+ if (is_pmd_migration_entry(pmd)) {
+ VM_BUG_ON(!thp_migration_supported());
+ return ret;
+ }
+ page = mc_handle_swap_pmd(vma, pmd, &ent);
+ /* The swap cluster has been split under us */
+ if ((page && !PageTransHuge(page)) ||
+ (!page && ent.val && !in_huge_swap_cluster(ent))) {
+ __split_huge_swap_pmd(vma, addr, pmdp);
+ ret = MC_TARGET_FALLBACK;
+ goto out;
+ }
+ } else {
+ page = pmd_page(pmd);
+ get_page(page);
}
- page = pmd_page(pmd);
- VM_BUG_ON_PAGE(!page || !PageHead(page), page);
+ VM_BUG_ON_PAGE(page && !PageHead(page), page);
if (!(mc.flags & MOVE_ANON))
- return ret;
- if (page->mem_cgroup == mc.from) {
+ goto out;
+ if (!page && !ent.val)
+ goto out;
+ if (page && page->mem_cgroup == mc.from) {
ret = MC_TARGET_PAGE;
if (target) {
get_page(page);
target->page = page;
}
}
+ if (ent.val && !ret && !page &&
+ mem_cgroup_id(mc.from) == lookup_swap_cgroup_id(ent)) {
+ /*
+ * It is possible for the huge swap cluster to be
+ * split by others and the charge is changed, but the
+ * race window is small.
+ */
+ ret = MC_TARGET_SWAP;
+ if (target)
+ target->ent = ent;
+ }
+out:
+ if (page)
+ put_page(page);
return ret;
}
#else
static inline enum mc_target_type get_mctgt_type_thp(struct vm_area_struct *vma,
- unsigned long addr, pmd_t pmd, union mc_target *target)
+ unsigned long addr, pmd_t *pmdp, union mc_target *target)
{
return MC_TARGET_NONE;
}
@@ -4904,6 +4968,7 @@ static int mem_cgroup_count_precharge_pte_range(pmd_t *pmd,
struct vm_area_struct *vma = walk->vma;
pte_t *pte;
spinlock_t *ptl;
+ int ret;
ptl = pmd_trans_huge_lock(pmd, vma);
if (ptl) {
@@ -4912,12 +4977,16 @@ static int mem_cgroup_count_precharge_pte_range(pmd_t *pmd,
* support transparent huge page with MEMORY_DEVICE_PUBLIC or
* MEMORY_DEVICE_PRIVATE but this might change.
*/
- if (get_mctgt_type_thp(vma, addr, *pmd, NULL) == MC_TARGET_PAGE)
- mc.precharge += HPAGE_PMD_NR;
+ ret = get_mctgt_type_thp(vma, addr, pmd, NULL);
spin_unlock(ptl);
+ if (ret == MC_TARGET_FALLBACK)
+ goto fallback;
+ if (ret)
+ mc.precharge += HPAGE_PMD_NR;
return 0;
}
+fallback:
if (pmd_trans_unstable(pmd))
return 0;
pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
@@ -5108,6 +5177,7 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
enum mc_target_type target_type;
union mc_target target;
struct page *page;
+ swp_entry_t ent;
ptl = pmd_trans_huge_lock(pmd, vma);
if (ptl) {
@@ -5115,8 +5185,9 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
spin_unlock(ptl);
return 0;
}
- target_type = get_mctgt_type_thp(vma, addr, *pmd, &target);
- if (target_type == MC_TARGET_PAGE) {
+ target_type = get_mctgt_type_thp(vma, addr, pmd, &target);
+ switch (target_type) {
+ case MC_TARGET_PAGE:
page = target.page;
if (!isolate_lru_page(page)) {
if (!mem_cgroup_move_account(page, true,
@@ -5127,7 +5198,8 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
putback_lru_page(page);
}
put_page(page);
- } else if (target_type == MC_TARGET_DEVICE) {
+ break;
+ case MC_TARGET_DEVICE:
page = target.page;
if (!mem_cgroup_move_account(page, true,
mc.from, mc.to)) {
@@ -5135,9 +5207,21 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
mc.moved_charge += HPAGE_PMD_NR;
}
put_page(page);
+ break;
+ case MC_TARGET_SWAP:
+ ent = target.ent;
+ if (!mem_cgroup_move_swap_account(ent, mc.from, mc.to,
+ HPAGE_PMD_NR)) {
+ mc.precharge -= HPAGE_PMD_NR;
+ mc.moved_swap += HPAGE_PMD_NR;
+ }
+ break;
+ default:
+ break;
}
spin_unlock(ptl);
- return 0;
+ if (target_type != MC_TARGET_FALLBACK)
+ return 0;
}
if (pmd_trans_unstable(pmd))
@@ -5147,7 +5231,6 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
for (; addr != end; addr += PAGE_SIZE) {
pte_t ptent = *(pte++);
bool device = false;
- swp_entry_t ent;
if (!mc.precharge)
break;
@@ -5181,7 +5264,8 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
break;
case MC_TARGET_SWAP:
ent = target.ent;
- if (!mem_cgroup_move_swap_account(ent, mc.from, mc.to)) {
+ if (!mem_cgroup_move_swap_account(ent, mc.from,
+ mc.to, 1)) {
mc.precharge--;
/* we fixup refcnts and charges later. */
mc.moved_swap++;
diff --git a/mm/swap_cgroup.c b/mm/swap_cgroup.c
index 45affaef3bc6..ccc08e88962a 100644
--- a/mm/swap_cgroup.c
+++ b/mm/swap_cgroup.c
@@ -87,29 +87,58 @@ static struct swap_cgroup *lookup_swap_cgroup(swp_entry_t ent,
/**
* swap_cgroup_cmpxchg - cmpxchg mem_cgroup's id for this swp_entry.
- * @ent: swap entry to be cmpxchged
+ * @ent: the first swap entry to be cmpxchged
* @old: old id
* @new: new id
+ * @nr_ents: number of swap entries
*
* Returns old id at success, 0 at failure.
* (There is no mem_cgroup using 0 as its id)
*/
unsigned short swap_cgroup_cmpxchg(swp_entry_t ent,
- unsigned short old, unsigned short new)
+ unsigned short old, unsigned short new,
+ unsigned int nr_ents)
{
struct swap_cgroup_ctrl *ctrl;
- struct swap_cgroup *sc;
+ struct swap_cgroup *sc_start, *sc;
unsigned long flags;
unsigned short retval;
+ pgoff_t offset_start = swp_offset(ent), offset;
+ pgoff_t end = offset_start + nr_ents;
- sc = lookup_swap_cgroup(ent, &ctrl);
+ sc_start = lookup_swap_cgroup(ent, &ctrl);
spin_lock_irqsave(&ctrl->lock, flags);
- retval = sc->id;
- if (retval == old)
+ sc = sc_start;
+ offset = offset_start;
+ for (;;) {
+ if (sc->id != old) {
+ retval = 0;
+ goto out;
+ }
+ offset++;
+ if (offset == end)
+ break;
+ if (offset % SC_PER_PAGE)
+ sc++;
+ else
+ sc = __lookup_swap_cgroup(ctrl, offset);
+ }
+
+ sc = sc_start;
+ offset = offset_start;
+ for (;;) {
sc->id = new;
- else
- retval = 0;
+ offset++;
+ if (offset == end)
+ break;
+ if (offset % SC_PER_PAGE)
+ sc++;
+ else
+ sc = __lookup_swap_cgroup(ctrl, offset);
+ }
+ retval = old;
+out:
spin_unlock_irqrestore(&ctrl->lock, flags);
return retval;
}
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 34e64f3570c3..11e7243799a1 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1683,6 +1683,18 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
return map_swapcount;
}
+
+bool in_huge_swap_cluster(swp_entry_t entry)
+{
+ struct swap_info_struct *si;
+ struct swap_cluster_info *ci;
+
+ si = _swap_info_get(entry);
+ if (!si || !si->cluster_info)
+ return false;
+ ci = si->cluster_info + swp_offset(entry) / SWAPFILE_CLUSTER;
+ return cluster_is_huge(ci);
+}
#else
#define swap_page_trans_huge_swapped(si, entry) swap_swapcount(si, entry, NULL)
#define page_swapped(page) (page_swapcount(page) != 0)
--
2.16.4
From: Huang Ying <[email protected]>
When splitting a THP in the swap cache, or when failing to allocate a
THP while swapping in a huge swap cluster, the huge swap cluster will
be split. In addition to clearing the huge flag of the swap cluster,
the PMD swap mapping count recorded in cluster_count() will be set to
0. But we will not touch the PMD swap mappings themselves, because it
is sometimes hard to find them all. When the PMD swap mappings are
operated on later, it will be found that the huge swap cluster has
been split, and the PMD swap mappings will be split at that time.
Unless splitting a THP in the swap cache (specified via the "force"
parameter), split_swap_cluster() will return -EEXIST if the
SWAP_HAS_CACHE flag is set in swap_map[offset], because this
indicates that a THP corresponds to this huge swap cluster, and it
isn't desirable to split that THP.
When splitting a THP in the swap cache, the call to
split_swap_cluster() is moved to before unlocking the sub-pages, so
that all sub-pages are kept locked from when the THP is split until
the huge swap cluster is split. This makes the code much easier to
reason about.
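The return logic of the new split_swap_cluster() can be sketched in userspace; `split_cluster_model` and `TOY_EEXIST` are illustrative names, and locking, cluster_count() handling, and swap-device lookup are omitted:

```c
#include <assert.h>
#include <stdbool.h>

#define TOY_EEXIST 17	/* stands in for the kernel's EEXIST */

/*
 * Models split_swap_cluster(entry, force): a cluster already split
 * by someone else is a no-op success; a huge cluster still backed by
 * a THP in the swap cache (SWAP_HAS_CACHE) is refused with -EEXIST
 * unless 'force' is set; otherwise the huge flag is cleared.
 */
static int split_cluster_model(bool huge, bool has_cache, bool force,
			       bool *split_out)
{
	*split_out = false;
	if (!huge)
		return 0;		/* split under us already */
	if (!force && has_cache)
		return -TOY_EEXIST;	/* keep the THP whole */
	*split_out = true;		/* clear the huge flag */
	return 0;
}
```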
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
include/linux/swap.h | 4 ++--
mm/huge_memory.c | 18 ++++++++++++------
mm/swapfile.c | 45 ++++++++++++++++++++++++++++++---------------
3 files changed, 44 insertions(+), 23 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index bb9de2cb952a..878f132dabc0 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -617,10 +617,10 @@ static inline swp_entry_t get_swap_page(struct page *page)
#endif /* CONFIG_SWAP */
#ifdef CONFIG_THP_SWAP
-extern int split_swap_cluster(swp_entry_t entry);
+extern int split_swap_cluster(swp_entry_t entry, bool force);
extern int split_swap_cluster_map(swp_entry_t entry);
#else
-static inline int split_swap_cluster(swp_entry_t entry)
+static inline int split_swap_cluster(swp_entry_t entry, bool force)
{
return 0;
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2d615328d77f..586d8693b8af 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2502,6 +2502,17 @@ static void __split_huge_page(struct page *page, struct list_head *list,
unfreeze_page(head);
+ /*
+	 * Split the swap cluster before unlocking sub-pages, so that
+	 * all sub-pages are kept locked from when the THP is split
+	 * until the swap cluster is split.
+ */
+ if (PageSwapCache(head)) {
+ swp_entry_t entry = { .val = page_private(head) };
+
+ split_swap_cluster(entry, true);
+ }
+
for (i = 0; i < HPAGE_PMD_NR; i++) {
struct page *subpage = head + i;
if (subpage == page)
@@ -2728,12 +2739,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
__dec_node_page_state(page, NR_SHMEM_THPS);
spin_unlock(&pgdata->split_queue_lock);
__split_huge_page(page, list, flags);
- if (PageSwapCache(head)) {
- swp_entry_t entry = { .val = page_private(head) };
-
- ret = split_swap_cluster(entry);
- } else
- ret = 0;
+ ret = 0;
} else {
if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
pr_alert("total_mapcount: %u, page_count(): %u\n",
diff --git a/mm/swapfile.c b/mm/swapfile.c
index a0141307f3ac..5ff2da89b77c 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1410,21 +1410,6 @@ static void swapcache_free_cluster(swp_entry_t entry)
}
}
}
-
-int split_swap_cluster(swp_entry_t entry)
-{
- struct swap_info_struct *si;
- struct swap_cluster_info *ci;
- unsigned long offset = swp_offset(entry);
-
- si = _swap_info_get(entry);
- if (!si)
- return -EBUSY;
- ci = lock_cluster(si, offset);
- cluster_clear_huge(ci);
- unlock_cluster(ci);
- return 0;
-}
#else
static inline void swapcache_free_cluster(swp_entry_t entry)
{
@@ -4069,6 +4054,36 @@ int split_swap_cluster_map(swp_entry_t entry)
unlock_cluster(ci);
return 0;
}
+
+int split_swap_cluster(swp_entry_t entry, bool force)
+{
+ struct swap_info_struct *si;
+ struct swap_cluster_info *ci;
+ unsigned long offset = swp_offset(entry);
+ int ret = 0;
+
+ si = get_swap_device(entry);
+ if (!si)
+ return -EINVAL;
+ ci = lock_cluster(si, offset);
+ /* The swap cluster has been split by someone else */
+ if (!cluster_is_huge(ci))
+ goto out;
+ VM_BUG_ON(!is_cluster_offset(offset));
+ VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER);
+	/* If not forced, don't split a swap cluster that has swap cache */
+ if (!force && si->swap_map[offset] & SWAP_HAS_CACHE) {
+ ret = -EEXIST;
+ goto out;
+ }
+ cluster_set_count(ci, SWAPFILE_CLUSTER);
+ cluster_clear_huge(ci);
+
+out:
+ unlock_cluster(ci);
+ put_swap_device(si);
+ return ret;
+}
#endif
static int __init swapfile_init(void)
--
2.16.4
From: Huang Ying <[email protected]>
When madvise_free() finds a PMD swap mapping, if only part of the huge
swap cluster is operated on, the PMD swap mapping will be split and
processing falls back to the PTE swap mapping path. Otherwise, if the
whole huge swap cluster is operated on, free_swap_and_cache() will be
called to decrease the PMD swap mapping count and possibly free the
swap space and the THP in the swap cache too.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
mm/huge_memory.c | 50 +++++++++++++++++++++++++++++++++++---------------
mm/madvise.c | 2 +-
2 files changed, 36 insertions(+), 16 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 73fc77633642..fe946ca09b28 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1842,6 +1842,15 @@ static inline void __split_huge_swap_pmd(struct vm_area_struct *vma,
}
#endif
+static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
+{
+ pgtable_t pgtable;
+
+ pgtable = pgtable_trans_huge_withdraw(mm, pmd);
+ pte_free(mm, pgtable);
+ mm_dec_nr_ptes(mm);
+}
+
/*
* Return true if we do MADV_FREE successfully on entire pmd page.
* Otherwise, return false.
@@ -1862,15 +1871,35 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
goto out_unlocked;
orig_pmd = *pmd;
- if (is_huge_zero_pmd(orig_pmd))
- goto out;
-
if (unlikely(!pmd_present(orig_pmd))) {
- VM_BUG_ON(thp_migration_supported() &&
- !is_pmd_migration_entry(orig_pmd));
- goto out;
+ swp_entry_t entry = pmd_to_swp_entry(orig_pmd);
+
+ if (is_migration_entry(entry)) {
+ VM_BUG_ON(!thp_migration_supported());
+ goto out;
+ } else if (thp_swap_supported() && !non_swap_entry(entry)) {
+ /* If part of THP is discarded */
+ if (next - addr != HPAGE_PMD_SIZE) {
+ unsigned long haddr = addr & HPAGE_PMD_MASK;
+
+ __split_huge_swap_pmd(vma, haddr, pmd);
+ goto out;
+ }
+ free_swap_and_cache(entry, true);
+ pmd_clear(pmd);
+ zap_deposited_table(mm, pmd);
+ if (current->mm == mm)
+ sync_mm_rss(mm);
+ add_mm_counter(mm, MM_SWAPENTS, -HPAGE_PMD_NR);
+ ret = true;
+ goto out;
+ } else
+ VM_BUG_ON(1);
}
+ if (is_huge_zero_pmd(orig_pmd))
+ goto out;
+
page = pmd_page(orig_pmd);
/*
* If other processes are mapping this page, we couldn't discard
@@ -1916,15 +1945,6 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
return ret;
}
-static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
-{
- pgtable_t pgtable;
-
- pgtable = pgtable_trans_huge_withdraw(mm, pmd);
- pte_free(mm, pgtable);
- mm_dec_nr_ptes(mm);
-}
-
int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
pmd_t *pmd, unsigned long addr)
{
diff --git a/mm/madvise.c b/mm/madvise.c
index a96abb915e22..fc423d96d4d6 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -321,7 +321,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
unsigned long next;
next = pmd_addr_end(addr, end);
- if (pmd_trans_huge(*pmd))
+ if (pmd_trans_huge(*pmd) || is_swap_pmd(*pmd))
if (madvise_free_huge_pmd(tlb, vma, pmd, addr, next))
goto next;
--
2.16.4
From: Huang Ying <[email protected]>
During MADV_WILLNEED, for a PMD swap mapping, if THP swapin is enabled
for the VMA, the whole swap cluster will be swapped in. Otherwise, the
huge swap cluster and the PMD swap mapping will be split, falling back
to PTE swap mapping processing.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
mm/madvise.c | 26 ++++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/mm/madvise.c b/mm/madvise.c
index fc423d96d4d6..6c7a96357626 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -196,14 +196,36 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
pte_t *orig_pte;
struct vm_area_struct *vma = walk->private;
unsigned long index;
+ swp_entry_t entry;
+ struct page *page;
+ pmd_t pmdval;
+
+ pmdval = *pmd;
+ if (thp_swap_supported() && is_swap_pmd(pmdval) &&
+ !is_pmd_migration_entry(pmdval)) {
+ entry = pmd_to_swp_entry(pmdval);
+ if (!transparent_hugepage_swapin_enabled(vma)) {
+ if (!split_swap_cluster(entry, false))
+ split_huge_swap_pmd(vma, pmd, start, pmdval);
+ } else {
+ page = read_swap_cache_async(entry,
+ GFP_HIGHUSER_MOVABLE,
+ vma, start, false);
+ /* The swap cluster has been split under us */
+ if (page) {
+ if (!PageTransHuge(page))
+ split_huge_swap_pmd(vma, pmd, start,
+ pmdval);
+ put_page(page);
+ }
+ }
+ }
if (pmd_none_or_trans_huge_or_clear_bad(pmd))
return 0;
for (index = start; index != end; index += PAGE_SIZE) {
pte_t pte;
- swp_entry_t entry;
- struct page *page;
spinlock_t *ptl;
orig_pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
--
2.16.4
From: Huang Ying <[email protected]>
During swapoff, for a huge swap cluster, we need to allocate a THP,
read the cluster's contents into the THP, and unuse the PMD and PTE
swap mappings to it. If allocating a THP fails, the huge swap cluster
will be split.
During unuse, if it is found that the swap cluster mapped by a PMD
swap mapping has already been split, we will split the PMD swap
mapping and unuse the PTEs.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
include/asm-generic/pgtable.h | 15 ++------
include/linux/huge_mm.h | 8 ++++
mm/huge_memory.c | 4 +-
mm/swapfile.c | 86 ++++++++++++++++++++++++++++++++++++++++++-
4 files changed, 98 insertions(+), 15 deletions(-)
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index bb8354981a36..caa381962cd2 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -931,22 +931,13 @@ static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
barrier();
#endif
/*
- * !pmd_present() checks for pmd migration entries
- *
- * The complete check uses is_pmd_migration_entry() in linux/swapops.h
- * But using that requires moving current function and pmd_trans_unstable()
- * to linux/swapops.h to resovle dependency, which is too much code move.
- *
- * !pmd_present() is equivalent to is_pmd_migration_entry() currently,
- * because !pmd_present() pages can only be under migration not swapped
- * out.
- *
- * pmd_none() is preseved for future condition checks on pmd migration
+	 * pmd_none() is preserved for future condition checks on pmd swap
* entries and not confusing with this function name, although it is
* redundant with !pmd_present().
*/
if (pmd_none(pmdval) || pmd_trans_huge(pmdval) ||
- (IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION) && !pmd_present(pmdval)))
+ ((IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION) ||
+ IS_ENABLED(CONFIG_THP_SWAP)) && !pmd_present(pmdval)))
return 1;
if (unlikely(pmd_bad(pmdval))) {
pmd_clear_bad(pmd);
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 7931fa888f11..bc92c2944756 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -406,6 +406,8 @@ static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#ifdef CONFIG_THP_SWAP
+extern int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ unsigned long address, pmd_t orig_pmd);
extern int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd);
static inline bool transparent_hugepage_swapin_enabled(
@@ -431,6 +433,12 @@ static inline bool transparent_hugepage_swapin_enabled(
return false;
}
#else /* CONFIG_THP_SWAP */
+static inline int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ unsigned long address, pmd_t orig_pmd)
+{
+ return 0;
+}
+
static inline int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
{
return 0;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index da42d1cdc26a..73fc77633642 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1663,8 +1663,8 @@ static void __split_huge_swap_pmd(struct vm_area_struct *vma,
pmd_populate(mm, pmd, pgtable);
}
-static int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
- unsigned long address, pmd_t orig_pmd)
+int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ unsigned long address, pmd_t orig_pmd)
{
struct mm_struct *mm = vma->vm_mm;
spinlock_t *ptl;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index e1e43654407c..34e64f3570c3 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1933,6 +1933,11 @@ static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
return pte_same(pte_swp_clear_soft_dirty(pte), swp_pte);
}
+static inline int pmd_same_as_swp(pmd_t pmd, pmd_t swp_pmd)
+{
+ return pmd_same(pmd_swp_clear_soft_dirty(pmd), swp_pmd);
+}
+
/*
* No need to decide whether this PTE shares the swap entry with others,
* just let do_wp_page work it out if a write is requested later - to
@@ -1994,6 +1999,57 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
return ret;
}
+#ifdef CONFIG_THP_SWAP
+static int unuse_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ unsigned long addr, swp_entry_t entry, struct page *page)
+{
+ struct mem_cgroup *memcg;
+ struct swap_info_struct *si;
+ spinlock_t *ptl;
+ int ret = 1;
+
+ if (mem_cgroup_try_charge(page, vma->vm_mm, GFP_KERNEL,
+ &memcg, true)) {
+ ret = -ENOMEM;
+ goto out_nolock;
+ }
+
+ ptl = pmd_lock(vma->vm_mm, pmd);
+ if (unlikely(!pmd_same_as_swp(*pmd, swp_entry_to_pmd(entry)))) {
+ mem_cgroup_cancel_charge(page, memcg, true);
+ ret = 0;
+ goto out;
+ }
+
+ add_mm_counter(vma->vm_mm, MM_SWAPENTS, -HPAGE_PMD_NR);
+ add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ get_page(page);
+ set_pmd_at(vma->vm_mm, addr, pmd,
+ pmd_mkold(mk_huge_pmd(page, vma->vm_page_prot)));
+ page_add_anon_rmap(page, vma, addr, true);
+ mem_cgroup_commit_charge(page, memcg, true, true);
+ si = _swap_info_get(entry);
+ if (si)
+ swap_free_cluster(si, entry);
+ /*
+ * Move the page to the active list so it is not
+ * immediately swapped out again after swapon.
+ */
+ activate_page(page);
+out:
+ spin_unlock(ptl);
+out_nolock:
+ return ret;
+}
+#else
+static inline int unuse_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ unsigned long addr, swp_entry_t entry,
+ struct page *page)
+{
+ return 0;
+}
+#endif
+
static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
unsigned long addr, unsigned long end,
swp_entry_t entry, struct page *page)
@@ -2034,7 +2090,7 @@ static inline int unuse_pmd_range(struct vm_area_struct *vma, pud_t *pud,
unsigned long addr, unsigned long end,
swp_entry_t entry, struct page *page)
{
- pmd_t *pmd;
+ pmd_t swp_pmd = swp_entry_to_pmd(entry), *pmd, orig_pmd;
unsigned long next;
int ret;
@@ -2042,6 +2098,24 @@ static inline int unuse_pmd_range(struct vm_area_struct *vma, pud_t *pud,
do {
cond_resched();
next = pmd_addr_end(addr, end);
+ orig_pmd = *pmd;
+ if (thp_swap_supported() && is_swap_pmd(orig_pmd)) {
+ if (likely(!pmd_same_as_swp(orig_pmd, swp_pmd)))
+ continue;
+ /* Huge cluster has been split already */
+ if (!PageTransCompound(page)) {
+ ret = split_huge_swap_pmd(vma, pmd,
+ addr, orig_pmd);
+ if (ret)
+ return ret;
+ ret = unuse_pte_range(vma, pmd, addr,
+ next, entry, page);
+ } else
+ ret = unuse_pmd(vma, pmd, addr, entry, page);
+ if (ret)
+ return ret;
+ continue;
+ }
if (pmd_none_or_trans_huge_or_clear_bad(pmd))
continue;
ret = unuse_pte_range(vma, pmd, addr, next, entry, page);
@@ -2206,6 +2280,7 @@ int try_to_unuse(unsigned int type, bool frontswap,
* to prevent compiler doing
* something odd.
*/
+ struct swap_cluster_info *ci = NULL;
unsigned char swcount;
struct page *page;
swp_entry_t entry;
@@ -2235,6 +2310,7 @@ int try_to_unuse(unsigned int type, bool frontswap,
* there are races when an instance of an entry might be missed.
*/
while ((i = find_next_to_unuse(si, i, frontswap)) != 0) {
+retry:
if (signal_pending(current)) {
retval = -EINTR;
break;
@@ -2246,6 +2322,8 @@ int try_to_unuse(unsigned int type, bool frontswap,
* page and read the swap into it.
*/
swap_map = &si->swap_map[i];
+ if (si->cluster_info)
+ ci = si->cluster_info + i / SWAPFILE_CLUSTER;
entry = swp_entry(type, i);
page = read_swap_cache_async(entry,
GFP_HIGHUSER_MOVABLE, NULL, 0, false);
@@ -2266,6 +2344,12 @@ int try_to_unuse(unsigned int type, bool frontswap,
*/
if (!swcount || swcount == SWAP_MAP_BAD)
continue;
+ /* Split huge cluster if failed to allocate huge page */
+ if (thp_swap_supported() && cluster_is_huge(ci)) {
+ retval = split_swap_cluster(entry, false);
+ if (!retval || retval == -EEXIST)
+ goto retry;
+ }
retval = -ENOMEM;
break;
}
--
2.16.4
From: Huang Ying <[email protected]>
Two new /proc/vmstat fields, "thp_swpin" and "thp_swpin_fallback",
are added to count swapping in a THP from the swap device in one
piece, and falling back to normal page swapin, respectively.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
Documentation/admin-guide/mm/transhuge.rst | 8 ++++++++
include/linux/vm_event_item.h | 2 ++
mm/huge_memory.c | 4 +++-
mm/page_io.c | 15 ++++++++++++---
mm/vmstat.c | 2 ++
5 files changed, 27 insertions(+), 4 deletions(-)
diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 7ab93a8404b9..85e33f785fd7 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -364,6 +364,14 @@ thp_swpout_fallback
Usually because failed to allocate some continuous swap space
for the huge page.
+thp_swpin
+	is incremented every time a huge page is swapped in in one piece
+	without splitting.
+
+thp_swpin_fallback
+	is incremented if a huge page has to be split during swapin,
+	usually because a huge page could not be allocated.
+
As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in ``/proc/vmstat`` to help
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 5c7f010676a7..7b438548a78e 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -88,6 +88,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
THP_ZERO_PAGE_ALLOC_FAILED,
THP_SWPOUT,
THP_SWPOUT_FALLBACK,
+ THP_SWPIN,
+ THP_SWPIN_FALLBACK,
#endif
#ifdef CONFIG_MEMORY_BALLOON
BALLOON_INFLATE,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ac79ae2ab257..8d49a11e83ee 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1664,8 +1664,10 @@ int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
/* swapoff occurs under us */
} else if (ret == -EINVAL)
ret = 0;
- else
+ else {
+ count_vm_event(THP_SWPIN_FALLBACK);
goto fallback;
+ }
}
delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
goto out;
diff --git a/mm/page_io.c b/mm/page_io.c
index b41cf9644585..96277058681e 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -347,6 +347,15 @@ int __swap_writepage(struct page *page, struct writeback_control *wbc,
return ret;
}
+static inline void count_swpin_vm_event(struct page *page)
+{
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ if (unlikely(PageTransHuge(page)))
+ count_vm_event(THP_SWPIN);
+#endif
+ count_vm_events(PSWPIN, hpage_nr_pages(page));
+}
+
int swap_readpage(struct page *page, bool synchronous)
{
struct bio *bio;
@@ -370,7 +379,7 @@ int swap_readpage(struct page *page, bool synchronous)
ret = mapping->a_ops->readpage(swap_file, page);
if (!ret)
- count_vm_event(PSWPIN);
+ count_swpin_vm_event(page);
return ret;
}
@@ -381,7 +390,7 @@ int swap_readpage(struct page *page, bool synchronous)
unlock_page(page);
}
- count_vm_event(PSWPIN);
+ count_swpin_vm_event(page);
return 0;
}
@@ -400,7 +409,7 @@ int swap_readpage(struct page *page, bool synchronous)
get_task_struct(current);
bio->bi_private = current;
bio_set_op_attrs(bio, REQ_OP_READ, 0);
- count_vm_event(PSWPIN);
+ count_swpin_vm_event(page);
bio_get(bio);
qc = submit_bio(bio);
while (synchronous) {
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 75eda9c2b260..259c7bddbb6e 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1263,6 +1263,8 @@ const char * const vmstat_text[] = {
"thp_zero_page_alloc_failed",
"thp_swpout",
"thp_swpout_fallback",
+ "thp_swpin",
+ "thp_swpin_fallback",
#endif
#ifdef CONFIG_MEMORY_BALLOON
"balloon_inflate",
--
2.16.4
From: Huang Ying <[email protected]>
With this patch, when the page fault handler finds a PMD swap mapping,
it will swap in a THP as a whole. This avoids the overhead of
splitting/collapsing before/after THP swapping, and greatly improves
swap performance by reducing the page fault count, etc.
do_huge_pmd_swap_page() is added in the patch to implement this. It
is similar to do_swap_page() for normal page swapin.
If allocating a THP fails, the huge swap cluster and the PMD swap
mapping will be split to fall back to normal page swapin.
If the huge swap cluster has already been split, the PMD swap mapping
will be split to fall back to normal page swapin.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
include/linux/huge_mm.h | 9 +++
include/linux/swap.h | 9 +++
mm/huge_memory.c | 170 ++++++++++++++++++++++++++++++++++++++++++++++++
mm/memory.c | 16 +++--
4 files changed, 198 insertions(+), 6 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index c5b8af173f67..42117b75de2d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -403,4 +403,13 @@ static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+#ifdef CONFIG_THP_SWAP
+extern int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd);
+#else /* CONFIG_THP_SWAP */
+static inline int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
+{
+ return 0;
+}
+#endif /* CONFIG_THP_SWAP */
+
#endif /* _LINUX_HUGE_MM_H */
diff --git a/include/linux/swap.h b/include/linux/swap.h
index d2e017dd7bbd..5832a750baed 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -560,6 +560,15 @@ static inline struct page *lookup_swap_cache(swp_entry_t swp,
return NULL;
}
+static inline struct page *read_swap_cache_async(swp_entry_t swp,
+						 gfp_t gfp_mask,
+ struct vm_area_struct *vma,
+ unsigned long addr,
+ bool do_poll)
+{
+ return NULL;
+}
+
static inline int add_to_swap(struct page *page)
{
return 0;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 275a4e616ec9..ac79ae2ab257 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -33,6 +33,8 @@
#include <linux/page_idle.h>
#include <linux/shmem_fs.h>
#include <linux/oom.h>
+#include <linux/delayacct.h>
+#include <linux/swap.h>
#include <asm/tlb.h>
#include <asm/pgalloc.h>
@@ -1609,6 +1611,174 @@ static void __split_huge_swap_pmd(struct vm_area_struct *vma,
smp_wmb(); /* make pte visible before pmd */
pmd_populate(mm, pmd, pgtable);
}
+
+static int split_huge_swap_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ unsigned long address, pmd_t orig_pmd)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ spinlock_t *ptl;
+ int ret = 0;
+
+ ptl = pmd_lock(mm, pmd);
+ if (pmd_same(*pmd, orig_pmd))
+ __split_huge_swap_pmd(vma, address & HPAGE_PMD_MASK, pmd);
+ else
+ ret = -ENOENT;
+ spin_unlock(ptl);
+
+ return ret;
+}
+
+int do_huge_pmd_swap_page(struct vm_fault *vmf, pmd_t orig_pmd)
+{
+ struct page *page;
+ struct mem_cgroup *memcg;
+ struct vm_area_struct *vma = vmf->vma;
+ unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
+ swp_entry_t entry;
+ pmd_t pmd;
+ int i, locked, exclusive = 0, ret = 0;
+
+ entry = pmd_to_swp_entry(orig_pmd);
+ VM_BUG_ON(non_swap_entry(entry));
+ delayacct_set_flag(DELAYACCT_PF_SWAPIN);
+retry:
+ page = lookup_swap_cache(entry, NULL, vmf->address);
+ if (!page) {
+ page = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE, vma,
+ haddr, false);
+ if (!page) {
+ /*
+ * Back out if somebody else faulted in this pmd
+ * while we released the pmd lock.
+ */
+ if (likely(pmd_same(*vmf->pmd, orig_pmd))) {
+ ret = split_swap_cluster(entry, false);
+ /*
+ * Retry if somebody else swap in the swap
+ * entry
+ */
+ if (ret == -EEXIST) {
+ ret = 0;
+ goto retry;
+ /* swapoff occurs under us */
+ } else if (ret == -EINVAL)
+ ret = 0;
+ else
+ goto fallback;
+ }
+ delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+ goto out;
+ }
+
+ /* Had to read the page from swap area: Major fault */
+ ret = VM_FAULT_MAJOR;
+ count_vm_event(PGMAJFAULT);
+ count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
+ } else if (!PageTransCompound(page))
+ goto fallback;
+
+ locked = lock_page_or_retry(page, vma->vm_mm, vmf->flags);
+
+ delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+ if (!locked) {
+ ret |= VM_FAULT_RETRY;
+ goto out_release;
+ }
+
+ /*
+ * Make sure try_to_free_swap or reuse_swap_page or swapoff did not
+ * release the swapcache from under us. The page pin, and pmd_same
+ * test below, are not enough to exclude that. Even if it is still
+ * swapcache, we need to check that the page's swap has not changed.
+ */
+ if (unlikely(!PageSwapCache(page) || page_private(page) != entry.val))
+ goto out_page;
+
+ if (mem_cgroup_try_charge(page, vma->vm_mm, GFP_KERNEL,
+ &memcg, true)) {
+ ret = VM_FAULT_OOM;
+ goto out_page;
+ }
+
+ /*
+ * Back out if somebody else already faulted in this pmd.
+ */
+ vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd);
+ spin_lock(vmf->ptl);
+ if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
+ goto out_nomap;
+
+ if (unlikely(!PageUptodate(page))) {
+ ret = VM_FAULT_SIGBUS;
+ goto out_nomap;
+ }
+
+ /*
+ * The page isn't present yet, go ahead with the fault.
+ *
+ * Be careful about the sequence of operations here.
+ * To get its accounting right, reuse_swap_page() must be called
+ * while the page is counted on swap but not yet in mapcount i.e.
+ * before page_add_anon_rmap() and swap_free(); try_to_free_swap()
+ * must be called after the swap_free(), or it will never succeed.
+ */
+
+ add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ add_mm_counter(vma->vm_mm, MM_SWAPENTS, -HPAGE_PMD_NR);
+ pmd = mk_huge_pmd(page, vma->vm_page_prot);
+ if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
+ pmd = maybe_pmd_mkwrite(pmd_mkdirty(pmd), vma);
+ vmf->flags &= ~FAULT_FLAG_WRITE;
+ ret |= VM_FAULT_WRITE;
+ exclusive = RMAP_EXCLUSIVE;
+ }
+ for (i = 0; i < HPAGE_PMD_NR; i++)
+ flush_icache_page(vma, page + i);
+ if (pmd_swp_soft_dirty(orig_pmd))
+ pmd = pmd_mksoft_dirty(pmd);
+ do_page_add_anon_rmap(page, vma, haddr,
+ exclusive | RMAP_COMPOUND);
+ mem_cgroup_commit_charge(page, memcg, true, true);
+ activate_page(page);
+ set_pmd_at(vma->vm_mm, haddr, vmf->pmd, pmd);
+
+ swap_free(entry, true);
+ if (mem_cgroup_swap_full(page) ||
+ (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
+ try_to_free_swap(page);
+ unlock_page(page);
+
+ if (vmf->flags & FAULT_FLAG_WRITE) {
+ ret |= do_huge_pmd_wp_page(vmf, pmd);
+ if (ret & VM_FAULT_ERROR)
+ ret &= VM_FAULT_ERROR;
+ goto out;
+ }
+
+ /* No need to invalidate - it was non-present before */
+ update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
+ spin_unlock(vmf->ptl);
+out:
+ return ret;
+out_nomap:
+ mem_cgroup_cancel_charge(page, memcg, true);
+ spin_unlock(vmf->ptl);
+out_page:
+ unlock_page(page);
+out_release:
+ put_page(page);
+ return ret;
+fallback:
+ delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+ if (!split_huge_swap_pmd(vmf->vma, vmf->pmd, vmf->address, orig_pmd))
+ ret = VM_FAULT_FALLBACK;
+ else
+ ret = 0;
+ if (page)
+ put_page(page);
+ return ret;
+}
#else
static inline void __split_huge_swap_pmd(struct vm_area_struct *vma,
unsigned long haddr,
diff --git a/mm/memory.c b/mm/memory.c
index 55e278bb59ee..2125035b6a70 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4072,13 +4072,17 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
barrier();
if (unlikely(is_swap_pmd(orig_pmd))) {
- VM_BUG_ON(thp_migration_supported() &&
- !is_pmd_migration_entry(orig_pmd));
- if (is_pmd_migration_entry(orig_pmd))
+ if (thp_migration_supported() &&
+ is_pmd_migration_entry(orig_pmd)) {
pmd_migration_entry_wait(mm, vmf.pmd);
- return 0;
- }
- if (pmd_trans_huge(orig_pmd) || pmd_devmap(orig_pmd)) {
+ return 0;
+ } else if (thp_swap_supported()) {
+ ret = do_huge_pmd_swap_page(&vmf, orig_pmd);
+ if (!(ret & VM_FAULT_FALLBACK))
+ return ret;
+ } else
+ VM_BUG_ON(1);
+ } else if (pmd_trans_huge(orig_pmd) || pmd_devmap(orig_pmd)) {
if (pmd_protnone(orig_pmd) && vma_is_accessible(vma))
return do_huge_pmd_numa_page(&vmf, orig_pmd);
--
2.16.4
From: Huang Ying <[email protected]>
To support swapping in a THP as a whole, we need to create PMD swap
mappings during swapout and maintain the PMD swap mapping count. This
patch implements the support to increase the PMD swap mapping
count (for swapout, fork, etc.) and to set the SWAP_HAS_CACHE
flag (for swapin, etc.) for a huge swap cluster in the
swap_duplicate() function family. Although it implements only part of
the design of swap reference counting with PMD swap mappings, the
whole design is described below to make it easier to understand the
patch and the whole picture.
A huge swap cluster is used to hold the contents of a swapped-out THP.
After swapout, a PMD page mapping to the THP becomes a PMD swap
mapping to the huge swap cluster via a swap entry in the PMD, while a
PTE page mapping to a subpage of the THP becomes a PTE swap mapping to
a swap slot in the huge swap cluster via a swap entry in the PTE.
If there is no PMD swap mapping and the corresponding THP is removed
from the page cache (reclaimed), the huge swap cluster will be split
and become a normal swap cluster.
The count (cluster_count()) of the huge swap cluster is
SWAPFILE_CLUSTER (= HPAGE_PMD_NR) + the PMD swap mapping count.
Because all swap slots in the huge swap cluster are mapped by PTE or
PMD, or have the SWAP_HAS_CACHE bit set, the usage count of the swap
cluster is HPAGE_PMD_NR. The PMD swap mapping count is recorded too,
to make it easy to determine whether there are remaining PMD swap
mappings.
The count in swap_map[offset] is the sum of the PTE and PMD swap
mapping counts. This means that when we increase the PMD swap mapping
count, we need to increase swap_map[offset] for all swap slots inside
the swap cluster. An alternative would be to make swap_map[offset]
record the PTE swap mapping count only, given that we have already
recorded the PMD swap mapping count in the count of the huge swap
cluster. But that would require increasing swap_map[offset] when
splitting the PMD swap mapping, which may fail because of memory
allocation for swap count continuation. That is hard to deal with,
so we chose the current solution.
The PMD swap mapping to a huge swap cluster may be split when
unmapping part of the PMD mapping, etc. That is easy, because only
the count of the huge swap cluster needs to be changed. When the last
PMD swap mapping is gone and SWAP_HAS_CACHE is unset, we will split
the huge swap cluster (clear the huge flag). This makes it easy to
reason about the cluster state.
A huge swap cluster will be split when splitting a THP in the swap
cache, failing to allocate a THP during swapin, etc. But when
splitting the huge swap cluster, we will not try to split all PMD
swap mappings, because sometimes we don't have enough information to
do so. Later, when the PMD swap mapping is duplicated, swapped in,
etc., the PMD swap mapping will be split, falling back to the PTE
operation.
When a THP is added into the swap cache, the SWAP_HAS_CACHE flag will
be set in swap_map[offset] of all swap slots inside the huge swap
cluster backing the THP. This huge swap cluster will not be split
unless the THP is split, even if its PMD swap mapping count drops to
0. Later, when the THP is removed from the swap cache, the
SWAP_HAS_CACHE flag will be cleared in swap_map[offset] of all swap
slots inside the huge swap cluster, and the huge swap cluster will be
split if its PMD swap mapping count is 0.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
include/linux/huge_mm.h | 5 +
include/linux/swap.h | 9 +-
mm/memory.c | 2 +-
mm/rmap.c | 2 +-
mm/swap_state.c | 2 +-
mm/swapfile.c | 287 +++++++++++++++++++++++++++++++++---------------
6 files changed, 214 insertions(+), 93 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index d3bbf6bea9e9..213d32e57c39 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -80,6 +80,11 @@ extern struct kobj_attribute shmem_enabled_attr;
#define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
#define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
+static inline bool thp_swap_supported(void)
+{
+ return IS_ENABLED(CONFIG_THP_SWAP);
+}
+
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define HPAGE_PMD_SHIFT PMD_SHIFT
#define HPAGE_PMD_SIZE ((1UL) << HPAGE_PMD_SHIFT)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index f73eafcaf4e9..57aa655ab27d 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -451,8 +451,8 @@ extern swp_entry_t get_swap_page_of_type(int);
extern int get_swap_pages(int n, bool cluster, swp_entry_t swp_entries[]);
extern int add_swap_count_continuation(swp_entry_t, gfp_t);
extern void swap_shmem_alloc(swp_entry_t);
-extern int swap_duplicate(swp_entry_t);
-extern int swapcache_prepare(swp_entry_t);
+extern int swap_duplicate(swp_entry_t *entry, bool cluster);
+extern int swapcache_prepare(swp_entry_t entry, bool cluster);
extern void swap_free(swp_entry_t);
extern void swapcache_free_entries(swp_entry_t *entries, int n);
extern int free_swap_and_cache(swp_entry_t);
@@ -510,7 +510,8 @@ static inline void show_swap_cache_info(void)
}
#define free_swap_and_cache(e) ({(is_migration_entry(e) || is_device_private_entry(e));})
-#define swapcache_prepare(e) ({(is_migration_entry(e) || is_device_private_entry(e));})
+#define swapcache_prepare(e, c) \
+ ({(is_migration_entry(e) || is_device_private_entry(e)); })
static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_mask)
{
@@ -521,7 +522,7 @@ static inline void swap_shmem_alloc(swp_entry_t swp)
{
}
-static inline int swap_duplicate(swp_entry_t swp)
+static inline int swap_duplicate(swp_entry_t *swp, bool cluster)
{
return 0;
}
diff --git a/mm/memory.c b/mm/memory.c
index e9cac1c4fa69..f3900282e3da 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -951,7 +951,7 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
swp_entry_t entry = pte_to_swp_entry(pte);
if (likely(!non_swap_entry(entry))) {
- if (swap_duplicate(entry) < 0)
+ if (swap_duplicate(&entry, false) < 0)
return entry.val;
/* make sure dst_mm is on swapoff's mmlist. */
diff --git a/mm/rmap.c b/mm/rmap.c
index 6db729dc4c50..5f45d6325c40 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1556,7 +1556,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
break;
}
- if (swap_duplicate(entry) < 0) {
+ if (swap_duplicate(&entry, false) < 0) {
set_pte_at(mm, address, pvmw.pte, pteval);
ret = false;
page_vma_mapped_walk_done(&pvmw);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index dc312559f7df..b0575182e77b 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -433,7 +433,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
/*
* Swap entry may have been freed since our caller observed it.
*/
- err = swapcache_prepare(entry);
+ err = swapcache_prepare(entry, false);
if (err == -EEXIST) {
radix_tree_preload_end();
/*
diff --git a/mm/swapfile.c b/mm/swapfile.c
index f42b1b0cdc58..48e2c54385ee 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -49,6 +49,9 @@ static bool swap_count_continued(struct swap_info_struct *, pgoff_t,
unsigned char);
static void free_swap_count_continuations(struct swap_info_struct *);
static sector_t map_swap_entry(swp_entry_t, struct block_device**);
+static int add_swap_count_continuation_locked(struct swap_info_struct *si,
+ unsigned long offset,
+ struct page *page);
DEFINE_SPINLOCK(swap_lock);
static unsigned int nr_swapfiles;
@@ -319,6 +322,11 @@ static inline void unlock_cluster_or_swap_info(struct swap_info_struct *si,
spin_unlock(&si->lock);
}
+static inline bool is_cluster_offset(unsigned long offset)
+{
+ return !(offset % SWAPFILE_CLUSTER);
+}
+
static inline bool cluster_list_empty(struct swap_cluster_list *list)
{
return cluster_is_null(&list->head);
@@ -1166,16 +1174,14 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
return NULL;
}
-static unsigned char __swap_entry_free(struct swap_info_struct *p,
- swp_entry_t entry, unsigned char usage)
+static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
+ struct swap_cluster_info *ci,
+ unsigned long offset,
+ unsigned char usage)
{
- struct swap_cluster_info *ci;
- unsigned long offset = swp_offset(entry);
unsigned char count;
unsigned char has_cache;
- ci = lock_cluster_or_swap_info(p, offset);
-
count = p->swap_map[offset];
has_cache = count & SWAP_HAS_CACHE;
@@ -1203,6 +1209,17 @@ static unsigned char __swap_entry_free(struct swap_info_struct *p,
usage = count | has_cache;
p->swap_map[offset] = usage ? : SWAP_HAS_CACHE;
+ return usage;
+}
+
+static unsigned char __swap_entry_free(struct swap_info_struct *p,
+ swp_entry_t entry, unsigned char usage)
+{
+ struct swap_cluster_info *ci;
+ unsigned long offset = swp_offset(entry);
+
+ ci = lock_cluster_or_swap_info(p, offset);
+ usage = __swap_entry_free_locked(p, ci, offset, usage);
unlock_cluster_or_swap_info(p, ci);
return usage;
@@ -3450,32 +3467,12 @@ void si_swapinfo(struct sysinfo *val)
spin_unlock(&swap_lock);
}
-/*
- * Verify that a swap entry is valid and increment its swap map count.
- *
- * Returns error code in following case.
- * - success -> 0
- * - swp_entry is invalid -> EINVAL
- * - swp_entry is migration entry -> EINVAL
- * - swap-cache reference is requested but there is already one. -> EEXIST
- * - swap-cache reference is requested but the entry is not used. -> ENOENT
- * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
- */
-static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+static int __swap_duplicate_locked(struct swap_info_struct *p,
+ unsigned long offset, unsigned char usage)
{
- struct swap_info_struct *p;
- struct swap_cluster_info *ci;
- unsigned long offset;
unsigned char count;
unsigned char has_cache;
- int err = -EINVAL;
-
- p = get_swap_device(entry);
- if (!p)
- goto out;
-
- offset = swp_offset(entry);
- ci = lock_cluster_or_swap_info(p, offset);
+ int err = 0;
count = p->swap_map[offset];
@@ -3485,12 +3482,11 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
*/
if (unlikely(swap_count(count) == SWAP_MAP_BAD)) {
err = -ENOENT;
- goto unlock_out;
+ goto out;
}
has_cache = count & SWAP_HAS_CACHE;
count &= ~SWAP_HAS_CACHE;
- err = 0;
if (usage == SWAP_HAS_CACHE) {
@@ -3517,11 +3513,39 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
p->swap_map[offset] = count | has_cache;
-unlock_out:
+out:
+ return err;
+}
+
+/*
+ * Verify that a swap entry is valid and increment its swap map count.
+ *
+ * Returns error code in following case.
+ * - success -> 0
+ * - swp_entry is invalid -> EINVAL
+ * - swp_entry is migration entry -> EINVAL
+ * - swap-cache reference is requested but there is already one. -> EEXIST
+ * - swap-cache reference is requested but the entry is not used. -> ENOENT
+ * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
+ */
+static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+{
+ struct swap_info_struct *p;
+ struct swap_cluster_info *ci;
+ unsigned long offset;
+ int err = -EINVAL;
+
+ p = get_swap_device(entry);
+ if (!p)
+ goto out;
+
+ offset = swp_offset(entry);
+ ci = lock_cluster_or_swap_info(p, offset);
+ err = __swap_duplicate_locked(p, offset, usage);
unlock_cluster_or_swap_info(p, ci);
+
+ put_swap_device(p);
out:
- if (p)
- put_swap_device(p);
return err;
}
@@ -3534,6 +3558,81 @@ void swap_shmem_alloc(swp_entry_t entry)
__swap_duplicate(entry, SWAP_MAP_SHMEM);
}
+#ifdef CONFIG_THP_SWAP
+static int __swap_duplicate_cluster(swp_entry_t *entry, unsigned char usage)
+{
+ struct swap_info_struct *si;
+ struct swap_cluster_info *ci;
+ unsigned long offset;
+ unsigned char *map;
+ int i, err = 0;
+
+ si = get_swap_device(*entry);
+ if (!si) {
+ err = -EINVAL;
+ goto out;
+ }
+ offset = swp_offset(*entry);
+ ci = lock_cluster(si, offset);
+ if (cluster_is_free(ci)) {
+ err = -ENOENT;
+ goto unlock;
+ }
+ if (!cluster_is_huge(ci)) {
+ err = -ENOTDIR;
+ goto unlock;
+ }
+ VM_BUG_ON(!is_cluster_offset(offset));
+ VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER);
+ map = si->swap_map + offset;
+ if (usage == SWAP_HAS_CACHE) {
+ if (map[0] & SWAP_HAS_CACHE) {
+ err = -EEXIST;
+ goto unlock;
+ }
+ for (i = 0; i < SWAPFILE_CLUSTER; i++) {
+ VM_BUG_ON(map[i] & SWAP_HAS_CACHE);
+ map[i] |= SWAP_HAS_CACHE;
+ }
+ } else {
+ for (i = 0; i < SWAPFILE_CLUSTER; i++) {
+retry:
+ err = __swap_duplicate_locked(si, offset + i, usage);
+ if (err == -ENOMEM) {
+ struct page *page;
+
+ page = alloc_page(GFP_ATOMIC | __GFP_HIGHMEM);
+ err = add_swap_count_continuation_locked(
+ si, offset + i, page);
+ if (err) {
+ *entry = swp_entry(si->type, offset+i);
+ goto undup;
+ }
+ goto retry;
+ } else if (err)
+ goto undup;
+ }
+ cluster_set_count(ci, cluster_count(ci) + usage);
+ }
+unlock:
+ unlock_cluster(ci);
+ put_swap_device(si);
+out:
+ return err;
+undup:
+ for (i--; i >= 0; i--)
+ __swap_entry_free_locked(
+ si, ci, offset + i, usage);
+ goto unlock;
+}
+#else
+static inline int __swap_duplicate_cluster(swp_entry_t *entry,
+ unsigned char usage)
+{
+ return 0;
+}
+#endif
+
/*
* Increase reference count of swap entry by 1.
* Returns 0 for success, or -ENOMEM if a swap_count_continuation is required
@@ -3541,12 +3640,15 @@ void swap_shmem_alloc(swp_entry_t entry)
* if __swap_duplicate() fails for another reason (-EINVAL or -ENOENT), which
* might occur if a page table entry has got corrupted.
*/
-int swap_duplicate(swp_entry_t entry)
+int swap_duplicate(swp_entry_t *entry, bool cluster)
{
int err = 0;
- while (!err && __swap_duplicate(entry, 1) == -ENOMEM)
- err = add_swap_count_continuation(entry, GFP_ATOMIC);
+ if (thp_swap_supported() && cluster)
+ return __swap_duplicate_cluster(entry, 1);
+
+ while (!err && __swap_duplicate(*entry, 1) == -ENOMEM)
+ err = add_swap_count_continuation(*entry, GFP_ATOMIC);
return err;
}
@@ -3558,9 +3660,12 @@ int swap_duplicate(swp_entry_t entry)
* -EBUSY means there is a swap cache.
* Note: return code is different from swap_duplicate().
*/
-int swapcache_prepare(swp_entry_t entry)
+int swapcache_prepare(swp_entry_t entry, bool cluster)
{
- return __swap_duplicate(entry, SWAP_HAS_CACHE);
+ if (thp_swap_supported() && cluster)
+ return __swap_duplicate_cluster(&entry, SWAP_HAS_CACHE);
+ else
+ return __swap_duplicate(entry, SWAP_HAS_CACHE);
}
struct swap_info_struct *swp_swap_info(swp_entry_t entry)
@@ -3590,51 +3695,13 @@ pgoff_t __page_file_index(struct page *page)
}
EXPORT_SYMBOL_GPL(__page_file_index);
-/*
- * add_swap_count_continuation - called when a swap count is duplicated
- * beyond SWAP_MAP_MAX, it allocates a new page and links that to the entry's
- * page of the original vmalloc'ed swap_map, to hold the continuation count
- * (for that entry and for its neighbouring PAGE_SIZE swap entries). Called
- * again when count is duplicated beyond SWAP_MAP_MAX * SWAP_CONT_MAX, etc.
- *
- * These continuation pages are seldom referenced: the common paths all work
- * on the original swap_map, only referring to a continuation page when the
- * low "digit" of a count is incremented or decremented through SWAP_MAP_MAX.
- *
- * add_swap_count_continuation(, GFP_ATOMIC) can be called while holding
- * page table locks; if it fails, add_swap_count_continuation(, GFP_KERNEL)
- * can be called after dropping locks.
- */
-int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
+static int add_swap_count_continuation_locked(struct swap_info_struct *si,
+ unsigned long offset,
+ struct page *page)
{
- struct swap_info_struct *si;
- struct swap_cluster_info *ci;
struct page *head;
- struct page *page;
struct page *list_page;
- pgoff_t offset;
unsigned char count;
- int ret = 0;
-
- /*
- * When debugging, it's easier to use __GFP_ZERO here; but it's better
- * for latency not to zero a page while GFP_ATOMIC and holding locks.
- */
- page = alloc_page(gfp_mask | __GFP_HIGHMEM);
-
- si = get_swap_device(entry);
- if (!si) {
- /*
- * An acceptable race has occurred since the failing
- * __swap_duplicate(): the swap device may be swapoff
- */
- goto outer;
- }
- spin_lock(&si->lock);
-
- offset = swp_offset(entry);
-
- ci = lock_cluster(si, offset);
count = si->swap_map[offset] & ~SWAP_HAS_CACHE;
@@ -3644,13 +3711,11 @@ int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
* will race to add swap count continuation: we need to avoid
* over-provisioning.
*/
- goto out;
+ return 0;
}
- if (!page) {
- ret = -ENOMEM;
- goto out;
- }
+ if (!page)
+ return -ENOMEM;
/*
* We are fortunate that although vmalloc_to_page uses pte_offset_map,
@@ -3698,7 +3763,57 @@ int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
page = NULL; /* now it's attached, don't free it */
out_unlock_cont:
spin_unlock(&si->cont_lock);
-out:
+ if (page)
+ __free_page(page);
+ return 0;
+}
+
+/*
+ * add_swap_count_continuation - called when a swap count is duplicated
+ * beyond SWAP_MAP_MAX, it allocates a new page and links that to the entry's
+ * page of the original vmalloc'ed swap_map, to hold the continuation count
+ * (for that entry and for its neighbouring PAGE_SIZE swap entries). Called
+ * again when count is duplicated beyond SWAP_MAP_MAX * SWAP_CONT_MAX, etc.
+ *
+ * These continuation pages are seldom referenced: the common paths all work
+ * on the original swap_map, only referring to a continuation page when the
+ * low "digit" of a count is incremented or decremented through SWAP_MAP_MAX.
+ *
+ * add_swap_count_continuation(, GFP_ATOMIC) can be called while holding
+ * page table locks; if it fails, add_swap_count_continuation(, GFP_KERNEL)
+ * can be called after dropping locks.
+ */
+int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
+{
+ struct swap_info_struct *si;
+ struct swap_cluster_info *ci;
+ struct page *page;
+ unsigned long offset;
+ int ret = 0;
+
+ /*
+ * When debugging, it's easier to use __GFP_ZERO here; but it's better
+ * for latency not to zero a page while GFP_ATOMIC and holding locks.
+ */
+ page = alloc_page(gfp_mask | __GFP_HIGHMEM);
+
+ si = get_swap_device(entry);
+ if (!si) {
+ /*
+ * An acceptable race has occurred since the failing
+ * __swap_duplicate(): the swap device may be swapoff
+ */
+ goto outer;
+ }
+ spin_lock(&si->lock);
+
+ offset = swp_offset(entry);
+
+ ci = lock_cluster(si, offset);
+
+ ret = add_swap_count_continuation_locked(si, offset, page);
+ page = NULL;
+
unlock_cluster(ci);
spin_unlock(&si->lock);
put_swap_device(si);
--
2.16.4
From: Huang Ying <[email protected]>
Previously, the PMD swap operations were enabled only for
CONFIG_ARCH_ENABLE_THP_MIGRATION, because they were used only by the
THP migration support. We will support PMD swap mapping to the huge
swap cluster and swapping in the THP as a whole. That will be enabled
via CONFIG_THP_SWAP and needs these PMD swap operations. So enable
the PMD swap operations for CONFIG_THP_SWAP too.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
arch/x86/include/asm/pgtable.h | 2 +-
include/asm-generic/pgtable.h | 2 +-
include/linux/swapops.h | 44 ++++++++++++++++++++++--------------------
3 files changed, 25 insertions(+), 23 deletions(-)
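As a userspace sketch of the gating pattern this patch extends: the real
accessors are compiled when either feature is enabled, and no-op stubs are
used otherwise. The `*_DEMO` macros and the trivial conversion below are
illustrative stand-ins, not the kernel's arch-specific `__pmd_to_swp_entry()`
encoding:

```c
#include <assert.h>

/* Userspace model: pmd_to_swp_entry() has a real definition when
 * either feature is enabled, and a stub returning an empty entry
 * otherwise.  The *_DEMO macros stand in for
 * CONFIG_ARCH_ENABLE_THP_MIGRATION / CONFIG_THP_SWAP. */
typedef struct { unsigned long val; } swp_entry_t;
typedef struct { unsigned long pmd; } pmd_t;

#if defined(THP_MIGRATION_DEMO) || defined(THP_SWAP_DEMO)
static swp_entry_t pmd_to_swp_entry(pmd_t pmd)
{
	swp_entry_t entry = { pmd.pmd };	/* arch encoding elided */
	return entry;
}
#else
static swp_entry_t pmd_to_swp_entry(pmd_t pmd)
{
	swp_entry_t entry = { 0 };		/* stub: no swap PMDs */
	(void)pmd;
	return entry;
}
#endif
```

With neither demo macro defined, the stub is selected and every PMD maps to
the empty swap entry, matching the `swp_entry(0, 0)` fallback in the patch.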
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 99ecde23c3ec..13bf58838daf 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1224,7 +1224,7 @@ static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
return pte_clear_flags(pte, _PAGE_SWP_SOFT_DIRTY);
}
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+#if defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) || defined(CONFIG_THP_SWAP)
static inline pmd_t pmd_swp_mksoft_dirty(pmd_t pmd)
{
return pmd_set_flags(pmd, _PAGE_SWP_SOFT_DIRTY);
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index f59639afaa39..bb8354981a36 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -675,7 +675,7 @@ static inline void ptep_modify_prot_commit(struct mm_struct *mm,
#endif
#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
-#ifndef CONFIG_ARCH_ENABLE_THP_MIGRATION
+#if !defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) && !defined(CONFIG_THP_SWAP)
static inline pmd_t pmd_swp_mksoft_dirty(pmd_t pmd)
{
return pmd;
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 1d3877c39a00..f1be5a52f5c8 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -258,17 +258,7 @@ static inline int is_write_migration_entry(swp_entry_t entry)
#endif
-struct page_vma_mapped_walk;
-
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
-extern void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
- struct page *page);
-
-extern void remove_migration_pmd(struct page_vma_mapped_walk *pvmw,
- struct page *new);
-
-extern void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd);
-
+#if defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) || defined(CONFIG_THP_SWAP)
static inline swp_entry_t pmd_to_swp_entry(pmd_t pmd)
{
swp_entry_t arch_entry;
@@ -286,6 +276,28 @@ static inline pmd_t swp_entry_to_pmd(swp_entry_t entry)
arch_entry = __swp_entry(swp_type(entry), swp_offset(entry));
return __swp_entry_to_pmd(arch_entry);
}
+#else
+static inline swp_entry_t pmd_to_swp_entry(pmd_t pmd)
+{
+ return swp_entry(0, 0);
+}
+
+static inline pmd_t swp_entry_to_pmd(swp_entry_t entry)
+{
+ return __pmd(0);
+}
+#endif
+
+struct page_vma_mapped_walk;
+
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+extern void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
+ struct page *page);
+
+extern void remove_migration_pmd(struct page_vma_mapped_walk *pvmw,
+ struct page *new);
+
+extern void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd);
static inline int is_pmd_migration_entry(pmd_t pmd)
{
@@ -306,16 +318,6 @@ static inline void remove_migration_pmd(struct page_vma_mapped_walk *pvmw,
static inline void pmd_migration_entry_wait(struct mm_struct *m, pmd_t *p) { }
-static inline swp_entry_t pmd_to_swp_entry(pmd_t pmd)
-{
- return swp_entry(0, 0);
-}
-
-static inline pmd_t swp_entry_to_pmd(swp_entry_t entry)
-{
- return __pmd(0);
-}
-
static inline int is_pmd_migration_entry(pmd_t pmd)
{
return 0;
--
2.16.4
From: Huang Ying <[email protected]>
To swap in a THP as a whole, we need to read a huge swap cluster from
the swap device. This patch revises __read_swap_cache_async() and
its callers and callees to support this. If __read_swap_cache_async()
finds that the swap cluster of the specified swap entry is huge, it
will try to allocate a THP and add it into the swap cache, so that
later the contents of the huge swap cluster can be read into the THP.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
include/linux/huge_mm.h | 38 ++++++++++++++++++++++++
include/linux/swap.h | 4 +--
mm/huge_memory.c | 26 -----------------
mm/swap_state.c | 77 ++++++++++++++++++++++++++++++++++---------------
mm/swapfile.c | 11 ++++---
5 files changed, 100 insertions(+), 56 deletions(-)
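The cluster-boundary arithmetic used above (rounding the faulting entry down
to the huge-cluster head before adding the THP to the swap cache, then
indexing back to the requested base page) can be sketched in userspace.
`HPAGE_PMD_NR` here assumes the common 2MB-huge-page/4KB-base-page
configuration; the actual value is architecture dependent:

```c
#include <assert.h>

/* Simplified model of the entry arithmetic in the revised
 * __read_swap_cache_async(): when the swap cluster is huge, the swap
 * entry is rounded down to the cluster head (the whole THP is added
 * to the swap cache under the head entry), and the originally
 * requested base page is located by its index within the cluster. */
#define HPAGE_PMD_NR 512UL	/* 2MB huge page / 4KB base page */

static unsigned long huge_cluster_head(unsigned long offset)
{
	return offset & ~(HPAGE_PMD_NR - 1);	/* round_down() */
}

static unsigned long huge_cluster_index(unsigned long offset)
{
	return offset & (HPAGE_PMD_NR - 1);	/* new_page += index */
}
```

The masks are equivalent to `round_down(offset, HPAGE_PMD_NR)` and
`offset & (HPAGE_PMD_NR - 1)` in the patch, and rely on HPAGE_PMD_NR being a
power of two.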
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 213d32e57c39..c5b8af173f67 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -251,6 +251,39 @@ static inline bool thp_migration_supported(void)
return IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION);
}
+/*
+ * always: directly stall for all thp allocations
+ * defer: wake kswapd and fail if not immediately available
+ * defer+madvise: wake kswapd and directly stall for MADV_HUGEPAGE, otherwise
+ * fail if not immediately available
+ * madvise: directly stall for MADV_HUGEPAGE, otherwise fail if not immediately
+ * available
+ * never: never stall for any thp allocation
+ */
+static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
+{
+ bool vma_madvised;
+
+ if (!vma)
+ return GFP_TRANSHUGE_LIGHT;
+ vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG,
+ &transparent_hugepage_flags))
+ return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY);
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG,
+ &transparent_hugepage_flags))
+ return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM;
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG,
+ &transparent_hugepage_flags))
+ return GFP_TRANSHUGE_LIGHT |
+ (vma_madvised ? __GFP_DIRECT_RECLAIM :
+ __GFP_KSWAPD_RECLAIM);
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG,
+ &transparent_hugepage_flags))
+ return GFP_TRANSHUGE_LIGHT |
+ (vma_madvised ? __GFP_DIRECT_RECLAIM : 0);
+ return GFP_TRANSHUGE_LIGHT;
+}
#else /* CONFIG_TRANSPARENT_HUGEPAGE */
#define HPAGE_PMD_SHIFT ({ BUILD_BUG(); 0; })
#define HPAGE_PMD_MASK ({ BUILD_BUG(); 0; })
@@ -363,6 +396,11 @@ static inline bool thp_migration_supported(void)
{
return false;
}
+
+static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
+{
+ return 0;
+}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#endif /* _LINUX_HUGE_MM_H */
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 878f132dabc0..d2e017dd7bbd 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -462,7 +462,7 @@ extern sector_t map_swap_page(struct page *, struct block_device **);
extern sector_t swapdev_block(int, pgoff_t);
extern int page_swapcount(struct page *);
extern int __swap_count(swp_entry_t entry);
-extern int __swp_swapcount(swp_entry_t entry);
+extern int __swp_swapcount(swp_entry_t entry, bool *huge_cluster);
extern int swp_swapcount(swp_entry_t entry);
extern struct swap_info_struct *page_swap_info(struct page *);
extern struct swap_info_struct *swp_swap_info(swp_entry_t entry);
@@ -589,7 +589,7 @@ static inline int __swap_count(swp_entry_t entry)
return 0;
}
-static inline int __swp_swapcount(swp_entry_t entry)
+static inline int __swp_swapcount(swp_entry_t entry, bool *huge_cluster)
{
return 0;
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 586d8693b8af..275a4e616ec9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -620,32 +620,6 @@ static int __do_huge_pmd_anonymous_page(struct vm_fault *vmf, struct page *page,
}
-/*
- * always: directly stall for all thp allocations
- * defer: wake kswapd and fail if not immediately available
- * defer+madvise: wake kswapd and directly stall for MADV_HUGEPAGE, otherwise
- * fail if not immediately available
- * madvise: directly stall for MADV_HUGEPAGE, otherwise fail if not immediately
- * available
- * never: never stall for any thp allocation
- */
-static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
-{
- const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);
-
- if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags))
- return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY);
- if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags))
- return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM;
- if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags))
- return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM :
- __GFP_KSWAPD_RECLAIM);
- if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags))
- return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM :
- 0);
- return GFP_TRANSHUGE_LIGHT;
-}
-
/* Caller must hold page table lock. */
static bool set_huge_zero_page(pgtable_t pgtable, struct mm_struct *mm,
struct vm_area_struct *vma, unsigned long haddr, pmd_t *pmd,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b0575182e77b..fa1e011123b8 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -386,6 +386,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
struct page *found_page = NULL, *new_page = NULL;
struct swap_info_struct *si;
int err;
+ bool huge_cluster = false;
+ swp_entry_t hentry;
+
*new_page_allocated = false;
do {
@@ -411,14 +414,32 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
* as SWAP_HAS_CACHE. That's done in later part of code or
* else swap_off will be aborted if we return NULL.
*/
- if (!__swp_swapcount(entry) && swap_slot_cache_enabled)
+ if (!__swp_swapcount(entry, &huge_cluster) &&
+ swap_slot_cache_enabled)
break;
/*
* Get a new page to read into from swap.
*/
- if (!new_page) {
- new_page = alloc_page_vma(gfp_mask, vma, addr);
+ if (!new_page ||
+ (thp_swap_supported() &&
+ !!PageTransCompound(new_page) != huge_cluster)) {
+ if (new_page)
+ put_page(new_page);
+ if (thp_swap_supported() && huge_cluster) {
+ gfp_t gfp = alloc_hugepage_direct_gfpmask(vma);
+
+ new_page = alloc_hugepage_vma(gfp, vma,
+ addr, HPAGE_PMD_ORDER);
+ if (new_page)
+ prep_transhuge_page(new_page);
+ hentry = swp_entry(swp_type(entry),
+ round_down(swp_offset(entry),
+ HPAGE_PMD_NR));
+ } else {
+ new_page = alloc_page_vma(gfp_mask, vma, addr);
+ hentry = entry;
+ }
if (!new_page)
break; /* Out of memory */
}
@@ -426,33 +447,37 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
/*
* call radix_tree_preload() while we can wait.
*/
- err = radix_tree_maybe_preload(gfp_mask & GFP_KERNEL);
+ err = radix_tree_maybe_preload_order(gfp_mask & GFP_KERNEL,
+ compound_order(new_page));
if (err)
break;
/*
* Swap entry may have been freed since our caller observed it.
*/
- err = swapcache_prepare(entry, false);
- if (err == -EEXIST) {
- radix_tree_preload_end();
- /*
- * We might race against get_swap_page() and stumble
- * across a SWAP_HAS_CACHE swap_map entry whose page
- * has not been brought into the swapcache yet.
- */
- cond_resched();
- continue;
- }
- if (err) { /* swp entry is obsolete ? */
+ err = swapcache_prepare(hentry, huge_cluster);
+ if (err) {
radix_tree_preload_end();
- break;
+ if (err == -EEXIST) {
+ /*
+ * We might race against get_swap_page() and
+ * stumble across a SWAP_HAS_CACHE swap_map
+ * entry whose page has not been brought into
+ * the swapcache yet.
+ */
+ cond_resched();
+ continue;
+ } else if (err == -ENOTDIR) {
+ /* huge swap cluster is split under us */
+ continue;
+ } else /* swp entry is obsolete ? */
+ break;
}
/* May fail (-ENOMEM) if radix-tree node allocation failed. */
__SetPageLocked(new_page);
__SetPageSwapBacked(new_page);
- err = __add_to_swap_cache(new_page, entry);
+ err = __add_to_swap_cache(new_page, hentry);
if (likely(!err)) {
radix_tree_preload_end();
/*
@@ -460,6 +485,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
*/
lru_cache_add_anon(new_page);
*new_page_allocated = true;
+ if (thp_swap_supported() && huge_cluster)
+ new_page += swp_offset(entry) &
+ (HPAGE_PMD_NR - 1);
return new_page;
}
radix_tree_preload_end();
@@ -468,7 +496,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
* add_to_swap_cache() doesn't return -EEXIST, so we can safely
* clear SWAP_HAS_CACHE flag.
*/
- put_swap_page(new_page, entry);
+ put_swap_page(new_page, hentry);
} while (err != -ENOMEM);
if (new_page)
@@ -490,7 +518,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
vma, addr, &page_was_allocated);
if (page_was_allocated)
- swap_readpage(retpage, do_poll);
+ swap_readpage(compound_head(retpage), do_poll);
return retpage;
}
@@ -609,8 +637,9 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
if (!page)
continue;
if (page_allocated) {
- swap_readpage(page, false);
- if (offset != entry_offset) {
+ swap_readpage(compound_head(page), false);
+ if (offset != entry_offset &&
+ !PageTransCompound(page)) {
SetPageReadahead(page);
count_vm_event(SWAP_RA);
}
@@ -771,8 +800,8 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
if (!page)
continue;
if (page_allocated) {
- swap_readpage(page, false);
- if (i != ra_info.offset) {
+ swap_readpage(compound_head(page), false);
+ if (i != ra_info.offset && !PageTransCompound(page)) {
SetPageReadahead(page);
count_vm_event(SWAP_RA);
}
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 5ff2da89b77c..e1e43654407c 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1497,7 +1497,8 @@ int __swap_count(swp_entry_t entry)
return count;
}
-static int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry)
+static int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry,
+ bool *huge_cluster)
{
int count = 0;
pgoff_t offset = swp_offset(entry);
@@ -1505,6 +1506,8 @@ static int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry)
ci = lock_cluster_or_swap_info(si, offset);
count = swap_count(si->swap_map[offset]);
+ if (huge_cluster && ci)
+ *huge_cluster = cluster_is_huge(ci);
unlock_cluster_or_swap_info(si, ci);
return count;
}
@@ -1514,14 +1517,14 @@ static int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry)
* This does not give an exact answer when swap count is continued,
* but does include the high COUNT_CONTINUED flag to allow for that.
*/
-int __swp_swapcount(swp_entry_t entry)
+int __swp_swapcount(swp_entry_t entry, bool *huge_cluster)
{
int count = 0;
struct swap_info_struct *si;
si = get_swap_device(entry);
if (si) {
- count = swap_swapcount(si, entry);
+ count = swap_swapcount(si, entry, huge_cluster);
put_swap_device(si);
}
return count;
@@ -1681,7 +1684,7 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
return map_swapcount;
}
#else
-#define swap_page_trans_huge_swapped(si, entry) swap_swapcount(si, entry)
+#define swap_page_trans_huge_swapped(si, entry) swap_swapcount(si, entry, NULL)
#define page_swapped(page) (page_swapcount(page) != 0)
static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
--
2.16.4
From: Huang Ying <[email protected]>
A huge PMD needs to be split when zapping a part of the PMD mapping,
etc. If the PMD mapping is a swap mapping, we need to split it too.
This patch implements support for this. It is similar to splitting
a PMD page mapping, except that we also need to decrease the PMD swap
mapping count for the huge swap cluster. If the PMD swap mapping
count becomes 0, the huge swap cluster will be split.
Notice: is_huge_zero_pmd() and pmd_page() don't work well with a swap
PMD, so a pmd_present() check is done before calling them.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
include/linux/swap.h | 6 ++++++
mm/huge_memory.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++-----
mm/swapfile.c | 28 +++++++++++++++++++++++++
3 files changed, 87 insertions(+), 5 deletions(-)
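The core of the split is the PTE fill loop in __split_huge_swap_pmd(): one
PMD swap mapping becomes HPAGE_PMD_NR PTE swap mappings carrying consecutive
swap entries within the huge swap cluster (the `entry.val++` in the patch).
A userspace sketch, with `HPAGE_PMD_NR` reduced from 512 for illustration
and the page-table and soft-dirty handling elided:

```c
#include <assert.h>

/* Userspace sketch of the fill loop in __split_huge_swap_pmd(): the
 * head swap entry of the cluster is fanned out into one entry per
 * base page, each offset by its index within the cluster.  Locking,
 * pgtable withdrawal and pte_swp_mksoft_dirty() are elided. */
#define HPAGE_PMD_NR 8	/* reduced from 512 for illustration */

static void split_swap_pmd(unsigned long head_entry_val,
			   unsigned long pte_vals[HPAGE_PMD_NR])
{
	int i;

	for (i = 0; i < HPAGE_PMD_NR; i++)
		pte_vals[i] = head_entry_val + i;
}
```

This mirrors the patch's loop, where the i-th PTE at `haddr + i * PAGE_SIZE`
receives the swap entry for the i-th base page of the cluster.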
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 7ed2c727c9b6..bb9de2cb952a 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -618,11 +618,17 @@ static inline swp_entry_t get_swap_page(struct page *page)
#ifdef CONFIG_THP_SWAP
extern int split_swap_cluster(swp_entry_t entry);
+extern int split_swap_cluster_map(swp_entry_t entry);
#else
static inline int split_swap_cluster(swp_entry_t entry)
{
return 0;
}
+
+static inline int split_swap_cluster_map(swp_entry_t entry)
+{
+ return 0;
+}
#endif
#ifdef CONFIG_MEMCG
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index feba371169ca..2d615328d77f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1602,6 +1602,47 @@ int do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
return 0;
}
+#ifdef CONFIG_THP_SWAP
+static void __split_huge_swap_pmd(struct vm_area_struct *vma,
+ unsigned long haddr,
+ pmd_t *pmd)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ pgtable_t pgtable;
+ pmd_t _pmd;
+ swp_entry_t entry;
+ int i, soft_dirty;
+
+ entry = pmd_to_swp_entry(*pmd);
+ soft_dirty = pmd_soft_dirty(*pmd);
+
+ split_swap_cluster_map(entry);
+
+ pgtable = pgtable_trans_huge_withdraw(mm, pmd);
+ pmd_populate(mm, &_pmd, pgtable);
+
+ for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE, entry.val++) {
+ pte_t *pte, ptent;
+
+ pte = pte_offset_map(&_pmd, haddr);
+ VM_BUG_ON(!pte_none(*pte));
+ ptent = swp_entry_to_pte(entry);
+ if (soft_dirty)
+ ptent = pte_swp_mksoft_dirty(ptent);
+ set_pte_at(mm, haddr, pte, ptent);
+ pte_unmap(pte);
+ }
+ smp_wmb(); /* make pte visible before pmd */
+ pmd_populate(mm, pmd, pgtable);
+}
+#else
+static inline void __split_huge_swap_pmd(struct vm_area_struct *vma,
+ unsigned long haddr,
+ pmd_t *pmd)
+{
+}
+#endif
+
/*
* Return true if we do MADV_FREE successfully on entire pmd page.
* Otherwise, return false.
@@ -2068,7 +2109,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
- VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd)
+ VM_BUG_ON(!is_swap_pmd(*pmd) && !pmd_trans_huge(*pmd)
&& !pmd_devmap(*pmd));
count_vm_event(THP_SPLIT_PMD);
@@ -2090,8 +2131,11 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
put_page(page);
add_mm_counter(mm, MM_FILEPAGES, -HPAGE_PMD_NR);
return;
- } else if (is_huge_zero_pmd(*pmd)) {
+ } else if (pmd_present(*pmd) && is_huge_zero_pmd(*pmd)) {
/*
+ * is_huge_zero_pmd() may return true for a PMD swap
+ * entry, so check pmd_present() first.
+ *
* FIXME: Do we want to invalidate secondary mmu by calling
* mmu_notifier_invalidate_range() see comments below inside
* __split_huge_pmd() ?
@@ -2134,6 +2178,9 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
page = pfn_to_page(swp_offset(entry));
} else
#endif
+ if (thp_swap_supported() && is_swap_pmd(old_pmd))
+ return __split_huge_swap_pmd(vma, haddr, pmd);
+ else
page = pmd_page(old_pmd);
VM_BUG_ON_PAGE(!page_count(page), page);
page_ref_add(page, HPAGE_PMD_NR - 1);
@@ -2225,14 +2272,15 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
* pmd against. Otherwise we can end up replacing wrong page.
*/
VM_BUG_ON(freeze && !page);
- if (page && page != pmd_page(*pmd))
- goto out;
+ /* pmd_page() should be called only if pmd_present() */
+ if (page && (!pmd_present(*pmd) || page != pmd_page(*pmd)))
+ goto out;
if (pmd_trans_huge(*pmd)) {
page = pmd_page(*pmd);
if (PageMlocked(page))
clear_page_mlock(page);
- } else if (!(pmd_devmap(*pmd) || is_pmd_migration_entry(*pmd)))
+ } else if (!(pmd_devmap(*pmd) || is_swap_pmd(*pmd)))
goto out;
__split_huge_pmd_locked(vma, pmd, haddr, freeze);
out:
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 7d11d8104ba7..a0141307f3ac 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -4043,6 +4043,34 @@ static void free_swap_count_continuations(struct swap_info_struct *si)
}
}
+#ifdef CONFIG_THP_SWAP
+/* The corresponding page table shouldn't be changed under us */
+int split_swap_cluster_map(swp_entry_t entry)
+{
+ struct swap_info_struct *si;
+ struct swap_cluster_info *ci;
+ unsigned long offset = swp_offset(entry);
+
+ VM_BUG_ON(!is_cluster_offset(offset));
+ si = _swap_info_get(entry);
+ if (!si)
+ return -EBUSY;
+ ci = lock_cluster(si, offset);
+ /* The swap cluster has been split by someone else */
+ if (!cluster_is_huge(ci))
+ goto out;
+ cluster_set_count(ci, cluster_count(ci) - 1);
+ VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER);
+ if (cluster_count(ci) == SWAPFILE_CLUSTER &&
+ !(si->swap_map[offset] & SWAP_HAS_CACHE))
+ cluster_clear_huge(ci);
+
+out:
+ unlock_cluster(ci);
+ return 0;
+}
+#endif
+
static int __init swapfile_init(void)
{
int nid;
--
2.16.4
From: Huang Ying <[email protected]>
When a PMD swap mapping is removed from a huge swap cluster, for
example, when unmapping a memory range mapped with a PMD swap mapping,
free_swap_and_cache() will be called to decrease the reference count
of the huge swap cluster. free_swap_and_cache() may also free or
split the huge swap cluster, and free the corresponding THP in the
swap cache if necessary. swap_free() is similar, and shares most of
its implementation with free_swap_and_cache(). This patch revises
free_swap_and_cache() and swap_free() to implement this.
If the swap cluster has been split already, for example, because a
THP could not be allocated during swapin, we just decrease the
reference count of each swap slot by one.
Otherwise, we decrease the reference count of each swap slot by one
and decrease the PMD swap mapping count in cluster_count(). When the
corresponding THP isn't in the swap cache: if the PMD swap mapping
count becomes 0, the huge swap cluster will be split, and if all swap
counts become 0, the huge swap cluster will be freed. When the
corresponding THP is in the swap cache: if every swap_map[offset] ==
SWAP_HAS_CACHE, we will try to delete the THP from the swap cache,
which will cause the THP and the huge swap cluster to be freed.
Signed-off-by: "Huang, Ying" <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Daniel Jordan <[email protected]>
---
arch/s390/mm/pgtable.c | 2 +-
include/linux/swap.h | 9 ++--
kernel/power/swap.c | 4 +-
mm/madvise.c | 2 +-
mm/memory.c | 4 +-
mm/shmem.c | 6 +--
mm/swapfile.c | 110 +++++++++++++++++++++++++++++++++++++++++++------
7 files changed, 112 insertions(+), 25 deletions(-)
diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index 301e466e4263..3079a23eef75 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -646,7 +646,7 @@ static void ptep_zap_swap_entry(struct mm_struct *mm, swp_entry_t entry)
dec_mm_counter(mm, mm_counter(page));
}
- free_swap_and_cache(entry);
+ free_swap_and_cache(entry, false);
}
void ptep_zap_unused(struct mm_struct *mm, unsigned long addr,
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 57aa655ab27d..7ed2c727c9b6 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -453,9 +453,9 @@ extern int add_swap_count_continuation(swp_entry_t, gfp_t);
extern void swap_shmem_alloc(swp_entry_t);
extern int swap_duplicate(swp_entry_t *entry, bool cluster);
extern int swapcache_prepare(swp_entry_t entry, bool cluster);
-extern void swap_free(swp_entry_t);
+extern void swap_free(swp_entry_t entry, bool cluster);
extern void swapcache_free_entries(swp_entry_t *entries, int n);
-extern int free_swap_and_cache(swp_entry_t);
+extern int free_swap_and_cache(swp_entry_t entry, bool cluster);
extern int swap_type_of(dev_t, sector_t, struct block_device **);
extern unsigned int count_swap_pages(int, int);
extern sector_t map_swap_page(struct page *, struct block_device **);
@@ -509,7 +509,8 @@ static inline void show_swap_cache_info(void)
{
}
-#define free_swap_and_cache(e) ({(is_migration_entry(e) || is_device_private_entry(e));})
+#define free_swap_and_cache(e, c) \
+ ({(is_migration_entry(e) || is_device_private_entry(e)); })
#define swapcache_prepare(e, c) \
({(is_migration_entry(e) || is_device_private_entry(e)); })
@@ -527,7 +528,7 @@ static inline int swap_duplicate(swp_entry_t *swp, bool cluster)
return 0;
}
-static inline void swap_free(swp_entry_t swp)
+static inline void swap_free(swp_entry_t swp, bool cluster)
{
}
diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index c2bcf97d24c8..ada10dc667b9 100644
--- a/kernel/power/swap.c
+++ b/kernel/power/swap.c
@@ -182,7 +182,7 @@ sector_t alloc_swapdev_block(int swap)
offset = swp_offset(get_swap_page_of_type(swap));
if (offset) {
if (swsusp_extents_insert(offset))
- swap_free(swp_entry(swap, offset));
+ swap_free(swp_entry(swap, offset), false);
else
return swapdev_block(swap, offset);
}
@@ -206,7 +206,7 @@ void free_all_swap_pages(int swap)
ext = rb_entry(node, struct swsusp_extent, node);
rb_erase(node, &swsusp_extents);
for (offset = ext->start; offset <= ext->end; offset++)
- swap_free(swp_entry(swap, offset));
+ swap_free(swp_entry(swap, offset), false);
kfree(ext);
}
diff --git a/mm/madvise.c b/mm/madvise.c
index b731933dddae..a96abb915e22 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -349,7 +349,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
if (non_swap_entry(entry))
continue;
nr_swap--;
- free_swap_and_cache(entry);
+ free_swap_and_cache(entry, false);
pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
continue;
}
diff --git a/mm/memory.c b/mm/memory.c
index f3900282e3da..55e278bb59ee 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1376,7 +1376,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
page = migration_entry_to_page(entry);
rss[mm_counter(page)]--;
}
- if (unlikely(!free_swap_and_cache(entry)))
+ if (unlikely(!free_swap_and_cache(entry, false)))
print_bad_pte(vma, addr, ptent, NULL);
pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
} while (pte++, addr += PAGE_SIZE, addr != end);
@@ -3059,7 +3059,7 @@ int do_swap_page(struct vm_fault *vmf)
}
set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
- swap_free(entry);
+ swap_free(entry, false);
if (mem_cgroup_swap_full(page) ||
(vma->vm_flags & VM_LOCKED) || PageMlocked(page))
try_to_free_swap(page);
diff --git a/mm/shmem.c b/mm/shmem.c
index 2cab84403055..047504372a2c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -676,7 +676,7 @@ static int shmem_free_swap(struct address_space *mapping,
xa_unlock_irq(&mapping->i_pages);
if (old != radswap)
return -ENOENT;
- free_swap_and_cache(radix_to_swp_entry(radswap));
+ free_swap_and_cache(radix_to_swp_entry(radswap), false);
return 0;
}
@@ -1211,7 +1211,7 @@ static int shmem_unuse_inode(struct shmem_inode_info *info,
spin_lock_irq(&info->lock);
info->swapped--;
spin_unlock_irq(&info->lock);
- swap_free(swap);
+ swap_free(swap, false);
}
}
return error;
@@ -1749,7 +1749,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
delete_from_swap_cache(page);
set_page_dirty(page);
- swap_free(swap);
+ swap_free(swap, false);
} else {
if (vma && userfaultfd_missing(vma)) {
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 36f4b6451360..7d11d8104ba7 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -885,7 +885,7 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
}
#ifdef CONFIG_THP_SWAP
-static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
+static int __swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
{
unsigned long idx;
struct swap_cluster_info *ci;
@@ -911,7 +911,7 @@ static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
return 1;
}
-static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
+static void __swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
{
unsigned long offset = idx * SWAPFILE_CLUSTER;
struct swap_cluster_info *ci;
@@ -924,7 +924,7 @@ static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
swap_range_free(si, offset, SWAPFILE_CLUSTER);
}
#else
-static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
+static int __swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
{
VM_WARN_ON_ONCE(1);
return 0;
@@ -996,7 +996,7 @@ int get_swap_pages(int n_goal, bool cluster, swp_entry_t swp_entries[])
}
if (cluster) {
if (!(si->flags & SWP_FILE))
- n_ret = swap_alloc_cluster(si, swp_entries);
+ n_ret = __swap_alloc_cluster(si, swp_entries);
} else
n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
n_goal, swp_entries);
@@ -1215,8 +1215,10 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
count = SWAP_MAP_MAX | COUNT_CONTINUED;
else
count = SWAP_MAP_MAX;
- } else
+ } else {
+ VM_BUG_ON(!count);
count--;
+ }
}
usage = count | has_cache;
@@ -1255,17 +1257,90 @@ static void swap_entry_free(struct swap_info_struct *p, swp_entry_t entry)
swap_range_free(p, offset, 1);
}
+#ifdef CONFIG_THP_SWAP
+static unsigned char swap_free_cluster(struct swap_info_struct *si,
+ swp_entry_t entry)
+{
+ struct swap_cluster_info *ci;
+ unsigned long offset = swp_offset(entry);
+ unsigned int count, i, free_entries = 0, cache_only = 0;
+ unsigned char *map, ret = 1;
+
+ ci = lock_cluster(si, offset);
+ VM_BUG_ON(!is_cluster_offset(offset));
+ /* Cluster has been split, free each swap entry in the cluster */
+ if (!cluster_is_huge(ci)) {
+ unlock_cluster(ci);
+ for (i = 0; i < SWAPFILE_CLUSTER; i++, entry.val++) {
+ if (!__swap_entry_free(si, entry, 1)) {
+ free_entries++;
+ free_swap_slot(entry);
+ }
+ }
+ return !(free_entries == SWAPFILE_CLUSTER);
+ }
+ count = cluster_count(ci) - 1;
+ VM_BUG_ON(count < SWAPFILE_CLUSTER);
+ cluster_set_count(ci, count);
+ map = si->swap_map + offset;
+ for (i = 0; i < SWAPFILE_CLUSTER; i++) {
+ if (map[i] == 1) {
+ map[i] = SWAP_MAP_BAD;
+ free_entries++;
+ } else if (__swap_entry_free_locked(si, ci, offset + i, 1) ==
+ SWAP_HAS_CACHE)
+ cache_only++;
+ }
+ VM_BUG_ON(free_entries && (count != SWAPFILE_CLUSTER ||
+ (map[0] & SWAP_HAS_CACHE)));
+ if (free_entries == SWAPFILE_CLUSTER)
+ memset(map, SWAP_HAS_CACHE, SWAPFILE_CLUSTER);
+ else if (!cluster_swapcount(ci) && !(map[0] & SWAP_HAS_CACHE))
+ cluster_clear_huge(ci);
+ unlock_cluster(ci);
+ if (free_entries == SWAPFILE_CLUSTER) {
+ spin_lock(&si->lock);
+ mem_cgroup_uncharge_swap(entry, SWAPFILE_CLUSTER);
+ __swap_free_cluster(si, offset / SWAPFILE_CLUSTER);
+ spin_unlock(&si->lock);
+ ret = 0;
+ } else if (free_entries) {
+ ci = lock_cluster(si, offset);
+ for (i = 0; i < SWAPFILE_CLUSTER; i++, entry.val++) {
+ if (map[i] == SWAP_MAP_BAD) {
+ map[i] = SWAP_HAS_CACHE;
+ unlock_cluster(ci);
+ free_swap_slot(entry);
+ ci = lock_cluster(si, offset);
+ }
+ }
+ unlock_cluster(ci);
+ } else if (cache_only == SWAPFILE_CLUSTER)
+ ret = SWAP_HAS_CACHE;
+
+ return ret;
+}
+#else
+static inline unsigned char swap_free_cluster(struct swap_info_struct *si,
+ swp_entry_t entry)
+{
+ return 0;
+}
+#endif
+
/*
* Caller has made sure that the swap device corresponding to entry
* is still around or has not been recycled.
*/
-void swap_free(swp_entry_t entry)
+void swap_free(swp_entry_t entry, bool cluster)
{
struct swap_info_struct *p;
p = _swap_info_get(entry);
if (p) {
- if (!__swap_entry_free(p, entry, 1))
+ if (thp_swap_supported() && cluster)
+ swap_free_cluster(p, entry);
+ else if (!__swap_entry_free(p, entry, 1))
free_swap_slot(entry);
}
}
@@ -1326,7 +1401,7 @@ static void swapcache_free_cluster(swp_entry_t entry)
if (free_entries == SWAPFILE_CLUSTER) {
spin_lock(&si->lock);
mem_cgroup_uncharge_swap(entry, SWAPFILE_CLUSTER);
- swap_free_cluster(si, idx);
+ __swap_free_cluster(si, idx);
spin_unlock(&si->lock);
} else if (free_entries) {
for (i = 0; i < SWAPFILE_CLUSTER; i++, entry.val++) {
@@ -1730,7 +1805,7 @@ int try_to_free_swap(struct page *page)
* Free the swap entry like above, but also try to
* free the page cache entry if it is the last user.
*/
-int free_swap_and_cache(swp_entry_t entry)
+int free_swap_and_cache(swp_entry_t entry, bool cluster)
{
struct swap_info_struct *p;
struct page *page = NULL;
@@ -1741,7 +1816,8 @@ int free_swap_and_cache(swp_entry_t entry)
p = _swap_info_get(entry);
if (p) {
- count = __swap_entry_free(p, entry, 1);
+ count = cluster ? swap_free_cluster(p, entry) :
+ __swap_entry_free(p, entry, 1);
if (count == SWAP_HAS_CACHE &&
!swap_page_trans_huge_swapped(p, entry)) {
page = find_get_page(swap_address_space(entry),
@@ -1750,7 +1826,7 @@ int free_swap_and_cache(swp_entry_t entry)
put_page(page);
page = NULL;
}
- } else if (!count)
+ } else if (!count && !cluster)
free_swap_slot(entry);
}
if (page) {
@@ -1914,7 +1990,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
}
set_pte_at(vma->vm_mm, addr, pte,
pte_mkold(mk_pte(page, vma->vm_page_prot)));
- swap_free(entry);
+ swap_free(entry, false);
/*
* Move the page to the active list so it is not
* immediately swapped out again after swapon.
@@ -2353,6 +2429,16 @@ int try_to_unuse(unsigned int type, bool frontswap,
}
mmput(start_mm);
+
+ /*
+ * Swap entries may be marked as SWAP_MAP_BAD temporarily in
+ * swap_free_cluster() before being freed for real.
+ * find_next_to_unuse() will skip these swap entries, which is
+ * OK, but we need to wait until they are really freed.
+ */
+ while (!retval && READ_ONCE(si->inuse_pages))
+ schedule_timeout_uninterruptible(1);
+
return retval;
}
--
2.16.4
On Fri, 22 Jun 2018 11:51:30 +0800 "Huang, Ying" <[email protected]> wrote:
> This is the final step of THP (Transparent Huge Page) swap
> optimization. After the first and second step, the splitting huge
> page is delayed from almost the first step of swapout to after swapout
> has been finished. In this step, we avoid splitting THP for swapout
> and swapout/swapin the THP in one piece.
It's a tremendously good performance improvement. It's also a
tremendously large patchset :(
And it depends upon your
mm-swap-fix-race-between-swapoff-and-some-swap-operations.patch and
mm-fix-race-between-swapoff-and-mincore.patch, the first of which has
been floating about since February without adequate review.
I'll give this patchset a spin in -mm to see what happens and will come
back later to take a closer look. But the best I can do at this time
is to hopefully cc some possible reviewers :)
Andrew Morton <[email protected]> writes:
> On Fri, 22 Jun 2018 11:51:30 +0800 "Huang, Ying" <[email protected]> wrote:
>
>> This is the final step of THP (Transparent Huge Page) swap
>> optimization. After the first and second step, the splitting huge
>> page is delayed from almost the first step of swapout to after swapout
>> has been finished. In this step, we avoid splitting THP for swapout
>> and swapout/swapin the THP in one piece.
>
> It's a tremendously good performance improvement. It's also a
> tremendously large patchset :(
>
> And it depends upon your
> mm-swap-fix-race-between-swapoff-and-some-swap-operations.patch and
> mm-fix-race-between-swapoff-and-mincore.patch, the first of which has
> been floating about since February without adequate review.
>
> I'll give this patchset a spin in -mm to see what happens and will come
> back later to take a closer look. But the best I can do at this time
> is to hopefully cc some possible reviewers :)
Thanks a lot for your help! Hope more people can review it!
Best Regards,
Huang, Ying
On Thu, 28 Jun 2018 13:29:09 +0800 "Huang\, Ying" <[email protected]> wrote:
> Andrew Morton <[email protected]> writes:
>
> > On Fri, 22 Jun 2018 11:51:30 +0800 "Huang, Ying" <[email protected]> wrote:
> >
> >> This is the final step of THP (Transparent Huge Page) swap
> >> optimization. After the first and second step, the splitting huge
> >> page is delayed from almost the first step of swapout to after swapout
> >> has been finished. In this step, we avoid splitting THP for swapout
> >> and swapout/swapin the THP in one piece.
> >
> > It's a tremendously good performance improvement. It's also a
> > tremendously large patchset :(
> >
> > And it depends upon your
> > mm-swap-fix-race-between-swapoff-and-some-swap-operations.patch and
> > mm-fix-race-between-swapoff-and-mincore.patch, the first of which has
> > been floating about since February without adequate review.
> >
> > I'll give this patchset a spin in -mm to see what happens and will come
> > back later to take a closer look. But the best I can do at this time
> > is to hopefully cc some possible reviewers :)
>
> Thanks a lot for your help! Hope more people can review it!
I took it out of -mm again, temporarily. Due to a huge tangle with the
xarray conversions in linux-next.
Andrew Morton <[email protected]> writes:
> On Thu, 28 Jun 2018 13:29:09 +0800 "Huang\, Ying" <[email protected]> wrote:
>
>> Andrew Morton <[email protected]> writes:
>>
>> > On Fri, 22 Jun 2018 11:51:30 +0800 "Huang, Ying" <[email protected]> wrote:
>> >
>> >> This is the final step of THP (Transparent Huge Page) swap
>> >> optimization. After the first and second step, the splitting huge
>> >> page is delayed from almost the first step of swapout to after swapout
>> >> has been finished. In this step, we avoid splitting THP for swapout
>> >> and swapout/swapin the THP in one piece.
>> >
>> > It's a tremendously good performance improvement. It's also a
>> > tremendously large patchset :(
>> >
>> > And it depends upon your
>> > mm-swap-fix-race-between-swapoff-and-some-swap-operations.patch and
>> > mm-fix-race-between-swapoff-and-mincore.patch, the first of which has
>> > been floating about since February without adequate review.
>> >
>> > I'll give this patchset a spin in -mm to see what happens and will come
>> > back later to take a closer look. But the best I can do at this time
>> > is to hopefully cc some possible reviewers :)
>>
>> Thanks a lot for your help! Hope more people can review it!
>
> I took it out of -mm again, temporarily. Due to a huge tangle with the
> xarray conversions in linux-next.
No problem. I will rebase the patchset on your latest -mm tree, or the
next version to be released?
Best Regards,
Huang, Ying
On Thu, 28 Jun 2018 13:35:15 +0800 "Huang\, Ying" <[email protected]> wrote:
> Andrew Morton <[email protected]> writes:
>
> > On Thu, 28 Jun 2018 13:29:09 +0800 "Huang\, Ying" <[email protected]> wrote:
> >
> >> Andrew Morton <[email protected]> writes:
> >>
> >> > On Fri, 22 Jun 2018 11:51:30 +0800 "Huang, Ying" <[email protected]> wrote:
> >> >
> >> >> This is the final step of THP (Transparent Huge Page) swap
> >> >> optimization. After the first and second step, the splitting huge
> >> >> page is delayed from almost the first step of swapout to after swapout
> >> >> has been finished. In this step, we avoid splitting THP for swapout
> >> >> and swapout/swapin the THP in one piece.
> >> >
> >> > It's a tremendously good performance improvement. It's also a
> >> > tremendously large patchset :(
> >> >
> >> > And it depends upon your
> >> > mm-swap-fix-race-between-swapoff-and-some-swap-operations.patch and
> >> > mm-fix-race-between-swapoff-and-mincore.patch, the first of which has
> >> > been floating about since February without adequate review.
> >> >
> >> > I'll give this patchset a spin in -mm to see what happens and will come
> >> > back later to take a closer look. But the best I can do at this time
> >> > is to hopefully cc some possible reviewers :)
> >>
> >> Thanks a lot for your help! Hope more people can review it!
> >
> > I took it out of -mm again, temporarily. Due to a huge tangle with the
> > xarray conversions in linux-next.
>
> No problem. I will rebase the patchset on your latest -mm tree, or the
> next version to be released?
We need to figure that out with Matthew.
Probably the xarray conversions are simpler and more mature so yes,
probably they should be staged first.
On Wed, Jun 27, 2018 at 11:18:39PM -0700, Andrew Morton wrote:
> On Thu, 28 Jun 2018 13:35:15 +0800 "Huang\, Ying" <[email protected]> wrote:
> > No problem. I will rebase the patchset on your latest -mm tree, or the
> > next version to be released?
>
> We need to figure that out with Matthew.
>
> Probably the xarray conversions are simpler and more mature so yes,
> probably they should be staged first.
I'll take a look. Honestly, my biggest problem with this patch set is
overuse of tagging:
59832 Jun 22 Huang, Ying ( 131) [PATCH -mm -v4 00/21] mm, THP, swap: Swa
59833 N Jun 22 Huang, Ying ( 126) ├─>[PATCH -mm -v4 01/21] mm, THP, swap:
59834 N Jun 22 Huang, Ying ( 44) ├─>[PATCH -mm -v4 02/21] mm, THP, swap:
59835 N Jun 22 Huang, Ying ( 583) ├─>[PATCH -mm -v4 03/21] mm, THP, swap:
59836 N Jun 22 Huang, Ying ( 104) ├─>[PATCH -mm -v4 04/21] mm, THP, swap:
59837 N Jun 22 Huang, Ying ( 394) ├─>[PATCH -mm -v4 05/21] mm, THP, swap:
59838 N Jun 22 Huang, Ying ( 198) ├─>[PATCH -mm -v4 06/21] mm, THP, swap:
59839 N Jun 22 Huang, Ying ( 161) ├─>[PATCH -mm -v4 07/21] mm, THP, swap:
59840 N Jun 22 Huang, Ying ( 351) ├─>[PATCH -mm -v4 08/21] mm, THP, swap:
59841 N Jun 22 Huang, Ying ( 293) ├─>[PATCH -mm -v4 09/21] mm, THP, swap:
59842 N Jun 22 Huang, Ying ( 138) ├─>[PATCH -mm -v4 10/21] mm, THP, swap:
59843 N Jun 22 Huang, Ying ( 264) ├─>[PATCH -mm -v4 11/21] mm, THP, swap:
59844 N Jun 22 Huang, Ying ( 251) ├─>[PATCH -mm -v4 12/21] mm, THP, swap:
59845 N Jun 22 Huang, Ying ( 121) ├─>[PATCH -mm -v4 13/21] mm, THP, swap:
59846 N Jun 22 Huang, Ying ( 517) ├─>[PATCH -mm -v4 14/21] mm, cgroup, THP
59847 N Jun 22 Huang, Ying ( 128) ├─>[PATCH -mm -v4 15/21] mm, THP, swap:
59848 N Jun 22 Huang, Ying ( 85) ├─>[PATCH -mm -v4 16/21] mm, THP, swap:
59849 N Jun 22 Huang, Ying ( 70) ├─>[PATCH -mm -v4 17/21] mm, THP, swap:
There's literally zero useful information displayed in the patch subjects.
Matthew Wilcox <[email protected]> writes:
> On Wed, Jun 27, 2018 at 11:18:39PM -0700, Andrew Morton wrote:
>> On Thu, 28 Jun 2018 13:35:15 +0800 "Huang\, Ying" <[email protected]> wrote:
>> > No problem. I will rebase the patchset on your latest -mm tree, or the
>> > next version to be released?
>>
>> We need to figure that out with Matthew.
>>
>> Probably the xarray conversions are simpler and more mature so yes,
>> probably they should be staged first.
>
> I'll take a look. Honestly, my biggest problem with this patch set is
> overuse of tagging:
>
> 59832 Jun 22 Huang, Ying ( 131) [PATCH -mm -v4 00/21] mm, THP, swap: Swa
> 59833 N Jun 22 Huang, Ying ( 126) ├─>[PATCH -mm -v4 01/21] mm, THP, swap:
> 59834 N Jun 22 Huang, Ying ( 44) ├─>[PATCH -mm -v4 02/21] mm, THP, swap:
> 59835 N Jun 22 Huang, Ying ( 583) ├─>[PATCH -mm -v4 03/21] mm, THP, swap:
> 59836 N Jun 22 Huang, Ying ( 104) ├─>[PATCH -mm -v4 04/21] mm, THP, swap:
> 59837 N Jun 22 Huang, Ying ( 394) ├─>[PATCH -mm -v4 05/21] mm, THP, swap:
> 59838 N Jun 22 Huang, Ying ( 198) ├─>[PATCH -mm -v4 06/21] mm, THP, swap:
> 59839 N Jun 22 Huang, Ying ( 161) ├─>[PATCH -mm -v4 07/21] mm, THP, swap:
> 59840 N Jun 22 Huang, Ying ( 351) ├─>[PATCH -mm -v4 08/21] mm, THP, swap:
> 59841 N Jun 22 Huang, Ying ( 293) ├─>[PATCH -mm -v4 09/21] mm, THP, swap:
> 59842 N Jun 22 Huang, Ying ( 138) ├─>[PATCH -mm -v4 10/21] mm, THP, swap:
> 59843 N Jun 22 Huang, Ying ( 264) ├─>[PATCH -mm -v4 11/21] mm, THP, swap:
> 59844 N Jun 22 Huang, Ying ( 251) ├─>[PATCH -mm -v4 12/21] mm, THP, swap:
> 59845 N Jun 22 Huang, Ying ( 121) ├─>[PATCH -mm -v4 13/21] mm, THP, swap:
> 59846 N Jun 22 Huang, Ying ( 517) ├─>[PATCH -mm -v4 14/21] mm, cgroup, THP
> 59847 N Jun 22 Huang, Ying ( 128) ├─>[PATCH -mm -v4 15/21] mm, THP, swap:
> 59848 N Jun 22 Huang, Ying ( 85) ├─>[PATCH -mm -v4 16/21] mm, THP, swap:
> 59849 N Jun 22 Huang, Ying ( 70) ├─>[PATCH -mm -v4 17/21] mm, THP, swap:
>
> There's literally zero useful information displayed in the patch subjects.
Thanks! What's your suggestion on tagging? Only keep "mm" or "swap"?
Best Regards,
Huang, Ying
On Fri, Jun 29, 2018 at 09:17:16AM +0800, Huang, Ying wrote:
> Matthew Wilcox <[email protected]> writes:
> > I'll take a look. Honestly, my biggest problem with this patch set is
> > overuse of tagging:
> >
> > 59832 Jun 22 Huang, Ying ( 131) [PATCH -mm -v4 00/21] mm, THP, swap: Swa
> > There's literally zero useful information displayed in the patch subjects.
>
> Thanks! What's your suggestion on tagging? Only keep "mm" or "swap"?
Subject: [PATCH v14 10/74] xarray: Add XArray tags
I'm not sure where the extra '-' in front of '-v4' comes from. I also
wouldn't put the '-mm' in front of it -- that information can live in
the cover letter's body rather than any patch's subject.
I think 'swap:' implies "mm:", so yeah I'd just go with that.
Subject: [PATCH v4 00/21] swap: Useful information here
I'd see that as:
59832 Jun 22 Huang, Ying ( 131) [PATCH v4 00/21] swap: Useful informatio
I had a quick look at your patches. I think only two are affected by
the XArray, and I'll make some general comments about them soon.
On Fri, Jun 22, 2018 at 11:51:33AM +0800, Huang, Ying wrote:
> +++ b/mm/swap_state.c
> @@ -433,7 +433,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> /*
> * Swap entry may have been freed since our caller observed it.
> */
> - err = swapcache_prepare(entry);
> + err = swapcache_prepare(entry, false);
> if (err == -EEXIST) {
> radix_tree_preload_end();
> /*
This commit should be just a textual conflict.
On Fri, Jun 22, 2018 at 11:51:38AM +0800, Huang, Ying wrote:
> +++ b/mm/swap_state.c
> @@ -426,33 +447,37 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> /*
> * call radix_tree_preload() while we can wait.
> */
> - err = radix_tree_maybe_preload(gfp_mask & GFP_KERNEL);
> + err = radix_tree_maybe_preload_order(gfp_mask & GFP_KERNEL,
> + compound_order(new_page));
> if (err)
> break;
There's no more preloading in the XArray world, so this can just be dropped.
> /*
> * Swap entry may have been freed since our caller observed it.
> */
> + err = swapcache_prepare(hentry, huge_cluster);
> + if (err) {
> radix_tree_preload_end();
> - break;
> + if (err == -EEXIST) {
> + /*
> + * We might race against get_swap_page() and
> + * stumble across a SWAP_HAS_CACHE swap_map
> + * entry whose page has not been brought into
> + * the swapcache yet.
> + */
> + cond_resched();
> + continue;
> + } else if (err == -ENOTDIR) {
> + /* huge swap cluster is split under us */
> + continue;
> + } else /* swp entry is obsolete ? */
> + break;
I'm not entirely happy about -ENOTDIR being overloaded to mean this.
Maybe we can return a new enum rather than an errno?
Also, I'm not sure that a true/false parameter is the right approach for
"is this a huge page". I think we'll have usecases for swap entries which
are both larger and smaller than PMD_SIZE.
I was hoping to encode the swap entry size into the entry; we only need one
extra bit to do that (no matter the size of the entry). I detailed the
encoding scheme here:
https://plus.google.com/117536210417097546339/posts/hvctn17WUZu
(let me know if that doesn't work for you; I'm not very experienced with
this G+ thing)
Matthew Wilcox <[email protected]> writes:
> On Fri, Jun 29, 2018 at 09:17:16AM +0800, Huang, Ying wrote:
>> Matthew Wilcox <[email protected]> writes:
>> > I'll take a look. Honestly, my biggest problem with this patch set is
>> > overuse of tagging:
>> >
>> > 59832 Jun 22 Huang, Ying ( 131) [PATCH -mm -v4 00/21] mm, THP, swap: Swa
>> > There's literally zero useful information displayed in the patch subjects.
>>
>> Thanks! What's your suggestion on tagging? Only keep "mm" or "swap"?
>
> Subject: [PATCH v14 10/74] xarray: Add XArray tags
>
> I'm not sure where the extra '-' in front of '-v4' comes from. I also
> wouldn't put the '-mm' in front of it -- that information can live in
> the cover letter's body rather than any patch's subject.
>
> I think 'swap:' implies "mm:", so yeah I'd just go with that.
>
> Subject: [PATCH v4 00/21] swap: Useful information here
>
> I'd see that as:
>
> 59832 Jun 22 Huang, Ying ( 131) [PATCH v4 00/21] swap: Useful informatio
Looks good! I will use this naming convention in the future.
> I had a quick look at your patches. I think only two are affected by
> the XArray, and I'll make some general comments about them soon.
Thanks!
Best Regards,
Huang, Ying
Matthew Wilcox <[email protected]> writes:
> On Fri, Jun 22, 2018 at 11:51:33AM +0800, Huang, Ying wrote:
>> +++ b/mm/swap_state.c
>> @@ -433,7 +433,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>> /*
>> * Swap entry may have been freed since our caller observed it.
>> */
>> - err = swapcache_prepare(entry);
>> + err = swapcache_prepare(entry, false);
>> if (err == -EEXIST) {
>> radix_tree_preload_end();
>> /*
>
> This commit should be just a textual conflict.
Yes. Will check it.
Best Regards,
Huang, Ying
Matthew Wilcox <[email protected]> writes:
> On Fri, Jun 22, 2018 at 11:51:38AM +0800, Huang, Ying wrote:
>> +++ b/mm/swap_state.c
>> @@ -426,33 +447,37 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>> /*
>> * call radix_tree_preload() while we can wait.
>> */
>> - err = radix_tree_maybe_preload(gfp_mask & GFP_KERNEL);
>> + err = radix_tree_maybe_preload_order(gfp_mask & GFP_KERNEL,
>> + compound_order(new_page));
>> if (err)
>> break;
>
> There's no more preloading in the XArray world, so this can just be dropped.
Sure.
>> /*
>> * Swap entry may have been freed since our caller observed it.
>> */
>> + err = swapcache_prepare(hentry, huge_cluster);
>> + if (err) {
>> radix_tree_preload_end();
>> - break;
>> + if (err == -EEXIST) {
>> + /*
>> + * We might race against get_swap_page() and
>> + * stumble across a SWAP_HAS_CACHE swap_map
>> + * entry whose page has not been brought into
>> + * the swapcache yet.
>> + */
>> + cond_resched();
>> + continue;
>> + } else if (err == -ENOTDIR) {
>> + /* huge swap cluster is split under us */
>> + continue;
>> + } else /* swp entry is obsolete ? */
>> + break;
>
> I'm not entirely happy about -ENOTDIR being overloaded to mean this.
> Maybe we can return a new enum rather than an errno?
Can we use -ESTALE instead? The "huge swap cluster is split under us"
means the swap entry has become kind of "stale".
> Also, I'm not sure that a true/false parameter is the right approach for
> "is this a huge page". I think we'll have usecases for swap entries which
> are both larger and smaller than PMD_SIZE.
OK. I can change the interface to take the number of swap entries
instead, to make it more flexible.
> I was hoping to encode the swap entry size into the entry; we only need one
> extra bit to do that (no matter the size of the entry). I detailed the
> encoding scheme here:
>
> https://plus.google.com/117536210417097546339/posts/hvctn17WUZu
>
> (let me know if that doesn't work for you; I'm not very experienced with
> this G+ thing)
The encoding method looks good. To use it, we need to
- Encode the swap entry and size into swap_entry_size
- Call the function with swap_entry_size
- Decode swap_entry_size back into the swap entry and size
It appears that there is no real benefit?
Best Regards,
Huang, Ying
On Fri, Jun 22, 2018 at 11:51:38AM +0800, Huang, Ying wrote:
> @@ -411,14 +414,32 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
...
> + if (thp_swap_supported() && huge_cluster) {
> + gfp_t gfp = alloc_hugepage_direct_gfpmask(vma);
> +
> + new_page = alloc_hugepage_vma(gfp, vma,
> + addr, HPAGE_PMD_ORDER);
When allocating a huge page, we ignore the gfp_mask argument.
That doesn't matter right now since AFAICT we're not losing any flags: gfp_mask
from existing callers of __read_swap_cache_async seems to always be a subset of
GFP_HIGHUSER_MOVABLE and alloc_hugepage_direct_gfpmask always returns a
superset of that.
But maybe we should warn here in case we end up violating a restriction from a
future caller. Something like this?:
> + gfp_t gfp = alloc_hugepage_direct_gfpmask(vma);
VM_WARN_ONCE((gfp | gfp_mask) != gfp,
"ignoring gfp_mask bits");
On (06/27/18 21:51), Andrew Morton wrote:
> On Fri, 22 Jun 2018 11:51:30 +0800 "Huang, Ying" <[email protected]> wrote:
>
> > This is the final step of THP (Transparent Huge Page) swap
> > optimization. After the first and second step, the splitting huge
> > page is delayed from almost the first step of swapout to after swapout
> > has been finished. In this step, we avoid splitting THP for swapout
> > and swapout/swapin the THP in one piece.
>
> It's a tremendously good performance improvement. It's also a
> tremendously large patchset :(
Will zswap gain a THP swap out/in support at some point?
mm/zswap.c: static int zswap_frontswap_store(...)
...
/* THP isn't supported */
if (PageTransHuge(page)) {
ret = -EINVAL;
goto reject;
}
-ss
Sergey Senozhatsky <[email protected]> writes:
> On (06/27/18 21:51), Andrew Morton wrote:
>> On Fri, 22 Jun 2018 11:51:30 +0800 "Huang, Ying" <[email protected]> wrote:
>>
>> > This is the final step of THP (Transparent Huge Page) swap
>> > optimization. After the first and second step, the splitting huge
>> > page is delayed from almost the first step of swapout to after swapout
>> > has been finished. In this step, we avoid splitting THP for swapout
>> > and swapout/swapin the THP in one piece.
>>
>> It's a tremendously good performance improvement. It's also a
>> tremendously large patchset :(
>
> Will zswap gain a THP swap out/in support at some point?
>
>
> mm/zswap.c: static int zswap_frontswap_store(...)
> ...
> /* THP isn't supported */
> if (PageTransHuge(page)) {
> ret = -EINVAL;
> goto reject;
> }
That's not on my TODO list. Do you have interest to work on this?
Best Regards,
Huang, Ying
Daniel Jordan <[email protected]> writes:
> On Fri, Jun 22, 2018 at 11:51:38AM +0800, Huang, Ying wrote:
>> @@ -411,14 +414,32 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> ...
>> + if (thp_swap_supported() && huge_cluster) {
>> + gfp_t gfp = alloc_hugepage_direct_gfpmask(vma);
>> +
>> + new_page = alloc_hugepage_vma(gfp, vma,
>> + addr, HPAGE_PMD_ORDER);
>
> When allocating a huge page, we ignore the gfp_mask argument.
>
> That doesn't matter right now since AFAICT we're not losing any flags: gfp_mask
> from existing callers of __read_swap_cache_async seems to always be a subset of
> GFP_HIGHUSER_MOVABLE and alloc_hugepage_direct_gfpmask always returns a
> superset of that.
>
> But maybe we should warn here in case we end up violating a restriction from a
> future caller. Something like this?:
>
>> + gfp_t gfp = alloc_hugepage_direct_gfpmask(vma);
> VM_WARN_ONCE((gfp | gfp_mask) != gfp,
> "ignoring gfp_mask bits");
This looks good! Thanks! Will add this.
Best Regards,
Huang, Ying
On (07/04/18 10:20), Huang, Ying wrote:
> > On (06/27/18 21:51), Andrew Morton wrote:
> >> On Fri, 22 Jun 2018 11:51:30 +0800 "Huang, Ying" <[email protected]> wrote:
> >>
> >> > This is the final step of THP (Transparent Huge Page) swap
> >> > optimization. After the first and second step, the splitting huge
> >> > page is delayed from almost the first step of swapout to after swapout
> >> > has been finished. In this step, we avoid splitting THP for swapout
> >> > and swapout/swapin the THP in one piece.
> >>
> >> It's a tremendously good performance improvement. It's also a
> >> tremendously large patchset :(
> >
> > Will zswap gain a THP swap out/in support at some point?
> >
> >
> > mm/zswap.c: static int zswap_frontswap_store(...)
> > ...
> > /* THP isn't supported */
> > if (PageTransHuge(page)) {
> > ret = -EINVAL;
> > goto reject;
> > }
>
> That's not on my TODO list. Do you have interest to work on this?
I'd say I'm interested. Can't promise that I'll have enough spare time
any time soon, tho.
The numbers you posted do look fantastic indeed. Embedded devices
[which normally use zswap/zram quite heavily] _probably_ should see
some performance improvement as well once zswap [and maybe zram] can
handle THP.
-ss
Sergey Senozhatsky <[email protected]> writes:
> On (07/04/18 10:20), Huang, Ying wrote:
>> > On (06/27/18 21:51), Andrew Morton wrote:
>> >> On Fri, 22 Jun 2018 11:51:30 +0800 "Huang, Ying" <[email protected]> wrote:
>> >>
>> >> > This is the final step of THP (Transparent Huge Page) swap
>> >> > optimization. After the first and second step, the splitting huge
>> >> > page is delayed from almost the first step of swapout to after swapout
>> >> > has been finished. In this step, we avoid splitting THP for swapout
>> >> > and swapout/swapin the THP in one piece.
>> >>
>> >> It's a tremendously good performance improvement. It's also a
>> >> tremendously large patchset :(
>> >
>> > Will zswap gain a THP swap out/in support at some point?
>> >
>> >
>> > mm/zswap.c: static int zswap_frontswap_store(...)
>> > ...
>> > /* THP isn't supported */
>> > if (PageTransHuge(page)) {
>> > ret = -EINVAL;
>> > goto reject;
>> > }
>>
>> That's not on my TODO list. Do you have interest to work on this?
>
> I'd say I'm interested. Can't promise that I'll have enough spare time
> any time soon, tho.
Thanks!
> The numbers you posted do look fantastic indeed. Embedded devices
> [which normally use zswap/zram quite heavily] _probably_ should see
> some performance improvement as well once zswap [and maybe zram] can
> handle THP.
Yes. I think so too.
Best Regards,
Huang, Ying
On Fri, Jun 22, 2018 at 11:51:35AM +0800, Huang, Ying wrote:
> +static unsigned char swap_free_cluster(struct swap_info_struct *si,
> + swp_entry_t entry)
...
> + /* Cluster has been split, free each swap entries in cluster */
> + if (!cluster_is_huge(ci)) {
> + unlock_cluster(ci);
> + for (i = 0; i < SWAPFILE_CLUSTER; i++, entry.val++) {
> + if (!__swap_entry_free(si, entry, 1)) {
> + free_entries++;
> + free_swap_slot(entry);
> + }
> + }
Is it better on average to use __swap_entry_free_locked instead of
__swap_entry_free here? I'm not sure myself, just asking.
As it's written, if the cluster's been split, we always take and drop the
cluster lock 512 times, but if we don't expect to call free_swap_slot that
often, then we could just drop and retake the cluster lock inside the innermost
'if' against the possibility that free_swap_slot eventually makes us take the
cluster lock again.
...
> + return !(free_entries == SWAPFILE_CLUSTER);
return free_entries != SWAPFILE_CLUSTER;
Daniel Jordan <[email protected]> writes:
> On Fri, Jun 22, 2018 at 11:51:35AM +0800, Huang, Ying wrote:
>> +static unsigned char swap_free_cluster(struct swap_info_struct *si,
>> + swp_entry_t entry)
> ...
>> + /* Cluster has been split, free each swap entries in cluster */
>> + if (!cluster_is_huge(ci)) {
>> + unlock_cluster(ci);
>> + for (i = 0; i < SWAPFILE_CLUSTER; i++, entry.val++) {
>> + if (!__swap_entry_free(si, entry, 1)) {
>> + free_entries++;
>> + free_swap_slot(entry);
>> + }
>> + }
>
> Is it better on average to use __swap_entry_free_locked instead of
> __swap_entry_free here? I'm not sure myself, just asking.
>
> As it's written, if the cluster's been split, we always take and drop the
> cluster lock 512 times, but if we don't expect to call free_swap_slot that
> often, then we could just drop and retake the cluster lock inside the innermost
> 'if' against the possibility that free_swap_slot eventually makes us take the
> cluster lock again.
Yes. This is a good idea. Thanks for your suggestion! I will change
this in the next version.
Best Regards,
Huang, Ying
> ...
>> + return !(free_entries == SWAPFILE_CLUSTER);
>
> return free_entries != SWAPFILE_CLUSTER;
On Thu, Jun 21, 2018 at 8:55 PM Huang, Ying <[email protected]> wrote:
>
> From: Huang Ying <[email protected]>
>
> Previously, the PMD swap operations are only enabled for
> CONFIG_ARCH_ENABLE_THP_MIGRATION. Because they are only used by the
> THP migration support. We will support PMD swap mapping to the huge
> swap cluster and swapin the THP as a whole. That will be enabled via
> CONFIG_THP_SWAP and needs these PMD swap operations. So enable the
> PMD swap operations for CONFIG_THP_SWAP too.
>
> Signed-off-by: "Huang, Ying" <[email protected]>
> Cc: "Kirill A. Shutemov" <[email protected]>
> Cc: Andrea Arcangeli <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Shaohua Li <[email protected]>
> Cc: Hugh Dickins <[email protected]>
> Cc: Minchan Kim <[email protected]>
> Cc: Rik van Riel <[email protected]>
> Cc: Dave Hansen <[email protected]>
> Cc: Naoya Horiguchi <[email protected]>
> Cc: Zi Yan <[email protected]>
> Cc: Daniel Jordan <[email protected]>
> ---
> arch/x86/include/asm/pgtable.h | 2 +-
> include/asm-generic/pgtable.h | 2 +-
> include/linux/swapops.h | 44 ++++++++++++++++++++++--------------------
> 3 files changed, 25 insertions(+), 23 deletions(-)
>
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index 99ecde23c3ec..13bf58838daf 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -1224,7 +1224,7 @@ static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
> return pte_clear_flags(pte, _PAGE_SWP_SOFT_DIRTY);
> }
>
> -#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> +#if defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) || defined(CONFIG_THP_SWAP)
How about introducing a new config symbol representing the common
infrastructure between the two and have them select that symbol.
Would that also allow us to clean up the usage of
CONFIG_ARCH_ENABLE_THP_MIGRATION in fs/proc/task_mmu.c? In other
words, what's the point of having nice ifdef'd alternatives in header
files when ifdefs are still showing up in C files? All of it should be
optionally determined by header files.
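A minimal sketch of what such a symbol might look like (the name HAVE_PMD_SWAP_ENTRY is hypothetical, and the existing option bodies are abbreviated):

```kconfig
# Hypothetical common symbol for the PMD swap entry infrastructure
# shared by THP migration and THP swap.
config HAVE_PMD_SWAP_ENTRY
	def_bool n

config ARCH_ENABLE_THP_MIGRATION
	bool
	select HAVE_PMD_SWAP_ENTRY

config THP_SWAP
	def_bool y
	depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP
	select HAVE_PMD_SWAP_ENTRY
```

The ifdefs would then test a single CONFIG_HAVE_PMD_SWAP_ENTRY instead of the two-symbol disjunction.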
On Thu, Jun 21, 2018 at 8:55 PM Huang, Ying <[email protected]> wrote:
>
> From: Huang Ying <[email protected]>
>
> It's unreasonable to optimize swapping for THP without basic swapping
> support. And this will cause build errors when THP_SWAP functions are
> defined in swapfile.c and called elsewhere.
>
> The comments are fixed too to reflect the latest progress.
Looks good to me:
Reviewed-by: Dan Williams <[email protected]>
On Thu, Jun 21, 2018 at 8:55 PM Huang, Ying <[email protected]> wrote:
>
> From: Huang Ying <[email protected]>
>
> To support to swapin the THP as a whole, we need to create PMD swap
> mapping during swapout, and maintain PMD swap mapping count. This
> patch implements the support to increase the PMD swap mapping
> count (for swapout, fork, etc.) and set SWAP_HAS_CACHE flag (for
> swapin, etc.) for a huge swap cluster in swap_duplicate() function
> family. Although it only implements a part of the design of the swap
> reference count with PMD swap mapping, the whole design is described
> as follow to make it easy to understand the patch and the whole
> picture.
>
> A huge swap cluster is used to hold the contents of a swapped-out THP.
> After swapout, a PMD page mapping to the THP will become a PMD
> swap mapping to the huge swap cluster via a swap entry in PMD. While
> a PTE page mapping to a subpage of the THP will become the PTE swap
> mapping to a swap slot in the huge swap cluster via a swap entry in
> PTE.
>
> If there is no PMD swap mapping and the corresponding THP is removed
> from the page cache (reclaimed), the huge swap cluster will be split
> and become a normal swap cluster.
>
> The count (cluster_count()) of the huge swap cluster is
> SWAPFILE_CLUSTER (= HPAGE_PMD_NR) + PMD swap mapping count. Because
> all swap slots in the huge swap cluster are mapped by PTE or PMD, or
> has SWAP_HAS_CACHE bit set, the usage count of the swap cluster is
> HPAGE_PMD_NR. And the PMD swap mapping count is recorded too to make
> it easy to determine whether there are remaining PMD swap mappings.
>
> The count in swap_map[offset] is the sum of PTE and PMD swap mapping
> count. This means when we increase the PMD swap mapping count, we
> need to increase swap_map[offset] for all swap slots inside the swap
> cluster. An alternative choice is to make swap_map[offset] to record
> PTE swap map count only, given we have recorded PMD swap mapping count
> in the count of the huge swap cluster. But this need to increase
> swap_map[offset] when splitting the PMD swap mapping, that may fail
> because of memory allocation for swap count continuation. That is
> hard to deal with. So we chose the current solution.
>
> The PMD swap mapping to a huge swap cluster may be split when unmap a
> part of PMD mapping etc. That is easy because only the count of the
> huge swap cluster need to be changed. When the last PMD swap mapping
> is gone and SWAP_HAS_CACHE is unset, we will split the huge swap
> cluster (clear the huge flag). This makes it easy to reason about
> the cluster state.
>
> A huge swap cluster will be split when splitting the THP in swap
> cache, or failing to allocate THP during swapin, etc. But when
> splitting the huge swap cluster, we will not try to split all PMD swap
> mappings, because we haven't enough information available for that
> sometimes. Later, when the PMD swap mapping is duplicated or swapin,
> etc, the PMD swap mapping will be split and fall back to the PTE
> operation.
>
> When a THP is added into swap cache, the SWAP_HAS_CACHE flag will be
> set in the swap_map[offset] of all swap slots inside the huge swap
> cluster backing the THP. This huge swap cluster will not be split
> unless the THP is split even if its PMD swap mapping count dropped to
> 0. Later, when the THP is removed from swap cache, the SWAP_HAS_CACHE
> flag will be cleared in the swap_map[offset] of all swap slots inside
> the huge swap cluster. And this huge swap cluster will be split if
> its PMD swap mapping count is 0.
>
> Signed-off-by: "Huang, Ying" <[email protected]>
> Cc: "Kirill A. Shutemov" <[email protected]>
> Cc: Andrea Arcangeli <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Shaohua Li <[email protected]>
> Cc: Hugh Dickins <[email protected]>
> Cc: Minchan Kim <[email protected]>
> Cc: Rik van Riel <[email protected]>
> Cc: Dave Hansen <[email protected]>
> Cc: Naoya Horiguchi <[email protected]>
> Cc: Zi Yan <[email protected]>
> Cc: Daniel Jordan <[email protected]>
> ---
> include/linux/huge_mm.h | 5 +
> include/linux/swap.h | 9 +-
> mm/memory.c | 2 +-
> mm/rmap.c | 2 +-
> mm/swap_state.c | 2 +-
> mm/swapfile.c | 287 +++++++++++++++++++++++++++++++++---------------
> 6 files changed, 214 insertions(+), 93 deletions(-)
I'm probably missing some background, but I find the patch hard to
read. Can you disseminate some of this patch changelog into kernel-doc
commentary so it's easier to follow which helpers do what relative to
THP swap.
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index d3bbf6bea9e9..213d32e57c39 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -80,6 +80,11 @@ extern struct kobj_attribute shmem_enabled_attr;
> #define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
> #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
>
> +static inline bool thp_swap_supported(void)
> +{
> + return IS_ENABLED(CONFIG_THP_SWAP);
> +}
> +
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> #define HPAGE_PMD_SHIFT PMD_SHIFT
> #define HPAGE_PMD_SIZE ((1UL) << HPAGE_PMD_SHIFT)
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index f73eafcaf4e9..57aa655ab27d 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -451,8 +451,8 @@ extern swp_entry_t get_swap_page_of_type(int);
> extern int get_swap_pages(int n, bool cluster, swp_entry_t swp_entries[]);
> extern int add_swap_count_continuation(swp_entry_t, gfp_t);
> extern void swap_shmem_alloc(swp_entry_t);
> -extern int swap_duplicate(swp_entry_t);
> -extern int swapcache_prepare(swp_entry_t);
> +extern int swap_duplicate(swp_entry_t *entry, bool cluster);
This patch introduces a new flag to swap_duplicate(), but then all
usages still pass 'false', so why does this patch change the argument?
It seems this change belongs in another patch.
> +extern int swapcache_prepare(swp_entry_t entry, bool cluster);
Rather than add a cluster flag to these helpers can the swp_entry_t
carry the cluster flag directly?
Dan Williams <[email protected]> writes:
> On Thu, Jun 21, 2018 at 8:55 PM Huang, Ying <[email protected]> wrote:
>>
>> From: Huang Ying <[email protected]>
>>
>> Previously, the PMD swap operations are only enabled for
>> CONFIG_ARCH_ENABLE_THP_MIGRATION. Because they are only used by the
>> THP migration support. We will support PMD swap mapping to the huge
>> swap cluster and swapin the THP as a whole. That will be enabled via
>> CONFIG_THP_SWAP and needs these PMD swap operations. So enable the
>> PMD swap operations for CONFIG_THP_SWAP too.
>>
>> Signed-off-by: "Huang, Ying" <[email protected]>
>> Cc: "Kirill A. Shutemov" <[email protected]>
>> Cc: Andrea Arcangeli <[email protected]>
>> Cc: Michal Hocko <[email protected]>
>> Cc: Johannes Weiner <[email protected]>
>> Cc: Shaohua Li <[email protected]>
>> Cc: Hugh Dickins <[email protected]>
>> Cc: Minchan Kim <[email protected]>
>> Cc: Rik van Riel <[email protected]>
>> Cc: Dave Hansen <[email protected]>
>> Cc: Naoya Horiguchi <[email protected]>
>> Cc: Zi Yan <[email protected]>
>> Cc: Daniel Jordan <[email protected]>
>> ---
>> arch/x86/include/asm/pgtable.h | 2 +-
>> include/asm-generic/pgtable.h | 2 +-
>> include/linux/swapops.h | 44 ++++++++++++++++++++++--------------------
>> 3 files changed, 25 insertions(+), 23 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
>> index 99ecde23c3ec..13bf58838daf 100644
>> --- a/arch/x86/include/asm/pgtable.h
>> +++ b/arch/x86/include/asm/pgtable.h
>> @@ -1224,7 +1224,7 @@ static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
>> return pte_clear_flags(pte, _PAGE_SWP_SOFT_DIRTY);
>> }
>>
>> -#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>> +#if defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) || defined(CONFIG_THP_SWAP)
>
> How about introducing a new config symbol representing the common
> infrastructure between the two and have them select that symbol.
The common infrastructure shared by the two mechanisms is the PMD swap
entry. But I didn't find many places where that common infrastructure
is used, so I think it may be over-engineering to introduce a new
config symbol that is used in so few places.
> Would that also allow us to clean up the usage of
> CONFIG_ARCH_ENABLE_THP_MIGRATION in fs/proc/task_mmu.c? In other
> words, what's the point of having nice ifdef'd alternatives in header
> files when ifdefs are still showing up in C files, all of it should be
> optionally determined by header files.
Unfortunately, I think it is not an easy task to wrap all C code via
#ifdef in header files. And it may be over-engineering to wrap it
all. I guess this is why there are still some #ifdefs in C files.
Best Regards,
Huang, Ying
On Sun, Jul 8, 2018 at 10:40 PM, Huang, Ying <[email protected]> wrote:
> Dan Williams <[email protected]> writes:
>
>> On Thu, Jun 21, 2018 at 8:55 PM Huang, Ying <[email protected]> wrote:
>>>
>>> From: Huang Ying <[email protected]>
>>>
>>> Previously, the PMD swap operations are only enabled for
>>> CONFIG_ARCH_ENABLE_THP_MIGRATION. Because they are only used by the
>>> THP migration support. We will support PMD swap mapping to the huge
>>> swap cluster and swapin the THP as a whole. That will be enabled via
>>> CONFIG_THP_SWAP and needs these PMD swap operations. So enable the
>>> PMD swap operations for CONFIG_THP_SWAP too.
>>>
>>> Signed-off-by: "Huang, Ying" <[email protected]>
>>> Cc: "Kirill A. Shutemov" <[email protected]>
>>> Cc: Andrea Arcangeli <[email protected]>
>>> Cc: Michal Hocko <[email protected]>
>>> Cc: Johannes Weiner <[email protected]>
>>> Cc: Shaohua Li <[email protected]>
>>> Cc: Hugh Dickins <[email protected]>
>>> Cc: Minchan Kim <[email protected]>
>>> Cc: Rik van Riel <[email protected]>
>>> Cc: Dave Hansen <[email protected]>
>>> Cc: Naoya Horiguchi <[email protected]>
>>> Cc: Zi Yan <[email protected]>
>>> Cc: Daniel Jordan <[email protected]>
>>> ---
>>> arch/x86/include/asm/pgtable.h | 2 +-
>>> include/asm-generic/pgtable.h | 2 +-
>>> include/linux/swapops.h | 44 ++++++++++++++++++++++--------------------
>>> 3 files changed, 25 insertions(+), 23 deletions(-)
>>>
>>> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
>>> index 99ecde23c3ec..13bf58838daf 100644
>>> --- a/arch/x86/include/asm/pgtable.h
>>> +++ b/arch/x86/include/asm/pgtable.h
>>> @@ -1224,7 +1224,7 @@ static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
>>> return pte_clear_flags(pte, _PAGE_SWP_SOFT_DIRTY);
>>> }
>>>
>>> -#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>>> +#if defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) || defined(CONFIG_THP_SWAP)
>>
>> How about introducing a new config symbol representing the common
>> infrastructure between the two and have them select that symbol.
>
> The common infrastructure shared by the two mechanisms is the PMD swap
> entry. But I didn't find many places where that common infrastructure
> is used, so I think it may be over-engineering to introduce a new
> config symbol that is used in so few places.
>
>> Would that also allow us to clean up the usage of
>> CONFIG_ARCH_ENABLE_THP_MIGRATION in fs/proc/task_mmu.c? In other
>> words, what's the point of having nice ifdef'd alternatives in header
>> files when ifdefs are still showing up in C files, all of it should be
>> optionally determined by header files.
>
> Unfortunately, I think it is not an easy task to wrap all C code via
> #ifdef in header files. And it may be over-engineering to wrap it
> all. I guess this is why there are still some #ifdefs in C files.
That's the entire point. Yes, over-engineer the header files so the
actual C code is more readable.
Dan Williams <[email protected]> writes:
> On Sun, Jul 8, 2018 at 10:40 PM, Huang, Ying <[email protected]> wrote:
>> Dan Williams <[email protected]> writes:
>>> Would that also allow us to clean up the usage of
>>> CONFIG_ARCH_ENABLE_THP_MIGRATION in fs/proc/task_mmu.c? In other
>>> words, what's the point of having nice ifdef'd alternatives in header
>>> files when ifdefs are still showing up in C files, all of it should be
>>> optionally determined by header files.
>>
>> Unfortunately, I think it is not an easy task to wrap all C code via
>> #ifdef in header files. And it may be over-engineering to wrap it
>> all. I guess this is why there are still some #ifdefs in C files.
>
> That's the entire point. Yes, over-engineer the header files so the
> actual C code is more readable.
Take pagemap_pmd_range() in fs/proc/task_mmu.c as an example: to avoid
the #ifdef, we may wrap all the code between #ifdef/#endif into a
separate function, put the new function into another C file (which is
compiled only if the option is enabled), and then change the header
files for that too.
In this way we avoid #ifdef/#endif, but the code becomes more complex
and tightly related code may end up in different files. Readability
may be hurt too.
Maybe you have a smarter way to change the code that avoids #ifdef and
improves readability?
Best Regards,
Huang, Ying
Dan Williams <[email protected]> writes:
> On Thu, Jun 21, 2018 at 8:55 PM Huang, Ying <[email protected]> wrote:
>>
>> From: Huang Ying <[email protected]>
>>
>> It's unreasonable to optimize swapping for THP without basic swapping
>> support. And this will cause build errors when THP_SWAP functions are
>> defined in swapfile.c and called elsewhere.
>>
>> The comments are fixed too to reflect the latest progress.
>
> Looks good to me:
>
> Reviewed-by: Dan Williams <[email protected]>
Thanks!
Best Regards,
Huang, Ying
Dan Williams <[email protected]> writes:
> On Thu, Jun 21, 2018 at 8:55 PM Huang, Ying <[email protected]> wrote:
>>
>> From: Huang Ying <[email protected]>
>>
>> To support to swapin the THP as a whole, we need to create PMD swap
>> mapping during swapout, and maintain PMD swap mapping count. This
>> patch implements the support to increase the PMD swap mapping
>> count (for swapout, fork, etc.) and set SWAP_HAS_CACHE flag (for
>> swapin, etc.) for a huge swap cluster in swap_duplicate() function
>> family. Although it only implements a part of the design of the swap
>> reference count with PMD swap mapping, the whole design is described
>> as follow to make it easy to understand the patch and the whole
>> picture.
>>
>> A huge swap cluster is used to hold the contents of a swapped-out THP.
>> After swapout, a PMD page mapping to the THP will become a PMD
>> swap mapping to the huge swap cluster via a swap entry in PMD. While
>> a PTE page mapping to a subpage of the THP will become the PTE swap
>> mapping to a swap slot in the huge swap cluster via a swap entry in
>> PTE.
>>
>> If there is no PMD swap mapping and the corresponding THP is removed
>> from the page cache (reclaimed), the huge swap cluster will be split
>> and become a normal swap cluster.
>>
>> The count (cluster_count()) of the huge swap cluster is
>> SWAPFILE_CLUSTER (= HPAGE_PMD_NR) + PMD swap mapping count. Because
>> all swap slots in the huge swap cluster are mapped by PTE or PMD, or
>> has SWAP_HAS_CACHE bit set, the usage count of the swap cluster is
>> HPAGE_PMD_NR. And the PMD swap mapping count is recorded too to make
>> it easy to determine whether there are remaining PMD swap mappings.
>>
>> The count in swap_map[offset] is the sum of PTE and PMD swap mapping
>> count. This means when we increase the PMD swap mapping count, we
>> need to increase swap_map[offset] for all swap slots inside the swap
>> cluster. An alternative choice is to make swap_map[offset] to record
>> PTE swap map count only, given we have recorded PMD swap mapping count
>> in the count of the huge swap cluster. But this need to increase
>> swap_map[offset] when splitting the PMD swap mapping, that may fail
>> because of memory allocation for swap count continuation. That is
>> hard to deal with. So we chose the current solution.
>>
>> The PMD swap mapping to a huge swap cluster may be split when unmap a
>> part of PMD mapping etc. That is easy because only the count of the
>> huge swap cluster need to be changed. When the last PMD swap mapping
>> is gone and SWAP_HAS_CACHE is unset, we will split the huge swap
>> cluster (clear the huge flag). This makes it easy to reason about
>> the cluster state.
>>
>> A huge swap cluster will be split when splitting the THP in swap
>> cache, or failing to allocate THP during swapin, etc. But when
>> splitting the huge swap cluster, we will not try to split all PMD swap
>> mappings, because we haven't enough information available for that
>> sometimes. Later, when the PMD swap mapping is duplicated or swapin,
>> etc, the PMD swap mapping will be split and fall back to the PTE
>> operation.
>>
>> When a THP is added into swap cache, the SWAP_HAS_CACHE flag will be
>> set in the swap_map[offset] of all swap slots inside the huge swap
>> cluster backing the THP. This huge swap cluster will not be split
>> unless the THP is split even if its PMD swap mapping count dropped to
>> 0. Later, when the THP is removed from swap cache, the SWAP_HAS_CACHE
>> flag will be cleared in the swap_map[offset] of all swap slots inside
>> the huge swap cluster. And this huge swap cluster will be split if
>> its PMD swap mapping count is 0.
>>
>> Signed-off-by: "Huang, Ying" <[email protected]>
>> Cc: "Kirill A. Shutemov" <[email protected]>
>> Cc: Andrea Arcangeli <[email protected]>
>> Cc: Michal Hocko <[email protected]>
>> Cc: Johannes Weiner <[email protected]>
>> Cc: Shaohua Li <[email protected]>
>> Cc: Hugh Dickins <[email protected]>
>> Cc: Minchan Kim <[email protected]>
>> Cc: Rik van Riel <[email protected]>
>> Cc: Dave Hansen <[email protected]>
>> Cc: Naoya Horiguchi <[email protected]>
>> Cc: Zi Yan <[email protected]>
>> Cc: Daniel Jordan <[email protected]>
>> ---
>> include/linux/huge_mm.h | 5 +
>> include/linux/swap.h | 9 +-
>> mm/memory.c | 2 +-
>> mm/rmap.c | 2 +-
>> mm/swap_state.c | 2 +-
>> mm/swapfile.c | 287 +++++++++++++++++++++++++++++++++---------------
>> 6 files changed, 214 insertions(+), 93 deletions(-)
>
> I'm probably missing some background, but I find the patch hard to
> read. Can you disseminate some of this patch changelog into kernel-doc
> commentary so it's easier to follow which helpers do what relative to
> THP swap.
Yes. This is a good idea. Thanks for pointing it out. I will add more
kernel-doc commentary to make the code easier to be understood.
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index d3bbf6bea9e9..213d32e57c39 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -80,6 +80,11 @@ extern struct kobj_attribute shmem_enabled_attr;
>> #define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
>> #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
>>
>> +static inline bool thp_swap_supported(void)
>> +{
>> + return IS_ENABLED(CONFIG_THP_SWAP);
>> +}
>> +
>> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> #define HPAGE_PMD_SHIFT PMD_SHIFT
>> #define HPAGE_PMD_SIZE ((1UL) << HPAGE_PMD_SHIFT)
>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>> index f73eafcaf4e9..57aa655ab27d 100644
>> --- a/include/linux/swap.h
>> +++ b/include/linux/swap.h
>> @@ -451,8 +451,8 @@ extern swp_entry_t get_swap_page_of_type(int);
>> extern int get_swap_pages(int n, bool cluster, swp_entry_t swp_entries[]);
>> extern int add_swap_count_continuation(swp_entry_t, gfp_t);
>> extern void swap_shmem_alloc(swp_entry_t);
>> -extern int swap_duplicate(swp_entry_t);
>> -extern int swapcache_prepare(swp_entry_t);
>> +extern int swap_duplicate(swp_entry_t *entry, bool cluster);
>
> This patch introduces a new flag to swap_duplicate(), but then all
> usages still pass 'false', so why does this patch change the argument?
> It seems this change belongs in another patch.
This patch just introduces the capability to deal with huge swap
entries in the swap_duplicate() family of functions. The first user of
the huge swap entry is in
[PATCH -mm -v4 08/21] mm, THP, swap: Support to read a huge swap cluster for swapin a THP
via swapcache_prepare().
Yes, it is generally better to put the implementation and its user into
one patch. But I found that, done that way, I would have to put most of
the code of this patchset into a single huge patch, which is not good
for review either. So I made a compromise and separated the
implementation and the users into different patches, to keep the size
of each patch reasonable. Does this make sense to you?
>> +extern int swapcache_prepare(swp_entry_t entry, bool cluster);
>
> Rather than add a cluster flag to these helpers can the swp_entry_t
> carry the cluster flag directly?
Matthew Wilcox suggested replacing the "cluster" flag with the number
of entries to make the interface more flexible. He also suggested a
very smart way to encode nr_entries into the swp_entry_t, with
something like,
https://plus.google.com/117536210417097546339/posts/hvctn17WUZu
But I think we would need to
- encode swap type, swap offset, and nr_entries into a new swp_entry_t
- call a function
- decode the new swp_entry_t in the function
So it appears that it doesn't bring real value beyond removing one
parameter. Or do you suggest something else?
Best Regards,
Huang, Ying
On 06/21/2018 08:51 PM, Huang, Ying wrote:
> From: Huang Ying <[email protected]>
>
> Previously, the PMD swap operations are only enabled for
> CONFIG_ARCH_ENABLE_THP_MIGRATION. Because they are only used by the
> THP migration support. We will support PMD swap mapping to the huge
> swap cluster and swapin the THP as a whole. That will be enabled via
> CONFIG_THP_SWAP and needs these PMD swap operations. So enable the
> PMD swap operations for CONFIG_THP_SWAP too.
This commit message kinda skirts around the real reasons for this patch.
Shouldn't we just say something like:
Currently, "swap entries" in the page tables are used for a
number of things outside of actual swap, like page migration.
We support THP/PMD "swap entries" for page migration currently
and the functions behind this are tied to page migration's
config option (CONFIG_ARCH_ENABLE_THP_MIGRATION).
But, we also need them for THP swap.
...
It would also be nice to explain a bit why you are moving code around.
Would this look any better if we made a Kconfig option:
config HAVE_THP_SWAP_ENTRIES
def_bool n
# "Swap entries" in the page tables are used
# both for migration and actual swap.
depends on THP_SWAP || ARCH_ENABLE_THP_MIGRATION
You logically talked about this need for PMD swap operations in your
commit message, so I think it makes sense to codify that in a single
place where it can be coherently explained.
> config THP_SWAP
> def_bool y
> - depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP
> + depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP && SWAP
> help
This seems like a bug-fix. Is there a reason this didn't cause problems
up to now?
> +static inline bool thp_swap_supported(void)
> +{
> + return IS_ENABLED(CONFIG_THP_SWAP);
> +}
This seems like rather useless abstraction. Why do we need it?
...
> -static inline int swap_duplicate(swp_entry_t swp)
> +static inline int swap_duplicate(swp_entry_t *swp, bool cluster)
> {
> return 0;
> }
FWIW, I despise true/false function arguments like this. When I see
this in code:
swap_duplicate(&entry, false);
I have no idea what false does. I'd much rather see:
enum do_swap_cluster {
SWP_DO_CLUSTER,
SWP_NO_CLUSTER
};
So you see:
swap_duplicate(&entry, SWP_NO_CLUSTER);
vs.
swap_duplicate(&entry, SWP_DO_CLUSTER);
> diff --git a/mm/memory.c b/mm/memory.c
> index e9cac1c4fa69..f3900282e3da 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -951,7 +951,7 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> swp_entry_t entry = pte_to_swp_entry(pte);
>
> if (likely(!non_swap_entry(entry))) {
> - if (swap_duplicate(entry) < 0)
> + if (swap_duplicate(&entry, false) < 0)
> return entry.val;
>
> /* make sure dst_mm is on swapoff's mmlist. */
I'll also point out that in a multi-hundred-line patch, adding arguments
to an existing function would not be something I'd try to include in the
patch. I'd break it out separately unless absolutely necessary.
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index f42b1b0cdc58..48e2c54385ee 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -49,6 +49,9 @@ static bool swap_count_continued(struct swap_info_struct *, pgoff_t,
> unsigned char);
> static void free_swap_count_continuations(struct swap_info_struct *);
> static sector_t map_swap_entry(swp_entry_t, struct block_device**);
> +static int add_swap_count_continuation_locked(struct swap_info_struct *si,
> + unsigned long offset,
> + struct page *page);
>
> DEFINE_SPINLOCK(swap_lock);
> static unsigned int nr_swapfiles;
> @@ -319,6 +322,11 @@ static inline void unlock_cluster_or_swap_info(struct swap_info_struct *si,
> spin_unlock(&si->lock);
> }
>
> +static inline bool is_cluster_offset(unsigned long offset)
> +{
> + return !(offset % SWAPFILE_CLUSTER);
> +}
> +
> static inline bool cluster_list_empty(struct swap_cluster_list *list)
> {
> return cluster_is_null(&list->head);
> @@ -1166,16 +1174,14 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
> return NULL;
> }
>
> -static unsigned char __swap_entry_free(struct swap_info_struct *p,
> - swp_entry_t entry, unsigned char usage)
> +static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
> + struct swap_cluster_info *ci,
> + unsigned long offset,
> + unsigned char usage)
> {
> - struct swap_cluster_info *ci;
> - unsigned long offset = swp_offset(entry);
> unsigned char count;
> unsigned char has_cache;
>
> - ci = lock_cluster_or_swap_info(p, offset);
> -
> count = p->swap_map[offset];
>
> has_cache = count & SWAP_HAS_CACHE;
> @@ -1203,6 +1209,17 @@ static unsigned char __swap_entry_free(struct swap_info_struct *p,
> usage = count | has_cache;
> p->swap_map[offset] = usage ? : SWAP_HAS_CACHE;
>
> + return usage;
> +}
> +
> +static unsigned char __swap_entry_free(struct swap_info_struct *p,
> + swp_entry_t entry, unsigned char usage)
> +{
> + struct swap_cluster_info *ci;
> + unsigned long offset = swp_offset(entry);
> +
> + ci = lock_cluster_or_swap_info(p, offset);
> + usage = __swap_entry_free_locked(p, ci, offset, usage);
> unlock_cluster_or_swap_info(p, ci);
>
> return usage;
> @@ -3450,32 +3467,12 @@ void si_swapinfo(struct sysinfo *val)
> spin_unlock(&swap_lock);
> }
>
> -/*
> - * Verify that a swap entry is valid and increment its swap map count.
> - *
> - * Returns error code in following case.
> - * - success -> 0
> - * - swp_entry is invalid -> EINVAL
> - * - swp_entry is migration entry -> EINVAL
> - * - swap-cache reference is requested but there is already one. -> EEXIST
> - * - swap-cache reference is requested but the entry is not used. -> ENOENT
> - * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
> - */
> -static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
> +static int __swap_duplicate_locked(struct swap_info_struct *p,
> + unsigned long offset, unsigned char usage)
> {
> - struct swap_info_struct *p;
> - struct swap_cluster_info *ci;
> - unsigned long offset;
> unsigned char count;
> unsigned char has_cache;
> - int err = -EINVAL;
> -
> - p = get_swap_device(entry);
> - if (!p)
> - goto out;
> -
> - offset = swp_offset(entry);
> - ci = lock_cluster_or_swap_info(p, offset);
> + int err = 0;
>
> count = p->swap_map[offset];
>
> @@ -3485,12 +3482,11 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
> */
> if (unlikely(swap_count(count) == SWAP_MAP_BAD)) {
> err = -ENOENT;
> - goto unlock_out;
> + goto out;
> }
>
> has_cache = count & SWAP_HAS_CACHE;
> count &= ~SWAP_HAS_CACHE;
> - err = 0;
>
> if (usage == SWAP_HAS_CACHE) {
>
> @@ -3517,11 +3513,39 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
>
> p->swap_map[offset] = count | has_cache;
>
> -unlock_out:
> +out:
> + return err;
> +}
... and that all looks like refactoring, not actively implementing PMD
swap support. That's unfortunate.
> +/*
> + * Verify that a swap entry is valid and increment its swap map count.
> + *
> + * Returns error code in following case.
> + * - success -> 0
> + * - swp_entry is invalid -> EINVAL
> + * - swp_entry is migration entry -> EINVAL
> + * - swap-cache reference is requested but there is already one. -> EEXIST
> + * - swap-cache reference is requested but the entry is not used. -> ENOENT
> + * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
> + */
> +static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
> +{
> + struct swap_info_struct *p;
> + struct swap_cluster_info *ci;
> + unsigned long offset;
> + int err = -EINVAL;
> +
> + p = get_swap_device(entry);
> + if (!p)
> + goto out;
Is this an error, or just for running into something like a migration
entry? Comments please.
> + offset = swp_offset(entry);
> + ci = lock_cluster_or_swap_info(p, offset);
> + err = __swap_duplicate_locked(p, offset, usage);
> unlock_cluster_or_swap_info(p, ci);
> +
> + put_swap_device(p);
> out:
> - if (p)
> - put_swap_device(p);
> return err;
> }
Not a comment on this patch, but lock_cluster_or_swap_info() is woefully
uncommented.
> @@ -3534,6 +3558,81 @@ void swap_shmem_alloc(swp_entry_t entry)
> __swap_duplicate(entry, SWAP_MAP_SHMEM);
> }
>
> +#ifdef CONFIG_THP_SWAP
> +static int __swap_duplicate_cluster(swp_entry_t *entry, unsigned char usage)
> +{
> + struct swap_info_struct *si;
> + struct swap_cluster_info *ci;
> + unsigned long offset;
> + unsigned char *map;
> + int i, err = 0;
Instead of an #ifdef, is there a reason we can't just do:
if (!IS_ENABLED(THP_SWAP))
return 0;
?
> + si = get_swap_device(*entry);
> + if (!si) {
> + err = -EINVAL;
> + goto out;
> + }
> + offset = swp_offset(*entry);
> + ci = lock_cluster(si, offset);
Could you explain a bit why we do lock_cluster() and not
lock_cluster_or_swap_info() here?
> + if (cluster_is_free(ci)) {
> + err = -ENOENT;
> + goto unlock;
> + }
Needs comments on how this could happen. We just took the lock, so I
assume this is some kind of race, but can you elaborate?
> + if (!cluster_is_huge(ci)) {
> + err = -ENOTDIR;
> + goto unlock;
> + }
Yikes! This function is the core of the new functionality and its
comment count is exactly 0. There was quite a long patch description,
which will be surely lost to the ages, but nothing in the code that
folks _will_ be looking at for decades to come.
Can we fix that?
> + VM_BUG_ON(!is_cluster_offset(offset));
> + VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER);
So, by this point, we know we are looking at (or supposed to be looking
at) a cluster on the device?
> + map = si->swap_map + offset;
> + if (usage == SWAP_HAS_CACHE) {
> + if (map[0] & SWAP_HAS_CACHE) {
> + err = -EEXIST;
> + goto unlock;
> + }
> + for (i = 0; i < SWAPFILE_CLUSTER; i++) {
> + VM_BUG_ON(map[i] & SWAP_HAS_CACHE);
> + map[i] |= SWAP_HAS_CACHE;
> + }
So, it's OK to race with the first entry, but after that it's a bug
because the tail pages should agree with the head page's state?
> + } else {
> + for (i = 0; i < SWAPFILE_CLUSTER; i++) {
> +retry:
> + err = __swap_duplicate_locked(si, offset + i, usage);
> + if (err == -ENOMEM) {
> + struct page *page;
> +
> + page = alloc_page(GFP_ATOMIC | __GFP_HIGHMEM);
I noticed that the non-clustering analog of this function takes a GFP
mask. Why not this one?
> + err = add_swap_count_continuation_locked(
> + si, offset + i, page);
> + if (err) {
> + *entry = swp_entry(si->type, offset+i);
> + goto undup;
> + }
> + goto retry;
> + } else if (err)
> + goto undup;
> + }
> + cluster_set_count(ci, cluster_count(ci) + usage);
> + }
> +unlock:
> + unlock_cluster(ci);
> + put_swap_device(si);
> +out:
> + return err;
> +undup:
> + for (i--; i >= 0; i--)
> + __swap_entry_free_locked(
> + si, ci, offset + i, usage);
> + goto unlock;
> +}
So, we've basically created a fork of the __swap_duplicate() code for
huge pages, along with a presumably new set of bugs and a second code
path to update. Was this unavoidable? Can we unify this any more with
the small pages path?
> /*
> * Increase reference count of swap entry by 1.
> * Returns 0 for success, or -ENOMEM if a swap_count_continuation is required
> @@ -3541,12 +3640,15 @@ void swap_shmem_alloc(swp_entry_t entry)
> * if __swap_duplicate() fails for another reason (-EINVAL or -ENOENT), which
> * might occur if a page table entry has got corrupted.
> */
> -int swap_duplicate(swp_entry_t entry)
> +int swap_duplicate(swp_entry_t *entry, bool cluster)
> {
> int err = 0;
>
> - while (!err && __swap_duplicate(entry, 1) == -ENOMEM)
> - err = add_swap_count_continuation(entry, GFP_ATOMIC);
> + if (thp_swap_supported() && cluster)
> + return __swap_duplicate_cluster(entry, 1);
> +
> + while (!err && __swap_duplicate(*entry, 1) == -ENOMEM)
> + err = add_swap_count_continuation(*entry, GFP_ATOMIC);
> return err;
> }
Reading this, I wonder whether this has been refactored as much as
possible. Both add_swap_count_continuation() and
__swap_duplicate_cluster() start off with the same get_swap_device() dance.
> @@ -3558,9 +3660,12 @@ int swap_duplicate(swp_entry_t entry)
> * -EBUSY means there is a swap cache.
> * Note: return code is different from swap_duplicate().
> */
> -int swapcache_prepare(swp_entry_t entry)
> +int swapcache_prepare(swp_entry_t entry, bool cluster)
> {
> - return __swap_duplicate(entry, SWAP_HAS_CACHE);
> + if (thp_swap_supported() && cluster)
> + return __swap_duplicate_cluster(&entry, SWAP_HAS_CACHE);
> + else
> + return __swap_duplicate(entry, SWAP_HAS_CACHE);
> }
>
> struct swap_info_struct *swp_swap_info(swp_entry_t entry)
> @@ -3590,51 +3695,13 @@ pgoff_t __page_file_index(struct page *page)
> }
> EXPORT_SYMBOL_GPL(__page_file_index);
>
> -/*
> - * add_swap_count_continuation - called when a swap count is duplicated
> - * beyond SWAP_MAP_MAX, it allocates a new page and links that to the entry's
> - * page of the original vmalloc'ed swap_map, to hold the continuation count
> - * (for that entry and for its neighbouring PAGE_SIZE swap entries). Called
> - * again when count is duplicated beyond SWAP_MAP_MAX * SWAP_CONT_MAX, etc.
This closes out with a lot of refactoring noise. Any chance that can be
isolated into another patch?
> +#ifdef CONFIG_THP_SWAP
> +static inline int cluster_swapcount(struct swap_cluster_info *ci)
> +{
> + if (!ci || !cluster_is_huge(ci))
> + return 0;
> +
> + return cluster_count(ci) - SWAPFILE_CLUSTER;
> +}
> +#else
> +#define cluster_swapcount(ci) 0
> +#endif
Dumb questions, round 2: On a CONFIG_THP_SWAP=n build, presumably,
cluster_is_huge()=0 always, so cluster_swapcount() always returns 0. Right?
So, why the #ifdef?
> /*
> * It's possible scan_swap_map() uses a free cluster in the middle of free
> * cluster list. Avoiding such abuse to avoid list corruption.
> @@ -905,6 +917,7 @@ static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
> struct swap_cluster_info *ci;
>
> ci = lock_cluster(si, offset);
> + memset(si->swap_map + offset, 0, SWAPFILE_CLUSTER);
> cluster_set_count_flag(ci, 0, 0);
> free_cluster(si, idx);
> unlock_cluster(ci);
This is another case of gloriously comment-free code, but stuff that
_was_ covered in the changelog. I'd much rather have code comments than
changelog comments. Could we fix that?
I'm generally finding it quite hard to review this because I keep having
to refer back to the changelog to see if what you are doing matches what
you said you were doing.
> @@ -1288,24 +1301,30 @@ static void swapcache_free_cluster(swp_entry_t entry)
>
> ci = lock_cluster(si, offset);
> VM_BUG_ON(!cluster_is_huge(ci));
> + VM_BUG_ON(!is_cluster_offset(offset));
> + VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER);
> map = si->swap_map + offset;
> - for (i = 0; i < SWAPFILE_CLUSTER; i++) {
> - val = map[i];
> - VM_BUG_ON(!(val & SWAP_HAS_CACHE));
> - if (val == SWAP_HAS_CACHE)
> - free_entries++;
> + if (!cluster_swapcount(ci)) {
> + for (i = 0; i < SWAPFILE_CLUSTER; i++) {
> + val = map[i];
> + VM_BUG_ON(!(val & SWAP_HAS_CACHE));
> + if (val == SWAP_HAS_CACHE)
> + free_entries++;
> + }
> + if (free_entries != SWAPFILE_CLUSTER)
> + cluster_clear_huge(ci);
> }
Also, I'll point out that cluster_swapcount() continues the horrific
naming of cluster_count(), not saying what the count is *of*. The
return value doesn't help much:
return cluster_count(ci) - SWAPFILE_CLUSTER;
I'm seeing a pattern here.
old code:
foo()
{
do_swap_something()
}
new code:
foo(bool cluster)
{
if (cluster)
do_swap_cluster_something();
else
do_swap_something();
}
That makes me fear that we have:
1. Created a new, wholly untested code path
2. Created two places to patch bugs
3. Are not reusing code when possible
The lack of code reuse was, and continues to be, IMNHO, one of the
largest sources of bugs with the original THP implementation. It might
be infeasible to do here, but let's at least give it as much of a go as
we can.
Can I ask that you take another round through this set and:
1. Consolidate code refactoring into separate patches
2. Add comments to code, and avoid doing it solely in changelogs
3. Make an effort to share more code between the old code and new
code. Where code can not be shared, call that out in the changelog.
This is a *really* hard-to-review set at the moment. Doing those things
will make it much easier to review and hopefully give us more
maintainable code going forward.
My apologies for not having done this review sooner.
On Fri, Jun 22, 2018 at 11:51:44AM +0800, Huang, Ying wrote:
> Because there is no way to prevent a huge swap cluster from being
> split except when it has SWAP_HAS_CACHE flag set.
What about making get_mctgt_type_thp take the cluster lock? That function
would be the first lock_cluster user outside of swapfile.c, but it would
serialize with split_swap_cluster.
> It is possible for
> the huge swap cluster to be split and the charge for the swap slots
> inside to be changed after we check the PMD swap mapping and the huge
> swap cluster but before we commit the charge moving. But the race
> window is so small that we will just ignore the race.
Moving the charges is a slow path, so can't we just be correct here and not
leak?
Dave Hansen <[email protected]> writes:
> On 06/21/2018 08:51 PM, Huang, Ying wrote:
>> From: Huang Ying <[email protected]>
>>
>> Previously, the PMD swap operations are only enabled for
>> CONFIG_ARCH_ENABLE_THP_MIGRATION. Because they are only used by the
>> THP migration support. We will support PMD swap mapping to the huge
>> swap cluster and swapin the THP as a whole. That will be enabled via
>> CONFIG_THP_SWAP and needs these PMD swap operations. So enable the
>> PMD swap operations for CONFIG_THP_SWAP too.
>
> This commit message kinda skirts around the real reasons for this patch.
> Shouldn't we just say something like:
>
> Currently, "swap entries" in the page tables are used for a
> number of things outside of actual swap, like page migration.
> We support THP/PMD "swap entries" for page migration currently
> and the functions behind this are tied to page migration's
> config option (CONFIG_ARCH_ENABLE_THP_MIGRATION).
>
> But, we also need them for THP swap.
> ...
>
> It would also be nice to explain a bit why you are moving code around.
This looks much better than my original words. Thanks for your help!
> Would this look any better if we made a Kconfig option:
>
> config HAVE_THP_SWAP_ENTRIES
> def_bool n
> # "Swap entries" in the page tables are used
> # both for migration and actual swap.
> depends on THP_SWAP || ARCH_ENABLE_THP_MIGRATION
>
> You logically talked about this need for PMD swap operations in your
> commit message, so I think it makes sense to codify that in a single
> place where it can be coherently explained.
Because both you and Dan think it's better to add a new Kconfig option,
I will do that.
Best Regards,
Huang, Ying
Dave Hansen <[email protected]> writes:
>> config THP_SWAP
>> def_bool y
>> - depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP
>> + depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP && SWAP
>> help
>
>
> This seems like a bug-fix. Is there a reason this didn't cause problems
> up to now?
Yes. The original code has a problem in theory, but not in practice,
because all the code enclosed by
#ifdef CONFIG_THP_SWAP
#endif
is in swapfile.c. But that will no longer be true with this patchset.
Best Regards,
Huang, Ying
On 07/09/2018 06:19 PM, Huang, Ying wrote:
> Dave Hansen <[email protected]> writes:
>
>>> config THP_SWAP
>>> def_bool y
>>> - depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP
>>> + depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP && SWAP
>>> help
>>
>> This seems like a bug-fix. Is there a reason this didn't cause problems
>> up to now?
> Yes. The original code has some problem in theory, but not in practice
> because all code enclosed by
>
> #ifdef CONFIG_THP_SWAP
> #endif
>
> are in swapfile.c. But that will be not true in this patchset.
That's great info for the changelog.
Dave Hansen <[email protected]> writes:
> On 07/09/2018 06:19 PM, Huang, Ying wrote:
>> Dave Hansen <[email protected]> writes:
>>
>>>> config THP_SWAP
>>>> def_bool y
>>>> - depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP
>>>> + depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP && SWAP
>>>> help
>>>
>>> This seems like a bug-fix. Is there a reason this didn't cause problems
>>> up to now?
>> Yes. The original code has some problem in theory, but not in practice
>> because all code enclosed by
>>
>> #ifdef CONFIG_THP_SWAP
>> #endif
>>
>> are in swapfile.c. But that will be not true in this patchset.
>
> That's great info for the changelog.
Sure. Will add it.
Best Regards,
Huang, Ying
Dave Hansen <[email protected]> writes:
>> +static inline bool thp_swap_supported(void)
>> +{
>> + return IS_ENABLED(CONFIG_THP_SWAP);
>> +}
>
> This seems like rather useless abstraction. Why do we need it?
I just wanted to make it shorter: 19 vs. 27 characters. But if you
think IS_ENABLED(CONFIG_THP_SWAP) is much better, I can use that instead.
> ...
>> -static inline int swap_duplicate(swp_entry_t swp)
>> +static inline int swap_duplicate(swp_entry_t *swp, bool cluster)
>> {
>> return 0;
>> }
>
> FWIW, I despise true/false function arguments like this. When I see
> this in code:
>
> swap_duplicate(&entry, false);
>
> I have no idea what false does. I'd much rather see:
>
> enum do_swap_cluster {
> SWP_DO_CLUSTER,
> SWP_NO_CLUSTER
> };
>
> So you see:
>
> swap_duplicate(&entry, SWP_NO_CLUSTER);
>
> vs.
>
> swap_duplicate(&entry, SWP_DO_CLUSTER);
>
Yes, a boolean parameter isn't good most of the time. Matthew Wilcox
suggested using
swap_duplicate(&entry, HPAGE_PMD_NR);
vs.
swap_duplicate(&entry, 1);
He thinks this makes the interface flexible enough to support other
swap entry sizes in the future. What do you think about that?
>> diff --git a/mm/memory.c b/mm/memory.c
>> index e9cac1c4fa69..f3900282e3da 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -951,7 +951,7 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>> swp_entry_t entry = pte_to_swp_entry(pte);
>>
>> if (likely(!non_swap_entry(entry))) {
>> - if (swap_duplicate(entry) < 0)
>> + if (swap_duplicate(&entry, false) < 0)
>> return entry.val;
>>
>> /* make sure dst_mm is on swapoff's mmlist. */
>
> I'll also point out that in a multi-hundred-line patch, adding arguments
> to an existing function would not be something I'd try to include in the
> patch. I'd break it out separately unless absolutely necessary.
You mean adding another patch, which only adds arguments to the
function without changing the function body?
>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>> index f42b1b0cdc58..48e2c54385ee 100644
>> --- a/mm/swapfile.c
>> +++ b/mm/swapfile.c
>> @@ -49,6 +49,9 @@ static bool swap_count_continued(struct swap_info_struct *, pgoff_t,
>> unsigned char);
>> static void free_swap_count_continuations(struct swap_info_struct *);
>> static sector_t map_swap_entry(swp_entry_t, struct block_device**);
>> +static int add_swap_count_continuation_locked(struct swap_info_struct *si,
>> + unsigned long offset,
>> + struct page *page);
>>
>> DEFINE_SPINLOCK(swap_lock);
>> static unsigned int nr_swapfiles;
>> @@ -319,6 +322,11 @@ static inline void unlock_cluster_or_swap_info(struct swap_info_struct *si,
>> spin_unlock(&si->lock);
>> }
>>
>> +static inline bool is_cluster_offset(unsigned long offset)
>> +{
>> + return !(offset % SWAPFILE_CLUSTER);
>> +}
>> +
>> static inline bool cluster_list_empty(struct swap_cluster_list *list)
>> {
>> return cluster_is_null(&list->head);
>> @@ -1166,16 +1174,14 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
>> return NULL;
>> }
>>
>> -static unsigned char __swap_entry_free(struct swap_info_struct *p,
>> - swp_entry_t entry, unsigned char usage)
>> +static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
>> + struct swap_cluster_info *ci,
>> + unsigned long offset,
>> + unsigned char usage)
>> {
>> - struct swap_cluster_info *ci;
>> - unsigned long offset = swp_offset(entry);
>> unsigned char count;
>> unsigned char has_cache;
>>
>> - ci = lock_cluster_or_swap_info(p, offset);
>> -
>> count = p->swap_map[offset];
>>
>> has_cache = count & SWAP_HAS_CACHE;
>> @@ -1203,6 +1209,17 @@ static unsigned char __swap_entry_free(struct swap_info_struct *p,
>> usage = count | has_cache;
>> p->swap_map[offset] = usage ? : SWAP_HAS_CACHE;
>>
>> + return usage;
>> +}
>> +
>> +static unsigned char __swap_entry_free(struct swap_info_struct *p,
>> + swp_entry_t entry, unsigned char usage)
>> +{
>> + struct swap_cluster_info *ci;
>> + unsigned long offset = swp_offset(entry);
>> +
>> + ci = lock_cluster_or_swap_info(p, offset);
>> + usage = __swap_entry_free_locked(p, ci, offset, usage);
>> unlock_cluster_or_swap_info(p, ci);
>>
>> return usage;
>> @@ -3450,32 +3467,12 @@ void si_swapinfo(struct sysinfo *val)
>> spin_unlock(&swap_lock);
>> }
>>
>> -/*
>> - * Verify that a swap entry is valid and increment its swap map count.
>> - *
>> - * Returns error code in following case.
>> - * - success -> 0
>> - * - swp_entry is invalid -> EINVAL
>> - * - swp_entry is migration entry -> EINVAL
>> - * - swap-cache reference is requested but there is already one. -> EEXIST
>> - * - swap-cache reference is requested but the entry is not used. -> ENOENT
>> - * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
>> - */
>> -static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
>> +static int __swap_duplicate_locked(struct swap_info_struct *p,
>> + unsigned long offset, unsigned char usage)
>> {
>> - struct swap_info_struct *p;
>> - struct swap_cluster_info *ci;
>> - unsigned long offset;
>> unsigned char count;
>> unsigned char has_cache;
>> - int err = -EINVAL;
>> -
>> - p = get_swap_device(entry);
>> - if (!p)
>> - goto out;
>> -
>> - offset = swp_offset(entry);
>> - ci = lock_cluster_or_swap_info(p, offset);
>> + int err = 0;
>>
>> count = p->swap_map[offset];
>>
>> @@ -3485,12 +3482,11 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
>> */
>> if (unlikely(swap_count(count) == SWAP_MAP_BAD)) {
>> err = -ENOENT;
>> - goto unlock_out;
>> + goto out;
>> }
>>
>> has_cache = count & SWAP_HAS_CACHE;
>> count &= ~SWAP_HAS_CACHE;
>> - err = 0;
>>
>> if (usage == SWAP_HAS_CACHE) {
>>
>> @@ -3517,11 +3513,39 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
>>
>> p->swap_map[offset] = count | has_cache;
>>
>> -unlock_out:
>> +out:
>> + return err;
>> +}
>
> ... and that all looks like refactoring, not actively implementing PMD
> swap support. That's unfortunate.
>
>> +/*
>> + * Verify that a swap entry is valid and increment its swap map count.
>> + *
>> + * Returns error code in following case.
>> + * - success -> 0
>> + * - swp_entry is invalid -> EINVAL
>> + * - swp_entry is migration entry -> EINVAL
>> + * - swap-cache reference is requested but there is already one. -> EEXIST
>> + * - swap-cache reference is requested but the entry is not used. -> ENOENT
>> + * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
>> + */
>> +static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
>> +{
>> + struct swap_info_struct *p;
>> + struct swap_cluster_info *ci;
>> + unsigned long offset;
>> + int err = -EINVAL;
>> +
>> + p = get_swap_device(entry);
>> + if (!p)
>> + goto out;
>
> Is this an error, or just for running into something like a migration
> entry? Comments please.
__swap_duplicate() may be called with an invalid swap entry because the
swap device may be swapped off after we get the swap entry during page
fault handling. Yes, I will add some comments here.
>> + offset = swp_offset(entry);
>> + ci = lock_cluster_or_swap_info(p, offset);
>> + err = __swap_duplicate_locked(p, offset, usage);
>> unlock_cluster_or_swap_info(p, ci);
>> +
>> + put_swap_device(p);
>> out:
>> - if (p)
>> - put_swap_device(p);
>> return err;
>> }
>
> Not a comment on this patch, but lock_cluster_or_swap_info() is woefully
> uncommented.
OK. Will add some comments for that.
>> @@ -3534,6 +3558,81 @@ void swap_shmem_alloc(swp_entry_t entry)
>> __swap_duplicate(entry, SWAP_MAP_SHMEM);
>> }
>>
>> +#ifdef CONFIG_THP_SWAP
>> +static int __swap_duplicate_cluster(swp_entry_t *entry, unsigned char usage)
>> +{
>> + struct swap_info_struct *si;
>> + struct swap_cluster_info *ci;
>> + unsigned long offset;
>> + unsigned char *map;
>> + int i, err = 0;
>
> Instead of an #ifdef, is there a reason we can't just do:
>
> if (!IS_ENABLED(THP_SWAP))
> return 0;
>
> ?
Good idea. Will do this for the whole patchset.
>> + si = get_swap_device(*entry);
>> + if (!si) {
>> + err = -EINVAL;
>> + goto out;
>> + }
>> + offset = swp_offset(*entry);
>> + ci = lock_cluster(si, offset);
>
> Could you explain a bit why we do lock_cluster() and not
> lock_cluster_or_swap_info() here?
The code size of lock_cluster() is a little smaller, and I think it is
a little easier to read. But I know lock_cluster_or_swap_info() could
be used here without functional problems. If we try to merge the code
for huge and normal swap entries, it could be used.
>> + if (cluster_is_free(ci)) {
>> + err = -ENOENT;
>> + goto unlock;
>> + }
>
> Needs comments on how this could happen. We just took the lock, so I
> assume this is some kind of race, but can you elaborate?
Sure. Will add some comments for this.
>> + if (!cluster_is_huge(ci)) {
>> + err = -ENOTDIR;
>> + goto unlock;
>> + }
>
> Yikes! This function is the core of the new functionality and its
> comment count is exactly 0. There was quite a long patch description,
> which will be surely lost to the ages, but nothing in the code that
> folks _will_ be looking at for decades to come.
>
> Can we fix that?
Sure. Will add more comments.
>> + VM_BUG_ON(!is_cluster_offset(offset));
>> + VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER);
>
> So, by this point, we know we are looking at (or supposed to be looking
> at) a cluster on the device?
Yes.
>> + map = si->swap_map + offset;
>> + if (usage == SWAP_HAS_CACHE) {
>> + if (map[0] & SWAP_HAS_CACHE) {
>> + err = -EEXIST;
>> + goto unlock;
>> + }
>> + for (i = 0; i < SWAPFILE_CLUSTER; i++) {
>> + VM_BUG_ON(map[i] & SWAP_HAS_CACHE);
>> + map[i] |= SWAP_HAS_CACHE;
>> + }
>
> So, it's OK to race with the first entry, but after that it's a bug
> because the tail pages should agree with the head page's state?
Yes. Will add some comments about this.
>> + } else {
>> + for (i = 0; i < SWAPFILE_CLUSTER; i++) {
>> +retry:
>> + err = __swap_duplicate_locked(si, offset + i, usage);
>> + if (err == -ENOMEM) {
>> + struct page *page;
>> +
>> + page = alloc_page(GFP_ATOMIC | __GFP_HIGHMEM);
>
> I noticed that the non-clustering analog of this function takes a GFP
> mask. Why not this one?
The value of gfp_mask is GFP_ATOMIC in swap_duplicate(), so they are
exactly the same.
>> + err = add_swap_count_continuation_locked(
>> + si, offset + i, page);
>> + if (err) {
>> + *entry = swp_entry(si->type, offset+i);
>> + goto undup;
>> + }
>> + goto retry;
>> + } else if (err)
>> + goto undup;
>> + }
>> + cluster_set_count(ci, cluster_count(ci) + usage);
>> + }
>> +unlock:
>> + unlock_cluster(ci);
>> + put_swap_device(si);
>> +out:
>> + return err;
>> +undup:
>> + for (i--; i >= 0; i--)
>> + __swap_entry_free_locked(
>> + si, ci, offset + i, usage);
>> + goto unlock;
>> +}
>
> So, we've basically created a fork of the __swap_duplicate() code for
> huge pages, along with a presumably new set of bugs and a second code
> path to update. Was this unavoidable? Can we unify this any more with
> the small pages path?
Will discuss this in another thread.
>> /*
>> * Increase reference count of swap entry by 1.
>> * Returns 0 for success, or -ENOMEM if a swap_count_continuation is required
>> @@ -3541,12 +3640,15 @@ void swap_shmem_alloc(swp_entry_t entry)
>> * if __swap_duplicate() fails for another reason (-EINVAL or -ENOENT), which
>> * might occur if a page table entry has got corrupted.
>> */
>> -int swap_duplicate(swp_entry_t entry)
>> +int swap_duplicate(swp_entry_t *entry, bool cluster)
>> {
>> int err = 0;
>>
>> - while (!err && __swap_duplicate(entry, 1) == -ENOMEM)
>> - err = add_swap_count_continuation(entry, GFP_ATOMIC);
>> + if (thp_swap_supported() && cluster)
>> + return __swap_duplicate_cluster(entry, 1);
>> +
>> + while (!err && __swap_duplicate(*entry, 1) == -ENOMEM)
>> + err = add_swap_count_continuation(*entry, GFP_ATOMIC);
>> return err;
>> }
>
> Reading this, I wonder whether this has been refactored as much as
> possible. Both add_swap_count_continuation() and
> __swap_duplciate_cluster() start off with the same get_swap_device() dance.
Yes. There's some duplicated code logic. Will think about how to
improve it.
>> @@ -3558,9 +3660,12 @@ int swap_duplicate(swp_entry_t entry)
>> * -EBUSY means there is a swap cache.
>> * Note: return code is different from swap_duplicate().
>> */
>> -int swapcache_prepare(swp_entry_t entry)
>> +int swapcache_prepare(swp_entry_t entry, bool cluster)
>> {
>> - return __swap_duplicate(entry, SWAP_HAS_CACHE);
>> + if (thp_swap_supported() && cluster)
>> + return __swap_duplicate_cluster(&entry, SWAP_HAS_CACHE);
>> + else
>> + return __swap_duplicate(entry, SWAP_HAS_CACHE);
>> }
>>
>> struct swap_info_struct *swp_swap_info(swp_entry_t entry)
>> @@ -3590,51 +3695,13 @@ pgoff_t __page_file_index(struct page *page)
>> }
>> EXPORT_SYMBOL_GPL(__page_file_index);
>>
>> -/*
>> - * add_swap_count_continuation - called when a swap count is duplicated
>> - * beyond SWAP_MAP_MAX, it allocates a new page and links that to the entry's
>> - * page of the original vmalloc'ed swap_map, to hold the continuation count
>> - * (for that entry and for its neighbouring PAGE_SIZE swap entries). Called
>> - * again when count is duplicated beyond SWAP_MAP_MAX * SWAP_CONT_MAX, etc.
>
> This closes out with a lot of refactoring noise. Any chance that can be
> isolated into another patch?
Sure. Will do that.
Best Regards,
Huang, Ying
Dave Hansen <[email protected]> writes:
>> +#ifdef CONFIG_THP_SWAP
>> +static inline int cluster_swapcount(struct swap_cluster_info *ci)
>> +{
>> + if (!ci || !cluster_is_huge(ci))
>> + return 0;
>> +
>> + return cluster_count(ci) - SWAPFILE_CLUSTER;
>> +}
>> +#else
>> +#define cluster_swapcount(ci) 0
>> +#endif
>
> Dumb questions, round 2: On a CONFIG_THP_SWAP=n build, presumably,
> cluster_is_huge()=0 always, so cluster_swapcount() always returns 0. Right?
>
> So, why the #ifdef?
#ifdef here is to reduce the code size for !CONFIG_THP_SWAP.
>> /*
>> * It's possible scan_swap_map() uses a free cluster in the middle of free
>> * cluster list. Avoiding such abuse to avoid list corruption.
>> @@ -905,6 +917,7 @@ static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
>> struct swap_cluster_info *ci;
>>
>> ci = lock_cluster(si, offset);
>> + memset(si->swap_map + offset, 0, SWAPFILE_CLUSTER);
>> cluster_set_count_flag(ci, 0, 0);
>> free_cluster(si, idx);
>> unlock_cluster(ci);
>
> This is another case of gloriously comment-free code, but stuff that
> _was_ covered in the changelog. I'd much rather have code comments than
> changelog comments. Could we fix that?
>
> I'm generally finding it quite hard to review this because I keep having
> to refer back to the changelog to see if what you are doing matches what
> you said you were doing.
Sure. Will fix this.
>> @@ -1288,24 +1301,30 @@ static void swapcache_free_cluster(swp_entry_t entry)
>>
>> ci = lock_cluster(si, offset);
>> VM_BUG_ON(!cluster_is_huge(ci));
>> + VM_BUG_ON(!is_cluster_offset(offset));
>> + VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER);
>> map = si->swap_map + offset;
>> - for (i = 0; i < SWAPFILE_CLUSTER; i++) {
>> - val = map[i];
>> - VM_BUG_ON(!(val & SWAP_HAS_CACHE));
>> - if (val == SWAP_HAS_CACHE)
>> - free_entries++;
>> + if (!cluster_swapcount(ci)) {
>> + for (i = 0; i < SWAPFILE_CLUSTER; i++) {
>> + val = map[i];
>> + VM_BUG_ON(!(val & SWAP_HAS_CACHE));
>> + if (val == SWAP_HAS_CACHE)
>> + free_entries++;
>> + }
>> + if (free_entries != SWAPFILE_CLUSTER)
>> + cluster_clear_huge(ci);
>> }
>
> Also, I'll point out that cluster_swapcount() continues the horrific
> naming of cluster_count(), not saying what the count is *of*. The
> return value doesn't help much:
>
> return cluster_count(ci) - SWAPFILE_CLUSTER;
We have page_swapcount() for page, swp_swapcount() for swap entry.
cluster_swapcount() tries to mimic them for swap cluster. But I am not
good at naming in general. What's your suggestion?
Best Regards,
Huang, Ying
Dave Hansen <[email protected]> writes:
> I'm seeing a pattern here.
>
> old code:
>
> foo()
> {
> do_swap_something()
> }
>
> new code:
>
> foo(bool cluster)
> {
> if (cluster)
> do_swap_cluster_something();
> else
> do_swap_something();
> }
>
> That make me fear that we have:
> 1. Created a new, wholly untested code path
> 2. Created two places to patch bugs
> 3. Are not reusing code when possible
>
> The code non-reuse was, and continues to be, IMNHO, one of the largest
> sources of bugs with the original THP implementation. It might be
> infeasible to do here, but let's at least give it as much of a go as we can.
I totally agree that we should unify the code path for huge and normal
page/swap if possible. One concern is code size for !CONFIG_THP_SWAP.
The original method is good for that. The new method may introduce some
huge-swap-related code that is hard to eliminate for
!CONFIG_THP_SWAP. Andrew Morton pointed this out for the patchset of
the first step of the THP swap optimization.
This may be mitigated at least partly via,
`
#ifdef CONFIG_THP_SWAP
#define nr_swap_entries(nr) (nr)
#else
#define nr_swap_entries(nr) 1
#endif
void do_something(swp_entry_t entry, int __nr_entries)
{
int i, nr_entries = nr_swap_entries(__nr_entries);
if (nr_entries == SWAPFILE_CLUSTER)
; /* huge swap specific */
else
; /* normal swap specific */
for (i = 0; i < nr_entries; i++) {
; /* do something for each entry */
}
/* ... */
}
`
and rely on the compiler to do the dirty work for us if possible.
Hi, Andrew,
What do you think about this?
> Can I ask that you take another round through this set and:
>
> 1. Consolidate code refactoring into separate patches
Sure.
> 2. Add comments to code, and avoid doing it solely in changelogs
Sure.
> 3. Make an effort to share more code between the old code and new
> code. Where code can not be shared, call that out in the changelog.
Will do that if we resolve the code size concern.
> This is a *really* hard-to-review set at the moment. Doing those things
> will make it much easier to review and hopefully give us more
> maintainable code going forward.
>
> My apologies for not having done this review sooner.
Thanks a lot for your comments!
Best Regards,
Huang, Ying
Daniel Jordan <[email protected]> writes:
> On Fri, Jun 22, 2018 at 11:51:44AM +0800, Huang, Ying wrote:
>> Because there is no way to prevent a huge swap cluster from being
>> split except when it has SWAP_HAS_CACHE flag set.
>
> What about making get_mctgt_type_thp take the cluster lock? That function
> would be the first lock_cluster user outside of swapfile.c, but it would
> serialize with split_swap_cluster.
>
>> It is possible for
>> the huge swap cluster to be split and the charge for the swap slots
>> inside to be changed, after we check the PMD swap mapping and the huge
>> swap cluster before we commit the charge moving. But the race window
>> is so small, that we will just ignore the race.
>
> Moving the charges is a slow path, so can't we just be correct here and not
> leak?
I checked the code and thought about this again, and found the race may
not exist, because the PMD is locked from when get_mctgt_type_thp() is
called until the charge is completed for the PMD. So the charge of the
huge swap cluster cannot be changed concurrently even if the huge swap
cluster is split by other processes. Right?
Will update the comments for this.
Best Regards,
Huang, Ying
> Yes. A boolean parameter isn't good most of the time. Matthew Wilcox
> suggested to use
>
> swap_duplicate(&entry, HPAGE_PMD_NR);
>
> vs.
>
> swap_duplicate(&entry, 1);
>
> He thinks this makes the interface more flexible to support other swap
> entry sizes in the future. What do you think about that?
That looks great to me too.
>>> if (likely(!non_swap_entry(entry))) {
>>> - if (swap_duplicate(entry) < 0)
>>> + if (swap_duplicate(&entry, false) < 0)
>>> return entry.val;
>>>
>>> /* make sure dst_mm is on swapoff's mmlist. */
>>
>> I'll also point out that in a multi-hundred-line patch, adding arguments
>> to an existing function would not be something I'd try to include in the
>> patch. I'd break it out separately unless absolutely necessary.
>
> You mean add another patch, which only adds arguments to the function,
> but does not change the body of the function?
Yes. Or, just add the non-THP-swap version first.
On 07/09/2018 11:53 PM, Huang, Ying wrote:
> Dave Hansen <[email protected]> writes:
>>> +#ifdef CONFIG_THP_SWAP
>>> +static inline int cluster_swapcount(struct swap_cluster_info *ci)
>>> +{
>>> + if (!ci || !cluster_is_huge(ci))
>>> + return 0;
>>> +
>>> + return cluster_count(ci) - SWAPFILE_CLUSTER;
>>> +}
>>> +#else
>>> +#define cluster_swapcount(ci) 0
>>> +#endif
>>
>> Dumb questions, round 2: On a CONFIG_THP_SWAP=n build, presumably,
>> cluster_is_huge()=0 always, so cluster_swapcount() always returns 0. Right?
>>
>> So, why the #ifdef?
>
> #ifdef here is to reduce the code size for !CONFIG_THP_SWAP.
I'd just remove the !CONFIG_THP_SWAP version entirely.
>>> @@ -1288,24 +1301,30 @@ static void swapcache_free_cluster(swp_entry_t entry)
>>>
>>> ci = lock_cluster(si, offset);
>>> VM_BUG_ON(!cluster_is_huge(ci));
>>> + VM_BUG_ON(!is_cluster_offset(offset));
>>> + VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER);
>>> map = si->swap_map + offset;
>>> - for (i = 0; i < SWAPFILE_CLUSTER; i++) {
>>> - val = map[i];
>>> - VM_BUG_ON(!(val & SWAP_HAS_CACHE));
>>> - if (val == SWAP_HAS_CACHE)
>>> - free_entries++;
>>> + if (!cluster_swapcount(ci)) {
>>> + for (i = 0; i < SWAPFILE_CLUSTER; i++) {
>>> + val = map[i];
>>> + VM_BUG_ON(!(val & SWAP_HAS_CACHE));
>>> + if (val == SWAP_HAS_CACHE)
>>> + free_entries++;
>>> + }
>>> + if (free_entries != SWAPFILE_CLUSTER)
>>> + cluster_clear_huge(ci);
>>> }
>>
>> Also, I'll point out that cluster_swapcount() continues the horrific
>> naming of cluster_count(), not saying what the count is *of*. The
>> return value doesn't help much:
>>
>> return cluster_count(ci) - SWAPFILE_CLUSTER;
>
> We have page_swapcount() for page, swp_swapcount() for swap entry.
> cluster_swapcount() tries to mimic them for swap cluster. But I am not
> good at naming in general. What's your suggestion?
I don't have a suggestion because I haven't the foggiest idea what it is
doing. :)
Is it the number of instantiated swap cache pages that are referring to
this cluster? Is it just huge pages? Huge and small? One refcount per
huge page, or 512?
On 07/10/2018 12:13 AM, Huang, Ying wrote:
> Dave Hansen <[email protected]> writes:
>> The code non-reuse was, and continues to be, IMNHO, one of the largest
>> sources of bugs with the original THP implementation. It might be
>> infeasible to do here, but let's at least give it as much of a go as we can.
>
> I totally agree that we should unify the code path for huge and normal
> page/swap if possible. One concern is code size for !CONFIG_THP_SWAP.
I've honestly never heard that as an argument before. In general, our
.c files implement *full* functionality: the most complex case. The
headers #ifdef that functionality down because of our .config or
architecture.
The thing that matters here is debugging and reviewing the _complicated_
case, IMNHO.
> The original method is good for that. The new method may introduce some
> huge-swap-related code that is hard to eliminate for
> !CONFIG_THP_SWAP. Andrew Morton pointed this out for the patchset of
> the first step of the THP swap optimization.
>
> This may be mitigated at least partly via,
>
> `
> #ifdef CONFIG_THP_SWAP
> #define nr_swap_entries(nr) (nr)
> #else
> #define nr_swap_entries(nr) 1
> #endif
>
> void do_something(swp_entry_t entry, int __nr_entries)
> {
> int i, nr_entries = nr_swap_entries(__nr_entries);
>
> if (nr_entries == SWAPFILE_CLUSTER)
> ; /* huge swap specific */
> else
> ; /* normal swap specific */
>
> for (i = 0; i < nr_entries; i++) {
> ; /* do something for each entry */
> }
>
> /* ... */
> }
> `
While that isn't perfect, it's better than the current state of things.
While you are refactoring things, I think you also need to take a good
look at roughly chopping this series in half by finding another stopping
point. You've done a great job so far of trickling this functionality
in so far, but 21 patches is quite a bit, and the set is only going to
get larger.
On Tue, Jul 10, 2018 at 03:49:58PM +0800, Huang, Ying wrote:
> Daniel Jordan <[email protected]> writes:
>
> > On Fri, Jun 22, 2018 at 11:51:44AM +0800, Huang, Ying wrote:
> >> Because there is no way to prevent a huge swap cluster from being
> >> split except when it has SWAP_HAS_CACHE flag set.
> >
> > What about making get_mctgt_type_thp take the cluster lock? That function
> > would be the first lock_cluster user outside of swapfile.c, but it would
> > serialize with split_swap_cluster.
> >
> >> It is possible for
> >> the huge swap cluster to be split and the charge for the swap slots
> >> inside to be changed, after we check the PMD swap mapping and the huge
> >> swap cluster before we commit the charge moving. But the race window
> >> is so small, that we will just ignore the race.
> >
> > Moving the charges is a slow path, so can't we just be correct here and not
> > leak?
>
> I checked the code and thought about this again, and found the race may
> not exist, because the PMD is locked from when get_mctgt_type_thp() is
> called until the charge is completed for the PMD. So the charge of the
> huge swap cluster cannot be changed concurrently even if the huge swap
> cluster is split by other processes.
That's true, the PMD lock does prevent the swap charge from going stale between
the time mem_cgroup_move_charge_pte_range identifies a huge swap entry in
get_mctgt_type_thp and the time it moves the charge in mem_cgroup_move_account,
at least for some events like swapping in pages.
> Right?
I'm not sure the PMD lock covers everything, but after looking a while longer
at this, the charge moving seems to be a best-effort feature in other respects,
so the accounting doesn't need to be completely accurate.
Dave Hansen <[email protected]> writes:
>> Yes. A boolean parameter isn't good most of the time. Matthew Wilcox
>> suggested to use
>>
>> swap_duplicate(&entry, HPAGE_PMD_NR);
>>
>> vs.
>>
>> swap_duplicate(&entry, 1);
>>
>> He thinks this makes the interface more flexible to support other swap
>> entry sizes in the future. What do you think about that?
>
> That looks great to me too.
>
>>>> if (likely(!non_swap_entry(entry))) {
>>>> - if (swap_duplicate(entry) < 0)
>>>> + if (swap_duplicate(&entry, false) < 0)
>>>> return entry.val;
>>>>
>>>> /* make sure dst_mm is on swapoff's mmlist. */
>>>
>>> I'll also point out that in a multi-hundred-line patch, adding arguments
>>> to an existing function would not be something I'd try to include in the
>>> patch. I'd break it out separately unless absolutely necessary.
>>
>> You mean add another patch, which only adds arguments to the function,
>> but does not change the body of the function?
>
> Yes. Or, just add the non-THP-swap version first.
OK, will do this.
Best Regards,
Huang, Ying
Dave Hansen <[email protected]> writes:
> On 07/09/2018 11:53 PM, Huang, Ying wrote:
>> Dave Hansen <[email protected]> writes:
>>>> +#ifdef CONFIG_THP_SWAP
>>>> +static inline int cluster_swapcount(struct swap_cluster_info *ci)
>>>> +{
>>>> + if (!ci || !cluster_is_huge(ci))
>>>> + return 0;
>>>> +
>>>> + return cluster_count(ci) - SWAPFILE_CLUSTER;
>>>> +}
>>>> +#else
>>>> +#define cluster_swapcount(ci) 0
>>>> +#endif
>>>
>>> Dumb questions, round 2: On a CONFIG_THP_SWAP=n build, presumably,
>>> cluster_is_huge()=0 always, so cluster_swapcount() always returns 0. Right?
>>>
>>> So, why the #ifdef?
>>
>> #ifdef here is to reduce the code size for !CONFIG_THP_SWAP.
>
> I'd just remove the !CONFIG_THP_SWAP version entirely.
Sure, unless some other refactoring causes build errors.
>>>> @@ -1288,24 +1301,30 @@ static void swapcache_free_cluster(swp_entry_t entry)
>>>>
>>>> ci = lock_cluster(si, offset);
>>>> VM_BUG_ON(!cluster_is_huge(ci));
>>>> + VM_BUG_ON(!is_cluster_offset(offset));
>>>> + VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER);
>>>> map = si->swap_map + offset;
>>>> - for (i = 0; i < SWAPFILE_CLUSTER; i++) {
>>>> - val = map[i];
>>>> - VM_BUG_ON(!(val & SWAP_HAS_CACHE));
>>>> - if (val == SWAP_HAS_CACHE)
>>>> - free_entries++;
>>>> + if (!cluster_swapcount(ci)) {
>>>> + for (i = 0; i < SWAPFILE_CLUSTER; i++) {
>>>> + val = map[i];
>>>> + VM_BUG_ON(!(val & SWAP_HAS_CACHE));
>>>> + if (val == SWAP_HAS_CACHE)
>>>> + free_entries++;
>>>> + }
>>>> + if (free_entries != SWAPFILE_CLUSTER)
>>>> + cluster_clear_huge(ci);
>>>> }
>>>
>>> Also, I'll point out that cluster_swapcount() continues the horrific
>>> naming of cluster_count(), not saying what the count is *of*. The
>>> return value doesn't help much:
>>>
>>> return cluster_count(ci) - SWAPFILE_CLUSTER;
>>
>> We have page_swapcount() for page, swp_swapcount() for swap entry.
>> cluster_swapcount() tries to mimic them for swap cluster. But I am not
>> good at naming in general. What's your suggestion?
>
> I don't have a suggestion because I haven't the foggiest idea what it is
> doing. :)
>
> Is it the number of instantiated swap cache pages that are referring to
> this cluster? Is it just huge pages? Huge and small? One refcount per
> huge page, or 512?
page_swapcount() and swp_swapcount() for a normal swap entry give the
number of PTE swap mappings to that swap entry.
cluster_swapcount() for a huge swap entry (or huge swap cluster) gives
the number of PMD swap mappings to the huge swap entry.
Originally, cluster_count was the reference count of the swap entries in
the swap cluster (that is, how many entries are in use). Now, it is the
sum of the reference counts of the swap entries in the swap cluster and
the number of PMD swap mappings to the huge swap entry.
I need to add comments for this at least.
Best Regards,
Huang, Ying
Dave Hansen <[email protected]> writes:
> On 07/10/2018 12:13 AM, Huang, Ying wrote:
>> Dave Hansen <[email protected]> writes:
>>> The code non-reuse was, and continues to be, IMNHO, one of the largest
>>> sources of bugs with the original THP implementation. It might be
>>> infeasible to do here, but let's at least give it as much of a go as we can.
>>
>> I totally agree that we should unify the code path for huge and normal
>> page/swap if possible. One concern is code size for !CONFIG_THP_SWAP.
>
> I've honestly never heard that as an argument before. In general, our
> .c files implement *full* functionality: the most complex case. The
> headers #ifdef that functionality down because of our .config or
> architecture.
>
> The thing that matters here is debugging and reviewing the _complicated_
> case, IMNHO.
I agree with your point here. I will try it and measure the code size
change too.
>> The original method is good for that. The new method may introduce some
>> huge-swap-related code that is hard to eliminate for
>> !CONFIG_THP_SWAP. Andrew Morton pointed this out for the patchset of
>> the first step of the THP swap optimization.
>>
>> This may be mitigated at least partly via,
>>
>> `
>> #ifdef CONFIG_THP_SWAP
>> #define nr_swap_entries(nr) (nr)
>> #else
>> #define nr_swap_entries(nr) 1
>> #endif
>>
>> void do_something(swp_entry_t entry, int __nr_entries)
>> {
>> int i, nr_entries = nr_swap_entries(__nr_entries);
>>
>> if (nr_entries == SWAPFILE_CLUSTER)
>> ; /* huge swap specific */
>> else
>> ; /* normal swap specific */
>>
>> for (i = 0; i < nr_entries; i++) {
>> ; /* do something for each entry */
>> }
>>
>> /* ... */
>> }
>> `
>
> While that isn't perfect, it's better than the current state of things.
>
> While you are refactoring things, I think you also need to take a good
> look at roughly chopping this series in half by finding another stopping
> point. You've done a great job so far of trickling this functionality
> in so far, but 21 patches is quite a bit, and the set is only going to
> get larger.
Yes. The patchset is too large; I will try to reduce it if possible.
At least [21/21] can be separated, and [02/21] may be sent separately
too. The other parts are harder to split: THP swapin and
creating/supporting PMD swap mappings need to be in one patchset.
Best Regards,
Huang, Ying