2015-05-14 17:10:34

by Sasha Levin

Subject: [PATCH 00/11] mm: debug: formatting memory management structs

This patch series adds knowledge about various memory management structures
to the standard print functions.

In essence, it allows us to easily print those structures:

printk("%pZp %pZm %pZv", page, mm, vma);

This allows us to customize output even further when hitting bugs, so we
also introduce VM_BUG(), which lets us print anything when a bug is hit
rather than just a single predetermined piece of information.

This also means we can get rid of the VM_BUG_ON_* variants, since each
of them is now nothing more than a format string.

Changes since RFC:
- Address comments by Kirill.

Sasha Levin (11):
mm: debug: format flags in a buffer
mm: debug: deal with a new family of MM pointers
mm: debug: dump VMA into a string rather than directly on screen
mm: debug: dump struct MM into a string rather than directly on
screen
mm: debug: dump page into a string rather than directly on screen
mm: debug: clean unused code
mm: debug: VM_BUG()
mm: debug: kill VM_BUG_ON_PAGE
mm: debug: kill VM_BUG_ON_VMA
mm: debug: kill VM_BUG_ON_MM
mm: debug: use VM_BUG() to help with debug output

arch/arm/mm/mmap.c | 2 +-
arch/frv/mm/elf-fdpic.c | 4 +-
arch/mips/mm/gup.c | 4 +-
arch/parisc/kernel/sys_parisc.c | 2 +-
arch/powerpc/mm/hugetlbpage.c | 2 +-
arch/powerpc/mm/pgtable_64.c | 4 +-
arch/s390/mm/gup.c | 2 +-
arch/s390/mm/mmap.c | 2 +-
arch/s390/mm/pgtable.c | 6 +--
arch/sh/mm/mmap.c | 2 +-
arch/sparc/kernel/sys_sparc_64.c | 4 +-
arch/sparc/mm/gup.c | 2 +-
arch/sparc/mm/hugetlbpage.c | 4 +-
arch/tile/mm/hugetlbpage.c | 2 +-
arch/x86/kernel/sys_x86_64.c | 2 +-
arch/x86/mm/gup.c | 8 ++--
arch/x86/mm/hugetlbpage.c | 2 +-
arch/x86/mm/pgtable.c | 6 +--
include/linux/huge_mm.h | 2 +-
include/linux/hugetlb.h | 2 +-
include/linux/hugetlb_cgroup.h | 4 +-
include/linux/mm.h | 22 ++++-----
include/linux/mmdebug.h | 40 ++++++----------
include/linux/page-flags.h | 26 +++++-----
include/linux/pagemap.h | 11 +++--
include/linux/rmap.h | 2 +-
kernel/fork.c | 2 +-
lib/vsprintf.c | 22 +++++++++
mm/balloon_compaction.c | 4 +-
mm/cleancache.c | 6 +--
mm/compaction.c | 2 +-
mm/debug.c | 98 ++++++++++++++++++++------------------
mm/filemap.c | 18 +++----
mm/gup.c | 12 ++---
mm/huge_memory.c | 50 +++++++++----------
mm/hugetlb.c | 28 +++++------
mm/hugetlb_cgroup.c | 2 +-
mm/internal.h | 8 ++--
mm/interval_tree.c | 2 +-
mm/kasan/report.c | 2 +-
mm/ksm.c | 13 ++---
mm/memcontrol.c | 48 +++++++++----------
mm/memory.c | 10 ++--
mm/memory_hotplug.c | 2 +-
mm/migrate.c | 6 +--
mm/mlock.c | 4 +-
mm/mmap.c | 15 +++---
mm/mremap.c | 4 +-
mm/page_alloc.c | 28 +++++------
mm/page_io.c | 4 +-
mm/pagewalk.c | 2 +-
mm/pgtable-generic.c | 8 ++--
mm/rmap.c | 20 ++++----
mm/shmem.c | 10 ++--
mm/slub.c | 4 +-
mm/swap.c | 39 +++++++--------
mm/swap_state.c | 16 +++----
mm/swapfile.c | 8 ++--
mm/vmscan.c | 24 +++++-----
59 files changed, 355 insertions(+), 335 deletions(-)

--
1.7.10.4


2015-05-14 17:10:37

by Sasha Levin

Subject: [PATCH 01/11] mm: debug: format flags in a buffer

Format various flags into a string buffer rather than printing them
directly. This is a helper for later patches.

Signed-off-by: Sasha Levin <[email protected]>
---
mm/debug.c | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)

diff --git a/mm/debug.c b/mm/debug.c
index 3eb3ac2..decebcf 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -80,6 +80,41 @@ static void dump_flags(unsigned long flags,
pr_cont(")\n");
}

+static char *format_flags(unsigned long flags,
+ const struct trace_print_flags *names, int count,
+ char *buf, char *end)
+{
+ const char *delim = "";
+ unsigned long mask;
+ int i;
+
+ buf += snprintf(buf, (buf > end ? 0 : end - buf),
+ "flags: %#lx(", flags);
+
+ /* remove zone id */
+ flags &= (1UL << NR_PAGEFLAGS) - 1;
+
+ for (i = 0; i < count && flags; i++) {
+ mask = names[i].mask;
+ if ((flags & mask) != mask)
+ continue;
+
+ flags &= ~mask;
+ buf += snprintf(buf, (buf > end ? 0 : end - buf),
+ "%s%s", delim, names[i].name);
+ delim = "|";
+ }
+
+ /* check for left over flags */
+ if (flags)
+ buf += snprintf(buf, (buf > end ? 0 : end - buf),
+ "%s%#lx", delim, flags);
+
+ buf += snprintf(buf, (buf > end ? 0 : end - buf), ")\n");
+
+ return buf;
+}
+
void dump_page_badflags(struct page *page, const char *reason,
unsigned long badflags)
{
--
1.7.10.4

2015-05-14 17:10:43

by Sasha Levin

Subject: [PATCH 02/11] mm: debug: deal with a new family of MM pointers

This teaches our printing functions about a new family of memory
management pointers that they can now print.

I've picked %pZ because %pm and %pM were already taken, so I figured it
doesn't really matter what we go with. We also have the option of stealing
one of those two...

Signed-off-by: Sasha Levin <[email protected]>
---
lib/vsprintf.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)

diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 8243e2f..9350904 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -1375,6 +1375,21 @@ char *comm_name(char *buf, char *end, struct task_struct *tsk,
return string(buf, end, name, spec);
}

+static noinline_for_stack
+char *mm_pointer(char *buf, char *end, struct task_struct *tsk,
+ struct printf_spec spec, const char *fmt)
+{
+ switch (fmt[1]) {
+ default:
+ spec.base = 16;
+ spec.field_width = sizeof(unsigned long) * 2 + 2;
+ spec.flags |= SPECIAL | SMALL | ZEROPAD;
+ return number(buf, end, (unsigned long) ptr, spec);
+ }
+
+ return buf;
+}
+
int kptr_restrict __read_mostly;

/*
@@ -1463,6 +1478,7 @@ int kptr_restrict __read_mostly;
* (legacy clock framework) of the clock
* - 'Cr' For a clock, it prints the current rate of the clock
* - 'T' task_struct->comm
+ * - 'Z' Outputs a readable version of a type of memory management struct.
*
* Note: The difference between 'S' and 'F' is that on ia64 and ppc64
* function pointers are really function descriptors, which contain a
@@ -1615,6 +1631,8 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
spec, fmt);
case 'T':
return comm_name(buf, end, ptr, spec, fmt);
+ case 'Z':
+ return mm_pointer(buf, end, ptr, spec, fmt);
}
spec.flags |= SMALL;
if (spec.field_width == -1) {
--
1.7.10.4

2015-05-14 17:10:48

by Sasha Levin

Subject: [PATCH 03/11] mm: debug: dump VMA into a string rather than directly on screen

This lets us use regular string formatting code to dump VMAs, and also
use it in VM_BUG_ON_VMA instead of printing directly to the screen.

Signed-off-by: Sasha Levin <[email protected]>
---
include/linux/mmdebug.h | 8 ++++++--
lib/vsprintf.c | 7 +++++--
mm/debug.c | 26 ++++++++++++++------------
3 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 877ef22..506e405 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -10,10 +10,10 @@ struct mm_struct;
extern void dump_page(struct page *page, const char *reason);
extern void dump_page_badflags(struct page *page, const char *reason,
unsigned long badflags);
-void dump_vma(const struct vm_area_struct *vma);
void dump_mm(const struct mm_struct *mm);

#ifdef CONFIG_DEBUG_VM
+char *format_vma(const struct vm_area_struct *vma, char *buf, char *end);
#define VM_BUG_ON(cond) BUG_ON(cond)
#define VM_BUG_ON_PAGE(cond, page) \
do { \
@@ -25,7 +25,7 @@ void dump_mm(const struct mm_struct *mm);
#define VM_BUG_ON_VMA(cond, vma) \
do { \
if (unlikely(cond)) { \
- dump_vma(vma); \
+ pr_emerg("%pZv", vma); \
BUG(); \
} \
} while (0)
@@ -40,6 +40,10 @@ void dump_mm(const struct mm_struct *mm);
#define VM_WARN_ON_ONCE(cond) WARN_ON_ONCE(cond)
#define VM_WARN_ONCE(cond, format...) WARN_ONCE(cond, format)
#else
+static char *format_vma(const struct vm_area_struct *vma, char *buf, char *end)
+{
+ return buf;
+}
#define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
#define VM_BUG_ON_PAGE(cond, page) VM_BUG_ON(cond)
#define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 9350904..ea11d513 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -1376,10 +1376,12 @@ char *comm_name(char *buf, char *end, struct task_struct *tsk,
}

static noinline_for_stack
-char *mm_pointer(char *buf, char *end, struct task_struct *tsk,
+char *mm_pointer(char *buf, char *end, const void *ptr,
struct printf_spec spec, const char *fmt)
{
switch (fmt[1]) {
+ case 'v':
+ return format_vma(ptr, buf, end);
default:
spec.base = 16;
spec.field_width = sizeof(unsigned long) * 2 + 2;
@@ -1478,7 +1480,8 @@ int kptr_restrict __read_mostly;
* (legacy clock framework) of the clock
* - 'Cr' For a clock, it prints the current rate of the clock
* - 'T' task_struct->comm
- * - 'Z' Outputs a readable version of a type of memory management struct.
+ * - 'Z[v]' Outputs a readable version of a type of memory management struct:
+ * v struct vm_area_struct
*
* Note: The difference between 'S' and 'F' is that on ia64 and ppc64
* function pointers are really function descriptors, which contain a
diff --git a/mm/debug.c b/mm/debug.c
index decebcf..f5f7d47 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -186,20 +186,22 @@ static const struct trace_print_flags vmaflags_names[] = {
{VM_MERGEABLE, "mergeable" },
};

-void dump_vma(const struct vm_area_struct *vma)
+char *format_vma(const struct vm_area_struct *vma, char *buf, char *end)
{
- pr_emerg("vma %p start %p end %p\n"
- "next %p prev %p mm %p\n"
- "prot %lx anon_vma %p vm_ops %p\n"
- "pgoff %lx file %p private_data %p\n",
- vma, (void *)vma->vm_start, (void *)vma->vm_end, vma->vm_next,
- vma->vm_prev, vma->vm_mm,
- (unsigned long)pgprot_val(vma->vm_page_prot),
- vma->anon_vma, vma->vm_ops, vma->vm_pgoff,
- vma->vm_file, vma->vm_private_data);
- dump_flags(vma->vm_flags, vmaflags_names, ARRAY_SIZE(vmaflags_names));
+ buf += snprintf(buf, buf > end ? 0 : end - buf,
+ "vma %p start %p end %p\n"
+ "next %p prev %p mm %p\n"
+ "prot %lx anon_vma %p vm_ops %p\n"
+ "pgoff %lx file %p private_data %p\n",
+ vma, (void *)vma->vm_start, (void *)vma->vm_end, vma->vm_next,
+ vma->vm_prev, vma->vm_mm,
+ (unsigned long)pgprot_val(vma->vm_page_prot),
+ vma->anon_vma, vma->vm_ops, vma->vm_pgoff,
+ vma->vm_file, vma->vm_private_data);
+
+ return format_flags(vma->vm_flags, vmaflags_names, ARRAY_SIZE(vmaflags_names),
+ buf, end);
}
-EXPORT_SYMBOL(dump_vma);

void dump_mm(const struct mm_struct *mm)
{
--
1.7.10.4

2015-05-14 17:10:39

by Sasha Levin

Subject: [PATCH 04/11] mm: debug: dump struct MM into a string rather than directly on screen

This lets us use regular string formatting code to dump MMs, and also
use it in VM_BUG_ON_MM instead of printing directly to the screen.

Signed-off-by: Sasha Levin <[email protected]>
---
include/linux/mmdebug.h | 8 ++++++--
lib/vsprintf.c | 5 ++++-
mm/debug.c | 11 +++++++----
3 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 506e405..202ebdf 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -10,10 +10,10 @@ struct mm_struct;
extern void dump_page(struct page *page, const char *reason);
extern void dump_page_badflags(struct page *page, const char *reason,
unsigned long badflags);
-void dump_mm(const struct mm_struct *mm);

#ifdef CONFIG_DEBUG_VM
char *format_vma(const struct vm_area_struct *vma, char *buf, char *end);
+char *format_mm(const struct mm_struct *mm, char *buf, char *end);
#define VM_BUG_ON(cond) BUG_ON(cond)
#define VM_BUG_ON_PAGE(cond, page) \
do { \
@@ -32,7 +32,7 @@ char *format_vma(const struct vm_area_struct *vma, char *buf, char *end);
#define VM_BUG_ON_MM(cond, mm) \
do { \
if (unlikely(cond)) { \
- dump_mm(mm); \
+ pr_emerg("%pZm", mm); \
BUG(); \
} \
} while (0)
@@ -44,6 +44,10 @@ static char *format_vma(const struct vm_area_struct *vma, char *buf, char *end)
{
return buf;
}
+static char *format_mm(const struct mm_struct *mm, char *buf, char *end)
+{
+ return buf;
+}
#define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
#define VM_BUG_ON_PAGE(cond, page) VM_BUG_ON(cond)
#define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index ea11d513..595bf50 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -1380,6 +1380,8 @@ char *mm_pointer(char *buf, char *end, const void *ptr,
struct printf_spec spec, const char *fmt)
{
switch (fmt[1]) {
+ case 'm':
+ return format_mm(ptr, buf, end);
case 'v':
return format_vma(ptr, buf, end);
default:
@@ -1480,8 +1482,9 @@ int kptr_restrict __read_mostly;
* (legacy clock framework) of the clock
* - 'Cr' For a clock, it prints the current rate of the clock
* - 'T' task_struct->comm
- * - 'Z[v]' Outputs a readable version of a type of memory management struct:
+ * - 'Z[mv]' Outputs a readable version of a type of memory management struct:
* v struct vm_area_struct
+ * m struct mm_struct
*
* Note: The difference between 'S' and 'F' is that on ia64 and ppc64
* function pointers are really function descriptors, which contain a
diff --git a/mm/debug.c b/mm/debug.c
index f5f7d47..1ec246e 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -203,9 +203,10 @@ char *format_vma(const struct vm_area_struct *vma, char *buf, char *end)
buf, end);
}

-void dump_mm(const struct mm_struct *mm)
+char *format_mm(const struct mm_struct *mm, char *buf, char *end)
{
- pr_emerg("mm %p mmap %p seqnum %d task_size %lu\n"
+ buf += snprintf(buf, buf > end ? 0 : end - buf,
+ "mm %p mmap %p seqnum %d task_size %lu\n"
#ifdef CONFIG_MMU
"get_unmapped_area %p\n"
#endif
@@ -270,8 +271,10 @@ void dump_mm(const struct mm_struct *mm)
"" /* This is here to not have a comma! */
);

- dump_flags(mm->def_flags, vmaflags_names,
- ARRAY_SIZE(vmaflags_names));
+ buf = format_flags(mm->def_flags, vmaflags_names,
+ ARRAY_SIZE(vmaflags_names), buf, end);
+
+ return buf;
}

#endif /* CONFIG_DEBUG_VM */
--
1.7.10.4

2015-05-14 17:13:04

by Sasha Levin

Subject: [PATCH 05/11] mm: debug: dump page into a string rather than directly on screen

This lets us use regular string formatting code to dump pages, and also
use it in VM_BUG_ON_PAGE instead of printing directly to the screen.

Signed-off-by: Sasha Levin <[email protected]>
---
include/linux/mmdebug.h | 6 ++----
lib/vsprintf.c | 5 ++++-
mm/balloon_compaction.c | 4 ++--
mm/debug.c | 28 +++++++++++-----------------
mm/kasan/report.c | 2 +-
mm/memory.c | 2 +-
mm/memory_hotplug.c | 2 +-
mm/page_alloc.c | 2 +-
8 files changed, 23 insertions(+), 28 deletions(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 202ebdf..8b3f5a0 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -7,9 +7,7 @@ struct page;
struct vm_area_struct;
struct mm_struct;

-extern void dump_page(struct page *page, const char *reason);
-extern void dump_page_badflags(struct page *page, const char *reason,
- unsigned long badflags);
+char *format_page(struct page *page, char *buf, char *end);

#ifdef CONFIG_DEBUG_VM
char *format_vma(const struct vm_area_struct *vma, char *buf, char *end);
@@ -18,7 +16,7 @@ char *format_mm(const struct mm_struct *mm, char *buf, char *end);
#define VM_BUG_ON_PAGE(cond, page) \
do { \
if (unlikely(cond)) { \
- dump_page(page, "VM_BUG_ON_PAGE(" __stringify(cond)")");\
+ pr_emerg("%pZp", page); \
BUG(); \
} \
} while (0)
diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 595bf50..1f045ae 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -1382,6 +1382,8 @@ char *mm_pointer(char *buf, char *end, const void *ptr,
switch (fmt[1]) {
case 'm':
return format_mm(ptr, buf, end);
+ case 'p':
+ return format_page(ptr, buf, end);
case 'v':
return format_vma(ptr, buf, end);
default:
@@ -1482,9 +1484,10 @@ int kptr_restrict __read_mostly;
* (legacy clock framework) of the clock
* - 'Cr' For a clock, it prints the current rate of the clock
* - 'T' task_struct->comm
- * - 'Z[mv]' Outputs a readable version of a type of memory management struct:
+ * - 'Z[mpv]' Outputs a readable version of a type of memory management struct:
* v struct vm_area_struct
* m struct mm_struct
+ * p struct page
*
* Note: The difference between 'S' and 'F' is that on ia64 and ppc64
* function pointers are really function descriptors, which contain a
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index fcad832..88b3cae 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -187,7 +187,7 @@ void balloon_page_putback(struct page *page)
put_page(page);
} else {
WARN_ON(1);
- dump_page(page, "not movable balloon page");
+ pr_alert("Not movable balloon page:\n%pZp", page);
}
unlock_page(page);
}
@@ -207,7 +207,7 @@ int balloon_page_migrate(struct page *newpage,
BUG_ON(!trylock_page(newpage));

if (WARN_ON(!__is_movable_balloon_page(page))) {
- dump_page(page, "not movable balloon page");
+ pr_alert("Not movable balloon page:\n%pZp", page);
unlock_page(newpage);
return rc;
}
diff --git a/mm/debug.c b/mm/debug.c
index 1ec246e..44efbb5 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -115,32 +115,26 @@ static char *format_flags(unsigned long flags,
return buf;
}

-void dump_page_badflags(struct page *page, const char *reason,
- unsigned long badflags)
+char *format_page(struct page *page, char *buf, char *end)
{
- pr_emerg("page:%p count:%d mapcount:%d mapping:%p index:%#lx\n",
+ buf += snprintf(buf, (buf > end ? 0 : end - buf),
+ "page:%p count:%d mapcount:%d mapping:%p index:%#lx\n",
page, atomic_read(&page->_count), page_mapcount(page),
page->mapping, page->index);
+
BUILD_BUG_ON(ARRAY_SIZE(pageflag_names) != __NR_PAGEFLAGS);
- dump_flags(page->flags, pageflag_names, ARRAY_SIZE(pageflag_names));
- if (reason)
- pr_alert("page dumped because: %s\n", reason);
- if (page->flags & badflags) {
- pr_alert("bad because of flags:\n");
- dump_flags(page->flags & badflags,
- pageflag_names, ARRAY_SIZE(pageflag_names));
- }
+
+ buf = format_flags(page->flags, pageflag_names,
+ ARRAY_SIZE(pageflag_names), buf, end);
#ifdef CONFIG_MEMCG
if (page->mem_cgroup)
- pr_alert("page->mem_cgroup:%p\n", page->mem_cgroup);
+ buf += snprintf(buf, (buf > end ? 0 : end - buf),
+ "page->mem_cgroup:%p\n", page->mem_cgroup);
#endif
-}

-void dump_page(struct page *page, const char *reason)
-{
- dump_page_badflags(page, reason, 0);
+ return buf;
}
-EXPORT_SYMBOL(dump_page);
+EXPORT_SYMBOL(format_page);

#ifdef CONFIG_DEBUG_VM

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 680ceed..272a282 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -121,7 +121,7 @@ static void print_address_description(struct kasan_access_info *info)
"kasan: bad access detected");
return;
}
- dump_page(page, "kasan: bad access detected");
+ pr_emerg("kasan: bad access detected:\n%pZp", page);
}

if (kernel_or_module_addr(addr)) {
diff --git a/mm/memory.c b/mm/memory.c
index d1fa0c1..6e5d4bd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -683,7 +683,7 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
current->comm,
(long long)pte_val(pte), (long long)pmd_val(*pmd));
if (page)
- dump_page(page, "bad pte");
+ pr_alert("Bad pte:\n%pZp", page);
printk(KERN_ALERT
"addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
(void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c6a8d95..366fba0 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1431,7 +1431,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
#ifdef CONFIG_DEBUG_VM
printk(KERN_ALERT "removing pfn %lx from LRU failed\n",
pfn);
- dump_page(page, "failed to remove from LRU");
+ pr_alert("Failed to remove from LRU:\n%pZp", page);
#endif
put_page(page);
/* Because we don't have big zone->lock. we should
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5b7d6be..06577ec 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -403,7 +403,7 @@ static void bad_page(struct page *page, const char *reason,

printk(KERN_ALERT "BUG: Bad page state in process %s pfn:%05lx\n",
current->comm, page_to_pfn(page));
- dump_page_badflags(page, reason, bad_flags);
+ pr_alert("%s:\n%pZpBad flags: %lX", reason, page, bad_flags);

print_modules();
dump_stack();
--
1.7.10.4

2015-05-14 17:13:01

by Sasha Levin

Subject: [PATCH 06/11] mm: debug: clean unused code

Remove dump_flags(), which is no longer used.

Signed-off-by: Sasha Levin <[email protected]>
---
mm/debug.c | 30 ------------------------------
1 file changed, 30 deletions(-)

diff --git a/mm/debug.c b/mm/debug.c
index 44efbb5..3abea22 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -50,36 +50,6 @@ static const struct trace_print_flags pageflag_names[] = {
#endif
};

-static void dump_flags(unsigned long flags,
- const struct trace_print_flags *names, int count)
-{
- const char *delim = "";
- unsigned long mask;
- int i;
-
- pr_emerg("flags: %#lx(", flags);
-
- /* remove zone id */
- flags &= (1UL << NR_PAGEFLAGS) - 1;
-
- for (i = 0; i < count && flags; i++) {
-
- mask = names[i].mask;
- if ((flags & mask) != mask)
- continue;
-
- flags &= ~mask;
- pr_cont("%s%s", delim, names[i].name);
- delim = "|";
- }
-
- /* check for left over flags */
- if (flags)
- pr_cont("%s%#lx", delim, flags);
-
- pr_cont(")\n");
-}
-
static char *format_flags(unsigned long flags,
const struct trace_print_flags *names, int count,
char *buf, char *end)
--
1.7.10.4

2015-05-14 17:10:51

by Sasha Levin

Subject: [PATCH 07/11] mm: debug: VM_BUG()

VM_BUG() complements VM_BUG_ON() just as WARN() complements WARN_ON().

This lets us print a custom formatted string whenever a VM_BUG() check
fires.

Signed-off-by: Sasha Levin <[email protected]>
---
include/linux/mmdebug.h | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 8b3f5a0..42f41e3 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -12,7 +12,14 @@ char *format_page(struct page *page, char *buf, char *end);
#ifdef CONFIG_DEBUG_VM
char *format_vma(const struct vm_area_struct *vma, char *buf, char *end);
char *format_mm(const struct mm_struct *mm, char *buf, char *end);
-#define VM_BUG_ON(cond) BUG_ON(cond)
+#define VM_BUG(cond, fmt...) \
+ do { \
+ if (unlikely(cond)) { \
+ pr_emerg(fmt); \
+ BUG(); \
+ } \
+ } while (0)
+#define VM_BUG_ON(cond) VM_BUG(cond, "%s\n", __stringify(cond))
#define VM_BUG_ON_PAGE(cond, page) \
do { \
if (unlikely(cond)) { \
@@ -46,6 +53,7 @@ static char *format_mm(const struct mm_struct *mm, char *buf, char *end)
{
return buf;
}
+#define VM_BUG(cond, fmt...) BUILD_BUG_ON_INVALID(cond)
#define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
#define VM_BUG_ON_PAGE(cond, page) VM_BUG_ON(cond)
#define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
--
1.7.10.4

2015-05-14 17:11:43

by Sasha Levin

Subject: [PATCH 08/11] mm: debug: kill VM_BUG_ON_PAGE

Just use VM_BUG() instead.

Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/mm/gup.c | 8 +++----
include/linux/hugetlb.h | 2 +-
include/linux/hugetlb_cgroup.h | 4 ++--
include/linux/mm.h | 22 +++++++++---------
include/linux/mmdebug.h | 8 -------
include/linux/page-flags.h | 26 +++++++++++-----------
include/linux/pagemap.h | 11 ++++-----
mm/cleancache.c | 6 ++---
mm/compaction.c | 2 +-
mm/filemap.c | 18 +++++++--------
mm/gup.c | 6 ++---
mm/huge_memory.c | 38 +++++++++++++++----------------
mm/hugetlb.c | 14 ++++++------
mm/hugetlb_cgroup.c | 2 +-
mm/internal.h | 8 +++----
mm/ksm.c | 13 ++++++-----
mm/memcontrol.c | 48 ++++++++++++++++++++--------------------
mm/memory.c | 8 +++----
mm/migrate.c | 6 ++---
mm/mlock.c | 4 ++--
mm/page_alloc.c | 26 +++++++++++-----------
mm/page_io.c | 4 ++--
mm/rmap.c | 14 ++++++------
mm/shmem.c | 10 +++++----
mm/slub.c | 4 ++--
mm/swap.c | 39 ++++++++++++++++----------------
mm/swap_state.c | 16 +++++++-------
mm/swapfile.c | 8 +++----
mm/vmscan.c | 24 ++++++++++----------
29 files changed, 198 insertions(+), 201 deletions(-)

diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index 81bf3d2..b04ea9e 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -108,8 +108,8 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,

static inline void get_head_page_multiple(struct page *page, int nr)
{
- VM_BUG_ON_PAGE(page != compound_head(page), page);
- VM_BUG_ON_PAGE(page_count(page) == 0, page);
+ VM_BUG(page != compound_head(page), "%pZp", page);
+ VM_BUG(page_count(page) == 0, "%pZp", page);
atomic_add(nr, &page->_count);
SetPageReferenced(page);
}
@@ -135,7 +135,7 @@ static noinline int gup_huge_pmd(pmd_t pmd, unsigned long addr,
head = pte_page(pte);
page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
do {
- VM_BUG_ON_PAGE(compound_head(page) != head, page);
+ VM_BUG(compound_head(page) != head, "%pZp", page);
pages[*nr] = page;
if (PageTail(page))
get_huge_page_tail(page);
@@ -212,7 +212,7 @@ static noinline int gup_huge_pud(pud_t pud, unsigned long addr,
head = pte_page(pte);
page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
do {
- VM_BUG_ON_PAGE(compound_head(page) != head, page);
+ VM_BUG(compound_head(page) != head, "%pZp", page);
pages[*nr] = page;
if (PageTail(page))
get_huge_page_tail(page);
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 2050261..0da5cc4 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -415,7 +415,7 @@ static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,

static inline struct hstate *page_hstate(struct page *page)
{
- VM_BUG_ON_PAGE(!PageHuge(page), page);
+ VM_BUG(!PageHuge(page), "%pZp", page);
return size_to_hstate(PAGE_SIZE << compound_order(page));
}

diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index bcc853e..7cca841 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -28,7 +28,7 @@ struct hugetlb_cgroup;

static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
{
- VM_BUG_ON_PAGE(!PageHuge(page), page);
+ VM_BUG(!PageHuge(page), "%pZp", page);

if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
return NULL;
@@ -38,7 +38,7 @@ static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
static inline
int set_hugetlb_cgroup(struct page *page, struct hugetlb_cgroup *h_cg)
{
- VM_BUG_ON_PAGE(!PageHuge(page), page);
+ VM_BUG(!PageHuge(page), "%pZp", page);

if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
return -1;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index be9247c..3affbc8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -340,7 +340,7 @@ static inline int get_freepage_migratetype(struct page *page)
*/
static inline int put_page_testzero(struct page *page)
{
- VM_BUG_ON_PAGE(atomic_read(&page->_count) == 0, page);
+ VM_BUG(atomic_read(&page->_count) == 0, "%pZp", page);
return atomic_dec_and_test(&page->_count);
}

@@ -404,7 +404,7 @@ extern void kvfree(const void *addr);
static inline void compound_lock(struct page *page)
{
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- VM_BUG_ON_PAGE(PageSlab(page), page);
+ VM_BUG(PageSlab(page), "%pZp", page);
bit_spin_lock(PG_compound_lock, &page->flags);
#endif
}
@@ -412,7 +412,7 @@ static inline void compound_lock(struct page *page)
static inline void compound_unlock(struct page *page)
{
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- VM_BUG_ON_PAGE(PageSlab(page), page);
+ VM_BUG(PageSlab(page), "%pZp", page);
bit_spin_unlock(PG_compound_lock, &page->flags);
#endif
}
@@ -448,7 +448,7 @@ static inline void page_mapcount_reset(struct page *page)

static inline int page_mapcount(struct page *page)
{
- VM_BUG_ON_PAGE(PageSlab(page), page);
+ VM_BUG(PageSlab(page), "%pZp", page);
return atomic_read(&page->_mapcount) + 1;
}

@@ -472,7 +472,7 @@ static inline bool __compound_tail_refcounted(struct page *page)
*/
static inline bool compound_tail_refcounted(struct page *page)
{
- VM_BUG_ON_PAGE(!PageHead(page), page);
+ VM_BUG(!PageHead(page), "%pZp", page);
return __compound_tail_refcounted(page);
}

@@ -481,9 +481,9 @@ static inline void get_huge_page_tail(struct page *page)
/*
* __split_huge_page_refcount() cannot run from under us.
*/
- VM_BUG_ON_PAGE(!PageTail(page), page);
- VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
- VM_BUG_ON_PAGE(atomic_read(&page->_count) != 0, page);
+ VM_BUG(!PageTail(page), "%pZp", page);
+ VM_BUG(page_mapcount(page) < 0, "%pZp", page);
+ VM_BUG(atomic_read(&page->_count) != 0, "%pZp", page);
if (compound_tail_refcounted(page->first_page))
atomic_inc(&page->_mapcount);
}
@@ -499,7 +499,7 @@ static inline void get_page(struct page *page)
* Getting a normal page or the head of a compound page
* requires to already have an elevated page->_count.
*/
- VM_BUG_ON_PAGE(atomic_read(&page->_count) <= 0, page);
+ VM_BUG(atomic_read(&page->_count) <= 0, "%pZp", page);
atomic_inc(&page->_count);
}

@@ -1441,7 +1441,7 @@ static inline bool ptlock_init(struct page *page)
* slab code uses page->slab_cache and page->first_page (for tail
* pages), which share storage with page->ptl.
*/
- VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
+ VM_BUG(*(unsigned long *)&page->ptl, "%pZp", page);
if (!ptlock_alloc(page))
return false;
spin_lock_init(ptlock_ptr(page));
@@ -1538,7 +1538,7 @@ static inline bool pgtable_pmd_page_ctor(struct page *page)
static inline void pgtable_pmd_page_dtor(struct page *page)
{
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- VM_BUG_ON_PAGE(page->pmd_huge_pte, page);
+ VM_BUG(page->pmd_huge_pte, "%pZp", page);
#endif
ptlock_free(page);
}
diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 42f41e3..f43f868 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -20,13 +20,6 @@ char *format_mm(const struct mm_struct *mm, char *buf, char *end);
} \
} while (0)
#define VM_BUG_ON(cond) VM_BUG(cond, "%s\n", __stringify(cond))
-#define VM_BUG_ON_PAGE(cond, page) \
- do { \
- if (unlikely(cond)) { \
- pr_emerg("%pZp", page); \
- BUG(); \
- } \
- } while (0)
#define VM_BUG_ON_VMA(cond, vma) \
do { \
if (unlikely(cond)) { \
@@ -55,7 +48,6 @@ static char *format_mm(const struct mm_struct *mm, char *buf, char *end)
}
#define VM_BUG(cond, fmt...) BUILD_BUG_ON_INVALID(cond)
#define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
-#define VM_BUG_ON_PAGE(cond, page) VM_BUG_ON(cond)
#define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
#define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
#define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 91b7f9b..f1a18ad 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -139,13 +139,13 @@ enum pageflags {
#define PF_HEAD(page, enforce) compound_head(page)
#define PF_NO_TAIL(page, enforce) ({ \
if (enforce) \
- VM_BUG_ON_PAGE(PageTail(page), page); \
+ VM_BUG(PageTail(page), "%pZp", page); \
else \
page = compound_head(page); \
page;})
#define PF_NO_COMPOUND(page, enforce) ({ \
if (enforce) \
- VM_BUG_ON_PAGE(PageCompound(page), page); \
+ VM_BUG(PageCompound(page), "%pZp", page); \
page;})

/*
@@ -429,14 +429,14 @@ static inline int PageUptodate(struct page *page)

static inline void __SetPageUptodate(struct page *page)
{
- VM_BUG_ON_PAGE(PageTail(page), page);
+ VM_BUG(PageTail(page), "%pZp", page);
smp_wmb();
__set_bit(PG_uptodate, &page->flags);
}

static inline void SetPageUptodate(struct page *page)
{
- VM_BUG_ON_PAGE(PageTail(page), page);
+ VM_BUG(PageTail(page), "%pZp", page);
/*
* Memory barrier must be issued before setting the PG_uptodate bit,
* so that all previous stores issued in order to bring the page
@@ -572,7 +572,7 @@ static inline bool page_huge_active(struct page *page)
*/
static inline int PageTransHuge(struct page *page)
{
- VM_BUG_ON_PAGE(PageTail(page), page);
+ VM_BUG(PageTail(page), "%pZp", page);
return PageHead(page);
}

@@ -620,13 +620,13 @@ static inline int PageBuddy(struct page *page)

static inline void __SetPageBuddy(struct page *page)
{
- VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
+ VM_BUG(atomic_read(&page->_mapcount) != -1, "%pZp", page);
atomic_set(&page->_mapcount, PAGE_BUDDY_MAPCOUNT_VALUE);
}

static inline void __ClearPageBuddy(struct page *page)
{
- VM_BUG_ON_PAGE(!PageBuddy(page), page);
+ VM_BUG(!PageBuddy(page), "%pZp", page);
atomic_set(&page->_mapcount, -1);
}

@@ -639,13 +639,13 @@ static inline int PageBalloon(struct page *page)

static inline void __SetPageBalloon(struct page *page)
{
- VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
+ VM_BUG(atomic_read(&page->_mapcount) != -1, "%pZp", page);
atomic_set(&page->_mapcount, PAGE_BALLOON_MAPCOUNT_VALUE);
}

static inline void __ClearPageBalloon(struct page *page)
{
- VM_BUG_ON_PAGE(!PageBalloon(page), page);
+ VM_BUG(!PageBalloon(page), "%pZp", page);
atomic_set(&page->_mapcount, -1);
}

@@ -655,25 +655,25 @@ static inline void __ClearPageBalloon(struct page *page)
*/
static inline int PageSlabPfmemalloc(struct page *page)
{
- VM_BUG_ON_PAGE(!PageSlab(page), page);
+ VM_BUG(!PageSlab(page), "%pZp", page);
return PageActive(page);
}

static inline void SetPageSlabPfmemalloc(struct page *page)
{
- VM_BUG_ON_PAGE(!PageSlab(page), page);
+ VM_BUG(!PageSlab(page), "%pZp", page);
SetPageActive(page);
}

static inline void __ClearPageSlabPfmemalloc(struct page *page)
{
- VM_BUG_ON_PAGE(!PageSlab(page), page);
+ VM_BUG(!PageSlab(page), "%pZp", page);
__ClearPageActive(page);
}

static inline void ClearPageSlabPfmemalloc(struct page *page)
{
- VM_BUG_ON_PAGE(!PageSlab(page), page);
+ VM_BUG(!PageSlab(page), "%pZp", page);
ClearPageActive(page);
}

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 7c37907..fa9ba8b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -157,7 +157,7 @@ static inline int page_cache_get_speculative(struct page *page)
* disabling preempt, and hence no need for the "speculative get" that
* SMP requires.
*/
- VM_BUG_ON_PAGE(page_count(page) == 0, page);
+ VM_BUG(page_count(page) == 0, "%pZp", page);
atomic_inc(&page->_count);

#else
@@ -170,7 +170,7 @@ static inline int page_cache_get_speculative(struct page *page)
return 0;
}
#endif
- VM_BUG_ON_PAGE(PageTail(page), page);
+ VM_BUG(PageTail(page), "%pZp", page);

return 1;
}
@@ -186,14 +186,15 @@ static inline int page_cache_add_speculative(struct page *page, int count)
# ifdef CONFIG_PREEMPT_COUNT
VM_BUG_ON(!in_atomic());
# endif
- VM_BUG_ON_PAGE(page_count(page) == 0, page);
+ VM_BUG(page_count(page) == 0, "%pZp", page);
atomic_add(count, &page->_count);

#else
if (unlikely(!atomic_add_unless(&page->_count, count, 0)))
return 0;
#endif
- VM_BUG_ON_PAGE(PageCompound(page) && page != compound_head(page), page);
+ VM_BUG(PageCompound(page) && page != compound_head(page), "%pZp",
+ page);

return 1;
}
@@ -205,7 +206,7 @@ static inline int page_freeze_refs(struct page *page, int count)

static inline void page_unfreeze_refs(struct page *page, int count)
{
- VM_BUG_ON_PAGE(page_count(page) != 0, page);
+ VM_BUG(page_count(page) != 0, "%pZp", page);
VM_BUG_ON(count == 0);

atomic_set(&page->_count, count);
diff --git a/mm/cleancache.c b/mm/cleancache.c
index 8fc5081..d4d5ce0 100644
--- a/mm/cleancache.c
+++ b/mm/cleancache.c
@@ -185,7 +185,7 @@ int __cleancache_get_page(struct page *page)
goto out;
}

- VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
pool_id = page->mapping->host->i_sb->cleancache_poolid;
if (pool_id < 0)
goto out;
@@ -223,7 +223,7 @@ void __cleancache_put_page(struct page *page)
return;
}

- VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
pool_id = page->mapping->host->i_sb->cleancache_poolid;
if (pool_id >= 0 &&
cleancache_get_key(page->mapping->host, &key) >= 0) {
@@ -252,7 +252,7 @@ void __cleancache_invalidate_page(struct address_space *mapping,
return;

if (pool_id >= 0) {
- VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
if (cleancache_get_key(mapping->host, &key) >= 0) {
cleancache_ops->invalidate_page(pool_id,
key, page->index);
diff --git a/mm/compaction.c b/mm/compaction.c
index 6ef2fdf..170bf6c 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -779,7 +779,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
if (__isolate_lru_page(page, isolate_mode) != 0)
continue;

- VM_BUG_ON_PAGE(PageCompound(page), page);
+ VM_BUG(PageCompound(page), "%pZp", page);

/* Successfully isolated */
del_page_from_lru_list(page, lruvec, page_lru(page));
diff --git a/mm/filemap.c b/mm/filemap.c
index 6ad0a80..ec1ab0aa 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -462,9 +462,9 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
{
int error;

- VM_BUG_ON_PAGE(!PageLocked(old), old);
- VM_BUG_ON_PAGE(!PageLocked(new), new);
- VM_BUG_ON_PAGE(new->mapping, new);
+ VM_BUG(!PageLocked(old), "%pZp", old);
+ VM_BUG(!PageLocked(new), "%pZp", new);
+ VM_BUG(new->mapping, "%pZp", new);

error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
if (!error) {
@@ -549,8 +549,8 @@ static int __add_to_page_cache_locked(struct page *page,
struct mem_cgroup *memcg;
int error;

- VM_BUG_ON_PAGE(!PageLocked(page), page);
- VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
+ VM_BUG(PageSwapBacked(page), "%pZp", page);

if (!huge) {
error = mem_cgroup_try_charge(page, current->mm,
@@ -743,7 +743,7 @@ EXPORT_SYMBOL_GPL(add_page_wait_queue);
void unlock_page(struct page *page)
{
page = compound_head(page);
- VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
clear_bit_unlock(PG_locked, &page->flags);
smp_mb__after_atomic();
wake_up_page(page, PG_locked);
@@ -1036,7 +1036,7 @@ repeat:
page_cache_release(page);
goto repeat;
}
- VM_BUG_ON_PAGE(page->index != offset, page);
+ VM_BUG(page->index != offset, "%pZp", page);
}
return page;
}
@@ -1093,7 +1093,7 @@ repeat:
page_cache_release(page);
goto repeat;
}
- VM_BUG_ON_PAGE(page->index != offset, page);
+ VM_BUG(page->index != offset, "%pZp", page);
}

if (page && (fgp_flags & FGP_ACCESSED))
@@ -1914,7 +1914,7 @@ retry_find:
put_page(page);
goto retry_find;
}
- VM_BUG_ON_PAGE(page->index != offset, page);
+ VM_BUG(page->index != offset, "%pZp", page);

/*
* We have a locked page in the page cache, now we need to check
diff --git a/mm/gup.c b/mm/gup.c
index 6297f6b..743648e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1084,7 +1084,7 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
tail = page;
do {
- VM_BUG_ON_PAGE(compound_head(page) != head, page);
+ VM_BUG(compound_head(page) != head, "%pZp", page);
pages[*nr] = page;
(*nr)++;
page++;
@@ -1131,7 +1131,7 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
tail = page;
do {
- VM_BUG_ON_PAGE(compound_head(page) != head, page);
+ VM_BUG(compound_head(page) != head, "%pZp", page);
pages[*nr] = page;
(*nr)++;
page++;
@@ -1174,7 +1174,7 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
page = head + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT);
tail = page;
do {
- VM_BUG_ON_PAGE(compound_head(page) != head, page);
+ VM_BUG(compound_head(page) != head, "%pZp", page);
pages[*nr] = page;
(*nr)++;
page++;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e103a9a..82ccd2c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -723,7 +723,7 @@ static int __do_huge_pmd_anonymous_page(struct mm_struct *mm,
pgtable_t pgtable;
spinlock_t *ptl;

- VM_BUG_ON_PAGE(!PageCompound(page), page);
+ VM_BUG(!PageCompound(page), "%pZp", page);

if (mem_cgroup_try_charge(page, mm, gfp, &memcg))
return VM_FAULT_OOM;
@@ -897,7 +897,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
goto out;
}
src_page = pmd_page(pmd);
- VM_BUG_ON_PAGE(!PageHead(src_page), src_page);
+ VM_BUG(!PageHead(src_page), "%pZp", src_page);
get_page(src_page);
page_dup_rmap(src_page);
add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
@@ -1029,7 +1029,7 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
ptl = pmd_lock(mm, pmd);
if (unlikely(!pmd_same(*pmd, orig_pmd)))
goto out_free_pages;
- VM_BUG_ON_PAGE(!PageHead(page), page);
+ VM_BUG(!PageHead(page), "%pZp", page);

pmdp_clear_flush_notify(vma, haddr, pmd);
/* leave pmd empty until pte is filled */
@@ -1101,7 +1101,7 @@ int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
goto out_unlock;

page = pmd_page(orig_pmd);
- VM_BUG_ON_PAGE(!PageCompound(page) || !PageHead(page), page);
+ VM_BUG(!PageCompound(page) || !PageHead(page), "%pZp", page);
if (page_mapcount(page) == 1) {
pmd_t entry;
entry = pmd_mkyoung(orig_pmd);
@@ -1184,7 +1184,7 @@ alloc:
add_mm_counter(mm, MM_ANONPAGES, HPAGE_PMD_NR);
put_huge_zero_page();
} else {
- VM_BUG_ON_PAGE(!PageHead(page), page);
+ VM_BUG(!PageHead(page), "%pZp", page);
page_remove_rmap(page);
put_page(page);
}
@@ -1222,7 +1222,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
goto out;

page = pmd_page(*pmd);
- VM_BUG_ON_PAGE(!PageHead(page), page);
+ VM_BUG(!PageHead(page), "%pZp", page);
if (flags & FOLL_TOUCH) {
pmd_t _pmd;
/*
@@ -1247,7 +1247,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
}
}
page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
- VM_BUG_ON_PAGE(!PageCompound(page), page);
+ VM_BUG(!PageCompound(page), "%pZp", page);
if (flags & FOLL_GET)
get_page_foll(page);

@@ -1400,7 +1400,7 @@ int madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,

/* No hugepage in swapcache */
page = pmd_page(orig_pmd);
- VM_BUG_ON_PAGE(PageSwapCache(page), page);
+ VM_BUG(PageSwapCache(page), "%pZp", page);

orig_pmd = pmd_mkold(orig_pmd);
orig_pmd = pmd_mkclean(orig_pmd);
@@ -1441,9 +1441,9 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
} else {
page = pmd_page(orig_pmd);
page_remove_rmap(page);
- VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
+ VM_BUG(page_mapcount(page) < 0, "%pZp", page);
add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
- VM_BUG_ON_PAGE(!PageHead(page), page);
+ VM_BUG(!PageHead(page), "%pZp", page);
atomic_long_dec(&tlb->mm->nr_ptes);
spin_unlock(ptl);
tlb_remove_page(tlb, page);
@@ -2189,9 +2189,9 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
if (unlikely(!page))
goto out;

- VM_BUG_ON_PAGE(PageCompound(page), page);
- VM_BUG_ON_PAGE(!PageAnon(page), page);
- VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
+ VM_BUG(PageCompound(page), "%pZp", page);
+ VM_BUG(!PageAnon(page), "%pZp", page);
+ VM_BUG(!PageSwapBacked(page), "%pZp", page);

/*
* We can do it before isolate_lru_page because the
@@ -2234,8 +2234,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
}
/* 0 stands for page_is_file_cache(page) == false */
inc_zone_page_state(page, NR_ISOLATED_ANON + 0);
- VM_BUG_ON_PAGE(!PageLocked(page), page);
- VM_BUG_ON_PAGE(PageLRU(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
+ VM_BUG(PageLRU(page), "%pZp", page);

/* If there is no mapped pte young don't collapse the page */
if (pte_young(pteval) || PageReferenced(page) ||
@@ -2277,7 +2277,7 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
} else {
src_page = pte_page(pteval);
copy_user_highpage(page, src_page, address, vma);
- VM_BUG_ON_PAGE(page_mapcount(src_page) != 1, src_page);
+ VM_BUG(page_mapcount(src_page) != 1, "%pZp", src_page);
release_pte_page(src_page);
/*
* ptl mostly unnecessary, but preempt has to
@@ -2380,7 +2380,7 @@ khugepaged_alloc_page(struct page **hpage, gfp_t gfp, struct mm_struct *mm,
struct vm_area_struct *vma, unsigned long address,
int node)
{
- VM_BUG_ON_PAGE(*hpage, *hpage);
+ VM_BUG(*hpage, "%pZp", *hpage);

/*
* Before allocating the hugepage, release the mmap_sem read lock.
@@ -2654,7 +2654,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
if (khugepaged_scan_abort(node))
goto out_unmap;
khugepaged_node_load[node]++;
- VM_BUG_ON_PAGE(PageCompound(page), page);
+ VM_BUG(PageCompound(page), "%pZp", page);
if (!PageLRU(page) || PageLocked(page) || !PageAnon(page))
goto out_unmap;
/*
@@ -2952,7 +2952,7 @@ again:
return;
}
page = pmd_page(*pmd);
- VM_BUG_ON_PAGE(!page_count(page), page);
+ VM_BUG(!page_count(page), "%pZp", page);
get_page(page);
spin_unlock(ptl);
mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 716465a..55c75da 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -901,7 +901,7 @@ static void update_and_free_page(struct hstate *h, struct page *page)
1 << PG_active | 1 << PG_private |
1 << PG_writeback);
}
- VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
+ VM_BUG(hugetlb_cgroup_from_page(page), "%pZp", page);
set_compound_page_dtor(page, NULL);
set_page_refcounted(page);
if (hstate_is_gigantic(h)) {
@@ -932,20 +932,20 @@ struct hstate *size_to_hstate(unsigned long size)
*/
bool page_huge_active(struct page *page)
{
- VM_BUG_ON_PAGE(!PageHuge(page), page);
+ VM_BUG(!PageHuge(page), "%pZp", page);
return PageHead(page) && PagePrivate(&page[1]);
}

/* never called for tail page */
static void set_page_huge_active(struct page *page)
{
- VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
+ VM_BUG(!PageHeadHuge(page), "%pZp", page);
SetPagePrivate(&page[1]);
}

static void clear_page_huge_active(struct page *page)
{
- VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
+ VM_BUG(!PageHeadHuge(page), "%pZp", page);
ClearPagePrivate(&page[1]);
}

@@ -1373,7 +1373,7 @@ retry:
* no users -- drop the buddy allocator's reference.
*/
put_page_testzero(page);
- VM_BUG_ON_PAGE(page_count(page), page);
+ VM_BUG(page_count(page), "%pZp", page);
enqueue_huge_page(h, page);
}
free:
@@ -3938,7 +3938,7 @@ bool isolate_huge_page(struct page *page, struct list_head *list)
{
bool ret = true;

- VM_BUG_ON_PAGE(!PageHead(page), page);
+ VM_BUG(!PageHead(page), "%pZp", page);
spin_lock(&hugetlb_lock);
if (!page_huge_active(page) || !get_page_unless_zero(page)) {
ret = false;
@@ -3953,7 +3953,7 @@ unlock:

void putback_active_hugepage(struct page *page)
{
- VM_BUG_ON_PAGE(!PageHead(page), page);
+ VM_BUG(!PageHead(page), "%pZp", page);
spin_lock(&hugetlb_lock);
set_page_huge_active(page);
list_move_tail(&page->lru, &(page_hstate(page))->hugepage_activelist);
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 6e00574..9df90f5 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -403,7 +403,7 @@ void hugetlb_cgroup_migrate(struct page *oldhpage, struct page *newhpage)
if (hugetlb_cgroup_disabled())
return;

- VM_BUG_ON_PAGE(!PageHuge(oldhpage), oldhpage);
+ VM_BUG(!PageHuge(oldhpage), "%pZp", oldhpage);
spin_lock(&hugetlb_lock);
h_cg = hugetlb_cgroup_from_page(oldhpage);
set_hugetlb_cgroup(oldhpage, NULL);
diff --git a/mm/internal.h b/mm/internal.h
index a48cbef..b7d9a96 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -42,8 +42,8 @@ static inline unsigned long ra_submit(struct file_ra_state *ra,
*/
static inline void set_page_refcounted(struct page *page)
{
- VM_BUG_ON_PAGE(PageTail(page), page);
- VM_BUG_ON_PAGE(atomic_read(&page->_count), page);
+ VM_BUG(PageTail(page), "%pZp", page);
+ VM_BUG(atomic_read(&page->_count), "%pZp", page);
set_page_count(page, 1);
}

@@ -61,7 +61,7 @@ static inline void __get_page_tail_foll(struct page *page,
* speculative page access (like in
* page_cache_get_speculative()) on tail pages.
*/
- VM_BUG_ON_PAGE(atomic_read(&page->first_page->_count) <= 0, page);
+ VM_BUG(atomic_read(&page->first_page->_count) <= 0, "%pZp", page);
if (get_page_head)
atomic_inc(&page->first_page->_count);
get_huge_page_tail(page);
@@ -86,7 +86,7 @@ static inline void get_page_foll(struct page *page)
* Getting a normal page or the head of a compound page
* requires to already have an elevated page->_count.
*/
- VM_BUG_ON_PAGE(atomic_read(&page->_count) <= 0, page);
+ VM_BUG(atomic_read(&page->_count) <= 0, "%pZp", page);
atomic_inc(&page->_count);
}
}
diff --git a/mm/ksm.c b/mm/ksm.c
index bc7be0e..040185f 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1897,13 +1897,13 @@ int rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
int ret = SWAP_AGAIN;
int search_new_forks = 0;

- VM_BUG_ON_PAGE(!PageKsm(page), page);
+ VM_BUG(!PageKsm(page), "%pZp", page);

/*
* Rely on the page lock to protect against concurrent modifications
* to that page's node of the stable tree.
*/
- VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);

stable_node = page_stable_node(page);
if (!stable_node)
@@ -1957,13 +1957,14 @@ void ksm_migrate_page(struct page *newpage, struct page *oldpage)
{
struct stable_node *stable_node;

- VM_BUG_ON_PAGE(!PageLocked(oldpage), oldpage);
- VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
- VM_BUG_ON_PAGE(newpage->mapping != oldpage->mapping, newpage);
+ VM_BUG(!PageLocked(oldpage), "%pZp", oldpage);
+ VM_BUG(!PageLocked(newpage), "%pZp", newpage);
+ VM_BUG(newpage->mapping != oldpage->mapping, "%pZp", newpage);

stable_node = page_stable_node(newpage);
if (stable_node) {
- VM_BUG_ON_PAGE(stable_node->kpfn != page_to_pfn(oldpage), oldpage);
+ VM_BUG(stable_node->kpfn != page_to_pfn(oldpage), "%pZp",
+ oldpage);
stable_node->kpfn = page_to_pfn(newpage);
/*
* newpage->mapping was set in advance; now we need smp_wmb()
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 14c2f20..6ae7c39 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2365,7 +2365,7 @@ struct mem_cgroup *try_get_mem_cgroup_from_page(struct page *page)
unsigned short id;
swp_entry_t ent;

- VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);

memcg = page->mem_cgroup;
if (memcg) {
@@ -2407,7 +2407,7 @@ static void unlock_page_lru(struct page *page, int isolated)
struct lruvec *lruvec;

lruvec = mem_cgroup_page_lruvec(page, zone);
- VM_BUG_ON_PAGE(PageLRU(page), page);
+ VM_BUG(PageLRU(page), "%pZp", page);
SetPageLRU(page);
add_page_to_lru_list(page, lruvec, page_lru(page));
}
@@ -2419,7 +2419,7 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
{
int isolated;

- VM_BUG_ON_PAGE(page->mem_cgroup, page);
+ VM_BUG(page->mem_cgroup, "%pZp", page);

/*
* In some cases, SwapCache and FUSE(splice_buf->radixtree), the page
@@ -2726,7 +2726,7 @@ void __memcg_kmem_uncharge_pages(struct page *page, int order)
if (!memcg)
return;

- VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
+ VM_BUG(mem_cgroup_is_root(memcg), "%pZp", page);

memcg_uncharge_kmem(memcg, 1 << order);
page->mem_cgroup = NULL;
@@ -4748,7 +4748,7 @@ static int mem_cgroup_move_account(struct page *page,
int ret;

VM_BUG_ON(from == to);
- VM_BUG_ON_PAGE(PageLRU(page), page);
+ VM_BUG(PageLRU(page), "%pZp", page);
/*
* The page is isolated from LRU. So, collapse function
* will not handle this page. But page splitting can happen.
@@ -4864,7 +4864,7 @@ static enum mc_target_type get_mctgt_type_thp(struct vm_area_struct *vma,
enum mc_target_type ret = MC_TARGET_NONE;

page = pmd_page(pmd);
- VM_BUG_ON_PAGE(!page || !PageHead(page), page);
+ VM_BUG(!page || !PageHead(page), "%pZp", page);
if (!(mc.flags & MOVE_ANON))
return ret;
if (page->mem_cgroup == mc.from) {
@@ -5479,7 +5479,7 @@ int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,

if (PageTransHuge(page)) {
nr_pages <<= compound_order(page);
- VM_BUG_ON_PAGE(!PageTransHuge(page), page);
+ VM_BUG(!PageTransHuge(page), "%pZp", page);
}

if (do_swap_account && PageSwapCache(page))
@@ -5521,8 +5521,8 @@ void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
{
unsigned int nr_pages = 1;

- VM_BUG_ON_PAGE(!page->mapping, page);
- VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page);
+ VM_BUG(!page->mapping, "%pZp", page);
+ VM_BUG(PageLRU(page) && !lrucare, "%pZp", page);

if (mem_cgroup_disabled())
return;
@@ -5538,7 +5538,7 @@ void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,

if (PageTransHuge(page)) {
nr_pages <<= compound_order(page);
- VM_BUG_ON_PAGE(!PageTransHuge(page), page);
+ VM_BUG(!PageTransHuge(page), "%pZp", page);
}

local_irq_disable();
@@ -5580,7 +5580,7 @@ void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg)

if (PageTransHuge(page)) {
nr_pages <<= compound_order(page);
- VM_BUG_ON_PAGE(!PageTransHuge(page), page);
+ VM_BUG(!PageTransHuge(page), "%pZp", page);
}

cancel_charge(memcg, nr_pages);
@@ -5630,8 +5630,8 @@ static void uncharge_list(struct list_head *page_list)
page = list_entry(next, struct page, lru);
next = page->lru.next;

- VM_BUG_ON_PAGE(PageLRU(page), page);
- VM_BUG_ON_PAGE(page_count(page), page);
+ VM_BUG(PageLRU(page), "%pZp", page);
+ VM_BUG(page_count(page), "%pZp", page);

if (!page->mem_cgroup)
continue;
@@ -5653,7 +5653,7 @@ static void uncharge_list(struct list_head *page_list)

if (PageTransHuge(page)) {
nr_pages <<= compound_order(page);
- VM_BUG_ON_PAGE(!PageTransHuge(page), page);
+ VM_BUG(!PageTransHuge(page), "%pZp", page);
nr_huge += nr_pages;
}

@@ -5724,13 +5724,13 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage,
struct mem_cgroup *memcg;
int isolated;

- VM_BUG_ON_PAGE(!PageLocked(oldpage), oldpage);
- VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
- VM_BUG_ON_PAGE(!lrucare && PageLRU(oldpage), oldpage);
- VM_BUG_ON_PAGE(!lrucare && PageLRU(newpage), newpage);
- VM_BUG_ON_PAGE(PageAnon(oldpage) != PageAnon(newpage), newpage);
- VM_BUG_ON_PAGE(PageTransHuge(oldpage) != PageTransHuge(newpage),
- newpage);
+ VM_BUG(!PageLocked(oldpage), "%pZp", oldpage);
+ VM_BUG(!PageLocked(newpage), "%pZp", newpage);
+ VM_BUG(!lrucare && PageLRU(oldpage), "%pZp", oldpage);
+ VM_BUG(!lrucare && PageLRU(newpage), "%pZp", newpage);
+ VM_BUG(PageAnon(oldpage) != PageAnon(newpage), "%pZp", newpage);
+ VM_BUG(PageTransHuge(oldpage) != PageTransHuge(newpage), "%pZp",
+ newpage);

if (mem_cgroup_disabled())
return;
@@ -5812,8 +5812,8 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
struct mem_cgroup *memcg;
unsigned short oldid;

- VM_BUG_ON_PAGE(PageLRU(page), page);
- VM_BUG_ON_PAGE(page_count(page), page);
+ VM_BUG(PageLRU(page), "%pZp", page);
+ VM_BUG(page_count(page), "%pZp", page);

if (!do_swap_account)
return;
@@ -5825,7 +5825,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
return;

oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg));
- VM_BUG_ON_PAGE(oldid, page);
+ VM_BUG(oldid, "%pZp", page);
mem_cgroup_swap_statistics(memcg, true);

page->mem_cgroup = NULL;
diff --git a/mm/memory.c b/mm/memory.c
index 6e5d4bd..dd509d9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -302,7 +302,7 @@ int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
return 0;
batch = tlb->active;
}
- VM_BUG_ON_PAGE(batch->nr > batch->max, page);
+ VM_BUG(batch->nr > batch->max, "%pZp", page);

return batch->max - batch->nr;
}
@@ -1977,7 +1977,7 @@ static int do_page_mkwrite(struct vm_area_struct *vma, struct page *page,
}
ret |= VM_FAULT_LOCKED;
} else
- VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
return ret;
}

@@ -2020,7 +2020,7 @@ static inline int wp_page_reuse(struct mm_struct *mm,
lock_page(page);

dirtied = set_page_dirty(page);
- VM_BUG_ON_PAGE(PageAnon(page), page);
+ VM_BUG(PageAnon(page), "%pZp", page);
mapping = page->mapping;
unlock_page(page);
page_cache_release(page);
@@ -2763,7 +2763,7 @@ static int __do_fault(struct vm_area_struct *vma, unsigned long address,
if (unlikely(!(ret & VM_FAULT_LOCKED)))
lock_page(vmf.page);
else
- VM_BUG_ON_PAGE(!PageLocked(vmf.page), vmf.page);
+ VM_BUG(!PageLocked(vmf.page), "%pZp", vmf.page);

out:
*page = vmf.page;
diff --git a/mm/migrate.c b/mm/migrate.c
index 022adc2..2693888 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -500,7 +500,7 @@ void migrate_page_copy(struct page *newpage, struct page *page)
if (PageUptodate(page))
SetPageUptodate(newpage);
if (TestClearPageActive(page)) {
- VM_BUG_ON_PAGE(PageUnevictable(page), page);
+ VM_BUG(PageUnevictable(page), "%pZp", page);
SetPageActive(newpage);
} else if (TestClearPageUnevictable(page))
SetPageUnevictable(newpage);
@@ -869,7 +869,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
* free the metadata, so the page can be freed.
*/
if (!page->mapping) {
- VM_BUG_ON_PAGE(PageAnon(page), page);
+ VM_BUG(PageAnon(page), "%pZp", page);
if (page_has_private(page)) {
try_to_free_buffers(page);
goto out_unlock;
@@ -1606,7 +1606,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
{
int page_lru;

- VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
+ VM_BUG(compound_order(page) && !PageTransHuge(page), "%pZp", page);

/* Avoid migrating to a node that is nearly full */
if (!migrate_balanced_pgdat(pgdat, 1UL << compound_order(page)))
diff --git a/mm/mlock.c b/mm/mlock.c
index 6fd2cf1..54269cd 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -232,8 +232,8 @@ static int __mlock_posix_error_return(long retval)
static bool __putback_lru_fast_prepare(struct page *page, struct pagevec *pvec,
int *pgrescued)
{
- VM_BUG_ON_PAGE(PageLRU(page), page);
- VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG(PageLRU(page), "%pZp", page);
+ VM_BUG(!PageLocked(page), "%pZp", page);

if (page_mapcount(page) <= 1 && page_evictable(page)) {
pagevec_add(pvec, page);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 06577ec..4d3668f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -596,7 +596,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
if (page_zone_id(page) != page_zone_id(buddy))
return 0;

- VM_BUG_ON_PAGE(page_count(buddy) != 0, buddy);
+ VM_BUG(page_count(buddy) != 0, "%pZp", buddy);

return 1;
}
@@ -610,7 +610,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
if (page_zone_id(page) != page_zone_id(buddy))
return 0;

- VM_BUG_ON_PAGE(page_count(buddy) != 0, buddy);
+ VM_BUG(page_count(buddy) != 0, "%pZp", buddy);

return 1;
}
@@ -654,7 +654,7 @@ static inline void __free_one_page(struct page *page,
int max_order = MAX_ORDER;

VM_BUG_ON(!zone_is_initialized(zone));
- VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
+ VM_BUG(page->flags & PAGE_FLAGS_CHECK_AT_PREP, "%pZp", page);

VM_BUG_ON(migratetype == -1);
if (is_migrate_isolate(migratetype)) {
@@ -671,8 +671,8 @@ static inline void __free_one_page(struct page *page,

page_idx = pfn & ((1 << max_order) - 1);

- VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page);
- VM_BUG_ON_PAGE(bad_range(zone, page), page);
+ VM_BUG(page_idx & ((1 << order) - 1), "%pZp", page);
+ VM_BUG(bad_range(zone, page), "%pZp", page);

while (order < max_order - 1) {
buddy_idx = __find_buddy_index(page_idx, order);
@@ -930,8 +930,8 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
bool compound = PageCompound(page);
int i, bad = 0;

- VM_BUG_ON_PAGE(PageTail(page), page);
- VM_BUG_ON_PAGE(compound && compound_order(page) != order, page);
+ VM_BUG(PageTail(page), "%pZp", page);
+ VM_BUG(compound && compound_order(page) != order, "%pZp", page);

trace_mm_page_free(page, order);
kmemcheck_free_shadow(page, order);
@@ -1246,7 +1246,7 @@ static inline void expand(struct zone *zone, struct page *page,
area--;
high--;
size >>= 1;
- VM_BUG_ON_PAGE(bad_range(zone, &page[size]), &page[size]);
+ VM_BUG(bad_range(zone, &page[size]), "%pZp", &page[size]);

if (IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) &&
debug_guardpage_enabled() &&
@@ -1418,7 +1418,7 @@ int move_freepages(struct zone *zone,

for (page = start_page; page <= end_page;) {
/* Make sure we are not inadvertently changing nodes */
- VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
+ VM_BUG(page_to_nid(page) != zone_to_nid(zone), "%pZp", page);

if (!pfn_valid_within(page_to_pfn(page))) {
page++;
@@ -1943,8 +1943,8 @@ void split_page(struct page *page, unsigned int order)
{
int i;

- VM_BUG_ON_PAGE(PageCompound(page), page);
- VM_BUG_ON_PAGE(!page_count(page), page);
+ VM_BUG(PageCompound(page), "%pZp", page);
+ VM_BUG(!page_count(page), "%pZp", page);

#ifdef CONFIG_KMEMCHECK
/*
@@ -2096,7 +2096,7 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
zone_statistics(preferred_zone, zone, gfp_flags);
local_irq_restore(flags);

- VM_BUG_ON_PAGE(bad_range(zone, page), page);
+ VM_BUG(bad_range(zone, page), "%pZp", page);
return page;

failed:
@@ -6514,7 +6514,7 @@ void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
word_bitidx = bitidx / BITS_PER_LONG;
bitidx &= (BITS_PER_LONG-1);

- VM_BUG_ON_PAGE(!zone_spans_pfn(zone, pfn), page);
+ VM_BUG(!zone_spans_pfn(zone, pfn), "%pZp", page);

bitidx += end_bitidx;
mask <<= (BITS_PER_LONG - bitidx - 1);
diff --git a/mm/page_io.c b/mm/page_io.c
index 6424869..deea5be 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -331,8 +331,8 @@ int swap_readpage(struct page *page)
int ret = 0;
struct swap_info_struct *sis = page_swap_info(page);

- VM_BUG_ON_PAGE(!PageLocked(page), page);
- VM_BUG_ON_PAGE(PageUptodate(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
+ VM_BUG(PageUptodate(page), "%pZp", page);
if (frontswap_load(page) == 0) {
SetPageUptodate(page);
unlock_page(page);
diff --git a/mm/rmap.c b/mm/rmap.c
index dad23a4..f8a6bca 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -971,9 +971,9 @@ void page_move_anon_rmap(struct page *page,
{
struct anon_vma *anon_vma = vma->anon_vma;

- VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
VM_BUG_ON_VMA(!anon_vma, vma);
- VM_BUG_ON_PAGE(page->index != linear_page_index(vma, address), page);
+ VM_BUG(page->index != linear_page_index(vma, address), "%pZp", page);

anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
page->mapping = (struct address_space *) anon_vma;
@@ -1078,7 +1078,7 @@ void do_page_add_anon_rmap(struct page *page,
if (unlikely(PageKsm(page)))
return;

- VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
/* address might be in next vma when migration races vma_adjust */
if (first)
__page_set_anon_rmap(page, vma, address, exclusive);
@@ -1274,7 +1274,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
pte_t swp_pte;

if (flags & TTU_FREE) {
- VM_BUG_ON_PAGE(PageSwapCache(page), page);
+ VM_BUG(PageSwapCache(page), "%pZp", page);
if (!dirty && !PageDirty(page)) {
/* It's a freeable page by MADV_FREE */
dec_mm_counter(mm, MM_ANONPAGES);
@@ -1407,7 +1407,7 @@ int try_to_unmap(struct page *page, enum ttu_flags flags)
.anon_lock = page_lock_anon_vma_read,
};

- VM_BUG_ON_PAGE(!PageHuge(page) && PageTransHuge(page), page);
+ VM_BUG(!PageHuge(page) && PageTransHuge(page), "%pZp", page);

/*
* During exec, a temporary VMA is setup and later moved.
@@ -1453,7 +1453,7 @@ int try_to_munlock(struct page *page)

};

- VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);
+ VM_BUG(!PageLocked(page) || PageLRU(page), "%pZp", page);

ret = rmap_walk(page, &rwc);
return ret;
@@ -1559,7 +1559,7 @@ static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc)
* structure at mapping cannot be freed and reused yet,
* so we can safely take mapping->i_mmap_rwsem.
*/
- VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);

if (!mapping)
return ret;
diff --git a/mm/shmem.c b/mm/shmem.c
index 3f974a1..888dfb0 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -295,8 +295,8 @@ static int shmem_add_to_page_cache(struct page *page,
{
int error;

- VM_BUG_ON_PAGE(!PageLocked(page), page);
- VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
+ VM_BUG(!PageSwapBacked(page), "%pZp", page);

page_cache_get(page);
page->mapping = mapping;
@@ -436,7 +436,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
continue;
if (!unfalloc || !PageUptodate(page)) {
if (page->mapping == mapping) {
- VM_BUG_ON_PAGE(PageWriteback(page), page);
+ VM_BUG(PageWriteback(page), "%pZp",
+ page);
truncate_inode_page(mapping, page);
}
}
@@ -513,7 +514,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
lock_page(page);
if (!unfalloc || !PageUptodate(page)) {
if (page->mapping == mapping) {
- VM_BUG_ON_PAGE(PageWriteback(page), page);
+ VM_BUG(PageWriteback(page), "%pZp",
+ page);
truncate_inode_page(mapping, page);
} else {
/* Page was replaced by swap: retry */
diff --git a/mm/slub.c b/mm/slub.c
index f920dc5..f516e0c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -338,13 +338,13 @@ static inline int oo_objects(struct kmem_cache_order_objects x)
*/
static __always_inline void slab_lock(struct page *page)
{
- VM_BUG_ON_PAGE(PageTail(page), page);
+ VM_BUG(PageTail(page), "%pZp", page);
bit_spin_lock(PG_locked, &page->flags);
}

static __always_inline void slab_unlock(struct page *page)
{
- VM_BUG_ON_PAGE(PageTail(page), page);
+ VM_BUG(PageTail(page), "%pZp", page);
__bit_spin_unlock(PG_locked, &page->flags);
}

diff --git a/mm/swap.c b/mm/swap.c
index 8773de0..47af078 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -59,7 +59,7 @@ static void __page_cache_release(struct page *page)

spin_lock_irqsave(&zone->lru_lock, flags);
lruvec = mem_cgroup_page_lruvec(page, zone);
- VM_BUG_ON_PAGE(!PageLRU(page), page);
+ VM_BUG(!PageLRU(page), "%pZp", page);
__ClearPageLRU(page);
del_page_from_lru_list(page, lruvec, page_off_lru(page));
spin_unlock_irqrestore(&zone->lru_lock, flags);
@@ -131,8 +131,8 @@ void put_unrefcounted_compound_page(struct page *page_head, struct page *page)
* __split_huge_page_refcount cannot race
* here, see the comment above this function.
*/
- VM_BUG_ON_PAGE(!PageHead(page_head), page_head);
- VM_BUG_ON_PAGE(page_mapcount(page) != 0, page);
+ VM_BUG(!PageHead(page_head), "%pZp", page_head);
+ VM_BUG(page_mapcount(page) != 0, "%pZp", page);
if (put_page_testzero(page_head)) {
/*
* If this is the tail of a slab THP page,
@@ -148,7 +148,7 @@ void put_unrefcounted_compound_page(struct page *page_head, struct page *page)
* not go away until the compound page enters
* the buddy allocator.
*/
- VM_BUG_ON_PAGE(PageSlab(page_head), page_head);
+ VM_BUG(PageSlab(page_head), "%pZp", page_head);
__put_compound_page(page_head);
}
} else
@@ -202,7 +202,7 @@ out_put_single:
__put_single_page(page);
return;
}
- VM_BUG_ON_PAGE(page_head != page->first_page, page);
+ VM_BUG(page_head != page->first_page, "%pZp", page);
/*
* We can release the refcount taken by
* get_page_unless_zero() now that
@@ -210,12 +210,13 @@ out_put_single:
* compound_lock.
*/
if (put_page_testzero(page_head))
- VM_BUG_ON_PAGE(1, page_head);
+ VM_BUG(1, "%pZp", page_head);
/* __split_huge_page_refcount will wait now */
- VM_BUG_ON_PAGE(page_mapcount(page) <= 0, page);
+ VM_BUG(page_mapcount(page) <= 0, "%pZp", page);
atomic_dec(&page->_mapcount);
- VM_BUG_ON_PAGE(atomic_read(&page_head->_count) <= 0, page_head);
- VM_BUG_ON_PAGE(atomic_read(&page->_count) != 0, page);
+ VM_BUG(atomic_read(&page_head->_count) <= 0, "%pZp",
+ page_head);
+ VM_BUG(atomic_read(&page->_count) != 0, "%pZp", page);
compound_unlock_irqrestore(page_head, flags);

if (put_page_testzero(page_head)) {
@@ -226,7 +227,7 @@ out_put_single:
}
} else {
/* @page_head is a dangling pointer */
- VM_BUG_ON_PAGE(PageTail(page), page);
+ VM_BUG(PageTail(page), "%pZp", page);
goto out_put_single;
}
}
@@ -306,7 +307,7 @@ bool __get_page_tail(struct page *page)
* page. __split_huge_page_refcount
* cannot race here.
*/
- VM_BUG_ON_PAGE(!PageHead(page_head), page_head);
+ VM_BUG(!PageHead(page_head), "%pZp", page_head);
__get_page_tail_foll(page, true);
return true;
} else {
@@ -668,8 +669,8 @@ EXPORT_SYMBOL(lru_cache_add_file);
*/
void lru_cache_add(struct page *page)
{
- VM_BUG_ON_PAGE(PageActive(page) && PageUnevictable(page), page);
- VM_BUG_ON_PAGE(PageLRU(page), page);
+ VM_BUG(PageActive(page) && PageUnevictable(page), "%pZp", page);
+ VM_BUG(PageLRU(page), "%pZp", page);
__lru_cache_add(page);
}

@@ -710,7 +711,7 @@ void add_page_to_unevictable_list(struct page *page)
void lru_cache_add_active_or_unevictable(struct page *page,
struct vm_area_struct *vma)
{
- VM_BUG_ON_PAGE(PageLRU(page), page);
+ VM_BUG(PageLRU(page), "%pZp", page);

if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED)) {
SetPageActive(page);
@@ -995,7 +996,7 @@ void release_pages(struct page **pages, int nr, bool cold)
}

lruvec = mem_cgroup_page_lruvec(page, zone);
- VM_BUG_ON_PAGE(!PageLRU(page), page);
+ VM_BUG(!PageLRU(page), "%pZp", page);
__ClearPageLRU(page);
del_page_from_lru_list(page, lruvec, page_off_lru(page));
}
@@ -1038,9 +1039,9 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
{
const int file = 0;

- VM_BUG_ON_PAGE(!PageHead(page), page);
- VM_BUG_ON_PAGE(PageCompound(page_tail), page);
- VM_BUG_ON_PAGE(PageLRU(page_tail), page);
+ VM_BUG(!PageHead(page), "%pZp", page);
+ VM_BUG(PageCompound(page_tail), "%pZp", page);
+ VM_BUG(PageLRU(page_tail), "%pZp", page);
VM_BUG_ON(NR_CPUS != 1 &&
!spin_is_locked(&lruvec_zone(lruvec)->lru_lock));

@@ -1079,7 +1080,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
int active = PageActive(page);
enum lru_list lru = page_lru(page);

- VM_BUG_ON_PAGE(PageLRU(page), page);
+ VM_BUG(PageLRU(page), "%pZp", page);

SetPageLRU(page);
add_page_to_lru_list(page, lruvec, lru);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index a2611ce..0609662 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -81,9 +81,9 @@ int __add_to_swap_cache(struct page *page, swp_entry_t entry)
int error;
struct address_space *address_space;

- VM_BUG_ON_PAGE(!PageLocked(page), page);
- VM_BUG_ON_PAGE(PageSwapCache(page), page);
- VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
+ VM_BUG(PageSwapCache(page), "%pZp", page);
+ VM_BUG(!PageSwapBacked(page), "%pZp", page);

page_cache_get(page);
SetPageSwapCache(page);
@@ -137,9 +137,9 @@ void __delete_from_swap_cache(struct page *page)
swp_entry_t entry;
struct address_space *address_space;

- VM_BUG_ON_PAGE(!PageLocked(page), page);
- VM_BUG_ON_PAGE(!PageSwapCache(page), page);
- VM_BUG_ON_PAGE(PageWriteback(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
+ VM_BUG(!PageSwapCache(page), "%pZp", page);
+ VM_BUG(PageWriteback(page), "%pZp", page);

entry.val = page_private(page);
address_space = swap_address_space(entry);
@@ -163,8 +163,8 @@ int add_to_swap(struct page *page, struct list_head *list)
swp_entry_t entry;
int err;

- VM_BUG_ON_PAGE(!PageLocked(page), page);
- VM_BUG_ON_PAGE(!PageUptodate(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
+ VM_BUG(!PageUptodate(page), "%pZp", page);

entry = get_swap_page();
if (!entry.val)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index a7e7210..d71dcd6 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -884,7 +884,7 @@ int reuse_swap_page(struct page *page)
{
int count;

- VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);
if (unlikely(PageKsm(page)))
return 0;
count = page_mapcount(page);
@@ -904,7 +904,7 @@ int reuse_swap_page(struct page *page)
*/
int try_to_free_swap(struct page *page)
{
- VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);

if (!PageSwapCache(page))
return 0;
@@ -2710,7 +2710,7 @@ struct swap_info_struct *page_swap_info(struct page *page)
*/
struct address_space *__page_file_mapping(struct page *page)
{
- VM_BUG_ON_PAGE(!PageSwapCache(page), page);
+ VM_BUG(!PageSwapCache(page), "%pZp", page);
return page_swap_info(page)->swap_file->f_mapping;
}
EXPORT_SYMBOL_GPL(__page_file_mapping);
@@ -2718,7 +2718,7 @@ EXPORT_SYMBOL_GPL(__page_file_mapping);
pgoff_t __page_file_index(struct page *page)
{
swp_entry_t swap = { .val = page_private(page) };
- VM_BUG_ON_PAGE(!PageSwapCache(page), page);
+ VM_BUG(!PageSwapCache(page), "%pZp", page);
return swp_offset(swap);
}
EXPORT_SYMBOL_GPL(__page_file_index);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7d20d36..d63586f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -688,7 +688,7 @@ void putback_lru_page(struct page *page)
bool is_unevictable;
int was_unevictable = PageUnevictable(page);

- VM_BUG_ON_PAGE(PageLRU(page), page);
+ VM_BUG(PageLRU(page), "%pZp", page);

redo:
ClearPageUnevictable(page);
@@ -761,7 +761,7 @@ static enum page_references page_check_references(struct page *page,
unsigned long vm_flags;
int pte_dirty;

- VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG(!PageLocked(page), "%pZp", page);

referenced_ptes = page_referenced(page, 1, sc->target_mem_cgroup,
&vm_flags, &pte_dirty);
@@ -887,8 +887,8 @@ static unsigned long shrink_page_list(struct list_head *page_list,
if (!trylock_page(page))
goto keep;

- VM_BUG_ON_PAGE(PageActive(page), page);
- VM_BUG_ON_PAGE(page_zone(page) != zone, page);
+ VM_BUG(PageActive(page), "%pZp", page);
+ VM_BUG(page_zone(page) != zone, "%pZp", page);

sc->nr_scanned++;

@@ -1059,7 +1059,7 @@ unmap:
* due to skipping of swapcache so we free
* page in here rather than __remove_mapping.
*/
- VM_BUG_ON_PAGE(PageSwapCache(page), page);
+ VM_BUG(PageSwapCache(page), "%pZp", page);
if (!page_freeze_refs(page, 1))
goto keep_locked;
__ClearPageLocked(page);
@@ -1196,14 +1196,14 @@ activate_locked:
/* Not a candidate for swapping, so reclaim swap space. */
if (PageSwapCache(page) && vm_swap_full())
try_to_free_swap(page);
- VM_BUG_ON_PAGE(PageActive(page), page);
+ VM_BUG(PageActive(page), "%pZp", page);
SetPageActive(page);
pgactivate++;
keep_locked:
unlock_page(page);
keep:
list_add(&page->lru, &ret_pages);
- VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);
+ VM_BUG(PageLRU(page) || PageUnevictable(page), "%pZp", page);
}

mem_cgroup_uncharge_list(&free_pages);
@@ -1358,7 +1358,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
page = lru_to_page(src);
prefetchw_prev_lru_page(page, src, flags);

- VM_BUG_ON_PAGE(!PageLRU(page), page);
+ VM_BUG(!PageLRU(page), "%pZp", page);

switch (__isolate_lru_page(page, mode)) {
case 0:
@@ -1413,7 +1413,7 @@ int isolate_lru_page(struct page *page)
{
int ret = -EBUSY;

- VM_BUG_ON_PAGE(!page_count(page), page);
+ VM_BUG(!page_count(page), "%pZp", page);

if (PageLRU(page)) {
struct zone *zone = page_zone(page);
@@ -1501,7 +1501,7 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
struct page *page = lru_to_page(page_list);
int lru;

- VM_BUG_ON_PAGE(PageLRU(page), page);
+ VM_BUG(PageLRU(page), "%pZp", page);
list_del(&page->lru);
if (unlikely(!page_evictable(page))) {
spin_unlock_irq(&zone->lru_lock);
@@ -1736,7 +1736,7 @@ static void move_active_pages_to_lru(struct lruvec *lruvec,
page = lru_to_page(list);
lruvec = mem_cgroup_page_lruvec(page, zone);

- VM_BUG_ON_PAGE(PageLRU(page), page);
+ VM_BUG(PageLRU(page), "%pZp", page);
SetPageLRU(page);

nr_pages = hpage_nr_pages(page);
@@ -3863,7 +3863,7 @@ void check_move_unevictable_pages(struct page **pages, int nr_pages)
if (page_evictable(page)) {
enum lru_list lru = page_lru_base_type(page);

- VM_BUG_ON_PAGE(PageActive(page), page);
+ VM_BUG(PageActive(page), "%pZp", page);
ClearPageUnevictable(page);
del_page_from_lru_list(page, lruvec, LRU_UNEVICTABLE);
add_page_to_lru_list(page, lruvec, lru);
--
1.7.10.4

2015-05-14 17:11:39

by Sasha Levin

Subject: [PATCH 09/11] mm: debug: kill VM_BUG_ON_VMA

Remove VM_BUG_ON_VMA() and convert its callers to VM_BUG() with the
"%pZv" VMA format specifier.

Signed-off-by: Sasha Levin <[email protected]>
---
include/linux/huge_mm.h | 2 +-
include/linux/mmdebug.h | 8 --------
include/linux/rmap.h | 2 +-
mm/gup.c | 4 ++--
mm/huge_memory.c | 6 +++---
mm/hugetlb.c | 14 +++++++-------
mm/interval_tree.c | 2 +-
mm/mmap.c | 11 +++++------
mm/mremap.c | 4 ++--
mm/rmap.c | 6 +++---
10 files changed, 25 insertions(+), 34 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 44a840a..cfd745b 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -136,7 +136,7 @@ extern int __pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma,
static inline int pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma,
spinlock_t **ptl)
{
- VM_BUG_ON_VMA(!rwsem_is_locked(&vma->vm_mm->mmap_sem), vma);
+ VM_BUG(!rwsem_is_locked(&vma->vm_mm->mmap_sem), "%pZv", vma);
if (pmd_trans_huge(*pmd))
return __pmd_trans_huge_lock(pmd, vma, ptl);
else
diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index f43f868..5106ab5 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -20,13 +20,6 @@ char *format_mm(const struct mm_struct *mm, char *buf, char *end);
} \
} while (0)
#define VM_BUG_ON(cond) VM_BUG(cond, "%s\n", __stringify(cond))
-#define VM_BUG_ON_VMA(cond, vma) \
- do { \
- if (unlikely(cond)) { \
- pr_emerg("%pZv", vma); \
- BUG(); \
- } \
- } while (0)
#define VM_BUG_ON_MM(cond, mm) \
do { \
if (unlikely(cond)) { \
@@ -48,7 +41,6 @@ static char *format_mm(const struct mm_struct *mm, char *buf, char *end)
}
#define VM_BUG(cond, fmt...) BUILD_BUG_ON_INVALID(cond)
#define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
-#define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
#define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
#define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
#define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index bf36b6e..54beb2f 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -153,7 +153,7 @@ int anon_vma_fork(struct vm_area_struct *, struct vm_area_struct *);
static inline void anon_vma_merge(struct vm_area_struct *vma,
struct vm_area_struct *next)
{
- VM_BUG_ON_VMA(vma->anon_vma != next->anon_vma, vma);
+ VM_BUG(vma->anon_vma != next->anon_vma, "%pZv", vma);
unlink_anon_vmas(next);
}

diff --git a/mm/gup.c b/mm/gup.c
index 743648e..0b851ac 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -846,8 +846,8 @@ long populate_vma_page_range(struct vm_area_struct *vma,

VM_BUG_ON(start & ~PAGE_MASK);
VM_BUG_ON(end & ~PAGE_MASK);
- VM_BUG_ON_VMA(start < vma->vm_start, vma);
- VM_BUG_ON_VMA(end > vma->vm_end, vma);
+ VM_BUG(start < vma->vm_start, "%pZv", vma);
+ VM_BUG(end > vma->vm_end, "%pZv", vma);
VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_sem), mm);

gup_flags = FOLL_TOUCH | FOLL_POPULATE;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 82ccd2c..ed222a4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1092,7 +1092,7 @@ int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
gfp_t huge_gfp; /* for allocation and charge */

ptl = pmd_lockptr(mm, pmd);
- VM_BUG_ON_VMA(!vma->anon_vma, vma);
+ VM_BUG(!vma->anon_vma, "%pZv", vma);
haddr = address & HPAGE_PMD_MASK;
if (is_huge_zero_pmd(orig_pmd))
goto alloc;
@@ -2107,7 +2107,7 @@ int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
if (vma->vm_ops)
/* khugepaged not yet working on file or special mappings */
return 0;
- VM_BUG_ON_VMA(vm_flags & VM_NO_THP, vma);
+ VM_BUG(vm_flags & VM_NO_THP, "%pZv", vma);
hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
hend = vma->vm_end & HPAGE_PMD_MASK;
if (hstart < hend)
@@ -2465,7 +2465,7 @@ static bool hugepage_vma_check(struct vm_area_struct *vma)
return false;
if (is_vma_temporary_stack(vma))
return false;
- VM_BUG_ON_VMA(vma->vm_flags & VM_NO_THP, vma);
+ VM_BUG(vma->vm_flags & VM_NO_THP, "%pZv", vma);
return true;
}

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 55c75da..fbd5718 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -503,7 +503,7 @@ static inline struct resv_map *inode_resv_map(struct inode *inode)

static struct resv_map *vma_resv_map(struct vm_area_struct *vma)
{
- VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
+ VM_BUG(!is_vm_hugetlb_page(vma), "%pZv", vma);
if (vma->vm_flags & VM_MAYSHARE) {
struct address_space *mapping = vma->vm_file->f_mapping;
struct inode *inode = mapping->host;
@@ -518,8 +518,8 @@ static struct resv_map *vma_resv_map(struct vm_area_struct *vma)

static void set_vma_resv_map(struct vm_area_struct *vma, struct resv_map *map)
{
- VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
- VM_BUG_ON_VMA(vma->vm_flags & VM_MAYSHARE, vma);
+ VM_BUG(!is_vm_hugetlb_page(vma), "%pZv", vma);
+ VM_BUG(vma->vm_flags & VM_MAYSHARE, "%pZv", vma);

set_vma_private_data(vma, (get_vma_private_data(vma) &
HPAGE_RESV_MASK) | (unsigned long)map);
@@ -527,15 +527,15 @@ static void set_vma_resv_map(struct vm_area_struct *vma, struct resv_map *map)

static void set_vma_resv_flags(struct vm_area_struct *vma, unsigned long flags)
{
- VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
- VM_BUG_ON_VMA(vma->vm_flags & VM_MAYSHARE, vma);
+ VM_BUG(!is_vm_hugetlb_page(vma), "%pZv", vma);
+ VM_BUG(vma->vm_flags & VM_MAYSHARE, "%pZv", vma);

set_vma_private_data(vma, get_vma_private_data(vma) | flags);
}

static int is_vma_resv_set(struct vm_area_struct *vma, unsigned long flag)
{
- VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
+ VM_BUG(!is_vm_hugetlb_page(vma), "%pZv", vma);

return (get_vma_private_data(vma) & flag) != 0;
}
@@ -543,7 +543,7 @@ static int is_vma_resv_set(struct vm_area_struct *vma, unsigned long flag)
/* Reset counters to 0 and clear all HPAGE_RESV_* flags */
void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
{
- VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
+ VM_BUG(!is_vm_hugetlb_page(vma), "%pZv", vma);
if (!(vma->vm_flags & VM_MAYSHARE))
vma->vm_private_data = (void *)0;
}
diff --git a/mm/interval_tree.c b/mm/interval_tree.c
index f2c2492..49d4f53 100644
--- a/mm/interval_tree.c
+++ b/mm/interval_tree.c
@@ -34,7 +34,7 @@ void vma_interval_tree_insert_after(struct vm_area_struct *node,
struct vm_area_struct *parent;
unsigned long last = vma_last_pgoff(node);

- VM_BUG_ON_VMA(vma_start_pgoff(node) != vma_start_pgoff(prev), node);
+ VM_BUG(vma_start_pgoff(node) != vma_start_pgoff(prev), "%pZv", node);

if (!prev->shared.rb.rb_right) {
parent = prev;
diff --git a/mm/mmap.c b/mm/mmap.c
index bb50cac..f2db320 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -426,9 +426,8 @@ static void validate_mm_rb(struct rb_root *root, struct vm_area_struct *ignore)
for (nd = rb_first(root); nd; nd = rb_next(nd)) {
struct vm_area_struct *vma;
vma = rb_entry(nd, struct vm_area_struct, vm_rb);
- VM_BUG_ON_VMA(vma != ignore &&
- vma->rb_subtree_gap != vma_compute_subtree_gap(vma),
- vma);
+ VM_BUG(vma != ignore && vma->rb_subtree_gap != vma_compute_subtree_gap(vma),
+ "%pZv", vma);
}
}

@@ -805,8 +804,8 @@ again: remove_next = 1 + (end > next->vm_end);
if (!anon_vma && adjust_next)
anon_vma = next->anon_vma;
if (anon_vma) {
- VM_BUG_ON_VMA(adjust_next && next->anon_vma &&
- anon_vma != next->anon_vma, next);
+ VM_BUG(adjust_next && next->anon_vma && anon_vma != next->anon_vma,
+ "%pZv", next);
anon_vma_lock_write(anon_vma);
anon_vma_interval_tree_pre_update_vma(vma);
if (adjust_next)
@@ -2932,7 +2931,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
* safe. It is only safe to keep the vm_pgoff
* linear if there are no pages mapped yet.
*/
- VM_BUG_ON_VMA(faulted_in_anon_vma, new_vma);
+ VM_BUG(faulted_in_anon_vma, "%pZv", new_vma);
*vmap = vma = new_vma;
}
*need_rmap_locks = (new_vma->vm_pgoff <= vma->vm_pgoff);
diff --git a/mm/mremap.c b/mm/mremap.c
index a7c93ec..f875e20 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -194,8 +194,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
if (pmd_trans_huge(*old_pmd)) {
int err = 0;
if (extent == HPAGE_PMD_SIZE) {
- VM_BUG_ON_VMA(vma->vm_file || !vma->anon_vma,
- vma);
+ VM_BUG(vma->vm_file || !vma->anon_vma,
+ "%pZv", vma);
/* See comment in move_ptes() */
if (need_rmap_locks)
anon_vma_lock_write(vma->anon_vma);
diff --git a/mm/rmap.c b/mm/rmap.c
index f8a6bca..1ef7e6f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -576,7 +576,7 @@ vma_address(struct page *page, struct vm_area_struct *vma)
unsigned long address = __vma_address(page, vma);

/* page should be within @vma mapping range */
- VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+ VM_BUG(address < vma->vm_start || address >= vma->vm_end, "%pZv", vma);

return address;
}
@@ -972,7 +972,7 @@ void page_move_anon_rmap(struct page *page,
struct anon_vma *anon_vma = vma->anon_vma;

VM_BUG(!PageLocked(page), "%pZp", page);
- VM_BUG_ON_VMA(!anon_vma, vma);
+ VM_BUG(!anon_vma, "%pZv", vma);
VM_BUG(page->index != linear_page_index(vma, address), "%pZp", page);

anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
@@ -1099,7 +1099,7 @@ void do_page_add_anon_rmap(struct page *page,
void page_add_new_anon_rmap(struct page *page,
struct vm_area_struct *vma, unsigned long address)
{
- VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+ VM_BUG(address < vma->vm_start || address >= vma->vm_end, "%pZv", vma);
SetPageSwapBacked(page);
atomic_set(&page->_mapcount, 0); /* increment count (starts at -1) */
if (PageTransHuge(page))
--
1.7.10.4

2015-05-14 17:11:22

by Sasha Levin

Subject: [PATCH 10/11] mm: debug: kill VM_BUG_ON_MM

Remove VM_BUG_ON_MM() and convert its callers to VM_BUG() with the
"%pZm" mm format specifier.

Signed-off-by: Sasha Levin <[email protected]>
---
include/linux/mmdebug.h | 8 --------
kernel/fork.c | 2 +-
mm/gup.c | 2 +-
mm/huge_memory.c | 2 +-
mm/mmap.c | 2 +-
mm/pagewalk.c | 2 +-
6 files changed, 5 insertions(+), 13 deletions(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 5106ab5..b810800 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -20,13 +20,6 @@ char *format_mm(const struct mm_struct *mm, char *buf, char *end);
} \
} while (0)
#define VM_BUG_ON(cond) VM_BUG(cond, "%s\n", __stringify(cond))
-#define VM_BUG_ON_MM(cond, mm) \
- do { \
- if (unlikely(cond)) { \
- pr_emerg("%pZm", mm); \
- BUG(); \
- } \
- } while (0)
#define VM_WARN_ON(cond) WARN_ON(cond)
#define VM_WARN_ON_ONCE(cond) WARN_ON_ONCE(cond)
#define VM_WARN_ONCE(cond, format...) WARN_ONCE(cond, format)
@@ -41,7 +34,6 @@ static char *format_mm(const struct mm_struct *mm, char *buf, char *end)
}
#define VM_BUG(cond, fmt...) BUILD_BUG_ON_INVALID(cond)
#define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
-#define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
#define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
#define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
#define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond)
diff --git a/kernel/fork.c b/kernel/fork.c
index 2e67086..3dd29c1 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -645,7 +645,7 @@ static void check_mm(struct mm_struct *mm)
mm_nr_pmds(mm));

#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !USE_SPLIT_PMD_PTLOCKS
- VM_BUG_ON_MM(mm->pmd_huge_pte, mm);
+ VM_BUG(mm->pmd_huge_pte, "%pZm", mm);
#endif
}

diff --git a/mm/gup.c b/mm/gup.c
index 0b851ac..57cc2de 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -848,7 +848,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
VM_BUG_ON(end & ~PAGE_MASK);
VM_BUG(start < vma->vm_start, "%pZv", vma);
VM_BUG(end > vma->vm_end, "%pZv", vma);
- VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_sem), mm);
+ VM_BUG(!rwsem_is_locked(&mm->mmap_sem), "%pZm", mm);

gup_flags = FOLL_TOUCH | FOLL_POPULATE;
/*
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ed222a4..3d6d6c5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2071,7 +2071,7 @@ int __khugepaged_enter(struct mm_struct *mm)
return -ENOMEM;

/* __khugepaged_exit() must not run from under us */
- VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
+ VM_BUG(khugepaged_test_exit(mm), "%pZm", mm);
if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags))) {
free_mm_slot(mm_slot);
return 0;
diff --git a/mm/mmap.c b/mm/mmap.c
index f2db320..311a795 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -464,7 +464,7 @@ static void validate_mm(struct mm_struct *mm)
pr_emerg("map_count %d rb %d\n", mm->map_count, i);
bug = 1;
}
- VM_BUG_ON_MM(bug, mm);
+ VM_BUG(bug, "%pZm", mm);
}
#else
#define validate_mm_rb(root, ignore) do { } while (0)
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 29f2f8b..952cddc 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -249,7 +249,7 @@ int walk_page_range(unsigned long start, unsigned long end,
if (!walk->mm)
return -EINVAL;

- VM_BUG_ON_MM(!rwsem_is_locked(&walk->mm->mmap_sem), walk->mm);
+ VM_BUG(!rwsem_is_locked(&walk->mm->mmap_sem), "%pZm", walk->mm);

vma = find_vma(walk->mm, start);
do {
--
1.7.10.4

2015-05-14 17:10:57

by Sasha Levin

Subject: [PATCH 11/11] mm: debug: use VM_BUG() to help with debug output

Convert remaining VM_BUG_ON() call sites in arch and common code to
VM_BUG(), so that a failure reports the offending value (a returned
address, a page, an mm) rather than only the stringified condition.

Signed-off-by: Sasha Levin <[email protected]>
---
arch/arm/mm/mmap.c | 2 +-
arch/frv/mm/elf-fdpic.c | 4 ++--
arch/mips/mm/gup.c | 4 ++--
arch/parisc/kernel/sys_parisc.c | 2 +-
arch/powerpc/mm/hugetlbpage.c | 2 +-
arch/powerpc/mm/pgtable_64.c | 4 ++--
arch/s390/mm/gup.c | 2 +-
arch/s390/mm/mmap.c | 2 +-
arch/s390/mm/pgtable.c | 6 +++---
arch/sh/mm/mmap.c | 2 +-
arch/sparc/kernel/sys_sparc_64.c | 4 ++--
arch/sparc/mm/gup.c | 2 +-
arch/sparc/mm/hugetlbpage.c | 4 ++--
arch/tile/mm/hugetlbpage.c | 2 +-
arch/x86/kernel/sys_x86_64.c | 2 +-
arch/x86/mm/hugetlbpage.c | 2 +-
arch/x86/mm/pgtable.c | 6 +++---
mm/huge_memory.c | 4 ++--
mm/mmap.c | 2 +-
mm/pgtable-generic.c | 8 ++++----
20 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index 407dc78..6767df7 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -159,7 +159,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
* allocations.
*/
if (addr & ~PAGE_MASK) {
- VM_BUG_ON(addr != -ENOMEM);
+ VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
info.flags = 0;
info.low_limit = mm->mmap_base;
info.high_limit = TASK_SIZE;
diff --git a/arch/frv/mm/elf-fdpic.c b/arch/frv/mm/elf-fdpic.c
index 836f147..6ae5497 100644
--- a/arch/frv/mm/elf-fdpic.c
+++ b/arch/frv/mm/elf-fdpic.c
@@ -88,7 +88,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
addr = vm_unmapped_area(&info);
if (!(addr & ~PAGE_MASK))
goto success;
- VM_BUG_ON(addr != -ENOMEM);
+ VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);

/* search from just above the WorkRAM area to the top of memory */
info.low_limit = PAGE_ALIGN(0x80000000);
@@ -96,7 +96,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
addr = vm_unmapped_area(&info);
if (!(addr & ~PAGE_MASK))
goto success;
- VM_BUG_ON(addr != -ENOMEM);
+ VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);

#if 0
printk("[area] l=%lx (ENOMEM) f='%s'\n",
diff --git a/arch/mips/mm/gup.c b/arch/mips/mm/gup.c
index 349995d..364e27b 100644
--- a/arch/mips/mm/gup.c
+++ b/arch/mips/mm/gup.c
@@ -85,7 +85,7 @@ static int gup_huge_pmd(pmd_t pmd, unsigned long addr, unsigned long end,
head = pte_page(pte);
page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
do {
- VM_BUG_ON(compound_head(page) != head);
+ VM_BUG(compound_head(page) != head, "%pZp\n%pZp", page, head);
pages[*nr] = page;
if (PageTail(page))
get_huge_page_tail(page);
@@ -151,7 +151,7 @@ static int gup_huge_pud(pud_t pud, unsigned long addr, unsigned long end,
head = pte_page(pte);
page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
do {
- VM_BUG_ON(compound_head(page) != head);
+ VM_BUG(compound_head(page) != head, "%pZp\n%pZp", page, head);
pages[*nr] = page;
if (PageTail(page))
get_huge_page_tail(page);
diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
index e1ffea2..845823c 100644
--- a/arch/parisc/kernel/sys_parisc.c
+++ b/arch/parisc/kernel/sys_parisc.c
@@ -187,7 +187,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
addr = vm_unmapped_area(&info);
if (!(addr & ~PAGE_MASK))
goto found_addr;
- VM_BUG_ON(addr != -ENOMEM);
+ VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);

/*
* A failed mmap() very likely causes application failure,
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 1b88b1c..bf5117c 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -1067,7 +1067,7 @@ int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
page = head + ((addr & (sz-1)) >> PAGE_SHIFT);
tail = page;
do {
- VM_BUG_ON(compound_head(page) != head);
+ VM_BUG(compound_head(page) != head, "%pZp\n%pZp", page, head);
pages[*nr] = page;
(*nr)++;
page++;
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index 59daa5e..b33bc22 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -559,7 +559,7 @@ pmd_t pmdp_clear_flush(struct vm_area_struct *vma, unsigned long address,
{
pmd_t pmd;

- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
if (pmd_trans_huge(*pmdp)) {
pmd = pmdp_get_and_clear(vma->vm_mm, address, pmdp);
} else {
@@ -627,7 +627,7 @@ void pmdp_splitting_flush(struct vm_area_struct *vma,
{
unsigned long old, tmp;

- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);

#ifdef CONFIG_DEBUG_VM
WARN_ON(!pmd_trans_huge(*pmdp));
diff --git a/arch/s390/mm/gup.c b/arch/s390/mm/gup.c
index 1eb41bb..2ad6ba0 100644
--- a/arch/s390/mm/gup.c
+++ b/arch/s390/mm/gup.c
@@ -66,7 +66,7 @@ static inline int gup_huge_pmd(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
tail = page;
do {
- VM_BUG_ON(compound_head(page) != head);
+ VM_BUG(compound_head(page) != head, "%pZp\n%pZp", page, head);
pages[*nr] = page;
(*nr)++;
page++;
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index 6e552af..178eb32 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -167,7 +167,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
* allocations.
*/
if (addr & ~PAGE_MASK) {
- VM_BUG_ON(addr != -ENOMEM);
+ VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
info.flags = 0;
info.low_limit = TASK_UNMAPPED_BASE;
info.high_limit = TASK_SIZE;
diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index b33f661..d69fb62 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -1333,7 +1333,7 @@ EXPORT_SYMBOL_GPL(gmap_test_and_clear_dirty);
int pmdp_clear_flush_young(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp)
{
- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
/* No need to flush TLB
* On s390 reference bits are in storage key and never in TLB */
return pmdp_test_and_clear_young(vma, address, pmdp);
@@ -1343,7 +1343,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
unsigned long address, pmd_t *pmdp,
pmd_t entry, int dirty)
{
- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);

entry = pmd_mkyoung(entry);
if (dirty)
@@ -1363,7 +1363,7 @@ static void pmdp_splitting_flush_sync(void *arg)
void pmdp_splitting_flush(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp)
{
- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
if (!test_and_set_bit(_SEGMENT_ENTRY_SPLIT_BIT,
(unsigned long *) pmdp)) {
/* need to serialize against gup-fast (IRQ disabled) */
diff --git a/arch/sh/mm/mmap.c b/arch/sh/mm/mmap.c
index 6777177..f30fd96 100644
--- a/arch/sh/mm/mmap.c
+++ b/arch/sh/mm/mmap.c
@@ -132,7 +132,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
* allocations.
*/
if (addr & ~PAGE_MASK) {
- VM_BUG_ON(addr != -ENOMEM);
+ VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
info.flags = 0;
info.low_limit = TASK_UNMAPPED_BASE;
info.high_limit = TASK_SIZE;
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index 30e7ddb..a77210d 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -131,7 +131,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
addr = vm_unmapped_area(&info);

if ((addr & ~PAGE_MASK) && task_size > VA_EXCLUDE_END) {
- VM_BUG_ON(addr != -ENOMEM);
+ VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
info.low_limit = VA_EXCLUDE_END;
info.high_limit = task_size;
addr = vm_unmapped_area(&info);
@@ -200,7 +200,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
* allocations.
*/
if (addr & ~PAGE_MASK) {
- VM_BUG_ON(addr != -ENOMEM);
+ VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
info.flags = 0;
info.low_limit = TASK_UNMAPPED_BASE;
info.high_limit = STACK_TOP32;
diff --git a/arch/sparc/mm/gup.c b/arch/sparc/mm/gup.c
index 2e5c4fc..9d92335 100644
--- a/arch/sparc/mm/gup.c
+++ b/arch/sparc/mm/gup.c
@@ -84,7 +84,7 @@ static int gup_huge_pmd(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
tail = page;
do {
- VM_BUG_ON(compound_head(page) != head);
+ VM_BUG(compound_head(page) != head, "%pZp\n%pZp", page, head);
pages[*nr] = page;
(*nr)++;
page++;
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 131eaf4..268fa24 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -42,7 +42,7 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *filp,
addr = vm_unmapped_area(&info);

if ((addr & ~PAGE_MASK) && task_size > VA_EXCLUDE_END) {
- VM_BUG_ON(addr != -ENOMEM);
+ VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
info.low_limit = VA_EXCLUDE_END;
info.high_limit = task_size;
addr = vm_unmapped_area(&info);
@@ -79,7 +79,7 @@ hugetlb_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
* allocations.
*/
if (addr & ~PAGE_MASK) {
- VM_BUG_ON(addr != -ENOMEM);
+ VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
info.flags = 0;
info.low_limit = TASK_UNMAPPED_BASE;
info.high_limit = STACK_TOP32;
diff --git a/arch/tile/mm/hugetlbpage.c b/arch/tile/mm/hugetlbpage.c
index c034dc3..a1dada8 100644
--- a/arch/tile/mm/hugetlbpage.c
+++ b/arch/tile/mm/hugetlbpage.c
@@ -200,7 +200,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
* allocations.
*/
if (addr & ~PAGE_MASK) {
- VM_BUG_ON(addr != -ENOMEM);
+ VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
info.flags = 0;
info.low_limit = TASK_UNMAPPED_BASE;
info.high_limit = TASK_SIZE;
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 10e0272..9737762 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -203,7 +203,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
addr = vm_unmapped_area(&info);
if (!(addr & ~PAGE_MASK))
return addr;
- VM_BUG_ON(addr != -ENOMEM);
+ VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);

bottomup:
/*
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 42982b2..ae468ee 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -111,7 +111,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
* allocations.
*/
if (addr & ~PAGE_MASK) {
- VM_BUG_ON(addr != -ENOMEM);
+ VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
info.flags = 0;
info.low_limit = TASK_UNMAPPED_BASE;
info.high_limit = TASK_SIZE;
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 3d6edea..7ec9841 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -427,7 +427,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
{
int changed = !pmd_same(*pmdp, entry);

- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);

if (changed && dirty) {
*pmdp = entry;
@@ -501,7 +501,7 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
{
int young;

- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);

young = pmdp_test_and_clear_young(vma, address, pmdp);
if (young)
@@ -514,7 +514,7 @@ void pmdp_splitting_flush(struct vm_area_struct *vma,
unsigned long address, pmd_t *pmdp)
{
int set;
- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
set = !test_and_set_bit(_PAGE_BIT_SPLITTING,
(unsigned long *)pmdp);
if (set) {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3d6d6c5..a3fe87d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2487,7 +2487,7 @@ static void collapse_huge_page(struct mm_struct *mm,
unsigned long mmun_end; /* For mmu_notifiers */
gfp_t gfp;

- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);

/* Only allocate from the target node */
gfp = alloc_hugepage_gfpmask(khugepaged_defrag(), __GFP_OTHER_NODE) |
@@ -2619,7 +2619,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
int node = NUMA_NO_NODE;
bool writable = false, referenced = false;

- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);

pmd = mm_find_pmd(mm, address);
if (!pmd)
diff --git a/mm/mmap.c b/mm/mmap.c
index 311a795..5439e8e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1977,7 +1977,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
* allocations.
*/
if (addr & ~PAGE_MASK) {
- VM_BUG_ON(addr != -ENOMEM);
+ VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
info.flags = 0;
info.low_limit = TASK_UNMAPPED_BASE;
info.high_limit = TASK_SIZE;
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index c25f94b..97327c3 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -64,7 +64,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
{
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
int changed = !pmd_same(*pmdp, entry);
- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
if (changed) {
set_pmd_at(vma->vm_mm, address, pmdp, entry);
flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
@@ -95,7 +95,7 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
{
int young;
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
#else
BUG();
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -125,7 +125,7 @@ pmd_t pmdp_clear_flush(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp)
{
pmd_t pmd;
- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
pmd = pmdp_get_and_clear(vma->vm_mm, address, pmdp);
flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
return pmd;
@@ -139,7 +139,7 @@ void pmdp_splitting_flush(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp)
{
pmd_t pmd = pmd_mksplitting(*pmdp);
- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
set_pmd_at(vma->vm_mm, address, pmdp, pmd);
/* tlb flush only to serialize against gup-fast */
flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
--
1.7.10.4

2015-05-14 20:24:18

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH 00/11] mm: debug: formatting memory management structs

On Thu, 14 May 2015 13:10:03 -0400 Sasha Levin <[email protected]> wrote:

> This patch series adds knowledge about various memory management structures
> to the standard print functions.
>
> In essence, it allows us to easily print those structures:
>
> printk("%pZp %pZm %pZv", page, mm, vma);
>
> This allows us to customize output when hitting bugs even further, thus
> we introduce VM_BUG() which allows printing anything when hitting a bug
> rather than just a single piece of information.
>
> This also means we can get rid of VM_BUG_ON_* since they're now nothing
> more than a format string.

A good set of example output would help people understand this proposal.

2015-05-14 20:26:44

by Sasha Levin

[permalink] [raw]
Subject: Re: [PATCH 00/11] mm: debug: formatting memory management structs

On 05/14/2015 04:24 PM, Andrew Morton wrote:
> On Thu, 14 May 2015 13:10:03 -0400 Sasha Levin <[email protected]> wrote:
>
>> > This patch series adds knowledge about various memory management structures
>> > to the standard print functions.
>> >
>> > In essence, it allows us to easily print those structures:
>> >
>> > printk("%pZp %pZm %pZv", page, mm, vma);
>> >
>> > This allows us to customize output when hitting bugs even further, thus
>> > we introduce VM_BUG() which allows printing anything when hitting a bug
>> > rather than just a single piece of information.
>> >
>> > This also means we can get rid of VM_BUG_ON_* since they're now nothing
>> > more than a format string.
> A good set of example output would help people understand this proposal.

That would be the equivalent of doing:

dump_page(page);
dump_mm(mm);
dump_vma(vma);

I'll add a few example usages in.


Thanks,
Sasha

2015-06-26 21:34:40

by Sasha Levin

[permalink] [raw]
Subject: Re: [PATCH 00/11] mm: debug: formatting memory management structs

There were no objections beyond Andrew's request for a better changelog.

If there are no further objections, can this be merged, please?

On 05/14/2015 01:10 PM, Sasha Levin wrote:
> This patch series adds knowledge about various memory management structures
> to the standard print functions.
>
> In essence, it allows us to easily print those structures:
>
> printk("%pZp %pZm %pZv", page, mm, vma);
>
> This allows us to customize output when hitting bugs even further, thus
> we introduce VM_BUG() which allows printing anything when hitting a bug
> rather than just a single piece of information.
>
> This also means we can get rid of VM_BUG_ON_* since they're now nothing
> more than a format string.
>
> Changes since RFC:
> - Address comments by Kirill.
>
> Sasha Levin (11):
> mm: debug: format flags in a buffer
> mm: debug: deal with a new family of MM pointers
> mm: debug: dump VMA into a string rather than directly on screen
> mm: debug: dump struct MM into a string rather than directly on
> screen
> mm: debug: dump page into a string rather than directly on screen
> mm: debug: clean unused code
> mm: debug: VM_BUG()
> mm: debug: kill VM_BUG_ON_PAGE
> mm: debug: kill VM_BUG_ON_VMA
> mm: debug: kill VM_BUG_ON_MM
> mm: debug: use VM_BUG() to help with debug output
>
> arch/arm/mm/mmap.c | 2 +-
> arch/frv/mm/elf-fdpic.c | 4 +-
> arch/mips/mm/gup.c | 4 +-
> arch/parisc/kernel/sys_parisc.c | 2 +-
> arch/powerpc/mm/hugetlbpage.c | 2 +-
> arch/powerpc/mm/pgtable_64.c | 4 +-
> arch/s390/mm/gup.c | 2 +-
> arch/s390/mm/mmap.c | 2 +-
> arch/s390/mm/pgtable.c | 6 +--
> arch/sh/mm/mmap.c | 2 +-
> arch/sparc/kernel/sys_sparc_64.c | 4 +-
> arch/sparc/mm/gup.c | 2 +-
> arch/sparc/mm/hugetlbpage.c | 4 +-
> arch/tile/mm/hugetlbpage.c | 2 +-
> arch/x86/kernel/sys_x86_64.c | 2 +-
> arch/x86/mm/gup.c | 8 ++--
> arch/x86/mm/hugetlbpage.c | 2 +-
> arch/x86/mm/pgtable.c | 6 +--
> include/linux/huge_mm.h | 2 +-
> include/linux/hugetlb.h | 2 +-
> include/linux/hugetlb_cgroup.h | 4 +-
> include/linux/mm.h | 22 ++++-----
> include/linux/mmdebug.h | 40 ++++++----------
> include/linux/page-flags.h | 26 +++++-----
> include/linux/pagemap.h | 11 +++--
> include/linux/rmap.h | 2 +-
> kernel/fork.c | 2 +-
> lib/vsprintf.c | 22 +++++++++
> mm/balloon_compaction.c | 4 +-
> mm/cleancache.c | 6 +--
> mm/compaction.c | 2 +-
> mm/debug.c | 98 ++++++++++++++++++++------------------
> mm/filemap.c | 18 +++----
> mm/gup.c | 12 ++---
> mm/huge_memory.c | 50 +++++++++----------
> mm/hugetlb.c | 28 +++++------
> mm/hugetlb_cgroup.c | 2 +-
> mm/internal.h | 8 ++--
> mm/interval_tree.c | 2 +-
> mm/kasan/report.c | 2 +-
> mm/ksm.c | 13 ++---
> mm/memcontrol.c | 48 +++++++++----------
> mm/memory.c | 10 ++--
> mm/memory_hotplug.c | 2 +-
> mm/migrate.c | 6 +--
> mm/mlock.c | 4 +-
> mm/mmap.c | 15 +++---
> mm/mremap.c | 4 +-
> mm/page_alloc.c | 28 +++++------
> mm/page_io.c | 4 +-
> mm/pagewalk.c | 2 +-
> mm/pgtable-generic.c | 8 ++--
> mm/rmap.c | 20 ++++----
> mm/shmem.c | 10 ++--
> mm/slub.c | 4 +-
> mm/swap.c | 39 +++++++--------
> mm/swap_state.c | 16 +++----
> mm/swapfile.c | 8 ++--
> mm/vmscan.c | 24 +++++-----
> 59 files changed, 355 insertions(+), 335 deletions(-)
>

2015-06-30 23:35:56

by David Rientjes

[permalink] [raw]
Subject: Re: [PATCH 05/11] mm: debug: dump page into a string rather than directly on screen

On Thu, 14 May 2015, Sasha Levin wrote:

> diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
> index 202ebdf..8b3f5a0 100644
> --- a/include/linux/mmdebug.h
> +++ b/include/linux/mmdebug.h
> @@ -7,9 +7,7 @@ struct page;
> struct vm_area_struct;
> struct mm_struct;
>
> -extern void dump_page(struct page *page, const char *reason);
> -extern void dump_page_badflags(struct page *page, const char *reason,
> - unsigned long badflags);
> +char *format_page(struct page *page, char *buf, char *end);
>
> #ifdef CONFIG_DEBUG_VM
> char *format_vma(const struct vm_area_struct *vma, char *buf, char *end);
> @@ -18,7 +16,7 @@ char *format_mm(const struct mm_struct *mm, char *buf, char *end);
> #define VM_BUG_ON_PAGE(cond, page) \
> do { \
> if (unlikely(cond)) { \
> - dump_page(page, "VM_BUG_ON_PAGE(" __stringify(cond)")");\
> + pr_emerg("%pZp", page); \
> BUG(); \
> } \
> } while (0)
> diff --git a/lib/vsprintf.c b/lib/vsprintf.c
> index 595bf50..1f045ae 100644
> --- a/lib/vsprintf.c
> +++ b/lib/vsprintf.c
> @@ -1382,6 +1382,8 @@ char *mm_pointer(char *buf, char *end, const void *ptr,
> switch (fmt[1]) {
> case 'm':
> return format_mm(ptr, buf, end);
> + case 'p':
> + return format_page(ptr, buf, end);
> case 'v':
> return format_vma(ptr, buf, end);
> default:
> @@ -1482,9 +1484,10 @@ int kptr_restrict __read_mostly;
> * (legacy clock framework) of the clock
> * - 'Cr' For a clock, it prints the current rate of the clock
> * - 'T' task_struct->comm
> - * - 'Z[mv]' Outputs a readable version of a type of memory management struct:
> + * - 'Z[mpv]' Outputs a readable version of a type of memory management struct:
> * v struct vm_area_struct
> * m struct mm_struct
> + * p struct page
> *
> * Note: The difference between 'S' and 'F' is that on ia64 and ppc64
> * function pointers are really function descriptors, which contain a
> diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
> index fcad832..88b3cae 100644
> --- a/mm/balloon_compaction.c
> +++ b/mm/balloon_compaction.c
> @@ -187,7 +187,7 @@ void balloon_page_putback(struct page *page)
> put_page(page);
> } else {
> WARN_ON(1);
> - dump_page(page, "not movable balloon page");
> + pr_alert("Not movable balloon page:\n%pZp", page);
> }
> unlock_page(page);
> }

I don't know how others feel, but this looks strange to me and seems like
it's only a result of how we must now dump page information
(dump_page(page) is no longer available, we must do pr_alert("%pZp",
page)).

Since we're relying on print formats, this would arguably be better as

pr_alert("Not movable balloon page:\n");
pr_alert("%pZp", page);

to avoid introducing newlines into potentially lengthy messages that need
a specified loglevel like you've done above.

But that's not much different than the existing dump_page()
implementation.

So for this to be worth it, it seems like we'd need a compelling usecase
for something like pr_alert("%pZp %pZv", page, vma) and I'm not sure we're
ever actually going to see that. I would argue that

dump_page(page);
dump_vma(vma);

would be simpler in such circumstances.

I do understand the problem with the current VM_BUG_ON_PAGE() and
VM_BUG_ON_VMA() stuff, and it compels me to ask about just going back to
the normal

VM_BUG_ON(cond);

coupled with dump_page(), dump_vma(), dump_whatever(). It all seems so
much simpler to me.

2015-07-01 08:53:25

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCH 05/11] mm: debug: dump page into a string rather than directly on screen

On Tue, Jun 30, 2015 at 04:35:45PM -0700, David Rientjes wrote:
> I do understand the problem with the current VM_BUG_ON_PAGE() and
> VM_BUG_ON_VMA() stuff, and it compels me to ask about just going back to
> the normal
>
> VM_BUG_ON(cond);
>
> coupled with dump_page(), dump_vma(), dump_whatever(). It all seems so
> much simpler to me.

Is there a sensible way to couple them? I don't see one, short of open-coding
VM_BUG_ON():

if (IS_ENABLED(CONFIG_DEBUG_VM) && cond) {
dump_page(...);
dump_vma(...);
dump_whatever();
BUG();
}

That's too verbose to me to be usable.

BTW, I also tried[1] to solve this problem, but people didn't like that
either.

[1] http://lkml.kernel.org/g/[email protected]

--
Kirill A. Shutemov
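
The one-line form that replaces Kirill's open-coded block above can be sketched in plain C. This is a userspace model of the proposed VM_BUG() semantics, not the kernel implementation: fprintf()/abort() stand in for pr_emerg()/BUG(), and HPAGE_PMD_MASK_DEMO is an illustrative constant.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Userspace sketch of the proposed VM_BUG(cond, fmt, ...): on a failed
 * check, print arbitrary formatted context, then die. The kernel version
 * would use pr_emerg() and BUG(); fprintf()/abort() stand in here. */
#define VM_BUG(cond, fmt, ...)						\
	do {								\
		if (cond) {						\
			fprintf(stderr, "VM_BUG: " fmt, ##__VA_ARGS__);	\
			abort();					\
		}							\
	} while (0)

/* Illustrative stand-in for HPAGE_PMD_MASK (2 MiB huge pages). */
#define HPAGE_PMD_MASK_DEMO	(~((1UL << 21) - 1))

/* Mirrors the pattern used throughout the series: one VM_BUG() line
 * replaces the open-coded dump-then-BUG() block. Returns 1 when the
 * address passes the alignment check without triggering the bug. */
static int pmd_aligned(unsigned long address)
{
	VM_BUG(address & ~HPAGE_PMD_MASK_DEMO, "address = %lx\n", address);
	return 1;
}
```

A call like pmd_aligned(addr) condenses the four-line if/dump/BUG() sequence into a single line, which is the series' core argument for the macro.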

2015-07-01 19:21:51

by Sasha Levin

[permalink] [raw]
Subject: Re: [PATCH 05/11] mm: debug: dump page into a string rather than directly on screen

On 06/30/2015 07:35 PM, David Rientjes wrote:
> I don't know how others feel, but this looks strange to me and seems like
> it's only a result of how we must now dump page information
> (dump_page(page) is no longer available, we must do pr_alert("%pZp",
> page)).
>
> Since we're relying on print formats, this would arguably be better as
>
> pr_alert("Not movable balloon page:\n");
> pr_alert("%pZp", page);
>
> to avoid introducing newlines into potentially lengthy messages that need
> a specified loglevel like you've done above.
>
> But that's not much different than the existing dump_page()
> implementation.
>
> So for this to be worth it, it seems like we'd need a compelling usecase
> for something like pr_alert("%pZp %pZv", page, vma) and I'm not sure we're
> ever actually going to see that. I would argue that
>
> dump_page(page);
> dump_vma(vma);
>
> would be simpler in such circumstances.

I think we can find usecases where we want to dump more information than what's
contained in just one page/vma/mm struct. Things like the following from mm/gup.c:

VM_BUG_ON_PAGE(compound_head(page) != head, page);

Where seeing 'head' would be interesting as well.

Or for VMAs, from include/linux/rmap.h:

VM_BUG_ON_VMA(vma->anon_vma != next->anon_vma, vma);

Would it be interesting to see both vma, and next? Probably.

Or opportunities to add information from other variables, such as in:

VM_BUG_ON_PAGE(stable_node->kpfn != page_to_pfn(oldpage), oldpage);

Is stable_node->kpfn interesting? Might be.


We *could* go ahead and open code all of that, but that's not happening: it's not
intuitive, so people just slap VM_BUG_ON()s in and hope they can figure it out when
those VM_BUG_ON()s trigger.

Are there any pieces of code that open code what you suggested?


Thanks,
Sasha

2015-07-01 21:26:09

by David Rientjes

[permalink] [raw]
Subject: Re: [PATCH 05/11] mm: debug: dump page into a string rather than directly on screen

On Wed, 1 Jul 2015, Sasha Levin wrote:

> On 06/30/2015 07:35 PM, David Rientjes wrote:
> > I don't know how others feel, but this looks strange to me and seems like
> > it's only a result of how we must now dump page information
> > (dump_page(page) is no longer available, we must do pr_alert("%pZp",
> > page)).
> >
> > Since we're relying on print formats, this would arguably be better as
> >
> > pr_alert("Not movable balloon page:\n");
> > pr_alert("%pZp", page);
> >
> > to avoid introducing newlines into potentially lengthy messages that need
> > a specified loglevel like you've done above.
> >
> > But that's not much different than the existing dump_page()
> > implementation.
> >
> > So for this to be worth it, it seems like we'd need a compelling usecase
> > for something like pr_alert("%pZp %pZv", page, vma) and I'm not sure we're
> > ever actually going to see that. I would argue that
> >
> > dump_page(page);
> > dump_vma(vma);
> >
> > would be simpler in such circumstances.
>
> I think we can find usecases where we want to dump more information than what's
> contained in just one page/vma/mm struct. Things like the following from mm/gup.c:
>
> VM_BUG_ON_PAGE(compound_head(page) != head, page);
>
> Where seeing 'head' would be interesting as well.
>

I think it's a debate about whether this would be better off handled as

if (VM_BUG_ON(compound_head(page) != head)) {
dump_page(page);
dump_page(head);
}

and avoid VM_BUG_ON_PAGE() and the new print formats entirely. We can
improve upon existing VM_BUG_ON(), and BUG_ON() itself since the VM isn't
anything special in this regard, to print diagnostic information that may
be helpful, but I don't feel like adding special VM_BUG_ON_*() macros or
printing formats makes any of this simpler.

2015-07-01 21:34:47

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCH 05/11] mm: debug: dump page into a string rather than directly on screen

On Wed, Jul 01, 2015 at 02:25:56PM -0700, David Rientjes wrote:
> On Wed, 1 Jul 2015, Sasha Levin wrote:
>
> > On 06/30/2015 07:35 PM, David Rientjes wrote:
> > > I don't know how others feel, but this looks strange to me and seems like
> > > it's only a result of how we must now dump page information
> > > (dump_page(page) is no longer available, we must do pr_alert("%pZp",
> > > page)).
> > >
> > > Since we're relying on print formats, this would arguably be better as
> > >
> > > pr_alert("Not movable balloon page:\n");
> > > pr_alert("%pZp", page);
> > >
> > > to avoid introducing newlines into potentially lengthy messages that need
> > > a specified loglevel like you've done above.
> > >
> > > But that's not much different than the existing dump_page()
> > > implementation.
> > >
> > > So for this to be worth it, it seems like we'd need a compelling usecase
> > > for something like pr_alert("%pZp %pZv", page, vma) and I'm not sure we're
> > > ever actually going to see that. I would argue that
> > >
> > > dump_page(page);
> > > dump_vma(vma);
> > >
> > > would be simpler in such circumstances.
> >
> > I think we can find usecases where we want to dump more information than what's
> > contained in just one page/vma/mm struct. Things like the following from mm/gup.c:
> >
> > VM_BUG_ON_PAGE(compound_head(page) != head, page);
> >
> > Where seeing 'head' would be interesting as well.
> >
>
> I think it's a debate about whether this would be better off handled as
>
> if (VM_BUG_ON(compound_head(page) != head)) {
> dump_page(page);
> dump_page(head);

Huh? How would we reach this, if VM_BUG_ON() will trigger BUG()?

> }
>
> and avoid VM_BUG_ON_PAGE() and the new print formats entirely. We can
> improve upon existing VM_BUG_ON(), and BUG_ON() itself since the VM isn't
> anything special in this regard, to print diagnostic information that may
> be helpful, but I don't feel like adding special VM_BUG_ON_*() macros or
> printing formats makes any of this simpler.

--
Kirill A. Shutemov

2015-07-01 22:33:48

by Vlastimil Babka

[permalink] [raw]
Subject: Re: [PATCH 05/11] mm: debug: dump page into a string rather than directly on screen

On 1.7.2015 23:25, David Rientjes wrote:
> On Wed, 1 Jul 2015, Sasha Levin wrote:
>
>> On 06/30/2015 07:35 PM, David Rientjes wrote:
>>
>> I think we can find usecases where we want to dump more information than what's
>> contained in just one page/vma/mm struct. Things like the following from mm/gup.c:
>>
>> VM_BUG_ON_PAGE(compound_head(page) != head, page);
>>
>> Where seeing 'head' would be interesting as well.
>>
>
> I think it's a debate about whether this would be better off handled as
>
> if (VM_BUG_ON(compound_head(page) != head)) {
> dump_page(page);
> dump_page(head);
> }
>
> and avoid VM_BUG_ON_PAGE() and the new print formats entirely. We can
> improve upon existing VM_BUG_ON(), and BUG_ON() itself since the VM isn't
> anything special in this regard,

Well, BUG_ON() is just evaluating a condition that results in executing the UD2
instruction, which traps and the handler prints everything. The file:line info
it prints is emitted in a different section, and the handler has to search for
it to print it, using the trapping address. This all to minimize impact on I$,
branch predictors and whatnot.

VM_BUG_ON_PAGE() etc have to actually emit the extra printing code before
triggering UD2. I'm not sure if there's a way to extend the generic mechanism
here. The file:line info would have to also include information about the extra
things we want to dump, and where the handler would find the necessary pointers
(in the registers saved on UD2 exception, or stack). This could probably be done
with some DWARF debuginfo magic, but we know how unreliable that can be. Some of
the data might already have been discarded because the non-error path doesn't need
it, so it would have to be stored somewhere purely for error reporting.

Now we seem to accept that VM_BUG_ON* is more intrusive than BUG_ON() and it's
not expected to be enabled in default distro kernels etc., so it can afford to
pollute the code with extra prints...

> to print diagnostic information that may
> be helpful, but I don't feel like adding special VM_BUG_ON_*() macros or
> printing formats makes any of this simpler.
>

2015-07-01 22:50:36

by Sasha Levin

[permalink] [raw]
Subject: Re: [PATCH 05/11] mm: debug: dump page into a string rather than directly on screen

On 07/01/2015 05:25 PM, David Rientjes wrote:
> On Wed, 1 Jul 2015, Sasha Levin wrote:
>
>> On 06/30/2015 07:35 PM, David Rientjes wrote:
>>> I don't know how others feel, but this looks strange to me and seems like
>>> it's only a result of how we must now dump page information
>>> (dump_page(page) is no longer available, we must do pr_alert("%pZp",
>>> page)).
>>>
>>> Since we're relying on print formats, this would arguably be better as
>>>
>>> pr_alert("Not movable balloon page:\n");
>>> pr_alert("%pZp", page);
>>>
>>> to avoid introducing newlines into potentially lengthy messages that need
>>> a specified loglevel like you've done above.
>>>
>>> But that's not much different than the existing dump_page()
>>> implementation.
>>>
>>> So for this to be worth it, it seems like we'd need a compelling usecase
>>> for something like pr_alert("%pZp %pZv", page, vma) and I'm not sure we're
>>> ever actually going to see that. I would argue that
>>>
>>> dump_page(page);
>>> dump_vma(vma);
>>>
>>> would be simpler in such circumstances.
>>
>> I think we can find usecases where we want to dump more information than what's
>> contained in just one page/vma/mm struct. Things like the following from mm/gup.c:
>>
>> VM_BUG_ON_PAGE(compound_head(page) != head, page);
>>
>> Where seeing 'head' would be interesting as well.
>>
>
> I think it's a debate about whether this would be better off handled as
>
> if (VM_BUG_ON(compound_head(page) != head)) {
> dump_page(page);
> dump_page(head);
> }

Since we'd BUG at VM_BUG_ON(), this would be something closer to:

if (unlikely(compound_head(page) != head)) {
dump_page(page);
dump_page(head);
VM_BUG_ON(1);
}

But my point here was that while one *could* do it that way, no one does because
it's not intuitive. We both agree that in the example above it would be useful to
see both 'page' and 'head', and yet the code that was written didn't dump any of
them. Why? No one wants to write debug code unless it's easy and short.

> and avoid VM_BUG_ON_PAGE() and the new print formats entirely. We can
> improve upon existing VM_BUG_ON(), and BUG_ON() itself since the VM isn't
> anything special in this regard, to print diagnostic information that may
> be helpful, but I don't feel like adding special VM_BUG_ON_*() macros or
> printing formats makes any of this simpler.

This patchset actually kills the VM_BUG_ON_*() macros for exactly that reason:
the VM isn't special at all and doesn't need its own magic code in the form of
VM_BUG_ON_*() macros and dump_*() functions.


Thanks,
Sasha

2015-07-08 23:58:30

by David Rientjes

[permalink] [raw]
Subject: Re: [PATCH 05/11] mm: debug: dump page into a string rather than directly on screen

On Wed, 1 Jul 2015, Sasha Levin wrote:

> Since we'd BUG at VM_BUG_ON(), this would be something closer to:
>
> if (unlikely(compound_head(page) != head)) {
> dump_page(page);
> dump_page(head);
> VM_BUG_ON(1);
> }
>

I was thinking closer to

if (VM_WARN_ON(compound_head(page) != head)) {
...
BUG();
}

so we prefix all output with the typical warning diagnostics, emit
whatever page, vma, etc output we want, and then finally die. The final
BUG() here would have to be replaced by something that suppresses the
repeated output.

If it's really just a warning, then no BUG() needed.

> But my point here was that while one *could* do it that way, no one does because
> it's not intuitive. We both agree that in the example above it would be useful to
> see both 'page' and 'head', and yet the code that was written didn't dump any of
> them. Why? No one wants to write debug code unless it's easy and short.
>

pr_alert("%pZp %pZv", page, vma) isn't shorter than dump_page(page);
dump_vma(vma), but it would be a line shorter. I'm not sure that the
former is easier, though, and it prevents us from ever expanding dump_*()
functions for conditional output.

2015-08-06 15:08:21

by Sasha Levin

[permalink] [raw]
Subject: Re: [PATCH 05/11] mm: debug: dump page into a string rather than directly on screen

On 07/08/2015 07:58 PM, David Rientjes wrote:
> On Wed, 1 Jul 2015, Sasha Levin wrote:
>
>> > Since we'd BUG at VM_BUG_ON(), this would be something closer to:
>> >
>> > if (unlikely(compound_head(page) != head)) {
>> > dump_page(page);
>> > dump_page(head);
>> > VM_BUG_ON(1);
>> > }
>> >
> I was thinking closer to
>
> if (VM_WARN_ON(compound_head(page) != head)) {
> ...
> BUG();
> }
>
> so we prefix all output with the typical warning diagnostics, emit
> whatever page, vma, etc output we want, and then finally die. The final
> BUG() here would have to be replaced by something that suppresses the
> repeated output.
>
> If it's really just a warning, then no BUG() needed.

How is that simpler than getting it all under VM_BUG(), just like the regular
WARN() does?

>> > But my point here was that while one *could* do it that way, no one does because
>> > it's not intuitive. We both agree that in the example above it would be useful to
>> > see both 'page' and 'head', and yet the code that was written didn't dump any of
>> > them. Why? No one wants to write debug code unless it's easy and short.
>> >
> pr_alert("%pZp %pZv", page, vma) isn't shorter than dump_page(page);
> dump_vma(vma), but it would be a line shorter. I'm not sure that the
> former is easier, though, and it prevents us from ever expanding dump_*()
> functions for conditional output.

I'm not objecting to leaving dump_*() for these trivial cases.


Thanks,
Sasha