2015-08-22 07:53:06

by Yaowei Bai

Subject: [PATCH 1/3] mm/page_alloc: fix a terribly misleading comment

The comment says that the per-cpu batchsize and zone watermarks
are determined by present_pages, which is wrong: they are both
calculated from managed_pages. Fix it.
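
For reference, both calculations read managed_pages; roughly (paraphrased
and trimmed from mm/page_alloc.c of this era, illustrative only, not part
of the patch itself):

static int zone_batchsize(struct zone *zone)
{
	int batch;

	/* The per-cpu pools are sized to ~1/1024th of the zone, capped at 512K */
	batch = zone->managed_pages / 1024;
	if (batch * PAGE_SIZE > 512 * 1024)
		batch = (512 * 1024) / PAGE_SIZE;
	batch /= 4;	/* effectively multiplied by 4 again further down */
	if (batch < 1)
		batch = 1;

	/* rounded down to a power of two minus one; see the full source */
	return rounddown_pow_of_two(batch + batch / 2) - 1;
}

static void __setup_per_zone_wmarks(void)
{
	/* ... */
	/* min_free_kbytes is spread in proportion to each zone's managed_pages */
	tmp = (u64)pages_min * zone->managed_pages;
	do_div(tmp, lowmem_pages);
	zone->watermark[WMARK_MIN] = tmp;
	/* ... */
}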

Signed-off-by: Yaowei Bai <[email protected]>
---
mm/page_alloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5b5240b..c22b133 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6003,7 +6003,7 @@ void __init mem_init_print_info(const char *str)
* set_dma_reserve - set the specified number of pages reserved in the first zone
* @new_dma_reserve: The number of pages to mark reserved
*
- * The per-cpu batchsize and zone watermarks are determined by present_pages.
+ * The per-cpu batchsize and zone watermarks are determined by managed_pages.
* In the DMA zone, a significant percentage may be consumed by kernel image
* and other unfreeable allocations which can skew the watermarks badly. This
* function may optionally be used to account for unfreeable pages in the
--
1.9.1


2015-08-22 07:51:31

by Yaowei Bai

Subject: [PATCH 2/3] mm/page_alloc: add a helper function to check page before alloc/free

The major portions of check_new_page() and free_pages_check() are the
same; introduce a helper function check_one_page() for readability.

Signed-off-by: Yaowei Bai <[email protected]>
---
mm/page_alloc.c | 54 +++++++++++++++++++++++-------------------------------
1 file changed, 23 insertions(+), 31 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c22b133..a0839de 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -707,7 +707,7 @@ out:
zone->free_area[order].nr_free++;
}

-static inline int free_pages_check(struct page *page)
+static inline int check_one_page(struct page *page, bool free)
{
const char *bad_reason = NULL;
unsigned long bad_flags = 0;
@@ -718,10 +718,16 @@ static inline int free_pages_check(struct page *page)
bad_reason = "non-NULL mapping";
if (unlikely(atomic_read(&page->_count) != 0))
bad_reason = "nonzero _count";
- if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_FREE)) {
- bad_reason = "PAGE_FLAGS_CHECK_AT_FREE flag(s) set";
- bad_flags = PAGE_FLAGS_CHECK_AT_FREE;
- }
+ if (free && unlikely(page->flags & PAGE_FLAGS_CHECK_AT_FREE)) {
+ bad_reason = "PAGE_FLAGS_CHECK_AT_FREE flag(s) set";
+ bad_flags = PAGE_FLAGS_CHECK_AT_FREE;
+ }
+
+ if (!free && unlikely(page->flags & PAGE_FLAGS_CHECK_AT_PREP)) {
+ bad_reason = "PAGE_FLAGS_CHECK_AT_PREP flag set";
+ bad_flags = PAGE_FLAGS_CHECK_AT_PREP;
+ }
+
#ifdef CONFIG_MEMCG
if (unlikely(page->mem_cgroup))
bad_reason = "page still charged to cgroup";
@@ -730,6 +736,17 @@ static inline int free_pages_check(struct page *page)
bad_page(page, bad_reason, bad_flags);
return 1;
}
+ return 0;
+}
+
+static inline int free_pages_check(struct page *page)
+{
+ int ret = 0;
+
+ ret = check_one_page(page, true);
+ if (ret)
+ return ret;
+
page_cpupid_reset_last(page);
if (page->flags & PAGE_FLAGS_CHECK_AT_PREP)
page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
@@ -1287,32 +1304,7 @@ static inline void expand(struct zone *zone, struct page *page,
*/
static inline int check_new_page(struct page *page)
{
- const char *bad_reason = NULL;
- unsigned long bad_flags = 0;
-
- if (unlikely(page_mapcount(page)))
- bad_reason = "nonzero mapcount";
- if (unlikely(page->mapping != NULL))
- bad_reason = "non-NULL mapping";
- if (unlikely(atomic_read(&page->_count) != 0))
- bad_reason = "nonzero _count";
- if (unlikely(page->flags & __PG_HWPOISON)) {
- bad_reason = "HWPoisoned (hardware-corrupted)";
- bad_flags = __PG_HWPOISON;
- }
- if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_PREP)) {
- bad_reason = "PAGE_FLAGS_CHECK_AT_PREP flag set";
- bad_flags = PAGE_FLAGS_CHECK_AT_PREP;
- }
-#ifdef CONFIG_MEMCG
- if (unlikely(page->mem_cgroup))
- bad_reason = "page still charged to cgroup";
-#endif
- if (unlikely(bad_reason)) {
- bad_page(page, bad_reason, bad_flags);
- return 1;
- }
- return 0;
+ return check_one_page(page, false);
}

static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
--
1.9.1

2015-08-22 07:48:38

by Yaowei Bai

Subject: [PATCH 3/3] mm/memblock: fix memblock comment

's/amd/and/'

Signed-off-by: Yaowei Bai <[email protected]>
---
include/linux/memblock.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index cc4b019..273aad7 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -304,7 +304,7 @@ static inline void __init memblock_set_bottom_up(bool enable) {}
static inline bool memblock_bottom_up(void) { return false; }
#endif

-/* Flags for memblock_alloc_base() amd __memblock_alloc_base() */
+/* Flags for memblock_alloc_base() and __memblock_alloc_base() */
#define MEMBLOCK_ALLOC_ANYWHERE (~(phys_addr_t)0)
#define MEMBLOCK_ALLOC_ACCESSIBLE 0

--
1.9.1

2015-08-24 10:44:12

by Michal Hocko

Subject: Re: [PATCH 1/3] mm/page_alloc: fix a terribly misleading comment

On Sat 22-08-15 15:40:10, Yaowei Bai wrote:
> The comment says that the per-cpu batchsize and zone watermarks
> are determined by present_pages, which is wrong: they are both
> calculated from managed_pages. Fix it.

This seems to have been missed in b40da04946aa ("mm: use zone->present_pages
instead of zone->managed_pages where appropriate").
>
> Signed-off-by: Yaowei Bai <[email protected]>

Acked-by: Michal Hocko <[email protected]>

> ---
> mm/page_alloc.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 5b5240b..c22b133 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6003,7 +6003,7 @@ void __init mem_init_print_info(const char *str)
> * set_dma_reserve - set the specified number of pages reserved in the first zone
> * @new_dma_reserve: The number of pages to mark reserved
> *
> - * The per-cpu batchsize and zone watermarks are determined by present_pages.
> + * The per-cpu batchsize and zone watermarks are determined by managed_pages.
> * In the DMA zone, a significant percentage may be consumed by kernel image
> * and other unfreeable allocations which can skew the watermarks badly. This
> * function may optionally be used to account for unfreeable pages in the
> --
> 1.9.1
>
>

--
Michal Hocko
SUSE Labs

2015-08-24 10:47:50

by Michal Hocko

Subject: Re: [PATCH 2/3] mm/page_alloc: add a helper function to check page before alloc/free

On Sat 22-08-15 15:40:11, Yaowei Bai wrote:
> The major portions of check_new_page() and free_pages_check() are the
> same; introduce a helper function check_one_page() for readability.
>
> Signed-off-by: Yaowei Bai <[email protected]>
> ---
> mm/page_alloc.c | 54 +++++++++++++++++++++++-------------------------------
> 1 file changed, 23 insertions(+), 31 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c22b133..a0839de 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -707,7 +707,7 @@ out:
> zone->free_area[order].nr_free++;
> }
>
> -static inline int free_pages_check(struct page *page)
> +static inline int check_one_page(struct page *page, bool free)
> {
> const char *bad_reason = NULL;
> unsigned long bad_flags = 0;
> @@ -718,10 +718,16 @@ static inline int free_pages_check(struct page *page)
> bad_reason = "non-NULL mapping";
> if (unlikely(atomic_read(&page->_count) != 0))
> bad_reason = "nonzero _count";
> - if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_FREE)) {
> - bad_reason = "PAGE_FLAGS_CHECK_AT_FREE flag(s) set";
> - bad_flags = PAGE_FLAGS_CHECK_AT_FREE;
> - }
> + if (free && unlikely(page->flags & PAGE_FLAGS_CHECK_AT_FREE)) {
> + bad_reason = "PAGE_FLAGS_CHECK_AT_FREE flag(s) set";
> + bad_flags = PAGE_FLAGS_CHECK_AT_FREE;
> + }
> +
> + if (!free && unlikely(page->flags & PAGE_FLAGS_CHECK_AT_PREP)) {
> + bad_reason = "PAGE_FLAGS_CHECK_AT_PREP flag set";
> + bad_flags = PAGE_FLAGS_CHECK_AT_PREP;
> + }
> +

Wouldn't it be easier to simply give bad_flags as another parameter?
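
For illustration, a minimal sketch of that alternative (an editorial sketch
only, assuming a single check_flags mask parameter; not code that was
posted):

static inline int check_one_page(struct page *page, unsigned long check_flags)
{
	const char *bad_reason = NULL;
	unsigned long bad_flags = 0;

	if (unlikely(page_mapcount(page)))
		bad_reason = "nonzero mapcount";
	if (unlikely(page->mapping != NULL))
		bad_reason = "non-NULL mapping";
	if (unlikely(atomic_read(&page->_count) != 0))
		bad_reason = "nonzero _count";
	/* The caller decides which flags are unexpected in its context. */
	if (unlikely(page->flags & check_flags)) {
		bad_reason = "unexpected page flag(s) set";
		bad_flags = page->flags & check_flags;
	}
#ifdef CONFIG_MEMCG
	if (unlikely(page->mem_cgroup))
		bad_reason = "page still charged to cgroup";
#endif
	if (unlikely(bad_reason)) {
		bad_page(page, bad_reason, bad_flags);
		return 1;
	}
	return 0;
}

free_pages_check() would then call
check_one_page(page, PAGE_FLAGS_CHECK_AT_FREE) and check_new_page() would
call check_one_page(page, PAGE_FLAGS_CHECK_AT_PREP | __PG_HWPOISON).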

> #ifdef CONFIG_MEMCG
> if (unlikely(page->mem_cgroup))
> bad_reason = "page still charged to cgroup";
> @@ -730,6 +736,17 @@ static inline int free_pages_check(struct page *page)
> bad_page(page, bad_reason, bad_flags);
> return 1;
> }
> + return 0;
> +}
> +
> +static inline int free_pages_check(struct page *page)
> +{
> + int ret = 0;
> +
> + ret = check_one_page(page, true);
> + if (ret)
> + return ret;
> +
> page_cpupid_reset_last(page);
> if (page->flags & PAGE_FLAGS_CHECK_AT_PREP)
> page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
> @@ -1287,32 +1304,7 @@ static inline void expand(struct zone *zone, struct page *page,
> */
> static inline int check_new_page(struct page *page)
> {
> - const char *bad_reason = NULL;
> - unsigned long bad_flags = 0;
> -
> - if (unlikely(page_mapcount(page)))
> - bad_reason = "nonzero mapcount";
> - if (unlikely(page->mapping != NULL))
> - bad_reason = "non-NULL mapping";
> - if (unlikely(atomic_read(&page->_count) != 0))
> - bad_reason = "nonzero _count";
> - if (unlikely(page->flags & __PG_HWPOISON)) {
> - bad_reason = "HWPoisoned (hardware-corrupted)";
> - bad_flags = __PG_HWPOISON;
> - }
> - if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_PREP)) {
> - bad_reason = "PAGE_FLAGS_CHECK_AT_PREP flag set";
> - bad_flags = PAGE_FLAGS_CHECK_AT_PREP;
> - }
> -#ifdef CONFIG_MEMCG
> - if (unlikely(page->mem_cgroup))
> - bad_reason = "page still charged to cgroup";
> -#endif
> - if (unlikely(bad_reason)) {
> - bad_page(page, bad_reason, bad_flags);
> - return 1;
> - }
> - return 0;
> + return check_one_page(page, false);
> }
>
> static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
> --
> 1.9.1
>
>

--
Michal Hocko
SUSE Labs

2015-08-24 10:50:37

by Michal Hocko

Subject: Re: [PATCH 3/3] mm/memblock: fix memblock comment

On Sat 22-08-15 15:40:12, Yaowei Bai wrote:
> 's/amd/and/'

Is this really worth it? It doesn't help grepability and just churns the
history.

>
> Signed-off-by: Yaowei Bai <[email protected]>
> ---
> include/linux/memblock.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index cc4b019..273aad7 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -304,7 +304,7 @@ static inline void __init memblock_set_bottom_up(bool enable) {}
> static inline bool memblock_bottom_up(void) { return false; }
> #endif
>
> -/* Flags for memblock_alloc_base() amd __memblock_alloc_base() */
> +/* Flags for memblock_alloc_base() and __memblock_alloc_base() */
> #define MEMBLOCK_ALLOC_ANYWHERE (~(phys_addr_t)0)
> #define MEMBLOCK_ALLOC_ACCESSIBLE 0
>
> --
> 1.9.1
>
>

--
Michal Hocko
SUSE Labs