2018-11-02 12:02:31

by Balbir Singh

Subject: [PATCH] mm/hotplug: Optimize clear_hwpoisoned_pages

In hot remove, we try to clear poisoned pages. A small
optimization that first checks whether num_poisoned_pages
is 0 lets us skip the iteration over nr_pages entirely.

Signed-off-by: Balbir Singh <[email protected]>
---
mm/sparse.c | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/mm/sparse.c b/mm/sparse.c
index 33307fc05c4d..16219c7ddb5f 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -724,6 +724,16 @@ static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
if (!memmap)
return;

+ /*
+ * A further optimization is to have a per-section,
+ * ref-counted num_poisoned_pages, but that would
+ * need more space per memmap. For now just do a
+ * quick global check, which should speed up this
+ * routine in the absence of bad pages.
+ */
+ if (atomic_long_read(&num_poisoned_pages) == 0)
+ return;
+
for (i = 0; i < nr_pages; i++) {
if (PageHWPoison(&memmap[i])) {
atomic_long_sub(1, &num_poisoned_pages);
--
2.17.1



2018-11-02 12:34:29

by Michal Hocko

Subject: Re: [PATCH] mm/hotplug: Optimize clear_hwpoisoned_pages

On Fri 02-11-18 23:00:01, Balbir Singh wrote:
> In hot remove, we try to clear poisoned pages. A small
> optimization that first checks whether num_poisoned_pages
> is 0 lets us skip the iteration over nr_pages entirely.
>
> Signed-off-by: Balbir Singh <[email protected]>

Makes sense to me. It would be great to actually have some numbers,
but the optimization for the normal case is quite obvious.

Acked-by: Michal Hocko <[email protected]>

> ---
> mm/sparse.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 33307fc05c4d..16219c7ddb5f 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -724,6 +724,16 @@ static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
> if (!memmap)
> return;
>
> + /*
> + * A further optimization is to have a per-section,
> + * ref-counted num_poisoned_pages, but that would
> + * need more space per memmap. For now just do a
> + * quick global check, which should speed up this
> + * routine in the absence of bad pages.
> + */
> + if (atomic_long_read(&num_poisoned_pages) == 0)
> + return;
> +
> for (i = 0; i < nr_pages; i++) {
> if (PageHWPoison(&memmap[i])) {
> atomic_long_sub(1, &num_poisoned_pages);
> --
> 2.17.1
>

--
Michal Hocko
SUSE Labs

2018-11-06 23:37:00

by Naoya Horiguchi

Subject: Re: [PATCH] mm/hotplug: Optimize clear_hwpoisoned_pages

On Fri, Nov 02, 2018 at 11:00:01PM +1100, Balbir Singh wrote:
> In hot remove, we try to clear poisoned pages. A small
> optimization that first checks whether num_poisoned_pages
> is 0 lets us skip the iteration over nr_pages entirely.
>
> Signed-off-by: Balbir Singh <[email protected]>

Acked-by: Naoya Horiguchi <[email protected]>

Thanks!