2020-07-01 15:30:41

by Dave Hansen

Subject: [PATCH 0/3] [v2] Repair and clean up vm.zone_reclaim_mode sysctl ABI

A previous cleanup accidentally changed the vm.zone_reclaim_mode ABI.

This series restores the ABI and then reorganizes the code to make
the ABI more obvious. Since the single-patch v1[1], I've:

* Restored the RECLAIM_ZONE naming, comment and Documentation now
that the implicit checks for it are known.
* Moved RECLAIM_* definitions to a uapi header
* Added a node_reclaim_enabled() helper

Documentation/admin-guide/sysctl/vm.rst | 10 +++++-----
include/linux/swap.h | 7 +++++++
include/uapi/linux/mempolicy.h | 7 +++++++
mm/khugepaged.c | 2 +-
mm/page_alloc.c | 2 +-
mm/vmscan.c | 3 ---
6 files changed, 21 insertions(+), 10 deletions(-)

1. https://lore.kernel.org/linux-mm/[email protected]/
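
For illustration only (not part of the series): the ABI in question is the
integer read from and written to /proc/sys/vm/zone_reclaim_mode, interpreted
as a bitmask of the RECLAIM_* bits (1 = reclaim, 2 = writeback, 4 = unmap).
A minimal userspace sketch that turns on plain node reclaim:

#include <stdio.h>

int main(void)
{
        /* 1 == RECLAIM_ZONE; 3 would add writeback, 5 would add unmap. */
        FILE *f = fopen("/proc/sys/vm/zone_reclaim_mode", "w");

        if (!f)
                return 1;
        fprintf(f, "%d\n", 1);
        return fclose(f) ? 1 : 0;
}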

Cc: Ben Widawsky <[email protected]>
Cc: Alex Shi <[email protected]>
Cc: Daniel Wagner <[email protected]>
Cc: "Tobin C. Harding" <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Huang Ying <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Daniel Wagner <[email protected]>


2020-07-01 15:32:28

by Dave Hansen

Subject: [PATCH 2/3] mm/vmscan: move RECLAIM* bits to uapi header


From: Dave Hansen <[email protected]>

It is currently not obvious that the RECLAIM_* bits are part of the
uapi since they are defined in vmscan.c. Move them to a uapi header
to make it obvious.

This should have no functional impact.

Signed-off-by: Dave Hansen <[email protected]>
Cc: Ben Widawsky <[email protected]>
Cc: Alex Shi <[email protected]>
Cc: Daniel Wagner <[email protected]>
Cc: "Tobin C. Harding" <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Huang Ying <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Daniel Wagner <[email protected]>

--

Note: This is not cc'd to stable. It does not fix any bugs.
---

b/include/uapi/linux/mempolicy.h | 7 +++++++
b/mm/vmscan.c | 8 --------
2 files changed, 7 insertions(+), 8 deletions(-)

diff -puN include/uapi/linux/mempolicy.h~mm-vmscan-move-RECLAIM-bits-to-uapi include/uapi/linux/mempolicy.h
--- a/include/uapi/linux/mempolicy.h~mm-vmscan-move-RECLAIM-bits-to-uapi 2020-07-01 08:22:12.502955333 -0700
+++ b/include/uapi/linux/mempolicy.h 2020-07-01 08:22:12.508955333 -0700
@@ -62,5 +62,12 @@ enum {
#define MPOL_F_MOF (1 << 3) /* this policy wants migrate on fault */
#define MPOL_F_MORON (1 << 4) /* Migrate On protnone Reference On Node */

+/*
+ * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
+ * ABI. New bits are OK, but existing bits can never change.
+ */
+#define RECLAIM_ZONE (1<<0) /* Run shrink_inactive_list on the zone */
+#define RECLAIM_WRITE (1<<1) /* Writeout pages during reclaim */
+#define RECLAIM_UNMAP (1<<2) /* Unmap pages during reclaim */

#endif /* _UAPI_LINUX_MEMPOLICY_H */
diff -puN mm/vmscan.c~mm-vmscan-move-RECLAIM-bits-to-uapi mm/vmscan.c
--- a/mm/vmscan.c~mm-vmscan-move-RECLAIM-bits-to-uapi 2020-07-01 08:22:12.504955333 -0700
+++ b/mm/vmscan.c 2020-07-01 08:22:12.509955333 -0700
@@ -4091,14 +4091,6 @@ module_init(kswapd_init)
int node_reclaim_mode __read_mostly;

/*
- * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
- * ABI. New bits are OK, but existing bits can never change.
- */
-#define RECLAIM_ZONE (1<<0) /* Run shrink_inactive_list on the zone */
-#define RECLAIM_WRITE (1<<1) /* Writeout pages during reclaim */
-#define RECLAIM_UNMAP (1<<2) /* Unmap pages during reclaim */
-
-/*
* Priority for NODE_RECLAIM. This determines the fraction of pages
* of a node considered for each zone_reclaim. 4 scans 1/16th of
* a zone.
_
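
As an illustrative aside (not part of the patch): once the RECLAIM_* bits live
in the installed uapi headers, a userspace program built against headers from
a kernel with this change can decode vm.zone_reclaim_mode with the same names
the kernel uses. A rough sketch, assuming <linux/mempolicy.h> now carries the
definitions:

#include <stdio.h>
#include <linux/mempolicy.h>    /* RECLAIM_* live here after this patch */

int main(void)
{
        unsigned int mode = 0;
        FILE *f = fopen("/proc/sys/vm/zone_reclaim_mode", "r");

        if (!f)
                return 1;
        if (fscanf(f, "%u", &mode) != 1) {
                fclose(f);
                return 1;
        }
        fclose(f);

        printf("RECLAIM_ZONE:  %s\n", mode & RECLAIM_ZONE  ? "set" : "clear");
        printf("RECLAIM_WRITE: %s\n", mode & RECLAIM_WRITE ? "set" : "clear");
        printf("RECLAIM_UNMAP: %s\n", mode & RECLAIM_UNMAP ? "set" : "clear");
        return 0;
}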

2020-07-01 15:32:45

by Dave Hansen

Subject: [PATCH 3/3] mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks


From: Dave Hansen <[email protected]>

RECLAIM_ZONE was assumed to be unused because it was never explicitly
used in the kernel. However, there were a number of places where it
was checked implicitly by checking 'node_reclaim_mode' for a zero
value.

These zero checks are not great because it is not obvious what a zero
mode *means* in the code. Replace them with a helper which makes it
more obvious: node_reclaim_enabled().

This helper also provides a handy place to explicitly check the
RECLAIM_ZONE bit itself. Check it explicitly there to make it more
obvious where the bit can affect behavior.

This should have no functional impact.

Signed-off-by: Dave Hansen <[email protected]>
Cc: Ben Widawsky <[email protected]>
Cc: Alex Shi <[email protected]>
Cc: Daniel Wagner <[email protected]>
Cc: "Tobin C. Harding" <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Huang Ying <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Daniel Wagner <[email protected]>

--

Note: This is not cc'd to stable. It does not fix any bugs.
---

b/include/linux/swap.h | 7 +++++++
b/mm/khugepaged.c | 2 +-
b/mm/page_alloc.c | 2 +-
3 files changed, 9 insertions(+), 2 deletions(-)

diff -puN include/linux/swap.h~mm-vmscan-node_reclaim_mode_helper include/linux/swap.h
--- a/include/linux/swap.h~mm-vmscan-node_reclaim_mode_helper 2020-07-01 08:22:13.650955330 -0700
+++ b/include/linux/swap.h 2020-07-01 08:22:13.659955330 -0700
@@ -12,6 +12,7 @@
#include <linux/fs.h>
#include <linux/atomic.h>
#include <linux/page-flags.h>
+#include <uapi/linux/mempolicy.h>
#include <asm/page.h>

struct notifier_block;
@@ -374,6 +375,12 @@ extern int sysctl_min_slab_ratio;
#define node_reclaim_mode 0
#endif

+static inline bool node_reclaim_enabled(void)
+{
+ /* Is any node_reclaim_mode bit set? */
+ return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP);
+}
+
extern void check_move_unevictable_pages(struct pagevec *pvec);

extern int kswapd_run(int nid);
diff -puN mm/khugepaged.c~mm-vmscan-node_reclaim_mode_helper mm/khugepaged.c
--- a/mm/khugepaged.c~mm-vmscan-node_reclaim_mode_helper 2020-07-01 08:22:13.652955330 -0700
+++ b/mm/khugepaged.c 2020-07-01 08:22:13.660955330 -0700
@@ -709,7 +709,7 @@ static bool khugepaged_scan_abort(int ni
* If node_reclaim_mode is disabled, then no extra effort is made to
* allocate memory locally.
*/
- if (!node_reclaim_mode)
+ if (!node_reclaim_enabled())
return false;

/* If there is a count for this node already, it must be acceptable */
diff -puN mm/page_alloc.c~mm-vmscan-node_reclaim_mode_helper mm/page_alloc.c
--- a/mm/page_alloc.c~mm-vmscan-node_reclaim_mode_helper 2020-07-01 08:22:13.655955330 -0700
+++ b/mm/page_alloc.c 2020-07-01 08:22:13.662955330 -0700
@@ -3733,7 +3733,7 @@ retry:
if (alloc_flags & ALLOC_NO_WATERMARKS)
goto try_this_zone;

- if (node_reclaim_mode == 0 ||
+ if (!node_reclaim_enabled() ||
!zone_allows_reclaim(ac->preferred_zoneref->zone, zone))
continue;

_

2020-07-01 15:47:13

by Ben Widawsky

Subject: Re: [PATCH 2/3] mm/vmscan: move RECLAIM* bits to uapi header

On 20-07-01 08:26:24, Dave Hansen wrote:
>
> From: Dave Hansen <[email protected]>
>
> It is currently not obvious that the RECLAIM_* bits are part of the
> uapi since they are defined in vmscan.c. Move them to a uapi header
> to make it obvious.
>
> This should have no functional impact.
>
> Signed-off-by: Dave Hansen <[email protected]>
> Cc: Ben Widawsky <[email protected]>
> Cc: Alex Shi <[email protected]>
> Cc: Daniel Wagner <[email protected]>
> Cc: "Tobin C. Harding" <[email protected]>
> Cc: Christoph Lameter <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Huang Ying <[email protected]>
> Cc: Dan Williams <[email protected]>
> Cc: Qian Cai <[email protected]>
> Cc: Daniel Wagner <[email protected]>
>
> --
>
> Note: This is not cc'd to stable. It does not fix any bugs.
> ---
>
> b/include/uapi/linux/mempolicy.h | 7 +++++++
> b/mm/vmscan.c | 8 --------
> 2 files changed, 7 insertions(+), 8 deletions(-)
>
> diff -puN include/uapi/linux/mempolicy.h~mm-vmscan-move-RECLAIM-bits-to-uapi include/uapi/linux/mempolicy.h
> --- a/include/uapi/linux/mempolicy.h~mm-vmscan-move-RECLAIM-bits-to-uapi 2020-07-01 08:22:12.502955333 -0700
> +++ b/include/uapi/linux/mempolicy.h 2020-07-01 08:22:12.508955333 -0700
> @@ -62,5 +62,12 @@ enum {
> #define MPOL_F_MOF (1 << 3) /* this policy wants migrate on fault */
> #define MPOL_F_MORON (1 << 4) /* Migrate On protnone Reference On Node */
>
> +/*
> + * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
> + * ABI. New bits are OK, but existing bits can never change.
> + */
> +#define RECLAIM_ZONE (1<<0) /* Run shrink_inactive_list on the zone */
> +#define RECLAIM_WRITE (1<<1) /* Writeout pages during reclaim */
> +#define RECLAIM_UNMAP (1<<2) /* Unmap pages during reclaim */

Have you considered turning this into an enum while moving it?

>
> #endif /* _UAPI_LINUX_MEMPOLICY_H */
> diff -puN mm/vmscan.c~mm-vmscan-move-RECLAIM-bits-to-uapi mm/vmscan.c
> --- a/mm/vmscan.c~mm-vmscan-move-RECLAIM-bits-to-uapi 2020-07-01 08:22:12.504955333 -0700
> +++ b/mm/vmscan.c 2020-07-01 08:22:12.509955333 -0700
> @@ -4091,14 +4091,6 @@ module_init(kswapd_init)
> int node_reclaim_mode __read_mostly;
>
> /*
> - * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
> - * ABI. New bits are OK, but existing bits can never change.
> - */
> -#define RECLAIM_ZONE (1<<0) /* Run shrink_inactive_list on the zone */
> -#define RECLAIM_WRITE (1<<1) /* Writeout pages during reclaim */
> -#define RECLAIM_UNMAP (1<<2) /* Unmap pages during reclaim */
> -
> -/*
> * Priority for NODE_RECLAIM. This determines the fraction of pages
> * of a node considered for each zone_reclaim. 4 scans 1/16th of
> * a zone.
> _

2020-07-01 15:56:58

by Dave Hansen

Subject: Re: [PATCH 2/3] mm/vmscan: move RECLAIM* bits to uapi header

On 7/1/20 8:46 AM, Ben Widawsky wrote:
>> +/*
>> + * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
>> + * ABI. New bits are OK, but existing bits can never change.
>> + */
>> +#define RECLAIM_ZONE (1<<0) /* Run shrink_inactive_list on the zone */
>> +#define RECLAIM_WRITE (1<<1) /* Writeout pages during reclaim */
>> +#define RECLAIM_UNMAP (1<<2) /* Unmap pages during reclaim */
> Have you considered turning this into an enum while moving it?

The thought occurred to me, but all of the other bits in the uapi file
were defined this way. I decided not to attempt to buck the trend
in their new home.
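
For comparison, the enum form Ben alludes to would look roughly like the
sketch below; it is not what was posted, just the alternative shape:

/* Hypothetical enum form of the same ABI bits, shown only for comparison. */
enum {
        RECLAIM_ZONE  = 1 << 0,  /* Run shrink_inactive_list on the zone */
        RECLAIM_WRITE = 1 << 1,  /* Writeout pages during reclaim */
        RECLAIM_UNMAP = 1 << 2,  /* Unmap pages during reclaim */
};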

2020-07-01 16:01:53

by Ben Widawsky

Subject: Re: [PATCH 0/3] [v2] Repair and clean up vm.zone_reclaim_mode sysctl ABI

On 20-07-01 08:26:21, Dave Hansen wrote:
> A previous cleanup accidentally changed the vm.zone_reclaim_mode ABI.
>
> This series restores the ABI and then reorganizes the code to make
> the ABI more obvious. Since the single-patch v1[1], I've:
>
> * Restored the RECLAIM_ZONE naming, comment and Documentation now
> that the implicit checks for it are known.
> * Moved RECLAIM_* definitions to a uapi header
> * Added a node_reclaim_enabled() helper
>
> Documentation/admin-guide/sysctl/vm.rst | 10 +++++-----
> include/linux/swap.h | 7 +++++++
> include/uapi/linux/mempolicy.h | 7 +++++++
> mm/khugepaged.c | 2 +-
> mm/page_alloc.c | 2 +-
> mm/vmscan.c | 3 ---
> 6 files changed, 21 insertions(+), 10 deletions(-)
>
> 1. https://lore.kernel.org/linux-mm/[email protected]/
>
> Cc: Ben Widawsky <[email protected]>
> Cc: Alex Shi <[email protected]>
> Cc: Daniel Wagner <[email protected]>
> Cc: "Tobin C. Harding" <[email protected]>
> Cc: Christoph Lameter <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Huang Ying <[email protected]>
> Cc: Dan Williams <[email protected]>
> Cc: Qian Cai <[email protected]>
> Cc: Daniel Wagner <[email protected]>

Series is:
Reviewed-by: Ben Widawsky <[email protected]>

I was more thorough this time in checking all uses of node_reclaim_mode :-). I
do think that, in patch 2/3, using an enum would be a little better, as I've
mentioned there.

2020-07-01 20:05:49

by David Rientjes

Subject: Re: [PATCH 3/3] mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks

On Wed, 1 Jul 2020, Dave Hansen wrote:

> diff -puN include/linux/swap.h~mm-vmscan-node_reclaim_mode_helper include/linux/swap.h
> --- a/include/linux/swap.h~mm-vmscan-node_reclaim_mode_helper 2020-07-01 08:22:13.650955330 -0700
> +++ b/include/linux/swap.h 2020-07-01 08:22:13.659955330 -0700
> @@ -12,6 +12,7 @@
> #include <linux/fs.h>
> #include <linux/atomic.h>
> #include <linux/page-flags.h>
> +#include <uapi/linux/mempolicy.h>
> #include <asm/page.h>
>
> struct notifier_block;
> @@ -374,6 +375,12 @@ extern int sysctl_min_slab_ratio;
> #define node_reclaim_mode 0
> #endif
>
> +static inline bool node_reclaim_enabled(void)
> +{
> + /* Is any node_reclaim_mode bit set? */
> + return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP);
> +}
> +
> extern void check_move_unevictable_pages(struct pagevec *pvec);
>
> extern int kswapd_run(int nid);

If a user writes a bit that isn't a RECLAIM_* bit to vm.zone_reclaim_mode
today, it acts as though RECLAIM_ZONE is enabled: we try to reclaim in
zonelist order before falling back to the next zone in the page allocator.
The sysctl doesn't enforce any max value :/ I don't know if there is any
such user, but this would break them if there is.

Should this simply be return !!node_reclaim_mode?
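
To make the difference concrete, a sketch of the two candidate checks (the
function names are made up for this comparison): with the masked form an
undefined value such as 8 reads as "disabled", while the plain truth test
preserves today's behavior of treating any nonzero value as "enabled".

static inline bool node_reclaim_enabled_known_bits(void)
{
        /* An undefined value such as 8 evaluates as "disabled" here. */
        return node_reclaim_mode & (RECLAIM_ZONE | RECLAIM_WRITE | RECLAIM_UNMAP);
}

static inline bool node_reclaim_enabled_any_bit(void)
{
        /* Matches today's zero check: any nonzero value means "enabled". */
        return !!node_reclaim_mode;
}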

2020-07-01 20:06:26

by David Rientjes

Subject: Re: [PATCH 2/3] mm/vmscan: move RECLAIM* bits to uapi header

On Wed, 1 Jul 2020, Dave Hansen wrote:

>
> From: Dave Hansen <[email protected]>
>
> It is currently not obvious that the RECLAIM_* bits are part of the
> uapi since they are defined in vmscan.c. Move them to a uapi header
> to make it obvious.
>
> This should have no functional impact.
>
> Signed-off-by: Dave Hansen <[email protected]>
> Cc: Ben Widawsky <[email protected]>
> Cc: Alex Shi <[email protected]>
> Cc: Daniel Wagner <[email protected]>
> Cc: "Tobin C. Harding" <[email protected]>
> Cc: Christoph Lameter <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Huang Ying <[email protected]>
> Cc: Dan Williams <[email protected]>
> Cc: Qian Cai <[email protected]>
> Cc: Daniel Wagner <[email protected]>

Acked-by: David Rientjes <[email protected]>

2020-07-01 20:07:31

by Ben Widawsky

Subject: Re: [PATCH 3/3] mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks

On 20-07-01 13:03:01, David Rientjes wrote:
> On Wed, 1 Jul 2020, Dave Hansen wrote:
>
> > diff -puN include/linux/swap.h~mm-vmscan-node_reclaim_mode_helper include/linux/swap.h
> > --- a/include/linux/swap.h~mm-vmscan-node_reclaim_mode_helper 2020-07-01 08:22:13.650955330 -0700
> > +++ b/include/linux/swap.h 2020-07-01 08:22:13.659955330 -0700
> > @@ -12,6 +12,7 @@
> > #include <linux/fs.h>
> > #include <linux/atomic.h>
> > #include <linux/page-flags.h>
> > +#include <uapi/linux/mempolicy.h>
> > #include <asm/page.h>
> >
> > struct notifier_block;
> > @@ -374,6 +375,12 @@ extern int sysctl_min_slab_ratio;
> > #define node_reclaim_mode 0
> > #endif
> >
> > +static inline bool node_reclaim_enabled(void)
> > +{
> > + /* Is any node_reclaim_mode bit set? */
> > + return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP);
> > +}
> > +
> > extern void check_move_unevictable_pages(struct pagevec *pvec);
> >
> > extern int kswapd_run(int nid);
>
> If a user writes a bit that isn't a RECLAIM_* bit to vm.zone_reclaim_mode
> today, it acts as though RECLAIM_ZONE is enabled: we try to reclaim in
> zonelist order before falling back to the next zone in the page allocator.
> The sysctl doesn't enforce any max value :/ I don't know if there is any
> such user, but this would break them if there is.
>
> Should this simply be return !!node_reclaim_mode?
>

I don't think so, because I don't think anything else validates that the
unused bits remain unused.

2020-07-01 21:32:41

by Dave Hansen

Subject: Re: [PATCH 3/3] mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks

On 7/1/20 1:04 PM, Ben Widawsky wrote:
>> +static inline bool node_reclaim_enabled(void)
>> +{
>> + /* Is any node_reclaim_mode bit set? */
>> + return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP);
>> +}
>> +
>> extern void check_move_unevictable_pages(struct pagevec *pvec);
>>
>> extern int kswapd_run(int nid);
> If a user writes a bit that isn't a RECLAIM_* bit to vm.zone_reclaim_mode
> today, it acts as though RECLAIM_ZONE is enabled: we try to reclaim in
> zonelist order before falling back to the next zone in the page allocator.
> The sysctl doesn't enforce any max value :/ I don't know if there is any
> such user, but this would break them if there is.
>
> Should this simply be return !!node_reclaim_mode?

You're right that there _could_ be a user-visible behavior change here.
But, if there were a change, it would be for a bit which wasn't even
mentioned in the documentation. Somebody would have had to look at the
doc mentioning 1, 2, and 4 and then write an 8. If they did that, they're asking
for trouble because we could have defined the '8' bit to do nasty things
like auto-demote all your memory. :)

I'll mention it in the changelog, but I still think we should check the
actual, known bits rather than check for 0.

BTW, in the hardware, they almost invariably make unused bits "reserved"
and do mean things like #GP if someone tries to set them. This is a
case where the kernel probably should have done the same. It would have
saved us the trouble of asking these questions now. Maybe we should
even do that going forward.

2020-07-01 22:03:44

by David Rientjes

Subject: Re: [PATCH 3/3] mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks

On Wed, 1 Jul 2020, Dave Hansen wrote:

> On 7/1/20 1:04 PM, Ben Widawsky wrote:
> >> +static inline bool node_reclaim_enabled(void)
> >> +{
> >> + /* Is any node_reclaim_mode bit set? */
> >> + return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP);
> >> +}
> >> +
> >> extern void check_move_unevictable_pages(struct pagevec *pvec);
> >>
> >> extern int kswapd_run(int nid);
> > If a user writes a bit that isn't a RECLAIM_* bit to vm.zone_reclaim_mode
> > today, it acts as though RECLAIM_ZONE is enabled: we try to reclaim in
> > zonelist order before falling back to the next zone in the page allocator.
> > The sysctl doesn't enforce any max value :/ I don't know if there is any
> > such user, but this would break them if there is.
> >
> > Should this simply be return !!node_reclaim_mode?
>
> You're right that there _could_ be a user-visible behavior change here.
> But, if there were a change, it would be for a bit which wasn't even
> mentioned in the documentation. Somebody would have had to look at the
> doc mentioning 1, 2, and 4 and then write an 8. If they did that, they're asking
> for trouble because we could have defined the '8' bit to do nasty things
> like auto-demote all your memory. :)
>
> I'll mention it in the changelog, but I still think we should check the
> actual, known bits rather than check for 0.
>
> BTW, in the hardware, they almost invariably make unused bits "reserved"
> and do mean things like #GP if someone tries to set them. This is a
> case where the kernel probably should have done the same. It would have
> saved us the trouble of asking these questions now. Maybe we should
> even do that going forward.
>

Maybe enforce it in a sysctl handler so the user catches any errors, which
would be better than silently accepting some policy that doesn't exist?

RECLAIM_UNMAP and/or RECLAIM_WRITE should likely also get -EINVAL if set
without RECLAIM_ZONE: they are no-ops without RECLAIM_ZONE. This would
likely have caught the problem with commit 648b5cf368e0 ("mm/vmscan:
remove unused RECLAIM_OFF/RECLAIM_ZONE") if it had already been in place.

I don't feel strongly about this, so feel free to ignore.
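
A hedged sketch of the handler idea above (the handler name, its wiring, and
the proc_dointvec signature are assumptions, not code from this thread):
reject undefined bits, and reject RECLAIM_WRITE/RECLAIM_UNMAP without
RECLAIM_ZONE, instead of accepting them silently.

static int zone_reclaim_mode_sysctl_handler(struct ctl_table *table, int write,
                                            void *buffer, size_t *lenp, loff_t *ppos)
{
        int new_mode = node_reclaim_mode;
        struct ctl_table tmp = *table;
        int ret;

        tmp.data = &new_mode;
        ret = proc_dointvec(&tmp, write, buffer, lenp, ppos);
        if (ret || !write)
                return ret;

        /* Refuse bits that are not part of the documented ABI. */
        if (new_mode & ~(RECLAIM_ZONE | RECLAIM_WRITE | RECLAIM_UNMAP))
                return -EINVAL;

        /* WRITE/UNMAP are no-ops without ZONE; refuse that combination too. */
        if ((new_mode & (RECLAIM_WRITE | RECLAIM_UNMAP)) && !(new_mode & RECLAIM_ZONE))
                return -EINVAL;

        node_reclaim_mode = new_mode;
        return 0;
}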