2019-09-10 09:21:29

by Mike Christie

Subject: Re: [RFC PATCH] Add proc interface to set PF_MEMALLOC flags

Forgot to cc linux-mm.

On 09/09/2019 11:28 AM, Mike Christie wrote:
> There are several storage drivers like dm-multipath, iscsi, and nbd that
> have userspace components that can run in the IO path. For example,
> iscsi and nbd's userspace daemons may need to recreate a socket and/or
> send IO on it, and dm-multipath's daemon multipathd may need to send IO
> to figure out the state of paths and re-set them up.
>
> In the kernel these drivers have access to GFP_NOIO/GFP_NOFS and the
> memalloc_*_save/restore functions to control the allocation behavior,
> but for userspace we would end up hitting an allocation that ended up
> writing data back to the same device we are trying to allocate for.
>
> This patch allows the userspace daemon to set the PF_MEMALLOC* flags
> through procfs. It currently only supports PF_MEMALLOC_NOIO, but
> depending on what other drivers and userspace file systems need, for
> the final version I can add the other flags for that file or do a file
> per flag or just do a memalloc_noio file.
>
> Signed-off-by: Mike Christie <[email protected]>
> ---
> Documentation/filesystems/proc.txt | 6 ++++
> fs/proc/base.c | 53 ++++++++++++++++++++++++++++++
> 2 files changed, 59 insertions(+)
>
> diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt
> index 99ca040e3f90..b5456a61a013 100644
> --- a/Documentation/filesystems/proc.txt
> +++ b/Documentation/filesystems/proc.txt
> @@ -46,6 +46,7 @@ Table of Contents
> 3.10 /proc/<pid>/timerslack_ns - Task timerslack value
> 3.11 /proc/<pid>/patch_state - Livepatch patch operation state
> 3.12 /proc/<pid>/arch_status - Task architecture specific information
> + 3.13 /proc/<pid>/memalloc - Control task's memory reclaim behavior
>
> 4 Configuring procfs
> 4.1 Mount options
> @@ -1980,6 +1981,11 @@ Example
> $ cat /proc/6753/arch_status
> AVX512_elapsed_ms: 8
>
> +3.13 /proc/<pid>/memalloc - Control task's memory reclaim behavior
> +-----------------------------------------------------------------------
> +A value of "noio" indicates that when a task allocates memory it will not
> +reclaim memory that requires starting physical IO.
> +
> Description
> -----------
>
> diff --git a/fs/proc/base.c b/fs/proc/base.c
> index ebea9501afb8..c4faa3464602 100644
> --- a/fs/proc/base.c
> +++ b/fs/proc/base.c
> @@ -1223,6 +1223,57 @@ static const struct file_operations proc_oom_score_adj_operations = {
> .llseek = default_llseek,
> };
>
> +static ssize_t memalloc_read(struct file *file, char __user *buf, size_t count,
> + loff_t *ppos)
> +{
> + struct task_struct *task;
> + ssize_t rc = 0;
> +
> + task = get_proc_task(file_inode(file));
> + if (!task)
> + return -ESRCH;
> +
> + if (task->flags & PF_MEMALLOC_NOIO)
> + rc = simple_read_from_buffer(buf, count, ppos, "noio", 4);
> + put_task_struct(task);
> + return rc;
> +}
> +
> +static ssize_t memalloc_write(struct file *file, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct task_struct *task;
> + char buffer[5];
> + int rc = count;
> +
> + memset(buffer, 0, sizeof(buffer));
> + if (count != sizeof(buffer) - 1)
> + return -EINVAL;
> +
> + if (copy_from_user(buffer, buf, count))
> + return -EFAULT;
> + buffer[count] = '\0';
> +
> + task = get_proc_task(file_inode(file));
> + if (!task)
> + return -ESRCH;
> +
> + if (!strcmp(buffer, "noio")) {
> + task->flags |= PF_MEMALLOC_NOIO;
> + } else {
> + rc = -EINVAL;
> + }
> +
> + put_task_struct(task);
> + return rc;
> +}
> +
> +static const struct file_operations proc_memalloc_operations = {
> + .read = memalloc_read,
> + .write = memalloc_write,
> + .llseek = default_llseek,
> +};
> +
> #ifdef CONFIG_AUDIT
> #define TMPBUFLEN 11
> static ssize_t proc_loginuid_read(struct file * file, char __user * buf,
> @@ -3097,6 +3148,7 @@ static const struct pid_entry tgid_base_stuff[] = {
> #ifdef CONFIG_PROC_PID_ARCH_STATUS
> ONE("arch_status", S_IRUGO, proc_pid_arch_status),
> #endif
> + REG("memalloc", S_IRUGO|S_IWUSR, proc_memalloc_operations),
> };
>
> static int proc_tgid_base_readdir(struct file *file, struct dir_context *ctx)
> @@ -3487,6 +3539,7 @@ static const struct pid_entry tid_base_stuff[] = {
> #ifdef CONFIG_PROC_PID_ARCH_STATUS
> ONE("arch_status", S_IRUGO, proc_pid_arch_status),
> #endif
> + REG("memalloc", S_IRUGO|S_IWUSR, proc_memalloc_operations),
> };
>
> static int proc_tid_base_readdir(struct file *file, struct dir_context *ctx)
>
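
For completeness, here is a minimal userspace sketch of how a daemon might
use the interface proposed above, assuming the RFC is applied so that
/proc/thread-self/memalloc exists. set_memalloc_noio() is just a name made
up for the example; note that the write handler above accepts exactly the
four bytes "noio" with no trailing newline:

#include <fcntl.h>
#include <unistd.h>

/* hypothetical helper; the proc file only exists with this RFC applied */
static int set_memalloc_noio(void)
{
	int fd, rc = 0;

	/* thread-self so only the calling thread's PF_ flags are changed */
	fd = open("/proc/thread-self/memalloc", O_WRONLY);
	if (fd < 0)
		return -1;

	/* the handler expects exactly the 4 bytes "noio", no newline */
	if (write(fd, "noio", 4) != 4)
		rc = -1;

	close(fd);
	return rc;
}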


2019-09-10 22:15:46

by Tetsuo Handa

Subject: Re: [RFC PATCH] Add proc interface to set PF_MEMALLOC flags

On 2019/09/10 3:26, Mike Christie wrote:
> Forgot to cc linux-mm.
>
> On 09/09/2019 11:28 AM, Mike Christie wrote:
>> There are several storage drivers like dm-multipath, iscsi, and nbd that
>> have userspace components that can run in the IO path. For example,
>> iscsi and nbd's userspace daemons may need to recreate a socket and/or
>> send IO on it, and dm-multipath's daemon multipathd may need to send IO
>> to figure out the state of paths and re-set them up.
>>
>> In the kernel these drivers have access to GFP_NOIO/GFP_NOFS and the
>> memalloc_*_save/restore functions to control the allocation behavior,
>> but for userspace we would end up hitting an allocation that ended up
>> writing data back to the same device we are trying to allocate for.
>>
>> This patch allows the userspace daemon to set the PF_MEMALLOC* flags
>> through procfs. It currently only supports PF_MEMALLOC_NOIO, but
>> depending on what other drivers and userspace file systems need, for
>> the final version I can add the other flags for that file or do a file
>> per flag or just do a memalloc_noio file.

Interesting patch. But can't we instead globally mask __GFP_NOFS / __GFP_NOIO
rather than playing games with per-thread masking (which suffers from the
inability to propagate the current thread's mask to other threads that are
indirectly involved)?
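
For reference, the in-kernel per-thread mechanism the patch description
alludes to looks roughly like this (memalloc_noio_save()/memalloc_noio_restore()
are the existing helpers in include/linux/sched/mm.h; the connection struct
and reconnect function are made up for illustration):

#include <linux/sched/mm.h>

static void example_reconnect(struct example_conn *conn)
{
	unsigned int noio_flags;

	/* every allocation in this scope is implicitly done as GFP_NOIO */
	noio_flags = memalloc_noio_save();
	example_do_reconnect(conn);	/* may allocate for sockets, bios, ... */
	memalloc_noio_restore(noio_flags);
}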

>> +static ssize_t memalloc_write(struct file *file, const char __user *buf,
>> + size_t count, loff_t *ppos)
>> +{
>> + struct task_struct *task;
>> + char buffer[5];
>> + int rc = count;
>> +
>> + memset(buffer, 0, sizeof(buffer));
>> + if (count != sizeof(buffer) - 1)
>> + return -EINVAL;
>> +
>> + if (copy_from_user(buffer, buf, count))

copy_from_user() / copy_to_user() might involve memory allocation
via a page fault, which would have to be done under the mask? Moreover, since
just open()ing this file can involve memory allocation, do we forbid
open("/proc/thread-self/memalloc")?

>> + return -EFAULT;
>> + buffer[count] = '\0';
>> +
>> + task = get_proc_task(file_inode(file));
>> + if (!task)
>> + return -ESRCH;
>> +
>> + if (!strcmp(buffer, "noio")) {
>> + task->flags |= PF_MEMALLOC_NOIO;
>> + } else {
>> + rc = -EINVAL;
>> + }
>> +
>> + put_task_struct(task);
>> + return rc;
>> +}

2019-09-10 23:31:11

by Kirill A. Shutemov

Subject: Re: [RFC PATCH] Add proc interface to set PF_MEMALLOC flags

On Wed, Sep 11, 2019 at 07:12:06AM +0900, Tetsuo Handa wrote:
> >> +static ssize_t memalloc_write(struct file *file, const char __user *buf,
> >> + size_t count, loff_t *ppos)
> >> +{
> >> + struct task_struct *task;
> >> + char buffer[5];
> >> + int rc = count;
> >> +
> >> + memset(buffer, 0, sizeof(buffer));
> >> + if (count != sizeof(buffer) - 1)
> >> + return -EINVAL;
> >> +
> >> + if (copy_from_user(buffer, buf, count))
>
> copy_from_user() / copy_to_user() might involve memory allocation
> via a page fault, which would have to be done under the mask? Moreover, since
> just open()ing this file can involve memory allocation, do we forbid
> open("/proc/thread-self/memalloc")?

Not saying that I'm okay with the approach in general, but I don't think
this is a problem. The application has to set its allocation policy before
inserting itself into the IO or FS path.
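
That ordering is enough because, once PF_MEMALLOC_NOIO is set on the task,
the page allocator strips __GFP_IO/__GFP_FS from that task's allocations
before going into reclaim, page faults included. Simplified from
current_gfp_context() in include/linux/sched/mm.h:

static inline gfp_t current_gfp_context(gfp_t flags)
{
	/* NOIO is the weaker context, so it takes precedence over NOFS */
	if (current->flags & PF_MEMALLOC_NOIO)
		flags &= ~(__GFP_IO | __GFP_FS);
	else if (current->flags & PF_MEMALLOC_NOFS)
		flags &= ~__GFP_FS;
	return flags;
}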

--
Kirill A. Shutemov

2019-09-11 15:25:44

by Mike Christie

Subject: Re: [RFC PATCH] Add proc interface to set PF_MEMALLOC flags

On 09/10/2019 05:12 PM, Tetsuo Handa wrote:
> On 2019/09/10 3:26, Mike Christie wrote:
>> Forgot to cc linux-mm.
>>
>> On 09/09/2019 11:28 AM, Mike Christie wrote:
>>> There are several storage drivers like dm-multipath, iscsi, and nbd that
>>> have userspace components that can run in the IO path. For example,
>>> iscsi and nbd's userspace daemons may need to recreate a socket and/or
>>> send IO on it, and dm-multipath's daemon multipathd may need to send IO
>>> to figure out the state of paths and re-set them up.
>>>
>>> In the kernel these drivers have access to GFP_NOIO/GFP_NOFS and the
>>> memalloc_*_save/restore functions to control the allocation behavior,
>>> but for userspace we would end up hitting an allocation that ended up
>>> writing data back to the same device we are trying to allocate for.
>>>
>>> This patch allows the userspace daemon to set the PF_MEMALLOC* flags
>>> through procfs. It currently only supports PF_MEMALLOC_NOIO, but
>>> depending on what other drivers and userspace file systems need, for
>>> the final version I can add the other flags for that file or do a file
>>> per flag or just do a memalloc_noio file.
>
> Interesting patch. But can't we instead globally mask __GFP_NOFS / __GFP_NOIO
> rather than playing games with per-thread masking (which suffers from the
> inability to propagate the current thread's mask to other threads that are
> indirectly involved)?

If I understood you correctly, that was discussed in the past:

https://www.spinics.net/lists/linux-fsdevel/msg149035.html

We only need this for specific threads which implement part of a storage
driver in userspace.

>
>>> +static ssize_t memalloc_write(struct file *file, const char __user *buf,
>>> + size_t count, loff_t *ppos)
>>> +{
>>> + struct task_struct *task;
>>> + char buffer[5];
>>> + int rc = count;
>>> +
>>> + memset(buffer, 0, sizeof(buffer));
>>> + if (count != sizeof(buffer) - 1)
>>> + return -EINVAL;
>>> +
>>> + if (copy_from_user(buffer, buf, count))
>
> copy_from_user() / copy_to_user() might involve memory allocation
> via a page fault, which would have to be done under the mask? Moreover, since
> just open()ing this file can involve memory allocation, do we forbid
> open("/proc/thread-self/memalloc")?

I was having the daemons set the flag when they initialize.
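
Roughly, with the hypothetical set_memalloc_noio() helper from the sketch
earlier in the thread and a made-up start_io_worker():

int daemon_init(void)
{
	/*
	 * Mark this thread NOIO before it becomes part of the IO path,
	 * so any allocation it triggers later (page faults included) is
	 * done under the mask.
	 */
	if (set_memalloc_noio() < 0)
		return -1;

	/* only now start servicing the device / recreating sockets */
	return start_io_worker();
}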