2015-06-01 13:00:22

by Michal Hocko

Subject: [RFC 0/2] mapping_gfp_mask from the page fault path

Hi,
I somehow forgot about these patches. The previous version was
posted here: http://marc.info/?l=linux-mm&m=142668784122763&w=2. The
first attempt was broken, and even when fixed, ignoring
mapping_gfp_mask in page_cache_read seems too fragile because
filesystems might take locks in their filemap_fault handlers, which
could trigger reclaim recursion problems, as pointed out by Dave:
http://marc.info/?l=linux-mm&m=142682332032293&w=2.

The first patch should be a straightforward fix to obey
mapping_gfp_mask when allocating for a mapping. It can be applied even
without the second one.

The second patch is an attempt to handle mapping_gfp_mask from the
page fault path properly. GFP_IOFS should be safe from the page fault
path in general (we would be quite broken otherwise because there are
places where GFP_KERNEL is used - e.g. pte allocation). MM will
communicate this to the fs layer via struct vm_fault::gfp_mask. If the
fs needs a different allocation context it can overwrite this mask
from its callback. When the code flow gets back to MM we will obey
this gfp_mask (e.g. in page_cache_read). This should be more
appropriate than following mapping_gfp_mask blindly. See the patch
description for more details.
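
For illustration, a filesystem whose fault callback takes a lock that
is also involved in reclaim could restrict the context like this
(a hypothetical sketch, myfs_fault is not part of these patches):

static int myfs_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
        /*
         * Allocations done on our behalf deeper in the fault path
         * (e.g. page_cache_read) must not recurse into the fs, so
         * drop __GFP_FS from the mask the MM layer pre-filled.
         */
        vmf->gfp_mask &= ~__GFP_FS;
        return filemap_fault(vma, vmf);
}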

I am still not sure this is the right way to go, so I am sending this
as an RFC; any comments are highly appreciated.

Thanks!


2015-06-01 13:00:16

by Michal Hocko

Subject: [RFC 1/2] mm: do not ignore mapping_gfp_mask in page cache allocation paths

page_cache_read, do_generic_file_read, __generic_file_splice_read and
__ntfs_grab_cache_pages currently ignore mapping_gfp_mask when calling
add_to_page_cache_lru, which might cause recursion into the fs down in
the direct reclaim path if the mapping really relies on the GFP_NOFS
semantic.

This doesn't seem to be a problem right now because page_cache_read
(the page fault path) doesn't seem to suffer from reclaim recursion
issues, and do_generic_file_read and __generic_file_splice_read
shouldn't be called under fs locks which would deadlock in the reclaim
path either. Still, it is better to obey the mapping gfp mask and
prevent later breakage.
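
To see what the combined mask evaluates to for a GFP_NOFS mapping (a
worked example using the flag values of this kernel version):

/*
 * GFP_KERNEL                == __GFP_WAIT | __GFP_IO | __GFP_FS
 * mapping_gfp_mask(mapping) == GFP_NOFS  == __GFP_WAIT | __GFP_IO
 *
 * GFP_KERNEL & mapping_gfp_mask(mapping) == __GFP_WAIT | __GFP_IO
 *
 * __GFP_FS is dropped, so the LRU insertion can no longer recurse
 * into the filesystem via direct reclaim.
 */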

Signed-off-by: Michal Hocko <[email protected]>
---
fs/ntfs/file.c | 2 +-
fs/splice.c | 2 +-
mm/filemap.c | 6 ++++--
3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/fs/ntfs/file.c b/fs/ntfs/file.c
index 1da9b2d184dc..568c9dbc7e61 100644
--- a/fs/ntfs/file.c
+++ b/fs/ntfs/file.c
@@ -422,7 +422,7 @@ static inline int __ntfs_grab_cache_pages(struct address_space *mapping,
}
}
err = add_to_page_cache_lru(*cached_page, mapping, index,
- GFP_KERNEL);
+ GFP_KERNEL & mapping_gfp_mask(mapping));
if (unlikely(err)) {
if (err == -EEXIST)
continue;
diff --git a/fs/splice.c b/fs/splice.c
index 7d2fbb788fc5..ebd184f24e0d 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -360,7 +360,7 @@ __generic_file_splice_read(struct file *in, loff_t *ppos,
break;

error = add_to_page_cache_lru(page, mapping, index,
- GFP_KERNEL);
+ GFP_KERNEL & mapping_gfp_mask(mapping));
if (unlikely(error)) {
page_cache_release(page);
if (error == -EEXIST)
diff --git a/mm/filemap.c b/mm/filemap.c
index df533a10e8c3..adfc5d2e21c8 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1669,7 +1669,8 @@ static ssize_t do_generic_file_read(struct file *filp, loff_t *ppos,
goto out;
}
error = add_to_page_cache_lru(page, mapping,
- index, GFP_KERNEL);
+ index,
+ GFP_KERNEL & mapping_gfp_mask(mapping));
if (error) {
page_cache_release(page);
if (error == -EEXIST) {
@@ -1770,7 +1771,8 @@ static int page_cache_read(struct file *file, pgoff_t offset)
if (!page)
return -ENOMEM;

- ret = add_to_page_cache_lru(page, mapping, offset, GFP_KERNEL);
+ ret = add_to_page_cache_lru(page, mapping, offset,
+ GFP_KERNEL & mapping_gfp_mask(mapping));
if (ret == 0)
ret = mapping->a_ops->readpage(file, page);
else if (ret == -EEXIST)
--
2.1.4

2015-06-01 13:00:27

by Michal Hocko

Subject: [RFC 2/2] mm: Allow GFP_IOFS for page_cache_read page cache allocation

page_cache_read has historically been using page_cache_alloc_cold to
allocate a new page. This means that mapping_gfp_mask is used as the
base for the gfp_mask. Many filesystems set this mask to GFP_NOFS to
prevent fs recursion issues. page_cache_read is, however, not called
from the fs layer directly so it doesn't normally need this protection.

ceph and ocfs2, which call filemap_fault from their fault handlers,
seem to be OK because they do not take any fs lock before invoking the
generic implementation. xfs, which takes XFS_MMAPLOCK_SHARED, is safe
from the reclaim recursion POV because this lock serializes truncate
and punch hole with the page faults and it doesn't get involved in the
reclaim.

The GFP_NOFS protection might even be harmful. There is a push to fail
GFP_NOFS allocations rather than loop within the allocator indefinitely
with a very limited reclaim ability. Once we start failing those
requests the OOM killer might be triggered prematurely because a page
cache allocation failure is propagated up the page fault path and ends
up in pagefault_out_of_memory.

We cannot play with mapping_gfp_mask directly because that would be
racy wrt. parallel page faults and it might interfere with other users
who really rely on the NOFS semantic of the stored gfp_mask. The mask
also belongs to the inode so changing it from here would even be a
layering violation. What we can do instead is to push the gfp_mask into
struct vm_fault and allow the fs layer to overwrite it should the
callback need to be called with a different allocation context.

Initialize the default to (mapping_gfp_mask | GFP_IOFS) because this
should normally be safe from the page fault path. Why do we care about
mapping_gfp_mask at all then? Because it doesn't only hold reclaim
protection flags; it might also contain zone and movability
restrictions (GFP_DMA32, __GFP_MOVABLE and others) which we have to
respect.
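
To make the composition concrete, assume a filesystem which sets its
mapping mask to GFP_NOFS plus a zone restriction (an illustrative
example using the flag values of this kernel version):

/*
 * mapping_gfp_mask(mapping) == GFP_NOFS | GFP_DMA32
 *                           == __GFP_WAIT | __GFP_IO | __GFP_DMA32
 * GFP_IOFS                  == __GFP_IO | __GFP_FS
 *
 * vmf.gfp_mask == mapping_gfp_mask(mapping) | GFP_IOFS
 *              == __GFP_WAIT | __GFP_IO | __GFP_FS | __GFP_DMA32
 *
 * i.e. full GFP_KERNEL-like reclaim capabilities while the zone
 * restriction from the mapping is preserved.
 */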

Reported-by: Tetsuo Handa <[email protected]>
Signed-off-by: Michal Hocko <[email protected]>
---
include/linux/mm.h | 4 ++++
mm/filemap.c | 9 ++++-----
mm/memory.c | 17 +++++++++++++++++
3 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 76376e04988a..03b8420e123c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -219,10 +219,14 @@ extern pgprot_t protection_map[16];
* ->fault function. The vma's ->fault is responsible for returning a bitmask
* of VM_FAULT_xxx flags that give details about how the fault was handled.
*
+ * MM layer fills up gfp_mask for page allocations but fault handler might
+ * alter it if its implementation requires a different allocation context.
+ *
* pgoff should be used in favour of virtual_address, if possible.
*/
struct vm_fault {
unsigned int flags; /* FAULT_FLAG_xxx flags */
+ gfp_t gfp_mask; /* gfp mask to be used for allocations */
pgoff_t pgoff; /* Logical page offset based on vma */
void __user *virtual_address; /* Faulting virtual address */

diff --git a/mm/filemap.c b/mm/filemap.c
index adfc5d2e21c8..bfbc30ff47a4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1760,19 +1760,18 @@ EXPORT_SYMBOL(generic_file_read_iter);
* This adds the requested page to the page cache if it isn't already there,
* and schedules an I/O to read in its contents from disk.
*/
-static int page_cache_read(struct file *file, pgoff_t offset)
+static int page_cache_read(struct file *file, pgoff_t offset, gfp_t gfp_mask)
{
struct address_space *mapping = file->f_mapping;
struct page *page;
int ret;

do {
- page = page_cache_alloc_cold(mapping);
+ page = __page_cache_alloc(gfp_mask|__GFP_COLD);
if (!page)
return -ENOMEM;

- ret = add_to_page_cache_lru(page, mapping, offset,
- GFP_KERNEL & mapping_gfp_mask(mapping));
+ ret = add_to_page_cache_lru(page, mapping, offset, GFP_KERNEL & gfp_mask);
if (ret == 0)
ret = mapping->a_ops->readpage(file, page);
else if (ret == -EEXIST)
@@ -1955,7 +1954,7 @@ int filemap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
* We're only likely to ever get here if MADV_RANDOM is in
* effect.
*/
- error = page_cache_read(file, offset);
+ error = page_cache_read(file, offset, vmf->gfp_mask);

/*
* The page we want has now been added to the page cache.
diff --git a/mm/memory.c b/mm/memory.c
index 8a2fc9945b46..25ab29560dca 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1949,6 +1949,20 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
copy_user_highpage(dst, src, va, vma);
}

+static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
+{
+ struct file *vm_file = vma->vm_file;
+
+ if (vm_file)
+ return mapping_gfp_mask(vm_file->f_mapping) | GFP_IOFS;
+
+ /*
+ * Special mappings (e.g. VDSO) do not have any file so fake
+ * a default GFP_KERNEL for them.
+ */
+ return GFP_KERNEL;
+}
+
/*
* Notify the address space that the page is about to become writable so that
* it can prohibit this or wait for the page to get into an appropriate state.
@@ -1964,6 +1978,7 @@ static int do_page_mkwrite(struct vm_area_struct *vma, struct page *page,
vmf.virtual_address = (void __user *)(address & PAGE_MASK);
vmf.pgoff = page->index;
vmf.flags = FAULT_FLAG_WRITE|FAULT_FLAG_MKWRITE;
+ vmf.gfp_mask = __get_fault_gfp_mask(vma);
vmf.page = page;
vmf.cow_page = NULL;

@@ -2763,6 +2778,7 @@ static int __do_fault(struct vm_area_struct *vma, unsigned long address,
vmf.pgoff = pgoff;
vmf.flags = flags;
vmf.page = NULL;
+ vmf.gfp_mask = __get_fault_gfp_mask(vma);
vmf.cow_page = cow_page;

ret = vma->vm_ops->fault(vma, &vmf);
@@ -2929,6 +2945,7 @@ static void do_fault_around(struct vm_area_struct *vma, unsigned long address,
vmf.pgoff = pgoff;
vmf.max_pgoff = max_pgoff;
vmf.flags = flags;
+ vmf.gfp_mask = __get_fault_gfp_mask(vma);
vma->vm_ops->map_pages(vma, &vmf);
}

--
2.1.4

2015-06-02 20:22:50

by Andrew Morton

Subject: Re: [RFC 0/2] mapping_gfp_mask from the page fault path

On Mon, 1 Jun 2015 15:00:01 +0200 Michal Hocko <[email protected]> wrote:

> I somehow forgot about these patches. The previous version was
> posted here: http://marc.info/?l=linux-mm&m=142668784122763&w=2. The
> first attempt was broken, and even when fixed, ignoring
> mapping_gfp_mask in page_cache_read seems too fragile because
> filesystems might take locks in their filemap_fault handlers, which
> could trigger reclaim recursion problems, as pointed out by Dave:
> http://marc.info/?l=linux-mm&m=142682332032293&w=2.
>
> The first patch should be a straightforward fix to obey
> mapping_gfp_mask when allocating for a mapping. It can be applied even
> without the second one.

I'm not so sure about that. If only [1/2] is applied then those
filesystems which are setting mapping_gfp_mask to GFP_NOFS will now
actually start using GFP_NOFS from within page_cache_read() etc. The
weaker allocation mode might cause problems.

2015-06-03 13:05:24

by Tetsuo Handa

Subject: Re: [RFC 0/2] mapping_gfp_mask from the page fault path

Andrew Morton wrote:
> On Mon, 1 Jun 2015 15:00:01 +0200 Michal Hocko <[email protected]> wrote:
>
> > I somehow forgot about these patches. The previous version was
> > posted here: http://marc.info/?l=linux-mm&m=142668784122763&w=2. The
> > first attempt was broken, and even when fixed, ignoring
> > mapping_gfp_mask in page_cache_read seems too fragile because
> > filesystems might take locks in their filemap_fault handlers, which
> > could trigger reclaim recursion problems, as pointed out by Dave:
> > http://marc.info/?l=linux-mm&m=142682332032293&w=2.
> >
> > The first patch should be a straightforward fix to obey
> > mapping_gfp_mask when allocating for a mapping. It can be applied even
> > without the second one.
>
> I'm not so sure about that. If only [1/2] is applied then those
> filesystems which are setting mapping_gfp_mask to GFP_NOFS will now
> actually start using GFP_NOFS from within page_cache_read() etc. The
> weaker allocation mode might cause problems.

If [1/2] is applied, the OOM killer will be disabled until [2/2] is also
applied because !__GFP_FS allocations do not invoke the OOM killer.
But both __GFP_FS allocations (e.g. GFP_KERNEL) and !__GFP_FS allocations
(e.g. GFP_NOFS) follow the "loop forever unless order >
PAGE_ALLOC_COSTLY_ORDER, __GFP_NORETRY is given, or the task is chosen
as an OOM victim" rule. And the problem that the system silently hangs
up unless we choose an OOM victim is outside of these patches' scope.

By the way,

Michal Hocko wrote:
> Initialize the default to (mapping_gfp_mask | GFP_IOFS) because this
> should normally be safe from the page fault path. Why do we care about
> mapping_gfp_mask at all then? Because it doesn't only hold reclaim
> protection flags; it might also contain zone and movability
> restrictions (GFP_DMA32, __GFP_MOVABLE and others) which we have to
> respect.

[2/2] says that mapping_gfp_mask(mapping) might contain bits which are
not in GFP_KERNEL. If we do

GFP_KERNEL & mapping_gfp_mask(mapping)

we will drop such bits and that will cause problems. Thus, shouldn't
"GFP_KERNEL" in patch [1/2] be replaced with "mapping_gfp_mask(mapping)"
rather than "GFP_KERNEL & mapping_gfp_mask(mapping)"?

Well, maybe we should define GFP_NOIO, GFP_NOFS, GFP_KERNEL like

#define __GFP_NOWAIT ((__force gfp_t)___GFP_NOWAIT) /* Can not wait and reschedule */
#define __GFP_NOIO ((__force gfp_t)___GFP_NOIO) /* Can not start physical IO */
#define __GFP_NOFS ((__force gfp_t)___GFP_NOFS) /* Can not call down to low-level FS */
#define GFP_NOIO (__GFP_NOFS | __GFP_NOIO)
#define GFP_NOFS (__GFP_NOFS)
#define GFP_KERNEL (0)

so that __GFP_* bits represent requirements rather than permissions?

2015-06-03 13:28:45

by Michal Hocko

Subject: Re: [RFC 0/2] mapping_gfp_mask from the page fault path

On Tue 02-06-15 13:22:41, Andrew Morton wrote:
> On Mon, 1 Jun 2015 15:00:01 +0200 Michal Hocko <[email protected]> wrote:
>
> > I somehow forgot about these patches. The previous version was
> > posted here: http://marc.info/?l=linux-mm&m=142668784122763&w=2. The
> > first attempt was broken, and even when fixed, ignoring
> > mapping_gfp_mask in page_cache_read seems too fragile because
> > filesystems might take locks in their filemap_fault handlers, which
> > could trigger reclaim recursion problems, as pointed out by Dave:
> > http://marc.info/?l=linux-mm&m=142682332032293&w=2.
> >
> > The first patch should be a straightforward fix to obey
> > mapping_gfp_mask when allocating for a mapping. It can be applied even
> > without the second one.
>
> I'm not so sure about that. If only [1/2] is applied then those
> filesystems which are setting mapping_gfp_mask to GFP_NOFS will now
> actually start using GFP_NOFS from within page_cache_read() etc. The
> weaker allocation mode might cause problems.

They are already using the weaker allocation mode in this context
because page_cache_alloc_cold obeys the mapping gfp mask. So all this
patch does is make sure that the add_to_page_cache_lru gfp_mask is in
sync with the other allocations, and I do not see why this would be a
problem. Quite the opposite: if the function was called from a real
GFP_NOFS context we could deadlock with the current code.
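
For reference, page_cache_alloc_cold already derives its mask from the
mapping (include/linux/pagemap.h as of this series):

static inline struct page *page_cache_alloc_cold(struct address_space *x)
{
        return __page_cache_alloc(mapping_gfp_mask(x)|__GFP_COLD);
}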
--
Michal Hocko
SUSE Labs

2015-06-03 13:42:17

by Michal Hocko

Subject: Re: [RFC 0/2] mapping_gfp_mask from the page fault path

On Wed 03-06-15 22:04:22, Tetsuo Handa wrote:
[...]
> Michal Hocko wrote:
> > Initialize the default to (mapping_gfp_mask | GFP_IOFS) because this
> > should normally be safe from the page fault path. Why do we care about
> > mapping_gfp_mask at all then? Because it doesn't only hold reclaim
> > protection flags; it might also contain zone and movability
> > restrictions (GFP_DMA32, __GFP_MOVABLE and others) which we have to
> > respect.
>
> [2/2] says that mapping_gfp_mask(mapping) might contain bits which are
> not in GFP_KERNEL. If we do
>
> GFP_KERNEL & mapping_gfp_mask(mapping)
>
> we will drop such bits and that will cause problems.

No we won't.

> Thus, shouldn't
> "GFP_KERNEL" in patch [1/2] be replaced with "mapping_gfp_mask(mapping)"
> rather than "GFP_KERNEL & mapping_gfp_mask(mapping)"?

Those gfp_masks are for the LRU handling and that is GFP_KERNEL by
default. We only need to drop the bits which are not compatible with
mapping_gfp_mask. We do not care about __GFP_MOVABLE, GFP_DMA32 etc.
there.
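
As a worked example, take a mapping with GFP_NOFS | __GFP_MOVABLE:

/*
 * Page allocation (patch 2/2): the default fault mask
 *   mapping_gfp_mask(mapping) | GFP_IOFS
 * keeps __GFP_MOVABLE, so the data page itself honours the
 * placement hint from the mapping.
 *
 * LRU insertion (patch 1/2):
 *   GFP_KERNEL & (GFP_NOFS | __GFP_MOVABLE) == GFP_NOFS
 * drops __GFP_MOVABLE, which is fine because the radix tree nodes
 * allocated on that path are slab objects and not movable anyway.
 */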
--
Michal Hocko
SUSE Labs

2015-07-08 11:58:24

by Michal Hocko

Subject: Re: [RFC 2/2] mm: Allow GFP_IOFS for page_cache_read page cache allocation

Does anybody have a better idea how to deal with the issue mentioned in
the patch? Or is it considered uninteresting and not worth bothering
with? I am not worried about the current state, but if we want to allow
GFP_NOFS allocations to fail (I have a series of patches to do so and
plan to post an RFC soon) so that users of the allocator can actually
have fallback strategies, then we do not want to keep this side path to
a premature OOM killer open, right?

On Mon 01-06-15 15:00:03, Michal Hocko wrote:
> page_cache_read has historically been using page_cache_alloc_cold to
> allocate a new page. This means that mapping_gfp_mask is used as the
> base for the gfp_mask. Many filesystems set this mask to GFP_NOFS to
> prevent fs recursion issues. page_cache_read is, however, not called
> from the fs layer directly so it doesn't normally need this protection.
>
> ceph and ocfs2, which call filemap_fault from their fault handlers,
> seem to be OK because they do not take any fs lock before invoking the
> generic implementation. xfs, which takes XFS_MMAPLOCK_SHARED, is safe
> from the reclaim recursion POV because this lock serializes truncate
> and punch hole with the page faults and it doesn't get involved in the
> reclaim.
>
> The GFP_NOFS protection might even be harmful. There is a push to fail
> GFP_NOFS allocations rather than loop within the allocator indefinitely
> with a very limited reclaim ability. Once we start failing those
> requests the OOM killer might be triggered prematurely because a page
> cache allocation failure is propagated up the page fault path and ends
> up in pagefault_out_of_memory.
>
> We cannot play with mapping_gfp_mask directly because that would be
> racy wrt. parallel page faults and it might interfere with other users
> who really rely on the NOFS semantic of the stored gfp_mask. The mask
> also belongs to the inode so changing it from here would even be a
> layering violation. What we can do instead is to push the gfp_mask into
> struct vm_fault and allow the fs layer to overwrite it should the
> callback need to be called with a different allocation context.
>
> Initialize the default to (mapping_gfp_mask | GFP_IOFS) because this
> should normally be safe from the page fault path. Why do we care about
> mapping_gfp_mask at all then? Because it doesn't only hold reclaim
> protection flags; it might also contain zone and movability
> restrictions (GFP_DMA32, __GFP_MOVABLE and others) which we have to
> respect.
>
> Reported-by: Tetsuo Handa <[email protected]>
> Signed-off-by: Michal Hocko <[email protected]>
> ---
> include/linux/mm.h | 4 ++++
> mm/filemap.c | 9 ++++-----
> mm/memory.c | 17 +++++++++++++++++
> 3 files changed, 25 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 76376e04988a..03b8420e123c 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -219,10 +219,14 @@ extern pgprot_t protection_map[16];
> * ->fault function. The vma's ->fault is responsible for returning a bitmask
> * of VM_FAULT_xxx flags that give details about how the fault was handled.
> *
> + * MM layer fills up gfp_mask for page allocations but fault handler might
> + * alter it if its implementation requires a different allocation context.
> + *
> * pgoff should be used in favour of virtual_address, if possible.
> */
> struct vm_fault {
> unsigned int flags; /* FAULT_FLAG_xxx flags */
> + gfp_t gfp_mask; /* gfp mask to be used for allocations */
> pgoff_t pgoff; /* Logical page offset based on vma */
> void __user *virtual_address; /* Faulting virtual address */
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index adfc5d2e21c8..bfbc30ff47a4 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1760,19 +1760,18 @@ EXPORT_SYMBOL(generic_file_read_iter);
> * This adds the requested page to the page cache if it isn't already there,
> * and schedules an I/O to read in its contents from disk.
> */
> -static int page_cache_read(struct file *file, pgoff_t offset)
> +static int page_cache_read(struct file *file, pgoff_t offset, gfp_t gfp_mask)
> {
> struct address_space *mapping = file->f_mapping;
> struct page *page;
> int ret;
>
> do {
> - page = page_cache_alloc_cold(mapping);
> + page = __page_cache_alloc(gfp_mask|__GFP_COLD);
> if (!page)
> return -ENOMEM;
>
> - ret = add_to_page_cache_lru(page, mapping, offset,
> - GFP_KERNEL & mapping_gfp_mask(mapping));
> + ret = add_to_page_cache_lru(page, mapping, offset, GFP_KERNEL & gfp_mask);
> if (ret == 0)
> ret = mapping->a_ops->readpage(file, page);
> else if (ret == -EEXIST)
> @@ -1955,7 +1954,7 @@ int filemap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
> * We're only likely to ever get here if MADV_RANDOM is in
> * effect.
> */
> - error = page_cache_read(file, offset);
> + error = page_cache_read(file, offset, vmf->gfp_mask);
>
> /*
> * The page we want has now been added to the page cache.
> diff --git a/mm/memory.c b/mm/memory.c
> index 8a2fc9945b46..25ab29560dca 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1949,6 +1949,20 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> copy_user_highpage(dst, src, va, vma);
> }
>
> +static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
> +{
> + struct file *vm_file = vma->vm_file;
> +
> + if (vm_file)
> + return mapping_gfp_mask(vm_file->f_mapping) | GFP_IOFS;
> +
> + /*
> + * Special mappings (e.g. VDSO) do not have any file so fake
> + * a default GFP_KERNEL for them.
> + */
> + return GFP_KERNEL;
> +}
> +
> /*
> * Notify the address space that the page is about to become writable so that
> * it can prohibit this or wait for the page to get into an appropriate state.
> @@ -1964,6 +1978,7 @@ static int do_page_mkwrite(struct vm_area_struct *vma, struct page *page,
> vmf.virtual_address = (void __user *)(address & PAGE_MASK);
> vmf.pgoff = page->index;
> vmf.flags = FAULT_FLAG_WRITE|FAULT_FLAG_MKWRITE;
> + vmf.gfp_mask = __get_fault_gfp_mask(vma);
> vmf.page = page;
> vmf.cow_page = NULL;
>
> @@ -2763,6 +2778,7 @@ static int __do_fault(struct vm_area_struct *vma, unsigned long address,
> vmf.pgoff = pgoff;
> vmf.flags = flags;
> vmf.page = NULL;
> + vmf.gfp_mask = __get_fault_gfp_mask(vma);
> vmf.cow_page = cow_page;
>
> ret = vma->vm_ops->fault(vma, &vmf);
> @@ -2929,6 +2945,7 @@ static void do_fault_around(struct vm_area_struct *vma, unsigned long address,
> vmf.pgoff = pgoff;
> vmf.max_pgoff = max_pgoff;
> vmf.flags = flags;
> + vmf.gfp_mask = __get_fault_gfp_mask(vma);
> vma->vm_ops->map_pages(vma, &vmf);
> }
>
> --
> 2.1.4
>

--
Michal Hocko
SUSE Labs