2018-05-16 05:43:34

by Christoph Hellwig

Subject: vm_fault_t conversion, for real

Hi all,

this series tries to actually turn vm_fault_t into a type that can be
typechecked, and then deals with the fallout, instead of sprinkling
random annotations around without context.

The first patch fixes a real bug in orangefs, the second and third fix
mismatched existing vm_fault_t annotations on the same function, and the
fourth removes an unused export that got in the way.  The patches in
between do some not quite trivial conversions, and the last one does the
trivial mass annotation and flips vm_fault_t to a __bitwise unsigned
int - basing it on unsigned int means we also get plain compiler type
checking for the new ->fault signature even without sparse.
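
To make that concrete, a minimal sketch of the end state - the names
match what the last patch introduces, but treat this as an
illustration, not a quote from it:

typedef unsigned int __bitwise vm_fault_t;

#define VM_FAULT_SIGBUS ((__force vm_fault_t)0x0002)

struct vm_operations_struct {
        /*
         * An old handler still returning plain int no longer matches
         * this pointer type, so gcc warns even without sparse:
         */
        vm_fault_t (*fault)(struct vm_fault *vmf);
        /* ... */
};

static vm_fault_t broken_fault(struct vm_fault *vmf)
{
        /*
         * sparse: incorrect type in return expression
         * (expected restricted vm_fault_t, got int)
         */
        return -EIO;
}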

This has survived an x86 allyesconfig build and got a SUCCESS from the
buildbot, which I don't really trust - I'm pretty sure there are bits
and pieces hiding in other architectures that it hasn't caught.

The sparse annotations are manually verified for the core MM code and
a few other interesting bits (e.g. DAX and the x86 fault code).

The series is against linux-next as of 2018/05/15 to make sure any
annotations in subsystem trees are picked up.


2018-05-16 05:43:35

by Christoph Hellwig

Subject: [PATCH 01/14] orangefs: don't return errno values from ->fault

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/orangefs/file.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/fs/orangefs/file.c b/fs/orangefs/file.c
index 26358efbf794..b4a25cd4f3fa 100644
--- a/fs/orangefs/file.c
+++ b/fs/orangefs/file.c
@@ -528,18 +528,16 @@ static long orangefs_ioctl(struct file *file, unsigned int cmd, unsigned long ar
return ret;
}

-static int orangefs_fault(struct vm_fault *vmf)
+static vm_fault_t orangefs_fault(struct vm_fault *vmf)
{
struct file *file = vmf->vma->vm_file;
int rc;
- rc = orangefs_inode_getattr(file->f_mapping->host, 0, 1,
- STATX_SIZE);
- if (rc == -ESTALE)
- rc = -EIO;
+
+ rc = orangefs_inode_getattr(file->f_mapping->host, 0, 1, STATX_SIZE);
if (rc) {
gossip_err("%s: orangefs_inode_getattr failed, "
"rc:%d:.\n", __func__, rc);
- return rc;
+ return VM_FAULT_SIGBUS;
}
return filemap_fault(vmf);
}
--
2.17.0
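
The fix above also shows the shape that almost every conversion in this
series follows: keep errnos in a plain int, and translate to a
VM_FAULT_* code exactly once at the boundary.  Roughly (the helper name
below is a placeholder, not a real kernel function):

static vm_fault_t example_fault(struct vm_fault *vmf)
{
        int err;        /* errno space: 0 or -Exxx */

        err = some_fs_operation(vmf->vma->vm_file);     /* placeholder */
        if (err)
                return (err == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS;

        return filemap_fault(vmf);      /* already returns vm_fault_t */
}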

2018-05-16 05:43:36

by Christoph Hellwig

Subject: [PATCH 02/14] fs: make the filemap_page_mkwrite prototype consistent

The !CONFIG_MMU version didn't agree with the rest of the kernel.

Signed-off-by: Christoph Hellwig <[email protected]>
---
mm/filemap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 52517f28e6f4..cf21ced98eff 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2748,7 +2748,7 @@ int generic_file_readonly_mmap(struct file *file, struct vm_area_struct *vma)
return generic_file_mmap(file, vma);
}
#else
-int filemap_page_mkwrite(struct vm_fault *vmf)
+vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
{
return -ENOSYS;
}
--
2.17.0

2018-05-16 05:43:37

by Christoph Hellwig

Subject: [PATCH 03/14] dax: make the dax_iomap_fault prototype consistent

Signed-off-by: Christoph Hellwig <[email protected]>
---
include/linux/dax.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/dax.h b/include/linux/dax.h
index dc65ece825ee..a292bccdc274 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -183,7 +183,7 @@ void dax_flush(struct dax_device *dax_dev, void *addr, size_t size);

ssize_t dax_iomap_rw(struct kiocb *iocb, struct iov_iter *iter,
const struct iomap_ops *ops);
-int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
+vm_fault_t dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
pfn_t *pfnp, int *errp, const struct iomap_ops *ops);
vm_fault_t dax_finish_sync_fault(struct vm_fault *vmf,
enum page_entry_size pe_size, pfn_t pfn);
--
2.17.0

2018-05-16 05:43:38

by Christoph Hellwig

Subject: [PATCH 04/14] mm: remove the unused device_private_entry_fault export

Signed-off-by: Christoph Hellwig <[email protected]>
---
kernel/memremap.c | 1 -
1 file changed, 1 deletion(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index db4e1a373e5f..59ee3b604b39 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -65,7 +65,6 @@ int device_private_entry_fault(struct vm_area_struct *vma,
*/
return page->pgmap->page_fault(vma, addr, page, flags, pmdp);
}
-EXPORT_SYMBOL(device_private_entry_fault);
#endif /* CONFIG_DEVICE_PRIVATE */

static void pgmap_radix_release(struct resource *res, unsigned long end_pgoff)
--
2.17.0

2018-05-16 05:43:39

by Christoph Hellwig

Subject: [PATCH 05/14] ceph: untangle ceph_filemap_fault

Streamline the code to have a somewhat natural flow, and separate the
errno values from the VM_FAULT_* values.

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/ceph/addr.c | 100 +++++++++++++++++++++++++------------------------
1 file changed, 51 insertions(+), 49 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 5f7ad3d0df2e..6e80894ca073 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1428,15 +1428,18 @@ static void ceph_restore_sigs(sigset_t *oldset)
/*
* vm ops
*/
-static int ceph_filemap_fault(struct vm_fault *vmf)
+static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct inode *inode = file_inode(vma->vm_file);
+ struct address_space *mapping = inode->i_mapping;
struct ceph_inode_info *ci = ceph_inode(inode);
struct ceph_file_info *fi = vma->vm_file->private_data;
- struct page *pinned_page = NULL;
+ struct page *pinned_page = NULL, *page;
loff_t off = vmf->pgoff << PAGE_SHIFT;
- int want, got, ret;
+ int want, got, err = 0;
+ vm_fault_t ret = 0;
+ bool did_fault = false;
sigset_t oldset;

ceph_block_sigs(&oldset);
@@ -1449,9 +1452,9 @@ static int ceph_filemap_fault(struct vm_fault *vmf)
want = CEPH_CAP_FILE_CACHE;

got = 0;
- ret = ceph_get_caps(ci, CEPH_CAP_FILE_RD, want, -1, &got, &pinned_page);
- if (ret < 0)
- goto out_restore;
+ err = ceph_get_caps(ci, CEPH_CAP_FILE_RD, want, -1, &got, &pinned_page);
+ if (err < 0)
+ goto out_errno;

dout("filemap_fault %p %llu~%zd got cap refs on %s\n",
inode, off, (size_t)PAGE_SIZE, ceph_cap_string(got));
@@ -1462,8 +1465,8 @@ static int ceph_filemap_fault(struct vm_fault *vmf)
ceph_add_rw_context(fi, &rw_ctx);
ret = filemap_fault(vmf);
ceph_del_rw_context(fi, &rw_ctx);
- } else
- ret = -EAGAIN;
+ did_fault = true;
+ }

dout("filemap_fault %p %llu~%zd dropping cap refs on %s ret %d\n",
inode, off, (size_t)PAGE_SIZE, ceph_cap_string(got), ret);
@@ -1471,57 +1474,55 @@ static int ceph_filemap_fault(struct vm_fault *vmf)
put_page(pinned_page);
ceph_put_cap_refs(ci, got);

- if (ret != -EAGAIN)
+ if (did_fault)
goto out_restore;

/* read inline data */
if (off >= PAGE_SIZE) {
/* does not support inline data > PAGE_SIZE */
ret = VM_FAULT_SIGBUS;
+ goto out_restore;
+ }
+
+ page = find_or_create_page(mapping, 0,
+ mapping_gfp_constraint(mapping, ~__GFP_FS));
+ if (!page) {
+ ret = VM_FAULT_OOM;
+ goto out_inline;
+ }
+
+ err = __ceph_do_getattr(inode, page, CEPH_STAT_CAP_INLINE_DATA, true);
+ if (err < 0 || off >= i_size_read(inode)) {
+ unlock_page(page);
+ put_page(page);
+ if (err < 0)
+ goto out_errno;
+ ret = VM_FAULT_SIGBUS;
} else {
- int ret1;
- struct address_space *mapping = inode->i_mapping;
- struct page *page = find_or_create_page(mapping, 0,
- mapping_gfp_constraint(mapping,
- ~__GFP_FS));
- if (!page) {
- ret = VM_FAULT_OOM;
- goto out_inline;
- }
- ret1 = __ceph_do_getattr(inode, page,
- CEPH_STAT_CAP_INLINE_DATA, true);
- if (ret1 < 0 || off >= i_size_read(inode)) {
- unlock_page(page);
- put_page(page);
- if (ret1 < 0)
- ret = ret1;
- else
- ret = VM_FAULT_SIGBUS;
- goto out_inline;
- }
- if (ret1 < PAGE_SIZE)
- zero_user_segment(page, ret1, PAGE_SIZE);
+ if (err < PAGE_SIZE)
+ zero_user_segment(page, err, PAGE_SIZE);
else
flush_dcache_page(page);
SetPageUptodate(page);
vmf->page = page;
ret = VM_FAULT_MAJOR | VM_FAULT_LOCKED;
-out_inline:
- dout("filemap_fault %p %llu~%zd read inline data ret %d\n",
- inode, off, (size_t)PAGE_SIZE, ret);
}
+
+out_inline:
+ dout("filemap_fault %p %llu~%zd read inline data ret %d\n",
+ inode, off, (size_t)PAGE_SIZE, ret);
out_restore:
ceph_restore_sigs(&oldset);
- if (ret < 0)
- ret = (ret == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS;
-
return ret;
+out_errno:
+ ret = (err == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS;
+ goto out_restore;
}

/*
* Reuse write_begin here for simplicity.
*/
-static int ceph_page_mkwrite(struct vm_fault *vmf)
+static vm_fault_t ceph_page_mkwrite(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct inode *inode = file_inode(vma->vm_file);
@@ -1532,7 +1533,8 @@ static int ceph_page_mkwrite(struct vm_fault *vmf)
loff_t off = page_offset(page);
loff_t size = i_size_read(inode);
size_t len;
- int want, got, ret;
+ int want, got, err = 0;
+ vm_fault_t ret = 0;
sigset_t oldset;

prealloc_cf = ceph_alloc_cap_flush();
@@ -1547,10 +1549,10 @@ static int ceph_page_mkwrite(struct vm_fault *vmf)
lock_page(page);
locked_page = page;
}
- ret = ceph_uninline_data(vma->vm_file, locked_page);
+ err = ceph_uninline_data(vma->vm_file, locked_page);
if (locked_page)
unlock_page(locked_page);
- if (ret < 0)
+ if (err < 0)
goto out_free;
}

@@ -1567,9 +1569,9 @@ static int ceph_page_mkwrite(struct vm_fault *vmf)
want = CEPH_CAP_FILE_BUFFER;

got = 0;
- ret = ceph_get_caps(ci, CEPH_CAP_FILE_WR, want, off + len,
+ err = ceph_get_caps(ci, CEPH_CAP_FILE_WR, want, off + len,
&got, NULL);
- if (ret < 0)
+ if (err < 0)
goto out_free;

dout("page_mkwrite %p %llu~%zd got cap refs on %s\n",
@@ -1587,13 +1589,13 @@ static int ceph_page_mkwrite(struct vm_fault *vmf)
break;
}

- ret = ceph_update_writeable_page(vma->vm_file, off, len, page);
- if (ret >= 0) {
+ err = ceph_update_writeable_page(vma->vm_file, off, len, page);
+ if (err >= 0) {
/* success. we'll keep the page locked. */
set_page_dirty(page);
ret = VM_FAULT_LOCKED;
}
- } while (ret == -EAGAIN);
+ } while (err == -EAGAIN);

if (ret == VM_FAULT_LOCKED ||
ci->i_inline_version != CEPH_INLINE_NONE) {
@@ -1608,13 +1610,13 @@ static int ceph_page_mkwrite(struct vm_fault *vmf)
}

dout("page_mkwrite %p %llu~%zd dropping cap refs on %s ret %d\n",
- inode, off, len, ceph_cap_string(got), ret);
+ inode, off, len, ceph_cap_string(got), err);
ceph_put_cap_refs(ci, got);
out_free:
ceph_restore_sigs(&oldset);
ceph_free_cap_flush(prealloc_cf);
- if (ret < 0)
- ret = (ret == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS;
+ if (err < 0)
+ ret = (err == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS;
return ret;
}

--
2.17.0
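
One detail worth spelling out in the hunks above: the old code smuggled
a "no page cache, try the inline data path" state through the return
value as -EAGAIN, which has no representation in vm_fault_t space.  The
rewrite tracks that state in an explicit bool instead.  In outline (the
condition name is a stand-in for the inline-version check):

        /* before: errno sentinel mixed into the fault return value */
        ret = -EAGAIN;
        if (can_use_pagecache)
                ret = filemap_fault(vmf);
        if (ret != -EAGAIN)
                goto out_restore;

        /* after: the state is explicit, ret stays a pure vm_fault_t */
        bool did_fault = false;
        if (can_use_pagecache) {
                ret = filemap_fault(vmf);
                did_fault = true;
        }
        if (did_fault)
                goto out_restore;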

2018-05-16 05:43:40

by Christoph Hellwig

Subject: [PATCH 06/14] btrfs: separate errno from VM_FAULT_* values

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/btrfs/ctree.h | 2 +-
fs/btrfs/inode.c | 19 ++++++++++---------
2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 1485cd130e2b..02a0de73c1d1 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -3203,7 +3203,7 @@ int btrfs_merge_bio_hook(struct page *page, unsigned long offset,
size_t size, struct bio *bio,
unsigned long bio_flags);
void btrfs_set_range_writeback(void *private_data, u64 start, u64 end);
-int btrfs_page_mkwrite(struct vm_fault *vmf);
+vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf);
int btrfs_readpage(struct file *file, struct page *page);
void btrfs_evict_inode(struct inode *inode);
int btrfs_write_inode(struct inode *inode, struct writeback_control *wbc);
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index ec9db248c499..f4f03f0f4556 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -8824,7 +8824,7 @@ static void btrfs_invalidatepage(struct page *page, unsigned int offset,
* beyond EOF, then the page is guaranteed safe against truncation until we
* unlock the page.
*/
-int btrfs_page_mkwrite(struct vm_fault *vmf)
+vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
{
struct page *page = vmf->page;
struct inode *inode = file_inode(vmf->vma->vm_file);
@@ -8836,7 +8836,8 @@ int btrfs_page_mkwrite(struct vm_fault *vmf)
char *kaddr;
unsigned long zero_start;
loff_t size;
- int ret;
+ vm_fault_t ret;
+ int err;
int reserved = 0;
u64 reserved_space;
u64 page_start;
@@ -8858,14 +8859,14 @@ int btrfs_page_mkwrite(struct vm_fault *vmf)
* end up waiting indefinitely to get a lock on the page currently
* being processed by btrfs_page_mkwrite() function.
*/
- ret = btrfs_delalloc_reserve_space(inode, &data_reserved, page_start,
+ err = btrfs_delalloc_reserve_space(inode, &data_reserved, page_start,
reserved_space);
- if (!ret) {
- ret = file_update_time(vmf->vma->vm_file);
+ if (!err) {
+ err = file_update_time(vmf->vma->vm_file);
reserved = 1;
}
- if (ret) {
- if (ret == -ENOMEM)
+ if (err) {
+ if (err == -ENOMEM)
ret = VM_FAULT_OOM;
else /* -ENOSPC, -EIO, etc */
ret = VM_FAULT_SIGBUS;
@@ -8927,9 +8928,9 @@ int btrfs_page_mkwrite(struct vm_fault *vmf)
EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG,
0, 0, &cached_state);

- ret = btrfs_set_extent_delalloc(inode, page_start, end, 0,
+ err = btrfs_set_extent_delalloc(inode, page_start, end, 0,
&cached_state, 0);
- if (ret) {
+ if (err) {
unlock_extent_cached(io_tree, page_start, page_end,
&cached_state);
ret = VM_FAULT_SIGBUS;
--
2.17.0

2018-05-16 05:43:41

by Christoph Hellwig

Subject: [PATCH 07/14] ext4: separate errno from VM_FAULT_* values

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/ext4/ext4.h | 4 ++--
fs/ext4/inode.c | 30 +++++++++++++++---------------
2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index fa52b7dd4542..48592d0edf3e 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -2463,8 +2463,8 @@ extern int ext4_writepage_trans_blocks(struct inode *);
extern int ext4_chunk_trans_blocks(struct inode *, int nrblocks);
extern int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
loff_t lstart, loff_t lend);
-extern int ext4_page_mkwrite(struct vm_fault *vmf);
-extern int ext4_filemap_fault(struct vm_fault *vmf);
+extern vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf);
+extern vm_fault_t ext4_filemap_fault(struct vm_fault *vmf);
extern qsize_t *ext4_get_reserved_space(struct inode *inode);
extern int ext4_get_projid(struct inode *inode, kprojid_t *projid);
extern void ext4_da_update_reserve_space(struct inode *inode,
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 95bc48f5c88b..fe49045a2832 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -6106,27 +6106,27 @@ static int ext4_bh_unmapped(handle_t *handle, struct buffer_head *bh)
return !buffer_mapped(bh);
}

-int ext4_page_mkwrite(struct vm_fault *vmf)
+vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct page *page = vmf->page;
loff_t size;
unsigned long len;
- int ret;
+ vm_fault_t ret;
struct file *file = vma->vm_file;
struct inode *inode = file_inode(file);
struct address_space *mapping = inode->i_mapping;
handle_t *handle;
get_block_t *get_block;
- int retries = 0;
+ int retries = 0, err;

sb_start_pagefault(inode->i_sb);
file_update_time(vma->vm_file);

down_read(&EXT4_I(inode)->i_mmap_sem);

- ret = ext4_convert_inline_data(inode);
- if (ret)
+ err = ext4_convert_inline_data(inode);
+ if (err)
goto out_ret;

/* Delalloc case is easy... */
@@ -6134,9 +6134,9 @@ int ext4_page_mkwrite(struct vm_fault *vmf)
!ext4_should_journal_data(inode) &&
!ext4_nonda_switch(inode->i_sb)) {
do {
- ret = block_page_mkwrite(vma, vmf,
+ err = block_page_mkwrite(vma, vmf,
ext4_da_get_block_prep);
- } while (ret == -ENOSPC &&
+ } while (err == -ENOSPC &&
ext4_should_retry_alloc(inode->i_sb, &retries));
goto out_ret;
}
@@ -6181,8 +6181,8 @@ int ext4_page_mkwrite(struct vm_fault *vmf)
ret = VM_FAULT_SIGBUS;
goto out;
}
- ret = block_page_mkwrite(vma, vmf, get_block);
- if (!ret && ext4_should_journal_data(inode)) {
+ err = block_page_mkwrite(vma, vmf, get_block);
+ if (!err && ext4_should_journal_data(inode)) {
if (ext4_walk_page_buffers(handle, page_buffers(page), 0,
PAGE_SIZE, NULL, do_journal_get_write_access)) {
unlock_page(page);
@@ -6193,24 +6193,24 @@ int ext4_page_mkwrite(struct vm_fault *vmf)
ext4_set_inode_state(inode, EXT4_STATE_JDATA);
}
ext4_journal_stop(handle);
- if (ret == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries))
+ if (err == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries))
goto retry_alloc;
out_ret:
- ret = block_page_mkwrite_return(ret);
+ ret = block_page_mkwrite_return(err);
out:
up_read(&EXT4_I(inode)->i_mmap_sem);
sb_end_pagefault(inode->i_sb);
return ret;
}

-int ext4_filemap_fault(struct vm_fault *vmf)
+vm_fault_t ext4_filemap_fault(struct vm_fault *vmf)
{
struct inode *inode = file_inode(vmf->vma->vm_file);
- int err;
+ vm_fault_t ret;

down_read(&EXT4_I(inode)->i_mmap_sem);
- err = filemap_fault(vmf);
+ ret = filemap_fault(vmf);
up_read(&EXT4_I(inode)->i_mmap_sem);

- return err;
+ return ret;
}
--
2.17.0
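
ext4 can funnel every error through a single translation point because
block_page_mkwrite_return() already maps errnos to fault codes.  The
helper in include/linux/buffer_head.h looks roughly like this (quoted
from memory - and note it still returns a plain int, so it is a
candidate for the same annotation treatment):

static inline int block_page_mkwrite_return(int err)
{
        if (err == 0)
                return VM_FAULT_LOCKED;
        if (err == -EFAULT || err == -EAGAIN)
                return VM_FAULT_NOPAGE;
        if (err == -ENOMEM)
                return VM_FAULT_OOM;
        /* -ENOSPC, -ENODEV, -EIO, etc */
        return VM_FAULT_SIGBUS;
}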

2018-05-16 05:43:43

by Christoph Hellwig

Subject: [PATCH 09/14] ubifs: separate errno from VM_FAULT_* values

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/ubifs/file.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
index 1acb2ff505e6..7c1a2e1c3de5 100644
--- a/fs/ubifs/file.c
+++ b/fs/ubifs/file.c
@@ -1513,7 +1513,7 @@ static int ubifs_releasepage(struct page *page, gfp_t unused_gfp_flags)
* mmap()d file has taken write protection fault and is being made writable.
* UBIFS must ensure page is budgeted for.
*/
-static int ubifs_vm_page_mkwrite(struct vm_fault *vmf)
+static vm_fault_t ubifs_vm_page_mkwrite(struct vm_fault *vmf)
{
struct page *page = vmf->page;
struct inode *inode = file_inode(vmf->vma->vm_file);
@@ -1521,6 +1521,7 @@ static int ubifs_vm_page_mkwrite(struct vm_fault *vmf)
struct timespec now = current_time(inode);
struct ubifs_budget_req req = { .new_page = 1 };
int err, update_time;
+ vm_fault_t ret = 0;

dbg_gen("ino %lu, pg %lu, i_size %lld", inode->i_ino, page->index,
i_size_read(inode));
@@ -1601,8 +1602,8 @@ static int ubifs_vm_page_mkwrite(struct vm_fault *vmf)
unlock_page(page);
ubifs_release_budget(c, &req);
if (err)
- err = VM_FAULT_SIGBUS;
- return err;
+ ret = VM_FAULT_SIGBUS;
+ return ret;
}

static const struct vm_operations_struct ubifs_file_vm_ops = {
--
2.17.0

2018-05-16 05:43:42

by Christoph Hellwig

Subject: [PATCH 08/14] ocfs2: separate errno from VM_FAULT_* values

Signed-off-by: Christoph Hellwig <[email protected]>
---
fs/ocfs2/mmap.c | 36 +++++++++++++++++++-----------------
1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/fs/ocfs2/mmap.c b/fs/ocfs2/mmap.c
index fb9a20e3d608..e75c1fc5333e 100644
--- a/fs/ocfs2/mmap.c
+++ b/fs/ocfs2/mmap.c
@@ -44,11 +44,11 @@
#include "ocfs2_trace.h"


-static int ocfs2_fault(struct vm_fault *vmf)
+static vm_fault_t ocfs2_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
sigset_t oldset;
- int ret;
+ vm_fault_t ret;

ocfs2_block_signals(&oldset);
ret = filemap_fault(vmf);
@@ -59,10 +59,10 @@ static int ocfs2_fault(struct vm_fault *vmf)
return ret;
}

-static int __ocfs2_page_mkwrite(struct file *file, struct buffer_head *di_bh,
- struct page *page)
+static vm_fault_t __ocfs2_page_mkwrite(struct file *file,
+ struct buffer_head *di_bh, struct page *page)
{
- int ret = VM_FAULT_NOPAGE;
+ vm_fault_t ret = VM_FAULT_NOPAGE;
struct inode *inode = file_inode(file);
struct address_space *mapping = inode->i_mapping;
loff_t pos = page_offset(page);
@@ -71,6 +71,7 @@ static int __ocfs2_page_mkwrite(struct file *file, struct buffer_head *di_bh,
struct page *locked_page = NULL;
void *fsdata;
loff_t size = i_size_read(inode);
+ int err;

last_index = (size - 1) >> PAGE_SHIFT;

@@ -105,12 +106,12 @@ static int __ocfs2_page_mkwrite(struct file *file, struct buffer_head *di_bh,
if (page->index == last_index)
len = ((size - 1) & ~PAGE_MASK) + 1;

- ret = ocfs2_write_begin_nolock(mapping, pos, len, OCFS2_WRITE_MMAP,
+ err = ocfs2_write_begin_nolock(mapping, pos, len, OCFS2_WRITE_MMAP,
&locked_page, &fsdata, di_bh, page);
- if (ret) {
- if (ret != -ENOSPC)
- mlog_errno(ret);
- if (ret == -ENOMEM)
+ if (err) {
+ if (err != -ENOSPC)
+ mlog_errno(err);
+ if (err == -ENOMEM)
ret = VM_FAULT_OOM;
else
ret = VM_FAULT_SIGBUS;
@@ -121,20 +122,21 @@ static int __ocfs2_page_mkwrite(struct file *file, struct buffer_head *di_bh,
ret = VM_FAULT_NOPAGE;
goto out;
}
- ret = ocfs2_write_end_nolock(mapping, pos, len, len, fsdata);
- BUG_ON(ret != len);
+ err = ocfs2_write_end_nolock(mapping, pos, len, len, fsdata);
+ BUG_ON(err != len);
ret = VM_FAULT_LOCKED;
out:
return ret;
}

-static int ocfs2_page_mkwrite(struct vm_fault *vmf)
+static vm_fault_t ocfs2_page_mkwrite(struct vm_fault *vmf)
{
struct page *page = vmf->page;
struct inode *inode = file_inode(vmf->vma->vm_file);
struct buffer_head *di_bh = NULL;
sigset_t oldset;
- int ret;
+ vm_fault_t ret = 0;
+ int err;

sb_start_pagefault(inode->i_sb);
ocfs2_block_signals(&oldset);
@@ -144,10 +146,10 @@ static int ocfs2_page_mkwrite(struct vm_fault *vmf)
* node. Taking the data lock will also ensure that we don't
* attempt page truncation as part of a downconvert.
*/
- ret = ocfs2_inode_lock(inode, &di_bh, 1);
- if (ret < 0) {
+ err = ocfs2_inode_lock(inode, &di_bh, 1);
+ if (err < 0) {
mlog_errno(ret);
- if (ret == -ENOMEM)
+ if (err == -ENOMEM)
ret = VM_FAULT_OOM;
else
ret = VM_FAULT_SIGBUS;
--
2.17.0

2018-05-16 05:43:47

by Christoph Hellwig

Subject: [PATCH 13/14] mm: move arch specific VM_FAULT_* flags to mm.h

Various architectures define their own internal VM_FAULT_* flags.  Not
sure a public header like mm.h is a good place for them, but keeping
them inside the arch code with possible conflicts also seems like a bad
idea.  Maybe we just need to stop overloading the return value instead.
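
The overloading is easy to see side by side - the same bit value means
different things depending on the architecture (values copied from the
hunks below):

/* arm, arm64, unicore32 */
#define VM_FAULT_BADMAP         0x010000
#define VM_FAULT_BADACCESS      0x020000

/* s390 - 0x010000 is taken by a different flag here */
#define VM_FAULT_BADCONTEXT     0x010000
#define VM_FAULT_BADMAP         0x020000
#define VM_FAULT_BADACCESS      0x040000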

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/arm/mm/fault.c | 3 ---
arch/arm64/mm/fault.c | 3 ---
arch/s390/mm/fault.c | 6 ------
arch/unicore32/mm/fault.c | 3 ---
include/linux/mm.h | 7 +++++++
5 files changed, 7 insertions(+), 15 deletions(-)

diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 32034543f49c..b696eabccf60 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -201,9 +201,6 @@ void do_bad_area(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
}

#ifdef CONFIG_MMU
-#define VM_FAULT_BADMAP 0x010000
-#define VM_FAULT_BADACCESS 0x020000
-
/*
* Check that the permissions on the VMA allow for the fault which occurred.
* If we encountered a write fault, we must have write permission, otherwise
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 91c53a7d2575..3d0b1f8eacce 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -318,9 +318,6 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
}
}

-#define VM_FAULT_BADMAP 0x010000
-#define VM_FAULT_BADACCESS 0x020000
-
static int __do_page_fault(struct mm_struct *mm, unsigned long addr,
unsigned int mm_flags, unsigned long vm_flags,
struct task_struct *tsk)
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index e074480d3598..48c781ae25d0 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -44,12 +44,6 @@
#define __SUBCODE_MASK 0x0600
#define __PF_RES_FIELD 0x8000000000000000ULL

-#define VM_FAULT_BADCONTEXT 0x010000
-#define VM_FAULT_BADMAP 0x020000
-#define VM_FAULT_BADACCESS 0x040000
-#define VM_FAULT_SIGNAL 0x080000
-#define VM_FAULT_PFAULT 0x100000
-
enum fault_type {
KERNEL_FAULT,
USER_FAULT,
diff --git a/arch/unicore32/mm/fault.c b/arch/unicore32/mm/fault.c
index 381473412937..6c3c1a82925f 100644
--- a/arch/unicore32/mm/fault.c
+++ b/arch/unicore32/mm/fault.c
@@ -148,9 +148,6 @@ void do_bad_area(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
__do_kernel_fault(mm, addr, fsr, regs);
}

-#define VM_FAULT_BADMAP 0x010000
-#define VM_FAULT_BADACCESS 0x020000
-
/*
* Check that the permissions on the VMA allow for the fault which occurred.
* If we encountered a write fault, we must have write permission, otherwise
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 338b8a1afb02..64d09e3afc24 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1250,6 +1250,13 @@ static inline void clear_page_pfmemalloc(struct page *page)
* and needs fsync() to complete (for
* synchronous page faults in DAX) */

+/* Only for use in architecture specific page fault handling: */
+#define VM_FAULT_BADMAP 0x010000
+#define VM_FAULT_BADACCESS 0x020000
+#define VM_FAULT_BADCONTEXT 0x040000
+#define VM_FAULT_SIGNAL 0x080000
+#define VM_FAULT_PFAULT 0x100000
+
#define VM_FAULT_ERROR (VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \
VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \
VM_FAULT_FALLBACK)
--
2.17.0

2018-05-16 05:43:44

by Christoph Hellwig

Subject: [PATCH 10/14] vgem: separate errno from VM_FAULT_* values

And streamline the code in vgem_fault with early returns so that it is
a little bit more readable.

Signed-off-by: Christoph Hellwig <[email protected]>
---
drivers/gpu/drm/vgem/vgem_drv.c | 51 +++++++++++++++------------------
1 file changed, 23 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index 2524ff116f00..a261e0aab83a 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -61,12 +61,13 @@ static void vgem_gem_free_object(struct drm_gem_object *obj)
kfree(vgem_obj);
}

-static int vgem_gem_fault(struct vm_fault *vmf)
+static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_vgem_gem_object *obj = vma->vm_private_data;
/* We don't use vmf->pgoff since that has the fake offset */
unsigned long vaddr = vmf->address;
+ struct page *page;
int ret;
loff_t num_pages;
pgoff_t page_offset;
@@ -85,35 +86,29 @@ static int vgem_gem_fault(struct vm_fault *vmf)
ret = 0;
}
mutex_unlock(&obj->pages_lock);
- if (ret) {
- struct page *page;
-
- page = shmem_read_mapping_page(
- file_inode(obj->base.filp)->i_mapping,
- page_offset);
- if (!IS_ERR(page)) {
- vmf->page = page;
- ret = 0;
- } else switch (PTR_ERR(page)) {
- case -ENOSPC:
- case -ENOMEM:
- ret = VM_FAULT_OOM;
- break;
- case -EBUSY:
- ret = VM_FAULT_RETRY;
- break;
- case -EFAULT:
- case -EINVAL:
- ret = VM_FAULT_SIGBUS;
- break;
- default:
- WARN_ON(PTR_ERR(page));
- ret = VM_FAULT_SIGBUS;
- break;
- }
+ if (!ret)
+ return 0;
+
+ page = shmem_read_mapping_page(file_inode(obj->base.filp)->i_mapping,
+ page_offset);
+ if (!IS_ERR(page)) {
+ vmf->page = page;
+ return 0;
+ }

+ switch (PTR_ERR(page)) {
+ case -ENOSPC:
+ case -ENOMEM:
+ return VM_FAULT_OOM;
+ case -EBUSY:
+ return VM_FAULT_RETRY;
+ case -EFAULT:
+ case -EINVAL:
+ return VM_FAULT_SIGBUS;
+ default:
+ WARN_ON(PTR_ERR(page));
+ return VM_FAULT_SIGBUS;
}
- return ret;
}

static const struct vm_operations_struct vgem_gem_vm_ops = {
--
2.17.0

2018-05-16 05:43:45

by Christoph Hellwig

Subject: [PATCH 11/14] ttm: separate errno from VM_FAULT_* values

Signed-off-by: Christoph Hellwig <[email protected]>
---
drivers/gpu/drm/ttm/ttm_bo_vm.c | 42 +++++++++++++++++----------------
1 file changed, 22 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 8eba95b3c737..255e7801f62c 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -43,10 +43,11 @@

#define TTM_BO_VM_NUM_PREFAULT 16

-static int ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo,
+static vm_fault_t ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo,
struct vm_fault *vmf)
{
- int ret = 0;
+ vm_fault_t ret = 0;
+ int err = 0;

if (likely(!bo->moving))
goto out_unlock;
@@ -77,8 +78,8 @@ static int ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo,
/*
* Ordinary wait.
*/
- ret = dma_fence_wait(bo->moving, true);
- if (unlikely(ret != 0)) {
+ err = dma_fence_wait(bo->moving, true);
+ if (unlikely(err != 0)) {
ret = (ret != -ERESTARTSYS) ? VM_FAULT_SIGBUS :
VM_FAULT_NOPAGE;
goto out_unlock;
@@ -104,7 +105,7 @@ static unsigned long ttm_bo_io_mem_pfn(struct ttm_buffer_object *bo,
+ page_offset;
}

-static int ttm_bo_vm_fault(struct vm_fault *vmf)
+static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct ttm_buffer_object *bo = (struct ttm_buffer_object *)
@@ -115,7 +116,8 @@ static int ttm_bo_vm_fault(struct vm_fault *vmf)
unsigned long pfn;
struct ttm_tt *ttm = NULL;
struct page *page;
- int ret;
+ vm_fault_t ret;
+ int err;
int i;
unsigned long address = vmf->address;
struct ttm_mem_type_manager *man =
@@ -128,9 +130,9 @@ static int ttm_bo_vm_fault(struct vm_fault *vmf)
* for reserve, and if it fails, retry the fault after waiting
* for the buffer to become unreserved.
*/
- ret = ttm_bo_reserve(bo, true, true, NULL);
- if (unlikely(ret != 0)) {
- if (ret != -EBUSY)
+ err = ttm_bo_reserve(bo, true, true, NULL);
+ if (unlikely(err != 0)) {
+ if (err != -EBUSY)
return VM_FAULT_NOPAGE;

if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) {
@@ -162,8 +164,8 @@ static int ttm_bo_vm_fault(struct vm_fault *vmf)
}

if (bdev->driver->fault_reserve_notify) {
- ret = bdev->driver->fault_reserve_notify(bo);
- switch (ret) {
+ err = bdev->driver->fault_reserve_notify(bo);
+ switch (err) {
case 0:
break;
case -EBUSY:
@@ -191,13 +193,13 @@ static int ttm_bo_vm_fault(struct vm_fault *vmf)
goto out_unlock;
}

- ret = ttm_mem_io_lock(man, true);
- if (unlikely(ret != 0)) {
+ err = ttm_mem_io_lock(man, true);
+ if (unlikely(err != 0)) {
ret = VM_FAULT_NOPAGE;
goto out_unlock;
}
- ret = ttm_mem_io_reserve_vm(bo);
- if (unlikely(ret != 0)) {
+ err = ttm_mem_io_reserve_vm(bo);
+ if (unlikely(err != 0)) {
ret = VM_FAULT_SIGBUS;
goto out_io_unlock;
}
@@ -265,21 +267,21 @@ static int ttm_bo_vm_fault(struct vm_fault *vmf)
}

if (vma->vm_flags & VM_MIXEDMAP)
- ret = vm_insert_mixed(&cvma, address,
+ err = vm_insert_mixed(&cvma, address,
__pfn_to_pfn_t(pfn, PFN_DEV));
else
- ret = vm_insert_pfn(&cvma, address, pfn);
+ err = vm_insert_pfn(&cvma, address, pfn);

/*
* Somebody beat us to this PTE or prefaulting to
* an already populated PTE, or prefaulting error.
*/

- if (unlikely((ret == -EBUSY) || (ret != 0 && i > 0)))
+ if (unlikely((err == -EBUSY) || (err != 0 && i > 0)))
break;
- else if (unlikely(ret != 0)) {
+ else if (unlikely(err != 0)) {
ret =
- (ret == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS;
+ (err == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS;
goto out_io_unlock;
}

--
2.17.0

2018-05-16 05:43:48

by Christoph Hellwig

Subject: [PATCH 14/14] mm: turn on vm_fault_t type checking

Switch vm_fault_t to be an unsigned int with __bitwise annotations.
This catches any old ->fault or ->page_mkwrite instance with plain
compiler type checking, and finds more intricate problems with sparse.
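
One recurring detail in the conversion below: "major |= fault &
VM_FAULT_MAJOR" becomes "major |= !!(fault & VM_FAULT_MAJOR)".  With a
__bitwise type the masked expression stays in vm_fault_t space and may
not silently mix into a plain int accumulator; the double negation
collapses it to a plain 0 or 1 first (sparse allows zero-tests and
logical negation on bitwise types).  In short:

        vm_fault_t fault = handle_mm_fault(vma, address, flags);
        int major = 0;

        major |= fault & VM_FAULT_MAJOR;        /* sparse warns */
        major |= !!(fault & VM_FAULT_MAJOR);    /* plain 0/1 int, ok */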

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/alpha/mm/fault.c | 2 +-
arch/arc/mm/fault.c | 3 +-
arch/arm/mm/fault.c | 5 +-
arch/arm64/mm/fault.c | 7 +-
arch/hexagon/mm/vm_fault.c | 2 +-
arch/ia64/mm/fault.c | 2 +-
arch/m68k/mm/fault.c | 2 +-
arch/microblaze/mm/fault.c | 2 +-
arch/mips/mm/fault.c | 2 +-
arch/nds32/mm/fault.c | 2 +-
arch/nios2/mm/fault.c | 2 +-
arch/openrisc/mm/fault.c | 2 +-
arch/parisc/mm/fault.c | 2 +-
arch/powerpc/include/asm/copro.h | 2 +-
arch/powerpc/mm/copro_fault.c | 2 +-
arch/powerpc/mm/fault.c | 10 +--
arch/powerpc/platforms/cell/spufs/fault.c | 2 +-
arch/riscv/mm/fault.c | 3 +-
arch/s390/kernel/vdso.c | 2 +-
arch/s390/mm/fault.c | 2 +-
arch/sh/mm/fault.c | 2 +-
arch/sparc/mm/fault_32.c | 4 +-
arch/sparc/mm/fault_64.c | 3 +-
arch/um/kernel/trap.c | 2 +-
arch/unicore32/mm/fault.c | 10 +--
arch/x86/entry/vdso/vma.c | 4 +-
arch/x86/mm/fault.c | 11 +--
arch/xtensa/mm/fault.c | 2 +-
drivers/dax/device.c | 21 +++---
drivers/gpu/drm/drm_vm.c | 10 +--
drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +-
drivers/gpu/drm/etnaviv/etnaviv_gem.c | 2 +-
drivers/gpu/drm/exynos/exynos_drm_gem.c | 2 +-
drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 +-
drivers/gpu/drm/gma500/framebuffer.c | 6 +-
drivers/gpu/drm/gma500/gem.c | 2 +-
drivers/gpu/drm/gma500/psb_drv.h | 2 +-
drivers/gpu/drm/i915/i915_drv.h | 2 +-
drivers/gpu/drm/i915/i915_gem.c | 21 ++----
drivers/gpu/drm/msm/msm_drv.h | 2 +-
drivers/gpu/drm/msm/msm_gem.c | 2 +-
drivers/gpu/drm/qxl/qxl_ttm.c | 4 +-
drivers/gpu/drm/radeon/radeon_ttm.c | 2 +-
drivers/gpu/drm/udl/udl_drv.h | 2 +-
drivers/gpu/drm/udl/udl_gem.c | 2 +-
drivers/gpu/drm/vc4/vc4_bo.c | 2 +-
drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
drivers/hwtracing/intel_th/msu.c | 2 +-
drivers/iommu/amd_iommu_v2.c | 2 +-
drivers/iommu/intel-svm.c | 3 +-
drivers/misc/cxl/fault.c | 2 +-
drivers/misc/ocxl/context.c | 6 +-
drivers/misc/ocxl/link.c | 2 +-
drivers/misc/ocxl/sysfs.c | 2 +-
drivers/scsi/cxlflash/superpipe.c | 4 +-
drivers/staging/ncpfs/mmap.c | 2 +-
drivers/xen/privcmd.c | 2 +-
fs/9p/vfs_file.c | 2 +-
fs/afs/internal.h | 2 +-
fs/afs/write.c | 2 +-
fs/f2fs/file.c | 10 +--
fs/fuse/file.c | 2 +-
fs/gfs2/file.c | 2 +-
fs/iomap.c | 2 +-
fs/nfs/file.c | 4 +-
fs/nilfs2/file.c | 2 +-
fs/proc/vmcore.c | 2 +-
fs/userfaultfd.c | 4 +-
fs/xfs/xfs_file.c | 12 ++--
include/linux/huge_mm.h | 13 ++--
include/linux/hugetlb.h | 2 +-
include/linux/iomap.h | 4 +-
include/linux/mm.h | 67 +++++++++--------
include/linux/mm_types.h | 5 +-
include/linux/oom.h | 2 +-
include/linux/swapops.h | 4 +-
include/linux/userfaultfd_k.h | 5 +-
ipc/shm.c | 2 +-
kernel/events/core.c | 4 +-
mm/gup.c | 7 +-
mm/hmm.c | 2 +-
mm/huge_memory.c | 29 ++++----
mm/hugetlb.c | 25 +++----
mm/internal.h | 2 +-
mm/khugepaged.c | 3 +-
mm/ksm.c | 2 +-
mm/memory.c | 88 ++++++++++++-----------
mm/mmap.c | 4 +-
mm/shmem.c | 9 +--
samples/vfio-mdev/mbochs.c | 4 +-
virt/kvm/kvm_main.c | 2 +-
91 files changed, 285 insertions(+), 259 deletions(-)

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index de2bd217adad..e313430fe9b4 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -88,7 +88,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
struct mm_struct *mm = current->mm;
const struct exception_table_entry *fixup;
int fault, si_code = SEGV_MAPERR;
- unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
+ vm_fault_t flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

/* As of EV6, a load into $31/$f31 is a prefetch, and never faults
(or is suppressed by the PALcode). Support that for older CPUs
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index b884bbd6f354..fe495df421a4 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -66,7 +66,8 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
struct task_struct *tsk = current;
struct mm_struct *mm = tsk->mm;
siginfo_t info;
- int fault, ret;
+ vm_fault_t fault;
+ int ret;
int write = regs->ecr_cause & ECR_C_PROTV_STORE; /* ST/EX */
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index b696eabccf60..9f32a6518db3 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -218,7 +218,7 @@ static inline bool access_error(unsigned int fsr, struct vm_area_struct *vma)
return vma->vm_flags & mask ? false : true;
}

-static int __kprobes
+static vm_fault_t __kprobes
__do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
unsigned int flags, struct task_struct *tsk)
{
@@ -258,7 +258,8 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
{
struct task_struct *tsk;
struct mm_struct *mm;
- int fault, sig, code;
+ vm_fault_t fault;
+ int sig, code;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

if (notify_page_fault(regs, fsr))
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 3d0b1f8eacce..e6fb6a8c655d 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -318,7 +318,7 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
}
}

-static int __do_page_fault(struct mm_struct *mm, unsigned long addr,
+static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
unsigned int mm_flags, unsigned long vm_flags,
struct task_struct *tsk)
{
@@ -366,7 +366,8 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
struct task_struct *tsk;
struct mm_struct *mm;
struct siginfo si;
- int fault, major = 0;
+ vm_fault_t fault;
+ int major = 0;
unsigned long vm_flags = VM_READ | VM_WRITE;
unsigned int mm_flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

@@ -430,7 +431,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
}

fault = __do_page_fault(mm, addr, mm_flags, vm_flags, tsk);
- major |= fault & VM_FAULT_MAJOR;
+ major |= !!(fault & VM_FAULT_MAJOR);

if (fault & VM_FAULT_RETRY) {
/*
diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c
index 933bbcef5363..eb263e61daf4 100644
--- a/arch/hexagon/mm/vm_fault.c
+++ b/arch/hexagon/mm/vm_fault.c
@@ -52,7 +52,7 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
struct mm_struct *mm = current->mm;
int si_signo;
int si_code = SEGV_MAPERR;
- int fault;
+ vm_fault_t fault;
const struct exception_table_entry *fixup;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
index 817fa120645f..a9d55ad8d67b 100644
--- a/arch/ia64/mm/fault.c
+++ b/arch/ia64/mm/fault.c
@@ -86,7 +86,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
struct vm_area_struct *vma, *prev_vma;
struct mm_struct *mm = current->mm;
unsigned long mask;
- int fault;
+ vm_fault_t fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

mask = ((((isr >> IA64_ISR_X_BIT) & 1UL) << VM_EXEC_BIT)
diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
index f2ff3779875a..4ceb14b3b4de 100644
--- a/arch/m68k/mm/fault.c
+++ b/arch/m68k/mm/fault.c
@@ -70,7 +70,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
{
struct mm_struct *mm = current->mm;
struct vm_area_struct * vma;
- int fault;
+ vm_fault_t fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

pr_debug("do page fault:\nregs->sr=%#x, regs->pc=%#lx, address=%#lx, %ld, %p\n",
diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c
index af607447c683..202ad6a494f5 100644
--- a/arch/microblaze/mm/fault.c
+++ b/arch/microblaze/mm/fault.c
@@ -90,7 +90,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
struct mm_struct *mm = current->mm;
int code = SEGV_MAPERR;
int is_write = error_code & ESR_S;
- int fault;
+ vm_fault_t fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

regs->ear = address;
diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
index 5f71f2b903b7..73d8a0f0b810 100644
--- a/arch/mips/mm/fault.c
+++ b/arch/mips/mm/fault.c
@@ -43,7 +43,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write,
struct mm_struct *mm = tsk->mm;
const int field = sizeof(unsigned long) * 2;
int si_code;
- int fault;
+ vm_fault_t fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

static DEFINE_RATELIMIT_STATE(ratelimit_state, 5 * HZ, 10);
diff --git a/arch/nds32/mm/fault.c b/arch/nds32/mm/fault.c
index 9bdb7c3ecbb6..b740534b152c 100644
--- a/arch/nds32/mm/fault.c
+++ b/arch/nds32/mm/fault.c
@@ -73,7 +73,7 @@ void do_page_fault(unsigned long entry, unsigned long addr,
struct mm_struct *mm;
struct vm_area_struct *vma;
int si_code;
- int fault;
+ vm_fault_t fault;
unsigned int mask = VM_READ | VM_WRITE | VM_EXEC;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

diff --git a/arch/nios2/mm/fault.c b/arch/nios2/mm/fault.c
index b804dd06ea1c..24fd84cf6006 100644
--- a/arch/nios2/mm/fault.c
+++ b/arch/nios2/mm/fault.c
@@ -47,7 +47,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
struct task_struct *tsk = current;
struct mm_struct *mm = tsk->mm;
int code = SEGV_MAPERR;
- int fault;
+ vm_fault_t fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

cause >>= 2;
diff --git a/arch/openrisc/mm/fault.c b/arch/openrisc/mm/fault.c
index 9f011d16cc46..dc4dbafc1d83 100644
--- a/arch/openrisc/mm/fault.c
+++ b/arch/openrisc/mm/fault.c
@@ -53,7 +53,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
struct mm_struct *mm;
struct vm_area_struct *vma;
int si_code;
- int fault;
+ vm_fault_t fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

tsk = current;
diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
index a80117980fc2..c8e8b7c05558 100644
--- a/arch/parisc/mm/fault.c
+++ b/arch/parisc/mm/fault.c
@@ -262,7 +262,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
struct task_struct *tsk;
struct mm_struct *mm;
unsigned long acc_type;
- int fault = 0;
+ vm_fault_t fault = 0;
unsigned int flags;

if (faulthandler_disabled())
diff --git a/arch/powerpc/include/asm/copro.h b/arch/powerpc/include/asm/copro.h
index ce216df31381..fac150839ef6 100644
--- a/arch/powerpc/include/asm/copro.h
+++ b/arch/powerpc/include/asm/copro.h
@@ -16,7 +16,7 @@ struct copro_slb
};

int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea,
- unsigned long dsisr, unsigned *flt);
+ unsigned long dsisr, vm_fault_t *flt);

int copro_calculate_slb(struct mm_struct *mm, u64 ea, struct copro_slb *slb);

diff --git a/arch/powerpc/mm/copro_fault.c b/arch/powerpc/mm/copro_fault.c
index 7d0945bd3a61..c8da352e8686 100644
--- a/arch/powerpc/mm/copro_fault.c
+++ b/arch/powerpc/mm/copro_fault.c
@@ -34,7 +34,7 @@
* to handle fortunately.
*/
int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea,
- unsigned long dsisr, unsigned *flt)
+ unsigned long dsisr, vm_fault_t *flt)
{
struct vm_area_struct *vma;
unsigned long is_write;
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index ef268d5d9db7..2b4096cb1b46 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -159,7 +159,7 @@ static noinline int bad_access(struct pt_regs *regs, unsigned long address)
}

static int do_sigbus(struct pt_regs *regs, unsigned long address,
- unsigned int fault)
+ vm_fault_t fault)
{
siginfo_t info;
unsigned int lsb = 0;
@@ -190,7 +190,8 @@ static int do_sigbus(struct pt_regs *regs, unsigned long address,
return 0;
}

-static int mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
+static int mm_fault_error(struct pt_regs *regs, unsigned long addr,
+ vm_fault_t fault)
{
/*
* Kernel page fault interrupted by SIGKILL. We have no reason to
@@ -403,7 +404,8 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
int is_exec = TRAP(regs) == 0x400;
int is_user = user_mode(regs);
int is_write = page_fault_is_write(error_code);
- int fault, major = 0;
+ vm_fault_t fault;
+ int major = 0;
bool store_update_sp = false;

if (notify_page_fault(regs))
@@ -537,7 +539,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
}
#endif /* CONFIG_PPC_MEM_KEYS */

- major |= fault & VM_FAULT_MAJOR;
+ major |= !!(fault & VM_FAULT_MAJOR);

/*
* Handle the retry right now, the mmap_sem has been released in that
diff --git a/arch/powerpc/platforms/cell/spufs/fault.c b/arch/powerpc/platforms/cell/spufs/fault.c
index 1e002e94d0f6..83cf58daaa79 100644
--- a/arch/powerpc/platforms/cell/spufs/fault.c
+++ b/arch/powerpc/platforms/cell/spufs/fault.c
@@ -111,7 +111,7 @@ int spufs_handle_class1(struct spu_context *ctx)
{
u64 ea, dsisr, access;
unsigned long flags;
- unsigned flt = 0;
+ vm_fault_t flt = 0;
int ret;

/*
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 148c98ca9b45..88401d5125bc 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -41,7 +41,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
struct mm_struct *mm;
unsigned long addr, cause;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
- int fault, code = SEGV_MAPERR;
+ int code = SEGV_MAPERR;
+ vm_fault_t fault;

cause = regs->scause;
addr = regs->sbadaddr;
diff --git a/arch/s390/kernel/vdso.c b/arch/s390/kernel/vdso.c
index f3a1c7c6824e..8da00ed2eb12 100644
--- a/arch/s390/kernel/vdso.c
+++ b/arch/s390/kernel/vdso.c
@@ -47,7 +47,7 @@ static struct page **vdso64_pagelist;
*/
unsigned int __read_mostly vdso_enabled = 1;

-static int vdso_fault(const struct vm_special_mapping *sm,
+static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
struct vm_area_struct *vma, struct vm_fault *vmf)
{
struct page **vdso_pagelist;
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index 48c781ae25d0..8af651ed108b 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -405,7 +405,7 @@ static inline int do_exception(struct pt_regs *regs, int access)
unsigned long trans_exc_code;
unsigned long address;
unsigned int flags;
- int fault;
+ vm_fault_t fault;

tsk = current;
/*
diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c
index b8e7bb84b6b1..c4c074251b02 100644
--- a/arch/sh/mm/fault.c
+++ b/arch/sh/mm/fault.c
@@ -396,7 +396,7 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
struct task_struct *tsk;
struct mm_struct *mm;
struct vm_area_struct * vma;
- int fault;
+ vm_fault_t fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

tsk = current;
diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c
index 9f75b6444bf1..34fb0d2d9998 100644
--- a/arch/sparc/mm/fault_32.c
+++ b/arch/sparc/mm/fault_32.c
@@ -165,8 +165,8 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
struct mm_struct *mm = tsk->mm;
unsigned int fixup;
unsigned long g2;
- int from_user = !(regs->psr & PSR_PS);
- int fault, code;
+ int from_user = !(regs->psr & PSR_PS), code;
+ vm_fault_t fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

if (text_fault)
diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
index 63166fcf9e25..8f8a604c1300 100644
--- a/arch/sparc/mm/fault_64.c
+++ b/arch/sparc/mm/fault_64.c
@@ -278,7 +278,8 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
struct mm_struct *mm = current->mm;
struct vm_area_struct *vma;
unsigned int insn = 0;
- int si_code, fault_code, fault;
+ int si_code, fault_code;
+ vm_fault_t fault;
unsigned long address, mm_rss;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
index ec9a42c14c56..cced82946042 100644
--- a/arch/um/kernel/trap.c
+++ b/arch/um/kernel/trap.c
@@ -72,7 +72,7 @@ int handle_page_fault(unsigned long address, unsigned long ip,
}

do {
- int fault;
+ vm_fault_t fault;

fault = handle_mm_fault(vma, address, flags);

diff --git a/arch/unicore32/mm/fault.c b/arch/unicore32/mm/fault.c
index 6c3c1a82925f..c4b477cea5bf 100644
--- a/arch/unicore32/mm/fault.c
+++ b/arch/unicore32/mm/fault.c
@@ -165,8 +165,8 @@ static inline bool access_error(unsigned int fsr, struct vm_area_struct *vma)
return vma->vm_flags & mask ? false : true;
}

-static int __do_pf(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
- unsigned int flags, struct task_struct *tsk)
+static vm_fault_t __do_pf(struct mm_struct *mm, unsigned long addr,
+ unsigned int fsr, unsigned int flags, struct task_struct *tsk)
{
struct vm_area_struct *vma;
int fault;
@@ -192,8 +192,7 @@ static int __do_pf(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
* If for any reason at all we couldn't handle the fault, make
* sure we exit gracefully rather than endlessly redo the fault.
*/
- fault = handle_mm_fault(vma, addr & PAGE_MASK, flags);
- return fault;
+ return handle_mm_fault(vma, addr & PAGE_MASK, flags);

check_stack:
if (vma->vm_flags & VM_GROWSDOWN && !expand_stack(vma, addr))
@@ -206,7 +205,8 @@ static int do_pf(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
{
struct task_struct *tsk;
struct mm_struct *mm;
- int fault, sig, code;
+ int sig, code;
+ vm_fault_t fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

tsk = current;
diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 5b8b556dbb12..c575eec31507 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -39,7 +39,7 @@ void __init init_vdso_image(const struct vdso_image *image)

struct linux_binprm;

-static int vdso_fault(const struct vm_special_mapping *sm,
+static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
struct vm_area_struct *vma, struct vm_fault *vmf)
{
const struct vdso_image *image = vma->vm_mm->context.vdso_image;
@@ -84,7 +84,7 @@ static int vdso_mremap(const struct vm_special_mapping *sm,
return 0;
}

-static int vvar_fault(const struct vm_special_mapping *sm,
+static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
struct vm_area_struct *vma, struct vm_fault *vmf)
{
const struct vdso_image *image = vma->vm_mm->context.vdso_image;
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index fd84edf82252..5d820b88a129 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -204,7 +204,7 @@ static void fill_sig_info_pkey(int si_signo, int si_code, siginfo_t *info,

static void
force_sig_info_fault(int si_signo, int si_code, unsigned long address,
- struct task_struct *tsk, u32 *pkey, int fault)
+ struct task_struct *tsk, u32 *pkey, vm_fault_t fault)
{
unsigned lsb = 0;
siginfo_t info;
@@ -976,7 +976,7 @@ bad_area_access_error(struct pt_regs *regs, unsigned long error_code,

static void
do_sigbus(struct pt_regs *regs, unsigned long error_code, unsigned long address,
- u32 *pkey, unsigned int fault)
+ u32 *pkey, vm_fault_t fault)
{
struct task_struct *tsk = current;
int code = BUS_ADRERR;
@@ -1008,7 +1008,7 @@ do_sigbus(struct pt_regs *regs, unsigned long error_code, unsigned long address,

static noinline void
mm_fault_error(struct pt_regs *regs, unsigned long error_code,
- unsigned long address, u32 *pkey, unsigned int fault)
+ unsigned long address, u32 *pkey, vm_fault_t fault)
{
if (fatal_signal_pending(current) && !(error_code & X86_PF_USER)) {
no_context(regs, error_code, address, 0, 0);
@@ -1222,7 +1222,8 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
struct vm_area_struct *vma;
struct task_struct *tsk;
struct mm_struct *mm;
- int fault, major = 0;
+ int major = 0;
+ vm_fault_t fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
u32 pkey;

@@ -1401,7 +1402,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
*/
pkey = vma_pkey(vma);
fault = handle_mm_fault(vma, address, flags);
- major |= fault & VM_FAULT_MAJOR;
+ major |= !!(fault & VM_FAULT_MAJOR);

/*
* If we need to retry the mmap_sem has already been released,
diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
index c111a833205a..2ab0e0dcd166 100644
--- a/arch/xtensa/mm/fault.c
+++ b/arch/xtensa/mm/fault.c
@@ -42,7 +42,7 @@ void do_page_fault(struct pt_regs *regs)
int code;

int is_write, is_exec;
- int fault;
+ vm_fault_t fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

code = SEGV_MAPERR;
diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index aff2c1594220..bf88e5df0cdb 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -244,11 +244,12 @@ __weak phys_addr_t dax_pgoff_to_phys(struct dev_dax *dev_dax, pgoff_t pgoff,
return -1;
}

-static int __dev_dax_pte_fault(struct dev_dax *dev_dax, struct vm_fault *vmf)
+static vm_fault_t __dev_dax_pte_fault(struct dev_dax *dev_dax,
+ struct vm_fault *vmf)
{
struct device *dev = &dev_dax->dev;
struct dax_region *dax_region;
- int rc = VM_FAULT_SIGBUS;
+ vm_fault_t rc = VM_FAULT_SIGBUS;
phys_addr_t phys;
pfn_t pfn;
unsigned int fault_size = PAGE_SIZE;
@@ -284,7 +285,8 @@ static int __dev_dax_pte_fault(struct dev_dax *dev_dax, struct vm_fault *vmf)
return VM_FAULT_NOPAGE;
}

-static int __dev_dax_pmd_fault(struct dev_dax *dev_dax, struct vm_fault *vmf)
+static vm_fault_t __dev_dax_pmd_fault(struct dev_dax *dev_dax,
+ struct vm_fault *vmf)
{
unsigned long pmd_addr = vmf->address & PMD_MASK;
struct device *dev = &dev_dax->dev;
@@ -334,7 +336,8 @@ static int __dev_dax_pmd_fault(struct dev_dax *dev_dax, struct vm_fault *vmf)
}

#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static int __dev_dax_pud_fault(struct dev_dax *dev_dax, struct vm_fault *vmf)
+static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax,
+ struct vm_fault *vmf)
{
unsigned long pud_addr = vmf->address & PUD_MASK;
struct device *dev = &dev_dax->dev;
@@ -384,16 +387,18 @@ static int __dev_dax_pud_fault(struct dev_dax *dev_dax, struct vm_fault *vmf)
vmf->flags & FAULT_FLAG_WRITE);
}
#else
-static int __dev_dax_pud_fault(struct dev_dax *dev_dax, struct vm_fault *vmf)
+static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax,
+ struct vm_fault *vmf)
{
return VM_FAULT_FALLBACK;
}
#endif /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */

-static int dev_dax_huge_fault(struct vm_fault *vmf,
+static vm_fault_t dev_dax_huge_fault(struct vm_fault *vmf,
enum page_entry_size pe_size)
{
- int rc, id;
+ vm_fault_t rc;
+ int id;
struct file *filp = vmf->vma->vm_file;
struct dev_dax *dev_dax = filp->private_data;

@@ -420,7 +425,7 @@ static int dev_dax_huge_fault(struct vm_fault *vmf,
return rc;
}

-static int dev_dax_fault(struct vm_fault *vmf)
+static vm_fault_t dev_dax_fault(struct vm_fault *vmf)
{
return dev_dax_huge_fault(vmf, PE_SIZE_PTE);
}
diff --git a/drivers/gpu/drm/drm_vm.c b/drivers/gpu/drm/drm_vm.c
index 2660543ad86a..c3301046dfaa 100644
--- a/drivers/gpu/drm/drm_vm.c
+++ b/drivers/gpu/drm/drm_vm.c
@@ -100,7 +100,7 @@ static pgprot_t drm_dma_prot(uint32_t map_type, struct vm_area_struct *vma)
* map, get the page, increment the use count and return it.
*/
#if IS_ENABLED(CONFIG_AGP)
-static int drm_vm_fault(struct vm_fault *vmf)
+static vm_fault_t drm_vm_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_file *priv = vma->vm_file->private_data;
@@ -173,7 +173,7 @@ static int drm_vm_fault(struct vm_fault *vmf)
return VM_FAULT_SIGBUS; /* Disallow mremap */
}
#else
-static int drm_vm_fault(struct vm_fault *vmf)
+static vm_fault_t drm_vm_fault(struct vm_fault *vmf)
{
return VM_FAULT_SIGBUS;
}
@@ -189,7 +189,7 @@ static int drm_vm_fault(struct vm_fault *vmf)
* Get the mapping, find the real physical page to map, get the page, and
* return it.
*/
-static int drm_vm_shm_fault(struct vm_fault *vmf)
+static vm_fault_t drm_vm_shm_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_local_map *map = vma->vm_private_data;
@@ -291,7 +291,7 @@ static void drm_vm_shm_close(struct vm_area_struct *vma)
*
* Determine the page number from the page offset and get it from drm_device_dma::pagelist.
*/
-static int drm_vm_dma_fault(struct vm_fault *vmf)
+static vm_fault_t drm_vm_dma_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_file *priv = vma->vm_file->private_data;
@@ -326,7 +326,7 @@ static int drm_vm_dma_fault(struct vm_fault *vmf)
*
* Determine the map offset from the page offset and get it from drm_sg_mem::pagelist.
*/
-static int drm_vm_sg_fault(struct vm_fault *vmf)
+static vm_fault_t drm_vm_sg_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_local_map *map = vma->vm_private_data;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 763cf5bf8eae..54183c1d3fef 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -64,7 +64,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
struct drm_file *file);

int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-int etnaviv_gem_fault(struct vm_fault *vmf);
+vm_fault_t etnaviv_gem_fault(struct vm_fault *vmf);
int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index fcc969fa0e69..8eead441f710 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -180,7 +180,7 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma)
return obj->ops->mmap(obj, vma);
}

-int etnaviv_gem_fault(struct vm_fault *vmf)
+vm_fault_t etnaviv_gem_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_gem_object *obj = vma->vm_private_data;
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index 11cc01b47bc0..d3e4566e27bc 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -431,7 +431,7 @@ int exynos_drm_gem_dumb_create(struct drm_file *file_priv,
return 0;
}

-int exynos_drm_gem_fault(struct vm_fault *vmf)
+vm_fault_t exynos_drm_gem_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_gem_object *obj = vma->vm_private_data;
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
index 5a4c7de80f65..6c40b975d909 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
@@ -111,7 +111,7 @@ int exynos_drm_gem_dumb_create(struct drm_file *file_priv,
struct drm_mode_create_dumb *args);

/* page fault handler and mmap fault address(virtual) to physical memory. */
-int exynos_drm_gem_fault(struct vm_fault *vmf);
+vm_fault_t exynos_drm_gem_fault(struct vm_fault *vmf);

/* set vm_flags and we can change the vm attribute to other one at here. */
int exynos_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/gma500/framebuffer.c b/drivers/gpu/drm/gma500/framebuffer.c
index cb0a2ae916e0..d76275a0daac 100644
--- a/drivers/gpu/drm/gma500/framebuffer.c
+++ b/drivers/gpu/drm/gma500/framebuffer.c
@@ -111,7 +111,7 @@ static int psbfb_pan(struct fb_var_screeninfo *var, struct fb_info *info)
return 0;
}

-static int psbfb_vm_fault(struct vm_fault *vmf)
+static vm_fault_t psbfb_vm_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct psb_framebuffer *psbfb = vma->vm_private_data;
@@ -138,8 +138,8 @@ static int psbfb_vm_fault(struct vm_fault *vmf)
if (unlikely((ret == -EBUSY) || (ret != 0 && i > 0)))
break;
else if (unlikely(ret != 0)) {
- ret = (ret == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS;
- return ret;
+ return (ret == -ENOMEM) ?
+ VM_FAULT_OOM : VM_FAULT_SIGBUS;
}
address += PAGE_SIZE;
phys_addr += PAGE_SIZE;
diff --git a/drivers/gpu/drm/gma500/gem.c b/drivers/gpu/drm/gma500/gem.c
index 131239759a75..2b7a394fc9b3 100644
--- a/drivers/gpu/drm/gma500/gem.c
+++ b/drivers/gpu/drm/gma500/gem.c
@@ -134,7 +134,7 @@ int psb_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
* vma->vm_private_data points to the GEM object that is backing this
* mapping.
*/
-int psb_gem_fault(struct vm_fault *vmf)
+vm_fault_t psb_gem_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_gem_object *obj;
diff --git a/drivers/gpu/drm/gma500/psb_drv.h b/drivers/gpu/drm/gma500/psb_drv.h
index e8300f509023..48ab91e59e1e 100644
--- a/drivers/gpu/drm/gma500/psb_drv.h
+++ b/drivers/gpu/drm/gma500/psb_drv.h
@@ -749,7 +749,7 @@ extern int psb_gem_get_aperture(struct drm_device *dev, void *data,
struct drm_file *file);
extern int psb_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
struct drm_mode_create_dumb *args);
-extern int psb_gem_fault(struct vm_fault *vmf);
+extern vm_fault_t psb_gem_fault(struct vm_fault *vmf);

/* psb_device.c */
extern const struct psb_ops psb_chip_ops;
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 34c125e2d90c..1830d96a16e4 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3169,7 +3169,7 @@ int i915_gem_wait_for_idle(struct drm_i915_private *dev_priv,
unsigned int flags);
int __must_check i915_gem_suspend(struct drm_i915_private *dev_priv);
void i915_gem_resume(struct drm_i915_private *dev_priv);
-int i915_gem_fault(struct vm_fault *vmf);
+vm_fault_t i915_gem_fault(struct vm_fault *vmf);
int i915_gem_object_wait(struct drm_i915_gem_object *obj,
unsigned int flags,
long timeout,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 0a2070112b66..1231bdd52b7f 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1991,7 +1991,7 @@ compute_partial_view(struct drm_i915_gem_object *obj,
* The current feature set supported by i915_gem_fault() and thus GTT mmaps
* is exposed via I915_PARAM_MMAP_GTT_VERSION (see i915_gem_mmap_gtt_version).
*/
-int i915_gem_fault(struct vm_fault *vmf)
+vm_fault_t i915_gem_fault(struct vm_fault *vmf)
{
#define MIN_CHUNK_PAGES ((1 << 20) >> PAGE_SHIFT) /* 1 MiB */
struct vm_area_struct *area = vmf->vma;
@@ -2108,10 +2108,8 @@ int i915_gem_fault(struct vm_fault *vmf)
* fail). But any other -EIO isn't ours (e.g. swap in failure)
* and so needs to be reported.
*/
- if (!i915_terminally_wedged(&dev_priv->gpu_error)) {
- ret = VM_FAULT_SIGBUS;
- break;
- }
+ if (!i915_terminally_wedged(&dev_priv->gpu_error))
+ return VM_FAULT_SIGBUS;
case -EAGAIN:
/*
* EAGAIN means the gpu is hung and we'll wait for the error
@@ -2126,21 +2124,16 @@ int i915_gem_fault(struct vm_fault *vmf)
* EBUSY is ok: this just means that another thread
* already did the job.
*/
- ret = VM_FAULT_NOPAGE;
- break;
+ return VM_FAULT_NOPAGE;
case -ENOMEM:
- ret = VM_FAULT_OOM;
- break;
+ return VM_FAULT_OOM;
case -ENOSPC:
case -EFAULT:
- ret = VM_FAULT_SIGBUS;
- break;
+ return VM_FAULT_SIGBUS;
default:
WARN_ONCE(ret, "unhandled error in i915_gem_fault: %i\n", ret);
- ret = VM_FAULT_SIGBUS;
- break;
+ return VM_FAULT_SIGBUS;
}
- return ret;
}

static void __i915_gem_object_release_mmap(struct drm_i915_gem_object *obj)
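
The i915 switch above is where the conversion stops being mechanical:
the handler collects kernel-internal errnos and has to translate them
to VM_FAULT_* codes at the ->fault boundary, and turning each
"ret = X; break;" arm into a direct return lets the temporary go away.
One plausible generic form of that translation, with a made-up helper
name that is not part of this series:

static vm_fault_t errno_to_vm_fault(int err)
{
	switch (err) {
	case 0:
	case -EAGAIN:		/* transient, fault again and retry */
	case -EBUSY:		/* another thread already did the work */
		return VM_FAULT_NOPAGE;
	case -ENOMEM:
		return VM_FAULT_OOM;
	case -ENOSPC:
	case -EFAULT:
	default:
		return VM_FAULT_SIGBUS;
	}
}
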
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index b2da1fbf81e0..a92de7bc7722 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -184,7 +184,7 @@ void msm_gem_shrinker_cleanup(struct drm_device *dev);
int msm_gem_mmap_obj(struct drm_gem_object *obj,
struct vm_area_struct *vma);
int msm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-int msm_gem_fault(struct vm_fault *vmf);
+vm_fault_t msm_gem_fault(struct vm_fault *vmf);
uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
int msm_gem_get_iova(struct drm_gem_object *obj,
struct msm_gem_address_space *aspace, uint64_t *iova);
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index f583bb4222f9..27e55d19b3de 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -219,7 +219,7 @@ int msm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
return msm_gem_mmap_obj(vma->vm_private_data, vma);
}

-int msm_gem_fault(struct vm_fault *vmf)
+vm_fault_t msm_gem_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_gem_object *obj = vma->vm_private_data;
diff --git a/drivers/gpu/drm/qxl/qxl_ttm.c b/drivers/gpu/drm/qxl/qxl_ttm.c
index ee2340e31f06..4a7d76d7cc70 100644
--- a/drivers/gpu/drm/qxl/qxl_ttm.c
+++ b/drivers/gpu/drm/qxl/qxl_ttm.c
@@ -105,10 +105,10 @@ static void qxl_ttm_global_fini(struct qxl_device *qdev)
static struct vm_operations_struct qxl_ttm_vm_ops;
static const struct vm_operations_struct *ttm_vm_ops;

-static int qxl_ttm_fault(struct vm_fault *vmf)
+static vm_fault_t qxl_ttm_fault(struct vm_fault *vmf)
{
struct ttm_buffer_object *bo;
- int r;
+ vm_fault_t r;

bo = (struct ttm_buffer_object *)vmf->vma->vm_private_data;
if (bo == NULL)
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 8689fcca051c..7d3bf1e2ac83 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -947,7 +947,7 @@ void radeon_ttm_set_active_vram_size(struct radeon_device *rdev, u64 size)
static struct vm_operations_struct radeon_ttm_vm_ops;
static const struct vm_operations_struct *ttm_vm_ops = NULL;

-static int radeon_ttm_fault(struct vm_fault *vmf)
+static vm_fault_t radeon_ttm_fault(struct vm_fault *vmf)
{
struct ttm_buffer_object *bo;
struct radeon_device *rdev;
diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
index 55c0cc309198..4151c5bc031d 100644
--- a/drivers/gpu/drm/udl/udl_drv.h
+++ b/drivers/gpu/drm/udl/udl_drv.h
@@ -136,7 +136,7 @@ void udl_gem_put_pages(struct udl_gem_object *obj);
int udl_gem_vmap(struct udl_gem_object *obj);
void udl_gem_vunmap(struct udl_gem_object *obj);
int udl_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-int udl_gem_fault(struct vm_fault *vmf);
+vm_fault_t udl_gem_fault(struct vm_fault *vmf);

int udl_handle_damage(struct udl_framebuffer *fb, int x, int y,
int width, int height);
diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c
index 9a15cce22cce..9bac8de2e826 100644
--- a/drivers/gpu/drm/udl/udl_gem.c
+++ b/drivers/gpu/drm/udl/udl_gem.c
@@ -100,7 +100,7 @@ int udl_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
return ret;
}

-int udl_gem_fault(struct vm_fault *vmf)
+vm_fault_t udl_gem_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct udl_gem_object *obj = to_udl_bo(vma->vm_private_data);
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index add9cc97a3b6..8dcce7182bb7 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -721,7 +721,7 @@ vc4_prime_export(struct drm_device *dev, struct drm_gem_object *obj, int flags)
return dmabuf;
}

-int vc4_fault(struct vm_fault *vmf)
+vm_fault_t vc4_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_gem_object *obj = vma->vm_private_data;
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index 22589d39083c..dfda2b2aa3a2 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -673,7 +673,7 @@ int vc4_get_hang_state_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int vc4_label_bo_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
-int vc4_fault(struct vm_fault *vmf);
+vm_fault_t vc4_fault(struct vm_fault *vmf);
int vc4_mmap(struct file *filp, struct vm_area_struct *vma);
struct reservation_object *vc4_prime_res_obj(struct drm_gem_object *obj);
int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
index ede388309376..d08b7988e963 100644
--- a/drivers/hwtracing/intel_th/msu.c
+++ b/drivers/hwtracing/intel_th/msu.c
@@ -1182,7 +1182,7 @@ static void msc_mmap_close(struct vm_area_struct *vma)
mutex_unlock(&msc->buf_mutex);
}

-static int msc_mmap_fault(struct vm_fault *vmf)
+static vm_fault_t msc_mmap_fault(struct vm_fault *vmf)
{
struct msc_iter *iter = vmf->vma->vm_file->private_data;
struct msc *msc = iter->msc;
diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c
index 1d0b53a04a08..58da65df03f5 100644
--- a/drivers/iommu/amd_iommu_v2.c
+++ b/drivers/iommu/amd_iommu_v2.c
@@ -508,7 +508,7 @@ static void do_fault(struct work_struct *work)
{
struct fault *fault = container_of(work, struct fault, work);
struct vm_area_struct *vma;
- int ret = VM_FAULT_ERROR;
+ vm_fault_t ret = VM_FAULT_ERROR;
unsigned int flags = 0;
struct mm_struct *mm;
u64 address;
diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
index e8cd984cf9c8..a65c87546560 100644
--- a/drivers/iommu/intel-svm.c
+++ b/drivers/iommu/intel-svm.c
@@ -594,7 +594,8 @@ static irqreturn_t prq_event_thread(int irq, void *d)
struct vm_area_struct *vma;
struct page_req_dsc *req;
struct qi_desc resp;
- int ret, result;
+ int result;
+ vm_fault_t ret;
u64 address;

handled = 1;
diff --git a/drivers/misc/cxl/fault.c b/drivers/misc/cxl/fault.c
index 70dbb6de102c..93ecc67a0f3b 100644
--- a/drivers/misc/cxl/fault.c
+++ b/drivers/misc/cxl/fault.c
@@ -134,7 +134,7 @@ static int cxl_handle_segment_miss(struct cxl_context *ctx,

int cxl_handle_mm_fault(struct mm_struct *mm, u64 dsisr, u64 dar)
{
- unsigned flt = 0;
+ vm_fault_t flt = 0;
int result;
unsigned long access, flags, inv_flags = 0;

diff --git a/drivers/misc/ocxl/context.c b/drivers/misc/ocxl/context.c
index 909e8807824a..16c5157e570e 100644
--- a/drivers/misc/ocxl/context.c
+++ b/drivers/misc/ocxl/context.c
@@ -83,7 +83,7 @@ int ocxl_context_attach(struct ocxl_context *ctx, u64 amr)
return rc;
}

-static int map_afu_irq(struct vm_area_struct *vma, unsigned long address,
+static vm_fault_t map_afu_irq(struct vm_area_struct *vma, unsigned long address,
u64 offset, struct ocxl_context *ctx)
{
u64 trigger_addr;
@@ -96,7 +96,7 @@ static int map_afu_irq(struct vm_area_struct *vma, unsigned long address,
return VM_FAULT_NOPAGE;
}

-static int map_pp_mmio(struct vm_area_struct *vma, unsigned long address,
+static vm_fault_t map_pp_mmio(struct vm_area_struct *vma, unsigned long address,
u64 offset, struct ocxl_context *ctx)
{
u64 pp_mmio_addr;
@@ -123,7 +123,7 @@ static int map_pp_mmio(struct vm_area_struct *vma, unsigned long address,
return VM_FAULT_NOPAGE;
}

-static int ocxl_mmap_fault(struct vm_fault *vmf)
+static vm_fault_t ocxl_mmap_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct ocxl_context *ctx = vma->vm_file->private_data;
diff --git a/drivers/misc/ocxl/link.c b/drivers/misc/ocxl/link.c
index f30790582dc0..9e159c9971f3 100644
--- a/drivers/misc/ocxl/link.c
+++ b/drivers/misc/ocxl/link.c
@@ -126,7 +126,7 @@ static void ack_irq(struct spa *spa, enum xsl_response r)

static void xsl_fault_handler_bh(struct work_struct *fault_work)
{
- unsigned int flt = 0;
+ vm_fault_t flt = 0;
unsigned long access, flags, inv_flags = 0;
enum xsl_response r;
struct xsl_fault *fault = container_of(fault_work, struct xsl_fault,
diff --git a/drivers/misc/ocxl/sysfs.c b/drivers/misc/ocxl/sysfs.c
index d9753a1db14b..c92a2100dcdd 100644
--- a/drivers/misc/ocxl/sysfs.c
+++ b/drivers/misc/ocxl/sysfs.c
@@ -64,7 +64,7 @@ static ssize_t global_mmio_read(struct file *filp, struct kobject *kobj,
return count;
}

-static int global_mmio_fault(struct vm_fault *vmf)
+static vm_fault_t global_mmio_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct ocxl_afu *afu = vma->vm_private_data;
diff --git a/drivers/scsi/cxlflash/superpipe.c b/drivers/scsi/cxlflash/superpipe.c
index 04a3bf9dc85f..9d663b5793d6 100644
--- a/drivers/scsi/cxlflash/superpipe.c
+++ b/drivers/scsi/cxlflash/superpipe.c
@@ -1101,7 +1101,7 @@ static struct page *get_err_page(struct cxlflash_cfg *cfg)
*
* Return: 0 on success, VM_FAULT_SIGBUS on failure
*/
-static int cxlflash_mmap_fault(struct vm_fault *vmf)
+static vm_fault_t cxlflash_mmap_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct file *file = vma->vm_file;
@@ -1112,7 +1112,7 @@ static int cxlflash_mmap_fault(struct vm_fault *vmf)
struct ctx_info *ctxi = NULL;
struct page *err_page = NULL;
enum ctx_ctrl ctrl = CTX_CTRL_ERR_FALLBACK | CTX_CTRL_FILE;
- int rc = 0;
+ vm_fault_t rc = 0;
int ctxid;

ctxid = cfg->ops->process_element(ctx);
diff --git a/drivers/staging/ncpfs/mmap.c b/drivers/staging/ncpfs/mmap.c
index a5c5cf2ff007..d2182dd67403 100644
--- a/drivers/staging/ncpfs/mmap.c
+++ b/drivers/staging/ncpfs/mmap.c
@@ -28,7 +28,7 @@
* XXX: how are we excluding truncate/invalidate here? Maybe need to lock
* page?
*/
-static int ncp_file_mmap_fault(struct vm_fault *vmf)
+static vm_fault_t ncp_file_mmap_fault(struct vm_fault *vmf)
{
struct inode *inode = file_inode(vmf->vma->vm_file);
char *pg_addr;
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 1c909183c42a..0a778d30d333 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -801,7 +801,7 @@ static void privcmd_close(struct vm_area_struct *vma)
kfree(pages);
}

-static int privcmd_fault(struct vm_fault *vmf)
+static vm_fault_t privcmd_fault(struct vm_fault *vmf)
{
printk(KERN_DEBUG "privcmd_fault: vma=%p %lx-%lx, pgoff=%lx, uv=%p\n",
vmf->vma, vmf->vma->vm_start, vmf->vma->vm_end,
diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index 03c9e325bfbc..5f2e48d41d72 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -533,7 +533,7 @@ v9fs_mmap_file_mmap(struct file *filp, struct vm_area_struct *vma)
return retval;
}

-static int
+static vm_fault_t
v9fs_vm_page_mkwrite(struct vm_fault *vmf)
{
struct v9fs_inode *v9inode;
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index f8086ec95e24..eca8ab11c165 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -1028,7 +1028,7 @@ extern int afs_writepages(struct address_space *, struct writeback_control *);
extern void afs_pages_written_back(struct afs_vnode *, struct afs_call *);
extern ssize_t afs_file_write(struct kiocb *, struct iov_iter *);
extern int afs_fsync(struct file *, loff_t, loff_t, int);
-extern int afs_page_mkwrite(struct vm_fault *);
+extern vm_fault_t afs_page_mkwrite(struct vm_fault *);
extern void afs_prune_wb_keys(struct afs_vnode *);
extern int afs_launder_page(struct page *);

diff --git a/fs/afs/write.c b/fs/afs/write.c
index c164698dc304..f5ae8c6ded00 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -753,7 +753,7 @@ int afs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
* notification that a previously read-only page is about to become writable
* - if it returns an error, the caller will deliver a bus error signal
*/
-int afs_page_mkwrite(struct vm_fault *vmf)
+vm_fault_t afs_page_mkwrite(struct vm_fault *vmf)
{
struct file *file = vmf->vma->vm_file;
struct inode *inode = file_inode(file);
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index cc08956334a0..2d7347ddedde 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -33,19 +33,19 @@
#include "trace.h"
#include <trace/events/f2fs.h>

-static int f2fs_filemap_fault(struct vm_fault *vmf)
+static vm_fault_t f2fs_filemap_fault(struct vm_fault *vmf)
{
struct inode *inode = file_inode(vmf->vma->vm_file);
- int err;
+ vm_fault_t ret;

down_read(&F2FS_I(inode)->i_mmap_sem);
- err = filemap_fault(vmf);
+ ret = filemap_fault(vmf);
up_read(&F2FS_I(inode)->i_mmap_sem);

- return err;
+ return ret;
}

-static int f2fs_vm_page_mkwrite(struct vm_fault *vmf)
+static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
{
struct page *page = vmf->page;
struct inode *inode = file_inode(vmf->vma->vm_file);
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index a201fb0ac64f..67648ccbdd43 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -2048,7 +2048,7 @@ static void fuse_vma_close(struct vm_area_struct *vma)
* - sync(2)
* - try_to_free_pages() with order > PAGE_ALLOC_COSTLY_ORDER
*/
-static int fuse_page_mkwrite(struct vm_fault *vmf)
+static vm_fault_t fuse_page_mkwrite(struct vm_fault *vmf)
{
struct page *page = vmf->page;
struct inode *inode = file_inode(vmf->vma->vm_file);
diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
index 4b71f021a9e2..789f2e210177 100644
--- a/fs/gfs2/file.c
+++ b/fs/gfs2/file.c
@@ -387,7 +387,7 @@ static int gfs2_allocate_page_backing(struct page *page)
* blocks allocated on disk to back that page.
*/

-static int gfs2_page_mkwrite(struct vm_fault *vmf)
+static vm_fault_t gfs2_page_mkwrite(struct vm_fault *vmf)
{
struct page *page = vmf->page;
struct inode *inode = file_inode(vmf->vma->vm_file);
diff --git a/fs/iomap.c b/fs/iomap.c
index d193390a1c20..24aa0cb3d1aa 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -444,7 +444,7 @@ iomap_page_mkwrite_actor(struct inode *inode, loff_t pos, loff_t length,
return length;
}

-int iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops)
+vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops)
{
struct page *page = vmf->page;
struct inode *inode = file_inode(vmf->vma->vm_file);
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 81cca49a8375..29553fdba8af 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -532,13 +532,13 @@ const struct address_space_operations nfs_file_aops = {
* writable, implying that someone is about to modify the page through a
* shared-writable mapping
*/
-static int nfs_vm_page_mkwrite(struct vm_fault *vmf)
+static vm_fault_t nfs_vm_page_mkwrite(struct vm_fault *vmf)
{
struct page *page = vmf->page;
struct file *filp = vmf->vma->vm_file;
struct inode *inode = file_inode(filp);
unsigned pagelen;
- int ret = VM_FAULT_NOPAGE;
+ vm_fault_t ret = VM_FAULT_NOPAGE;
struct address_space *mapping;

dfprintk(PAGECACHE, "NFS: vm_page_mkwrite(%pD2(%lu), offset %lld)\n",
diff --git a/fs/nilfs2/file.c b/fs/nilfs2/file.c
index c5fa3dee72fc..7da0fac71dc2 100644
--- a/fs/nilfs2/file.c
+++ b/fs/nilfs2/file.c
@@ -51,7 +51,7 @@ int nilfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
return err;
}

-static int nilfs_page_mkwrite(struct vm_fault *vmf)
+static vm_fault_t nilfs_page_mkwrite(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct page *page = vmf->page;
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 247c3499e5bd..ef08a35f6a8c 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -379,7 +379,7 @@ static ssize_t read_vmcore(struct file *file, char __user *buffer,
* On s390 the fault handler is used for memory regions that can't be mapped
* directly with remap_pfn_range().
*/
-static int mmap_vmcore_fault(struct vm_fault *vmf)
+static vm_fault_t mmap_vmcore_fault(struct vm_fault *vmf)
{
#ifdef CONFIG_S390
struct address_space *mapping = vmf->vma->vm_file->f_mapping;
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index cec550c8468f..302522375dbb 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -336,12 +336,12 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
* fatal_signal_pending()s, and the mmap_sem must be released before
* returning it.
*/
-int handle_userfault(struct vm_fault *vmf, unsigned long reason)
+vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
{
struct mm_struct *mm = vmf->vma->vm_mm;
struct userfaultfd_ctx *ctx;
struct userfaultfd_wait_queue uwq;
- int ret;
+ vm_fault_t ret;
bool must_wait, return_to_userland;
long blocking_state;

diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 521272201fb7..bed07dfbb85e 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1080,7 +1080,7 @@ xfs_file_llseek(
* page_lock (MM)
* i_lock (XFS - extent map serialisation)
*/
-static int
+static vm_fault_t
__xfs_filemap_fault(
struct vm_fault *vmf,
enum page_entry_size pe_size,
@@ -1088,7 +1088,7 @@ __xfs_filemap_fault(
{
struct inode *inode = file_inode(vmf->vma->vm_file);
struct xfs_inode *ip = XFS_I(inode);
- int ret;
+ vm_fault_t ret;

trace_xfs_filemap_fault(ip, pe_size, write_fault);

@@ -1117,7 +1117,7 @@ __xfs_filemap_fault(
return ret;
}

-static int
+static vm_fault_t
xfs_filemap_fault(
struct vm_fault *vmf)
{
@@ -1127,7 +1127,7 @@ xfs_filemap_fault(
(vmf->flags & FAULT_FLAG_WRITE));
}

-static int
+static vm_fault_t
xfs_filemap_huge_fault(
struct vm_fault *vmf,
enum page_entry_size pe_size)
@@ -1140,7 +1140,7 @@ xfs_filemap_huge_fault(
(vmf->flags & FAULT_FLAG_WRITE));
}

-static int
+static vm_fault_t
xfs_filemap_page_mkwrite(
struct vm_fault *vmf)
{
@@ -1152,7 +1152,7 @@ xfs_filemap_page_mkwrite(
* on write faults. In reality, it needs to serialise against truncate and
* prepare memory for writing, so handle it as a standard write fault.
*/
-static int
+static vm_fault_t
xfs_filemap_pfn_mkwrite(
struct vm_fault *vmf)
{
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index a8a126259bc4..f598a826f0ac 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -6,7 +6,7 @@

#include <linux/fs.h> /* only for vma_is_dax() */

-extern int do_huge_pmd_anonymous_page(struct vm_fault *vmf);
+extern vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
extern int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
struct vm_area_struct *vma);
@@ -23,7 +23,7 @@ static inline void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
}
#endif

-extern int do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd);
+extern vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd);
extern struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
unsigned long addr,
pmd_t *pmd,
@@ -46,9 +46,9 @@ extern bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
unsigned long addr, pgprot_t newprot,
int prot_numa);
-int vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+vm_fault_t vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
pmd_t *pmd, pfn_t pfn, bool write);
-int vmf_insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+vm_fault_t vmf_insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
pud_t *pud, pfn_t pfn, bool write);
enum transparent_hugepage_flag {
TRANSPARENT_HUGEPAGE_FLAG,
@@ -216,7 +216,7 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
pud_t *pud, int flags);

-extern int do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t orig_pmd);
+extern vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t orig_pmd);

extern struct page *huge_zero_page;

@@ -321,7 +321,8 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
return NULL;
}

-static inline int do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t orig_pmd)
+static inline vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf,
+ pmd_t orig_pmd)
{
return 0;
}
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 36fa6a2a82e3..c779b2ffcd8a 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -105,7 +105,7 @@ void hugetlb_report_meminfo(struct seq_file *);
int hugetlb_report_node_meminfo(int, char *);
void hugetlb_show_meminfo(void);
unsigned long hugetlb_total_pages(void);
-int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long address, unsigned int flags);
int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm, pte_t *dst_pte,
struct vm_area_struct *dst_vma,
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 4bd87294219a..5423400f6f82 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -3,6 +3,7 @@
#define LINUX_IOMAP_H 1

#include <linux/types.h>
+#include <linux/mm_types.h>

struct fiemap_extent_info;
struct inode;
@@ -88,7 +89,8 @@ int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len,
bool *did_zero, const struct iomap_ops *ops);
int iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
const struct iomap_ops *ops);
-int iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops);
+vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf,
+ const struct iomap_ops *ops);
int iomap_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
loff_t start, loff_t len, const struct iomap_ops *ops);
loff_t iomap_seek_hole(struct inode *inode, loff_t offset,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 64d09e3afc24..574df95dfa5d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -700,10 +700,10 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
return pte;
}

-int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
+vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
struct page *page);
-int finish_fault(struct vm_fault *vmf);
-int finish_mkwrite_fault(struct vm_fault *vmf);
+vm_fault_t finish_fault(struct vm_fault *vmf);
+vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
#endif

/*
@@ -1233,29 +1233,36 @@ static inline void clear_page_pfmemalloc(struct page *page)
* just gets major/minor fault counters bumped up.
*/

-#define VM_FAULT_OOM 0x0001
-#define VM_FAULT_SIGBUS 0x0002
-#define VM_FAULT_MAJOR 0x0004
-#define VM_FAULT_WRITE 0x0008 /* Special case for get_user_pages */
-#define VM_FAULT_HWPOISON 0x0010 /* Hit poisoned small page */
-#define VM_FAULT_HWPOISON_LARGE 0x0020 /* Hit poisoned large page. Index encoded in upper bits */
-#define VM_FAULT_SIGSEGV 0x0040
-
-#define VM_FAULT_NOPAGE 0x0100 /* ->fault installed the pte, not return page */
-#define VM_FAULT_LOCKED 0x0200 /* ->fault locked the returned page */
-#define VM_FAULT_RETRY 0x0400 /* ->fault blocked, must retry */
-#define VM_FAULT_FALLBACK 0x0800 /* huge page fault failed, fall back to small */
-#define VM_FAULT_DONE_COW 0x1000 /* ->fault has fully handled COW */
-#define VM_FAULT_NEEDDSYNC 0x2000 /* ->fault did not modify page tables
- * and needs fsync() to complete (for
- * synchronous page faults in DAX) */
+#define VM_FAULT_OOM ((__force vm_fault_t)0x0001)
+#define VM_FAULT_SIGBUS ((__force vm_fault_t)0x0002)
+#define VM_FAULT_MAJOR ((__force vm_fault_t)0x0004)
+/* Special case for get_user_pages */
+#define VM_FAULT_WRITE ((__force vm_fault_t)0x0008)
+/* Hit poisoned small page */
+#define VM_FAULT_HWPOISON ((__force vm_fault_t)0x0010)
+/* Hit poisoned large page. Index encoded in upper bits */
+#define VM_FAULT_HWPOISON_LARGE ((__force vm_fault_t)0x0020)
+#define VM_FAULT_SIGSEGV ((__force vm_fault_t)0x0040)
+/* ->fault installed the pte, did not return a page */
+#define VM_FAULT_NOPAGE ((__force vm_fault_t)0x0100)
+/* ->fault locked the returned page */
+#define VM_FAULT_LOCKED ((__force vm_fault_t)0x0200)
+/* ->fault blocked, must retry */
+#define VM_FAULT_RETRY ((__force vm_fault_t)0x0400)
+/* huge page fault failed, fall back to small */
+#define VM_FAULT_FALLBACK ((__force vm_fault_t)0x0800)
+/* ->fault has fully handled COW */
+#define VM_FAULT_DONE_COW ((__force vm_fault_t)0x1000)
+/* ->fault did not modify page tables and needs fsync() to complete
+ * (for synchronous page faults in DAX) */
+#define VM_FAULT_NEEDDSYNC ((__force vm_fault_t)0x2000)

/* Only for use in architecture specific page fault handling: */
-#define VM_FAULT_BADMAP 0x010000
-#define VM_FAULT_BADACCESS 0x020000
-#define VM_FAULT_BADCONTEXT 0x040000
-#define VM_FAULT_SIGNAL 0x080000
-#define VM_FAULT_PFAULT 0x100000
+#define VM_FAULT_BADMAP ((__force vm_fault_t)0x010000)
+#define VM_FAULT_BADACCESS ((__force vm_fault_t)0x020000)
+#define VM_FAULT_BADCONTEXT ((__force vm_fault_t)0x040000)
+#define VM_FAULT_SIGNAL ((__force vm_fault_t)0x080000)
+#define VM_FAULT_PFAULT ((__force vm_fault_t)0x100000)

#define VM_FAULT_ERROR (VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \
VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \
@@ -1277,8 +1284,8 @@ static inline void clear_page_pfmemalloc(struct page *page)
{ VM_FAULT_NEEDDSYNC, "NEEDDSYNC" }

/* Encode hstate index for a hwpoisoned large page */
-#define VM_FAULT_SET_HINDEX(x) ((x) << 12)
-#define VM_FAULT_GET_HINDEX(x) (((x) >> 12) & 0xf)
+#define VM_FAULT_SET_HINDEX(x) ((__force vm_fault_t)((x) << 12))
+#define VM_FAULT_GET_HINDEX(x) (((__force unsigned int)(x) >> 12) & 0xf)

/*
* Can be called by the pagefault handler when it gets a VM_FAULT_OOM.
@@ -1391,8 +1398,8 @@ int generic_error_remove_page(struct address_space *mapping, struct page *page);
int invalidate_inode_page(struct page *page);

#ifdef CONFIG_MMU
-extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
- unsigned int flags);
+extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
+ unsigned long address, unsigned int flags);
extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
unsigned long address, unsigned int fault_flags,
bool *unlocked);
@@ -1401,7 +1408,7 @@ void unmap_mapping_pages(struct address_space *mapping,
void unmap_mapping_range(struct address_space *mapping,
loff_t const holebegin, loff_t const holelen, int even_cows);
#else
-static inline int handle_mm_fault(struct vm_area_struct *vma,
+static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
unsigned long address, unsigned int flags)
{
/* should never happen if there's no MMU */
@@ -2555,7 +2562,7 @@ static inline struct page *follow_page(struct vm_area_struct *vma,
#define FOLL_REMOTE 0x2000 /* we are working on non-current tsk/mm */
#define FOLL_COW 0x4000 /* internal GUP flag */

-static inline int vm_fault_to_errno(int vm_fault, int foll_flags)
+static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
{
if (vm_fault & VM_FAULT_OOM)
return -ENOMEM;
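
With every flag above carrying a __force vm_fault_t cast, a plain
integer can no longer masquerade as a fault code once sparse is
watching (make C=2). A deliberately broken handler, for illustration
only:

static vm_fault_t broken_fault(struct vm_fault *vmf)
{
	/*
	 * sparse: incorrect type in return expression (different base
	 * types) -- -EIO is a plain int here, and the right answer is
	 * VM_FAULT_SIGBUS.
	 */
	return -EIO;
}

vm_fault_to_errno(), retyped just above, remains the sanctioned way to
turn a fault code back into an errno for callers such as gup.
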
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 54f1e05ecf3e..da2b77a19911 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -22,7 +22,7 @@
#endif
#define AT_VECTOR_SIZE (2*(AT_VECTOR_SIZE_ARCH + AT_VECTOR_SIZE_BASE + 1))

-typedef int vm_fault_t;
+typedef unsigned __bitwise vm_fault_t;

struct address_space;
struct mem_cgroup;
@@ -619,7 +620,7 @@ struct vm_special_mapping {
* If non-NULL, then this is called to resolve page faults
* on the special mapping. If used, .pages is not checked.
*/
- int (*fault)(const struct vm_special_mapping *sm,
+ vm_fault_t (*fault)(const struct vm_special_mapping *sm,
struct vm_area_struct *vma,
struct vm_fault *vmf);

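Worth remembering when reading all the __force casts: __bitwise only
bites when sparse defines __CHECKER__; to the C compiler the typedef
above is a plain unsigned int. Sketched from memory of
include/uapi/linux/types.h (check the tree for the exact spelling):

#ifdef __CHECKER__
#define __bitwise	__attribute__((bitwise))
#else
#define __bitwise
#endif

typedef unsigned __bitwise vm_fault_t;
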
diff --git a/include/linux/oom.h b/include/linux/oom.h
index 553eb37def7e..80c56be07f74 100644
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -96,7 +96,7 @@ static inline bool mm_is_oom_victim(struct mm_struct *mm)
*
* Return 0 when the PF is safe VM_FAULT_SIGBUS otherwise.
*/
-static inline int check_stable_address_space(struct mm_struct *mm)
+static inline vm_fault_t check_stable_address_space(struct mm_struct *mm)
{
if (unlikely(test_bit(MMF_UNSTABLE, &mm->flags)))
return VM_FAULT_SIGBUS;
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 1d3877c39a00..e9a3d88e058f 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -134,7 +134,7 @@ static inline struct page *device_private_entry_to_page(swp_entry_t entry)
return pfn_to_page(swp_offset(entry));
}

-int device_private_entry_fault(struct vm_area_struct *vma,
+vm_fault_t device_private_entry_fault(struct vm_area_struct *vma,
unsigned long addr,
swp_entry_t entry,
unsigned int flags,
@@ -169,7 +169,7 @@ static inline struct page *device_private_entry_to_page(swp_entry_t entry)
return NULL;
}

-static inline int device_private_entry_fault(struct vm_area_struct *vma,
+static inline vm_fault_t device_private_entry_fault(struct vm_area_struct *vma,
unsigned long addr,
swp_entry_t entry,
unsigned int flags,
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index f2f3b68ba910..e8d47a9a2450 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -28,7 +28,7 @@
#define UFFD_SHARED_FCNTL_FLAGS (O_CLOEXEC | O_NONBLOCK)
#define UFFD_FLAGS_SET (EFD_SHARED_FCNTL_FLAGS)

-extern int handle_userfault(struct vm_fault *vmf, unsigned long reason);
+extern vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason);

extern ssize_t mcopy_atomic(struct mm_struct *dst_mm, unsigned long dst_start,
unsigned long src_start, unsigned long len);
@@ -75,7 +75,8 @@ extern void userfaultfd_unmap_complete(struct mm_struct *mm,
#else /* CONFIG_USERFAULTFD */

/* mm helpers */
-static inline int handle_userfault(struct vm_fault *vmf, unsigned long reason)
+static inline vm_fault_t handle_userfault(struct vm_fault *vmf,
+ unsigned long reason)
{
return VM_FAULT_SIGBUS;
}
diff --git a/ipc/shm.c b/ipc/shm.c
index 29978ee76c2e..051a3e1fb8df 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -408,7 +408,7 @@ void exit_shm(struct task_struct *task)
up_write(&shm_ids(ns).rwsem);
}

-static int shm_fault(struct vm_fault *vmf)
+static vm_fault_t shm_fault(struct vm_fault *vmf)
{
struct file *file = vmf->vma->vm_file;
struct shm_file_data *sfd = shm_file_data(file);
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 67612ce359ad..a535e09f0b0c 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5271,11 +5271,11 @@ void perf_event_update_userpage(struct perf_event *event)
}
EXPORT_SYMBOL_GPL(perf_event_update_userpage);

-static int perf_mmap_fault(struct vm_fault *vmf)
+static vm_fault_t perf_mmap_fault(struct vm_fault *vmf)
{
struct perf_event *event = vmf->vma->vm_file->private_data;
struct ring_buffer *rb;
- int ret = VM_FAULT_SIGBUS;
+ vm_fault_t ret = VM_FAULT_SIGBUS;

if (vmf->flags & FAULT_FLAG_MKWRITE) {
if (vmf->pgoff == 0)
diff --git a/mm/gup.c b/mm/gup.c
index 6c7c85e5d9c4..e8348a5de3b5 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -497,7 +497,7 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
unsigned long address, unsigned int *flags, int *nonblocking)
{
unsigned int fault_flags = 0;
- int ret;
+ vm_fault_t ret;

/* mlock all present pages, but do not fault in new pages */
if ((*flags & (FOLL_POPULATE | FOLL_MLOCK)) == FOLL_MLOCK)
@@ -815,7 +815,8 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
bool *unlocked)
{
struct vm_area_struct *vma;
- int ret, major = 0;
+ int major = 0;
+ vm_fault_t ret;

if (unlocked)
fault_flags |= FAULT_FLAG_ALLOW_RETRY;
@@ -829,7 +830,7 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
return -EFAULT;

ret = handle_mm_fault(vma, address, fault_flags);
- major |= ret & VM_FAULT_MAJOR;
+ major |= !!(ret & VM_FAULT_MAJOR);
if (ret & VM_FAULT_ERROR) {
int err = vm_fault_to_errno(ret, 0);

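The "!!" in fixup_user_fault() above is not cosmetic: ret &
VM_FAULT_MAJOR is now a restricted __bitwise value, and OR-ing it
straight into the plain-int major would draw a sparse warning (roughly
"restricted vm_fault_t degrades to integer"). Double negation collapses
it to a plain 0 or 1 first. In miniature:

	vm_fault_t ret = VM_FAULT_MAJOR | VM_FAULT_RETRY;
	int major = 0;

	/* major |= ret & VM_FAULT_MAJOR;   <-- sparse warning */
	major |= !!(ret & VM_FAULT_MAJOR);  /* plain int 1, no warning */
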
diff --git a/mm/hmm.c b/mm/hmm.c
index de7b6bf77201..5a568b75f7a4 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -299,7 +299,7 @@ static int hmm_vma_do_fault(struct mm_walk *walk, unsigned long addr,
struct hmm_vma_walk *hmm_vma_walk = walk->private;
struct hmm_range *range = hmm_vma_walk->range;
struct vm_area_struct *vma = walk->vma;
- int r;
+ vm_fault_t r;

flags |= hmm_vma_walk->block ? 0 : FAULT_FLAG_ALLOW_RETRY;
flags |= write_fault ? FAULT_FLAG_WRITE : 0;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 323acdd14e6e..5bc71dc19a4e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -544,14 +544,14 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
}
EXPORT_SYMBOL_GPL(thp_get_unmapped_area);

-static int __do_huge_pmd_anonymous_page(struct vm_fault *vmf, struct page *page,
- gfp_t gfp)
+static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
+ struct page *page, gfp_t gfp)
{
struct vm_area_struct *vma = vmf->vma;
struct mem_cgroup *memcg;
pgtable_t pgtable;
unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
- int ret = 0;
+ vm_fault_t ret = 0;

VM_BUG_ON_PAGE(!PageCompound(page), page);

@@ -587,7 +587,7 @@ static int __do_huge_pmd_anonymous_page(struct vm_fault *vmf, struct page *page,

/* Deliver the page fault to userland */
if (userfaultfd_missing(vma)) {
- int ret;
+ vm_fault_t ret;

spin_unlock(vmf->ptl);
mem_cgroup_cancel_charge(page, memcg, true);
@@ -666,7 +666,7 @@ static bool set_huge_zero_page(pgtable_t pgtable, struct mm_struct *mm,
return true;
}

-int do_huge_pmd_anonymous_page(struct vm_fault *vmf)
+vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
gfp_t gfp;
@@ -685,7 +685,7 @@ int do_huge_pmd_anonymous_page(struct vm_fault *vmf)
pgtable_t pgtable;
struct page *zero_page;
bool set;
- int ret;
+ vm_fault_t ret;
pgtable = pte_alloc_one(vma->vm_mm, haddr);
if (unlikely(!pgtable))
return VM_FAULT_OOM;
@@ -755,7 +755,7 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
spin_unlock(ptl);
}

-int vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+vm_fault_t vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
pmd_t *pmd, pfn_t pfn, bool write)
{
pgprot_t pgprot = vma->vm_page_prot;
@@ -815,7 +815,7 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
spin_unlock(ptl);
}

-int vmf_insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+vm_fault_t vmf_insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
pud_t *pud, pfn_t pfn, bool write)
{
pgprot_t pgprot = vma->vm_page_prot;
@@ -1121,15 +1121,16 @@ void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd)
spin_unlock(vmf->ptl);
}

-static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
- struct page *page)
+static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
+ pmd_t orig_pmd, struct page *page)
{
struct vm_area_struct *vma = vmf->vma;
unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
struct mem_cgroup *memcg;
pgtable_t pgtable;
pmd_t _pmd;
- int ret = 0, i;
+ vm_fault_t ret = 0;
+ int i;
struct page **pages;
unsigned long mmun_start; /* For mmu_notifiers */
unsigned long mmun_end; /* For mmu_notifiers */
@@ -1239,7 +1240,7 @@ static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
goto out;
}

-int do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
+vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
{
struct vm_area_struct *vma = vmf->vma;
struct page *page = NULL, *new_page;
@@ -1248,7 +1249,7 @@ int do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
unsigned long mmun_start; /* For mmu_notifiers */
unsigned long mmun_end; /* For mmu_notifiers */
gfp_t huge_gfp; /* for allocation and charge */
- int ret = 0;
+ vm_fault_t ret = 0;

vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd);
VM_BUG_ON_VMA(!vma->anon_vma, vma);
@@ -1459,7 +1460,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
}

/* NUMA hinting page fault entry point for trans huge pmds */
-int do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
+vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
{
struct vm_area_struct *vma = vmf->vma;
struct anon_vma *anon_vma = NULL;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 129088710510..8809aaff4add 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3159,7 +3159,7 @@ static unsigned long hugetlb_vm_op_pagesize(struct vm_area_struct *vma)
* hugepage VMA. do_page_fault() is supposed to trap this, so BUG if we get
* this far.
*/
-static int hugetlb_vm_op_fault(struct vm_fault *vmf)
+static vm_fault_t hugetlb_vm_op_fault(struct vm_fault *vmf)
{
BUG();
return 0;
@@ -3499,16 +3499,17 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
* cannot race with other handlers or page migration.
* Keep the pte_same checks anyway to make transition from the mutex easier.
*/
-static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
+static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long address, pte_t *ptep,
struct page *pagecache_page, spinlock_t *ptl)
{
pte_t pte;
struct hstate *h = hstate_vma(vma);
struct page *old_page, *new_page;
- int ret = 0, outside_reserve = 0;
+ int outside_reserve = 0;
unsigned long mmun_start; /* For mmu_notifiers */
unsigned long mmun_end; /* For mmu_notifiers */
+ vm_fault_t ret = 0;

pte = huge_ptep_get(ptep);
old_page = pte_page(pte);
@@ -3675,12 +3676,13 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
return 0;
}

-static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
- struct address_space *mapping, pgoff_t idx,
- unsigned long address, pte_t *ptep, unsigned int flags)
+static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
+ struct vm_area_struct *vma, struct address_space *mapping,
+ pgoff_t idx, unsigned long address, pte_t *ptep,
+ unsigned int flags)
{
struct hstate *h = hstate_vma(vma);
- int ret = VM_FAULT_SIGBUS;
+ vm_fault_t ret = VM_FAULT_SIGBUS;
int anon_rmap = 0;
unsigned long size;
struct page *page;
@@ -3742,8 +3744,7 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,

page = alloc_huge_page(vma, address, 0);
if (IS_ERR(page)) {
- ret = PTR_ERR(page);
- if (ret == -ENOMEM)
+ if (PTR_ERR(page) == -ENOMEM)
ret = VM_FAULT_OOM;
else
ret = VM_FAULT_SIGBUS;
@@ -3870,12 +3871,12 @@ u32 hugetlb_fault_mutex_hash(struct hstate *h, struct mm_struct *mm,
}
#endif

-int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long address, unsigned int flags)
{
pte_t *ptep, entry;
spinlock_t *ptl;
- int ret;
+ vm_fault_t ret;
u32 hash;
pgoff_t idx;
struct page *page = NULL;
@@ -4206,7 +4207,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
if (absent || is_swap_pte(huge_ptep_get(pte)) ||
((flags & FOLL_WRITE) &&
!huge_pte_write(huge_ptep_get(pte)))) {
- int ret;
+ vm_fault_t ret;
unsigned int fault_flags = 0;

if (pte)
diff --git a/mm/internal.h b/mm/internal.h
index 62d8c34e63d5..e12210f07393 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -38,7 +38,7 @@

void page_writeback_init(void);

-int do_swap_page(struct vm_fault *vmf);
+vm_fault_t do_swap_page(struct vm_fault *vmf);

void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
unsigned long floor, unsigned long ceiling);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d7b2a4bf8671..778fd407ae93 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -880,7 +880,8 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
unsigned long address, pmd_t *pmd,
int referenced)
{
- int swapped_in = 0, ret = 0;
+ int swapped_in = 0;
+ vm_fault_t ret = 0;
struct vm_fault vmf = {
.vma = vma,
.address = address,
diff --git a/mm/ksm.c b/mm/ksm.c
index a6d43cf9a982..4be259489409 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -470,7 +470,7 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
{
struct page *page;
- int ret = 0;
+ vm_fault_t ret = 0;

do {
cond_resched();
diff --git a/mm/memory.c b/mm/memory.c
index 14578158ed20..3634a012c388 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2370,9 +2370,9 @@ static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
*
* We do this without the lock held, so that it can sleep if it needs to.
*/
-static int do_page_mkwrite(struct vm_fault *vmf)
+static vm_fault_t do_page_mkwrite(struct vm_fault *vmf)
{
- int ret;
+ vm_fault_t ret;
struct page *page = vmf->page;
unsigned int old_flags = vmf->flags;

@@ -2476,7 +2476,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
* held to the old page, as well as updating the rmap.
* - In any case, unlock the PTL and drop the reference we took to the old page.
*/
-static int wp_page_copy(struct vm_fault *vmf)
+static vm_fault_t wp_page_copy(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct mm_struct *mm = vma->vm_mm;
@@ -2624,7 +2624,7 @@ static int wp_page_copy(struct vm_fault *vmf)
* The function expects the page to be locked or other protection against
* concurrent faults / writeback (such as DAX radix tree locks).
*/
-int finish_mkwrite_fault(struct vm_fault *vmf)
+vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
{
WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
@@ -2645,12 +2645,12 @@ int finish_mkwrite_fault(struct vm_fault *vmf)
* Handle write page faults for VM_MIXEDMAP or VM_PFNMAP for a VM_SHARED
* mapping
*/
-static int wp_pfn_shared(struct vm_fault *vmf)
+static vm_fault_t wp_pfn_shared(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;

if (vma->vm_ops && vma->vm_ops->pfn_mkwrite) {
- int ret;
+ vm_fault_t ret;

pte_unmap_unlock(vmf->pte, vmf->ptl);
vmf->flags |= FAULT_FLAG_MKWRITE;
@@ -2663,7 +2663,7 @@ static int wp_pfn_shared(struct vm_fault *vmf)
return VM_FAULT_WRITE;
}

-static int wp_page_shared(struct vm_fault *vmf)
+static vm_fault_t wp_page_shared(struct vm_fault *vmf)
__releases(vmf->ptl)
{
struct vm_area_struct *vma = vmf->vma;
@@ -2671,7 +2671,7 @@ static int wp_page_shared(struct vm_fault *vmf)
get_page(vmf->page);

if (vma->vm_ops && vma->vm_ops->page_mkwrite) {
- int tmp;
+ vm_fault_t tmp;

pte_unmap_unlock(vmf->pte, vmf->ptl);
tmp = do_page_mkwrite(vmf);
@@ -2714,7 +2714,7 @@ static int wp_page_shared(struct vm_fault *vmf)
* but allow concurrent faults), with pte both mapped and locked.
* We return with mmap_sem still held, but pte unmapped and unlocked.
*/
-static int do_wp_page(struct vm_fault *vmf)
+static vm_fault_t do_wp_page(struct vm_fault *vmf)
__releases(vmf->ptl)
{
struct vm_area_struct *vma = vmf->vma;
@@ -2890,7 +2890,7 @@ EXPORT_SYMBOL(unmap_mapping_range);
* We return with the mmap_sem locked or unlocked in the same cases
* as does filemap_fault().
*/
-int do_swap_page(struct vm_fault *vmf)
+vm_fault_t do_swap_page(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct page *page = NULL, *swapcache;
@@ -2899,7 +2899,7 @@ int do_swap_page(struct vm_fault *vmf)
pte_t pte;
int locked;
int exclusive = 0;
- int ret = 0;
+ vm_fault_t ret = 0;

if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
goto out;
@@ -3110,12 +3110,12 @@ int do_swap_page(struct vm_fault *vmf)
* but allow concurrent faults), and pte mapped but not yet locked.
* We return with mmap_sem still held, but pte unmapped and unlocked.
*/
-static int do_anonymous_page(struct vm_fault *vmf)
+static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct mem_cgroup *memcg;
struct page *page;
- int ret = 0;
+ vm_fault_t ret = 0;
pte_t entry;

/* File mapping without ->vm_ops ? */
@@ -3224,10 +3224,10 @@ static int do_anonymous_page(struct vm_fault *vmf)
* released depending on flags and vma->vm_ops->fault() return value.
* See filemap_fault() and __lock_page_retry().
*/
-static int __do_fault(struct vm_fault *vmf)
+static vm_fault_t __do_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
- int ret;
+ vm_fault_t ret;

ret = vma->vm_ops->fault(vmf);
if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY |
@@ -3261,7 +3261,7 @@ static int pmd_devmap_trans_unstable(pmd_t *pmd)
return pmd_devmap(*pmd) || pmd_trans_unstable(pmd);
}

-static int pte_alloc_one_map(struct vm_fault *vmf)
+static vm_fault_t pte_alloc_one_map(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;

@@ -3337,13 +3337,14 @@ static void deposit_prealloc_pte(struct vm_fault *vmf)
vmf->prealloc_pte = NULL;
}

-static int do_set_pmd(struct vm_fault *vmf, struct page *page)
+static vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
{
struct vm_area_struct *vma = vmf->vma;
bool write = vmf->flags & FAULT_FLAG_WRITE;
unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
pmd_t entry;
- int i, ret;
+ vm_fault_t ret;
+ int i;

if (!transhuge_vma_suitable(vma, haddr))
return VM_FAULT_FALLBACK;
@@ -3414,13 +3415,13 @@ static int do_set_pmd(struct vm_fault *vmf, struct page *page)
* Target users are page handler itself and implementations of
* vm_ops->map_pages.
*/
-int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
+vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
struct page *page)
{
struct vm_area_struct *vma = vmf->vma;
bool write = vmf->flags & FAULT_FLAG_WRITE;
pte_t entry;
- int ret;
+ vm_fault_t ret;

if (pmd_none(*vmf->pmd) && PageTransCompound(page) &&
IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE)) {
@@ -3479,10 +3480,10 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
* The function expects the page to be locked and on success it consumes a
* reference of a page being mapped (for the PTE which maps it).
*/
-int finish_fault(struct vm_fault *vmf)
+vm_fault_t finish_fault(struct vm_fault *vmf)
{
struct page *page;
- int ret = 0;
+ vm_fault_t ret = 0;

/* Did we COW the page? */
if ((vmf->flags & FAULT_FLAG_WRITE) &&
@@ -3568,12 +3569,13 @@ late_initcall(fault_around_debugfs);
* (and therefore to page order). This way it's easier to guarantee
* that we don't cross page table boundaries.
*/
-static int do_fault_around(struct vm_fault *vmf)
+static vm_fault_t do_fault_around(struct vm_fault *vmf)
{
unsigned long address = vmf->address, nr_pages, mask;
pgoff_t start_pgoff = vmf->pgoff;
pgoff_t end_pgoff;
- int off, ret = 0;
+ vm_fault_t ret = 0;
+ int off;

nr_pages = READ_ONCE(fault_around_bytes) >> PAGE_SHIFT;
mask = ~(nr_pages * PAGE_SIZE - 1) & PAGE_MASK;
@@ -3623,10 +3625,10 @@ static int do_fault_around(struct vm_fault *vmf)
return ret;
}

-static int do_read_fault(struct vm_fault *vmf)
+static vm_fault_t do_read_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
- int ret = 0;
+ vm_fault_t ret = 0;

/*
* Let's call ->map_pages() first and use ->fault() as fallback
@@ -3650,10 +3652,10 @@ static int do_read_fault(struct vm_fault *vmf)
return ret;
}

-static int do_cow_fault(struct vm_fault *vmf)
+static vm_fault_t do_cow_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
- int ret;
+ vm_fault_t ret;

if (unlikely(anon_vma_prepare(vma)))
return VM_FAULT_OOM;
@@ -3689,10 +3691,10 @@ static int do_cow_fault(struct vm_fault *vmf)
return ret;
}

-static int do_shared_fault(struct vm_fault *vmf)
+static vm_fault_t do_shared_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
- int ret, tmp;
+ vm_fault_t ret, tmp;

ret = __do_fault(vmf);
if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
@@ -3730,10 +3732,10 @@ static int do_shared_fault(struct vm_fault *vmf)
* The mmap_sem may have been released depending on flags and our
* return value. See filemap_fault() and __lock_page_or_retry().
*/
-static int do_fault(struct vm_fault *vmf)
+static vm_fault_t do_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
- int ret;
+ vm_fault_t ret;

/* The VMA was not fully populated on mmap() or missing VM_DONTEXPAND */
if (!vma->vm_ops->fault)
@@ -3768,7 +3770,7 @@ static int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
return mpol_misplaced(page, vma, addr);
}

-static int do_numa_page(struct vm_fault *vmf)
+static vm_fault_t do_numa_page(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct page *page = NULL;
@@ -3858,7 +3860,7 @@ static int do_numa_page(struct vm_fault *vmf)
return 0;
}

-static inline int create_huge_pmd(struct vm_fault *vmf)
+static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
{
if (vma_is_anonymous(vmf->vma))
return do_huge_pmd_anonymous_page(vmf);
@@ -3868,7 +3870,7 @@ static inline int create_huge_pmd(struct vm_fault *vmf)
}

/* `inline' is required to avoid gcc 4.1.2 build error */
-static inline int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
+static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
{
if (vma_is_anonymous(vmf->vma))
return do_huge_pmd_wp_page(vmf, orig_pmd);
@@ -3887,7 +3889,7 @@ static inline bool vma_is_accessible(struct vm_area_struct *vma)
return vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE);
}

-static int create_huge_pud(struct vm_fault *vmf)
+static vm_fault_t create_huge_pud(struct vm_fault *vmf)
{
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/* No support for anonymous transparent PUD pages yet */
@@ -3899,7 +3901,7 @@ static int create_huge_pud(struct vm_fault *vmf)
return VM_FAULT_FALLBACK;
}

-static int wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
+static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
{
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/* No support for anonymous transparent PUD pages yet */
@@ -3926,7 +3928,7 @@ static int wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
* The mmap_sem may have been released depending on flags and our return value.
* See filemap_fault() and __lock_page_or_retry().
*/
-static int handle_pte_fault(struct vm_fault *vmf)
+static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
{
pte_t entry;

@@ -4014,8 +4016,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
* The mmap_sem may have been released depending on flags and our
* return value. See filemap_fault() and __lock_page_or_retry().
*/
-static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
- unsigned int flags)
+static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
+ unsigned long address, unsigned int flags)
{
struct vm_fault vmf = {
.vma = vma,
@@ -4028,7 +4030,7 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
struct mm_struct *mm = vma->vm_mm;
pgd_t *pgd;
p4d_t *p4d;
- int ret;
+ vm_fault_t ret;

pgd = pgd_offset(mm, address);
p4d = p4d_alloc(mm, pgd, address);
@@ -4103,10 +4105,10 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
* The mmap_sem may have been released depending on flags and our
* return value. See filemap_fault() and __lock_page_or_retry().
*/
-int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
+vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
unsigned int flags)
{
- int ret;
+ vm_fault_t ret;

__set_current_state(TASK_RUNNING);

diff --git a/mm/mmap.c b/mm/mmap.c
index 135b1d36da17..d28db0b62601 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3277,7 +3277,7 @@ void vm_stat_account(struct mm_struct *mm, vm_flags_t flags, long npages)
mm->data_vm += npages;
}

-static int special_mapping_fault(struct vm_fault *vmf);
+static vm_fault_t special_mapping_fault(struct vm_fault *vmf);

/*
* Having a close hook prevents vma merging regardless of flags.
@@ -3316,7 +3316,7 @@ static const struct vm_operations_struct legacy_special_mapping_vmops = {
.fault = special_mapping_fault,
};

-static int special_mapping_fault(struct vm_fault *vmf)
+static vm_fault_t special_mapping_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
pgoff_t pgoff;
diff --git a/mm/shmem.c b/mm/shmem.c
index 4d8a6f8e571f..49f447d32f8f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -123,7 +123,7 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
struct page **pagep, enum sgp_type sgp,
gfp_t gfp, struct vm_area_struct *vma,
- struct vm_fault *vmf, int *fault_type);
+ struct vm_fault *vmf, vm_fault_t *fault_type);

int shmem_getpage(struct inode *inode, pgoff_t index,
struct page **pagep, enum sgp_type sgp)
@@ -1620,7 +1620,8 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
*/
static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
struct page **pagep, enum sgp_type sgp, gfp_t gfp,
- struct vm_area_struct *vma, struct vm_fault *vmf, int *fault_type)
+ struct vm_area_struct *vma, struct vm_fault *vmf,
+ vm_fault_t *fault_type)
{
struct address_space *mapping = inode->i_mapping;
struct shmem_inode_info *info = SHMEM_I(inode);
@@ -1947,14 +1948,14 @@ static int synchronous_wake_function(wait_queue_entry_t *wait, unsigned mode, in
return ret;
}

-static int shmem_fault(struct vm_fault *vmf)
+static vm_fault_t shmem_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct inode *inode = file_inode(vma->vm_file);
gfp_t gfp = mapping_gfp_mask(inode->i_mapping);
enum sgp_type sgp;
int error;
- int ret = VM_FAULT_LOCKED;
+ vm_fault_t ret = VM_FAULT_LOCKED;

/*
* Trinity finds that probing a hole which tmpfs is punching can
diff --git a/samples/vfio-mdev/mbochs.c b/samples/vfio-mdev/mbochs.c
index 2960e26c6ea4..7743fbc6ad58 100644
--- a/samples/vfio-mdev/mbochs.c
+++ b/samples/vfio-mdev/mbochs.c
@@ -657,7 +657,7 @@ static void mbochs_put_pages(struct mdev_state *mdev_state)
dev_dbg(dev, "%s: %d pages released\n", __func__, count);
}

-static int mbochs_region_vm_fault(struct vm_fault *vmf)
+static vm_fault_t mbochs_region_vm_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct mdev_state *mdev_state = vma->vm_private_data;
@@ -695,7 +695,7 @@ static int mbochs_mmap(struct mdev_device *mdev, struct vm_area_struct *vma)
return 0;
}

-static int mbochs_dmabuf_vm_fault(struct vm_fault *vmf)
+static vm_fault_t mbochs_dmabuf_vm_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct mbochs_dmabuf *dmabuf = vma->vm_private_data;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c7b2e927f699..3da1ad291d34 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2340,7 +2340,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
}
EXPORT_SYMBOL_GPL(kvm_vcpu_on_spin);

-static int kvm_vcpu_fault(struct vm_fault *vmf)
+static vm_fault_t kvm_vcpu_fault(struct vm_fault *vmf)
{
struct kvm_vcpu *vcpu = vmf->vma->vm_file->private_data;
struct page *page;
--
2.17.0
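
The shmem hunks above are the subtle part of this patch: shmem_getpage_gfp()
keeps returning an errno while reporting VM_FAULT_* bits through its
out-parameter, which is why only the pointer changes type. A condensed sketch
of the resulting calling convention in shmem_fault() (not the exact mm/shmem.c
code; the sgp/gfp setup is simplified):

static vm_fault_t shmem_fault(struct vm_fault *vmf)
{
	struct inode *inode = file_inode(vmf->vma->vm_file);
	vm_fault_t ret = VM_FAULT_LOCKED;
	int error;

	/* errno comes back in the return value, VM_FAULT_* bits in &ret */
	error = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, SGP_CACHE,
				  mapping_gfp_mask(inode->i_mapping),
				  vmf->vma, vmf, &ret);
	if (error)
		return (error == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS;
	return ret;
}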

2018-05-16 05:43:46

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 12/14] lustre: separate errno from VM_FAULT_* values

Signed-off-by: Christoph Hellwig <[email protected]>
---
.../staging/lustre/lustre/llite/llite_mmap.c | 37 +++++++------------
.../lustre/lustre/llite/vvp_internal.h | 2 +-
2 files changed, 14 insertions(+), 25 deletions(-)

diff --git a/drivers/staging/lustre/lustre/llite/llite_mmap.c b/drivers/staging/lustre/lustre/llite/llite_mmap.c
index 214b07554e62..061d98871959 100644
--- a/drivers/staging/lustre/lustre/llite/llite_mmap.c
+++ b/drivers/staging/lustre/lustre/llite/llite_mmap.c
@@ -231,23 +231,18 @@ static int ll_page_mkwrite0(struct vm_area_struct *vma, struct page *vmpage,
return result;
}

-static inline int to_fault_error(int result)
+static inline vm_fault_t to_fault_error(int result)
{
switch (result) {
case 0:
- result = VM_FAULT_LOCKED;
- break;
+ return VM_FAULT_LOCKED;
case -EFAULT:
- result = VM_FAULT_NOPAGE;
- break;
+ return VM_FAULT_NOPAGE;
case -ENOMEM:
- result = VM_FAULT_OOM;
- break;
+ return VM_FAULT_OOM;
default:
- result = VM_FAULT_SIGBUS;
- break;
+ return VM_FAULT_SIGBUS;
}
- return result;
}

/**
@@ -261,7 +256,7 @@ static inline int to_fault_error(int result)
* \retval VM_FAULT_ERROR on general error
* \retval NOPAGE_OOM no memory to allocate a new page
*/
-static int ll_fault0(struct vm_area_struct *vma, struct vm_fault *vmf)
+static vm_fault_t ll_fault0(struct vm_area_struct *vma, struct vm_fault *vmf)
{
struct lu_env *env;
struct cl_io *io;
@@ -269,7 +264,7 @@ static int ll_fault0(struct vm_area_struct *vma, struct vm_fault *vmf)
struct page *vmpage;
unsigned long ra_flags;
int result = 0;
- int fault_ret = 0;
+ vm_fault_t fault_ret = 0;
u16 refcheck;

env = cl_env_get(&refcheck);
@@ -323,7 +318,7 @@ static int ll_fault0(struct vm_area_struct *vma, struct vm_fault *vmf)
return fault_ret;
}

-static int ll_fault(struct vm_fault *vmf)
+static vm_fault_t ll_fault(struct vm_fault *vmf)
{
int count = 0;
bool printed = false;
@@ -364,7 +359,7 @@ static int ll_fault(struct vm_fault *vmf)
return result;
}

-static int ll_page_mkwrite(struct vm_fault *vmf)
+static vm_fault_t ll_page_mkwrite(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
int count = 0;
@@ -390,22 +385,16 @@ static int ll_page_mkwrite(struct vm_fault *vmf)
switch (result) {
case 0:
LASSERT(PageLocked(vmf->page));
- result = VM_FAULT_LOCKED;
- break;
+ return VM_FAULT_LOCKED;
case -ENODATA:
case -EAGAIN:
case -EFAULT:
- result = VM_FAULT_NOPAGE;
- break;
+ return VM_FAULT_NOPAGE;
case -ENOMEM:
- result = VM_FAULT_OOM;
- break;
+ return VM_FAULT_OOM;
default:
- result = VM_FAULT_SIGBUS;
- break;
+ return VM_FAULT_SIGBUS;
}
-
- return result;
}

/**
diff --git a/drivers/staging/lustre/lustre/llite/vvp_internal.h b/drivers/staging/lustre/lustre/llite/vvp_internal.h
index 7d3abb43584a..c194966a3d82 100644
--- a/drivers/staging/lustre/lustre/llite/vvp_internal.h
+++ b/drivers/staging/lustre/lustre/llite/vvp_internal.h
@@ -83,7 +83,7 @@ struct vvp_io {
/**
* fault API used bitflags for return code.
*/
- unsigned int ft_flags;
+ vm_fault_t ft_flags;
/**
* check that flags are from filemap_fault
*/
--
2.17.0
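
With the errno translation centralized in to_fault_error(), a fault handler
collapses to a single call at the end. Roughly (an assumed shape of the call
site with a hypothetical I/O helper, not the exact lustre code):

static vm_fault_t example_fault(struct vm_fault *vmf)
{
	int result = do_the_io(vmf);	/* hypothetical: returns 0 or -errno */

	return to_fault_error(result);
}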

2018-05-16 09:53:03

by Daniel Vetter

[permalink] [raw]
Subject: Re: [PATCH 10/14] vgem: separate errno from VM_FAULT_* values

On Wed, May 16, 2018 at 07:43:44AM +0200, Christoph Hellwig wrote:
> And streamline the code in vgem_fault with early returns so that it is
> a little bit more readable.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> drivers/gpu/drm/vgem/vgem_drv.c | 51 +++++++++++++++------------------
> 1 file changed, 23 insertions(+), 28 deletions(-)
>
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index 2524ff116f00..a261e0aab83a 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -61,12 +61,13 @@ static void vgem_gem_free_object(struct drm_gem_object *obj)
> kfree(vgem_obj);
> }
>
> -static int vgem_gem_fault(struct vm_fault *vmf)
> +static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
> {
> struct vm_area_struct *vma = vmf->vma;
> struct drm_vgem_gem_object *obj = vma->vm_private_data;
> /* We don't use vmf->pgoff since that has the fake offset */
> unsigned long vaddr = vmf->address;
> + struct page *page;
> int ret;
> loff_t num_pages;
> pgoff_t page_offset;
> @@ -85,35 +86,29 @@ static int vgem_gem_fault(struct vm_fault *vmf)
> ret = 0;
> }
> mutex_unlock(&obj->pages_lock);
> - if (ret) {
> - struct page *page;
> -
> - page = shmem_read_mapping_page(
> - file_inode(obj->base.filp)->i_mapping,
> - page_offset);
> - if (!IS_ERR(page)) {
> - vmf->page = page;
> - ret = 0;
> - } else switch (PTR_ERR(page)) {
> - case -ENOSPC:
> - case -ENOMEM:
> - ret = VM_FAULT_OOM;
> - break;
> - case -EBUSY:
> - ret = VM_FAULT_RETRY;
> - break;
> - case -EFAULT:
> - case -EINVAL:
> - ret = VM_FAULT_SIGBUS;
> - break;
> - default:
> - WARN_ON(PTR_ERR(page));
> - ret = VM_FAULT_SIGBUS;
> - break;
> - }
> + if (!ret)
> + return 0;
> +
> + page = shmem_read_mapping_page(file_inode(obj->base.filp)->i_mapping,
> + page_offset);
> + if (!IS_ERR(page)) {
> + vmf->page = page;
> + return 0;
> + }
>
> + switch (PTR_ERR(page)) {
> + case -ENOSPC:
> + case -ENOMEM:
> + return VM_FAULT_OOM;
> + case -EBUSY:
> + return VM_FAULT_RETRY;
> + case -EFAULT:
> + case -EINVAL:
> + return VM_FAULT_SIGBUS;
> + default:
> + WARN_ON(PTR_ERR(page));
> + return VM_FAULT_SIGBUS;
> }
> - return ret;

Reviewed-by: Daniel Vetter <[email protected]>

Want me to merge this through drm-misc, or do you plan to pick it up yourself?
-Daniel

> }
>
> static const struct vm_operations_struct vgem_gem_vm_ops = {
> --
> 2.17.0
>

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2018-05-16 11:13:29

by David Sterba

[permalink] [raw]
Subject: Re: [PATCH 06/14] btrfs: separate errno from VM_FAULT_* values

On Wed, May 16, 2018 at 07:43:40AM +0200, Christoph Hellwig wrote:
> Signed-off-by: Christoph Hellwig <[email protected]>

Reviewed-by: David Sterba <[email protected]>

I can add it to the btrfs queue now, unless you need the patch for the
rest of the series.

2018-05-16 11:18:51

by Matthew Wilcox

[permalink] [raw]
Subject: Re: [PATCH 01/14] orangefs: don't return errno values from ->fault

On Wed, May 16, 2018 at 07:43:35AM +0200, Christoph Hellwig wrote:
> + rc = orangefs_inode_getattr(file->f_mapping->host, 0, 1, STATX_SIZE);
> if (rc) {
> gossip_err("%s: orangefs_inode_getattr failed, "
> "rc:%d:.\n", __func__, rc);
> - return rc;
> + return VM_FAULT_SIGBUS;

Nope. orangefs_inode_getattr can return -ENOMEM.

> }
> return filemap_fault(vmf);
> }
> --
> 2.17.0
>
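
A minimal sketch of the mapping Matthew is asking for, as a hypothetical
helper (not part of the posted series), so that -ENOMEM propagates as
VM_FAULT_OOM instead of being flattened to SIGBUS:

static vm_fault_t orangefs_fault_error(int rc)
{
	if (rc == -ENOMEM)
		return VM_FAULT_OOM;	/* let the core MM treat it as OOM */
	return VM_FAULT_SIGBUS;		/* -EIO and anything else */
}

The error path in orangefs_fault() would then end with
"return orangefs_fault_error(rc);" rather than an unconditional
VM_FAULT_SIGBUS.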

2018-05-16 11:23:47

by Matthew Wilcox

[permalink] [raw]
Subject: Re: vm_fault_t conversion, for real

On Wed, May 16, 2018 at 07:43:34AM +0200, Christoph Hellwig wrote:
> this series tries to actually turn vm_fault_t into a type that can be
> typechecked and checks the fallout instead of sprinkling random
> annotations without context.

Yes, why should we have small tasks that newcomers can do when the mighty
Christoph Hellwig can swoop in and take over from them? Seriously,
can't your talents find a better use than this?

> The first one fixes a real bug in orangefs, the second and third fix
> mismatched existing vm_fault_t annotations on the same function, the
> fourth removes an unused export that was in the chain. The remainder
> until the last one do some not quite trivial conversions, and the last
> one does the trivial mass annotation and flips vm_fault_t to a __bitwise
> unsigned int - the unsigned means we also get plain compiler type
> checking for the new ->fault signature even without sparse.

Yes, that was (part of) the eventual goal. Well done. Would you like
a biscuit?

2018-05-16 11:28:13

by Matthew Wilcox

[permalink] [raw]
Subject: Re: [PATCH 14/14] mm: turn on vm_fault_t type checking

On Wed, May 16, 2018 at 07:43:48AM +0200, Christoph Hellwig wrote:
> Switch the vm_fault_t typedef to an unsigned int with __bitwise
> annotations. This both catches any old ->fault or ->page_mkwrite
> instance with plain compiler type checking, and finds more intricate
> problems with sparse.

Come on, Christoph; you know better than this. This patch is completely
unreviewable. Split it into one patch per maintainer tree, and in any
event, the patch to convert vm_fault_t to an unsigned int should be
separated from all the trivial conversions.
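
To make the sparse angle concrete, a minimal sketch of what the __bitwise
typedef buys (an assumed illustration, not a hunk from the series): sparse
treats the type as restricted, so plain integers no longer silently become
fault codes:

typedef unsigned __bitwise vm_fault_t;

#define VM_FAULT_SIGBUS	((__force vm_fault_t)0x0002)

static vm_fault_t ok_fault(void)
{
	return VM_FAULT_SIGBUS;	/* fine: already a vm_fault_t */
}

static vm_fault_t broken_fault(void)
{
	return -EFAULT;	/* sparse: incorrect type in return expression */
}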

2018-05-16 13:01:59

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 10/14] vgem: separate errno from VM_FAULT_* values

On Wed, May 16, 2018 at 11:53:03AM +0200, Daniel Vetter wrote:
> Reviewed-by: Daniel Vetter <[email protected]>
>
> Want me to merge this through drm-misc, or do you plan to pick it up yourself?

For now I just want an honest discussion about whether people actually
want the vm_fault_t change, with the whole picture in place.

2018-05-16 13:03:09

by Christoph Hellwig

[permalink] [raw]
Subject: Re: vm_fault_t conversion, for real

On Wed, May 16, 2018 at 04:23:47AM -0700, Matthew Wilcox wrote:
> On Wed, May 16, 2018 at 07:43:34AM +0200, Christoph Hellwig wrote:
> > this series tries to actually turn vm_fault_t into a type that can be
> > typechecked and checks the fallout instead of sprinkling random
> > annotations without context.
>
> Yes, why should we have small tasks that newcomers can do when the mighty
> Christoph Hellwig can swoop in and take over from them? Seriously,
> can't your talents find a better use than this?

I've spent less time on this series than on arguing with you and Souptick
about these changes, only to get ignored and yelled at as an "asshole
maintainer". So yes, I could have done more productive things if you
hadn't forced this escalation.

2018-05-16 13:03:44

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 14/14] mm: turn on vm_fault_t type checking

On Wed, May 16, 2018 at 04:28:13AM -0700, Matthew Wilcox wrote:
> On Wed, May 16, 2018 at 07:43:48AM +0200, Christoph Hellwig wrote:
> > Switch the vm_fault_t typedef to an unsigned int with __bitwise
> > annotations. This both catches any old ->fault or ->page_mkwrite
> > instance with plain compiler type checking, and finds more intricate
> > problems with sparse.
>
> Come on, Christoph; you know better than this. This patch is completely
> unreviewable. Split it into one patch per maintainer tree, and in any
> event, the patch to convert vm_fault_t to an unsigned int should be
> separated from all the trivial conversions.

The whole point is that tiny split patches for mechanical translations
are totally pointless. Switching the typedef might be worth splitting
out if people really insist.

2018-05-16 13:13:04

by Matthew Wilcox

[permalink] [raw]
Subject: Re: [PATCH 10/14] vgem: separate errno from VM_FAULT_* values

On Wed, May 16, 2018 at 03:01:59PM +0200, Christoph Hellwig wrote:
> On Wed, May 16, 2018 at 11:53:03AM +0200, Daniel Vetter wrote:
> > Reviewed-by: Daniel Vetter <[email protected]>
> >
> > Want me to merge this through drm-misc, or do you plan to pick it up yourself?
>
> For now I just want an honest discussion about whether people actually
> want the vm_fault_t change, with the whole picture in place.

That discussion already happened on the -mm mailing list. And again
at LSFMM. Both times the answer was yes.

2018-05-16 13:22:56

by Matthew Wilcox

[permalink] [raw]
Subject: Re: vm_fault_t conversion, for real

On Wed, May 16, 2018 at 03:03:09PM +0200, Christoph Hellwig wrote:
> On Wed, May 16, 2018 at 04:23:47AM -0700, Matthew Wilcox wrote:
> > On Wed, May 16, 2018 at 07:43:34AM +0200, Christoph Hellwig wrote:
> > > this series tries to actually turn vm_fault_t into a type that can be
> > > typechecked and checks the fallout instead of sprinkling random
> > > annotations without context.
> >
> > Yes, why should we have small tasks that newcomers can do when the mighty
> > Christoph Hellwig can swoop in and take over from them? Seriously,
> > can't your talents find a better use than this?
>
> I've spent less time on this than trying to argue to you and Souptick
> that these changes are only to get ignored and yelled at as an
> "asshole maintainer". So yes, I could have done more productive things
> if you hadn't forced this escalation.

Perhaps you should try being less of an arsehole if you don't want to
get yelled at? I don't mind when you're an arsehole towards me, but I
do mind when you're an arsehole towards newcomers. How are we supposed
to attract and retain new maintainers when you're so rude?

2018-05-16 15:08:29

by Darrick J. Wong

[permalink] [raw]
Subject: Re: [PATCH 14/14] mm: turn on vm_fault_t type checking

On Wed, May 16, 2018 at 07:43:48AM +0200, Christoph Hellwig wrote:
> Switch the vm_fault_t typedef to an unsigned int with __bitwise
> annotations. This both catches any old ->fault or ->page_mkwrite
> instance with plain compiler type checking, and finds more intricate
> problems with sparse.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---

<ULTRASNIP>

For the iomap and xfs parts,
Reviewed-by: Darrick J. Wong <[email protected]>

That said...

> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 54f1e05ecf3e..da2b77a19911 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -22,7 +22,8 @@
> #endif
> #define AT_VECTOR_SIZE (2*(AT_VECTOR_SIZE_ARCH + AT_VECTOR_SIZE_BASE + 1))
>
> -typedef int vm_fault_t;
> +typedef unsigned __bitwise vm_fault_t;
> +
>
> struct address_space;
> struct mem_cgroup;
> @@ -619,7 +620,7 @@ struct vm_special_mapping {
> * If non-NULL, then this is called to resolve page faults
> * on the special mapping. If used, .pages is not checked.
> */
> - int (*fault)(const struct vm_special_mapping *sm,
> + vm_fault_t (*fault)(const struct vm_special_mapping *sm,

Uh, we're changing function signatures /and/ redefining vm_fault_t?
All in the same 90K patch?

I /was/ expecting a series of "convert XXXXX and all callers/users"
patches followed by a trivial one to switch the definition, not a giant
pile of change. FWIW I don't mind so much if you make a patch
containing a change for some super-common primitive and a hojillion
little diff hunks tree-wide, but only one logical change at a time for a
big patch, please...

I quite prefer seeing the whole conversion from start to finish packaged
up in one series, but wow, this was overwhelming. :/

--D

<ULTRASNIP>

2018-05-16 17:32:34

by Christoph Hellwig

[permalink] [raw]
Subject: Re: vm_fault_t conversion, for real

On Wed, May 16, 2018 at 06:22:56AM -0700, Matthew Wilcox wrote:
> Perhaps you should try being less of an arsehole if you don't want to
> get yelled at? I don't mind when you're an arsehole towards me, but I
> do mind when you're an arsehole towards newcomers. How are we supposed
> to attract and retain new maintainers when you're so rude?

*plonk* The only one I'm seeing being extremely rude here is you.

2018-05-16 17:34:45

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 14/14] mm: turn on vm_fault_t type checking

On Wed, May 16, 2018 at 08:08:29AM -0700, Darrick J. Wong wrote:
> Uh, we're changing function signatures /and/ redefinining vm_fault_t?
> All in the same 90K patch?
>
> I /was/ expecting a series of "convert XXXXX and all callers/users"
> patches followed by a trivial one to switch the definition, not a giant
> pile of change. FWIW I don't mind so much if you make a patch
> containing a change for some super-common primitive and a hojillion
> little diff hunks tree-wide, but only one logical change at a time for a
> big patch, please...
>
> I quite prefer seeing the whole series from start to finish all packaged
> up in one series, but wow this was overwhelming. :/

Another vote to split out the typedef change; ok, I get the message.