For fixed-size copies, copy_to_user() will utilize the __put_user_size()
fastpaths. However, it is missing the translation for 64-bit copies on
x86/32. Testing on a Pinetrail Atom, the 64-bit put_user() fastpath is
substantially faster than the generic copy_to_user() fallback.
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
CC: [email protected]
---
arch/x86/include/asm/uaccess_32.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 3c03a5de64d3..0ed5504c6060 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -59,6 +59,10 @@ __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
__put_user_size(*(u32 *)from, (u32 __user *)to,
4, ret, 4);
return ret;
+ case 8:
+ __put_user_size(*(u64 *)from, (u64 __user *)to,
+ 8, ret, 8);
+ return ret;
}
}
return __copy_to_user_ll(to, from, n);
--
2.1.4
* Chris Wilson <[email protected]> wrote:
> For fixed-size copies, copy_to_user() will utilize the __put_user_size()
> fastpaths. However, it is missing the translation for 64-bit copies on
> x86/32. Testing on a Pinetrail Atom, the 64-bit put_user() fastpath is
> substantially faster than the generic copy_to_user() fallback.
>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: [email protected]
> CC: [email protected]
The patch makes sense, but your Signed-off-by line is missing.
Thanks,
Ingo
On Thu, Apr 16, 2015 at 09:28:02AM +0200, Ingo Molnar wrote:
>
> * Chris Wilson <[email protected]> wrote:
>
> > For fixed-size copies, copy_to_user() will utilize the __put_user_size()
> > fastpaths. However, it is missing the translation for 64-bit copies on
> > x86/32. Testing on a Pinetrail Atom, the 64-bit put_user() fastpath is
> > substantially faster than the generic copy_to_user() fallback.
> >
> > Cc: Thomas Gleixner <[email protected]>
> > Cc: Ingo Molnar <[email protected]>
> > Cc: "H. Peter Anvin" <[email protected]>
> > Cc: [email protected]
> > CC: [email protected]
>
> The patch makes sense, but your Signed-off-by line is missing.
Sorry, totally forgot that when rewriting the changelog.
Signed-off-by: Chris Wilson <[email protected]>
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
Commit-ID: 6a907738ab9840ca3d71c22cd28fba4cbae7f7ce
Gitweb: http://git.kernel.org/tip/6a907738ab9840ca3d71c22cd28fba4cbae7f7ce
Author: Chris Wilson <[email protected]>
AuthorDate: Wed, 15 Apr 2015 10:51:26 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Thu, 16 Apr 2015 12:08:21 +0200
x86/asm: Enable fast 32-bit put_user_64() for copy_to_user()
For fixed-size copies, copy_to_user() will utilize
__put_user_size() fastpaths. However, it is missing the
translation for 64-bit copies on x86/32.
Testing on a Pinetrail Atom, the 64-bit put_user() fastpath
is substantially faster than the generic copy_to_user()
fallback.
Signed-off-by: Chris Wilson <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/x86/include/asm/uaccess_32.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 3c03a5d..0ed5504 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -59,6 +59,10 @@ __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
__put_user_size(*(u32 *)from, (u32 __user *)to,
4, ret, 4);
return ret;
+ case 8:
+ __put_user_size(*(u64 *)from, (u64 __user *)to,
+ 8, ret, 8);
+ return ret;
}
}
return __copy_to_user_ll(to, from, n);