Date: Wed, 9 Mar 2016 14:30:35 -0500
From: Benjamin LaHaise
To: "H. Peter Anvin"
Cc: Ingo Molnar, Russell King, Thomas Gleixner, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH] x86_32: add support for 64 bit __get_user() v2
Message-ID: <20160309193035.GS12913@kvack.org>
References: <20160309172225.GN12913@kvack.org> <20160309175016.GO12913@kvack.org> <56E069FA.7070700@zytor.com>
In-Reply-To: <56E069FA.7070700@zytor.com>

The existing __get_user() implementation does not support fetching 64 bit
values on 32 bit x86.  Implement this in a way that does not generate any
incorrect warnings, as cautioned by Russell King.  Test code is available
at http://www.kvack.org/~bcrl/x86_32-get_user.tar .

v2: use __inttype() as suggested by H. Peter Anvin, which cleans the code
up nicely, and fix things to work on x86_64 as well.  Tested on both 32 bit
and 64 bit x86.
Signed-off-by: Benjamin LaHaise

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index a4a30e4..1284da2 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -333,7 +333,25 @@ do { \
 } while (0)
 
 #ifdef CONFIG_X86_32
-#define __get_user_asm_u64(x, ptr, retval, errret)	(x) = __get_user_bad()
+#define __get_user_asm_u64(x, ptr, retval, errret) \
+({ \
+	asm volatile(ASM_STAC "\n" \
+		     "1:	movl %2,%%eax\n" \
+		     "2:	movl %3,%%edx\n" \
+		     "3: " ASM_CLAC "\n" \
+		     ".section .fixup,\"ax\"\n" \
+		     "4:	mov %4,%0\n" \
+		     "	xorl %%eax,%%eax\n" \
+		     "	xorl %%edx,%%edx\n" \
+		     "	jmp 3b\n" \
+		     ".previous\n" \
+		     _ASM_EXTABLE(1b, 4b) \
+		     _ASM_EXTABLE(2b, 4b) \
+		     : "=r" (retval), "=A"(x) \
+		     : "m" (__m(ptr)), "m" __m(((u32 *)(ptr)) + 1), \
+		       "i" (errret), "0" (retval)); \
+})
+
 #define __get_user_asm_ex_u64(x, ptr)			(x) = __get_user_bad()
 #else
 #define __get_user_asm_u64(x, ptr, retval, errret) \
@@ -420,7 +438,7 @@ do { \
 #define __get_user_nocheck(x, ptr, size) \
 ({ \
 	int __gu_err; \
-	unsigned long __gu_val; \
+	__inttype(*(ptr)) __gu_val; \
 	__uaccess_begin(); \
 	__get_user_size(__gu_val, (ptr), (size), __gu_err, -EFAULT); \
 	__uaccess_end(); \
-- 
"Thought is the essence of where you are now."