2004-10-13 09:49:21

by Jin, Gordon

Subject: [PATCH x86_64]: Correct copy_user_generic return value when exception happens


Fix a bug where arch/x86_64/lib/copy_user.S:copy_user_generic returns a wrong
value when an exception happens.
When the destination address is not 8-byte aligned (i.e. execution goes into
.Lbad_alignment) and an exception happens in .Ls11, %rdx no longer reflects
the number of uncopied bytes, so copy_user_generic returns a wrong value.
This also fixes a bug where the wrong number of destination bytes was zeroed
in that situation (in .Lzero_rest).
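
To make the accounting concrete, here is a minimal userspace model (plain C
with illustrative names, not kernel code) of the value the fault handler would
return when the fault hits the very first byte of the alignment prologue, as
the LTP test below provokes:

#include <stdio.h>

/*
 * Illustrative model only, not kernel code.  'count' mirrors %rdx on
 * entry and 'align' mirrors %r9, the bytes needed to reach 8-byte
 * alignment.  The fault is assumed to hit the very first byte of the
 * alignment loop (.Ls11), so nothing has actually been copied.
 */
static long old_uncopied(long count, long align)
{
        /* old code: subq %r9,%rdx before the loop; handler returns %rdx */
        return count - align;
}

static long new_uncopied(long count, long align)
{
        /* new code: %rdx is only decremented after the loop completes */
        (void)align;
        return count;
}

int main(void)
{
        long count = 100, align = 4;    /* 100-byte write, f_pos % 8 == 4 */

        printf("old: %ld of %ld bytes reported uncopied\n",
               old_uncopied(count, align), count);
        printf("new: %ld of %ld bytes reported uncopied\n",
               new_uncopied(count, align), count);
        return 0;
}

With the old code the caller sees only 96 of 100 bytes as uncopied, concludes
that 4 bytes were written, and write(2) appears to succeed; with the fix the
full 100 bytes are reported uncopied and the caller can return -EFAULT.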

Signed-off-by: Yanmin Zhang <[email protected]>
Signed-off-by: Nanhai Zou <[email protected]>
Signed-off-by: Gordon Jin <[email protected]>
Signed-off-by: Suresh Siddha <[email protected]>

--- linux-2.6.9-rc3/arch/x86_64/lib/copy_user.S~ 2004-07-15 13:30:14.376131432 -0700
+++ linux-2.6.9-rc3/arch/x86_64/lib/copy_user.S 2004-07-15 23:41:40.572981784 -0700
@@ -73,7 +73,7 @@
* rdx count
*
* Output:
- * eax uncopied bytes or 0 if successfull.
+ * eax uncopied bytes or 0 if successful.
*/
.globl copy_user_generic
.p2align 4
@@ -179,9 +179,9 @@
movl $8,%r9d
subl %ecx,%r9d
movl %r9d,%ecx
- subq %r9,%rdx
- jz .Lsmall_align
- js .Lsmall_align
+ cmpq %r9,%rdx
+ jz .Lhandle_7
+ js .Lhandle_7
.Lalign_1:
.Ls11: movb (%rsi),%bl
.Ld11: movb %bl,(%rdi)
@@ -189,10 +189,8 @@
incq %rdi
decl %ecx
jnz .Lalign_1
+ subq %r9,%rdx
jmp .Lafter_bad_alignment
-.Lsmall_align:
- addq %r9,%rdx
- jmp .Lhandle_7
#endif

/* table sorted by exception address */
@@ -219,8 +217,8 @@
.quad .Ls10,.Le_byte
.quad .Ld10,.Le_byte
#ifdef FIX_ALIGNMENT
- .quad .Ls11,.Le_byte
- .quad .Ld11,.Le_byte
+ .quad .Ls11,.Lzero_rest
+ .quad .Ld11,.Lzero_rest
#endif
.quad .Le5,.Le_zero
.previous
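
The exception-table hunk addresses the second bug. With the old entries a
fault in .Ls11/.Ld11 was handled by .Le_byte, which (in the surrounding code,
not shown in this hunk) zero-fills only the byte-loop remainder held in %ecx;
retargeting it to .Lzero_rest loads %rcx from %rdx first, so the whole
remaining count is zero-filled, matching copy_from_user's convention of
zeroing the untouched part of the kernel buffer. A minimal sketch of the
difference (plain C, illustrative only, same first-byte-fault scenario):

#include <stdio.h>

/* Illustrative sketch only: bytes zero-filled on a first-byte fault. */
static long old_zeroed(long align_left)
{
        return align_left;      /* .Le_byte: rep stosb with %ecx */
}

static long new_zeroed(long count_left)
{
        return count_left;      /* .Lzero_rest: movq %rdx,%rcx first */
}

int main(void)
{
        /* 100 bytes requested, 4 alignment bytes still to copy */
        printf("old zero-fills %ld bytes\n", old_zeroed(4));
        printf("new zero-fills %ld bytes\n", new_zeroed(100));
        return 0;
}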

Thanks,
Gordon


2004-10-13 09:55:27

by Jin, Gordon

Subject: RE: [PATCH x86_64]: Correct copy_user_generic return value when exception happens

LTP cases write03 and pwrite03 exposed this bug.

Below is a simplified version of write03.c:
First, it writes 100 bytes to fd so that f_pos is not 8-byte aligned.
Second, it mmaps a buffer with the PROT_NONE flag.
Finally, it writes from that buffer to fd and expects the write to fail; without the patch, the write succeeds.

#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

int main(int argc, char **argv)
{
        int fd;
        int ret;
        char wbuf[100];
        char *bad_addr;

        fd = creat("test", 0644);
        if (fd < 0) {
                printf("creating a new file failed\n");
                return -1;
        }

        (void)memset(wbuf, '0', 100);

        /* Advance f_pos to 100 so it is not 8-byte aligned. */
        if (write(fd, wbuf, 100) == -1) {
                perror("first write");
                return -1;
        }

        /* Map a page that faults on any access. */
        bad_addr = mmap(0, 1, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (bad_addr == MAP_FAILED) {
                printf("mmap failed\n");
                return -1;
        }

        /* This write must fail with EFAULT. */
        ret = write(fd, bad_addr, 100);
        if (ret != -1) {
                printf("FAIL: write(2) failed to fail\n");
                return -1;
        } else {
                printf("PASS\n");
                return 0;
        }
}
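
For reference: on an unpatched kernel the final write(2) appears to succeed
(it returns a short count rather than -1) and the program prints "FAIL:
write(2) failed to fail"; with the patch applied it fails with EFAULT and the
program prints "PASS".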


Thanks,
Gordon
