From: Dmitry Vyukov
Date: Tue, 25 Aug 2015 20:38:30 +0200
Subject: Re: Potential data race in uart_ioctl
To: Peter Hurley
Cc: Andrey Konovalov, Greg Kroah-Hartman, Jiri Slaby,
    linux-serial@vger.kernel.org, LKML, Alexander Potapenko,
    Kostya Serebryany

On Tue, Aug 25, 2015 at 8:32 PM, Dmitry Vyukov wrote:
> On Tue, Aug 25, 2015 at 8:26 PM, Peter Hurley wrote:
>> Hi Andrey,
>>
>> On 08/25/2015 08:17 AM, Andrey Konovalov wrote:
>>> Hi!
>>>
>>> We are working on a dynamic data race detector for the Linux kernel
>>> called KernelThreadSanitizer (ktsan)
>>> (https://github.com/google/ktsan/wiki).
>>>
>>> While booting the kernel (upstream revision 21bdb584af8c) we got a report:
>>>
>>> ==================================================================
>>> ThreadSanitizer: data-race in uart_ioctl
>>>
>>> Read of size 8 by thread T424 (K971):
>>>  [] uart_ioctl+0x36/0x11e0 drivers/tty/serial/serial_core.c:1216
>>>  [] tty_ioctl+0x4f2/0x11d0 drivers/tty/tty_io.c:2924
>>>  [< inlined >] do_vfs_ioctl+0x44a/0x750 vfs_ioctl fs/ioctl.c:43
>>>  [] do_vfs_ioctl+0x44a/0x750 fs/ioctl.c:607
>>>  [< inlined >] SyS_ioctl+0x79/0xa0 SYSC_ioctl fs/ioctl.c:622
>>>  [] SyS_ioctl+0x79/0xa0 fs/ioctl.c:613
>>>  [] entry_SYSCALL_64_fastpath+0x12/0x71 arch/x86/entry/entry_64.S:186
>>> DBG: cpu = ffff88063fc1fe68
>>> DBG: cpu id = 0
>>>
>>> Previous write of size 8 by thread T422 (K970):
>>>  [] uart_open+0x12f/0x220 drivers/tty/serial/serial_core.c:1629
>>>  [] tty_open+0x192/0x8f0 drivers/tty/tty_io.c:2105
>>>  [] chrdev_open+0x13c/0x290 fs/char_dev.c:388
>>>  [] do_dentry_open+0x3ac/0x550 fs/open.c:736
>>>  [] vfs_open+0xb8/0xe0 fs/open.c:853
>>>  [< inlined >] path_openat+0x81c/0x2440 do_last fs/namei.c:3163
>>>  [] path_openat+0x81c/0x2440 fs/namei.c:3295
>>>  [] do_filp_open+0xfa/0x170 fs/namei.c:3330
>>>  [] do_sys_open+0x183/0x2b0 fs/open.c:1025
>>>  [< inlined >] SyS_open+0x35/0x50 SYSC_open fs/open.c:1043
>>>  [] SyS_open+0x35/0x50 fs/open.c:1038
>>>  [] entry_SYSCALL_64_fastpath+0x12/0x71 arch/x86/entry/entry_64.S:186
>>> DBG: cpu = ffff88063fd1fe68
>>>
>>> DBG: addr: ffff8801d2a0ce88
>>> DBG: first offset: 0, second offset: 0
>>> DBG: T424 clock: {T424: 211057, T422: 275728}
>>> DBG: T422 clock: {T422: 275819}
>>> ==================================================================
>>>
>>> It seems that one thread reads and uses tty->driver_data while it's
>>> being initialized in another one. The second thread holds port->mutex,
>>> but the first one does a few accesses to tty->driver_data before
>>> locking it.
>>>
>>> Could you confirm if this is a real race?
>>
>> Although I don't understand what triggers ktsan to signal a race
>> condition, this doesn't appear to be an actual race.
>>
>> For an ioctl() syscall to act on any given tty requires a successful
>> open() syscall to have nearly completed (do_sys_open() => fd_install()
>> initializes the file descriptor; ioctl() => fdget() derefs the descriptor).
>>
>> Perhaps what's tripping the race detection is that the 2nd and subsequent
>> opens also (redundantly) write the same values as the first open?
>
> Since we use a fuzzer, yes, it is possible that open is called twice.

Oh, no, sorry, this happens during booting. The race is on tty_struct,
which is probably shared between several file descriptors.
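To illustrate, the interleaving ktsan pairs up would look roughly like
this (a simplified sketch of just the two accesses, not the actual
drivers/tty/serial/serial_core.c code; the *_sketch names and the
stripped-down structs are made up for illustration):

/*
 * Simplified sketch of the reported interleaving. Only the two racing
 * accesses are shown; everything else from serial_core is elided.
 */

struct uart_state;                      /* opaque here; real layout elided */

struct tty_struct {
        void *driver_data;              /* the 8-byte field in the report */
        /* ... */
};

/* Thread T422: a second open() of the same tty device. */
static int uart_open_sketch(struct tty_struct *tty, struct uart_state *state)
{
        /*
         * serial_core.c:1629: plain 8-byte store. port->mutex is held
         * here, but that does not order it against the load below,
         * which happens before the reader takes the mutex.
         */
        tty->driver_data = state;
        return 0;
}

/* Thread T424: ioctl() through a descriptor from the first open(). */
static int uart_ioctl_sketch(struct tty_struct *tty)
{
        /*
         * serial_core.c:1216: plain 8-byte load of the same field,
         * performed before port->mutex is acquired.
         */
        struct uart_state *state = tty->driver_data;

        /* ... state is used; port->mutex is only taken later ... */
        return state ? 0 : -1;
}

Each open()/ioctl() pair on its own descriptor is ordered by
fd_install()/fdget() as you describe, but both descriptors point at the
same tty_struct, so the redundant store from the second open is
concurrent with the load done by an ioctl on the first descriptor.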