Date: Fri, 22 Nov 2019 14:13:21 -0800 (PST)
Message-ID: <000000000000c789960597f6b88b@google.com>
Subject: Re: Re: possible deadlock in mon_bin_vma_fault
From: syzbot
To: Alan Stern
Cc: arnd@arndb.de, gregkh@linuxfoundation.org, jrdr.linux@gmail.com,
    keescook@chromium.org, kstewart@linuxfoundation.org,
    linux-kernel@vger.kernel.org, linux-usb@vger.kernel.org,
    stern@rowland.harvard.edu, syzkaller-bugs@googlegroups.com,
    tglx@linutronix.de, viro@zeniv.linux.org.uk, zaitcev@redhat.com
X-Mailing-List: linux-kernel@vger.kernel.org

> On Fri, 22 Nov 2019, Pete Zaitcev wrote:
>
>> > It would be more elegant to do the rp->mmap_active test before calling
>> > kcalloc and mon_alloc_buf.  But of course that's a pretty minor thing.
>
>> Indeed it feels wrong that so much work gets discarded. However, memory
>> allocations can block, right? At the same time, our main objective here
>> is to make sure that when a page fault happens, we fill in the page that
>> the VMA is intended to refer to, and not one that was re-allocated.
>> Therefore, I'm trying to avoid a situation where:
>
>> 1. thread A checks mmap_active, finds it at zero, and proceeds into the
>>    reallocation ioctl
>> 2. thread A sleeps in get_free_page()
>> 3. thread B runs mmap() and succeeds
>> 4. thread A obtains its pages and proceeds to substitute the buffer
>> 5. thread B (or any other) pagefaults and ends up with the new,
>>    unexpected page
>
>> The code is not pretty, but I don't see an alternative. Heck, I would
>> love you to find more races if you can.
>
> The alternative is to have the routines for mmap() hold fetch_lock
> instead of b_lock.  mmap() is allowed to sleep, so that would be okay.
> Then you would also hold fetch_lock while checking mmap_active and
> doing the memory allocations.
> That would prevent any races -- in your example above, thread A would
> acquire fetch_lock in step 1, so thread B would block in step 3 until
> step 4 was finished.  Hence B would end up mapping the correct pages.
>
> In practice, I don't see this being a routine problem.  How often do
> multiple threads independently try to mmap the same usbmon buffer?
> Still, let's see how syzbot reacts to your current patch.  The line
> below is how you ask syzbot to test a candidate patch.
>
> Alan Stern
>
> #syz test: linux-4.19.y f6e27dbb1afa

"linux-4.19.y" does not look like a valid git repo address.

> commit 5252eb4c8297fedbf1c5f1e67da44efe00e6ef6b
> Author: Pete Zaitcev
> Date:   Thu Nov 21 17:24:00 2019 -0600
>
>     usb: Fix a deadlock in usbmon between mmap and read
>
>     Signed-off-by: Pete Zaitcev
>     Reported-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com
>
> diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
> index ac2b4fcc265f..f48a23adbc35 100644
> --- a/drivers/usb/mon/mon_bin.c
> +++ b/drivers/usb/mon/mon_bin.c
> @@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
>
>  		mutex_lock(&rp->fetch_lock);
>  		spin_lock_irqsave(&rp->b_lock, flags);
> -		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> -		kfree(rp->b_vec);
> -		rp->b_vec  = vec;
> -		rp->b_size = size;
> -		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> -		rp->cnt_lost = 0;
> +		if (rp->mmap_active) {
> +			mon_free_buff(vec, size/CHUNK_SIZE);
> +			kfree(vec);
> +			ret = -EBUSY;
> +		} else {
> +			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> +			kfree(rp->b_vec);
> +			rp->b_vec  = vec;
> +			rp->b_size = size;
> +			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> +			rp->cnt_lost = 0;
> +		}
>  		spin_unlock_irqrestore(&rp->b_lock, flags);
>  		mutex_unlock(&rp->fetch_lock);
>  	}
>
> @@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
>  static void mon_bin_vma_open(struct vm_area_struct *vma)
>  {
>  	struct mon_reader_bin *rp = vma->vm_private_data;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&rp->b_lock, flags);
>  	rp->mmap_active++;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>  }
>
>  static void mon_bin_vma_close(struct vm_area_struct *vma)
>  {
> +	unsigned long flags;
> +
>  	struct mon_reader_bin *rp = vma->vm_private_data;
> +	spin_lock_irqsave(&rp->b_lock, flags);
>  	rp->mmap_active--;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>  }
>
>  /*
> @@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
>  	unsigned long offset, chunk_idx;
>  	struct page *pageptr;
>
> -	mutex_lock(&rp->fetch_lock);
>  	offset = vmf->pgoff << PAGE_SHIFT;
> -	if (offset >= rp->b_size) {
> -		mutex_unlock(&rp->fetch_lock);
> +	if (offset >= rp->b_size)
>  		return VM_FAULT_SIGBUS;
> -	}
>  	chunk_idx = offset / CHUNK_SIZE;
>  	pageptr = rp->b_vec[chunk_idx].pg;
>  	get_page(pageptr);
> -	mutex_unlock(&rp->fetch_lock);
>  	vmf->page = pageptr;
>  	return 0;
>  }
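[Editor's note: stripped down to userspace, the scheme the patch settles on -- count live mappings under a spinlock, do the blocking allocation outside any lock, then discard the new buffer with -EBUSY if a mapping exists -- can be sketched as below. This is a hypothetical analogue, not driver code: `mon_model`, `model_resize`, and the pthread mutex standing in for `rp->b_lock` are illustrative names only.]

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stdlib.h>

/* Userspace stand-in for the relevant mon_reader_bin state. */
struct mon_model {
	pthread_mutex_t lock;	/* plays the role of rp->b_lock */
	int mmap_active;	/* count of live mappings */
	void *b_vec;		/* the current buffer */
};

/* vma_open/vma_close: adjust mmap_active under the lock, as the patch does. */
static void model_vma_open(struct mon_model *m)
{
	pthread_mutex_lock(&m->lock);
	m->mmap_active++;
	pthread_mutex_unlock(&m->lock);
}

static void model_vma_close(struct mon_model *m)
{
	pthread_mutex_lock(&m->lock);
	m->mmap_active--;
	pthread_mutex_unlock(&m->lock);
}

/*
 * The ioctl path: allocate first (this may "sleep", so it happens outside
 * the lock), then re-check mmap_active under the lock.  If a mapping
 * appeared in the meantime, the freshly allocated buffer is thrown away
 * and -EBUSY returned; otherwise the old buffer is swapped out.  This is
 * why step 4 of the race above can no longer substitute the buffer behind
 * a live mapping's back.
 */
static int model_resize(struct mon_model *m, size_t size)
{
	void *vec = malloc(size);	/* blocking allocation, unlocked */
	int ret = 0;

	pthread_mutex_lock(&m->lock);
	if (m->mmap_active) {
		free(vec);		/* work discarded, as in the patch */
		ret = -EBUSY;
	} else {
		free(m->b_vec);
		m->b_vec = vec;
	}
	pthread_mutex_unlock(&m->lock);
	return ret;
}
```

With this shape, a resize attempted while a mapping is live fails cleanly instead of racing, at the cost of one wasted allocation -- the inelegance Pete acknowledges above.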