Date: Mon, 28 Nov 2016 15:54:11 -0600
From: Josh Poimboeuf
To: Vince Weaver
Cc: "linux-kernel@vger.kernel.org", Peter Zijlstra, Ingo Molnar,
    Arnaldo Carvalho de Melo, "dvyukov@google.com", "Paul E. McKenney"
Subject: Re: perf: fuzzer BUG: KASAN: stack-out-of-bounds in __unwind_start
Message-ID: <20161128215411.fkis7bbimjy4v4j7@treble>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

On Thu, Nov 24, 2016 at 12:33:48PM -0500, Vince Weaver wrote:
> This is on a Skylake machine, Linus git as of yesterday, after the
> various KASAN-related fixes went in. Not sure if there were any that
> hadn't hit upstream yet.
>
> Anyway, I can't tell from this one what the actual trigger is. After
> this mess the fuzzer process was locked up, udev started complaining,
> and it eventually died completely after a few hours of repeated
> messages like this:
> [38898.373183] INFO: rcu_sched self-detected stall on CPU
> [38898.378452]  7-...: (5249 ticks this GP) idle=727/140000000000001/0 softirq=3141908/3141908 fqs=2625
> [38898.381211] INFO: rcu_sched detected stalls on CPUs/tasks:
> [38898.381214]  0-...: (1 GPs behind) idle=05f/140000000000001/2 softirq=3285458/3285459 fqs=2625
> [38898.381217]  7-...: (5249 ticks this GP) idle=727/140000000000001/0 softirq=3141908/3141908 fqs=2625
> [38898.381218]  (detected by 1, t=5252 jiffies, g=3685053, c=3685052, q=32)
> [38898.381244] ==================================================================
> [38898.381247] BUG: KASAN: stack-out-of-bounds in __unwind_start+0x1a2/0x1c0 at addr ffff8801e9727c28
> [38898.381248] Read of size 8 by task swapper/1/0
> [38898.381250] page:ffffea0007a5c9c0 count:0 mapcount:0 mapping:          (null) index:0x0
> [38898.381251] flags: 0x2ffff8000000000()
> [38898.381251] page dumped because: kasan: bad access detected
> [38898.381328] Memory state around the buggy address:
> [38898.381330]  ffff8801e9727b00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> [38898.381331]  ffff8801e9727b80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> [38898.381332] >ffff8801e9727c00: 00 00 00 00 f1 f1 f1 f1 00 00 00 00 f3 f3 f3 f3
> [38898.381333]                                ^
> [38898.381334]  ffff8801e9727c80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 f1
> [38898.381335]  ffff8801e9727d00: f1 f1 f1 00 00 00 f4 f2 f2 f2 f2 00 00 00 00 f3
> [38898.381335] ==================================================================
> [38898.510702]  (t=5284 jiffies g=3685053 c=3685052 q=32)
>
> (That's all; the report above repeats, but nothing useful like a
> backtrace is ever printed.)

After looking at the RCU stall detection code, I think the KASAN error
and the missing stack dump aren't very surprising. RCU calls the
scheduler's dump_cpu_task() function, which seems inherently
problematic: it tries to dump the stack of a task while that task is
running on another CPU.
There are several issues with that:

1) There's no way to find the starting frame of a currently running
   task from another CPU. In fact, I wonder how dump_cpu_task() ever
   worked at all: you'd have to get lucky that the sp/bp registers
   saved by the task's last call to schedule() still point to a valid
   stack frame.

2) Even if there were a way to find the starting frame, it's racy: the
   target task could be overwriting its stack while we're reading it.

3) IRQ/exception stack dumps would be missing anyway, because the stack
   dump code only looks at the current CPU's interrupt stacks.

Maybe dump_cpu_task() should instead run the stack dump directly on the
target CPU, e.g. with trigger_single_cpu_backtrace() or
smp_call_function_single()?

Paul, Peter, Ingo, any thoughts?

--
Josh
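
P.S. To make the idea concrete, here's an untested sketch (not a real
patch) of what that might look like: dump_cpu_task() first asks the
target CPU to produce its own backtrace via NMI, and only falls back to
the current remote sched_show_task() dump if that's unavailable:

```c
/* kernel/sched/core.c -- untested sketch, for discussion only */
void dump_cpu_task(int cpu)
{
	/*
	 * Ask the target CPU to dump its own stack (via NMI), so the
	 * unwind starts from a real, self-consistent frame instead of
	 * whatever schedule() last saved. This also picks up that
	 * CPU's IRQ/exception stacks, which the remote dump misses.
	 */
	if (trigger_single_cpu_backtrace(cpu))
		return;

	/*
	 * Fall back to the existing (racy) remote dump if NMI
	 * backtraces aren't supported on this arch.
	 */
	pr_info("Task dump for CPU %d:\n", cpu);
	sched_show_task(cpu_curr(cpu));
}
```

The fallback keeps the current behavior on architectures without
arch_trigger_cpumask_backtrace() support, so nothing gets worse there.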