References: <20210708162417.777bff77@gmail.com>
In-Reply-To: <20210708162417.777bff77@gmail.com>
From: Lai Jiangshan
Date: Fri, 9 Jul 2021 11:59:01 +0800
Subject: Re: BUG in alloc_workqueue (linux-next)
To: Pavel Skripkin
Cc: Tejun Heo, LKML, Yang Yingliang, Xu Qiang
X-Mailing-List: linux-kernel@vger.kernel.org

Hello, Pavel

Thanks for the report. Huawei (CC-ed) is also dealing with the problem:
https://lore.kernel.org/lkml/20210708093136.2195752-1-yangyingliang@huawei.com/t/#u

Could you give that fix a try, please?

Thanks
Lai

On Thu, Jul 8, 2021 at 9:24 PM Pavel Skripkin wrote:
>
> I've spent some time trying to come up with a fix, but I gave
> up :( But! I have an idea about what's happening, maybe it will help
> somehow...
>
>
> So, all 3 reports have the same stack trace: alloc_workqueue() in
> loop_configure(). I skimmed through syzbot's log and found that syzbot
> injected a failure into alloc_unbound_pwq() in all 3 cases:
>
> FAULT_INJECTION: forcing a failure.
> name failslab, interval 1, probability 0, space 0, times 0
> CPU: 1 PID: 17986 Comm: syz-executor.0 Tainted: G W 5.13.0-next-20210706 #9
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a-rebuilt.opensuse.org 04/01/2014
> Call Trace:
>  dump_stack_lvl (lib/dump_stack.c:106 (discriminator 4))
>  should_fail.cold (lib/fault-inject.c:52 lib/fault-inject.c:146)
>  should_failslab (mm/slab_common.c:1327)
>  kmem_cache_alloc_node (mm/slab.h:487 mm/slub.c:2902 mm/slub.c:3017)
>  ? alloc_unbound_pwq (kernel/workqueue.c:3813)
>  alloc_unbound_pwq (kernel/workqueue.c:3813)
>  apply_wqattrs_prepare (kernel/workqueue.c:3963)
>  apply_workqueue_attrs_locked (kernel/workqueue.c:4041)
>  alloc_workqueue (kernel/workqueue.c:4078 kernel/workqueue.c:4201 kernel/workqueue.c:4309)
>
>
> So, if alloc_unbound_pwq() fails, apply_wqattrs_prepare() will jump to
> this code:
>
> out_free:
>         free_workqueue_attrs(tmp_attrs);
>         free_workqueue_attrs(new_attrs);
>         apply_wqattrs_cleanup(ctx);   <----|
>         return NULL;                       |
>                                            |
>         put_pwq_unlocked() -> put_pwq() -> schedule_work(&pwq->unbound_release_work);
>
>
> and apply_wqattrs_cleanup() will schedule pwq_unbound_release_workfn()
> [2], but alloc_workqueue() will free the workqueue_struct in case of
> an alloc_unbound_pwq() error [1]. In that case we will get a UAF in
> pwq_unbound_release_workfn(), like in the 3rd report.
>
>
> Does what's written above make sense? :)
>
>
>
> With regards,
> Pavel Skripkin
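
To make the ordering above easier to follow, here is a minimal userspace sketch
in plain C with pthreads, not the real workqueue code: the cleanup path hands an
object to deferred "release" work, while the caller frees that object before the
work ever runs. The names fake_wq, fake_pwq, fake_release_workfn and
fake_cleanup_schedules_release are invented for illustration only and do not
exist in the kernel.

/*
 * Sketch of the UAF ordering described above (assumed model, not kernel code):
 * the error path schedules deferred release work that still holds a pointer to
 * the workqueue, but the caller frees the workqueue before the work runs.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct fake_wq {
	const char *name;
};

struct fake_pwq {
	struct fake_wq *wq;	/* back-pointer read by the deferred work */
};

/* Stands in for pwq_unbound_release_workfn(): runs "later", after the
 * error path has already returned to the caller. */
static void *fake_release_workfn(void *arg)
{
	struct fake_pwq *pwq = arg;

	sleep(1);		/* model the delay before the work item runs */
	/* If the caller already freed pwq->wq, this is a use-after-free. */
	printf("release work touches wq %p (%s)\n",
	       (void *)pwq->wq, pwq->wq->name);
	free(pwq);
	return NULL;
}

/* Stands in for the apply_wqattrs_cleanup() -> put_pwq() path that
 * schedules the release work on failure. */
static pthread_t fake_cleanup_schedules_release(struct fake_pwq *pwq)
{
	pthread_t t;

	pthread_create(&t, NULL, fake_release_workfn, pwq);
	return t;
}

int main(void)
{
	struct fake_wq *wq = malloc(sizeof(*wq));
	struct fake_pwq *pwq = malloc(sizeof(*pwq));
	pthread_t work;

	wq->name = "loop0";
	pwq->wq = wq;

	/* Error path: cleanup schedules the deferred release work ... */
	work = fake_cleanup_schedules_release(pwq);

	/* ... but the caller (modelling alloc_workqueue()'s error path)
	 * frees the workqueue immediately, before the work has run. */
	free(wq);

	pthread_join(work, NULL);	/* the stale access happens in here */
	return 0;
}

Built with "gcc -pthread" and run under AddressSanitizer, this should report a
heap-use-after-free inside fake_release_workfn(), mirroring the UAF syzbot
observed in pwq_unbound_release_workfn().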