Message-ID: <584183e2-2473-6185-e07d-f478da118b87@linaro.org>
Date: Thu, 14 Apr 2022 10:51:18 -0700
From: Tadeusz Struk
Subject: Re: [PATCH] cgroup: don't queue css_release_work if one already pending
To: Michal Koutný
Cc: cgroups@vger.kernel.org, Tejun Heo, Zefan Li, Johannes Weiner,
 Christian Brauner, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
 Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh,
 netdev@vger.kernel.org, bpf@vger.kernel.org, stable@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 syzbot+e42ae441c3b10acf9e9d@syzkaller.appspotmail.com
References: <20220412192459.227740-1-tadeusz.struk@linaro.org>
 <20220414164409.GA5404@blackbody.suse.cz>
In-Reply-To: <20220414164409.GA5404@blackbody.suse.cz>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Michal,

Thanks for your analysis.

On 4/14/22 09:44, Michal Koutný wrote:
> Hello Tadeusz.
>
> Thanks for analyzing this syzbot report. Let me provide my understanding
> of the test case and an explanation of why I think your patch fixes it
> but is not fully correct.
>
> On Tue, Apr 12, 2022 at 12:24:59PM -0700, Tadeusz Struk wrote:
>> Syzbot found a corrupted-list bug scenario that can be triggered from
>> cgroup css_create(). The reproducer writes to the cgroup.subtree_control
>> file, which invokes cgroup_apply_control_enable(), css_create(), and
>> css_populate_dir(), which then randomly fails with a fault-injected
>> -ENOMEM.
>
> The reproducer code makes it hard for me to understand which function
> fails with ENOMEM.
> But I can see that your patch fixes the reproducer, and your additional
> debug patch proves that css->destroy_work is re-queued.

Yes, it is hard to see the actual failing point because, I think, it is
randomly failing in different places. The failure that actually causes
the list corruption is, I believe, in css_create().
It is the css_create() error path that does the first RCU enqueue, in:

https://elixir.bootlin.com/linux/v5.10.109/source/kernel/cgroup/cgroup.c#L5228

and the second enqueue is triggered by the css->refcnt calling
css_release().

The reason we don't see it actually failing in css_create() in the trace
dump is that fail_dump() is rate-limited, see:

https://elixir.bootlin.com/linux/v5.18-rc2/source/lib/fault-inject.c#L44

I was confused as well, so I put additional debug prints in every place
where css_create() can fail, and in my case it was actually
css_create()->cgroup_idr_alloc() that failed.

What happened was, the write triggered:

cgroup_subtree_control_write()->cgroup_apply_control()->
cgroup_apply_control_enable()->css_create()

which allocates and initializes the css, then fails in
cgroup_idr_alloc(), bails out, and calls:

queue_rcu_work(cgroup_destroy_wq, &css->destroy_rwork);

cgroup_subtree_control_write() then bails out to out_unlock:, which goes:

cgroup_kn_unlock()->cgroup_put()->css_put()->
percpu_ref_put(&css->refcnt)->percpu_ref_put_many(ref)

which calls ref->data->release(ref) and enqueues the same
&css->destroy_rwork on cgroup_destroy_wq, causing the list corruption in
insert_work().

>> In such a scenario the css_create() error path RCU-enqueues the
>> css_free_rwork_fn() work for a css->refcnt initialized with the
>> css_release() destructor,
>
> Note that css_free_rwork_fn() utilizes css->destroy_*r*work.
> The error path in css_create() open codes the relevant parts of
> css_release_work_fn() so that css_release() can be skipped and the
> refcnt is eventually just percpu_ref_exit()'d.
>
>> and there is a chance that the css_release() function will be invoked
>> for a cgroup_subsys_state for which a destroy_work has already been
>> queued via the css_create() error path.
>
> But I think the problem is css_populate_dir() failing in
> cgroup_apply_control_enable(). (Is this what you actually meant?
> The css_create() error path is then irrelevant, no?)
I thought so too at first, as the crash dump shows that it is failing in
css_populate_dir(), but that is not the failure that causes the list
corruption. The code can recover from a failure in css_populate_dir().
The failure that causes trouble is in css_create(), which makes it take
its error path. I can dig out the patch with my debug prints and request
that syzbot run it if you want.

> The already created csses should then be rolled back via
>   cgroup_restore_control(cgrp);
>   cgroup_apply_control_disable(cgrp);
>   ...
>   kill_css(css)
>
> I suspect the double-queuing is a result of the fact that there exists
> only the single reference to the css->refcnt. I.e. it's
> percpu_ref_kill_and_confirm()'d and released both at the same time.
>
> (Normally (when not killing the last reference), css->destroy_work reuse
> is not a problem because of the sequenced chain
> css_killed_work_fn()->css_put()->css_release().)
>
>> This can be avoided by adding a check to css_release() that checks
>> whether it has already been enqueued.
>
> If that's what's happening, then your patch omits the final
> css_release_work_fn() in favor of css_killed_work_fn(), but both should
> be run during the rollback upon css_populate_dir() failure.

This change only prevents the double queue:

queue_[rcu]_work(cgroup_destroy_wq, &css->destroy_rwork);

I don't see how it affects the css_killed_work_fn() cleanup path. I
didn't look at it, since I thought it was irrelevant in this case.

--
Thanks,
Tadeusz