Date: Mon, 11 Jun 2018 18:29:20 +0200
From: Jan Kara
To: Tejun Heo
Cc: Jan Kara, Tetsuo Handa, Dmitry Vyukov, Jens Axboe, syzbot, syzkaller-bugs,
 linux-fsdevel, LKML, Al Viro, Dave Chinner, linux-block@vger.kernel.org,
 Linus Torvalds
Subject: Re: [PATCH] bdi: Fix another oops in wb_workfn()
Message-ID: <20180611162920.mwapvuqotvhkntt3@quack2.suse.cz>
References: <201806080231.w582VIRn021009@www262.sakura.ne.jp>
 <2b437c6f-3e10-3d83-bdf3-82075d3eaa1a@i-love.sakura.ne.jp>
 <3cf4b0e3-31b6-8cdc-7c1e-15ba575a7879@i-love.sakura.ne.jp>
 <20180611091248.2i6nt27h5mxrodm2@quack2.suse.cz>
 <20180611160131.GQ1351649@devbig577.frc2.facebook.com>
In-Reply-To: <20180611160131.GQ1351649@devbig577.frc2.facebook.com>

On Mon 11-06-18 09:01:31, Tejun Heo wrote:
> Hello,
>
> On Mon, Jun 11, 2018 at 11:12:48AM +0200, Jan Kara wrote:
> > However, this is wrong and so is the patch. The problem is in
> > cgwb_bdi_unregister(), which does cgwb_kill() and thus drops bdi's
> > reference to the wb structures before going through the list of wbs again
> > and calling wb_shutdown() on each of them. The writeback structures we are
> > accessing at that point can in principle already be freed, e.g.:
> >
> > CPU1                                          CPU2
> > cgwb_bdi_unregister()
> >   cgwb_kill(*slot);
> >
> >                                               cgwb_release()
> >                                                 queue_work(cgwb_release_wq, &wb->release_work);
> >                                               cgwb_release_workfn()
> >   wb = list_first_entry(&bdi->wb_list, ...)
> >   spin_unlock_irq(&cgwb_lock);
> >                                                 wb_shutdown(wb);
> >                                                 ...
> >                                                 kfree_rcu(wb, rcu);
> >   wb_shutdown(wb); -> oops use-after-free
> >
> > I'm not 100% sure how to fix this. wb structures can be at various phases
> > of shutdown (or there may be other external references still existing)
> > when we enter cgwb_bdi_unregister(), so I think adding a way for
> > cgwb_bdi_unregister() to wait for the standard wb shutdown path to finish
> > is the most robust fix. What do you think about the attached patch, Tejun?
> > So far it is only compile-tested...
> >
> > A possible problem with it is that cgwb_bdi_unregister() will now wait for
> > all wb references to be dropped, so it adds some implicit dependencies to
> > the bdi shutdown path.
>
> Would something like the following work, or am I missing the point
> entirely?

I was pondering the same solution for a while, but I think it won't work.
The problem is that e.g. wb_memcg_offline() could have already removed wb
from the radix tree while it is still pending in bdi->wb_list (wb_shutdown()
has not run yet), and so we'd drop a reference we didn't get.

								Honza

> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index 347cc83..359cacd 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -715,14 +715,19 @@ static void cgwb_bdi_unregister(struct backing_dev_info *bdi)
>  	WARN_ON(test_bit(WB_registered, &bdi->wb.state));
>  
>  	spin_lock_irq(&cgwb_lock);
> -	radix_tree_for_each_slot(slot, &bdi->cgwb_tree, &iter, 0)
> -		cgwb_kill(*slot);
> +	radix_tree_for_each_slot(slot, &bdi->cgwb_tree, &iter, 0) {
> +		struct bdi_writeback *wb = *slot;
> +
> +		wb_get(wb);
> +		cgwb_kill(wb);
> +	}
>  
>  	while (!list_empty(&bdi->wb_list)) {
>  		wb = list_first_entry(&bdi->wb_list, struct bdi_writeback,
>  				      bdi_node);
>  		spin_unlock_irq(&cgwb_lock);
>  		wb_shutdown(wb);
> +		wb_put(wb);
>  		spin_lock_irq(&cgwb_lock);
>  	}
>  	spin_unlock_irq(&cgwb_lock);
>
> --
> tejun

--
Jan Kara
SUSE Labs, CR
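For readers outside the kernel tree, here is a minimal, self-contained
userspace sketch of the "pin before dropping the tree's reference" pattern
that Tejun's diff above applies. Everything in it is made up for
illustration: the wb_like type and the wb_like_get()/wb_like_put() helpers
are hypothetical stand-ins, and the real struct bdi_writeback uses a
percpu_ref with RCU-deferred freeing, not a bare atomic counter.

	/*
	 * Illustrative only -- not kernel code.  The unregister path takes
	 * its own reference (wb_get() in the patch above) before the tree's
	 * reference is dropped (cgwb_kill()), so a concurrent release cannot
	 * free the object while shutdown is still using it.
	 */
	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct wb_like {
		atomic_int refcnt;	/* stand-in for the kernel's percpu_ref */
	};

	static void wb_like_get(struct wb_like *wb)
	{
		atomic_fetch_add(&wb->refcnt, 1);
	}

	static void wb_like_put(struct wb_like *wb)
	{
		/* fetch_sub returns the old value: 1 means we held the last ref */
		if (atomic_fetch_sub(&wb->refcnt, 1) == 1) {
			printf("last reference dropped, freeing\n");
			free(wb);
		}
	}

	int main(void)
	{
		struct wb_like *wb = malloc(sizeof(*wb));

		atomic_init(&wb->refcnt, 1);	/* the tree's initial reference */

		wb_like_get(wb);	/* unregister path pins wb first ... */
		wb_like_put(wb);	/* ... then the tree's ref goes (cgwb_kill) */
		/* wb is still valid here: the pin keeps it alive for shutdown */
		wb_like_put(wb);	/* drop the pin; only now is wb freed */
		return 0;
	}

Jan's objection above is about the opposite imbalance: if something else
(wb_memcg_offline() in his example) already removed wb from the radix tree,
the radix-tree walk never takes the extra reference, yet the wb_list loop
would still call wb_put() on it, dropping a reference nobody acquired.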