From: Khazhismel Kumykov
To: Alexander Viro, Andrew Morton, "Matthew Wilcox (Oracle)", Jan Kara
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Khazhismel Kumykov
Subject: [PATCH v2] writeback: avoid use-after-free after removing device
Date: Mon, 1 Aug 2022 08:50:34 -0700
Message-Id: <20220801155034.3772543-1-khazhy@google.com>
In-Reply-To: <20220729215123.1998585-1-khazhy@google.com>
References: <20220729215123.1998585-1-khazhy@google.com>

When a disk is
removed, bdi_unregister gets called to stop further writeback and wait
for associated delayed work to complete. However, wb_inode_writeback_end()
may schedule bandwidth estimation dwork after this has completed, which
can result in the timer attempting to access the just freed bdi_writeback.

Fix this by checking if the bdi_writeback is alive, similar to when
scheduling writeback work. Since this requires wb->work_lock, and
wb_inode_writeback_end() may get called from interrupt, switch
wb->work_lock to an irqsafe lock.

Fixes: 45a2966fd641 ("writeback: fix bandwidth estimate for spiky workload")
Signed-off-by: Khazhismel Kumykov
---
 fs/fs-writeback.c   | 12 ++++++------
 mm/backing-dev.c    | 10 +++++-----
 mm/page-writeback.c |  6 +++++-
 3 files changed, 16 insertions(+), 12 deletions(-)

v2: made changelog a bit more verbose

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 05221366a16d..08a1993ab7fd 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -134,10 +134,10 @@ static bool inode_io_list_move_locked(struct inode *inode,
 
 static void wb_wakeup(struct bdi_writeback *wb)
 {
-	spin_lock_bh(&wb->work_lock);
+	spin_lock_irq(&wb->work_lock);
 	if (test_bit(WB_registered, &wb->state))
 		mod_delayed_work(bdi_wq, &wb->dwork, 0);
-	spin_unlock_bh(&wb->work_lock);
+	spin_unlock_irq(&wb->work_lock);
 }
 
 static void finish_writeback_work(struct bdi_writeback *wb,
@@ -164,7 +164,7 @@ static void wb_queue_work(struct bdi_writeback *wb,
 	if (work->done)
 		atomic_inc(&work->done->cnt);
 
-	spin_lock_bh(&wb->work_lock);
+	spin_lock_irq(&wb->work_lock);
 
 	if (test_bit(WB_registered, &wb->state)) {
 		list_add_tail(&work->list, &wb->work_list);
@@ -172,7 +172,7 @@ static void wb_queue_work(struct bdi_writeback *wb,
 	} else
 		finish_writeback_work(wb, work);
 
-	spin_unlock_bh(&wb->work_lock);
+	spin_unlock_irq(&wb->work_lock);
 }
 
 /**
@@ -2082,13 +2082,13 @@ static struct wb_writeback_work *get_next_work_item(struct bdi_writeback *wb)
 {
 	struct wb_writeback_work *work = NULL;
 
-	spin_lock_bh(&wb->work_lock);
+	spin_lock_irq(&wb->work_lock);
 	if (!list_empty(&wb->work_list)) {
 		work = list_entry(wb->work_list.next,
 				  struct wb_writeback_work, list);
 		list_del_init(&work->list);
 	}
-	spin_unlock_bh(&wb->work_lock);
+	spin_unlock_irq(&wb->work_lock);
 
 	return work;
 }
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 95550b8fa7fe..de65cb1e5f76 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -260,10 +260,10 @@ void wb_wakeup_delayed(struct bdi_writeback *wb)
 	unsigned long timeout;
 
 	timeout = msecs_to_jiffies(dirty_writeback_interval * 10);
-	spin_lock_bh(&wb->work_lock);
+	spin_lock_irq(&wb->work_lock);
 	if (test_bit(WB_registered, &wb->state))
 		queue_delayed_work(bdi_wq, &wb->dwork, timeout);
-	spin_unlock_bh(&wb->work_lock);
+	spin_unlock_irq(&wb->work_lock);
 }
 
 static void wb_update_bandwidth_workfn(struct work_struct *work)
@@ -334,12 +334,12 @@ static void cgwb_remove_from_bdi_list(struct bdi_writeback *wb);
 static void wb_shutdown(struct bdi_writeback *wb)
 {
 	/* Make sure nobody queues further work */
-	spin_lock_bh(&wb->work_lock);
+	spin_lock_irq(&wb->work_lock);
 	if (!test_and_clear_bit(WB_registered, &wb->state)) {
-		spin_unlock_bh(&wb->work_lock);
+		spin_unlock_irq(&wb->work_lock);
 		return;
 	}
-	spin_unlock_bh(&wb->work_lock);
+	spin_unlock_irq(&wb->work_lock);
 
 	cgwb_remove_from_bdi_list(wb);
 	/*
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 55c2776ae699..3c34db15cf70 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2867,6 +2867,7 @@ static void wb_inode_writeback_start(struct bdi_writeback *wb)
 
 static void wb_inode_writeback_end(struct bdi_writeback *wb)
 {
+	unsigned long flags;
 	atomic_dec(&wb->writeback_inodes);
 	/*
 	 * Make sure estimate of writeback throughput gets updated after
@@ -2875,7 +2876,10 @@ static void wb_inode_writeback_end(struct bdi_writeback *wb)
 	 * that if multiple inodes end writeback at a similar time, they get
 	 * batched into one bandwidth update.
 	 */
-	queue_delayed_work(bdi_wq, &wb->bw_dwork, BANDWIDTH_INTERVAL);
+	spin_lock_irqsave(&wb->work_lock, flags);
+	if (test_bit(WB_registered, &wb->state))
+		queue_delayed_work(bdi_wq, &wb->bw_dwork, BANDWIDTH_INTERVAL);
+	spin_unlock_irqrestore(&wb->work_lock, flags);
 }
 
 bool __folio_end_writeback(struct folio *folio)
-- 
2.37.1.455.g008518b4e5-goog