From: Greg Thelen <gthelen@google.com>
To: Wang Long, Michal Hocko, Andrew Morton, Johannes Weiner, Tejun Heo
Cc: npiggin@gmail.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Greg Thelen
Subject: [PATCH v3] writeback: safer lock nesting
Date: Mon, 9 Apr 2018 17:59:08 -0700
Message-Id: <20180410005908.167976-1-gthelen@google.com>
In-Reply-To: <201804080259.VS5U0mKT%fengguang.wu@intel.com>
References: <201804080259.VS5U0mKT%fengguang.wu@intel.com>

lock_page_memcg()/unlock_page_memcg() use spin_lock_irqsave/restore() if
the page's memcg is undergoing move accounting, which occurs when a
process leaves its memcg for a new one that has
memory.move_charge_at_immigrate set.

unlocked_inode_to_wb_begin,end() use spin_lock_irq/spin_unlock_irq() if
the given inode is switching writeback domains.  Switches occur when
enough writes are issued from a new domain.

This existing pattern is thus suspicious:

    lock_page_memcg(page);
    unlocked_inode_to_wb_begin(inode, &locked);
    ...
    unlocked_inode_to_wb_end(inode, locked);
    unlock_page_memcg(page);

If both inode switch and process memcg migration are in-flight, then
unlocked_inode_to_wb_end() will unconditionally enable interrupts while
still holding the lock_page_memcg() irq spinlock.  This suggests the
possibility of deadlock if an interrupt occurs before
unlock_page_memcg():

    truncate
    __cancel_dirty_page
    lock_page_memcg
    unlocked_inode_to_wb_begin
    unlocked_inode_to_wb_end
    <interrupts mistakenly enabled>
                                    <interrupt>
                                    end_page_writeback
                                    test_clear_page_writeback
                                    lock_page_memcg
                                    <deadlock>
    unlock_page_memcg

Due to configuration limitations this deadlock is not currently possible
because we don't mix cgroup writeback (a cgroupv2 feature) and
memory.move_charge_at_immigrate (a cgroupv1 feature).

If the kernel is hacked to always claim inode switching and memcg
moving_account, then this script triggers lockup in less than a minute:

    cd /mnt/cgroup/memory
    mkdir a b
    echo 1 > a/memory.move_charge_at_immigrate
    echo 1 > b/memory.move_charge_at_immigrate
    (
      echo $BASHPID > a/cgroup.procs
      while true; do
        dd if=/dev/zero of=/mnt/big bs=1M count=256
      done
    ) &
    while true; do
      sync
    done &
    sleep 1h &
    SLEEP=$!
    while true; do
      echo $SLEEP > a/cgroup.procs
      echo $SLEEP > b/cgroup.procs
    done

Given the deadlock is not currently possible, it's debatable if there's
any reason to modify the kernel.  I suggest we do, to prevent future
surprises.

Reported-by: Wang Long <wanglong19@meituan.com>
Signed-off-by: Greg Thelen <gthelen@google.com>
Change-Id: Ibb773e8045852978f6207074491d262f1b3fb613
---
Changelog since v2:
- explicitly initialize wb_lock_cookie to silence compiler warnings.

Changelog since v1:
- add wb_lock_cookie to record lock context.
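
Note for reviewers (not part of the commit message): the hazard boils down
to nesting spin_lock_irq()/spin_unlock_irq() inside a spin_lock_irqsave()
section.  A minimal self-contained sketch, with hypothetical locks "outer"
and "inner" standing in for the memcg move_lock and mapping->tree_lock:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(outer);  /* stands in for the memcg move_lock */
    static DEFINE_SPINLOCK(inner);  /* stands in for mapping->tree_lock  */

    static void broken_nesting(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&outer, flags);  /* IRQs disabled        */
            spin_lock_irq(&inner);             /* IRQs stay disabled   */
            spin_unlock_irq(&inner);           /* BUG: IRQs forced on  */
            /*
             * An interrupt delivered here can run a handler that takes
             * "outer" again (test_clear_page_writeback() ->
             * lock_page_memcg() in the trace above) and deadlock.
             */
            spin_unlock_irqrestore(&outer, flags);
    }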
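
With the patch applied, callers follow the pattern below (this mirrors the
updated account_page_redirty()).  The cookie records whether tree_lock was
taken and the IRQ state to restore; the explicit zero initializer is what
the v2 changelog refers to, keeping the !CONFIG_CGROUP_WRITEBACK stub
paths warning-free:

    struct wb_lock_cookie cookie = {0};
    struct bdi_writeback *wb;

    wb = unlocked_inode_to_wb_begin(inode, &cookie);
    /* ... access wb stats; the caller's IRQ state is preserved ... */
    unlocked_inode_to_wb_end(inode, &cookie);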
 fs/fs-writeback.c                |  7 ++++---
 include/linux/backing-dev-defs.h |  5 +++++
 include/linux/backing-dev.h      | 30 ++++++++++++++++--------------
 mm/page-writeback.c              | 18 +++++++++---------
 4 files changed, 34 insertions(+), 26 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 1280f915079b..f4b2f6625913 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -745,11 +745,12 @@ int inode_congested(struct inode *inode, int cong_bits)
 	 */
 	if (inode && inode_to_wb_is_valid(inode)) {
 		struct bdi_writeback *wb;
-		bool locked, congested;
+		struct wb_lock_cookie lock_cookie;
+		bool congested;
 
-		wb = unlocked_inode_to_wb_begin(inode, &locked);
+		wb = unlocked_inode_to_wb_begin(inode, &lock_cookie);
 		congested = wb_congested(wb, cong_bits);
-		unlocked_inode_to_wb_end(inode, locked);
+		unlocked_inode_to_wb_end(inode, &lock_cookie);
 
 		return congested;
 	}
diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
index bfe86b54f6c1..0bd432a4d7bd 100644
--- a/include/linux/backing-dev-defs.h
+++ b/include/linux/backing-dev-defs.h
@@ -223,6 +223,11 @@ static inline void set_bdi_congested(struct backing_dev_info *bdi, int sync)
 	set_wb_congested(bdi->wb.congested, sync);
 }
 
+struct wb_lock_cookie {
+	bool locked;
+	unsigned long flags;
+};
+
 #ifdef CONFIG_CGROUP_WRITEBACK
 
 /**
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 3e4ce54d84ab..1d744c61d996 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -346,7 +346,7 @@ static inline struct bdi_writeback *inode_to_wb(const struct inode *inode)
 /**
  * unlocked_inode_to_wb_begin - begin unlocked inode wb access transaction
  * @inode: target inode
- * @lockedp: temp bool output param, to be passed to the end function
+ * @cookie: output param, to be passed to the end function
  *
  * The caller wants to access the wb associated with @inode but isn't
  * holding inode->i_lock, mapping->tree_lock or wb->list_lock.  This
@@ -354,12 +354,11 @@ static inline struct bdi_writeback *inode_to_wb(const struct inode *inode)
  * association doesn't change until the transaction is finished with
  * unlocked_inode_to_wb_end().
  *
- * The caller must call unlocked_inode_to_wb_end() with *@lockdep
- * afterwards and can't sleep during transaction.  IRQ may or may not be
- * disabled on return.
+ * The caller must call unlocked_inode_to_wb_end() with *@cookie afterwards and
+ * can't sleep during transaction.  IRQ may or may not be disabled on return.
  */
 static inline struct bdi_writeback *
-unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
+unlocked_inode_to_wb_begin(struct inode *inode, struct wb_lock_cookie *cookie)
 {
 	rcu_read_lock();
 
@@ -367,10 +366,10 @@ unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
 	 * Paired with store_release in inode_switch_wb_work_fn() and
 	 * ensures that we see the new wb if we see cleared I_WB_SWITCH.
 	 */
-	*lockedp = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;
+	cookie->locked = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;
 
-	if (unlikely(*lockedp))
-		spin_lock_irq(&inode->i_mapping->tree_lock);
+	if (unlikely(cookie->locked))
+		spin_lock_irqsave(&inode->i_mapping->tree_lock, cookie->flags);
 
 	/*
 	 * Protected by either !I_WB_SWITCH + rcu_read_lock() or tree_lock.
@@ -382,12 +381,14 @@ unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
 /**
  * unlocked_inode_to_wb_end - end inode wb access transaction
  * @inode: target inode
- * @locked: *@lockedp from unlocked_inode_to_wb_begin()
+ * @cookie: @cookie from unlocked_inode_to_wb_begin()
  */
-static inline void unlocked_inode_to_wb_end(struct inode *inode, bool locked)
+static inline void unlocked_inode_to_wb_end(struct inode *inode,
+					    struct wb_lock_cookie *cookie)
 {
-	if (unlikely(locked))
-		spin_unlock_irq(&inode->i_mapping->tree_lock);
+	if (unlikely(cookie->locked))
+		spin_unlock_irqrestore(&inode->i_mapping->tree_lock,
+				       cookie->flags);
 
 	rcu_read_unlock();
 }
@@ -434,12 +435,13 @@ static inline struct bdi_writeback *inode_to_wb(struct inode *inode)
 }
 
 static inline struct bdi_writeback *
-unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
+unlocked_inode_to_wb_begin(struct inode *inode, struct wb_lock_cookie *cookie)
 {
 	return inode_to_wb(inode);
 }
 
-static inline void unlocked_inode_to_wb_end(struct inode *inode, bool locked)
+static inline void unlocked_inode_to_wb_end(struct inode *inode,
+					    struct wb_lock_cookie *cookie)
 {
 }
 
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 586f31261c83..bc38a2a7a597 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2501,13 +2501,13 @@ void account_page_redirty(struct page *page)
 	if (mapping && mapping_cap_account_dirty(mapping)) {
 		struct inode *inode = mapping->host;
 		struct bdi_writeback *wb;
-		bool locked;
+		struct wb_lock_cookie cookie = {0};
 
-		wb = unlocked_inode_to_wb_begin(inode, &locked);
+		wb = unlocked_inode_to_wb_begin(inode, &cookie);
 		current->nr_dirtied--;
 		dec_node_page_state(page, NR_DIRTIED);
 		dec_wb_stat(wb, WB_DIRTIED);
-		unlocked_inode_to_wb_end(inode, locked);
+		unlocked_inode_to_wb_end(inode, &cookie);
 	}
 }
 EXPORT_SYMBOL(account_page_redirty);
@@ -2613,15 +2613,15 @@ void __cancel_dirty_page(struct page *page)
 	if (mapping_cap_account_dirty(mapping)) {
 		struct inode *inode = mapping->host;
 		struct bdi_writeback *wb;
-		bool locked;
+		struct wb_lock_cookie cookie = {0};
 
 		lock_page_memcg(page);
-		wb = unlocked_inode_to_wb_begin(inode, &locked);
+		wb = unlocked_inode_to_wb_begin(inode, &cookie);
 
 		if (TestClearPageDirty(page))
 			account_page_cleaned(page, mapping, wb);
 
-		unlocked_inode_to_wb_end(inode, locked);
+		unlocked_inode_to_wb_end(inode, &cookie);
 		unlock_page_memcg(page);
 	} else {
 		ClearPageDirty(page);
@@ -2653,7 +2653,7 @@ int clear_page_dirty_for_io(struct page *page)
 	if (mapping && mapping_cap_account_dirty(mapping)) {
 		struct inode *inode = mapping->host;
 		struct bdi_writeback *wb;
-		bool locked;
+		struct wb_lock_cookie cookie = {0};
 
 		/*
 		 * Yes, Virginia, this is indeed insane.
@@ -2690,14 +2690,14 @@ int clear_page_dirty_for_io(struct page *page)
 		 * always locked coming in here, so we get the desired
 		 * exclusion.
 		 */
-		wb = unlocked_inode_to_wb_begin(inode, &locked);
+		wb = unlocked_inode_to_wb_begin(inode, &cookie);
 		if (TestClearPageDirty(page)) {
 			dec_lruvec_page_state(page, NR_FILE_DIRTY);
 			dec_zone_page_state(page, NR_ZONE_WRITE_PENDING);
 			dec_wb_stat(wb, WB_RECLAIMABLE);
 			ret = 1;
 		}
-		unlocked_inode_to_wb_end(inode, locked);
+		unlocked_inode_to_wb_end(inode, &cookie);
 		return ret;
 	}
 	return TestClearPageDirty(page);
-- 
2.17.0.484.g0c8726318c-goog