From: Greg Thelen
To: Wang Long, Michal Hocko, Andrew Morton, Johannes Weiner, Tejun Heo, Sasha Levin
Cc: npiggin@gmail.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Greg Thelen, stable@vger.kernel.org
Subject: [PATCH for-4.4] writeback: safer lock nesting
Date: Wed, 11 Apr 2018 01:45:21 -0700
Message-Id: <20180411084521.254006-1-gthelen@google.com>

lock_page_memcg()/unlock_page_memcg() use spin_lock_irqsave/restore()
if the page's memcg is undergoing move accounting, which occurs when a
process leaves its memcg for a new one that has
memory.move_charge_at_immigrate set.

unlocked_inode_to_wb_begin,end() use spin_lock_irq/spin_unlock_irq() if
the given inode is switching writeback domains.  Switches occur when
enough writes are issued from a new domain.

This existing pattern is thus suspicious:

    lock_page_memcg(page);
    unlocked_inode_to_wb_begin(inode, &locked);
    ...
    unlocked_inode_to_wb_end(inode, locked);
    unlock_page_memcg(page);
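The danger comes from mixing the two spinlock flavors:
spin_lock_irq()/spin_unlock_irq() disable and re-enable local
interrupts unconditionally, while
spin_lock_irqsave()/spin_unlock_irqrestore() save and restore the
caller's previous interrupt state.  A minimal sketch of the hazard,
using hypothetical placeholder locks `outer' and `inner' rather than
the actual memcg and tree locks:

    static void nesting_hazard(spinlock_t *outer, spinlock_t *inner)
    {
            unsigned long flags;

            spin_lock_irqsave(outer, flags); /* IRQs off; prior state saved */
            spin_lock_irq(inner);            /* IRQs off (they already were) */
            /* ... critical section ... */
            spin_unlock_irq(inner);          /* IRQs unconditionally back ON */
            /*
             * outer is still held but IRQs are now enabled: an interrupt
             * handler that tries to take outer deadlocks right here.
             */
            spin_unlock_irqrestore(outer, flags);
    }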
If both inode switch and process memcg migration are in-flight, then
unlocked_inode_to_wb_end() will unconditionally enable interrupts while
still holding the lock_page_memcg() irq spinlock.  This suggests the
possibility of deadlock if an interrupt occurs before
unlock_page_memcg():

    truncate
      __cancel_dirty_page
        lock_page_memcg
          unlocked_inode_to_wb_begin
          unlocked_inode_to_wb_end
            <interrupts mistakenly enabled>
                                        <interrupt>
                                        end_page_writeback
                                          test_clear_page_writeback
                                            lock_page_memcg
                                              <deadlock>
        unlock_page_memcg

Due to configuration limitations this deadlock is not currently
possible because we don't mix cgroup writeback (a cgroupv2 feature) and
memory.move_charge_at_immigrate (a cgroupv1 feature).

If the kernel is hacked to always claim inode switching and memcg
moving_account, then this script triggers lockup in less than a minute:

    cd /mnt/cgroup/memory
    mkdir a b
    echo 1 > a/memory.move_charge_at_immigrate
    echo 1 > b/memory.move_charge_at_immigrate
    (
      echo $BASHPID > a/cgroup.procs
      while true; do
        dd if=/dev/zero of=/mnt/big bs=1M count=256
      done
    ) &
    while true; do
      sync
    done &
    sleep 1h &
    SLEEP=$!
    while true; do
      echo $SLEEP > a/cgroup.procs
      echo $SLEEP > b/cgroup.procs
    done

The deadlock does not seem possible in an unmodified kernel, so it's
debatable whether there's any reason to modify the kernel.  I suggest
we should, to prevent future surprises.  And Wang Long said "this
deadlock occurs three times in our environment", so there's more reason
to apply this, even to stable.
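Concretely, the fix below threads a struct wb_lock_cookie through the
begin/end pair, recording whether tree_lock was taken and, if so, the
saved interrupt state, so that unlocked_inode_to_wb_end() can
irqrestore instead of unconditionally enabling IRQs.  The suspicious
call sites then take this shape (a sketch of the pattern the diff
introduces, with the per-callsite stat updates elided):

    struct wb_lock_cookie cookie = {};

    lock_page_memcg(page);
    wb = unlocked_inode_to_wb_begin(inode, &cookie);
    /* ... stat updates ... */
    unlocked_inode_to_wb_end(inode, &cookie); /* restores saved IRQ state */
    unlock_page_memcg(page);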
[ This patch is only for 4.4 stable.  Newer stable kernels should be
  able to cherry-pick the upstream "writeback: safer lock nesting"
  patch. ]

Fixes: 682aa8e1a6a1 ("writeback: implement unlocked_inode_to_wb transaction and use it for stat updates")
Cc: stable@vger.kernel.org # v4.2+
Reported-by: Wang Long
Signed-off-by: Greg Thelen
Acked-by: Michal Hocko
Acked-by: Wang Long
---
 fs/fs-writeback.c                |  7 ++++---
 include/linux/backing-dev-defs.h |  5 +++++
 include/linux/backing-dev.h      | 31 +++++++++++++++++--------------
 mm/page-writeback.c              | 18 +++++++++---------
 4 files changed, 35 insertions(+), 26 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 22b30249fbcb..0fe667875852 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -747,11 +747,12 @@ int inode_congested(struct inode *inode, int cong_bits)
 	 */
 	if (inode && inode_to_wb_is_valid(inode)) {
 		struct bdi_writeback *wb;
-		bool locked, congested;
+		struct wb_lock_cookie lock_cookie = {};
+		bool congested;
 
-		wb = unlocked_inode_to_wb_begin(inode, &locked);
+		wb = unlocked_inode_to_wb_begin(inode, &lock_cookie);
 		congested = wb_congested(wb, cong_bits);
-		unlocked_inode_to_wb_end(inode, locked);
+		unlocked_inode_to_wb_end(inode, &lock_cookie);
 		return congested;
 	}
 
diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
index 140c29635069..a307c37c2e6c 100644
--- a/include/linux/backing-dev-defs.h
+++ b/include/linux/backing-dev-defs.h
@@ -191,6 +191,11 @@ static inline void set_bdi_congested(struct backing_dev_info *bdi, int sync)
 	set_wb_congested(bdi->wb.congested, sync);
 }
 
+struct wb_lock_cookie {
+	bool locked;
+	unsigned long flags;
+};
+
 #ifdef CONFIG_CGROUP_WRITEBACK
 
 /**
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 89d3de3e096b..361274ce5815 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -366,7 +366,7 @@ static inline struct bdi_writeback *inode_to_wb(struct inode *inode)
 /**
  * unlocked_inode_to_wb_begin - begin unlocked inode wb access transaction
  * @inode: target inode
- * @lockedp: temp bool output param, to be passed to the end function
+ * @cookie: output param, to be passed to the end function
  *
  * The caller wants to access the wb associated with @inode but isn't
  * holding inode->i_lock, mapping->tree_lock or wb->list_lock.  This
@@ -374,12 +374,12 @@ static inline struct bdi_writeback *inode_to_wb(struct inode *inode)
  * association doesn't change until the transaction is finished with
  * unlocked_inode_to_wb_end().
  *
- * The caller must call unlocked_inode_to_wb_end() with *@lockdep
- * afterwards and can't sleep during transaction.  IRQ may or may not be
- * disabled on return.
+ * The caller must call unlocked_inode_to_wb_end() with *@cookie afterwards and
+ * can't sleep during the transaction.  IRQs may or may not be disabled on
+ * return.
  */
 static inline struct bdi_writeback *
-unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
+unlocked_inode_to_wb_begin(struct inode *inode, struct wb_lock_cookie *cookie)
 {
 	rcu_read_lock();
 
@@ -387,10 +387,10 @@ unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
 	 * Paired with store_release in inode_switch_wb_work_fn() and
 	 * ensures that we see the new wb if we see cleared I_WB_SWITCH.
 	 */
-	*lockedp = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;
+	cookie->locked = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;
 
-	if (unlikely(*lockedp))
-		spin_lock_irq(&inode->i_mapping->tree_lock);
+	if (unlikely(cookie->locked))
+		spin_lock_irqsave(&inode->i_mapping->tree_lock, cookie->flags);
 
 	/*
 	 * Protected by either !I_WB_SWITCH + rcu_read_lock() or tree_lock.
@@ -402,12 +402,14 @@ unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
 /**
  * unlocked_inode_to_wb_end - end inode wb access transaction
  * @inode: target inode
- * @locked: *@lockedp from unlocked_inode_to_wb_begin()
+ * @cookie: @cookie from unlocked_inode_to_wb_begin()
  */
-static inline void unlocked_inode_to_wb_end(struct inode *inode, bool locked)
+static inline void unlocked_inode_to_wb_end(struct inode *inode,
+					    struct wb_lock_cookie *cookie)
 {
-	if (unlikely(locked))
-		spin_unlock_irq(&inode->i_mapping->tree_lock);
+	if (unlikely(cookie->locked))
+		spin_unlock_irqrestore(&inode->i_mapping->tree_lock,
+				       cookie->flags);
 
 	rcu_read_unlock();
 }
@@ -454,12 +456,13 @@ static inline struct bdi_writeback *inode_to_wb(struct inode *inode)
 }
 
 static inline struct bdi_writeback *
-unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
+unlocked_inode_to_wb_begin(struct inode *inode, struct wb_lock_cookie *cookie)
 {
 	return inode_to_wb(inode);
 }
 
-static inline void unlocked_inode_to_wb_end(struct inode *inode, bool locked)
+static inline void unlocked_inode_to_wb_end(struct inode *inode,
+					    struct wb_lock_cookie *cookie)
 {
 }
 
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 6d0dbde4503b..3309dbda7ffa 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2510,13 +2510,13 @@ void account_page_redirty(struct page *page)
 	if (mapping && mapping_cap_account_dirty(mapping)) {
 		struct inode *inode = mapping->host;
 		struct bdi_writeback *wb;
-		bool locked;
+		struct wb_lock_cookie cookie = {};
 
-		wb = unlocked_inode_to_wb_begin(inode, &locked);
+		wb = unlocked_inode_to_wb_begin(inode, &cookie);
 		current->nr_dirtied--;
 		dec_zone_page_state(page, NR_DIRTIED);
 		dec_wb_stat(wb, WB_DIRTIED);
-		unlocked_inode_to_wb_end(inode, locked);
+		unlocked_inode_to_wb_end(inode, &cookie);
 	}
 }
 EXPORT_SYMBOL(account_page_redirty);
@@ -2622,15 +2622,15 @@ void cancel_dirty_page(struct page *page)
 		struct inode *inode = mapping->host;
 		struct bdi_writeback *wb;
 		struct mem_cgroup *memcg;
-		bool locked;
+		struct wb_lock_cookie cookie = {};
 
 		memcg = mem_cgroup_begin_page_stat(page);
-		wb = unlocked_inode_to_wb_begin(inode, &locked);
+		wb = unlocked_inode_to_wb_begin(inode, &cookie);
 
 		if (TestClearPageDirty(page))
 			account_page_cleaned(page, mapping, memcg, wb);
 
-		unlocked_inode_to_wb_end(inode, locked);
+		unlocked_inode_to_wb_end(inode, &cookie);
 		mem_cgroup_end_page_stat(memcg);
 	} else {
 		ClearPageDirty(page);
@@ -2663,7 +2663,7 @@ int clear_page_dirty_for_io(struct page *page)
 		struct inode *inode = mapping->host;
 		struct bdi_writeback *wb;
 		struct mem_cgroup *memcg;
-		bool locked;
+		struct wb_lock_cookie cookie = {};
 
 		/*
 		 * Yes, Virginia, this is indeed insane.
@@ -2701,14 +2701,14 @@ int clear_page_dirty_for_io(struct page *page)
 		 * exclusion.
 		 */
 		memcg = mem_cgroup_begin_page_stat(page);
-		wb = unlocked_inode_to_wb_begin(inode, &locked);
+		wb = unlocked_inode_to_wb_begin(inode, &cookie);
 		if (TestClearPageDirty(page)) {
 			mem_cgroup_dec_page_stat(memcg, MEM_CGROUP_STAT_DIRTY);
 			dec_zone_page_state(page, NR_FILE_DIRTY);
 			dec_wb_stat(wb, WB_RECLAIMABLE);
 			ret = 1;
 		}
-		unlocked_inode_to_wb_end(inode, locked);
+		unlocked_inode_to_wb_end(inode, &cookie);
 		mem_cgroup_end_page_stat(memcg);
 		return ret;
 	}
-- 
2.17.0.484.g0c8726318c-goog