From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Guoqing Jiang, NeilBrown, Shaohua Li, Sasha Levin
Subject: [PATCH 4.14 019/137] md-cluster: clear another nodes suspend_area after the copy is finished
Date: Tue, 2 Oct 2018 06:23:40 -0700
Message-Id: <20181002132459.818175898@linuxfoundation.org>
In-Reply-To: <20181002132458.446916963@linuxfoundation.org>
References: <20181002132458.446916963@linuxfoundation.org>
User-Agent: quilt/0.65
X-Mailer: git-send-email 2.19.0
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

4.14-stable review patch.  If anyone has any objections, please let me know.
------------------

From: Guoqing Jiang

[ Upstream commit 010228e4a932ca1e8365e3b58c8e1e44c16ff793 ]

When one node leaves the cluster or stops the resyncing (resync or
recovery) array, the other nodes need to call recover_bitmaps to
continue the unfinished task.

But we need to clear suspend_area only after the other nodes have
copied the resync information to their bitmap (by calling
bitmap_copy_from_slot). Otherwise, all nodes could write to the
suspend_area even though the suspend_area has not yet been handled by
any node, because area_resyncing returns 0 at the beginning of
raid1_write_request. That means one node could write to the
suspend_area while another node is still resyncing the same area, so
the data could become inconsistent.

So let's clear suspend_area later, under the protection of the bm
lock, to avoid the above issue. It is also more straightforward to
clear suspend_area after the nodes have copied the resync info to
their bitmap.

Signed-off-by: Guoqing Jiang
Reviewed-by: NeilBrown
Signed-off-by: Shaohua Li
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 drivers/md/md-cluster.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -304,15 +304,6 @@ static void recover_bitmaps(struct md_th
 	while (cinfo->recovery_map) {
 		slot = fls64((u64)cinfo->recovery_map) - 1;
 
-		/* Clear suspend_area associated with the bitmap */
-		spin_lock_irq(&cinfo->suspend_lock);
-		list_for_each_entry_safe(s, tmp, &cinfo->suspend_list, list)
-			if (slot == s->slot) {
-				list_del(&s->list);
-				kfree(s);
-			}
-		spin_unlock_irq(&cinfo->suspend_lock);
-
 		snprintf(str, 64, "bitmap%04d", slot);
 		bm_lockres = lockres_init(mddev, str, NULL, 1);
 		if (!bm_lockres) {
@@ -331,6 +322,16 @@ static void recover_bitmaps(struct md_th
 			pr_err("md-cluster: Could not copy data from bitmap %d\n", slot);
 			goto clear_bit;
 		}
+
+		/* Clear suspend_area associated with the bitmap */
+		spin_lock_irq(&cinfo->suspend_lock);
+		list_for_each_entry_safe(s, tmp, &cinfo->suspend_list, list)
+			if (slot == s->slot) {
+				list_del(&s->list);
+				kfree(s);
+			}
+		spin_unlock_irq(&cinfo->suspend_lock);
+
 		if (hi > 0) {
 			if (lo < mddev->recovery_cp)
 				mddev->recovery_cp = lo;