From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Guoqing Jiang, NeilBrown, Shaohua Li, Sasha Levin
Subject: [PATCH 4.18 027/228] md-cluster: clear another node's suspend_area after the copy is finished
Date: Tue, 2 Oct 2018 06:22:04 -0700
Message-Id: <20181002132500.937309360@linuxfoundation.org>
In-Reply-To: <20181002132459.032960735@linuxfoundation.org>
References: <20181002132459.032960735@linuxfoundation.org>
X-Mailer: git-send-email 2.19.0
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

4.18-stable review patch. If anyone has any objections, please let me know.
------------------

From: Guoqing Jiang

[ Upstream commit 010228e4a932ca1e8365e3b58c8e1e44c16ff793 ]

When one node leaves the cluster or stops a resyncing (resync or
recovery) array, the other nodes need to call recover_bitmaps to
continue the unfinished task. But the suspend_area must be cleared
later, after the other nodes have copied the resync information to
their bitmap (by calling bitmap_copy_from_slot). Otherwise, all nodes
could write to the suspend_area even though the suspend_area has not
been handled by any node, because area_resyncing returns 0 at the
beginning of raid1_write_request. That means one node could write to
the suspend_area while another node is resyncing the same area, and
the data could become inconsistent.

So let's clear the suspend_area later, under the protection of the
bitmap lock, to avoid the above issue. It is also straightforward to
clear the suspend_area after the nodes have copied the resync info to
their bitmap.

Signed-off-by: Guoqing Jiang
Reviewed-by: NeilBrown
Signed-off-by: Shaohua Li
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 drivers/md/md-cluster.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -304,15 +304,6 @@ static void recover_bitmaps(struct md_th
 	while (cinfo->recovery_map) {
 		slot = fls64((u64)cinfo->recovery_map) - 1;
 
-		/* Clear suspend_area associated with the bitmap */
-		spin_lock_irq(&cinfo->suspend_lock);
-		list_for_each_entry_safe(s, tmp, &cinfo->suspend_list, list)
-			if (slot == s->slot) {
-				list_del(&s->list);
-				kfree(s);
-			}
-		spin_unlock_irq(&cinfo->suspend_lock);
-
 		snprintf(str, 64, "bitmap%04d", slot);
 		bm_lockres = lockres_init(mddev, str, NULL, 1);
 		if (!bm_lockres) {
@@ -331,6 +322,16 @@ static void recover_bitmaps(struct md_th
 			pr_err("md-cluster: Could not copy data from bitmap %d\n", slot);
 			goto clear_bit;
 		}
+
+		/* Clear suspend_area associated with the bitmap */
+		spin_lock_irq(&cinfo->suspend_lock);
+		list_for_each_entry_safe(s, tmp, &cinfo->suspend_list, list)
+			if (slot == s->slot) {
+				list_del(&s->list);
+				kfree(s);
+			}
+		spin_unlock_irq(&cinfo->suspend_lock);
+
 		if (hi > 0) {
 			if (lo < mddev->recovery_cp)
 				mddev->recovery_cp = lo;