From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ming Lei, Mike Snitzer
Subject: [PATCH 5.2 132/137] dm raid: fix updating of max_discard_sectors limit
Date: Sun, 6 Oct 2019 19:21:56 +0200
Message-Id: <20191006171220.369479205@linuxfoundation.org>
In-Reply-To: <20191006171209.403038733@linuxfoundation.org>
References: <20191006171209.403038733@linuxfoundation.org>
X-Mailer: git-send-email 2.23.0
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Ming Lei

commit c8156fc77d0796ba2618936dbb3084e769e916c1 upstream.

The unit of 'chunk_size' is bytes, not sectors, so fix this by setting
the queue_limits' max_discard_sectors to rs->md.chunk_sectors.  Also,
rename chunk_size to chunk_size_bytes.
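To make the unit mixup concrete, here is a minimal user-space sketch (not
part of the patch; the 128-sector chunk and the to_bytes() macro below are
illustrative stand-ins for the kernel helpers, assuming 512-byte sectors)
showing how storing a byte count in a sector-based limit inflates it by a
factor of 512:

#include <stdio.h>

#define SECTOR_SHIFT 9
/* mirrors the kernel's to_bytes(): sectors -> bytes */
#define to_bytes(sectors) ((unsigned long)(sectors) << SECTOR_SHIFT)

int main(void)
{
	unsigned long chunk_sectors = 128;	/* hypothetical 64 KiB chunk */
	unsigned long chunk_size_bytes = to_bytes(chunk_sectors);

	/* before the fix: a byte count ends up in a sector-based limit */
	unsigned long buggy_limit = chunk_size_bytes;	/* 65536 "sectors" = 32 MiB */
	/* after the fix: the limit stays in sectors */
	unsigned long fixed_limit = chunk_sectors;	/* 128 sectors = 64 KiB */

	printf("buggy max_discard_sectors: %lu (%lu KiB)\n",
	       buggy_limit, buggy_limit / 2);
	printf("fixed max_discard_sectors: %lu (%lu KiB)\n",
	       fixed_limit, fixed_limit / 2);
	return 0;
}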
Without this fix, an overly large max_discard_sectors is applied to the
dm-raid request queue, and the raid code ends up splitting the bio again.
This re-split done by the raid code causes the following nested clone_endio:

1) one big bio 'A' is submitted to the dm queue, and serves as the
   original bio

2) a new bio 'B' is cloned from the original bio 'A', .map() is run on
   'B', and B's original bio points to 'A'

3) the raid code sees that 'B' is too big, splits 'B', and re-submits
   the remaining part of 'B' to the dm-raid queue via
   generic_make_request()

4) dm now handles 'B' as a new original bio, allocates a new clone bio
   'C' and runs .map() on 'C'.  Meanwhile C's original bio points to 'B'

5) suppose 'C' is now completed by the raid code directly, then the
   following clone_endio() is called recursively:

	clone_endio(C)
		->clone_endio(B)	#B is the original bio of 'C'
			->bio_endio(A)

'A' can be big enough to cause hundreds of nested clone_endio() calls,
so the stack can easily be corrupted.

Fixes: 61697a6abd24a ("dm: eliminate 'split_discard_bios' flag from DM target interface")
Cc: stable@vger.kernel.org
Signed-off-by: Ming Lei
Signed-off-by: Mike Snitzer
Signed-off-by: Greg Kroah-Hartman

---
 drivers/md/dm-raid.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -3738,18 +3738,18 @@ static int raid_iterate_devices(struct d
 static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
 {
 	struct raid_set *rs = ti->private;
-	unsigned int chunk_size = to_bytes(rs->md.chunk_sectors);
+	unsigned int chunk_size_bytes = to_bytes(rs->md.chunk_sectors);
 
-	blk_limits_io_min(limits, chunk_size);
-	blk_limits_io_opt(limits, chunk_size * mddev_data_stripes(rs));
+	blk_limits_io_min(limits, chunk_size_bytes);
+	blk_limits_io_opt(limits, chunk_size_bytes * mddev_data_stripes(rs));
 
 	/*
 	 * RAID1 and RAID10 personalities require bio splitting,
 	 * RAID0/4/5/6 don't and process large discard bios properly.
 	 */
 	if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
-		limits->discard_granularity = chunk_size;
-		limits->max_discard_sectors = chunk_size;
+		limits->discard_granularity = chunk_size_bytes;
+		limits->max_discard_sectors = rs->md.chunk_sectors;
 	}
 }