From: Arnd Bergmann
Date: Fri, 1 Jun 2018 12:51:59 +0200
Subject: Re: [PATCH 06/12] md: convert to bioset_init()/mempool_init()
To: Kent Overstreet
Cc: Linux Kernel Mailing List, linux-block, Jens Axboe, Christoph Hellwig,
    colyli@suse.de, Mike Snitzer, "Darrick J. Wong", Chris Mason, bacik@fb.com,
    linux-xfs, drbd-dev@lists.linbit.com, linux-btrfs,
    "open list:SOFTWARE RAID (Multiple Disks) SUPPORT", NeilBrown
In-Reply-To: <20180520222558.7053-7-kent.overstreet@gmail.com>
References: <20180520222558.7053-1-kent.overstreet@gmail.com> <20180520222558.7053-7-kent.overstreet@gmail.com>

On Mon, May 21, 2018 at 12:25 AM, Kent Overstreet wrote:
> @@ -510,7 +510,10 @@ static void mddev_delayed_delete(struct work_struct *ws);
>
>  static void mddev_put(struct mddev *mddev)
>  {
> -        struct bio_set *bs = NULL, *sync_bs = NULL;
> +        struct bio_set bs, sync_bs;
> +
> +        memset(&bs, 0, sizeof(bs));
> +        memset(&sync_bs, 0, sizeof(sync_bs));
>
>          if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
>                  return;
> @@ -521,8 +524,8 @@ static void mddev_put(struct mddev *mddev)
>                  list_del_init(&mddev->all_mddevs);
>                  bs = mddev->bio_set;
>                  sync_bs = mddev->sync_set;
> -                mddev->bio_set = NULL;
> -                mddev->sync_set = NULL;
> +                memset(&mddev->bio_set, 0, sizeof(mddev->bio_set));
> +                memset(&mddev->sync_set, 0, sizeof(mddev->sync_set));
>                  if (mddev->gendisk) {
>                          /* We did a probe so need to clean up.  Call
>                           * queue_work inside the spinlock so that
> @@ -535,10 +538,8 @@ static void mddev_put(struct mddev *mddev)
>                  kfree(mddev);
>          }
>          spin_unlock(&all_mddevs_lock);
> -        if (bs)
> -                bioset_free(bs);
> -        if (sync_bs)
> -                bioset_free(sync_bs);
> +        bioset_exit(&bs);
> +        bioset_exit(&sync_bs);
>  }

This has triggered a new warning in randconfig builds for me, on 32-bit:

drivers/md/md.c: In function 'mddev_put':
drivers/md/md.c:564:1: error: the frame size of 1096 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]

and on 64-bit:

drivers/md/md.c: In function 'mddev_put':
drivers/md/md.c:564:1: error: the frame size of 2112 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]

The size of bio_set depends somewhat on the configuration, but it looks
like it is too big to put on the stack either way, especially when you
traverse a list and copy two of them with structure assignments such as
'bs = mddev->bio_set' on each iteration.

A related problem is that calling bioset_exit() on a copy of the bio_set
might not do the right thing.
The function is defined as

void bioset_exit(struct bio_set *bs)
{
        if (bs->rescue_workqueue)
                destroy_workqueue(bs->rescue_workqueue);
        bs->rescue_workqueue = NULL;

        mempool_exit(&bs->bio_pool);
        mempool_exit(&bs->bvec_pool);

        bioset_integrity_free(bs);
        if (bs->bio_slab)
                bio_put_slab(bs);
        bs->bio_slab = NULL;
}

So it calls a couple of functions (destroy_workqueue, mempool_exit,
bioset_integrity_free, bio_put_slab), each of which might rely on the
structure being the same one that was originally allocated. If the
structure contains any kind of linked list entry, passing a copy into
the list management functions would guarantee corruption.

I could not come up with an obvious fix, but maybe it's possible to
change mddev_put() to call bioset_exit() before the structure gets
freed? Something like the (completely untested and probably wrong
somewhere) patch below.

Signed-off-by: Arnd Bergmann

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 411f60d591d1..3bf54591ddcd 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -531,11 +531,6 @@ static void mddev_delayed_delete(struct work_struct *ws);
 
 static void mddev_put(struct mddev *mddev)
 {
-        struct bio_set bs, sync_bs;
-
-        memset(&bs, 0, sizeof(bs));
-        memset(&sync_bs, 0, sizeof(sync_bs));
-
         if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
                 return;
         if (!mddev->raid_disks && list_empty(&mddev->disks) &&
@@ -543,10 +538,14 @@ static void mddev_put(struct mddev *mddev)
                 /* Array is not configured at all, and not held active,
                  * so destroy it */
                 list_del_init(&mddev->all_mddevs);
-                bs = mddev->bio_set;
-                sync_bs = mddev->sync_set;
+                spin_unlock(&all_mddevs_lock);
+
+                bioset_exit(&mddev->bio_set);
                 memset(&mddev->bio_set, 0, sizeof(mddev->bio_set));
+
+                bioset_exit(&mddev->sync_set);
                 memset(&mddev->sync_set, 0, sizeof(mddev->sync_set));
+
                 if (mddev->gendisk) {
                         /* We did a probe so need to clean up.  Call
                          * queue_work inside the spinlock so that
@@ -557,10 +556,9 @@ static void mddev_put(struct mddev *mddev)
                         queue_work(md_misc_wq, &mddev->del_work);
                 } else
                         kfree(mddev);
+        } else {
+                spin_unlock(&all_mddevs_lock);
         }
-        spin_unlock(&all_mddevs_lock);
-        bioset_exit(&bs);
-        bioset_exit(&sync_bs);
 }
 
 static void md_safemode_timeout(struct timer_list *t);
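
As an aside, here is a standalone sketch (not kernel code; the list
helpers and the 'resource'/'registry' names are made up for illustration)
of the failure mode hinted at above: list-management code records the
address of the entry it was given, so tearing down a by-value copy fails
exactly the kind of identity check the kernel's list debugging performs,
and unchecked variants would silently rewire the wrong pointers.

#include <stdio.h>

struct list_head { struct list_head *prev, *next; };

static void list_add(struct list_head *entry, struct list_head *head)
{
        entry->next = head->next;
        entry->prev = head;
        head->next->prev = entry;
        head->next = entry;
}

/* Mimics a CONFIG_DEBUG_LIST-style sanity check: the neighbours must
 * point back at the entry being deleted. */
static int list_del_checked(struct list_head *entry)
{
        if (entry->prev->next != entry || entry->next->prev != entry)
                return -1;      /* corruption detected */
        entry->prev->next = entry->next;
        entry->next->prev = entry->prev;
        return 0;
}

struct resource {
        struct list_head node;
        int id;
};

static struct list_head registry = { &registry, &registry };

int main(void)
{
        struct resource res = { .id = 1 };
        struct resource copy;

        list_add(&res.node, &registry);        /* registered by address */

        copy = res;                            /* like 'bs = mddev->bio_set' */

        /* The registry points at &res.node, not &copy.node, so cleaning
         * up through the copy fails the identity check ... */
        printf("del via copy:     %d\n", list_del_checked(&copy.node)); /* -1 */
        /* ... while the original still unlinks cleanly. */
        printf("del via original: %d\n", list_del_checked(&res.node));  /* 0 */
        return 0;
}

The bio_set case is more indirect (mempools, a slab reference, a
workqueue), but the underlying issue is the same: the teardown path
assumes it is handed the object that was set up, not a bitwise copy
of it.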
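
For context, my (possibly imperfect) reading of the pattern this series
converts to is that the bio_set is embedded in the owning structure and
initialised and torn down in place rather than allocated and freed.
Roughly like this, where the 'my_dev' structure, its init/exit helpers
and the pool parameters are invented for illustration:

struct my_dev {
        struct bio_set bs;
        /* ... */
};

static int my_dev_init(struct my_dev *dev)
{
        /* bioset_init() fills in the caller-provided bio_set instead of
         * allocating one, so there is no pointer to free on failure. */
        return bioset_init(&dev->bs, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
}

static void my_dev_exit(struct my_dev *dev)
{
        /* bioset_exit() has to run on the same object that bioset_init()
         * set up, before the containing structure is freed. */
        bioset_exit(&dev->bs);
}

Which is also why tearing the pools down through a stack copy in
mddev_put() looks suspicious to me.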