Date: Wed, 6 Apr 2016 09:51:02 +0800
Subject: Re: [PATCH] block: make sure big bio is splitted into at most 256 bvecs
From: Ming Lei
To: Kent Overstreet
Cc: Jens Axboe, Linux Kernel Mailing List, linux-block@vger.kernel.org,
	Christoph Hellwig, Eric Wheeler, Sebastian Roesner, "4.2+", Shaohua Li

On Wed, Apr 6, 2016 at 9:28 AM, Kent Overstreet wrote:
> On Wed, Apr 06, 2016 at 09:20:59AM +0800, Ming Lei wrote:
>> On Wed, Apr 6, 2016 at 9:10 AM, Kent Overstreet
>> wrote:
>> > On Wed, Apr 06, 2016 at 08:59:31AM +0800, Ming Lei wrote:
>> >> On Wed, Apr 6, 2016 at 8:30 AM, Kent Overstreet
>> >> wrote:
>> >> > On Wed, Apr 06, 2016 at 01:44:06AM +0800, Ming Lei wrote:
>> >> >> After arbitrary bio sizes are supported, the incoming bio may
>> >> >> be very big. We have to split such a bio into smaller bios so that
>> >> >> each holds at most BIO_MAX_PAGES bvecs, for safety reasons such
>> >> >> as bio_clone().
>> >> >>
>> >> >> This patch fixes the following kernel crash:
>> >> >
>> >> > Ming, let's not do it this way; drivers that don't clone biovecs are the norm -
>> >> > instead, md has its own queue limits that it ought to be setting up correctly.
>> >> Except for md, there are several other users of bio_clone():
>> >>
>> >> - drbd
>> >> - osdblk
>> >> - pktcdvd
>> >> - xen-blkfront
>> >> - the verify code of bcache
>> >>
>> >> I don't like bio_clone() either; it can cause trouble for multipage bvecs.
>> >>
>> >> How about fixing the issue with this simple patch first? Then, once we limit
>> >> all of the above queues by max sectors, the global limit can be removed, as
>> >> mentioned in the comment.
>> >
>> > just do this:
>> >
>> > void blk_set_limit_clonable(struct queue_limits *lim)
>> > {
>> >         lim->max_segments = min(lim->max_segments, BIO_MAX_PAGES);
>> > }
>>
>> As I mentioned, it is __not__ correct to use max_segments; the issue is
>> related to max sectors. Please see the code of bio_clone_bioset():
>
> I know how bio_clone_bioset() works, but I'm not seeing how that has anything to
> do with max sectors. The way it copies the biovec is not going to merge
> segments; if the original bio had non-full-page segments, then so will the clone.

OK, I see. It is a totally new limit, and no current queue limit fits the
purpose. It looks like we need to introduce a new limit, io_max_vecs, which
can be applied in blk_bio_segment_split(). But a queue flag may be better than
a queue limit, since it is a 'limit' imposed by software/drivers.

>
>> bio = bio_alloc_bioset(gfp_mask, bio_segments(bio_src), bs);
>>
>> bio_segments() actually returns the number of pages.
>>
>> > and then call that from the appropriate drivers. It should be like 20 minutes of
>> > work.
>> >
>> > My issue is that your approach of just enforcing a global limit is a step in the
>> > wrong direction - we want to get _away_ from that and move towards drivers
>> > specifying _directly_ what their limits are: more straightforward, less opaque.
>> >
>> > Also, your patch is wrong, as it'll break if there are bvecs that aren't full
>> > pages.
>>
>> I don't understand why my patch is wrong, since we can split anywhere
>> in a bio. Could you explain it a bit?
>
> If you have a bio that has > BIO_MAX_PAGES segments, but all the segments are a
> single sector (not a full page!) - then think about what'll happen...
>
> It can happen with userspace issuing direct I/Os.

Yeah, I will cook a patch for review.

Thanks,
Ming