Subject: Re: [PATCH] blk: improve order of bio handling in generic_make_request()
From: Jack Wang
To: Jens Axboe, NeilBrown
Cc: LKML, Lars Ellenberg, Kent Overstreet, Pavel Machek, Mike Snitzer, Mikulas Patocka
Date: Tue, 7 Mar 2017 09:49:59 +0100

On 06.03.2017 21:18, Jens Axboe wrote:
> On 03/05/2017 09:40 PM, NeilBrown wrote:
>> On Fri, Mar 03 2017, Jack Wang wrote:
>>>
>>> Thanks Neil for pushing the fix.
>>>
>>> We can optimize generic_make_request() a little bit:
>>> - assign the bio_list struct directly instead of init and merge
>>> - remove duplicate code
>>>
>>> I think it's better to squash this into your fix.
>>
>> Hi Jack,
>> I don't object to your changes, but I'd like to see a response from
>> Jens first.
>> My preference would be to get the original patch in first; then other
>> changes that build on it, such as this one, can be added. Until the
>> core change lands, any other work is pointless.
>>
>> Of course, if Jens wants this merged before he'll apply it, I'll
>> happily do that.
>
> I like the change, and thanks for tackling this. It's been a pending
> issue for way too long. I do think we should squash Jack's patch
> into the original, as it does clean up the code nicely.
>
> Do we have a proper test case for this, so we can verify that it
> does indeed also work in practice?

Hi Jens,

I can trigger the deadlock in RAID1 with the test below:

I create one md array from one local loop device and one remote SCSI
device exported via SRP, then run fio with mixed read/write on top of
the md device and force_close the session on the storage side.

mdX_raid1 ends up waiting on free_array in D state, and many fio
processes are also in D state in wait_barrier.

With Neil's patch above, I can no longer trigger the deadlock.

The earlier discussion is in the link below:
http://www.spinics.net/lists/raid/msg54680.html

Thanks,
Jack Wang
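
For anyone following along, here is a minimal sketch of the list handling
Jack describes above, assuming the shape of the reworked
generic_make_request(); it is a simplified fragment, not the literal patch,
and the surrounding context (recursion check, the outer submission loop) is
elided:

	/*
	 * Simplified sketch of the bio_list handling under discussion;
	 * not the actual patch.
	 */
	struct bio_list bio_list_on_stack;	/* bios queued while we recurse */
	struct bio_list lower, same, saved;
	struct request_queue *q = bdev_get_queue(bio->bi_bdev);
	blk_qc_t ret;

	/*
	 * struct bio_list is just a head/tail pair, so the pending list
	 * can be saved by plain struct assignment instead of an extra
	 * init-plus-merge round trip.
	 */
	saved = bio_list_on_stack;
	bio_list_init(&bio_list_on_stack);

	ret = q->make_request_fn(q, bio);

	/*
	 * Sort the bios that ->make_request_fn() just queued: bios for
	 * lower-level devices are submitted before bios for the same
	 * level, so a stacking driver's children are fully issued first.
	 */
	bio_list_init(&lower);
	bio_list_init(&same);
	while ((bio = bio_list_pop(&bio_list_on_stack)) != NULL) {
		if (q == bdev_get_queue(bio->bi_bdev))
			bio_list_add(&same, bio);
		else
			bio_list_add(&lower, bio);
	}

	/* Rebuild the pending list: lower level first, then same, then saved. */
	bio_list_on_stack = lower;
	bio_list_merge(&bio_list_on_stack, &same);
	bio_list_merge(&bio_list_on_stack, &saved);

The point of the struct assignment is that saving the pending list by value
is essentially free, which removes one bio_list_merge() call compared with
initializing a temporary list and merging back.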