From: Jeff Moyer
To: Jens Axboe
Cc: linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] blk-plug: don't flush nested plug lists
References: <1428347694-17704-1-git-send-email-jmoyer@redhat.com>
	<1428347694-17704-2-git-send-email-jmoyer@redhat.com>
	<55256786.8000608@kernel.dk>
X-PGP-KeyID: 1F78E1B4
X-PGP-CertKey: F6FE 280D 8293 F72C 65FD 5A58 1FF8 A7CA 1F78 E1B4
X-PCLoadLetter: What the f**k does that mean?
Date: Wed, 08 Apr 2015 13:56:50 -0400
In-Reply-To: <55256786.8000608@kernel.dk> (Jens Axboe's message of
	"Wed, 08 Apr 2015 11:38:14 -0600")
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.3 (gnu/linux)

Jens Axboe writes:

>> Comments would be greatly appreciated.
>
> It's hard to argue with the increased merging for your case. The task
> plugs did originally work like you changed them to, not flushing until
> the outermost plug was flushed. Unfortunately I don't quite remember
> why I changed them, will have to do a bit of digging to refresh my
> memory.

Let me know what you dig up.

> For cases where we don't do any merging (like nvme), we always want to
> flush. Well, almost: if we start to utilize the batched submission,
> then the plug would still potentially help (just for other reasons
> than merging). It's never straightforward. :)
>
> And agree with Ming, this can be cleaned up substantially.

Let me know if you have any issues with the v2 posting.

> I'd also like to see some test results from the other end of the
> spectrum. Your posted case is clearly the best case (we missed tons of
> merging, now we don't); I'd like to see a normal case and a worst case
> result as well so we have an idea of what this would do to latencies.

As for other benchmark numbers, this work was inspired by two reports
of lower performance after converting virtio_blk to use blk-mq in
RHEL.  The workloads were both synthetic, though.  I'd be happy to run
a battery of tests on the patch set, but if there is anything specific
you want to see (besides a workload that will have no merges), let me
know.

I guess I should also mention that another solution to the problem
might be the mythical mq I/O scheduler.  :)

Cheers,
Jeff
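
To make the nested-plug semantics under discussion concrete, here is a
minimal userspace sketch of the behavior the patch is after: finishing
an inner plug leaves requests queued (and thus still mergeable), and
only the outermost finish triggers the flush.  The struct plug,
task_plug, start_plug(), queue_request(), and the printf-based "flush"
below are simplified stand-ins for the kernel's struct blk_plug and
current->plug machinery, not the actual implementation.

#include <stdio.h>

/* Simplified stand-in for struct blk_plug. */
struct plug {
	int nr_queued;		/* stand-in for the plugged request list */
};

static struct plug *task_plug;	/* stand-in for current->plug */

static void start_plug(struct plug *plug)
{
	if (task_plug)
		return;		/* already plugged: nested, don't install */
	plug->nr_queued = 0;
	task_plug = plug;
}

static void queue_request(void)
{
	task_plug->nr_queued++;	/* request sits in the plug list */
}

static void finish_plug(struct plug *plug)
{
	if (plug != task_plug)
		return;		/* inner plug: keep requests queued for merging */
	printf("flushing %d request(s)\n", task_plug->nr_queued);
	task_plug = NULL;
}

int main(void)
{
	struct plug outer, inner;

	start_plug(&outer);
	queue_request();
	start_plug(&inner);	/* e.g. a callee that plugs on its own */
	queue_request();
	finish_plug(&inner);	/* no flush: the outer plug is still active */
	queue_request();
	finish_plug(&outer);	/* outermost finish: flushes all three */
	return 0;
}

Run as-is, this queues three requests across a nested plug and flushes
them in a single batch only at the outermost finish_plug(), whereas the
behavior the patch replaces would flush at the inner finish as well,
giving up the merge opportunity.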