Date: Thu, 4 Jun 2015 20:06:36 -0400
From: Mike Snitzer
To: Ming Lin
Cc: Ming Lei, dm-devel@redhat.com, Christoph Hellwig, Alasdair G Kergon,
	Lars Ellenberg, Philip Kelleher, Joshua Morris, Christoph Hellwig,
	Kent Overstreet, Nitin Gupta, Oleg Drokin, Al Viro, Jens Axboe,
	Andreas Dilger, Geoff Levand, Jiri Kosina, lkml, Jim Paris,
	Minchan Kim, Dongsu Park, drbd-user@lists.linbit.com
Subject: Re: [PATCH v4 01/11] block: make generic_make_request handle arbitrarily sized bios
Message-ID: <20150605000636.GA24611@redhat.com>
References: <1432318723-18829-2-git-send-email-mlin@kernel.org>
	<20150526143626.GA4315@redhat.com>
	<20150526160400.GB4715@redhat.com>
	<20150528003627.GD32216@agk-dp.fab.redhat.com>
	<1433138551.11778.4.camel@hasee>
	<20150604210617.GA23710@redhat.com>

On Thu, Jun 04 2015 at 6:21pm -0400,
Ming Lin wrote:

> On Thu, Jun 4, 2015 at 2:06 PM, Mike Snitzer wrote:
> >
> > We need to test on large HW RAID setups like a NetApp filer (or even
> > local SAS drives connected via some SAS controller), e.g. an 8+2 drive
> > RAID6 or an 8+1 RAID5 setup.  Testing with MD raid on JBOD setups with
> > 8 devices is also useful.  It is the larger RAID setups that will be
> > more sensitive to IO sizes being properly aligned on RAID stripe
> > and/or chunk size boundaries.
>
> I'll test it on a large HW RAID setup.
>
> Here is a HW RAID5 setup with 19 278G HDDs on a Dell R730xd
> (2 sockets / 48 logical CPUs / 264G mem):
> http://minggr.net/pub/20150604/hw_raid5.jpg
>
> The stripe size is 64K.
>
> I'm going to test ext4/btrfs/xfs on it.
> "bs" is set to 1216K (64K * 19 = 1216K) and I'll run 48 jobs.

Definitely an odd blocksize (though a 1280K full stripe is pretty common
for 10+2 HW RAID6 w/ a 128K chunk size).

> [global]
> ioengine=libaio
> iodepth=64
> direct=1
> runtime=1800
> time_based
> group_reporting
> numjobs=48
> rw=read
>
> [job1]
> bs=1216K
> directory=/mnt
> size=1G

How does time_based relate to size=1G?  Will it just re-read the same
1 gig file repeatedly?  (One possible variant is sketched in the P.S.
below.)

> Or do you have other suggestions of what tests I should run?

You're welcome to run this job, but I'll also check with others here to
see what fio jobs we used in the recent past when assessing the
performance of the dm-crypt parallelization changes.

Also, a lot of care needs to be taken to eliminate jitter in the system
while the test is running.  We got a lot of good insight from Bart Van
Assche on that and put it into practice.  I'll see if we can
(re)summarize that too.

Mike
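
P.S. Purely as a sketch of one alternative (untested; /dev/sdX below is
just a placeholder for the RAID5 volume, and the 10G per-clone region is
an arbitrary choice): pointing fio at the raw device and spreading the
48 clones out with offset_increment keeps each clone streaming its own
region instead of looping over a single 1G file:

    [global]
    ioengine=libaio
    iodepth=64
    direct=1
    runtime=1800
    time_based
    group_reporting
    numjobs=48
    rw=read
    bs=1216k

    [streamread]
    # placeholder device path -- substitute the actual RAID5 volume
    filename=/dev/sdX
    # each clone is limited to a 10G region of the device
    size=10G
    # with numjobs, fio offsets clone N by N * offset_increment,
    # so clone N reads the 10G region starting at N * 10G
    offset_increment=10G

With time_based each clone will still loop over its region for the full
runtime, but at least the clones won't all be hitting the same 1G of
data.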