From: Ming Lin
To: Mike Snitzer
Cc: Ming Lei, dm-devel@redhat.com, Christoph Hellwig, Alasdair G Kergon,
	Lars Ellenberg, Philip Kelleher, Joshua Morris, Kent Overstreet,
	Nitin Gupta, Oleg Drokin, Al Viro, Jens Axboe, Andreas Dilger,
	Geoff Levand, Jiri Kosina, lkml, Jim Paris, Minchan Kim,
	Dongsu Park, drbd-user@lists.linbit.com
Subject: Re: [PATCH v4 01/11] block: make generic_make_request handle arbitrarily sized bios
Date: Thu, 4 Jun 2015 22:21:24 -0700

On Thu, Jun 4, 2015 at 5:06 PM, Mike Snitzer wrote:
> On Thu, Jun 04 2015 at 6:21pm -0400,
> Ming Lin wrote:
>
>> On Thu, Jun 4, 2015 at 2:06 PM, Mike Snitzer wrote:
>> >
>> > We need to test on large HW RAID setups like a NetApp filer (or even
>> > local SAS drives connected via some SAS controller), like an 8+2-drive
>> > RAID6 or an 8+1 RAID5 setup. Testing with MD RAID on JBOD setups with
>> > 8 devices is also useful. It is larger RAID setups that will be more
>> > sensitive to IO sizes being properly aligned on RAID stripe and/or
>> > chunk size boundaries.
>>
>> I'll test it on a large HW RAID setup.
>>
>> Here is a HW RAID5 setup with 19 278G HDDs on a Dell R730xd
>> (2 sockets / 48 logical CPUs / 264G mem).
>> http://minggr.net/pub/20150604/hw_raid5.jpg
>>
>> The stripe size is 64K.
>>
>> I'm going to test ext4/btrfs/xfs on it, with "bs" set to
>> 1216K (64K * 19 = 1216K), running 48 jobs.
>
> Definitely an odd blocksize (though a 1280K full stripe is pretty common
> for 10+2 HW RAID6 w/ 128K chunk size).

I can change it to a 10-HDD HW RAID6 w/ 128K chunk size, then use
bs=1280K.

>
>> [global]
>> ioengine=libaio
>> iodepth=64
>> direct=1
>> runtime=1800
>> time_based
>> group_reporting
>> numjobs=48
>> rw=read
>>
>> [job1]
>> bs=1216K
>> directory=/mnt
>> size=1G
>
> How does time_based relate to size=1G? It'll rewrite the same 1 gig
> file repeatedly?

The job file above is for reads. For writes, I think it would rewrite
the same 1G file repeatedly. Does that make sense for a performance test?

>
>> Or do you have other suggestions for what tests I should run?
>
> You're welcome to run this job, but I'll also check with others here to
> see what fio jobs we used in the recent past when assessing the
> performance of the dm-crypt parallelization changes.

That's very helpful.

>
> Also, a lot of care needs to be taken to eliminate jitter in the system
> while the test is running. We got a lot of good insight from Bart Van
> Assche on that and put it into practice. I'll see if we can
> (re)summarize that too.

Very helpful too. Thanks.

>
> Mike
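P.S. The full-stripe arithmetic discussed above (chunk size times the
number of disks counted toward the stripe) can be sketched in a few
lines. `full_stripe_kb` is a hypothetical helper for illustration, not
part of fio or the kernel:

```python
# Hypothetical helper illustrating the full-stripe arithmetic from this
# thread; not part of fio or the kernel.

def full_stripe_kb(chunk_kb: int, disks: int) -> int:
    """Full stripe width in KB: one chunk per disk counted in the stripe."""
    return chunk_kb * disks

# 19-HDD HW RAID5 with 64K chunks, counted as in the mail (64K * 19):
print(full_stripe_kb(64, 19))   # 1216

# 10+2 HW RAID6 with 128K chunks (Mike's common example, 10 data disks):
print(full_stripe_kb(128, 10))  # 1280
```

Setting fio's "bs" to a multiple of this value keeps each IO aligned to
full-stripe boundaries, which is the sensitivity Mike describes above.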