From: Ming Lei <ming.lei@canonical.com>
To: Jens Axboe, linux-kernel@vger.kernel.org
Cc: "Justin M. Forbes", Jeff Moyer, Tejun Heo, Christoph Hellwig
Subject: [PATCH v7 0/6] block: loop: improve loop with AIO
Date: Thu, 16 Jul 2015 23:37:42 +0800
Message-Id: <1437061068-26118-1-git-send-email-ming.lei@canonical.com>

Hi Guys,

There are three main advantages to using direct I/O and AIO on the
loop device's backing file:

1) The double page cache (one copy for the loop device, one for the
   backing file) is avoided, so memory usage decreases a lot.

2) Unlike user-space direct I/O, there is no cost for pinning user
   pages.

3) Context switches are avoided, which helps throughput:
   - With buffered reads of the backing file, good random I/O
     throughput is usually only obtained when requests are submitted
     concurrently from many tasks. Sequential I/O, on the other hand,
     mostly hits the page cache, so concurrent submission just adds
     unnecessary context switches without improving throughput much.
     There was a discussion[1] about using non-blocking I/O to address
     this problem for applications.
   - With direct I/O and AIO, concurrent submission can be avoided,
     and random read throughput is not hurt in the meantime.

So this patchset tries to improve loop via AIO: memory usage decreases
by about 45% (see the detailed data in the commit log of patch 4), and
I/O throughput is not affected either.
V7:
- only 4/6 and 5/6 updated
- update direct I/O after lo->offset is changed (4/6)
- fix updating LO_FLAGS_DIRECT_IO in __loop_update_dio() (4/6)
- introduce the LOOP_SET_DIRECT_IO ioctl command (5/6) for the
  'mount -o loop' case
- losetup git tree:
  http://kernel.ubuntu.com/git/ming/util-linux.git/log/?h=loop-dio-v7
- how to set direct I/O:
  losetup --direct-io 1 /dev/loopN
- how to clear direct I/O:
  losetup --direct-io 0 /dev/loopN
- how to show whether direct I/O is used for accessing the backing file:
  losetup -l

V6:
- only patch 4 and patch 5 updated
- check lo->lo_offset to decide if direct I/O can be supported (4/5)
- introduce one flag so that userspace (losetup) is kept updated on
  whether direct I/O is used to access the backing file (4/5)
- implement patches for util-linux (losetup) so that losetup can
  enable the direct I/O feature (4/5):
  http://kernel.ubuntu.com/git/ming/util-linux.git/log/?h=losetup-dio
- remove the direct I/O control interface from sysfs (4/5)
- handle partial reads in the direct-read case (5/5)
- add more comments for direct I/O (5/5)

V5:
- don't introduce IOCB_DONT_DIRTY_PAGE; bypass dirtying for ITER_KVEC
  and ITER_BVEC direct I/O (read), as required by Christoph

V4:
- add a detailed commit log for 'use kthread_work'
- allow userspace (sysfs, losetup) to decide whether dio/aio is used,
  as suggested by Christoph and Dave Chinner
- only use dio if the backing block device's minimum I/O size is 512,
  as pointed out by Dave Chinner and Christoph

V3:
- based on Al's iov_iter work and Christoph's kiocb changes
- use kthread_work
- introduce the IOCB_DONT_DIRTY_PAGE flag
- set QUEUE_FLAG_NOMERGES for loop's request queue

V2:
- remove the 'extra' parameter from aio_kernel_alloc()
- try to avoid memory allocation inside the queue request callback
- introduce a 'use_mq' sysfs file for enabling or disabling kernel AIO

V1:
- link: http://marc.info/?t=140803157700004&r=1&w=2
- improve the failure path in aio_kernel_submit()

 drivers/block/loop.c      | 263 ++++++++++++++++++++++++++++++++++++----------
 drivers/block/loop.h      |  13 +--
 fs/direct-io.c            |   9 +-
 include/uapi/linux/loop.h |   2 +
 4 files changed, 221 insertions(+), 66 deletions(-)

Thanks,
Ming