Date: Mon, 5 Nov 2018 10:54:47 +0100
From: Jan Kara
To: John Hubbard
Cc: Christoph Hellwig, Dave Chinner, Matthew Wilcox, Michal Hocko,
	Christopher Lameter, Jason Gunthorpe, Dan Williams, Jan Kara,
	linux-mm@kvack.org, Andrew Morton, LKML, linux-rdma,
	linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 4/6] mm: introduce page->dma_pinned_flags, _count
Message-ID: <20181105095447.GE6953@quack2.suse.cz>
References: <20181012060014.10242-1-jhubbard@nvidia.com>
	<20181012060014.10242-5-jhubbard@nvidia.com>
	<20181013035516.GA18822@dastard>
	<7c2e3b54-0b1d-6726-a508-804ef8620cfd@nvidia.com>
	<20181013164740.GA6593@infradead.org>
	<84811b54-60bf-2bc3-a58d-6a7925c24aad@nvidia.com>
In-Reply-To: <84811b54-60bf-2bc3-a58d-6a7925c24aad@nvidia.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun 04-11-18 23:10:12, John Hubbard wrote:
> On 10/13/18 9:47 AM, Christoph Hellwig wrote:
> > On Sat, Oct 13, 2018 at 12:34:12AM -0700, John Hubbard wrote:
> >> In patch 6/6, pin_page_for_dma(), which is called at the end of
> >> get_user_pages(), unceremoniously rips the pages out of the LRU,
> >> as a prerequisite to using either of the page->dma_pinned_* fields.
> >>
> >> The idea is that the LRU is not especially useful for this
> >> situation anyway, so we'll just make it one or the other: either a
> >> page is dma-pinned, and most likely just hanging out doing RDMA
> >> (the LRU is less meaningful during that time), or it is possibly
> >> on an LRU list.
> >
> > Have you done any benchmarking of what this does to direct I/O
> > performance, especially for small I/O directly to a (fast) block
> > device?
>
> Hi Christoph,
>
> I'm seeing about 20% slower performance in one case: lots of reads
> and writes of size 8192 B, on a fast NVMe device. My put_page() -->
> put_user_page() conversions are still incomplete and buggy, but I've
> got enough of them done to briefly run the test.
>
> One thing that occurs to me is that jumping on and off the LRU takes
> time, and if we limited this to 64-bit platforms, maybe we could use
> a real page flag? I know that leaves 32-bit out in the cold, but...
> maybe use this slower approach for 32-bit, and the pure page flag
> for 64-bit? Ugh, we shouldn't slow anything down by 20%.
>
> The test program is below. I hope I didn't overlook something
> obvious, but that's definitely possible, given my lack of experience
> with direct I/O.
>
> I'm preparing to send an updated RFC this week that contains the
> feedback to date, and also many converted call sites, so that
> everyone can see what the whole (proposed) story would look like in
> its latest incarnation.

Hmm, have you tried larger buffer sizes? Synchronous 8k IO isn't going
to max out NVMe IOPS by far. Can I suggest you install fio [1] (it has
the advantage of being pretty much the standard for a test like this,
so everyone knows at a glance what the test does) and run something
like the following job file with it:

[reader]
direct=1
ioengine=libaio
blocksize=4096
size=1g
numjobs=1
rw=read
iodepth=64

And then see how the numbers with and without your patches compare?
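(In case it helps: save the job section above to a file, add a
filename= line pointing at the device or file under test, and run it
with fio, e.g.:

  # reader.fio: the job section above, plus e.g. filename=/dev/nvme0n1p1
  fio reader.fio

The job file name and device path here are just placeholders for
whatever you're testing against.)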
								Honza

[1] https://github.com/axboe/fio

> #define _GNU_SOURCE
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <unistd.h>
> #include <fcntl.h>
> #include <sys/types.h>
> #include <sys/stat.h>
> #include <errno.h>
>
> static const unsigned BUF_SIZE = 4096;
> static const unsigned FULL_DATA_SIZE = 2 * BUF_SIZE;
>
> void read_from_file(int fd, size_t how_much, char *buf)
> {
>     size_t bytes_read;
>
>     for (size_t index = 0; index < how_much; index += BUF_SIZE) {
>         bytes_read = read(fd, buf, BUF_SIZE);
>         if (bytes_read != BUF_SIZE) {
>             printf("reading file failed: %m\n");
>             exit(3);
>         }
>     }
> }
>
> void seek_to_start(int fd, const char *caller)
> {
>     off_t result = lseek(fd, 0, SEEK_SET);
>     if (result == -1) {
>         printf("%s: lseek failed: %m\n", caller);
>         exit(4);
>     }
> }
>
> void write_to_file(int fd, size_t how_much, char *buf)
> {
>     ssize_t result;
>
>     for (size_t index = 0; index < how_much; index += BUF_SIZE) {
>         result = write(fd, buf, BUF_SIZE);
>         if (result < 0) {
>             printf("writing file failed: %m\n");
>             exit(3);
>         }
>     }
> }
>
> void read_and_write(int fd, size_t how_much, char *buf)
> {
>     seek_to_start(fd, "About to read");
>     read_from_file(fd, how_much, buf);
>
>     memset(buf, 'a', BUF_SIZE);
>
>     seek_to_start(fd, "About to write");
>     write_to_file(fd, how_much, buf);
> }
>
> int main(int argc, char *argv[])
> {
>     void *buf;
>     /*
>      * O_DIRECT requires at least 512 B alignment, but runs faster
>      * (2.8 sec, vs. 3.5 sec) with 4096 B alignment.
>      */
>     unsigned align = 4096;
>
>     if (argc < 3) {
>         printf("Usage: %s <filename> <iterations>\n", argv[0]);
>         return 1;
>     }
>     char *filename = argv[1];
>     unsigned iterations = strtoul(argv[2], 0, 0);
>
>     if (posix_memalign(&buf, align, BUF_SIZE) != 0) {
>         printf("posix_memalign failed\n");
>         return 5;
>     }
>
>     /* Not using O_SYNC for now, anyway. */
>     int fd = open(filename, O_DIRECT | O_RDWR);
>     if (fd < 0) {
>         printf("Failed to open %s: %m\n", filename);
>         return 2;
>     }
>
>     printf("File: %s, data size: %u, iterations: %u\n",
>            filename, FULL_DATA_SIZE, iterations);
>
>     for (unsigned count = 0; count < iterations; count++)
>         read_and_write(fd, FULL_DATA_SIZE, buf);
>
>     close(fd);
>     free(buf);
>     return 0;
> }
>
> thanks,
> --
> John Hubbard
> NVIDIA

--
Jan Kara
SUSE Labs, CR
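(For reference, the quoted test program builds and runs along these
lines; the source file name, test file path, and iteration count are
only illustrative:

  gcc -O2 -Wall -o dio_test dio_test.c
  ./dio_test /mnt/nvme/testfile 100000

The file must live on a filesystem or device that supports O_DIRECT.)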