Date: Fri, 15 Feb 2019 12:19:21 +1100
From: Dave Chinner
To: Jerome Glisse
Cc: Matthew Wilcox, Jason Gunthorpe, Dan Williams, Jan Kara,
    Christopher Lameter, Doug Ledford, Ira Weiny,
    lsf-pc@lists.linux-foundation.org, linux-rdma, Linux MM,
    Linux Kernel Mailing List, John Hubbard, Michal Hocko
Subject: Re: [LSF/MM TOPIC] Discuss least bad options for resolving longterm-GUP usage by RDMA
Message-ID: <20190215011921.GS20493@dastard>
References: <01000168c8e2de6b-9ab820ed-38ad-469c-b210-60fcff8ea81c-000000@email.amazonses.com>
 <20190208044302.GA20493@dastard>
 <20190208111028.GD6353@quack2.suse.cz>
 <20190211102402.GF19029@quack2.suse.cz>
 <20190211180654.GB24692@ziepe.ca>
 <20190214202622.GB3420@redhat.com>
 <20190214205049.GC12668@bombadil.infradead.org>
 <20190214213922.GD3420@redhat.com>
In-Reply-To: <20190214213922.GD3420@redhat.com>

On Thu, Feb 14, 2019 at 04:39:22PM -0500, Jerome Glisse wrote:
> On Thu, Feb 14, 2019 at 12:50:49PM -0800, Matthew Wilcox wrote:
> > On Thu, Feb 14, 2019 at 03:26:22PM -0500, Jerome Glisse wrote:
> > > On Mon, Feb 11, 2019 at 11:06:54AM -0700, Jason Gunthorpe wrote:
> > > > But it also doesn't truncate/create a hole. Another thread wrote
> > > > to it right away and the 'hole' was essentially instantly
> > > > reallocated. This is an inherent, pre-existing race in the
> > > > ftruncate/etc APIs.
> > >
> > > So it is kind of a parallel point to this, but direct I/O does
> > > "truncate" pages, or more exactly, after a direct I/O write
> > > invalidate_inode_pages2_range() is called and it will try to unmap
> > > and remove from the page cache all pages that have been written to.
> >
> > Hang on. Pages are tossed out of the page cache _before_ an O_DIRECT
> > write starts. The only way what you're describing can happen is if
> > there's a race between an O_DIRECT writer and an mmap. Which is either
> > an incredibly badly written application or someone trying an exploit.
>
> I believe they are tossed after O_DIRECT starts (dio_complete). But

Yes, but also before. See iomap_dio_rw() and generic_file_direct_write().

> regardless, the issue is that an RDMA can have pinned the page long
> before the DIO, in which case the page cannot be tossed from the page
> cache and whatever is written to the block device will be discarded
> once the RDMA unpins the pages. So we would end up in the code path
> that spits out a big error message in the kernel log.

Which tells us filesystem people that the applications are doing
something that _will_ cause data corruption, and hence not to spend any
time triaging data corruption reports, because it's not a filesystem bug
that caused it.

See open(2):

	Applications should avoid mixing O_DIRECT and normal I/O to the
	same file, and especially to overlapping byte regions in the
	same file. Even when the filesystem correctly handles the
	coherency issues in this situation, overall I/O throughput is
	likely to be slower than using either mode alone. Likewise,
	applications should avoid mixing mmap(2) of files with direct
	I/O to the same files.

-Dave.
-- 
Dave Chinner
david@fromorbit.com
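
The invalidation ordering Dave points at in generic_file_direct_write()
is roughly the following. This is a simplified sketch of the
mm/filemap.c path from kernels of that era, with error handling and the
iomap_dio_rw() variant omitted; it illustrates the "before and after"
invalidation being discussed, not the verbatim kernel code.

    #include <linux/fs.h>
    #include <linux/pagemap.h>
    #include <linux/uio.h>

    /* Sketch of the DIO write path's interaction with the page cache. */
    static ssize_t dio_write_sketch(struct kiocb *iocb, struct iov_iter *from)
    {
            struct address_space *mapping = iocb->ki_filp->f_mapping;
            loff_t pos = iocb->ki_pos;
            size_t count = iov_iter_count(from);
            ssize_t written;
            int err;

            /* 1. Flush any dirty page cache over the target range. */
            err = filemap_write_and_wait_range(mapping, pos, pos + count - 1);
            if (err)
                    return err;

            /* 2. Toss the cached pages BEFORE doing the direct write. */
            err = invalidate_inode_pages2_range(mapping,
                            pos >> PAGE_SHIFT,
                            (pos + count - 1) >> PAGE_SHIFT);
            if (err)
                    return err;

            /* 3. Issue the direct I/O to the block device. */
            written = mapping->a_ops->direct_IO(iocb, from);

            /*
             * 4. Invalidate AGAIN afterwards, in case something (e.g. a
             * racing mmap fault) repopulated the cache while the DIO was
             * in flight.  A long-term GUP pin prevents these pages from
             * being invalidated, which is the case that produces the
             * error message in the kernel log mentioned above.
             */
            invalidate_inode_pages2_range(mapping,
                            pos >> PAGE_SHIFT,
                            (pos + count - 1) >> PAGE_SHIFT);

            return written;
    }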