Date: Mon, 19 Nov 2018 12:27:02 -0700
From: Jason Gunthorpe
To: Jerome Glisse
Cc: Leon Romanovsky, Kenneth Lee, Tim Sell, linux-doc@vger.kernel.org,
        Alexander Shishkin, Zaibo Xu, zhangfei.gao@foxmail.com,
        linuxarm@huawei.com, haojian.zhuang@linaro.org, Christoph Lameter,
        Hao Fang, Gavin Schenk, RDMA mailing list, Zhou Wang,
        Doug Ledford, Uwe Kleine-König, David Kershner, Johan Hovold,
        Cyrille Pitchen, Sagar Dharia, Jens Axboe, guodong.xu@linaro.org,
        linux-netdev, Randy Dunlap, linux-kernel@vger.kernel.org,
        Vinod Koul, linux-crypto@vger.kernel.org, Philippe Ombredanne,
        Sanyog Kale, "David S. Miller", linux-accelerators@lists.ozlabs.org
Subject: Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce
Message-ID: <20181119192702.GD4890@ziepe.ca>
References: <20181115085109.GD157308@Turing-Arch-b>
        <20181115145455.GN3759@mtr-leonro.mtl.com>
        <20181119091405.GE157308@Turing-Arch-b>
        <20181119091910.GF157308@Turing-Arch-b>
        <20181119104801.GF8268@mtr-leonro.mtl.com>
        <20181119164853.GA4593@redhat.com>
        <20181119182752.GA4890@ziepe.ca>
        <20181119184215.GB4593@redhat.com>
        <20181119185333.GC4890@ziepe.ca>
        <20181119191721.GC4593@redhat.com>
In-Reply-To: <20181119191721.GC4593@redhat.com>

On Mon, Nov 19, 2018 at 02:17:21PM -0500, Jerome Glisse wrote:
> On Mon, Nov 19, 2018 at 11:53:33AM -0700, Jason Gunthorpe wrote:
> > On Mon, Nov 19, 2018 at 01:42:16PM -0500, Jerome Glisse wrote:
> > > On Mon, Nov 19, 2018 at 11:27:52AM -0700, Jason Gunthorpe wrote:
> > > > On Mon, Nov 19, 2018 at 11:48:54AM -0500, Jerome Glisse wrote:
> > > > >
> > > > > Just to comment on this: any InfiniBand driver which uses umem
> > > > > and does not have ODP (here ODP for me means listening to MMU
> > > > > notifiers, so all InfiniBand drivers except mlx5) will be
> > > > > affected by the same issue AFAICT.
> > > > >
> > > > > AFAICT there is nothing special happening after fork() inside
> > > > > any of those drivers. So if the parent creates a umem MR before
> > > > > fork() and programs the hardware with it, then after fork() the
> > > > > parent might start using a new page for the umem range while
> > > > > the old memory is used by the child. The reverse is also true
> > > > > (parent using the old memory and the child the new memory);
> > > > > bottom line, you cannot predict which memory the child or the
> > > > > parent will use for the range after fork().
> > > > >
> > > > > So no matter whether you look at the child or the parent, what
> > > > > the hardware uses for the MR is unlikely to match what the CPU
> > > > > uses for the same virtual address.
> > > > > In other words:
> > > > >
> > > > > Before fork:
> > > > >   CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> > > > >   HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > > > >
> > > > > Case 1:
> > > > >   CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> > > > >   CPU child:  virtual addr ptr1 -> physical address = 0xDEAD
> > > > >   HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > > > >
> > > > > Case 2:
> > > > >   CPU parent: virtual addr ptr1 -> physical address = 0xBEEF
> > > > >   CPU child:  virtual addr ptr1 -> physical address = 0xCAFE
> > > > >   HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > > >
> > > > IIRC this is solved in IB by automatically calling
> > > > madvise(MADV_DONTFORK) before creating the MR.
> > > >
> > > > MADV_DONTFORK
> > > >   .. This is useful to prevent copy-on-write semantics from
> > > >   changing the physical location of a page if the parent writes
> > > >   to it after a fork(2) ..
> > >
> > > This would work around the issue, but it is not transparent: a
> > > range marked with DONTFORK no longer behaves as expected from the
> > > application's point of view.
> >
> > Do you know what the difference is? The man page really gives no
> > hint..
> >
> > Does it sometimes unmap the pages during fork?
>
> It is handled in kernel/fork.c, look for DONTCOPY: basically fork()
> just skips copying that VMA into the child, so the child is left with
> no mapping at all for addresses under DONTCOPY/DONTFORK, which breaks
> the application's expectation of what fork() does.

Hum, I wonder why this API was selected then..

> > I actually wonder if the kernel is a bit broken here, we have the
> > same problem with O_DIRECT and other stuff, right?
>
> No it is not, O_DIRECT is fine. The only corner case I can think of
> with O_DIRECT is one thread launching an O_DIRECT operation that
> writes into private anonymous memory (other O_DIRECT cases do not
> matter) while another thread calls fork(); then what the child gets
> is undefined, i.e. it either gets the data from before the O_DIRECT
> finished or it gets the result of the O_DIRECT. But that is really
> what you should expect when doing such a thing without
> synchronization.
>
> So O_DIRECT is fine.

?? How can O_DIRECT be fine but RDMA not? They use exactly the same
get_user_pages flow, right? Can we do what O_DIRECT does in RDMA and
be fine too?

AFAIK the only difference is the length of the race window: you'd have
to fork and fault during the much shorter time that O_DIRECT has its
get_user_pages open.

> > Really, if I have a get_user_pages FOLL_WRITE on a page and we
> > fork, then shouldn't the COW immediately be broken during the fork?
> >
> > The kernel can't guarantee that an ongoing DMA will not write to
> > those pages, and it breaks the fork semantic to write to both
> > processes.
>
> Fixing that would incur a high cost: we would need to grow struct
> page and potentially copy gigabytes of memory during fork()... this
> would be a serious performance regression for many folks just to
> work around an abuse by a device driver. So I don't think anything
> on that front would be welcome.

Why? Keep track in each mm of whether there are any active
get_user_pages FOLL_WRITE pages in it; if yes, then sweep the VMAs and
fix the issue for the FOLL_WRITE pages.

John is already working on being able to detect pages under GUP, so it
seems like a small step..

Since nearly all cases of fork don't have a GUP FOLL_WRITE active,
there would be no performance hit.
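
To make the Case 1 / Case 2 tables above concrete, here is a quick
userspace sketch (mine, illustrative only) that reads the PFN backing
an anonymous page from /proc/self/pagemap before and after fork().
Whichever side of the fork writes first gets moved to a new physical
page by COW, while hardware programmed with the old PFN would keep
DMAing to the old page. Note the PFN field reads as 0 without
CAP_SYS_ADMIN on recent kernels, so run it as root:

/* cow_demo.c - watch the PFN backing an anonymous page diverge
 * across fork() + write. Assumes 4K pages; error checking elided.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

static uint64_t pfn_of(void *addr)
{
        uint64_t entry = 0;
        int fd = open("/proc/self/pagemap", O_RDONLY);

        pread(fd, &entry, sizeof(entry),
              ((uintptr_t)addr / 4096) * sizeof(entry));
        close(fd);
        return entry & ((1ULL << 55) - 1);   /* bits 0-54 hold the PFN */
}

int main(void)
{
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        p[0] = 1;       /* fault the page in, as an MR registration would */
        printf("before fork:       pfn=%#llx\n",
               (unsigned long long)pfn_of(p));

        if (fork() == 0) {
                p[0] = 2;   /* child write -> COW hands the child a new page */
                printf("child after write: pfn=%#llx\n",
                       (unsigned long long)pfn_of(p));
                _exit(0);
        }
        wait(NULL);
        printf("parent after fork: pfn=%#llx\n",
               (unsigned long long)pfn_of(p));
        return 0;
}

An MR created on p before the fork would still point at the "before
fork" PFN, regardless of which process the CPU side ends up matching.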
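
And for reference, the userspace shape of that IB workaround looks
roughly like this (a sketch of the effect only; real applications get
it by calling ibv_fork_init(), which makes libibverbs issue the
madvise() internally when the MR is registered):

/* Sketch: pin a buffer for RDMA and exclude it from fork()'s COW. */
#include <infiniband/verbs.h>
#include <sys/mman.h>
#include <stdlib.h>

struct ibv_mr *reg_buffer(struct ibv_pd *pd, size_t len)
{
        void *buf;

        if (posix_memalign(&buf, 4096, len))
                return NULL;

        /*
         * Sets VM_DONTCOPY on the VMA: fork() will not copy this range
         * into the child, so the parent's pages can never be moved by
         * COW while the HCA is DMAing into them. The cost is that the
         * child loses access to the range entirely.
         */
        if (madvise(buf, len, MADV_DONTFORK)) {
                free(buf);
                return NULL;
        }

        return ibv_reg_mr(pd, buf, len,
                          IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);
}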
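
The non-transparency Jerome mentions is easy to show, too: after
MADV_DONTFORK the child does not see zeroes or stale data in the
range, it has no mapping there at all (dup_mmap() skips the VMA), so
touching it faults. Another illustrative sketch:

/* dontfork_demo.c - MADV_DONTFORK is not transparent: the child has
 * no mapping for the range after fork() and dies with SIGSEGV.
 */
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        int status;

        p[0] = 42;
        madvise(p, 4096, MADV_DONTFORK);        /* VMA gets VM_DONTCOPY */

        if (fork() == 0) {
                printf("child read %d\n", p[0]); /* no VMA here -> SIGSEGV */
                _exit(0);
        }
        wait(&status);
        if (WIFSIGNALED(status) && WTERMSIG(status) == SIGSEGV)
                printf("child died with SIGSEGV, as expected\n");
        return 0;
}

That is exactly the fork() semantic an application can trip over if a
library did the madvise() behind its back.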
> umem without proper ODP and VFIO are the only bad users I know of
> (for VFIO you can argue that it is part of the API contract and thus
> not an abuse, but it is not spelled out loudly in the documentation).
> I have been trying to push back on anyone trying to push things that
> would make the same mistake, or at least to make sure they understand
> what is happening.

It is something we have to live with and support for the foreseeable
future.

> What really needs to happen is people fixing their hardware and
> doing the right thing (good software engineers versus evil hardware
> engineers ;))

Even ODP is no panacea; there are performance problems. What we really
need is CAPI-like stuff, so will you tell Intel to redesign the
CPU?? :)

Jason