References: <20190814202027.18735-1-daniel.vetter@ffwll.ch>
 <20190814202027.18735-3-daniel.vetter@ffwll.ch>
 <20190814134558.fe659b1a9a169c0150c3e57c@linux-foundation.org>
 <20190815084429.GE9477@dhcp22.suse.cz>
 <20190815130415.GD21596@ziepe.ca>
 <20190815143759.GG21596@ziepe.ca>
 <20190815151028.GJ21596@ziepe.ca>
 <20190815163238.GA30781@redhat.com>
 <20190815171622.GL21596@ziepe.ca>
In-Reply-To: <20190815171622.GL21596@ziepe.ca>
From: Daniel Vetter
Date: Thu, 15 Aug 2019 19:21:47 +0200
Subject: Re: [PATCH 2/5] kernel.h: Add non_block_start/end()
To: Jason Gunthorpe
Cc: Jerome Glisse, Michal Hocko, Andrew Morton, LKML, Linux MM,
 DRI Development, Intel Graphics Development, Peter Zijlstra,
 Ingo Molnar, David Rientjes, Christian König, Masahiro Yamada,
 Wei Wang, Andy Shevchenko, Thomas Gleixner, Jann Horn, Feng Tang,
 Kees Cook, Randy Dunlap, Daniel Vetter
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Aug 15, 2019 at 7:16 PM Jason Gunthorpe wrote:
>
> On Thu, Aug 15, 2019 at 12:32:38PM -0400, Jerome Glisse wrote:
> > On Thu, Aug 15, 2019 at 12:10:28PM -0300, Jason Gunthorpe wrote:
> > > On Thu, Aug 15, 2019 at 04:43:38PM +0200, Daniel Vetter wrote:
> > > >
> > > > You have to wait for the gpu to finish current processing in
> > > > invalidate_range_start. Otherwise there's no point to any of this
> > > > really. So the wait_event/dma_fence_wait are unavoidable.
> > >
> > > I don't envy your task :|
> > >
> > > But what you describe sure sounds like a 'registration cache' model,
> > > not the 'shadow pte' model of coherency.
> > >
> > > The key difference is that a registration cache is allowed to become
> > > incoherent with the VMAs because it holds page pins. It is a
> > > programming bug in userspace to change VA mappings via mmap/munmap/etc.
> > > while the device is working on that VA, but it does not harm system
> > > integrity because of the page pin.
> > >
> > > The cache ensures that each initiated operation sees a DMA setup that
> > > matches the current VA map when the operation is initiated, and it
> > > allows expensive device DMA setups to be re-used.
> > >
> > > A 'shadow pte' model (i.e. hmm) *really* needs device support to
> > > directly block DMA access - i.e. trigger a 'device page fault'. The
> > > invalidate_start should inform the device to enter a fault mode and
> > > that is it. If the device can't do that, then the driver probably
> > > shouldn't pursue this level of coherency. The driver would quickly get
> > > into messy locking problems like dma_fence_wait from a notifier.
> >
> > I think here we do not agree on the hardware requirement. For GPUs
> > we will always need to be able to wait for some GPU fence from inside
> > the notifier callback; there is just no way around that for many of
> > the GPUs today (I do not see any indication of that changing).
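To make the pattern being debated concrete, here is a minimal sketch of an
invalidate_range_start callback that has to wait for a GPU fence before it
may return. It is purely illustrative and not taken from any driver in this
thread; "my_mirror" and "active_fence" are invented names.

#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/mmu_notifier.h>
#include <linux/dma-fence.h>

struct my_mirror {
	struct mmu_notifier mn;
	/* Assumed to be signalled once the in-flight GPU job has finished. */
	struct dma_fence *active_fence;
};

static int my_invalidate_range_start(struct mmu_notifier *mn,
				     const struct mmu_notifier_range *range)
{
	struct my_mirror *mirror = container_of(mn, struct my_mirror, mn);
	long ret;

	/*
	 * A registration-cache driver could just mark the cached mapping
	 * stale here and return. A shadow-PTE driver must stop DMA to
	 * [range->start, range->end) before returning, which on most
	 * current GPUs means waiting for the in-flight job's fence.
	 */
	if (!mmu_notifier_range_blockable(range))
		return -EAGAIN;	/* e.g. called from the OOM reaper, cannot sleep */

	/* This is the wait the whole discussion is about. */
	ret = dma_fence_wait(mirror->active_fence, false);

	return ret < 0 ? ret : 0;
}

static const struct mmu_notifier_ops my_mirror_ops = {
	.invalidate_range_start = my_invalidate_range_start,
	/* no invalidate_range_end - see the rest of the thread */
};

The contentious part is that dma_fence_wait() here is only safe if everything
needed to signal active_fence stays clear of memory allocations and of locks
taken in the reclaim path.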
>
> I didn't say you couldn't wait, I was trying to say that the wait
> should only be contingent on the HW itself. I.e. you can wait on a GPU
> page table lock, and you can wait on a GPU page table flush completion
> via IRQ.
>
> What is troubling is to wait until some other thread gets a GPU command
> completion and decrements a kref on the DMA buffer - which kinda looks
> like what this dma_fence() stuff is all about. A driver like that
> would have to be super careful to ensure consistent forward progress
> toward dma ref == 0 when the system is under reclaim.
>
> I.e. by running its entire IRQ flow under fs_reclaim locking.

This is correct. At least for i915 it's already required, because our
shrinker has to do the same. I think amdgpu isn't bothering with that,
since they have vram for most of their buffers; they just limit system
memory usage to half of the total and forgo the shrinker. Probably not
the nicest approach. Anyway, both do the same mmu_notifier dance - I
just want to explain that we've been living with this for quite a while
already.

So yeah, writing a gpu driver is not easy.

> > associated with the mm_struct. In all GPU drivers so far it is a
> > short-lived lock and nothing blocking is done while holding it (it is
> > just about updating the page table directory, really, whether it is
> > filling or clearing it).
>
> The main blocking I expect in a shadow PTE flow is waiting for the HW
> to complete invalidations of its PTE cache.
>
> > > It is important to identify what model you are going for, as defining
> > > a 'registration cache' coherence expectation allows the driver to skip
> > > blocking in invalidate_range_start. All it does is invalidate the
> > > cache so that future operations pick up the new VA mapping.
> > >
> > > Intel's HFI RDMA driver uses this model extensively, and I think it is
> > > well proven, within some limitations of course.
> > >
> > > At least, 'registration cache' is the only use model I know of where
> > > it is acceptable to skip invalidate_range_end.
> >
> > Here GPUs are not in the registration cache model. I know it might look
> > like it because of GUP, but GUP was used just because hmm did not exist
> > at the time.
>
> It is not because of GUP, it is because of the lack of
> invalidate_range_end. A driver cannot correctly implement the SPTE
> model without invalidate_range_end, even if it holds the page pins via
> GUP.
>
> So, I've been assuming the few drivers without invalidate_range_end
> are trying to do registration caching, rather than assuming they are
> broken.

i915 might just be broken. amdgpu does the full thing, using hmm_mirror,
but still with dma_fence_wait.

-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
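As an aside on the "running its entire IRQ flow under fs_reclaim locking"
point above, here is a hedged sketch of how a driver can prime lockdep at
init time so that mistakes in such paths are flagged immediately.
"irq_flow_lock" and "prime_reclaim_dependency" are invented names, not
i915's actual code.

#include <linux/gfp.h>
#include <linux/mutex.h>
#include <linux/sched/mm.h>	/* fs_reclaim_acquire()/fs_reclaim_release() */

static DEFINE_MUTEX(irq_flow_lock);

/*
 * Teach lockdep that irq_flow_lock nests inside fs_reclaim, i.e. that it may
 * be taken from the reclaim path (shrinker, mmu notifier). Afterwards, any
 * code that allocates memory while holding irq_flow_lock, or that blocks on
 * something which does, triggers a lockdep splat right away instead of only
 * when reclaim actually recurses into the driver under memory pressure.
 * Both helpers compile to no-ops without CONFIG_LOCKDEP.
 */
static void prime_reclaim_dependency(void)
{
	fs_reclaim_acquire(GFP_KERNEL);
	mutex_lock(&irq_flow_lock);
	mutex_unlock(&irq_flow_lock);
	fs_reclaim_release(GFP_KERNEL);
}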