From: Daniel Vetter
Date: Tue, 9 Feb 2021 14:39:51 +0100
Subject: Re: [PATCH 0/9] Add support for SVM atomics in Nouveau
To: Jason Gunthorpe
Cc: Alistair Popple, Linux MM, Nouveau Dev, Ben Skeggs, Andrew Morton,
    Linux Doc Mailing List, Linux Kernel Mailing List, kvm-ppc@vger.kernel.org,
    dri-devel, John Hubbard, Ralph Campbell, Jerome Glisse

On Tue, Feb 9, 2021 at 2:35 PM Jason Gunthorpe wrote:
>
> On Tue, Feb 09, 2021 at 11:57:28PM +1100, Alistair Popple wrote:
> > On Tuesday, 9 February 2021 9:27:05 PM AEDT Daniel Vetter wrote:
> > >
> > > > Recent changes to pin_user_pages() prevent the creation of pinned
> > > > pages in ZONE_MOVABLE. This series allows pinned pages to be created
> > > > in ZONE_MOVABLE, as attempts to migrate may fail, which would be
> > > > fatal to userspace.
> > > >
> > > > In this case migration of the pinned page is unnecessary, as the page
> > > > can be unpinned at any time by having the driver revoke atomic
> > > > permission as it does for the migrate_to_ram() callback. However, a
> > > > method of calling this when memory needs to be moved has yet to be
> > > > resolved, so any discussion is welcome.
> > >
> > > Why do we need to pin for gpu atomics? You still have the callback for
> > > cpu faults, so you can move the page as needed, and hence a long-term
> > > pin sounds like the wrong approach.
> >
> > Technically a real long-term unmovable pin isn't required, because as you
> > say the page can be moved as needed at any time. However I needed some way
> > of stopping the CPU page from being freed once the userspace mappings for
> > it had been removed.
>
> The issue is you took the page out of the PTE it belongs to, which makes
> it orphaned and unlocatable by the rest of the mm?
>
> Ideally this would leave the PTE in place so everything continues to work,
> just disable CPU access to it.
>
> Maybe some kind of special swap entry?

I probably should have read the patches in more detail; I was assuming the
ZONE_DEVICE pages are only for vram. At least I thought the requirement for
gpu atomics was that the page is in vram, but maybe I'm mixing up how this
works on nvidia with how it works in other places.

Iirc we had a long discussion about this at lpc19 that ended with the
conclusion that we must be able to migrate, and sometimes migration is
blocked. But the details elude me now.

Either way, ZONE_DEVICE for memory that isn't vram/device memory sounds
wrong. Is that really going on here?
-Daniel
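For context: the migrate_to_ram() callback mentioned in the cover letter is
the dev_pagemap_ops hook that a driver with device-private ZONE_DEVICE pages
provides, and revoking the device's atomic access would happen there when the
CPU faults on such a page. A rough sketch of that shape follows; the
example_* helpers are placeholders, not code from this series.

#include <linux/memremap.h>
#include <linux/mm.h>

/* Placeholder driver hooks, not real functions from this series. */
static void example_revoke_atomic_access(struct page *page);
static vm_fault_t example_migrate_back_to_ram(struct vm_fault *vmf);
static void example_page_free(struct page *page);

/*
 * The core mm calls this when the CPU faults on one of the driver's
 * device-private (ZONE_DEVICE) pages: revoke the device's atomic
 * access, then migrate the data back to an ordinary system page.
 */
static vm_fault_t example_migrate_to_ram(struct vm_fault *vmf)
{
        example_revoke_atomic_access(vmf->page);
        return example_migrate_back_to_ram(vmf);
}

static const struct dev_pagemap_ops example_pagemap_ops = {
        .page_free      = example_page_free,
        .migrate_to_ram = example_migrate_to_ram,
};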
> I also don't much like the use of ZONE_DEVICE here, that should only be
> used for actual device memory, not as a temporary proxy for CPU pages.
> Having two struct pages refer to the same physical memory is pretty ugly.
>
> > The normal solution of registering an MMU notifier to unpin the page when
> > it needs to be moved also doesn't work, as the CPU page tables now point
> > to the device-private page and hence the migration code won't call any
> > invalidate notifiers for the CPU page.
>
> The fact the page is lost from the MM seems to be the main issue here.
>
> > Yes, I would like to avoid the long-term pin constraints as well if
> > possible, I just haven't found a solution yet. Are you suggesting it might
> > be possible to add a callback in the page migration logic to specially
> > deal with moving these pages?
>
> How would migration even find the page?
>
> Jason

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
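The "normal solution" Alistair mentions above, an MMU notifier that lets the
driver drop its mapping or pin before the core mm invalidates the CPU PTEs,
is usually wired up with the interval-notifier API roughly as below; as he
notes, it doesn't fire here because the CPU PTEs already point at the
device-private page. The example_* names are placeholders, not code from
this series.

#include <linux/mmu_notifier.h>

struct example_bind {
        struct mmu_interval_notifier notifier;
        /* driver state for the device mapping of this range */
};

/* Placeholder: tear down the device mapping / unpin the covered pages. */
static void example_revoke_range(struct example_bind *bind,
                                 unsigned long start, unsigned long end);

/*
 * Called by the core mm before it unmaps, migrates or otherwise
 * invalidates CPU PTEs in [range->start, range->end).
 */
static bool example_invalidate(struct mmu_interval_notifier *mni,
                               const struct mmu_notifier_range *range,
                               unsigned long cur_seq)
{
        struct example_bind *bind =
                container_of(mni, struct example_bind, notifier);

        if (!mmu_notifier_range_blockable(range))
                return false;

        mmu_interval_set_seq(mni, cur_seq);
        example_revoke_range(bind, range->start, range->end);
        return true;
}

static const struct mmu_interval_notifier_ops example_mni_ops = {
        .invalidate = example_invalidate,
};

/* Registered when the device first maps [start, start + length). */
static int example_bind_range(struct example_bind *bind, struct mm_struct *mm,
                              unsigned long start, unsigned long length)
{
        return mmu_interval_notifier_insert(&bind->notifier, mm, start, length,
                                            &example_mni_ops);
}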