From: Daniel Vetter
Date: Wed, 9 Jan 2019 22:45:36 +0100
Subject: Re: [PATCH v2 15/15] drm/bochs: reserve bo for pin/unpin
To: Gerd Hoffmann
Cc: dri-devel, David Airlie, Oleksandr Andrushchenko, David Airlie,
    open list, "open list:DRM DRIVER FOR BOCHS VIRTUAL GPU"
In-Reply-To: <20190109205158.qx7a2gfyprbvpok5@sirius.home.kraxel.org>
References: <20190108112519.27473-1-kraxel@redhat.com>
    <20190108112519.27473-16-kraxel@redhat.com>
    <20190109101044.GS21184@phenom.ffwll.local>
    <20190109145443.l5yus2pgvxcl4zbt@sirius.home.kraxel.org>
    <20190109205158.qx7a2gfyprbvpok5@sirius.home.kraxel.org>

On Wed, Jan 9, 2019 at 9:52 PM Gerd Hoffmann wrote:
>
>   Hi,
>
> > > If I understand things correctly it is valid to set all import/export
> > > callbacks (prime_handle_to_fd, prime_fd_to_handle,
> > > gem_prime_get_sg_table, gem_prime_import_sg_table) to NULL when not
> > > supporting dma-buf import/export and still advertise DRIVER_PRIME to
> > > indicate the other prime callbacks are supported (so generic fbdev
> > > emulation can use gem_prime_vmap etc). Is that correct?
> >
> > I'm not sure how much that's a good idea ... Never thought about it
> > tbh. All the fbdev/dma-buf stuff has plenty of hacks and
> > inconsistencies still, so I guess we can't make it much worse really.
>
> Setting prime_handle_to_fd + prime_fd_to_handle to NULL has the effect
> that drm stops advertising DRM_PRIME_CAP_{IMPORT,EXPORT} to userspace.
>
> That looks better to me than telling userspace we support it and then
> throwing errors unconditionally when userspace tries to use it.
>
> > > Is it possible to export TTM_PL_VRAM objects (with the backing storage
> > > being a pci memory bar)? If so, how?
> >
> > Not really in general. amdgpu upcasts to amdgpu_bo (if it's an amdgpu
> > BO) and then knows the internals, so it can do a proper pci peer2peer
> > mapping. Or at least there have been lots of patches floating around to
> > make that happen.
>
> That is limited to bo sharing between two amdgpu devices, correct?
>
> > I think other drivers migrate the bo out of VRAM.
>
> Well, that doesn't look too useful. bochs and qxl virtual hardware
> can't access buffers outside VRAM. So, while I could migrate the
> buffers to RAM (via memcpy) when exporting, they would at the same time
> become unusable for the GPU ...
>
> > > On importing:
> > >
> > > Importing into a TTM_PL_TT object looks easy again, at least when the
> > > object is actually stored in RAM. What if not?
> >
> > They are all supposed to be stored in RAM. Note that all current ttm
> > importers totally break the abstraction, by taking the sg list,
> > throwing the dma mapping away and assuming there's a struct page
> > backing it. Would be good if we could stop spreading that abuse - the
> > dma-buf interfaces have been modelled after the ttm bo interfaces, so
> > it shouldn't be too hard to wire this up correctly.
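(To make the "abuse" described above concrete: the pattern is an importer
reaching back into the imported sg_table for struct pages instead of
sticking to the dma addresses the exporter actually provides. A schematic
sketch, not taken from any particular driver - the function name is made
up:)

    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>

    static void importer_looks_at_sgt(struct sg_table *sgt)
    {
            /* What the dma-buf contract hands an importer: device (dma)
             * addresses that the exporter has already mapped. */
            dma_addr_t addr = sg_dma_address(sgt->sgl);
            unsigned int len = sg_dma_len(sgt->sgl);

            /* The shortcut called out above: assuming a struct page sits
             * behind every entry. An exporter backed by, say, a PCI BAR
             * makes no such guarantee. */
            struct page *page = sg_page(sgt->sgl);

            (void)addr;
            (void)len;
            (void)page;
    }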
> Ok.  With virtio-gpu (where objects are backed by RAM pages anyway)
> wiring this up should be easy.
>
> But given there is no correct sample code I can look at, it would be cool
> if you could give some more hints on how this is supposed to work. The
> gem_prime_import_sg_table() callback gets an sg list passed in after all,
> so I probably would have tried to take the sg list too ...

I'm not a fan of that helper either, that's really the broken part imo.
i915 doesn't use it. It's a midlayer so that the nvidia blob can avoid
directly touching the EXPORT_SYMBOL_GPL dma-buf symbols; afaiui there's
really no other solid reason for it. What the new gem cma helpers do is
imo much better (they still use the import_sg_table midlayer, but oh
well). For ttm you'd need to make sure that all the various ttm cpu-side
access functions also go through the relevant dma-buf interfaces, and not
through the struct page list fished out of the sgt. That was at least the
idea, long ago.

> > > Importing into TTM_PL_VRAM: Impossible I think, without copying over
> > > the data. Should that be done? If so, how? Or is it better to just
> > > not support import then?
> >
> > Hm, since you ask about TTM concepts and not what this means in terms
> > of dma-buf:
>
> Ok, more details on the question:
>
> dma-buf: whatever the driver gets passed into the
> gem_prime_import_sg_table() callback.
>
> import into TTM_PL_VRAM: a qemu driver which supports VRAM storage only
> (bochs, qxl), so the buffer has to be stored there if we want to do
> something with it (like scanning out to a crtc).
>
> > As long as you upcast to the ttm_bo you can do whatever
> > you want to, really.
>
> Well, if the dma-buf comes from another device (say, export a vgem bo,
> then try to import it into bochs/qxl/virtio) I can't upcast.

In that case you'll in practice only get system RAM, and you're not
allowed to move it (dma-buf is meant to be zero-copy after all). If your
hw can't scan these out directly, then userspace needs to arrange for a
buffer copy into a native buffer somehow (that's how Xorg prime works, at
least I think). No idea whether your virtual gpus can make use of that
directly. You might also get some pci peer2peer range in the future, but
it's strictly opt-in (because there are too many dma-buf importers that
just blindly assume there's a struct page behind the sgt).

> When the dma-buf comes from the same device, drm_gem_prime_import_dev()
> will notice and take a shortcut (skip the import, just increase the
> refcount instead), so I don't have to worry about that case in the
> gem_prime_import_sg_table() callback.

You can also upcast if it's from the same driver, not just the same device.
-Daniel

> > But with plain dma-buf this doesn't work right now
> > (not least because ttm assumes it gets system RAM on import; in theory
> > you could put the peer2peer dma mapping into the sg list and it should
> > work).
>
> Well, qemu display devices don't have peer2peer dma support.
> So I guess the answer is "doesn't work".
>
> cheers,
>   Gerd
>

--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
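(For reference, the arrangement discussed at the top of the thread - keeping
DRIVER_PRIME so the generic prime plumbing such as gem_prime_vmap keeps
working, while leaving the fd/handle callbacks unset so that
DRM_PRIME_CAP_{IMPORT,EXPORT} are not advertised - would look roughly like
the sketch below. This assumes the struct drm_driver layout of kernels from
this era; the mydrv_* callbacks are placeholders, not code from any of the
drivers discussed here.)

    #include <drm/drm_drv.h>

    static struct drm_driver mydrv_driver = {
            /* DRIVER_PRIME stays set so the generic prime helpers
             * (gem_prime_vmap etc., used by the fbdev emulation) keep
             * working ... */
            .driver_features    = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME,

            /* ... but with these two left NULL the core stops reporting
             * DRM_PRIME_CAP_IMPORT / DRM_PRIME_CAP_EXPORT to userspace,
             * instead of advertising support and then failing. */
            .prime_handle_to_fd = NULL,
            .prime_fd_to_handle = NULL,

            /* Placeholder hooks, assumed to be implemented elsewhere. */
            .gem_prime_vmap     = mydrv_gem_prime_vmap,
            .gem_prime_vunmap   = mydrv_gem_prime_vunmap,
            .gem_prime_mmap     = mydrv_gem_prime_mmap,
    };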