From: Alex Deucher
Date: Wed, 9 Jan 2019 13:45:59 -0500
Subject: Re: [PATCH v2 15/15] drm/bochs: reserve bo for pin/unpin
To: Daniel Vetter
Cc: Gerd Hoffmann, Oleksandr Andrushchenko, open list, dri-devel, "open list:DRM DRIVER FOR BOCHS VIRTUAL GPU", David Airlie
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jan 9, 2019 at 12:36 PM Daniel Vetter wrote:
>
> On Wed, Jan 9, 2019 at 3:54 PM Gerd Hoffmann wrote:
> >
> > On Wed, Jan 09, 2019 at 11:10:44AM +0100, Daniel Vetter wrote:
> > > On Tue, Jan 08, 2019 at 12:25:19PM +0100, Gerd Hoffmann wrote:
> > > > The buffer object must be reserved before calling
> > > > ttm_bo_validate for pinning/unpinning.
> > > >
> > > > Signed-off-by: Gerd Hoffmann
> > >
> > > Seems a bit of a bisect fumble in your series here: the legacy kms code
> > > reserved the ttm bo before calling bochs_bo_pin/unpin, your atomic code
> > > doesn't. I think pushing this into bochs_bo_pin/unpin makes sense for
> > > atomic, but to avoid bisect failures I think you need to have these
> > > temporarily in your cleanup/prepare_plane functions too.
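The fix being discussed, taking the reservation inside the pin helper itself, might look roughly like this. This is a hedged sketch modeled on the bochs driver of that era, not the actual patch; the placement setup and field names are assumptions:

```c
/* Sketch: pin helper that reserves the bo itself, so atomic callers
 * need not wrap it. Based on the ~v5.0-era bochs/TTM APIs. */
static int bochs_bo_pin(struct bochs_bo *bo, u32 pl_flag)
{
	struct ttm_operation_ctx ctx = { false, false };
	int i, ret;

	if (bo->pin_count) {
		bo->pin_count++;
		return 0;
	}

	bochs_ttm_placement(bo, pl_flag);
	for (i = 0; i < bo->placement.num_placement; i++)
		bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;

	/* Take the reservation before ttm_bo_validate, as the patch
	 * subject requires. */
	ret = ttm_bo_reserve(&bo->bo, true, false, NULL);
	if (ret)
		return ret;
	ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
	ttm_bo_unreserve(&bo->bo);
	if (ret)
		return ret;

	bo->pin_count = 1;
	return 0;
}
```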
> >
> > I think I've sorted that. Have some other changes too, will probably
> > send v3 tomorrow.
> >
> > > Looked through the entire series; this here is the only issue I think
> > > should be fixed before merging (making atomic_enable optional can be
> > > done as a follow-up if you feel like it). With that addressed on the
> > > series:
> > >
> > > Acked-by: Daniel Vetter
> >
> > Thanks.
> >
> > While being at it: I'm also looking at dma-buf export and import
> > support for the qemu drivers.
> >
> > Right now both qxl and virtio have gem_prime_get_sg_table and
> > gem_prime_import_sg_table handlers which throw a WARN_ONCE() and
> > return an error.
> >
> > If I understand things correctly it is valid to set all import/export
> > callbacks (prime_handle_to_fd, prime_fd_to_handle,
> > gem_prime_get_sg_table, gem_prime_import_sg_table) to NULL when not
> > supporting dma-buf import/export and still advertise DRIVER_PRIME to
> > indicate the other prime callbacks are supported (so generic fbdev
> > emulation can use gem_prime_vmap etc). Is that correct?
>
> I'm not sure how good an idea that is ... never thought about it, tbh.
> All the fbdev/dma-buf stuff has plenty of hacks and inconsistencies
> still, so I guess we can't make it much worse really.
>
> > On exporting:
> >
> > TTM_PL_TT should be easy: just pin the buffer, grab the page list and
> > feed that into drm_prime_pages_to_sg. Didn't try yet, though. Is that
> > approach correct?
> >
> > Is it possible to export TTM_PL_VRAM objects (with the backing storage
> > being a pci memory bar)? If so, how?
>
> Not really in general. amdgpu upcasts to amdgpu_bo (if it's an amdgpu
> BO) and then knows the internals, so it can do a proper pci peer2peer
> mapping. Or at least there have been lots of patches floating around to
> make that happen.
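The TTM_PL_TT export path Gerd describes can be sketched as follows. The function name and upcast helper are hypothetical; drm_prime_pages_to_sg() is the real helper (its signature at the time took the page array and a page count):

```c
/* Sketch of exporting a TTM_PL_TT object: assumes the bo has already
 * been pinned to TTM_PL_TT so bo->ttm->pages is populated and stable.
 * example_bo_from_gem() is a hypothetical driver-specific upcast. */
static struct sg_table *
example_gem_prime_get_sg_table(struct drm_gem_object *obj)
{
	struct ttm_buffer_object *bo = example_bo_from_gem(obj);

	return drm_prime_pages_to_sg(bo->ttm->pages, bo->ttm->num_pages);
}
```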
Here's Christian's WIP stuff for adding device memory support to dma-buf:
https://cgit.freedesktop.org/~deathsimple/linux/log/?h=p2p

Alex

> I think other drivers migrate the bo out of VRAM.
>
> > On importing:
> >
> > Importing into a TTM_PL_TT object looks easy again, at least when the
> > object is actually stored in RAM. What if not?
>
> They are all supposed to be stored in RAM. Note that all current ttm
> importers totally break the abstraction by taking the sg list, throwing
> the dma mapping away and assuming there's a struct page backing it.
> Would be good if we could stop spreading that abuse - the dma-buf
> interfaces have been modelled after the ttm bo interfaces, so it
> shouldn't be too hard to wire this up correctly.
>
> > Importing into TTM_PL_VRAM: impossible, I think, without copying over
> > the data. Should that be done? If so, how? Or is it better to just
> > not support import then?
>
> Hm, since you ask about TTM concepts and not what this means in terms
> of dma-buf: as long as you upcast to the ttm_bo you can do whatever you
> want, really. But with plain dma-buf this doesn't work right now (not
> least because ttm assumes it gets system RAM on import; in theory you
> could put the peer2peer dma mapping into the sg list and it should
> work).
> -Daniel
>
> > thanks,
> >   Gerd
> >
> > _______________________________________________
> > dri-devel mailing list
> > dri-devel@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
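For reference, the import pattern Daniel calls out as abuse looked roughly like this in ttm drivers of the time. All example_* names are hypothetical; drm_prime_sg_to_page_addr_arrays() is the real helper. This is shown to illustrate what "breaking the abstraction" means, not as a recommended implementation: a clean importer would keep using the dma addresses in the sg_table rather than recovering struct pages.

```c
/* Sketch of a TTM_PL_TT importer. The struct-page recovery below only
 * works when the exporter's backing storage is system RAM, which is
 * exactly the assumption Daniel objects to. */
static struct drm_gem_object *
example_gem_prime_import_sg_table(struct drm_device *dev,
				  struct dma_buf_attachment *attach,
				  struct sg_table *sg)
{
	struct example_bo *bo;
	int ret;

	bo = example_bo_create(dev, attach->dmabuf->size); /* hypothetical */
	if (IS_ERR(bo))
		return ERR_CAST(bo);

	/* The problematic step: pull struct page pointers back out of
	 * the imported sg list, discarding the dma mapping. */
	ret = drm_prime_sg_to_page_addr_arrays(sg, bo->pages, NULL,
					       bo->num_pages);
	if (ret) {
		example_bo_free(bo);
		return ERR_PTR(ret);
	}

	bo->imported_sg = sg;
	return &bo->gem;
}
```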