2019-12-19 09:56:44

by Daniel Vetter

Subject: Re: Warnings in DRM code when removing/unbinding a driver

On Wed, Dec 18, 2019 at 7:08 PM John Garry <[email protected]> wrote:
>
> +
>
> So the v5.4 kernel does not have this issue.
>
> I have bisected the initial occurrence to:
>
> commit 37a48adfba6cf6e87df9ba8b75ab85d514ed86d8
> Author: Thomas Zimmermann <[email protected]>
> Date: Fri Sep 6 14:20:53 2019 +0200
>
> drm/vram: Add kmap ref-counting to GEM VRAM objects
>
> The kmap and kunmap operations of GEM VRAM buffers can now be called
> in interleaving pairs. The first call to drm_gem_vram_kmap() maps the
> buffer's memory to kernel address space and the final call to
> drm_gem_vram_kunmap() unmaps the memory. Intermediate calls to these
> functions increment or decrement a reference counter.
>
> So this either exposes or creates the issue.

Yeah that's just shooting the messenger. Like I said, for most drivers
you can pretty much assume that their unload sequence has been broken
since forever. It's not often tested, and especially the hotunbind
from a device (as opposed to driver unload) stuff wasn't even possible
to get right until just recently.
-Daniel

>
> John
>
> >> On Mon, 2019-12-16 at 17:23 +0000, John Garry wrote:
> >>> Hi all,
> >>>
> >>> Enabling CONFIG_DEBUG_TEST_DRIVER_REMOVE causes many warns on a system
> >>> with the HIBMC hw:
> >>>
> >>> [ 27.788806] WARNING: CPU: 24 PID: 1 at
> >>> drivers/gpu/drm/drm_gem_vram_helper.c:564
> >>> bo_driver_move_notify+0x8c/0x98
> >>
> >> A total shot in the dark. This might make no sense,
> >> but it's worth a try:
> >
> > Thanks for the suggestion, but still the same splat.
> >
> > I haven't had a chance to analyze the problem myself. But perhaps we
> > should just change over to the device-managed interface, as Daniel mentioned.
> >
> >>
> >> diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
> >> b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
> >> index 2fd4ca91a62d..69bb0e29da88 100644
> >> --- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
> >> +++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
> >> @@ -247,9 +247,8 @@ static int hibmc_unload(struct drm_device *dev)
> >> {
> >> struct hibmc_drm_private *priv = dev->dev_private;
> >> - hibmc_fbdev_fini(priv);
> >> -
> >> drm_atomic_helper_shutdown(dev);
> >> + hibmc_fbdev_fini(priv);
> >> if (dev->irq_enabled)
> >> drm_irq_uninstall(dev);
> >>
> >> Hope it helps,
> >> Ezequiel
> >>
> >
> > Thanks,
> > John
> >
> > [EOM]
> >
> >>> [ 27.798969] Modules linked in:
> >>> [ 27.802018] CPU: 24 PID: 1 Comm: swapper/0 Tainted: G B
> >>> 5.5.0-rc1-dirty #565
> >>> [ 27.810358] Hardware name: Huawei D06 /D06, BIOS Hisilicon D06 UEFI
> >>> RC0 - V1.16.01 03/15/2019
> >>> [ 27.818872] pstate: 20c00009 (nzCv daif +PAN +UAO)
> >>> [ 27.823654] pc : bo_driver_move_notify+0x8c/0x98
> >>> [ 27.828262] lr : bo_driver_move_notify+0x40/0x98
> >>> [ 27.832868] sp : ffff00236f0677e0
> >>> [ 27.836173] x29: ffff00236f0677e0 x28: ffffa0001454e5e0
> >>> [ 27.841476] x27: ffff002366e52128 x26: ffffa000149e67b0
> >>> [ 27.846779] x25: ffff002366e523e0 x24: ffff002336936120
> >>> [ 27.852082] x23: ffff0023346f4010 x22: ffff002336936128
> >>> [ 27.857385] x21: ffffa000149c15c0 x20: ffff0023369361f8
> >>> [ 27.862687] x19: ffff002336936000 x18: 0000000000001258
> >>> [ 27.867989] x17: 0000000000001190 x16: 00000000000011d0
> >>> [ 27.873292] x15: 0000000000001348 x14: ffffa00012d68190
> >>> [ 27.878595] x13: 0000000000000006 x12: 1ffff40003241f91
> >>> [ 27.883897] x11: ffff940003241f91 x10: dfffa00000000000
> >>> [ 27.889200] x9 : ffff940003241f92 x8 : 0000000000000001
> >>> [ 27.894502] x7 : ffffa0001920fc88 x6 : ffff940003241f92
> >>> [ 27.899804] x5 : ffff940003241f92 x4 : ffff0023369363a0
> >>> [ 27.905107] x3 : ffffa00010c104b8 x2 : dfffa00000000000
> >>> [ 27.910409] x1 : 0000000000000003 x0 : 0000000000000001
> >>> [ 27.915712] Call trace:
> >>> [ 27.918151] bo_driver_move_notify+0x8c/0x98
> >>> [ 27.922412] ttm_bo_cleanup_memtype_use+0x54/0x100
> >>> [ 27.927194] ttm_bo_put+0x3a0/0x5d0
> >>> [ 27.930673] drm_gem_vram_object_free+0xc/0x18
> >>> [ 27.935109] drm_gem_object_free+0x34/0xd0
> >>> [ 27.939196] drm_gem_object_put_unlocked+0xc8/0xf0
> >>> [ 27.943978] hibmc_user_framebuffer_destroy+0x20/0x40
> >>> [ 27.949020] drm_framebuffer_free+0x48/0x58
> >>> [ 27.953194] drm_mode_object_put.part.1+0x90/0xe8
> >>> [ 27.957889] drm_mode_object_put+0x28/0x38
> >>> [ 27.961976] hibmc_fbdev_fini+0x54/0x78
> >>> [ 27.965802] hibmc_unload+0x2c/0xd0
> >>> [ 27.969281] hibmc_pci_remove+0x2c/0x40
> >>> [ 27.973109] pci_device_remove+0x6c/0x140
> >>> [ 27.977110] really_probe+0x174/0x548
> >>> [ 27.980763] driver_probe_device+0x7c/0x148
> >>> [ 27.984936] device_driver_attach+0x94/0xa0
> >>> [ 27.989109] __driver_attach+0xa8/0x110
> >>> [ 27.992935] bus_for_each_dev+0xe8/0x158
> >>> [ 27.996849] driver_attach+0x30/0x40
> >>> [ 28.000415] bus_add_driver+0x234/0x2f0
> >>> [ 28.004241] driver_register+0xbc/0x1d0
> >>> [ 28.008067] __pci_register_driver+0xbc/0xd0
> >>> [ 28.012329] hibmc_pci_driver_init+0x20/0x28
> >>> [ 28.016590] do_one_initcall+0xb4/0x254
> >>> [ 28.020417] kernel_init_freeable+0x27c/0x328
> >>> [ 28.024765] kernel_init+0x10/0x118
> >>> [ 28.028245] ret_from_fork+0x10/0x18
> >>> [ 28.031813] ---[ end trace 35a83b71b657878d ]---
> >>> [ 28.036503] ------------[ cut here ]------------
> >>> [ 28.041115] WARNING: CPU: 24 PID: 1 at
> >>> drivers/gpu/drm/drm_gem_vram_helper.c:40
> >>> ttm_buffer_object_destroy+0x4c/0x80
> >>> [ 28.051537] Modules linked in:
> >>> [ 28.054585] CPU: 24 PID: 1 Comm: swapper/0 Tainted: G B W
> >>> 5.5.0-rc1-dirty #565
> >>> [ 28.062924] Hardware name: Huawei D06 /D06, BIOS Hisilicon D06 UEFI
> >>> RC0 - V1.16.01 03/15/2019
> >>>
> >>> [snip]
> >>>
> >>> Indeed, simply unbinding the device from the driver causes the same sort
> >>> of issue:
> >>>
> >>> root@(none)$ cd ./bus/pci/drivers/hibmc-drm/
> >>> root@(none)$ ls
> >>> 0000:05:00.0 bind new_id remove_id uevent
> >>> unbind
> >>> root@(none)$ echo 0000\:05\:00.0 > unbind
> >>> [ 116.074352] ------------[ cut here ]------------
> >>> [ 116.078978] WARNING: CPU: 17 PID: 1178 at
> >>> drivers/gpu/drm/drm_gem_vram_helper.c:40
> >>> ttm_buffer_object_destroy+0x4c/0x80
> >>> [ 116.089661] Modules linked in:
> >>> [ 116.092711] CPU: 17 PID: 1178 Comm: sh Tainted: G B W
> >>> 5.5.0-rc1-dirty #565
> >>> [ 116.100704] Hardware name: Huawei D06 /D06, BIOS Hisilicon D06 UEFI
> >>> RC0 - V1.16.01 03/15/2019
> >>> [ 116.109218] pstate: 20400009 (nzCv daif +PAN -UAO)
> >>> [ 116.114001] pc : ttm_buffer_object_destroy+0x4c/0x80
> >>> [ 116.118956] lr : ttm_buffer_object_destroy+0x18/0x80
> >>> [ 116.123910] sp : ffff0022e6cef8e0
> >>> [ 116.127215] x29: ffff0022e6cef8e0 x28: ffff00231b1fb000
> >>> [ 116.132519] x27: 0000000000000000 x26: ffff00231b1fb000
> >>> [ 116.137821] x25: ffff0022e6cefdc0 x24: 0000000000002480
> >>> [ 116.143124] x23: ffff0023682b6ab0 x22: ffff0023682b6800
> >>> [ 116.148427] x21: ffff0023682b6800 x20: 0000000000000000
> >>> [ 116.153730] x19: ffff0023682b6800 x18: 0000000000000000
> >>> [ 116.159032] x17: 000000000000000000000000001
> >>> [ 116.185545] x7 : ffff0023682b6b07 x6 : ffff80046d056d61
> >>> [ 116.190848] x5 : ffff80046d056d61 x4 : ffff0023682b6ba0
> >>> [ 116.196151] x3 : ffffa00010197338 x2 : dfffa00000000000
> >>> [ 116.201453] x1 : 0000000000000003 x0 : 0000000000000001
> >>> [ 116.206756] Call trace:
> >>> [ 116.209195] ttm_buffer_object_destroy+0x4c/0x80
> >>> [ 116.213803] ttm_bo_release_list+0x184/0x220
> >>> [ 116.218064] ttm_bo_put+0x410/0x5d0
> >>> [ 116.221544] drm_gem_vram_object_free+0xc/0x18
> >>> [ 116.225979] drm_gem_object_free+0x34/0xd0
> >>> [ 116.230066] drm_gem_object_put_unlocked+0xc8/0xf0
> >>> [ 116.234848] hibmc_user_framebuffer_destroy+0x20/0x40
> >>> [ 116.239890] drm_framebuffer_free+0x48/0x58
> >>> [ 116.244064] drm_mode_object_put.part.1+0x90/0xe8
> >>> [ 116.248759] drm_mode_object_put+0x28/0x38
> >>> [ 116.252846] hibmc_fbdev_fini+0x54/0x78
> >>> [ 116.256672] hibmc_unload+0x2c/0xd0
> >>> [ 116.260151] hibmc_pci_remove+0x2c/0x40
> >>> [ 116.263979] pci_device_remove+0x6c/0x140
> >>> [ 116.267980] device_release_driver_internal+0x134/0x250
> >>> [ 116.273196] device_driver_detach+0x28/0x38
> >>> [ 116.277369] unbind_store+0xfc/0x150
> >>> [ 116.280934] drv_attr_store+0x48/0x60
> >>> [ 116.284589] sysfs_kf_write+0x80/0xb0
> >>> [ 116.288241] kernfs_fop_write+0x1d4/0x320
> >>> [ 116.292243] __vfs_write+0x54/0x98
> >>> [ 116.295635] vfs_write+0xe8/0x270
> >>> [ 116.298940] ksys_write+0xc8/0x180
> >>> [ 116.302333] __arm64_sys_write+0x40/0x50
> >>> [ 116.306248] el0_svc_common.constprop.0+0xa4/0x1f8
> >>> [ 116.311029] el0_svc_handler+0x34/0xb0
> >>> [ 116.314770] el0_sync_handler+0x10c/0x1c8
> >>> [ 116.318769] el0_sync+0x140/0x180
> >>> [ 116.322074] ---[ end trace e60e43d0e316b5c8 ]---
> >>> [ 116.326868] ------------[ cut here ]------------
> >>>
> >>>
> >>> dmesg and .config is here:
> >>> https://pastebin.com/4P5yaZBS
> >>>
> >>> I'm not sure if this is a HIBMC driver issue or an issue with the
> >>> framework.
> >>>
> >>> john
> >>>
> >>>
> >>> _______________________________________________
> >>> dri-devel mailing list
> >>> [email protected]
> >>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> >>
> >>
> >
> > _______________________________________________
> > Linuxarm mailing list
> > [email protected]
> > http://hulk.huawei.com/mailman/listinfo/linuxarm
> > .
>


--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


2019-12-19 10:06:14

by John Garry

Subject: Re: Warnings in DRM code when removing/unbinding a driver

On 19/12/2019 09:54, Daniel Vetter wrote:
> On Wed, Dec 18, 2019 at 7:08 PM John Garry <[email protected]> wrote:
>>
>> +
>>
>> So the v5.4 kernel does not have this issue.
>>
>> I have bisected the initial occurrence to:
>>
>> commit 37a48adfba6cf6e87df9ba8b75ab85d514ed86d8
>> Author: Thomas Zimmermann <[email protected]>
>> Date: Fri Sep 6 14:20:53 2019 +0200
>>
>> drm/vram: Add kmap ref-counting to GEM VRAM objects
>>
>> The kmap and kunmap operations of GEM VRAM buffers can now be called
>> in interleaving pairs. The first call to drm_gem_vram_kmap() maps the
>> buffer's memory to kernel address space and the final call to
>> drm_gem_vram_kunmap() unmaps the memory. Intermediate calls to these
>> functions increment or decrement a reference counter.
>>
>> So this either exposes or creates the issue.
>
> Yeah that's just shooting the messenger.

OK, so it exposes it.

> Like I said, for most drivers
> you can pretty much assume that their unload sequence has been broken
> since forever. It's not often tested, and especially the hotunbind
> from a device (as opposed to driver unload) stuff wasn't even possible
> to get right until just recently.

Do you think it's worth trying to fix this for 5.5 and earlier, or just
switch to the device-managed interface for 5.6 and forget about 5.5 and
earlier?

Thanks,
John

2019-12-19 10:12:01

by Daniel Vetter

Subject: Re: Warnings in DRM code when removing/unbinding a driver

On Thu, Dec 19, 2019 at 11:03 AM John Garry <[email protected]> wrote:
>
> On 19/12/2019 09:54, Daniel Vetter wrote:
> > On Wed, Dec 18, 2019 at 7:08 PM John Garry <[email protected]> wrote:
> >>
> >> +
> >>
> >> So the v5.4 kernel does not have this issue.
> >>
> >> I have bisected the initial occurrence to:
> >>
> >> commit 37a48adfba6cf6e87df9ba8b75ab85d514ed86d8
> >> Author: Thomas Zimmermann <[email protected]>
> >> Date: Fri Sep 6 14:20:53 2019 +0200
> >>
> >> drm/vram: Add kmap ref-counting to GEM VRAM objects
> >>
> >> The kmap and kunmap operations of GEM VRAM buffers can now be called
> >> in interleaving pairs. The first call to drm_gem_vram_kmap() maps the
> >> buffer's memory to kernel address space and the final call to
> >> drm_gem_vram_kunmap() unmaps the memory. Intermediate calls to these
> >> functions increment or decrement a reference counter.
> >>
> >> So this either exposes or creates the issue.
> >
> > Yeah that's just shooting the messenger.
>
> OK, so it exposes it.
>
> > Like I said, for most drivers
> > you can pretty much assume that their unload sequence has been broken
> > since forever. It's not often tested, and especially the hotunbind
> > from a device (as opposed to driver unload) stuff wasn't even possible
> > to get right until just recently.
>
> Do you think it's worth trying to fix this for 5.5 and earlier, or just
> switch to the device-managed interface for 5.6 and forget about 5.5 and
> earlier?

I suspect it's going to be quite some trickery to fix this properly
and everywhere, even for just one driver. Lots of drm drivers
unfortunately use anti-patterns with wrong lifetimes (e.g. you can't
use devm_kmalloc for anything that hangs off a drm_device, like
plane/crtc/connector). Except when it's for a real hotunpluggable
device (usb) we've never bothered backporting these fixes. Too much
broken stuff unfortunately.
-Daniel

>
> Thanks,
> John



--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

2019-12-19 11:34:23

by Gerd Hoffmann

Subject: Re: Warnings in DRM code when removing/unbinding a driver

Hi,

> > > Like I said, for most drivers
> > > you can pretty much assume that their unload sequence has been broken
> > > since forever. It's not often tested, and especially the hotunbind
> > > from a device (as opposed to driver unload) stuff wasn't even possible
> > > to get right until just recently.
> >
> > Do you think it's worth trying to fix this for 5.5 and earlier, or just
> > switch to the device-managed interface for 5.6 and forget about 5.5 and
> > earlier?
>
> I suspect it's going to be quite some trickery to fix this properly
> and everywhere, even for just one driver. Lots of drm drivers
> unfortunately use anti-patterns with wrong lifetimes (e.g. you can't
> > use devm_kmalloc for anything that hangs off a drm_device, like
> plane/crtc/connector). Except when it's for a real hotunpluggable
> device (usb) we've never bothered backporting these fixes. Too much
> broken stuff unfortunately.

While we're at it: how would a driver properly clean up gem
objects created by userspace on hotunbind? Specifically a gem object
pinned to vram?

cheers,
Gerd

2019-12-19 12:43:42

by Daniel Vetter

Subject: Re: Warnings in DRM code when removing/unbinding a driver

On Thu, Dec 19, 2019 at 12:32 PM Gerd Hoffmann <[email protected]> wrote:
>
> Hi,
>
> > > > Like I said, for most drivers
> > > > you can pretty much assume that their unload sequence has been broken
> > > > since forever. It's not often tested, and especially the hotunbind
> > > > from a device (as opposed to driver unload) stuff wasn't even possible
> > > > to get right until just recently.
> > >
> > > Do you think it's worth trying to fix this for 5.5 and earlier, or just
> > > switch to the device-managed interface for 5.6 and forget about 5.5 and
> > > earlier?
> >
> > I suspect it's going to be quite some trickery to fix this properly
> > and everywhere, even for just one driver. Lots of drm drivers
> > unfortunately use anti-patterns with wrong lifetimes (e.g. you can't
> > > use devm_kmalloc for anything that hangs off a drm_device, like
> > plane/crtc/connector). Except when it's for a real hotunpluggable
> > device (usb) we've never bothered backporting these fixes. Too much
> > broken stuff unfortunately.
>
> While we're at it: how would a driver properly clean up gem
> objects created by userspace on hotunbind? Specifically a gem object
> pinned to vram?

Two things:
- the mmap needs to be torn down and replaced by something which will
sigbus. Probably should have that as a helper (plus vram fault code
should use drm_dev_enter/exit to plug races).
- otherwise all data structures need to be properly refcounted.
drm_device now is (if your driver isn't broken), but any dma_fence or
dma_buf we create and export has an independent lifetime, and
currently the refcounting for those is still wobbly, I think.

So there's some work to do, both in helpers/core code and in updating drivers.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

2019-12-23 09:02:34

by Pekka Paalanen

Subject: SIGBUS on device disappearance (Re: Warnings in DRM code when removing/unbinding a driver)

On Thu, 19 Dec 2019 13:42:33 +0100
Daniel Vetter <[email protected]> wrote:

> On Thu, Dec 19, 2019 at 12:32 PM Gerd Hoffmann <[email protected]> wrote:
> >
> > While we're at it: how would a driver properly clean up gem
> > objects created by userspace on hotunbind? Specifically a gem object
> > pinned to vram?
>
> Two things:
> - the mmap needs to be torn down and replaced by something which will
> sigbus. Probably should have that as a helper (plus vram fault code
> should use drm_dev_enter/exit to plug races).

Hi,

I assume SIGBUS is the traditional way to say "oops, the memory you
mmapped and tried to access no longer exists". Is there nothing
else for this?

I'm asking, because SIGBUS is really hard to handle right in
userspace. It can be caused by any number of wildly different
reasons, yet being a signal means that a userspace process can only
have a single global handler for it. That makes it almost
impossible to use safely in libraries, because you would want to
register independent handlers from multiple libraries in the same
process. Some libraries may also be using threads.

How to handle a SIGBUS completely depends on what triggered it.
Almost always userspace wants it to be a non-fatal error. A Wayland
compositor can hit SIGBUS on accessing wl_shm-based client buffers
(regular mmapped files), and then it just wants to continue with
garbage data as if nothing happened and possibly send a protocol
error to the client provoking it.

I would also imagine that Mesa, when it starts looking into
supporting GPU hotunplug, needs to handle vanished mmaps. I don't
think Mesa can ever install signal handlers, because that would
mess with the applications that may already be using SIGBUS for
handling disappearing mmapped files. It needs to start returning
errors via API calls. I cannot imagine a way to reliably prevent
such SIGBUS either by e.g. ensuring Mesa gets notified of removal
before it actually starts failing.

For now, I'm just looking for a simple "yes" or "no" here on whether
something else exists. If it's "no" like I expect, creating something else
is probably on the order of years to get into a usable state. Does
anyone already have plans towards that?


Thanks,
pq



2020-01-07 15:44:23

by Daniel Vetter

Subject: Re: SIGBUS on device disappearance (Re: Warnings in DRM code when removing/unbinding a driver)

On Mon, Dec 23, 2019 at 11:00:15AM +0200, Pekka Paalanen wrote:
> On Thu, 19 Dec 2019 13:42:33 +0100
> Daniel Vetter <[email protected]> wrote:
>
> > On Thu, Dec 19, 2019 at 12:32 PM Gerd Hoffmann <[email protected]> wrote:
> > >
> > > While we're at it: how would a driver properly clean up gem
> > > objects created by userspace on hotunbind? Specifically a gem object
> > > pinned to vram?
> >
> > Two things:
> > - the mmap needs to be torn down and replaced by something which will
> > sigbus. Probably should have that as a helper (plus vram fault code
> > should use drm_dev_enter/exit to plug races).
>
> Hi,
>
> I assume SIGBUS is the traditional way to say "oops, the memory you
> mmapped and tried to access no longer exists". Is there nothing
> else for this?
>
> I'm asking, because SIGBUS is really hard to handle right in
> userspace. It can be caused by any number of wildly different
> reasons, yet being a signal means that a userspace process can only
> have a single global handler for it. That makes it almost
> impossible to use safely in libraries, because you would want to
> register independent handlers from multiple libraries in the same
> process. Some libraries may also be using threads.
>
> How to handle a SIGBUS completely depends on what triggered it.
> Almost always userspace wants it to be a non-fatal error. A Wayland
> compositor can hit SIGBUS on accessing wl_shm-based client buffers
> (regular mmapped files), and then it just wants to continue with
> garbage data as if nothing happened and possibly send a protocol
> error to the client provoking it.

For drm drivers that you actually want to hotunplug (as opposed to more
just for driver development) they all use system memory/shmem, so they
shouldn't sigbus. I think at least, I haven't tested anything. This is for
udl, or the tiny displays behind an SPI bridge.

For pci drivers, where the mmap often points at a pci bridge, the mmio
range will be gone, so not SIGBUSing is going to be a tough order. Not
impossible, but before we enshrine this into uapi someone will have to do
some serious typing.

> I would also imagine that Mesa, when it starts looking into
> supporting GPU hotunplug, needs to handle vanished mmaps. I don't
> think Mesa can ever install signal handlers, because that would
> mess with the applications that may already be using SIGBUS for
> handling disappearing mmapped files. It needs to start returning
> errors via API calls. I cannot imagine a way to reliably prevent
> such SIGBUS either by e.g. ensuring Mesa gets notified of removal
> before it actually starts failing.

Mesa already blows up in all kinds of interesting ways when it gets an EIO
at execbuf. I think. Robust handling of gpu hotunplug for gl/vk contexts
is going to be more work on top (and mmap is probably the least issue
there, at least right now).

> For now, I'm just looking for a simple "yes" or "no" here for the
> something else. If it's "no" like I expect, creating something else
> is probably in the order of years to get into a usable state. Does
> anyone already have plans towards that?

I agree with you that SIGBUS for mmap of hotunplugged devices is
essentially unusable because of signal handlers and all that you point out
(it would make it impossible to have robust vk/gl contexts, at least robust
against hotunplug).

So in principle I'm open to having some other uapi for this, but it's going
to be serious amounts of work across the stack.

For display only udl-style devices otoh I think we should be mostly there,
+/- driver bugs as usual.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch