2022-08-15 10:19:17

by Dmitry Osipenko

Subject: [PATCH v1] drm/ttm: Refcount allocated tail pages

Higher-order pages allocated using alloc_pages() aren't refcounted, but they
need to be, otherwise KVM can't map them into a guest. This patch sets the
refcount of the tail pages and fixes the KVM memory-mapping faults.

Without this change the guest virgl driver can't map host buffers into the
guest and can't provide OpenGL 4.5 profile support to the guest. The host
mappings are also needed for enabling the Venus driver with host GPU
drivers that use TTM.

Based on a patch proposed by Trigger Huang.

Cc: [email protected]
Cc: Trigger Huang <[email protected]>
Link: https://www.collabora.com/news-and-blog/blog/2021/11/26/venus-on-qemu-enabling-new-virtual-vulkan-driver/#qcom1343
Tested-by: Dmitry Osipenko <[email protected]> # AMDGPU (Qemu and crosvm)
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/gpu/drm/ttm/ttm_pool.c | 25 ++++++++++++++++++++++++-
1 file changed, 24 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 21b61631f73a..11e92bb149c9 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -81,6 +81,7 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
struct ttm_pool_dma *dma;
struct page *p;
+ unsigned int i;
void *vaddr;

/* Don't set the __GFP_COMP flag for higher order allocations.
@@ -93,8 +94,10 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,

if (!pool->use_dma_alloc) {
p = alloc_pages(gfp_flags, order);
- if (p)
+ if (p) {
p->private = order;
+ goto ref_tail_pages;
+ }
return p;
}

@@ -120,6 +123,23 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,

dma->vaddr = (unsigned long)vaddr | order;
p->private = (unsigned long)dma;
+
+ref_tail_pages:
+ /*
+ * KVM requires mapped tail pages to be refcounted because put_page()
+ * is invoked on them in the end of the page fault handling, and thus,
+ * tail pages need to be protected from the premature releasing.
+ * In fact, KVM page fault handler refuses to map tail pages to guest
+ * if they aren't refcounted because hva_to_pfn_remapped() checks the
+ * refcount specifically for this case.
+ *
+ * In particular, unreferenced tail pages result in a KVM "Bad address"
+ * failure for VMMs that use VirtIO-GPU when guest's Mesa VirGL driver
+ * accesses mapped host TTM buffer that contains tail pages.
+ */
+ for (i = 1; i < 1 << order; i++)
+ page_ref_inc(p + i);
+
return p;

error_free:
@@ -133,6 +153,7 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
{
unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
struct ttm_pool_dma *dma;
+ unsigned int i;
void *vaddr;

#ifdef CONFIG_X86
@@ -142,6 +163,8 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
if (caching != ttm_cached && !PageHighMem(p))
set_pages_wb(p, 1 << order);
#endif
+ for (i = 1; i < 1 << order; i++)
+ page_ref_dec(p + i);

if (!pool || !pool->use_dma_alloc) {
__free_pages(p, order);
--
2.37.2
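
For context, a minimal sketch (not part of the patch) of the behaviour the commit message relies on: a non-compound higher-order allocation from alloc_pages() only refcounts the head page, so a put_page() on any tail page is unsafe.

struct page *p;

/* Illustrative only: order-2 allocation without __GFP_COMP. */
p = alloc_pages(GFP_KERNEL, 2);
if (p) {
	WARN_ON(page_ref_count(p) != 1);	/* head page holds the only reference */
	WARN_ON(page_ref_count(p + 1) != 0);	/* tail pages are not refcounted, so
						 * put_page() on them would corrupt
						 * the allocation */
	__free_pages(p, 2);
}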


2022-08-15 11:01:52

by Christian König

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On 15.08.22 at 11:54, Dmitry Osipenko wrote:
> Higher order pages allocated using alloc_pages() aren't refcounted and they
> need to be refcounted, otherwise it's impossible to map them by KVM. This
> patch sets the refcount of the tail pages and fixes the KVM memory mapping
> faults.
>
> Without this change guest virgl driver can't map host buffers into guest
> and can't provide OpenGL 4.5 profile support to the guest. The host
> mappings are also needed for enabling the Venus driver using host GPU
> drivers that are utilizing TTM.
>
> Based on a patch proposed by Trigger Huang.

Well I can't count how often I have repeated this: This is an absolutely
clear NAK!

TTM pages are not reference counted in the first place and because of
this giving them to virgl is illegal.

Please immediately stop this completely broken approach. We have
discussed this multiple times now.

Regards,
Christian.

>
> Cc: [email protected]
> Cc: Trigger Huang <[email protected]>
> Link: https://www.collabora.com/news-and-blog/blog/2021/11/26/venus-on-qemu-enabling-new-virtual-vulkan-driver/#qcom1343
> Tested-by: Dmitry Osipenko <[email protected]> # AMDGPU (Qemu and crosvm)
> Signed-off-by: Dmitry Osipenko <[email protected]>
> ---
> drivers/gpu/drm/ttm/ttm_pool.c | 25 ++++++++++++++++++++++++-
> 1 file changed, 24 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> index 21b61631f73a..11e92bb149c9 100644
> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> @@ -81,6 +81,7 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
> struct ttm_pool_dma *dma;
> struct page *p;
> + unsigned int i;
> void *vaddr;
>
> /* Don't set the __GFP_COMP flag for higher order allocations.
> @@ -93,8 +94,10 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
>
> if (!pool->use_dma_alloc) {
> p = alloc_pages(gfp_flags, order);
> - if (p)
> + if (p) {
> p->private = order;
> + goto ref_tail_pages;
> + }
> return p;
> }
>
> @@ -120,6 +123,23 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
>
> dma->vaddr = (unsigned long)vaddr | order;
> p->private = (unsigned long)dma;
> +
> +ref_tail_pages:
> + /*
> + * KVM requires mapped tail pages to be refcounted because put_page()
> + * is invoked on them in the end of the page fault handling, and thus,
> + * tail pages need to be protected from the premature releasing.
> + * In fact, KVM page fault handler refuses to map tail pages to guest
> + * if they aren't refcounted because hva_to_pfn_remapped() checks the
> + * refcount specifically for this case.
> + *
> + * In particular, unreferenced tail pages result in a KVM "Bad address"
> + * failure for VMMs that use VirtIO-GPU when guest's Mesa VirGL driver
> + * accesses mapped host TTM buffer that contains tail pages.
> + */
> + for (i = 1; i < 1 << order; i++)
> + page_ref_inc(p + i);
> +
> return p;
>
> error_free:
> @@ -133,6 +153,7 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
> {
> unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
> struct ttm_pool_dma *dma;
> + unsigned int i;
> void *vaddr;
>
> #ifdef CONFIG_X86
> @@ -142,6 +163,8 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
> if (caching != ttm_cached && !PageHighMem(p))
> set_pages_wb(p, 1 << order);
> #endif
> + for (i = 1; i < 1 << order; i++)
> + page_ref_dec(p + i);
>
> if (!pool || !pool->use_dma_alloc) {
> __free_pages(p, order);

2022-08-15 11:41:13

by Christian König

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On 15.08.22 at 13:19, Dmitry Osipenko wrote:
> [SNIP]
>>>> I'll try to dig out the older discussions, thank you for the quick
>>>> reply!
>>> Are you sure it was really discussed in public previously? All I can
>>> find is yours two answers to a similar patches where you're saying that
>>> this it's a wrong solution without in-depth explanation and further
>>> discussions.
>> Yeah, that's my problem as well I can't find that of hand.
>>
>> But yes it certainly was discussed in public.
> If it was only CC'd to dri-devel, then could be that emails didn't pass
> the spam moderation :/

That might be possible.

>>> Maybe it was discussed privately? In this case I will be happy to get
>>> more info from you about the root of the problem so I could start to
>>> look at how to fix it properly. It's not apparent where the problem is
>>> to a TTM newbie like me.
>>>
>> Well this is completely unfixable. See the whole purpose of TTM is to
>> allow tracing where what is mapped of a buffer object.
>>
>> If you circumvent that and increase the page reference yourself than
>> that whole functionality can't work correctly any more.
> Are you suggesting that the problem is that TTM doesn't see the KVM page
> faults/mappings?

Yes, and no. It's one of the issues, but there is more behind that (e.g.
what happens when TTM switches from pages to local memory for backing a BO).

Another question is why is KVM accessing the page structure in the first
place? The VMA is mapped with VM_PFNMAP and VM_IO, KVM should never ever
touch any of those pages.

Regards,
Christian.

2022-08-15 12:05:33

by Dmitry Osipenko

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On 8/15/22 14:28, Christian König wrote:
>>>> Maybe it was discussed privately? In this case I will be happy to get
>>>> more info from you about the root of the problem so I could start to
>>>> look at how to fix it properly. It's not apparent where the problem is
>>>> to a TTM newbie like me.
>>>>
>>> Well this is completely unfixable. See the whole purpose of TTM is to
>>> allow tracing where what is mapped of a buffer object.
>>>
>>> If you circumvent that and increase the page reference yourself than
>>> that whole functionality can't work correctly any more.
>> Are you suggesting that the problem is that TTM doesn't see the KVM page
>> faults/mappings?
>
> Yes, and no. It's one of the issues, but there is more behind that (e.g.
> what happens when TTM switches from pages to local memory for backing a
> BO).

If the KVM page fault could reach TTM, then it should be able to relocate
the BO. I see now where the problem is, thanks. Although I'm wondering
whether it already works somehow... I'll try to play with the AMDGPU
shrinker and see what happens on guest mapping of a relocated BO.

> Another question is why is KVM accessing the page structure in the first
> place? The VMA is mapped with VM_PFNMAP and VM_IO, KVM should never ever
> touch any of those pages.

https://elixir.bootlin.com/linux/v5.19/source/virt/kvm/kvm_main.c#L2549

--
Best regards,
Dmitry

2022-08-15 13:27:28

by Christian König

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On 15.08.22 at 13:50, Dmitry Osipenko wrote:
> On 8/15/22 14:28, Christian König wrote:
>>>>> Maybe it was discussed privately? In this case I will be happy to get
>>>>> more info from you about the root of the problem so I could start to
>>>>> look at how to fix it properly. It's not apparent where the problem is
>>>>> to a TTM newbie like me.
>>>>>
>>>> Well this is completely unfixable. See the whole purpose of TTM is to
>>>> allow tracing where what is mapped of a buffer object.
>>>>
>>>> If you circumvent that and increase the page reference yourself than
>>>> that whole functionality can't work correctly any more.
>>> Are you suggesting that the problem is that TTM doesn't see the KVM page
>>> faults/mappings?
>> Yes, and no. It's one of the issues, but there is more behind that (e.g.
>> what happens when TTM switches from pages to local memory for backing a
>> BO).
> If KVM page fault could reach TTM, then it should be able to relocate
> BO. I see now where is the problem, thanks. Although, I'm wondering
> whether it already works somehow.. I'll try to play with the the AMDGPU
> shrinker and see what will happen on guest mapping of a relocated BO.

Well the page fault already somehow reaches TTM, otherwise the pfn
couldn't be filled in in the first place.

The issue is more that KVM should never ever grab a page reference to
pages mapped with VM_IO or VM_PFNMAP.

Essentially we need to apply the same restriction as with
get_user_pages() here.

>> Another question is why is KVM accessing the page structure in the first
>> place? The VMA is mapped with VM_PFNMAP and VM_IO, KVM should never ever
>> touch any of those pages.
> https://elixir.bootlin.com/linux/v5.19/source/virt/kvm/kvm_main.c#L2549

Well that comment sounds like KVM is doing the right thing, so I'm
wondering what exactly is going on here.

Regards,
Christian.

>

2022-08-15 14:20:08

by Dmitry Osipenko

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On 8/15/22 16:06, Christian König wrote:
> Am 15.08.22 um 13:50 schrieb Dmitry Osipenko:
>> On 8/15/22 14:28, Christian König wrote:
>>>>>> Maybe it was discussed privately? In this case I will be happy to get
>>>>>> more info from you about the root of the problem so I could start to
>>>>>> look at how to fix it properly. It's not apparent where the
>>>>>> problem is
>>>>>> to a TTM newbie like me.
>>>>>>
>>>>> Well this is completely unfixable. See the whole purpose of TTM is to
>>>>> allow tracing where what is mapped of a buffer object.
>>>>>
>>>>> If you circumvent that and increase the page reference yourself than
>>>>> that whole functionality can't work correctly any more.
>>>> Are you suggesting that the problem is that TTM doesn't see the KVM
>>>> page
>>>> faults/mappings?
>>> Yes, and no. It's one of the issues, but there is more behind that (e.g.
>>> what happens when TTM switches from pages to local memory for backing a
>>> BO).
>> If KVM page fault could reach TTM, then it should be able to relocate
>> BO. I see now where is the problem, thanks. Although, I'm wondering
>> whether it already works somehow.. I'll try to play with the the AMDGPU
>> shrinker and see what will happen on guest mapping of a relocated BO.
>
> Well the page fault already somehow reaches TTM, otherwise the pfn
> couldn't be filled in in the first place.
>
> The issues is more that KVM should never ever grab a page reference to
> pages mapped with VM_IO or VM_PFNMAP.
>
> Essentially we need to apply the same restriction as with
> get_user_pages() here.
>
>>> Another question is why is KVM accessing the page structure in the first
>>> place? The VMA is mapped with VM_PFNMAP and VM_IO, KVM should never ever
>>> touch any of those pages.
>> https://elixir.bootlin.com/linux/v5.19/source/virt/kvm/kvm_main.c#L2549
>>
>
> Well that comment sounds like KVM is doing the right thing, so I'm
> wondering what exactly is going on here.

KVM actually doesn't hold the page reference: it takes a temporary
reference during the page fault and then drops it once the page is
mapped, IIUC. Is it still illegal for TTM? Or is there a possibility of
a race condition here?

--
Best regards,
Dmitry
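
As a rough paraphrase of the lifecycle described above (simplified, not verbatim KVM code; gfn_to_pfn(), is_error_noslot_pfn() and kvm_release_pfn_clean() are the real KVM helpers, the surrounding fault-handler context is omitted), the reference is only held across the fault itself:

kvm_pfn_t pfn;

pfn = gfn_to_pfn(kvm, gfn);		/* fault path; may take a page reference */
if (is_error_noslot_pfn(pfn))
	return -EFAULT;

/* ... the pfn is installed into the guest (stage-2/EPT) page tables ... */

kvm_release_pfn_clean(pfn);		/* temporary reference dropped once mapped */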

2022-08-15 14:30:48

by Christian König

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On 15.08.22 at 15:45, Dmitry Osipenko wrote:
> [SNIP]
>> Well that comment sounds like KVM is doing the right thing, so I'm
>> wondering what exactly is going on here.
> KVM actually doesn't hold the page reference, it takes the temporal
> reference during page fault and then drops the reference once page is
> mapped, IIUC. Is it still illegal for TTM? Or there is a possibility for
> a race condition here?
>

Well the question is why does KVM grab the page reference in the first
place?

If that is to prevent the mapping from changing then yes that's illegal
and won't work. It can always happen that you grab the address, solve
the fault and then immediately fault again because the address you just
grabbed is invalidated.

If it's for some other reason, then we should probably investigate
whether we should stop doing this.

Regards,
Christian.

2022-08-15 14:59:12

by Dmitry Osipenko

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On 8/15/22 16:53, Christian König wrote:
> Am 15.08.22 um 15:45 schrieb Dmitry Osipenko:
>> [SNIP]
>>> Well that comment sounds like KVM is doing the right thing, so I'm
>>> wondering what exactly is going on here.
>> KVM actually doesn't hold the page reference, it takes the temporal
>> reference during page fault and then drops the reference once page is
>> mapped, IIUC. Is it still illegal for TTM? Or there is a possibility for
>> a race condition here?
>>
>
> Well the question is why does KVM grab the page reference in the first
> place?
>
> If that is to prevent the mapping from changing then yes that's illegal
> and won't work. It can always happen that you grab the address, solve
> the fault and then immediately fault again because the address you just
> grabbed is invalidated.
>
> If it's for some other reason than we should probably investigate if we
> shouldn't stop doing this.

CC: +Paolo Bonzini who introduced this code

commit add6a0cd1c5ba51b201e1361b05a5df817083618
Author: Paolo Bonzini <[email protected]>
Date: Tue Jun 7 17:51:18 2016 +0200

KVM: MMU: try to fix up page faults before giving up

The vGPU folks would like to trap the first access to a BAR by setting
vm_ops on the VMAs produced by mmap-ing a VFIO device. The fault
handler
then can use remap_pfn_range to place some non-reserved pages in the
VMA.

This kind of VM_PFNMAP mapping is not handled by KVM, but follow_pfn
and fixup_user_fault together help supporting it. The patch also
supports
VM_MIXEDMAP vmas where the pfns are not reserved and thus subject to
reference counting.

@Paolo,
https://lore.kernel.org/dri-devel/[email protected]/T/#m7647ce5f8c4749599d2c6bc15a2b45f8d8cf8154

--
Best regards,
Dmitry

2022-08-15 16:11:08

by Dmitry Osipenko

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On 8/15/22 17:57, Dmitry Osipenko wrote:
> On 8/15/22 16:53, Christian König wrote:
>> Am 15.08.22 um 15:45 schrieb Dmitry Osipenko:
>>> [SNIP]
>>>> Well that comment sounds like KVM is doing the right thing, so I'm
>>>> wondering what exactly is going on here.
>>> KVM actually doesn't hold the page reference, it takes the temporal
>>> reference during page fault and then drops the reference once page is
>>> mapped, IIUC. Is it still illegal for TTM? Or there is a possibility for
>>> a race condition here?
>>>
>>
>> Well the question is why does KVM grab the page reference in the first
>> place?
>>
>> If that is to prevent the mapping from changing then yes that's illegal
>> and won't work. It can always happen that you grab the address, solve
>> the fault and then immediately fault again because the address you just
>> grabbed is invalidated.
>>
>> If it's for some other reason than we should probably investigate if we
>> shouldn't stop doing this.
>
> CC: +Paolo Bonzini who introduced this code
>
> commit add6a0cd1c5ba51b201e1361b05a5df817083618
> Author: Paolo Bonzini <[email protected]>
> Date: Tue Jun 7 17:51:18 2016 +0200
>
> KVM: MMU: try to fix up page faults before giving up
>
> The vGPU folks would like to trap the first access to a BAR by setting
> vm_ops on the VMAs produced by mmap-ing a VFIO device. The fault
> handler
> then can use remap_pfn_range to place some non-reserved pages in the
> VMA.
>
> This kind of VM_PFNMAP mapping is not handled by KVM, but follow_pfn
> and fixup_user_fault together help supporting it. The patch also
> supports
> VM_MIXEDMAP vmas where the pfns are not reserved and thus subject to
> reference counting.
>
> @Paolo,
> https://lore.kernel.org/dri-devel/[email protected]/T/#m7647ce5f8c4749599d2c6bc15a2b45f8d8cf8154
>

If we need to bump the refcount only for VM_MIXEDMAP and not for
VM_PFNMAP, then perhaps we could add a flag for that to the kvm_main
code that will denote to kvm_release_page_clean whether it needs to put
the page?

--
Best regards,
Dmitry
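
A hypothetical illustration of that idea (not existing KVM code; the structure and helper below are made up for the sketch):

/* Hypothetical: the fault path records whether it actually took a page
 * reference, instead of the release path guessing from the struct page. */
struct kvm_pfn_ref {
	kvm_pfn_t pfn;
	bool got_ref;	/* set only when the fault path really called get_page(),
			 * e.g. for a VM_MIXEDMAP mapping */
};

static void kvm_release_pfn_ref(struct kvm_pfn_ref *ref)
{
	if (ref->got_ref)
		put_page(pfn_to_page(ref->pfn));
}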

2022-08-17 23:05:09

by Dmitry Osipenko

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On 8/15/22 18:54, Dmitry Osipenko wrote:
> On 8/15/22 17:57, Dmitry Osipenko wrote:
>> On 8/15/22 16:53, Christian König wrote:
>>> Am 15.08.22 um 15:45 schrieb Dmitry Osipenko:
>>>> [SNIP]
>>>>> Well that comment sounds like KVM is doing the right thing, so I'm
>>>>> wondering what exactly is going on here.
>>>> KVM actually doesn't hold the page reference, it takes the temporal
>>>> reference during page fault and then drops the reference once page is
>>>> mapped, IIUC. Is it still illegal for TTM? Or there is a possibility for
>>>> a race condition here?
>>>>
>>>
>>> Well the question is why does KVM grab the page reference in the first
>>> place?
>>>
>>> If that is to prevent the mapping from changing then yes that's illegal
>>> and won't work. It can always happen that you grab the address, solve
>>> the fault and then immediately fault again because the address you just
>>> grabbed is invalidated.
>>>
>>> If it's for some other reason than we should probably investigate if we
>>> shouldn't stop doing this.
>>
>> CC: +Paolo Bonzini who introduced this code
>>
>> commit add6a0cd1c5ba51b201e1361b05a5df817083618
>> Author: Paolo Bonzini <[email protected]>
>> Date: Tue Jun 7 17:51:18 2016 +0200
>>
>> KVM: MMU: try to fix up page faults before giving up
>>
>> The vGPU folks would like to trap the first access to a BAR by setting
>> vm_ops on the VMAs produced by mmap-ing a VFIO device. The fault
>> handler
>> then can use remap_pfn_range to place some non-reserved pages in the
>> VMA.
>>
>> This kind of VM_PFNMAP mapping is not handled by KVM, but follow_pfn
>> and fixup_user_fault together help supporting it. The patch also
>> supports
>> VM_MIXEDMAP vmas where the pfns are not reserved and thus subject to
>> reference counting.
>>
>> @Paolo,
>> https://lore.kernel.org/dri-devel/[email protected]/T/#m7647ce5f8c4749599d2c6bc15a2b45f8d8cf8154
>>
>
> If we need to bump the refcount only for VM_MIXEDMAP and not for
> VM_PFNMAP, then perhaps we could add a flag for that to the kvm_main
> code that will denote to kvm_release_page_clean whether it needs to put
> the page?

The other variant that kind of works is to mark TTM pages reserved using
SetPageReserved/ClearPageReserved, telling KVM not to mess with the page
struct. But the potential consequences of doing this are unclear to me.

Christian, do you think we can do it?

--
Best regards,
Dmitry
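
For reference, a sketch of what that variant could look like in ttm_pool_alloc_page()/ttm_pool_free_page() (illustrative only; as the follow-up below notes, it still wouldn't help without KVM-side changes):

/* On allocation: mark the whole block reserved so generic code, including
 * KVM's "is this a normal refcounted page?" heuristic, leaves the struct
 * pages alone. */
for (i = 0; i < (1 << order); i++)
	SetPageReserved(p + i);

/* On free: undo it before the pages go back to the page allocator. */
for (i = 0; i < (1 << order); i++)
	ClearPageReserved(p + i);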

2022-08-17 23:22:48

by Dmitry Osipenko

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On 8/18/22 01:57, Dmitry Osipenko wrote:
> On 8/15/22 18:54, Dmitry Osipenko wrote:
>> On 8/15/22 17:57, Dmitry Osipenko wrote:
>>> On 8/15/22 16:53, Christian König wrote:
>>>> Am 15.08.22 um 15:45 schrieb Dmitry Osipenko:
>>>>> [SNIP]
>>>>>> Well that comment sounds like KVM is doing the right thing, so I'm
>>>>>> wondering what exactly is going on here.
>>>>> KVM actually doesn't hold the page reference, it takes the temporal
>>>>> reference during page fault and then drops the reference once page is
>>>>> mapped, IIUC. Is it still illegal for TTM? Or there is a possibility for
>>>>> a race condition here?
>>>>>
>>>>
>>>> Well the question is why does KVM grab the page reference in the first
>>>> place?
>>>>
>>>> If that is to prevent the mapping from changing then yes that's illegal
>>>> and won't work. It can always happen that you grab the address, solve
>>>> the fault and then immediately fault again because the address you just
>>>> grabbed is invalidated.
>>>>
>>>> If it's for some other reason than we should probably investigate if we
>>>> shouldn't stop doing this.
>>>
>>> CC: +Paolo Bonzini who introduced this code
>>>
>>> commit add6a0cd1c5ba51b201e1361b05a5df817083618
>>> Author: Paolo Bonzini <[email protected]>
>>> Date: Tue Jun 7 17:51:18 2016 +0200
>>>
>>> KVM: MMU: try to fix up page faults before giving up
>>>
>>> The vGPU folks would like to trap the first access to a BAR by setting
>>> vm_ops on the VMAs produced by mmap-ing a VFIO device. The fault
>>> handler
>>> then can use remap_pfn_range to place some non-reserved pages in the
>>> VMA.
>>>
>>> This kind of VM_PFNMAP mapping is not handled by KVM, but follow_pfn
>>> and fixup_user_fault together help supporting it. The patch also
>>> supports
>>> VM_MIXEDMAP vmas where the pfns are not reserved and thus subject to
>>> reference counting.
>>>
>>> @Paolo,
>>> https://lore.kernel.org/dri-devel/[email protected]/T/#m7647ce5f8c4749599d2c6bc15a2b45f8d8cf8154
>>>
>>
>> If we need to bump the refcount only for VM_MIXEDMAP and not for
>> VM_PFNMAP, then perhaps we could add a flag for that to the kvm_main
>> code that will denote to kvm_release_page_clean whether it needs to put
>> the page?
>
> The other variant that kind of works is to mark TTM pages reserved using
> SetPageReserved/ClearPageReserved, telling KVM not to mess with the page
> struct. But the potential consequences of doing this are unclear to me.
>
> Christian, do you think we can do it?

Although, no. It also doesn't work without additional changes to KVM.

--
Best regards,
Dmitry

2022-08-18 09:59:15

by Christian König

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On 18.08.22 at 01:13, Dmitry Osipenko wrote:
> On 8/18/22 01:57, Dmitry Osipenko wrote:
>> On 8/15/22 18:54, Dmitry Osipenko wrote:
>>> On 8/15/22 17:57, Dmitry Osipenko wrote:
>>>> On 8/15/22 16:53, Christian König wrote:
>>>>> Am 15.08.22 um 15:45 schrieb Dmitry Osipenko:
>>>>>> [SNIP]
>>>>>>> Well that comment sounds like KVM is doing the right thing, so I'm
>>>>>>> wondering what exactly is going on here.
>>>>>> KVM actually doesn't hold the page reference, it takes the temporal
>>>>>> reference during page fault and then drops the reference once page is
>>>>>> mapped, IIUC. Is it still illegal for TTM? Or there is a possibility for
>>>>>> a race condition here?
>>>>>>
>>>>> Well the question is why does KVM grab the page reference in the first
>>>>> place?
>>>>>
>>>>> If that is to prevent the mapping from changing then yes that's illegal
>>>>> and won't work. It can always happen that you grab the address, solve
>>>>> the fault and then immediately fault again because the address you just
>>>>> grabbed is invalidated.
>>>>>
>>>>> If it's for some other reason than we should probably investigate if we
>>>>> shouldn't stop doing this.
>>>> CC: +Paolo Bonzini who introduced this code
>>>>
>>>> commit add6a0cd1c5ba51b201e1361b05a5df817083618
>>>> Author: Paolo Bonzini <[email protected]>
>>>> Date: Tue Jun 7 17:51:18 2016 +0200
>>>>
>>>> KVM: MMU: try to fix up page faults before giving up
>>>>
>>>> The vGPU folks would like to trap the first access to a BAR by setting
>>>> vm_ops on the VMAs produced by mmap-ing a VFIO device. The fault
>>>> handler
>>>> then can use remap_pfn_range to place some non-reserved pages in the
>>>> VMA.
>>>>
>>>> This kind of VM_PFNMAP mapping is not handled by KVM, but follow_pfn
>>>> and fixup_user_fault together help supporting it. The patch also
>>>> supports
>>>> VM_MIXEDMAP vmas where the pfns are not reserved and thus subject to
>>>> reference counting.
>>>>
>>>> @Paolo,
>>>> https://lore.kernel.org/dri-devel/[email protected]/T/#m7647ce5f8c4749599d2c6bc15a2b45f8d8cf8154
>>>>
>>> If we need to bump the refcount only for VM_MIXEDMAP and not for
>>> VM_PFNMAP, then perhaps we could add a flag for that to the kvm_main
>>> code that will denote to kvm_release_page_clean whether it needs to put
>>> the page?
>> The other variant that kind of works is to mark TTM pages reserved using
>> SetPageReserved/ClearPageReserved, telling KVM not to mess with the page
>> struct. But the potential consequences of doing this are unclear to me.
>>
>> Christian, do you think we can do it?
> Although, no. It also doesn't work with KVM without additional changes
> to KVM.

Well, my fundamental problem is that I can't fit together why KVM is
grabbing a page reference in the first place.

See, the idea of the page reference is that you count references so that
the memory is not reused while you access it, e.g. for I/O or for mapping
it into different address spaces etc.

But none of those use cases seem to apply to KVM. If I'm not totally
mistaken, in KVM you want to make sure that the address space mapping,
e.g. the translation between virtual and physical address, doesn't change
while you handle it, but grabbing a page reference is the completely
wrong approach for that.

Regards,
Christian.


2022-09-06 20:30:24

by Daniel Vetter

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On Tue, Sep 06, 2022 at 10:01:47PM +0200, Daniel Vetter wrote:
> On Mon, Aug 15, 2022 at 12:05:19PM +0200, Christian König wrote:
> > Am 15.08.22 um 11:54 schrieb Dmitry Osipenko:
> > > Higher order pages allocated using alloc_pages() aren't refcounted and they
> > > need to be refcounted, otherwise it's impossible to map them by KVM. This
> > > patch sets the refcount of the tail pages and fixes the KVM memory mapping
> > > faults.
> > >
> > > Without this change guest virgl driver can't map host buffers into guest
> > > and can't provide OpenGL 4.5 profile support to the guest. The host
> > > mappings are also needed for enabling the Venus driver using host GPU
> > > drivers that are utilizing TTM.
> > >
> > > Based on a patch proposed by Trigger Huang.
> >
> > Well I can't count how often I have repeated this: This is an absolutely
> > clear NAK!
> >
> > TTM pages are not reference counted in the first place and because of this
> > giving them to virgl is illegal.
> >
> > Please immediately stop this completely broken approach. We have discussed
> > this multiple times now.
>
> Yeah we need to get this stuff closed for real by tagging them all with
> VM_IO or VM_PFNMAP asap.

For a bit more context: Anything mapping a bo should be VM_SPECIAL. And I
think we should add the checks to the gem and dma-buf mmap functions to
validate for that, and fix all the fallout.

Otherwise this dragon keeps resurrecting ...

VM_SPECIAL _will_ block get_user_pages, which will block everyone from
even trying to refcount this stuff.

Minimally we need to fix this for all ttm drivers, and it sounds like
that's still not yet the case :-( Iirc last time around some funky amdkfd
userspace was the hold-up because of regressions?
-Daniel
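
A minimal sketch of the validation suggested above, assuming a generic GEM mmap helper (the function name is made up; VM_SPECIAL is the existing VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP mask):

/* Sketch only: refuse any BO mapping that didn't end up "special", so
 * get_user_pages() callers can never take references on these pages. */
static int drm_gem_assert_special_mapping(struct vm_area_struct *vma)
{
	if (WARN_ON_ONCE(!(vma->vm_flags & VM_SPECIAL)))
		return -EINVAL;
	return 0;
}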

>
> It seems ot be a recurring amount of fun that people try to mmap dma-buf
> and then call get_user_pages on them.
>
> Which just doesn't work. I guess this is also why Rob Clark send out that
> dma-buf patch to expos mapping information (i.e. wc vs wb vs uncached).
>
> There seems to be some serious bonghits going on :-/
> -Daniel
>
> >
> > Regards,
> > Christian.
> >
> > >
> > > Cc: [email protected]
> > > Cc: Trigger Huang <[email protected]>
> > > Link: https://www.collabora.com/news-and-blog/blog/2021/11/26/venus-on-qemu-enabling-new-virtual-vulkan-driver/#qcom1343
> > > Tested-by: Dmitry Osipenko <[email protected]> # AMDGPU (Qemu and crosvm)
> > > Signed-off-by: Dmitry Osipenko <[email protected]>
> > > ---
> > > drivers/gpu/drm/ttm/ttm_pool.c | 25 ++++++++++++++++++++++++-
> > > 1 file changed, 24 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> > > index 21b61631f73a..11e92bb149c9 100644
> > > --- a/drivers/gpu/drm/ttm/ttm_pool.c
> > > +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> > > @@ -81,6 +81,7 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> > > unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
> > > struct ttm_pool_dma *dma;
> > > struct page *p;
> > > + unsigned int i;
> > > void *vaddr;
> > > /* Don't set the __GFP_COMP flag for higher order allocations.
> > > @@ -93,8 +94,10 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> > > if (!pool->use_dma_alloc) {
> > > p = alloc_pages(gfp_flags, order);
> > > - if (p)
> > > + if (p) {
> > > p->private = order;
> > > + goto ref_tail_pages;
> > > + }
> > > return p;
> > > }
> > > @@ -120,6 +123,23 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> > > dma->vaddr = (unsigned long)vaddr | order;
> > > p->private = (unsigned long)dma;
> > > +
> > > +ref_tail_pages:
> > > + /*
> > > + * KVM requires mapped tail pages to be refcounted because put_page()
> > > + * is invoked on them in the end of the page fault handling, and thus,
> > > + * tail pages need to be protected from the premature releasing.
> > > + * In fact, KVM page fault handler refuses to map tail pages to guest
> > > + * if they aren't refcounted because hva_to_pfn_remapped() checks the
> > > + * refcount specifically for this case.
> > > + *
> > > + * In particular, unreferenced tail pages result in a KVM "Bad address"
> > > + * failure for VMMs that use VirtIO-GPU when guest's Mesa VirGL driver
> > > + * accesses mapped host TTM buffer that contains tail pages.
> > > + */
> > > + for (i = 1; i < 1 << order; i++)
> > > + page_ref_inc(p + i);
> > > +
> > > return p;
> > > error_free:
> > > @@ -133,6 +153,7 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
> > > {
> > > unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
> > > struct ttm_pool_dma *dma;
> > > + unsigned int i;
> > > void *vaddr;
> > > #ifdef CONFIG_X86
> > > @@ -142,6 +163,8 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
> > > if (caching != ttm_cached && !PageHighMem(p))
> > > set_pages_wb(p, 1 << order);
> > > #endif
> > > + for (i = 1; i < 1 << order; i++)
> > > + page_ref_dec(p + i);
> > > if (!pool || !pool->use_dma_alloc) {
> > > __free_pages(p, order);
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2022-09-06 21:01:57

by Daniel Vetter

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On Mon, Aug 15, 2022 at 12:05:19PM +0200, Christian König wrote:
> Am 15.08.22 um 11:54 schrieb Dmitry Osipenko:
> > Higher order pages allocated using alloc_pages() aren't refcounted and they
> > need to be refcounted, otherwise it's impossible to map them by KVM. This
> > patch sets the refcount of the tail pages and fixes the KVM memory mapping
> > faults.
> >
> > Without this change guest virgl driver can't map host buffers into guest
> > and can't provide OpenGL 4.5 profile support to the guest. The host
> > mappings are also needed for enabling the Venus driver using host GPU
> > drivers that are utilizing TTM.
> >
> > Based on a patch proposed by Trigger Huang.
>
> Well I can't count how often I have repeated this: This is an absolutely
> clear NAK!
>
> TTM pages are not reference counted in the first place and because of this
> giving them to virgl is illegal.
>
> Please immediately stop this completely broken approach. We have discussed
> this multiple times now.

Yeah we need to get this stuff closed for real by tagging them all with
VM_IO or VM_PFNMAP asap.

It seems to be a recurring amount of fun that people try to mmap dma-buf
and then call get_user_pages on them.

Which just doesn't work. I guess this is also why Rob Clark sent out that
dma-buf patch to expose mapping information (i.e. wc vs wb vs uncached).

There seems to be some serious bonghits going on :-/
-Daniel

>
> Regards,
> Christian.
>
> >
> > Cc: [email protected]
> > Cc: Trigger Huang <[email protected]>
> > Link: https://www.collabora.com/news-and-blog/blog/2021/11/26/venus-on-qemu-enabling-new-virtual-vulkan-driver/#qcom1343
> > Tested-by: Dmitry Osipenko <[email protected]> # AMDGPU (Qemu and crosvm)
> > Signed-off-by: Dmitry Osipenko <[email protected]>
> > ---
> > drivers/gpu/drm/ttm/ttm_pool.c | 25 ++++++++++++++++++++++++-
> > 1 file changed, 24 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> > index 21b61631f73a..11e92bb149c9 100644
> > --- a/drivers/gpu/drm/ttm/ttm_pool.c
> > +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> > @@ -81,6 +81,7 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> > unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
> > struct ttm_pool_dma *dma;
> > struct page *p;
> > + unsigned int i;
> > void *vaddr;
> > /* Don't set the __GFP_COMP flag for higher order allocations.
> > @@ -93,8 +94,10 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> > if (!pool->use_dma_alloc) {
> > p = alloc_pages(gfp_flags, order);
> > - if (p)
> > + if (p) {
> > p->private = order;
> > + goto ref_tail_pages;
> > + }
> > return p;
> > }
> > @@ -120,6 +123,23 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> > dma->vaddr = (unsigned long)vaddr | order;
> > p->private = (unsigned long)dma;
> > +
> > +ref_tail_pages:
> > + /*
> > + * KVM requires mapped tail pages to be refcounted because put_page()
> > + * is invoked on them in the end of the page fault handling, and thus,
> > + * tail pages need to be protected from the premature releasing.
> > + * In fact, KVM page fault handler refuses to map tail pages to guest
> > + * if they aren't refcounted because hva_to_pfn_remapped() checks the
> > + * refcount specifically for this case.
> > + *
> > + * In particular, unreferenced tail pages result in a KVM "Bad address"
> > + * failure for VMMs that use VirtIO-GPU when guest's Mesa VirGL driver
> > + * accesses mapped host TTM buffer that contains tail pages.
> > + */
> > + for (i = 1; i < 1 << order; i++)
> > + page_ref_inc(p + i);
> > +
> > return p;
> > error_free:
> > @@ -133,6 +153,7 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
> > {
> > unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
> > struct ttm_pool_dma *dma;
> > + unsigned int i;
> > void *vaddr;
> > #ifdef CONFIG_X86
> > @@ -142,6 +163,8 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
> > if (caching != ttm_cached && !PageHighMem(p))
> > set_pages_wb(p, 1 << order);
> > #endif
> > + for (i = 1; i < 1 << order; i++)
> > + page_ref_dec(p + i);
> > if (!pool || !pool->use_dma_alloc) {
> > __free_pages(p, order);
>

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2022-09-07 07:01:42

by Christian König

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On 06.09.22 at 22:05, Daniel Vetter wrote:
> On Tue, Sep 06, 2022 at 10:01:47PM +0200, Daniel Vetter wrote:
>> On Mon, Aug 15, 2022 at 12:05:19PM +0200, Christian König wrote:
>>> Am 15.08.22 um 11:54 schrieb Dmitry Osipenko:
>>>> Higher order pages allocated using alloc_pages() aren't refcounted and they
>>>> need to be refcounted, otherwise it's impossible to map them by KVM. This
>>>> patch sets the refcount of the tail pages and fixes the KVM memory mapping
>>>> faults.
>>>>
>>>> Without this change guest virgl driver can't map host buffers into guest
>>>> and can't provide OpenGL 4.5 profile support to the guest. The host
>>>> mappings are also needed for enabling the Venus driver using host GPU
>>>> drivers that are utilizing TTM.
>>>>
>>>> Based on a patch proposed by Trigger Huang.
>>> Well I can't count how often I have repeated this: This is an absolutely
>>> clear NAK!
>>>
>>> TTM pages are not reference counted in the first place and because of this
>>> giving them to virgl is illegal.
>>>
>>> Please immediately stop this completely broken approach. We have discussed
>>> this multiple times now.
>> Yeah we need to get this stuff closed for real by tagging them all with
>> VM_IO or VM_PFNMAP asap.
> For a bit more context: Anything mapping a bo should be VM_SPECIAL. And I
> think we should add the checks to the gem and dma-buf mmap functions to
> validate for that, and fix all the fallout.
>
> Otherwise this dragon keeps resurrecting ...
>
> VM_SPECIAL _will_ block get_user_pages, which will block everyone from
> even trying to refcount this stuff.
>
> Minimally we need to fix this for all ttm drivers, and it sounds like
> that's still not yet the case :-( Iirc last time around some funky amdkfd
> userspace was the hold-up because regressions?

My recollection is that Felix and I fixed this with a KFD specific
workaround. But I can double check with Felix on Monday.

Christian.

> -Daniel
>
>> It seems ot be a recurring amount of fun that people try to mmap dma-buf
>> and then call get_user_pages on them.
>>
>> Which just doesn't work. I guess this is also why Rob Clark send out that
>> dma-buf patch to expos mapping information (i.e. wc vs wb vs uncached).
>>
>> There seems to be some serious bonghits going on :-/
>> -Daniel
>>
>>> Regards,
>>> Christian.
>>>
>>>> Cc: [email protected]
>>>> Cc: Trigger Huang <[email protected]>
>>>> Link: https://www.collabora.com/news-and-blog/blog/2021/11/26/venus-on-qemu-enabling-new-virtual-vulkan-driver/#qcom1343
>>>> Tested-by: Dmitry Osipenko <[email protected]> # AMDGPU (Qemu and crosvm)
>>>> Signed-off-by: Dmitry Osipenko <[email protected]>
>>>> ---
>>>> drivers/gpu/drm/ttm/ttm_pool.c | 25 ++++++++++++++++++++++++-
>>>> 1 file changed, 24 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
>>>> index 21b61631f73a..11e92bb149c9 100644
>>>> --- a/drivers/gpu/drm/ttm/ttm_pool.c
>>>> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
>>>> @@ -81,6 +81,7 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
>>>> unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
>>>> struct ttm_pool_dma *dma;
>>>> struct page *p;
>>>> + unsigned int i;
>>>> void *vaddr;
>>>> /* Don't set the __GFP_COMP flag for higher order allocations.
>>>> @@ -93,8 +94,10 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
>>>> if (!pool->use_dma_alloc) {
>>>> p = alloc_pages(gfp_flags, order);
>>>> - if (p)
>>>> + if (p) {
>>>> p->private = order;
>>>> + goto ref_tail_pages;
>>>> + }
>>>> return p;
>>>> }
>>>> @@ -120,6 +123,23 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
>>>> dma->vaddr = (unsigned long)vaddr | order;
>>>> p->private = (unsigned long)dma;
>>>> +
>>>> +ref_tail_pages:
>>>> + /*
>>>> + * KVM requires mapped tail pages to be refcounted because put_page()
>>>> + * is invoked on them in the end of the page fault handling, and thus,
>>>> + * tail pages need to be protected from the premature releasing.
>>>> + * In fact, KVM page fault handler refuses to map tail pages to guest
>>>> + * if they aren't refcounted because hva_to_pfn_remapped() checks the
>>>> + * refcount specifically for this case.
>>>> + *
>>>> + * In particular, unreferenced tail pages result in a KVM "Bad address"
>>>> + * failure for VMMs that use VirtIO-GPU when guest's Mesa VirGL driver
>>>> + * accesses mapped host TTM buffer that contains tail pages.
>>>> + */
>>>> + for (i = 1; i < 1 << order; i++)
>>>> + page_ref_inc(p + i);
>>>> +
>>>> return p;
>>>> error_free:
>>>> @@ -133,6 +153,7 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
>>>> {
>>>> unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
>>>> struct ttm_pool_dma *dma;
>>>> + unsigned int i;
>>>> void *vaddr;
>>>> #ifdef CONFIG_X86
>>>> @@ -142,6 +163,8 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
>>>> if (caching != ttm_cached && !PageHighMem(p))
>>>> set_pages_wb(p, 1 << order);
>>>> #endif
>>>> + for (i = 1; i < 1 << order; i++)
>>>> + page_ref_dec(p + i);
>>>> if (!pool || !pool->use_dma_alloc) {
>>>> __free_pages(p, order);
>> --
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch

2022-09-08 11:32:18

by Rob Clark

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On Tue, Sep 6, 2022 at 1:01 PM Daniel Vetter <[email protected]> wrote:
>
> On Mon, Aug 15, 2022 at 12:05:19PM +0200, Christian König wrote:
> > Am 15.08.22 um 11:54 schrieb Dmitry Osipenko:
> > > Higher order pages allocated using alloc_pages() aren't refcounted and they
> > > need to be refcounted, otherwise it's impossible to map them by KVM. This
> > > patch sets the refcount of the tail pages and fixes the KVM memory mapping
> > > faults.
> > >
> > > Without this change guest virgl driver can't map host buffers into guest
> > > and can't provide OpenGL 4.5 profile support to the guest. The host
> > > mappings are also needed for enabling the Venus driver using host GPU
> > > drivers that are utilizing TTM.
> > >
> > > Based on a patch proposed by Trigger Huang.
> >
> > Well I can't count how often I have repeated this: This is an absolutely
> > clear NAK!
> >
> > TTM pages are not reference counted in the first place and because of this
> > giving them to virgl is illegal.
> >
> > Please immediately stop this completely broken approach. We have discussed
> > this multiple times now.
>
> Yeah we need to get this stuff closed for real by tagging them all with
> VM_IO or VM_PFNMAP asap.
>
> It seems ot be a recurring amount of fun that people try to mmap dma-buf
> and then call get_user_pages on them.
>
> Which just doesn't work. I guess this is also why Rob Clark send out that
> dma-buf patch to expos mapping information (i.e. wc vs wb vs uncached).

No, not really... my patch was simply so that the VMM side of virtgpu
could send the correct cache mode to the guest when handed a dma-buf
;-)

BR,
-R

>
> There seems to be some serious bonghits going on :-/
> -Daniel
>
> >
> > Regards,
> > Christian.
> >
> > >
> > > Cc: [email protected]
> > > Cc: Trigger Huang <[email protected]>
> > > Link: https://www.collabora.com/news-and-blog/blog/2021/11/26/venus-on-qemu-enabling-new-virtual-vulkan-driver/#qcom1343
> > > Tested-by: Dmitry Osipenko <[email protected]> # AMDGPU (Qemu and crosvm)
> > > Signed-off-by: Dmitry Osipenko <[email protected]>
> > > ---
> > > drivers/gpu/drm/ttm/ttm_pool.c | 25 ++++++++++++++++++++++++-
> > > 1 file changed, 24 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> > > index 21b61631f73a..11e92bb149c9 100644
> > > --- a/drivers/gpu/drm/ttm/ttm_pool.c
> > > +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> > > @@ -81,6 +81,7 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> > > unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
> > > struct ttm_pool_dma *dma;
> > > struct page *p;
> > > + unsigned int i;
> > > void *vaddr;
> > > /* Don't set the __GFP_COMP flag for higher order allocations.
> > > @@ -93,8 +94,10 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> > > if (!pool->use_dma_alloc) {
> > > p = alloc_pages(gfp_flags, order);
> > > - if (p)
> > > + if (p) {
> > > p->private = order;
> > > + goto ref_tail_pages;
> > > + }
> > > return p;
> > > }
> > > @@ -120,6 +123,23 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> > > dma->vaddr = (unsigned long)vaddr | order;
> > > p->private = (unsigned long)dma;
> > > +
> > > +ref_tail_pages:
> > > + /*
> > > + * KVM requires mapped tail pages to be refcounted because put_page()
> > > + * is invoked on them in the end of the page fault handling, and thus,
> > > + * tail pages need to be protected from the premature releasing.
> > > + * In fact, KVM page fault handler refuses to map tail pages to guest
> > > + * if they aren't refcounted because hva_to_pfn_remapped() checks the
> > > + * refcount specifically for this case.
> > > + *
> > > + * In particular, unreferenced tail pages result in a KVM "Bad address"
> > > + * failure for VMMs that use VirtIO-GPU when guest's Mesa VirGL driver
> > > + * accesses mapped host TTM buffer that contains tail pages.
> > > + */
> > > + for (i = 1; i < 1 << order; i++)
> > > + page_ref_inc(p + i);
> > > +
> > > return p;
> > > error_free:
> > > @@ -133,6 +153,7 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
> > > {
> > > unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
> > > struct ttm_pool_dma *dma;
> > > + unsigned int i;
> > > void *vaddr;
> > > #ifdef CONFIG_X86
> > > @@ -142,6 +163,8 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
> > > if (caching != ttm_cached && !PageHighMem(p))
> > > set_pages_wb(p, 1 << order);
> > > #endif
> > > + for (i = 1; i < 1 << order; i++)
> > > + page_ref_dec(p + i);
> > > if (!pool || !pool->use_dma_alloc) {
> > > __free_pages(p, order);
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

2023-01-11 17:31:06

by Sean Christopherson

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On Tue, Sep 06, 2022, Daniel Vetter wrote:
> On Tue, Sep 06, 2022 at 10:01:47PM +0200, Daniel Vetter wrote:
> > On Mon, Aug 15, 2022 at 12:05:19PM +0200, Christian König wrote:
> > > Am 15.08.22 um 11:54 schrieb Dmitry Osipenko:
> > > > Higher order pages allocated using alloc_pages() aren't refcounted and they
> > > > need to be refcounted, otherwise it's impossible to map them by KVM. This
> > > > patch sets the refcount of the tail pages and fixes the KVM memory mapping
> > > > faults.
> > > >
> > > > Without this change guest virgl driver can't map host buffers into guest
> > > > and can't provide OpenGL 4.5 profile support to the guest. The host
> > > > mappings are also needed for enabling the Venus driver using host GPU
> > > > drivers that are utilizing TTM.
> > > >
> > > > Based on a patch proposed by Trigger Huang.
> > >
> > > Well I can't count how often I have repeated this: This is an absolutely
> > > clear NAK!
> > >
> > > TTM pages are not reference counted in the first place and because of this
> > > giving them to virgl is illegal.
> > >
> > > Please immediately stop this completely broken approach. We have discussed
> > > this multiple times now.
> >
> > Yeah we need to get this stuff closed for real by tagging them all with
> > VM_IO or VM_PFNMAP asap.
>
> For a bit more context: Anything mapping a bo should be VM_SPECIAL. And I
> think we should add the checks to the gem and dma-buf mmap functions to
> validate for that, and fix all the fallout.
>
> Otherwise this dragon keeps resurrecting ...
>
> VM_SPECIAL _will_ block get_user_pages, which will block everyone from
> even trying to refcount this stuff.

FWIW, IIUC that won't change the KVM story. KVM acquires the PFN for these pages
via follow_pte(), not by gup(). Details are in a different strand of this thread[*].

If TTM pages aren't tied into mmu_notifiers, then I believe the only solution is
to not allow them to be mapped into user page tables. If they are tied into
mmu_notifiers, then this is fully a KVM limitation that we are (slowly) resolving.

[*] https://lore.kernel.org/all/[email protected]

2023-01-11 17:31:19

by Sean Christopherson

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

On Thu, Aug 18, 2022, Christian König wrote:
> Am 18.08.22 um 01:13 schrieb Dmitry Osipenko:
> > On 8/18/22 01:57, Dmitry Osipenko wrote:
> > > On 8/15/22 18:54, Dmitry Osipenko wrote:
> > > > On 8/15/22 17:57, Dmitry Osipenko wrote:
> > > > > On 8/15/22 16:53, Christian König wrote:
> > > > > > Am 15.08.22 um 15:45 schrieb Dmitry Osipenko:
> > > > > > > [SNIP]
> > > > > > > > Well that comment sounds like KVM is doing the right thing, so I'm
> > > > > > > > wondering what exactly is going on here.
> > > > > > > KVM actually doesn't hold the page reference, it takes the temporal
> > > > > > > reference during page fault and then drops the reference once page is
> > > > > > > mapped, IIUC. Is it still illegal for TTM? Or there is a possibility for
> > > > > > > a race condition here?
> > > > > > >
> > > > > > Well the question is why does KVM grab the page reference in the first
> > > > > > place?
> > > > > >
> > > > > > If that is to prevent the mapping from changing then yes that's illegal
> > > > > > and won't work. It can always happen that you grab the address, solve
> > > > > > the fault and then immediately fault again because the address you just
> > > > > > grabbed is invalidated.
> > > > > >
> > > > > > If it's for some other reason than we should probably investigate if we
> > > > > > shouldn't stop doing this.

...

> > > > If we need to bump the refcount only for VM_MIXEDMAP and not for
> > > > VM_PFNMAP, then perhaps we could add a flag for that to the kvm_main
> > > > code that will denote to kvm_release_page_clean whether it needs to put
> > > > the page?
> > > The other variant that kind of works is to mark TTM pages reserved using
> > > SetPageReserved/ClearPageReserved, telling KVM not to mess with the page
> > > struct. But the potential consequences of doing this are unclear to me.
> > >
> > > Christian, do you think we can do it?
> > Although, no. It also doesn't work with KVM without additional changes
> > to KVM.
>
> Well my fundamental problem is that I can't fit together why KVM is grabing
> a page reference in the first place.

It's to work around a deficiency in KVM.

> See the idea of the page reference is that you have one reference is that
> you count the reference so that the memory is not reused while you access
> it, e.g. for I/O or mapping it into different address spaces etc...
>
> But none of those use cases seem to apply to KVM. If I'm not totally
> mistaken in KVM you want to make sure that the address space mapping, e.g.
> the translation between virtual and physical address, don't change while you
> handle it, but grabbing a page reference is the completely wrong approach
> for that.

TL;DR: 100% agree, and we're working on fixing this in KVM, but we're still months
away from a full solution.

Yep. KVM uses mmu_notifiers to react to mapping changes, with a few caveats that
we are (slowly) fixing, though those caveats are only tangentially related.

The deficiency in KVM is that KVM's internal APIs to translate a virtual address
to a physical address spit out only the resulting host PFN. The details of _how_
that PFN was acquired are not captured. Specifically, KVM loses track of whether
or not a PFN was acquired via gup() or follow_pte() (KVM is very permissive when
it comes to backing guest memory).

Because gup() gifts the caller a reference, that means KVM also loses track of
whether or not KVM holds a page refcount. To avoid pinning guest memory, KVM does
quickly put the reference gifted by gup(), but because KVM doesn't _know_ if it
holds a reference, KVM uses a heuristic, which is essentially "is the PFN associated
with a 'normal' struct page?".

/*
* Returns a 'struct page' if the pfn is "valid" and backed by a refcounted
* page, NULL otherwise. Note, the list of refcounted PG_reserved page types
* is likely incomplete, it has been compiled purely through people wanting to
* back guest with a certain type of memory and encountering issues.
*/
struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn)

That heuristic also triggers if follow_pte() resolves to a PFN that is associated
with a "struct page", and so to avoid putting a reference it doesn't own, KVM does
the silly thing of manually getting a reference immediately after follow_pte().

And that in turn gets tripped up by non-refcounted tail pages because KVM sees a
normal, valid "struct page" and assumes it's refcounted. To fudge around that
issue, KVM requires "struct page" memory to be refcounted.

The long-term solution is to refactor KVM to precisely track whether or not KVM
holds a reference. Patches have been proposed to do exactly that[1], but they
were put on hold due to the aforementioned caveats with mmu_notifiers. The
caveats are that most flows where KVM plumbs a physical address into hardware
structures aren't wired up to KVM's mmu_notifier.

KVM could support non-refcounted struct page memory without first fixing the
mmu_notifier issues, but I was (and still am) concerned that that would create an
even larger hole in KVM until the mmu_notifier issues are sorted out[2].

[1] https://lore.kernel.org/all/[email protected]
[2] https://lore.kernel.org/all/[email protected]
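
To make the flow described above concrete, here is a simplified paraphrase of KVM's remapped-PFN path (not the verbatim kernel code; see hva_to_pfn_remapped() in virt/kvm/kvm_main.c):

static int hva_to_pfn_remapped_sketch(struct vm_area_struct *vma,
				      unsigned long addr, kvm_pfn_t *p_pfn)
{
	spinlock_t *ptl;
	pte_t *ptep;
	kvm_pfn_t pfn;
	int r;

	/* Resolve the PFN behind the VM_PFNMAP/VM_IO mapping. */
	r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
	if (r)
		return r;

	pfn = pte_pfn(*ptep);

	/*
	 * Callers unconditionally release the pfn later, so a reference is
	 * taken "just in case" the PFN is backed by a refcounted struct page.
	 * A non-refcounted tail page looks like a normal struct page here,
	 * so taking the reference is refused and the fault fails with
	 * -EFAULT -- the guest-visible "Bad address".
	 */
	if (!kvm_try_get_pfn(pfn))
		r = -EFAULT;

	pte_unmap_unlock(ptep, ptl);
	*p_pfn = pfn;
	return r;
}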

2023-01-11 22:34:42

by Dmitry Osipenko

Subject: Re: [PATCH v1] drm/ttm: Refcount allocated tail pages

Hello Sean,

On 1/11/23 20:05, Sean Christopherson wrote:
> On Thu, Aug 18, 2022, Christian König wrote:
>> Am 18.08.22 um 01:13 schrieb Dmitry Osipenko:
>>> On 8/18/22 01:57, Dmitry Osipenko wrote:
>>>> On 8/15/22 18:54, Dmitry Osipenko wrote:
>>>>> On 8/15/22 17:57, Dmitry Osipenko wrote:
>>>>>> On 8/15/22 16:53, Christian König wrote:
>>>>>>> Am 15.08.22 um 15:45 schrieb Dmitry Osipenko:
>>>>>>>> [SNIP]
>>>>>>>>> Well that comment sounds like KVM is doing the right thing, so I'm
>>>>>>>>> wondering what exactly is going on here.
>>>>>>>> KVM actually doesn't hold the page reference, it takes the temporal
>>>>>>>> reference during page fault and then drops the reference once page is
>>>>>>>> mapped, IIUC. Is it still illegal for TTM? Or there is a possibility for
>>>>>>>> a race condition here?
>>>>>>>>
>>>>>>> Well the question is why does KVM grab the page reference in the first
>>>>>>> place?
>>>>>>>
>>>>>>> If that is to prevent the mapping from changing then yes that's illegal
>>>>>>> and won't work. It can always happen that you grab the address, solve
>>>>>>> the fault and then immediately fault again because the address you just
>>>>>>> grabbed is invalidated.
>>>>>>>
>>>>>>> If it's for some other reason than we should probably investigate if we
>>>>>>> shouldn't stop doing this.
>
> ...
>
>>>>> If we need to bump the refcount only for VM_MIXEDMAP and not for
>>>>> VM_PFNMAP, then perhaps we could add a flag for that to the kvm_main
>>>>> code that will denote to kvm_release_page_clean whether it needs to put
>>>>> the page?
>>>> The other variant that kind of works is to mark TTM pages reserved using
>>>> SetPageReserved/ClearPageReserved, telling KVM not to mess with the page
>>>> struct. But the potential consequences of doing this are unclear to me.
>>>>
>>>> Christian, do you think we can do it?
>>> Although, no. It also doesn't work with KVM without additional changes
>>> to KVM.
>>
>> Well my fundamental problem is that I can't fit together why KVM is grabing
>> a page reference in the first place.
>
> It's to workaround a deficiency in KVM.
>
>> See the idea of the page reference is that you have one reference is that
>> you count the reference so that the memory is not reused while you access
>> it, e.g. for I/O or mapping it into different address spaces etc...
>>
>> But none of those use cases seem to apply to KVM. If I'm not totally
>> mistaken in KVM you want to make sure that the address space mapping, e.g.
>> the translation between virtual and physical address, don't change while you
>> handle it, but grabbing a page reference is the completely wrong approach
>> for that.
>
> TL;DR: 100% agree, and we're working on fixing this in KVM, but were still months
> away from a full solution.
>
> Yep. KVM uses mmu_notifiers to react to mapping changes, with a few caveats that
> we are (slowly) fixing, though those caveats are only tangentially related.
>
> The deficiency in KVM is that KVM's internal APIs to translate a virtual address
> to a physical address spit out only the resulting host PFN. The details of _how_
> that PFN was acquired are not captured. Specifically, KVM loses track of whether
> or not a PFN was acquired via gup() or follow_pte() (KVM is very permissive when
> it comes to backing guest memory).
>
> Because gup() gifts the caller a reference, that means KVM also loses track of
> whether or not KVM holds a page refcount. To avoid pinning guest memory, KVM does
> quickly put the reference gifted by gup(), but because KVM doesn't _know_ if it
> holds a reference, KVM uses a heuristic, which is essentially "is the PFN associated
> with a 'normal' struct page?".
>
> /*
> * Returns a 'struct page' if the pfn is "valid" and backed by a refcounted
> * page, NULL otherwise. Note, the list of refcounted PG_reserved page types
> * is likely incomplete, it has been compiled purely through people wanting to
> * back guest with a certain type of memory and encountering issues.
> */
> struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn)
>
> That heuristic also triggers if follow_pte() resolves to a PFN that is associated
> with a "struct page", and so to avoid putting a reference it doesn't own, KVM does
> the silly thing of manually getting a reference immediately after follow_pte().
>
> And that in turn gets tripped up non-refcounted tail pages because KVM sees a
> normal, valid "struct page" and assumes it's refcounted. To fudge around that
> issue, KVM requires "struct page" memory to be refcounted.
>
> The long-term solution is to refactor KVM to precisely track whether or not KVM
> holds a reference. Patches have been prosposed to do exactly that[1], but they
> were put on hold due to the aforementioned caveats with mmu_notifiers. The
> caveats are that most flows where KVM plumbs a physical address into hardware
> structures aren't wired up to KVM's mmu_notifier.
>
> KVM could support non-refcounted struct page memory without first fixing the
> mmu_notifier issues, but I was (and still am) concerned that that would create an
> even larger hole in KVM until the mmu_notifier issues are sorted out[2].
>
> [1] https://lore.kernel.org/all/[email protected]
> [2] https://lore.kernel.org/all/[email protected]

Thanks for the summary! Indeed, it's the KVM side that needs to be
patched. A couple of months ago I found that a non-TTM i915 driver also
suffers from the same problem because it uses huge pages that we want to
map into a guest. So we definitely will need to fix the KVM side.

--
Best regards,
Dmitry