It was noticed that the output of 'free' on a Hyper-V guest includes
ballooned-out memory in 'total', which contradicts what other ballooning
drivers (e.g. virtio-balloon/virtio-mem/xen balloon) do.
Vitaly Kuznetsov (2):
hv_balloon: simplify math in alloc_balloon_pages()
hv_balloon: do adjust_managed_page_count() when
ballooning/un-ballooning
drivers/hv/hv_balloon.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
--
2.26.2
'alloc_unit' in alloc_balloon_pages() is either '512' for 2M allocations or
'1' for 4k allocations. So
1 << get_order(alloc_unit << PAGE_SHIFT)
equals 'alloc_unit', and the for loop simply sets all of them offline.
Simplify the math to improve readability.
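For instance, assuming 4k base pages (PAGE_SHIFT == 12):

	alloc_unit = 512: alloc_unit << PAGE_SHIFT = 2M,
			  get_order(2M) = 9, 1 << 9 = 512 = alloc_unit
	alloc_unit = 1:   alloc_unit << PAGE_SHIFT = 4k,
			  get_order(4k) = 0, 1 << 0 = 1 = alloc_unit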
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
drivers/hv/hv_balloon.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index eb56e09ae15f..da3b6bd2367c 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -1238,7 +1238,7 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
split_page(pg, get_order(alloc_unit << PAGE_SHIFT));
/* mark all pages offline */
- for (j = 0; j < (1 << get_order(alloc_unit << PAGE_SHIFT)); j++)
+ for (j = 0; j < alloc_unit; j++)
__SetPageOffline(pg + j);
bl_resp->range_count++;
--
2.26.2
Unlike the virtio_balloon/virtio_mem/xen balloon drivers, the Hyper-V
balloon driver does not adjust the managed page count when
ballooning/un-ballooning, which leads to incorrect stats being reported,
e.g. unexpected 'free' output.
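As a hypothetical illustration: a guest booted with 4G that later has 2G
ballooned out keeps showing ~4G in 'total', even though only ~2G is
actually usable by the guest.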
Note, the calculation in post_status() seems to remain correct: ballooned out
pages are never 'available' and we manually add dm->num_pages_ballooned to
'committed'.
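For reference, the committed-pages accounting in post_status() looks
roughly like this (a simplified sketch, not the literal driver code):

	/*
	 * Ballooned-out pages were returned to the host; they are not
	 * 'available', so count them back in via dm->num_pages_ballooned
	 * when reporting committed memory.
	 */
	num_pages_committed = vm_memory_committed() +
			      dm->num_pages_ballooned;

vm_memory_committed() does not depend on totalram_pages, so the
adjust_managed_page_count() calls added here (which only update the
zone's managed page counter and totalram_pages, i.e. what 'free'
reports as 'total') leave this calculation intact.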
Suggested-by: David Hildenbrand <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
drivers/hv/hv_balloon.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index da3b6bd2367c..8c471823a5af 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -1198,6 +1198,7 @@ static void free_balloon_pages(struct hv_dynmem_device *dm,
__ClearPageOffline(pg);
__free_page(pg);
dm->num_pages_ballooned--;
+ adjust_managed_page_count(pg, 1);
}
}
@@ -1238,8 +1239,10 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
split_page(pg, get_order(alloc_unit << PAGE_SHIFT));
/* mark all pages offline */
- for (j = 0; j < alloc_unit; j++)
+ for (j = 0; j < alloc_unit; j++) {
__SetPageOffline(pg + j);
+ adjust_managed_page_count(pg + j, -1);
+ }
bl_resp->range_count++;
bl_resp->range_array[i].finfo.start_page =
--
2.26.2
On 02.12.20 17:12, Vitaly Kuznetsov wrote:
> Unlike the virtio_balloon/virtio_mem/xen balloon drivers, the Hyper-V
> balloon driver does not adjust the managed page count when
> ballooning/un-ballooning, which leads to incorrect stats being reported,
> e.g. unexpected 'free' output.
>
> Note, the calculation in post_status() seems to remain correct: ballooned out
> pages are never 'available' and we manually add dm->num_pages_ballooned to
> 'committed'.
>
> Suggested-by: David Hildenbrand <[email protected]>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> drivers/hv/hv_balloon.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
> index da3b6bd2367c..8c471823a5af 100644
> --- a/drivers/hv/hv_balloon.c
> +++ b/drivers/hv/hv_balloon.c
> @@ -1198,6 +1198,7 @@ static void free_balloon_pages(struct hv_dynmem_device *dm,
> __ClearPageOffline(pg);
> __free_page(pg);
> dm->num_pages_ballooned--;
> + adjust_managed_page_count(pg, 1);
> }
> }
>
> @@ -1238,8 +1239,10 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
> split_page(pg, get_order(alloc_unit << PAGE_SHIFT));
>
> /* mark all pages offline */
> - for (j = 0; j < alloc_unit; j++)
> + for (j = 0; j < alloc_unit; j++) {
> __SetPageOffline(pg + j);
> + adjust_managed_page_count(pg + j, -1);
> + }
>
> bl_resp->range_count++;
> bl_resp->range_array[i].finfo.start_page =
>
I assume this has been properly tested such that it does not change the
system behavior regarding when/how Hyper-V decides to add/remove memory.
LGTM
Reviewed-by: David Hildenbrand <[email protected]>
--
Thanks,
David / dhildenb
David Hildenbrand <[email protected]> writes:
> On 02.12.20 17:12, Vitaly Kuznetsov wrote:
>> Unlike the virtio_balloon/virtio_mem/xen balloon drivers, the Hyper-V
>> balloon driver does not adjust the managed page count when
>> ballooning/un-ballooning, which leads to incorrect stats being reported,
>> e.g. unexpected 'free' output.
>>
>> Note, the calculation in post_status() seems to remain correct: ballooned out
>> pages are never 'available' and we manually add dm->num_pages_ballooned to
>> 'committed'.
>>
>> Suggested-by: David Hildenbrand <[email protected]>
>> Signed-off-by: Vitaly Kuznetsov <[email protected]>
>> ---
>> drivers/hv/hv_balloon.c | 5 ++++-
>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
>> index da3b6bd2367c..8c471823a5af 100644
>> --- a/drivers/hv/hv_balloon.c
>> +++ b/drivers/hv/hv_balloon.c
>> @@ -1198,6 +1198,7 @@ static void free_balloon_pages(struct hv_dynmem_device *dm,
>> __ClearPageOffline(pg);
>> __free_page(pg);
>> dm->num_pages_ballooned--;
>> + adjust_managed_page_count(pg, 1);
>> }
>> }
>>
>> @@ -1238,8 +1239,10 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
>> split_page(pg, get_order(alloc_unit << PAGE_SHIFT));
>>
>> /* mark all pages offline */
>> - for (j = 0; j < alloc_unit; j++)
>> + for (j = 0; j < alloc_unit; j++) {
>> __SetPageOffline(pg + j);
>> + adjust_managed_page_count(pg + j, -1);
>> + }
>>
>> bl_resp->range_count++;
>> bl_resp->range_array[i].finfo.start_page =
>>
>
> I assume this has been properly tested such that it does not change the
> system behavior regarding when/how Hyper-V decides to add/remove memory.
>
I'm always reluctant to confirm 'proper testing' as, no matter how small
and 'obvious' the change is, regressions keep happening :-) But yes, this
was tested on a Hyper-V host under 'stress', and I observed 'free' while
the balloon was both inflated and deflated; the values looked sane.
> LGTM
>
> Reviewed-by: David Hildenbrand <[email protected]>
Thanks!
--
Vitaly
On 03.12.20 18:49, Vitaly Kuznetsov wrote:
> David Hildenbrand <[email protected]> writes:
>
>> On 02.12.20 17:12, Vitaly Kuznetsov wrote:
>>> Unlike the virtio_balloon/virtio_mem/xen balloon drivers, the Hyper-V
>>> balloon driver does not adjust the managed page count when
>>> ballooning/un-ballooning, which leads to incorrect stats being reported,
>>> e.g. unexpected 'free' output.
>>>
>>> Note, the calculation in post_status() seems to remain correct: ballooned out
>>> pages are never 'available' and we manually add dm->num_pages_ballooned to
>>> 'committed'.
>>>
>>> Suggested-by: David Hildenbrand <[email protected]>
>>> Signed-off-by: Vitaly Kuznetsov <[email protected]>
>>> ---
>>> drivers/hv/hv_balloon.c | 5 ++++-
>>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
>>> index da3b6bd2367c..8c471823a5af 100644
>>> --- a/drivers/hv/hv_balloon.c
>>> +++ b/drivers/hv/hv_balloon.c
>>> @@ -1198,6 +1198,7 @@ static void free_balloon_pages(struct hv_dynmem_device *dm,
>>> __ClearPageOffline(pg);
>>> __free_page(pg);
>>> dm->num_pages_ballooned--;
>>> + adjust_managed_page_count(pg, 1);
>>> }
>>> }
>>>
>>> @@ -1238,8 +1239,10 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
>>> split_page(pg, get_order(alloc_unit << PAGE_SHIFT));
>>>
>>> /* mark all pages offline */
>>> - for (j = 0; j < alloc_unit; j++)
>>> + for (j = 0; j < alloc_unit; j++) {
>>> __SetPageOffline(pg + j);
>>> + adjust_managed_page_count(pg + j, -1);
>>> + }
>>>
>>> bl_resp->range_count++;
>>> bl_resp->range_array[i].finfo.start_page =
>>>
>>
>> I assume this has been properly tested such that it does not change the
>> system behavior regarding when/how Hyper-V decides to add/remove memory.
>>
>
> I'm always reluctant to confirm 'proper testing' as, no matter how small
> and 'obvious' the change is, regressions keep happening :-) But yes, this
> was tested on a Hyper-V host under 'stress', and I observed 'free' while
> the balloon was both inflated and deflated; the values looked sane.
That's what I wanted to hear ;)
--
Thanks,
David / dhildenb
On 02.12.20 17:12, Vitaly Kuznetsov wrote:
> 'alloc_unit' in alloc_balloon_pages() is either '512' for 2M allocations or
> '1' for 4k allocations. So
>
> 1 << get_order(alloc_unit << PAGE_SHIFT)
>
> equals 'alloc_unit', and the for loop simply sets all of them offline.
> Simplify the math to improve readability.
>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> drivers/hv/hv_balloon.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
> index eb56e09ae15f..da3b6bd2367c 100644
> --- a/drivers/hv/hv_balloon.c
> +++ b/drivers/hv/hv_balloon.c
> @@ -1238,7 +1238,7 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
> split_page(pg, get_order(alloc_unit << PAGE_SHIFT));
>
> /* mark all pages offline */
> - for (j = 0; j < (1 << get_order(alloc_unit << PAGE_SHIFT)); j++)
> + for (j = 0; j < alloc_unit; j++)
> __SetPageOffline(pg + j);
>
> bl_resp->range_count++;
>
Right, alloc_unit is in multiples of 4k pages, so it can be used directly
for page ranges in the deflation/inflation paths.
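(E.g. with alloc_unit = 512, a single entry in bl_resp->range_array covers
512 consecutive 4k page frames, i.e. exactly one 2M allocation.)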
Reviewed-by: David Hildenbrand <[email protected]>
--
Thanks,
David / dhildenb
On Wed, Dec 02, 2020 at 05:12:43PM +0100, Vitaly Kuznetsov wrote:
> It was noticed that the output of 'free' on a Hyper-V guest includes
> ballooned-out memory in 'total', which contradicts what other ballooning
> drivers (e.g. virtio-balloon/virtio-mem/xen balloon) do.
>
> Vitaly Kuznetsov (2):
> hv_balloon: simplify math in alloc_balloon_pages()
> hv_balloon: do adjust_managed_page_count() when
> ballooning/un-ballooning
LGTM.
I will wait for a few more days before applying this series to
hyperv-next.
Wei.
On Wed, Dec 09, 2020 at 01:17:18PM +0000, Wei Liu wrote:
> On Wed, Dec 02, 2020 at 05:12:43PM +0100, Vitaly Kuznetsov wrote:
> > It was noticed that the output of 'free' on a Hyper-V guest includes
> > ballooned-out memory in 'total', which contradicts what other ballooning
> > drivers (e.g. virtio-balloon/virtio-mem/xen balloon) do.
> >
> > Vitaly Kuznetsov (2):
> > hv_balloon: simplify math in alloc_balloon_pages()
> > hv_balloon: do adjust_managed_page_count() when
> > ballooning/un-ballooning
>
> LGTM.
>
> I will wait for a few more days before applying this series to
> hyperv-next.
Applied to hyperv-next.
Wei.