2022-01-30 23:41:43

by Vitaly Kuznetsov

Subject: [PATCH 0/2] Drivers: hv: Minor cleanup around init_vp_index()

Two minor changes with no functional change intended:
- s,alloced,allocated
- compare cpumasks and not their weights

Vitaly Kuznetsov (2):
Drivers: hv: Rename 'alloced' to 'allocated'
Drivers: hv: Compare cpumasks and not their weights in init_vp_index()

drivers/hv/channel_mgmt.c | 19 +++++++++----------
drivers/hv/hyperv_vmbus.h | 14 +++++++-------
drivers/hv/vmbus_drv.c    |  2 +-
3 files changed, 17 insertions(+), 18 deletions(-)

--
2.34.1


2022-01-30 23:41:43

by Vitaly Kuznetsov

Subject: [PATCH 2/2] Drivers: hv: Compare cpumasks and not their weights in init_vp_index()

The condition is supposed to check whether 'allocated_mask' got fully
exhausted, i.e. there's no free CPU on the NUMA node left so we have
to use one of the already used CPUs. As only bits which correspond
to CPUs from 'cpumask_of_node(numa_node)' get set in 'allocated_mask',
checking that the weights are equal is technically correct but not obvious.
Let's compare cpumasks directly.

No functional change intended.

Suggested-by: Michael Kelley <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
drivers/hv/channel_mgmt.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 52cf6ae525e9..26d269ba947c 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -762,8 +762,7 @@ static void init_vp_index(struct vmbus_channel *channel)
}
allocated_mask = &hv_context.hv_numa_map[numa_node];

- if (cpumask_weight(allocated_mask) ==
- cpumask_weight(cpumask_of_node(numa_node))) {
+ if (cpumask_equal(allocated_mask, cpumask_of_node(numa_node))) {
/*
* We have cycled through all the CPUs in the node;
* reset the allocated map.
--
2.34.1

2022-01-31 11:11:10

by Michael Kelley (LINUX)

Subject: RE: [PATCH 2/2] Drivers: hv: Compare cpumasks and not their weights in init_vp_index()

From: Vitaly Kuznetsov <[email protected]> Sent: Friday, January 28, 2022 2:34 AM
>
> The condition is supposed to check whether 'allocated_mask' got fully
> exhausted, i.e. there's no free CPU on the NUMA node left so we have
> to use one of the already used CPUs. As only bits which correspond
> to CPUs from 'cpumask_of_node(numa_node)' get set in 'allocated_mask',
> checking that the weights are equal is technically correct but not obvious.
> Let's compare cpumasks directly.
>
> No functional change intended.
>
> Suggested-by: Michael Kelley <[email protected]>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---
> drivers/hv/channel_mgmt.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
> index 52cf6ae525e9..26d269ba947c 100644
> --- a/drivers/hv/channel_mgmt.c
> +++ b/drivers/hv/channel_mgmt.c
> @@ -762,8 +762,7 @@ static void init_vp_index(struct vmbus_channel *channel)
> }
> allocated_mask = &hv_context.hv_numa_map[numa_node];
>
> - if (cpumask_weight(allocated_mask) ==
> - cpumask_weight(cpumask_of_node(numa_node))) {
> + if (cpumask_equal(allocated_mask, cpumask_of_node(numa_node))) {
> /*
> * We have cycled through all the CPUs in the node;
> * reset the allocated map.
> --
> 2.34.1

Reviewed-by: Michael Kelley <[email protected]>

2022-02-04 20:03:00

by Wei Liu

Subject: Re: [PATCH 0/2] Drivers: hv: Minor cleanup around init_vp_index()

On Fri, Jan 28, 2022 at 11:34:10AM +0100, Vitaly Kuznetsov wrote:
> Two minor changes with no functional change intended:
> - s,alloced,allocated
> - compare cpumasks and not their weights
>
> Vitaly Kuznetsov (2):
> Drivers: hv: Rename 'alloced' to 'allocated'
> Drivers: hv: Compare cpumasks and not their weights in init_vp_index()

Series applied to hyperv-next. Thanks.

>
> drivers/hv/channel_mgmt.c | 19 +++++++++----------
> drivers/hv/hyperv_vmbus.h | 14 +++++++-------
> drivers/hv/vmbus_drv.c | 2 +-
> 3 files changed, 17 insertions(+), 18 deletions(-)
>
> --
> 2.34.1
>