From: Shreenidhi Shedi <[email protected]>
Using the BIT() macro improves readability, and it uses an unsigned long
for the shift, which is an added advantage.

The kernel builds with the -fno-strict-overflow CFLAG, hence shifting a
signed integer by 31 bits is not an issue in this case.

Signed-off-by: Shreenidhi Shedi <[email protected]>
---
arch/x86/kernel/cpu/vmware.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c
index c04b933f4..02039ec35 100644
--- a/arch/x86/kernel/cpu/vmware.c
+++ b/arch/x86/kernel/cpu/vmware.c
@@ -476,8 +476,8 @@ static bool __init vmware_legacy_x2apic_available(void)
{
uint32_t eax, ebx, ecx, edx;
VMWARE_CMD(GETVCPU_INFO, eax, ebx, ecx, edx);
- return (eax & (1 << VMWARE_CMD_VCPU_RESERVED)) == 0 &&
- (eax & (1 << VMWARE_CMD_LEGACY_X2APIC)) != 0;
+ return !(eax & BIT(VMWARE_CMD_VCPU_RESERVED)) &&
+ (eax & BIT(VMWARE_CMD_LEGACY_X2APIC));
}
#ifdef CONFIG_AMD_MEM_ENCRYPT
--
2.36.1
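
For illustration only (this is not part of the patch): in the kernel, BIT(nr)
expands to roughly (1UL << (nr)), so the shift is performed on an unsigned long
rather than on a signed int. The standalone userspace sketch below uses a
hypothetical MY_BIT() stand-in and a made-up register value to show that both
forms test bit 31 the same way, while only the unsigned shift is well defined
by the C standard without relying on -fno-strict-overflow.

/* Standalone sketch; MY_BIT() is a stand-in for the kernel's BIT(). */
#include <stdio.h>
#include <stdint.h>

#define MY_BIT(nr)      (1UL << (nr))

int main(void)
{
        uint32_t eax = UINT32_C(0x80000000);    /* pretend bit 31 is set */

        /*
         * (1 << 31) shifts a 1 into the sign bit of a signed int, which is
         * undefined behavior in standard C; the kernel only tolerates this
         * form because it builds with -fno-strict-overflow.
         */
        printf("signed shift:   %d\n", (eax & (1 << 31)) != 0);

        /* MY_BIT(31) shifts an unsigned long, which is always well defined. */
        printf("unsigned shift: %d\n", (eax & MY_BIT(31)) != 0);

        return 0;
}

Both lines print 1 on common compilers; the point of the patch is that the
second form does not depend on compiler-specific overflow behavior.
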
On 6/1/22 3:18 AM, Shreenidhi Shedi wrote:
> From: Shreenidhi Shedi <[email protected]>
>
> Using the BIT() macro improves readability, and it uses an unsigned long
> for the shift, which is an added advantage.
>
> The kernel builds with the -fno-strict-overflow CFLAG, hence shifting a
> signed integer by 31 bits is not an issue in this case.
>
> Signed-off-by: Shreenidhi Shedi <[email protected]>
> ---
Looks good to me.
Reviewed-by: Srivatsa S. Bhat (VMware) <[email protected]>
> arch/x86/kernel/cpu/vmware.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c
> index c04b933f4..02039ec35 100644
> --- a/arch/x86/kernel/cpu/vmware.c
> +++ b/arch/x86/kernel/cpu/vmware.c
> @@ -476,8 +476,8 @@ static bool __init vmware_legacy_x2apic_available(void)
> {
> uint32_t eax, ebx, ecx, edx;
> VMWARE_CMD(GETVCPU_INFO, eax, ebx, ecx, edx);
> - return (eax & (1 << VMWARE_CMD_VCPU_RESERVED)) == 0 &&
> - (eax & (1 << VMWARE_CMD_LEGACY_X2APIC)) != 0;
> + return !(eax & BIT(VMWARE_CMD_VCPU_RESERVED)) &&
> + (eax & BIT(VMWARE_CMD_LEGACY_X2APIC));
> }
>
> #ifdef CONFIG_AMD_MEM_ENCRYPT
> --
> 2.36.1
>
Regards,
Srivatsa
VMware Photon OS
The following commit has been merged into the x86/vmware branch of tip:

Commit-ID: 4745ca43104b422354f06dc814d3f13661f217af
Gitweb: https://git.kernel.org/tip/4745ca43104b422354f06dc814d3f13661f217af
Author: Shreenidhi Shedi <[email protected]>
AuthorDate: Wed, 01 Jun 2022 15:48:20 +05:30
Committer: Borislav Petkov <[email protected]>
CommitterDate: Wed, 22 Jun 2022 11:23:14 +02:00

x86/vmware: Use BIT() macro for shifting

VMWARE_CMD_VCPU_RESERVED is bit 31, and shifting an int by that amount
would mean undefined behavior, but the kernel is built with
-fno-strict-overflow, which will wrap around using two's complement.

Use the BIT() macro to improve readability and avoid any potential
overflow confusion, because it uses an unsigned long.

[ bp: Clarify commit message. ]

Signed-off-by: Shreenidhi Shedi <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Srivatsa S. Bhat (VMware) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/kernel/cpu/vmware.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c
index c04b933..02039ec 100644
--- a/arch/x86/kernel/cpu/vmware.c
+++ b/arch/x86/kernel/cpu/vmware.c
@@ -476,8 +476,8 @@ static bool __init vmware_legacy_x2apic_available(void)
{
uint32_t eax, ebx, ecx, edx;
VMWARE_CMD(GETVCPU_INFO, eax, ebx, ecx, edx);
- return (eax & (1 << VMWARE_CMD_VCPU_RESERVED)) == 0 &&
- (eax & (1 << VMWARE_CMD_LEGACY_X2APIC)) != 0;
+ return !(eax & BIT(VMWARE_CMD_VCPU_RESERVED)) &&
+ (eax & BIT(VMWARE_CMD_LEGACY_X2APIC));
}
#ifdef CONFIG_AMD_MEM_ENCRYPT
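
As a sanity check on the semantics of the refactoring, the hedged sketch below
compares the old and new return expressions for all combinations of the two
bits. It assumes VMWARE_CMD_VCPU_RESERVED is bit 31 (as stated in the commit
message) and uses a made-up bit position for VMWARE_CMD_LEGACY_X2APIC; 1u is
used in the old form so that the standalone check itself stays well defined.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VCPU_RESERVED_BIT       31      /* bit 31 per the commit message */
#define LEGACY_X2APIC_BIT       3       /* hypothetical position, for illustration only */

static bool old_form(uint32_t eax)
{
        return (eax & (1u << VCPU_RESERVED_BIT)) == 0 &&
               (eax & (1u << LEGACY_X2APIC_BIT)) != 0;
}

static bool new_form(uint32_t eax)
{
        return !(eax & (1UL << VCPU_RESERVED_BIT)) &&
               (eax & (1UL << LEGACY_X2APIC_BIT));
}

int main(void)
{
        /* Exercise all four combinations of the two bits. */
        for (int r = 0; r < 2; r++)
                for (int x = 0; x < 2; x++) {
                        uint32_t eax = ((uint32_t)r << VCPU_RESERVED_BIT) |
                                       ((uint32_t)x << LEGACY_X2APIC_BIT);
                        assert(old_form(eax) == new_form(eax));
                }
        return 0;
}

The assertion never fires, confirming that dropping the explicit == 0 / != 0
comparisons does not change the result of the function.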