2022-09-26 15:53:16

by Juergen Gross

Subject: [PATCH 0/3] xen/pv: sanitize xen pv guest msr accesses

Historically, when running as a Xen PV guest, all MSR accesses have silently
swallowed any GP faults, even when the kernel was not using the *msr_safe()
access functions.

Change that by making the behavior controllable via a Kconfig option and
via a boot parameter.

This will help to find code paths where MSRs not emulated by the hypervisor
are being accessed under Xen.
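
As an illustration only (patch 3 is not quoted here, so the parameter name,
Kconfig symbol and variable below are assumptions), such a boot parameter is
typically wired up via early_param():

/* Sketch, not the posted patch 3 code. */
static bool xen_msr_safe = IS_ENABLED(CONFIG_XEN_PV_MSR_SAFE);

static int __init parse_xen_msr_safe(char *str)
{
	if (str)
		return kstrtobool(str, &xen_msr_safe);

	return -EINVAL;
}
early_param("xen_msr_safe", parse_xen_msr_safe);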

Juergen Gross (3):
xen/pv: allow pmu msr accesses to cause GP
xen/pv: refactor msr access functions to support safe and unsafe
accesses
xen/pv: support selecting safe/unsafe msr accesses

.../admin-guide/kernel-parameters.txt | 6 ++
arch/x86/xen/Kconfig | 9 ++
arch/x86/xen/enlighten_pv.c | 97 +++++++++++++------
arch/x86/xen/pmu.c | 44 +++++----
4 files changed, 110 insertions(+), 46 deletions(-)

--
2.35.3


2022-09-26 16:25:10

by Juergen Gross

Subject: [PATCH 1/3] xen/pv: allow pmu msr accesses to cause GP

Today pmu_msr_read() and pmu_msr_write() fall back to the safe variants
of read/write MSR in case the MSR access isn't emulated via Xen. Allow
the caller to select the potentially faulting variant by passing NULL
for the error pointer.

Remove one level of indentation by restructuring the code a little bit.
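
As an example of the new calling convention (illustrative only, the
surrounding caller is not part of this patch), a caller can now choose
between the faulting and the safe variant:

	uint64_t val;
	int err;

	/* err == NULL: a non-emulated PMU MSR is read via the faulting variant. */
	if (!pmu_msr_read(msr, &val, NULL))
		val = native_read_msr(msr);

	/* err != NULL: faults are caught and reported via *err instead. */
	if (!pmu_msr_read(msr, &val, &err))
		val = native_read_msr_safe(msr, &err);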

Signed-off-by: Juergen Gross <[email protected]>
---
arch/x86/xen/pmu.c | 44 ++++++++++++++++++++++++++------------------
1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/arch/x86/xen/pmu.c b/arch/x86/xen/pmu.c
index 21ecbe754cb2..34b4144f6041 100644
--- a/arch/x86/xen/pmu.c
+++ b/arch/x86/xen/pmu.c
@@ -293,22 +293,24 @@ static bool xen_amd_pmu_emulate(unsigned int msr, u64 *val, bool is_read)
bool pmu_msr_read(unsigned int msr, uint64_t *val, int *err)
{
if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
- if (is_amd_pmu_msr(msr)) {
- if (!xen_amd_pmu_emulate(msr, val, 1))
- *val = native_read_msr_safe(msr, err);
- return true;
+ if (!is_amd_pmu_msr(msr))
+ return false;
+ if (!xen_amd_pmu_emulate(msr, val, 1)) {
+ *val = err ? native_read_msr_safe(msr, err)
+ : native_read_msr(msr);
}
+ return true;
} else {
int type, index;

- if (is_intel_pmu_msr(msr, &type, &index)) {
- if (!xen_intel_pmu_emulate(msr, val, type, index, 1))
- *val = native_read_msr_safe(msr, err);
- return true;
+ if (!is_intel_pmu_msr(msr, &type, &index))
+ return false;
+ if (!xen_intel_pmu_emulate(msr, val, type, index, 1)) {
+ *val = err ? native_read_msr_safe(msr, err)
+ : native_read_msr(msr);
}
+ return true;
}
-
- return false;
}

bool pmu_msr_write(unsigned int msr, uint32_t low, uint32_t high, int *err)
@@ -316,22 +318,28 @@ bool pmu_msr_write(unsigned int msr, uint32_t low, uint32_t high, int *err)
uint64_t val = ((uint64_t)high << 32) | low;

if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
- if (is_amd_pmu_msr(msr)) {
- if (!xen_amd_pmu_emulate(msr, &val, 0))
+ if (!is_amd_pmu_msr(msr))
+ return false;
+ if (!xen_amd_pmu_emulate(msr, &val, 0)) {
+ if (err)
*err = native_write_msr_safe(msr, low, high);
- return true;
+ else
+ native_write_msr(msr, low, high);
}
+ return true;
} else {
int type, index;

- if (is_intel_pmu_msr(msr, &type, &index)) {
- if (!xen_intel_pmu_emulate(msr, &val, type, index, 0))
+ if (!is_intel_pmu_msr(msr, &type, &index))
+ return false;
+ if (!xen_intel_pmu_emulate(msr, &val, type, index, 0)) {
+ if (err)
*err = native_write_msr_safe(msr, low, high);
- return true;
+ else
+ native_write_msr(msr, low, high);
}
+ return true;
}
-
- return false;
}

static unsigned long long xen_amd_read_pmc(int counter)
--
2.35.3

2022-09-26 16:49:31

by Juergen Gross

Subject: Re: [PATCH 1/3] xen/pv: allow pmu msr accesses to cause GP

On 26.09.22 17:29, Jan Beulich wrote:
> On 26.09.2022 16:18, Juergen Gross wrote:
>> Today pmu_msr_read() and pmu_msr_write() fall back to the safe variants
>> of read/write MSR in case the MSR access isn't emulated via Xen. Allow
>> the caller to select the potentially faulting variant by passing NULL
>> for the error pointer.
>
> Maybe make this "the sole caller" or some such? Because if there were
> multiple, they might easily disagree on what the best meaning of passing
> NULL is.

Okay.

>
>> --- a/arch/x86/xen/pmu.c
>> +++ b/arch/x86/xen/pmu.c
>> @@ -293,22 +293,24 @@ static bool xen_amd_pmu_emulate(unsigned int msr, u64 *val, bool is_read)
>> bool pmu_msr_read(unsigned int msr, uint64_t *val, int *err)
>> {
>> if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
>> - if (is_amd_pmu_msr(msr)) {
>> - if (!xen_amd_pmu_emulate(msr, val, 1))
>> - *val = native_read_msr_safe(msr, err);
>> - return true;
>> + if (!is_amd_pmu_msr(msr))
>> + return false;
>> + if (!xen_amd_pmu_emulate(msr, val, 1)) {
>> + *val = err ? native_read_msr_safe(msr, err)
>> + : native_read_msr(msr);
>> }
>> + return true;
>
> Minor remark: Fold this and ...
>
>> } else {
>> int type, index;
>>
>> - if (is_intel_pmu_msr(msr, &type, &index)) {
>> - if (!xen_intel_pmu_emulate(msr, val, type, index, 1))
>> - *val = native_read_msr_safe(msr, err);
>> - return true;
>> + if (!is_intel_pmu_msr(msr, &type, &index))
>> + return false;
>> + if (!xen_intel_pmu_emulate(msr, val, type, index, 1)) {
>> + *val = err ? native_read_msr_safe(msr, err)
>> + : native_read_msr(msr);
>> }
>> + return true;
>
> ... this by moving them ...
>
>> }
>> -
>> - return false;
>> }
>
> ... above here? You might even de-duplicate the native_read_msr{,_safe}()
> invocations by moving them out of the if/else ...

Oh, nice idea!


Juergen



2022-09-26 18:02:31

by Jan Beulich

Subject: Re: [PATCH 1/3] xen/pv: allow pmu msr accesses to cause GP

On 26.09.2022 16:18, Juergen Gross wrote:
> Today pmu_msr_read() and pmu_msr_write() fall back to the safe variants
> of read/write MSR in case the MSR access isn't emulated via Xen. Allow
> the caller to select the potentially faulting variant by passing NULL
> for the error pointer.

Maybe make this "the sole caller" or some such? Because if there were
multiple, they might easily disagree on what the best meaning of passing
NULL is.

> --- a/arch/x86/xen/pmu.c
> +++ b/arch/x86/xen/pmu.c
> @@ -293,22 +293,24 @@ static bool xen_amd_pmu_emulate(unsigned int msr, u64 *val, bool is_read)
> bool pmu_msr_read(unsigned int msr, uint64_t *val, int *err)
> {
> if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
> - if (is_amd_pmu_msr(msr)) {
> - if (!xen_amd_pmu_emulate(msr, val, 1))
> - *val = native_read_msr_safe(msr, err);
> - return true;
> + if (!is_amd_pmu_msr(msr))
> + return false;
> + if (!xen_amd_pmu_emulate(msr, val, 1)) {
> + *val = err ? native_read_msr_safe(msr, err)
> + : native_read_msr(msr);
> }
> + return true;

Minor remark: Fold this and ...

> } else {
> int type, index;
>
> - if (is_intel_pmu_msr(msr, &type, &index)) {
> - if (!xen_intel_pmu_emulate(msr, val, type, index, 1))
> - *val = native_read_msr_safe(msr, err);
> - return true;
> + if (!is_intel_pmu_msr(msr, &type, &index))
> + return false;
> + if (!xen_intel_pmu_emulate(msr, val, type, index, 1)) {
> + *val = err ? native_read_msr_safe(msr, err)
> + : native_read_msr(msr);
> }
> + return true;

... this by moving them ...

> }
> -
> - return false;
> }

... above here? You might even de-duplicate the native_read_msr{,_safe}()
invocations by moving them out of the if/else ...

Jan
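
To illustrate the suggestion (a sketch only, not code that was posted),
pmu_msr_read() could end up roughly as:

bool pmu_msr_read(unsigned int msr, uint64_t *val, int *err)
{
	bool emulated;

	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
		if (!is_amd_pmu_msr(msr))
			return false;
		emulated = xen_amd_pmu_emulate(msr, val, 1);
	} else {
		int type, index;

		if (!is_intel_pmu_msr(msr, &type, &index))
			return false;
		emulated = xen_intel_pmu_emulate(msr, val, type, index, 1);
	}

	/* Single fallback read, shared by both vendor branches. */
	if (!emulated)
		*val = err ? native_read_msr_safe(msr, err)
			   : native_read_msr(msr);

	return true;
}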

2022-09-26 20:17:46

by Boris Ostrovsky

Subject: Re: [PATCH 1/3] xen/pv: allow pmu msr accesses to cause GP


On 9/26/22 10:18 AM, Juergen Gross wrote:
> bool pmu_msr_read(unsigned int msr, uint64_t *val, int *err)
> {
> if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
> - if (is_amd_pmu_msr(msr)) {
> - if (!xen_amd_pmu_emulate(msr, val, 1))
> - *val = native_read_msr_safe(msr, err);
> - return true;
> + if (!is_amd_pmu_msr(msr))


You should be able to move vendor check inside is_<vendor>_pmu_msr().


-boris


> + return false;
> + if (!xen_amd_pmu_emulate(msr, val, 1)) {
> + *val = err ? native_read_msr_safe(msr, err)
> + : native_read_msr(msr);
> }
> + return true;
> } else {

2022-09-27 05:48:04

by Juergen Gross

Subject: Re: [PATCH 1/3] xen/pv: allow pmu msr accesses to cause GP

On 26.09.22 22:09, Boris Ostrovsky wrote:
>
> On 9/26/22 10:18 AM, Juergen Gross wrote:
>>   bool pmu_msr_read(unsigned int msr, uint64_t *val, int *err)
>>   {
>>       if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
>> -        if (is_amd_pmu_msr(msr)) {
>> -            if (!xen_amd_pmu_emulate(msr, val, 1))
>> -                *val = native_read_msr_safe(msr, err);
>> -            return true;
>> +        if (!is_amd_pmu_msr(msr))
>
>
> You should be able to move vendor check inside is_<vendor>_pmu_msr().

I like that. Together with Jan's suggestion this makes the code much
more readable!


Juergen

