load_unaligned_zeropad() can lead to unwanted loads across page boundaries.
The unwanted loads are typically harmless. But, they might be made to
totally unrelated or even unmapped memory. load_unaligned_zeropad()
relies on exception fixup (#PF, #GP and now #VE) to recover from these
unwanted loads.
In TDX guests, the second page can be a shared page and the VMM may
configure it to trigger #VE.
The kernel assumes that a #VE on a shared page is an MMIO access and tries
to decode the instruction to handle it. In case of load_unaligned_zeropad()
it may result in confusion as it is not an MMIO access.
Fix it by detecting split page MMIO accesses and failing them.
load_unaligned_zeropad() will recover using exception fixups.
The issue was discovered by analysis. It was not triggered during the
testing.
Signed-off-by: Kirill A. Shutemov <[email protected]>
---
arch/x86/coco/tdx/tdx.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index 7d6d484a6d28..3bcaf2170ede 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -333,8 +333,8 @@ static bool mmio_write(int size, unsigned long addr, unsigned long val)
static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
{
+ unsigned long *reg, val, vaddr;
char buffer[MAX_INSN_SIZE];
- unsigned long *reg, val;
struct insn insn = {};
enum mmio_type mmio;
int size, extend_size;
@@ -360,6 +360,19 @@ static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
return -EINVAL;
}
+ /*
+ * Reject EPT violation #VEs that split pages.
+ *
+ * MMIO accesses suppose to be naturally aligned and therefore never
+ * cross a page boundary. Seeing split page accesses indicates a bug
+ * or load_unaligned_zeropad() that steps into unmapped shared page.
+ *
+ * load_unaligned_zeropad() will recover using exception fixups.
+ */
+ vaddr = (unsigned long)insn_get_addr_ref(&insn, regs);
+ if (vaddr / PAGE_SIZE != (vaddr + size) / PAGE_SIZE)
+ return -EFAULT;
+
/* Handle writes first */
switch (mmio) {
case MMIO_WRITE:
--
2.35.1
On 6/14/22 05:01, Kirill A. Shutemov wrote:
> load_unaligned_zeropad() can lead to unwanted loads across page boundaries.
> The unwanted loads are typically harmless. But, they might be made to
> totally unrelated or even unmapped memory. load_unaligned_zeropad()
> relies on exception fixup (#PF, #GP and now #VE) to recover from these
> unwanted loads.
>
> In TDX guests, the second page can be shared page and VMM may configure
> it to trigger #VE.
>
> Kernel assumes that #VE on a shared page is MMIO access and tries to
> decode instruction to handle it. In case of load_unaligned_zeropad() it
> may result in confusion as it is not MMIO access.
>
> Fix it by detecting split page MMIO accesses and fail them.
> load_unaligned_zeropad() will recover using exception fixups.
>
> The issue was discovered by analysis. It was not triggered during the
> testing.
I thought this whole exercise was kicked off by hitting this in testing.
Am I remembering this wrong?
> https://lore.kernel.org/all/[email protected]/
Says:
> This is an actual, real-world problem which was discovered during TDX
> testing.
Or were you considering this a different problem somehow?
On Wed, Jun 15, 2022 at 08:27:52AM -0700, Dave Hansen wrote:
> On 6/14/22 05:01, Kirill A. Shutemov wrote:
> > load_unaligned_zeropad() can lead to unwanted loads across page boundaries.
> > The unwanted loads are typically harmless. But, they might be made to
> > totally unrelated or even unmapped memory. load_unaligned_zeropad()
> > relies on exception fixup (#PF, #GP and now #VE) to recover from these
> > unwanted loads.
> >
> > In TDX guests, the second page can be shared page and VMM may configure
> > it to trigger #VE.
> >
> > Kernel assumes that #VE on a shared page is MMIO access and tries to
> > decode instruction to handle it. In case of load_unaligned_zeropad() it
> > may result in confusion as it is not MMIO access.
> >
> > Fix it by detecting split page MMIO accesses and fail them.
> > load_unaligned_zeropad() will recover using exception fixups.
> >
> > The issue was discovered by analysis. It was not triggered during the
> > testing.
>
> I thought this whole exercise was kicked off by hitting this in testing.
> Am I remembering this wrong?
>
> > https://lore.kernel.org/all/[email protected]/
>
> Says:
>
> > This is an actual, real-world problem which was discovered during TDX
> > testing.
>
> Or were you considering this a different problem somehow?
They are different.
The patch at that link addresses load_unaligned_zeropad() stepping onto
unaccepted memory. That one was triggered in practice.
This patch addresses stepping onto shared MMIO memory. I had to force the
situation manually, as MMIO memory is mapped with ioremap() and is not
adjacent to the normally allocated memory that load_unaligned_zeropad()
operates on (such as the dentry cache).
That said, any shared memory (a SWIOTLB buffer, for instance) can generate
an EPT violation #VE if the VMM is malicious enough.
--
Kirill A. Shutemov
On 6/14/22 05:01, Kirill A. Shutemov wrote:
> load_unaligned_zeropad() can lead to unwanted loads across page boundaries.
> The unwanted loads are typically harmless. But, they might be made to
> totally unrelated or even unmapped memory. load_unaligned_zeropad()
> relies on exception fixup (#PF, #GP and now #VE) to recover from these
> unwanted loads.
>
> In TDX guests, the second page can be shared page and VMM may configure
> it to trigger #VE.
>
> Kernel assumes that #VE on a shared page is MMIO access and tries to
> decode instruction to handle it. In case of load_unaligned_zeropad() it
> may result in confusion as it is not MMIO access.
>
> Fix it by detecting split page MMIO accesses and fail them.
> load_unaligned_zeropad() will recover using exception fixups.
>
> The issue was discovered by analysis. It was not triggered during the
> testing.
>
> Signed-off-by: Kirill A. Shutemov <[email protected]>
> ---
> arch/x86/coco/tdx/tdx.c | 15 ++++++++++++++-
> 1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
> index 7d6d484a6d28..3bcaf2170ede 100644
> --- a/arch/x86/coco/tdx/tdx.c
> +++ b/arch/x86/coco/tdx/tdx.c
> @@ -333,8 +333,8 @@ static bool mmio_write(int size, unsigned long addr, unsigned long val)
>
> static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
> {
> + unsigned long *reg, val, vaddr;
> char buffer[MAX_INSN_SIZE];
> - unsigned long *reg, val;
> struct insn insn = {};
> enum mmio_type mmio;
> int size, extend_size;
> @@ -360,6 +360,19 @@ static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
> return -EINVAL;
> }
>
> + /*
> + * Reject EPT violation #VEs that split pages.
> + *
> + * MMIO accesses suppose to be naturally aligned and therefore never
> + * cross a page boundary. Seeing split page accesses indicates a bug
> + * or load_unaligned_zeropad() that steps into unmapped shared page.
Isn't this "unmapped" thing a rather superfluous implementation detail?
For the guest, it just needs to know that it *CAN* #VE on access to MMIO
and that it needs to be prepared. The fact that MMIO is implemented
with TDX shared memory *AND* that "unmapped shared pages" can cause
#VE's seems like too much detail.
Also, is this all precise? Are literal unmapped shared pages the *ONLY*
thing that a hypervisor can do to cause a #VE? What about, say, reserved
bits being set in a shared EPT entry?
I was thinking a comment like this might be better:
> /*
> * Reject EPT violation #VEs that split pages.
> *
> * MMIO accesses are supposed to be naturally aligned and therefore
> * never cross page boundaries. Seeing split page accesses indicates
> * a bug or a load_unaligned_zeropad() that stepped into an MMIO page.
> *
> * load_unaligned_zeropad() will recover using exception fixups.
> */
On 6/15/22 10:10, Kirill A. Shutemov wrote:
>> I thought this whole exercise was kicked off by hitting this in testing.
>> Am I remembering this wrong?
>>
>>> https://lore.kernel.org/all/[email protected]/
>> Says:
>>
>>> This is an actual, real-world problem which was discovered during TDX
>>> testing.
>> Or were you considering this a different problem somehow?
> They are different.
>
> The patch by the link addresses issue of load_unaligned_zeropad() stepping
> onto unaccepted memory. This was triggered in practice.
OK, so we've got two problems both triggered by
load_unaligned_zeropad(), but where the fixes are different. We
actually hit the "unaccepted memory one" in testing, but that made us
think about other problems in the area and that's where this one came up.
On Wed, Jun 15, 2022 at 11:12:35AM -0700, Dave Hansen wrote:
> On 6/14/22 05:01, Kirill A. Shutemov wrote:
> > load_unaligned_zeropad() can lead to unwanted loads across page boundaries.
> > The unwanted loads are typically harmless. But, they might be made to
> > totally unrelated or even unmapped memory. load_unaligned_zeropad()
> > relies on exception fixup (#PF, #GP and now #VE) to recover from these
> > unwanted loads.
> >
> > In TDX guests, the second page can be shared page and VMM may configure
> > it to trigger #VE.
> >
> > Kernel assumes that #VE on a shared page is MMIO access and tries to
> > decode instruction to handle it. In case of load_unaligned_zeropad() it
> > may result in confusion as it is not MMIO access.
> >
> > Fix it by detecting split page MMIO accesses and fail them.
> > load_unaligned_zeropad() will recover using exception fixups.
> >
> > The issue was discovered by analysis. It was not triggered during the
> > testing.
> >
> > Signed-off-by: Kirill A. Shutemov <[email protected]>
> > ---
> > arch/x86/coco/tdx/tdx.c | 15 ++++++++++++++-
> > 1 file changed, 14 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
> > index 7d6d484a6d28..3bcaf2170ede 100644
> > --- a/arch/x86/coco/tdx/tdx.c
> > +++ b/arch/x86/coco/tdx/tdx.c
> > @@ -333,8 +333,8 @@ static bool mmio_write(int size, unsigned long addr, unsigned long val)
> >
> > static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
> > {
> > + unsigned long *reg, val, vaddr;
> > char buffer[MAX_INSN_SIZE];
> > - unsigned long *reg, val;
> > struct insn insn = {};
> > enum mmio_type mmio;
> > int size, extend_size;
> > @@ -360,6 +360,19 @@ static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
> > return -EINVAL;
> > }
> >
> > + /*
> > + * Reject EPT violation #VEs that split pages.
> > + *
> > + * MMIO accesses suppose to be naturally aligned and therefore never
> > + * cross a page boundary. Seeing split page accesses indicates a bug
> > + * or load_unaligned_zeropad() that steps into unmapped shared page.
>
> Isn't this "unmapped" thing a rather superfluous implementation detail?
>
> For the guest, it just needs to know that it *CAN* #VE on access to MMIO
> and that it needs to be prepared. The fact that MMIO is implemented
> with TDX shared memory *AND* that "unmapped shared pages" can cause
> #VE's seems like too much detail.
Okay, fair enough.
> Also, is this all precise? Are literal unmapped shared pages the *ONLY*
> thing that a hypervisor can do to cause a #VE? What about, say, reserved
> bits being set in a shared EPT entry?
Right, it is analogous to a page fault. So, yes, it can be triggered for
a number of reasons, not only unmapped pages.
> I was thinking a comment like this might be better:
>
> > /*
> > * Reject EPT violation #VEs that split pages.
> > *
> > * MMIO accesses are supposed to be naturally aligned and therefore
> > * never cross page boundaries. Seeing split page accesses indicates
> > * a bug or a load_unaligned_zeropad() that stepped into an MMIO page.
> > *
> > * load_unaligned_zeropad() will recover using exception fixups.
> > */
Looks good, thanks.
--
Kirill A. Shutemov
On 6/15/22 15:43, Kirill A. Shutemov wrote:
> I will reword it like this:
>
> The issue was discovered by analysis after triggering other issue with
> load_unaligned_zeropad().
Yeah, that sounds sane. I'm also happy to shove this into the commit
message before I push it out.
On Tue, Jun 14, 2022 at 03:01:35PM +0300, Kirill A. Shutemov wrote:
> load_unaligned_zeropad() can lead to unwanted loads across page boundaries.
> The unwanted loads are typically harmless. But, they might be made to
> totally unrelated or even unmapped memory. load_unaligned_zeropad()
> relies on exception fixup (#PF, #GP and now #VE) to recover from these
> unwanted loads.
>
> In TDX guests, the second page can be shared page and VMM may configure
> it to trigger #VE.
>
> Kernel assumes that #VE on a shared page is MMIO access and tries to
> decode instruction to handle it. In case of load_unaligned_zeropad() it
> may result in confusion as it is not MMIO access.
>
> Fix it by detecting split page MMIO accesses and fail them.
> load_unaligned_zeropad() will recover using exception fixups.
>
> The issue was discovered by analysis. It was not triggered during the
> testing.
>
> Signed-off-by: Kirill A. Shutemov <[email protected]>
> ---
> arch/x86/coco/tdx/tdx.c | 15 ++++++++++++++-
> 1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
> index 7d6d484a6d28..3bcaf2170ede 100644
> --- a/arch/x86/coco/tdx/tdx.c
> +++ b/arch/x86/coco/tdx/tdx.c
> @@ -333,8 +333,8 @@ static bool mmio_write(int size, unsigned long addr, unsigned long val)
>
> static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
> {
> + unsigned long *reg, val, vaddr;
> char buffer[MAX_INSN_SIZE];
> - unsigned long *reg, val;
> struct insn insn = {};
> enum mmio_type mmio;
> int size, extend_size;
> @@ -360,6 +360,19 @@ static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
> return -EINVAL;
> }
>
> + /*
> + * Reject EPT violation #VEs that split pages.
> + *
> + * MMIO accesses suppose to be naturally aligned and therefore never
> + * cross a page boundary. Seeing split page accesses indicates a bug
> + * or load_unaligned_zeropad() that steps into unmapped shared page.
> + *
> + * load_unaligned_zeropad() will recover using exception fixups.
> + */
> + vaddr = (unsigned long)insn_get_addr_ref(&insn, regs);
> + if (vaddr / PAGE_SIZE != (vaddr + size) / PAGE_SIZE)
Oops. I just realized it has an off-by-one. It is supposed to be:
if (vaddr / PAGE_SIZE != (vaddr + size - 1) / PAGE_SIZE)
> + return -EFAULT;
> +
> /* Handle writes first */
> switch (mmio) {
> case MMIO_WRITE:
> --
> 2.35.1
>
--
Kirill A. Shutemov
On 6/15/22 15:32, Kirill A. Shutemov wrote:
>>> /*
>>> * Reject EPT violation #VEs that split pages.
>>> *
>>> * MMIO accesses are supposed to be naturally aligned and therefore
>>> * never cross page boundaries. Seeing split page accesses indicates
>>> * a bug or a load_unaligned_zeropad() that stepped into an MMIO page.
>>> *
>>> * load_unaligned_zeropad() will recover using exception fixups.
>>> */
> Looks good, thanks.
I've got that snippet and a few other things staged here:
> https://git.kernel.org/pub/scm/linux/kernel/git/daveh/devel.git/log/?h=testme
Could you take a quick look through before I push them somewhere for real?
On Wed, Jun 15, 2022 at 02:55:13PM -0700, Dave Hansen wrote:
> On 6/15/22 10:10, Kirill A. Shutemov wrote:
> >> I thought this whole exercise was kicked off by hitting this in testing.
> >> Am I remembering this wrong?
> >>
> >>> https://lore.kernel.org/all/[email protected]/
> >> Says:
> >>
> >>> This is an actual, real-world problem which was discovered during TDX
> >>> testing.
> >> Or were you considering this a different problem somehow?
> > They are different.
> >
> > The patch by the link addresses issue of load_unaligned_zeropad() stepping
> > onto unaccepted memory. This was triggered in practice.
>
> OK, so we've got two problems both triggered by
> load_unaligned_zeropad(), but where the fixes are different. We
> actually hit the "unaccepted memory one" in testing, but that made us
> think about other problems in the area and that's where this one came up.
I will reword it like this:
The issue was discovered by analysis after triggering other issue with
load_unaligned_zeropad().
Works for you?
--
Kirill A. Shutemov
On Wed, Jun 15, 2022 at 03:36:02PM -0700, Dave Hansen wrote:
> On 6/15/22 15:32, Kirill A. Shutemov wrote:
> >>> /*
> >>> * Reject EPT violation #VEs that split pages.
> >>> *
> >>> * MMIO accesses are supposed to be naturally aligned and therefore
> >>> * never cross page boundaries. Seeing split page accesses indicates
> >>> * a bug or a load_unaligned_zeropad() that stepped into an MMIO page.
> >>> *
> >>> * load_unaligned_zeropad() will recover using exception fixups.
> >>> */
> > Looks good, thanks.
>
> I've got that snippet and a few other things staged here:
>
> > https://git.kernel.org/pub/scm/linux/kernel/git/daveh/devel.git/log/?h=testme
>
> Could you take a quick look through before I push them somewhere for real?
Looks good to me. Could you fold in the off-by-one fix?
--
Kirill A. Shutemov
On Wed, Jun 15, 2022 at 03:49:49PM -0700, Dave Hansen wrote:
> On 6/15/22 15:43, Kirill A. Shutemov wrote:
> > I will reword it like this:
> >
> > The issue was discovered by analysis after triggering other issue with
> > load_unaligned_zeropad().
>
> Yeah, that sounds sane. I'm also happy to shove this into the commit
> message before I push it out.
It is fine as is in your tree.
--
Kirill A. Shutemov
On 6/15/22 15:52, Kirill A. Shutemov wrote:
>> + vaddr = (unsigned long)insn_get_addr_ref(&insn, regs);
>> + if (vaddr / PAGE_SIZE != (vaddr + size) / PAGE_SIZE)
> Oops. I just realized it has off-by-one. It supposed to be:
>
> if (vaddr / PAGE_SIZE != (vaddr + size - 1) / PAGE_SIZE)
That was bugging me. Glad you caught this.
Wouldn't this be more obviously correct?
if (ALIGN_DOWN(vaddr, PAGE_SIZE) !=
ALIGN_DOWN(vaddr + size, PAGE_SIZE))
...
I don't think we have a PAGE_ALIGN_DOWN().
On Wed, Jun 15, 2022 at 04:34:48PM -0700, Dave Hansen wrote:
> On 6/15/22 15:52, Kirill A. Shutemov wrote:
> >> + vaddr = (unsigned long)insn_get_addr_ref(&insn, regs);
> >> + if (vaddr / PAGE_SIZE != (vaddr + size) / PAGE_SIZE)
> > Oops. I just realized it has off-by-one. It supposed to be:
> >
> > if (vaddr / PAGE_SIZE != (vaddr + size - 1) / PAGE_SIZE)
>
> That was bugging me. Glad you caught this.
>
> Wouldn't this be more obviously correct?
>
> if (ALIGN_DOWN(vaddr, PAGE_SIZE) !=
> ALIGN_DOWN(vaddr + size, PAGE_SIZE))
> ...
I don't think it fixes anything.
Consider the case when vaddr is 4092 and size is 4. This is a legitimate
access -- aligned and not crossing a page boundary.
The left side of the check evaluates to 0 while the right evaluates to 4096,
so they differ and we wrongly get -EFAULT.
I cannot think of a helper that would fit the need.
--
Kirill A. Shutemov
From: Dave Hansen
> Sent: 16 June 2022 00:35
>
> On 6/15/22 15:52, Kirill A. Shutemov wrote:
> >> + vaddr = (unsigned long)insn_get_addr_ref(&insn, regs);
> >> + if (vaddr / PAGE_SIZE != (vaddr + size) / PAGE_SIZE)
> > Oops. I just realized it has off-by-one. It supposed to be:
> >
> > if (vaddr / PAGE_SIZE != (vaddr + size - 1) / PAGE_SIZE)
>
> That was bugging me. Glad you caught this.
>
> Wouldn't this be more obviously correct?
>
> if (ALIGN_DOWN(vaddr, PAGE_SIZE) !=
> ALIGN_DOWN(vaddr + size, PAGE_SIZE))
> ...
>
> I don't think we have a PAGE_ALIGN_DOWN().
Or:
if ((vaddr ^ (vaddr + size - 1)) >> PAGE_SHIFT)
David
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: ceba767b943de2128eaef95e19880809274ac35d
Gitweb: https://git.kernel.org/tip/ceba767b943de2128eaef95e19880809274ac35d
Author: Kirill A. Shutemov <[email protected]>
AuthorDate: Tue, 14 Jun 2022 15:01:35 +03:00
Committer: Dave Hansen <[email protected]>
CommitterDate: Fri, 17 Jun 2022 14:30:20 -07:00
x86/tdx: Handle load_unaligned_zeropad() page-cross to a shared page
load_unaligned_zeropad() can lead to unwanted loads across page boundaries.
The unwanted loads are typically harmless. But, they might be made to
totally unrelated or even unmapped memory. load_unaligned_zeropad()
relies on exception fixup (#PF, #GP and now #VE) to recover from these
unwanted loads.
In TDX guests, the second page can be shared page and a VMM may configure
it to trigger #VE.
The kernel assumes that #VE on a shared page is an MMIO access and tries to
decode instruction to handle it. In case of load_unaligned_zeropad() it
may result in confusion as it is not MMIO access.
Fix it by detecting split page MMIO accesses and failing them.
load_unaligned_zeropad() will recover using exception fixups.
The issue was discovered by analysis and reproduced artificially. It was
not triggered during testing.
[ dhansen: fix up changelogs and comments for grammar and clarity,
plus incorporate Kirill's off-by-one fix]
Signed-off-by: Kirill A. Shutemov <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
arch/x86/coco/tdx/tdx.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index c8d44f4..d5c51c9 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -333,8 +333,8 @@ static bool mmio_write(int size, unsigned long addr, unsigned long val)
static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
{
+ unsigned long *reg, val, vaddr;
char buffer[MAX_INSN_SIZE];
- unsigned long *reg, val;
struct insn insn = {};
enum mmio_type mmio;
int size, extend_size;
@@ -360,6 +360,19 @@ static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
return -EINVAL;
}
+ /*
+ * Reject EPT violation #VEs that split pages.
+ *
+ * MMIO accesses are supposed to be naturally aligned and therefore
+ * never cross page boundaries. Seeing split page accesses indicates
+ * a bug or a load_unaligned_zeropad() that stepped into an MMIO page.
+ *
+ * load_unaligned_zeropad() will recover using exception fixups.
+ */
+ vaddr = (unsigned long)insn_get_addr_ref(&insn, regs);
+ if (vaddr / PAGE_SIZE != (vaddr + size) / PAGE_SIZE)
+ return -EFAULT;
+
/* Handle writes first */
switch (mmio) {
case MMIO_WRITE:
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: 1e7769653b06b56b7ea7d56911d2d5b2957750cd
Gitweb: https://git.kernel.org/tip/1e7769653b06b56b7ea7d56911d2d5b2957750cd
Author: Kirill A. Shutemov <[email protected]>
AuthorDate: Tue, 14 Jun 2022 15:01:35 +03:00
Committer: Dave Hansen <[email protected]>
CommitterDate: Fri, 17 Jun 2022 15:37:33 -07:00
x86/tdx: Handle load_unaligned_zeropad() page-cross to a shared page
load_unaligned_zeropad() can lead to unwanted loads across page boundaries.
The unwanted loads are typically harmless. But, they might be made to
totally unrelated or even unmapped memory. load_unaligned_zeropad()
relies on exception fixup (#PF, #GP and now #VE) to recover from these
unwanted loads.
In TDX guests, the second page can be shared page and a VMM may configure
it to trigger #VE.
The kernel assumes that #VE on a shared page is an MMIO access and tries to
decode instruction to handle it. In case of load_unaligned_zeropad() it
may result in confusion as it is not MMIO access.
Fix it by detecting split page MMIO accesses and failing them.
load_unaligned_zeropad() will recover using exception fixups.
The issue was discovered by analysis and reproduced artificially. It was
not triggered during testing.
[ dhansen: fix up changelogs and comments for grammar and clarity,
plus incorporate Kirill's off-by-one fix]
Signed-off-by: Kirill A. Shutemov <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
arch/x86/coco/tdx/tdx.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index c8d44f4..928dcf7 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -333,8 +333,8 @@ static bool mmio_write(int size, unsigned long addr, unsigned long val)
static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
{
+ unsigned long *reg, val, vaddr;
char buffer[MAX_INSN_SIZE];
- unsigned long *reg, val;
struct insn insn = {};
enum mmio_type mmio;
int size, extend_size;
@@ -360,6 +360,19 @@ static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
return -EINVAL;
}
+ /*
+ * Reject EPT violation #VEs that split pages.
+ *
+ * MMIO accesses are supposed to be naturally aligned and therefore
+ * never cross page boundaries. Seeing split page accesses indicates
+ * a bug or a load_unaligned_zeropad() that stepped into an MMIO page.
+ *
+ * load_unaligned_zeropad() will recover using exception fixups.
+ */
+ vaddr = (unsigned long)insn_get_addr_ref(&insn, regs);
+ if (vaddr / PAGE_SIZE != (vaddr + size - 1) / PAGE_SIZE)
+ return -EFAULT;
+
/* Handle writes first */
switch (mmio) {
case MMIO_WRITE: