2023-10-16 16:43:20

by Kirill A. Shutemov

Subject: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

Michael reported soft lockups on a system that has unaccepted memory.
This occurs when a user attempts to allocate and accept memory on
multiple CPUs simultaneously.

The root cause of the issue is that memory acceptance is serialized with
a spinlock, allowing only one CPU to accept memory at a time. The other
CPUs spin and wait for their turn, leading to starvation and soft lockup
reports.

To address this, the code has been modified to release the spinlock
while accepting memory. This allows for parallel memory acceptance on
multiple CPUs.

A newly introduced "accepting_list" keeps track of which memory is
currently being accepted. This is necessary to prevent parallel
acceptance of the same memory block. If a collision occurs, the lock is
released and the process is retried.

Such collisions should rarely occur. The main path for memory acceptance
is the page allocator, which accepts memory in MAX_ORDER chunks. As long
as MAX_ORDER is equal to or larger than the unit_size, collisions will
never occur because the caller fully owns the memory block being
accepted.

Aside from the page allocator, only memblock and deferred_free_range()
accept memory, but this only happens during boot.

The code has been tested with unit_size == 128MiB to trigger collisions
and validate the retry codepath.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Reported-by: Michael Roth <[email protected]>
Fixes: 2053bc57f367 ("efi: Add unaccepted memory support")
Cc: <[email protected]>
Reviewed-by: Nikolay Borisov <[email protected]>
---

v2:
- Fix deadlock (Vlastimil);
- Fix comments (Vlastimil);
- s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
from atomic context;

---
drivers/firmware/efi/unaccepted_memory.c | 71 ++++++++++++++++++++++--
1 file changed, 67 insertions(+), 4 deletions(-)

diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c
index 853f7dc3c21d..fa3363889224 100644
--- a/drivers/firmware/efi/unaccepted_memory.c
+++ b/drivers/firmware/efi/unaccepted_memory.c
@@ -5,9 +5,17 @@
#include <linux/spinlock.h>
#include <asm/unaccepted_memory.h>

-/* Protects unaccepted memory bitmap */
+/* Protects unaccepted memory bitmap and accepting_list */
static DEFINE_SPINLOCK(unaccepted_memory_lock);

+struct accept_range {
+ struct list_head list;
+ unsigned long start;
+ unsigned long end;
+};
+
+static LIST_HEAD(accepting_list);
+
/*
* accept_memory() -- Consult bitmap and accept the memory if needed.
*
@@ -24,6 +32,7 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
{
struct efi_unaccepted_memory *unaccepted;
unsigned long range_start, range_end;
+ struct accept_range range, *entry;
unsigned long flags;
u64 unit_size;

@@ -78,20 +87,74 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
end = unaccepted->size * unit_size * BITS_PER_BYTE;

- range_start = start / unit_size;
-
+ range.start = start / unit_size;
+ range.end = DIV_ROUND_UP(end, unit_size);
+retry:
spin_lock_irqsave(&unaccepted_memory_lock, flags);
+
+ /*
+ * Check if anybody works on accepting the same range of the memory.
+ *
+ * The check is done with unit_size granularity. It is crucial to catch
+ * all accept requests to the same unit_size block, even if they don't
+ * overlap on physical address level.
+ */
+ list_for_each_entry(entry, &accepting_list, list) {
+ if (entry->end < range.start)
+ continue;
+ if (entry->start >= range.end)
+ continue;
+
+ /*
+ * Somebody else is accepting the range. Or at least part of it.
+ *
+ * Drop the lock and retry until it is complete.
+ */
+ spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
+
+ /*
+ * The code is reachable from atomic context.
+ * cond_resched() cannot be used.
+ */
+ cpu_relax();
+
+ goto retry;
+ }
+
+ /*
+ * Register that the range is about to be accepted.
+ * Make sure nobody else will accept it.
+ */
+ list_add(&range.list, &accepting_list);
+
+ range_start = range.start;
for_each_set_bitrange_from(range_start, range_end, unaccepted->bitmap,
- DIV_ROUND_UP(end, unit_size)) {
+ range.end) {
unsigned long phys_start, phys_end;
unsigned long len = range_end - range_start;

phys_start = range_start * unit_size + unaccepted->phys_base;
phys_end = range_end * unit_size + unaccepted->phys_base;

+ /*
+ * Keep interrupts disabled until the accept operation is
+ * complete in order to prevent deadlocks.
+ *
+ * Enabling interrupts before calling arch_accept_memory()
+ * creates an opportunity for an interrupt handler to request
+ * acceptance for the same memory. The handler will continuously
+ * spin with interrupts disabled, preventing other tasks from
+ * making progress with the acceptance process.
+ */
+ spin_unlock(&unaccepted_memory_lock);
+
arch_accept_memory(phys_start, phys_end);
+
+ spin_lock(&unaccepted_memory_lock);
bitmap_clear(unaccepted->bitmap, range_start, len);
}
+
+ list_del(&range.list);
spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
}

--
2.41.0


2023-10-16 16:49:36

by Vlastimil Babka

Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

On 10/16/23 18:31, Kirill A. Shutemov wrote:
> Michael reported soft lockups on a system that has unaccepted memory.
> This occurs when a user attempts to allocate and accept memory on
> multiple CPUs simultaneously.
>
> The root cause of the issue is that memory acceptance is serialized with
> a spinlock, allowing only one CPU to accept memory at a time. The other
> CPUs spin and wait for their turn, leading to starvation and soft lockup
> reports.
>
> To address this, the code has been modified to release the spinlock
> while accepting memory. This allows for parallel memory acceptance on
> multiple CPUs.
>
> A newly introduced "accepting_list" keeps track of which memory is
> currently being accepted. This is necessary to prevent parallel
> acceptance of the same memory block. If a collision occurs, the lock is
> released and the process is retried.
>
> Such collisions should rarely occur. The main path for memory acceptance
> is the page allocator, which accepts memory in MAX_ORDER chunks. As long
> as MAX_ORDER is equal to or larger than the unit_size, collisions will
> never occur because the caller fully owns the memory block being
> accepted.
>
> Aside from the page allocator, only memblock and deferred_free_range()
> accept memory, but this only happens during boot.
>
> The code has been tested with unit_size == 128MiB to trigger collisions
> and validate the retry codepath.
>
> Signed-off-by: Kirill A. Shutemov <[email protected]>
> Reported-by: Michael Roth <[email protected]>
> Fixes: 2053bc57f367 ("efi: Add unaccepted memory support")
> Cc: <[email protected]>
> Reviewed-by: Nikolay Borisov <[email protected]>

Reviewed-by: Vlastimil Babka <[email protected]>

<snip>

> + range_start = range.start;
> for_each_set_bitrange_from(range_start, range_end, unaccepted->bitmap,
> - DIV_ROUND_UP(end, unit_size)) {
> + range.end) {
> unsigned long phys_start, phys_end;
> unsigned long len = range_end - range_start;
>
> phys_start = range_start * unit_size + unaccepted->phys_base;
> phys_end = range_end * unit_size + unaccepted->phys_base;
>
> + /*
> + * Keep interrupts disabled until the accept operation is
> + * complete in order to prevent deadlocks.
> + *
> + * Enabling interrupts before calling arch_accept_memory()
> + * creates an opportunity for an interrupt handler to request
> + * acceptance for the same memory. The handler will continuously
> + * spin with interrupts disabled, preventing other tasks from
> + * making progress with the acceptance process.
> + */

AFAIU on PREEMPT_RT the spin_lock_irqsave() doesn't disable interrupts, so
this does not leave them disabled. But it also shouldn't be a risk of
deadlock because the interrupt handlers are themselves preemptible. The
latency might be bad as the cpu_relax() retry loop will not cause the task
everyone might be waiting for to be prioritised, but I guess it's not a big
issue as anyone with RT requirements probably won't use unaccepted memory in
the first place, and as you mention, hitting the retry loop after boot in a
normal configuration should pretty much never happen.

> + spin_unlock(&unaccepted_memory_lock);
> +
> arch_accept_memory(phys_start, phys_end);
> +
> + spin_lock(&unaccepted_memory_lock);
> bitmap_clear(unaccepted->bitmap, range_start, len);
> }
> +
> + list_del(&range.list);
> spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
> }
>

2023-10-16 17:56:41

by Matthew Wilcox

Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
> v2:
> - Fix deadlock (Vlastimil);
> - Fix comments (Vlastimil);
> - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
> from atomic context;

Isn't there an implicit cpu_relax() while we're spinning? Does this
really accomplish anything?

> +retry:
> spin_lock_irqsave(&unaccepted_memory_lock, flags);
[...]
> + spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
> +
> + /*
> + * The code is reachable from atomic context.
> + * cond_resched() cannot be used.
> + */
> + cpu_relax();
> +
> + goto retry;

2023-10-16 20:55:05

by Michael Roth

Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
> Michael reported soft lockups on a system that has unaccepted memory.
> This occurs when a user attempts to allocate and accept memory on
> multiple CPUs simultaneously.
>
> The root cause of the issue is that memory acceptance is serialized with
> a spinlock, allowing only one CPU to accept memory at a time. The other
> CPUs spin and wait for their turn, leading to starvation and soft lockup
> reports.
>
> To address this, the code has been modified to release the spinlock
> while accepting memory. This allows for parallel memory acceptance on
> multiple CPUs.
>
> A newly introduced "accepting_list" keeps track of which memory is
> currently being accepted. This is necessary to prevent parallel
> acceptance of the same memory block. If a collision occurs, the lock is
> released and the process is retried.
>
> Such collisions should rarely occur. The main path for memory acceptance
> is the page allocator, which accepts memory in MAX_ORDER chunks. As long
> as MAX_ORDER is equal to or larger than the unit_size, collisions will
> never occur because the caller fully owns the memory block being
> accepted.
>
> Aside from the page allocator, only memblock and deferred_free_range()
> accept memory, but this only happens during boot.
>
> The code has been tested with unit_size == 128MiB to trigger collisions
> and validate the retry codepath.
>
> Signed-off-by: Kirill A. Shutemov <[email protected]>
> Reported-by: Michael Roth <[email protected]>

Tested-by: Michael Roth <[email protected]>

This seems to improve things pretty dramatically for me. Previously I
saw soft-lockups with 16 vCPUs and 16 processes faulting into memory,
and now I can do 128+ vCPUs/processes.

I can still trigger soft lock-ups on occasion if the number of processes
faulting in memory exceeds the number of vCPUs available to the guest, but
with a 32 vCPU guest even something like this:

stress --vm 128 --vm-bytes 2G --vm-keep --cpu 255

still seems to avoid the soft lock-up messages. So that's probably well
into "potential future optimization" territory and this patch fixes the
more immediate issues.

Thanks!

-Mike

> Fixes: 2053bc57f367 ("efi: Add unaccepted memory support")
> Cc: <[email protected]>
> Reviewed-by: Nikolay Borisov <[email protected]>
> ---
>
> v2:
> - Fix deadlock (Vlastimil);
> - Fix comments (Vlastimil);
> - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
> from atomic context;
>

2023-10-16 21:39:59

by Kirill A. Shutemov

Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

On Mon, Oct 16, 2023 at 06:55:41PM +0100, Matthew Wilcox wrote:
> On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
> > v2:
> > - Fix deadlock (Vlastimil);
> > - Fix comments (Vlastimil);
> > - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
> > from atomic context;
>
> Isn't there an implicit cpu_relax() while we're spinning? Does this
> really accomplish anything?

You are right. It is useless. I will drop it in v3.

--
Kiryl Shutsemau / Kirill A. Shutemov

2023-10-17 07:03:24

by Vlastimil Babka

Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

On 10/16/23 22:54, Michael Roth wrote:
> On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
>> Michael reported soft lockups on a system that has unaccepted memory.
>> This occurs when a user attempts to allocate and accept memory on
>> multiple CPUs simultaneously.
>>
>> The root cause of the issue is that memory acceptance is serialized with
>> a spinlock, allowing only one CPU to accept memory at a time. The other
>> CPUs spin and wait for their turn, leading to starvation and soft lockup
>> reports.
>>
>> To address this, the code has been modified to release the spinlock
>> while accepting memory. This allows for parallel memory acceptance on
>> multiple CPUs.
>>
>> A newly introduced "accepting_list" keeps track of which memory is
>> currently being accepted. This is necessary to prevent parallel
>> acceptance of the same memory block. If a collision occurs, the lock is
>> released and the process is retried.
>>
>> Such collisions should rarely occur. The main path for memory acceptance
>> is the page allocator, which accepts memory in MAX_ORDER chunks. As long
>> as MAX_ORDER is equal to or larger than the unit_size, collisions will
>> never occur because the caller fully owns the memory block being
>> accepted.
>>
>> Aside from the page allocator, only memblock and deferred_free_range()
>> accept memory, but this only happens during boot.
>>
>> The code has been tested with unit_size == 128MiB to trigger collisions
>> and validate the retry codepath.
>>
>> Signed-off-by: Kirill A. Shutemov <[email protected]>
>> Reported-by: Michael Roth <[email protected]>
>
> Tested-by: Michael Roth <[email protected]>
>
> This seems to improve things pretty dramatically for me. Previously I
> saw soft-lockups with 16 vCPUs and 16 processes faulting into memory,
> and now I can do 128+ vCPUs/processes.
>
> I can still trigger soft lock-ups on occasion if the number of processes
> faulting in memory exceeds the number of vCPUs available to the guest, but
> with a 32 vCPU guest even something like this:
>
> stress --vm 128 --vm-bytes 2G --vm-keep --cpu 255
>
> still seems to avoid the soft lock-up messages. So that's probably well
> into "potential future optimization" territory and this patch fixes the
> more immediate issues.

Do you mean that the guest pretends it has more cpus than the host provides
to it? I think such a cpu-starving configuration is prone to softlockups
already, so it wouldn't be new.

If you mean the guest has as many cpus as the host provides to it, but you
stress with many more than that number of processes, then I wonder how
softlockups would happen due to the extra processes. Since irqs are disabled
through the whole operation, the extra processes can't become scheduled, and
not being scheduled due to overloading doesn't trigger softlockups, hmm...

> Thanks!
>
> -Mike
>
>> Fixes: 2053bc57f367 ("efi: Add unaccepted memory support")
>> Cc: <[email protected]>
>> Reviewed-by: Nikolay Borisov <[email protected]>
>> ---
>>
>> v2:
>> - Fix deadlock (Vlastimil);
>> - Fix comments (Vlastimil);
>> - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
>> from atomic context;
>>

2023-10-17 07:42:38

by Ard Biesheuvel

Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

On Mon, 16 Oct 2023 at 23:39, Kirill A. Shutemov
<[email protected]> wrote:
>
> On Mon, Oct 16, 2023 at 06:55:41PM +0100, Matthew Wilcox wrote:
> > On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
> > > v2:
> > > - Fix deadlock (Vlastimil);
> > > - Fix comments (Vlastimil);
> > > - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
> > > from atomic context;
> >
> > Isn't there an implicit cpu_relax() while we're spinning? Does this
> > really accomplish anything?
>
> You are right. It is useless. I will drop it in v3.
>

I can drop that bit when applying the patch.

One question I have is whether the sequence

spin_lock_irqsave(&unaccepted_memory_lock, flags);
...
spin_unlock(&unaccepted_memory_lock);
arch_accept_memory(phys_start, phys_end);
spin_lock(&unaccepted_memory_lock);
...
spin_unlock_irqrestore(&unaccepted_memory_lock, flags);

is considered sound and is supported by all architectures?

2023-10-17 09:44:54

by Kirill A. Shutemov

Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

On Tue, Oct 17, 2023 at 09:42:13AM +0200, Ard Biesheuvel wrote:
> On Mon, 16 Oct 2023 at 23:39, Kirill A. Shutemov
> <[email protected]> wrote:
> >
> > On Mon, Oct 16, 2023 at 06:55:41PM +0100, Matthew Wilcox wrote:
> > > On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
> > > > v2:
> > > > - Fix deadlock (Vlastimil);
> > > > - Fix comments (Vlastimil);
> > > > - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
> > > > from atomic context;
> > >
> > > Isn't there an implicit cpu_relax() while we're spinning? Does this
> > > really accomplish anything?
> >
> > You are right. It is useless. I will drop it in v3.
> >
>
> I can drop that bit when applying the patch.
>
> One question I have is whether the sequence
>
> spin_lock_irqsave(&unaccepted_memory_lock, flags);
> ...
> spin_unlock(&unaccepted_memory_lock);
> arch_accept_memory(phys_start, phys_end);
> spin_lock(&unaccepted_memory_lock);
> ...
> spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
>
> is considered sound and is supported by all architectures?

I am not a locking expert and only tested it on x86. But what potential
issue do you see?

--
Kiryl Shutsemau / Kirill A. Shutemov

2023-10-17 09:57:33

by Ard Biesheuvel

Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

On Tue, 17 Oct 2023 at 11:44, Kirill A. Shutemov
<[email protected]> wrote:
>
> On Tue, Oct 17, 2023 at 09:42:13AM +0200, Ard Biesheuvel wrote:
> > On Mon, 16 Oct 2023 at 23:39, Kirill A. Shutemov
> > <[email protected]> wrote:
> > >
> > > On Mon, Oct 16, 2023 at 06:55:41PM +0100, Matthew Wilcox wrote:
> > > > On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
> > > > > v2:
> > > > > - Fix deadlock (Vlastimil);
> > > > > - Fix comments (Vlastimil);
> > > > > - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
> > > > > from atomic context;
> > > >
> > > > Isn't there an implicit cpu_relax() while we're spinning? Does this
> > > > really accomplish anything?
> > >
> > > You are right. It is useless. I will drop it in v3.
> > >
> >
> > I can drop that bit when applying the patch.
> >
> > One question I have is whether the sequence
> >
> > spin_lock_irqsave(&unaccepted_memory_lock, flags);
> > ...
> > spin_unlock(&unaccepted_memory_lock);
> > arch_accept_memory(phys_start, phys_end);
> > spin_lock(&unaccepted_memory_lock);
> > ...
> > spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
> >
> > is considered sound and is supported by all architectures?
>
> I am not a locking expert and only tested it on x86. But what potential
> issue do you see?
>

Not sure. It just looks slightly out of place, and I am curious
whether all architectures tolerate this asymmetric use.

2023-10-17 10:20:02

by Peter Zijlstra

Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

On Tue, Oct 17, 2023 at 09:42:13AM +0200, Ard Biesheuvel wrote:

> One question I have is whether the sequence
>
> spin_lock_irqsave(&unaccepted_memory_lock, flags);
> ...
> spin_unlock(&unaccepted_memory_lock);
> arch_accept_memory(phys_start, phys_end);
> spin_lock(&unaccepted_memory_lock);
> ...
> spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
>
> is considered sound and is supported by all architectures?

Yes.

2023-10-17 15:36:52

by Ard Biesheuvel

Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

On Tue, 17 Oct 2023 at 12:17, Peter Zijlstra <[email protected]> wrote:
>
> On Tue, Oct 17, 2023 at 09:42:13AM +0200, Ard Biesheuvel wrote:
>
> > One question I have is whether the sequence
> >
> > spin_lock_irqsave(&unaccepted_memory_lock, flags);
> > ...
> > spin_unlock(&unaccepted_memory_lock);
> > arch_accept_memory(phys_start, phys_end);
> > spin_lock(&unaccepted_memory_lock);
> > ...
> > spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
> >
> > is considered sound and is supported by all architectures?
>
> Yes.

Thanks for the clarification

I've queued this up now (with the cpu_relax() removed)

2023-10-18 18:56:59

by Jianxiong Gao

Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

The patch helps us gain more stability in our testing.
We are not able to reproduce the soft lockup issue in over 20 runs
with 176 vcpus so far.

Thanks!
--
Jianxiong Gao

2023-11-01 00:47:10

by Michael Roth

Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

On Tue, Oct 17, 2023 at 09:02:59AM +0200, Vlastimil Babka wrote:
> On 10/16/23 22:54, Michael Roth wrote:
> > On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
> >> Michael reported soft lockups on a system that has unaccepted memory.
> >> This occurs when a user attempts to allocate and accept memory on
> >> multiple CPUs simultaneously.
> >>
> >> The root cause of the issue is that memory acceptance is serialized with
> >> a spinlock, allowing only one CPU to accept memory at a time. The other
> >> CPUs spin and wait for their turn, leading to starvation and soft lockup
> >> reports.
> >>
> >> To address this, the code has been modified to release the spinlock
> >> while accepting memory. This allows for parallel memory acceptance on
> >> multiple CPUs.
> >>
> >> A newly introduced "accepting_list" keeps track of which memory is
> >> currently being accepted. This is necessary to prevent parallel
> >> acceptance of the same memory block. If a collision occurs, the lock is
> >> released and the process is retried.
> >>
> >> Such collisions should rarely occur. The main path for memory acceptance
> >> is the page allocator, which accepts memory in MAX_ORDER chunks. As long
> >> as MAX_ORDER is equal to or larger than the unit_size, collisions will
> >> never occur because the caller fully owns the memory block being
> >> accepted.
> >>
> >> Aside from the page allocator, only memblock and deferred_free_range()
> >> accept memory, but this only happens during boot.
> >>
> >> The code has been tested with unit_size == 128MiB to trigger collisions
> >> and validate the retry codepath.
> >>
> >> Signed-off-by: Kirill A. Shutemov <[email protected]>
> >> Reported-by: Michael Roth <[email protected]>
> >
> > Tested-by: Michael Roth <[email protected]>
> >
> > This seems to improve things pretty dramatically for me. Previously I
> > saw soft-lockups with 16 vCPUs and 16 processes faulting into memory,
> > and now I can do 128+ vCPUs/processes.
> >
> > I can still trigger soft lock-ups on occasion if the number of processes
> > faulting in memory exceeds the number of vCPUs available to the guest, but
> > with a 32 vCPU guest even something like this:
> >
> > stress --vm 128 --vm-bytes 2G --vm-keep --cpu 255
> >
> > still seems to avoid the soft lock-up messages. So that's probably well
> > into "potential future optimization" territory and this patch fixes the
> > more immediate issues.
>

Sorry for the delay; the optimizations here work well enough that it's been
a bit of a challenge reproducing this reliably enough to get some good data
on the lingering sources of soft lock-ups I'm still seeing.

> Do you mean that the guest pretends it has more cpus than the host provides
> to it? I think such cpu starving configuration is prone to softlockups
> already, so it wouldn't be new.
>
> If you mean the guest has as many cpus as the host provides to it, but you
> stress with many more than that number of processes, then I wonder how

Yes, this is what I meant. If there are more memory-hog worker threads in
the guest than there are vCPUs, I'm better able to reproduce soft-lockups.
That sort of makes sense since those threads will spend more time waiting on
an available vCPU to handle memory acceptance.

But it actually isn't a requirement, I've also been able to reproduce this
with equal numbers of worker threads and vCPUs if I run 4 VMs, each
running the stress/acceptance workload at the same time.

And if I force 4K pages in gmem backend (technically a supported
configuration) then I can reproduce it much more easily since the 2MB
acceptance path takes much longer and it makes it easier to expose any
potential remaining concurrency issues.

> softlockups would happen due to the extra processes. Since irqs are disabled
> through the whole operation, the extra processes can't become scheduled, and
> not being scheduled due to overloading doesn't trigger softlockups, hmm...

The soft lock-ups happen as soon as IRQs are re-enabled, either:

a) right after a thread sees that its range intersects something
that's in the process of being accepted

b) right after a thread finishes accepting its whole range and is
about to return from accept_memory()

I see a) occur more in the 4K test scenario; b) is more difficult to
reproduce and seems to need a larger system to reproduce more reliably.

The fact that b) seems to depend on larger systems sort of makes sense.
When we need to convert a page to private as part of accepting it, there
is a guest->host request that eventually goes off to host userspace which
will call the KVM ioctl KVM_SET_MEMORY_ATTRIBUTES to mark the memory as
private so that it will get faulted in from the guest_memfd backend. When
this happens, any guest page faults that are currently in flight will get
invalidated and require a retry, and there's also a guest TLB flush
that results in an NMI to all the cores the guest was scheduled on so that
it can exit and acknowledge new updates. So the higher the rate of
KVM_SET_MEMORY_ATTRIBUTES the system is able to process, the higher the
frequency of this sort of activity on the host side that can impact each
vCPU's ability to make progress on accepting a particular range.

Also I was running 4 guests, each with as many vCPUs as the host, so
contention for physical resources would probably be a factor as well.

I'm not sure what can be done about b), but they seem to be host-side
optimizations that aren't too relevant to this patch, and they seem to
occur less frequently than a), which seems to be more guest side.

Still not sure what is causing type a) lock-ups exactly, but through
various traces and debug statements I think I've at least gotten some idea
that there are certain conditions where the vCPUs become more and more
dependent on each other completing certain ranges, and they spend longer
and longer amounts of time looping through the accepting_list.

There are 3 things I've noticed that might lead to vCPUs getting hung up
on each other:

1) try_to_accept_memory_one() calls accept_page(page, MAX_ORDER), which
is a 4MB range

2) There's an extra 2MB region taken after each unit to account for
load_unaligned_zeropad()

3) There is what appears to be a bug here:

list_for_each_entry(entry, &accepting_list, list) {
if (entry->end < range.start)
continue;
if (entry->start >= range.end)
continue;

where if entry->end == range.start, the thread will wait on the owner
of that range even though it doesn't actually intersect.
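
For illustration, the fix being described would presumably be a one-character
change along these lines (untested sketch; the real patch is the one promised
later in the thread), so that a range whose end merely touches range.start is
skipped as non-overlapping:

    list_for_each_entry(entry, &accepting_list, list) {
        if (entry->end <= range.start)
            continue;
        if (entry->start >= range.end)
            continue;

with the rest of the collision/retry logic unchanged.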

I don't quite know how all this lines up to a dependency chain that would
potentially explain the lock-ups, but to mitigate that scenario, I tried only
adding the specific 2MB range that is being accepted to accepting_list, rather
than the whole range, and then just iterate through 2MB at a time in
accept_memory() instead of passing the larger range on to arch_accept_memory().
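
Roughly, that experiment looks something like the following (hypothetical
sketch, not a posted patch; it reuses the locals already set up earlier in
accept_memory() -- start, end, unit_size, unaccepted, range, entry, flags --
and folds in the off-by-one fix from point 3). The loop variable "unit" is
mine:

    unsigned long unit;

    for (unit = start / unit_size; unit < DIV_ROUND_UP(end, unit_size); unit++) {
retry:
        spin_lock_irqsave(&unaccepted_memory_lock, flags);

        /* Nothing to do if this unit is already accepted. */
        if (!test_bit(unit, unaccepted->bitmap)) {
            spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
            continue;
        }

        /* Wait for whoever currently owns this one unit. */
        list_for_each_entry(entry, &accepting_list, list) {
            if (entry->end <= unit || entry->start > unit)
                continue;
            spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
            goto retry;
        }

        /* Claim only this unit, not the whole range. */
        range.start = unit;
        range.end = unit + 1;
        list_add(&range.list, &accepting_list);

        spin_unlock(&unaccepted_memory_lock);
        arch_accept_memory(unit * unit_size + unaccepted->phys_base,
                           (unit + 1) * unit_size + unaccepted->phys_base);
        spin_lock(&unaccepted_memory_lock);

        bitmap_clear(unaccepted->bitmap, unit, 1);
        list_del(&range.list);
        spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
    }

With that, no thread ever waits on more than a single in-flight unit owned by
another vCPU, at the cost of more lock round-trips and smaller accept calls.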

That seems to have resolved the soft lock-ups for the forced-4K scenario, but
I haven't had much time to test larger configurations yet.

-Mike


== Soft lock-ups of type A (kernel is vanilla 6.6.0-rc5 + this patch) ==

[ 266.312940] watchdog: BUG: soft lockup - CPU#161 stuck for 21s! [stress:7844]^M
[ 266.321432] Modules linked in:^M
[ 266.336478] Modules linked in: btrfs^M
[ 266.350571] btrfs^M
[ 266.363954] blake2b_generic^M
[ 266.377754] blake2b_generic^M
[ 266.393502] raid10^M
[ 266.406422] raid10^M
[ 266.418275] raid456^M
[ 266.430487] raid456^M
[ 266.442159] async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear virtio_net i2c_i801 net_failover psmouse virtio_scsi i2c_smbus failover crc32_pclmul lpc_ich^M
[ 266.442226] irq event stamp: 1100892^M
[ 266.442228] hardirqs last enabled at (1100891): irqentry_exit (kernel/entry/common.c:445)
[ 266.442253] hardirqs last disabled at (1100892): _raw_spin_lock_irqsave (include/linux/spinlock_api_smp.h:108 kernel/locking/spinlock.c:162)
[ 266.442261] softirqs last enabled at (1094506): __do_softirq (arch/x86/include/asm/preempt.h:27 kernel/softirq.c:400 kernel/softirq.c:582)
[ 266.442268] softirqs last disabled at (1094499): irq_exit_rcu (kernel/softirq.c:427 kernel/softirq.c:632 kernel/softirq.c:644)
[ 266.442291] CPU: 161 PID: 7844 Comm: stress Not tainted 6.6.0-rc5-snp-guest1-no-extend+ #1^M
[ 266.442298] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 2/2/2022^M
[ 266.442305] RIP: 0010:_raw_spin_unlock_irqrestore (arch/x86/include/asm/preempt.h:85 include/linux/spinlock_api_smp.h:152 kernel/locking/spinlock.c:194)
[ 266.442309] Code: b1 b1 18 ff 4c 89 e7 e8 79 e7 18 ff 81 e3 00 02 00 00 75 26 9c 58 0f 1f 40 00 f6 c4 02 75 22 48 85 db 74 06 fb 0f 1f 44 00 00 <65> ff 0d fc 5d 06 73 5b 41 5c 5d c3 cc cc cc cc e8 c6 9f 28 ff eb^M
[ 266.442311] RSP: 0000:ffffc9000da17a80 EFLAGS: 00000206^M
[ 266.442313] RAX: 0000000000000046 RBX: 0000000000000200 RCX: ffffc9000da1fac0^M
[ 266.442314] RDX: ffffffff8ccc7915 RSI: ffffffff8cfddf9a RDI: ffffffff8cfddf9a^M
[ 266.442316] RBP: ffffc9000da17a90 R08: 0000000000000000 R09: 0000000000000000^M
[ 266.442317] R10: 0000000000000000 R11: 0000000000000001 R12: ffffffff8dd1a1e0^M
[ 266.442318] R13: 00000014b8c00000 R14: 0000000000200000 R15: ffff88807baa7018^M
[ 266.442321] FS: 00007f6a297c5740(0000) GS:ffff889b8ee80000(0000) knlGS:0000000000000000^M
[ 266.442322] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033^M
[ 266.442323] CR2: 00007f6a20cb1010 CR3: 0008000125992001 CR4: 0000000000770ef0^M
[ 266.442325] PKRU: 55555554^M
[ 266.442325] Call Trace:^M
[ 266.442329] <IRQ>^M
[ 266.442343] ? show_regs (arch/x86/kernel/dumpstack.c:479)
[ 266.442358] ? watchdog_timer_fn (kernel/watchdog.c:520)
[ 266.442366] ? __pfx_watchdog_timer_fn (kernel/watchdog.c:439)
[ 266.442367] ? __hrtimer_run_queues (kernel/time/hrtimer.c:1688 kernel/time/hrtimer.c:1752)
[ 266.442377] ? hrtimer_interrupt (kernel/time/hrtimer.c:1817)
[ 266.442381] ? __sysvec_apic_timer_interrupt (arch/x86/include/asm/jump_label.h:27 include/linux/jump_label.h:207 arch/x86/include/asm/trace/irq_vectors.h:41 arch/x86/kernel/apic/apic.c:1081)
[ 266.442388] ? sysvec_apic_timer_interrupt (arch/x86/kernel/apic/apic.c:1074 (discriminator 14))
[ 266.442394] </IRQ>^M
[ 266.442395] <TASK>^M
[ 266.442396] ? asm_sysvec_apic_timer_interrupt (arch/x86/include/asm/idtentry.h:645)
[ 266.442407] ? accept_memory (arch/x86/include/asm/vdso/processor.h:13 arch/x86/include/asm/vdso/processor.h:18 drivers/firmware/efi/unaccepted_memory.c:122)
[ 266.442420] ? _raw_spin_unlock_irqrestore (include/linux/spinlock_api_smp.h:151 kernel/locking/spinlock.c:194)
[ 266.442422] ? _raw_spin_unlock_irqrestore (include/linux/spinlock_api_smp.h:151 kernel/locking/spinlock.c:194)
[ 266.442428] ? _raw_spin_unlock_irqrestore (arch/x86/include/asm/preempt.h:85 include/linux/spinlock_api_smp.h:152 kernel/locking/spinlock.c:194)
[ 266.442430] accept_memory (arch/x86/include/asm/vdso/processor.h:13 arch/x86/include/asm/vdso/processor.h:18 drivers/firmware/efi/unaccepted_memory.c:122)
[ 266.442433] ? mark_held_locks (kernel/locking/lockdep.c:4273)
[ 266.442450] try_to_accept_memory (mm/page_alloc.c:6629 mm/page_alloc.c:6649)
[ 266.442467] get_page_from_freelist (mm/page_alloc.c:3125)
[ 266.442474] ? lock_acquire (kernel/locking/lockdep.c:467 kernel/locking/lockdep.c:5755 kernel/locking/lockdep.c:5718)
[ 266.442478] __alloc_pages (mm/page_alloc.c:4426)
[ 266.442485] __folio_alloc (mm/page_alloc.c:4462)
[ 266.442487] ? policy_node (include/linux/nodemask.h:266 mm/mempolicy.c:1887)
[ 266.442499] vma_alloc_folio (include/linux/mempolicy.h:75 include/linux/mempolicy.h:80 mm/mempolicy.c:2263)
[ 266.442503] do_pte_missing (mm/memory.c:4110 mm/memory.c:3667)
[ 266.442508] __handle_mm_fault (mm/memory.c:4978 mm/memory.c:5119)
[ 266.442515] handle_mm_fault (mm/memory.c:5284)
[ 266.442517] do_user_addr_fault (arch/x86/mm/fault.c:1365)
[ 266.442523] ? exit_to_user_mode_prepare (kernel/entry/common.c:212 (discriminator 31))
[ 266.453915] async_raid6_recov^M
[ 266.465236] exc_page_fault (arch/x86/include/asm/paravirt.h:689 arch/x86/include/asm/irqflags.h:127 arch/x86/mm/fault.c:1513 arch/x86/mm/fault.c:1561)
[ 266.465292] asm_exc_page_fault (arch/x86/include/asm/idtentry.h:570)
[ 266.465307] RIP: 0033:0x564b7e602cf0


== Soft lock-ups of type B ==

[ 266.675357] watchdog: BUG: soft lockup - CPU#134 stuck for 23s! [stress:7817]^M
[ 266.675394] Modules linked in: btrfs blake2b_generic raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear virtio_net i2c_i801 net_failover psmouse virtio_scsi i2c_smbus failover crc32_pclmul lpc_ich^M
[ 266.675675] irq event stamp: 2579636^M
[ 266.675680] hardirqs last enabled at (2579635): _raw_spin_unlock_irqrestore (include/linux/spinlock_api_smp.h:151 kernel/locking/spinlock.c:194)
[ 266.675751] hardirqs last disabled at (2579636): _raw_spin_lock_irqsave (include/linux/spinlock_api_smp.h:108 kernel/locking/spinlock.c:162)
[ 266.675754] softirqs last enabled at (1734708): __do_softirq (arch/x86/include/asm/preempt.h:27 kernel/softirq.c:400 kernel/softirq.c:582)
[ 266.675757] softirqs last disabled at (1734701): irq_exit_rcu (kernel/softirq.c:427 kernel/softirq.c:632 kernel/softirq.c:644)
[ 266.675813] CPU: 134 PID: 7817 Comm: stress Tainted: G L 6.6.0-rc5-snp-guest1-no-extend+ #1^M
[ 266.675831] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 2/2/2022^M
[ 266.675838] RIP: 0010:_raw_spin_unlock_irqrestore (arch/x86/include/asm/preempt.h:85 include/linux/spinlock_api_smp.h:152 kernel/locking/spinlock.c:194)
[ 266.675848] Code: b1 b1 18 ff 4c 89 e7 e8 79 e7 18 ff 81 e3 00 02 00 00 75 26 9c 58 0f 1f 40 00 f6 c4 02 75 22 48 85 db 74 06 fb 0f 1f 44 00 00 <65> ff 0d fc 5d 06 73 5b 41 5c 5d c3 cc cc cc cc e8 c6 9f 28 ff eb^M
[ 266.675850] RSP: 0000:ffffc9000d93fa80 EFLAGS: 00000206^M
[ 266.675852] RAX: 0000000000000046 RBX: 0000000000000200 RCX: 000000000000a53b^M
[ 266.675853] RDX: ffffffff8ccc78e2 RSI: ffffffff8cfddf9a RDI: ffffffff8cfddf9a^M
[ 266.675854] RBP: ffffc9000d93fa90 R08: 0000000000000000 R09: 0000000000000000^M
[ 266.675855] R10: 0000000000000000 R11: 0000000000000000 R12: ffffffff8dd1a1e0^M
[ 266.675856] R13: ffff88807baa7030 R14: 0000000000200000 R15: ffff88807baa7018^M
[ 266.675861] FS: 00007f6a297c5740(0000) GS:ffff889b8e100000(0000) knlGS:0000000000000000^M
[ 266.675862] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033^M
[ 266.675863] CR2: 00007f6a1ca4e010 CR3: 0008000124e04003 CR4: 0000000000770ef0^M
[ 266.675865] PKRU: 55555554^M
[ 266.675865] Call Trace:^M
[ 266.675879] <IRQ>^M
[ 266.675947] ? show_regs (arch/x86/kernel/dumpstack.c:479)
[ 266.675965] ? watchdog_timer_fn (kernel/watchdog.c:520)
[ 266.675979] ? __pfx_watchdog_timer_fn (kernel/watchdog.c:439)
[ 266.675981] ? __hrtimer_run_queues (kernel/time/hrtimer.c:1688 kernel/time/hrtimer.c:1752)
[ 266.675995] ? hrtimer_interrupt (kernel/time/hrtimer.c:1817)
[ 266.675998] ? __sysvec_apic_timer_interrupt (arch/x86/include/asm/jump_label.h:27 include/linux/jump_label.h:207 arch/x86/include/asm/trace/irq_vectors.h:41 arch/x86/kernel/apic/apic.c:1081)
[ 266.676006] ? sysvec_apic_timer_interrupt (arch/x86/kernel/apic/apic.c:1074 (discriminator 14))
[ 266.676014] </IRQ>^M
[ 266.676014] <TASK>^M
[ 266.676016] ? asm_sysvec_apic_timer_interrupt (arch/x86/include/asm/idtentry.h:645)
[ 266.676032] ? accept_memory (drivers/firmware/efi/unaccepted_memory.c:162)
[ 266.676052] ? _raw_spin_unlock_irqrestore (include/linux/spinlock_api_smp.h:151 kernel/locking/spinlock.c:194)
[ 266.676054] ? _raw_spin_unlock_irqrestore (include/linux/spinlock_api_smp.h:151 kernel/locking/spinlock.c:194)
[ 266.676067] ? _raw_spin_unlock_irqrestore (arch/x86/include/asm/preempt.h:85 include/linux/spinlock_api_smp.h:152 kernel/locking/spinlock.c:194)
[ 266.676069] accept_memory (drivers/firmware/efi/unaccepted_memory.c:162)
[ 266.676074] try_to_accept_memory (mm/page_alloc.c:6629 mm/page_alloc.c:6649)
[ 266.676107] get_page_from_freelist (mm/page_alloc.c:3125)
[ 266.676110] ? lock_acquire (kernel/locking/lockdep.c:467 kernel/locking/lockdep.c:5755 kernel/locking/lockdep.c:5718)
[ 266.676123] __alloc_pages (mm/page_alloc.c:4426)
[ 266.676127] __folio_alloc (mm/page_alloc.c:4462)
[ 266.676129] ? policy_node (include/linux/nodemask.h:266 mm/mempolicy.c:1887)
[ 266.676138] vma_alloc_folio (include/linux/mempolicy.h:75 include/linux/mempolicy.h:80 mm/mempolicy.c:2263)
[ 266.676141] do_pte_missing (mm/memory.c:4110 mm/memory.c:3667)
[ 266.676149] __handle_mm_fault (mm/memory.c:4978 mm/memory.c:5119)
[ 266.676156] handle_mm_fault (mm/memory.c:5284)
[ 266.676159] do_user_addr_fault (arch/x86/mm/fault.c:1365)
[ 266.676165] ? exit_to_user_mode_prepare (kernel/entry/common.c:212 (discriminator 31))
[ 266.676174] exc_page_fault (arch/x86/include/asm/paravirt.h:689 arch/x86/include/asm/irqflags.h:127 arch/x86/mm/fault.c:1513 arch/x86/mm/fault.c:1561)
[ 266.676177] asm_exc_page_fault (arch/x86/include/asm/idtentry.h:570)
[ 266.676178] RIP: 0033:0x564b7e602cf0^

>
> > Thanks!
> >
> > -Mike
> >
> >> Fixes: 2053bc57f367 ("efi: Add unaccepted memory support")
> >> Cc: <[email protected]>
> >> Reviewed-by: Nikolay Borisov <[email protected]>
> >> ---
> >>
> >> v2:
> >> - Fix deadlock (Vlastimil);
> >> - Fix comments (Vlastimil);
> >> - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
> >> from atomic context;
> >>
>

2023-11-02 13:57:23

by Kirill A. Shutemov

Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

On Tue, Oct 31, 2023 at 07:45:23PM -0500, Michael Roth wrote:
> > If you mean the guest has as many cpus as the host provides to it, but you
> > stress with many more than that number of processes, then I wonder how
>
> Yes, this is what I meant. If there are more memory-hog worker threads in
> the guest than there are vCPUs, I'm better able to reproduce soft-lockups.
> That sort of makes sense since those threads will spend more time waiting on
> an available vCPU to handle memory acceptance.
>
> But it actually isn't a requirement, I've also been able to reproduce this
> with equal numbers of worker threads and vCPUs if I run 4 VMs, each
> running the stress/acceptance workload at the same time.
>
> And if I force 4K pages in gmem backend (technically a supported
> configuration) then I can reproduce it much more easily since the 2MB
> acceptance path takes much longer and it makes it easier to expose any
> potential remaining concurrency issues.

This all sounds like we are solidly in "system is overloaded" territory.

Soft-lockups are still not good in this case. But I am not sure what we
can do about it.

One silly idea is to prevent all vCPUs from doing accept simultaneously and
reserve one (or several) to do housekeeping. The idea is that this vCPU
can be preempted to do work on other tasks.

It would only make a difference for the PREEMPT_FULL case and if the
housekeeping CPU hits the accept path from a preemptible context. And it
is obviously not applicable if there's only a single vCPU.

> > softlockups would happen due to the extra processes. Since irqs are disabled
> > through the whole operation, the extra processes can't become scheduled, and
> > not being scheduled due to overloading doesn't trigger softlockups, hmm...
>
> The soft lock-ups happen as soon as IRQs are re-enabled, either:
>
> a) right after a thread sees that its range intersects something
> that's in the process of being accepted
>
> b) right after a thread finishes accepting its whole range and is
> about to return from accept_memory()
>
> I see a) occur more in the 4K test scenario, b) is more difficult to
> reproduce and seems to need a larger system to reproduce more reliably.

I am not sure why you differentiate these scenarios. The kernel just hits a
place where it can be preempted and observes that it is overdue for
scheduling.

> The fact that b) seems to depend on larger systems sort of makes sense.
> > When we need to convert a page to private as part of accepting it, there
> is a guest->host request that eventually goes off to host userspace which
> will call the KVM ioctl KVM_SET_MEMORY_ATTRIBUTES to mark the memory as
> private so that it will get faulted in from the guest_memfd backend. When
> this happens, any guest page faults that are currently in flight will get
> invalidated and require a retry, and there's also a guest TLB flush
> that results in an NMI to all the cores the guest was scheduled on so that
> it can exit and acknowledge new updates. So the higher the rate of
> KVM_SET_MEMORY_ATTRIBUTES the system is able to process, the higher the
> frequency of this sort of activity on the host side that can impact each
> vCPUs ability to make progress on accepting a particular range.
>
> Also I was running 4 guests, each with as many vCPUs as the host, so
> contention for physical resources would probably be a factor as well.

Yeah, at some point you will just saturate memory bandwidth.

> I'm not sure what can be done about b), but they seem to be host-side
> optimizations that aren't too relevant to this patch, and they seem to
> occur less frequently than a), which seems to be more guest side.
>
> Still not sure what is causing type a) lock-ups exactly, but through
> various traces and debug statements I think I've at least gotten some idea
> that there are certain conditions where the vCPUs become more and more
> dependent on each other completing certain ranges, and they spend longer
> and longer amounts of time looping through the accepting_list.
>
> There are 3 things I've noticed that might lead to vCPUs getting hung up
> on each other:
>
> 1) try_to_accept_memory_one() calls accept_page(page, MAX_ORDER), which
> is a 4MB range

This should not make one vCPU wait on work done by another. The page
allocator owns the full 4MB. It is not shared with anyone.

> 2) There's an extra 2MB region taken after each unit to account for
> load_unaligned_zeropad()

Okay, yes, this is true.

> 3) There is what appears to be a bug here:
>
> list_for_each_entry(entry, &accepting_list, list) {
> if (entry->end < range.start)
> continue;
> if (entry->start >= range.end)
> continue;
>
> where if entry->end == range.start, the thread will wait on the owner
> of that range even though it doesn't actually intersect.

Good catch. Care to send a patch?

> I don't quite know how all this lines up to a dependency chain that would
> potentially explain the lock-ups, but to mitigate that scenario, I tried only
> adding the specific 2MB range that is being accepted to accepting_list, rather
> than the whole range, and then just iterate through 2MB at a time in
> accept_memory() instead of passing the larger range on to arch_accept_memory().

This might improve the situation with soft lockups a bit, but would hurt
accept bandwidth.

> That seems to have resolved the soft lock-ups for the forced-4K scenario, but
> I haven't had much time to test larger configurations yet.

--
Kiryl Shutsemau / Kirill A. Shutemov

2023-11-03 00:01:58

by Michael Roth

Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance

On Thu, Nov 02, 2023 at 04:56:11PM +0300, Kirill A. Shutemov wrote:
> On Tue, Oct 31, 2023 at 07:45:23PM -0500, Michael Roth wrote:
> > > If you mean the guest has as many cpus as the host provides to it, but you
> > > stress with many more than that number of processes, then I wonder how
> >
> > Yes, this is what I meant. If there are more memory-hog worker threads in
> > the guest than there are vCPUs, I'm better able to reproduce soft-lockups.
> > That sort of makes sense since those threads will spend more time waiting on
> > an available vCPU to handle memory acceptance.
> >
> > But it actually isn't a requirement, I've also been able to reproduce this
> > with equal numbers of worker threads and vCPUs if I run 4 VMs, each
> > running the stress/acceptance workload at the same time.
> >
> > And if I force 4K pages in gmem backend (technically a supported
> > configuration) then I can reproduce it much more easily since the 2MB
> > acceptance path takes much longer and it makes it easier to expose any
> > potential remaining concurrency issues.
>
> This all sounds like we are solidly in "system is overloaded" territory.
>
> Soft-lockups are still not good in this case. But I am not sure what we
> can do about it.

After spending more time on it I'm starting to reach a similar conclusion,
but I'm not yet convinced it's so much the system being overloaded as it
is the handling for KVM_SET_MEMORY_ATTRIBUTES being particularly punishing
for this sort of workload and starving vCPUs for execution time due to
it causing MMU invalidations that cause #NPFs to need restarting and
frequent NMIs due to KVM_REQ_TLB_FLUSH requests. For non-CoCo guests I think
this activity would be much more infrequent.

For instance here's the journey of a particular 4MB range that ends up
triggering a soft-lockup in the guest according to host-side ftraces (in
this case I've disabled the additional 2MB region that gets taken for
the zero-padding issue, and implemented the bug fix mentioned earlier,
so the vCPUs don't ever end up waiting on each other):

== Acceptance for 4MB GPA range 0x18cbc00000:18cc000000 ==

<...>-1946910 [226] ...1. 324797.313982: kvm_page_fault: vcpu 219 rip 0x0 address 0x00000018cbc00000 error_code 0x500000004
<...>-1946910 [098] ...1. 324797.631256: kvm_page_fault: vcpu 219 rip 0x0 address 0x00000018cbdff000 error_code 0x500000004
<...>-1946910 [107] ...1. 324835.184044: kvm_page_fault: vcpu 219 rip 0x0 address 0x00000018cbe00000 error_code 0x500000004
<...>-1946910 [235] ...1. 324835.208404: kvm_page_fault: vcpu 219 rip 0x0 address 0x00000018cbfff000 error_code 0x500000004

It's a pretty wild ride that spans 38s across 4 CPUs. I seem to get these
for 2 or 3 unlucky GPA ranges for each run and the other ranges stay
well below the soft-lockup threshold.

Maybe there are ways to improve on that situation, like accepting using
larger chunk sizes (which is sort of the opposite of what I was suggesting
earlier, but maybe when done to a degree that significantly batches
invalidations and KVM_REQ_TLB_FLUSH requests it becomes less of an issue to
have vCPUs waiting on each other).

>
> One silly idea is to prevent all vCPUs to do accept simultaneously and
> reserve one (or several) to do housekeeping. The idea is that this vCPU
> can be preempted to do job on other tasks.

Maybe if larger chunk sizes / more batching does end up helping, a
worker thread/pool of this sort makes even more sense. But maybe there
are simpler ways to experiment with that.

>
> It would only make a difference for PREEMPT_FULL case and if the
> housekeeping CPU will hit the accept path from preemptable context. And it
> is obviously not applicable if there's only single vCPU.
>
> > > softlockups would happen due to the extra processes. Since irqs are disabled
> > > through the whole operation, the extra processes can't become scheduled, and
> > > not being scheduled due to overloading doesn't trigger softlockups, hmm...
> >
> > The soft lock-ups happen as soon as IRQs are re-enabled, either:
> >
> > a) right after a thread sees that its range intersects something
> > that's in the process of being accepted
> >
> > b) right after a thread finishes accepting its whole range and is
> > about to return from accept_memory()
> >
> > I see a) occur more in the 4K test scenario, b) is more difficult to
> > reproduce and seems to need a larger system to reproduce more reliably.
>
> I am not sure why you differentiate these scenarios. Kernel just hits
> place where it can be preempted and observes that it is overdue to
> scheduling.

It just seemed like a) was more similar to the original issue of threads
becoming serialized on a few CPUs, but with the changes noted above to
completely decouple vCPUs from each other I was still able to trigger soft
lock-ups, but instead of a storm of lock-ups from vCPU threads suffering
secondary effects, these were purely lock-ups of type b), which
point to there ultimately being something on the host-side which was
causing all the threads to trip over themselves.

>
> > The fact that b) seems to depend on larger systems sort of makes sense.
> > When we need to convert a page to private as part of accepting it, there
> > is a guest->host request that eventually goes off to host userspace which
> > will call the KVM ioctl KVM_SET_MEMORY_ATTRIBUTES to mark the memory as
> > private so that it will get faulted in from the guest_memfd backend. When
> > this happens, any guest page faults that are currently in flight will get
> > invalidated and require a retry, and there's also a guest TLB flush
> > that results in an NMI to all the cores the guest was scheduled on so that
> > it can exit and acknowledge new updates. So the higher the rate of
> > KVM_SET_MEMORY_ATTRIBUTES the system is able to process, the higher the
> > frequency of this sort of activity on the host side that can impact each
> > vCPUs ability to make progress on accepting a particular range.
> >
> > Also I was running 4 guests, each with as many vCPUs as the host, so
> > contention for physical resources would probably be a factor as well.
>
> Yeah, at some point you will just saturate memory bandwidth.
>
> > I'm not sure what can be done about b), but they seem to be host-side
> > optimizations that aren't too relevant to this patch, and they seem to
> > occur less frequently than a), which seems to be more guest side.
> >
> > Still not sure what is causing type a) lock-ups exactly, but through
> > various traces and debug statements I think I've at least gotten some idea
> > that there are certain conditions where the vCPUs become more and more
> > dependent on each other completing certain ranges, and they spend longer
> > and longer amounts of time looping through the accepting_list.
> >
> > There are 3 things I've noticed that might lead to vCPUs getting hung up
> > on each other:
> >
> > 1) try_to_accept_memory_one() calls accept_page(page, MAX_ORDER), which
> > is a 4MB range
>
> This should not make one vCPU to setup on work on another. Page allocator
> owns full 4MB. It is not shared with anyone.

Indeed, with 2) and 3) addressed there no longer seem to be any
dependencies between threads.

>
> > 2) There's an extra 2MB region taken after each unit to account for
> > load_unaligned_zeropad()
>
> Okay, yes, this is true.
>
> > 3) There is what appears to be a bug here:
> >
> > list_for_each_entry(entry, &accepting_list, list) {
> > if (entry->end < range.start)
> > continue;
> > if (entry->start >= range.end)
> > continue;
> >
> > where if entry->end == range.start, the thread will wait on the owner
> > of that range even though it doesn't actually intersect.
>
> Good catch. Care to send a patch?

Sure, I will get that posted by tomorrow after a bit more testing.

>
> > I don't quite know how all this lines up to a dependency chain that would
> > potentially explain the lock-ups, but to mitigate that scenario, I tried only
> > adding the specific 2MB range that is being accepted to accepting_list, rather
> > than the whole range, and then just iterate through 2MB at a time in
> > accept_memory() instead of passing the larger range on to arch_accept_memory().
>
> This might improve situation with soft lockups a bit, but would hurt
> accept bandwidth.

Yah, I think it was helpful for getting rid of some noise and getting a
better idea of the main source of the bottleneck, but the underlying issue
still remains even with these changes in place.

I'll continue to experiment with it, but it makes me feel better at least
that there isn't something strange going on with the current guest-side
implementation.

Thanks,

Mike

>
> > That seems to have resolved the soft lock-ups for the forced-4K scenario, but
> > I haven't had much time to test larger configurations yet.
>
> --
> Kiryl Shutsemau / Kirill A. Shutemov