2023-11-25 08:34:17

by Paolo Bonzini

Subject: [PATCH v2 0/4] KVM: x86/mmu: small locking cleanups

Remove "bool shared" argument from functions and iterators that need
not know if the lock is taken for read or write. This is common because
protection is achieved via RCU and tdp_mmu_pages_lock or because the
argument is only used for assertions that can be written by hand.

Also always take tdp_mmu_pages_lock even if mmu_lock is currently taken
for write.
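
For context, the "bool shared" argument in these paths exists only to pick
which lockdep assertion to use. A rough sketch (not the exact kernel code,
but close to the kvm_lockdep_assert_mmu_lock_held() helper visible in the
diffs below) of what goes away, and of the weaker assertion that can replace
it when either mode is fine:

        static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
                                                                      bool shared)
        {
                if (shared)
                        lockdep_assert_held_read(&kvm->mmu_lock);   /* read_lock() side */
                else
                        lockdep_assert_held_write(&kvm->mmu_lock);  /* write_lock() side */

                return true;
        }

        /*
         * Callers that only need mmu_lock held in *some* mode can assert the
         * weaker condition directly and drop the "bool shared" plumbing:
         */
        lockdep_assert_held(&kvm->mmu_lock);    /* held for read or write */

The few paths that really do care keep explicit lockdep_assert_held_read()/
lockdep_assert_held_write() calls, as patch 2 shows.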

Paolo Bonzini (4):
KVM: x86/mmu: remove unnecessary "bool shared" argument from functions
KVM: x86/mmu: remove unnecessary "bool shared" argument from iterators
KVM: x86/mmu: always take tdp_mmu_pages_lock
KVM: x86/mmu: fix comment about mmu_unsync_pages_lock

Documentation/virt/kvm/locking.rst | 7 +--
arch/x86/include/asm/kvm_host.h | 11 ++--
arch/x86/kvm/mmu/mmu.c | 6 +-
arch/x86/kvm/mmu/tdp_mmu.c | 90 ++++++++++++++----------------
arch/x86/kvm/mmu/tdp_mmu.h | 3 +-
5 files changed, 54 insertions(+), 63 deletions(-)

--
2.39.1


2023-11-25 08:34:28

by Paolo Bonzini

Subject: [PATCH v2 4/4] KVM: x86/mmu: fix comment about mmu_unsync_pages_lock

Fix the comment about what can and cannot happen when mmu_unsync_pages_lock
is not held. The comment correctly mentions "clearing sp->unsync", but then
it talks about unsync going from 0 to 1; clearing is the 1->0 transition,
and that is the transition which cannot happen while this CPU holds mmu_lock
(even for read).

Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/mmu/mmu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1cb81573a60b..a71b8813febe 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2840,9 +2840,9 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
/*
* Recheck after taking the spinlock, a different vCPU
* may have since marked the page unsync. A false
- * positive on the unprotected check above is not
+ * negative on the unprotected check above is not
* possible as clearing sp->unsync _must_ hold mmu_lock
- * for write, i.e. unsync cannot transition from 0->1
+ * for write, i.e. unsync cannot transition from 1->0
* while this CPU holds mmu_lock for read (or write).
*/
if (READ_ONCE(sp->unsync))
--
2.39.1

2023-11-25 08:34:48

by Paolo Bonzini

Subject: [PATCH v2 1/4] KVM: x86/mmu: remove unnecessary "bool shared" argument from functions

Neither tdp_mmu_next_root nor kvm_tdp_mmu_put_root need to know
if the lock is taken for read or write. Either way, protection
is achieved via RCU and tdp_mmu_pages_lock. Remove the argument
and just assert that the lock is taken.

Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86/kvm/mmu/tdp_mmu.c | 34 +++++++++++++++++++++-------------
arch/x86/kvm/mmu/tdp_mmu.h | 3 +--
3 files changed, 23 insertions(+), 16 deletions(-)

v1->v2: comment tweak

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c57e181bba21..1cb81573a60b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3556,7 +3556,7 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
return;

if (is_tdp_mmu_page(sp))
- kvm_tdp_mmu_put_root(kvm, sp, false);
+ kvm_tdp_mmu_put_root(kvm, sp);
else if (!--sp->root_count && sp->role.invalid)
kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 6cd4dd631a2f..05689c8d45b7 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -73,10 +73,13 @@ static void tdp_mmu_free_sp_rcu_callback(struct rcu_head *head)
tdp_mmu_free_sp(sp);
}

-void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
- bool shared)
+void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root)
{
- kvm_lockdep_assert_mmu_lock_held(kvm, shared);
+ /*
+ * Either read or write is okay, but mmu_lock must be held because
+ * writers are not required to take tdp_mmu_pages_lock.
+ */
+ lockdep_assert_held(&kvm->mmu_lock);

if (!refcount_dec_and_test(&root->tdp_mmu_root_count))
return;
@@ -106,10 +109,16 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
*/
static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
struct kvm_mmu_page *prev_root,
- bool shared, bool only_valid)
+ bool only_valid)
{
struct kvm_mmu_page *next_root;

+ /*
+ * While the roots themselves are RCU-protected, fields such as
+ * role.invalid are protected by mmu_lock.
+ */
+ lockdep_assert_held(&kvm->mmu_lock);
+
rcu_read_lock();

if (prev_root)
@@ -132,7 +141,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
rcu_read_unlock();

if (prev_root)
- kvm_tdp_mmu_put_root(kvm, prev_root, shared);
+ kvm_tdp_mmu_put_root(kvm, prev_root);

return next_root;
}
@@ -144,13 +153,12 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
* recent root. (Unless keeping a live reference is desirable.)
*
* If shared is set, this function is operating under the MMU lock in read
- * mode. In the unlikely event that this thread must free a root, the lock
- * will be temporarily dropped and reacquired in write mode.
+ * mode.
*/
#define __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, _only_valid)\
- for (_root = tdp_mmu_next_root(_kvm, NULL, _shared, _only_valid); \
+ for (_root = tdp_mmu_next_root(_kvm, NULL, _only_valid); \
_root; \
- _root = tdp_mmu_next_root(_kvm, _root, _shared, _only_valid)) \
+ _root = tdp_mmu_next_root(_kvm, _root, _only_valid)) \
if (kvm_lockdep_assert_mmu_lock_held(_kvm, _shared) && \
kvm_mmu_page_as_id(_root) != _as_id) { \
} else
@@ -159,9 +167,9 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
__for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, true)

#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _shared) \
- for (_root = tdp_mmu_next_root(_kvm, NULL, _shared, false); \
+ for (_root = tdp_mmu_next_root(_kvm, NULL, false); \
_root; \
- _root = tdp_mmu_next_root(_kvm, _root, _shared, false)) \
+ _root = tdp_mmu_next_root(_kvm, _root, false)) \
if (!kvm_lockdep_assert_mmu_lock_held(_kvm, _shared)) { \
} else

@@ -891,7 +899,7 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
* the root must be reachable by mmu_notifiers while it's being
* zapped
*/
- kvm_tdp_mmu_put_root(kvm, root, true);
+ kvm_tdp_mmu_put_root(kvm, root);
}

read_unlock(&kvm->mmu_lock);
@@ -1500,7 +1508,7 @@ void kvm_tdp_mmu_try_split_huge_pages(struct kvm *kvm,
for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, shared) {
r = tdp_mmu_split_huge_pages_root(kvm, root, start, end, target_level, shared);
if (r) {
- kvm_tdp_mmu_put_root(kvm, root, shared);
+ kvm_tdp_mmu_put_root(kvm, root);
break;
}
}
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 733a3aef3a96..20d97aa46c49 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -17,8 +17,7 @@ __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
return refcount_inc_not_zero(&root->tdp_mmu_root_count);
}

-void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
- bool shared);
+void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root);

bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush);
bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp);
--
2.39.1


2023-11-25 08:35:02

by Paolo Bonzini

Subject: [PATCH v2 3/4] KVM: x86/mmu: always take tdp_mmu_pages_lock

It is cheap to take tdp_mmu_pages_lock in all write-side critical sections.
We already do it all the time when zapping with read_lock(), so it is not
a problem to do it from the kvm_tdp_mmu_zap_all() path (aka
kvm_arch_flush_shadow_all(), aka VM destruction and MMU notifier release).
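
Put differently (an editor's sketch using the lock names from the hunks
below, not code from the patch): with this change the nesting is
unconditional, and write-side paths no longer branch on whether mmu_lock
is held for read or write:

        /* mmu_lock held for read ... */
        read_lock(&kvm->mmu_lock);
        spin_lock(&kvm->arch.tdp_mmu_pages_lock);
        /* ... update tdp_mmu_roots / possible_nx_huge_pages ... */
        spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
        read_unlock(&kvm->mmu_lock);

        /* ... or for write: the spinlock is now taken just the same. */
        write_lock(&kvm->mmu_lock);
        spin_lock(&kvm->arch.tdp_mmu_pages_lock);
        /* ... */
        spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
        write_unlock(&kvm->mmu_lock);

That matches the updated comment in kvm_host.h: taking the spinlock under a
write-locked mmu_lock is strictly speaking redundant, but doing it
unconditionally keeps the code simple.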

Signed-off-by: Paolo Bonzini <[email protected]>
---
Documentation/virt/kvm/locking.rst | 7 +++----
arch/x86/include/asm/kvm_host.h | 11 ++++++-----
arch/x86/kvm/mmu/tdp_mmu.c | 24 ++++--------------------
3 files changed, 13 insertions(+), 29 deletions(-)

v1->v2: fix kerneldoc

diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 3a034db5e55f..02880d5552d5 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -43,10 +43,9 @@ On x86:

- vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock and kvm->arch.xen.xen_lock

-- kvm->arch.mmu_lock is an rwlock. kvm->arch.tdp_mmu_pages_lock and
- kvm->arch.mmu_unsync_pages_lock are taken inside kvm->arch.mmu_lock, and
- cannot be taken without already holding kvm->arch.mmu_lock (typically with
- ``read_lock`` for the TDP MMU, thus the need for additional spinlocks).
+- kvm->arch.mmu_lock is an rwlock; critical sections for
+ kvm->arch.tdp_mmu_pages_lock and kvm->arch.mmu_unsync_pages_lock must
+ also take kvm->arch.mmu_lock

Everything else is a leaf: no other lock is taken inside the critical
sections.
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d7036982332e..f58d318e37aa 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1406,9 +1406,8 @@ struct kvm_arch {
* the MMU lock in read mode + RCU or
* the MMU lock in write mode
*
- * For writes, this list is protected by:
- * the MMU lock in read mode + the tdp_mmu_pages_lock or
- * the MMU lock in write mode
+ * For writes, this list is protected by tdp_mmu_pages_lock; see
+ * below for the details.
*
* Roots will remain in the list until their tdp_mmu_root_count
* drops to zero, at which point the thread that decremented the
@@ -1425,8 +1424,10 @@ struct kvm_arch {
* - possible_nx_huge_pages;
* - the possible_nx_huge_page_link field of kvm_mmu_page structs used
* by the TDP MMU
- * It is acceptable, but not necessary, to acquire this lock when
- * the thread holds the MMU lock in write mode.
+ * Because the lock is only taken within the MMU lock, strictly
+ * speaking it is redundant to acquire this lock when the thread
+ * holds the MMU lock in write mode. However it often simplifies
+ * the code to do so.
*/
spinlock_t tdp_mmu_pages_lock;
#endif /* CONFIG_X86_64 */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index a85b31a3fc44..d3473f4bf246 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -75,12 +75,6 @@ static void tdp_mmu_free_sp_rcu_callback(struct rcu_head *head)

void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root)
{
- /*
- * Either read or write is okay, but mmu_lock must be held because
- * writers are not required to take tdp_mmu_pages_lock.
- */
- lockdep_assert_held(&kvm->mmu_lock);
-
if (!refcount_dec_and_test(&root->tdp_mmu_root_count))
return;

@@ -281,28 +275,18 @@ static void tdp_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
*
* @kvm: kvm instance
* @sp: the page to be removed
- * @shared: This operation may not be running under the exclusive use of
- * the MMU lock and the operation must synchronize with other
- * threads that might be adding or removing pages.
*/
-static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
- bool shared)
+static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
{
tdp_unaccount_mmu_page(kvm, sp);

if (!sp->nx_huge_page_disallowed)
return;

- if (shared)
- spin_lock(&kvm->arch.tdp_mmu_pages_lock);
- else
- lockdep_assert_held_write(&kvm->mmu_lock);
-
+ spin_lock(&kvm->arch.tdp_mmu_pages_lock);
sp->nx_huge_page_disallowed = false;
untrack_possible_nx_huge_page(kvm, sp);
-
- if (shared)
- spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
+ spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
}

/**
@@ -331,7 +315,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)

trace_kvm_mmu_prepare_zap_page(sp);

- tdp_mmu_unlink_sp(kvm, sp, shared);
+ tdp_mmu_unlink_sp(kvm, sp);

for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
tdp_ptep_t sptep = pt + i;
--
2.39.1


2023-11-25 08:35:08

by Paolo Bonzini

Subject: [PATCH v2 2/4] KVM: x86/mmu: remove unnecessary "bool shared" argument from iterators

The "bool shared" argument is more or less unnecessary in the
for_each_*_tdp_mmu_root_yield_safe() macros. Many users check for
the lock before calling it; all of them either call small functions
that do the check, or end up calling tdp_mmu_set_spte_atomic() and
tdp_mmu_iter_set_spte(). Add a few assertions to make up for the
lost check in for_each_*_tdp_mmu_root_yield_safe(), but even this
is probably overkill and mostly for documentation reasons.
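
The replacement assertion is folded into the iterator itself. A stripped-down
sketch of the trick (for_each_root_checked, first_root() and next_root() are
placeholder names, not the real helpers): the GNU statement expression lets
lockdep_assert_held(), which expands to a statement, sit in expression
context on every loop test, and the comma operator then evaluates _root as
the actual condition:

        #define for_each_root_checked(_kvm, _root)                              \
                for (_root = first_root(_kvm);                                   \
                     ({ lockdep_assert_held(&(_kvm)->mmu_lock); }), _root;       \
                     _root = next_root(_kvm, _root))

This per-iteration assertion replaces the old
kvm_lockdep_assert_mmu_lock_held(_kvm, _shared) check, which was the macros'
only remaining use of the "bool shared" argument.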

Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/mmu/tdp_mmu.c | 50 +++++++++++++++++++-------------------
1 file changed, 25 insertions(+), 25 deletions(-)

v1->v2: keep lockdep_assert_held()

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 05689c8d45b7..a85b31a3fc44 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -155,23 +155,20 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
* If shared is set, this function is operating under the MMU lock in read
* mode.
*/
-#define __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, _only_valid)\
- for (_root = tdp_mmu_next_root(_kvm, NULL, _only_valid); \
- _root; \
- _root = tdp_mmu_next_root(_kvm, _root, _only_valid)) \
- if (kvm_lockdep_assert_mmu_lock_held(_kvm, _shared) && \
- kvm_mmu_page_as_id(_root) != _as_id) { \
+#define __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _only_valid)\
+ for (_root = tdp_mmu_next_root(_kvm, NULL, _only_valid); \
+ ({ lockdep_assert_held(&(_kvm)->mmu_lock); }), _root; \
+ _root = tdp_mmu_next_root(_kvm, _root, _only_valid)) \
+ if (kvm_mmu_page_as_id(_root) != _as_id) { \
} else

-#define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared) \
- __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, true)
+#define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id) \
+ __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, true)

-#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _shared) \
- for (_root = tdp_mmu_next_root(_kvm, NULL, false); \
- _root; \
- _root = tdp_mmu_next_root(_kvm, _root, false)) \
- if (!kvm_lockdep_assert_mmu_lock_held(_kvm, _shared)) { \
- } else
+#define for_each_tdp_mmu_root_yield_safe(_kvm, _root) \
+ for (_root = tdp_mmu_next_root(_kvm, NULL, false); \
+ ({ lockdep_assert_held(&(_kvm)->mmu_lock); }), _root; \
+ _root = tdp_mmu_next_root(_kvm, _root, false))

/*
* Iterate over all TDP MMU roots. Requires that mmu_lock be held for write,
@@ -840,7 +837,8 @@ bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush)
{
struct kvm_mmu_page *root;

- for_each_tdp_mmu_root_yield_safe(kvm, root, false)
+ lockdep_assert_held_write(&kvm->mmu_lock);
+ for_each_tdp_mmu_root_yield_safe(kvm, root)
flush = tdp_mmu_zap_leafs(kvm, root, start, end, true, flush);

return flush;
@@ -862,7 +860,8 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
* is being destroyed or the userspace VMM has exited. In both cases,
* KVM_RUN is unreachable, i.e. no vCPUs will ever service the request.
*/
- for_each_tdp_mmu_root_yield_safe(kvm, root, false)
+ lockdep_assert_held_write(&kvm->mmu_lock);
+ for_each_tdp_mmu_root_yield_safe(kvm, root)
tdp_mmu_zap_root(kvm, root, false);
}

@@ -876,7 +875,7 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)

read_lock(&kvm->mmu_lock);

- for_each_tdp_mmu_root_yield_safe(kvm, root, true) {
+ for_each_tdp_mmu_root_yield_safe(kvm, root) {
if (!root->tdp_mmu_scheduled_root_to_zap)
continue;

@@ -1133,7 +1132,7 @@ bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
{
struct kvm_mmu_page *root;

- __for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false, false)
+ __for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false)
flush = tdp_mmu_zap_leafs(kvm, root, range->start, range->end,
range->may_block, flush);

@@ -1322,7 +1323,7 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,

lockdep_assert_held_read(&kvm->mmu_lock);

- for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
+ for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id)
spte_set |= wrprot_gfn_range(kvm, root, slot->base_gfn,
slot->base_gfn + slot->npages, min_level);

@@ -1354,6 +1355,8 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
{
struct kvm_mmu_page *sp;

+ kvm_lockdep_assert_mmu_lock_held(kvm, shared);
+
/*
* Since we are allocating while under the MMU lock we have to be
* careful about GFP flags. Use GFP_NOWAIT to avoid blocking on direct
@@ -1504,8 +1507,7 @@ void kvm_tdp_mmu_try_split_huge_pages(struct kvm *kvm,
int r = 0;

kvm_lockdep_assert_mmu_lock_held(kvm, shared);
-
- for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, shared) {
+ for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id) {
r = tdp_mmu_split_huge_pages_root(kvm, root, start, end, target_level, shared);
if (r) {
kvm_tdp_mmu_put_root(kvm, root);
@@ -1568,8 +1570,7 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm,
bool spte_set = false;

lockdep_assert_held_read(&kvm->mmu_lock);
-
- for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
+ for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id)
spte_set |= clear_dirty_gfn_range(kvm, root, slot->base_gfn,
slot->base_gfn + slot->npages);

@@ -1703,8 +1704,7 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
struct kvm_mmu_page *root;

lockdep_assert_held_read(&kvm->mmu_lock);
-
- for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
+ for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id)
zap_collapsible_spte_range(kvm, root, slot);
}

--
2.39.1


2023-12-01 01:56:18

by Sean Christopherson

Subject: Re: [PATCH v2 0/4] KVM: x86/mmu: small locking cleanups

On Sat, 25 Nov 2023 03:33:56 -0500, Paolo Bonzini wrote:
> Remove "bool shared" argument from functions and iterators that need
> not know if the lock is taken for read or write. This is common because
> protection is achieved via RCU and tdp_mmu_pages_lock or because the
> argument is only used for assertions that can be written by hand.
>
> Also always take tdp_mmu_pages_lock even if mmu_lock is currently taken
> for write.
>
> [...]

Applied to kvm-x86 mmu, thanks!

[1/4] KVM: x86/mmu: remove unnecessary "bool shared" argument from functions
https://github.com/kvm-x86/linux/commit/2d30059d38e6
[2/4] KVM: x86/mmu: remove unnecessary "bool shared" argument from iterators
https://github.com/kvm-x86/linux/commit/59b93e634b40
[3/4] KVM: x86/mmu: always take tdp_mmu_pages_lock
https://github.com/kvm-x86/linux/commit/4072c73104f2
[4/4] KVM: x86/mmu: fix comment about mmu_unsync_pages_lock
https://github.com/kvm-x86/linux/commit/9dc2973a3b20

--
https://github.com/kvm-x86/linux/tree/next

2023-12-01 16:00:37

by Sean Christopherson

Subject: Re: [PATCH v2 0/4] KVM: x86/mmu: small locking cleanups

On Thu, Nov 30, 2023, Sean Christopherson wrote:
> On Sat, 25 Nov 2023 03:33:56 -0500, Paolo Bonzini wrote:
> > Remove "bool shared" argument from functions and iterators that need
> > not know if the lock is taken for read or write. This is common because
> > protection is achieved via RCU and tdp_mmu_pages_lock or because the
> > argument is only used for assertions that can be written by hand.
> >
> > Also always take tdp_mmu_pages_lock even if mmu_lock is currently taken
> > for write.
> >
> > [...]
>
> Applied to kvm-x86 mmu, thanks!
>
> [1/4] KVM: x86/mmu: remove unnecessary "bool shared" argument from functions
> https://github.com/kvm-x86/linux/commit/2d30059d38e6
> [2/4] KVM: x86/mmu: remove unnecessary "bool shared" argument from iterators
> https://github.com/kvm-x86/linux/commit/59b93e634b40
> [3/4] KVM: x86/mmu: always take tdp_mmu_pages_lock
> https://github.com/kvm-x86/linux/commit/4072c73104f2
> [4/4] KVM: x86/mmu: fix comment about mmu_unsync_pages_lock
> https://github.com/kvm-x86/linux/commit/9dc2973a3b20

FYI, I had to force push to mmu to fixup an unrelated Fixes: issue, new hashes:

[1/4] KVM: x86/mmu: remove unnecessary "bool shared" argument from functions
https://github.com/kvm-x86/linux/commit/5f3c8c9187b6
[2/4] KVM: x86/mmu: remove unnecessary "bool shared" argument from iterators
https://github.com/kvm-x86/linux/commit/484dd27c0602
[3/4] KVM: x86/mmu: always take tdp_mmu_pages_lock
https://github.com/kvm-x86/linux/commit/250ce1b4d21a
[4/4] KVM: x86/mmu: fix comment about mmu_unsync_pages_lock
https://github.com/kvm-x86/linux/commit/e59f75de4e50