2022-07-08 15:38:49

by Mel Gorman

Subject: [PATCH] mm/page_alloc: replace local_lock with normal spinlock -fix -fix

pcpu_spin_unlock and pcpu_spin_unlock_irqrestore both unlock
pcp->lock and then enable preemption. This lacks symmetry with the
pcpu_spin lock helpers and differs from how local_unlock_* is
implemented. While this is harmless, it's unnecessary, and it's
generally better to unwind locks and preemption state in the reverse
order to that in which they were acquired.
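
For context, the task pinning helpers and the acquire-side macro in the
mm-unstable tree at this point look roughly like the sketch below. This
is paraphrased for illustration only and is not part of this patch; the
exact definitions live in mm/page_alloc.c.

/* Task pinning: preemption off on !PREEMPT_RT, migration off on PREEMPT_RT */
#ifndef CONFIG_PREEMPT_RT
#define pcpu_task_pin()		preempt_disable()
#define pcpu_task_unpin()	preempt_enable()
#else
#define pcpu_task_pin()		migrate_disable()
#define pcpu_task_unpin()	migrate_enable()
#endif

/* Acquire side: pin the task first, then take the per-cpu lock */
#define pcpu_spin_lock(type, member, ptr)				\
({									\
	type *_ret;							\
	pcpu_task_pin();						\
	_ret = this_cpu_ptr(ptr);					\
	spin_lock(&_ret->member);					\
	_ret;								\
})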

This is a fix on top of the mm-unstable patch
mm-page_alloc-replace-local_lock-with-normal-spinlock-fix.patch

Signed-off-by: Mel Gorman <[email protected]>
---
mm/page_alloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 934d1b5a5449..d0141e51e613 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -192,14 +192,14 @@ static DEFINE_MUTEX(pcp_batch_high_lock);

 #define pcpu_spin_unlock(member, ptr)					\
 ({									\
-	spin_unlock(&ptr->member);					\
 	pcpu_task_unpin();						\
+	spin_unlock(&ptr->member);					\
 })
 
 #define pcpu_spin_unlock_irqrestore(member, ptr, flags)		\
 ({									\
-	spin_unlock_irqrestore(&ptr->member, flags);			\
 	pcpu_task_unpin();						\
+	spin_unlock_irqrestore(&ptr->member, flags);			\
 })
 
 /* struct per_cpu_pages specific helpers. */