2020-03-26 07:04:18

by Michel Lespinasse

[permalink] [raw]
Subject: [PATCH 0/8] Add a new mmap locking API wrapping mmap_sem calls

This patch series adds a new mmap locking API replacing the existing
mmap_sem lock and unlock calls. Initially the API is just implemented in
terms of inlined rwsem calls, so it doesn't provide any new functionality.

There are two justifications for the new API:

- At first, it provides an easy hooking point to instrument mmap_sem
locking latencies independently of any other rwsems.

- In the future, it may be a starting point for replacing the rwsem
implementation with a different one, such as range locks. This is
something that is being explored, even though there is no wide consensus
about this possible direction yet.
(see https://patchwork.kernel.org/cover/11401483/)

The changes apply on top of v5.6-rc7.

I think it would be feasible to apply them in the next merge cycle if
given sufficient approval. The coccinelle part of the change is
relatively invasive, but can be skipped over on a file-by-file basis
if it causes any conflicts with other pending changes. The new mmap
locking API can interoperate with code that still uses direct rwsem
calls, until the last patch in the series, which renames mmap_sem to
enforce use of the new API. Maybe that last patch could be delayed a
bit, so that we'd get a chance to convert any new code that locks
mmap_sem in the -rc1 release before applying it.

Michel Lespinasse (8):
mmap locking API: initial implementation as rwsem wrappers
MMU notifier: use the new mmap locking API
mmap locking API: use coccinelle to convert mmap_sem rwsem call sites
mmap locking API: convert mmap_sem call sites missed by coccinelle
mmap locking API: convert nested write lock sites
mmap locking API: add mmap_read_release() and mmap_read_unlock_non_owner()
mmap locking API: add MMAP_LOCK_INITIALIZER
mmap locking API: rename mmap_sem to mmap_lock

arch/alpha/kernel/traps.c | 4 +-
arch/alpha/mm/fault.c | 10 +--
arch/arc/kernel/process.c | 4 +-
arch/arc/kernel/troubleshoot.c | 4 +-
arch/arc/mm/fault.c | 4 +-
arch/arm/kernel/process.c | 4 +-
arch/arm/kernel/swp_emulate.c | 4 +-
arch/arm/lib/uaccess_with_memcpy.c | 16 ++--
arch/arm/mm/fault.c | 6 +-
arch/arm64/kernel/traps.c | 4 +-
arch/arm64/kernel/vdso.c | 8 +-
arch/arm64/mm/fault.c | 8 +-
arch/csky/kernel/vdso.c | 4 +-
arch/csky/mm/fault.c | 8 +-
arch/hexagon/kernel/vdso.c | 4 +-
arch/hexagon/mm/vm_fault.c | 8 +-
arch/ia64/kernel/perfmon.c | 8 +-
arch/ia64/mm/fault.c | 12 +--
arch/ia64/mm/init.c | 12 +--
arch/m68k/kernel/sys_m68k.c | 14 ++--
arch/m68k/mm/fault.c | 8 +-
arch/microblaze/mm/fault.c | 12 +--
arch/mips/kernel/traps.c | 4 +-
arch/mips/kernel/vdso.c | 4 +-
arch/mips/mm/fault.c | 10 +--
arch/nds32/kernel/vdso.c | 6 +-
arch/nds32/mm/fault.c | 12 +--
arch/nios2/mm/fault.c | 12 +--
arch/nios2/mm/init.c | 4 +-
arch/openrisc/mm/fault.c | 10 +--
arch/parisc/kernel/traps.c | 6 +-
arch/parisc/mm/fault.c | 8 +-
arch/powerpc/kernel/vdso.c | 6 +-
arch/powerpc/kvm/book3s_64_mmu_hv.c | 4 +-
arch/powerpc/kvm/book3s_hv.c | 6 +-
arch/powerpc/kvm/book3s_hv_uvmem.c | 12 +--
arch/powerpc/kvm/e500_mmu_host.c | 4 +-
arch/powerpc/mm/book3s64/iommu_api.c | 4 +-
arch/powerpc/mm/book3s64/subpage_prot.c | 12 +--
arch/powerpc/mm/copro_fault.c | 4 +-
arch/powerpc/mm/fault.c | 12 +--
arch/powerpc/oprofile/cell/spu_task_sync.c | 6 +-
arch/powerpc/platforms/cell/spufs/file.c | 4 +-
arch/riscv/kernel/vdso.c | 4 +-
arch/riscv/mm/fault.c | 10 +--
arch/s390/kernel/vdso.c | 4 +-
arch/s390/kvm/gaccess.c | 4 +-
arch/s390/kvm/kvm-s390.c | 24 +++---
arch/s390/kvm/priv.c | 32 ++++----
arch/s390/mm/fault.c | 6 +-
arch/s390/mm/gmap.c | 40 ++++-----
arch/s390/pci/pci_mmio.c | 4 +-
arch/sh/kernel/sys_sh.c | 6 +-
arch/sh/kernel/vsyscall/vsyscall.c | 4 +-
arch/sh/mm/fault.c | 14 ++--
arch/sparc/mm/fault_32.c | 18 ++--
arch/sparc/mm/fault_64.c | 12 +--
arch/sparc/vdso/vma.c | 4 +-
arch/um/include/asm/mmu_context.h | 5 +-
arch/um/kernel/tlb.c | 2 +-
arch/um/kernel/trap.c | 6 +-
arch/unicore32/mm/fault.c | 6 +-
arch/x86/entry/vdso/vma.c | 14 ++--
arch/x86/events/core.c | 4 +-
arch/x86/kernel/tboot.c | 2 +-
arch/x86/kernel/vm86_32.c | 4 +-
arch/x86/kvm/mmu/paging_tmpl.h | 8 +-
arch/x86/mm/fault.c | 10 +--
arch/x86/um/vdso/vma.c | 4 +-
arch/xtensa/mm/fault.c | 10 +--
drivers/android/binder_alloc.c | 10 +--
drivers/dma-buf/dma-resv.c | 4 +-
drivers/firmware/efi/efi.c | 2 +-
.../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 4 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 10 +--
drivers/gpu/drm/amd/amdkfd/kfd_events.c | 4 +-
drivers/gpu/drm/etnaviv/etnaviv_gem.c | 2 +-
drivers/gpu/drm/i915/gem/i915_gem_mman.c | 4 +-
drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 8 +-
drivers/gpu/drm/nouveau/nouveau_svm.c | 20 ++---
drivers/gpu/drm/radeon/radeon_cs.c | 4 +-
drivers/gpu/drm/radeon/radeon_gem.c | 6 +-
drivers/gpu/drm/ttm/ttm_bo_vm.c | 4 +-
drivers/infiniband/core/umem_odp.c | 4 +-
drivers/infiniband/core/uverbs_main.c | 4 +-
drivers/infiniband/hw/mlx4/mr.c | 4 +-
drivers/infiniband/hw/qib/qib_user_pages.c | 6 +-
drivers/infiniband/hw/usnic/usnic_uiom.c | 4 +-
drivers/infiniband/sw/siw/siw_mem.c | 4 +-
drivers/iommu/amd_iommu_v2.c | 4 +-
drivers/iommu/intel-svm.c | 4 +-
drivers/media/v4l2-core/videobuf-core.c | 4 +-
drivers/media/v4l2-core/videobuf-dma-contig.c | 4 +-
drivers/media/v4l2-core/videobuf-dma-sg.c | 4 +-
drivers/misc/cxl/cxllib.c | 4 +-
drivers/misc/cxl/fault.c | 4 +-
drivers/misc/sgi-gru/grufault.c | 16 ++--
drivers/misc/sgi-gru/grufile.c | 4 +-
drivers/oprofile/buffer_sync.c | 10 +--
drivers/staging/kpc2000/kpc_dma/fileops.c | 4 +-
drivers/tee/optee/call.c | 4 +-
drivers/vfio/vfio_iommu_type1.c | 8 +-
drivers/xen/gntdev.c | 4 +-
drivers/xen/privcmd.c | 14 ++--
fs/aio.c | 4 +-
fs/coredump.c | 4 +-
fs/exec.c | 16 ++--
fs/io_uring.c | 4 +-
fs/proc/base.c | 18 ++--
fs/proc/task_mmu.c | 28 +++----
fs/proc/task_nommu.c | 18 ++--
fs/userfaultfd.c | 28 +++----
include/linux/mm.h | 1 +
include/linux/mm_types.h | 2 +-
include/linux/mmap_lock.h | 82 +++++++++++++++++++
include/linux/mmu_notifier.h | 5 +-
ipc/shm.c | 8 +-
kernel/acct.c | 4 +-
kernel/bpf/stackmap.c | 13 ++-
kernel/events/core.c | 4 +-
kernel/events/uprobes.c | 16 ++--
kernel/exit.c | 8 +-
kernel/fork.c | 14 ++--
kernel/futex.c | 4 +-
kernel/sched/fair.c | 4 +-
kernel/sys.c | 18 ++--
kernel/trace/trace_output.c | 4 +-
mm/filemap.c | 6 +-
mm/frame_vector.c | 4 +-
mm/gup.c | 20 ++---
mm/hmm.c | 2 +-
mm/init-mm.c | 2 +-
mm/internal.h | 2 +-
mm/khugepaged.c | 36 ++++----
mm/ksm.c | 34 ++++----
mm/madvise.c | 18 ++--
mm/memcontrol.c | 8 +-
mm/memory.c | 14 ++--
mm/mempolicy.c | 22 ++---
mm/migrate.c | 8 +-
mm/mincore.c | 4 +-
mm/mlock.c | 16 ++--
mm/mmap.c | 36 ++++----
mm/mmu_notifier.c | 22 ++---
mm/mprotect.c | 12 +--
mm/mremap.c | 6 +-
mm/msync.c | 8 +-
mm/nommu.c | 16 ++--
mm/oom_kill.c | 4 +-
mm/pagewalk.c | 15 ++--
mm/process_vm_access.c | 4 +-
mm/ptdump.c | 4 +-
mm/swapfile.c | 4 +-
mm/userfaultfd.c | 14 ++--
mm/util.c | 12 +--
net/ipv4/tcp.c | 4 +-
net/xdp/xdp_umem.c | 4 +-
virt/kvm/arm/mmu.c | 14 ++--
virt/kvm/async_pf.c | 4 +-
virt/kvm/kvm_main.c | 8 +-
160 files changed, 778 insertions(+), 693 deletions(-)
create mode 100644 include/linux/mmap_lock.h

--
2.25.1.696.g5e7596f4ac-goog


2020-03-26 07:04:21

by Michel Lespinasse

[permalink] [raw]
Subject: [PATCH 1/8] mmap locking API: initial implementation as rwsem wrappers

This change wraps the existing mmap_sem related rwsem calls into a new
mmap locking API. There are two justifications for the new API:

- At first, it provides an easy hooking point to instrument mmap_sem
locking latencies independently of any other rwsems.

- In the future, it may be a starting point for replacing the rwsem
implementation with a different one, such as range locks.

Signed-off-by: Michel Lespinasse <[email protected]>
---
include/linux/mm.h | 1 +
include/linux/mmap_lock.h | 59 +++++++++++++++++++++++++++++++++++++++
2 files changed, 60 insertions(+)
create mode 100644 include/linux/mmap_lock.h

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c54fb96cb1e6..2f13c9198999 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -15,6 +15,7 @@
#include <linux/atomic.h>
#include <linux/debug_locks.h>
#include <linux/mm_types.h>
+#include <linux/mmap_lock.h>
#include <linux/range.h>
#include <linux/pfn.h>
#include <linux/percpu-refcount.h>
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
new file mode 100644
index 000000000000..cffd25afe92b
--- /dev/null
+++ b/include/linux/mmap_lock.h
@@ -0,0 +1,59 @@
+#ifndef _LINUX_MMAP_LOCK_H
+#define _LINUX_MMAP_LOCK_H
+
+static inline void mmap_init_lock(struct mm_struct *mm)
+{
+ init_rwsem(&mm->mmap_sem);
+}
+
+static inline void mmap_write_lock(struct mm_struct *mm)
+{
+ down_write(&mm->mmap_sem);
+}
+
+static inline int mmap_write_lock_killable(struct mm_struct *mm)
+{
+ return down_write_killable(&mm->mmap_sem);
+}
+
+static inline bool mmap_write_trylock(struct mm_struct *mm)
+{
+ return down_write_trylock(&mm->mmap_sem) != 0;
+}
+
+static inline void mmap_write_unlock(struct mm_struct *mm)
+{
+ up_write(&mm->mmap_sem);
+}
+
+static inline void mmap_downgrade_write_lock(struct mm_struct *mm)
+{
+ downgrade_write(&mm->mmap_sem);
+}
+
+static inline void mmap_read_lock(struct mm_struct *mm)
+{
+ down_read(&mm->mmap_sem);
+}
+
+static inline int mmap_read_lock_killable(struct mm_struct *mm)
+{
+ return down_read_killable(&mm->mmap_sem);
+}
+
+static inline bool mmap_read_trylock(struct mm_struct *mm)
+{
+ return down_read_trylock(&mm->mmap_sem) != 0;
+}
+
+static inline void mmap_read_unlock(struct mm_struct *mm)
+{
+ up_read(&mm->mmap_sem);
+}
+
+static inline bool mmap_is_locked(struct mm_struct *mm)
+{
+ return rwsem_is_locked(&mm->mmap_sem) != 0;
+}
+
+#endif /* _LINUX_MMAP_LOCK_H */
--
2.25.1.696.g5e7596f4ac-goog

2020-03-26 07:05:07

by Michel Lespinasse

[permalink] [raw]
Subject: [PATCH 7/8] mmap locking API: add MMAP_LOCK_INITIALIZER

Define a new initializer for the mmap locking API.
Initially this just evaluates to __RWSEM_INITIALIZER as the API
is defined as wrappers around rwsem.

Signed-off-by: Michel Lespinasse <[email protected]>
---
arch/x86/kernel/tboot.c | 2 +-
drivers/firmware/efi/efi.c | 2 +-
include/linux/mmap_lock.h | 2 ++
mm/init-mm.c | 2 +-
4 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/tboot.c b/arch/x86/kernel/tboot.c
index b89f6ac6a0c0..4b79335624b1 100644
--- a/arch/x86/kernel/tboot.c
+++ b/arch/x86/kernel/tboot.c
@@ -90,7 +90,7 @@ static struct mm_struct tboot_mm = {
.pgd = swapper_pg_dir,
.mm_users = ATOMIC_INIT(2),
.mm_count = ATOMIC_INIT(1),
- .mmap_sem = __RWSEM_INITIALIZER(init_mm.mmap_sem),
+ .mmap_sem = MMAP_LOCK_INITIALIZER(init_mm.mmap_sem),
.page_table_lock = __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
.mmlist = LIST_HEAD_INIT(init_mm.mmlist),
};
diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
index 21ea99f65113..5bdfe698cd7f 100644
--- a/drivers/firmware/efi/efi.c
+++ b/drivers/firmware/efi/efi.c
@@ -60,7 +60,7 @@ struct mm_struct efi_mm = {
.mm_rb = RB_ROOT,
.mm_users = ATOMIC_INIT(2),
.mm_count = ATOMIC_INIT(1),
- .mmap_sem = __RWSEM_INITIALIZER(efi_mm.mmap_sem),
+ .mmap_sem = MMAP_LOCK_INITIALIZER(efi_mm.mmap_sem),
.page_table_lock = __SPIN_LOCK_UNLOCKED(efi_mm.page_table_lock),
.mmlist = LIST_HEAD_INIT(efi_mm.mmlist),
.cpu_bitmap = { [BITS_TO_LONGS(NR_CPUS)] = 0},
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 00d6cc02581d..7474b15bba38 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -1,6 +1,8 @@
#ifndef _LINUX_MMAP_LOCK_H
#define _LINUX_MMAP_LOCK_H

+#define MMAP_LOCK_INITIALIZER(name) __RWSEM_INITIALIZER(name)
+
static inline void mmap_init_lock(struct mm_struct *mm)
{
init_rwsem(&mm->mmap_sem);
diff --git a/mm/init-mm.c b/mm/init-mm.c
index 19603302a77f..3c128bd6a30c 100644
--- a/mm/init-mm.c
+++ b/mm/init-mm.c
@@ -31,7 +31,7 @@ struct mm_struct init_mm = {
.pgd = swapper_pg_dir,
.mm_users = ATOMIC_INIT(2),
.mm_count = ATOMIC_INIT(1),
- .mmap_sem = __RWSEM_INITIALIZER(init_mm.mmap_sem),
+ .mmap_sem = MMAP_LOCK_INITIALIZER(init_mm.mmap_sem),
.page_table_lock = __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
.arg_lock = __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
.mmlist = LIST_HEAD_INIT(init_mm.mmlist),
--
2.25.1.696.g5e7596f4ac-goog

2020-03-26 07:15:31

by Michel Lespinasse

[permalink] [raw]
Subject: Re: [PATCH 0/8] Add a new mmap locking API wrapping mmap_sem calls

On Thu, Mar 26, 2020 at 12:02 AM Michel Lespinasse <[email protected]> wrote:
> I think it would be feasible to apply them in the next merge cycle if
> given sufficient approval.

Reading this part back, maybe it sounds more pushy than I intended.
What I meant is that I think the code is ready to be submitted through
the normal channels, whatever release it ends up landing in.

2020-03-26 17:58:24

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 1/8] mmap locking API: initial implementation as rwsem wrappers

On Thu, Mar 26, 2020 at 12:02:29AM -0700, Michel Lespinasse wrote:

> +static inline bool mmap_is_locked(struct mm_struct *mm)
> +{
> + return rwsem_is_locked(&mm->mmap_sem) != 0;
> +}

I've been wondering if the various VM_BUG(rwsem_is_locked()) would be
better as lockdep expressions? Certainly when lockdep is enabled it
should be preferred, IMHO.

So, I think if inlines are to be introduced this should be something
similar to netdev's ASSERT_RTNL which seems to have worked well.

Maybe ASSERT_MMAP_SEM_READ/WRITE/HELD() and do the VM_BUG or
lockdep_is_held as appropriate?

Jason

2020-03-26 18:07:13

by Matthew Wilcox

[permalink] [raw]
Subject: Re: [PATCH 1/8] mmap locking API: initial implementation as rwsem wrappers

On Thu, Mar 26, 2020 at 02:56:44PM -0300, Jason Gunthorpe wrote:
> On Thu, Mar 26, 2020 at 12:02:29AM -0700, Michel Lespinasse wrote:
>
> > +static inline bool mmap_is_locked(struct mm_struct *mm)
> > +{
> > + return rwsem_is_locked(&mm->mmap_sem) != 0;
> > +}
>
> I've been wondering if the various VM_BUG(rwsem_is_locked()) would be
> better as lockdep expressions? Certainly when lockdep is enabled it
> should be preferred, IMHO.
>
> So, I think if inlines are to be introduced this should be something
> similar to netdev's ASSERT_RTNL which seems to have worked well.
>
> Maybe ASSERT_MMAP_SEM_READ/WRITE/HELD() and do the VM_BUG or
> lockdep_is_held as appropriate?

I'd rather see lockdep_assert_held() used directly rather than have
a wrapper. This API includes options to check for it being explicitly
held for read/write/any, which should be useful.

2020-03-26 18:10:09

by Matthew Wilcox

[permalink] [raw]
Subject: Re: [PATCH 1/8] mmap locking API: initial implementation as rwsem wrappers

On Thu, Mar 26, 2020 at 11:06:21AM -0700, Matthew Wilcox wrote:
> On Thu, Mar 26, 2020 at 02:56:44PM -0300, Jason Gunthorpe wrote:
> > On Thu, Mar 26, 2020 at 12:02:29AM -0700, Michel Lespinasse wrote:
> >
> > > +static inline bool mmap_is_locked(struct mm_struct *mm)
> > > +{
> > > + return rwsem_is_locked(&mm->mmap_sem) != 0;
> > > +}
> >
> > I've been wondering if the various VM_BUG(rwsem_is_locked()) would be
> > better as lockdep expressions? Certainly when lockdep is enabled it
> > should be preferred, IMHO.
> >
> > So, I think if inlines are to be introduced this should be something
> > similar to netdev's ASSERT_RTNL which seems to have worked well.
> >
> > Maybe ASSERT_MMAP_SEM_READ/WRITE/HELD() and do the VM_BUG or
> > lockdep_is_held as appropriate?
>
> I'd rather see lockdep_assert_held() used directly rather than have
> a wrapper. This API includes options to check for it being explicitly
> held for read/write/any, which should be useful.

... oh, but that requires naming the lock, which we're trying to get
away from.

I guess we need a wrapper, but yes, by all means, let's level it up
to put the VM_BUG_ON inside the wrapper, as that means we can get the
mm dumped everywhere, rather than just the few places that have done
VM_BUG_ON_MM instead of plain VM_BUG_ON.

2020-03-26 22:10:25

by Michel Lespinasse

[permalink] [raw]
Subject: Re: [PATCH 1/8] mmap locking API: initial implementation as rwsem wrappers

I don't think we strongly need an API for such assertions at this
point. There are actually a number of them (mostly the lockdep form)
being handled in the last patch renaming the mmap_sem field.

If a new form of lock is introduced in the future, it is doable to
implement it in such a way that lockdep assertions will work on it (I
have that working in my range locking patchset). For a range lock you
would probably want to add a new API anyway so that the assert can
verify that the specific range is locked, but IMO there is no strong
justification for new assertion APIs until we get there.

If there is no opposition to replacing rwsem_is_locked with
lockdep_assert_held, then I think that is workable. mmap_is_locked()
only has 5 call sites, so that's not a very large change.

On Thu, Mar 26, 2020 at 11:09 AM Matthew Wilcox <[email protected]> wrote:
>
> On Thu, Mar 26, 2020 at 11:06:21AM -0700, Matthew Wilcox wrote:
> > On Thu, Mar 26, 2020 at 02:56:44PM -0300, Jason Gunthorpe wrote:
> > > On Thu, Mar 26, 2020 at 12:02:29AM -0700, Michel Lespinasse wrote:
> > >
> > > > +static inline bool mmap_is_locked(struct mm_struct *mm)
> > > > +{
> > > > + return rwsem_is_locked(&mm->mmap_sem) != 0;
> > > > +}
> > >
> > > I've been wondering if the various VM_BUG(rwsem_is_locked()) would be
> > > better as lockdep expressions? Certainly when lockdep is enabled it
> > > should be preferred, IMHO.
> > >
> > > So, I think if inlines are to be introduced this should be something
> > > similar to netdev's ASSERT_RTNL which seems to have worked well.
> > >
> > > Maybe ASSERT_MMAP_SEM_READ/WRITE/HELD() and do the VM_BUG or
> > > lockdep_is_held as appropriate?
> >
> > I'd rather see lockdep_assert_held() used directly rather than have
> > a wrapper. This API includes options to check for it being explicitly
> > held for read/write/any, which should be useful.
>
> ... oh, but that requires naming the lock, which we're trying to get
> away from.
>
> I guess we need a wrapper, but yes, by all means, let's level it up
> to put the VM_BUG_ON inside the wrapper, as that means we can get the
> mm dumped everywhere, rather than just the few places that have done
> VM_BUG_ON_MM instead of plain VM_BUG_ON.



--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.

2020-04-06 09:47:10

by Laurent Dufour

[permalink] [raw]
Subject: Re: [PATCH 7/8] mmap locking API: add MMAP_LOCK_INITIALIZER

On 26/03/2020 at 08:02, Michel Lespinasse wrote:
> Define a new initializer for the mmap locking api.
> Initially this just evaluates to __RWSEM_INITIALIZER as the API
> is defined as wrappers around rwsem.

I can't see the benefit of this change.
The overall idea is to hide the mmap_sem name. Here the macro
MMAP_LOCK_INITIALIZER() doesn't hide the name.

I think we can keep __RWSEM_INITIALIZER in place until mmap_sem is
really changed to something else.

Cheers,
Laurent.

>
> Signed-off-by: Michel Lespinasse <[email protected]>
> ---
> arch/x86/kernel/tboot.c | 2 +-
> drivers/firmware/efi/efi.c | 2 +-
> include/linux/mmap_lock.h | 2 ++
> mm/init-mm.c | 2 +-
> 4 files changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kernel/tboot.c b/arch/x86/kernel/tboot.c
> index b89f6ac6a0c0..4b79335624b1 100644
> --- a/arch/x86/kernel/tboot.c
> +++ b/arch/x86/kernel/tboot.c
> @@ -90,7 +90,7 @@ static struct mm_struct tboot_mm = {
> .pgd = swapper_pg_dir,
> .mm_users = ATOMIC_INIT(2),
> .mm_count = ATOMIC_INIT(1),
> - .mmap_sem = __RWSEM_INITIALIZER(init_mm.mmap_sem),
> + .mmap_sem = MMAP_LOCK_INITIALIZER(init_mm.mmap_sem),
> .page_table_lock = __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
> .mmlist = LIST_HEAD_INIT(init_mm.mmlist),
> };
> diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
> index 21ea99f65113..5bdfe698cd7f 100644
> --- a/drivers/firmware/efi/efi.c
> +++ b/drivers/firmware/efi/efi.c
> @@ -60,7 +60,7 @@ struct mm_struct efi_mm = {
> .mm_rb = RB_ROOT,
> .mm_users = ATOMIC_INIT(2),
> .mm_count = ATOMIC_INIT(1),
> - .mmap_sem = __RWSEM_INITIALIZER(efi_mm.mmap_sem),
> + .mmap_sem = MMAP_LOCK_INITIALIZER(efi_mm.mmap_sem),
> .page_table_lock = __SPIN_LOCK_UNLOCKED(efi_mm.page_table_lock),
> .mmlist = LIST_HEAD_INIT(efi_mm.mmlist),
> .cpu_bitmap = { [BITS_TO_LONGS(NR_CPUS)] = 0},
> diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> index 00d6cc02581d..7474b15bba38 100644
> --- a/include/linux/mmap_lock.h
> +++ b/include/linux/mmap_lock.h
> @@ -1,6 +1,8 @@
> #ifndef _LINUX_MMAP_LOCK_H
> #define _LINUX_MMAP_LOCK_H
>
> +#define MMAP_LOCK_INITIALIZER(name) __RWSEM_INITIALIZER(name)
> +
> static inline void mmap_init_lock(struct mm_struct *mm)
> {
> init_rwsem(&mm->mmap_sem);
> diff --git a/mm/init-mm.c b/mm/init-mm.c
> index 19603302a77f..3c128bd6a30c 100644
> --- a/mm/init-mm.c
> +++ b/mm/init-mm.c
> @@ -31,7 +31,7 @@ struct mm_struct init_mm = {
> .pgd = swapper_pg_dir,
> .mm_users = ATOMIC_INIT(2),
> .mm_count = ATOMIC_INIT(1),
> - .mmap_sem = __RWSEM_INITIALIZER(init_mm.mmap_sem),
> + .mmap_sem = MMAP_LOCK_INITIALIZER(init_mm.mmap_sem),
> .page_table_lock = __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
> .arg_lock = __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
> .mmlist = LIST_HEAD_INIT(init_mm.mmlist),
>

2020-04-06 13:05:53

by Michel Lespinasse

[permalink] [raw]
Subject: Re: [PATCH 7/8] mmap locking API: add MMAP_LOCK_INITIALIZER

On Mon, Apr 6, 2020 at 2:46 AM Laurent Dufour <[email protected]> wrote:
>
> On 26/03/2020 at 08:02, Michel Lespinasse wrote:
> > Define a new initializer for the mmap locking api.
> > Initially this just evaluates to __RWSEM_INITIALIZER as the API
> > is defined as wrappers around rwsem.
>
> I can't see the benefit of this change.
> The overall idea is to hide the mmap_sem name. Here the macro
> MMAP_LOCK_INITIALIZER() doesn't hide the name.

The idea for the initializer is that it makes it easier to change the
underlying implementation - if we do, we can change the initializer
without having to change every place where it is used. I actually do
that in my other patch series converting the mmap_sem to a range lock.

But you are correct that it does not help with renaming the mmap_sem
field - my next commit in this series still has to do that in every
place this initializer is used.