Date: Mon, 22 Apr 2019 16:33:51 -0400
From: Jerome Glisse
To: Laurent Dufour
Cc: akpm@linux-foundation.org, mhocko@kernel.org, peterz@infradead.org,
	kirill@shutemov.name, ak@linux.intel.com, dave@stgolabs.net,
	jack@suse.cz, Matthew Wilcox, aneesh.kumar@linux.ibm.com,
	benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org,
	Thomas Gleixner, Ingo Molnar, hpa@zytor.com, Will Deacon,
	Sergey Senozhatsky,
	sergey.senozhatsky.work@gmail.com, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang@intel.com, Daniel Jordan,
	David Rientjes, Ganesh Mahendran, Minchan Kim, Punit Agrawal,
	vinayak menon, Yang Shi, zhong jiang, Haiyan Song, Balbir Singh,
	sj38.park@gmail.com, Michel Lespinasse, Mike Rapoport,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	haren@linux.vnet.ibm.com, npiggin@gmail.com,
	paulmck@linux.vnet.ibm.com, Tim Chen,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org
Subject: Re: [PATCH v12 19/31] mm: protect the RB tree with a sequence lock
Message-ID: <20190422203350.GJ14666@redhat.com>
References: <20190416134522.17540-1-ldufour@linux.ibm.com>
	<20190416134522.17540-20-ldufour@linux.ibm.com>
In-Reply-To: <20190416134522.17540-20-ldufour@linux.ibm.com>

On Tue, Apr 16, 2019 at 03:45:10PM +0200, Laurent Dufour wrote:
> Introduce a per-mm_struct seqlock, the mm_seq field, to protect the
> changes made in the MM RB tree. This allows the RB tree to be walked
> without grabbing the mmap_sem and, once the walk is done, to double
> check that the sequence counter was stable during the walk.
> 
> The mm seqlock is held while inserting and removing entries in the MM
> RB tree. Later in this series, it will be checked when looking for a
> VMA without holding the mmap_sem.
> 
> This is based on the initial work from Peter Zijlstra:
> https://lore.kernel.org/linux-mm/20100104182813.479668508@chello.nl/
> 
> Signed-off-by: Laurent Dufour

Reviewed-by: Jérôme Glisse

> ---
>  include/linux/mm_types.h |  3 +++
>  kernel/fork.c            |  3 +++
>  mm/init-mm.c             |  3 +++
>  mm/mmap.c                | 48 +++++++++++++++++++++++++++++++---------
>  4 files changed, 46 insertions(+), 11 deletions(-)
> 
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index e78f72eb2576..24b3f8ce9e42 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -358,6 +358,9 @@ struct mm_struct {
>  	struct {
>  		struct vm_area_struct *mmap;	/* list of VMAs */
>  		struct rb_root mm_rb;
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +		seqlock_t mm_seq;
> +#endif
>  		u64 vmacache_seqnum;	/* per-thread vmacache */
>  #ifdef CONFIG_MMU
>  		unsigned long (*get_unmapped_area) (struct file *filp,
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 2992d2c95256..3a1739197ebc 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -1008,6 +1008,9 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
>  	mm->mmap = NULL;
>  	mm->mm_rb = RB_ROOT;
>  	mm->vmacache_seqnum = 0;
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +	seqlock_init(&mm->mm_seq);
> +#endif
>  	atomic_set(&mm->mm_users, 1);
>  	atomic_set(&mm->mm_count, 1);
>  	init_rwsem(&mm->mmap_sem);
> diff --git a/mm/init-mm.c b/mm/init-mm.c
> index a787a319211e..69346b883a4e 100644
> --- a/mm/init-mm.c
> +++ b/mm/init-mm.c
> @@ -27,6 +27,9 @@
>   */
>  struct mm_struct init_mm = {
>  	.mm_rb		= RB_ROOT,
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +	.mm_seq		= __SEQLOCK_UNLOCKED(init_mm.mm_seq),
> +#endif
>  	.pgd		= swapper_pg_dir,
>  	.mm_users	= ATOMIC_INIT(2),
>  	.mm_count	= ATOMIC_INIT(1),
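Before the mm/mmap.c changes below, it is worth sketching the read side
this counter enables, since the commit message only alludes to it ("it
will be checked when looking for a VMA without holding the mmap_sem").
A minimal sketch using the stock seqlock reader API follows; note that
find_vma_speculative() and __find_vma_rb() are hypothetical stand-ins,
not code from this series, and the real lookup must additionally keep
the returned VMA alive (e.g. via a reference count), which later
patches in the series handle separately.

static struct vm_area_struct *find_vma_speculative(struct mm_struct *mm,
						   unsigned long addr)
{
	struct vm_area_struct *vma;
	unsigned int seq;

	do {
		/* Snapshot the counter; read_seqbegin() spins while it is odd. */
		seq = read_seqbegin(&mm->mm_seq);
		/* Plain RB tree lookup, done without mmap_sem held. */
		vma = __find_vma_rb(mm, addr);
		/* Retry if a writer changed the tree during the walk. */
	} while (read_seqretry(&mm->mm_seq, seq));

	return vma;
}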
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 13460b38b0fb..f7f6027a7dff 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -170,6 +170,24 @@ void unlink_file_vma(struct vm_area_struct *vma)
>  	}
>  }
>  
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +static inline void mm_write_seqlock(struct mm_struct *mm)
> +{
> +	write_seqlock(&mm->mm_seq);
> +}
> +static inline void mm_write_sequnlock(struct mm_struct *mm)
> +{
> +	write_sequnlock(&mm->mm_seq);
> +}
> +#else
> +static inline void mm_write_seqlock(struct mm_struct *mm)
> +{
> +}
> +static inline void mm_write_sequnlock(struct mm_struct *mm)
> +{
> +}
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
> +
>  /*
>   * Close a vm structure and free it, returning the next.
>   */
> @@ -445,26 +463,32 @@ static void vma_gap_update(struct vm_area_struct *vma)
>  }
>  
>  static inline void vma_rb_insert(struct vm_area_struct *vma,
> -				 struct rb_root *root)
> +				 struct mm_struct *mm)
>  {
> +	struct rb_root *root = &mm->mm_rb;
> +
>  	/* All rb_subtree_gap values must be consistent prior to insertion */
>  	validate_mm_rb(root, NULL);
>  
>  	rb_insert_augmented(&vma->vm_rb, root, &vma_gap_callbacks);
>  }
>  
> -static void __vma_rb_erase(struct vm_area_struct *vma, struct rb_root *root)
> +static void __vma_rb_erase(struct vm_area_struct *vma, struct mm_struct *mm)
>  {
> +	struct rb_root *root = &mm->mm_rb;
> +
>  	/*
>  	 * Note rb_erase_augmented is a fairly large inline function,
>  	 * so make sure we instantiate it only once with our desired
>  	 * augmented rbtree callbacks.
>  	 */
> +	mm_write_seqlock(mm);
>  	rb_erase_augmented(&vma->vm_rb, root, &vma_gap_callbacks);
> +	mm_write_sequnlock(mm);	/* wmb */
>  }
>  
>  static __always_inline void vma_rb_erase_ignore(struct vm_area_struct *vma,
> -						struct rb_root *root,
> +						struct mm_struct *mm,
>  						struct vm_area_struct *ignore)
>  {
>  	/*
> @@ -472,21 +496,21 @@ static __always_inline void vma_rb_erase_ignore(struct vm_area_struct *vma,
>  	 * with the possible exception of the "next" vma being erased if
>  	 * next->vm_start was reduced.
>  	 */
> -	validate_mm_rb(root, ignore);
> +	validate_mm_rb(&mm->mm_rb, ignore);
>  
> -	__vma_rb_erase(vma, root);
> +	__vma_rb_erase(vma, mm);
>  }
>  
>  static __always_inline void vma_rb_erase(struct vm_area_struct *vma,
> -					 struct rb_root *root)
> +					 struct mm_struct *mm)
>  {
>  	/*
>  	 * All rb_subtree_gap values must be consistent prior to erase,
>  	 * with the possible exception of the vma being erased.
>  	 */
> -	validate_mm_rb(&mm->mm_rb, vma);
> +	validate_mm_rb(&mm->mm_rb, vma);
>  
> -	__vma_rb_erase(vma, root);
> +	__vma_rb_erase(vma, mm);
>  }
>  
>  /*
> @@ -601,10 +625,12 @@ void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma,
>  	 * immediately update the gap to the correct value. Finally we
>  	 * rebalance the rbtree after all augmented values have been set.
>  	 */
> +	mm_write_seqlock(mm);
>  	rb_link_node(&vma->vm_rb, rb_parent, rb_link);
>  	vma->rb_subtree_gap = 0;
>  	vma_gap_update(vma);
> -	vma_rb_insert(vma, &mm->mm_rb);
> +	vma_rb_insert(vma, mm);
> +	mm_write_sequnlock(mm);
>  }
>  
>  static void __vma_link_file(struct vm_area_struct *vma)
> @@ -680,7 +706,7 @@ static __always_inline void __vma_unlink_common(struct mm_struct *mm,
>  {
>  	struct vm_area_struct *next;
>  
> -	vma_rb_erase_ignore(vma, &mm->mm_rb, ignore);
> +	vma_rb_erase_ignore(vma, mm, ignore);
>  	next = vma->vm_next;
>  	if (has_prev)
>  		prev->vm_next = next;
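The "/* wmb */" note next to mm_write_sequnlock() in __vma_rb_erase()
above is worth unpacking: the seqlock write side brackets the tree
update with counter increments and write barriers, which is what lets a
reader trust a walk whose begin/end counter snapshots match. Simplified
from the generic <linux/seqlock.h> implementation (lockdep
instrumentation elided):

static inline void write_seqlock(seqlock_t *sl)
{
	spin_lock(&sl->lock);		/* serialize writers */
	sl->seqcount.sequence++;	/* counter now odd: walkers must retry */
	smp_wmb();			/* order the bump before the tree update */
}

static inline void write_sequnlock(seqlock_t *sl)
{
	smp_wmb();			/* order the tree update before the bump */
	sl->seqcount.sequence++;	/* counter even again */
	spin_unlock(&sl->lock);
}

In this patch's case the writers are already serialized by mmap_sem
held for writing, so the embedded spinlock is effectively uncontended;
the cost added to the write side is just the two counter updates and
their barriers.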
> @@ -2674,7 +2700,7 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
>  	insertion_point = (prev ? &prev->vm_next : &mm->mmap);
>  	vma->vm_prev = NULL;
>  	do {
> -		vma_rb_erase(vma, &mm->mm_rb);
> +		vma_rb_erase(vma, mm);
>  		mm->map_count--;
>  		tail_vma = vma;
>  		vma = vma->vm_next;
> -- 
> 2.21.0
> 
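Condensed, the overall shape the patch establishes — serialized writers
publishing changes through a sequence counter that lockless readers
validate — is the classic seqlock pattern. A self-contained sketch with
the stock API (the demo struct and function names are illustrative, not
from the series):

struct demo {
	seqlock_t seq;
	unsigned long a, b;	/* stand-ins for the protected tree */
};

/* Writer: already serialized externally (mmap_sem in the mm case). */
static void demo_update(struct demo *d, unsigned long v)
{
	write_seqlock(&d->seq);		/* counter odd: readers retry */
	d->a = v;
	d->b = v + 1;
	write_sequnlock(&d->seq);	/* wmb + counter even */
}

/* Reader: takes no sleeping lock; loops only while racing a writer. */
static unsigned long demo_read(struct demo *d)
{
	unsigned long a, b;
	unsigned int seq;

	do {
		seq = read_seqbegin(&d->seq);
		a = d->a;
		b = d->b;
	} while (read_seqretry(&d->seq, seq));

	return a + b;	/* consistent snapshot: b == a + 1 */
}

The trade-off mirrors the patch: writes get slightly more expensive,
reads avoid mmap_sem entirely in the common no-concurrent-write case,
and a reader never observes a half-updated structure. As with the RB
tree here, though, the seqlock alone does not keep the objects a reader
found from being freed; VMA lifetime is handled by other patches in the
series.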