Date: Tue, 27 Mar 2018 14:45:33 -0700 (PDT)
From: David Rientjes
To: Laurent Dufour
Cc: paulmck@linux.vnet.ibm.com, peterz@infradead.org,
    akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com,
    mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox,
    benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org,
    Thomas Gleixner, Ingo Molnar, hpa@zytor.com, Will Deacon,
    Sergey Senozhatsky, Andrea Arcangeli, Alexei Starovoitov,
    kemi.wang@intel.com, sergey.senozhatsky.work@gmail.com, Daniel Jordan,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com,
    npiggin@gmail.com, bsingharora@gmail.com, Tim Chen,
    linuxppc-dev@lists.ozlabs.org, x86@kernel.org
Subject: Re: [PATCH v9 08/24] mm: Protect VMA modifications using VMA sequence count
In-Reply-To: <1520963994-28477-9-git-send-email-ldufour@linux.vnet.ibm.com>
References: <1520963994-28477-1-git-send-email-ldufour@linux.vnet.ibm.com> <1520963994-28477-9-git-send-email-ldufour@linux.vnet.ibm.com>
User-Agent: Alpine 2.20 (DEB 67 2015-01-07)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 13 Mar 2018, Laurent Dufour wrote:

> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 65ae54659833..a2d9c87b7b0b 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1136,8 +1136,11 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
>  				goto out_mm;
>  			}
>  			for (vma = mm->mmap; vma; vma = vma->vm_next) {
> -				vma->vm_flags &= ~VM_SOFTDIRTY;
> +				vm_write_begin(vma);
> +				WRITE_ONCE(vma->vm_flags,
> +					   vma->vm_flags & ~VM_SOFTDIRTY);
>  				vma_set_page_prot(vma);
> +				vm_write_end(vma);
>  			}
>  			downgrade_write(&mm->mmap_sem);
>  			break;
> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> index cec550c8468f..b8212ba17695 100644
> --- a/fs/userfaultfd.c
> +++ b/fs/userfaultfd.c
> @@ -659,8 +659,11 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
>
>  	octx = vma->vm_userfaultfd_ctx.ctx;
>  	if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) {
> +		vm_write_begin(vma);
>  		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
> -		vma->vm_flags &= ~(VM_UFFD_WP | VM_UFFD_MISSING);
> +		WRITE_ONCE(vma->vm_flags,
> +			   vma->vm_flags & ~(VM_UFFD_WP | VM_UFFD_MISSING));
> +		vm_write_end(vma);
>  		return 0;
>  	}
>

In several locations in this patch vm_write_begin(vma) -> vm_write_end(vma)
is nesting things other than vma->vm_flags, vma->vm_policy, etc.  I think
it's better to do vm_write_end(vma) as soon as the members that the
seqcount protects are modified.  In other words, this isn't offering
protection for vma->vm_userfaultfd_ctx.  There are several examples of
this in the patch.

> @@ -885,8 +888,10 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
>  			vma = prev;
>  		else
>  			prev = vma;
> -		vma->vm_flags = new_flags;
> +		vm_write_begin(vma);
> +		WRITE_ONCE(vma->vm_flags, new_flags);
>  		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
> +		vm_write_end(vma);
>  	}
>  	up_write(&mm->mmap_sem);
>  	mmput(mm);
> @@ -1434,8 +1439,10 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
>  		 * the next vma was merged into the current one and
>  		 * the current one has not been updated yet.
>  		 */
> -		vma->vm_flags = new_flags;
> +		vm_write_begin(vma);
> +		WRITE_ONCE(vma->vm_flags, new_flags);
>  		vma->vm_userfaultfd_ctx.ctx = ctx;
> +		vm_write_end(vma);
>
>  	skip:
>  		prev = vma;
> @@ -1592,8 +1599,10 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
>  		 * the next vma was merged into the current one and
>  		 * the current one has not been updated yet.
>  		 */
> -		vma->vm_flags = new_flags;
> +		vm_write_begin(vma);
> +		WRITE_ONCE(vma->vm_flags, new_flags);
>  		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
> +		vm_write_end(vma);
>
>  	skip:
>  		prev = vma;
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index b7e2268dfc9a..32314e9e48dd 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1006,6 +1006,7 @@ static void collapse_huge_page(struct mm_struct *mm,
>  	if (mm_find_pmd(mm, address) != pmd)
>  		goto out;
>
> +	vm_write_begin(vma);
>  	anon_vma_lock_write(vma->anon_vma);
>
>  	pte = pte_offset_map(pmd, address);
> @@ -1041,6 +1042,7 @@ static void collapse_huge_page(struct mm_struct *mm,
>  		pmd_populate(mm, pmd, pmd_pgtable(_pmd));
>  		spin_unlock(pmd_ptl);
>  		anon_vma_unlock_write(vma->anon_vma);
> +		vm_write_end(vma);
>  		result = SCAN_FAIL;
>  		goto out;
>  	}
> @@ -1075,6 +1077,7 @@ static void collapse_huge_page(struct mm_struct *mm,
>  	set_pmd_at(mm, address, pmd, _pmd);
>  	update_mmu_cache_pmd(vma, address, pmd);
>  	spin_unlock(pmd_ptl);
> +	vm_write_end(vma);
>
>  	*hpage = NULL;
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 4d3c922ea1a1..e328f7ab5942 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -184,7 +184,9 @@ static long madvise_behavior(struct vm_area_struct *vma,
>  	/*
>  	 * vm_flags is protected by the mmap_sem held in write mode.
>  	 */
> -	vma->vm_flags = new_flags;
> +	vm_write_begin(vma);
> +	WRITE_ONCE(vma->vm_flags, new_flags);
> +	vm_write_end(vma);
>  out:
>  	return error;
>  }
> @@ -450,9 +452,11 @@ static void madvise_free_page_range(struct mmu_gather *tlb,
>  		.private = tlb,
>  	};
>
> +	vm_write_begin(vma);
>  	tlb_start_vma(tlb, vma);
>  	walk_page_range(addr, end, &free_walk);
>  	tlb_end_vma(tlb, vma);
> +	vm_write_end(vma);
>  }
>
>  static int madvise_free_single_vma(struct vm_area_struct *vma,
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index e0e706f0b34e..2632c6f93b63 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -380,8 +380,11 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
>  	struct vm_area_struct *vma;
>
>  	down_write(&mm->mmap_sem);
> -	for (vma = mm->mmap; vma; vma = vma->vm_next)
> +	for (vma = mm->mmap; vma; vma = vma->vm_next) {
> +		vm_write_begin(vma);
>  		mpol_rebind_policy(vma->vm_policy, new);
> +		vm_write_end(vma);
> +	}
>  	up_write(&mm->mmap_sem);
>  }
>
> @@ -554,9 +557,11 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
>  {
>  	int nr_updated;
>
> +	vm_write_begin(vma);
>  	nr_updated = change_protection(vma, addr, end, PAGE_NONE, 0, 1);
>  	if (nr_updated)
>  		count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated);
> +	vm_write_end(vma);
>
>  	return nr_updated;
>  }
> @@ -657,6 +662,7 @@ static int vma_replace_policy(struct vm_area_struct *vma,
>  	if (IS_ERR(new))
>  		return PTR_ERR(new);
>
> +	vm_write_begin(vma);
>  	if (vma->vm_ops && vma->vm_ops->set_policy) {
>  		err = vma->vm_ops->set_policy(vma, new);
>  		if (err)
> @@ -664,11 +670,17 @@ static int vma_replace_policy(struct vm_area_struct *vma,
>  	}
>
>  	old = vma->vm_policy;
> -	vma->vm_policy = new;	/* protected by mmap_sem */
> +	/*
> +	 * The speculative page fault handler access this field without
> +	 * hodling the mmap_sem.
> +	 */

	"The speculative page fault handler accesses this field without
	 holding vma->vm_mm->mmap_sem"

> +	WRITE_ONCE(vma->vm_policy, new);
> +	vm_write_end(vma);
>  	mpol_put(old);
>
>  	return 0;
>  err_out:
> +	vm_write_end(vma);
>  	mpol_put(new);
>  	return err;
>  }

Wait, doesn't vma_dup_policy() also need to protect dst->vm_policy?
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2121,7 +2121,9 @@ int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
 	if (IS_ERR(pol))
 		return PTR_ERR(pol);

-	dst->vm_policy = pol;
+	vm_write_begin(dst);
+	WRITE_ONCE(dst->vm_policy, pol);
+	vm_write_end(dst);
 	return 0;
 }
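
Going back to the earlier point about ending the write side as soon as the
seqcount-protected members are updated: for the dup_userfaultfd() hunk
quoted above, an untested sketch of what I have in mind (assuming
vma->vm_userfaultfd_ctx stays serialized by mmap_sem alone and does not
need the seqcount) would be

	octx = vma->vm_userfaultfd_ctx.ctx;
	if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) {
		/* Assumed not to need the seqcount; mmap_sem still serializes this. */
		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
		/* Only the vm_flags update sits inside the write section. */
		vm_write_begin(vma);
		WRITE_ONCE(vma->vm_flags,
			   vma->vm_flags & ~(VM_UFFD_WP | VM_UFFD_MISSING));
		vm_write_end(vma);
		return 0;
	}

The other userfaultfd hunks above could be narrowed the same way.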