Subject: Re: [PATCH v9 08/24] mm: Protect VMA modifications using VMA sequence count
From: Laurent Dufour
To: David Rientjes
Cc: paulmck@linux.vnet.ibm.com, peterz@infradead.org,
 akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com,
 mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox,
 benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org,
 Thomas Gleixner, Ingo Molnar, hpa@zytor.com, Will Deacon,
 Sergey Senozhatsky, Andrea Arcangeli, Alexei Starovoitov,
 kemi.wang@intel.com, sergey.senozhatsky.work@gmail.com, Daniel Jordan,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com, npiggin@gmail.com,
 bsingharora@gmail.com, Tim Chen, linuxppc-dev@lists.ozlabs.org,
 x86@kernel.org
Date: Wed, 28 Mar 2018 18:57:21 +0200
Message-Id: <8bb04603-f55e-3c46-8f29-a183ba6ef47b@linux.vnet.ibm.com>
References: <1520963994-28477-1-git-send-email-ldufour@linux.vnet.ibm.com>
 <1520963994-28477-9-git-send-email-ldufour@linux.vnet.ibm.com>

On 27/03/2018 23:45, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
> 
>> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
>> index 65ae54659833..a2d9c87b7b0b 100644
>> --- a/fs/proc/task_mmu.c
>> +++ b/fs/proc/task_mmu.c
>> @@ -1136,8 +1136,11 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
>>  				goto out_mm;
>>  			}
>>  			for (vma = mm->mmap; vma; vma = vma->vm_next) {
>> -				vma->vm_flags &= ~VM_SOFTDIRTY;
>> +				vm_write_begin(vma);
>> +				WRITE_ONCE(vma->vm_flags,
>> +					   vma->vm_flags & ~VM_SOFTDIRTY);
>>  				vma_set_page_prot(vma);
>> +				vm_write_end(vma);
>>  			}
>>  			downgrade_write(&mm->mmap_sem);
>>  			break;
>> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
>> index cec550c8468f..b8212ba17695 100644
>> --- a/fs/userfaultfd.c
>> +++ b/fs/userfaultfd.c
>> @@ -659,8 +659,11 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
>> 
>>  	octx = vma->vm_userfaultfd_ctx.ctx;
>>  	if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) {
>> +		vm_write_begin(vma);
>>  		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
>> -		vma->vm_flags &= ~(VM_UFFD_WP | VM_UFFD_MISSING);
>> +		WRITE_ONCE(vma->vm_flags,
>> +			   vma->vm_flags & ~(VM_UFFD_WP | VM_UFFD_MISSING));
>> +		vm_write_end(vma);
>>  		return 0;
>>  	}
>> 
> 
> In several locations in this patch vm_write_begin(vma) ->
> vm_write_end(vma) is nesting things other than vma->vm_flags,
> vma->vm_policy, etc.  I think it's better to do vm_write_end(vma) as soon
> as the members that the seqcount protects are modified.  In other words,
> this isn't offering protection for vma->vm_userfaultfd_ctx.  There are
> several examples of this in the patch.

That's true in this particular case, and I could change that so the
vm_userfaultfd_ctx update is no longer covered. That being said, I don't
think this will have a major impact, but I'll review this patch closely
to make sure no overly large section of code is left inside the
protected window.
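
For illustration, here is a minimal sketch of the narrower window
suggested above for dup_userfaultfd() (illustrative only, untested, and
assuming, as the review comment implies, that the speculative handler
does not rely on vm_userfaultfd_ctx):

	octx = vma->vm_userfaultfd_ctx.ctx;
	if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) {
		/*
		 * Updated outside the seqcount window: this field is
		 * still serialized by mmap_sem held for writing.
		 */
		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
		/* Only vm_flags needs the write-side seqcount. */
		vm_write_begin(vma);
		WRITE_ONCE(vma->vm_flags,
			   vma->vm_flags & ~(VM_UFFD_WP | VM_UFFD_MISSING));
		vm_write_end(vma);
		return 0;
	}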
>> @@ -885,8 +888,10 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
>>  			vma = prev;
>>  		else
>>  			prev = vma;
>> -		vma->vm_flags = new_flags;
>> +		vm_write_begin(vma);
>> +		WRITE_ONCE(vma->vm_flags, new_flags);
>>  		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
>> +		vm_write_end(vma);
>>  	}
>>  	up_write(&mm->mmap_sem);
>>  	mmput(mm);
>> @@ -1434,8 +1439,10 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
>>  		 * the next vma was merged into the current one and
>>  		 * the current one has not been updated yet.
>>  		 */
>> -		vma->vm_flags = new_flags;
>> +		vm_write_begin(vma);
>> +		WRITE_ONCE(vma->vm_flags, new_flags);
>>  		vma->vm_userfaultfd_ctx.ctx = ctx;
>> +		vm_write_end(vma);
>> 
>>  	skip:
>>  		prev = vma;
>> @@ -1592,8 +1599,10 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
>>  		 * the next vma was merged into the current one and
>>  		 * the current one has not been updated yet.
>>  		 */
>> -		vma->vm_flags = new_flags;
>> +		vm_write_begin(vma);
>> +		WRITE_ONCE(vma->vm_flags, new_flags);
>>  		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
>> +		vm_write_end(vma);
>> 
>>  	skip:
>>  		prev = vma;
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index b7e2268dfc9a..32314e9e48dd 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -1006,6 +1006,7 @@ static void collapse_huge_page(struct mm_struct *mm,
>>  	if (mm_find_pmd(mm, address) != pmd)
>>  		goto out;
>> 
>> +	vm_write_begin(vma);
>>  	anon_vma_lock_write(vma->anon_vma);
>> 
>>  	pte = pte_offset_map(pmd, address);
>> @@ -1041,6 +1042,7 @@ static void collapse_huge_page(struct mm_struct *mm,
>>  		pmd_populate(mm, pmd, pmd_pgtable(_pmd));
>>  		spin_unlock(pmd_ptl);
>>  		anon_vma_unlock_write(vma->anon_vma);
>> +		vm_write_end(vma);
>>  		result = SCAN_FAIL;
>>  		goto out;
>>  	}
>> @@ -1075,6 +1077,7 @@ static void collapse_huge_page(struct mm_struct *mm,
>>  	set_pmd_at(mm, address, pmd, _pmd);
>>  	update_mmu_cache_pmd(vma, address, pmd);
>>  	spin_unlock(pmd_ptl);
>> +	vm_write_end(vma);
>> 
>>  	*hpage = NULL;
>> 
>> diff --git a/mm/madvise.c b/mm/madvise.c
>> index 4d3c922ea1a1..e328f7ab5942 100644
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -184,7 +184,9 @@ static long madvise_behavior(struct vm_area_struct *vma,
>>  	/*
>>  	 * vm_flags is protected by the mmap_sem held in write mode.
>>  	 */
>> -	vma->vm_flags = new_flags;
>> +	vm_write_begin(vma);
>> +	WRITE_ONCE(vma->vm_flags, new_flags);
>> +	vm_write_end(vma);
>>  out:
>>  	return error;
>>  }
>> @@ -450,9 +452,11 @@ static void madvise_free_page_range(struct mmu_gather *tlb,
>>  		.private = tlb,
>>  	};
>> 
>> +	vm_write_begin(vma);
>>  	tlb_start_vma(tlb, vma);
>>  	walk_page_range(addr, end, &free_walk);
>>  	tlb_end_vma(tlb, vma);
>> +	vm_write_end(vma);
>>  }
>> 
>>  static int madvise_free_single_vma(struct vm_area_struct *vma,
>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>> index e0e706f0b34e..2632c6f93b63 100644
>> --- a/mm/mempolicy.c
>> +++ b/mm/mempolicy.c
>> @@ -380,8 +380,11 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
>>  	struct vm_area_struct *vma;
>> 
>>  	down_write(&mm->mmap_sem);
>> -	for (vma = mm->mmap; vma; vma = vma->vm_next)
>> +	for (vma = mm->mmap; vma; vma = vma->vm_next) {
>> +		vm_write_begin(vma);
>>  		mpol_rebind_policy(vma->vm_policy, new);
>> +		vm_write_end(vma);
>> +	}
>>  	up_write(&mm->mmap_sem);
>>  }
>> 
>> @@ -554,9 +557,11 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
>>  {
>>  	int nr_updated;
>> 
>> +	vm_write_begin(vma);
>>  	nr_updated = change_protection(vma, addr, end, PAGE_NONE, 0, 1);
>>  	if (nr_updated)
>>  		count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated);
>> +	vm_write_end(vma);
>> 
>>  	return nr_updated;
>>  }
>> @@ -657,6 +662,7 @@ static int vma_replace_policy(struct vm_area_struct *vma,
>>  	if (IS_ERR(new))
>>  		return PTR_ERR(new);
>> 
>> +	vm_write_begin(vma);
>>  	if (vma->vm_ops && vma->vm_ops->set_policy) {
>>  		err = vma->vm_ops->set_policy(vma, new);
>>  		if (err)
>> @@ -664,11 +670,17 @@ static int vma_replace_policy(struct vm_area_struct *vma,
>>  	}
>> 
>>  	old = vma->vm_policy;
>> -	vma->vm_policy = new;	/* protected by mmap_sem */
>> +	/*
>> +	 * The speculative page fault handler access this field without
>> +	 * hodling the mmap_sem.
>> +	 */
> 
> "The speculative page fault handler accesses this field without holding
> vma->vm_mm->mmap_sem"

Oops :/

> 
>> +	WRITE_ONCE(vma->vm_policy, new);
>> +	vm_write_end(vma);
>>  	mpol_put(old);
>> 
>>  	return 0;
>>  err_out:
>> +	vm_write_end(vma);
>>  	mpol_put(new);
>>  	return err;
>>  }
> 
> Wait, doesn't vma_dup_policy() also need to protect dst->vm_policy?

Actually, this is not necessary, because vma_dup_policy() is called
while dst is not yet linked into the RB tree, so the speculative page
fault handler cannot find it. That is not the case for
vma_replace_policy(), which is why the protection is needed there.

> 
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2121,7 +2121,9 @@ int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
>  
>  	if (IS_ERR(pol))
>  		return PTR_ERR(pol);
> -	dst->vm_policy = pol;
> +	vm_write_begin(dst);
> +	WRITE_ONCE(dst->vm_policy, pol);
> +	vm_write_end(dst);
>  	return 0;
>  }
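
For completeness, here is a simplified sketch of the reader side this
write protection pairs with (the field name vm_sequence and the exact
flow are assumptions for illustration; the real speculative handler in
this series does more than this):

	unsigned int seq;
	unsigned long flags;
	struct mempolicy *pol;

	/*
	 * Sample the VMA's seqcount; an odd value means a writer is
	 * currently inside a vm_write_begin()/vm_write_end() section.
	 */
	seq = raw_read_seqcount(&vma->vm_sequence);
	if (seq & 1)
		return VM_FAULT_RETRY;

	/*
	 * Snapshot the protected fields; READ_ONCE() pairs with the
	 * WRITE_ONCE() on the writer side.
	 */
	flags = READ_ONCE(vma->vm_flags);
	pol = READ_ONCE(vma->vm_policy);

	/* ... speculative fault handling using the snapshot ... */

	/* If a writer raced with us, throw the work away. */
	if (read_seqcount_retry(&vma->vm_sequence, seq))
		return VM_FAULT_RETRY;

A VMA that is not yet linked into the RB tree can never be found by
such a reader, which is why the plain assignment in vma_dup_policy() is
safe without the diff above.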