From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org,
    kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net,
    jack@suse.cz, Matthew Wilcox, benh@kernel.crashing.org, mpe@ellerman.id.au,
    paulus@samba.org, Thomas Gleixner, Ingo Molnar, hpa@zytor.com, Will Deacon
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, haren@linux.vnet.ibm.com,
    khandual@linux.vnet.ibm.com, npiggin@gmail.com, bsingharora@gmail.com,
    Tim Chen, linuxppc-dev@lists.ozlabs.org, x86@kernel.org
Subject: [PATCH 12/16] mm: Protect SPF handler against anon_vma changes
Date: Tue, 8 Aug 2017 16:35:45 +0200
Message-Id: <1502202949-8138-13-git-send-email-ldufour@linux.vnet.ibm.com>
In-Reply-To: <1502202949-8138-1-git-send-email-ldufour@linux.vnet.ibm.com>
References: <1502202949-8138-1-git-send-email-ldufour@linux.vnet.ibm.com>
X-Mailer: git-send-email 2.7.4

The speculative page fault handler must be protected against anon_vma
changes, because page_add_new_anon_rmap() is called during the
speculative path.

In addition, don't try a speculative page fault if the VMA doesn't have
an anon_vma structure allocated, because its allocation should be
protected by the mmap_sem.

In __vma_adjust(), when importer->anon_vma is set, there is no need to
protect against speculative page faults since the speculative page
fault is aborted if vma->anon_vma is not set.

When calling page_add_new_anon_rmap(), vma->anon_vma is necessarily
valid since we checked for it when locking the pte, and the anon_vma is
only removed once the pte is unlocked. So even if the speculative page
fault handler runs concurrently with do_munmap(), the change is
detected: the pte is locked in unmap_region() - through unmap_vmas() -
and the anon_vma is unlinked later, while the vma sequence counter is
updated in unmap_page_range() before the pte is locked, and again in
free_pgtables(), so the speculative handler sees the counter change
when it locks the pte.
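What the patch relies on is the classic seqcount protocol: the writer bumps
vma->vm_sequence to an odd value before unlinking the anon_vma and back to an
even value afterwards, while the speculative reader samples the counter first
and re-checks it (and the fields it guards) before trusting what it read.
Below is a minimal, self-contained userspace sketch of that protocol for
readers unfamiliar with it; it is illustrative only, not the kernel code, and
struct vma_stub, unlink_anon_vmas_stub() and read_anon_vma_speculative() are
invented names standing in for vma->vm_sequence, unlink_anon_vmas() and the
read side of handle_speculative_fault().

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct vma_stub {
	atomic_uint vm_sequence;	/* stands in for vma->vm_sequence */
	void *anon_vma;			/* stands in for vma->anon_vma    */
};

/* Writer side: analogue of write_seqcount_begin()/write_seqcount_end(). */
static void unlink_anon_vmas_stub(struct vma_stub *v)
{
	atomic_fetch_add(&v->vm_sequence, 1);	/* counter odd: write in progress */
	v->anon_vma = NULL;			/* the protected update */
	atomic_fetch_add(&v->vm_sequence, 1);	/* counter even: write complete */
}

/*
 * Reader side: sample the counter, read the protected field, then re-check
 * the counter.  Returns false when a writer was active or raced with us,
 * in which case the caller must discard the read and retry or fall back.
 */
static bool read_anon_vma_speculative(struct vma_stub *v, void **out)
{
	unsigned int seq = atomic_load(&v->vm_sequence);

	if (seq & 1)			/* writer in progress */
		return false;

	*out = v->anon_vma;		/* speculative read */

	return atomic_load(&v->vm_sequence) == seq;
}

int main(void)
{
	struct vma_stub v = { .anon_vma = (void *)&v };
	void *snap = NULL;
	bool ok;

	ok = read_anon_vma_speculative(&v, &snap);
	printf("before unlink: consistent=%d anon_vma=%p\n", ok, snap);

	unlink_anon_vmas_stub(&v);

	/* Consistent read, but anon_vma is now NULL: the speculative path
	 * would abort here and fall back to the regular fault path. */
	ok = read_anon_vma_speculative(&v, &snap);
	printf("after unlink:  consistent=%d anon_vma=%p\n", ok, snap);
	return 0;
}

In the kernel, the write_seqcount_begin()/write_seqcount_end() pair added by
this patch plays the writer role above, and the speculative path bails out to
the regular, mmap_sem-protected fault path either when the sequence counter
has moved or when vma->anon_vma is not set.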
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 mm/memory.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 519c28507a93..cb6906435ff5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -587,7 +587,9 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 * Hide vma from rmap and truncate_pagecache before freeing
 		 * pgtables
 		 */
+		write_seqcount_begin(&vma->vm_sequence);
 		unlink_anon_vmas(vma);
+		write_seqcount_end(&vma->vm_sequence);
 		unlink_file_vma(vma);
 
 		if (is_vm_hugetlb_page(vma)) {
@@ -601,7 +603,9 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			       && !is_vm_hugetlb_page(next)) {
 				vma = next;
 				next = vma->vm_next;
+				write_seqcount_begin(&vma->vm_sequence);
 				unlink_anon_vmas(vma);
+				write_seqcount_end(&vma->vm_sequence);
 				unlink_file_vma(vma);
 			}
 			free_pgd_range(tlb, addr, vma->vm_end,
@@ -2403,7 +2407,7 @@ static int wp_page_copy(struct vm_fault *vmf)
 		 * thread doing COW.
 		 */
 		ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
-		page_add_new_anon_rmap(new_page, vma, vmf->address, false);
+		__page_add_new_anon_rmap(new_page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(new_page, memcg, false, false);
 		lru_cache_add_active_or_unevictable(new_page, vma);
 		/*
@@ -2873,7 +2877,7 @@ int do_swap_page(struct vm_fault *vmf)
 		mem_cgroup_commit_charge(page, memcg, true, false);
 		activate_page(page);
 	} else { /* ksm created a completely new copy */
-		page_add_new_anon_rmap(page, vma, vmf->address, false);
+		__page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
 		lru_cache_add_active_or_unevictable(page, vma);
 	}
@@ -3015,7 +3019,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	}
 
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
-	page_add_new_anon_rmap(page, vma, vmf->address, false);
+	__page_add_new_anon_rmap(page, vma, vmf->address, false);
 	mem_cgroup_commit_charge(page, memcg, false, false);
 	lru_cache_add_active_or_unevictable(page, vma);
 setpte:
@@ -3940,6 +3944,9 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	if (address < vma->vm_start || vma->vm_end <= address)
 		goto unlock;
 
+	if (unlikely(!vma->anon_vma))
+		goto unlock;
+
 	/*
 	 * Huge pages are not yet supported.
 	 */
-- 
2.7.4