From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
To: akpm@linux-foundation.org, mhocko@kernel.org, peterz@infradead.org,
    kirill@shutemov.name, ak@linux.intel.com, dave@stgolabs.net, jack@suse.cz,
    Matthew Wilcox, benh@kernel.crashing.org, mpe@ellerman.id.au,
    paulus@samba.org, Thomas Gleixner, Ingo Molnar, hpa@zytor.com,
    Will Deacon, Sergey Senozhatsky, Andrea Arcangeli, Alexei Starovoitov,
    kemi.wang@intel.com, sergey.senozhatsky.work@gmail.com, Daniel Jordan,
    David Rientjes, Jerome Glisse, Ganesh Mahendran
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, haren@linux.vnet.ibm.com,
    khandual@linux.vnet.ibm.com, npiggin@gmail.com, bsingharora@gmail.com,
    paulmck@linux.vnet.ibm.com, Tim Chen, linuxppc-dev@lists.ozlabs.org,
    x86@kernel.org
Subject: [PATCH v10 22/25] mm: speculative page fault handler return VMA
Date: Tue, 17 Apr 2018 16:33:28 +0200
Message-Id: <1523975611-15978-23-git-send-email-ldufour@linux.vnet.ibm.com>
In-Reply-To: <1523975611-15978-1-git-send-email-ldufour@linux.vnet.ibm.com>
References: <1523975611-15978-1-git-send-email-ldufour@linux.vnet.ibm.com>
X-Mailer: git-send-email 2.7.4

When the speculative page fault handler returns VM_FAULT_RETRY, there is a
chance that the VMA fetched without grabbing the mmap_sem can be reused by
the legacy page fault handler. By reusing it, we avoid calling find_vma()
again.

To achieve that, we must ensure that the VMA structure will not be freed
behind our back. This is done by taking a reference on it (get_vma()) and by
assuming that the caller will call the new service can_reuse_spf_vma() once
it has grabbed the mmap_sem.

can_reuse_spf_vma() first checks that the VMA is still in the RB tree and
that the VMA's boundaries match the passed address, and then releases the
reference on the VMA so that it can be freed if needed. If the VMA has been
freed in the meantime, can_reuse_spf_vma() will have returned false since
the VMA is no longer in the RB tree.

In the architecture page fault handler, the call to the new service
reuse_spf_or_find_vma() should be made in place of find_vma(); it handles
the check on the spf_vma and, if needed, calls find_vma().
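For reference, here is a minimal, illustrative sketch (not part of this
patch) of the expected caller-side flow in an architecture fault handler.
The skeleton, the function name example_do_page_fault() and the spf_vma
local are assumptions; since the reuse_spf_or_find_vma() wrapper mentioned
above is not introduced by this patch, the check is open-coded with
can_reuse_spf_vma():

#include <linux/mm.h>

/*
 * Illustrative sketch only -- not part of this patch. It shows how an
 * architecture page fault handler could try the speculative path first
 * and, on VM_FAULT_RETRY, reuse the VMA returned by the speculative
 * handler instead of calling find_vma() again. Real handlers have more
 * checks (stack expansion, retry loop, signal reporting) than shown here.
 */
static int example_do_page_fault(struct mm_struct *mm, unsigned long address,
                                 unsigned int flags)
{
        struct vm_area_struct *vma;
        struct vm_area_struct *spf_vma = NULL;
        int fault;

        /* Speculative attempt: no mmap_sem is taken here. */
        fault = handle_speculative_fault(mm, address, flags, &spf_vma);
        if (fault != VM_FAULT_RETRY)
                return fault;   /* handled speculatively, or e.g. VM_FAULT_SIGSEGV */

        /* Fall back to the regular path under the mmap_sem. */
        down_read(&mm->mmap_sem);

        /*
         * can_reuse_spf_vma() also drops the reference taken by the
         * speculative handler, so it must be called whenever a VMA was
         * returned, even if it turns out not to be reusable.
         */
        if (spf_vma && can_reuse_spf_vma(spf_vma, address))
                vma = spf_vma;
        else
                vma = find_vma(mm, address);

        if (unlikely(!vma || address < vma->vm_start)) {
                /* Simplified: the real code also handles VM_GROWSDOWN. */
                up_read(&mm->mmap_sem);
                return VM_FAULT_SIGSEGV;
        }

        fault = handle_mm_fault(vma, address, flags);
        up_read(&mm->mmap_sem);
        return fault;
}

Whether the speculative VMA is reusable is decided entirely by
can_reuse_spf_vma() under the mmap_sem, so the caller never dereferences
spf_vma without that check.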
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/mm.h |  22 +++++++--
 mm/memory.c        | 140 ++++++++++++++++++++++++++++++++---------------------
 2 files changed, 103 insertions(+), 59 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 08540c98d63b..50b6fd3bf9e2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1382,25 +1382,37 @@ extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 #ifdef CONFIG_SPECULATIVE_PAGE_FAULT
 extern int __handle_speculative_fault(struct mm_struct *mm,
                                       unsigned long address,
-                                      unsigned int flags);
+                                      unsigned int flags,
+                                      struct vm_area_struct **vma);
 static inline int handle_speculative_fault(struct mm_struct *mm,
                                            unsigned long address,
-                                           unsigned int flags)
+                                           unsigned int flags,
+                                           struct vm_area_struct **vma)
 {
        /*
         * Try speculative page fault for multithreaded user space task only.
         */
-       if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
+       if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1) {
+               *vma = NULL;
                return VM_FAULT_RETRY;
-       return __handle_speculative_fault(mm, address, flags);
+       }
+       return __handle_speculative_fault(mm, address, flags, vma);
 }
+extern bool can_reuse_spf_vma(struct vm_area_struct *vma,
+                             unsigned long address);
 #else
 static inline int handle_speculative_fault(struct mm_struct *mm,
                                            unsigned long address,
-                                           unsigned int flags)
+                                           unsigned int flags,
+                                           struct vm_area_struct **vma)
 {
        return VM_FAULT_RETRY;
 }
+static inline bool can_reuse_spf_vma(struct vm_area_struct *vma,
+                                    unsigned long address)
+{
+       return false;
+}
 #endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
 
 extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
diff --git a/mm/memory.c b/mm/memory.c
index 76178feff000..425f07e0bf38 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4311,13 +4311,22 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 /* This is required by vm_normal_page() */
 #error "Speculative page fault handler requires __HAVE_ARCH_PTE_SPECIAL"
 #endif
-
 /*
  * vm_normal_page() adds some processing which should be done while
  * hodling the mmap_sem.
  */
+
+/*
+ * Tries to handle the page fault in a speculative way, without grabbing the
+ * mmap_sem.
+ * When VM_FAULT_RETRY is returned, the vma pointer is valid and this vma must
+ * be checked later when the mmap_sem has been grabbed by calling
+ * can_reuse_spf_vma().
+ * This is needed as the returned vma is kept in memory until the call to
+ * can_reuse_spf_vma() is made.
+ */
 int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
-                              unsigned int flags)
+                              unsigned int flags, struct vm_area_struct **vma)
 {
        struct vm_fault vmf = {
                .address = address,
@@ -4325,21 +4334,22 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
        pgd_t *pgd, pgdval;
        p4d_t *p4d, p4dval;
        pud_t pudval;
-       int seq, ret = VM_FAULT_RETRY;
-       struct vm_area_struct *vma;
+       int seq, ret;
 
        /* Clear flags that may lead to release the mmap_sem to retry */
        flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
        flags |= FAULT_FLAG_SPECULATIVE;
 
-       vma = get_vma(mm, address);
-       if (!vma)
-               return ret;
+       *vma = get_vma(mm, address);
+       if (!*vma)
+               return VM_FAULT_RETRY;
+       vmf.vma = *vma;
 
-       seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
+       /* rmb <-> seqlock,vma_rb_erase() */
+       seq = raw_read_seqcount(&vmf.vma->vm_sequence);
        if (seq & 1) {
-               trace_spf_vma_changed(_RET_IP_, vma, address);
-               goto out_put;
+               trace_spf_vma_changed(_RET_IP_, vmf.vma, address);
+               return VM_FAULT_RETRY;
        }
 
        /*
@@ -4347,9 +4357,9 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
         * with the VMA.
         * This include huge page from hugetlbfs.
         */
-       if (vma->vm_ops) {
-               trace_spf_vma_notsup(_RET_IP_, vma, address);
-               goto out_put;
+       if (vmf.vma->vm_ops) {
+               trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+               return VM_FAULT_RETRY;
        }
 
        /*
@@ -4357,18 +4367,18 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
         * because vm_next and vm_prev must be safe. This can't be guaranteed
         * in the speculative path.
         */
-       if (unlikely(!vma->anon_vma)) {
-               trace_spf_vma_notsup(_RET_IP_, vma, address);
-               goto out_put;
+       if (unlikely(!vmf.vma->anon_vma)) {
+               trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+               return VM_FAULT_RETRY;
        }
 
-       vmf.vma_flags = READ_ONCE(vma->vm_flags);
-       vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
+       vmf.vma_flags = READ_ONCE(vmf.vma->vm_flags);
+       vmf.vma_page_prot = READ_ONCE(vmf.vma->vm_page_prot);
 
        /* Can't call userland page fault handler in the speculative path */
        if (unlikely(vmf.vma_flags & VM_UFFD_MISSING)) {
-               trace_spf_vma_notsup(_RET_IP_, vma, address);
-               goto out_put;
+               trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+               return VM_FAULT_RETRY;
        }
 
        if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP) {
@@ -4377,36 +4387,27 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
                 * boundaries but we want to trace it as not supported instead
                 * of changed.
                 */
-               trace_spf_vma_notsup(_RET_IP_, vma, address);
-               goto out_put;
+               trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+               return VM_FAULT_RETRY;
        }
 
-       if (address < READ_ONCE(vma->vm_start)
-           || READ_ONCE(vma->vm_end) <= address) {
-               trace_spf_vma_changed(_RET_IP_, vma, address);
-               goto out_put;
+       if (address < READ_ONCE(vmf.vma->vm_start)
+           || READ_ONCE(vmf.vma->vm_end) <= address) {
+               trace_spf_vma_changed(_RET_IP_, vmf.vma, address);
+               return VM_FAULT_RETRY;
        }
 
-       if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
+       if (!arch_vma_access_permitted(vmf.vma, flags & FAULT_FLAG_WRITE,
                                       flags & FAULT_FLAG_INSTRUCTION,
-                                      flags & FAULT_FLAG_REMOTE)) {
-               trace_spf_vma_access(_RET_IP_, vma, address);
-               ret = VM_FAULT_SIGSEGV;
-               goto out_put;
-       }
+                                      flags & FAULT_FLAG_REMOTE))
+               goto out_segv;
 
        /* This is one is required to check that the VMA has write access set */
        if (flags & FAULT_FLAG_WRITE) {
-               if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
-                       trace_spf_vma_access(_RET_IP_, vma, address);
-                       ret = VM_FAULT_SIGSEGV;
-                       goto out_put;
-               }
-       } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
-               trace_spf_vma_access(_RET_IP_, vma, address);
-               ret = VM_FAULT_SIGSEGV;
-               goto out_put;
-       }
+               if (unlikely(!(vmf.vma_flags & VM_WRITE)))
+                       goto out_segv;
+       } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE))))
+               goto out_segv;
 
        if (IS_ENABLED(CONFIG_NUMA)) {
                struct mempolicy *pol;
@@ -4416,12 +4417,12 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
                 * mpol_misplaced() which are not compatible with the
                 * speculative page fault processing.
                 */
-               pol = __get_vma_policy(vma, address);
+               pol = __get_vma_policy(vmf.vma, address);
                if (!pol)
                        pol = get_task_policy(current);
                if (pol && pol->mode == MPOL_INTERLEAVE) {
-                       trace_spf_vma_notsup(_RET_IP_, vma, address);
-                       goto out_put;
+                       trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+                       return VM_FAULT_RETRY;
                }
        }
 
@@ -4483,9 +4484,8 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
                vmf.pte = NULL;
        }
 
-       vmf.vma = vma;
-       vmf.pgoff = linear_page_index(vma, address);
-       vmf.gfp_mask = __get_fault_gfp_mask(vma);
+       vmf.pgoff = linear_page_index(vmf.vma, address);
+       vmf.gfp_mask = __get_fault_gfp_mask(vmf.vma);
        vmf.sequence = seq;
        vmf.flags = flags;
 
@@ -4495,16 +4495,22 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
         * We need to re-validate the VMA after checking the bounds, otherwise
         * we might have a false positive on the bounds.
         */
-       if (read_seqcount_retry(&vma->vm_sequence, seq)) {
-               trace_spf_vma_changed(_RET_IP_, vma, address);
-               goto out_put;
+       if (read_seqcount_retry(&vmf.vma->vm_sequence, seq)) {
+               trace_spf_vma_changed(_RET_IP_, vmf.vma, address);
+               return VM_FAULT_RETRY;
        }
 
        mem_cgroup_oom_enable();
        ret = handle_pte_fault(&vmf);
        mem_cgroup_oom_disable();
 
-       put_vma(vma);
+       /*
+        * If there is no need to retry, don't return the vma to the caller.
+        */
+       if (ret != VM_FAULT_RETRY) {
+               put_vma(vmf.vma);
+               *vma = NULL;
+       }
 
        /*
         * The task may have entered a memcg OOM situation but
@@ -4517,9 +4523,35 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
        return ret;
 
 out_walk:
-       trace_spf_vma_notsup(_RET_IP_, vma, address);
+       trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
        local_irq_enable();
-out_put:
+       return VM_FAULT_RETRY;
+
+out_segv:
+       trace_spf_vma_access(_RET_IP_, vmf.vma, address);
+       /*
+        * We don't return VM_FAULT_RETRY so the caller is not expected to
+        * retrieve the fetched VMA.
+        */
+       put_vma(vmf.vma);
+       *vma = NULL;
+       return VM_FAULT_SIGSEGV;
+}
+
+/*
+ * This is used to know if the vma fetch in the speculative page fault handler
+ * is still valid when trying the regular fault path while holding the
+ * mmap_sem.
+ * The call to put_vma(vma) must be made after checking the vma's fields, as
+ * the vma may be freed by put_vma(). In such a case it is expected that false
+ * is returned.
+ */
+bool can_reuse_spf_vma(struct vm_area_struct *vma, unsigned long address)
+{
+       bool ret;
+
+       ret = !RB_EMPTY_NODE(&vma->vm_rb) &&
+               vma->vm_start <= address && address < vma->vm_end;
        put_vma(vma);
        return ret;
 }
-- 
2.7.4