From: Laurent Dufour
To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox, benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner, Ingo Molnar, hpa@zytor.com, Will Deacon, Sergey Senozhatsky, Andrea Arcangeli, Alexei Starovoitov, kemi.wang@intel.com, sergey.senozhatsky.work@gmail.com, Daniel Jordan
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com, npiggin@gmail.com, bsingharora@gmail.com, Tim Chen, linuxppc-dev@lists.ozlabs.org, x86@kernel.org
Subject: [PATCH v8 22/24] mm: Speculative page fault handler return VMA
Date: Fri, 16 Feb 2018 16:25:36 +0100
In-Reply-To: <1518794738-4186-1-git-send-email-ldufour@linux.vnet.ibm.com>
References: <1518794738-4186-1-git-send-email-ldufour@linux.vnet.ibm.com>
Message-Id: <1518794738-4186-23-git-send-email-ldufour@linux.vnet.ibm.com>
When the speculative page fault handler returns VM_FAULT_RETRY, there is a chance that the VMA fetched without grabbing the mmap_sem can be reused by the legacy page fault handler. By reusing it, we avoid calling find_vma() again.

To achieve that, we must ensure that the VMA structure will not be freed behind our back. This is done by taking a reference on it (get_vma()) and by requiring that the caller invoke the new service can_reuse_spf_vma() once it has grabbed the mmap_sem.

can_reuse_spf_vma() first checks that the VMA is still in the RB tree and that the VMA's boundaries match the passed address, then releases the reference on the VMA so that it can be freed if needed. In the case the VMA has meanwhile been freed, can_reuse_spf_vma() will have returned false, as the VMA is no longer in the RB tree.
Signed-off-by: Laurent Dufour
---
 include/linux/mm.h |   5 +-
 mm/memory.c        | 136 +++++++++++++++++++++++++++++++++--------------------
 2 files changed, 88 insertions(+), 53 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c383a4e2ceb3..0cd31a37bb3d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1355,7 +1355,10 @@ extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 		unsigned int flags);
 #ifdef CONFIG_SPECULATIVE_PAGE_FAULT
 extern int handle_speculative_fault(struct mm_struct *mm,
-				    unsigned long address, unsigned int flags);
+				    unsigned long address, unsigned int flags,
+				    struct vm_area_struct **vma);
+extern bool can_reuse_spf_vma(struct vm_area_struct *vma,
+			      unsigned long address);
 #endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
 extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
 			    unsigned long address, unsigned int fault_flags,
diff --git a/mm/memory.c b/mm/memory.c
index 2ef686405154..1f5ce5ff79af 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4307,13 +4307,22 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 /* This is required by vm_normal_page() */
 #error "Speculative page fault handler requires __HAVE_ARCH_PTE_SPECIAL"
 #endif
-
 /*
  * vm_normal_page() adds some processing which should be done while
  * hodling the mmap_sem.
  */
+
+/*
+ * Tries to handle the page fault in a speculative way, without grabbing the
+ * mmap_sem.
+ * When VM_FAULT_RETRY is returned, the vma pointer is valid and this vma must
+ * be checked later when the mmap_sem has been grabbed by calling
+ * can_reuse_spf_vma().
+ * This is needed as the returned vma is kept in memory until the call to
+ * can_reuse_spf_vma() is made.
+ */
 int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
-			     unsigned int flags)
+			     unsigned int flags, struct vm_area_struct **vma)
 {
 	struct vm_fault vmf = {
 		.address = address,
@@ -4322,7 +4331,6 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	p4d_t *p4d, p4dval;
 	pud_t pudval;
 	int seq, ret = VM_FAULT_RETRY;
-	struct vm_area_struct *vma;
 #ifdef CONFIG_NUMA
 	struct mempolicy *pol;
 #endif
@@ -4331,14 +4339,16 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
 	flags |= FAULT_FLAG_SPECULATIVE;
 
-	vma = get_vma(mm, address);
-	if (!vma)
+	*vma = get_vma(mm, address);
+	if (!*vma)
 		return ret;
+	vmf.vma = *vma;
 
-	seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
+	/* rmb <-> seqlock,vma_rb_erase() */
+	seq = raw_read_seqcount(&vmf.vma->vm_sequence);
 	if (seq & 1) {
-		trace_spf_vma_changed(_RET_IP_, vma, address);
-		goto out_put;
+		trace_spf_vma_changed(_RET_IP_, vmf.vma, address);
+		return ret;
 	}
 
 	/*
@@ -4346,9 +4356,9 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	 * with the VMA.
 	 * This include huge page from hugetlbfs.
 	 */
-	if (vma->vm_ops) {
-		trace_spf_vma_notsup(_RET_IP_, vma, address);
-		goto out_put;
+	if (vmf.vma->vm_ops) {
+		trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+		return ret;
 	}
 
 	/*
@@ -4356,18 +4366,18 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	 * because vm_next and vm_prev must be safe. This can't be guaranteed
 	 * in the speculative path.
 	 */
-	if (unlikely(!vma->anon_vma)) {
-		trace_spf_vma_notsup(_RET_IP_, vma, address);
-		goto out_put;
+	if (unlikely(!vmf.vma->anon_vma)) {
+		trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+		return ret;
 	}
 
-	vmf.vma_flags = READ_ONCE(vma->vm_flags);
-	vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
+	vmf.vma_flags = READ_ONCE(vmf.vma->vm_flags);
+	vmf.vma_page_prot = READ_ONCE(vmf.vma->vm_page_prot);
 
 	/* Can't call userland page fault handler in the speculative path */
 	if (unlikely(vmf.vma_flags & VM_UFFD_MISSING)) {
-		trace_spf_vma_notsup(_RET_IP_, vma, address);
-		goto out_put;
+		trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+		return ret;
 	}
 
 	if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP) {
@@ -4376,48 +4386,39 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 		 * boundaries but we want to trace it as not supported instead
 		 * of changed.
 		 */
-		trace_spf_vma_notsup(_RET_IP_, vma, address);
-		goto out_put;
+		trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+		return ret;
 	}
 
-	if (address < READ_ONCE(vma->vm_start)
-	    || READ_ONCE(vma->vm_end) <= address) {
-		trace_spf_vma_changed(_RET_IP_, vma, address);
-		goto out_put;
+	if (address < READ_ONCE(vmf.vma->vm_start)
+	    || READ_ONCE(vmf.vma->vm_end) <= address) {
+		trace_spf_vma_changed(_RET_IP_, vmf.vma, address);
+		return ret;
 	}
 
-	if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
+	if (!arch_vma_access_permitted(vmf.vma, flags & FAULT_FLAG_WRITE,
 				       flags & FAULT_FLAG_INSTRUCTION,
-				       flags & FAULT_FLAG_REMOTE)) {
-		trace_spf_vma_access(_RET_IP_, vma, address);
-		ret = VM_FAULT_SIGSEGV;
-		goto out_put;
-	}
+				       flags & FAULT_FLAG_REMOTE))
+		goto out_segv;
 
 	/* This is one is required to check that the VMA has write access set */
 	if (flags & FAULT_FLAG_WRITE) {
-		if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
-			trace_spf_vma_access(_RET_IP_, vma, address);
-			ret = VM_FAULT_SIGSEGV;
-			goto out_put;
-		}
-	} else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
-		trace_spf_vma_access(_RET_IP_, vma, address);
-		ret = VM_FAULT_SIGSEGV;
-		goto out_put;
-	}
+		if (unlikely(!(vmf.vma_flags & VM_WRITE)))
+			goto out_segv;
+	} else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE))))
+		goto out_segv;
 
 #ifdef CONFIG_NUMA
 	/*
 	 * MPOL_INTERLEAVE implies additional check in mpol_misplaced() which
 	 * are not compatible with the speculative page fault processing.
 	 */
-	pol = __get_vma_policy(vma, address);
+	pol = __get_vma_policy(vmf.vma, address);
 	if (!pol)
 		pol = get_task_policy(current);
 	if (pol && pol->mode == MPOL_INTERLEAVE) {
-		trace_spf_vma_notsup(_RET_IP_, vma, address);
-		goto out_put;
+		trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+		return ret;
 	}
 #endif
@@ -4479,9 +4480,8 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 		vmf.pte = NULL;
 	}
 
-	vmf.vma = vma;
-	vmf.pgoff = linear_page_index(vma, address);
-	vmf.gfp_mask = __get_fault_gfp_mask(vma);
+	vmf.pgoff = linear_page_index(vmf.vma, address);
+	vmf.gfp_mask = __get_fault_gfp_mask(vmf.vma);
 	vmf.sequence = seq;
 	vmf.flags = flags;
 
@@ -4491,16 +4491,22 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	 * We need to re-validate the VMA after checking the bounds, otherwise
 	 * we might have a false positive on the bounds.
 	 */
-	if (read_seqcount_retry(&vma->vm_sequence, seq)) {
-		trace_spf_vma_changed(_RET_IP_, vma, address);
-		goto out_put;
+	if (read_seqcount_retry(&vmf.vma->vm_sequence, seq)) {
+		trace_spf_vma_changed(_RET_IP_, vmf.vma, address);
+		return ret;
 	}
 
 	mem_cgroup_oom_enable();
 	ret = handle_pte_fault(&vmf);
 	mem_cgroup_oom_disable();
 
-	put_vma(vma);
+	/*
+	 * If there is no need to retry, don't return the vma to the caller.
+	 */
+	if (!(ret & VM_FAULT_RETRY)) {
+		put_vma(vmf.vma);
+		*vma = NULL;
+	}
 
 	/*
 	 * The task may have entered a memcg OOM situation but
@@ -4513,9 +4519,35 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	return ret;
 
 out_walk:
-	trace_spf_vma_notsup(_RET_IP_, vma, address);
+	trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
 	local_irq_enable();
-out_put:
+	return ret;
+
+out_segv:
+	trace_spf_vma_access(_RET_IP_, vmf.vma, address);
+	/*
+	 * We don't return VM_FAULT_RETRY so the caller is not expected to
+	 * retrieve the fetched VMA.
+	 */
+	put_vma(vmf.vma);
+	*vma = NULL;
+	return VM_FAULT_SIGSEGV;
+}
+
+/*
+ * This is used to know if the vma fetch in the speculative page fault handler
+ * is still valid when trying the regular fault path while holding the
+ * mmap_sem.
+ * The call to put_vma(vma) must be made after checking the vma's fields, as
+ * the vma may be freed by put_vma(). In such a case it is expected that false
+ * is returned.
+ */
+bool can_reuse_spf_vma(struct vm_area_struct *vma, unsigned long address)
+{
+	bool ret;
+
+	ret = !RB_EMPTY_NODE(&vma->vm_rb) &&
+		vma->vm_start <= address && address < vma->vm_end;
 	put_vma(vma);
 	return ret;
 }
-- 
2.7.4