From: Miaohe Lin
Subject: [PATCH v4 3/4] mm/swap: remove confusing checking for non_swap_entry() in swap_ra_info()
Date: Sun, 25 Apr 2021 10:38:05 +0800
Message-ID: <20210425023806.3537283-4-linmiaohe@huawei.com>
In-Reply-To: <20210425023806.3537283-1-linmiaohe@huawei.com>
References: <20210425023806.3537283-1-linmiaohe@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

The non_swap_entry() check was introduced for VMA based swap readahead by commit ec560175c0b6 ("mm, swap: VMA based swap readahead").
At that time, the non_swap_entry() check was necessary because the function was called before that check was done in do_swap_page(). The check was then moved into swap_ra_info() by commit eaf649ebc3ac ("mm: swap: clean up swap readahead"). Since then the non_swap_entry() check has been unnecessary, because swap_ra_info() is only reached after non_swap_entry() has already been checked.

The resulting code is confusing, as the non_swap_entry() check now looks racy: while we released the pte lock, somebody else might have faulted in this pte, so it appears we should first check whether it is a swap pte to guard against such a race, or swap_type would be unexpected. But the race does not matter, because it cannot cause a problem: all the necessary checking is done when we actually operate on the PTE entries later. So remove the non_swap_entry() check here to avoid confusion.

Signed-off-by: Miaohe Lin
---
 mm/swap_state.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 272ea2108c9d..df5405384520 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -721,7 +721,6 @@ static void swap_ra_info(struct vm_fault *vmf,
 {
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long ra_val;
-	swp_entry_t entry;
 	unsigned long faddr, pfn, fpfn;
 	unsigned long start, end;
 	pte_t *pte, *orig_pte;
@@ -739,11 +738,6 @@ static void swap_ra_info(struct vm_fault *vmf,
 	faddr = vmf->address;
 	orig_pte = pte = pte_offset_map(vmf->pmd, faddr);
-	entry = pte_to_swp_entry(*pte);
-	if ((unlikely(non_swap_entry(entry)))) {
-		pte_unmap(orig_pte);
-		return;
-	}
 	fpfn = PFN_DOWN(faddr);
 	ra_val = GET_SWAP_RA_VAL(vma);
--
2.19.1