Date: Fri, 21 Aug 2020 09:34:46 +1000
From: Dave Chinner
To: Gao Xiang
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Carlos Maiolino, Eric Sandeen, "Huang, Ying", Yang Shi,
	Rafael Aquini, stable
Subject: Re: [PATCH v2] mm, THP, swap: fix allocating cluster for swapfile by mistake
Message-ID: <20200820233446.GB7728@dread.disaster.area>
References: <20200820045323.7809-1-hsiangkao@redhat.com>
In-Reply-To: <20200820045323.7809-1-hsiangkao@redhat.com>

On Thu, Aug 20, 2020 at 12:53:23PM +0800, Gao Xiang wrote:
> SWP_FS is used to make swap_{read,write}page() go through
> the filesystem, and it's only used for swap files over NFS.
> So, !SWP_FS means non-NFS for now; it could be either
> file-backed or device-backed. Something similar goes for
> the legacy SWP_FILE.
>
> So, in order to achieve the goal of the original patch,
> SWP_BLKDEV should be used instead.
>
> FS corruption can be observed with an SSD device + XFS +
> fragmented swapfile due to CONFIG_THP_SWAP=y.
>
> I reproduced the issue with the following details:
>
> Environment:
> QEMU + upstream kernel + buildroot + NVMe (2 GB)
>
> Kernel config:
> CONFIG_BLK_DEV_NVME=y
> CONFIG_THP_SWAP=y

Ok, so at its core this is a swap file extent versus THP swap cluster
alignment issue?

> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 6c26916e95fd..2937daf3ca02 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1074,7 +1074,7 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
>  			goto nextsi;
>  		}
>  		if (size == SWAPFILE_CLUSTER) {
> -			if (!(si->flags & SWP_FS))
> +			if (si->flags & SWP_BLKDEV)
>  				n_ret = swap_alloc_cluster(si, swp_entries);
>  		} else
>  			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,

IOWs, if you don't make this change, does the corruption problem go
away if you align swap extents in iomap_swapfile_add_extent() to
(SWAPFILE_CLUSTER * PAGE_SIZE) instead of just PAGE_SIZE?

I.e. if the swapfile extents are correctly aligned to the huge page
swap cluster size and alignment, do the swap clustering optimisations
for swapping THP pages work correctly? And, if so, is there any
performance benefit we get from enabling proper THP swap clustering
on swapfiles?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
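
A minimal userspace sketch of the alignment arithmetic behind the
question above: rounding a swapfile extent's start up and its end down
to a (SWAPFILE_CLUSTER * PAGE_SIZE) boundary. The struct and helper
are hypothetical, with the usual x86_64 values assumed (4 KiB pages,
512-page THP swap clusters); this is not the kernel's
iomap_swapfile_add_extent() implementation, only an illustration of
the trimming it would need to do.

/*
 * Sketch only: trim a swapfile extent so that both ends fall on a THP
 * swap cluster boundary instead of just a page boundary.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT		12
#define PAGE_SIZE		(1ULL << PAGE_SHIFT)		/* 4 KiB, assumed */
#define SWAPFILE_CLUSTER	512ULL				/* pages per THP swap cluster, assumed */
#define CLUSTER_BYTES		(SWAPFILE_CLUSTER * PAGE_SIZE)	/* 2 MiB */

struct swap_extent_sketch {		/* hypothetical, not the kernel's struct swap_extent */
	uint64_t file_offset;		/* byte offset of the extent in the swapfile */
	uint64_t length;		/* extent length in bytes */
};

/*
 * Round the extent start up and the extent end down to CLUSTER_BYTES.
 * Returns 1 if a cluster-aligned region survives, 0 if the extent is
 * too small or misaligned to hold even one THP swap cluster.
 */
static int trim_to_cluster(struct swap_extent_sketch *se)
{
	uint64_t start = (se->file_offset + CLUSTER_BYTES - 1) & ~(CLUSTER_BYTES - 1);
	uint64_t end = (se->file_offset + se->length) & ~(CLUSTER_BYTES - 1);

	if (end <= start)
		return 0;

	se->file_offset = start;
	se->length = end - start;
	return 1;
}

int main(void)
{
	/* A fragmented-swapfile style extent: starts mid-cluster, ~5 MiB long. */
	struct swap_extent_sketch se = {
		.file_offset = 3 * PAGE_SIZE,
		.length = 5 * 1024 * 1024,
	};

	if (trim_to_cluster(&se))
		printf("aligned extent: offset=%llu len=%llu\n",
		       (unsigned long long)se.file_offset,
		       (unsigned long long)se.length);
	else
		printf("extent cannot hold a THP swap cluster\n");

	return 0;
}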