From: Yang Shi
Date: Wed, 9 Jun 2021 09:53:46 -0700
Subject: Re: [PATCH v2 01/10] mm/thp: fix __split_huge_pmd_locked() on shmem migration entry (fwd)
To: Hugh Dickins
Cc: Linux MM, Linux Kernel Mailing List
In-Reply-To: <59d94b4-c0dd-310-894-be99416f3c92@google.com>
References: <59d94b4-c0dd-310-894-be99416f3c92@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jun 8, 2021 at 9:05 PM Hugh Dickins wrote:
>
>
>
> ---------- Forwarded message ----------
> Date: Tue, 8 Jun 2021 21:00:12 -0700 (PDT)
> From: Hugh Dickins
> To: Andrew Morton
> Cc: Hugh Dickins, Kirill A. Shutemov, Yang Shi, Wang Yugui,
>     Matthew Wilcox, Naoya Horiguchi, Alistair Popple, Ralph Campbell,
>     Zi Yan, Miaohe Lin, Minchan Kim, Jue Wang, Peter Xu, Jan Kara,
>     Shakeel Butt, Oscar Salvador
> Subject: [PATCH v2 01/10] mm/thp: fix __split_huge_pmd_locked() on shmem
>     migration entry
>
> Stressing huge tmpfs page migration racing hole punch often crashed on the
> VM_BUG_ON(!pmd_present) in pmdp_huge_clear_flush(), with DEBUG_VM=y kernel;
> or shortly afterwards, on a bad dereference in __split_huge_pmd_locked()
> when DEBUG_VM=n. They forgot to allow for pmd migration entries in the
> non-anonymous case.
>
> Full disclosure: those particular experiments were on a kernel with more
> relaxed mmap_lock and i_mmap_rwsem locking, and were not repeated on the
> vanilla kernel: it is conceivable that stricter locking happens to avoid
> those cases, or makes them less likely; but __split_huge_pmd_locked()
> already allowed for pmd migration entries when handling anonymous THPs,
> so this commit brings the shmem and file THP handling into line.
>
> And while there: use old_pmd rather than _pmd, as in the following blocks;
> and make it clearer to the eye that the !vma_is_anonymous() block is
> self-contained, making an early return after accounting for unmapping.
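
For anyone skimming the thread, the crux of the first hunk below is that in
the !vma_is_anonymous() path the cleared pmd may by now hold a migration
entry rather than a present huge pmd, so it must not be handed to pmd_page()
and the dirty/young helpers. A condensed sketch of that logic, using the same
kernel helpers as the patch (the dirty/young/rmap handling of the present
case is trimmed here for brevity):

        /*
         * A pmd migration entry is a swap-style entry: pmd_present() is
         * false for it, and the page must be recovered via the entry,
         * not via pmd_page() as the old code did unconditionally.
         */
        if (unlikely(is_pmd_migration_entry(old_pmd))) {
                swp_entry_t entry = pmd_to_swp_entry(old_pmd);

                page = migration_entry_to_page(entry);
        } else {
                /* Present huge pmd: pmd_page() and rmap accounting are valid. */
                page = pmd_page(old_pmd);
        }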
>
> Fixes: e71769ae5260 ("mm: enable thp migration for shmem thp")
> Signed-off-by: Hugh Dickins
> Cc:

Reviewed-by: Yang Shi

> ---
> v2: omit is_huge_zero_pmd() mods (done differently in next), per Kirill
>
>  mm/huge_memory.c     | 27 ++++++++++++++++++---------
>  mm/pgtable-generic.c |  5 ++---
>  2 files changed, 20 insertions(+), 12 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 63ed6b25deaa..42cfefc6e66e 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2044,7 +2044,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>          count_vm_event(THP_SPLIT_PMD);
>
>          if (!vma_is_anonymous(vma)) {
> -                _pmd = pmdp_huge_clear_flush_notify(vma, haddr, pmd);
> +                old_pmd = pmdp_huge_clear_flush_notify(vma, haddr, pmd);
>                  /*
>                   * We are going to unmap this huge page. So
>                   * just go ahead and zap it
> @@ -2053,16 +2053,25 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>                          zap_deposited_table(mm, pmd);
>                  if (vma_is_special_huge(vma))
>                          return;
> -                page = pmd_page(_pmd);
> -                if (!PageDirty(page) && pmd_dirty(_pmd))
> -                        set_page_dirty(page);
> -                if (!PageReferenced(page) && pmd_young(_pmd))
> -                        SetPageReferenced(page);
> -                page_remove_rmap(page, true);
> -                put_page(page);
> +                if (unlikely(is_pmd_migration_entry(old_pmd))) {
> +                        swp_entry_t entry;
> +
> +                        entry = pmd_to_swp_entry(old_pmd);
> +                        page = migration_entry_to_page(entry);
> +                } else {
> +                        page = pmd_page(old_pmd);
> +                        if (!PageDirty(page) && pmd_dirty(old_pmd))
> +                                set_page_dirty(page);
> +                        if (!PageReferenced(page) && pmd_young(old_pmd))
> +                                SetPageReferenced(page);
> +                        page_remove_rmap(page, true);
> +                        put_page(page);
> +                }
>                  add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
>                  return;
> -        } else if (pmd_trans_huge(*pmd) && is_huge_zero_pmd(*pmd)) {
> +        }
> +
> +        if (pmd_trans_huge(*pmd) && is_huge_zero_pmd(*pmd)) {
>                  /*
>                   * FIXME: Do we want to invalidate secondary mmu by calling
>                   * mmu_notifier_invalidate_range() see comments below inside
> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> index c2210e1cdb51..4e640baf9794 100644
> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -135,9 +135,8 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
>  {
>          pmd_t pmd;
>          VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> -        VM_BUG_ON(!pmd_present(*pmdp));
> -        /* Below assumes pmd_present() is true */
> -        VM_BUG_ON(!pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp));
> +        VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) &&
> +                  !pmd_devmap(*pmdp));
>          pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
>          flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
>          return pmd;
> --
> 2.26.2
>
>
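
For reference, with the mm/pgtable-generic.c hunk applied,
pmdp_huge_clear_flush() reads roughly as below (reassembled from the diff
above, with an explanatory comment added): a pmd migration entry is by
definition !pmd_present(), so the old unconditional
VM_BUG_ON(!pmd_present(*pmdp)) would fire on exactly the case the
mm/huge_memory.c hunk now handles; the relaxed assertion only insists on
trans-huge/devmap semantics when the pmd is actually present.

pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
                            pmd_t *pmdp)
{
        pmd_t pmd;
        VM_BUG_ON(address & ~HPAGE_PMD_MASK);
        /* A pmd migration entry is !pmd_present(): only a present pmd
         * needs to be trans-huge or devmap here.
         */
        VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) &&
                  !pmd_devmap(*pmdp));
        pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
        flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
        return pmd;
}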