From: Yang Shi
Date: Fri, 21 May 2021 12:27:13 -0700
Subject: Re: [v2 PATCH] mm: thp: check total_mapcount instead of page_mapcount
To: Hugh Dickins
Cc: Zi Yan, "Kirill A. Shutemov", Wang Yugui, Andrew Morton, Linux MM,
 Linux Kernel Mailing List
References: <20210513212334.217424-1-shy828301@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, May 21, 2021 at 10:16 AM Yang Shi wrote:
>
> On Thu, May 20, 2021 at 10:06 PM Hugh Dickins wrote:
> >
> > On Thu, 13 May 2021, Yang Shi wrote:
> >
> > > When debugging the bug reported by Wang Yugui [1], try_to_unmap() may
> > > return a false positive for a PTE-mapped THP, since page_mapcount()
> > > is used to check whether the THP is unmapped, but it only checks the
> > > compound mapcount and the head page's mapcount. If the THP is
> > > PTE-mapped and the head page is not mapped, it may return a false
> > > positive.
> > >
> > > Use total_mapcount() instead of page_mapcount() for try_to_unmap(),
> > > and do so for the VM_BUG_ON_PAGE in split_huge_page_to_list() as well.
> > >
> > > This changes the semantics of try_to_unmap(), but I don't see any
> > > usecase that expects try_to_unmap() to unmap just one subpage of a
> > > huge page.
> > > So using page_mapcount() seems like a bug.
> > >
> > > [1] https://lore.kernel.org/linux-mm/20210412180659.B9E3.409509F4@e16-tech.com/
> > >
> > > Signed-off-by: Yang Shi
> >
> > I don't object to this patch, I've no reason to NAK it; but I'll
> > point out a few deficiencies which might make you want to revisit it.
> >
> > > ---
> > > v2: Removed dead code and updated the comment of try_to_unmap() per Zi
> > > Yan.
> > >
> > >  mm/huge_memory.c | 11 +----------
> > >  mm/rmap.c        | 10 ++++++----
> > >  2 files changed, 7 insertions(+), 14 deletions(-)
> > >
> > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > index 63ed6b25deaa..3b08b9ba1578 100644
> > > --- a/mm/huge_memory.c
> > > +++ b/mm/huge_memory.c
> > > @@ -2348,7 +2348,6 @@ static void unmap_page(struct page *page)
> > >             ttu_flags |= TTU_SPLIT_FREEZE;
> > >
> > >     unmap_success = try_to_unmap(page, ttu_flags);
> > > -   VM_BUG_ON_PAGE(!unmap_success, page);
> >
> > The unused variable unmap_success has already been reported and
> > dealt with. But I couldn't tell what you intended: why change
> > try_to_unmap()'s output, if you then ignore it?
>
> Because some other callers of try_to_unmap() check the output.
>
> > >  }
> > >
> > >  static void remap_page(struct page *page, unsigned int nr)
> > > @@ -2718,7 +2717,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> > >     }
> > >
> > >     unmap_page(head);
> > > -   VM_BUG_ON_PAGE(compound_mapcount(head), head);
> > > +   VM_BUG_ON_PAGE(total_mapcount(head), head);
> >
> > And having forced try_to_unmap() to do the expensive-on-a-THP
> > total_mapcount() calculation, you now repeat it here. Better
> > to stick with the previous VM_BUG_ON_PAGE(!unmap_success).
> >
> > Or better a VM_WARN_ONCE(), accompanied by dump_page()s as before,
> > to get some perhaps useful info out, which this patch has deleted.
> > Probably better inside unmap_page() than cluttering up here.
>
> Moving the BUG or WARN into unmap_page() looks fine to me.
> IIUC, VM_BUG_ON_PAGE or VM_WARN_ON_PAGE does call dump_page(), so
> dumping something useful is not deleted.

I misspelled the function name. There is *NOT* a VM_WARN_ON_PAGE(); the
name is VM_WARN_ON_ONCE_PAGE(). We may need to add VM_WARN_ON_PAGE(),
since I'd like this warning to be printed every time it is hit.

> > VM_WARN_ONCE() because nothing in this patch fixes whatever Wang
> > Yugui is suffering from; and (aside from the BUG()) it's harmless,
> > because there are other ways in which the page_ref_freeze() can fail,
> > and that is allowed for. We would like to know when this problem
> > occurs: there is something wrong, but no reason to crash.
>
> Yes, it fixes nothing. I didn't figure out why try_to_unmap() failed.
> I agree the BUG_ON could be relaxed.
>
> > >
> > >             /* block interrupt reentry in xa_lock and spinlock */
> > >             local_irq_disable();
> > > @@ -2758,14 +2757,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> > >                     __split_huge_page(page, list, end);
> > >                     ret = 0;
> > >             } else {
> > > -                   if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
> > > -                           pr_alert("total_mapcount: %u, page_count(): %u\n",
> > > -                                    mapcount, count);
> > > -                           if (PageTail(page))
> > > -                                   dump_page(head, NULL);
> > > -                           dump_page(page, "total_mapcount(head) > 0");
> > > -                           BUG();
> > > -                   }
> >
> > This has always looked ugly (as if Kirill had hit an unsolved case),
> > so it is nice to remove it; but you're losing the dump_page() info,
> > and not really gaining anything more than a cosmetic cleanup.
>
> As I mentioned above, IIUC VM_BUG_ON_PAGE and VM_WARN_ON_PAGE do call
> dump_page().
> > >
> > >             spin_unlock(&ds_queue->split_queue_lock);
> > > fail:       if (mapping)
> > >                     xa_unlock(&mapping->i_pages);
> > > diff --git a/mm/rmap.c b/mm/rmap.c
> > > index 693a610e181d..f52825b1330d 100644
> > > --- a/mm/rmap.c
> > > +++ b/mm/rmap.c
> > > @@ -1742,12 +1742,14 @@ static int page_not_mapped(struct page *page)
> > >  }
> > >
> > >  /**
> > > - * try_to_unmap - try to remove all page table mappings to a page
> > > - * @page: the page to get unmapped
> > > + * try_to_unmap - try to remove all page table mappings to a page and the
> > > + * compound page it belongs to
> > > + * @page: the page or the subpages of compound page to get unmapped
> > >   * @flags: action and flags
> > >   *
> > >   * Tries to remove all the page table entries which are mapping this
> > > - * page, used in the pageout path. Caller must hold the page lock.
> > > + * page and the compound page it belongs to, used in the pageout path.
> > > + * Caller must hold the page lock.
> > >   *
> > >   * If unmap is successful, return true. Otherwise, false.
> > >   */
> > > @@ -1777,7 +1779,7 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
> > >     else
> > >             rmap_walk(page, &rwc);
> > >
> > > -   return !page_mapcount(page) ? true : false;
> > > +   return !total_mapcount(page) ? true : false;
> >
> > That always made me wince: "return !total_mapcount(page);" surely.
>
> But page_mapcount() seems not correct, it may return a false positive,
> right? Or is it harmless?
>
> And I actually spotted a few other places which should use
> total_mapcount() but use page_mapcount() instead. For example, some
> madvise code checks whether the page is shared by using page_mapcount(),
> but it may return a false negative (double-mapped THP, but the head
> page is not PTE-mapped, just like what Wang Yugui reported). It is not
> fatal, but not the expected behavior. I understand total_mapcount() is
> expensive, so is it a trade-off between cost and correctness, or was
> the false negative case just overlooked in the first place?
> I can't tell.
>
> > Or slightly better, "return !page_mapped(page);", since at least that
> > one breaks out as soon as it sees a mapcount. Though I guess I'm
> > being silly there, since that case should never occur, so both
> > total_mapcount() and page_mapped() scan through all pages.
> >
> > Or better, change try_to_unmap() to void: most callers ignore its
> > return value anyway, and make their own decisions; the remaining
> > few could be changed to do the same. Though again, I may be
> > being silly, since the expensive THP case is not the common case.
>
> I'd say half the callers ignore its return value, but I think it should
> be worth doing. At least we could remove half of the unnecessary
> total_mapcount() or page_mapped() calls.
>
> Thanks a lot for all the suggestions, will incorporate them in the new
> version.
>
> > >  }
> > >
> > >  /**
> > > --
> > > 2.26.2