Date: Fri, 24 Sep 2021 12:26:21 +0300
From: "Kirill A. Shutemov"
To: Yang Shi
Cc: HORIGUCHI NAOYA (堀口 直也), Hugh Dickins, "Kirill A. Shutemov",
 Matthew Wilcox, Peter Xu, Oscar Salvador, Andrew Morton, Linux MM,
 Linux FS-devel Mailing List, Linux Kernel Mailing List
Subject: Re: [v2 PATCH 1/5] mm: filemap: check if THP has hwpoisoned subpage for PMD page fault
Message-ID: <20210924092621.kbg4byfidfzgjk3g@box>
References: <20210923032830.314328-1-shy828301@gmail.com>
 <20210923032830.314328-2-shy828301@gmail.com>
 <20210923143901.mdc6rejuh7hmr5vh@box.shutemov.name>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Sep 23, 2021 at 01:39:49PM -0700, Yang Shi wrote:
> On Thu, Sep 23, 2021 at 10:15 AM Yang Shi wrote:
> >
> > On Thu, Sep 23, 2021 at 7:39 AM Kirill A. Shutemov wrote:
> > >
> > > On Wed, Sep 22, 2021 at 08:28:26PM -0700, Yang Shi wrote:
> > > > When handling shmem page fault the THP with corrupted subpage could be PMD
> > > > mapped if certain conditions are satisfied. But kernel is supposed to
> > > > send SIGBUS when trying to map hwpoisoned page.
> > > >
> > > > There are two paths which may do PMD map: fault around and regular fault.
> > > >
> > > > Before commit f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
> > > > the thing was even worse in fault around path. The THP could be PMD mapped as
> > > > long as the VMA fits regardless what subpage is accessed and corrupted. After
> > > > this commit as long as head page is not corrupted the THP could be PMD mapped.
> > > >
> > > > In the regulat fault path the THP could be PMD mapped as long as the corrupted
> > >
> > > s/regulat/regular/
> > >
> > > > page is not accessed and the VMA fits.
> > > >
> > > > This loophole could be fixed by iterating every subpage to check if any
> > > > of them is hwpoisoned or not, but it is somewhat costly in page fault path.
> > > >
> > > > So introduce a new page flag called HasHWPoisoned on the first tail page.
> > > > It indicates the THP has hwpoisoned subpage(s). It is set if any subpage
> > > > of THP is found hwpoisoned by memory failure and cleared when the THP is
> > > > freed or split.
> > > >
> > > > Cc: 
> > > > Suggested-by: Kirill A. Shutemov 
> > > > Signed-off-by: Yang Shi 
> > > > ---
> > >
> > > ...
> > >
> > > > diff --git a/mm/filemap.c b/mm/filemap.c
> > > > index dae481293b5d..740b7afe159a 100644
> > > > --- a/mm/filemap.c
> > > > +++ b/mm/filemap.c
> > > > @@ -3195,12 +3195,14 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
> > > >  	}
> > > >
> > > >  	if (pmd_none(*vmf->pmd) && PageTransHuge(page)) {
> > > > -		vm_fault_t ret = do_set_pmd(vmf, page);
> > > > -		if (!ret) {
> > > > -			/* The page is mapped successfully, reference consumed. */
> > > > -			unlock_page(page);
> > > > -			return true;
> > > > -		}
> > > > +		vm_fault_t ret = do_set_pmd(vmf, page);
> > > > +		if (ret == VM_FAULT_FALLBACK)
> > > > +			goto out;
> > >
> > > Hm.. What? I don't get it. Who will establish page table in the pmd then?
> >
> > Aha, yeah. It should jump to the below PMD populate section. Will fix
> > it in the next version.
> >
> > >
> > > > +		if (!ret) {
> > > > +			/* The page is mapped successfully, reference consumed. */
> > > > +			unlock_page(page);
> > > > +			return true;
> > > > +		}
> > > >  	}
> > > >
> > > >  	if (pmd_none(*vmf->pmd)) {
> > > > @@ -3220,6 +3222,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
> > > >  		return true;
> > > >  	}
> > > >
> > > > +out:
> > > >  	return false;
> > > >  }
> > > >
> > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > index 5e9ef0fc261e..0574b1613714 100644
> > > > --- a/mm/huge_memory.c
> > > > +++ b/mm/huge_memory.c
> > > > @@ -2426,6 +2426,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> > > >  	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> > > >  	lruvec = lock_page_lruvec(head);
> > > >
> > > > +	ClearPageHasHWPoisoned(head);
> > > > +
> > >
> > > Do we serialize the new flag with lock_page() or what? I mean what
> > > prevents the flag being set again after this point, but before
> > > ClearPageCompound()?
> >
> > No, not in this patch. But I think we could use refcount. THP split
> > would freeze refcount and the split is guaranteed to succeed after
> > that point, so refcount can be checked in memory failure. The
> > SetPageHasHWPoisoned() call could be moved to __get_hwpoison_page()
> > when get_page_unless_zero() bumps the refcount successfully. If the
> > refcount is zero it means the THP is under split or being freed, we
> > don't care about these two cases.
>
> Setting the flag in __get_hwpoison_page() would make this patch depend
> on patch #3. However, this patch probably will be backported to older
> versions. To ease the backport, I'd like to have the refcount check in
> the same place where THP is checked. So, something like "if
> (PageTransHuge(hpage) && page_count(hpage) != 0)".
>
> Then the call to set the flag could be moved to __get_hwpoison_page()
> in the following patch (after patch #3). Does this sound good to you?

Could you show the code? I'm not sure I follow.

page_count(hpage) check looks racy to me.
What if split happens just after the check?

-- 
 Kirill A. Shutemov