From: Naresh Kamboju
Date: Tue, 25 Aug 2020 13:03:53 +0530
Subject: Re: BUG: Bad page state in process true pfn:a8fed on arm
To: Matthew Wilcox
Cc: linux-mm, Linux-Next Mailing List, open list, lkft-triage@lists.linaro.org,
 Andrew Morton, LTP List, Arnd Bergmann, Russell King - ARM Linux,
 Mike Rapoport, Stephen Rothwell, Catalin Marinas, Christoph Hellwig,
 Andy Lutomirski, Peter Xu, opendmb@gmail.com, Linus Walleij,
 afzal.mohd.ma@gmail.com, Will Deacon, Greg Kroah-Hartman
In-Reply-To: <20200824110645.GC17456@casper.infradead.org>
List-ID: linux-kernel@vger.kernel.org

On Mon, 24 Aug 2020 at 16:36, Matthew Wilcox wrote:
>
> On Mon, Aug 24, 2020 at 03:14:55PM +0530, Naresh Kamboju wrote:
> > [   67.545247] BUG: Bad page state in process true  pfn:a8fed
> > [   67.550767] page:9640c0ab refcount:0 mapcount:-1024
>
> Somebody freed a page table without calling __ClearPageTable() on it.

After running git bisect on this problem, the first suspected commit for
this BUG on the arm architecture is:
424efe723f7717430bec7c93b4d28bba73e31cf6 ("mm: account PMD tables like PTE tables")

Reported-by: Naresh Kamboju
Reported-by: Anders Roxell

Additional information:
We have tested linux-next with this patch reverted and confirmed that the
reported BUG is no longer reproducible.

These configs are enabled on the affected device:
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y

-- Suspected patch --

commit 424efe723f7717430bec7c93b4d28bba73e31cf6
Author: Matthew Wilcox
Date:   Thu Aug 20 10:01:30 2020 +1000

    mm: account PMD tables like PTE tables

    We account the PTE level of the page tables to the process in order to
    make smarter OOM decisions and help diagnose why memory is fragmented.
    For these same reasons, we should account pages allocated for PMDs.
    With larger process address spaces and ASLR, the number of PMDs in use
    is higher than it used to be so the inaccuracy is starting to matter.

    Link: http://lkml.kernel.org/r/20200627184642.GF25039@casper.infradead.org
    Signed-off-by: Matthew Wilcox (Oracle)
    Reviewed-by: Mike Rapoport
    Cc: Abdul Haleem
    Cc: Andy Lutomirski
    Cc: Arnd Bergmann
    Cc: Christophe Leroy
    Cc: Joerg Roedel
    Cc: Max Filippov
    Cc: Peter Zijlstra
    Cc: Satheesh Rajendran
    Cc: Stafford Horne
    Signed-off-by: Andrew Morton
    Signed-off-by: Stephen Rothwell

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b0a15ee77b8a..a4e5b806347c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2239,7 +2239,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return ptlock_ptr(pmd_to_page(pmd));
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page)
+static inline bool pmd_ptlock_init(struct page *page)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	page->pmd_huge_pte = NULL;
@@ -2247,7 +2247,7 @@ static inline bool pgtable_pmd_page_ctor(struct page *page)
 	return ptlock_init(page);
 }
 
-static inline void pgtable_pmd_page_dtor(struct page *page)
+static inline void pmd_ptlock_free(struct page *page)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	VM_BUG_ON_PAGE(page->pmd_huge_pte, page);
@@ -2264,8 +2264,8 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return &mm->page_table_lock;
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page) { return true; }
-static inline void pgtable_pmd_page_dtor(struct page *page) {}
+static inline bool pmd_ptlock_init(struct page *page) { return true; }
+static inline void pmd_ptlock_free(struct page *page) {}
 
 #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte)
 
@@ -2278,6 +2278,22 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
 	return ptl;
 }
 
+static inline bool pgtable_pmd_page_ctor(struct page *page)
+{
+	if (!pmd_ptlock_init(page))
+		return false;
+	__SetPageTable(page);
+	inc_zone_page_state(page, NR_PAGETABLE);
+	return true;
+}
+
+static inline void pgtable_pmd_page_dtor(struct page *page)
+{
+	pmd_ptlock_free(page);
+	__ClearPageTable(page);
+	dec_zone_page_state(page, NR_PAGETABLE);
+}
+
 /*
  * No scalability reason to split PUD locks yet, but follow the same pattern
  * as the PMD locks to make it easier if we decide to.  The VM should not be

- Naresh