Date: Tue, 31 Aug 2021 13:44:49 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Sultan Alsawaf
Cc: linux-mm@kvack.org, mhocko@suse.com, akpm@linux-foundation.org,
	linux-kernel@vger.kernel.org
Subject: Re: Stuck looping on list_empty(list) in free_pcppages_bulk()
Message-ID: <20210831124449.GB4128@techsingularity.net>

On Mon, Aug 30, 2021 at 04:12:51PM -0700, Sultan Alsawaf wrote:
> I apologize in advance for reporting a bug on an EOL kernel.
> I don't see any changes as of 5.14 that could address something like
> this, so I'm emailing in case whatever happened here may be a bug
> affecting newer kernels.
>
> With gdb, it appears that the CPU got stuck in the list_empty(list) loop
> inside free_pcppages_bulk():
>
> ----------------8<----------------
> 	do {
> 		batch_free++;
> 		if (++migratetype == MIGRATE_PCPTYPES)
> 			migratetype = 0;
> 		list = &pcp->lists[migratetype];
> 	} while (list_empty(list));
> ---------------->8----------------
>
> Although this code snippet is slightly different in 5.14, it's still
> ultimately the same. Side note: I noticed that the way `migratetype` is
> incremented causes `&pcp->lists[1]` to be examined first rather than
> `&pcp->lists[0]`, since `migratetype` will start out at 1. This quirk is
> still present in 5.14, though the variable in question is now called
> `pindex`.
>
> With some more gdb digging, I found that the `count` variable was stored
> in %ESI at the time of the stall. According to the register dump in the
> splat, %ESI was 7.
>
> It looks like, for some reason, the pcp count was 7 higher than the
> number of pages actually present in the pcp lists.
>

That's your answer -- the PCP count has been corrupted or misaccounted.
Given this is a Fedora kernel, check for any patches affecting
mm/page_alloc.c that could be accounting-related or that would affect the
IRQ disabling or the zone lock acquisition. Another possibility is memory
corruption -- in either the kernel or the hardware itself.

> I tried to find some way that this could happen, but the only thing I
> could think of was that maybe an allocation had both __GFP_RECLAIMABLE
> and __GFP_MOVABLE set in its gfp mask, in which case the rmqueue() call
> in get_page_from_freelist() would pass in a migratetype equal to
> MIGRATE_PCPTYPES and then pages could be added to an out-of-bounds pcp
> list while still incrementing the overall pcp count. This seems pretty
> unlikely though.
It's unlikely because it would be an outright bug to specify both flags.

> As another side note, it looks like there's nothing stopping this from
> occurring; there's only a VM_WARN_ON() in gfp_migratetype() that checks
> whether both bits are set.
>

There is no explicit check for it because the two flags should never both
be set. I don't think this happens anywhere in the kernel, but if an
out-of-tree module did it, it might corrupt adjacent PCPs.

-- 
Mel Gorman
SUSE Labs