Date: Tue, 24 May 2022 13:12:24 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Hugh Dickins
Cc: Andrew Morton, Nicolas Saenz Julienne, Marcelo Tosatti,
	Vlastimil Babka, Michal Hocko, LKML, Linux-MM
Subject: Re: [PATCH 5/6] mm/page_alloc: Protect PCP lists with a spinlock
Message-ID: <20220524121224.GY3441@techsingularity.net>
References: <20220509130805.20335-1-mgorman@techsingularity.net>
 <20220509130805.20335-6-mgorman@techsingularity.net>
 <554f4cdf-e4d9-f547-d3bb-1bcc1c9eb1@google.com>
In-Reply-To: <554f4cdf-e4d9-f547-d3bb-1bcc1c9eb1@google.com>

On Sat, May 21, 2022 at 07:49:10PM -0700, Hugh Dickins wrote:
> On Mon, 9 May 2022, Mel Gorman wrote:
> 
> > Currently the PCP lists are protected by using local_lock_irqsave to
> > prevent migration and IRQ reentrancy, but this is inconvenient. Remote
> > draining of the lists is impossible, a workqueue is required, and
> > every task allocation/free must disable and then enable interrupts,
> > which is expensive.
> > 
> > As preparation for dealing with both of those problems, protect the
> > lists with a spinlock. The IRQ-unsafe version of the lock is used
> > because IRQs are already disabled by local_lock_irqsave. spin_trylock
> > is used in preparation for a time when local_lock could be used instead
> > of local_lock_irqsave.
> 
> 8c580f60a145 ("mm/page_alloc: protect PCP lists with a spinlock")
> in next-20220520: I haven't looked up whether that comes from a
> stable or unstable suburb of akpm's tree.
> 
> Mel, the VM_BUG_ON(in_hardirq()) which this adds to free_unref_page_list()
> is not valid. I have no appreciation of how important it is to the whole
> scheme, but as it stands, it crashes; and when I change it to a warning
> 

Thanks Hugh. Sorry for the delay in responding, I was offline for a few
days. The contexts where free_unref_page_list is called from IRQ context
are safe, so the VM_BUG_ON can be removed.
--8<--
mm/page_alloc: Protect PCP lists with a spinlock -fix

Hugh Dickins reported the following problem:

[  256.167040] WARNING: CPU: 0 PID: 9842 at mm/page_alloc.c:3478 free_unref_page_list+0x92/0x343
[  256.170031] CPU: 0 PID: 9842 Comm: cc1 Not tainted 5.18.0-rc7-n20 #3
[  256.171285] Hardware name: LENOVO 20HQS0EG02/20HQS0EG02, BIOS N1MET54W (1.39 ) 04/16/2019
[  256.172555] RIP: 0010:free_unref_page_list+0x92/0x343
[  256.173820] Code: ff ff 49 8b 44 24 08 4d 89 e0 4c 8d 60 f8 eb b6 48 8b 03 48 39 c3 0f 84 af 02 00 00 65 8b 05 72 7f df 7e a9 00 00 0f 00 74 02 <0f> 0b 9c 41 5d fa 41 0f ba e5 09 73 05 e8 1f 0a f9 ff e8 46 90 7b
[  256.175289] RSP: 0018:ffff88803ec07c80 EFLAGS: 00010006
[  256.176683] RAX: 0000000080010000 RBX: ffff88803ec07cf8 RCX: 000000000000002c
[  256.178122] RDX: 0000000000000000 RSI: ffff88803ec29d28 RDI: 0000000000000040
[  256.179580] RBP: ffff88803ec07cc0 R08: ffff88803ec07cf0 R09: 00000000000a401d
[  256.181031] R10: 0000000000000000 R11: ffff8880101891b8 R12: ffff88803f6dd600
[  256.182501] R13: ffff88803ec07cf8 R14: 000000000000000f R15: 0000000000000000
[  256.183957] FS:  00007ffff7fcfac0(0000) GS:ffff88803ec00000(0000) knlGS:0000000000000000
[  256.185419] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  256.186911] CR2: 0000555555710cdc CR3: 00000000240b4004 CR4: 00000000003706f0
[  256.188395] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  256.189888] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[  256.191390] Call Trace:
[  256.192844]  <TASK>
[  256.194253]  ? __mem_cgroup_uncharge_list+0x4e/0x57
[  256.195715]  release_pages+0x26f/0x27e
[  256.197150]  ? list_add_tail+0x39/0x39
[  256.198603]  pagevec_lru_move_fn+0x95/0xa4

The VM_BUG_ON was added in preparation for a time when the PCP lock
would be an IRQ-unsafe lock. The fundamental limitation is that
free_unref_page_list() cannot be called with the PCP lock held as a
normal spinlock when an IRQ is delivered.
At the moment this is impossible and, even if the PCP lock were an
IRQ-unsafe lock, free_unref_page_list is not called from page allocator
context in an unsafe manner. Remove the VM_BUG_ON.

This is a fix for the mmotm patch
mm-page_alloc-protect-pcp-lists-with-a-spinlock.patch

Reported-by: Hugh Dickins
Signed-off-by: Mel Gorman
---
 mm/page_alloc.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0d169aeeac6f..4c1e2a773e47 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3522,8 +3522,6 @@ void free_unref_page_list(struct list_head *list)
 	if (list_empty(list))
 		return;
 
-	VM_BUG_ON(in_hardirq());
-
 	page = lru_to_page(list);
 	locked_zone = page_zone(page);
 	pcp = pcp_spin_lock(locked_zone->per_cpu_pageset);