Subject: Re: [PATCH v4 29/35] mm: slub: Move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context
From: Qian Cai
Date: Tue, 10 Aug 2021 21:42:56 -0400
To: Vlastimil Babka, Andrew Morton, Christoph Lameter, David Rientjes,
 Pekka Enberg, Joonsoo Kim
Cc: Mike Galbraith, Sebastian Andrzej Siewior, Thomas Gleixner, Mel Gorman,
 Jesper Dangaard Brouer, Jann Horn
Message-ID: <13a3f616-19b5-ce25-87ad-bb241d0b0c18@quicinc.com>
In-Reply-To: <50fe26ba-450b-af57-506d-438f67cfbce3@suse.cz>
References: <20210805152000.12817-1-vbabka@suse.cz>
 <20210805152000.12817-30-vbabka@suse.cz>
 <0b36128c-3e12-77df-85fe-a153a714569b@quicinc.com>
 <50fe26ba-450b-af57-506d-438f67cfbce3@suse.cz>
X-Mailing-List: linux-kernel@vger.kernel.org

On 8/10/2021 10:33 AM, Vlastimil Babka wrote:
> On 8/9/21 3:41 PM, Qian Cai wrote:
>
>>>  static void flush_all(struct kmem_cache *s)
>>>  {
>>> -	on_each_cpu_cond(has_cpu_slab, flush_cpu_slab, s, 1);
>>> +	struct slub_flush_work *sfw;
>>> +	unsigned int cpu;
>>> +
>>> +	mutex_lock(&flush_lock);
>>
>> Vlastimil, taking the lock here could trigger a warning during memory
>> offline/online due to the locking order:
>>
>> slab_mutex -> flush_lock
>
> Here's the full fixup, also incorporating Mike's fix. Thanks.
>
> ----8<----
> From c2df67d5116d4615c322e262556e34117e268104 Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka
> Date: Tue, 10 Aug 2021 10:58:07 +0200
> Subject: [PATCH] mm, slub: fix memory and cpu hotplug related lock ordering
>  issues
>
> Qian Cai reported [1] a lockdep splat on memory offline.
>
> [ 91.374541] WARNING: possible circular locking dependency detected
> [ 91.381411] 5.14.0-rc5-next-20210809+ #84 Not tainted
> [ 91.387149] ------------------------------------------------------
> [ 91.394016] lsbug/1523 is trying to acquire lock:
> [ 91.399406] ffff800018e76530 (flush_lock){+.+.}-{3:3}, at: flush_all+0x50/0x1c8
> [ 91.407425] but task is already holding lock:
> [ 91.414638] ffff800018e48468 (slab_mutex){+.+.}-{3:3}, at: slab_memory_callback+0x44/0x280
> [ 91.423603] which lock already depends on the new lock.
>
> To fix it, we need to change the order in flush_all() so that cpus_read_lock()
> is taken first and mutex_lock(&flush_lock) second.
>
> Also, when called from slab_mem_going_offline_callback() we are already under
> cpus_read_lock() and cannot take it again, so create a flush_all_cpus_locked()
> variant and decouple flushing from the actual shrinking for this call path.
>
> Additionally, Mike Galbraith reported [2] a wrong order of cpus_read_lock() and
> slab_mutex in the kmem_cache_destroy() path and proposed a fix to reverse it.
>
> This patch is a fixup for the mmotm patch
> mm-slub-move-flush_cpu_slab-invocations-__free_slab-invocations-out-of-irq-context.patch
>
> [1] https://lore.kernel.org/lkml/0b36128c-3e12-77df-85fe-a153a714569b@quicinc.com/
> [2] https://lore.kernel.org/lkml/2eb3cf340716c40f03a0a342ab40219b3d1de195.camel@gmx.de/
>
> Reported-by: Qian Cai
> Reported-by: Mike Galbraith
> Signed-off-by: Vlastimil Babka

This is running fine for me. There is a separate hugetlb crash while fuzzing;
I will report it where it belongs.
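
For readers following the thread, below is a minimal C sketch of the ordering
the fixup describes: flush_all() becomes a wrapper that takes cpus_read_lock()
before flush_lock, and a flush_all_cpus_locked() variant serves callers such as
slab_mem_going_offline_callback() that already hold cpus_read_lock(). This only
illustrates the described lock order, not the actual fixup patch; the
struct slub_flush_work layout, the slub_flush per-CPU variable, and the
has_cpu_slab()/flush_cpu_slab() helpers are assumed from the snippet quoted
earlier in the thread.

/*
 * Sketch only: flush_all() split per the fixup description above.
 * Names (slub_flush_work, slub_flush, has_cpu_slab, flush_cpu_slab)
 * follow the quoted snippet and are assumptions, not the real patch.
 */
struct slub_flush_work {
	struct work_struct work;
	struct kmem_cache *s;
	bool skip;
};

static DEFINE_MUTEX(flush_lock);
static DEFINE_PER_CPU(struct slub_flush_work, slub_flush);

/* Caller must hold cpus_read_lock(). */
static void flush_all_cpus_locked(struct kmem_cache *s)
{
	struct slub_flush_work *sfw;
	unsigned int cpu;

	lockdep_assert_cpus_held();
	mutex_lock(&flush_lock);

	/* Queue a flush work item on every CPU that caches a slab of s. */
	for_each_online_cpu(cpu) {
		sfw = &per_cpu(slub_flush, cpu);
		if (!has_cpu_slab(cpu, s)) {
			sfw->skip = true;
			continue;
		}
		INIT_WORK(&sfw->work, flush_cpu_slab);
		sfw->skip = false;
		sfw->s = s;
		schedule_work_on(cpu, &sfw->work);
	}

	/* Wait for the queued work items to finish. */
	for_each_online_cpu(cpu) {
		sfw = &per_cpu(slub_flush, cpu);
		if (sfw->skip)
			continue;
		flush_work(&sfw->work);
	}

	mutex_unlock(&flush_lock);
}

static void flush_all(struct kmem_cache *s)
{
	/* cpus_read_lock() first, flush_lock second -- the order the fix needs. */
	cpus_read_lock();
	flush_all_cpus_locked(s);
	cpus_read_unlock();
}

Combined with Mike's change to take cpus_read_lock() before slab_mutex in
kmem_cache_destroy(), the intent appears to be that cpu_hotplug_lock is always
acquired before either mutex, which is what removes the circular dependency
lockdep reported.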