Date: Thu, 9 Feb 2023 20:28:26 +0000
From: Matthew Wilcox
To: Thomas Gleixner
Cc: Vlastimil Babka, kernel test robot, Shanker Donthineni, oe-lkp@lists.linux.dev, lkp@intel.com, linux-kernel@vger.kernel.org, Marc Zyngier, Michael Walle, Sebastian Andrzej Siewior, Hans de Goede, Wolfram Sang, linux-mm@kvack.org, "Liam R. Howlett", David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin
Subject: Re: mm, slab/slub: Ensure kmem_cache_alloc_bulk() is available early
References: <202302011308.f53123d2-oliver.sang@intel.com> <87o7qdzfay.ffs@tglx> <9a682773-df56-f36c-f582-e8eeef55d7f8@suse.cz> <875ycdwyx6.ffs@tglx> <871qn1wofe.ffs@tglx> <6c0b681e-97bc-d975-a8b9-500abdaaf0bc@suse.cz> <8b7762c3-02be-a5c9-1c4d-507cfb51a15c@suse.cz> <87edr1uykw.ffs@tglx> <878rh73mxl.ffs@tglx>
In-Reply-To: <878rh73mxl.ffs@tglx>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Feb 08, 2023 at 09:46:30PM +0100, Thomas Gleixner wrote:
> On Wed, Feb 08 2023 at 10:15, Vlastimil Babka wrote:
> > Cc+ Willy
>
> > On 2/7/23 19:20, Thomas Gleixner wrote:
> >> On Tue, Feb 07 2023 at 15:47, Vlastimil Babka wrote:
> >>> From 340d7c7b99f3e67780f6dec480ed1d27e6f325eb Mon Sep 17 00:00:00 2001
> >>> From: Vlastimil Babka
> >>> Date: Tue, 7 Feb 2023 15:34:53 +0100
> >>> Subject: [PATCH] mm, slab/slub: remove notes that bulk alloc/free needs
> >>>  interrupts enabled
> >>>
> >>> The slab functions kmem_cache_[alloc|free]_bulk() have been documented
> >>> as requiring interrupts to be enabled, since their addition in 2015.
> >>> It's unclear whether that was a fundamental restriction, or an attempt
> >>> to save some cpu cycles by not having to save and restore the irq
> >>> flags.
> >>
> >> I don't think so. The restriction is rather meant to avoid huge
> >> allocations in atomic context which causes latencies and also might
> >> deplete the atomic reserves.
> >
> > Fair enough.
> >
> >> So I rather avoid that and enforce !ATOMIC mode despite the
> >> local_irq_save/restore() change which is really only to accomodate with
> >> early boot.
> >
> > We could add some warning then? People might use the bulk alloc unknowingly
> > again e.g. via maple tree. GFP_KERNEL would warn through the existing
> > warning, but e.g. GFP_ATOMIC currently not.
>
> Correct.
>
> > Some maple tree users could use its preallocation instead outside of the
> > atomic context, when possible.
>
> Right.
>
> The issue is that there might be maple_tree users which depend on
> GFP_ATOMIC, but call in from interrupt enabled context, which is
> legitimate today.
>
> Willy might have some insight on that.

Not today, but eventually.  There are XArray users which modify the tree
in interrupt context or under some other spinlock that we can't drop for
them in order to do an allocation.  And I want to replace the radix tree
underpinnings of the XArray with the maple tree.  In my copious spare time.