Date: Tue, 25 May 2021 13:35:36 +0100
From: Mel Gorman
To: Vlastimil Babka
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Christoph Lameter,
	David Rientjes, Pekka Enberg, Joonsoo Kim, Sebastian Andrzej Siewior,
	Thomas Gleixner, Jesper Dangaard Brouer, Peter Zijlstra, Jann Horn
Subject: Re: [RFC 09/26] mm, slub: move disabling/enabling irqs to ___slab_alloc()
Message-ID: <20210525123536.GR30378@techsingularity.net>
References: <20210524233946.20352-1-vbabka@suse.cz> <20210524233946.20352-10-vbabka@suse.cz>
On Tue, May 25, 2021 at 01:39:29AM +0200, Vlastimil Babka wrote:
> Currently __slab_alloc() disables irqs around the whole ___slab_alloc(). This
> includes cases where this is not needed, such as when the allocation ends up in
> the page allocator and has to awkwardly enable irqs back based on gfp flags.
> Also the whole kmem_cache_alloc_bulk() is executed with irqs disabled even when
> it hits the __slab_alloc() slow path, and long periods with disabled interrupts
> are undesirable.
> 
> As a first step towards reducing irq disabled periods, move irq handling into
> ___slab_alloc(). Callers will instead prevent the s->cpu_slab percpu pointer
> from becoming invalid via migrate_disable(). This does not protect against
> access preemption, which is still done by disabled irq for most of
> ___slab_alloc(). As the small immediate benefit, slab_out_of_memory() call from
> ___slab_alloc() is now done with irqs enabled.
> 
> kmem_cache_alloc_bulk() disables irqs for its fastpath and then re-enables them
> before calling ___slab_alloc(), which then disables them at its discretion. The
> whole kmem_cache_alloc_bulk() operation also disables cpu migration.
> 
> When ___slab_alloc() calls new_slab() to allocate a new page, re-enable
> preemption, because new_slab() will re-enable interrupts in contexts that allow
> blocking.
> 
> The patch itself will thus increase overhead a bit due to disabled migration
> and increased disabling/enabling irqs in kmem_cache_alloc_bulk(), but that will
> be gradually improved in the following patches.
> 
> Signed-off-by: Vlastimil Babka

Why did you use migrate_disable instead of preempt_disable? There is a
fairly large comment in include/linux/preempt.h on why migrate_disable
is undesirable so new users are likely to be put under the microscope
once Thomas or Peter notice it.

I think you are using it so that an allocation request can be preempted
by a higher priority task but, given that the code was disabling
interrupts, there was already some preemption latency. However,
migrate_disable is more expensive than preempt_disable (function call
versus a simple increment). On that basis, I'd recommend starting with
preempt_disable and only using migrate_disable if necessary.

Bonus points for adding a comment where ___slab_alloc disables IRQs to
clarify what is protected -- I assume it's protecting kmem_cache_cpu
from being modified from interrupt context. If so, it's potentially a
local_lock candidate.

-- 
Mel Gorman
SUSE Labs
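
[For readers unfamiliar with the local_lock suggestion above: the following
is a minimal sketch, not taken from the patch series, of how embedding a
local_lock in a kmem_cache_cpu-like per-CPU structure could make explicit
what the disabled-IRQ section protects. The structure name, fields and the
freelist layout are illustrative assumptions, not the actual SLUB code.]

```c
/*
 * Sketch only: a simplified kmem_cache_cpu-like per-CPU structure.
 * Field names and freelist layout are illustrative, not SLUB's.
 */
#include <linux/local_lock.h>
#include <linux/percpu.h>

struct kmem_cache_cpu_sketch {
	local_lock_t lock;	/* protects the fields below */
	void *freelist;		/* per-CPU list of free objects */
	struct page *page;	/* per-CPU slab page */
};

static DEFINE_PER_CPU(struct kmem_cache_cpu_sketch, cpu_slab_sketch) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void *sketch_alloc_fastpath(void)
{
	struct kmem_cache_cpu_sketch *c;
	unsigned long flags;
	void *object;

	/*
	 * Instead of a bare local_irq_save(), the lock documents *what*
	 * is protected (the per-CPU slab state) and maps to a per-CPU
	 * spinlock on PREEMPT_RT instead of disabling interrupts.
	 */
	local_lock_irqsave(&cpu_slab_sketch.lock, flags);
	c = this_cpu_ptr(&cpu_slab_sketch);
	object = c->freelist;
	if (object)
		c->freelist = *(void **)object;	/* pop the first free object
						   (assumes the free pointer
						   sits at offset 0) */
	local_unlock_irqrestore(&cpu_slab_sketch.lock, flags);

	return object;
}
```

[On non-RT kernels local_lock_irqsave() compiles down to local_irq_save()
plus lock-dependency annotations, so the fastpath cost stays essentially the
same; on PREEMPT_RT it becomes a per-CPU spinlock, which is part of why it
is attractive for sections like this.]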