From: Vlastimil Babka <vbabka@suse.cz>
To: Mel Gorman
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Christoph Lameter,
 David Rientjes, Pekka Enberg, Joonsoo Kim, Sebastian Andrzej Siewior,
 Thomas Gleixner, Jesper Dangaard Brouer, Peter Zijlstra, Jann Horn
Subject: Re: [RFC 09/26] mm, slub: move disabling/enabling irqs to ___slab_alloc()
Date: Tue, 25 May 2021 14:47:10 +0200
References: <20210524233946.20352-1-vbabka@suse.cz>
 <20210524233946.20352-10-vbabka@suse.cz>
 <20210525123536.GR30378@techsingularity.net>
In-Reply-To: <20210525123536.GR30378@techsingularity.net>

On 5/25/21 2:35 PM, Mel Gorman wrote:
> On Tue, May 25, 2021 at 01:39:29AM +0200, Vlastimil Babka wrote:
>> Currently __slab_alloc() disables irqs around the whole ___slab_alloc(). This
>> includes cases where this is not needed, such as when the allocation ends up in
>> the page allocator and has to awkwardly enable irqs back based on gfp flags.
>> Also the whole kmem_cache_alloc_bulk() is executed with irqs disabled even when
>> it hits the __slab_alloc() slow path, and long periods with disabled interrupts
>> are undesirable.
>>
>> As a first step towards reducing irq disabled periods, move irq handling into
>> ___slab_alloc(). Callers will instead prevent the s->cpu_slab percpu pointer
>> from becoming invalid via migrate_disable(). This does not protect against
>> access preemption, which is still done by disabled irq for most of
>> ___slab_alloc(). As the small immediate benefit, slab_out_of_memory() call from
>> ___slab_alloc() is now done with irqs enabled.
>>
>> kmem_cache_alloc_bulk() disables irqs for its fastpath and then re-enables them
>> before calling ___slab_alloc(), which then disables them at its discretion. The
>> whole kmem_cache_alloc_bulk() operation also disables cpu migration.
>>
>> When ___slab_alloc() calls new_slab() to allocate a new page, re-enable
>> preemption, because new_slab() will re-enable interrupts in contexts that allow
>> blocking.
>>
>> The patch itself will thus increase overhead a bit due to disabled migration
>> and increased disabling/enabling irqs in kmem_cache_alloc_bulk(), but that will
>> be gradually improved in the following patches.
>>
>> Signed-off-by: Vlastimil Babka
>
> Why did you use migrate_disable instead of preempt_disable? There is a
> fairly large comment in include/linux/preempt.h on why migrate_disable
> is undesirable so new users are likely to be put under the microscope
> once Thomas or Peter notice it.

I understood it as: while undesirable, there's nothing better for now.

> I think you are using it so that an allocation request can be preempted by
> a higher priority task but given that the code was disabling interrupts,
> there was already some preemption latency.

Yes, and the disabled interrupts will get progressively "smaller" in the series.

> However, migrate_disable
> is more expensive than preempt_disable (function call versus a simple
> increment).

That's true. I think perhaps migrate_disable() could be reimplemented so that
on !PREEMPT_RT, with no lockdep/preempt/whatnot debugging, it would just
translate to an inline preempt_disable()?

> On that basis, I'd recommend starting with preempt_disable
> and only using migrate_disable if necessary.

That's certainly possible, and you're right that it would be a less disruptive
step. My thinking was that on !PREEMPT_RT it's actually just preempt_disable
(however with the call overhead currently), but PREEMPT_RT would welcome the
lack of preempt disable. I'd be interested to hear the RT guys' opinion here.
> Bonus points for adding a comment where ___slab_alloc disables IRQs to
> clarify what is protected -- I assume it's protecting kmem_cache_cpu
> from being modified from interrupt context. If so, it's potentially a
> local_lock candidate.

Yeah that gets cleared up later :)