Date: Thu, 28 Jan 2021 14:42:45 +0000
From: Mel Gorman
To: Michal Hocko
Cc: Vincent Guittot, Vlastimil Babka, Christoph Lameter, Bharata B Rao,
 linux-kernel, linux-mm@kvack.org, David Rientjes, Joonsoo Kim,
 Andrew Morton, guro@fb.com, Shakeel Butt, Johannes Weiner,
 aneesh.kumar@linux.ibm.com, Jann Horn
Subject: Re: [RFC PATCH v0] mm/slub: Let number of online CPUs determine the slub page order
Message-ID: <20210128144245.GH3592@techsingularity.net>
References: <20201118082759.1413056-1-bharata@linux.ibm.com>
 <20210121053003.GB2587010@in.ibm.com>
 <20210126085243.GE827@dhcp22.suse.cz>
 <20210126135918.GQ827@dhcp22.suse.cz>
 <20210128134512.GF3592@techsingularity.net>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Thu, Jan 28, 2021 at 02:57:10PM +0100, Michal Hocko wrote:
> On Thu 28-01-21 13:45:12, Mel Gorman wrote:
> [...]
> > So mostly this is down to the number of times SLUB calls into the page
> > allocator, which only caches order-0 pages on a per-cpu basis. I do have
> > a prototype for a high-order per-cpu allocator, but it is very rough --
> > high watermarks stop making sense, the code is rough, the memory needed
> > for the pcpu structures quadruples, etc.
>
> Thanks, this is really useful. But it really begs the question of whether
> this is the general case or more of an exception. And as such, maybe we
> want to define high-throughput caches which would gain higher-order pages
> to keep pace with allocation and reduce the churn, or deploy some other
> techniques to reduce direct page allocator involvement.

I don't think we want to define "high-throughput caches", because the
choice will be workload-dependent and a game of whack-a-mole. If the
"high-throughput cache" is a kmalloc cache for one set of workloads and
one of the inode caches or the dcache for another, there will be no
setting that is universally good.

--
Mel Gorman
SUSE Labs
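
To make the mechanism in the exchange above concrete, here is a minimal
userspace sketch of the behaviour Mel describes, assuming the structure
the page allocator had in kernels of this era (circa v5.10), where only
order-0 requests were served from the per-cpu lists and every high-order
request took the lock-protected buddy path. The names and constants
below (pcp_cache, rmqueue, buddy_alloc, PCP_BATCH, the 4 KiB page size)
are illustrative stand-ins, not the kernel's actual code, and the model
ignores the free path entirely.

/*
 * Toy model of a per-cpu page cache that, like the page allocator at
 * the time of this thread, exists only for order-0 pages. High-order
 * allocations (e.g. SLUB slabs) always take the slow "buddy" path.
 */
#include <stdio.h>
#include <stdlib.h>

#define PCP_BATCH 31  /* order-0 pages refilled per trip to the buddy lists */

struct pcp_cache {
	int count;    /* order-0 pages currently cached on this "cpu" */
};

static unsigned long buddy_calls;  /* stand-in for zone-lock acquisitions */

/* Slow path: stands in for taking the zone lock and splitting buddies. */
static void *buddy_alloc(unsigned int order)
{
	buddy_calls++;
	return malloc((size_t)4096 << order);
}

/* The fast path exists only for order-0 requests. */
static void *rmqueue(struct pcp_cache *pcp, unsigned int order)
{
	if (order == 0) {
		if (pcp->count == 0) {
			pcp->count = PCP_BATCH;  /* one locked refill, amortised */
			buddy_calls++;
		}
		pcp->count--;
		return malloc(4096);
	}
	/* order > 0: no per-cpu cache, every call is a slow-path call. */
	return buddy_alloc(order);
}

int main(void)
{
	struct pcp_cache pcp = { 0 };
	int i;

	for (i = 0; i < 1024; i++)
		free(rmqueue(&pcp, 0));
	printf("order-0: %lu locked calls for 1024 pages\n", buddy_calls);

	buddy_calls = 0;
	for (i = 0; i < 1024; i++)
		free(rmqueue(&pcp, 3));
	printf("order-3: %lu locked calls for 1024 pages\n", buddy_calls);
	return 0;
}

Running it shows order-0 amortising one locked call over PCP_BATCH pages
(34 calls for 1024 pages) while order-3 pays one locked call per
allocation; that per-allocation cost is the churn being discussed, and
extending the per-cpu lists to more orders is exactly what multiplies
the pcpu structure memory Mel mentions.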