Date: Tue, 12 Mar 2019 12:47:12 +1100
From: "Tobin C. Harding"
To: Roman Gushchin
Harding" , Andrew Morton , Christopher Lameter , Pekka Enberg , Matthew Wilcox , Tycho Andersen , "linux-mm@kvack.org" , "linux-kernel@vger.kernel.org" Subject: Re: [RFC 04/15] slub: Enable Slab Movable Objects (SMO) Message-ID: <20190312014712.GF9362@eros.localdomain> References: <20190308041426.16654-1-tobin@kernel.org> <20190308041426.16654-5-tobin@kernel.org> <20190311224842.GC7915@tower.DHCP.thefacebook.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20190311224842.GC7915@tower.DHCP.thefacebook.com> X-Mailer: Mutt 1.11.3 (2019-02-01) User-Agent: Mutt/1.11.3 (2019-02-01) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Mon, Mar 11, 2019 at 10:48:45PM +0000, Roman Gushchin wrote: > On Fri, Mar 08, 2019 at 03:14:15PM +1100, Tobin C. Harding wrote: > > We have now in place a mechanism for adding callbacks to a cache in > > order to be able to implement object migration. > > > > Add a function __move() that implements SMO by moving all objects in a > > slab page using the isolate/migrate callback methods. > > > > Co-developed-by: Christoph Lameter > > Signed-off-by: Tobin C. Harding > > --- > > mm/slub.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++++++++++ > > 1 file changed, 85 insertions(+) > > > > diff --git a/mm/slub.c b/mm/slub.c > > index 0133168d1089..6ce866b420f1 100644 > > --- a/mm/slub.c > > +++ b/mm/slub.c > > @@ -4325,6 +4325,91 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags) > > return err; > > } > > > > +/* > > + * Allocate a slab scratch space that is sufficient to keep pointers to > > + * individual objects for all objects in cache and also a bitmap for the > > + * objects (used to mark which objects are active). > > + */ > > +static inline void *alloc_scratch(struct kmem_cache *s) > > +{ > > + unsigned int size = oo_objects(s->max); > > + > > + return kmalloc(size * sizeof(void *) + > > + BITS_TO_LONGS(size) * sizeof(unsigned long), > > + GFP_KERNEL); > > I wonder how big this allocation can be? > Given that the reason for migration is probably highly fragmented memory, > we probably don't want to have a high-order allocation here. So maybe > kvmalloc()? > > > +} > > + > > +/* > > + * __move() - Move all objects in the given slab. > > + * @page: The slab we are working on. > > + * @scratch: Pointer to scratch space. > > + * @node: The target node to move objects to. > > + * > > + * If the target node is not the current node then the object is moved > > + * to the target node. If the target node is the current node then this > > + * is an effective way of defragmentation since the current slab page > > + * with its object is exempt from allocation. > > + */ > > +static void __move(struct page *page, void *scratch, int node) > > +{ > > __move() isn't a very explanatory name. kmem_cache_move() (as in Christopher's > version) is much better, IMO. Or maybe move_slab_objects()? How about move_slab_page()? We use kmem_cache_move() later in the series. __move() moves all objects in the given page but not all objects in this cache (which kmem_cache_move() later does). Open to further suggestions though, naming things is hard :) Christopher's original patch uses kmem_cache_move() for a function that only moves objects from within partial slabs, I changed it because I did not think this name suitably describes the behaviour. 
So from the original I rename:

	__move()          -> __defrag()
	kmem_cache_move() -> __move()

And reuse kmem_cache_move() to move _all_ objects (including the full
list).

With this set applied we have the call chains:

 kmem_cache_shrink()		# Defined in slab_common.c, exported to kernel.
    -> __kmem_cache_shrink()	# Defined in slub.c
       -> __defrag()		# Unconditionally (i.e. 100%)
          -> __move()

 kmem_cache_defrag()		# Exported to kernel
    -> __defrag()
       -> __move()

 move_store()			# sysfs
    -> kmem_cache_move()
       -> __move()
    or
    -> __move_all_objects_to()
       -> kmem_cache_move()
          -> __move()

Suggested improvements?

> Also, it's usually better to avoid adding new functions without calling them.
> Maybe it's possible to merge this patch with (9)?

Understood.  The reason behind this is that I attempted to break the
set up by separating the implementation of SMO from the addition of
each feature that uses it.  This function is only called once those
features are implemented; I did not want to elevate any one feature
above the others by including it in this patch.

I'm open to suggestions on the ordering though, and I'm happy to order
the series differently if/when we do a PATCH version.  Is that
acceptable for the RFC versions?

thanks,
Tobin.
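As a reading aid for the call chains above, here is a rough sketch of
how a __defrag()-style helper could tie the pieces together: allocate
the scratch space once, then call __move() for each slab page on a
node's partial list.  The shape, the omitted locking and the list
handling are assumptions for illustration, not code from this series.

/*
 * Rough sketch, not code from the series: one plausible shape for a
 * __defrag()-style helper.  It walks a node's partial list and moves
 * the live objects of each slab page via __move(), which isolates the
 * objects and hands them to the cache's migrate callback.  Locking of
 * the partial list and handling of pages that become empty are
 * deliberately elided.
 */
static void __defrag(struct kmem_cache *s, int node)
{
	struct kmem_cache_node *n = get_node(s, node);
	struct page *page, *tmp;
	void *scratch;

	scratch = alloc_scratch(s);
	if (!scratch)
		return;

	list_for_each_entry_safe(page, tmp, &n->partial, lru) {
		/*
		 * Moving to the same node effectively defragments: the
		 * objects are reallocated elsewhere and this page can
		 * then be freed.
		 */
		__move(page, scratch, node);
	}

	kvfree(scratch);	/* handles both kmalloc() and kvmalloc() buffers */
}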