From: Mikulas Patocka <mpatocka@redhat.com>
Date: Wed, 25 Apr 2018 19:24:22 -0400 (EDT)
To: Christopher Lameter
Cc: Mike Snitzer, Vlastimil Babka, Matthew Wilcox, Pekka Enberg,
    linux-mm@kvack.org, dm-devel@redhat.com, David Rientjes, Joonsoo Kim,
    Andrew Morton, linux-kernel@vger.kernel.org
Subject: Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

On Wed, 25 Apr 2018, Mikulas Patocka wrote:

>
> On Wed, 18 Apr 2018, Christopher Lameter wrote:
>
> > On Tue, 17 Apr 2018, Mikulas Patocka wrote:
> >
> > > I can make a slub-only patch with no extra flag (on a freshly booted
> > > system it increases only the order of the caches "TCPv6" and
> > > "sighand_cache" by one - so it should not have unexpected effects):
> > >
> > > Doing a generic solution for slab would be more complicated because
> > > slab assumes that all slabs have the same order, so it can't fall
> > > back to lower-order allocations.
> >
> > Well again SLAB uses compound pages and thus would be able to detect
> > the size of the page. It may be some work but it could be done.
> >
> > >
> > > Index: linux-2.6/mm/slub.c
> > > ===================================================================
> > > --- linux-2.6.orig/mm/slub.c	2018-04-17 19:59:49.000000000 +0200
> > > +++ linux-2.6/mm/slub.c	2018-04-17 20:58:23.000000000 +0200
> > > @@ -3252,6 +3252,7 @@ static inline unsigned int slab_order(un
> > >  static inline int calculate_order(unsigned int size, unsigned int reserved)
> > >  {
> > >  	unsigned int order;
> > > +	unsigned int test_order;
> > >  	unsigned int min_objects;
> > >  	unsigned int max_objects;
> > >
> > > @@ -3277,7 +3278,7 @@ static inline int calculate_order(unsign
> > >  			order = slab_order(size, min_objects,
> > >  					slub_max_order, fraction, reserved);
> > >  			if (order <= slub_max_order)
> > > -				return order;
> > > +				goto ret_order;
> > >  			fraction /= 2;
> > >  		}
> > >  		min_objects--;
> > >
> > > @@ -3289,15 +3290,25 @@ static inline int calculate_order(unsign
> > >  	 */
> > >  	order = slab_order(size, 1, slub_max_order, 1, reserved);
> >
> > The slab order is determined in slab_order()
> >
> > >  	if (order <= slub_max_order)
> > > -		return order;
> > > +		goto ret_order;
> > >
> > >  	/*
> > >  	 * Doh this slab cannot be placed using slub_max_order.
> > >  	 */
> > >  	order = slab_order(size, 1, MAX_ORDER, 1, reserved);
> > > -	if (order < MAX_ORDER)
> > > -		return order;
> > > -	return -ENOSYS;
> > > +	if (order >= MAX_ORDER)
> > > +		return -ENOSYS;
> > > +
> > > +ret_order:
> > > +	for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
> > > +		unsigned long order_objects = ((PAGE_SIZE << order) - reserved) / size;
> > > +		unsigned long test_order_objects = ((PAGE_SIZE << test_order) - reserved) / size;
> > > +		if (test_order_objects > min(32, MAX_OBJS_PER_PAGE))
> > > +			break;
> > > +		if (test_order_objects > order_objects << (test_order - order))
> > > +			order = test_order;
> > > +	}
> > > +	return order;
> >
> > Could you move that logic into slab_order()? It does something awfully
> > similar.
>
> But slab_order (and its caller) limits the order to "max_order" and we
> want more.
>
> Perhaps slab_order should be dropped and calculate_order totally
> rewritten?
>
> Mikulas

Do you want this? It deletes slab_order and replaces it with the
"minimize_waste" logic directly.

The patch starts with the minimal order for a given size and increases the
order if one of these conditions is met:
* we are below slub_min_order
* we are below min_objects and slub_max_order
* we go above slub_max_order only if it minimizes waste and if we don't
  increase the object count above 32

It simplifies the code, and because it is very similar to the old
algorithm, most slab caches keep the same order, so it shouldn't cause any
regressions.

This patch changes the order of these slabs:

TCPv6: 3 -> 4
sighand_cache: 3 -> 4
task_struct: 3 -> 4

---
 mm/slub.c |   76 +++++++++++++++++++++-----------------------------------------
 1 file changed, 26 insertions(+), 50 deletions(-)

Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2018-04-26 00:07:30.000000000 +0200
+++ linux-2.6/mm/slub.c	2018-04-26 00:21:37.000000000 +0200
@@ -3224,34 +3224,10 @@ static unsigned int slub_min_objects;
  * requested a higher mininum order then we start with that one instead of
  * the smallest order which will fit the object.
  */
-static inline unsigned int slab_order(unsigned int size,
-	unsigned int min_objects, unsigned int max_order,
-	unsigned int fract_leftover, unsigned int reserved)
-{
-	unsigned int min_order = slub_min_order;
-	unsigned int order;
-
-	if (order_objects(min_order, size, reserved) > MAX_OBJS_PER_PAGE)
-		return get_order(size * MAX_OBJS_PER_PAGE) - 1;
-
-	for (order = max(min_order, (unsigned int)get_order(min_objects * size + reserved));
-			order <= max_order; order++) {
-
-		unsigned int slab_size = (unsigned int)PAGE_SIZE << order;
-		unsigned int rem;
-
-		rem = (slab_size - reserved) % size;
-
-		if (rem <= slab_size / fract_leftover)
-			break;
-	}
-
-	return order;
-}
-
 static inline int calculate_order(unsigned int size, unsigned int reserved)
 {
 	unsigned int order;
+	unsigned int test_order;
 	unsigned int min_objects;
 	unsigned int max_objects;

@@ -3269,35 +3245,35 @@ static inline int calculate_order(unsign
 	max_objects = order_objects(slub_max_order, size, reserved);
 	min_objects = min(min_objects, max_objects);

-	while (min_objects > 1) {
-		unsigned int fraction;
+	/* Get the minimum acceptable order for one object */
+	order = get_order(size + reserved);
+
+	for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
+		unsigned order_obj = order_objects(order, size, reserved);
+		unsigned test_order_obj = order_objects(test_order, size, reserved);
+
+		/* If there are too many objects, stop searching */
+		if (test_order_obj > MAX_OBJS_PER_PAGE)
+			break;

-		fraction = 16;
-		while (fraction >= 4) {
-			order = slab_order(size, min_objects,
-					slub_max_order, fraction, reserved);
-			if (order <= slub_max_order)
-				return order;
-			fraction /= 2;
-		}
-		min_objects--;
+		/* Always increase up to slub_min_order */
+		if (test_order <= slub_min_order)
+			order = test_order;
+
+		/* If we are below min_objects and slub_max_order, increase order */
+		if (order_obj < min_objects && test_order <= slub_max_order)
+			order = test_order;
+
+		/* Increase order even more, but only if it reduces waste */
+		if (test_order_obj <= 32 &&
+		    test_order_obj > order_obj << (test_order - order))
+			order = test_order;
 	}

-	/*
-	 * We were unable to place multiple objects in a slab. Now
-	 * lets see if we can place a single object there.
-	 */
-	order = slab_order(size, 1, slub_max_order, 1, reserved);
-	if (order <= slub_max_order)
-		return order;
+	if (order >= MAX_ORDER)
+		return -ENOSYS;

-	/*
-	 * Doh this slab cannot be placed using slub_max_order.
-	 */
-	order = slab_order(size, 1, MAX_ORDER, 1, reserved);
-	if (order < MAX_ORDER)
-		return order;
-	return -ENOSYS;
+	return order;
 }

 static void