Subject: Re: [PATCH v3 4/4] mm, slab_common: Make the loop for initializing KMALLOC_DMA start from 1
From: Vlastimil Babka
To: Pengfei Li, akpm@linux-foundation.org
Cc: cl@linux.com, penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, guro@fb.com
Date: Tue, 10 Sep 2019 12:26:14 +0200
Message-ID: <23cb75f5-4a05-5901-2085-8aeabc78c100@suse.cz>
In-Reply-To: <20190910012652.3723-5-lpf.vector@gmail.com>
References: <20190910012652.3723-1-lpf.vector@gmail.com> <20190910012652.3723-5-lpf.vector@gmail.com>
On 9/10/19 3:26 AM, Pengfei Li wrote:
> KMALLOC_DMA will be initialized only if KMALLOC_NORMAL with
> the same index exists.
>
> And kmalloc_caches[KMALLOC_NORMAL][0] is always NULL.
>
> Therefore, the loop that initializes KMALLOC_DMA should start
> at 1 instead of 0, which saves one meaningless iteration.

IMHO the saving of one iteration isn't worth making the code more subtle.
Starting at KMALLOC_SHIFT_LOW would be nicer, but that would skip indices
1 and 2, which are special.

Since you're doing these cleanups, have you considered reordering
kmalloc_info, size_index, kmalloc_index() etc. so that sizes 96 and 192
are ordered naturally between 64, 128 and 256? That should remove various
special casing, such as in create_kmalloc_caches(). I can't guarantee it
will be possible without breaking e.g. constant folding optimizations,
but it seems to me it should be feasible. (There are definitely more
places to change than those I listed.)

> Signed-off-by: Pengfei Li
> ---
>  mm/slab_common.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index af45b5278fdc..c81fc7dc2946 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1236,7 +1236,7 @@ void __init create_kmalloc_caches(slab_flags_t flags)
>  	slab_state = UP;
>
>  #ifdef CONFIG_ZONE_DMA
> -	for (i = 0; i <= KMALLOC_SHIFT_HIGH; i++) {
> +	for (i = 1; i <= KMALLOC_SHIFT_HIGH; i++) {
>  		struct kmem_cache *s = kmalloc_caches[KMALLOC_NORMAL][i];
>
>  		if (s) {
>
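For context, here is a minimal stand-alone sketch (not kernel code) of the index convention being discussed: it mirrors the shape of kmalloc_index() from include/linux/slab.h, assuming KMALLOC_SHIFT_LOW == 3 (8-byte minimum object size, as with SLUB). Index 0 is never used, indices 1 and 2 hold the odd 96- and 192-byte sizes, and index i >= 3 holds 2^i bytes, which is why a loop starting at KMALLOC_SHIFT_LOW would skip the 96- and 192-byte caches.

/*
 * Illustration only: user-space sketch of the kmalloc size-to-index
 * mapping, assuming KMALLOC_SHIFT_LOW == 3.  Slot 0 is the only one
 * guaranteed to be empty; slots 1 and 2 are the special 96/192 sizes.
 */
#include <stdio.h>
#include <stddef.h>

#define KMALLOC_SHIFT_LOW   3	/* smallest cache: 1 << 3  = 8 bytes    */
#define KMALLOC_SHIFT_HIGH 13	/* largest cache:  1 << 13 = 8192 bytes */

/* Mirrors the shape of kmalloc_index() for sizes up to 8K. */
static unsigned int index_of(size_t size)
{
	unsigned int i;

	if (size == 0)
		return 0;			/* unused slot        */
	if (size <= 8)
		return KMALLOC_SHIFT_LOW;
	if (size > 64 && size <= 96)
		return 1;			/* special: 96 bytes  */
	if (size > 128 && size <= 192)
		return 2;			/* special: 192 bytes */

	/* otherwise: index of the smallest power of two >= size */
	for (i = KMALLOC_SHIFT_LOW; (1UL << i) < size; i++)
		;
	return i;
}

int main(void)
{
	size_t sizes[] = { 8, 16, 32, 64, 96, 128, 192, 256, 8192 };
	size_t k;

	for (k = 0; k < sizeof(sizes) / sizeof(sizes[0]); k++)
		printf("size %5zu -> index %u\n", sizes[k], index_of(sizes[k]));
	return 0;
}

Running it prints the size-to-index mapping and makes it easy to see why the patch starts the DMA mirroring loop at 1, and why slots 1 and 2 keep that loop from starting at KMALLOC_SHIFT_LOW unless the sizes are reordered as suggested above.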