Reply-To: xlpang@linux.alibaba.com
Subject: Re: [PATCH 1/2] mm/slub: Introduce two counters for the partial objects
To: Pekka Enberg, Christopher Lameter
Cc: Vlastimil Babka, Andrew Morton, Wen Yang, Yang Shi, Roman Gushchin,
    "linux-mm@kvack.org", LKML, Konstantin Khlebnikov, David Rientjes
References: <1593678728-128358-1-git-send-email-xlpang@linux.alibaba.com>
From: xunlei
Message-ID: <9811b473-e09f-c2aa-cdd8-c71c34fe4707@linux.alibaba.com>
Date: Mon, 24 Aug 2020 17:59:46 +0800
On 2020/8/20 PM9:58, Pekka Enberg wrote:
> Hi Christopher,
>
> On Tue, Aug 11, 2020 at 3:52 PM Christopher Lameter wrote:
>>
>> On Fri, 7 Aug 2020, Pekka Enberg wrote:
>>
>>> Why do you consider this to be a fast path? This is all partial list
>>> accounting when we allocate/deallocate a slab, no? Just like
>>> ___slab_alloc() says, I assumed this to be the slow path... What am I
>>> missing?
>>
>> I thought these were per object counters? If you just want to count the
>> number of slabs then you do not need the lock at all. We already have a
>> counter for the number of slabs.
>
> The patch attempts to speed up count_partial(), which holds on to the
> "n->list_lock" (with IRQs off) for the whole duration it takes to walk
> the partial slab list:
>
>         spin_lock_irqsave(&n->list_lock, flags);
>         list_for_each_entry(page, &n->partial, slab_list)
>                 x += get_count(page);
>         spin_unlock_irqrestore(&n->list_lock, flags);
>
> It's counting the number of *objects*, but the counters are only
> updated in bulk when we add/remove a slab to/from the partial list.
> The counter updates are therefore *not* in the fast-path AFAICT.
>
> Xunlei, please correct me if I'm reading your patches wrong.

Yes, it's all in slow-path.

>
> On Tue, Aug 11, 2020 at 3:52 PM Christopher Lameter wrote:
>>> No objections to alternative fixes, of course, but wrapping the
>>> counters under CONFIG_DEBUG seems like just hiding the actual issue...
>>
>> CONFIG_DEBUG is on by default. It just compiles in the debug code and
>> disables it so we can enable it with a kernel boot option. This is because
>> we have had numerous issues in the past with "production" kernels that
>> could not be recompiled with debug options. So just running the prod
>> kernel with another option will allow you to find hard-to-debug issues in
>> a full-scale production deployment with potentially proprietary modules
>> etc.
>
> Yeah, it's been too long since I last looked at the code and did not
> realize even count_partial() is wrapped in CONFIG_DEBUG. So by all
> means, let's also wrap the counters with that too.

Besides CONFIG_DEBUG, count_partial() is also wrapped in CONFIG_SYSFS.

>
> - Pekka
>
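
For reference, a rough sketch of the counter idea being discussed -- the
field and helper names here (nr_partial_objs etc.) are illustrative only,
not lifted from the actual patch. The point is that the per-node object
count is adjusted once per slab when it enters or leaves the partial list,
so count_partial() could read it without holding n->list_lock:

        /* Illustrative only: an extra per-node counter, updated in bulk. */
        struct kmem_cache_node {
                spinlock_t list_lock;
                unsigned long nr_partial;
                struct list_head partial;
                atomic_long_t nr_partial_objs;  /* hypothetical new field */
                /* ... */
        };

        static inline void __add_partial(struct kmem_cache_node *n,
                                         struct page *page, int tail)
        {
                lockdep_assert_held(&n->list_lock);
                n->nr_partial++;
                if (tail == DEACTIVATE_TO_TAIL)
                        list_add_tail(&page->slab_list, &n->partial);
                else
                        list_add(&page->slab_list, &n->partial);
                /* one bulk update per slab, not one per object */
                atomic_long_add(page->objects - page->inuse,
                                &n->nr_partial_objs);
        }

        static inline void remove_partial(struct kmem_cache_node *n,
                                          struct page *page)
        {
                lockdep_assert_held(&n->list_lock);
                list_del(&page->slab_list);
                n->nr_partial--;
                atomic_long_sub(page->objects - page->inuse,
                                &n->nr_partial_objs);
        }

A count_partial() built on this would be an atomic_long_read() instead of
a locked list walk; note that since objects can be freed into a slab that
stays on the partial list, the free path would also need to adjust the
counter (or the value has to be treated as approximate).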