Date: Mon, 18 Jan 2021 15:46:43 +0000 (UTC)
From: Christoph Lameter
To: Michal Hocko
Cc: Vlastimil Babka, Jann Horn, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Linux-MM, kernel list, Thomas Gleixner,
    Sebastian Andrzej Siewior, Roman Gushchin, Johannes Weiner,
    Shakeel Butt, Suren Baghdasaryan, Minchan Kim
Subject: Re: SLUB: percpu partial object count is highly inaccurate, causing
    some memory wastage and maybe also worse tail latencies?
In-Reply-To: <20210118110319.GC14336@dhcp22.suse.cz>
References: <20210118110319.GC14336@dhcp22.suse.cz>

On Mon, 18 Jan 2021, Michal Hocko wrote:

> > Hm this would be similar to recommending a periodical
> > echo > drop_caches operation.
> > We actually discourage that (and yeah, some tools do that, and we now
> > report those in dmesg). I believe the kernel should respond to memory
> > pressure and not OOM prematurely by itself, including SLUB.
>
> Absolutely agreed! Partial caches are a very deep internal
> implementation detail of the allocator, and the admin has no business
> fiddling with that. This would only lead to more harm than good.
> The comparison to drop_caches is really exact!

Really? The maximum allocation here has an upper bound that depends on the
number of possible partial per-cpu slabs. There is a worst-case scenario
that is not nice and wastes some memory, but it is not an OOM situation,
and the system easily recovers from it.

The slab shrinking is not needed, but if you are concerned about reclaiming
more memory right now, then I guess you may want to run the slab shrink
operation.

Is dropping the page cache bad? Well, sometimes you want more free memory
because a certain operation is about to start for which you do not want the
overhead of page cache processing.

You can go crazy and expect magical things from either operation. True.
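The upper bound alluded to above can be sketched as back-of-envelope
arithmetic. None of the figures below come from this thread: the per-cpu
partial cap and the slab order vary per cache and kernel version, so the
numbers are purely illustrative of the shape of the bound, which scales
with the CPU count rather than with overall memory pressure.

```python
# Hedged sketch: worst-case memory pinned in SLUB per-cpu partial lists
# for one cache. All inputs are illustrative assumptions, not values
# taken from the mail thread.
nr_cpus = 64                 # assumed machine size
partial_slabs_per_cpu = 30   # assumed per-cpu partial cap for this cache
pages_per_slab = 8           # assumed order-3 slabs
page_size = 4096

# Bound: every CPU holds its maximum number of partial slabs, and each
# partial slab is almost empty, so nearly all of this memory is unused.
worst_case_bytes = nr_cpus * partial_slabs_per_cpu * pages_per_slab * page_size
print(f"{worst_case_bytes / (1 << 20):.0f} MiB")  # 60 MiB for this cache
```

Bounded waste of this order per cache is the "not nice but recoverable"
scenario described above; any slab allocated to service new requests comes
off these lists first, so the system converges back on its own.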