Date: Wed, 13 Jan 2021 09:03:41 -0800
From: Jakub Kicinski
To: Eric Dumazet
Cc: Alexander Lobakin, Edward Cree, "David S. Miller", Edward Cree,
 Jonathan Lemon, Willem de Bruijn, Miaohe Lin, Steffen Klassert,
 Guillaume Nault, Yadu Kishore, Al Viro, netdev, LKML
Subject: Re: [PATCH net-next 0/5] skbuff: introduce skbuff_heads bulking and reusing
Message-ID: <20210113090341.74832be9@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
In-Reply-To:
References: <20210111182655.12159-1-alobakin@pm.me>
 <20210112110802.3914-1-alobakin@pm.me>
 <20210112170242.414b8664@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 13 Jan 2021 05:46:05 +0100 Eric Dumazet wrote:
> On Wed, Jan 13, 2021 at 2:02 AM Jakub Kicinski wrote:
> >
> > On Tue, 12 Jan 2021 13:23:16 +0100 Eric Dumazet wrote:
> > > On Tue, Jan 12, 2021 at 12:08 PM Alexander Lobakin wrote:
> > > >
> > > > From: Edward Cree
> > > > Date: Tue, 12 Jan 2021 09:54:04 +0000
> > > >
> > > > > Without wishing to weigh in on whether this caching is a good idea...
> > > >
> > > > Well, we already have a cache to bulk flush "consumed" skbs, although
> > > > kmem_cache_free() is generally lighter than kmem_cache_alloc(), and
> > > > a page frag cache to allocate skb->head that is also bulking the
> > > > operations, since it contains a (compound) page with the size of
> > > > min(SZ_32K, PAGE_SIZE).
> > > > If they didn't give any visible boosts, I think they wouldn't have
> > > > hit mainline.
> > > >
> > > > > Wouldn't it be simpler, rather than having two separate "alloc" and "flush"
> > > > > caches, to have a single larger cache, such that whenever it becomes full
> > > > > we bulk flush the top half, and when it's empty we bulk alloc the bottom
> > > > > half? That should mean fewer branches, fewer instructions etc. than
> > > > > having to decide which cache to act upon every time.
> > > >
> > > > I thought about a unified cache, but couldn't decide whether to flush
> > > > or to allocate heads and how much to process. Your suggestion answers
> > > > these questions and generally seems great. I'll try that one, thanks!
> > >
> > > The thing is: kmalloc() is supposed to have batches already, and nice
> > > per-cpu caches.
> > >
> > > This looks like an mm issue, are we sure we want to get over it?
> > >
> > > I would like a full analysis of why SLAB/SLUB does not work well for
> > > your test workload.
> >
> > +1, it does feel like we're getting into mm territory
>
> I read the existing code, and with Edward Cree's idea of reusing the
> existing cache (storage of pointers), this now all makes sense, since
> there will not be much added code (and new storage of 64 pointers).
>
> The remaining issue is to make sure KASAN will still work; we need
> this to detect old and new bugs.

IDK much about MM, but we already have a kmem_cache for skbs and now
we're building a cache on top of a cache. Shouldn't MM take care of
providing a per-CPU BH-only lockless cache?