Date: Fri, 4 Aug 2023 13:35:12 -0700
From: Jakub Kicinski
To: Alexander Lobakin
Cc: Ratheesh Kannoth, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 Sunil Kovvuri Goutham, Geethasowjanya Akula, Subbaraya Sundeep Bhatta,
 Hariprasad Kelam, davem@davemloft.net, edumazet@google.com,
 pabeni@redhat.com
Subject: Re: [EXT] Re: [PATCH net] octeontx2-pf: Set maximum queue size to 16K
Message-ID: <20230804133512.4dbbbc16@kernel.org>
In-Reply-To: <8732499b-df8c-0ee0-bf0e-815736cf4de2@intel.com>
References: <20230802105227.3691713-1-rkannoth@marvell.com>
 <18fec8cd-fc91-736e-7c01-453a18f4e9c5@intel.com>
 <8732499b-df8c-0ee0-bf0e-815736cf4de2@intel.com>

On Fri, 4 Aug 2023 16:43:51 +0200 Alexander Lobakin wrote:
> > So, will clamp to 2048 in page_pool_init()? But it looks odd to me, as
> > the user requests > 2048, but will never be aware that it is clamped
> > to 2048.
>
> Why should he be aware of that? :D
> But seriously, I can't just say: "hey, I promise you that your driver
> will work best when PP size is clamped to 2048, just blindly follow";
> it's more of a preference right now. Because...
>
> > Better to do this clamping in the driver and print a warning message?
>
> ...because you just need to test your driver with different PP sizes and
> decide yourself which upper cap to set.
> If it works the same when queues are 16k and PPs are 2k versus
> 16k + 16k -- fine, you can stop on that. If 16k + 16k or 16 + 8 or
> whatever works better -- stop on that. No hard reqs.
>
> Just don't cap maximum queue length due to PP sanity check, it doesn't
> make sense.

IDK if I agree with you here :S Tuning this in the driver relies on the
assumption that the HW / driver is the thing that matters. I'd think that
the workload, platform (CPU) and config (e.g. is the IOMMU enabled?) will
matter at least as much. Meanwhile driver developers will end up tuning to
whatever servers they have -- a random single config and, most likely,
iperf.

IMO it's much better to re-purpose "pool_size" and treat it as the ring
size, because that's what most drivers end up putting there. Defer tuning
of the effective ring size to the core and user input (via the "it will
be added any minute now" netlink API for configuring page pools)...

So capping the recycle ring to 32k instead of returning an error seems
like an okay solution for now.