From: Íñigo Huguet
Date: Thu, 8 Jul 2021 14:14:13 +0200
Subject: Re: [PATCH 1/3] sfc: revert "reduce the number of requested xdp ev queues"
To: Íñigo Huguet, Edward Cree, "David S. Miller", Jakub Kicinski, ivan@cloudflare.com, ast@kernel.org, daniel@iogearbox.net, hawk@kernel.org, john.fastabend@gmail.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <20210707130140.rgbbhvboozzvfoe3@gmail.com>
References: <20210707081642.95365-1-ihuguet@redhat.com> <0e6a7c74-96f6-686f-5cf5-cd30e6ca25f8@gmail.com> <20210707130140.rgbbhvboozzvfoe3@gmail.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jul 7, 2021 at 3:01 PM Martin Habets wrote:
> > Another question I have, thinking about the long term solution: would
> > it be a problem to use the standard TX queues for XDP_TX/REDIRECT?
> > At least in the case that we're hitting the resource limits, I think
> > that they could be enqueued to these queues. I think that just taking
> > netif_tx_lock, or a per-queue lock, would avoid race conditions.
>
> We considered this but did not want normal traffic to get delayed for
> XDP traffic. The perceived performance drop on a normal queue would
> be tricky to diagnose, and the only way to prevent it would be to
> disable XDP on the interface altogether. There is no way to do the
> latter per interface, and we felt the "solution" of disabling XDP
> was not a good way forward.
> Of course our design of this was all done several years ago.

In my opinion, there is no reason to make that distinction between normal
traffic and XDP traffic. Traffic redirected with XDP_TX or XDP_REDIRECT is
traffic that the user has chosen to redirect that way, pushing the work
down the stack. Without XDP, this traffic would have gone up the stack to
userspace, or at least to the firewall, and then been redirected, passed
again to the network stack and added to the normal TX queues. If the user
wants to prevent XDP from mixing with normal traffic, simply not attaching
an XDP program to the interface, or not using XDP_TX/REDIRECT in it, would
be enough. But probably I don't fully understand what you mean here.

Anyway, if you think that keeping the XDP TX queues separate is the way to
go, that's OK, but my proposal is to share the normal TX queues at least in
the cases where dedicated queues cannot be allocated. As you say, any
performance drop would be tricky to measure, but even with separate queues
they are still competing for CPU, PCI bandwidth, network bandwidth...
The fact is that the situation right now is this:
- Many times (almost always with modern servers' processors) XDP_TX/REDIRECT doesn't work at all.
- The only workaround is reducing the number of normal channels to free resources for XDP, but IMHO that is a much bigger performance drop for normal traffic than sharing queues with XDP would be.

Increasing the maximum number of channels and queues, or even making them
virtually unlimited, would be very good, I think, because people who know
how to configure the hardware would take advantage of it. But there will
always be situations where resources run short:
- Who knows how many cores we will be using 5 years from now?
- VFs normally have fewer resources available: 8 MSI-X vectors by default.

Given some time, I can try to prepare some patches with these changes, if
you agree.

Regards
-- 
Íñigo Huguet