Subject: Re: [PATCH v2 00/17] net: introduce Qualcomm IPA driver
From: Dan Williams
To: Arnd Bergmann, Alex Elder
Cc: Subash Abhinov Kasiviswanathan, Bjorn Andersson, David Miller,
    Ilias Apalodimas, evgreen@chromium.org, Ben Chan, Eric Caruso,
    cpratapa@codeaurora.org, syadagir@codeaurora.org,
    abhishek.esse@gmail.com, Networking, DTML,
    Linux Kernel Mailing List, linux-soc@vger.kernel.org,
    Linux ARM, linux-arm-msm@vger.kernel.org
Date: Tue, 04 Jun 2019 10:18:26 -0500
References: <20190531035348.7194-1-elder@linaro.org> <065c95a8-7b17-495d-f225-36c46faccdd7@linaro.org>
 <20190531233306.GB25597@minitux> <040ce9cc-7173-d10a-a82c-5186d2fcd737@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 2019-06-04 at 10:13 +0200, Arnd Bergmann wrote:
> On Mon, Jun 3, 2019 at 3:32 PM Alex Elder wrote:
> > On 6/3/19 5:04 AM, Arnd Bergmann wrote:
> > > On Sat, Jun 1, 2019 at 1:59 AM Subash Abhinov Kasiviswanathan
> > >
> > > - What I'm worried about most here is the flow control handling
> > >   on the transmit side.  The IPA driver now uses the modern BQL
> > >   method to control how much data gets submitted to the hardware
> > >   at any time.  The rmnet driver also uses flow control using the
> > >   rmnet_map_command() function, which blocks tx on the higher
> > >   level device when the remote side asks us to.
> > >   I fear that doing flow control for a single physical device on
> > >   two separate netdev instances is counterproductive and confuses
> > >   both sides.
> >
> > I understand what you're saying here, and instinctively I think
> > you're right.
> >
> > But BQL manages the *local* interface's ability to get rid of
> > packets, whereas the QMAP flow control is initiated by the other
> > end of the connection (the modem in this case).
> >
> > With multiplexing, it's possible that one of several logical
> > devices on the modem side has exhausted a resource and must
> > ask the source of the data on the host side to suspend the
> > flow.  Meanwhile the other logical devices sharing the physical
> > link might be fine, and should not be delayed by the first one.
> >
> > It is the multiplexing itself that confuses the BQL algorithm.
> > The abstraction obscures the *real* rates at which individual
> > logical connections are able to transmit data.
>
> I would assume that the real rate constantly changes, at least
> for wireless interfaces that are also shared with other users
> on the same network.  BQL is meant to deal with that, at least
> when using a modern queuing algorithm.
>
> > Even if the multiple logical interfaces implemented BQL, they
> > would not get the feedback they need directly from the IPA
> > driver, because transmitting over the physical interface might
> > succeed even if the logical interface on the modem side can't
> > handle more data.  So I think the flow control commands may be
> > necessary, given multiplexing.
>
> Can you describe what kind of multiplexing is actually going on?
> I'm still unclear about what we actually use multiple logical
> interfaces for here, and how they relate to one another.

Each logical interface represents a different "connection" (PDP/EPS
context) to the provider network, with a distinct IP address and QoS.
VLANs may be a suitable analogy, but here they are L3+QoS.

In a realistic example, the main interface (say rmnet0) would be used
for web browsing and have best-effort QoS.  A second interface (say
rmnet1) would be used for VoIP and have certain QoS guarantees from
both the modem and the network itself.

QMAP can also aggregate frames for a given channel (connection/EPS/PDP
context/rmnet interface/etc.) to better support LTE speeds.

Dan

> > The rmnet driver could use BQL, and could return NETDEV_TX_BUSY
> > for a logical interface when its TX flow has been stopped by a
> > QMAP command.  That way the feedback for BQL on the logical
> > interfaces would be provided in the right place.
> >
> > I have no good intuition about the interaction between
> > two layered BQL-managed queues, though.
>
> Returning NETDEV_TX_BUSY is usually a bad idea as that
> leads to unnecessary frame drop.
>
> I do think that using BQL and the QMAP flow command on
> the /same/ device would be best, as that throttles the connection
> when either of the two algorithms wants us to slow down.
>
> The question is mainly which of the two devices that should be.
> Doing it in the ipa driver is probably easier to implement here,
> but ideally I think we'd only have a single queue visible to the
> network stack, if we can come up with a way to do that.
>
> > > - I was a little confused by the location of the rmnet driver in
> > >   drivers/net/ethernet/...  More conventionally, I think as a
> > >   protocol handler it should go into net/qmap/, with the ipa
> > >   driver going into drivers/net/qmap/ipa/, similar to what we
> > >   have for ethernet, wireless, ppp, appletalk, etc.
> > >
> > > - The rx_handler uses gro_cells, which as I understand is meant
> > >   for generic tunnelling setups and takes another loop through
> > >   NAPI to aggregate data from multiple queues, but in case of
> > >   IPA's single-queue receive calling gro directly would be
> > >   simpler and more efficient.
> >
> > I have been planning to investigate some of the generic GRO
> > stuff for IPA but was going to wait on that until the basic
> > code was upstream.
>
> That's ok, that part can easily be changed after the fact, as it
> does not impact the user interface or the general design.
>
> > > From the overall design and the rmnet Kconfig description, it
> > > appears as though the intention is that rmnet could be a
> > > generic wrapper on top of any device, but from the
> > > implementation it seems that IPA is not actually usable that
> > > way and would always go through IPA.
> >
> > As far as I know *nothing* upstream currently uses rmnet; the
> > IPA driver will be the first, but as Bjorn said others seem to
> > be on the way.  I'm not sure what you mean by "IPA is not
> > usable that way."
> > Currently the IPA driver assumes a fixed
> > configuration, and that configuration assumes the use of QMAP,
> > and therefore assumes the rmnet driver is layered above it.
> > That doesn't preclude rmnet from using a different back end.
>
> Yes, that's what I meant above: IPA can only be used through
> rmnet (I wrote "through IPA", sorry for the typo), but cannot be
> used by itself.
>
>       Arnd