Date: Fri, 26 Jul 2019 15:46:13 +0200
From: Andrew Lunn
To: Horatiu Vultur
Cc: Nikolay Aleksandrov,
    roopa@cumulusnetworks.com, davem@davemloft.net,
    bridge@lists.linux-foundation.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, allan.nielsen@microchip.com
Subject: Re: [PATCH] net: bridge: Allow bridge to joing multicast groups
Message-ID: <20190726134613.GD18223@lunn.ch>
References: <1564055044-27593-1-git-send-email-horatiu.vultur@microchip.com>
 <7e7a7015-6072-d884-b2ba-0a51177245ab@cumulusnetworks.com>
 <20190725142101.65tusauc6fzxb2yp@soft-dev3.microsemi.net>
 <20190726120214.c26oj5vks7g5ntwu@soft-dev3.microsemi.net>
In-Reply-To: <20190726120214.c26oj5vks7g5ntwu@soft-dev3.microsemi.net>

On Fri, Jul 26, 2019 at 02:02:15PM +0200, Horatiu Vultur wrote:
> Hi Nikolay,
>
> The 07/26/2019 12:26, Nikolay Aleksandrov wrote:
> > External E-Mail
> >
> >
> > On 26/07/2019 11:41, Nikolay Aleksandrov wrote:
> > > On 25/07/2019 17:21, Horatiu Vultur wrote:
> > >> Hi Nikolay,
> > >>
> > >> The 07/25/2019 16:21, Nikolay Aleksandrov wrote:
> > >>> External E-Mail
> > >>>
> > >>>
> > >>> On 25/07/2019 16:06, Nikolay Aleksandrov wrote:
> > >>>> On 25/07/2019 14:44, Horatiu Vultur wrote:
> > >>>>> There is no way to configure the bridge to receive only specific link
> > >>>>> layer multicast addresses. From its description, the command 'bridge
> > >>>>> fdb append' is supposed to do that, but there was no way to notify the
> > >>>>> network driver that the bridge joined a group, because the LLADDR was
> > >>>>> added to the unicast netdev_hw_addr_list.
> > >>>>>
> > >>>>> Therefore update fdb_add_entry to check if the NLM_F_APPEND flag is set
> > >>>>> and if the source is NULL, which represents the bridge itself. Then add
> > >>>>> the address to the multicast netdev_hw_addr_list of each bridge interface,
> > >>>>> so that the driver's .ndo_set_rx_mode function is called to notify it
> > >>>>> that the list of multicast MAC addresses changed.
> > >>>>>
> > >>>>> Signed-off-by: Horatiu Vultur
> > >>>>> ---
> > >>>>>  net/bridge/br_fdb.c | 49 ++++++++++++++++++++++++++++++++++++++++++++++---
> > >>>>>  1 file changed, 46 insertions(+), 3 deletions(-)
> > >>>>>
> > >>>>
> > >>>> Hi,
> > >>>> I'm sorry but this patch is wrong on many levels, some notes below. In general
> > >>>> NLM_F_APPEND is only used in vxlan, the bridge does not handle that flag at all.
> > >>>> FDB is only for *unicast*, nothing is joined and no multicast should be used with fdbs.
> > >>>> MDB is used for multicast handling, but both of these are used for forwarding.
> > >>>> The reason the static fdbs are added to the filter is for non-promisc ports, so they can
> > >>>> receive traffic destined for these FDBs for forwarding.
> > >>>> If you'd like to join any multicast group please use the standard way; if you'd like to join
> > >>>> it only on a specific port - join it only on that port (or ports) and the bridge and you'll
> > >>>
> > >>> And obviously this is for the case where you're not enabling port promisc mode (non-default).
> > >>> In general you'll only need to join the group on the bridge to receive traffic for it,
> > >>> or add it as an mdb entry to forward it.
> > >>>
> > >>>> have the effect that you're describing. What do you mean there's no way?
> > >>
> > >> Thanks for the explanation.
> > >> There are a few things that are not 100% clear to me and maybe you can
> > >> explain them, so I don't go totally in the wrong direction. Currently I am
> > >> writing a network driver to which I added switchdev support. Then I was
> > >> looking for a way to configure the network driver to copy link layer
> > >> multicast addresses to the CPU port.
> > >>
> > >> If I am using bridge mdb I can do it only for IP multicast addresses,
> > >> but how should I do it if I want non-IP frames with a link layer multicast
> > >> address to be copied to the CPU? For example: all frames with multicast
> > >> address '01-21-6C-00-00-01' to be copied to the CPU. What is the user space
> > >> command for that?
> > >>
> > >
> > > Check SIOCADDMULTI (ip maddr from iproute2), e.g. add that mac to the port
> > > which needs to receive it and the bridge will send it up automatically since
> > > it's unknown mcast (note that if there's a querier, you'll have to make the
> > > bridge mcast router if it is not the querier itself). It would also flood it to all
> >
> > Actually you mentioned non-IP traffic, so the querier stuff is not a problem. This
> > traffic will always be flooded by the bridge (and also a copy will be locally sent up).
> > Thus only the flooding may need to be controlled.
>
> OK, I see, but the part which is not clear to me is: which bridge
> command (from iproute2) do I use so that the bridge notifies the network
> driver (using switchdev or not) to configure the HW to copy all frames
> with dmac '01-21-6C-00-00-01' to the CPU, so that the bridge can receive
> those frames and then just pass them up?

Hi Horatiu

Something to keep in mind. By default, multicast should be flooded, and
that includes the CPU port for a DSA driver. Adding an MDB entry allows
for optimisations, limiting which ports a multicast frame goes out of.
But it is just an optimisation.

	Andrew
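For reference, Nikolay's suggestion maps onto iproute2 roughly as below. This
is a minimal sketch rather than anything taken from the thread: the port names
swp1 and swp2 are illustrative, and the group address is the example Horatiu
gives above.

    # Join the link-layer multicast group on the bridge port that should
    # receive the frames. 'ip maddress add' issues SIOCADDMULTI, which puts
    # the address on the device's multicast filter list and invokes the
    # driver's .ndo_set_rx_mode, so a switchdev/DSA driver can program the
    # hardware to copy matching frames to the CPU.
    ip maddress add 01:21:6c:00:00:01 dev swp1

    # Verify the address is now in the port's multicast list.
    ip maddress show dev swp1

    # Since this is non-IP multicast, the bridge floods it as unknown
    # multicast and also delivers a copy locally; if that flooding is
    # unwanted on some ports, it can be switched off per port.
    bridge link set dev swp2 mcast_flood off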