From: Alexander Mikhalitsyn
To: netdev@vger.kernel.org
Cc: Alexander Mikhalitsyn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Daniel Borkmann, David Ahern, Yajun Deng, Roopa Prabhu,
    linux-kernel@vger.kernel.org, "Denis V. Lunev", Alexey Kuznetsov,
    Konstantin Khorenko, Pavel Tikhomirov, Andrey Zhadchenko,
    Alexander Mikhalitsyn, kernel@openvz.org
Subject: [PATCH 0/2] neighbour: fix possible DoS due to net iface start/stop loop
Date: Fri, 29 Jul 2022 13:35:57 +0300
Message-Id: <20220729103559.215140-1-alexander.mikhalitsyn@virtuozzo.com>

Dear friends,

Recently, one of our OpenVZ users reported issues with the network
availability of some containers. It turned out that the cause was the
absence of ARP replies from the host node to requests for the container
IPs. Of course, we started with tcpdump analysis and noticed that the ARP
requests successfully arrive at the external interface of the problematic
node, so something was wrong on the kernel side.

I've played a lot with arping and perf trying to understand what was
happening. The key observation was that we only hit the issue with ARP
requests that arrive as link-layer broadcasts (skb->pkt_type ==
PACKET_BROADCAST); packets with skb->pkt_type == PACKET_HOST work
flawlessly. Let me show a small piece of code:

static int arp_process(struct sock *sk, struct sk_buff *skb)
...
	if (NEIGH_CB(skb)->flags & LOCALLY_ENQUEUED ||
	    skb->pkt_type == PACKET_HOST ||
	    NEIGH_VAR(in_dev->arp_parms, PROXY_DELAY) == 0) {
		/* reply instantly */
		arp_send_dst(ARPOP_REPLY, ETH_P_ARP, sip, dev, tip,
			     sha, dev->dev_addr, sha, reply_dst);
	} else {
		/* reply with delay */
		pneigh_enqueue(&arp_tbl, in_dev->arp_parms, skb);
		goto out_free_dst;
	}

The problem is that for PACKET_BROADCAST packets we delay the reply and go
through pneigh_enqueue(). For some reason, the queued packets were being
lost almost all of the time! The reason for this behaviour is
pneigh_queue_purge(), which flushes the whole queue and is called every
time any network device in the system goes link down:

	neigh_ifdown -> pneigh_queue_purge

Now imagine a node with 500+ containers running microservices, some of
which are buggy and constantly restarting. In that case
pneigh_queue_purge() is called very frequently.

The problem is reproducible only with a so-called "host-routed" setup; the
classical bridge + veth scheme is not affected.

Minimal reproducer

Suppose we have the network 172.29.1.1/16 brd 172.29.255.255 and a
free-to-use IP, let it be 172.29.128.3.

1. Network configuration. This is the minimal configuration; it makes
little practical sense because both veth devices stay in the same net
namespace, but for the sake of demonstration and simplicity it's okay.

	ip l a veth31427 type veth peer name veth314271
	ip l s veth31427 up
	ip l s veth314271 up

	# set up a static ARP entry and publish it
	arp -Ds -i br0 172.29.128.3 veth31427 pub
	# set up a static route for this address
	route add 172.29.128.3/32 dev veth31427

2. "Attacker" side (a kubernetes pod with a buggy microservice :))

	unshare -n
	ip l a type veth
	ip l s veth0 up
	ip l s veth1 up
	for i in {1..100000}; do ip link set veth0 down; sleep 0.01; ip link set veth0 up; done

This will totally block ARP replies for the 172.29.128.3 address. Just try:

	# arping -I eth0 172.29.128.3 -c 4
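For reference, the purge that makes this possible looks roughly like this
(a simplified rendering of my reading of pneigh_queue_purge() in
net/core/neighbour.c; the exact mainline code may differ slightly). Note
that it unconditionally drops every queued skb, no matter which device or
net namespace it belongs to:

static void pneigh_queue_purge(struct sk_buff_head *list)
{
	struct sk_buff *skb;

	/* drop the whole proxy queue, regardless of which netns the
	 * queued skbs came from */
	while ((skb = skb_dequeue(list)) != NULL) {
		dev_put(skb->dev);
		kfree_skb(skb);
	}
}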
Our proposal is simple:

1. Clean up the queue only partially: remove only the skbs that belong to
   the net namespace of the adapter whose link went down (a rough sketch
   of the idea follows at the end of this mail).

2. Account the proxy_queue limit properly per device. The current limit
   check is not quite correct because we compare the per-device
   configurable limit against the "global" qlen of proxy_queue.

Thanks,
Alex

Cc: "David S. Miller"
Cc: Eric Dumazet
Cc: Jakub Kicinski
Cc: Paolo Abeni
Cc: Daniel Borkmann
Cc: David Ahern
Cc: Yajun Deng
Cc: Roopa Prabhu
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: Denis V. Lunev
Cc: Alexey Kuznetsov
Cc: Konstantin Khorenko
Cc: Pavel Tikhomirov
Cc: Andrey Zhadchenko
Cc: Alexander Mikhalitsyn
Cc: kernel@openvz.org

Alexander Mikhalitsyn (1):
  neighbour: make proxy_queue.qlen limit per-device

Denis V. Lunev (1):
  neigh: fix possible DoS due to net iface start/stop loop

 include/net/neighbour.h |  1 +
 net/core/neighbour.c    | 43 +++++++++++++++++++++++++++++++++++--------
 2 files changed, 36 insertions(+), 8 deletions(-)

-- 
2.36.1
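P.S. To make idea (1) a bit more concrete, below is a rough sketch of the
direction, not the actual patch from this series: the function signature,
locking details and the per-device accounting from idea (2) are simplified
or omitted here.

/* Sketch: purge only the queued skbs whose device belongs to the given
 * net namespace; entries queued for other namespaces stay in the queue. */
static void pneigh_queue_purge(struct sk_buff_head *list, struct net *net)
{
	struct sk_buff *skb, *tmp;

	skb_queue_walk_safe(list, skb, tmp) {
		if (net && !net_eq(dev_net(skb->dev), net))
			continue;	/* different netns, keep it queued */

		__skb_unlink(skb, list);
		dev_put(skb->dev);
		kfree_skb(skb);
	}
}

Callers such as neigh_ifdown() would then pass dev_net(dev), so a flapping
veth inside a container could no longer flush proxy ARP entries queued for
the host's other interfaces.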