From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jon Rosen, Willem de Bruijn, "David S. Miller"
Subject: [PATCH 5.5 023/170] net/packet: tpacket_rcv: avoid a producer race condition
Date: Tue, 31 Mar 2020 10:57:17 +0200
Message-Id: <20200331085426.534958971@linuxfoundation.org>
In-Reply-To: <20200331085423.990189598@linuxfoundation.org>
References: <20200331085423.990189598@linuxfoundation.org>

From: Willem de Bruijn

[ Upstream commit 61fad6816fc10fb8793a925d5c1256d1c3db0cd2 ]

PACKET_RX_RING can cause multiple writers to access the same slot if a
fast writer wraps the ring while a slow writer is still copying. This is
particularly likely with few, large slots (e.g., GSO packets).

Synchronize kernel thread ownership of rx ring slots with a bitmap.

Writers acquire a slot race-free by testing tp_status for
TP_STATUS_KERNEL while holding the sk receive queue lock. They release
this lock before copying and set tp_status to TP_STATUS_USER to release
the slot to userspace when done. During copying, another writer may take
the lock, also see TP_STATUS_KERNEL, and start writing to the same slot.

Introduce a new rx_owner_map bitmap with a bit per slot. To acquire a
slot, test and set the bit with the lock held. To release race-free,
update tp_status and the owner bit as a transaction, so take the lock
again.

This is one of a variety of discussed options (see Link below):

* Instead of a shadow ring, embed the data in the slot itself, such as
  in tp_padding. But any test of this field may match a value left by
  userspace, causing a deadlock.

* Avoid the lock on release. This leaves a small race if the shadow slot
  is released before setting TP_STATUS_USER. The reproducer below showed
  that this race is not academic. If releasing the slot after tp_status,
  the race is more subtle. See the first link for details.

* Add a new tp_status TP_KERNEL_OWNED to avoid the transactional store
  of two fields. But legacy applications may interpret any non-zero
  tp_status as owned by the user, as libpcap does. So this is possible
  only as an opt-in for newer processes. It can be added as an optional
  mode.

* Embed the struct at the tail of pg_vec to avoid an extra allocation.
  The implementation proved no less complex than a separate field.

The additional locking cost on release adds contention, no different
from scaling on multicore or multiqueue hardware. In practice, neither
the reproducer below nor a small-packet tcpdump run showed a noticeable
change in cycles spent in spinlock in perf report. Where contention is
problematic, packet sockets support mitigation through PACKET_FANOUT,
and we can consider adding the opt-in state TP_KERNEL_OWNED.

Easy to reproduce by running multiple netperf or similar TCP_STREAM
flows concurrently with `tcpdump -B 129 -n greater 60000`.

Based on an earlier patchset by Jon Rosen. See links below.

I believe this issue goes back to the introduction of tpacket_rcv, which
predates git history.

Link: https://www.mail-archive.com/netdev@vger.kernel.org/msg237222.html
Suggested-by: Jon Rosen
Signed-off-by: Willem de Bruijn
Signed-off-by: Jon Rosen
Signed-off-by: David S. Miller
Signed-off-by: Greg Kroah-Hartman
---
 net/packet/af_packet.c |   21 +++++++++++++++++++++
 net/packet/internal.h  |    5 ++++-
 2 files changed, 25 insertions(+), 1 deletion(-)

--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -2172,6 +2172,7 @@ static int tpacket_rcv(struct sk_buff *s
 	struct timespec ts;
 	__u32 ts_status;
 	bool is_drop_n_account = false;
+	unsigned int slot_id = 0;
 	bool do_vnet = false;
 
 	/* struct tpacket{2,3}_hdr is aligned to a multiple of TPACKET_ALIGNMENT.
@@ -2274,6 +2275,13 @@ static int tpacket_rcv(struct sk_buff *s
 	if (!h.raw)
 		goto drop_n_account;
 
+	if (po->tp_version <= TPACKET_V2) {
+		slot_id = po->rx_ring.head;
+		if (test_bit(slot_id, po->rx_ring.rx_owner_map))
+			goto drop_n_account;
+		__set_bit(slot_id, po->rx_ring.rx_owner_map);
+	}
+
 	if (do_vnet &&
 	    virtio_net_hdr_from_skb(skb, h.raw + macoff -
 				    sizeof(struct virtio_net_hdr),
@@ -2379,7 +2387,10 @@ static int tpacket_rcv(struct sk_buff *s
 #endif
 
 	if (po->tp_version <= TPACKET_V2) {
+		spin_lock(&sk->sk_receive_queue.lock);
 		__packet_set_status(po, h.raw, status);
+		__clear_bit(slot_id, po->rx_ring.rx_owner_map);
+		spin_unlock(&sk->sk_receive_queue.lock);
 		sk->sk_data_ready(sk);
 	} else {
 		prb_clear_blk_fill_status(&po->rx_ring);
@@ -4276,6 +4287,7 @@ static int packet_set_ring(struct sock *
 {
 	struct pgv *pg_vec = NULL;
 	struct packet_sock *po = pkt_sk(sk);
+	unsigned long *rx_owner_map = NULL;
 	int was_running, order = 0;
 	struct packet_ring_buffer *rb;
 	struct sk_buff_head *rb_queue;
@@ -4361,6 +4373,12 @@ static int packet_set_ring(struct sock *
 			}
 			break;
 		default:
+			if (!tx_ring) {
+				rx_owner_map = bitmap_alloc(req->tp_frame_nr,
+					GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO);
+				if (!rx_owner_map)
+					goto out_free_pg_vec;
+			}
 			break;
 		}
 	}
@@ -4390,6 +4408,8 @@ static int packet_set_ring(struct sock *
 	err = 0;
 	spin_lock_bh(&rb_queue->lock);
 	swap(rb->pg_vec, pg_vec);
+	if (po->tp_version <= TPACKET_V2)
+		swap(rb->rx_owner_map, rx_owner_map);
 	rb->frame_max = (req->tp_frame_nr - 1);
 	rb->head = 0;
 	rb->frame_size = req->tp_frame_size;
@@ -4421,6 +4441,7 @@ static int packet_set_ring(struct sock *
 	}
 
 out_free_pg_vec:
+	bitmap_free(rx_owner_map);
 	if (pg_vec)
 		free_pg_vec(pg_vec, order, req->tp_block_nr);
 out:
--- a/net/packet/internal.h
+++ b/net/packet/internal.h
@@ -70,7 +70,10 @@ struct packet_ring_buffer {
 
 	unsigned int __percpu	*pending_refcnt;
 
-	struct tpacket_kbdq_core	prb_bdqc;
+	union {
+		unsigned long			*rx_owner_map;
+		struct tpacket_kbdq_core	prb_bdqc;
+	};
 };
 
 extern struct mutex fanout_mutex;
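
[Editorial note, not part of the patch] The acquire/release protocol the
commit message describes is compact but easy to misread. Below is a
minimal userspace sketch, not kernel code, of the same ownership scheme:
a pthread mutex stands in for sk->sk_receive_queue.lock, a plain
unsigned long for rx_owner_map, and a small status array for tp_status.
All names (acquire_slot, release_slot, NSLOTS) and the ring geometry are
illustrative assumptions, not identifiers from the patch.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NSLOTS 4

#define STATUS_KERNEL 0UL	/* slot owned by the (simulated) kernel writers */
#define STATUS_USER   1UL	/* slot released to the (simulated) reader */

static unsigned long slot_status[NSLOTS];	/* stands in for tp_status */
static unsigned long owner_map;			/* stands in for rx_owner_map */
static unsigned int ring_head;			/* stands in for rx_ring.head */
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* Acquire the head slot, mirroring the test_bit()/__set_bit() step. */
static int acquire_slot(unsigned int *slot_id)
{
	int ok = 0;

	pthread_mutex_lock(&queue_lock);
	if (__atomic_load_n(&slot_status[ring_head], __ATOMIC_ACQUIRE) == STATUS_KERNEL &&
	    !(owner_map & (1UL << ring_head))) {
		*slot_id = ring_head;
		owner_map |= 1UL << ring_head;	/* slot is now being written */
		ring_head = (ring_head + 1) % NSLOTS;
		ok = 1;
	}
	pthread_mutex_unlock(&queue_lock);
	return ok;				/* 0 means "ring full, drop" */
}

/* Release: flip the status and clear the owner bit as one transaction. */
static void release_slot(unsigned int slot_id)
{
	pthread_mutex_lock(&queue_lock);
	__atomic_store_n(&slot_status[slot_id], STATUS_USER, __ATOMIC_RELEASE);
	owner_map &= ~(1UL << slot_id);
	pthread_mutex_unlock(&queue_lock);
}

static void *writer(void *arg)
{
	for (int i = 0; i < 1000; i++) {
		unsigned int slot;

		if (!acquire_slot(&slot))
			continue;		/* drop: slot still owned */
		usleep(1000);			/* simulate a slow copy */
		release_slot(slot);
	}
	return arg;
}

static void *reader(void *arg)
{
	unsigned int idx = 0;

	for (;;) {
		if (__atomic_load_n(&slot_status[idx], __ATOMIC_ACQUIRE) == STATUS_USER) {
			/* consume the frame, then hand the slot back */
			__atomic_store_n(&slot_status[idx], STATUS_KERNEL, __ATOMIC_RELEASE);
			idx = (idx + 1) % NSLOTS;
		} else {
			usleep(100);
		}
	}
	return arg;
}

int main(void)
{
	pthread_t r, w1, w2;

	pthread_create(&r, NULL, reader, NULL);
	pthread_create(&w1, NULL, writer, NULL);
	pthread_create(&w2, NULL, writer, NULL);
	pthread_join(w1, NULL);
	pthread_join(w2, NULL);
	return 0;	/* exiting main also ends the reader thread */
}

Because the owner bit is both set and cleared only under the lock, a
second writer that wraps the ring while a slow copy is in flight sees
the bit set and drops, instead of writing into the same slot.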
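[Editorial note, not part of the patch] For context on the tp_status
handoff the commit message refers to, here is a hedged sketch of a
TPACKET_V2 PACKET_RX_RING consumer. The ring geometry (32 blocks of
4 KiB packed with 64 frames of 2 KiB) is an arbitrary assumption, and
the program needs CAP_NET_RAW. It only illustrates how userspace waits
for TP_STATUS_USER and hands a slot back by writing TP_STATUS_KERNEL,
the same status word that tpacket_rcv now updates together with the
owner bit under the receive queue lock.

#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <poll.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	int version = TPACKET_V2;
	if (setsockopt(fd, SOL_PACKET, PACKET_VERSION, &version, sizeof(version))) {
		perror("PACKET_VERSION");
		return 1;
	}

	/* Illustrative geometry: 32 blocks of 4 KiB, packed with 64 frames of 2 KiB. */
	struct tpacket_req req = {
		.tp_block_size = 4096,
		.tp_block_nr   = 32,
		.tp_frame_size = 2048,
		.tp_frame_nr   = 64,
	};
	if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req))) {
		perror("PACKET_RX_RING");
		return 1;
	}

	size_t len = (size_t)req.tp_block_size * req.tp_block_nr;
	char *ring = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (ring == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	unsigned int idx = 0;

	for (;;) {
		struct tpacket2_hdr *hdr = (struct tpacket2_hdr *)
			(ring + (size_t)idx * req.tp_frame_size);

		/* Wait until the kernel releases this slot with TP_STATUS_USER. */
		while (!(hdr->tp_status & TP_STATUS_USER))
			poll(&pfd, 1, -1);

		printf("frame %u: %u bytes\n", idx, hdr->tp_snaplen);

		/* Hand the slot back; tpacket_rcv may now reuse it. */
		__sync_synchronize();
		hdr->tp_status = TP_STATUS_KERNEL;

		idx = (idx + 1) % req.tp_frame_nr;
	}
}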