Subject: Re: [PATCH bpf v2] bpf: preallocate a perf_sample_data per event fd
From: Daniel Borkmann <daniel@iogearbox.net>
To: Matt Mullins, hall@fb.com, ast@kernel.org, bpf@vger.kernel.org, netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Martin KaFai Lau, Song Liu, Yonghong Song, Steven Rostedt, Ingo Molnar
References: <20190531223735.4998-1-mmullins@fb.com> <6c6a4d47-796a-20e2-eb12-503a00d1fa0b@iogearbox.net>
Message-ID: <68841715-4d5b-6ad1-5241-4e7199dd63da@iogearbox.net>
Date: Mon, 3 Jun 2019 15:22:14 +0200
In-Reply-To: <6c6a4d47-796a-20e2-eb12-503a00d1fa0b@iogearbox.net>

On 06/03/2019 03:08 PM, Daniel Borkmann wrote:
> On 06/01/2019 12:37 AM, Matt Mullins wrote:
>> It is possible for a BPF program to be called while another BPF
>> program is executing bpf_perf_event_output. This has been observed
>> with I/O completion occurring as a result of an interrupt:
>>
>> bpf_prog_247fd1341cddaea4_trace_req_end+0x8d7/0x1000
>> ? trace_call_bpf+0x82/0x100
>> ? sch_direct_xmit+0xe2/0x230
>> ? blk_mq_end_request+0x1/0x100
>> ? blk_mq_end_request+0x5/0x100
>> ? kprobe_perf_func+0x19b/0x240
>> ? __qdisc_run+0x86/0x520
>> ? blk_mq_end_request+0x1/0x100
>> ? blk_mq_end_request+0x5/0x100
>> ? kprobe_ftrace_handler+0x90/0xf0
>> ? ftrace_ops_assist_func+0x6e/0xe0
>> ? ip6_input_finish+0xbf/0x460
>> ? 0xffffffffa01e80bf
>> ? nbd_dbg_flags_show+0xc0/0xc0 [nbd]
>> ? blkdev_issue_zeroout+0x200/0x200
>> ? blk_mq_end_request+0x1/0x100
>> ? blk_mq_end_request+0x5/0x100
>> ? flush_smp_call_function_queue+0x6c/0xe0
>> ? smp_call_function_single_interrupt+0x32/0xc0
>> ? call_function_single_interrupt+0xf/0x20
>> ? call_function_single_interrupt+0xa/0x20
>> ? swiotlb_map_page+0x140/0x140
>> ? refcount_sub_and_test+0x1a/0x50
>> ? tcp_wfree+0x20/0xf0
>> ? skb_release_head_state+0x62/0xc0
>> ? skb_release_all+0xe/0x30
>> ? napi_consume_skb+0xb5/0x100
>> ? mlx5e_poll_tx_cq+0x1df/0x4e0
>> ? mlx5e_poll_tx_cq+0x38c/0x4e0
>> ? mlx5e_napi_poll+0x58/0xc30
>> ? mlx5e_napi_poll+0x232/0xc30
>> ? net_rx_action+0x128/0x340
>> ? __do_softirq+0xd4/0x2ad
>> ? irq_exit+0xa5/0xb0
>> ? do_IRQ+0x7d/0xc0
>> ? common_interrupt+0xf/0xf
>>
>> ? __rb_free_aux+0xf0/0xf0
>> ? perf_output_sample+0x28/0x7b0
>> ? perf_prepare_sample+0x54/0x4a0
>> ? perf_event_output+0x43/0x60
>> ? bpf_perf_event_output_raw_tp+0x15f/0x180
>> ? blk_mq_start_request+0x1/0x120
>> ? bpf_prog_411a64a706fc6044_should_trace+0xad4/0x1000
>> ? bpf_trace_run3+0x2c/0x80
>> ? nbd_send_cmd+0x4c2/0x690 [nbd]
>>
>> This also cannot be alleviated by further splitting the per-cpu
>> perf_sample_data structs (as was done in commit 283ca526a9bd ("bpf:
>> fix corruption on concurrent perf_event_output calls")), since a
>> raw_tp could be attached to the block:block_rq_complete tracepoint
>> and execute during another raw_tp. Instead, keep a preallocated
>> perf_sample_data structure per perf_event_array element and fail
>> bpf_perf_event_output if that element is concurrently in use.
>>
>> Fixes: 20b9d7ac4852 ("bpf: avoid excessive stack usage for perf_sample_data")
>> Signed-off-by: Matt Mullins
>
> You do not elaborate on why this is needed for all the networking
> programs that use this functionality. The bpf_misc_sd should therefore
> be kept as-is: there cannot be nested occurrences there (xdp, tc
> ingress/egress). Please explain why non-tracing programs should be
> affected here...

Aside from that, it is also really bad to miss events like this, since
exporting through the ring buffer is critical. Why can't you have a
per-CPU counter that selects a sample data context based on nesting
level in tracing? (I don't see a discussion of this in your commit
message.)
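To make this concrete, my reading of the proposal is roughly the
following (an untested sketch on my side, not the actual diff; the sd
and busy field names here are made up for illustration):

struct bpf_event_entry {
	struct perf_event *event;
	struct file *perf_file;
	struct file *map_file;
	struct rcu_head rcu;
	struct perf_sample_data sd;	/* preallocated per array element */
	atomic_t busy;			/* nonzero while sd is in flight */
};

	/* in bpf_perf_event_output(), after looking up the element ee:
	 * claim the element's sample data, or drop the nested event.
	 */
	if (atomic_cmpxchg(&ee->busy, 0, 1))
		return -EBUSY;
	perf_sample_data_init(&ee->sd, 0, 0);
	ee->sd.raw = &raw;
	err = __bpf_perf_event_output(regs, map, flags, &ee->sd);
	atomic_set(&ee->busy, 0);
	return err;

That is, a nested bpf_perf_event_output on the same element returns
-EBUSY and the inner event is lost entirely, which is exactly the event
loss I'm objecting to above.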
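And concretely, the alternative I have in mind is something like the
below, keeping one sample data slot per nesting level per CPU
(completely untested, and the bpf_trace_sds / bpf_trace_nest_level
names are invented here):

struct bpf_trace_sample_data {
	struct perf_sample_data sds[3];	/* one slot per nesting level */
};
static DEFINE_PER_CPU(struct bpf_trace_sample_data, bpf_trace_sds);
static DEFINE_PER_CPU(int, bpf_trace_nest_level);

	/* in bpf_perf_event_output(): pick the slot for the current depth */
	struct bpf_trace_sample_data *sds = this_cpu_ptr(&bpf_trace_sds);
	struct perf_sample_data *sd;
	int nest_level, err;

	nest_level = this_cpu_inc_return(bpf_trace_nest_level);
	if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(sds->sds))) {
		err = -EBUSY;
		goto out;
	}
	sd = &sds->sds[nest_level - 1];
	perf_sample_data_init(sd, 0, 0);
	sd->raw = &raw;
	err = __bpf_perf_event_output(regs, map, flags, sd);
out:
	this_cpu_dec(bpf_trace_nest_level);
	return err;

This way nested calls on the same CPU land in different slots, nothing
is dropped unless nesting goes deeper than the slot array, and the
non-tracing paths (xdp, tc ingress/egress) stay untouched.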