Subject: Re: [PATCH bpf] bpf: Don't WARN_ON_ONCE in bpf_bprintf_prepare
To: Andrii Nakryiko, Florent Revest
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, KP Singh, Brendan Jackman,
    Stanislav Fomichev, open list, syzbot
References: <20210505162307.2545061-1-revest@chromium.org>
From: Daniel Borkmann
Date: Wed, 5 May 2021 22:00:35 +0200
X-Mailing-List:
linux-kernel@vger.kernel.org

On 5/5/21 8:55 PM, Andrii Nakryiko wrote:
> On Wed, May 5, 2021 at 9:23 AM Florent Revest wrote:
>>
>> The bpf_seq_printf, bpf_trace_printk and bpf_snprintf helpers share one
>> per-cpu buffer that they use to store temporary data (arguments to
>> bprintf). They "get" that buffer with try_get_fmt_tmp_buf and "put" it
>> at the end of their scope with bpf_bprintf_cleanup.
>>
>> If one of these helpers gets called within the scope of one of these
>> helpers, for example: a first bpf program gets called, uses
>
> Can we afford having a few struct bpf_printf_bufs? They are just 512
> bytes each, so can we have 3-5 of them? Tracing low-level stuff isn't
> the only situation where this can occur, right? If someone is doing
> bpf_snprintf() and an interrupt occurs and we run another BPF program,
> it will be impossible to do bpf_snprintf() or bpf_trace_printk() from
> the second BPF program, etc. We can't eliminate the probability, but
> having a small stack of buffers would make the probability so
> minuscule as to not be worth worrying about at all.
>
> The good thing is that try_get_fmt_tmp_buf() abstracts all the
> details, so the changes are minimal. The nestedness property is
> preserved for non-sleepable BPF programs, right? If we want this to
> work for sleepable programs, we'd need to either: 1) disable migration
> or 2) instead of assuming a stack of buffers, do a loop to find an
> unused one. That should be acceptable performance-wise, as it's not
> the fastest code anyway (printf'ing in general).
>
> In any case, re-using the same buffer for the
> sort-of-optional-to-work bpf_trace_printk() and the
> probably-important-to-work bpf_snprintf() is suboptimal, so this seems
> worth fixing.
>
> Thoughts?

Yes, agreed, it would otherwise be really hard to debug. I had the same
thought on why not allow nesting here, given that users very likely
expect these helpers to just work in all contexts.

Thanks,
Daniel