From: Alexei Starovoitov
Date: Tue, 11 May 2021 14:07:19 -0700
Subject: Re: [PATCH bpf v2] bpf: Fix nested bpf_bprintf_prepare with more per-cpu buffers
To: Florent Revest
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, KP Singh, Brendan Jackman, Stanislav Fomichev, LKML, syzbot+63122d0bc347f18c1884@syzkaller.appspotmail.com
In-Reply-To: <20210511081054.2125874-1-revest@chromium.org>

On Tue, May 11, 2021 at 1:12 AM Florent Revest wrote:
>
> The bpf_seq_printf, bpf_trace_printk and bpf_snprintf helpers share one
> per-cpu buffer that they use to store temporary data (arguments to
> bprintf). They "get" that buffer with try_get_fmt_tmp_buf and "put" it
> by the end of their scope with bpf_bprintf_cleanup.
>
> If one of these helpers gets called within the scope of one of these
> helpers, for example: a first bpf program gets called, uses
> bpf_trace_printk which calls raw_spin_lock_irqsave which is traced by
> another bpf program that calls bpf_snprintf, then the second "get"
> fails. Essentially, these helpers are not re-entrant. They would return
> -EBUSY and print a warning message once.
>
> This patch triples the number of bprintf buffers to allow three levels
> of nesting.
> This is very similar to what was done for tracepoints in
> "9594dc3c7e7 bpf: fix nested bpf tracepoints with per-cpu data"
>
> Fixes: d9c9e4db186a ("bpf: Factorize bpf_trace_printk and bpf_seq_printf")
> Reported-by: syzbot+63122d0bc347f18c1884@syzkaller.appspotmail.com
> Signed-off-by: Florent Revest
> ---
>  kernel/bpf/helpers.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 544773970dbc..ef658a9ea5c9 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -696,34 +696,35 @@ static int bpf_trace_copy_string(char *buf, void *unsafe_ptr, char fmt_ptype,
>   */
>  #define MAX_PRINTF_BUF_LEN 512
>
> -struct bpf_printf_buf {
> -	char tmp_buf[MAX_PRINTF_BUF_LEN];
> +/* Support executing three nested bprintf helper calls on a given CPU */
> +struct bpf_bprintf_buffers {
> +	char tmp_bufs[3][MAX_PRINTF_BUF_LEN];
>  };
> -static DEFINE_PER_CPU(struct bpf_printf_buf, bpf_printf_buf);
> -static DEFINE_PER_CPU(int, bpf_printf_buf_used);
> +static DEFINE_PER_CPU(struct bpf_bprintf_buffers, bpf_bprintf_bufs);
> +static DEFINE_PER_CPU(int, bpf_bprintf_nest_level);
>
>  static int try_get_fmt_tmp_buf(char **tmp_buf)
>  {
> -	struct bpf_printf_buf *bufs;
> -	int used;
> +	struct bpf_bprintf_buffers *bufs;
> +	int nest_level;
>
>  	preempt_disable();
> -	used = this_cpu_inc_return(bpf_printf_buf_used);
> -	if (WARN_ON_ONCE(used > 1)) {
> -		this_cpu_dec(bpf_printf_buf_used);
> +	nest_level = this_cpu_inc_return(bpf_bprintf_nest_level);
> +	if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(bufs->tmp_bufs))) {
> +		this_cpu_dec(bpf_bprintf_nest_level);

Applied to bpf tree.
I think at the end the fix is simple enough and
much better than an on-stack buffer.
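
For anyone who wants to poke at the nesting scheme outside the kernel,
here is a minimal, self-contained userspace sketch of the same pattern:
a nesting counter indexes into a small array of buffers, and the
get/put pair brackets each use. Thread-local storage stands in for the
kernel's per-CPU variables (the real code also disables preemption so
the counter and buffer stay tied to one CPU), and get_tmp_buf() /
put_tmp_buf() are made-up analogues of try_get_fmt_tmp_buf() /
bpf_bprintf_cleanup(), not kernel APIs.

#include <stdio.h>

#define MAX_PRINTF_BUF_LEN 512
#define MAX_NEST_LEVELS 3	/* matches tmp_bufs[3][...] in the patch */

/* Thread-local storage stands in for DEFINE_PER_CPU in this sketch. */
static _Thread_local char tmp_bufs[MAX_NEST_LEVELS][MAX_PRINTF_BUF_LEN];
static _Thread_local int nest_level;

/* Sketch analogue of try_get_fmt_tmp_buf(): hand out the buffer for
 * the current nesting depth, or fail once all levels are in use.
 */
static int get_tmp_buf(char **buf)
{
	int level = ++nest_level;

	if (level > MAX_NEST_LEVELS) {
		nest_level--;
		return -1;	/* the kernel helper returns -EBUSY */
	}
	*buf = tmp_bufs[level - 1];
	return 0;
}

/* Sketch analogue of bpf_bprintf_cleanup(): release the innermost buffer. */
static void put_tmp_buf(void)
{
	nest_level--;
}

int main(void)
{
	char *buf;
	int i;

	/* Three nested "helper calls" succeed; the fourth is rejected. */
	for (i = 0; i < 4; i++)
		printf("call %d: %s\n", i + 1,
		       get_tmp_buf(&buf) == 0 ? "got a buffer" : "busy");

	for (i = 0; i < 3; i++)	/* unwind the three successful gets */
		put_tmp_buf();
	return 0;
}

Built with e.g. gcc -std=c11, it prints three successful gets followed
by one "busy" failure, mirroring the three nesting levels the patch
allows before WARN_ON_ONCE fires in the kernel.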