From: Andrii Nakryiko
Date: Tue, 19 Sep 2023 16:30:09 -0700
Subject: Re: [PATCH bpf-next v2 3/6] bpf: Introduce process open coded iterator kfuncs
To: Chuyi Zhou
Cc: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org, tj@kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <67d07ab7-8202-4bbd-88d9-587707bd58b1@bytedance.com>
References: <20230912070149.969939-1-zhouchuyi@bytedance.com> <20230912070149.969939-4-zhouchuyi@bytedance.com> <30eadbff-8340-a721-362b-ff82de03cb9f@bytedance.com> <67d07ab7-8202-4bbd-88d9-587707bd58b1@bytedance.com>
Content-Type: text/plain; charset="UTF-8"
On Sat, Sep 16, 2023 at 7:03 AM Chuyi Zhou wrote:
>
> Hello.
>
> On 2023/9/16 04:37, Andrii Nakryiko wrote:
> > On Fri, Sep 15, 2023 at 8:03 AM Chuyi Zhou wrote:
> >> On 2023/9/15 07:26, Andrii Nakryiko wrote:
> >>> On Tue, Sep 12, 2023 at 12:02 AM Chuyi Zhou wrote:
> >>>>
> >>>> This patch adds kfuncs bpf_iter_process_{new,next,destroy} which
> >>>> allow creation and manipulation of struct bpf_iter_process in
> >>>> open-coded iterator style. BPF programs can use these kfuncs
> >>>> directly or through the bpf_for_each macro to iterate all
> >>>> processes in the system.
> >>>>
> [...cut...]
> >>>
> >>> A few high-level thoughts. I think it would be good to follow the
> >>> SEC("iter/task") naming and approach. Open-coded iterators are in
> >>> many ways an in-kernel counterpart to iterator programs, so keeping
> >>> them close enough, within reason, is useful for knowledge transfer.
> >>>
> >>> SEC("iter/task") allows you to:
> >>> a) iterate all threads in the system
> >>> b) iterate all threads for a given TGID
> >>> c) "iterate" a single thread or process; that's a bit less relevant
> >>> for an in-kernel iterator, but we can still support it, why not?
> >>>
> >>> I'm not sure if it supports iterating all processes (as in group
> >>> leaders of each task group) in the system, but if it's possible, I
> >>> think we should support it at least for the open-coded iterator; it
> >>> seems like very useful functionality.
> >>>
> >>> So to that end, let's design a small set of input arguments for
> >>> bpf_iter_process_new() that would allow specifying this as flags +
> >>> either an (optional) struct task_struct * pointer to represent a
> >>> task/process, or a PID/TGID.
> >>>
> >>
> >> Another concern from Alexei was the readability of the open-coded
> >> API in a BPF program [1].
> >>
> >> bpf_for_each(task, curr) is straightforward. Users can easily
> >> understand that this API does the same thing as 'for_each_process'
> >> in the kernel.
> >
> > In general, users might have no idea about the for_each_process macro
> > in the kernel, so I don't find this particular argument very
> > convincing.
> >
> > We can add a separate set of iterator kfuncs for every useful
> > combination of conditions, of course, but it's a double-edged sword.
> > Needing to use a different iterator just to specify a different
> > direction of cgroup iteration (from the example you referred to in
> > [1]) also means that it's now harder to write some generic function
> > that needs to do something for all cgroups matching some criteria,
> > where the order might be coming in as an argument.
> >
> > Similarly for task iterators. It's not hard to imagine some
> > processing that can be done equivalently per thread or per process in
> > the system, or on each thread of a process, depending on some
> > conditions or external configuration. Having to do three different
> > bpf_for_each(task_xxx, task, ...) for this seems suboptimal.
> > If the nature of the thing that is iterated over is the same, and
> > it's just a different set of filters specifying which subset of those
> > items should be iterated, I think it's better to stick to the same
> > iterator with a few simple arguments. IMO, of course; there is no
> > objectively best approach.
> >
> >>
> >> However, if we keep the approach of SEC("iter/task"):
> >>
> >> enum ITER_ITEM {
> >>         ITER_TASK,
> >>         ITER_THREAD,
> >> }
> >>
> >> __bpf_kfunc int bpf_iter_task_new(struct bpf_iter_process *it,
> >> struct task_struct *group_task, enum ITER_ITEM type)
> >>
> >> then the API has to change:
> >>
> >> bpf_for_each(task, curr, NULL, ITERATE_TASK) // iterate all
> >> processes in the system
> >> bpf_for_each(task, curr, group_leader, ITERATE_THREAD) // iterate
> >> all threads of group_leader
> >> bpf_for_each(task, curr, NULL, ITERATE_THREAD) // iterate all
> >> threads of all the processes in the system
> >>
> >> Users may have to guess what this API is actually doing...
> >
> > I'd expect users to consult documentation before trying to use
> > unfamiliar cutting-edge functionality. So let's try to keep the
> > documentation clear and to the point. An extra flag argument doesn't
> > seem like a big deal.
>
> Thanks for your suggestion!
>
> Before we begin working on the next version, I have outlined a detailed
> API design here:
>
> 1. task_iter
>
> It will be used to iterate processes/threads like SEC("iter/task").
> Here we had better follow the naming and approach of SEC("iter/task"):
>
> enum {
>         ITERATE_PROCESS,
>         ITERATE_THREAD,
> }
>
> __bpf_kfunc int bpf_iter_task_new(struct bpf_iter_task *it, struct
> task_struct *task, int flag);
>
> If we want to iterate all processes in the system, the iteration will
> start from the *task* passed in by the user (since processes in the
> system are connected through a linked list)

but will go through all of them anyways, right?
it's kind of surprising from a usability standpoint to have to pass some
task_struct to iterate all of them, tbh. I wonder if it's hard to adjust
kfunc validation to allow "nullable" pointers? We can look at that
separately, of course.

>
> Additionally, the *task* can allow users to specify iterating all
> threads within a task group.
>
> SEC("xxx")
> int xxxx(void *ctx)
> {
>         struct task_struct *pos;
>         struct task_struct *cur_task = bpf_get_current_task_btf();
>
>         bpf_rcu_read_lock();
>
>         // iterate all processes in the system, starting from cur_task
>         bpf_for_each(task, pos, cur_task, ITERATE_PROCESS) {
>
>         }
>
>         // iterate all threads belonging to cur_task's group
>         bpf_for_each(task, pos, cur_task, ITERATE_THREAD) {
>
>         }
>
>         bpf_rcu_read_unlock();
>         return 0;
> }
>
> Iterating all threads of each process is great (ITERATE_ALL). But
> maybe let's break it down step by step and implement
> ITERATE_PROCESS/ITERATE_THREAD first? (I'm a little worried about the
> CPU overhead of ITERATE_ALL, since we are doing a heavy job in the BPF
> prog.)
>

Hm... but if it was a sleepable BPF program and
bpf_rcu_read_{lock,unlock}() was only per task, then it shouldn't be a
problem?

> I wanted to reuse
> BPF_TASK_ITER_ALL/BPF_TASK_ITER_TID/BPF_TASK_ITER_TGID instead of
> adding new enums like ITERATE_PROCESS/ITERATE_THREAD, but new enums
> seem necessary. In a BPF prog, we usually operate on task_struct
> directly instead of pid/tgid, so it's a little weird to use
> BPF_TASK_ITER_TID/BPF_TASK_ITER_TGID here:
>
> bpf_for_each(task, pos, cur_task, BPF_TASK_ITER_TID) {
> }

enum bpf_iter_task_type is an internal type, so we can rename
BPF_TASK_ITER_TID to BPF_TASK_ITER_THREAD and BPF_TASK_ITER_PROC (or
add them as aliases). At the very least, we should use consistent
BPF_TASK_ITER_xxx naming, instead of just ITERATE_PROCESS. See
enum bpf_cgroup_iter_order.

>
> On the other hand,
> BPF_TASK_ITER_ALL/BPF_TASK_ITER_TID/BPF_TASK_ITER_TGID are inner flags
> that are hidden from the users.
> Exposing ITERATE_PROCESS/ITERATE_THREAD will not cause confusion for
> users.
>

inner types are not a problem when used with vmlinux.h

>
> 2. css_iter
>
> css_iter will be used for:
> (1) iterating subsystems, like
> for_each_mem_cgroup_tree/cpuset_for_each_descendant_pre in the kernel;
> (2) iterating cgroups (patch 6's selftest has a basic example).
>
> css (cgroup_subsys_state) is more fundamental than struct cgroup. I
> think we had better operate on css rather than cgroup, since it can be
> hard for cgroup_iter to achieve (2). So here we keep the name
> "css_iter";
> BPF_CGROUP_ITER_DESCENDANTS_PRE/BPF_CGROUP_ITER_DESCENDANTS_POST/BPF_CGROUP_ITER_ANCESTORS_UP
> can be reused.
>
> __bpf_kfunc int bpf_iter_css_new(struct bpf_iter_css *it,
>                 struct cgroup_subsys_state *root, unsigned int flag)
>
> bpf_for_each(css, root, BPF_CGROUP_ITER_DESCENDANTS_PRE)
>

Makes sense, yep, thanks.

> Thanks.