Date: Tue, 15 Jun 2021 16:01:31 -0300
From: Arnaldo Carvalho de Melo
To: Andrii Nakryiko
Cc: Yonghong Song, Andrii Nakryiko, Jiri Olsa, dwarves@vger.kernel.org, bpf, Kernel Team, Linux Kernel Mailing List
Subject: Re: Parallelizing vmlinux BTF encoding. was Re: [RFT] Testing 1.22
X-Url: http://acmel.wordpress.com

Em Tue, Jun 08, 2021 at 09:59:48AM -0300, Arnaldo Carvalho de Melo escreveu:
> Em Mon, Jun 07, 2021 at 05:53:59PM -0700, Andrii Nakryiko escreveu:
> > I think it's very fragile and it will be easy to get
> > broken/invalid/incomplete BTF. Yonghong already brought up the case

> I thought about that as it would be almost like the compiler generating
> BTF, but you are right, the vmlinux prep process is a complex beast and
> probably it is best to go with the second approach I outlined and you
> agreed to be less fragile, so I'll go with that, thanks for your
> comments.

So, just to write down some notes here from what I saw so far:

1. In the LTO cases there are inter-CU references, so the current code
combines all CUs into one and we end up not being able to parallelize
much. LTO is expensive, so... I'll leave it for later, but yeah, I don't
think the current algorithm is ideal, it can be improved.

2. The case where there are no inter-CU refs, which so far is the most
common, seems easier: we create N threads, all sharing the dwarf_loader
state and the btf_encoder, as-is now. We can process one CU per thread,
and as soon as we finish it, just grab a lock and call
btf_encoder__encode_cu() with the just produced CU data structures (tags,
types, functions, variables, etc), consume them and delete the CU.

So each thread will consume one CU, push it to the 'struct btf' class
as-is now and then ask for the next CU, using the dwarf_loader state,
still under that lock, then go back to processing DWARF tags, then lock,
BTF add types, rinse, repeat.

The ordering will be different from what we have now, as some smaller CUs
(object files with debug info) will be processed faster and so will get
their BTF encoding slot earlier, but that shouldn't make a difference at
btf__dedup() time, right?

I think I'm done with refactoring the btf_encoder code, which should by
now be a thin layer on top of the excellent libbpf BTF API, just getting
what the previous loader (DWARF) produced and feeding libbpf.

I thought about fancy thread pools, etc, researching some pre-existing
thing or doing some kthread + workqueue lifting from the kernel, but will
instead start with the most spartan code, we can improve it later.
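Roughly, that spartan version would look something like the sketch below.
Take it with a grain of salt: dwarf_loader__next_cu() and
cu__process_tags() are just placeholder names and the
btf_encoder__encode_cu() signature is a guess, the real dwarves
interfaces will differ.

/*
 * Sketch only: dwarf_loader__next_cu() and cu__process_tags() are
 * placeholder names and btf_encoder__encode_cu()'s signature is a guess,
 * they stand in for whatever the real dwarves interfaces end up being.
 */
#include <pthread.h>
#include <stddef.h>

struct dwarf_loader;	/* shared DWARF cursor (placeholder) */
struct btf_encoder;	/* shared encoder feeding one 'struct btf' */
struct cu;

struct cu *dwarf_loader__next_cu(struct dwarf_loader *loader);	/* placeholder */
void cu__process_tags(struct cu *cu);				/* placeholder */
int btf_encoder__encode_cu(struct btf_encoder *encoder, struct cu *cu); /* guessed */
void cu__delete(struct cu *cu);

static pthread_mutex_t btf_encoder_lock = PTHREAD_MUTEX_INITIALIZER;

struct worker {
	struct dwarf_loader *loader;
	struct btf_encoder  *encoder;
};

static void *encode_cus_thread(void *arg)
{
	struct worker *w = arg;
	struct cu *cu, *next;

	pthread_mutex_lock(&btf_encoder_lock);
	cu = dwarf_loader__next_cu(w->loader);
	pthread_mutex_unlock(&btf_encoder_lock);

	while (cu != NULL) {
		/* CPU heavy part, runs in parallel, no lock held: turn the
		 * DWARF tags into pahole's tags/types/functions/variables. */
		cu__process_tags(cu);

		/* Serialize only what touches shared state: encoding this
		 * CU's BTF and asking the loader for the next CU. */
		pthread_mutex_lock(&btf_encoder_lock);
		btf_encoder__encode_cu(w->encoder, cu);
		next = dwarf_loader__next_cu(w->loader);
		pthread_mutex_unlock(&btf_encoder_lock);

		cu__delete(cu);		/* done with this CU's DWARF data */
		cu = next;
	}

	return NULL;
}

The main() side would just pthread_create() N of these workers sharing
one worker struct and pthread_join() them, nothing fancier than that for
a first cut.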
There it is, dumped my thoughts on this, time to go do some coding before
I get preempted...

- Arnaldo

> - Arnaldo
>
> > for static variables. There might be some other issues that exist
> > today, or we might run into when we further extend BTF. Like some
> > custom linker script that will do something to vmlinux.o that we won't
> > know about.
> >
> > And also this will be purely vmlinux-specific approach relying on
> > extra and custom Kbuild integration.
> >
> > While if you parallelize DWARF loading and BTF generation, that will
> > be more reliably correct (modulo any bugs of course) and will work for
> > any DWARF-to-BTF cases that might come up in the future.
> >
> > So I wouldn't even bother with individual .o's, tbh.
> >
> > >
> > > If this isn't the case, we can process vmlinux as is today and go on
> > > creating N threads and feeding each with a DW_TAG_compile_unit
> > > "container", i.e. each thread would consume all the tags below each
> > > DW_TAG_compile_unit and produce a foo.BTF file that in the end would be
> > > combined and deduped by libbpf.
> > >
> > > Doing it as my first sketch above would take advantage of locality of
> > > reference, i.e. the DWARF data would be freshly produced and in the
> > > cache hierarchy when we first encode BTF, later, when doing the
> > > combine+dedup we wouldn't be touching the more voluminous DWARF data.
> >
> > Yep, that's what I'd do.
> >
> > >
> > > - Arnaldo
> > >
> > > > confident about BTF encoding part: dump each CU into its own BTF, use
> > > > btf__add_type() to merge multiple BTFs together. Just need to re-map
> > > > IDs (libbpf internally has API to visit each field that contains
> > > > type_id, it's well-defined enough to expose that as a public API, if
> > > > necessary). Then final btf_dedup().
> > > >
> > > > But the DWARF loading and parsing part is almost a black box to me, so
> > > > I'm not sure how much work it would involve.
> > > >
> > > > > I'm doing 'pahole -J vmlinux && btfdiff' after each cset and doing it
> > > > > very piecemeal as I'm doing will help bisecting any subtle bug this may
> > > > > introduce.
> > > > >
> > > > > > allow to parallelize BTF generation, where each CU would proceed in
> > > > > > parallel generating local BTF, and then the final pass would merge and
> > > > > > dedup BTFs. Currently reading and processing DWARF is the slowest part
> > > > > > of the DWARF-to-BTF conversion, parallelization and maybe some other
> > > > > > optimization seems like the only way to speed the process up.
> > > > >
> > > > > > Acked-by: Andrii Nakryiko
> > > > >
> > > > > Thanks!
>
> --
>
> - Arnaldo

--

- Arnaldo
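PS: a rough sketch of the per-CU merge that Andrii describes in the
quoted part above, i.e. one BTF per CU, combined with btf__add_type() and
a final btf__dedup(). The ID re-mapping is only stubbed out as a TODO and
the 3-argument btf__dedup() is the libbpf signature as of mid-2021, so
treat it as a sketch, not as the final code:

#include <bpf/btf.h>

/*
 * Append all types from one per-CU BTF into the combined one.
 * btf__add_type() copies a single type, so the type_id references inside
 * the copy still point into 'src' and have to be re-mapped; that step is
 * only a TODO below (libbpf has an internal visitor for every type_id
 * field that could be used for it).
 */
static int append_cu_btf(struct btf *dst, const struct btf *src)
{
	int i, nr_types = btf__get_nr_types(src);

	for (i = 1; i <= nr_types; i++) {	/* type_id 0 is 'void', skip it */
		const struct btf_type *t = btf__type_by_id(src, i);
		int new_id = btf__add_type(dst, src, t);

		if (new_id < 0)
			return new_id;
		/* TODO: re-map the type_id references in the copied type */
	}

	return 0;
}

/*
 * Final pass, after every CU's BTF was appended: dedup the combined BTF.
 * This is the mid-2021 3-argument btf__dedup(), newer libbpf dropped the
 * btf_ext argument.
 */
static int dedup_combined_btf(struct btf *combined)
{
	return btf__dedup(combined, NULL /* btf_ext */, NULL /* opts */);
}

Allocation of the combined btf (btf__new_empty()) and fuller error
handling are left out to keep it short.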