From: Song Liu
Date: Tue, 25 Jan 2022 15:09:18 -0800
Subject: Re: [PATCH v6 bpf-next 6/7] bpf: introduce bpf_prog_pack allocator
To: Alexei Starovoitov
Cc: Song Liu, Ilya Leoshkevich, bpf, Network Development, LKML,
    Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Kernel Team,
    Peter Zijlstra, X86 ML
References: <20220121194926.1970172-1-song@kernel.org>
    <20220121194926.1970172-7-song@kernel.org>
    <7393B983-3295-4B14-9528-B7BD04A82709@fb.com>
    <5407DA0E-C0F8-4DA9-B407-3DE657301BB2@fb.com>
    <5F4DEFB2-5F5A-4703-B5E5-BBCE05CD3651@fb.com>
    <5E70BF53-E3FB-4F7A-B55D-199C54A8FDCA@fb.com>
    <2AAC8B8C-96F1-400F-AFA6-D4AF41EC82F4@fb.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jan 25, 2022 at 2:48 PM Alexei Starovoitov wrote:
>
> On Tue, Jan 25, 2022 at 2:25 PM Song Liu wrote:
> >
> > On Tue, Jan 25, 2022 at 12:00 PM Alexei Starovoitov wrote:
> > >
> > > On Mon, Jan 24, 2022 at 11:21 PM Song Liu wrote:
> > > >
> > > > On Mon, Jan 24, 2022 at 9:21 PM Alexei Starovoitov wrote:
> > > > >
> > > > > On Mon, Jan 24, 2022 at 10:27 AM Song Liu wrote:
> > > > > >
> > > > > > > Are arches expected to allocate rw buffers in different ways? If not,
> > > > > > > I would consider putting this into the common code as well.
> > > > > > > Then arch-specific code would do something like
> > > > > > >
> > > > > > > header = bpf_jit_binary_alloc_pack(size, &prg_buf, &prg_addr, ...);
> > > > > > > ...
> > > > > > > /*
> > > > > > >  * Generate code into prg_buf; the code should assume that its first
> > > > > > >  * byte is located at prg_addr.
> > > > > > >  */
> > > > > > > ...
> > > > > > > bpf_jit_binary_finalize_pack(header, prg_buf);
> > > > > > >
> > > > > > > where bpf_jit_binary_finalize_pack() would copy prg_buf to header and
> > > > > > > free it.
> > > > >
> > > > > It feels right, but bpf_jit_binary_finalize_pack() sounds 100% arch
> > > > > dependent. The only thing it will do is perform a copy via text_poke.
> > > > > What else?
> > > > >
> > > > > > I think this should work.
> > > > > >
> > > > > > We will need an API like bpf_arch_text_copy, which uses text_poke_copy()
> > > > > > for x86_64 and s390_kernel_write() for s390. We will use bpf_arch_text_copy
> > > > > > to
> > > > > > 1) write header->size;
> > > > > > 2) do the final copy in bpf_jit_binary_finalize_pack().
> > > > >
> > > > > We can combine all text_poke operations into one.
> > > > >
> > > > > Can we add an 'image' pointer into struct bpf_binary_header?
> > > >
> > > > There is a 4-byte hole in bpf_binary_header. How about we put
> > > > image_offset there? Actually we only need 2 bytes for the offset.
> > > >
> > > > > Then do:
> > > > > int bpf_jit_binary_alloc_pack(size, &ro_hdr, &rw_hdr);
> > > > >
> > > > > ro_hdr->image would be the address used to compute offsets by JIT.
> > > >
> > > > If we only do one text_poke(), we cannot write ro_hdr->image yet. We
> > > > can use ro_hdr + rw_hdr->image_offset instead.
> > >
> > > Good points.
> > > Maybe let's go back to Ilya's suggestion and return 4 pointers
> > > from bpf_jit_binary_alloc_pack?
> > How about we use image_offset, like:
> >
> > struct bpf_binary_header {
> >         u32 size;
> >         u32 image_offset;
> >         u8 image[] __aligned(BPF_IMAGE_ALIGNMENT);
> > };
> >
> > Then we can use
> >
> > image = (void *)header + header->image_offset;
>
> I'm not excited about it, since it leaks header details into JITs.
> Looks like we don't need the JIT to be aware of it.
> How about we do random() % roundup(sizeof(struct bpf_binary_header), 64)
> to pick the image start and populate the
> image - sizeof(struct bpf_binary_header) range with 'int3'.
> This way we can completely hide binary_header inside generic code.
> bpf_jit_binary_alloc_pack() would return ro_image and rw_image only,
> and the JIT would pass them back into bpf_jit_binary_finalize_pack().
> From the image pointer it would be trivial to get to binary_header
> by masking with ~63.
> The 128-byte offset that we use today was chosen arbitrarily.
> We were burning a whole page for a single program, so a 128-byte zone
> at the front was ok.
> Now we will be packing progs rounded up to 64 bytes, so it's better
> to avoid wasting those 128 bytes regardless.

In bpf_jit_binary_hdr(), we calculate the header as image & PAGE_MASK.
If we want s/PAGE_MASK/~63/ for x86_64, we will have different versions
of bpf_jit_binary_hdr(). It is not on any hot path, so we can use
__weak for it. Other than this, I think the solution works fine.

Thanks,
Song