Date: Sun, 14 May 2023 11:43:25 -0700
From: Eric Biggers
To: Kent Overstreet
Cc: Lorenzo Stoakes, Christoph Hellwig, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-bcachefs@vger.kernel.org,
	Kent Overstreet, Andrew Morton, Uladzislau Rezki, linux-mm@kvack.org
Subject: Re: [PATCH 07/32] mm: Bring back vmalloc_exec
Message-ID: <20230514184325.GB9528@sol.localdomain>
References: <20230509165657.1735798-1-kent.overstreet@linux.dev>
	<20230509165657.1735798-8-kent.overstreet@linux.dev>
	<20230510064849.GC1851@quark.localdomain>
	<20230513015752.GC3033@quark.localdomain>

On Sun, May 14, 2023 at 01:45:29AM -0400, Kent Overstreet wrote:
> On Fri, May 12, 2023 at 06:57:52PM -0700, Eric Biggers wrote:
> > First, I wanted to mention that decoding of variable-length fields has
> > been extensively studied for decompression algorithms, e.g. for Huffman
> > decoding.
> > And it turns out that it can be done branchlessly.  The basic idea is
> > that you have a branchless refill step that looks like the following:
> >
> > #define REFILL_BITS_BRANCHLESS()                        \
> >         bitbuf |= get_unaligned_u64(p) << bitsleft;     \
> >         p += 7 - ((bitsleft >> 3) & 0x7);               \
> >         bitsleft |= 56;
> >
> > That branchlessly ensures that 'bitbuf' contains '56 <= bitsleft <= 63'
> > bits.  Then, the needed number of bits can be removed and returned:
> >
> > #define READ_BITS(n)                                    \
> >         REFILL_BITS_BRANCHLESS();                       \
> >         tmp = bitbuf & (((u64)1 << (n)) - 1);           \
> >         bitbuf >>= (n);                                 \
> >         bitsleft -= (n);                                \
> >         tmp
> >
> > If you're interested, I can give you some references about the above
> > method.
>
> I might be interested in those references, new bit tricks and integer
> encodings are always fun :)

There are some good blog posts by Fabian Giesen:

* https://fgiesen.wordpress.com/2018/02/19/reading-bits-in-far-too-many-ways-part-1/
* https://fgiesen.wordpress.com/2018/02/20/reading-bits-in-far-too-many-ways-part-2/
* https://fgiesen.wordpress.com/2018/09/27/reading-bits-in-far-too-many-ways-part-3/

And the examples I gave above are basically what I use in libdeflate:
https://github.com/ebiggers/libdeflate/blob/master/lib/deflate_decompress.c

> > But, I really just wanted to mention it for completeness, since I think
> > you'd actually want to go in a slightly different direction, since (a)
> > you have all the field widths available from the beginning, as opposed
> > to being interleaved into the bitstream itself (as is the case in
> > Huffman decoding for example), so you're not limited to serialized
> > decoding of each field, (b) your fields are up to 96 bits, and (c)
> > you've selected a bitstream convention that seems to make it such that
> > your stream *must* be read in aligned units of u64, so I don't think
> > something like REFILL_BITS_BRANCHLESS() could work for you anyway.
> >
> > What I would suggest instead is preprocessing the list of 6 field
> > lengths to create some information that can be used to extract all 6
> > fields branchlessly with no dependencies between different fields.
> > (And you clearly *can* add a preprocessing step, as you already have
> > one -- the dynamic code generator.)
> >
> > So, something like the following:
> >
> >         const struct field_info *info = &format->fields[0];
> >
> >         field0 = (in->u64s[info->word_idx] >> info->shift1) & info->mask;
> >         field0 |= in->u64s[info->word_idx - 1] >> info->shift2;
> >
> > ... but with the code for all 6 fields interleaved.
> >
> > On modern CPUs, I think that would be faster than your current C code.
> >
> > You could do better by creating variants that are specialized for
> > specific common sets of parameters.  During "preprocessing", you would
> > select a variant and set an enum accordingly.  During decoding, you
> > would switch on that enum and call the appropriate variant.  (This
> > could also be done with a function pointer, of course, but indirect
> > calls are slow these days...)
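To make that concrete, here is a minimal userspace sketch of the
refill/read idea above, written as a function rather than macros so it
compiles standalone.  (A little-endian host is assumed;
get_unaligned_u64() is spelled out with memcpy(); the read_bits()
wrapper and the driver in main() are invented for illustration, not
taken from libdeflate or bcachefs.)

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* unaligned little-endian 64-bit load */
        static inline uint64_t get_unaligned_u64(const uint8_t *p)
        {
                uint64_t v;

                memcpy(&v, p, sizeof(v));
                return v;
        }

        /* branchless refill to 56..63 valid bits, then remove and
         * return the low n bits */
        static uint64_t read_bits(const uint8_t **pp, uint64_t *bitbuf,
                                  unsigned *bitsleft, unsigned n)
        {
                uint64_t tmp;

                *bitbuf |= get_unaligned_u64(*pp) << *bitsleft;
                *pp += 7 - ((*bitsleft >> 3) & 0x7); /* whole bytes consumed */
                *bitsleft |= 56;

                tmp = *bitbuf & (((uint64_t)1 << n) - 1); /* n must be <= 56 */
                *bitbuf >>= n;
                *bitsleft -= n;
                return tmp;
        }

        int main(void)
        {
                /* 8 payload bytes plus 8 bytes of zero padding, because
                 * the 64-bit loads deliberately read ahead of what has
                 * actually been consumed */
                uint8_t buf[16] = { 0xef, 0xcd, 0xab, 0x89,
                                    0x67, 0x45, 0x23, 0x01 };
                const uint8_t *p = buf;
                uint64_t bitbuf = 0;
                unsigned bitsleft = 0;

                printf("%llx\n", (unsigned long long)
                       read_bits(&p, &bitbuf, &bitsleft, 12)); /* def */
                printf("%llx\n", (unsigned long long)
                       read_bits(&p, &bitbuf, &bitsleft, 20)); /* 89abc */
                return 0;
        }

Note the padding: the input buffer has to be overallocated by at least
8 bytes past the end of the bitstream, or the wide loads read out of
bounds.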
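For the precomputed-descriptor suggestion, the preprocessing and the
per-field extraction could look something like the sketch below.
Caveats: to stay short it assumes an LSB-first packing convention and
fields of at most 64 bits, unlike the high-to-low, up-to-96-bit
bcachefs key format, and all of the names are made up:

        #include <stdint.h>

        struct field_info {
                unsigned word_idx;      /* word holding the field's low bits */
                unsigned shift;         /* bit offset within that word */
                uint64_t mask;          /* (1 << width) - 1 */
        };

        /* runs once per format, not once per key */
        static void field_info_init(struct field_info *fi,
                                    unsigned bit_offset, unsigned width)
        {
                fi->word_idx = bit_offset / 64;
                fi->shift = bit_offset % 64;
                fi->mask = width < 64 ? (((uint64_t)1 << width) - 1)
                                      : ~(uint64_t)0;
        }

        /*
         * Branchless extraction of one field that may straddle two
         * adjacent words.  The (x << 1) << (63 - shift) split gives a
         * logical shift by (64 - shift) without the undefined behavior
         * of a plain 64-bit shift when shift == 0.  The words[] array
         * must be padded so words[word_idx + 1] is always readable.
         */
        static uint64_t get_field(const uint64_t *words,
                                  const struct field_info *fi)
        {
                uint64_t lo = words[fi->word_idx] >> fi->shift;
                uint64_t hi = (words[fi->word_idx + 1] << 1)
                                        << (63 - fi->shift);

                return (lo | hi) & fi->mask;
        }

Since each field depends only on its own descriptor and two loads, the
six get_field() calls have no serial dependency on one another, which
is what lets the compiler (or you, by hand) interleave them.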
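And the enum-plus-switch dispatch might be shaped like this (again,
every name is invented and the unpack bodies are stubs):

        #include <stdint.h>

        enum unpack_variant {
                UNPACK_GENERIC,         /* fallback for any format */
                UNPACK_ALL_BYTE_ALIGNED /* hypothetical common case */
        };

        struct format_processed {
                enum unpack_variant variant; /* chosen in preprocessing */
                /* ... precomputed field descriptors ... */
        };

        static uint64_t unpack_generic(const struct format_processed *f,
                                       const uint64_t *in)
        {
                return in[0];   /* stub */
        }

        static uint64_t unpack_all_byte_aligned(const struct format_processed *f,
                                                const uint64_t *in)
        {
                return in[0];   /* stub */
        }

        static uint64_t unpack(const struct format_processed *f,
                               const uint64_t *in)
        {
                /* the switch keeps every call direct; with retpolines
                 * enabled, direct calls beat a function pointer */
                switch (f->variant) {
                case UNPACK_ALL_BYTE_ALIGNED:
                        return unpack_all_byte_aligned(f, in);
                default:
                        return unpack_generic(f, in);
                }
        }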
> testing random btree updates:
>
> dynamically generated unpack:
> rand_insert: 20.0 MiB with 1 threads in 33 sec, 1609 nsec per iter, 607 KiB per sec
>
> old C unpack:
> rand_insert: 20.0 MiB with 1 threads in 35 sec, 1672 nsec per iter, 584 KiB per sec
>
> the Eric Biggers special:
> rand_insert: 20.0 MiB with 1 threads in 35 sec, 1676 nsec per iter, 583 KiB per sec
>
> Tested two versions of your approach, one without a shift value, one
> where we use a shift value to try to avoid unaligned access - the
> second was perhaps 1% faster -
>
> so it's not looking good.  This benchmark doesn't even hit
> unpack_key() quite as much as I thought, so the difference is
> significant.
>
> diff --git a/fs/bcachefs/bkey.c b/fs/bcachefs/bkey.c

I don't know what this patch applies to, so I can't properly review it.
I suggest checking the assembly and making sure it is what is expected.

In general, for this type of thing it's also helpful to put together a
userspace micro-benchmark program so that it's very fast to evaluate
different options.  Building and booting a kernel and running an I/O
benchmark on a bcachefs filesystem is much more time-consuming and less
precise.

> -struct bkey __bch2_bkey_unpack_key(const struct bkey_format_processed *format_p,
> +struct bkey __bch2_bkey_unpack_key(const struct bkey_format_processed *format,
>                                    const struct bkey_packed *in)
>  {
> -       const struct bkey_format *format = &format_p->f;
> -       struct unpack_state state = unpack_state_init(format, in);
>         struct bkey out;
>
> -       EBUG_ON(format->nr_fields != BKEY_NR_FIELDS);
> -       EBUG_ON(in->u64s < format->key_u64s);
> +       EBUG_ON(format->f.nr_fields != BKEY_NR_FIELDS);
> +       EBUG_ON(in->u64s < format->f.key_u64s);
>         EBUG_ON(in->format != KEY_FORMAT_LOCAL_BTREE);
> -       EBUG_ON(in->u64s - format->key_u64s + BKEY_U64s > U8_MAX);
> +       EBUG_ON(in->u64s - format->f.key_u64s + BKEY_U64s > U8_MAX);
>
> -       out.u64s = BKEY_U64s + in->u64s - format->key_u64s;
> +       out.u64s = BKEY_U64s + in->u64s - format->f.key_u64s;
>         out.format = KEY_FORMAT_CURRENT;
>         out.needs_whiteout = in->needs_whiteout;
>         out.type = in->type;
>         out.pad[0] = 0;
>
> +       if (likely(format->aligned)) {
> +#define x(id, field)   out.field = get_aligned_field(format, in, id);
> +               bkey_fields()
> +#undef x
> +       } else {
> +               struct unpack_state state = unpack_state_init(&format->f, in);
> +
>  #define x(id, field)   out.field = get_inc_field(&state, id);
> -       bkey_fields()
> +               bkey_fields()
>  #undef x
> +       }

It looks like you didn't change the !aligned case.  How often is the
'aligned' case taken?

I think it would also help if the generated assembly had the handling
of the fields interleaved.  To achieve that, it might be necessary to
interleave the C code.

- Eric