From: Charlie Jenkins
Subject: [PATCH v13 0/5] riscv: Add fine-tuned checksum functions
Date: Wed, 20 Dec 2023 15:37:38 -0800
Message-Id: <20231220-optimize_checksum-v13-0-a73547e1cad8@rivosinc.com>
To: Charlie Jenkins, Palmer Dabbelt, Conor Dooley, Samuel Holland,
    David Laight, Xiao Wang, Evan Green, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org
Cc: Paul Walmsley, Albert Ou, Arnd Bergmann, David Laight, Conor Dooley

Each architecture generally implements fine-tuned checksum functions to
leverage the instruction set. This series adds the main checksum
functions that are used in networking. Tested on QEMU, this series allows
the CHECKSUM_KUNIT tests to complete an average of 50.9% faster.

This series makes heavy use of the Zbb extension, enabled through
alternatives patching.

To test this series, enable the configs for KUNIT, then CHECKSUM_KUNIT.

I have attempted to make these functions as optimal as possible, but I
have not run anything on actual riscv hardware. My performance testing
has been limited to inspecting the assembly, running the algorithms on
x86 hardware, and running in QEMU.

ip_fast_csum is a relatively small function, so even though it is
possible to read 64 bits at a time on compatible hardware, the setup and
cleanup code becomes the bottleneck; loading 32 bits at a time is
actually faster.
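For illustration only, a minimal portable C sketch of the 32-bit-at-a-time
approach (the function name and structure here are assumptions for the
sake of the example, not the code in this series):

#include <stdint.h>

/* Illustrative sketch: sum an IPv4 header as 32-bit words and fold the
 * result down to 16 bits. "iph" points to the header and "ihl" is the
 * header length in 32-bit words. */
static uint16_t ip_fast_csum_sketch(const void *iph, unsigned int ihl)
{
	const uint32_t *word = iph;
	uint64_t sum = 0;
	unsigned int i;

	for (i = 0; i < ihl; i++)
		sum += word[i];

	/* Fold the carries: 64 -> 32 -> 16 bits, ones' complement style. */
	sum = (sum & 0xffffffff) + (sum >> 32);
	sum = (sum & 0xffffffff) + (sum >> 32);
	sum = (sum & 0xffff) + (sum >> 16);
	sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}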
Relies on
https://lore.kernel.org/lkml/20230920193801.3035093-1-evan@rivosinc.com/

---

The algorithm proposed to replace the default csum_fold can be seen to
compute the same result by running all 2^32 possible inputs:

#include <stdio.h>

static inline unsigned int ror32(unsigned int word, unsigned int shift)
{
	return (word >> (shift & 31)) | (word << ((-shift) & 31));
}

unsigned short csum_fold(unsigned int csum)
{
	unsigned int sum = csum;

	sum = (sum & 0xffff) + (sum >> 16);
	sum = (sum & 0xffff) + (sum >> 16);
	return ~sum;
}

unsigned short csum_fold_arc(unsigned int csum)
{
	return ((~csum - ror32(csum, 16)) >> 16);
}

int main()
{
	unsigned int start = 0x0;

	do {
		if (csum_fold(start) != csum_fold_arc(start)) {
			printf("Not the same %u\n", start);
			return -1;
		}
		start += 1;
	} while (start != 0x0);
	printf("The same\n");
	return 0;
}
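To make the equivalence concrete, here is one input worked by hand (an
added spot check, not part of the original exhaustive run):

  csum = 0x12345678
  csum_fold:     (0x5678 + 0x1234) = 0x68ac, then ~0x68ac = 0x9753
  csum_fold_arc: (~0x12345678 - ror32(0x12345678, 16)) >> 16
               = (0xedcba987 - 0x56781234) >> 16
               = 0x97539753 >> 16 = 0x9753

Both forms return 0x9753.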
Cc: Paul Walmsley
Cc: Albert Ou
Cc: Arnd Bergmann
To: Charlie Jenkins
To: Palmer Dabbelt
To: Conor Dooley
To: Samuel Holland
To: David Laight
To: Xiao Wang
To: Evan Green
To: linux-riscv@lists.infradead.org
To: linux-kernel@vger.kernel.org
To: linux-arch@vger.kernel.org
Signed-off-by: Charlie Jenkins
---
Changes in v13:
- Move cast from patch 4 to patch 3
- Link to v12: https://lore.kernel.org/r/20231212-optimize_checksum-v12-0-419a4ba6d666@rivosinc.com

Changes in v12:
- Rebase onto 6.7-rc5
- Add performance stats in the cover letter
- Link to v11: https://lore.kernel.org/r/20231117-optimize_checksum-v11-0-7d9d954fe361@rivosinc.com

Changes in v11:
- Extensive modifications to comply with sparse
- Organize include statements (Xiao)
- Add csum_ipv6_magic to commit message (Xiao)
- Remove extraneous len statement (Xiao)
- Add kasan_check_read call (Xiao)
- Improve comment field in checksum.h (Xiao)
- Consolidate "buff" and "len" into one parameter "end" (Xiao)
- Link to v10: https://lore.kernel.org/r/20231101-optimize_checksum-v10-0-a498577bb969@rivosinc.com

Changes in v10:
- Move tests that were riscv-specific to be arch agnostic (Arnd)
- Link to v9: https://lore.kernel.org/r/20231031-optimize_checksum-v9-0-ea018e69b229@rivosinc.com

Changes in v9:
- Use ror64 (Xiao)
- Move do_csum and csum_ipv6_magic headers to patch 4 (Xiao)
- Remove word "IP" from checksum headers (Xiao)
- Swap to using ifndef CONFIG_32BIT instead of ifdef CONFIG_64BIT (Xiao)
- Run no-alignment code when buff is aligned (Xiao)
- Consolidate the overlap of the two do_csum implementations into do_csum_common
- Link to v8: https://lore.kernel.org/r/20231027-optimize_checksum-v8-0-feb7101d128d@rivosinc.com

Changes in v8:
- Speedups of 12% without Zbb and 21% with Zbb when the cpu supports fast misaligned accesses for do_csum
- Various formatting updates
- Patch now relies on https://lore.kernel.org/lkml/20230920193801.3035093-1-evan@rivosinc.com/
- Link to v7: https://lore.kernel.org/r/20230919-optimize_checksum-v7-0-06c7d0ddd5d6@rivosinc.com

Changes in v7:
- Included linux/bitops.h in asm-generic/checksum.h to use ror (Conor)
- Optimized loop in do_csum (David)
- Used ror instead of shifting (David)
- Unfortunately had to reintroduce ifdefs because gcc is not smart enough to not throw warnings on code that will never execute
- Use ifdef instead of IS_ENABLED on __LITTLE_ENDIAN because IS_ENABLED does not work on that
- Only optimize for zbb when alternatives is enabled in do_csum
- Link to v6: https://lore.kernel.org/r/20230915-optimize_checksum-v6-0-14a6cf61c618@rivosinc.com

Changes in v6:
- Fix accuracy of commit message for csum_fold
- Fix indentation
- Link to v5: https://lore.kernel.org/r/20230914-optimize_checksum-v5-0-c95b82a2757e@rivosinc.com

Changes in v5:
- Drop vector patches
- Check ZBB enabled before doing any ZBB code (Conor)
- Check endianness in IS_ENABLED
- Revert to the simpler non-tree based version of ipv6_csum_magic since David pointed out that the tree based version is not better
- Link to v4: https://lore.kernel.org/r/20230911-optimize_checksum-v4-0-77cc2ad9e9d7@rivosinc.com

Changes in v4:
- Suggestion by David Laight to use an improved checksum used in arch/arc
- Eliminates zero-extension on rv32, but not on rv64
- Reduces data dependency, which should improve execution speed on rv32 and rv64
- Still passes CHECKSUM_KUNIT and RISCV_CHECKSUM_KUNIT on rv32 and rv64 with and without zbb
- Link to v3: https://lore.kernel.org/r/20230907-optimize_checksum-v3-0-c502d34d9d73@rivosinc.com

Changes in v3:
- Use riscv_has_extension_likely and has_vector where possible (Conor)
- Reduce ifdefs by using IS_ENABLED where possible (Conor)
- Use kernel_vector_begin in the vector code (Samuel)
- Link to v2: https://lore.kernel.org/r/20230905-optimize_checksum-v2-0-ccd658db743b@rivosinc.com

Changes in v2:
- After more benchmarking, rework functions to improve performance
- Remove tests that overlapped with the already existing checksum tests and make tests more extensive
- Use alternatives to activate code with Zbb and vector extensions
- Link to v1: https://lore.kernel.org/r/20230826-optimize_checksum-v1-0-937501b4522a@rivosinc.com

---
Charlie Jenkins (5):
      asm-generic: Improve csum_fold
      riscv: Add static key for misaligned accesses
      riscv: Add checksum header
      riscv: Add checksum library
      kunit: Add tests for csum_ipv6_magic and ip_fast_csum

 arch/riscv/include/asm/checksum.h   |  93 ++++++++++
 arch/riscv/include/asm/cpufeature.h |   2 +
 arch/riscv/kernel/cpufeature.c      |  30 ++++
 arch/riscv/lib/Makefile             |   1 +
 arch/riscv/lib/csum.c               | 326 ++++++++++++++++++++++++++++++++++++
 include/asm-generic/checksum.h      |   6 +-
 lib/checksum_kunit.c                | 284 ++++++++++++++++++++++++++++++-
 7 files changed, 738 insertions(+), 4 deletions(-)
---
base-commit: a39b6ac3781d46ba18193c9dbb2110f31e9bffe9
change-id: 20230804-optimize_checksum-db145288ac21

-- 
- Charlie