2023-09-21 02:05:52

by Noah Goldstein

[permalink] [raw]
Subject: x86/csum: Remove unnecessary odd handling

The special case for odd-aligned buffers is unnecessary and mostly
just adds overhead. Aligned buffers are the expectation, and even for
unaligned buffers the only case that was helped is a buffer 1 byte
off from word alignment, which is only ~1/7 of the misaligned cases.
Overall it seems highly unlikely to be worth the extra branch.

It was left in by the previous perf-improvement patch because I was
erroneously comparing the exact output of `csum_partial(...)`, but
really we only need `csum_fold(csum_partial(...))` to match, so it is
safe to remove.
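
(For reference, the fold is the only value callers ultimately use; a
minimal C sketch of the generic fold - not the x86 asm version - is:

	static inline __sum16 csum_fold_sketch(__wsum csum)
	{
		u32 sum = (__force u32)csum;

		sum = (sum & 0xffff) + (sum >> 16); /* fold high half into low */
		sum = (sum & 0xffff) + (sum >> 16); /* absorb the carry */
		return (__force __sum16)~sum;
	}

so two partial sums that differ before folding can still fold to the
same 16-bit checksum.)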

All csum kunit tests pass.

Signed-off-by: Noah Goldstein <[email protected]>
Reviewed-by: Eric Dumazet <[email protected]>
Reviewed-by: David Laight <[email protected]>
---
arch/x86/lib/csum-partial_64.c | 36 ++++------------------------------
1 file changed, 4 insertions(+), 32 deletions(-)

diff --git a/arch/x86/lib/csum-partial_64.c b/arch/x86/lib/csum-partial_64.c
index cea25ca8b8cf..b83c8accd756 100644
--- a/arch/x86/lib/csum-partial_64.c
+++ b/arch/x86/lib/csum-partial_64.c
@@ -11,26 +11,9 @@
#include <asm/checksum.h>
#include <asm/word-at-a-time.h>

-static inline unsigned short from32to16(unsigned a)
+static inline __wsum csum_finalize_sum(u64 temp64)
{
- unsigned short b = a >> 16;
- asm("addw %w2,%w0\n\t"
- "adcw $0,%w0\n"
- : "=r" (b)
- : "0" (b), "r" (a));
- return b;
-}
-
-static inline __wsum csum_tail(u64 temp64, int odd)
-{
- unsigned int result;
-
- result = add32_with_carry(temp64 >> 32, temp64 & 0xffffffff);
- if (unlikely(odd)) {
- result = from32to16(result);
- result = ((result >> 8) & 0xff) | ((result & 0xff) << 8);
- }
- return (__force __wsum)result;
+ return (temp64 + ror64(temp64, 32)) >> 32;
}

/*
@@ -47,17 +30,6 @@ static inline __wsum csum_tail(u64 temp64, int odd)
__wsum csum_partial(const void *buff, int len, __wsum sum)
{
u64 temp64 = (__force u64)sum;
- unsigned odd;
-
- odd = 1 & (unsigned long) buff;
- if (unlikely(odd)) {
- if (unlikely(len == 0))
- return sum;
- temp64 = ror32((__force u32)sum, 8);
- temp64 += (*(unsigned char *)buff << 8);
- len--;
- buff++;
- }

/*
* len == 40 is the hot case due to IPv6 headers, but annotating it likely()
@@ -73,7 +45,7 @@ __wsum csum_partial(const void *buff, int len, __wsum sum)
"adcq $0,%[res]"
: [res] "+r"(temp64)
: [src] "r"(buff), "m"(*(const char(*)[40])buff));
- return csum_tail(temp64, odd);
+ return csum_finalize_sum(temp64);
}
if (unlikely(len >= 64)) {
/*
@@ -143,7 +115,7 @@ __wsum csum_partial(const void *buff, int len, __wsum sum)
: [res] "+r"(temp64)
: [trail] "r"(trail));
}
- return csum_tail(temp64, odd);
+ return csum_finalize_sum(temp64);
}
EXPORT_SYMBOL(csum_partial);

--
2.34.1


2023-09-23 07:07:07

by kernel test robot

[permalink] [raw]
Subject: Re: x86/csum: Remove unnecessary odd handling

Hi Noah,

kernel test robot noticed the following build warnings:

[auto build test WARNING on tip/x86/core]
[also build test WARNING on tip/master tip/auto-latest linus/master v6.6-rc2 next-20230921]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Noah-Goldstein/x86-csum-Remove-unnecessary-odd-handling/20230921-032450
base: tip/x86/core
patch link: https://lore.kernel.org/r/20230920192300.3772199-1-goldstein.w.n%40gmail.com
patch subject: x86/csum: Remove unnecessary odd handling
config: x86_64-randconfig-121-20230921 (https://download.01.org/0day-ci/archive/20230923/[email protected]/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20230923/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

sparse warnings: (new ones prefixed by >>)
>> arch/x86/lib/csum-partial_64.c:16:45: sparse: sparse: incorrect type in return expression (different base types) @@ expected restricted __wsum @@ got unsigned long long @@
arch/x86/lib/csum-partial_64.c:16:45: sparse: expected restricted __wsum
arch/x86/lib/csum-partial_64.c:16:45: sparse: got unsigned long long
>> arch/x86/lib/csum-partial_64.c:16:45: sparse: sparse: incorrect type in return expression (different base types) @@ expected restricted __wsum @@ got unsigned long long @@
arch/x86/lib/csum-partial_64.c:16:45: sparse: expected restricted __wsum
arch/x86/lib/csum-partial_64.c:16:45: sparse: got unsigned long long

vim +16 arch/x86/lib/csum-partial_64.c

13
14 static inline __wsum csum_finalize_sum(u64 temp64)
15 {
> 16 return (temp64 + ror64(temp64, 32)) >> 32;
17 }
18

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-09-23 14:05:34

by Noah Goldstein

[permalink] [raw]
Subject: Re: x86/csum: Remove unnecessary odd handling

On Fri, Sep 22, 2023 at 10:25 PM kernel test robot <[email protected]> wrote:
>
> Hi Noah,
>
> kernel test robot noticed the following build warnings:
>
> [...]
>
> sparse warnings: (new ones prefixed by >>)
> >> arch/x86/lib/csum-partial_64.c:16:45: sparse: sparse: incorrect type in return expression (different base types) @@ expected restricted __wsum @@ got unsigned long long @@
> arch/x86/lib/csum-partial_64.c:16:45: sparse: expected restricted __wsum
> arch/x86/lib/csum-partial_64.c:16:45: sparse: got unsigned long long
> >> arch/x86/lib/csum-partial_64.c:16:45: sparse: sparse: incorrect type in return expression (different base types) @@ expected restricted __wsum @@ got unsigned long long @@
> arch/x86/lib/csum-partial_64.c:16:45: sparse: expected restricted __wsum
> arch/x86/lib/csum-partial_64.c:16:45: sparse: got unsigned long long
>
> vim +16 arch/x86/lib/csum-partial_64.c
>
> 13
> 14 static inline __wsum csum_finalize_sum(u64 temp64)
> 15 {
> > 16 return (temp64 + ror64(temp64, 32)) >> 32;
> 17 }
> 18

Just needs a `(__wsum)` cast. Should I post a new patch?
>
> --
> 0-DAY CI Kernel Test Service
> https://github.com/intel/lkp-tests/wiki

2023-09-23 21:13:55

by David Laight

[permalink] [raw]
Subject: RE: x86/csum: Remove unnecessary odd handling

From: Noah Goldstein
> Sent: 23 September 2023 15:05
>
> On Fri, Sep 22, 2023 at 10:25 PM kernel test robot <[email protected]> wrote:
> >
> > Hi Noah,
> >
> > kernel test robot noticed the following build warnings:
> >
> > [...]
> >
> > vim +16 arch/x86/lib/csum-partial_64.c
> >
> > 13
> > 14 static inline __wsum csum_finalize_sum(u64 temp64)
> > 15 {
> > > 16 return (temp64 + ror64(temp64, 32)) >> 32;
> > 17 }
> > 18
>
> Just needs a `(__wsum)` cast. Should I post a new patch?

It'll need to be a (__force __wsum) cast.
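
Something like this sketch, assuming the reduction itself stays the
same as in the patch:

	static inline __wsum csum_finalize_sum(u64 temp64)
	{
		/* __force tells sparse the bitwise-type cast is intentional */
		return (__force __wsum)((temp64 + ror64(temp64, 32)) >> 32);
	}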

I think new patches are expected...

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

2023-09-24 16:32:31

by Noah Goldstein

[permalink] [raw]
Subject: Re: x86/csum: Remove unnecessary odd handling

On Sat, Sep 23, 2023 at 4:13 PM David Laight <[email protected]> wrote:
>
> From: Noah Goldstein
> > Sent: 23 September 2023 15:05
> >
> > On Fri, Sep 22, 2023 at 10:25 PM kernel test robot <[email protected]> wrote:
> > >
> > > Hi Noah,
> > >
> > > kernel test robot noticed the following build warnings:
> > >
> > > [...]
> > >
> > > vim +16 arch/x86/lib/csum-partial_64.c
> > >
> > > 13
> > > 14 static inline __wsum csum_finalize_sum(u64 temp64)
> > > 15 {
> > > > 16 return (temp64 + ror64(temp64, 32)) >> 32;
> > > 17 }
> > > 18
> >
> > Just needs a `(__wsum)` cast. Should I post a new patch?
>
> It'll need to be a (__force __wsum) cast.
>
> I think new patches are expected...
>
Thank you, posted V4 that should fix the warning.
> David
>
> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> Registration No: 1397386 (Wales)

2023-12-23 22:19:12

by Noah Goldstein

[permalink] [raw]
Subject: Re: x86/csum: Remove unnecessary odd handling

On Sun, Sep 24, 2023 at 7:35 AM Noah Goldstein <[email protected]> wrote:
>
> On Sat, Sep 23, 2023 at 4:13 PM David Laight <[email protected]> wrote:
> >
> > [...]
> > >
> > > Just needs a `(__wsum)` cast. Should I post a new patch?
> >
> > It'll need to be a (__force __wsum) cast.
> >
> > I think new patches are expected...
> >
> Thank you, posted V4 that should fix the warning.
> > David

ping.

2024-01-04 23:28:29

by Noah Goldstein

[permalink] [raw]
Subject: Re: x86/csum: Remove unnecessary odd handling

On Sat, Dec 23, 2023 at 2:18 PM Noah Goldstein <[email protected]> wrote:
>
> On Sun, Sep 24, 2023 at 7:35 AM Noah Goldstein <[email protected]> wrote:
> >
> > On Sat, Sep 23, 2023 at 4:13 PM David Laight <[email protected]> wrote:
> > >
> > > [...]
> > > >
> > > > Just needs a `(__wsum)` cast. Should I post a new patch?
> > >
> > > It'll need to be a (__force __wsum) cast.
> > >
> > > I think new patches are expected...
> > >
> > Thank you, posted V4 that should fix the warning.
> > > David
>
> ping.
ping

2024-01-04 23:34:15

by Dave Hansen

[permalink] [raw]
Subject: Re: x86/csum: Remove unnecessary odd handling

On 1/4/24 15:28, Noah Goldstein wrote:
>> ping.
> ping

Noah, it would be great to elaborate on what you are pinging about.
This thread has a note that you "posted v4", but I don't see a
clearly-tagged v4 in my inbox, just a bunch of identically-titled
non-standard-subject[1] emails without any indication of what changed in
the individual patches.

1. https://docs.kernel.org/process/submitting-patches.html

2024-01-04 23:37:00

by Linus Torvalds

[permalink] [raw]
Subject: Re: x86/csum: Remove unnecessary odd handling

On Thu, 4 Jan 2024 at 15:28, Noah Goldstein <[email protected]> wrote:
>
> ping

Bah. I think this keeps falling through the cracks because the
networking people consider this an architecture thing, and the x86
people probably consider this a networking thing.

Anyway, since I looked at the thing originally, and feel like I know
the x86 side and understand the strange IP csum too, I just applied it
directly.

Linus

2024-01-05 00:33:39

by Linus Torvalds

[permalink] [raw]
Subject: Re: x86/csum: Remove unnecessary odd handling

On Thu, 4 Jan 2024 at 15:36, Linus Torvalds
<[email protected]> wrote:
>
> Anyway, since I looked at the thing originally, and feel like I know
> the x86 side and understand the strange IP csum too, I just applied it
> directly.

I ended up just applying my 40-byte cleanup thing too that I've been
keeping in my own tree since posting it (as the "Silly csum
improvement. Maybe" patch).

I've been running it on my own machine since last June, and I finally
even enabled the csum KUnit test just to double-check it.

Linus

2024-01-05 10:41:53

by David Laight

[permalink] [raw]
Subject: RE: x86/csum: Remove unnecessary odd handling

From: Linus Torvalds
> Sent: 05 January 2024 00:33
>
> On Thu, 4 Jan 2024 at 15:36, Linus Torvalds
> <[email protected]> wrote:
> >
> > Anyway, since I looked at the thing originally, and feel like I know
> > the x86 side and understand the strange IP csum too, I just applied it
> > directly.
>
> I ended up just applying my 40-byte cleanup thing too that I've been
> keeping in my own tree since posting it (as the "Silly csum
> improvement. Maybe" patch).

Interesting, I'm pretty sure trying to get two blocks of
'adc' scheduled in parallel like that doesn't work.

I got an adc every clock from this 'beast':
+ /*
+ * Align the byte count to a multiple of 16 then
+ * add 64 bit words to alternating registers.
+ * Finally reduce to 64 bits.
+ */
+ asm( " bt $4, %[len]\n"
+ " jnc 10f\n"
+ " add (%[buff], %[len]), %[sum_0]\n"
+ " adc 8(%[buff], %[len]), %[sum_1]\n"
+ " lea 16(%[len]), %[len]\n"
+ "10: jecxz 20f\n"
+ " adc (%[buff], %[len]), %[sum_0]\n"
+ " adc 8(%[buff], %[len]), %[sum_1]\n"
+ " lea 32(%[len]), %[len_tmp]\n"
+ " adc 16(%[buff], %[len]), %[sum_0]\n"
+ " adc 24(%[buff], %[len]), %[sum_1]\n"
+ " mov %[len_tmp], %[len]\n"
+ " jmp 10b\n"
+ "20: adc %[sum_0], %[sum]\n"
+ " adc %[sum_1], %[sum]\n"
+ " adc $0, %[sum]\n"
+ : [sum] "+&r" (sum), [sum_0] "+&r" (sum_0), [sum_1] "+&r" (sum_1),
+ [len] "+&c" (len), [len_tmp] "=&r" (len_tmp)
+ : [buff] "r" (buff)
+ : "memory" );

Followed by code to sort out the trailing 0-15 bytes.

Intel cpu (from P-II until Broadwell 5th-gen) take two clocks for 'adc'
(probably because it needs 3 inputs).
So 'adc' chains ran a lot slower than you might think.
(Clearly no one ever actually benchmarked the old code!)
The first fix made the carry output available early - so adding
to alternate registers helps. IIRC this is in Ivy/Sandy bridge.
Maybe no one cares about Ivy/Sandy bridge and Haswell any more.
AMD cpu don't have this problem.

I'm pretty sure I measured that loop with a misaligned buffer.
Measurably slower, but less than one clock per cache line.
I guess that the cache-line crossing reads get split, but you
gain most back because the cpu can do two reads/clock.

Maybe I'll sort out another patch...

I did get 15/16 bytes/clock with a similar loop that used adox/adcx
but that needed unrolling again and only works on a few cpu.
IIRC amd have some cpu that support adox - but execute it slowly!
Annoyingly you can't use 'loop' even on cpu that support adox
because it is stupidly slow on intel cpu (ok on amd).

That version is a lot of pain since it needs run-time patching.

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

2024-01-05 16:12:38

by David Laight

[permalink] [raw]
Subject: RE: x86/csum: Remove unnecessary odd handling

From: David Laight
> Sent: 05 January 2024 10:41
>
> From: Linus Torvalds
> > Sent: 05 January 2024 00:33
> >
> > On Thu, 4 Jan 2024 at 15:36, Linus Torvalds
> > <[email protected]> wrote:
> > >
> > > Anyway, since I looked at the thing originally, and feel like I know
> > > the x86 side and understand the strange IP csum too, I just applied it
> > > directly.
> >
> > I ended up just applying my 40-byte cleanup thing too that I've been
> > keeping in my own tree since posting it (as the "Silly csum
> > improvement. Maybe" patch).
>
> Interesting, I'm pretty sure trying to get two blocks of
> 'adc' scheduled in parallel like that doesn't work.
>
> I got an adc every clock from this 'beast':
> [...]

I've got far too many x86 checksum functions lying around.

Actually you don't need all that.
Anything recent (probably Broadwell on) will execute:
"10: jecxz 20f\n"
" adc (%[buff], %[len]), %[sum]\n"
" adc 8(%[buff], %[len]), %[sum]\n"
" lea 16(%[len]), %[len]\n"
" jmp 10b\n"
"20: adc $0, %[sum]\n"
in two clocks per iteration - 8 bytes/clock.
Since it is trivial to handle 8n+4 buffers (eg as above)
that only leaves the C code to handle the final 0-7 bytes.
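
A portable sketch of that tail step (the kernel itself can use
load_unaligned_zeropad() plus a shift; plain memcpy into a zeroed
word is the safe illustration here):

	static inline u64 csum_tail_word(const void *buff, int len)
	{
		u64 w = 0;

		/* load the final 1-7 bytes zero-extended; on little-endian
		 * they land in the low byte positions, ready for the same
		 * adc accumulation as the full words */
		memcpy(&w, buff, len & 7);
		return w;
	}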

> Maybe I'll sort out another patch...

Probably after the next rc1 is out.

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

2024-01-05 18:06:15

by Linus Torvalds

[permalink] [raw]
Subject: Re: x86/csum: Remove unnecessary odd handling

On Fri, 5 Jan 2024 at 02:41, David Laight <[email protected]> wrote:
>
> Interesting, I'm pretty sure trying to get two blocks of
> 'adc' scheduled in parallel like that doesn't work.

You should check out the benchmark at

https://github.com/fenrus75/csum_partial

and see if you can improve on it. I'm including the patch (on top of
that code by Arjan) to implement the actual current kernel version as
"New version".

Linus


Attachments:
p (4.73 kB)

2024-01-05 23:53:20

by David Laight

[permalink] [raw]
Subject: RE: x86/csum: Remove unnecessary odd handling

From: Linus Torvalds
> Sent: 05 January 2024 18:06
>
> On Fri, 5 Jan 2024 at 02:41, David Laight <[email protected]> wrote:
> >
> > Interesting, I'm pretty sure trying to get two blocks of
> > 'adc' scheduled in parallel like that doesn't work.
>
> You should check out the benchmark at
>
> https://github.com/fenrus75/csum_partial
>
> and see if you can improve on it. I'm including the patch (on top of
> that code by Arjan) to implement the actual current kernel version as
> "New version".

I'd have to fix his benchmark code first :-)
You can't use the TSC unless you lock the cpu frequency.
The longer the test runs for the faster the cpu will run.
I've had to use the performance counters to get accurate cycle counts.

I'm also sure I tried two separate 'adc' chains and failed
to get any overlapped instrictions.
But I'll try his loop in my framework.

The 'lfence' version also really does matter.
In some sense it doesn't matter if you add 10 clocks to the
IP header checksum - they'll be dwarfed by everything else.
But if you do have to checksum an 8k NFS UDP datagram the
loop count matters.

The OOO engine is very good a just piling up instructions
that are waiting for previous instructions in a big queue
and executing instruction for which the data is available.
So the control flow for the checksum code can finish well
before the checksum is available.

On a related point, do you remember what the 'killer app'
was for doing the checksum in copy_to/from_user?

The change pre-dates git and I can't find the commit message.
The only place it is done in the receive path is for UDP.
At a guess that helped 8k UDP datagrams going into a userspace nfsd?
I bet nothing really depends on RX UDP performance to userspace
on systems that don't do hw checksum?

The tx side is done copying data into an skb, I'm not sure if
that is all copies - you really don't want to do it if the
hardware supports tx checksum offload.
Using 'rep movsb' for copy_from_user() has to be faster than
the I-cache busting 'copy and checksum' code - even if you
end up doing a checksum not much later on.

IIRC (I looked a couple of weeks ago) only x86, sparc32 and m68k
actually have 'copy and checksum' (and the m68k didn't seem to
have the markers for faults!).
(All of them are I-cache killers)
The default iovec[] version checksums every fragment - which
gives grief trying to report a 32bit checksum and EFAULT.
OTOH it could sum the linear kernel buffer after the copy.
That would let it fold the checksum to 17bits and report -1
for EFAULT.

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

2024-01-06 00:19:22

by Linus Torvalds

[permalink] [raw]
Subject: Re: x86/csum: Remove unnecessary odd handling

On Fri, 5 Jan 2024 at 15:53, David Laight <[email protected]> wrote:
>
> I'd have to fix his benchmark code first :-)
> You can't use the TSC unless you lock the cpu frequency.
> The longer the test runs for the faster the cpu will run.

They'll stabilize, it has some cycle result aging code.

But yes, set the CPU policy to 'performance' or do performance
counters if you care deeply.

> On a related point, do you remember what the 'killer app'
> was for doing the checksum in copy_to/from_user?

No. It's a long time ago, and many things have changed since.

It's possible the copy-and-csum isn't worth it any more, simply
because all modern network cards will do the csum for us, and I think
loopback sets a flag saying "no need to checksum" too.

But I do have a strong memory of it being a big deal back when. A
_loong_ time ago.

Linus

2024-01-06 10:26:32

by Eric Dumazet

[permalink] [raw]
Subject: Re: x86/csum: Remove unnecessary odd handling

On Sat, Jan 6, 2024 at 1:19 AM Linus Torvalds
<[email protected]> wrote:
>
> On Fri, 5 Jan 2024 at 15:53, David Laight <[email protected]> wrote:
> >
> > I'd have to fix his benchmark code first :-)
> > You can't use the TSC unless you lock the cpu frequency.
> > The longer the test runs for the faster the cpu will run.
>
> They'll stabilize, it has soem cycle result aging code.
>
> But yes, set the CPU policy to 'performance' or do performance
> counters if you care deeply.
>
> > On a related point, do you remember what the 'killer app'
> > was for doing the checksum in copy_to/from_user?
>
> No. It's a long time ago, and many things have changed since.
>
> It's possible the copy-and-csum it's not worth it any more, simply
> because all modern network cards will do the csum for us, and I think
> loopback sets a flag saying "no need to checksum" too.
>
> But I do have a strong memory of it being a big deal back when. A
> _loong_ time ago.
>

On a related note, at least with clang, I found that csum_ipv6_magic()
is needlessly using temporary on-stack storage,
showing a stall on Cascade Lake unless I am patching add32_with_carry() :

diff --git a/arch/x86/include/asm/checksum_64.h b/arch/x86/include/asm/checksum_64.h
index 407beebadaf45a748f91a36b78bd1d023449b132..c3d6f47626c70d81f0c2ba401d85050b09a39922 100644
--- a/arch/x86/include/asm/checksum_64.h
+++ b/arch/x86/include/asm/checksum_64.h
@@ -171,7 +171,7 @@ static inline unsigned add32_with_carry(unsigned a, unsigned b)
asm("addl %2,%0\n\t"
"adcl $0,%0"
: "=r" (a)
- : "0" (a), "rm" (b));
+ : "0" (a), "r" (b));
return a;
}

Before patch :

ffffffff81b9b600 <csum_ipv6_magic>:
ffffffff81b9b600: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
ffffffff81b9b601: R_X86_64_NONE __fentry__-0x4
ffffffff81b9b605: 55 push %rbp
ffffffff81b9b606: 48 89 e5 mov %rsp,%rbp
ffffffff81b9b609: 48 83 ec 04 sub $0x4,%rsp // This will be removed after patch
ffffffff81b9b60d: 0f ca bswap %edx
ffffffff81b9b60f: 89 c9 mov %ecx,%ecx
ffffffff81b9b611: 48 c1 e1 08 shl $0x8,%rcx
ffffffff81b9b615: 44 89 c0 mov %r8d,%eax
ffffffff81b9b618: 48 01 d0 add %rdx,%rax
ffffffff81b9b61b: 48 01 c8 add %rcx,%rax
ffffffff81b9b61e: 48 03 07 add (%rdi),%rax
ffffffff81b9b621: 48 13 47 08 adc 0x8(%rdi),%rax
ffffffff81b9b625: 48 13 06 adc (%rsi),%rax
ffffffff81b9b628: 48 13 46 08 adc 0x8(%rsi),%rax
ffffffff81b9b62c: 48 83 d0 00 adc $0x0,%rax
ffffffff81b9b630: 48 89 c1 mov %rax,%rcx
ffffffff81b9b633: 48 c1 e9 20 shr $0x20,%rcx
ffffffff81b9b637: 89 4d fc mov %ecx,-0x4(%rbp) // Store ecx on the stack
ffffffff81b9b63a: 03 45 fc add -0x4(%rbp),%eax // reads the value right away, stalling some Intel cpus.
ffffffff81b9b63d: 83 d0 00 adc $0x0,%eax
ffffffff81b9b640: 89 c1 mov %eax,%ecx // Note that ecx content was scratch anyway
ffffffff81b9b642: c1 e1 10 shl $0x10,%ecx
ffffffff81b9b645: 25 00 00 ff ff and $0xffff0000,%eax
ffffffff81b9b64a: 01 c8 add %ecx,%eax
ffffffff81b9b64c: 15 ff ff 00 00 adc $0xffff,%eax
ffffffff81b9b651: f7 d0 not %eax
ffffffff81b9b653: c1 e8 10 shr $0x10,%eax
ffffffff81b9b656: 48 83 c4 04 add $0x4,%rsp
ffffffff81b9b65a: 5d pop %rbp
ffffffff81b9b65b: c3 ret
ffffffff81b9b65c: cc int3
ffffffff81b9b65d: 00 00 add %al,(%rax)
ffffffff81b9b65f: cc int3

After patch:

00000000000000a0 <csum_ipv6_magic>:
a0: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
a1: R_X86_64_NONE __fentry__-0x4
a5: 55 push %rbp
a6: 48 89 e5 mov %rsp,%rbp
a9: 0f ca bswap %edx
ab: 89 c9 mov %ecx,%ecx
ad: 48 c1 e1 08 shl $0x8,%rcx
b1: 44 89 c0 mov %r8d,%eax
b4: 48 01 d0 add %rdx,%rax
b7: 48 01 c8 add %rcx,%rax
ba: 48 03 07 add (%rdi),%rax
bd: 48 13 47 08 adc 0x8(%rdi),%rax
c1: 48 13 06 adc (%rsi),%rax
c4: 48 13 46 08 adc 0x8(%rsi),%rax
c8: 48 83 d0 00 adc $0x0,%rax
cc: 48 89 c1 mov %rax,%rcx
cf: 48 c1 e9 20 shr $0x20,%rcx
d3: 01 c8 add %ecx,%eax // just better !
d5: 83 d0 00 adc $0x0,%eax
d8: 89 c1 mov %eax,%ecx
da: c1 e1 10 shl $0x10,%ecx
dd: 25 00 00 ff ff and $0xffff0000,%eax
e2: 01 c8 add %ecx,%eax
e4: 15 ff ff 00 00 adc $0xffff,%eax
e9: f7 d0 not %eax
eb: c1 e8 10 shr $0x10,%eax
ee: 5d pop %rbp
ef: c3 ret
f0: cc int3

2024-01-06 19:33:28

by Linus Torvalds

[permalink] [raw]
Subject: Re: x86/csum: Remove unnecessary odd handling

On Sat, 6 Jan 2024 at 02:26, Eric Dumazet <[email protected]> wrote:
>
> On a related note, at least with clang, I found that csum_ipv6_magic()
> is needlessly using temporary on-stack storage,
> showing a stall on Cascade Lake unless I am patching add32_with_carry() :

This is a known compiler bug in clang:

https://github.com/llvm/llvm-project/issues/20571
https://github.com/llvm/llvm-project/issues/30873
https://github.com/llvm/llvm-project/issues/34837

and while we could always just use "r" for constraints, it will

(a) possibly run out of registers in some situations

(b) pessimize gcc that does this right

Can you please push the clang people to not do the stupid thing they
do now, which is to say "oh, I can use registers _or_ memory, so I'll
always use memory".

Btw, it's actually much worse than that, and clang is just doing
incredibly broken things. Switching to "r" just hides the garbage.

Because sometimes the source really *is* memory, ie we have

net/ipv4/udp.c:
csum = csum_add(csum, frags->csum);

and then it's criminally stupid to load it into a register when it can
be just used directly.

But clang says "criminally stupid? *I* will show you stupid!" -
because what *clang* does with the above is this thing of beauty:

movl 136(%rax), %edi
movl %edi, 28(%rsp)
addl 28(%rsp), %ecx
adcl $0, %ecx

which has turned from "criminally stupid" to "it's art, I tell you -
you're not supposed to understand it".

Anyway, this is *literally* a clang bug. Complain to clang people for
being *extra* stupid - we told the compiler that it can use a register
or memory, and clang decided "I'll use memory", so then when we gave
it a memory location, it said "no, not *that* memory - I'll just
reload it on stack".

In contrast, gcc gets this right - and for that udp.c case it just generates

addl 136(%rax),%ecx # frags_67->D.58941.D.58869.D.58836.csum, a
adcl $0,%ecx # a

like it should.

And for csum_ipv6_magic, gcc generates this:

addl %edx,%eax # tmp112, a
adcl $0,%eax # a

IOW, the kernel is *right*, and this is purely clang being completely bogus.

I really don't want to penalize a good compiler because a bad one
can't get its act together.

Linus

2024-01-06 22:09:22

by David Laight

[permalink] [raw]
Subject: RE: x86/csum: Remove unnecessary odd handling

From: Linus Torvalds
> Sent: 05 January 2024 18:06
>
> On Fri, 5 Jan 2024 at 02:41, David Laight <[email protected]> wrote:
> >
> > Interesting, I'm pretty sure trying to get two blocks of
> > 'adc' scheduled in parallel like that doesn't work.
>
> You should check out the benchmark at
>
> https://github.com/fenrus75/csum_partial
>
> and see if you can improve on it. I'm including the patch (on top of
> that code by Arjan) to implement the actual current kernel version as
> "New version".

Annoyingly (for me) you are partially right...

I found where my ip checksum perf code was hiding and revisited it.
Although I found comments elsewhere that the 'jecxz, adc, adc, lea, jmp'
loop did an adc every clock, it isn't happening for me now.

I'm only measuring the inner loop for multiples of 64 bytes.
The code less than 8 bytes and partial final words is a
separate problem.
The less unrolled the main loop, the less overhead there'll
be for 'normal' sizes.
So I've changed your '80 byte' block to 64 bytes for consistency.

I'm ignoring pre-sandy bridge cpu (no split flags) and pre-broadwell
(adc takes two clocks - although adc to alternate regs is one clock
on sandy bridge).
My test system is an i7-7700, I think anything from broadwell (5th gen)
will be at least as good.
I don't have a modern amd cpu.

The best loop for 256+ bytes is an adxc/adxo one.
However that requires the run-time patching.
Followed by new kernel version (two blocks of 4 adc).
The surprising one is:
xor sum, sum
1: adc (buff), sum
adc 8(buff), sum
lea 16(buff), buff
dec count
jnz 1b
adc $0, sum
For 256 bytes it is only a couple of clocks slower.
Maybe 10% slower for 512+ bytes.
But it needs almost no extra code for 'normal' buffer sizes.
By comparison the adxc/adxo one is 20% faster.
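
(For anyone following along: adcx reads and writes only CF, adox only
OF, so two independent carry chains can be interleaved in a single
instruction stream. A hypothetical unrolled fragment for one fixed
64-byte block - illustrative only, requires the ADX feature:

	static u64 csum64_adx_block(const u64 *p)
	{
		u64 a = 0, b = 0, zero = 0;

		asm("xorl %%eax, %%eax\n\t"	/* clears both CF and OF */
		    "adcx 0(%[p]), %[a]\n\t"	/* CF chain */
		    "adox 8(%[p]), %[b]\n\t"	/* OF chain */
		    "adcx 16(%[p]), %[a]\n\t"
		    "adox 24(%[p]), %[b]\n\t"
		    "adcx 32(%[p]), %[a]\n\t"
		    "adox 40(%[p]), %[b]\n\t"
		    "adcx 48(%[p]), %[a]\n\t"
		    "adox 56(%[p]), %[b]\n\t"
		    "adcx %[zero], %[a]\n\t"	/* absorb the final CF */
		    "adox %[zero], %[b]\n\t"	/* absorb the final OF */
		    : [a] "+r" (a), [b] "+r" (b)
		    : [p] "r" (p), [zero] "r" (zero)
		    : "rax", "cc", "memory");
		return a + b;	/* caller still folds the carry of this add */
	}

The catch is loop control: dec/add would clobber OF, which is why the
looped version needs jecxz/lea-style control and more unrolling.)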

The code is doing:
old = rdpmc
mfence
csum = do_csum(buf, len);
mfence
clocks = rdpmc - old
(That is directly reading the pmc register.)
With 'no-op' function it takes 160 clocks (I-cache resident).
Without the mfence 40 - but pretty much everything can execute
after the 2nd rdpmc.
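
In case anyone wants to reproduce that, a sketch of the harness
(assuming CR4.PCE permits user-space rdpmc and that the fixed cycle
counter is already enabled, e.g. by perf):

	#include <stdint.h>
	#include <x86intrin.h>

	extern unsigned do_csum(const unsigned char *buf, unsigned len);

	static inline uint64_t core_cycles(void)
	{
		/* 0x40000000 selects the fixed-counter space;
		 * fixed counter 1 is unhalted core cycles */
		return __rdpmc(0x40000001);
	}

	static uint64_t time_csum(const unsigned char *buf, unsigned len)
	{
		uint64_t t0 = core_cycles();

		_mm_mfence();		/* fence the call into the window */
		(void)do_csum(buf, len);
		_mm_mfence();		/* wait for the sum to complete */
		return core_cycles() - t0;
	}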

I've attached my (horrid) test program.

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)


Attachments:
ipcsum.c (12.65 kB)

2024-01-07 01:19:21

by H. Peter Anvin

[permalink] [raw]
Subject: RE: x86/csum: Remove unnecessary odd handling

On January 6, 2024 2:08:48 PM PST, David Laight <[email protected]> wrote:
>[...]
>
>The best loop for 256+ bytes is an adxc/adxo one.
>However that requires the run-time patching.
>
>[...]

Rather than runtime patching perhaps separate paths...

2024-01-07 11:45:00

by David Laight

[permalink] [raw]
Subject: RE: x86/csum: Remove unnecessary odd handling

From: H. Peter Anvin
> Sent: 07 January 2024 01:09
>
> On January 6, 2024 2:08:48 PM PST, David Laight <[email protected]> wrote:
...
> >The best loop for 256+ bytes is an adxc/adxo one.
> >However that requires the run-time patching.
...
> Rather than runtime patching perhaps separate paths...

It will need to detect the cpu type earlier, so a static
branch is probably enough.
Easier than substituting the entire code block.
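
A minimal sketch of that dispatch (names invented for illustration):

	#include <linux/init.h>
	#include <linux/jump_label.h>
	#include <asm/cpufeature.h>

	static DEFINE_STATIC_KEY_FALSE(csum_use_adx);

	static int __init csum_choose_loop(void)
	{
		if (boot_cpu_has(X86_FEATURE_ADX))
			static_branch_enable(&csum_use_adx);
		return 0;
	}
	arch_initcall(csum_choose_loop);

	__wsum csum_partial(const void *buff, int len, __wsum sum)
	{
		if (static_branch_likely(&csum_use_adx))
			return csum_partial_adx(buff, len, sum);	/* hypothetical */
		return csum_partial_adc(buff, len, sum);	/* hypothetical */
	}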

I think it is silvermont and knight's landing that have
a 4 clock penalty for 64bit adxc (Intel atom family).
That might only be a decode penalty, so doesn't affect
the loop 'that much' (adc is 2 clocks on those cpu).
So probably not actually worth doing a run-time
performance check.

I might 'cook up' a full checksum function later.

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

2024-01-07 12:11:53

by David Laight

[permalink] [raw]
Subject: RE: x86/csum: Remove unnecessary odd handling

From: Eric Dumazet
> Sent: 06 January 2024 10:26
...
> On a related note, at least with clang, I found that csum_ipv6_magic()
> is needlessly using temporary on-stack storage,
> showing a stall on Cascade Lake unless I am patching add32_with_carry() :
>
> [...]

Try replacing:
return csum_fold(
(__force __wsum)add32_with_carry(sum64 & 0xffffffff, sum64>>32));
with:
return csum_fold((__force __wsum)((sum64 + ror64(sum64, 32)) >> 32));

Should be fewer instructions as well.
(shift, add, shift v shift, mov, and, add, add)
Although both might be 3 clocks.

The best C version of csum_fold (from IIRC arc) is also likely to be
better than the x86 asm one - certainly no worse.
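
Putting the two together, a sketch (the 64-bit step leaves the
end-around-carry sum of the two 32-bit halves in bits 63:32; the
arc-style fold then reduces to 16 bits in plain C):

	static inline __sum16 csum_finish_sketch(u64 sum64)
	{
		u32 sum = (sum64 + ror64(sum64, 32)) >> 32;

		/* arc-style csum_fold: fold to 16 bits and complement */
		return (__force __sum16)((~sum - ror32(sum, 16)) >> 16);
	}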

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)