2014-01-17 00:28:54

by Hannes Frederic Sowa

Subject: [PATCH net-next v2 3/3] reciprocal_divide: correction/update of the algorithm

Jakub Zawadzki noticed that some divisions by reciprocal_divide()
were not correct [1][2], which he could also show with BPF code:
after divisions are transformed via reciprocal_value() into a runtime
invariant that can later be passed to reciprocal_divide(), the reverse
transformation in the BPF dump ended up with a different, off-by-one K.

This has been fixed by Eric Dumazet in commit aee636c4809fa5 ("bpf: do not
use reciprocal divide"). This follow-up patch improves reciprocal_value()
and reciprocal_divide() to work in all cases, so future use is safe.
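
With the new interface, callers cache a struct reciprocal_value
instead of a raw u32; the call-site conversions below all follow the
same pattern, sketched here:

    /* slow path, divisor d is runtime invariant */
    struct reciprocal_value R = reciprocal_value(d);
    ...
    /* fast path, no hardware divide needed */
    q = reciprocal_divide(n, R);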

Known problems with the old implementation were that division by 1
always returned 0 and that there were off-by-one errors when the
dividend and divisor were very large. This did not seem to be
problematic for its current users in networking, mm/slab.c and
lib/flex_array.c, but future users would need to check for these cases
specifically, and that might not be obvious at first.
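
As a concrete example of the first issue: the old reciprocal_value(1)
computes ((1LL << 32) + 0) / 1 = 2^32, which is truncated to 0 when
returned as u32, so a subsequent reciprocal_divide(A, 0) =
((u64)A * 0) >> 32 yields 0 for every A.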

In order to fix that, we propose an extension of the original
implementation from commit 6a2d7a955d8d (see also [3][4]) that uses
the algorithm proposed in "Division by Invariant Integers Using
Multiplication" [5] by Torbjörn Granlund and Peter L. Montgomery.
In pseudocode, for q = n/d where q, n and d are in the u32 universe:

1) Initialization:

int l = ceil(log_2 d)
uword m' = floor((1<<32)*((1<<l)-d)/d)+1
int sh_1 = min(l,1)
int sh_2 = max(l-1,0)

2) For q = n/d, all uword:

uword t = (n*m')>>32
q = (t+((n-t)>>sh_1))>>sh_2

The assembler implementation from Agner Fog [6] also helped a lot
while implementing. We have tested the implementation on x86_64,
ppc64, i686 and s390x; on x86_64/Haswell the new version still has
about half the latency of a normal divide.
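
For reference, a minimal self-contained userspace sketch of the same
scheme (plain C99, mirroring the pseudocode above rather than the
kernel code itself) which cross-checks the result against a hardware
divide:

#include <stdint.h>
#include <stdio.h>

struct recip { uint32_t m; uint8_t sh1, sh2; };

static struct recip recip_value(uint32_t d)
{
    struct recip R;
    int l = 0;  /* l = ceil(log2(d)), 0 for d == 1 */

    while ((1ULL << l) < d)
        l++;

    /* m' = floor((1 << 32) * ((1 << l) - d) / d) + 1 */
    R.m = (uint32_t)((((1ULL << 32) * ((1ULL << l) - d)) / d) + 1);
    R.sh1 = l < 1 ? l : 1;          /* min(l, 1) */
    R.sh2 = l > 1 ? l - 1 : 0;      /* max(l - 1, 0) */
    return R;
}

static uint32_t recip_divide(uint32_t n, struct recip R)
{
    uint32_t t = (uint32_t)(((uint64_t)n * R.m) >> 32);

    return (t + ((n - t) >> R.sh1)) >> R.sh2;
}

int main(void)
{
    uint32_t n, d;

    for (d = 1; d < 100000; d++) {
        struct recip R = recip_value(d);

        for (n = 0; n < 1000000; n += 7919)
            if (recip_divide(n, R) != n / d)
                printf("mismatch: %u / %u\n", n, d);
    }

    /* spot-check a large divisor as well */
    {
        struct recip R = recip_value(0xfffffffbu);

        if (recip_divide(0xfffffffau, R) != 0 ||
            recip_divide(0xfffffffbu, R) != 1)
            printf("mismatch for large divisor\n");
    }
    return 0;
}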

Joint work with Daniel Borkmann.

[1] http://www.wireshark.org/~darkjames/reciprocal-buggy.c
[2] http://www.wireshark.org/~darkjames/set-and-dump-filter-k-bug.c
[3] https://gmplib.org/~tege/division-paper.pdf
[4] http://homepage.cs.uiowa.edu/~jones/bcd/divide.html
[5] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.1.2556
[6] http://www.agner.org/optimize/asmlib.zip

Fixes: 6a2d7a955d8d ("SLAB: use a multiply instead of a divide in obj_to_index()")
Fixes: 704f15ddb5fc ("flex_array: avoid divisions when accessing elements")
Reported-by: Jakub Zawadzki <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: Austin S Hemmelgarn <[email protected]>
Cc: [email protected]
Cc: Jesse Gross <[email protected]>
Cc: Jamal Hadi Salim <[email protected]>
Cc: Stephen Hemminger <[email protected]>
Cc: Matt Mackall <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Andy Gospodarek <[email protected]>
Cc: Veaceslav Falico <[email protected]>
Cc: Jay Vosburgh <[email protected]>
Cc: Jakub Zawadzki <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Signed-off-by: Hannes Frederic Sowa <[email protected]>
---
drivers/net/bonding/bond_main.c | 24 ++++++++++++++++-------
drivers/net/bonding/bond_netlink.c | 4 ----
drivers/net/bonding/bond_options.c | 15 ++++++++++-----
drivers/net/bonding/bond_sysfs.c | 5 -----
drivers/net/bonding/bonding.h | 3 +++
include/linux/flex_array.h | 3 ++-
include/linux/reciprocal_div.h | 39 ++++++++++++++++++++------------------
include/linux/slab_def.h | 4 +++-
include/net/red.h | 3 ++-
lib/flex_array.c | 7 ++++++-
lib/reciprocal_div.c | 24 +++++++++++++++++++----
net/sched/sch_netem.c | 6 ++++--
12 files changed, 88 insertions(+), 49 deletions(-)

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index f2fe6cb..77e57da 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -79,7 +79,6 @@
#include <net/pkt_sched.h>
#include <linux/rculist.h>
#include <net/flow_keys.h>
-#include <linux/reciprocal_div.h>
#include "bonding.h"
#include "bond_3ad.h"
#include "bond_alb.h"
@@ -3551,8 +3550,9 @@ static void bond_xmit_slave_id(struct bonding *bond, struct sk_buff *skb, int sl
*/
static u32 bond_rr_gen_slave_id(struct bonding *bond)
{
- int packets_per_slave = bond->params.packets_per_slave;
u32 slave_id;
+ struct reciprocal_value reciprocal_packets_per_slave;
+ int packets_per_slave = bond->params.packets_per_slave;

switch (packets_per_slave) {
case 0:
@@ -3562,8 +3562,10 @@ static u32 bond_rr_gen_slave_id(struct bonding *bond)
slave_id = bond->rr_tx_counter;
break;
default:
+ reciprocal_packets_per_slave =
+ bond->params.reciprocal_packets_per_slave;
slave_id = reciprocal_divide(bond->rr_tx_counter,
- packets_per_slave);
+ reciprocal_packets_per_slave);
break;
}
bond->rr_tx_counter++;
@@ -4297,10 +4299,18 @@ static int bond_check_params(struct bond_params *params)
params->resend_igmp = resend_igmp;
params->min_links = min_links;
params->lp_interval = lp_interval;
- if (packets_per_slave > 1)
- params->packets_per_slave = reciprocal_value(packets_per_slave);
- else
- params->packets_per_slave = packets_per_slave;
+ params->packets_per_slave = packets_per_slave;
+ if (packets_per_slave > 0) {
+ params->reciprocal_packets_per_slave =
+ reciprocal_value(packets_per_slave);
+ } else {
+ /* reciprocal_packets_per_slave is unused if
+ * packets_per_slave is 0 or 1, just initialize it
+ */
+ params->reciprocal_packets_per_slave =
+ (struct reciprocal_value) { 0 };
+ }
+
if (primary) {
strncpy(params->primary, primary, IFNAMSIZ);
params->primary[IFNAMSIZ - 1] = 0;
diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c
index 555c783..9b13791 100644
--- a/drivers/net/bonding/bond_netlink.c
+++ b/drivers/net/bonding/bond_netlink.c
@@ -19,7 +19,6 @@
#include <linux/if_ether.h>
#include <net/netlink.h>
#include <net/rtnetlink.h>
-#include <linux/reciprocal_div.h>
#include "bonding.h"

static const struct nla_policy bond_policy[IFLA_BOND_MAX + 1] = {
@@ -416,9 +415,6 @@ static int bond_fill_info(struct sk_buff *skb,
goto nla_put_failure;

packets_per_slave = bond->params.packets_per_slave;
- if (packets_per_slave > 1)
- packets_per_slave = reciprocal_value(packets_per_slave);
-
if (nla_put_u32(skb, IFLA_BOND_PACKETS_PER_SLAVE,
packets_per_slave))
goto nla_put_failure;
diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
index 945a666..85e4348 100644
--- a/drivers/net/bonding/bond_options.c
+++ b/drivers/net/bonding/bond_options.c
@@ -16,7 +16,6 @@
#include <linux/netdevice.h>
#include <linux/rwlock.h>
#include <linux/rcupdate.h>
-#include <linux/reciprocal_div.h>
#include "bonding.h"

int bond_option_mode_set(struct bonding *bond, int mode)
@@ -671,11 +670,17 @@ int bond_option_packets_per_slave_set(struct bonding *bond,
pr_warn("%s: Warning: packets_per_slave has effect only in balance-rr mode\n",
bond->dev->name);

- if (packets_per_slave > 1)
- bond->params.packets_per_slave =
+ bond->params.packets_per_slave = packets_per_slave;
+ if (packets_per_slave > 0) {
+ bond->params.reciprocal_packets_per_slave =
reciprocal_value(packets_per_slave);
- else
- bond->params.packets_per_slave = packets_per_slave;
+ } else {
+ /* reciprocal_packets_per_slave is unused if
+ * packets_per_slave is 0 or 1, just initialize it
+ */
+ bond->params.reciprocal_packets_per_slave =
+ (struct reciprocal_value) { 0 };
+ }

return 0;
}
diff --git a/drivers/net/bonding/bond_sysfs.c b/drivers/net/bonding/bond_sysfs.c
index 011f163..c083e9a 100644
--- a/drivers/net/bonding/bond_sysfs.c
+++ b/drivers/net/bonding/bond_sysfs.c
@@ -39,7 +39,6 @@
#include <net/net_namespace.h>
#include <net/netns/generic.h>
#include <linux/nsproxy.h>
-#include <linux/reciprocal_div.h>

#include "bonding.h"

@@ -1374,10 +1373,6 @@ static ssize_t bonding_show_packets_per_slave(struct device *d,
{
struct bonding *bond = to_bond(d);
unsigned int packets_per_slave = bond->params.packets_per_slave;
-
- if (packets_per_slave > 1)
- packets_per_slave = reciprocal_value(packets_per_slave);
-
return sprintf(buf, "%u\n", packets_per_slave);
}

diff --git a/drivers/net/bonding/bonding.h b/drivers/net/bonding/bonding.h
index 955dc48..502dda8 100644
--- a/drivers/net/bonding/bonding.h
+++ b/drivers/net/bonding/bonding.h
@@ -23,6 +23,8 @@
#include <linux/netpoll.h>
#include <linux/inetdevice.h>
#include <linux/etherdevice.h>
+#include <linux/reciprocal_div.h>
+
#include "bond_3ad.h"
#include "bond_alb.h"

@@ -171,6 +173,7 @@ struct bond_params {
int resend_igmp;
int lp_interval;
int packets_per_slave;
+ struct reciprocal_value reciprocal_packets_per_slave;
};

struct bond_parm_tbl {
diff --git a/include/linux/flex_array.h b/include/linux/flex_array.h
index 6843cf1..b6efb0c 100644
--- a/include/linux/flex_array.h
+++ b/include/linux/flex_array.h
@@ -2,6 +2,7 @@
#define _FLEX_ARRAY_H

#include <linux/types.h>
+#include <linux/reciprocal_div.h>
#include <asm/page.h>

#define FLEX_ARRAY_PART_SIZE PAGE_SIZE
@@ -22,7 +23,7 @@ struct flex_array {
int element_size;
int total_nr_elements;
int elems_per_part;
- u32 reciprocal_elems;
+ struct reciprocal_value reciprocal_elems;
struct flex_array_part *parts[];
};
/*
diff --git a/include/linux/reciprocal_div.h b/include/linux/reciprocal_div.h
index f9c90b3..8c5a3fb 100644
--- a/include/linux/reciprocal_div.h
+++ b/include/linux/reciprocal_div.h
@@ -4,29 +4,32 @@
#include <linux/types.h>

/*
- * This file describes reciprocical division.
+ * This algorithm is based on the paper "Division by Invariant
+ * Integers Using Multiplication" by Torbjörn Granlund and Peter
+ * L. Montgomery.
*
- * This optimizes the (A/B) problem, when A and B are two u32
- * and B is a known value (but not known at compile time)
+ * The assembler implementation from Agner Fog, which this code is
+ * based on, can be found here:
+ * http://www.agner.org/optimize/asmlib.zip
*
- * The math principle used is :
- * Let RECIPROCAL_VALUE(B) be (((1LL << 32) + (B - 1))/ B)
- * Then A / B = (u32)(((u64)(A) * (R)) >> 32)
- *
- * This replaces a divide by a multiply (and a shift), and
- * is generally less expensive in CPU cycles.
+ * This optimization for A/B is helpful if the divisor B is mostly
+ * runtime invariant. The reciprocal of B is calculated in the
+ * slow-path with reciprocal_value(). The fast-path can then just use
+ * a much faster multiplication operation with a variable dividend A
+ * to calculate the division A/B.
*/

-/*
- * Computes the reciprocal value (R) for the value B of the divisor.
- * Should not be called before each reciprocal_divide(),
- * or else the performance is slower than a normal divide.
- */
-extern u32 reciprocal_value(u32 B);
+struct reciprocal_value {
+ u32 m;
+ u8 sh1, sh2;
+};

+struct reciprocal_value reciprocal_value(u32 d);

-static inline u32 reciprocal_divide(u32 A, u32 R)
+static inline u32 reciprocal_divide(u32 a, struct reciprocal_value R)
{
- return (u32)(((u64)A * R) >> 32);
+ u32 t = (u32)(((u64)a * R.m) >> 32);
+ return (t + ((a - t) >> R.sh1)) >> R.sh2;
}
-#endif
+
+#endif /* _LINUX_RECIPROCAL_DIV_H */
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 09bfffb..96e8aba 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -1,6 +1,8 @@
#ifndef _LINUX_SLAB_DEF_H
#define _LINUX_SLAB_DEF_H

+#include <linux/reciprocal_div.h>
+
/*
* Definitions unique to the original Linux SLAB allocator.
*/
@@ -12,7 +14,7 @@ struct kmem_cache {
unsigned int shared;

unsigned int size;
- u32 reciprocal_buffer_size;
+ struct reciprocal_value reciprocal_buffer_size;
/* 2) touched by every alloc & free from the backend */

unsigned int flags; /* constant flags */
diff --git a/include/net/red.h b/include/net/red.h
index 168bb2f..76e0b5f 100644
--- a/include/net/red.h
+++ b/include/net/red.h
@@ -130,7 +130,8 @@ struct red_parms {
u32 qth_max; /* Max avg length threshold: Wlog scaled */
u32 Scell_max;
u32 max_P; /* probability, [0 .. 1.0] 32 scaled */
- u32 max_P_reciprocal; /* reciprocal_value(max_P / qth_delta) */
+ /* reciprocal_value(max_P / qth_delta) */
+ struct reciprocal_value max_P_reciprocal;
u32 qth_delta; /* max_th - min_th */
u32 target_min; /* min_th + 0.4*(max_th - min_th) */
u32 target_max; /* min_th + 0.6*(max_th - min_th) */
diff --git a/lib/flex_array.c b/lib/flex_array.c
index 6948a66..2eed22f 100644
--- a/lib/flex_array.c
+++ b/lib/flex_array.c
@@ -90,8 +90,8 @@ struct flex_array *flex_array_alloc(int element_size, unsigned int total,
{
struct flex_array *ret;
int elems_per_part = 0;
- int reciprocal_elems = 0;
int max_size = 0;
+ struct reciprocal_value reciprocal_elems = { 0 };

if (element_size) {
elems_per_part = FLEX_ARRAY_ELEMENTS_PER_PART(element_size);
@@ -119,6 +119,11 @@ EXPORT_SYMBOL(flex_array_alloc);
static int fa_element_to_part_nr(struct flex_array *fa,
unsigned int element_nr)
{
+ /*
+ * if element_size == 0 we don't get here, so we never touch
+ * the zeroed fa->reciprocal_elems, which would yield invalid
+ * results
+ */
return reciprocal_divide(element_nr, fa->reciprocal_elems);
}

diff --git a/lib/reciprocal_div.c b/lib/reciprocal_div.c
index 75510e9..4641524 100644
--- a/lib/reciprocal_div.c
+++ b/lib/reciprocal_div.c
@@ -1,11 +1,27 @@
+#include <linux/kernel.h>
#include <asm/div64.h>
#include <linux/reciprocal_div.h>
#include <linux/export.h>

-u32 reciprocal_value(u32 k)
+/*
+ * For a description of the algorithm please have a look at
+ * include/linux/reciprocal_div.h
+ */
+
+struct reciprocal_value reciprocal_value(u32 d)
{
- u64 val = (1LL << 32) + (k - 1);
- do_div(val, k);
- return (u32)val;
+ struct reciprocal_value R;
+ u64 m;
+ int l;
+
+ l = fls(d - 1);
+ m = ((1ULL << 32) * ((1ULL << l) - d));
+ do_div(m, d);
+ ++m;
+ R.m = (u32)m;
+ R.sh1 = min(l, 1);
+ R.sh2 = max(l - 1, 0);
+
+ return R;
}
EXPORT_SYMBOL(reciprocal_value);
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 3019c10..81fc5b0 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -91,7 +91,7 @@ struct netem_sched_data {
u64 rate;
s32 packet_overhead;
u32 cell_size;
- u32 cell_size_reciprocal;
+ struct reciprocal_value cell_size_reciprocal;
s32 cell_overhead;

struct crndstate {
@@ -716,9 +716,11 @@ static void get_rate(struct Qdisc *sch, const struct nlattr *attr)
q->rate = r->rate;
q->packet_overhead = r->packet_overhead;
q->cell_size = r->cell_size;
+ q->cell_overhead = r->cell_overhead;
if (q->cell_size)
q->cell_size_reciprocal = reciprocal_value(q->cell_size);
- q->cell_overhead = r->cell_overhead;
+ else
+ q->cell_size_reciprocal = (struct reciprocal_value) { 0 };
}

static int get_loss_clg(struct Qdisc *sch, const struct nlattr *attr)
--
1.8.4.2


2014-01-17 02:33:46

by Eric Dumazet

Subject: Re: [PATCH net-next v2 3/3] reciprocal_divide: correction/update of the algorithm

On Fri, 2014-01-17 at 01:28 +0100, Hannes Frederic Sowa wrote:
> [...]
>
> Fixes: 6a2d7a955d8d ("SLAB: use a multiply instead of a divide in obj_to_index()")


I already demonstrated this slab patch was fine.

The current algo works well (no off-by-one error) when the dividend is
a multiple of the divisor.
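
(Indeed, with the old R = (2^32 + B - 1) / B and a dividend A = k*B,
(A * R) >> 32 = k + ((k*r) >> 32) for some r < B; since k*r < A < 2^32
the second term is 0, so the result is exactly k, as long as B > 1 so
that R still fits in u32, which is always the case for slab object
sizes.)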

You are adding extra overhead, while we know it's not necessary.

By using "Fixes: ... " you are asking a backport to stable branches,
which seems really silly in this case, especially with this monolithic
patch changing 12 files in different subsystems.

If you believe flex_array has a problem, please fix flex_array only,
with a small patch (maybe a revert?).

Then, introduce your new helpers if we really think they are needed.


2014-01-17 04:29:06

by Hannes Frederic Sowa

Subject: Re: [PATCH net-next v2 3/3] reciprocal_divide: correction/update of the algorithm

On Thu, Jan 16, 2014 at 06:33:37PM -0800, Eric Dumazet wrote:
> On Fri, 2014-01-17 at 01:28 +0100, Hannes Frederic Sowa wrote:
> > [...]
> >
> > Fixes: 6a2d7a955d8d ("SLAB: use a multiply instead of a divide in obj_to_index()")
>
>
> I already demonstrated this slab patch was fine.
>
> The current algo works well (no off-by-one error) when the dividend is
> a multiple of the divisor.

Sure, as we stated in the commit message.

> You are adding extra overhead, while we know it's not necessary.
>
> By using "Fixes: ... " you are asking a backport to stable branches,
> which seems really silly in this case, especially with this monolithic
> patch changing 12 files in different subsystems.

We can drop the Fixes tags, no problem.

> If you believe flex_array has a problem, please fix flex_array only,
> by a small patch (Maybe a revert ?)

I really doubt it is helpful to keep an implementation of
reciprocal_divide that has known (and maybe unknown) problems in the
long term.

This implementation still has a performance benefit compared to a
regular division while calculating correct results in all cases.

We clearly didn't intend stable inclusion; in fact, this patch has been
posted for net-next inclusion as an improvement, not as a bugfix. The
Fixes tags were just lingering on this patch from my first attempt,
where the situation was not that clear (at least for me).

Also, I doubt the performance drop for SLAB will be that massive.
Moreover, SLAB has already been replaced by SLUB as the default slab
allocator, and SLUB doesn't use reciprocal_divide.

Greetings,

Hannes

2014-01-17 05:43:05

by Eric Dumazet

Subject: Re: [PATCH net-next v2 3/3] reciprocal_divide: correction/update of the algorithm

On Fri, 2014-01-17 at 05:29 +0100, Hannes Frederic Sowa wrote:

> Also, I doubt the performance drop for SLAB will be that massive.
> Moreover, SLAB has already been replaced by SLUB as the default slab
> allocator, and SLUB doesn't use reciprocal_divide.

Google servers use SLAB, not SLUB, for various reasons, and performance
is one of them.