2010-11-30 16:31:12

by Martin Willi

Subject: [PATCH 0/5] xfrm: ESP Traffic Flow Confidentiality padding

The following patchset adds Traffic Flow Confidentiality padding. The
first patch introduces a new Netlink XFRM attribute to configure TFC
from userspace. The second patch removes the existing padlen option in
ESP; it is not used at all, and I currently don't see the purpose of
the field, nor how it should interact with TFC padding once enabled.
Patches three and four implement the padding logic in IPv4 and IPv6 ESP.

Padding is specified as the length to which the encapsulated data is
padded. Support for TFC padding as specified in RFC4303 must be
negotiated explicitly by the key management protocol, hence the
optional flag. The fallback, which expands the ESP padding field, is
limited to 255 padding bytes. If this is insufficient, the padding
length is randomized to hide the real length as well as possible.
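
To illustrate the above, the length selection that patches three and
four implement roughly follows the sketch below. This is a condensed,
hypothetical helper for illustration only (esp_tfc_clen() and its
parameters are not part of the patches); it uses the kernel's ALIGN()
macro:

#include <linux/kernel.h>       /* ALIGN() */
#include <linux/types.h>

/* payload is skb->len, padto is x->tfc.pad, rnd is one random byte;
 * all skb handling and error paths are omitted. */
static int esp_tfc_clen(int payload, int padto, int blksize,
                        bool espv3_tunnel, u8 rnd)
{
        int clen;

        if (payload >= padto) {
                /* already at or above the boundary, only align */
                clen = ALIGN(payload + 2, blksize);
        } else if (espv3_tunnel) {
                /* RFC4303 TFC: append (padto - payload) padding bytes */
                clen = ALIGN(padto + 2, blksize);
        } else {
                /* ESPv2 fallback: grow the ESP pad field, 255 bytes max */
                clen = ALIGN(padto + 2, blksize);
                if (clen - payload - 2 > 255) {
                        /* boundary unreachable, randomize instead */
                        clen = ALIGN(payload + rnd + 2, blksize);
                        if (clen - payload - 2 > 255)
                                clen -= blksize;
                }
        }
        return clen;
}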

The last patch adds an option to pad all packets to the PMTU. It works
fine for simple scenarios, but I'm not sure whether my PMTU lookup works
in all cases (nested transforms?). Any pointers would be appreciated.

Martin Willi (5):
xfrm: Add Traffic Flow Confidentiality padding XFRM attribute
xfrm: Remove unused ESP padlen field
xfrm: Traffic Flow Confidentiality for IPv4 ESP
xfrm: Traffic Flow Confidentiality for IPv6 ESP
xfrm: Add TFC padding option to automatically pad to PMTU

include/linux/xfrm.h | 8 +++++++
include/net/esp.h | 3 --
include/net/xfrm.h | 1 +
net/ipv4/esp4.c | 58 +++++++++++++++++++++++++++++++++++--------------
net/ipv6/esp6.c | 58 +++++++++++++++++++++++++++++++++++--------------
net/xfrm/xfrm_user.c | 16 ++++++++++++-
6 files changed, 105 insertions(+), 39 deletions(-)


2010-11-30 16:31:06

by Martin Willi

Subject: [PATCH 3/5] xfrm: Traffic Flow Confidentiality for IPv4 ESP

If configured on the xfrm state, increase the length of all packets to
a given boundary using TFC padding as specified in RFC4303. For
transport mode, or if the XFRM_TFC_ESPV3 flag is not set, grow the ESP
padding field instead.

Signed-off-by: Martin Willi <[email protected]>
---
net/ipv4/esp4.c | 42 +++++++++++++++++++++++++++++++++---------
1 files changed, 33 insertions(+), 9 deletions(-)

diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index 67e4c12..a6adfbc 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -117,23 +117,43 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)
int blksize;
int clen;
int alen;
+ int plen;
+ int tfclen;
+ int tfcpadto;
int nfrags;

/* skb is pure payload to encrypt */

err = -ENOMEM;

- /* Round to block size */
- clen = skb->len;
-
esp = x->data;
aead = esp->aead;
alen = crypto_aead_authsize(aead);

blksize = ALIGN(crypto_aead_blocksize(aead), 4);
- clen = ALIGN(clen + 2, blksize);
-
- if ((err = skb_cow_data(skb, clen - skb->len + alen, &trailer)) < 0)
+ tfclen = 0;
+ tfcpadto = x->tfc.pad;
+
+ if (skb->len >= tfcpadto) {
+ clen = ALIGN(skb->len + 2, blksize);
+ } else if (x->tfc.flags & XFRM_TFC_ESPV3 &&
+ x->props.mode == XFRM_MODE_TUNNEL) {
+ /* ESPv3 TFC padding, append bytes to payload */
+ tfclen = tfcpadto - skb->len;
+ clen = ALIGN(skb->len + 2 + tfclen, blksize);
+ } else {
+ /* ESPv2 TFC padding. If we exceed the 255 byte maximum, use
+ * random padding to hide payload length as good as possible. */
+ clen = ALIGN(skb->len + 2 + tfcpadto - skb->len, blksize);
+ if (clen - skb->len - 2 > 255) {
+ clen = ALIGN(skb->len + (u8)random32() + 2, blksize);
+ if (clen - skb->len - 2 > 255)
+ clen -= blksize;
+ }
+ }
+ plen = clen - skb->len - tfclen;
+ err = skb_cow_data(skb, tfclen + plen + alen, &trailer);
+ if (err < 0)
goto error;
nfrags = err;

@@ -148,13 +168,17 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)

/* Fill padding... */
tail = skb_tail_pointer(trailer);
+ if (tfclen) {
+ memset(tail, 0, tfclen);
+ tail += tfclen;
+ }
do {
int i;
- for (i=0; i<clen-skb->len - 2; i++)
+ for (i = 0; i < plen - 2; i++)
tail[i] = i + 1;
} while (0);
- tail[clen - skb->len - 2] = (clen - skb->len) - 2;
- tail[clen - skb->len - 1] = *skb_mac_header(skb);
+ tail[plen - 2] = plen - 2;
+ tail[plen - 1] = *skb_mac_header(skb);
pskb_put(skb, trailer, clen - skb->len + alen);

skb_push(skb, -skb_network_offset(skb));
--
1.7.1

2010-11-30 15:49:12

by Martin Willi

Subject: [PATCH 2/5] xfrm: Remove unused ESP padlen field

The padlen field in IPv4/IPv6 ESP is used to align the ESP padding
length to a value larger than the aead block size. There is, however,
no option to set this field, hence it is removed.

Signed-off-by: Martin Willi <[email protected]>
---
include/net/esp.h | 3 ---
net/ipv4/esp4.c | 11 ++---------
net/ipv6/esp6.c | 11 ++---------
3 files changed, 4 insertions(+), 21 deletions(-)

diff --git a/include/net/esp.h b/include/net/esp.h
index d584513..6dfb4d0 100644
--- a/include/net/esp.h
+++ b/include/net/esp.h
@@ -6,9 +6,6 @@
struct crypto_aead;

struct esp_data {
- /* 0..255 */
- int padlen;
-
/* Confidentiality & Integrity */
struct crypto_aead *aead;
};
diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index 14ca1f1..67e4c12 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -132,8 +132,6 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)

blksize = ALIGN(crypto_aead_blocksize(aead), 4);
clen = ALIGN(clen + 2, blksize);
- if (esp->padlen)
- clen = ALIGN(clen, esp->padlen);

if ((err = skb_cow_data(skb, clen - skb->len + alen, &trailer)) < 0)
goto error;
@@ -386,12 +384,11 @@ static u32 esp4_get_mtu(struct xfrm_state *x, int mtu)
{
struct esp_data *esp = x->data;
u32 blksize = ALIGN(crypto_aead_blocksize(esp->aead), 4);
- u32 align = max_t(u32, blksize, esp->padlen);
u32 rem;

mtu -= x->props.header_len + crypto_aead_authsize(esp->aead);
- rem = mtu & (align - 1);
- mtu &= ~(align - 1);
+ rem = mtu & (blksize - 1);
+ mtu &= ~(blksize - 1);

switch (x->props.mode) {
case XFRM_MODE_TUNNEL:
@@ -570,8 +567,6 @@ static int esp_init_state(struct xfrm_state *x)

aead = esp->aead;

- esp->padlen = 0;
-
x->props.header_len = sizeof(struct ip_esp_hdr) +
crypto_aead_ivsize(aead);
if (x->props.mode == XFRM_MODE_TUNNEL)
@@ -594,8 +589,6 @@ static int esp_init_state(struct xfrm_state *x)
}

align = ALIGN(crypto_aead_blocksize(aead), 4);
- if (esp->padlen)
- align = max_t(u32, align, esp->padlen);
x->props.trailer_len = align + 1 + crypto_aead_authsize(esp->aead);

error:
diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index ee9b93b..e9e6e1c 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -156,8 +156,6 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)

blksize = ALIGN(crypto_aead_blocksize(aead), 4);
clen = ALIGN(clen + 2, blksize);
- if (esp->padlen)
- clen = ALIGN(clen, esp->padlen);

if ((err = skb_cow_data(skb, clen - skb->len + alen, &trailer)) < 0)
goto error;
@@ -337,12 +335,11 @@ static u32 esp6_get_mtu(struct xfrm_state *x, int mtu)
{
struct esp_data *esp = x->data;
u32 blksize = ALIGN(crypto_aead_blocksize(esp->aead), 4);
- u32 align = max_t(u32, blksize, esp->padlen);
u32 rem;

mtu -= x->props.header_len + crypto_aead_authsize(esp->aead);
- rem = mtu & (align - 1);
- mtu &= ~(align - 1);
+ rem = mtu & (blksize - 1);
+ mtu &= ~(blksize - 1);

if (x->props.mode != XFRM_MODE_TUNNEL) {
u32 padsize = ((blksize - 1) & 7) + 1;
@@ -516,8 +513,6 @@ static int esp6_init_state(struct xfrm_state *x)

aead = esp->aead;

- esp->padlen = 0;
-
x->props.header_len = sizeof(struct ip_esp_hdr) +
crypto_aead_ivsize(aead);
switch (x->props.mode) {
@@ -536,8 +531,6 @@ static int esp6_init_state(struct xfrm_state *x)
}

align = ALIGN(crypto_aead_blocksize(aead), 4);
- if (esp->padlen)
- align = max_t(u32, align, esp->padlen);
x->props.trailer_len = align + 1 + crypto_aead_authsize(esp->aead);

error:
--
1.7.1


2010-11-30 15:49:14

by Martin Willi

Subject: [PATCH 4/5] xfrm: Traffic Flow Confidentiality for IPv6 ESP

If configured on the xfrm state, increase the length of all packets to
a given boundary using TFC padding as specified in RFC4303. For
transport mode, or if the XFRM_TFC_ESPV3 flag is not set, grow the ESP
padding field instead.

Signed-off-by: Martin Willi <[email protected]>
---
net/ipv6/esp6.c | 42 +++++++++++++++++++++++++++++++++---------
1 files changed, 33 insertions(+), 9 deletions(-)

diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index e9e6e1c..9494cb1 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -140,6 +140,9 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)
int blksize;
int clen;
int alen;
+ int plen;
+ int tfclen;
+ int tfcpadto;
int nfrags;
u8 *iv;
u8 *tail;
@@ -148,16 +151,33 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)
/* skb is pure payload to encrypt */
err = -ENOMEM;

- /* Round to block size */
- clen = skb->len;
-
aead = esp->aead;
alen = crypto_aead_authsize(aead);

blksize = ALIGN(crypto_aead_blocksize(aead), 4);
- clen = ALIGN(clen + 2, blksize);
-
- if ((err = skb_cow_data(skb, clen - skb->len + alen, &trailer)) < 0)
+ tfclen = 0;
+ tfcpadto = x->tfc.pad;
+
+ if (skb->len >= tfcpadto) {
+ clen = ALIGN(skb->len + 2, blksize);
+ } else if (x->tfc.flags & XFRM_TFC_ESPV3 &&
+ x->props.mode == XFRM_MODE_TUNNEL) {
+ /* ESPv3 TFC padding, append bytes to payload */
+ tfclen = tfcpadto - skb->len;
+ clen = ALIGN(skb->len + 2 + tfclen, blksize);
+ } else {
+ /* ESPv2 TFC padding. If we exceed the 255 byte maximum, use
+ * random padding to hide payload length as good as possible. */
+ clen = ALIGN(skb->len + 2 + tfcpadto - skb->len, blksize);
+ if (clen - skb->len - 2 > 255) {
+ clen = ALIGN(skb->len + (u8)random32() + 2, blksize);
+ if (clen - skb->len - 2 > 255)
+ clen -= blksize;
+ }
+ }
+ plen = clen - skb->len - tfclen;
+ err = skb_cow_data(skb, tfclen + plen + alen, &trailer);
+ if (err < 0)
goto error;
nfrags = err;

@@ -172,13 +192,17 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)

/* Fill padding... */
tail = skb_tail_pointer(trailer);
+ if (tfclen) {
+ memset(tail, 0, tfclen);
+ tail += tfclen;
+ }
do {
int i;
- for (i=0; i<clen-skb->len - 2; i++)
+ for (i = 0; i < plen - 2; i++)
tail[i] = i + 1;
} while (0);
- tail[clen-skb->len - 2] = (clen - skb->len) - 2;
- tail[clen - skb->len - 1] = *skb_mac_header(skb);
+ tail[plen - 2] = plen - 2;
+ tail[plen - 1] = *skb_mac_header(skb);
pskb_put(skb, trailer, clen - skb->len + alen);

skb_push(skb, -skb_network_offset(skb));
--
1.7.1


2010-11-30 16:31:14

by Martin Willi

Subject: [PATCH 5/5] xfrm: Add TFC padding option to automatically pad to PMTU

Traffic Flow Confidentiality padding is most effective if all packets
have exactly the same size. For SAs with mixed traffic, the largest
packet size is usually the PMTU. Instead of requiring the PMTU to be
calculated manually, the XFRM_TFC_PMTU flag pads all packets to the
PMTU automatically.

Signed-off-by: Martin Willi <[email protected]>
---
include/linux/xfrm.h | 1 +
net/ipv4/esp4.c | 7 +++++++
net/ipv6/esp6.c | 7 +++++++
3 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/include/linux/xfrm.h b/include/linux/xfrm.h
index b1e5f8a..2a9f0b4 100644
--- a/include/linux/xfrm.h
+++ b/include/linux/xfrm.h
@@ -298,6 +298,7 @@ struct xfrm_tfc {
__u16 pad;
__u16 flags;
#define XFRM_TFC_ESPV3 1 /* RFC4303 TFC padding, if possible */
+#define XFRM_TFC_PMTU 2 /* ignore pad field, pad to PMTU */
};

enum xfrm_sadattr_type_t {
diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index a6adfbc..cfb4992 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -23,6 +23,8 @@ struct esp_skb_cb {

#define ESP_SKB_CB(__skb) ((struct esp_skb_cb *)&((__skb)->cb[0]))

+static u32 esp4_get_mtu(struct xfrm_state *x, int mtu);
+
/*
* Allocate an AEAD request structure with extra space for SG and IV.
*
@@ -133,6 +135,11 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)
blksize = ALIGN(crypto_aead_blocksize(aead), 4);
tfclen = 0;
tfcpadto = x->tfc.pad;
+ if (x->tfc.flags & XFRM_TFC_PMTU) {
+ struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
+
+ tfcpadto = esp4_get_mtu(x, dst->child_mtu_cached);
+ }

if (skb->len >= tfcpadto) {
clen = ALIGN(skb->len + 2, blksize);
diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index 9494cb1..6cb9a02 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -49,6 +49,8 @@ struct esp_skb_cb {

#define ESP_SKB_CB(__skb) ((struct esp_skb_cb *)&((__skb)->cb[0]))

+static u32 esp6_get_mtu(struct xfrm_state *x, int mtu);
+
/*
* Allocate an AEAD request structure with extra space for SG and IV.
*
@@ -157,6 +159,11 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)
blksize = ALIGN(crypto_aead_blocksize(aead), 4);
tfclen = 0;
tfcpadto = x->tfc.pad;
+ if (x->tfc.flags & XFRM_TFC_PMTU) {
+ struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
+
+ tfcpadto = esp6_get_mtu(x, dst->child_mtu_cached);
+ }

if (skb->len >= tfcpadto) {
clen = ALIGN(skb->len + 2, blksize);
--
1.7.1

2010-11-30 16:31:09

by Martin Willi

Subject: [PATCH 1/5] xfrm: Add Traffic Flow Confidentiality padding XFRM attribute

The XFRMA_TFC attribute for XFRM state installation configures
Traffic Flow Confidentiality by padding ESP packets to a specified
length. To use RFC4303 TFC padding and overcome the 255-byte ESP
padding field limit, the XFRM_TFC_ESPV3 flag must be set.
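
For illustration only, userspace could attach the attribute to an
XFRM_MSG_NEWSA request along the lines of the sketch below. The
add_tfc_attr() helper and the surrounding message construction are
assumptions, not part of this patch:

#include <string.h>
#include <linux/netlink.h>
#include <linux/xfrm.h>

/* Append an XFRMA_TFC attribute to a netlink message whose header and
 * xfrm_usersa_info payload have already been built in nlh; the caller
 * must make sure the buffer is large enough. */
static void add_tfc_attr(struct nlmsghdr *nlh, __u16 padto, __u16 flags)
{
        struct xfrm_tfc tfc = { .pad = padto, .flags = flags };
        struct nlattr *nla;

        nla = (struct nlattr *)((char *)nlh + NLMSG_ALIGN(nlh->nlmsg_len));
        nla->nla_type = XFRMA_TFC;
        nla->nla_len = NLA_HDRLEN + sizeof(tfc);
        memcpy((char *)nla + NLA_HDRLEN, &tfc, sizeof(tfc));

        nlh->nlmsg_len = NLMSG_ALIGN(nlh->nlmsg_len) + NLA_ALIGN(nla->nla_len);
}

/* e.g. pad to 1400 bytes and allow RFC4303 TFC padding:
 *   add_tfc_attr(nlh, 1400, XFRM_TFC_ESPV3);
 */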

Signed-off-by: Martin Willi <[email protected]>
---
include/linux/xfrm.h | 7 +++++++
include/net/xfrm.h | 1 +
net/xfrm/xfrm_user.c | 16 ++++++++++++++--
3 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/include/linux/xfrm.h b/include/linux/xfrm.h
index b971e38..b1e5f8a 100644
--- a/include/linux/xfrm.h
+++ b/include/linux/xfrm.h
@@ -283,6 +283,7 @@ enum xfrm_attr_type_t {
XFRMA_KMADDRESS, /* struct xfrm_user_kmaddress */
XFRMA_ALG_AUTH_TRUNC, /* struct xfrm_algo_auth */
XFRMA_MARK, /* struct xfrm_mark */
+ XFRMA_TFC, /* struct xfrm_tfc */
__XFRMA_MAX

#define XFRMA_MAX (__XFRMA_MAX - 1)
@@ -293,6 +294,12 @@ struct xfrm_mark {
__u32 m; /* mask */
};

+struct xfrm_tfc {
+ __u16 pad;
+ __u16 flags;
+#define XFRM_TFC_ESPV3 1 /* RFC4303 TFC padding, if possible */
+};
+
enum xfrm_sadattr_type_t {
XFRMA_SAD_UNSPEC,
XFRMA_SAD_CNT,
diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index bcfb6b2..03468c0 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -143,6 +143,7 @@ struct xfrm_state {
struct xfrm_id id;
struct xfrm_selector sel;
struct xfrm_mark mark;
+ struct xfrm_tfc tfc;

u32 genid;

diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
index 8bae6b2..0b4ec02 100644
--- a/net/xfrm/xfrm_user.c
+++ b/net/xfrm/xfrm_user.c
@@ -148,7 +148,8 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
!attrs[XFRMA_ALG_AUTH_TRUNC]) ||
attrs[XFRMA_ALG_AEAD] ||
attrs[XFRMA_ALG_CRYPT] ||
- attrs[XFRMA_ALG_COMP])
+ attrs[XFRMA_ALG_COMP] ||
+ attrs[XFRMA_TFC])
goto out;
break;

@@ -172,7 +173,8 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
attrs[XFRMA_ALG_AEAD] ||
attrs[XFRMA_ALG_AUTH] ||
attrs[XFRMA_ALG_AUTH_TRUNC] ||
- attrs[XFRMA_ALG_CRYPT])
+ attrs[XFRMA_ALG_CRYPT] ||
+ attrs[XFRMA_TFC])
goto out;
break;

@@ -186,6 +188,7 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
attrs[XFRMA_ALG_CRYPT] ||
attrs[XFRMA_ENCAP] ||
attrs[XFRMA_SEC_CTX] ||
+ attrs[XFRMA_TFC] ||
!attrs[XFRMA_COADDR])
goto out;
break;
@@ -439,6 +442,9 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
goto error;
}

+ if (attrs[XFRMA_TFC])
+ memcpy(&x->tfc, nla_data(attrs[XFRMA_TFC]), sizeof(x->tfc));
+
if (attrs[XFRMA_COADDR]) {
x->coaddr = kmemdup(nla_data(attrs[XFRMA_COADDR]),
sizeof(*x->coaddr), GFP_KERNEL);
@@ -688,6 +694,9 @@ static int copy_to_user_state_extra(struct xfrm_state *x,
if (x->encap)
NLA_PUT(skb, XFRMA_ENCAP, sizeof(*x->encap), x->encap);

+ if (x->tfc.pad || x->tfc.flags)
+ NLA_PUT(skb, XFRMA_TFC, sizeof(x->tfc), &x->tfc);
+
if (xfrm_mark_put(skb, &x->mark))
goto nla_put_failure;

@@ -2122,6 +2131,7 @@ static const struct nla_policy xfrma_policy[XFRMA_MAX+1] = {
[XFRMA_MIGRATE] = { .len = sizeof(struct xfrm_user_migrate) },
[XFRMA_KMADDRESS] = { .len = sizeof(struct xfrm_user_kmaddress) },
[XFRMA_MARK] = { .len = sizeof(struct xfrm_mark) },
+ [XFRMA_TFC] = { .len = sizeof(struct xfrm_tfc) },
};

static struct xfrm_link {
@@ -2301,6 +2311,8 @@ static inline size_t xfrm_sa_len(struct xfrm_state *x)
l += nla_total_size(sizeof(*x->calg));
if (x->encap)
l += nla_total_size(sizeof(*x->encap));
+ if (x->tfc.pad)
+ l += nla_total_size(sizeof(x->tfc));
if (x->security)
l += nla_total_size(sizeof(struct xfrm_user_sec_ctx) +
x->security->ctx_len);
--
1.7.1

2010-12-03 07:34:03

by Herbert Xu

Subject: Re: [PATCH 3/5] xfrm: Traffic Flow Confidentiality for IPv4 ESP

On Tue, Nov 30, 2010 at 03:49:13PM +0000, Martin Willi wrote:
>
> + if (skb->len >= tfcpadto) {
> + clen = ALIGN(skb->len + 2, blksize);
> + } else if (x->tfc.flags & XFRM_TFC_ESPV3 &&
> + x->props.mode == XFRM_MODE_TUNNEL) {
> + /* ESPv3 TFC padding, append bytes to payload */
> + tfclen = tfcpadto - skb->len;
> + clen = ALIGN(skb->len + 2 + tfclen, blksize);
> + } else {
> + /* ESPv2 TFC padding. If we exceed the 255 byte maximum, use
> + * random padding to hide payload length as good as possible. */
> + clen = ALIGN(skb->len + 2 + tfcpadto - skb->len, blksize);
> + if (clen - skb->len - 2 > 255) {
> + clen = ALIGN(skb->len + (u8)random32() + 2, blksize);
> + if (clen - skb->len - 2 > 255)
> + clen -= blksize;
> + }

What is the basis of this random length padding?

Also, what happens when padto exceeds the MTU? Doesn't this
effectively disable PMTU-discovery?

I know that your last patch allows the padto to be set by PMTU.
But why would we ever want to use a padto that isn't clamped by
PMTU?

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2010-12-03 08:32:59

by Martin Willi

Subject: Re: [PATCH 3/5] xfrm: Traffic Flow Confidentiality for IPv4 ESP


> What is the basis of this random length padding?

Let's assume a peer does not support ESPv3 padding, but we have to pad
a small packet with more than 255 bytes. We can't; the ESP padding
length field is limited to 255.
We could add a fixed 255 bytes, but an eavesdropper could just subtract
the 255 bytes from all packets smaller than the boundary, rendering our
TFC efforts useless.
By inserting padding of random length within the possible range, the
eavesdropper knows that the packet has a length between "length" and
"length - 255", but can't estimate its exact size. I'm aware that this
is not optimal, but it is probably the best we can do(?).

> Also, what happens when padto exceeds the MTU? Doesn't this
> effectively disable PMTU-discovery?

Yes. An administrator setting a padto value larger than the PMTU can
currently break PMTU discovery.

> I know that your last patch allows the padto to be set by PMTU.
> But why would we ever want to use a padto that isn't clamped by
> PMTU?

Probably never; valid point.

I'll add PMTU clamping to the next revision. We can probably drop the
PMTU flag then and just use USHRT_MAX instead.

Thanks!
Martin

2010-12-03 08:39:11

by Herbert Xu

Subject: Re: [PATCH 3/5] xfrm: Traffic Flow Confidentiality for IPv4 ESP

On Fri, Dec 03, 2010 at 09:32:55AM +0100, Martin Willi wrote:
>
> > What is the basis of this random length padding?
>
> Let assume a peer does not support ESPv3 padding, but we have to pad a
> small packet with more than 255 bytes. We can't, the ESP padding length
> field is limited to 255.
> We could add 255 fixed bytes, but an eavesdropper could just subtract
> the 255 bytes from all packets smaller than the boundary, rendering our
> TFC efforts useless.
> By inserting a random length padding in the range possible, the
> eavesdropper knows that the packet has a length between "length" and
> "length - 255", but can't estimated its exact size. I'm aware that this
> is not optimal, but probably the best we can do(?).

I know why you want to do this; what I'm asking is, do you have any
research behind this with regard to security (e.g., you're using an
insecure RNG to generate a value that is then used as the basis
for concealment)?

Has this scheme been discussed on a public forum somewhere?

> > I know that your last patch allows the padto to be set by PMTU.
> > But why would we ever want to use a padto that isn't clamped by
> > PMTU?
>
> Probably never, valid point.
>
> I'll add PMTU clamping to the next revision. We probably can drop the
> PMTU flag then and just use USHRT_MAX instead.

Sounds good.

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2010-12-06 15:10:33

by Martin Willi

Subject: Re: [PATCH 3/5] xfrm: Traffic Flow Confidentiality for IPv4 ESP

Hi Herbert,

> I know why you want to do this; what I'm asking is, do you have any
> research behind this with regard to security
>
> Has this scheme been discussed on a public forum somewhere?

No, sorry, I haven't found much valuable discussion about TFC padding,
and nothing at all about how to overcome the ESPv2 padding limit.

> using an insecure RNG to generate a value that is then used as the
> basis for concealment

Using get_random_bytes() adds another ~10% of processing overhead due
to the underlying sha_transform. But this is probably negligible; we
add much more overhead with the additional padding to encrypt/MAC.

I'll re-spin the patchset with get_random_bytes(). Even if the ESPv2
padding fallback makes TFC less efficient in this case, it shouldn't
do any harm. Or do you see this differently?
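
Strictly as an illustration (not a posted patch), the re-spin would
replace the (u8)random32() draw with something like:

#include <linux/random.h>
#include <linux/types.h>

/* one random byte from the stronger RNG, for the ESPv2 fallback path */
static inline u8 esp_tfc_random_pad(void)
{
        u8 rnd;

        get_random_bytes(&rnd, sizeof(rnd));
        return rnd;
}

/* then, as in the posted patch:
 *   clen = ALIGN(skb->len + esp_tfc_random_pad() + 2, blksize);
 */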

Regards
Martin

2010-12-06 15:22:54

by Herbert Xu

Subject: Re: [PATCH 3/5] xfrm: Traffic Flow Confidentiality for IPv4 ESP

On Mon, Dec 06, 2010 at 04:10:25PM +0100, Martin Willi wrote:
> >
> > Has this scheme been discussed on a public forum somewhere?
>
> No, sorry, I haven't found much valuable discussion about TFC padding,
> and nothing at all about how to overcome the ESPv2 padding limit.

OK.

> I'll re-spin the patchset with get_random_bytes(). Even if the ESPv2
> padding fallback makes TFC less efficient in this case, it shouldn't
> do any harm. Or do you see this differently?

Indeed I don't think we should do anything for the ESPv2 case
at all without having this discussed in an appropriate forum
first.

So please remove that part completely from your submission for
now.

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt