2021-02-11 21:37:54

by Sharath Chandra Vurukala

Subject: [PATCH 0/3] net:qualcomm:rmnet: Enable MAPv5

This series introduces the MAPv5 packet format.

Patch 1 documents MAPv5.
Patch 2 introduces MAPv5 and the inline checksum offload for RX.
Patch 3 introduces MAPv5 and the inline checksum offload for TX.

A new checksum header format is used as part of MAPv5.
For RX checksum offload, the hardware verifies the checksum and marks its
validity in the MAPv5 checksum header.
For TX, the required metadata is filled in so the hardware can compute the
checksum.

Sharath Chandra Vurukala (3):
docs:networking: Add documentation for MAP v5
net:ethernet:rmnet:Support for downlink MAPv5 csum offload
net:ethernet:rmnet: Add support for Mapv5 Uplink packet

.../device_drivers/cellular/qualcomm/rmnet.rst | 53 +++++++-
drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h | 4 +-
.../net/ethernet/qualcomm/rmnet/rmnet_handlers.c | 36 ++++--
drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h | 36 +++++-
.../net/ethernet/qualcomm/rmnet/rmnet_map_data.c | 136 +++++++++++++++++++--
include/linux/if_rmnet.h | 17 ++-
include/uapi/linux/if_link.h | 2 +
7 files changed, 255 insertions(+), 29 deletions(-)

--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


2021-02-11 21:40:08

by Sharath Chandra Vurukala

[permalink] [raw]
Subject: [PATCH 2/3] net:ethernet:rmnet:Support for downlink MAPv5 csum offload

Add support for processing MAPv5 downlink packets.
This involves parsing the MAPv5 packet and checking the checksum
header to determine whether the hardware has validated the checksum.

Based on the checksum valid bit, the corresponding stats are
incremented and skb->ip_summed is either marked CHECKSUM_UNNECESSARY
or left as CHECKSUM_NONE to let the network stack revalidate the
checksum and update the respective SNMP stats.

Signed-off-by: Sharath Chandra Vurukala <[email protected]>
---
drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h | 3 +-
.../net/ethernet/qualcomm/rmnet/rmnet_handlers.c | 19 ++++++----
drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h | 32 +++++++++++++++-
.../net/ethernet/qualcomm/rmnet/rmnet_map_data.c | 44 +++++++++++++++++++++-
include/linux/if_rmnet.h | 17 +++++++--
include/uapi/linux/if_link.h | 1 +
6 files changed, 102 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
index 8d8d469..d4d61471 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
@@ -1,5 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
-/* Copyright (c) 2013-2014, 2016-2018 The Linux Foundation. All rights reserved.
+/* Copyright (c) 2013-2014, 2016-2018 The Linux Foundation.
+ * All rights reserved.
*
* RMNET Data configuration engine
*/
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
index 3d7d3ab..70ad6a7 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-only
-/* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2013-2018, 2021, The Linux Foundation. All rights reserved.
*
* RMNET Data ingress/egress handler
*/
@@ -57,8 +57,8 @@ __rmnet_map_ingress_handler(struct sk_buff *skb,
struct rmnet_port *port)
{
struct rmnet_endpoint *ep;
+ u8 mux_id, next_hdr;
u16 len, pad;
- u8 mux_id;

if (RMNET_MAP_GET_CD_BIT(skb)) {
if (port->data_format & RMNET_FLAGS_INGRESS_MAP_COMMANDS)
@@ -70,6 +70,7 @@ __rmnet_map_ingress_handler(struct sk_buff *skb,
mux_id = RMNET_MAP_GET_MUX_ID(skb);
pad = RMNET_MAP_GET_PAD(skb);
len = RMNET_MAP_GET_LENGTH(skb) - pad;
+ next_hdr = RMNET_MAP_GET_NH_BIT(skb);

if (mux_id >= RMNET_MAX_LOGICAL_EP)
goto free_skb;
@@ -80,15 +81,19 @@ __rmnet_map_ingress_handler(struct sk_buff *skb,

skb->dev = ep->egress_dev;

- /* Subtract MAP header */
- skb_pull(skb, sizeof(struct rmnet_map_header));
- rmnet_set_skb_proto(skb);
-
- if (port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV4) {
+ if (next_hdr &&
+ (port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV5)) {
+ if (rmnet_map_process_next_hdr_packet(skb, len))
+ goto free_skb;
+ } else if (port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV4) {
if (!rmnet_map_checksum_downlink_packet(skb, len + pad))
skb->ip_summed = CHECKSUM_UNNECESSARY;
}

+ /* Subtract MAP header */
+ skb_pull(skb, sizeof(struct rmnet_map_header));
+ rmnet_set_skb_proto(skb);
+
skb_trim(skb, len);
rmnet_deliver_skb(skb);
return;
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
index 576501d..55d293c 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0-only */
-/* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2013-2018, 2021, The Linux Foundation. All rights reserved.
*/

#ifndef _RMNET_MAP_H_
@@ -23,6 +23,12 @@ struct rmnet_map_control_command {
};
} __aligned(1);

+enum rmnet_map_v5_header_type {
+ RMNET_MAP_HEADER_TYPE_UNKNOWN,
+ RMNET_MAP_HEADER_TYPE_CSUM_OFFLOAD = 0x2,
+ RMNET_MAP_HEADER_TYPE_ENUM_LENGTH
+};
+
enum rmnet_map_commands {
RMNET_MAP_COMMAND_NONE,
RMNET_MAP_COMMAND_FLOW_DISABLE,
@@ -44,6 +50,9 @@ enum rmnet_map_commands {
#define RMNET_MAP_GET_LENGTH(Y) (ntohs(((struct rmnet_map_header *) \
(Y)->data)->pkt_len))

+#define RMNET_MAP_GET_NH_BIT(Y) (((struct rmnet_map_header *) \
+ (Y)->data)->next_hdr)
+
#define RMNET_MAP_COMMAND_REQUEST 0
#define RMNET_MAP_COMMAND_ACK 1
#define RMNET_MAP_COMMAND_UNSUPPORTED 2
@@ -55,10 +64,29 @@ enum rmnet_map_commands {
struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb,
struct rmnet_port *port);
struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
- int hdrlen, int pad);
+ int hdrlen,
+ struct rmnet_port *port,
+ int pad);
void rmnet_map_command(struct sk_buff *skb, struct rmnet_port *port);
int rmnet_map_checksum_downlink_packet(struct sk_buff *skb, u16 len);
void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,
struct net_device *orig_dev);
+int rmnet_map_process_next_hdr_packet(struct sk_buff *skb, u16 len);
+
+static inline u8 rmnet_map_get_next_hdr_type(struct sk_buff *skb)
+{
+ unsigned char *data = skb->data;
+
+ data += sizeof(struct rmnet_map_header);
+ return ((struct rmnet_map_v5_csum_header *)data)->header_type;
+}
+
+static inline bool rmnet_map_get_csum_valid(struct sk_buff *skb)
+{
+ unsigned char *data = skb->data;
+
+ data += sizeof(struct rmnet_map_header);
+ return ((struct rmnet_map_v5_csum_header *)data)->csum_valid_required;
+}

#endif /* _RMNET_MAP_H_ */
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
index 21d3816..3d7e03f 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-only
-/* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2013-2018, 2021, The Linux Foundation. All rights reserved.
*
* RMNET Data MAP protocol
*/
@@ -311,6 +311,7 @@ struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb,
struct rmnet_port *port)
{
+ unsigned char *data = skb->data, *next_hdr = NULL;
struct rmnet_map_header *maph;
struct sk_buff *skbn;
u32 packet_len;
@@ -323,6 +324,12 @@ struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb,

if (port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV4)
packet_len += sizeof(struct rmnet_map_dl_csum_trailer);
+ else if (port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV5) {
+ if (!maph->cd_bit) {
+ packet_len += sizeof(struct rmnet_map_v5_csum_header);
+ next_hdr = data + sizeof(*maph);
+ }
+ }

if (((int)skb->len - (int)packet_len) < 0)
return NULL;
@@ -331,6 +338,11 @@ struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb,
if (ntohs(maph->pkt_len) == 0)
return NULL;

+ if (next_hdr &&
+ ((struct rmnet_map_v5_csum_header *)next_hdr)->header_type !=
+ RMNET_MAP_HEADER_TYPE_CSUM_OFFLOAD)
+ return NULL;
+
skbn = alloc_skb(packet_len + RMNET_MAP_DEAGGR_SPACING, GFP_ATOMIC);
if (!skbn)
return NULL;
@@ -428,3 +440,33 @@ void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,

priv->stats.csum_sw++;
}
+
+/* Process a MAPv5 packet header */
+int rmnet_map_process_next_hdr_packet(struct sk_buff *skb,
+ u16 len)
+{
+ struct rmnet_priv *priv = netdev_priv(skb->dev);
+ int rc = 0;
+
+ switch (rmnet_map_get_next_hdr_type(skb)) {
+ case RMNET_MAP_HEADER_TYPE_CSUM_OFFLOAD:
+ if (unlikely(!(skb->dev->features & NETIF_F_RXCSUM))) {
+ priv->stats.csum_sw++;
+ } else if (rmnet_map_get_csum_valid(skb)) {
+ priv->stats.csum_ok++;
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ } else {
+ priv->stats.csum_valid_unset++;
+ }
+
+ /* Pull csum v5 header */
+ skb_pull(skb, sizeof(struct rmnet_map_v5_csum_header));
+ break;
+ default:
+ rc = -EINVAL;
+ break;
+ }
+
+ return rc;
+}
+
diff --git a/include/linux/if_rmnet.h b/include/linux/if_rmnet.h
index 9661416..81acd0b 100644
--- a/include/linux/if_rmnet.h
+++ b/include/linux/if_rmnet.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0-only
- * Copyright (c) 2013-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2013-2019, 2021 The Linux Foundation. All rights reserved.
*/

#ifndef _LINUX_IF_RMNET_H_
@@ -8,11 +8,11 @@
struct rmnet_map_header {
#if defined(__LITTLE_ENDIAN_BITFIELD)
u8 pad_len:6;
- u8 reserved_bit:1;
+ u8 next_hdr:1;
u8 cd_bit:1;
#elif defined (__BIG_ENDIAN_BITFIELD)
u8 cd_bit:1;
- u8 reserved_bit:1;
+ u8 next_hdr:1;
u8 pad_len:6;
#else
#error "Please fix <asm/byteorder.h>"
@@ -52,4 +52,15 @@ struct rmnet_map_ul_csum_header {
#endif
} __aligned(1);

+/* MAP CSUM headers */
+struct rmnet_map_v5_csum_header {
+ u8 next_hdr:1;
+ u8 header_type:7;
+ u8 hw_reserved:5;
+ u8 priority:1;
+ u8 hw_reserved_bit:1;
+ u8 csum_valid_required:1;
+ __be16 reserved;
+} __aligned(1);
+
#endif /* !(_LINUX_IF_RMNET_H_) */
diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
index 82708c6..838bd29 100644
--- a/include/uapi/linux/if_link.h
+++ b/include/uapi/linux/if_link.h
@@ -1233,6 +1233,7 @@ enum {
#define RMNET_FLAGS_INGRESS_MAP_COMMANDS (1U << 1)
#define RMNET_FLAGS_INGRESS_MAP_CKSUMV4 (1U << 2)
#define RMNET_FLAGS_EGRESS_MAP_CKSUMV4 (1U << 3)
+#define RMNET_FLAGS_INGRESS_MAP_CKSUMV5 (1U << 4)

enum {
IFLA_RMNET_UNSPEC,
--

2021-02-11 21:40:26

by Sharath Chandra Vurukala

Subject: [PATCH 3/3] net:ethernet:rmnet: Add support for Mapv5 Uplink packet

Add support for MAPv5 uplink packets.
Based on the configuration, request hardware checksum offload by
setting the csum_valid_required bit of the MAPv5 packet.

Signed-off-by: Sharath Chandra Vurukala <[email protected]>
---
drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h | 3 +-
.../net/ethernet/qualcomm/rmnet/rmnet_handlers.c | 17 +++-
drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h | 4 +-
.../net/ethernet/qualcomm/rmnet/rmnet_map_data.c | 92 ++++++++++++++++++++--
include/uapi/linux/if_link.h | 1 +
5 files changed, 105 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
index d4d61471..8e64ca9 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0-only */
-/* Copyright (c) 2013-2014, 2016-2018 The Linux Foundation.
+/* Copyright (c) 2013-2014, 2016-2018, 2021 The Linux Foundation.
* All rights reserved.
*
* RMNET Data configuration engine
@@ -57,6 +57,7 @@ struct rmnet_priv_stats {
u64 csum_fragmented_pkt;
u64 csum_skipped;
u64 csum_sw;
+ u64 csum_hw;
};

struct rmnet_priv {
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
index 70ad6a7..870be09 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
@@ -131,8 +131,9 @@ static int rmnet_map_egress_handler(struct sk_buff *skb,
struct rmnet_port *port, u8 mux_id,
struct net_device *orig_dev)
{
- int required_headroom, additional_header_len;
+ int required_headroom, additional_header_len, csum_type = 0;
struct rmnet_map_header *map_header;

additional_header_len = 0;
required_headroom = sizeof(struct rmnet_map_header);
@@ -140,17 +141,25 @@ static int rmnet_map_egress_handler(struct sk_buff *skb,
if (port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV4) {
additional_header_len = sizeof(struct rmnet_map_ul_csum_header);
- required_headroom += additional_header_len;
+ csum_type = RMNET_FLAGS_EGRESS_MAP_CKSUMV4;
+ } else if (port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV5) {
+ additional_header_len = sizeof(struct rmnet_map_v5_csum_header);
+ csum_type = RMNET_FLAGS_EGRESS_MAP_CKSUMV5;
}

+ required_headroom += additional_header_len;
+
if (skb_headroom(skb) < required_headroom) {
if (pskb_expand_head(skb, required_headroom, 0, GFP_ATOMIC))
return -ENOMEM;
}

- if (port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV4)
- rmnet_map_checksum_uplink_packet(skb, orig_dev);
+ if (csum_type) {
+ rmnet_map_checksum_uplink_packet(skb, port, orig_dev,
+ csum_type);
+ }

- map_header = rmnet_map_add_map_header(skb, additional_header_len, 0);
+ map_header = rmnet_map_add_map_header(skb, additional_header_len, port, 0);
if (!map_header)
return -ENOMEM;

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
index 55d293c..9b2aef0 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
@@ -70,7 +70,9 @@ struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
void rmnet_map_command(struct sk_buff *skb, struct rmnet_port *port);
int rmnet_map_checksum_downlink_packet(struct sk_buff *skb, u16 len);
void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,
- struct net_device *orig_dev);
+ struct rmnet_port *port,
+ struct net_device *orig_dev,
+ int csum_type);
int rmnet_map_process_next_hdr_packet(struct sk_buff *skb, u16 len);

static u8 rmnet_map_get_next_hdr_type(struct sk_buff *skb)
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
index 3d7e03f..600b9a2 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
@@ -263,12 +263,68 @@ rmnet_map_ipv6_ul_csum_header(void *ip6hdr,
}
#endif

+void rmnet_map_v5_checksum_uplink_packet(struct sk_buff *skb,
+ struct rmnet_port *port,
+ struct net_device *orig_dev)
+{
+ struct rmnet_priv *priv = netdev_priv(orig_dev);
+ struct rmnet_map_v5_csum_header *ul_header;
+
+ if (!(port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV5))
+ return;
+
+ ul_header = (struct rmnet_map_v5_csum_header *)
+ skb_push(skb, sizeof(*ul_header));
+ memset(ul_header, 0, sizeof(*ul_header));
+ ul_header->header_type = RMNET_MAP_HEADER_TYPE_CSUM_OFFLOAD;
+
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ void *iph = (char *)ul_header + sizeof(*ul_header);
+ __sum16 *check;
+ void *trans;
+ u8 proto;
+
+ if (skb->protocol == htons(ETH_P_IP)) {
+ u16 ip_len = ((struct iphdr *)iph)->ihl * 4;
+
+ proto = ((struct iphdr *)iph)->protocol;
+ trans = iph + ip_len;
+ }
+#if IS_ENABLED(CONFIG_IPV6)
+ else if (skb->protocol == htons(ETH_P_IPV6)) {
+ u16 ip_len = sizeof(struct ipv6hdr);
+
+ proto = ((struct ipv6hdr *)iph)->nexthdr;
+ trans = iph + ip_len;
+ }
+#endif /* CONFIG_IPV6 */
+ else {
+ priv->stats.csum_err_invalid_ip_version++;
+ goto sw_csum;
+ }
+
+ check = rmnet_map_get_csum_field(proto, trans);
+ if (check) {
+ skb->ip_summed = CHECKSUM_NONE;
+ /* Ask for checksum offloading */
+ ul_header->csum_valid_required = 1;
+ priv->stats.csum_hw++;
+ return;
+ }
+ }
+
+sw_csum:
+ priv->stats.csum_sw++;
+}
+
/* Adds MAP header to front of skb->data
* Padding is calculated and set appropriately in MAP header. Mux ID is
* initialized to 0.
*/
struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
- int hdrlen, int pad)
+ int hdrlen,
+ struct rmnet_port *port,
+ int pad)
{
struct rmnet_map_header *map_header;
u32 padding, map_datalen;
@@ -279,6 +335,11 @@ struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
skb_push(skb, sizeof(struct rmnet_map_header));
memset(map_header, 0, sizeof(struct rmnet_map_header));

+ /* Set next_hdr bit for csum offload packets */
+ if (port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV5)
+ map_header->next_hdr = 1;
+
if (pad == RMNET_MAP_NO_PAD_BYTES) {
map_header->pkt_len = htons(map_datalen);
return map_header;
@@ -395,11 +456,8 @@ int rmnet_map_checksum_downlink_packet(struct sk_buff *skb, u16 len)
return 0;
}

-/* Generates UL checksum meta info header for IPv4 and IPv6 over TCP and UDP
- * packets that are supported for UL checksum offload.
- */
-void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,
- struct net_device *orig_dev)
+void rmnet_map_v4_checksum_uplink_packet(struct sk_buff *skb,
+ struct net_device *orig_dev)
{
struct rmnet_priv *priv = netdev_priv(orig_dev);
struct rmnet_map_ul_csum_header *ul_header;
@@ -418,10 +476,12 @@ void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,

if (skb->protocol == htons(ETH_P_IP)) {
rmnet_map_ipv4_ul_csum_header(iphdr, ul_header, skb);
+ priv->stats.csum_hw++;
return;
} else if (skb->protocol == htons(ETH_P_IPV6)) {
#if IS_ENABLED(CONFIG_IPV6)
rmnet_map_ipv6_ul_csum_header(iphdr, ul_header, skb);
+ priv->stats.csum_hw++;
return;
#else
priv->stats.csum_err_invalid_ip_version++;
@@ -441,6 +501,26 @@ void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,
priv->stats.csum_sw++;
}

+/* Generates UL checksum meta info header for IPv4 and IPv6 over TCP and UDP
+ * packets that are supported for UL checksum offload.
+ */
+void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,
+ struct rmnet_port *port,
+ struct net_device *orig_dev,
+ int csum_type)
+{
+ switch (csum_type) {
+ case RMNET_FLAGS_EGRESS_MAP_CKSUMV4:
+ rmnet_map_v4_checksum_uplink_packet(skb, orig_dev);
+ break;
+ case RMNET_FLAGS_EGRESS_MAP_CKSUMV5:
+ rmnet_map_v5_checksum_uplink_packet(skb, port, orig_dev);
+ break;
+ default:
+ break;
+ }
+}
+
/* Process a MAPv5 packet header */
int rmnet_map_process_next_hdr_packet(struct sk_buff *skb,
u16 len)
diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
index 838bd29..319865f 100644
--- a/include/uapi/linux/if_link.h
+++ b/include/uapi/linux/if_link.h
@@ -1234,6 +1234,7 @@ enum {
#define RMNET_FLAGS_INGRESS_MAP_CKSUMV4 (1U << 2)
#define RMNET_FLAGS_EGRESS_MAP_CKSUMV4 (1U << 3)
#define RMNET_FLAGS_INGRESS_MAP_CKSUMV5 (1U << 4)
+#define RMNET_FLAGS_EGRESS_MAP_CKSUMV5 (1U << 5)

enum {
IFLA_RMNET_UNSPEC,
--

2021-02-11 21:41:10

by Sharath Chandra Vurukala

Subject: [PATCH 1/3] docs:networking: Add documentation for MAP v5

Add documentation explaining the new MAPv5 packet format
and the corresponding checksum offload header.

Signed-off-by: Sharath Chandra Vurukala <[email protected]>
---
.../device_drivers/cellular/qualcomm/rmnet.rst | 53 ++++++++++++++++++++--
1 file changed, 49 insertions(+), 4 deletions(-)

diff --git a/Documentation/networking/device_drivers/cellular/qualcomm/rmnet.rst b/Documentation/networking/device_drivers/cellular/qualcomm/rmnet.rst
index 70643b5..8c81f19 100644
--- a/Documentation/networking/device_drivers/cellular/qualcomm/rmnet.rst
+++ b/Documentation/networking/device_drivers/cellular/qualcomm/rmnet.rst
@@ -27,7 +27,7 @@ these MAP frames and send them to appropriate PDN's.
2. Packet format
================

-a. MAP packet (data / control)
+a. MAP packet v1 (data / control)

MAP header has the same endianness of the IP packet.

@@ -35,8 +35,9 @@ Packet format::

Bit 0 1 2-7 8 - 15 16 - 31
Function Command / Data Reserved Pad Multiplexer ID Payload length
- Bit 32 - x
- Function Raw Bytes
+
+ Bit 32-x
+ Function Raw bytes

Command (1)/ Data (0) bit value is to indicate if the packet is a MAP command
or data packet. Control packet is used for transport level flow control. Data
@@ -52,7 +53,51 @@ Multiplexer ID is to indicate the PDN on which data has to be sent.
Payload length includes the padding length but does not include MAP header
length.

-b. MAP packet (command specific)::
+b. MAP packet v5 (data / control)
+
+MAP header has the same endianness of the IP packet.
+
+Packet format::
+
+ Bit 0 1 2-7 8 - 15 16 - 31
+ Function Command / Data Next Header Pad Multiplexer ID Payload length
+
+ Bit 32 - 38 39 40 41-63
+ Function Header Type Next Header Checksum Valid Reserved
+
+ Bit 64-x
+ Function Raw bytes
+
+Command (1)/ Data (0) bit value is to indicate if the packet is a MAP command
+or data packet. Control packet is used for transport level flow control. Data
+packets are standard IP packets.
+
+Next header is used to indicate the presence of another header; currently this
+is limited to the checksum header.
+
+Padding is number of bytes to be added for 4 byte alignment if required by
+hardware.
+
+Multiplexer ID is to indicate the PDN on which data has to be sent.
+
+Payload length includes the padding length but does not include MAP header
+length.
+
+Header Type indicates the type of the following header; this is usually set to
+CHECKSUM.
+
+Header types
+= ==========================================
+0 Reserved
+1 Reserved
+2 Checksum header
+= ==========================================
+
+Checksum Valid indicates whether the checksum is valid. A value of 1 implies
+that the checksum was calculated for this packet and is valid; a value of 0
+indicates that the calculated packet checksum is invalid.
+
+Reserved bits are zeroed out by the sender and must be ignored by the
+receiver.
+
+c. MAP packet v1/v5 (command specific)::

Bit 0 1 2-7 8 - 15 16 - 31
Function Command Reserved Pad Multiplexer ID Payload length
--

2021-02-12 02:08:24

by Jakub Kicinski

Subject: Re: [PATCH 2/3] net:ethernet:rmnet:Support for downlink MAPv5 csum offload

On Fri, 12 Feb 2021 03:05:23 +0530 Sharath Chandra Vurukala wrote:
> +/* MAP CSUM headers */
> +struct rmnet_map_v5_csum_header {
> + u8 next_hdr:1;
> + u8 header_type:7;
> + u8 hw_reserved:5;
> + u8 priority:1;
> + u8 hw_reserved_bit:1;
> + u8 csum_valid_required:1;
> + __be16 reserved;
> +} __aligned(1);

Will this work on big endian?

2021-02-12 02:30:39

by kernel test robot

Subject: Re: [PATCH 3/3] net:ethernet:rmnet: Add support for Mapv5 Uplink packet

Hi Sharath,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on ipvs/master]
[also build test WARNING on linus/master sparc-next/master v5.11-rc7 next-20210211]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/0day-ci/linux/commits/Sharath-Chandra-Vurukala/docs-networking-Add-documentation-for-MAP-v5/20210212-063547
base: https://git.kernel.org/pub/scm/linux/kernel/git/horms/ipvs.git master
config: x86_64-randconfig-a012-20210209 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project c9439ca36342fb6013187d0a69aef92736951476)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install x86_64 cross compiling tool for clang build
# apt-get install binutils-x86-64-linux-gnu
# https://github.com/0day-ci/linux/commit/7f0a1e35c1d1c17de5873aded88d5dadfedce2fb
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Sharath-Chandra-Vurukala/docs-networking-Add-documentation-for-MAP-v5/20210212-063547
git checkout 7f0a1e35c1d1c17de5873aded88d5dadfedce2fb
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>

All warnings (new ones prefixed by >>):

>> drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c:266:6: warning: no previous prototype for function 'rmnet_map_v5_checksum_uplink_packet' [-Wmissing-prototypes]
void rmnet_map_v5_checksum_uplink_packet(struct sk_buff *skb,
^
drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c:266:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
void rmnet_map_v5_checksum_uplink_packet(struct sk_buff *skb,
^
static
>> drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c:459:6: warning: no previous prototype for function 'rmnet_map_v4_checksum_uplink_packet' [-Wmissing-prototypes]
void rmnet_map_v4_checksum_uplink_packet(struct sk_buff *skb,
^
drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c:459:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
void rmnet_map_v4_checksum_uplink_packet(struct sk_buff *skb,
^
static
2 warnings generated.


vim +/rmnet_map_v5_checksum_uplink_packet +266 drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c

265
> 266 void rmnet_map_v5_checksum_uplink_packet(struct sk_buff *skb,
267 struct rmnet_port *port,
268 struct net_device *orig_dev)
269 {
270 struct rmnet_priv *priv = netdev_priv(orig_dev);
271 struct rmnet_map_v5_csum_header *ul_header;
272
273 if (!(port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV5))
274 return;
275
276 ul_header = (struct rmnet_map_v5_csum_header *)
277 skb_push(skb, sizeof(*ul_header));
278 memset(ul_header, 0, sizeof(*ul_header));
279 ul_header->header_type = RMNET_MAP_HEADER_TYPE_CSUM_OFFLOAD;
280
281 if (skb->ip_summed == CHECKSUM_PARTIAL) {
282 void *iph = (char *)ul_header + sizeof(*ul_header);
283 __sum16 *check;
284 void *trans;
285 u8 proto;
286
287 if (skb->protocol == htons(ETH_P_IP)) {
288 u16 ip_len = ((struct iphdr *)iph)->ihl * 4;
289
290 proto = ((struct iphdr *)iph)->protocol;
291 trans = iph + ip_len;
292 }
293 #if IS_ENABLED(CONFIG_IPV6)
294 else if (skb->protocol == htons(ETH_P_IPV6)) {
295 u16 ip_len = sizeof(struct ipv6hdr);
296
297 proto = ((struct ipv6hdr *)iph)->nexthdr;
298 trans = iph + ip_len;
299 }
300 #endif /* CONFIG_IPV6 */
301 else {
302 priv->stats.csum_err_invalid_ip_version++;
303 goto sw_csum;
304 }
305
306 check = rmnet_map_get_csum_field(proto, trans);
307 if (check) {
308 skb->ip_summed = CHECKSUM_NONE;
309 /* Ask for checksum offloading */
310 ul_header->csum_valid_required = 1;
311 priv->stats.csum_hw++;
312 return;
313 }
314 }
315
316 sw_csum:
317 priv->stats.csum_sw++;
318 }
319
320 /* Adds MAP header to front of skb->data
321 * Padding is calculated and set appropriately in MAP header. Mux ID is
322 * initialized to 0.
323 */
324 struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
325 int hdrlen,
326 struct rmnet_port *port,
327 int pad)
328 {
329 struct rmnet_map_header *map_header;
330 u32 padding, map_datalen;
331 u8 *padbytes;
332
333 map_datalen = skb->len - hdrlen;
334 map_header = (struct rmnet_map_header *)
335 skb_push(skb, sizeof(struct rmnet_map_header));
336 memset(map_header, 0, sizeof(struct rmnet_map_header));
337
338 /* Set next_hdr bit for csum offload packets */
339 if (port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV5) {
340 map_header->next_hdr = 1;
341 }
342
343 if (pad == RMNET_MAP_NO_PAD_BYTES) {
344 map_header->pkt_len = htons(map_datalen);
345 return map_header;
346 }
347
348 padding = ALIGN(map_datalen, 4) - map_datalen;
349
350 if (padding == 0)
351 goto done;
352
353 if (skb_tailroom(skb) < padding)
354 return NULL;
355
356 padbytes = (u8 *)skb_put(skb, padding);
357 memset(padbytes, 0, padding);
358
359 done:
360 map_header->pkt_len = htons(map_datalen + padding);
361 map_header->pad_len = padding & 0x3F;
362
363 return map_header;
364 }
365
366 /* Deaggregates a single packet
367 * A whole new buffer is allocated for each portion of an aggregated frame.
368 * Caller should keep calling deaggregate() on the source skb until 0 is
369 * returned, indicating that there are no more packets to deaggregate. Caller
370 * is responsible for freeing the original skb.
371 */
372 struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb,
373 struct rmnet_port *port)
374 {
375 unsigned char *data = skb->data, *next_hdr = NULL;
376 struct rmnet_map_header *maph;
377 struct sk_buff *skbn;
378 u32 packet_len;
379
380 if (skb->len == 0)
381 return NULL;
382
383 maph = (struct rmnet_map_header *)skb->data;
384 packet_len = ntohs(maph->pkt_len) + sizeof(struct rmnet_map_header);
385
386 if (port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV4)
387 packet_len += sizeof(struct rmnet_map_dl_csum_trailer);
388 else if (port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV5) {
389 if (!maph->cd_bit) {
390 packet_len += sizeof(struct rmnet_map_v5_csum_header);
391 next_hdr = data + sizeof(*maph);
392 }
393 }
394
395 if (((int)skb->len - (int)packet_len) < 0)
396 return NULL;
397
398 /* Some hardware can send us empty frames. Catch them */
399 if (ntohs(maph->pkt_len) == 0)
400 return NULL;
401
402 if (next_hdr &&
403 ((struct rmnet_map_v5_csum_header *)next_hdr)->header_type !=
404 RMNET_MAP_HEADER_TYPE_CSUM_OFFLOAD)
405 return NULL;
406
407 skbn = alloc_skb(packet_len + RMNET_MAP_DEAGGR_SPACING, GFP_ATOMIC);
408 if (!skbn)
409 return NULL;
410
411 skb_reserve(skbn, RMNET_MAP_DEAGGR_HEADROOM);
412 skb_put(skbn, packet_len);
413 memcpy(skbn->data, skb->data, packet_len);
414 skb_pull(skb, packet_len);
415
416 return skbn;
417 }
418
419 /* Validates packet checksums. Function takes a pointer to
420 * the beginning of a buffer which contains the IP payload +
421 * padding + checksum trailer.
422 * Only IPv4 and IPv6 are supported along with TCP & UDP.
423 * Fragmented or tunneled packets are not supported.
424 */
425 int rmnet_map_checksum_downlink_packet(struct sk_buff *skb, u16 len)
426 {
427 struct rmnet_priv *priv = netdev_priv(skb->dev);
428 struct rmnet_map_dl_csum_trailer *csum_trailer;
429
430 if (unlikely(!(skb->dev->features & NETIF_F_RXCSUM))) {
431 priv->stats.csum_sw++;
432 return -EOPNOTSUPP;
433 }
434
435 csum_trailer = (struct rmnet_map_dl_csum_trailer *)(skb->data + len);
436
437 if (!csum_trailer->valid) {
438 priv->stats.csum_valid_unset++;
439 return -EINVAL;
440 }
441
442 if (skb->protocol == htons(ETH_P_IP)) {
443 return rmnet_map_ipv4_dl_csum_trailer(skb, csum_trailer, priv);
444 } else if (skb->protocol == htons(ETH_P_IPV6)) {
445 #if IS_ENABLED(CONFIG_IPV6)
446 return rmnet_map_ipv6_dl_csum_trailer(skb, csum_trailer, priv);
447 #else
448 priv->stats.csum_err_invalid_ip_version++;
449 return -EPROTONOSUPPORT;
450 #endif
451 } else {
452 priv->stats.csum_err_invalid_ip_version++;
453 return -EPROTONOSUPPORT;
454 }
455
456 return 0;
457 }
458
> 459 void rmnet_map_v4_checksum_uplink_packet(struct sk_buff *skb,
460 struct net_device *orig_dev)
461 {
462 struct rmnet_priv *priv = netdev_priv(orig_dev);
463 struct rmnet_map_ul_csum_header *ul_header;
464 void *iphdr;
465
466 ul_header = (struct rmnet_map_ul_csum_header *)
467 skb_push(skb, sizeof(struct rmnet_map_ul_csum_header));
468
469 if (unlikely(!(orig_dev->features &
470 (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM))))
471 goto sw_csum;
472
473 if (skb->ip_summed == CHECKSUM_PARTIAL) {
474 iphdr = (char *)ul_header +
475 sizeof(struct rmnet_map_ul_csum_header);
476
477 if (skb->protocol == htons(ETH_P_IP)) {
478 rmnet_map_ipv4_ul_csum_header(iphdr, ul_header, skb);
479 priv->stats.csum_hw++;
480 return;
481 } else if (skb->protocol == htons(ETH_P_IPV6)) {
482 #if IS_ENABLED(CONFIG_IPV6)
483 rmnet_map_ipv6_ul_csum_header(iphdr, ul_header, skb);
484 priv->stats.csum_hw++;
485 return;
486 #else
487 priv->stats.csum_err_invalid_ip_version++;
488 goto sw_csum;
489 #endif
490 } else {
491 priv->stats.csum_err_invalid_ip_version++;
492 }
493 }
494
495 sw_csum:
496 ul_header->csum_start_offset = 0;
497 ul_header->csum_insert_offset = 0;
498 ul_header->csum_enabled = 0;
499 ul_header->udp_ind = 0;
500
501 priv->stats.csum_sw++;
502 }
503

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]



2021-02-12 04:58:48

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH 2/3] net:ethernet:rmnet:Support for downlink MAPv5 csum offload

Hi Sharath,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on ipvs/master]
[also build test ERROR on linus/master sparc-next/master v5.11-rc7 next-20210211]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/0day-ci/linux/commits/Sharath-Chandra-Vurukala/docs-networking-Add-documentation-for-MAP-v5/20210212-063547
base: https://git.kernel.org/pub/scm/linux/kernel/git/horms/ipvs.git master
config: arm64-allyesconfig (attached as .config)
compiler: aarch64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/293142d706c02bf2e6ce7acb4e04ebb6cf4a2a63
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Sharath-Chandra-Vurukala/docs-networking-Add-documentation-for-MAP-v5/20210212-063547
git checkout 293142d706c02bf2e6ce7acb4e04ebb6cf4a2a63
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arm64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>

Note: the linux-review/Sharath-Chandra-Vurukala/docs-networking-Add-documentation-for-MAP-v5/20210212-063547 HEAD 7f0a1e35c1d1c17de5873aded88d5dadfedce2fb builds fine.
It only hurts bisectibility.

All errors (new ones prefixed by >>):

drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c: In function 'rmnet_map_egress_handler':
>> drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c:153:15: error: too few arguments to function 'rmnet_map_add_map_header'
153 | map_header = rmnet_map_add_map_header(skb, additional_header_len, 0);
| ^~~~~~~~~~~~~~~~~~~~~~~~
In file included from drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c:14:
drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h:66:26: note: declared here
66 | struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
| ^~~~~~~~~~~~~~~~~~~~~~~~
At top level:
drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h:76:11: warning: 'rmnet_map_get_next_hdr_type' defined but not used [-Wunused-function]
76 | static u8 rmnet_map_get_next_hdr_type(struct sk_buff *skb)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
--
>> drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c:270:26: error: conflicting types for 'rmnet_map_add_map_header'
270 | struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
| ^~~~~~~~~~~~~~~~~~~~~~~~
In file included from drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c:12:
drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h:66:26: note: previous declaration of 'rmnet_map_add_map_header' was here
66 | struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
| ^~~~~~~~~~~~~~~~~~~~~~~~


vim +/rmnet_map_add_map_header +153 drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c

ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 129
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 130 static int rmnet_map_egress_handler(struct sk_buff *skb,
56470c927f1ba1 Subash Abhinov Kasiviswanathan 2017-10-11 131 struct rmnet_port *port, u8 mux_id,
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 132 struct net_device *orig_dev)
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 133 {
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 134 int required_headroom, additional_header_len;
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 135 struct rmnet_map_header *map_header;
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 136
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 137 additional_header_len = 0;
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 138 required_headroom = sizeof(struct rmnet_map_header);
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 139
14452ca3b5ce30 Subash Abhinov Kasiviswanathan 2018-03-21 140 if (port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV4) {
5eb5f8608ef118 Subash Abhinov Kasiviswanathan 2018-01-07 141 additional_header_len = sizeof(struct rmnet_map_ul_csum_header);
5eb5f8608ef118 Subash Abhinov Kasiviswanathan 2018-01-07 142 required_headroom += additional_header_len;
5eb5f8608ef118 Subash Abhinov Kasiviswanathan 2018-01-07 143 }
5eb5f8608ef118 Subash Abhinov Kasiviswanathan 2018-01-07 144
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 145 if (skb_headroom(skb) < required_headroom) {
6392ff3c8e4c23 Subash Abhinov Kasiviswanathan 2018-10-02 146 if (pskb_expand_head(skb, required_headroom, 0, GFP_ATOMIC))
1eece799d3f611 Subash Abhinov Kasiviswanathan 2018-05-15 147 return -ENOMEM;
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 148 }
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 149
14452ca3b5ce30 Subash Abhinov Kasiviswanathan 2018-03-21 150 if (port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV4)
5eb5f8608ef118 Subash Abhinov Kasiviswanathan 2018-01-07 151 rmnet_map_checksum_uplink_packet(skb, orig_dev);
5eb5f8608ef118 Subash Abhinov Kasiviswanathan 2018-01-07 152
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 @153 map_header = rmnet_map_add_map_header(skb, additional_header_len, 0);
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 154 if (!map_header)
1eece799d3f611 Subash Abhinov Kasiviswanathan 2018-05-15 155 return -ENOMEM;
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 156
56470c927f1ba1 Subash Abhinov Kasiviswanathan 2017-10-11 157 map_header->mux_id = mux_id;
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 158
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 159 skb->protocol = htons(ETH_P_MAP);
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 160
cf2fe57b0cc220 Subash Abhinov Kasiviswanathan 2017-12-11 161 return 0;
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 162 }
ceed73a2cf4aff Subash Abhinov Kasiviswanathan 2017-08-29 163


2021-02-12 14:04:19

by Alex Elder

[permalink] [raw]
Subject: Re: [PATCH 2/3] net:ethernet:rmnet:Support for downlink MAPv5 csum offload

On 2/11/21 8:04 PM, Jakub Kicinski wrote:
> On Fri, 12 Feb 2021 03:05:23 +0530 Sharath Chandra Vurukala wrote:
>> +/* MAP CSUM headers */
>> +struct rmnet_map_v5_csum_header {
>> + u8 next_hdr:1;
>> + u8 header_type:7;
>> + u8 hw_reserved:5;
>> + u8 priority:1;
>> + u8 hw_reserved_bit:1;
>> + u8 csum_valid_required:1;
>> + __be16 reserved;
>> +} __aligned(1);
>
> Will this work on big endian?

Sort of related to this point...

I'm sure the response to this will be to add two versions
of the definition, surrounded __LITTLE_ENDIAN_BITFIELD
and __BIG_ENDIAN_BITFIELD tests.

I really find this non-intuitive, and every time I
look at it I have to think about it a bit to figure
out where the bits actually lie in the word.

I know this pattern is used elsewhere in the networking
code, but that doesn't make it any easier for me to
understand...

Can we use masks, defined in host byte order, to
specify the positions of these fields?

I proposed a change at one time that did this and
this *_ENDIAN_BITFIELD thing was used instead.

I will gladly implement this change (completely
separate from what's being done here), but thought
it might be best to see what people think about it
before doing that work.

-Alex

Subject: Re: [PATCH 2/3] net:ethernet:rmnet:Support for downlink MAPv5 csum offload

On 2021-02-12 07:01, Alex Elder wrote:
> On 2/11/21 8:04 PM, Jakub Kicinski wrote:
>> On Fri, 12 Feb 2021 03:05:23 +0530 Sharath Chandra Vurukala wrote:
>>> +/* MAP CSUM headers */
>>> +struct rmnet_map_v5_csum_header {
>>> + u8 next_hdr:1;
>>> + u8 header_type:7;
>>> + u8 hw_reserved:5;
>>> + u8 priority:1;
>>> + u8 hw_reserved_bit:1;
>>> + u8 csum_valid_required:1;
>>> + __be16 reserved;
>>> +} __aligned(1);
>>
>> Will this work on big endian?
>
> Sort of related to this point...
>
> I'm sure the response to this will be to add two versions
> of the definition, surrounded __LITTLE_ENDIAN_BITFIELD
> and __BIG_ENDIAN_BITFIELD tests.
>
> I really find this non-intuitive, and every time I
> look at it I have to think about it a bit to figure
> out where the bits actually lie in the word.
>
> I know this pattern is used elsewhere in the networking
> code, but that doesn't make it any easier for me to
> understand...
>
> Can we use masks, defined in host byte order, to
> specify the positions of these fields?
>
> I proposed a change at one time that did this and
> this *_ENDIAN_BITFIELD thing was used instead.
>
> I will gladly implement this change (completely
> separate from what's being done here), but thought
> it might be best to see what people think about it
> before doing that work.
>
> -Alex

Our preference is to stick with __LITTLE_ENDIAN_BITFIELD
& __BIG_ENDIAN_BITFIELD definitions similar to other
networking definitions.

2021-02-12 18:54:30

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [PATCH 2/3] net:ethernet:rmnet:Support for downlink MAPv5 csum offload

On Fri, 12 Feb 2021 08:01:15 -0600 Alex Elder wrote:
> On 2/11/21 8:04 PM, Jakub Kicinski wrote:
> > On Fri, 12 Feb 2021 03:05:23 +0530 Sharath Chandra Vurukala wrote:
> >> +/* MAP CSUM headers */
> >> +struct rmnet_map_v5_csum_header {
> >> + u8 next_hdr:1;
> >> + u8 header_type:7;
> >> + u8 hw_reserved:5;
> >> + u8 priority:1;
> >> + u8 hw_reserved_bit:1;
> >> + u8 csum_valid_required:1;
> >> + __be16 reserved;
> >> +} __aligned(1);
> >
> > Will this work on big endian?
>
> Sort of related to this point...
>
> I'm sure the response to this will be to add two versions
> of the definition, surrounded __LITTLE_ENDIAN_BITFIELD
> and __BIG_ENDIAN_BITFIELD tests.
>
> I really find this non-intuitive, and every time I
> look at it I have to think about it a bit to figure
> out where the bits actually lie in the word.
>
> I know this pattern is used elsewhere in the networking
> code, but that doesn't make it any easier for me to
> understand...
>
> Can we use masks, defined in host byte order, to
> specify the positions of these fields?
>
> I proposed a change at one time that did this and
> this *_ENDIAN_BITFIELD thing was used instead.
>
> I will gladly implement this change (completely
> separate from what's being done here), but thought
> it might be best to see what people think about it
> before doing that work.

Most definitely agree, please convert.

2021-02-12 19:12:11

by Alex Elder

[permalink] [raw]
Subject: Re: [PATCH 2/3] net:ethernet:rmnet:Support for downlink MAPv5 csum offload

On 2/12/21 12:51 PM, Jakub Kicinski wrote:
> On Fri, 12 Feb 2021 08:01:15 -0600 Alex Elder wrote:
>> On 2/11/21 8:04 PM, Jakub Kicinski wrote:
>>> On Fri, 12 Feb 2021 03:05:23 +0530 Sharath Chandra Vurukala wrote:
>>>> +/* MAP CSUM headers */
>>>> +struct rmnet_map_v5_csum_header {
>>>> + u8 next_hdr:1;
>>>> + u8 header_type:7;
>>>> + u8 hw_reserved:5;
>>>> + u8 priority:1;
>>>> + u8 hw_reserved_bit:1;
>>>> + u8 csum_valid_required:1;
>>>> + __be16 reserved;
>>>> +} __aligned(1);
>>>
>>> Will this work on big endian?
>>
>> Sort of related to this point...
>>
>> I'm sure the response to this will be to add two versions
>> of the definition, surrounded __LITTLE_ENDIAN_BITFIELD
>> and __BIG_ENDIAN_BITFIELD tests.
>>
>> I really find this non-intuitive, and every time I
>> look at it I have to think about it a bit to figure
>> out where the bits actually lie in the word.
>>
>> I know this pattern is used elsewhere in the networking
>> code, but that doesn't make it any easier for me to
>> understand...
>>
>> Can we use masks, defined in host byte order, to
>> specify the positions of these fields?
>>
>> I proposed a change at one time that did this and
>> this *_ENDIAN_BITFIELD thing was used instead.
>>
>> I will gladly implement this change (completely
>> separate from what's being done here), but thought
>> it might be best to see what people think about it
>> before doing that work.
>
> Most definitely agree, please convert.

KS, would you like me to do this to the existing code
first?

I don't think it will take me very long. If it were
a priority I could probably get it done by the end of
today, but I'd want to ensure the result worked for
the testing you do.

-Alex

Subject: Re: [PATCH 2/3] net:ethernet:rmnet:Support for downlink MAPv5 csum offload

On 2021-02-12 12:06, Alex Elder wrote:
> On 2/12/21 12:51 PM, Jakub Kicinski wrote:
>> On Fri, 12 Feb 2021 08:01:15 -0600 Alex Elder wrote:
>>> On 2/11/21 8:04 PM, Jakub Kicinski wrote:
>>>> On Fri, 12 Feb 2021 03:05:23 +0530 Sharath Chandra Vurukala wrote:
>>>>> +/* MAP CSUM headers */
>>>>> +struct rmnet_map_v5_csum_header {
>>>>> + u8 next_hdr:1;
>>>>> + u8 header_type:7;
>>>>> + u8 hw_reserved:5;
>>>>> + u8 priority:1;
>>>>> + u8 hw_reserved_bit:1;
>>>>> + u8 csum_valid_required:1;
>>>>> + __be16 reserved;
>>>>> +} __aligned(1);
>>>>
>>>> Will this work on big endian?
>>>
>>> Sort of related to this point...
>>>
>>> I'm sure the response to this will be to add two versions
>>> of the definition, surrounded __LITTLE_ENDIAN_BITFIELD
>>> and __BIG_ENDIAN_BITFIELD tests.
>>>
>>> I really find this non-intuitive, and every time I
>>> look at it I have to think about it a bit to figure
>>> out where the bits actually lie in the word.
>>>
>>> I know this pattern is used elsewhere in the networking
>>> code, but that doesn't make it any easier for me to
>>> understand...
>>>
>>> Can we use masks, defined in host byte order, to
>>> specify the positions of these fields?
>>>
>>> I proposed a change at one time that did this and
>>> this *_ENDIAN_BITFIELD thing was used instead.
>>>
>>> I will gladly implement this change (completely
>>> separate from what's being done here), but thought
>>> it might be best to see what people think about it
>>> before doing that work.
>>
>> Most definitely agree, please convert.
>
> KS, would you like me to do this to the existing code
> first?
>
> I don't think it will take me very long. If it were
> a priority I could probably get it done by the end of
> today, but I'd want to ensure the result worked for
> the testing you do.
>
> -Alex

Sorry, I am not convinced that it is helping
to improve anything. It just adds a big
overhead of testing everything again without any
apparent improvement in performance or readability
of the code.