From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Willem de Bruijn,
 Peter Oskolkov, Eric Dumazet, Florian Westphal, "David S. Miller",
 Mao Wenan
Miller" , Mao Wenan Subject: [PATCH 4.4 61/65] ip: add helpers to process in-order fragments faster. Date: Mon, 4 Feb 2019 11:36:54 +0100 Message-Id: <20190204103620.364941364@linuxfoundation.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190204103610.583715954@linuxfoundation.org> References: <20190204103610.583715954@linuxfoundation.org> User-Agent: quilt/0.65 X-stable: review X-Patchwork-Hint: ignore MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org 4.4-stable review patch. If anyone has any objections, please let me know. ------------------ From: Peter Oskolkov commit 353c9cb360874e737fb000545f783df756c06f9a upstream. This patch introduces several helper functions/macros that will be used in the follow-up patch. No runtime changes yet. The new logic (fully implemented in the second patch) is as follows: * Nodes in the rb-tree will now contain not single fragments, but lists of consecutive fragments ("runs"). * At each point in time, the current "active" run at the tail is maintained/tracked. Fragments that arrive in-order, adjacent to the previous tail fragment, are added to this tail run without triggering the re-balancing of the rb-tree. * If a fragment arrives out of order with the offset _before_ the tail run, it is inserted into the rb-tree as a single fragment. * If a fragment arrives after the current tail fragment (with a gap), it starts a new "tail" run, as is inserted into the rb-tree at the end as the head of the new run. skb->cb is used to store additional information needed here (suggested by Eric Dumazet). Reported-by: Willem de Bruijn Signed-off-by: Peter Oskolkov Cc: Eric Dumazet Cc: Florian Westphal Signed-off-by: David S. Miller Signed-off-by: Mao Wenan Signed-off-by: Greg Kroah-Hartman --- include/net/inet_frag.h | 4 ++ net/ipv4/ip_fragment.c | 74 +++++++++++++++++++++++++++++++++++++++++++++--- 2 files changed, 74 insertions(+), 4 deletions(-) --- a/include/net/inet_frag.h +++ b/include/net/inet_frag.h @@ -48,6 +48,7 @@ struct inet_frag_queue { struct sk_buff *fragments; /* Used in IPv6. */ struct rb_root rb_fragments; /* Used in IPv4. */ struct sk_buff *fragments_tail; + struct sk_buff *last_run_head; ktime_t stamp; int len; int meat; @@ -118,6 +119,9 @@ struct inet_frag_queue *inet_frag_find(s void inet_frag_maybe_warn_overflow(struct inet_frag_queue *q, const char *prefix); +/* Free all skbs in the queue; return the sum of their truesizes. */ +unsigned int inet_frag_rbtree_purge(struct rb_root *root); + static inline void inet_frag_put(struct inet_frag_queue *q, struct inet_frags *f) { if (atomic_dec_and_test(&q->refcnt)) --- a/net/ipv4/ip_fragment.c +++ b/net/ipv4/ip_fragment.c @@ -58,13 +58,57 @@ static int sysctl_ipfrag_max_dist __read_mostly = 64; static const char ip_frag_cache_name[] = "ip4-frags"; -struct ipfrag_skb_cb -{ +/* Use skb->cb to track consecutive/adjacent fragments coming at + * the end of the queue. Nodes in the rb-tree queue will + * contain "runs" of one or more adjacent fragments. + * + * Invariants: + * - next_frag is NULL at the tail of a "run"; + * - the head of a "run" has the sum of all fragment lengths in frag_run_len. 
+ */
+struct ipfrag_skb_cb {
 	struct inet_skb_parm	h;
-	int			offset;
+	int			offset;
+	struct sk_buff		*next_frag;
+	int			frag_run_len;
 };
 
-#define FRAG_CB(skb)	((struct ipfrag_skb_cb *)((skb)->cb))
+#define FRAG_CB(skb)		((struct ipfrag_skb_cb *)((skb)->cb))
+
+static void ip4_frag_init_run(struct sk_buff *skb)
+{
+	BUILD_BUG_ON(sizeof(struct ipfrag_skb_cb) > sizeof(skb->cb));
+
+	FRAG_CB(skb)->next_frag = NULL;
+	FRAG_CB(skb)->frag_run_len = skb->len;
+}
+
+/* Append skb to the last "run". */
+static void ip4_frag_append_to_last_run(struct inet_frag_queue *q,
+					struct sk_buff *skb)
+{
+	RB_CLEAR_NODE(&skb->rbnode);
+	FRAG_CB(skb)->next_frag = NULL;
+
+	FRAG_CB(q->last_run_head)->frag_run_len += skb->len;
+	FRAG_CB(q->fragments_tail)->next_frag = skb;
+	q->fragments_tail = skb;
+}
+
+/* Create a new "run" with the skb. */
+static void ip4_frag_create_run(struct inet_frag_queue *q, struct sk_buff *skb)
+{
+	if (q->last_run_head)
+		rb_link_node(&skb->rbnode, &q->last_run_head->rbnode,
+			     &q->last_run_head->rbnode.rb_right);
+	else
+		rb_link_node(&skb->rbnode, NULL, &q->rb_fragments.rb_node);
+	rb_insert_color(&skb->rbnode, &q->rb_fragments);
+
+	ip4_frag_init_run(skb);
+	q->fragments_tail = skb;
+	q->last_run_head = skb;
+}
 
 /* Describe an entry in the "incomplete datagrams" queue. */
 struct ipq {
@@ -721,6 +765,28 @@ struct sk_buff *ip_check_defrag(struct n
 }
 EXPORT_SYMBOL(ip_check_defrag);
 
+unsigned int inet_frag_rbtree_purge(struct rb_root *root)
+{
+	struct rb_node *p = rb_first(root);
+	unsigned int sum = 0;
+
+	while (p) {
+		struct sk_buff *skb = rb_entry(p, struct sk_buff, rbnode);
+
+		p = rb_next(p);
+		rb_erase(&skb->rbnode, root);
+		while (skb) {
+			struct sk_buff *next = FRAG_CB(skb)->next_frag;
+
+			sum += skb->truesize;
+			kfree_skb(skb);
+			skb = next;
+		}
+	}
+	return sum;
+}
+EXPORT_SYMBOL(inet_frag_rbtree_purge);
+
 #ifdef CONFIG_SYSCTL
 static int zero;
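
For readers following the series: the helpers added above are only wired up by
the follow-up ("second") patch, which is not included in this mail. As a rough,
illustrative sketch of the intended call pattern -- the function name
example_queue_fragment, the offset arithmetic and the omitted overlap checks
are assumptions for illustration, not the actual second patch -- the choice
between extending the tail run and starting a new one looks roughly like this:

/* Illustrative sketch only; not part of this patch or the follow-up. */
static void example_queue_fragment(struct inet_frag_queue *q,
				   struct sk_buff *skb, int offset)
{
	struct sk_buff *prev_tail = q->fragments_tail;

	if (!prev_tail) {
		/* First fragment of the datagram: start the first run. */
		ip4_frag_create_run(q, skb);
	} else if (offset == FRAG_CB(prev_tail)->offset + prev_tail->len) {
		/* In-order arrival, adjacent to the current tail fragment:
		 * extend the active tail run; no rb-tree re-balancing.
		 */
		ip4_frag_append_to_last_run(q, skb);
	} else if (offset > FRAG_CB(prev_tail)->offset + prev_tail->len) {
		/* Past the tail with a gap: this skb becomes the head of a
		 * new tail run, inserted as the rightmost rb-tree node.
		 */
		ip4_frag_create_run(q, skb);
	} else {
		/* Offset before the tail run: the follow-up patch inserts
		 * it into the rb-tree as a single fragment (not shown).
		 */
	}
}

The point of the split is that the common in-order case takes the
ip4_frag_append_to_last_run() branch and never touches the rb-tree, so
well-behaved senders incur no re-balancing work.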