From: Kamal Mostafa <kamal@canonical.com>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org, kernel-team@lists.ubuntu.com
Cc: Neil Horman, Dmitry Vyukov, Vladislav Yasevich, "David S. Miller",
	Luis Henriques, Kamal Mostafa
Subject: [PATCH 3.13.y-ckt 133/138] sctp: Fix port hash table size computation
Date: Wed, 9 Mar 2016 15:14:20 -0800
Message-Id: <1457565265-15195-134-git-send-email-kamal@canonical.com>
X-Mailer: git-send-email 2.7.0
In-Reply-To: <1457565265-15195-1-git-send-email-kamal@canonical.com>
References: <1457565265-15195-1-git-send-email-kamal@canonical.com>
X-Extended-Stable: 3.13
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

3.13.11-ckt36 -stable review patch.  If anyone has any objections, please let me know.

---8<------------------------------------------------------------

From: Neil Horman

commit d9749fb5942f51555dc9ce1ac0dbb1806960a975 upstream.

Dmitry Vyukov noted recently that the sctp_port_hashtable had an error in
its size computation, observing that the current method never guaranteed
that the hashsize (measured in number of entries) would be a power of two,
which the input hash function for that table requires.  The root cause of
the problem is that two values need to be computed (one, the allocation
order of the storage required, as passed to __get_free_pages, and two, the
number of entries for the hash table).  Both need to be powers of two, but
for different reasons, and the existing code simply computes one order
value and uses it as the basis for both, which is wrong (i.e. it assumes
that ((1 << order) * PAGE_SIZE) / sizeof(bucket) is still a power of two
when it is not).

To fix this, the logic is changed slightly.  We start by computing a goal
allocation order, capped at the order needed for the largest hash table we
want to support.  We then try to allocate a table of that size, decreasing
the order until an allocation succeeds.  From the resulting order we
compute how many buckets actually fit in the allocation, and round that
down to the nearest power of two, which becomes the number of entries the
table actually uses.

Signed-off-by: Neil Horman
Reported-by: Dmitry Vyukov
CC: Dmitry Vyukov
CC: Vladislav Yasevich
CC: "David S. Miller"
Signed-off-by: David S. Miller
[ luis: backported to 3.16: adjusted context ]
Signed-off-by: Luis Henriques
Signed-off-by: Kamal Mostafa
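For reviewers who want to see the failure mode concretely, here is a minimal
user-space sketch of the size computation (illustration only, not part of the
patch).  It assumes a 4 KiB page and a hypothetical 24-byte bucket; the real
sizeof(struct sctp_bind_hashbucket) depends on the kernel config (e.g.
spinlock debugging), which is exactly why the per-allocation entry count
cannot be assumed to be a power of two.  The helper round_down_pow2() is a
stand-in for the kernel's rounddown_pow_of_two().

#include <stdio.h>

#define PAGE_SIZE   4096UL
#define BUCKET_SIZE 24UL   /* hypothetical sizeof(struct sctp_bind_hashbucket) */

/* Round v down to the nearest power of two (what the patch relies on
 * rounddown_pow_of_two() to do).
 */
static unsigned long round_down_pow2(unsigned long v)
{
	unsigned long p = 1;

	while (p * 2UL <= v)
		p *= 2UL;
	return p;
}

int main(void)
{
	int order = 5;	/* pretend __get_free_pages() succeeded at this order */
	unsigned long bytes = (1UL << order) * PAGE_SIZE;
	unsigned long num_entries = bytes / BUCKET_SIZE;	/* 5461: not a power of two */
	unsigned long hashsize = round_down_pow2(num_entries);	/* 4096: usable hash size */

	printf("order=%d entries=%lu usable=%lu\n", order, num_entries, hashsize);
	return 0;
}

Running this prints "order=5 entries=5461 usable=4096": the old code would
have used all 5461 entries as the hash size, while the fixed code uses 4096.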
---
 net/sctp/protocol.c | 47 ++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 38 insertions(+), 9 deletions(-)

diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
index 599757e..d8689fc 100644
--- a/net/sctp/protocol.c
+++ b/net/sctp/protocol.c
@@ -61,6 +61,8 @@
 #include <net/inet_common.h>
 #include <net/inet_ecn.h>
 
+#define MAX_SCTP_PORT_HASH_ENTRIES (64 * 1024)
+
 /* Global data structures. */
 struct sctp_globals sctp_globals __read_mostly;
 
@@ -1333,6 +1335,8 @@ static __init int sctp_init(void)
 	unsigned long limit;
 	int max_share;
 	int order;
+	int num_entries;
+	int max_entry_order;
 
 	BUILD_BUG_ON(sizeof(struct sctp_ulpevent) >
 		     sizeof(((struct sk_buff *) 0)->cb));
@@ -1386,14 +1390,24 @@ static __init int sctp_init(void)
 
 	/* Size and allocate the association hash table.
 	 * The methodology is similar to that of the tcp hash tables.
+	 * Though not identical.  Start by getting a goal size
 	 */
 	if (totalram_pages >= (128 * 1024))
 		goal = totalram_pages >> (22 - PAGE_SHIFT);
 	else
 		goal = totalram_pages >> (24 - PAGE_SHIFT);
 
-	for (order = 0; (1UL << order) < goal; order++)
-		;
+	/* Then compute the page order for said goal */
+	order = get_order(goal);
+
+	/* Now compute the required page order for the maximum sized table we
+	 * want to create
+	 */
+	max_entry_order = get_order(MAX_SCTP_PORT_HASH_ENTRIES *
+				    sizeof(struct sctp_bind_hashbucket));
+
+	/* Limit the page order by that maximum hash table size */
+	order = min(order, max_entry_order);
 
 	do {
 		sctp_assoc_hashsize = (1UL << order) * PAGE_SIZE /
@@ -1427,27 +1441,42 @@ static __init int sctp_init(void)
 		INIT_HLIST_HEAD(&sctp_ep_hashtable[i].chain);
 	}
 
-	/* Allocate and initialize the SCTP port hash table.  */
+	/* Allocate and initialize the SCTP port hash table.
+	 * Note that order is initialized to start at the max sized
+	 * table we want to support.  If we can't get that many pages
+	 * reduce the order and try again
+	 */
 	do {
-		sctp_port_hashsize = (1UL << order) * PAGE_SIZE /
-					sizeof(struct sctp_bind_hashbucket);
-		if ((sctp_port_hashsize > (64 * 1024)) && order > 0)
-			continue;
 		sctp_port_hashtable = (struct sctp_bind_hashbucket *)
 			__get_free_pages(GFP_ATOMIC|__GFP_NOWARN, order);
 	} while (!sctp_port_hashtable && --order > 0);
+
 	if (!sctp_port_hashtable) {
 		pr_err("Failed bind hash alloc\n");
 		status = -ENOMEM;
 		goto err_bhash_alloc;
 	}
+
+	/* Now compute the number of entries that will fit in the
+	 * port hash space we allocated
+	 */
+	num_entries = (1UL << order) * PAGE_SIZE /
+		      sizeof(struct sctp_bind_hashbucket);
+
+	/* And finish by rounding it down to the nearest power of two;
+	 * this wastes some memory of course, but it's needed because
+	 * the hash function operates based on the assumption that
+	 * the number of entries is a power of two
+	 */
+	sctp_port_hashsize = rounddown_pow_of_two(num_entries);
+
 	for (i = 0; i < sctp_port_hashsize; i++) {
 		spin_lock_init(&sctp_port_hashtable[i].lock);
 		INIT_HLIST_HEAD(&sctp_port_hashtable[i].chain);
 	}
 
-	pr_info("Hash tables configured (established %d bind %d)\n",
-		sctp_assoc_hashsize, sctp_port_hashsize);
+	pr_info("Hash tables configured (established %d bind %d/%d)\n",
+		sctp_assoc_hashsize, sctp_port_hashsize, num_entries);
 
 	sctp_sysctl_register();
-- 
2.7.0
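As a worked example of the new "bind %d/%d" report: with 4 KiB pages and the
same hypothetical 24-byte bucket used in the sketch above, an order-5
allocation holds (1 << 5) * 4096 / 24 = 5461 buckets, so the boot log would
read roughly "established ... bind 4096/5461".  All 5461 buckets are
allocated, but only the rounded-down 4096 are initialized and addressed by
the hash, trading a little memory for the power-of-two table size the hash
function requires.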