From: NeilBrown
To: David Miller
Date: Thu, 21 Mar 2019 14:42:40 +1100
Subject: [PATCH 1/2] rhashtable: don't hold lock on first table throughout insertion.
Cc: tgraf@suug.ch, netdev@vger.kernel.org, herbert@gondor.apana.org.au, linux-kernel@vger.kernel.org
Message-ID: <155313975999.31132.6450737698233829984.stgit@noble.brown>
In-Reply-To: <155313966311.31132.17461701268394583387.stgit@noble.brown>
References: <155313966311.31132.17461701268394583387.stgit@noble.brown>
User-Agent: StGit/0.17.1-dirty

rhashtable_try_insert() currently holds a lock on the bucket in the
first table, while also locking buckets in subsequent tables.  This is
unnecessary and looks like a hold-over from some earlier version of
the implementation.

As insert and remove always lock a bucket in each table in turn, and
as insert only inserts in the final table, there cannot be any races
that are not covered by simply locking a bucket in each table in turn.

When an insert call reaches the last table it can be sure that there
is no matching entry in any other table, as it has searched them all,
and insertion never happens anywhere but in the last table.  The fact
that the code tests for the existence of future_tbl while holding a
lock on the relevant bucket ensures that two threads inserting the
same key will make compatible decisions about which is the "last"
table.

This simplifies the code and allows the ->rehash field to be
discarded.

We still need a way to ensure that a dead bucket_table is never
re-linked by rhashtable_walk_stop().  This can be achieved by calling
call_rcu() inside the locked region, and checking with
rcu_head_after_call_rcu() in rhashtable_walk_stop() to see if the
bucket table is empty and dead.

Acked-by: Herbert Xu
Reviewed-by: Paul E. McKenney
Signed-off-by: NeilBrown
---
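For reviewers, a minimal sketch (not part of the patch, and not applied
by git-am) of the rcu_head lifecycle the last paragraph relies on.
Only rcu_head_init(), call_rcu() and rcu_head_after_call_rcu() are real
rcupdate.h APIs; every "demo_*" name is invented for illustration:

  #include <linux/rcupdate.h>
  #include <linux/slab.h>
  #include <linux/spinlock.h>

  struct demo_table {
  	struct rcu_head rcu;
  	/* ... payload ... */
  };

  static DEFINE_SPINLOCK(demo_lock);

  static void demo_free_rcu(struct rcu_head *head)
  {
  	kfree(container_of(head, struct demo_table, rcu));
  }

  static struct demo_table *demo_alloc(void)
  {
  	struct demo_table *t = kzalloc(sizeof(*t), GFP_KERNEL);

  	if (t)
  		rcu_head_init(&t->rcu); /* "call_rcu() not called yet" */
  	return t;
  }

  /* Retire side: schedule the free while still inside the lock. */
  static void demo_retire(struct demo_table *t)
  {
  	spin_lock(&demo_lock);
  	call_rcu(&t->rcu, demo_free_rcu);
  	spin_unlock(&demo_lock);
  }

  /*
   * Observer side: under the same lock, and provided something else
   * still guarantees 't' has not yet been freed (in the patch, the
   * walker list gives that guarantee), detect a dying table.
   */
  static bool demo_is_dying(struct demo_table *t)
  {
  	bool dying;

  	spin_lock(&demo_lock);
  	dying = rcu_head_after_call_rcu(&t->rcu, demo_free_rcu);
  	spin_unlock(&demo_lock);
  	return dying;
  }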
 include/linux/rhashtable.h |   13 -----------
 lib/rhashtable.c           |   52 ++++++++++++++------------------------
 2 files changed, 16 insertions(+), 49 deletions(-)

diff --git a/include/linux/rhashtable.h b/include/linux/rhashtable.h
index ae9c0f71f311..3864193d5e2e 100644
--- a/include/linux/rhashtable.h
+++ b/include/linux/rhashtable.h
@@ -63,7 +63,6 @@ struct bucket_table {
 	unsigned int		size;
 	unsigned int		nest;
-	unsigned int		rehash;
 	u32			hash_rnd;
 	unsigned int		locks_mask;
 	spinlock_t		*locks;
@@ -776,12 +775,6 @@ static inline int rhltable_insert(
  * @obj:	pointer to hash head inside object
  * @params:	hash table parameters
  *
- * Locks down the bucket chain in both the old and new table if a resize
- * is in progress to ensure that writers can't remove from the old table
- * and can't insert to the new table during the atomic operation of search
- * and insertion.  Searches for duplicates in both the old and new table if
- * a resize is in progress.
- *
  * This lookup function may only be used for fixed key hash table (key_len
  * parameter set). It will BUG() if used inappropriately.
  *
@@ -837,12 +830,6 @@ static inline void *rhashtable_lookup_get_insert_fast(
  * @obj:	pointer to hash head inside object
  * @params:	hash table parameters
  *
- * Locks down the bucket chain in both the old and new table if a resize
- * is in progress to ensure that writers can't remove from the old table
- * and can't insert to the new table during the atomic operation of search
- * and insertion.  Searches for duplicates in both the old and new table if
- * a resize is in progress.
- *
  * Lookups may occur in parallel with hashtable mutations and resizing.
  *
  * Will trigger an automatic deferred table resizing if residency in the
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 0a105d4af166..776b3a82d3a1 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -197,6 +197,7 @@ static struct bucket_table *bucket_table_alloc(struct rhashtable *ht,
 		return NULL;
 	}
 
+	rcu_head_init(&tbl->rcu);
 	INIT_LIST_HEAD(&tbl->walkers);
 	tbl->hash_rnd = get_random_u32();
 
@@ -280,10 +281,9 @@ static int rhashtable_rehash_chain(struct rhashtable *ht,
 	while (!(err = rhashtable_rehash_one(ht, old_hash)))
 		;
 
-	if (err == -ENOENT) {
-		old_tbl->rehash++;
+	if (err == -ENOENT)
 		err = 0;
-	}
+
 	spin_unlock_bh(old_bucket_lock);
 
 	return err;
@@ -330,13 +330,16 @@ static int rhashtable_rehash_table(struct rhashtable *ht)
 	spin_lock(&ht->lock);
 	list_for_each_entry(walker, &old_tbl->walkers, list)
 		walker->tbl = NULL;
-	spin_unlock(&ht->lock);
 
 	/* Wait for readers. All new readers will see the new
 	 * table, and thus no references to the old table will
 	 * remain.
+	 * We do this inside the locked region so that
+	 * rhashtable_walk_stop() can use rcu_head_after_call_rcu()
+	 * to check if it should not re-link the table.
 	 */
 	call_rcu(&old_tbl->rcu, bucket_table_free_rcu);
+	spin_unlock(&ht->lock);
 
 	return rht_dereference(new_tbl->future_tbl, ht) ? -EAGAIN : 0;
 }
@@ -578,46 +581,22 @@ static void *rhashtable_try_insert(struct rhashtable *ht, const void *key,
 	struct bucket_table *new_tbl;
 	struct bucket_table *tbl;
 	unsigned int hash;
-	spinlock_t *lock;
 	void *data;
 
-	tbl = rcu_dereference(ht->tbl);
-
-	/* All insertions must grab the oldest table containing
-	 * the hashed bucket that is yet to be rehashed.
-	 */
-	for (;;) {
-		hash = rht_head_hashfn(ht, tbl, obj, ht->p);
-		lock = rht_bucket_lock(tbl, hash);
-		spin_lock_bh(lock);
-
-		if (tbl->rehash <= hash)
-			break;
-
-		spin_unlock_bh(lock);
-		tbl = rht_dereference_rcu(tbl->future_tbl, ht);
-	}
-
-	data = rhashtable_lookup_one(ht, tbl, hash, key, obj);
-	new_tbl = rhashtable_insert_one(ht, tbl, hash, obj, data);
-	if (PTR_ERR(new_tbl) != -EEXIST)
-		data = ERR_CAST(new_tbl);
+	new_tbl = rcu_dereference(ht->tbl);
 
-	while (!IS_ERR_OR_NULL(new_tbl)) {
+	do {
 		tbl = new_tbl;
 		hash = rht_head_hashfn(ht, tbl, obj, ht->p);
-		spin_lock_nested(rht_bucket_lock(tbl, hash),
-				 SINGLE_DEPTH_NESTING);
+		spin_lock_bh(rht_bucket_lock(tbl, hash));
 
 		data = rhashtable_lookup_one(ht, tbl, hash, key, obj);
 		new_tbl = rhashtable_insert_one(ht, tbl, hash, obj, data);
 		if (PTR_ERR(new_tbl) != -EEXIST)
 			data = ERR_CAST(new_tbl);
 
-		spin_unlock(rht_bucket_lock(tbl, hash));
-	}
-
-	spin_unlock_bh(lock);
+		spin_unlock_bh(rht_bucket_lock(tbl, hash));
+	} while (!IS_ERR_OR_NULL(new_tbl));
 
 	if (PTR_ERR(data) == -EAGAIN)
 		data = ERR_PTR(rhashtable_insert_rehash(ht, tbl) ?:
@@ -939,10 +918,11 @@ void rhashtable_walk_stop(struct rhashtable_iter *iter)
 	ht = iter->ht;
 
 	spin_lock(&ht->lock);
-	if (tbl->rehash < tbl->size)
-		list_add(&iter->walker.list, &tbl->walkers);
-	else
+	if (rcu_head_after_call_rcu(&tbl->rcu, bucket_table_free_rcu))
+		/* This bucket table is being freed, don't re-link it. */
 		iter->walker.tbl = NULL;
+	else
+		list_add(&iter->walker.list, &tbl->walkers);
 	spin_unlock(&ht->lock);
 
 out:
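
(Not part of the patch: the new rhashtable_try_insert() loop ends up
split across several hunks above, so here is the same loop stitched
back together for readability, with error handling elided.  All
identifiers are the real ones from lib/rhashtable.c.)

  	new_tbl = rcu_dereference(ht->tbl);

  	do {
  		tbl = new_tbl;
  		hash = rht_head_hashfn(ht, tbl, obj, ht->p);
  		/* the only lock held at any moment is this one
  		 * bucket lock in this one table */
  		spin_lock_bh(rht_bucket_lock(tbl, hash));

  		data = rhashtable_lookup_one(ht, tbl, hash, key, obj);
  		/* rhashtable_insert_one() tests tbl->future_tbl under
  		 * the bucket lock and returns it when a newer table
  		 * exists, so concurrent inserters of the same key
  		 * agree on which table is "last"; it returns NULL on
  		 * a successful insert into the last table */
  		new_tbl = rhashtable_insert_one(ht, tbl, hash, obj, data);
  		if (PTR_ERR(new_tbl) != -EEXIST)
  			data = ERR_CAST(new_tbl);

  		spin_unlock_bh(rht_bucket_lock(tbl, hash));
  	} while (!IS_ERR_OR_NULL(new_tbl));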