From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Jan Kara, Jeff Layton, Lukas Czerner, Ross Zwisler,
	Christoph Hellwig, Goldwyn Rodrigues, Nicholas Piggin,
	Ryusuke Konishi, linux-nilfs@vger.kernel.org, Jaegeuk Kim, Chao Yu,
	linux-f2fs-devel@lists.sourceforge.net
Subject: [PATCH v14 63/74] dax: Hash on XArray instead of mapping
Date: Sat, 16 Jun 2018 19:00:41 -0700
Message-Id: <20180617020052.4759-64-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180617020052.4759-1-willy@infradead.org>
References: <20180617020052.4759-1-willy@infradead.org>

Since the XArray is embedded in the struct address_space, its address
contains exactly as much entropy as the address of the mapping.  This
patch is purely preparatory for later patches which will simplify the
wait/wake interfaces.

Signed-off-by: Matthew Wilcox <willy@infradead.org>
---
 fs/dax.c | 32 +++++++++++++++++---------------
 1 file changed, 17 insertions(+), 15 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 157762fe2ba1..b7f54e386da8 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -116,7 +116,7 @@ static int dax_is_empty_entry(void *entry)
  * DAX page cache entry locking
  */
 struct exceptional_entry_key {
-	struct address_space *mapping;
+	struct xarray *xa;
 	pgoff_t entry_start;
 };
 
@@ -125,7 +125,7 @@ struct wait_exceptional_entry_queue {
 	struct exceptional_entry_key key;
 };
 
-static wait_queue_head_t *dax_entry_waitqueue(struct address_space *mapping,
+static wait_queue_head_t *dax_entry_waitqueue(struct xarray *xa,
 		pgoff_t index, void *entry, struct exceptional_entry_key *key)
 {
 	unsigned long hash;
@@ -138,21 +138,21 @@ static wait_queue_head_t *dax_entry_waitqueue(struct address_space *mapping,
 	if (dax_is_pmd_entry(entry))
 		index &= ~PG_PMD_COLOUR;
 
-	key->mapping = mapping;
+	key->xa = xa;
 	key->entry_start = index;
 
-	hash = hash_long((unsigned long)mapping ^ index, DAX_WAIT_TABLE_BITS);
+	hash = hash_long((unsigned long)xa ^ index, DAX_WAIT_TABLE_BITS);
 	return wait_table + hash;
 }
 
-static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mode,
-		int sync, void *keyp)
+static int wake_exceptional_entry_func(wait_queue_entry_t *wait,
+		unsigned int mode, int sync, void *keyp)
 {
 	struct exceptional_entry_key *key = keyp;
 	struct wait_exceptional_entry_queue *ewait =
 		container_of(wait, struct wait_exceptional_entry_queue, wait);
 
-	if (key->mapping != ewait->key.mapping ||
+	if (key->xa != ewait->key.xa ||
 	    key->entry_start != ewait->key.entry_start)
 		return 0;
 	return autoremove_wake_function(wait, mode, sync, NULL);
@@ -163,13 +163,13 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mo
  * The important information it's conveying is whether the entry at
  * this index used to be a PMD entry.
  */
-static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
+static void dax_wake_mapping_entry_waiter(struct xarray *xa,
 		pgoff_t index, void *entry, bool wake_all)
 {
 	struct exceptional_entry_key key;
 	wait_queue_head_t *wq;
 
-	wq = dax_entry_waitqueue(mapping, index, entry, &key);
+	wq = dax_entry_waitqueue(xa, index, entry, &key);
 
 	/*
 	 * Checking for locked entry and prepare_to_wait_exclusive() happens
@@ -246,7 +246,8 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
 			return entry;
 		}
 
-		wq = dax_entry_waitqueue(mapping, index, entry, &ewait.key);
+		wq = dax_entry_waitqueue(&mapping->i_pages, index, entry,
+				&ewait.key);
 		prepare_to_wait_exclusive(wq, &ewait.wait,
 					  TASK_UNINTERRUPTIBLE);
 		xa_unlock_irq(&mapping->i_pages);
@@ -270,7 +271,7 @@ static void dax_unlock_mapping_entry(struct address_space *mapping,
 	}
 	unlock_slot(mapping, slot);
 	xa_unlock_irq(&mapping->i_pages);
-	dax_wake_mapping_entry_waiter(mapping, index, entry, false);
+	dax_wake_mapping_entry_waiter(&mapping->i_pages, index, entry, false);
 }
 
 static void put_locked_mapping_entry(struct address_space *mapping,
@@ -290,7 +291,7 @@ static void put_unlocked_mapping_entry(struct address_space *mapping,
 		return;
 
 	/* We have to wake up next waiter for the page cache entry lock */
-	dax_wake_mapping_entry_waiter(mapping, index, entry, false);
+	dax_wake_mapping_entry_waiter(&mapping->i_pages, index, entry, false);
 }
 
 static unsigned long dax_entry_size(void *entry)
@@ -423,7 +424,8 @@ struct page *dax_lock_page(unsigned long pfn)
 			break;
 		}
 
-		wq = dax_entry_waitqueue(mapping, index, entry, &ewait.key);
+		wq = dax_entry_waitqueue(&mapping->i_pages, index, entry,
+				&ewait.key);
 		prepare_to_wait_exclusive(wq, &ewait.wait,
 					  TASK_UNINTERRUPTIBLE);
 		xa_unlock_irq(&mapping->i_pages);
@@ -556,8 +558,8 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
 			dax_disassociate_entry(entry, mapping, false);
 			radix_tree_delete(&mapping->i_pages, index);
 			mapping->nrexceptional--;
-			dax_wake_mapping_entry_waiter(mapping, index, entry,
-					true);
+			dax_wake_mapping_entry_waiter(&mapping->i_pages,
+					index, entry, true);
 		}
 
 		entry = dax_make_locked(0, size_flag | DAX_EMPTY);
-- 
2.17.1
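
A standalone sketch of the bucket selection follows (not part of the patch,
and only an approximation): the struct layouts are stripped down,
DAX_WAIT_TABLE_BITS is assumed to be 12, and hash_long() is replaced by a
simple multiplicative stand-in, so only the shape of the calculation mirrors
fs/dax.c.  It shows that, because i_pages is embedded in struct address_space,
hashing &mapping->i_pages spreads waiters across buckets just as well as
hashing mapping did.

#include <stdio.h>
#include <stdint.h>

#define DAX_WAIT_TABLE_BITS 12	/* assumed table size, for illustration only */

struct xarray { void *xa_head; };			/* stripped-down stand-in */
struct address_space { struct xarray i_pages; };	/* i_pages is embedded, not a pointer */

/* crude stand-in for the kernel's hash_long(); not the real implementation */
static unsigned long hash_long(unsigned long val, unsigned int bits)
{
	return (unsigned long)(((uint64_t)val * 0x9E3779B97F4A7C15ULL) >> (64 - bits));
}

/* old scheme: hash the address_space pointer */
static unsigned long bucket_from_mapping(struct address_space *mapping,
		unsigned long index)
{
	return hash_long((unsigned long)mapping ^ index, DAX_WAIT_TABLE_BITS);
}

/* new scheme: hash the embedded xarray pointer */
static unsigned long bucket_from_xarray(struct xarray *xa, unsigned long index)
{
	return hash_long((unsigned long)xa ^ index, DAX_WAIT_TABLE_BITS);
}

int main(void)
{
	struct address_space mapping = { { 0 } };
	unsigned long index = 42;

	/*
	 * &mapping and &mapping.i_pages differ only by a constant offset
	 * (zero here, since i_pages is the first member), so both inputs
	 * carry the same entropy and distribute waiters equally well.
	 */
	printf("mapping bucket: %lu\n", bucket_from_mapping(&mapping, index));
	printf("xarray  bucket: %lu\n", bucket_from_xarray(&mapping.i_pages, index));
	return 0;
}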