From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Matthew Wilcox, Jan Kara, Dan Williams, Sasha Levin
Subject: [PATCH 4.19 136/170] dax: Don't access a freed inode
Date: Mon, 7 Jan 2019 13:32:43 +0100
Message-Id: <20190107104509.026119134@linuxfoundation.org>
In-Reply-To: <20190107104452.953560660@linuxfoundation.org>
References: <20190107104452.953560660@linuxfoundation.org>
X-stable: review

4.19-stable review patch. If anyone has any objections, please let me know.

------------------

commit 55e56f06ed71d9441f3abd5b1d3c1a870812b3fe upstream.

After we drop the i_pages lock, the inode can be freed at any time.  The
get_unlocked_entry() code has no choice but to reacquire the lock, so it
can't be used here.  Create a new wait_entry_unlocked() which takes care
not to acquire the lock or dereference the address_space in any way.

Fixes: c2a7d2a11552 ("filesystem-dax: Introduce dax_lock_mapping_entry()")
Cc:
Signed-off-by: Matthew Wilcox
Reviewed-by: Jan Kara
Signed-off-by: Dan Williams
Signed-off-by: Sasha Levin
---
 fs/dax.c | 69 ++++++++++++++++++++++++++------------------------------
 1 file changed, 32 insertions(+), 37 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 3a2682a6c832..415605fafaeb 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -229,8 +229,8 @@ static void put_unlocked_mapping_entry(struct address_space *mapping,
  *
  * Must be called with the i_pages lock held.
  */
-static void *__get_unlocked_mapping_entry(struct address_space *mapping,
-		pgoff_t index, void ***slotp, bool (*wait_fn)(void))
+static void *get_unlocked_mapping_entry(struct address_space *mapping,
+		pgoff_t index, void ***slotp)
 {
 	void *entry, **slot;
 	struct wait_exceptional_entry_queue ewait;
@@ -240,8 +240,6 @@ static void *__get_unlocked_mapping_entry(struct address_space *mapping,
 	ewait.wait.func = wake_exceptional_entry_func;
 
 	for (;;) {
-		bool revalidate;
-
 		entry = __radix_tree_lookup(&mapping->i_pages, index, NULL,
 					  &slot);
 		if (!entry ||
@@ -256,30 +254,39 @@ static void *__get_unlocked_mapping_entry(struct address_space *mapping,
 		prepare_to_wait_exclusive(wq, &ewait.wait,
 					  TASK_UNINTERRUPTIBLE);
 		xa_unlock_irq(&mapping->i_pages);
-		revalidate = wait_fn();
+		schedule();
 		finish_wait(wq, &ewait.wait);
 		xa_lock_irq(&mapping->i_pages);
-		if (revalidate) {
-			put_unlocked_mapping_entry(mapping, index, entry);
-			return ERR_PTR(-EAGAIN);
-		}
 	}
 }
 
-static bool entry_wait(void)
+/*
+ * The only thing keeping the address space around is the i_pages lock
+ * (it's cycled in clear_inode() after removing the entries from i_pages)
+ * After we call xas_unlock_irq(), we cannot touch xas->xa.
+ */
+static void wait_entry_unlocked(struct address_space *mapping, pgoff_t index,
+		void ***slotp, void *entry)
 {
+	struct wait_exceptional_entry_queue ewait;
+	wait_queue_head_t *wq;
+
+	init_wait(&ewait.wait);
+	ewait.wait.func = wake_exceptional_entry_func;
+
+	wq = dax_entry_waitqueue(mapping, index, entry, &ewait.key);
+	prepare_to_wait_exclusive(wq, &ewait.wait, TASK_UNINTERRUPTIBLE);
+	xa_unlock_irq(&mapping->i_pages);
 	schedule();
+	finish_wait(wq, &ewait.wait);
+
 	/*
-	 * Never return an ERR_PTR() from
-	 * __get_unlocked_mapping_entry(), just keep looping.
+	 * Entry lock waits are exclusive. Wake up the next waiter since
+	 * we aren't sure we will acquire the entry lock and thus wake
+	 * the next waiter up on unlock.
 	 */
-	return false;
-}
-
-static void *get_unlocked_mapping_entry(struct address_space *mapping,
-		pgoff_t index, void ***slotp)
-{
-	return __get_unlocked_mapping_entry(mapping, index, slotp, entry_wait);
+	if (waitqueue_active(wq))
+		__wake_up(wq, TASK_NORMAL, 1, &ewait.key);
 }
 
 static void unlock_mapping_entry(struct address_space *mapping, pgoff_t index)
@@ -398,19 +405,6 @@ static struct page *dax_busy_page(void *entry)
 	return NULL;
 }
 
-static bool entry_wait_revalidate(void)
-{
-	rcu_read_unlock();
-	schedule();
-	rcu_read_lock();
-
-	/*
-	 * Tell __get_unlocked_mapping_entry() to take a break, we need
-	 * to revalidate page->mapping after dropping locks
-	 */
-	return true;
-}
-
 bool dax_lock_mapping_entry(struct page *page)
 {
 	pgoff_t index;
@@ -446,14 +440,15 @@ bool dax_lock_mapping_entry(struct page *page)
 	}
 
 	index = page->index;
-	entry = __get_unlocked_mapping_entry(mapping, index, &slot,
-			entry_wait_revalidate);
+	entry = __radix_tree_lookup(&mapping->i_pages, index,
+					NULL, &slot);
 	if (!entry) {
 		xa_unlock_irq(&mapping->i_pages);
 		break;
-	} else if (IS_ERR(entry)) {
-		xa_unlock_irq(&mapping->i_pages);
-		WARN_ON_ONCE(PTR_ERR(entry) != -EAGAIN);
+	} else if (slot_locked(mapping, slot)) {
+		rcu_read_unlock();
+		wait_entry_unlocked(mapping, index, &slot, entry);
+		rcu_read_lock();
 		continue;
 	}
 	lock_slot(mapping, slot);

-- 
2.19.1
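
As an aside for reviewers of the backport: the heart of the fix is the ordering
inside wait_entry_unlocked() -- resolve the wait queue while the i_pages lock
still pins the address_space, drop the lock, sleep, and never touch the mapping
afterwards.  The fragment below is only a rough userspace sketch of that
ordering, written in plain C with pthreads; every name in it (object,
wait_table, wait_slot_for(), table_lock, wait_object_unlocked()) is invented
for illustration and none of it is kernel API.  It is not the patch's
implementation, just the shape of the pattern under those assumptions.

#include <pthread.h>
#include <stdint.h>

#define WAIT_TABLE_SIZE 64

/* Global wait channels; unlike the object below, they are never freed. */
struct wait_slot {
	pthread_mutex_t lock;
	pthread_cond_t  cond;
};

static struct wait_slot wait_table[WAIT_TABLE_SIZE];
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the DAX entry's owner: may be freed once table_lock drops. */
struct object {
	unsigned long index;
	int locked;			/* analogue of the locked slot */
};

/* Call once before any waiters run. */
void wait_table_init(void)
{
	for (int i = 0; i < WAIT_TABLE_SIZE; i++) {
		pthread_mutex_init(&wait_table[i].lock, NULL);
		pthread_cond_init(&wait_table[i].cond, NULL);
	}
}

/* Hash to a wait channel, in the spirit of dax_entry_waitqueue(). */
static struct wait_slot *wait_slot_for(const struct object *obj)
{
	return &wait_table[((uintptr_t)obj + obj->index) % WAIT_TABLE_SIZE];
}

/*
 * Rough analogue of wait_entry_unlocked(): entered with table_lock held
 * and obj->locked set.  The wait channel is resolved while table_lock
 * still pins obj; taking ws->lock before dropping table_lock closes the
 * lost-wakeup window (the job prepare_to_wait_exclusive() does in the
 * patch).  After table_lock is released, obj is never dereferenced again.
 */
void wait_object_unlocked(struct object *obj)
{
	struct wait_slot *ws = wait_slot_for(obj);	/* obj still pinned here */

	pthread_mutex_lock(&ws->lock);
	pthread_mutex_unlock(&table_lock);		/* obj may now be freed */
	pthread_cond_wait(&ws->cond, &ws->lock);	/* no use of obj below */
	/*
	 * Mirror the __wake_up() at the end of wait_entry_unlocked(): this
	 * waiter may not take the object lock itself, so it passes the
	 * wake-up along.  The caller is expected to loop and re-look the
	 * object up, so spurious wake-ups are harmless.
	 */
	pthread_cond_signal(&ws->cond);
	pthread_mutex_unlock(&ws->lock);
}

The design point the sketch tries to mirror is that the wait channel lives in a
global table keyed by (object, index), so it stays valid after the pinning lock
is gone, and the woken waiter forwards the wake-up because it cannot promise to
acquire the entry lock and wake the next waiter on unlock.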