From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mel Gorman, Minchan Kim, "Huang, Ying", Jan Kara, Andrew Morton, Linus Torvalds, Sasha Levin
Subject: [PATCH 4.14 116/183] mm: pin address_space before dereferencing it while isolating an LRU page
Date: Wed, 25 Apr 2018 12:35:36 +0200
Message-Id: <20180425103247.100550727@linuxfoundation.org>
In-Reply-To: <20180425103242.532713678@linuxfoundation.org>
References: <20180425103242.532713678@linuxfoundation.org>

4.14-stable review patch.
If anyone has any objections, please let me know.

------------------

From: Mel Gorman

[ Upstream commit 69d763fc6d3aee787a3e8c8c35092b4f4960fa5d ]

Minchan Kim asked the following question -- what locks protect the
address_space from being destroyed when a race happens between inode
truncation and __isolate_lru_page?  Jan Kara clarified by describing
the race as follows

CPU1                                            CPU2

truncate(inode)                                 __isolate_lru_page()
  ...
  truncate_inode_page(mapping, page);
    delete_from_page_cache(page)
      spin_lock_irqsave(&mapping->tree_lock, flags);
        __delete_from_page_cache(page, NULL)
          page_cache_tree_delete(..)
            ...                                   mapping = page_mapping(page);
            page->mapping = NULL;
            ...
      spin_unlock_irqrestore(&mapping->tree_lock, flags);
      page_cache_free_page(mapping, page)
        put_page(page)
          if (put_page_testzero(page)) -> false

- inode now has no pages and can be freed including the embedded
  address_space

                                                  if (mapping && !mapping->a_ops->migratepage)

- we've dereferenced mapping which is potentially already free.

The race is theoretically possible but unlikely.  Before
delete_from_page_cache, truncate_cleanup_page is called, so the page is
likely to be !PageDirty or PageWriteback, in which case it gets skipped
by the only caller that checks the mapping in __isolate_lru_page.  Even
if the race occurs, a substantial amount of work has to happen during a
tiny window with no preemption, but it could potentially be forced by
using a virtual machine to artificially slow one CPU or halt it during
the critical window.

This patch should eliminate the race with truncation by try-locking the
page before dereferencing mapping and aborting if the lock was not
acquired.  There was a suggestion from Huang Ying to use RCU as a
side-effect to prevent the mapping being freed.  However, I do not like
that solution as it's an unconventional means of preserving a mapping
and it's not a context where rcu_read_lock is obviously protecting RCU
data.

Link: http://lkml.kernel.org/r/20180104102512.2qos3h5vqzeisrek@techsingularity.net
Fixes: c82449352854 ("mm: compaction: make isolate_lru_page() filter-aware again")
Signed-off-by: Mel Gorman
Acked-by: Minchan Kim
Cc: "Huang, Ying"
Cc: Jan Kara
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 mm/vmscan.c |   14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1436,14 +1436,24 @@ int __isolate_lru_page(struct page *page
 
 		if (PageDirty(page)) {
 			struct address_space *mapping;
+			bool migrate_dirty;
 
 			/*
 			 * Only pages without mappings or that have a
 			 * ->migratepage callback are possible to migrate
-			 * without blocking
+			 * without blocking. However, we can be racing with
+			 * truncation so it's necessary to lock the page
+			 * to stabilise the mapping as truncation holds
+			 * the page lock until after the page is removed
+			 * from the page cache.
			 */
+			if (!trylock_page(page))
+				return ret;
+
 			mapping = page_mapping(page);
-			if (mapping && !mapping->a_ops->migratepage)
+			migrate_dirty = mapping && mapping->a_ops->migratepage;
+			unlock_page(page);
+			if (!migrate_dirty)
 				return ret;
 		}
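
As an aside for readers outside the kernel, the "lock to stabilise the
pointer, then dereference" pattern the patch uses can be sketched as a
user-space analogue in plain C with pthreads.  This is only an
illustrative sketch, not kernel code: struct page_like,
struct mapping_like and the function names below are all invented for
the example.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /*
     * Illustrative analogue of the patch, not kernel code.  "mapping"
     * may be freed by the truncating thread, so the isolating thread
     * must hold the per-page lock while it dereferences page->mapping.
     */
    struct mapping_like {
            bool has_migratepage;      /* stands in for a_ops->migratepage */
    };

    struct page_like {
            pthread_mutex_t lock;      /* stands in for the page lock */
            struct mapping_like *mapping;
    };

    /*
     * Truncation side: holds the page lock across the window in which
     * page->mapping is cleared and the mapping freed, mirroring how
     * truncation holds the page lock until the page has left the
     * page cache.
     */
    static void truncate_page(struct page_like *page)
    {
            pthread_mutex_lock(&page->lock);
            free(page->mapping);       /* mapping is gone after this */
            page->mapping = NULL;
            pthread_mutex_unlock(&page->lock);
    }

    /*
     * Isolation side: mirrors the shape of the patched
     * __isolate_lru_page() logic.  A successful trylock stabilises
     * page->mapping; a failed trylock means we may be racing with
     * truncation, and backing off is always safe.
     */
    static bool can_migrate_dirty(struct page_like *page)
    {
            struct mapping_like *mapping;
            bool migrate_dirty;

            if (pthread_mutex_trylock(&page->lock) != 0)
                    return false;      /* possibly racing with truncation */

            mapping = page->mapping;
            migrate_dirty = mapping && mapping->has_migratepage;
            pthread_mutex_unlock(&page->lock);

            return migrate_dirty;
    }

    int main(void)
    {
            struct mapping_like *m = malloc(sizeof(*m));
            struct page_like page = { PTHREAD_MUTEX_INITIALIZER, m };

            m->has_migratepage = true;
            printf("migratable before truncate: %d\n", can_migrate_dirty(&page));
            truncate_page(&page);
            printf("migratable after truncate:  %d\n", can_migrate_dirty(&page));
            return 0;
    }

Built with "cc -pthread", the second query reports the page as
non-migratable because the mapping has been torn down under the lock
rather than under the reader's feet, which is the property the trylock
in the real patch buys.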