From: Ralph Campbell
Subject: [PATCH v3 2/3] mm/hmm: fix ZONE_DEVICE anon page mapping reuse
Date: Wed, 24 Jul 2019 16:26:59 -0700
Message-ID: <20190724232700.23327-3-rcampbell@nvidia.com>
In-Reply-To: <20190724232700.23327-1-rcampbell@nvidia.com>
References: <20190724232700.23327-1-rcampbell@nvidia.com>
Cc: Ralph Campbell, John Hubbard, Christoph Hellwig, Dan Williams,
    Andrew Morton, Jason Gunthorpe, Logan Gunthorpe, Ira Weiny,
    Matthew Wilcox, Mel Gorman, Jan Kara, Kirill A. Shutemov,
    Michal Hocko, Andrea Arcangeli, Mike Kravetz, Jérôme Glisse
X-Mailing-List: linux-kernel@vger.kernel.org

When a ZONE_DEVICE private page is freed, the page->mapping field can be
set.
If this page is reused as an anonymous page, the previous value can
prevent the page from being inserted into the CPU's anon rmap table.
For example, when migrating a pte_none() page to device memory:

  migrate_vma(ops, vma, start, end, src, dst, private)
    migrate_vma_collect()
      src[] = MIGRATE_PFN_MIGRATE
    migrate_vma_prepare()
      /* no page to lock or isolate so OK */
    migrate_vma_unmap()
      /* no page to unmap so OK */
    ops->alloc_and_copy()
      /* driver allocates ZONE_DEVICE page for dst[] */
    migrate_vma_pages()
      migrate_vma_insert_page()
        page_add_new_anon_rmap()
          __page_set_anon_rmap()
            /* This check sees the page's stale mapping field */
            if (PageAnon(page))
              return
            /* page->mapping is not updated */

The result is that the migration appears to succeed but a subsequent CPU
fault will be unable to migrate the page back to system memory or worse.

Clear the page->mapping field when freeing the ZONE_DEVICE page so stale
pointer data doesn't affect future page use.

Fixes: b7a523109fb5c9d2d6dd ("mm: don't clear ->mapping in hmm_devmem_free")
Cc: stable@vger.kernel.org
Signed-off-by: Ralph Campbell
Reviewed-by: John Hubbard
Reviewed-by: Christoph Hellwig
Cc: Dan Williams
Cc: Andrew Morton
Cc: Jason Gunthorpe
Cc: Logan Gunthorpe
Cc: Ira Weiny
Cc: Matthew Wilcox
Cc: Mel Gorman
Cc: Jan Kara
Cc: Kirill A. Shutemov
Cc: Michal Hocko
Cc: Andrea Arcangeli
Cc: Mike Kravetz
Cc: Jérôme Glisse
---
 kernel/memremap.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index 6ee03a816d67..289a086e1467 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -397,6 +397,30 @@ void __put_devmap_managed_page(struct page *page)
 
 		mem_cgroup_uncharge(page);
 
+		/*
+		 * When a device_private page is freed, the page->mapping field
+		 * may still contain a (stale) mapping value. For example, the
+		 * lower bits of page->mapping may still identify the page as
+		 * an anonymous page. Ultimately, this entire field is just
+		 * stale and wrong, and it will cause errors if not cleared.
+		 * One example is:
+		 *
+		 *  migrate_vma_pages()
+		 *    migrate_vma_insert_page()
+		 *      page_add_new_anon_rmap()
+		 *        __page_set_anon_rmap()
+		 *          ...checks page->mapping, via PageAnon(page) call,
+		 *            and incorrectly concludes that the page is an
+		 *            anonymous page.  Therefore, it incorrectly,
+		 *            silently fails to set up the new anon rmap.
+		 *
+		 * For other types of ZONE_DEVICE pages, migration is either
+		 * handled differently or not done at all, so there is no need
+		 * to clear page->mapping.
+		 */
+		if (is_device_private_page(page))
+			page->mapping = NULL;
+
 		page->pgmap->ops->page_free(page);
 	} else if (!count)
 		__put_page(page);
-- 
2.20.1
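
The silent failure described above comes from the early return at the top
of __page_set_anon_rmap(). For reference, here is a simplified, paraphrased
sketch of that path (based on the rmap code of this kernel era, with the
exclusive/anon_vma-root handling omitted); the _sketch suffix on the helper
name is ours, while PageAnon(), PAGE_MAPPING_ANON and linear_page_index()
are the real identifiers the commit message refers to:

  /*
   * Paraphrased sketch of __page_set_anon_rmap() (mm/rmap.c), trimmed to
   * the part relevant here; not the verbatim upstream source.
   */
  static void __page_set_anon_rmap_sketch(struct page *page,
					  struct vm_area_struct *vma,
					  unsigned long address)
  {
	struct anon_vma *anon_vma = vma->anon_vma;

	/*
	 * PageAnon() only looks at the PAGE_MAPPING_ANON bit encoded in
	 * the low bits of page->mapping.  A recycled ZONE_DEVICE private
	 * page that still carries its old anon mapping value passes this
	 * test, so the function returns without installing the new rmap.
	 */
	if (PageAnon(page))
		return;

	/* Normal path: encode the new anon_vma pointer into page->mapping. */
	anon_vma = (void *)anon_vma + PAGE_MAPPING_ANON;
	page->mapping = (struct address_space *)anon_vma;
	page->index = linear_page_index(vma, address);
  }

With the hunk above clearing page->mapping in __put_devmap_managed_page(),
a reused device private page reaches migrate_vma_insert_page() with
PageAnon() false, so the normal path runs and the new anon rmap is set up
as expected.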