From: Ralph Campbell
To:
CC: Ralph Campbell, Christoph Hellwig, Dan Williams, Andrew Morton,
    Jason Gunthorpe, Logan Gunthorpe, Ira Weiny, Matthew Wilcox,
    Mel Gorman, Jan Kara, "Kirill A. Shutemov", Michal Hocko,
    Andrea Arcangeli, Mike Kravetz, Jérôme Glisse
Subject: [PATCH 2/3] mm/hmm: fix ZONE_DEVICE anon page mapping reuse
Date: Tue, 16 Jul 2019 17:14:45 -0700
Message-ID: <20190717001446.12351-3-rcampbell@nvidia.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190717001446.12351-1-rcampbell@nvidia.com>
References: <20190717001446.12351-1-rcampbell@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

When a ZONE_DEVICE private page is freed, the page->mapping field can be
set.
If this page is reused as an anonymous page, the previous value can
prevent the page from being inserted into the CPU's anon rmap table.
For example, when migrating a pte_none() page to device memory:

  migrate_vma(ops, vma, start, end, src, dst, private)
    migrate_vma_collect()
      src[] = MIGRATE_PFN_MIGRATE
    migrate_vma_prepare()
      /* no page to lock or isolate so OK */
    migrate_vma_unmap()
      /* no page to unmap so OK */
    ops->alloc_and_copy()
      /* driver allocates ZONE_DEVICE page for dst[] */
    migrate_vma_pages()
      migrate_vma_insert_page()
        page_add_new_anon_rmap()
          __page_set_anon_rmap()
            /* This check sees the page's stale mapping field */
            if (PageAnon(page))
              return
            /* page->mapping is not updated */

The result is that the migration appears to succeed but a subsequent CPU
fault will be unable to migrate the page back to system memory or worse.

Clear the page->mapping field when freeing the ZONE_DEVICE page so stale
pointer data doesn't affect future page use.

Fixes: b7a523109fb5c9d2d6dd ("mm: don't clear ->mapping in hmm_devmem_free")
Cc: stable@vger.kernel.org
Signed-off-by: Ralph Campbell
Cc: Christoph Hellwig
Cc: Dan Williams
Cc: Andrew Morton
Cc: Jason Gunthorpe
Cc: Logan Gunthorpe
Cc: Ira Weiny
Cc: Matthew Wilcox
Cc: Mel Gorman
Cc: Jan Kara
Cc: "Kirill A. Shutemov"
Cc: Michal Hocko
Cc: Andrea Arcangeli
Cc: Mike Kravetz
Cc: "Jérôme Glisse"
---
 kernel/memremap.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index bea6f887adad..238ae5d0ae8a 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -408,6 +408,10 @@ void __put_devmap_managed_page(struct page *page)
 
 		mem_cgroup_uncharge(page);
 
+		/* Clear anonymous page mapping to prevent stale pointers */
+		if (is_device_private_page(page))
+			page->mapping = NULL;
+
 		page->pgmap->ops->page_free(page);
 	} else if (!count)
 		__put_page(page);
-- 
2.20.1
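
For reference, the check named in the call trace above lives in mm/rmap.c:
PageAnon() only tests the PAGE_MAPPING_ANON bit in page->mapping, so a
leftover anon_vma pointer from the page's previous life as an anonymous
page makes __page_set_anon_rmap() return before installing the new
mapping. A simplified sketch follows (paraphrased, not an exact copy of
the kernel sources of this era):

  /* include/linux/page-flags.h, simplified */
  static inline int PageAnon(struct page *page)
  {
          return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
  }

  /* mm/rmap.c, simplified */
  static void __page_set_anon_rmap(struct page *page,
          struct vm_area_struct *vma, unsigned long address, int exclusive)
  {
          struct anon_vma *anon_vma = vma->anon_vma;

          /*
           * A stale page->mapping left over from the page's earlier use
           * still has PAGE_MAPPING_ANON set, so this returns early and
           * the new anon_vma is never installed.
           */
          if (PageAnon(page))
                  return;

          page->mapping = (struct address_space *)
                  ((void *)anon_vma + PAGE_MAPPING_ANON);
          page->index = linear_page_index(vma, address);
  }

With the change above, page->mapping is cleared when the device private
page is freed, so PageAnon() is false the next time the page is reused as
an anonymous page and the new anon_vma is installed as expected.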