Subject: Re: [PATCH] mm/hmm: Fix bad subpage pointer in try_to_unmap_one
From: Ralph Campbell
To: John Hubbard, Andrew Morton
CC: Jérôme Glisse, "Kirill A. Shutemov", Mike Kravetz, Jason Gunthorpe
Date: Mon, 15 Jul 2019 17:38:04 -0700
Message-ID: <8dd86951-f8b0-75c2-d738-5080343e5dc5@nvidia.com>
In-Reply-To: <0ee5166a-26cd-a504-b9db-cffd082ecd38@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 7/15/19 4:34 PM, John Hubbard wrote:
> On 7/15/19 3:00 PM, Andrew Morton wrote:
>> On Tue, 9 Jul 2019 18:24:57 -0700 Ralph Campbell wrote:
>>>
>>> On 7/9/19 5:28 PM, Andrew Morton wrote:
>>>> On Tue, 9 Jul 2019 15:35:56 -0700 Ralph Campbell wrote:
>>>>
>>>>> When migrating a
>>>>> ZONE device private page from device memory to system
>>>>> memory, the subpage pointer is initialized from a swap pte which computes
>>>>> an invalid page pointer. A kernel panic results such as:
>>>>>
>>>>>   BUG: unable to handle page fault for address: ffffea1fffffffc8
>>>>>
>>>>> Initialize subpage correctly before calling page_remove_rmap().
>>>>
>>>> I think this is
>>>>
>>>> Fixes: a5430dda8a3a1c ("mm/migrate: support un-addressable ZONE_DEVICE page in migration")
>>>> Cc: stable
>>>>
>>>> yes?
>>>>
>>>
>>> Yes. Can you add this or should I send a v2?
>>
>> I updated the patch. Could we please have some review input?
>>
>>
>> From: Ralph Campbell
>> Subject: mm/hmm: fix bad subpage pointer in try_to_unmap_one
>>
>> When migrating a ZONE device private page from device memory to system
>> memory, the subpage pointer is initialized from a swap pte which computes
>> an invalid page pointer. A kernel panic results such as:
>>
>>   BUG: unable to handle page fault for address: ffffea1fffffffc8
>>
>> Initialize subpage correctly before calling page_remove_rmap().
>>
>> Link: http://lkml.kernel.org/r/20190709223556.28908-1-rcampbell@nvidia.com
>> Fixes: a5430dda8a3a1c ("mm/migrate: support un-addressable ZONE_DEVICE page in migration")
>> Signed-off-by: Ralph Campbell
>> Cc: "Jérôme Glisse"
>> Cc: "Kirill A. Shutemov"
>> Cc: Mike Kravetz
>> Cc: Jason Gunthorpe
>> Cc:
>> Signed-off-by: Andrew Morton
>> ---
>>
>>  mm/rmap.c | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> --- a/mm/rmap.c~mm-hmm-fix-bad-subpage-pointer-in-try_to_unmap_one
>> +++ a/mm/rmap.c
>> @@ -1476,6 +1476,7 @@ static bool try_to_unmap_one(struct page
>>  			 * No need to invalidate here it will synchronize on
>>  			 * against the special swap migration pte.
>>  			 */
>> +			subpage = page;
>>  			goto discard;
>>  		}
>>
>
> Hi Ralph and everyone,
>
> While the above prevents a crash, I'm concerned that it is still not
> an accurate fix.
> This fix leads to repeatedly removing the rmap against the
> same struct page, which is odd, and also doesn't directly address the
> root cause, which I understand to be: this routine can't handle migrating
> the zero page properly--over and back, anyway. (We should also mention more
> about how this is triggered, in the commit description.)
>
> I'll take a closer look at possible fixes (I have to step out for a bit) soon,
> but any more experienced help is also appreciated here.
>
> thanks,

I'm not surprised at the confusion. It took me quite a while to
understand how migrate_vma() works with ZONE_DEVICE private memory.

The big point to be aware of is that when migrating a page to device
private memory, the source page's page->mapping pointer is copied to
the ZONE_DEVICE struct page and the page_mapcount() is increased. So
the kernel sees the page as being "mapped", but the page table entry
is a swap entry (is_swap_pte()), so the CPU will fault if it tries to
access the mapped address.

So yes, the source anon page is unmapped, DMA'ed to the device, and
then mapped again. Then, on a CPU fault, the zone device page is
unmapped, DMA'ed to system memory, and mapped again. The rmap_walk()
is used to clear the temporary migration pte, so that is another
important detail of how migrate_vma() works.

At the moment, only single anon private pages can migrate to device
private memory, so there are no subpages and setting it to "page"
should be correct for now. I'm looking at supporting migration of
transparent huge pages, but that is a work in progress.

Let me know how much of all that you think should be in the change
log. Getting an Acked-by from Jerome would be nice too.

I see Christoph Hellwig got confused by this too [1]. I have a patch
to clear page->mapping when freeing ZONE_DEVICE private struct pages,
which I'll send out soon. I'll probably also add some comments to
struct page to include the above info and maybe remove the _zd_pad_1
field.
[1] 740d6310ed4cd5c78e63 ("mm: don't clear ->mapping in hmm_devmem_free")