Message-ID: <9af76814-ee3a-0af4-7300-d432050b13a3@redhat.com>
Date: Thu, 23 Jun 2022 09:57:10 +0200
From: David Hildenbrand
Organization: Red Hat
Subject: Re: [PATCH v5 01/13] mm: add zone device coherent type memory support
To: "Sierra Guiza, Alejandro (Alex)", Alistair Popple, akpm@linux-foundation.org
Cc: Felix Kuehling, jgg@nvidia.com, linux-mm@kvack.org, rcampbell@nvidia.com,
 linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
 amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, hch@lst.de,
 jglisse@redhat.com, willy@infradead.org
In-Reply-To: <01cad0cf-9937-8699-6df3-7d5dfa681922@amd.com>
References: <20220531200041.24904-1-alex.sierra@amd.com>
 <20220531200041.24904-2-alex.sierra@amd.com>
 <3ac89358-2ce0-7d0d-8b9c-8b0e5cc48945@redhat.com>
 <02ed2cb7-3ad3-8ffc-6032-04ae1853e234@amd.com>
 <7605beee-0a76-4ee9-e950-17419630f2cf@redhat.com>
 <6aef4b7f-0ced-08cd-1f0c-50c22996aa41@redhat.com>
 <65987ab8-426d-e533-0295-069312b4f751@amd.com>
 <34e94bdb-675a-5d5c-6137-8aa1ee658d49@redhat.com>
 <87letq6wb5.fsf@nvdebian.thelocal>
 <643c44e7-48be-375b-c7ab-6a30b5ee2937@redhat.com>
 <01cf9f24-d7fc-61e9-1c28-85dc5aabe645@redhat.com>
 <01cad0cf-9937-8699-6df3-7d5dfa681922@amd.com>
X-Mailing-List: linux-ext4@vger.kernel.org

On 23.06.22 01:16, Sierra Guiza, Alejandro (Alex) wrote:
> 
> On 6/21/2022 11:16 AM, David Hildenbrand wrote:
>> On 21.06.22 18:08, Sierra Guiza, Alejandro (Alex) wrote:
>>> On 6/21/2022 7:25 AM, David Hildenbrand wrote:
>>>> On 21.06.22 13:55, Alistair Popple wrote:
>>>>> David Hildenbrand writes:
>>>>>
>>>>>> On 21.06.22 13:25, Felix Kuehling wrote:
>>>>>>> Am 6/17/22 um 23:19 schrieb David Hildenbrand:
>>>>>>>> On 17.06.22 21:27, Sierra Guiza, Alejandro (Alex) wrote:
>>>>>>>>> On 6/17/2022 12:33 PM, David Hildenbrand wrote:
>>>>>>>>>> On 17.06.22 19:20, Sierra Guiza, Alejandro (Alex) wrote:
>>>>>>>>>>> On 6/17/2022 4:40 AM, David Hildenbrand wrote:
>>>>>>>>>>>> On 31.05.22 22:00, Alex Sierra wrote:
>>>>>>>>>>>>> Device memory that is cache coherent from device and CPU point of view.
>>>>>>>>>>>>> This is used on platforms that have an advanced system bus (like CAPI
>>>>>>>>>>>>> or CXL). Any page of a process can be migrated to such memory. However,
>>>>>>>>>>>>> no one should be allowed to pin such memory so that it can always be
>>>>>>>>>>>>> evicted.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Signed-off-by: Alex Sierra
>>>>>>>>>>>>> Acked-by: Felix Kuehling
>>>>>>>>>>>>> Reviewed-by: Alistair Popple
>>>>>>>>>>>>> [hch: rebased ontop of the refcount changes,
>>>>>>>>>>>>> removed is_dev_private_or_coherent_page]
>>>>>>>>>>>>> Signed-off-by: Christoph Hellwig
>>>>>>>>>>>>> ---
>>>>>>>>>>>>>  include/linux/memremap.h | 19 +++++++++++++++++++
>>>>>>>>>>>>>  mm/memcontrol.c          |  7 ++++---
>>>>>>>>>>>>>  mm/memory-failure.c      |  8 ++++++--
>>>>>>>>>>>>>  mm/memremap.c            | 10 ++++++++++
>>>>>>>>>>>>>  mm/migrate_device.c      | 16 +++++++---------
>>>>>>>>>>>>>  mm/rmap.c                |  5 +++--
>>>>>>>>>>>>>  6 files changed, 49 insertions(+), 16 deletions(-)
>>>>>>>>>>>>>
>>>>>>>>>>>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
>>>>>>>>>>>>> index 8af304f6b504..9f752ebed613 100644
>>>>>>>>>>>>> --- a/include/linux/memremap.h
>>>>>>>>>>>>> +++ b/include/linux/memremap.h
>>>>>>>>>>>>> @@ -41,6 +41,13 @@ struct vmem_altmap {
>>>>>>>>>>>>>   * A more complete discussion of unaddressable memory may be found in
>>>>>>>>>>>>>   * include/linux/hmm.h and Documentation/vm/hmm.rst.
>>>>>>>>>>>>>   *
>>>>>>>>>>>>> + * MEMORY_DEVICE_COHERENT:
>>>>>>>>>>>>> + * Device memory that is cache coherent from device and CPU point of view. This
>>>>>>>>>>>>> + * is used on platforms that have an advanced system bus (like CAPI or CXL). A
>>>>>>>>>>>>> + * driver can hotplug the device memory using ZONE_DEVICE and with that memory
>>>>>>>>>>>>> + * type. Any page of a process can be migrated to such memory. However no one
>>>>>>>>>>>> Any page might not be right, I'm pretty sure. ... just thinking about special pages
>>>>>>>>>>>> like vdso, shared zeropage, ... pinned pages ...
>>>>>>>>>> Well, you cannot migrate long term pages, that's what I meant :)
>>>>>>>>>>
>>>>>>>>>>>>> + * should be allowed to pin such memory so that it can always be evicted.
>>>>>>>>>>>>> + *
>>>>>>>>>>>>>   * MEMORY_DEVICE_FS_DAX:
>>>>>>>>>>>>>   * Host memory that has similar access semantics as System RAM i.e. DMA
>>>>>>>>>>>>>   * coherent and supports page pinning. In support of coordinating page
>>>>>>>>>>>>> @@ -61,6 +68,7 @@ struct vmem_altmap {
>>>>>>>>>>>>>  enum memory_type {
>>>>>>>>>>>>>  	/* 0 is reserved to catch uninitialized type fields */
>>>>>>>>>>>>>  	MEMORY_DEVICE_PRIVATE = 1,
>>>>>>>>>>>>> +	MEMORY_DEVICE_COHERENT,
>>>>>>>>>>>>>  	MEMORY_DEVICE_FS_DAX,
>>>>>>>>>>>>>  	MEMORY_DEVICE_GENERIC,
>>>>>>>>>>>>>  	MEMORY_DEVICE_PCI_P2PDMA,
>>>>>>>>>>>>> @@ -143,6 +151,17 @@ static inline bool folio_is_device_private(const struct folio *folio)
>>>>>>>>>>>> In general, this LGTM, and it should be correct with PageAnonExclusive I think.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> However, where exactly is pinning forbidden?
>>>>>>>>>>> Long-term pinning is forbidden since it would interfere with the device
>>>>>>>>>>> memory manager owning the
>>>>>>>>>>> device-coherent pages (e.g. evictions in TTM). However, normal pinning
>>>>>>>>>>> is allowed on this device type.
>>>>>>>>>> I don't see updates to folio_is_pinnable() in this patch.
>>>>>>>>> Device coherent type pages should return true here, as they are pinnable
>>>>>>>>> pages.
>>>>>>>> That function is only called for long-term pinnings in try_grab_folio().
>>>>>>>>
>>>>>>>>>> So wouldn't try_grab_folio() simply pin these pages? What am I missing?
>>>>>>>>> As far as I understand this return NULL for long term pin pages.
>>>>>>>>> Otherwise they get refcount incremented.
>>>>>>>> I don't follow.
>>>>>>>>
>>>>>>>> You're saying
>>>>>>>>
>>>>>>>> a) folio_is_pinnable() returns true for device coherent pages
>>>>>>>>
>>>>>>>> and that
>>>>>>>>
>>>>>>>> b) device coherent pages don't get long-term pinned
>>>>>>>>
>>>>>>>>
>>>>>>>> Yet, the code says
>>>>>>>>
>>>>>>>> struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
>>>>>>>> {
>>>>>>>> 	if (flags & FOLL_GET)
>>>>>>>> 		return try_get_folio(page, refs);
>>>>>>>> 	else if (flags & FOLL_PIN) {
>>>>>>>> 		struct folio *folio;
>>>>>>>>
>>>>>>>> 		/*
>>>>>>>> 		 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
>>>>>>>> 		 * right zone, so fail and let the caller fall back to the slow
>>>>>>>> 		 * path.
>>>>>>>> 		 */
>>>>>>>> 		if (unlikely((flags & FOLL_LONGTERM) &&
>>>>>>>> 			     !is_pinnable_page(page)))
>>>>>>>> 			return NULL;
>>>>>>>> 		...
>>>>>>>> 		return folio;
>>>>>>>> 	}
>>>>>>>> }
>>>>>>>>
>>>>>>>>
>>>>>>>> What prevents these pages from getting long-term pinned as stated in this patch?
>>>>>>> Long-term pinning is handled by __gup_longterm_locked, which migrates
>>>>>>> pages returned by __get_user_pages_locked that cannot be long-term
>>>>>>> pinned. try_grab_folio is OK to grab the pages. Anything that can't be
>>>>>>> long-term pinned will be migrated afterwards, and
>>>>>>> __get_user_pages_locked will be retried. The migration of
>>>>>>> DEVICE_COHERENT pages was implemented by Alistair in patch 5/13
>>>>>>> ("mm/gup: migrate device coherent pages when pinning instead of failing").
>>>>>> Thanks.
>>>>>>
>>>>>> __gup_longterm_locked()->check_and_migrate_movable_pages()
>>>>>>
>>>>>> Which checks folio_is_pinnable() and doesn't do anything if set.
>>>>>>
>>>>>> Sorry to be dense here, but I don't see how what's stated in this patch
>>>>>> works without adjusting folio_is_pinnable().
>>>>> Ugh, I think you might be right about try_grab_folio().
>>>>>
>>>>> We didn't update folio_is_pinnable() to include device coherent pages
>>>>> because device coherent pages are pinnable.
>>>>> It is really just
>>>>> FOLL_LONGTERM that we want to prevent here.
>>>>>
>>>>> For normal PUP that is done by my change in
>>>>> check_and_migrate_movable_pages() which migrates pages being pinned with
>>>>> FOLL_LONGTERM. But I think I incorrectly assumed we would take the
>>>>> pte_devmap() path in gup_pte_range(), which we don't for coherent pages.
>>>>> So I think the check in try_grab_folio() needs to be:
>>>> I think I said it already (and I might be wrong without reading the
>>>> code), but folio_is_pinnable() is *only* called for long-term pinnings.
>>>>
>>>> It should actually be called folio_is_longterm_pinnable().
>>>>
>>>> That's where that check should go, no?
>>> David, I think you're right. We didn't catch this since the LONGTERM gup
>>> test we added to hmm-test only calls to pin_user_pages. Apparently
>>> try_grab_folio is called only from fast callers (ex.
>>> pin_user_pages_fast/get_user_pages_fast). I have added a conditional
>>> similar to what Alistair has proposed to return null on LONGTERM &&
>>> (coherent_pages || folio_is_pinnable) at try_grab_folio. Also a new gup
>>> test was added with LONGTERM set that calls pin_user_pages_fast.
>>> Returning null under this condition it does causes the migration from
>>> dev to system memory.
>>>
>> Why can't coherent memory simply put its checks into
>> folio_is_pinnable()? I don't get it why we have to do things differently
>> here.
>>
>>> Actually, Im having different problems with a call to PageAnonExclusive
>>> from try_to_migrate_one during page fault from a HMM test that first
>>> migrate pages to device private and forks to mark as COW these pages.
>>> Apparently is catching the first BUG VM_BUG_ON_PGFLAGS(!PageAnon(page),
>>> page)
>> With or without this series? A backtrace would be great.
> 
> Here's the back trace. This happens in a hmm-test added in this patch
> series. However, I have tried to isolate this BUG by just adding the COW
> test with private device memory only. This is only present as follows.
> Allocate anonymous mem->Migrate to private device memory->fork->try to
> access to parent's anonymous memory (which will suppose to trigger a
> page fault and migration to system mem). Just for the record, if the
> child is terminated before the parent's memory is accessed, this problem
> is not present.

The only usage of PageAnonExclusive() in try_to_migrate_one() is:

	anon_exclusive = folio_test_anon(folio) &&
			 PageAnonExclusive(subpage);

Which can only possibly fail if subpage is not actually part of the
folio.

I see some controversial code in the if (folio_is_zone_device(folio))
case later:

		 * The assignment to subpage above was computed from a
		 * swap PTE which results in an invalid pointer.
		 * Since only PAGE_SIZE pages can currently be
		 * migrated, just set it to page. This will need to be
		 * changed when hugepage migrations to device private
		 * memory are supported.
		 */
		subpage = &folio->page;

There we have our invalid pointer hint. I don't see how it could have
worked if the child quit, though? Maybe just pure luck?

Does the following fix your issue:


From 09750c714739ef3ca317b4aec82bf20283c8fd2d Mon Sep 17 00:00:00 2001
From: David Hildenbrand
Date: Thu, 23 Jun 2022 09:38:45 +0200
Subject: [PATCH] mm/rmap: fix dereferencing invalid subpage pointer in
 try_to_migrate_one()

The subpage we calculate is an invalid pointer for device private pages,
because device private pages are mapped via non-present device private
entries, not ordinary present PTEs.

Let's just not compute broken pointers and fixup later. Move the proper
assignment of the correct subpage to the beginning of the function and
assert that we really only have a single page in our folio.
This currently results in a BUG when trying to compute anon_exclusive,
because:

[  528.727237] BUG: unable to handle page fault for address: ffffea1fffffffc0
[  528.739585] #PF: supervisor read access in kernel mode
[  528.745324] #PF: error_code(0x0000) - not-present page
[  528.751062] PGD 44eaf2067 P4D 44eaf2067 PUD 0
[  528.756026] Oops: 0000 [#1] PREEMPT SMP NOPTI
[  528.760890] CPU: 120 PID: 18275 Comm: hmm-tests Not tainted 5.19.0-rc3-kfd-alex #257
[  528.769542] Hardware name: AMD Corporation BardPeak/BardPeak, BIOS RTY1002BDS 09/17/2021
[  528.778579] RIP: 0010:try_to_migrate_one+0x21a/0x1000
[  528.784225] Code: f6 48 89 c8 48 2b 05 45 d1 6a 01 48 c1 f8 06 48 29 c3 48 8b 45 a8 48 c1 e3 06 48 01 cb f6 41 18 01 48 89 85 50 ff ff ff 74 0b <4c> 8b 33 49 c1 ee 11 41 83 e6 01 48 8b bd 48 ff ff ff e8 3f 99 02
[  528.805194] RSP: 0000:ffffc90003cdfaa0 EFLAGS: 00010202
[  528.811027] RAX: 00007ffff7ff4000 RBX: ffffea1fffffffc0 RCX: ffffeaffffffffc0
[  528.818995] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffc90003cdfaf8
[  528.826962] RBP: ffffc90003cdfb70 R08: 0000000000000000 R09: 0000000000000000
[  528.834930] R10: ffffc90003cdf910 R11: 0000000000000002 R12: ffff888194450540
[  528.842899] R13: ffff888160d057c0 R14: 0000000000000000 R15: 03ffffffffffffff
[  528.850865] FS:  00007ffff7fdb740(0000) GS:ffff8883b0600000(0000) knlGS:0000000000000000
[  528.859891] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  528.866308] CR2: ffffea1fffffffc0 CR3: 00000001562b4003 CR4: 0000000000770ee0
[  528.874275] PKRU: 55555554
[  528.877286] Call Trace:
[  528.880016]  <TASK>
[  528.882356]  ? lock_is_held_type+0xdf/0x130
[  528.887033]  rmap_walk_anon+0x167/0x410
[  528.891316]  try_to_migrate+0x90/0xd0
[  528.895405]  ? try_to_unmap_one+0xe10/0xe10
[  528.900074]  ? anon_vma_ctor+0x50/0x50
[  528.904260]  ? put_anon_vma+0x10/0x10
[  528.908347]  ? invalid_mkclean_vma+0x20/0x20
[  528.913114]  migrate_vma_setup+0x5f4/0x750
[  528.917691]  dmirror_devmem_fault+0x8c/0x250 [test_hmm]
[  528.923532]  do_swap_page+0xac0/0xe50
[  528.927623]  ? __lock_acquire+0x4b2/0x1ac0
[  528.932199]  __handle_mm_fault+0x949/0x1440
[  528.936876]  handle_mm_fault+0x13f/0x3e0
[  528.941256]  do_user_addr_fault+0x215/0x740
[  528.945928]  exc_page_fault+0x75/0x280
[  528.950115]  asm_exc_page_fault+0x27/0x30
[  528.954593] RIP: 0033:0x40366b
...

Fixes: 6c287605fd56 ("mm: remember exclusively mapped anonymous pages with PG_anon_exclusive")
Reported-by: Sierra Guiza, Alejandro (Alex)
Cc: Vlastimil Babka
Cc: Christoph Hellwig
Cc: "Matthew Wilcox (Oracle)"
Signed-off-by: David Hildenbrand
---
 mm/rmap.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 5bcb334cd6f2..746c05acad27 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1899,8 +1899,23 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 		/* Unexpected PMD-mapped THP? */
 		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
 
-		subpage = folio_page(folio,
-			pte_pfn(*pvmw.pte) - folio_pfn(folio));
+		if (folio_is_zone_device(folio)) {
+			/*
+			 * Our PTE is a non-present device exclusive entry and
+			 * calculating the subpage as for the common case would
+			 * result in an invalid pointer.
+			 *
+			 * Since only PAGE_SIZE pages can currently be
+			 * migrated, just set it to page. This will need to be
+			 * changed when hugepage migrations to device private
+			 * memory are supported.
+			 */
+			VM_BUG_ON_FOLIO(folio_nr_pages(folio) > 1, folio);
+			subpage = &folio->page;
+		} else {
+			subpage = folio_page(folio,
+				pte_pfn(*pvmw.pte) - folio_pfn(folio));
+		}
 		address = pvmw.address;
 		anon_exclusive = folio_test_anon(folio) &&
 				 PageAnonExclusive(subpage);
@@ -1993,15 +2008,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			/*
 			 * No need to invalidate here it will synchronize on
 			 * against the special swap migration pte.
-			 *
-			 * The assignment to subpage above was computed from a
-			 * swap PTE which results in an invalid pointer.
-			 * Since only PAGE_SIZE pages can currently be
-			 * migrated, just set it to page. This will need to be
-			 * changed when hugepage migrations to device private
-			 * memory are supported.
 			 */
-			subpage = &folio->page;
 		} else if (PageHWPoison(subpage)) {
 			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
 			if (folio_test_hugetlb(folio)) {
-- 
2.35.3


-- 
Thanks,

David / dhildenb