Date: Thu, 16 Mar 2017 21:52:23 -0400 (EDT)
From: Jerome Glisse
To: John Hubbard
Cc: Balbir Singh, Andrew Morton, linux-kernel@vger.kernel.org, linux-mm, Naoya Horiguchi, David Nellans, Evgeny Baskakov, Mark Hairgrove, Sherry Cheung, Subhash Gutti
Message-ID: <2057035918.7910419.1489715543920.JavaMail.zimbra@redhat.com>
In-Reply-To: <94e0d115-7deb-c748-3dc2-60d6289e6551@nvidia.com>
References: <1489680335-6594-1-git-send-email-jglisse@redhat.com> <1489680335-6594-8-git-send-email-jglisse@redhat.com> <20170316160520.d03ac02474cad6d2c8eba9bc@linux-foundation.org> <94e0d115-7deb-c748-3dc2-60d6289e6551@nvidia.com>
Subject: Re: [HMM 07/16] mm/migrate: new memory migration helper for use with device memory v4

> On 03/16/2017 05:45 PM, Balbir Singh wrote:
> > On Fri, Mar 17, 2017 at 11:22 AM, John Hubbard wrote:
> >> On 03/16/2017 04:05 PM, Andrew Morton wrote:
> >>>
> >>> On Thu, 16 Mar 2017 12:05:26 -0400 Jérôme Glisse wrote:
> >>>
> >>>> +static inline struct page *migrate_pfn_to_page(unsigned long mpfn)
> >>>> +{
> >>>> +	if (!(mpfn & MIGRATE_PFN_VALID))
> >>>> +		return NULL;
> >>>> +	return pfn_to_page(mpfn & MIGRATE_PFN_MASK);
> >>>> +}
> >>>
> >>> i386 allnoconfig:
> >>>
> >>> In file included from mm/page_alloc.c:61:
> >>> ./include/linux/migrate.h: In function 'migrate_pfn_to_page':
> >>> ./include/linux/migrate.h:139: warning: left shift count >= width of type
> >>> ./include/linux/migrate.h:141: warning: left shift count >= width of type
> >>> ./include/linux/migrate.h: In function 'migrate_pfn_size':
> >>> ./include/linux/migrate.h:146: warning: left shift count >= width of type
> >>>
> >>
> >> It seems clear that this was never meant to work with < 64-bit pfns:
> >>
> >> // migrate.h excerpt:
> >> #define MIGRATE_PFN_VALID	(1UL << (BITS_PER_LONG_LONG - 1))
> >> #define MIGRATE_PFN_MIGRATE	(1UL << (BITS_PER_LONG_LONG - 2))
> >> #define MIGRATE_PFN_HUGE	(1UL << (BITS_PER_LONG_LONG - 3))
> >> #define MIGRATE_PFN_LOCKED	(1UL << (BITS_PER_LONG_LONG - 4))
> >> #define MIGRATE_PFN_WRITE	(1UL << (BITS_PER_LONG_LONG - 5))
> >> #define MIGRATE_PFN_DEVICE	(1UL << (BITS_PER_LONG_LONG - 6))
> >> #define MIGRATE_PFN_ERROR	(1UL << (BITS_PER_LONG_LONG - 7))
> >> #define MIGRATE_PFN_MASK	((1UL << (BITS_PER_LONG_LONG - PAGE_SHIFT)) - 1)
> >>
> >> ...obviously, there is not enough room for these flags in a 32-bit pfn.
> >>
> >> So, given the current HMM design, I think we are going to have to provide a
> >> 32-bit version of these routines (migrate_pfn_to_page, and related) that is
> >> a no-op, right?
> >
> > Or make the HMM Kconfig feature 64BIT only by making it depend on 64BIT?
> >
>
> Yes, that was my first reaction too, but these particular routines are aspiring
> to be generic routines--in fact, you have had an influence there, because these
> might possibly help with NUMA migrations. :)
>
> So it would look odd to see this:
>
> #ifdef CONFIG_HMM
> int migrate_vma(const struct migrate_vma_ops *ops,
> 		struct vm_area_struct *vma,
> 		unsigned long mentries,
> 		unsigned long start,
> 		unsigned long end,
> 		unsigned long *src,
> 		unsigned long *dst,
> 		void *private)
> {
> 	//...implementation
> #endif
>
> ...because migrate_vma() does not sound HMM-specific, and it is, after all, in
> migrate.h and migrate.c. We probably want a more generic approach here (not sure
> if I've picked exactly the right token to #ifdef on, but it's close):
>
> #ifdef CONFIG_64BIT
> int migrate_vma(const struct migrate_vma_ops *ops,
> 		struct vm_area_struct *vma,
> 		unsigned long mentries,
> 		unsigned long start,
> 		unsigned long end,
> 		unsigned long *src,
> 		unsigned long *dst,
> 		void *private)
> {
> 	/* ... full implementation */
> }
>
> #else
> int migrate_vma(const struct migrate_vma_ops *ops,
> 		struct vm_area_struct *vma,
> 		unsigned long mentries,
> 		unsigned long start,
> 		unsigned long end,
> 		unsigned long *src,
> 		unsigned long *dst,
> 		void *private)
> {
> 	return -EINVAL; /* or something more appropriate */
> }
> #endif
>
> thanks
> John Hubbard
> NVIDIA

The original intention was for it to be 64bit only; 32bit is a dying species, and
before splitting the hmm_ prefix out of this code and moving it to be generic, it
was behind a 64bit flag. If later on someone really cares about 32bit, we can only
move to u64.

Cheers,
Jérôme
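
For illustration only, here is a minimal standalone sketch of what such a u64-based
encoding might look like, so the flag bits no longer depend on BITS_PER_LONG. The
macro names, bit positions, and helpers below are hypothetical and are not taken
from the actual migrate.h patches:

/*
 * Hypothetical sketch: pack the migrate pfn and its flag bits into an
 * explicit 64-bit value (uint64_t here, since this is a standalone
 * userspace demo) so the layout is identical on 32-bit and 64-bit
 * builds.  Names and bit positions are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>

#define SKETCH_MPFN_VALID	(UINT64_C(1) << 63)
#define SKETCH_MPFN_WRITE	(UINT64_C(1) << 62)
#define SKETCH_MPFN_MASK	((UINT64_C(1) << 56) - 1)	/* low bits hold the pfn */

/* Pack a pfn and some flag bits into one 64-bit value. */
static inline uint64_t sketch_mpfn_encode(uint64_t pfn, uint64_t flags)
{
	return (pfn & SKETCH_MPFN_MASK) | flags;
}

/* Recover the pfn, or return 0 when the entry is not valid. */
static inline uint64_t sketch_mpfn_to_pfn(uint64_t mpfn)
{
	if (!(mpfn & SKETCH_MPFN_VALID))
		return 0;
	return mpfn & SKETCH_MPFN_MASK;
}

int main(void)
{
	uint64_t mpfn = sketch_mpfn_encode(0x12345,
					   SKETCH_MPFN_VALID | SKETCH_MPFN_WRITE);

	/*
	 * Because the shifts are done on 64-bit constants, this builds on a
	 * 32-bit target without the "left shift count >= width of type"
	 * warnings that 1UL << (BITS_PER_LONG_LONG - 1) produces on i386.
	 */
	printf("pfn=0x%llx write=%d\n",
	       (unsigned long long)sketch_mpfn_to_pfn(mpfn),
	       !!(mpfn & SKETCH_MPFN_WRITE));
	return 0;
}

With an explicit 64-bit type the helpers could still build on 32-bit kernels, which
is the direction the "move to u64" remark above points at.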