2013-03-19 23:43:51

by Simon Jeons

Subject: Re: [RFC][PATCH 0/9] extend hugepage migration

Hi Naoya,
On 02/22/2013 03:41 AM, Naoya Horiguchi wrote:
> Hi,
>
> Hugepage migration is now available only for soft offlining (moving
> data on the half corrupted page to another page to save the data).
> But it's also useful for some other users of page migration, so this
> patchset tries to extend some of those users to support hugepages.
>
> The targets of this patchset are NUMA related system calls (i.e.
> migrate_pages(2), move_pages(2), and mbind(2)), and memory hotplug.
> This patchset does not extend page migration in memory compaction,
> because I think that users of memory compaction mainly expect to
> construct THPs by arranging raw pages, and hugepage migration doesn't
> help with that.
> CMA, another user of page migration, could benefit from hugepage
> migration, but support for it is not enabled yet. This is because
> I've never used CMA and need to learn more before extending and/or
> testing hugepage migration in CMA. I'll add this in a later version
> if it becomes ready, or post it as a separate patchset.
>
> Migration of 1GB hugepages is not enabled for now, because I'm not
> sure whether users of 1GB hugepages really want it.
> We need a spare free hugepage in order to do migration, but I don't
> think that users want 1GB of memory to sit idle for that purpose
> (currently we can't expand/shrink the 1GB hugepage pool after boot).
>
> Could you review and give me some comments/feedbacks?
>
> Thanks,
> Naoya Horiguchi
> ---
> Easy patch access:
> [email protected]:Naoya-Horiguchi/linux.git
> branch:extend_hugepage_migration
>
> Test code:
> [email protected]:Naoya-Horiguchi/test_hugepage_migration_extension.git

git clone
[email protected]:Naoya-Horiguchi/test_hugepage_migration_extension.git
Cloning into test_hugepage_migration_extension...
Permission denied (publickey).
fatal: The remote end hung up unexpectedly

>
> Naoya Horiguchi (9):
> migrate: add migrate_entry_wait_huge()
> migrate: make core migration code aware of hugepage
> soft-offline: use migrate_pages() instead of migrate_huge_page()
> migrate: clean up migrate_huge_page()
> migrate: enable migrate_pages() to migrate hugepage
> migrate: enable move_pages() to migrate hugepage
> mbind: enable mbind() to migrate hugepage
> memory-hotplug: enable memory hotplug to handle hugepage
> remove /proc/sys/vm/hugepages_treat_as_movable
>
> Documentation/sysctl/vm.txt | 16 ------
> include/linux/hugetlb.h | 25 ++++++++--
> include/linux/mempolicy.h | 2 +-
> include/linux/migrate.h | 12 ++---
> include/linux/swapops.h | 4 ++
> kernel/sysctl.c | 7 ---
> mm/hugetlb.c | 98 ++++++++++++++++++++++++++++--------
> mm/memory-failure.c | 20 ++++++--
> mm/memory.c | 6 ++-
> mm/memory_hotplug.c | 51 +++++++++++++++----
> mm/mempolicy.c | 61 +++++++++++++++--------
> mm/migrate.c | 119 ++++++++++++++++++++++++++++++--------------
> mm/page_alloc.c | 12 +++++
> mm/page_isolation.c | 5 ++
> 14 files changed, 311 insertions(+), 127 deletions(-)
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to [email protected]. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: [email protected]


2013-03-20 21:35:46

by Naoya Horiguchi

Subject: Re: [RFC][PATCH 0/9] extend hugepage migration

On Wed, Mar 20, 2013 at 07:43:44AM +0800, Simon Jeons wrote:
...
> >Easy patch access:
> > [email protected]:Naoya-Horiguchi/linux.git
> > branch:extend_hugepage_migration
> >
> >Test code:
> > [email protected]:Naoya-Horiguchi/test_hugepage_migration_extension.git
>
> git clone
> [email protected]:Naoya-Horiguchi/test_hugepage_migration_extension.git
> Cloning into test_hugepage_migration_extension...
> Permission denied (publickey).
> fatal: The remote end hung up unexpectedly

Sorry, wrong url.
git://github.com/Naoya-Horiguchi/test_hugepage_migration_extension.git
or
https://github.com/Naoya-Horiguchi/test_hugepage_migration_extension.git
should work.

Thanks,
Naoya

2013-03-20 23:49:55

by Simon Jeons

Subject: Re: [RFC][PATCH 0/9] extend hugepage migration

Hi Naoya,
On 03/21/2013 05:35 AM, Naoya Horiguchi wrote:
> On Wed, Mar 20, 2013 at 07:43:44AM +0800, Simon Jeons wrote:
> ...
>>> Easy patch access:
>>> [email protected]:Naoya-Horiguchi/linux.git
>>> branch:extend_hugepage_migration
>>>
>>> Test code:
>>> [email protected]:Naoya-Horiguchi/test_hugepage_migration_extension.git
>> git clone
>> [email protected]:Naoya-Horiguchi/test_hugepage_migration_extension.git
>> Cloning into test_hugepage_migration_extension...
>> Permission denied (publickey).
>> fatal: The remote end hung up unexpectedly
> Sorry, wrong url.
> git://github.com/Naoya-Horiguchi/test_hugepage_migration_extension.git
> or
> https://github.com/Naoya-Horiguchi/test_hugepage_migration_extension.git
> should work.

When I hack arch/x86/mm/hugetlbpage.c like this,
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index ae1aa71..87f34ee 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -354,14 +354,13 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 
 #endif /*HAVE_ARCH_HUGETLB_UNMAPPED_AREA*/
 
-#ifdef CONFIG_X86_64
 static __init int setup_hugepagesz(char *opt)
 {
 	unsigned long ps = memparse(opt, &opt);
 	if (ps == PMD_SIZE) {
 		hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
-	} else if (ps == PUD_SIZE && cpu_has_gbpages) {
-		hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
+	} else if (ps == PUD_SIZE) {
+		hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT + 4);
 	} else {
 		printk(KERN_ERR "hugepagesz: Unsupported page size %lu M\n",
 			ps >> 20);

I set the boot options hugepagesz=1G hugepages=10, and got 10 32MB huge
pages. What's the difference between these hacked pages and normal huge
pages?

>
> Thanks,
> Naoya

2013-03-21 12:56:34

by Michal Hocko

Subject: Re: [RFC][PATCH 0/9] extend hugepage migration

On Thu 21-03-13 07:49:48, Simon Jeons wrote:
[...]
> When I hack arch/x86/mm/hugetlbpage.c like this,
> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
> index ae1aa71..87f34ee 100644
> --- a/arch/x86/mm/hugetlbpage.c
> +++ b/arch/x86/mm/hugetlbpage.c
> @@ -354,14 +354,13 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
>  
>  #endif /*HAVE_ARCH_HUGETLB_UNMAPPED_AREA*/
>  
> -#ifdef CONFIG_X86_64
>  static __init int setup_hugepagesz(char *opt)
>  {
>  	unsigned long ps = memparse(opt, &opt);
>  	if (ps == PMD_SIZE) {
>  		hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
> -	} else if (ps == PUD_SIZE && cpu_has_gbpages) {
> -		hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
> +	} else if (ps == PUD_SIZE) {
> +		hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT + 4);
>  	} else {
>  		printk(KERN_ERR "hugepagesz: Unsupported page size %lu M\n",
>  			ps >> 20);
>
> I set the boot options hugepagesz=1G hugepages=10, and got 10 32MB huge
> pages. What's the difference between these hacked pages and normal huge
> pages?

How is this related to the patch set?
Please _stop_ distracting discussion to unrelated topics!

Nothing personal but this is just wasting our time.
--
Michal Hocko
SUSE Labs

2013-03-21 23:46:40

by Simon Jeons

Subject: Re: [RFC][PATCH 0/9] extend hugepage migration

Hi Michal,
On 03/21/2013 08:56 PM, Michal Hocko wrote:
> On Thu 21-03-13 07:49:48, Simon Jeons wrote:
> [...]
>> When I hack arch/x86/mm/hugetlbpage.c like this,
>> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
>> index ae1aa71..87f34ee 100644
>> --- a/arch/x86/mm/hugetlbpage.c
>> +++ b/arch/x86/mm/hugetlbpage.c
>> @@ -354,14 +354,13 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
>>  
>>  #endif /*HAVE_ARCH_HUGETLB_UNMAPPED_AREA*/
>>  
>> -#ifdef CONFIG_X86_64
>>  static __init int setup_hugepagesz(char *opt)
>>  {
>>  	unsigned long ps = memparse(opt, &opt);
>>  	if (ps == PMD_SIZE) {
>>  		hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
>> -	} else if (ps == PUD_SIZE && cpu_has_gbpages) {
>> -		hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
>> +	} else if (ps == PUD_SIZE) {
>> +		hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT + 4);
>>  	} else {
>>  		printk(KERN_ERR "hugepagesz: Unsupported page size %lu M\n",
>>  			ps >> 20);
>>
>> I set the boot options hugepagesz=1G hugepages=10, and got 10 32MB huge
>> pages. What's the difference between these hacked pages and normal huge
>> pages?
> How is this related to the patch set?
> Please _stop_ distracting discussion to unrelated topics!
>
> Nothing personal but this is just wasting our time.

Sorry, Michal, my bad.
Btw, could you explain this question for me? Very sorry to waste your time.