Date: Wed, 2 Sep 2020 16:10:57 +0200
From: Michal Hocko
To: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
    linux-mm@kvack.org, Vlastimil Babka, Mel Gorman
Subject: Re: [PATCH] mm/memory_hotplug: drain per-cpu pages again during memory offline
Message-ID: <20200902141057.GK4617@dhcp22.suse.cz>
References: <20200901124615.137200-1-pasha.tatashin@soleen.com>
    <20200902140116.GI4617@dhcp22.suse.cz>
In-Reply-To: <20200902140116.GI4617@dhcp22.suse.cz>

On Wed 02-09-20 16:01:17, Michal Hocko wrote:
> [Cc Mel and Vlastimil - I am still rummaging]
>
> On Tue 01-09-20 08:46:15, Pavel Tatashin wrote:
> > There is a race during page offline that can lead to an infinite loop:
> > a page never ends up on a buddy list and __offline_pages() keeps
> > retrying indefinitely, or until a termination signal is received.
> >
> > Thread#1 - a new process:
> >
> > load_elf_binary
> >   begin_new_exec
> >     exec_mmap
> >       mmput
> >         exit_mmap
> >           tlb_finish_mmu
> >             tlb_flush_mmu
> >               release_pages
> >                 free_unref_page_list
> >                   free_unref_page_prepare
> >                     set_pcppage_migratetype(page, migratetype);
> >                     // cache a migration type below MIGRATE_PCPTYPES
> >                     // in page->index
> >
> > Thread#2 - hot-removes memory:
> >
> > __offline_pages
> >   start_isolate_page_range
> >     set_migratetype_isolate
> >       set_pageblock_migratetype(page, MIGRATE_ISOLATE);
> >       // set the pageblock migration type to MIGRATE_ISOLATE
> >       drain_all_pages(zone);
> >       // drain per-cpu page lists to the buddy allocator
> >
> > Thread#1 - continues:
> >
> >                   free_unref_page_commit
> >                     migratetype = get_pcppage_migratetype(page);
> >                     // read back the old, now stale migration type
> >                     list_add(&page->lru, &pcp->lists[migratetype]);
> >                     // add the page to the already drained pcp list
> >
> > Thread#2:
> >
> > Never drains the pcp lists again, and therefore gets stuck in the loop.
> >
> > The fix is to try to drain the per-cpu lists again after
> > check_pages_isolated_cb() fails.

Still trying to wrap my head around this, but I think this is not a
proper fix. It should be the page isolation that makes sure no races
are possible with the page freeing path.
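For context, the migratetype caching that the trace above refers to is
a pair of one-line helpers in mm/page_alloc.c. In kernels of this
vintage they look roughly like this (a simplified excerpt for
illustration, not part of the quoted patch):

    static inline int get_pcppage_migratetype(struct page *page)
    {
            return page->index;
    }

    static inline void set_pcppage_migratetype(struct page *page,
                                               int migratetype)
    {
            page->index = migratetype;
    }

Because the type is sampled into page->index before the pageblock is
marked MIGRATE_ISOLATE, free_unref_page_commit() later files the page
under the stale type, onto a pcp list that drain_all_pages() has
already emptied.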
> > Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> > Cc: stable@vger.kernel.org
> > ---
> >  mm/memory_hotplug.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> >
> > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > index e9d5ab5d3ca0..d6d54922bfce 100644
> > --- a/mm/memory_hotplug.c
> > +++ b/mm/memory_hotplug.c
> > @@ -1575,6 +1575,15 @@ static int __ref __offline_pages(unsigned long start_pfn,
> >  		/* check again */
> >  		ret = walk_system_ram_range(start_pfn, end_pfn - start_pfn,
> >  					    NULL, check_pages_isolated_cb);
> > +		/*
> > +		 * Per-cpu pages are drained in start_isolate_page_range, but
> > +		 * if there are still pages that are not free, make sure that
> > +		 * we drain again, because when we isolated the range we might
> > +		 * have raced with another thread that was adding pages to the
> > +		 * pcp list.
> > +		 */
> > +		if (ret)
> > +			drain_all_pages(zone);
> >  	} while (ret);
> >
> >  	/* Ok, all of our target is isolated.
> > --
> > 2.25.1
>
> --
> Michal Hocko
> SUSE Labs

--
Michal Hocko
SUSE Labs
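As an aside, the lost-drain interleaving above can be modelled outside
the kernel. The following is a hypothetical, self-contained userspace
sketch in plain C with pthreads; every name in it (offline_thread,
free_thread, page_on_pcp, and so on) is invented for the model and is
only a stand-in for its mm/ counterpart. The sleep() pins down the
losing schedule: the single drain runs before the freeing side has
filed the page, so the page lands on an already drained list and
nothing ever flushes it again.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Simplified stand-ins; the real kernel values and types differ. */
    enum { MIGRATE_MOVABLE, MIGRATE_PCPTYPES = 3, MIGRATE_ISOLATE };

    static int cached_type = MIGRATE_MOVABLE; /* models page->index as set
                                               * by free_unref_page_prepare() */
    static int page_on_pcp;                   /* page sits on a per-cpu list */
    static int page_on_buddy;                 /* page reached the buddy lists */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Thread#2's side: isolate the pageblock, then drain the pcp lists once. */
    static void *offline_thread(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&lock);
            /* set_pageblock_migratetype(page, MIGRATE_ISOLATE); */
            if (page_on_pcp) {                /* drain_all_pages(zone) */
                    page_on_pcp = 0;
                    page_on_buddy = 1;
            }
            pthread_mutex_unlock(&lock);
            return NULL;
    }

    /* Thread#1's side: the commit step files the page under the stale type. */
    static void *free_thread(void *arg)
    {
            (void)arg;
            sleep(1);                 /* lose the race: the drain already ran */
            pthread_mutex_lock(&lock);
            /* list_add(&page->lru, &pcp->lists[cached_type]); */
            page_on_pcp = 1;
            pthread_mutex_unlock(&lock);
            return NULL;
    }

    int main(void)
    {
            pthread_t t1, t2;

            pthread_create(&t2, NULL, offline_thread, NULL);
            pthread_create(&t1, NULL, free_thread, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);

            /* The page is stranded on a drained pcp list; no further drain
             * is scheduled, so __offline_pages() would retry forever. */
            printf("on_pcp=%d on_buddy=%d cached_type=%d\n",
                   page_on_pcp, page_on_buddy, cached_type);
            return 0;
    }

Built with `cc -pthread race.c`, this prints on_pcp=1 on_buddy=0: the
page is stranded exactly as in the changelog, and the patch's extra
drain_all_pages() after a failed check_pages_isolated_cb() corresponds
to giving the offline side a second drain pass here.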