Date: Wed, 2 Sep 2020 16:08:51 +0200
From: Michal Hocko
To: Pavel Tatashin
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org, linux-mm@kvack.org
Subject: Re: [PATCH] mm/memory_hotplug: drain per-cpu pages again during memory offline
Message-ID: <20200902140851.GJ4617@dhcp22.suse.cz>
References: <20200901124615.137200-1-pasha.tatashin@soleen.com>
In-Reply-To: <20200901124615.137200-1-pasha.tatashin@soleen.com>

On Tue 01-09-20 08:46:15, Pavel Tatashin wrote:
> There is a race during page offline that can lead to an infinite loop:
> a page never ends up on a buddy list, and __offline_pages() keeps
> retrying indefinitely or until a termination signal is received.
>
> Thread#1 - a new process:
>
>   load_elf_binary
>    begin_new_exec
>     exec_mmap
>      mmput
>       exit_mmap
>        tlb_finish_mmu
>         tlb_flush_mmu
>          release_pages
>           free_unref_page_list
>            free_unref_page_prepare
>             set_pcppage_migratetype(page, migratetype);
>             // sets page->index to a migratetype below MIGRATE_PCPTYPES
>
> Thread#2 - hot-removes memory
>
>   __offline_pages
>    start_isolate_page_range
>     set_migratetype_isolate
>      set_pageblock_migratetype(page, MIGRATE_ISOLATE);
>      // sets the pageblock migratetype to MIGRATE_ISOLATE
>     drain_all_pages(zone);
>     // drains the per-cpu page lists to the buddy allocator

It is not really clear to me how we could have passed
has_unmovable_pages at this stage when the page is not PageBuddy. Is
this because you are using Movable Zones?

> Thread#1 - continues
>
>   free_unref_page_commit
>    migratetype = get_pcppage_migratetype(page);
>    // gets the old migration type, cached before the drain
>    list_add(&page->lru, &pcp->lists[migratetype]);
>    // adds the page to an already drained pcp list
>
> Thread#2
>
> Never drains the pcp again, and therefore gets stuck in the loop.
>
> The fix is to try to drain the per-cpu lists again after
> check_pages_isolated_cb() fails.

But this means that the page is not isolated and so it could be reused
for something else. No?

> Signed-off-by: Pavel Tatashin
> Cc: stable@vger.kernel.org
> ---
>  mm/memory_hotplug.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index e9d5ab5d3ca0..d6d54922bfce 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1575,6 +1575,15 @@ static int __ref __offline_pages(unsigned long start_pfn,
>  		/* check again */
>  		ret = walk_system_ram_range(start_pfn, end_pfn - start_pfn,
>  					    NULL, check_pages_isolated_cb);
> +		/*
> +		 * Per-cpu pages are drained in start_isolate_page_range, but
> +		 * if there are still pages that are not free, make sure we
> +		 * drain again, because when we isolated the range we might
> +		 * have raced with another thread that was adding pages to
> +		 * a pcp list.
> +		 */
> +		if (ret)
> +			drain_all_pages(zone);
>  	} while (ret);
>
>  	/* Ok, all of our target is isolated.
> --
> 2.25.1

-- 
Michal Hocko
SUSE Labs
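For what it's worth, the interleaving quoted above can be modeled in userspace. The sketch below is only an illustration, not kernel code: Python threads stand in for the two kernel paths, and the names `PcpList`, `buddy`, `freeing_thread`, and `offline_thread` are invented for the sketch. It deterministically loses the race the same way the trace does: the freeing side picks its destination list first, the offliner drains once, the page then lands on the already drained list, and only a second drain after the isolation check fails moves it along.

```python
import threading

class PcpList:
    """Toy stand-in for a per-cpu page list (hypothetical, not kernel API)."""
    def __init__(self):
        self.pages = []
        self.lock = threading.Lock()

    def add(self, page):
        with self.lock:
            self.pages.append(page)

    def drain(self):
        # Hand everything to the caller, as drain_all_pages() would.
        with self.lock:
            drained, self.pages = self.pages, []
        return drained

pcp = PcpList()
buddy = []                      # stands in for the buddy free lists
first_drain_done = threading.Event()

def freeing_thread():
    # free_unref_page_prepare: the destination list is chosen here ...
    page = "page"
    first_drain_done.wait()     # deterministically lose the race
    # ... free_unref_page_commit: lands on the already drained list.
    pcp.add(page)

def offline_thread():
    # start_isolate_page_range -> drain_all_pages: the single, too-early drain.
    buddy.extend(pcp.drain())
    first_drain_done.set()

t1 = threading.Thread(target=freeing_thread)
t2 = threading.Thread(target=offline_thread)
t1.start(); t2.start(); t1.join(); t2.join()

# The page now sits on the pcp list and the buddy list is empty, which is
# exactly the state __offline_pages() would keep retrying on.
assert buddy == [] and pcp.pages == ["page"]

# The patch's remedy: when the isolation check fails, drain once more.
if pcp.pages:                   # check_pages_isolated_cb() "failed"
    buddy.extend(pcp.drain())

print(len(buddy))               # the page finally reaches the buddy list
```

The `threading.Event` replaces real scheduler nondeterminism so the bad ordering happens on every run; in the kernel the window is the gap between caching the migratetype and committing the page to the list.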