Date: Fri, 18 Oct 2019 13:34:37 +0200
From: Michal Hocko
To: David Hildenbrand
Cc: Naoya Horiguchi, Qian Cai, "linux-kernel@vger.kernel.org", "linux-mm@kvack.org", Mike Kravetz
Subject: Re: memory offline infinite loop after soft offline
Message-ID: <20191018113437.GJ5017@dhcp22.suse.cz>
References: <20191017093410.GA19973@hori.linux.bs1.fc.nec.co.jp>
 <20191017100106.GF24485@dhcp22.suse.cz>
 <1571335633.5937.69.camel@lca.pw>
 <20191017182759.GN24485@dhcp22.suse.cz>
 <20191018021906.GA24978@hori.linux.bs1.fc.nec.co.jp>
 <33946728-bdeb-494a-5db8-e279acebca47@redhat.com>
 <20191018082459.GE5017@dhcp22.suse.cz>
 <20191018085528.GG5017@dhcp22.suse.cz>
 <3ac0ad7a-7dd2-c851-858d-2986fa8d44b6@redhat.com>
In-Reply-To: <3ac0ad7a-7dd2-c851-858d-2986fa8d44b6@redhat.com>

On Fri 18-10-19 13:00:45, David Hildenbrand wrote:
> On 18.10.19 10:55, Michal Hocko wrote:
> > On Fri 18-10-19 10:38:21, David Hildenbrand wrote:
> > > On 18.10.19 10:24, Michal Hocko wrote:
> > > > On Fri 18-10-19 10:13:36, David Hildenbrand wrote:
> > > > [...]
> > > > > However, if the compound page spans multiple pageblocks
> > > >
> > > > Although hugetlb pages spanning pageblocks are possible, this shouldn't
> > > > matter in __test_page_isolated_in_pageblock because that function doesn't
> > > > really operate on pageblocks as the name suggests. It simply
> > > > traverses all valid RAM ranges (see walk_system_ram_range).
> > >
> > > As long as the hugepages don't span memory blocks/sections, you are right. I
> > > have no experience with gigantic pages in this regard.
> >
> > They can clearly span sections (1GB is larger than 128MB). Why do you
> > think it matters, actually? walk_system_ram_range walks RAM ranges, and no
> > allocation should span holes in RAM, right?
> 
> Let's explore what I was thinking. If we can agree that any compound page is
> always aligned to its size, then what I describe here is not applicable. I know
> it is true for gigantic pages.
> 
> Some extreme example to clarify:
> 
> [ memory block 0 (128MB) ][ memory block 1 (128MB) ]
>              [ compound page (128MB) ]
> 
> If you offline memory block 1 and detect PG_offline on the first
> page of that memory block (PageHWPoison(compound_head(page))), you would
> jump over the whole memory block (pfn += 1 << compound_order(page)), leaving
> 64MB of the memory block unchecked.
> 
> Again, if any compound page has the alignment restriction (PFN of head
> aligned to 1 << compound_order(page)), this is not possible.
> 
> If it is possible, however, the "clean" thing would be to only jump over
> the remaining part of the compound page, e.g., something like
> 
> pfn += (1 << compound_order(page)) - (page - compound_head(page));

OK, I see what you mean now. In other words, similar to eeb0efd071d82.
-- 
Michal Hocko
SUSE Labs
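
[For reference, a minimal user-space sketch of the skip arithmetic being discussed, assuming 4KB pages and the 128MB straddling example above; the macros and variable names are illustrative only, not kernel code.]

/*
 * Minimal user-space sketch of the pfn-skip arithmetic above (not kernel
 * code; the 4KB page size and all names here are assumptions). It models
 * a 128MB compound page straddling two 128MB memory blocks, with the
 * isolation walk starting at the first pfn of block 1.
 */
#include <stdio.h>

#define PAGE_SHIFT	12				/* assume 4KB pages */
#define PAGES_PER_BLOCK	((128UL << 20) >> PAGE_SHIFT)	/* 32768 pages per 128MB block */
#define COMPOUND_ORDER	15				/* order of a 128MB compound page */

int main(void)
{
	unsigned long head_pfn     = PAGES_PER_BLOCK / 2;	/* head sits mid block 0 */
	unsigned long compound_end = head_pfn + (1UL << COMPOUND_ORDER);
	unsigned long block1_start = PAGES_PER_BLOCK;
	unsigned long block1_end   = 2 * PAGES_PER_BLOCK;

	/* The walk hits the compound page at the first pfn of block 1 (a tail page). */
	unsigned long pfn = block1_start;

	/* Skipping the full compound order from that tail pfn ... */
	unsigned long full_skip = pfn + (1UL << COMPOUND_ORDER);

	/*
	 * ... versus skipping only the remaining part of the compound page,
	 * i.e. (1 << compound_order(page)) - (page - compound_head(page)).
	 */
	unsigned long remainder_skip = pfn + (1UL << COMPOUND_ORDER) - (pfn - head_pfn);

	printf("compound page covers pfns [%lu, %lu), block 1 is [%lu, %lu)\n",
	       head_pfn, compound_end, block1_start, block1_end);
	printf("full-order skip lands at pfn %lu, jumping over %luMB that is not part of the compound page\n",
	       full_skip, ((full_skip - compound_end) << PAGE_SHIFT) >> 20);
	printf("remainder skip lands at pfn %lu, the first pfn after the compound page\n",
	       remainder_skip);
	return 0;
}

[The remainder skip mirrors, as far as I can tell, the computation eeb0efd071d82 added for gigantic hugepages in scan_movable_pages().]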