Date: Wed, 18 Jul 2018 10:50:32 +0200
From: Michal Hocko
To: Naoya Horiguchi
Cc: "linux-mm@kvack.org", Andrew Morton, "xishi.qiuxishi@alibaba-inc.com",
 "zy.zhengyi@alibaba-inc.com", "linux-kernel@vger.kernel.org"
Subject: Re: [PATCH v2 1/2] mm: fix race on soft-offlining free huge pages
Message-ID: <20180718085032.GS7193@dhcp22.suse.cz>
References: <1531805552-19547-1-git-send-email-n-horiguchi@ah.jp.nec.com>
 <1531805552-19547-2-git-send-email-n-horiguchi@ah.jp.nec.com>
 <20180717142743.GJ7193@dhcp22.suse.cz>
 <20180718005528.GA12184@hori1.linux.bs1.fc.nec.co.jp>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20180718005528.GA12184@hori1.linux.bs1.fc.nec.co.jp>
User-Agent: Mutt/1.10.0 (2018-05-17)

On Wed 18-07-18 00:55:29, Naoya Horiguchi wrote:
> On Tue, Jul 17, 2018 at 04:27:43PM +0200, Michal Hocko wrote:
> > On Tue 17-07-18 14:32:31, Naoya Horiguchi wrote:
> > > There's a race condition between soft offline and hugetlb_fault which
> > > causes unexpected process killing and/or hugetlb allocation failure.
> > >
> > > The process killing is caused by the following flow:
> > >
> > >   CPU 0               CPU 1               CPU 2
> > >
> > >   soft offline
> > >     get_any_page
> > >     // find the hugetlb is free
> > >                       mmap a hugetlb file
> > >                       page fault
> > >                         ...
> > >                         hugetlb_fault
> > >                           hugetlb_no_page
> > >                             alloc_huge_page
> > >                             // succeed
> > >     soft_offline_free_page
> > >     // set hwpoison flag
> > >                                           mmap the hugetlb file
> > >                                           page fault
> > >                                             ...
> > >                                             hugetlb_fault
> > >                                               hugetlb_no_page
> > >                                                 find_lock_page
> > >                                                 return VM_FAULT_HWPOISON
> > >                                               mm_fault_error
> > >                                                 do_sigbus
> > >                                                 // kill the process
> > >
> > > The hugetlb allocation failure comes from the following flow:
> > >
> > >   CPU 0                           CPU 1
> > >
> > >                                   mmap a hugetlb file
> > >                                   // reserve all free pages but don't fault-in
> > >   soft offline
> > >     get_any_page
> > >     // find the hugetlb is free
> > >     soft_offline_free_page
> > >     // set hwpoison flag
> > >       dissolve_free_huge_page
> > >       // fail because all free hugepages are reserved
> > >                                   page fault
> > >                                     ...
> > >                                     hugetlb_fault
> > >                                       hugetlb_no_page
> > >                                         alloc_huge_page
> > >                                           ...
> > >                                           dequeue_huge_page_node_exact
> > >                                           // ignore hwpoisoned hugepage
> > >                                           // and finally fail due to no-mem
> > >
> > > The root cause of this is that the current soft-offline code is written
> > > based on the assumption that the PageHWPoison flag should be set first to
> > > avoid accessing the corrupted data. This makes sense for memory_failure()
> > > or hard offline, but does not for soft offline, because soft offline is
> > > about corrected (not uncorrected) errors and is safe from data loss.
> > > This patch changes soft offline semantics to set the PageHWPoison flag
> > > only after containment of the error page completes successfully.
> >
> > Could you please expand on the workflow here? The code is really
> > hard to grasp. I must be missing something because the thing shouldn't
> > really be complicated. Either the page is in the free pool and you just
> > remove it from the allocator (with hugetlb asking for a new hugetlb page
> > to guarantee reserves), or it is used and you just migrate the content to
> > a new page (again with the hugetlb reserves consideration). Why should
> > PageHWPoison flag ordering have any relevance?
>
> (Considering soft offlining of a free hugepage,)
> PageHWPoison is set first before this patch, which is racy with the
> hugetlb fault code because it's not protected by hugetlb_lock.
>
> Originally this was written in a similar manner to hard-offline, where
> the race is accepted and the PageHWPoison flag is set as soon as possible.
> But actually that's found to be neither necessary nor correct, because soft
> offline is supposed to be less aggressive and failure is OK.

OK

> So this patch is suggesting to make soft-offline less aggressive by
> moving SetPageHWPoison into the lock.

I guess I still do not understand why we should even care about the
ordering of the HWPoison flag setting. Why cannot we simply have the
following code flow? Or maybe we are doing that already and I just do
not follow the code:

	soft_offline
	  check page_count
	    - free
	      - normal page: remove from the allocator
	      - hugetlb: allocate a new hugetlb page && remove from the pool
	    - used
	      - migrate to a new page && never release the old one
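
[Editorial aside: below is a minimal, self-contained C sketch of the flow
outlined above. It is not kernel code: the struct and helper names
(remove_from_allocator(), replace_free_hugetlb_page(), migrate_to_new_page())
are hypothetical stand-ins for the real mm/memory-failure.c and mm/hugetlb.c
paths. It only illustrates the branch structure being discussed, in which the
page is either pulled out of circulation or migrated, so the ordering of the
HWPoison flag never becomes visible to a faulting process.

/*
 * NOT kernel code: a schematic userspace model of the soft-offline flow
 * sketched above. All helpers are stubs with hypothetical names.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	int refcount;		/* 0 == page sits in the free pool */
	bool is_hugetlb;
};

/* free normal page: just take it out of the buddy allocator */
static bool remove_from_allocator(struct fake_page *p) { (void)p; return true; }

/* free hugetlb page: allocate a replacement and remove the bad page from
 * the pool, so the reserve count stays intact */
static bool replace_free_hugetlb_page(struct fake_page *p) { (void)p; return true; }

/* in-use page: copy the contents to a new page and never hand the old
 * page back to the allocator */
static bool migrate_to_new_page(struct fake_page *p) { (void)p; return true; }

static int soft_offline_sketch(struct fake_page *p)
{
	if (p->refcount == 0) {			/* page is free */
		if (p->is_hugetlb)
			return replace_free_hugetlb_page(p) ? 0 : -1;
		return remove_from_allocator(p) ? 0 : -1;
	}
	/* page is in use */
	return migrate_to_new_page(p) ? 0 : -1;
}

int main(void)
{
	struct fake_page free_huge = { .refcount = 0, .is_hugetlb = true };

	printf("soft offline of a free hugetlb page: %s\n",
	       soft_offline_sketch(&free_huge) ? "failed" : "contained");
	return 0;
}

End of editorial aside.]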
Why do we even need the HWPoison flag here? Everything can be completely
transparent to the application. It shouldn't fail from what I understood.

> > Do I get it right that the only difference between the hard and soft
> > offlining is that hugetlb reserves might break for the former while not
> > for the latter
>
> Correct.
>
> > and that the failed migration kills all owners for the
> > former while not for the latter?
>
> Hard-offline doesn't cause any page migration because the data is already
> lost, but yes, it can kill the owners.
> Soft-offline never kills processes even if it fails (due to migration
> failure or some other reason).
>
> I listed below some common points and differences between hard-offline
> and soft-offline.
>
> common points
>   - they are both contained by the PageHWPoison flag,
>   - errors are injected via similar interfaces.
>
> differences
>   - the data on the page is considered lost in hard offline, but not
>     in soft offline,
>   - hard offline likely kills the affected processes, but soft offline
>     never kills processes,
>   - soft offline causes page migration, but hard offline does not,
>   - hard offline prioritizes preventing consumption of broken data while
>     accepting some races, and soft offline prioritizes not impacting
>     userspace while accepting failure.
>
> Looks to me that there are more differences than common points.

Thanks for the summary. It certainly helped me.

-- 
Michal Hocko
SUSE Labs