Date: Wed, 1 Sep 2021 21:42:00 +0800
From: Feng Tang
To: David Rientjes, Michal Hocko
Cc: linux-mm@kvack.org, Andrew Morton, Christian Brauner,
 linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH] mm/oom: detect and kill task which has allocation forbidden by cpuset limit
Message-ID: <20210901134200.GA50993@shbuild999.sh.intel.com>
References: <1630399085-70431-1-git-send-email-feng.tang@intel.com>
 <52d80e9-cf27-9a59-94fd-d27a1e2dac6f@google.com>
 <20210901024402.GB46357@shbuild999.sh.intel.com>
In-Reply-To: <20210901024402.GB46357@shbuild999.sh.intel.com>

On Wed, Sep 01, 2021 at 10:44:02AM +0800, Tang, Feng wrote:
[SNIP]
> > So I'd agree in this case that it would be better to simply fail the
> > allocation.
>
> I agree with yours and Michal's comments, putting it in the OOM code
> is a little late and wastes cpu cycles.
>
> > Feng, would you move this check to __alloc_pages_may_oom() like the other
> > special cases and simply fail rather than call into the oom killer?
>
> Will explore more in this direction, thanks!

I tried the patch below, which solves the blind-killing issue: the docker
processes now see page allocation errors and eventually quit running.

Thanks,
Feng

---
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eeb3a9cb36bb..d1ae77be45a2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4271,10 +4271,18 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 		.gfp_mask = gfp_mask,
 		.order = order,
 	};
-	struct page *page;
+	struct page *page = NULL;
+	struct zoneref *z;
 
 	*did_some_progress = 0;
 
+	if (cpusets_enabled() && (gfp_mask & __GFP_HARDWALL)) {
+		z = first_zones_zonelist(ac->zonelist,
+				gfp_zone(gfp_mask), &cpuset_current_mems_allowed);
+		if (!z->zone)
+			goto out;
+	}
+
 	/*
 	 * Acquire the oom lock.  If that fails, somebody else is
 	 * making progress for us.