Date: Tue, 7 Jul 2020 13:41:20 +0800
From: Feng Tang
To: "Huang, Ying"
Cc: Andi Kleen, Qian Cai, Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter, kernel test robot, Michal Hocko, Johannes Weiner, Matthew Wilcox, Mel Gorman, Kees Cook, Luis Chamberlain, Iurii Zaikin, tim.c.chen@intel.com,
dave.hansen@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, lkp@lists.01.org
Subject: Re: [mm] 4e2c82a409: ltp.overcommit_memory01.fail
Message-ID: <20200707054120.GC21741@shbuild999.sh.intel.com>
References: <20200705044454.GA90533@shbuild999.sh.intel.com> <20200705125854.GA66252@shbuild999.sh.intel.com> <20200705155232.GA608@lca.pw> <20200706014313.GB66252@shbuild999.sh.intel.com> <20200706023614.GA1231@lca.pw> <20200706132443.GA34488@shbuild999.sh.intel.com> <20200706133434.GA3483883@tassilo.jf.intel.com> <20200707023829.GA85993@shbuild999.sh.intel.com> <87zh8c7z5i.fsf@yhuang-dev.intel.com>
In-Reply-To: <87zh8c7z5i.fsf@yhuang-dev.intel.com>

On Tue, Jul 07, 2020 at 12:00:09PM +0800, Huang, Ying wrote:
> Feng Tang writes:
>
> > On Mon, Jul 06, 2020 at 06:34:34AM -0700, Andi Kleen wrote:
> >> >  	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
> >> > -	if (ret == 0 && write)
> >> > +	if (ret == 0 && write) {
> >> > +		if (sysctl_overcommit_memory == OVERCOMMIT_NEVER)
> >> > +			schedule_on_each_cpu(sync_overcommit_as);
> >>
> >> The schedule_on_each_cpu is not atomic, so the problem could still happen
> >> in that window.
> >>
> >> I think it may be ok if it eventually resolves, but certainly needs
> >> a comment explaining it. Can you do some stress testing toggling the
> >> policy all the time on different CPUs and running the test on
> >> other CPUs and see if the test fails?
> >
> > For the raw test case reported by 0day, this patch passed 200 runs.
> > And I will read the ltp code and try stress testing it as you
> > suggested.
> >
> >> The other alternative would be to define some intermediate state
> >> for the sysctl variable and only switch to never once the schedule_on_each_cpu
> >> returned. But that's more complexity.
> >
> > One thought I had is to put this schedule_on_each_cpu() before
> > the proc_dointvec_minmax() to do the sync before sysctl_overcommit_memory
> > is really changed. But the window still exists, as the batch is
> > still the larger one.
>
> Can we change the batch firstly, then sync the global counter, finally
> change the overcommit policy?

These reorderings are really head scratching :) I thought about this
before, when Qian Cai first reported the warning message, as the kernel
had a check:

	VM_WARN_ONCE(percpu_counter_read(&vm_committed_as) <
		     -(s64)vm_committed_as_batch * num_online_cpus(),
		     "memory commitment underflow");

If the batch is decreased first, the warning will be easier/earlier to
trigger, so I didn't bring this up when handling the warning message.
But it might work now, as the warning has been removed.

Thanks,
Feng