Date: Thu, 9 Jul 2020 22:15:19 +0800
From: Feng Tang
To: Qian Cai
Cc: "Huang, Ying", Andi Kleen, Andrew Morton, Michal Hocko, Dennis
    Zhou, Tejun Heo, Christoph Lameter, kernel test robot,
    Johannes Weiner, Matthew Wilcox, Mel Gorman, Kees Cook,
    Luis Chamberlain, Iurii Zaikin, tim.c.chen@intel.com,
    dave.hansen@intel.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, lkp@lists.01.org
Subject: Re: [mm] 4e2c82a409: ltp.overcommit_memory01.fail
Message-ID: <20200709141519.GA81727@shbuild999.sh.intel.com>
References: <20200705155232.GA608@lca.pw>
 <20200706014313.GB66252@shbuild999.sh.intel.com>
 <20200706023614.GA1231@lca.pw>
 <20200706132443.GA34488@shbuild999.sh.intel.com>
 <20200706133434.GA3483883@tassilo.jf.intel.com>
 <20200707023829.GA85993@shbuild999.sh.intel.com>
 <87zh8c7z5i.fsf@yhuang-dev.intel.com>
 <20200707054120.GC21741@shbuild999.sh.intel.com>
 <20200709045554.GA56190@shbuild999.sh.intel.com>
 <20200709134040.GA1110@lca.pw>
In-Reply-To: <20200709134040.GA1110@lca.pw>

Hi Qian Cai,

On Thu, Jul 09, 2020 at 09:40:40AM -0400, Qian Cai wrote:
> > > > Can we change the batch firstly, then sync the global counter, finally
> > > > change the overcommit policy?
> > >
> > > These reorderings are really head-scratching :)
> > >
> > > I've thought about this before, when Qian Cai first reported the warning
> > > message, as the kernel had a check:
> > >
> > > 	VM_WARN_ONCE(percpu_counter_read(&vm_committed_as) <
> > > 			-(s64)vm_committed_as_batch * num_online_cpus(),
> > > 			"memory commitment underflow");
> > >
> > > If the batch is decreased first, the warning is triggered earlier and more
> > > easily, so I didn't bring this up when handling the warning message.
> > >
> > > But it might work now, as the warning has been removed.
> >
> > I tested the reordered way, and the test passed in 100 runs. The new
> > order when changing the policy to OVERCOMMIT_NEVER is:
> >   1. re-compute the batch (to the smaller one)
> >   2. do the on_each_cpu sync
> >   3. really change the policy to NEVER
> >
> > It solves one of the previous concerns: after the sync is done on cpuX,
> > but before the sync on all cpus is done, there is a window in which
> > the percpu counter could be enlarged again.
> >
> > IIRC Andi had a concern about the read-side cost of doing the sync; my
> > understanding is that most of the readers (malloc/free/map/unmap) use
> > percpu_counter_read_positive, which is a lockless fast path.
> >
> > As for the problem itself, I agree with Michal's point that there is
> > usually no normal case that changes the overcommit policy very frequently.
> >
> > The code logic is mainly in overcommit_policy_handler(), based on the
> > previous sync fix. Please help to review, thanks!
> >
> > int overcommit_policy_handler(struct ctl_table *table, int write, void *buffer,
> > 		size_t *lenp, loff_t *ppos)
> > {
> > 	int ret;
> >
> > 	if (write) {
> > 		int new_policy;
> > 		struct ctl_table t;
> >
> > 		t = *table;
> > 		t.data = &new_policy;
> > 		ret = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
> > 		if (ret)
> > 			return ret;
> >
> > 		mm_compute_batch(new_policy);
> > 		if (new_policy == OVERCOMMIT_NEVER)
> > 			schedule_on_each_cpu(sync_overcommit_as);
> > 		sysctl_overcommit_memory = new_policy;
> > 	} else {
> > 		ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
> > 	}
> >
> > 	return ret;
> > }
>
> Rather than having to indent that many lines, how about this?
Thanks for the cleanup suggestion.

> 	t = *table;
> 	t.data = &new_policy;

The input table->data is actually &sysctl_overcommit_memory, so there is
a problem for the "read" case: it would return the 'new_policy' value
instead of the real sysctl_overcommit_memory. It should work after adding
a check:

	if (write)
		t.data = &new_policy;

> 	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
	                           --> &t

Thanks,
Feng

> 	if (ret || !write)
> 		return ret;
>
> 	mm_compute_batch(new_policy);
> 	if (new_policy == OVERCOMMIT_NEVER)
> 		schedule_on_each_cpu(sync_overcommit_as);
>
> 	sysctl_overcommit_memory = new_policy;
> 	return ret;
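
For reference, a rough and untested sketch that merges the suggested
layout with the two fixes noted inline above (pass &t to
proc_dointvec_minmax(), and only redirect t.data for the write case);
the final patch may well end up looking different:

int overcommit_policy_handler(struct ctl_table *table, int write, void *buffer,
		size_t *lenp, loff_t *ppos)
{
	struct ctl_table t;
	int new_policy;
	int ret;

	t = *table;
	if (write)			/* reads keep reporting sysctl_overcommit_memory */
		t.data = &new_policy;
	ret = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
	if (ret || !write)
		return ret;

	/* shrink the batch and drain the percpu counters before flipping the policy */
	mm_compute_batch(new_policy);
	if (new_policy == OVERCOMMIT_NEVER)
		schedule_on_each_cpu(sync_overcommit_as);

	sysctl_overcommit_memory = new_policy;
	return ret;
}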
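
And on the read-side cost mentioned earlier in the thread: the snippet
below is only a schematic of the generic percpu_counter API (the helper
names are made up, not the actual mm/ call sites), to illustrate why the
common allocation paths stay cheap while only the policy switch pays for
an exact, per-cpu walk:

/*
 * Schematic only: percpu_counter_read_positive() returns the central,
 * possibly slightly stale count without taking any lock, while
 * percpu_counter_sum() takes the counter's lock and folds in every
 * cpu's pending delta for an exact value.
 */
static inline s64 committed_fast_read(struct percpu_counter *fbc)
{
	return percpu_counter_read_positive(fbc);	/* lockless, O(1) */
}

static inline s64 committed_exact_read(struct percpu_counter *fbc)
{
	return percpu_counter_sum(fbc);			/* locks and walks all cpus */
}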