Date: Sun, 22 Mar 2020 09:36:49 -0700
From: Shakeel Butt
To: Andrew Morton
Cc: Rafael Aquini, LKML, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: Re: [PATCH] tools/testing/selftests/vm/mlock2-tests: fix mlock2 false-negative errors
In-Reply-To: <20200321213142.597e23af955de653fc4db7a1@linux-foundation.org>
References: <20200322013525.1095493-1-aquini@redhat.com>
 <20200321184352.826d3dba38aecc4ff7b32e72@linux-foundation.org>
 <20200322020326.GB1068248@t490s>
 <20200321213142.597e23af955de653fc4db7a1@linux-foundation.org>

On Sat, Mar 21, 2020 at 9:31 PM Andrew Morton wrote:
>
> On Sat, 21 Mar 2020 22:03:26 -0400 Rafael Aquini wrote:
>
> > > > + * In order to sort out that race, and get the after-fault checks consistent,
> > > > + * the "quick and dirty" trick below is required in order to force a call to
> > > > + * lru_add_drain_all() to get the recently MLOCK_ONFAULT pages moved to
> > > > + * the unevictable LRU, as expected by the checks in this selftest.
> > > > + */
> > > > +static void force_lru_add_drain_all(void)
> > > > +{
> > > > +	sched_yield();
> > > > +	system("echo 1 > /proc/sys/vm/compact_memory");
> > > > +}
> > >
> > > What is the sched_yield() for?
> > >
> >
> > Mostly it's there to provide a sleeping gap after the fault, without
> > actually adding an arbitrary value with usleep().
> >
> > It's not a hard requirement, but, in some of the tests I performed
> > (without that sleeping gap) I would still see around a 1% chance
> > of hitting the false negative. After adding it I could not hit
> > the issue anymore.
>
> It's concerning that such deep machinery as pagevec draining is visible
> to userspace.
>

We already have other examples, like memcg stats, where optimizations
such as batching the per-CPU stats collection expose differences to
userspace. I would not be that worried here.

> I suppose that for consistency and correctness we should perform a
> drain prior to each read from /proc/*/pagemap. Presumably this would
> be far too expensive.
>
> Is there any other way? One such might be to make the MLOCK_ONFAULT
> pages bypass the lru_add_pvecs?
>

I would rather prefer to have something similar to
/proc/sys/vm/stat_refresh which drains the pagevecs.
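
For reference, below is a minimal userspace sketch of the two approaches
discussed in this thread. The write to /proc/sys/vm/compact_memory is the
"quick and dirty" trick from the patch, which forces a call to
lru_add_drain_all() as a side effect; /proc/sys/vm/drain_pagevecs is a
purely hypothetical knob standing in for the stat_refresh-style interface
suggested above, and does not exist in any kernel.

#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a small string to a sysctl file; returns 0 on success. */
static int write_sysctl(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return -1;
	if (write(fd, val, strlen(val)) != (ssize_t)strlen(val)) {
		close(fd);
		return -1;
	}
	return close(fd);
}

static void drain_pagevecs(void)
{
	/* Give the fault path a chance to finish before draining. */
	sched_yield();

	/*
	 * HYPOTHETICAL dedicated knob, along the lines of the
	 * stat_refresh-like interface proposed in this thread.
	 */
	if (write_sysctl("/proc/sys/vm/drain_pagevecs", "1") == 0)
		return;

	/*
	 * Fallback: the trick from the patch under discussion. Per the
	 * comment in the patch, the compact_memory handler ends up
	 * calling lru_add_drain_all(), moving freshly MLOCK_ONFAULTed
	 * pages to the unevictable LRU before the selftest checks them.
	 */
	if (write_sysctl("/proc/sys/vm/compact_memory", "1"))
		perror("compact_memory");
}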