Date: Mon, 16 Nov 2020 13:58:03 +0100
From: Peter Zijlstra
To: Mel Gorman
Cc: Will Deacon, Davidlohr Bueso, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: Loadavg accounting error on arm64
Message-ID: <20201116125803.GB3121429@hirez.programming.kicks-ass.net>
References: <20201116091054.GL3371@techsingularity.net> <20201116114938.GN3371@techsingularity.net> <20201116125355.GB3121392@hirez.programming.kicks-ass.net>
In-Reply-To: <20201116125355.GB3121392@hirez.programming.kicks-ass.net>

On Mon, Nov 16, 2020 at 01:53:55PM +0100, Peter Zijlstra wrote:
> On Mon, Nov 16, 2020 at 11:49:38AM +0000, Mel Gorman wrote:
> > On Mon, Nov 16, 2020 at 09:10:54AM +0000, Mel Gorman wrote:
> > > I'll be looking again today to see whether I can find a mistake in the
> > > ordering of how sched_contributes_to_load is handled, but again, my
> > > lack of knowledge of the arm64 memory model means I'm a bit stuck and
> > > a second set of eyes would be nice :(
> > >
> >
> > This morning, it's not particularly clear what orders the visibility of
> > sched_contributes_to_load the same way other task fields are ordered in
> > the schedule vs try_to_wake_up paths. I thought the rq lock would have
> > ordered them, but something is clearly off or loadavg would not be
> > getting screwed. It could be done with an rmb and wmb (under test, and
> > it hasn't blown up so far), but that's far too heavy.
> > smp_load_acquire/smp_store_release might be sufficient, although it's
> > less clear whether arm64 gives the necessary guarantees.
> >
> > (This is still at the chucking-out-ideas stage, as I haven't context
> > switched all the memory barrier rules back in.)
>
> IIRC it should be so ordered by ->on_cpu.
>
> We have:
>
> 	schedule()
> 		prev->sched_contributes_to_load = X;
> 		smp_store_release(&prev->on_cpu, 0);
>
> on the one hand, and:

Ah, my bad, ttwu() itself will of course wait for !p->on_cpu before we
even get here.

> 	sched_ttwu_pending()
> 		if (WARN_ON_ONCE(p->on_cpu))
> 			smp_cond_load_acquire(&p->on_cpu, !VAL)
>
> 	ttwu_do_activate()
> 		if (p->sched_contributes_to_load)
> 			...
>
> on the other (for the remote case, which is the only 'interesting' one).

Also see the "Notes on Program-Order guarantees on SMP systems." comment.
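
FWIW, a minimal standalone C11 sketch of the release/acquire pairing being
described (this is not kernel code; the struct, thread names and the spin
loop are made up purely for illustration). The schedule() side does a plain
store of the flag and then releases ->on_cpu; the ttwu() side acquires
->on_cpu before looking at the flag, so the plain store is guaranteed to be
visible:

/*
 * Model of the ordering: publisher stores sched_contributes_to_load,
 * then releases ->on_cpu; consumer spins with an acquire load on
 * ->on_cpu (analogue of smp_cond_load_acquire()), then reads the flag.
 * Build with: cc -std=c11 -pthread sketch.c
 */
#include <stdatomic.h>
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct task {
	_Atomic int on_cpu;
	bool sched_contributes_to_load;	/* plain store, ordered via on_cpu */
};

static struct task t = { .on_cpu = 1 };

/* schedule()-side: set the flag, then release ->on_cpu. */
static void *prev_cpu(void *arg)
{
	(void)arg;
	t.sched_contributes_to_load = true;
	atomic_store_explicit(&t.on_cpu, 0, memory_order_release);
	return NULL;
}

/*
 * ttwu()-side: spin until ->on_cpu is clear (acquire); everything
 * stored before the matching release is then visible here.
 */
static void *waker_cpu(void *arg)
{
	(void)arg;
	while (atomic_load_explicit(&t.on_cpu, memory_order_acquire))
		;	/* smp_cond_load_acquire() analogue */
	assert(t.sched_contributes_to_load);	/* must never fire */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, prev_cpu, NULL);
	pthread_create(&b, NULL, waker_cpu, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

The point of the pairing is that sched_contributes_to_load itself never
needs to be an acquire/release (or barrier-protected) access: the
release store / acquire load on ->on_cpu alone carries the ordering.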