Date: Mon, 29 Jan 2018 18:20:18 +0100
From: Peter Zijlstra
To: Frederic Weisbecker
Cc: Ingo Molnar, LKML, Chris Metcalf, Thomas Gleixner, Luiz Capitulino,
	Christoph Lameter, "Paul E. McKenney", Wanpeng Li, Mike Galbraith,
	Rik van Riel
Subject: Re: [PATCH 4/6] sched/isolation: Residual 1Hz scheduler tick offload
Message-ID: <20180129172018.GN2249@hirez.programming.kicks-ass.net>
References: <1516320140-13189-1-git-send-email-frederic@kernel.org>
 <1516320140-13189-5-git-send-email-frederic@kernel.org>
 <20180129153839.GT2269@hirez.programming.kicks-ass.net>
 <20180129164832.GC2942@lerouge>
In-Reply-To: <20180129164832.GC2942@lerouge>
User-Agent: Mutt/1.9.2 (2017-12-15)
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jan 29, 2018 at 05:48:33PM +0100, Frederic Weisbecker wrote:
> On Mon, Jan 29, 2018 at 04:38:39PM +0100, Peter Zijlstra wrote:
> > I would very much like a few words on why sched_class::task_tick() is
> > safe to call remotely -- from a quick look I think it actually is, but
> > it would be good to have some words here.
>
> Let's rather say I can't prove that it is safe, given the amount of code
> behind the various flavours of scheduler features.
>
> But as far as I have checked, several times, nothing seems to be accessed
> locally in ::scheduler_tick(). Everything looks to be fetched from the
> target runqueue struct while it is locked.
>
> If we ever find local references such as "current" or "__this_cpu_*" in
> the path, we'll have to fix them.

Sure, but at least state that you audited the code, and for which issues.
That tells me you knew wth you were doing, and it gives more trust than
blindly changing random code ;-)

> > > +static void sched_tick_start(int cpu)
> > > +{
> > > +	struct tick_work *twork;
> > > +
> > > +	if (housekeeping_cpu(cpu, HK_FLAG_TICK))
> > > +		return;
> >
> > This all looks very static :-(, you can't reconfigure this nohz_full
> > crud after boot?
>
> Unfortunately yes. In fact, making the nohz interface dynamically
> available through cpuset is the next big step.

OK, fair enough.

> > > +	WARN_ON_ONCE(!tick_work_cpu);
> > > +
> > > +	twork = per_cpu_ptr(tick_work_cpu, cpu);
> > > +	twork->cpu = cpu;
> > > +	INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
> > > +	queue_delayed_work(system_unbound_wq, &twork->work, HZ);
> > > +}
> >
> > Similarly, I think we want a few words about how unbound workqueues are
> > expected to behave vs NUMA.
> >
> > AFAICT unbound workqueues by default prefer to run on a CPU in the same
> > node, but if no CPU is available, they don't go looking for the nearest
> > node that does have a CPU; they just punt to whatever random CPU.
>
> Yes, and in fact you just made me look into wq_select_unbound_cpu() and
> it looks worse than that. If the current CPU is not in wq_unbound_cpumask,
> a random one is picked from that global cpumask without trying a near
> one in the current node.
>
> Looks like room for improvement on the workqueue side. I'll see what I
> can do.

Great, thanks!