Date: Mon, 5 Oct 2020 13:19:20 +0200
From: Peter Zijlstra
To: Xi Wang
Cc: Paul Turner, Ingo Molnar, Juri Lelli, Vincent Guittot,
    Dietmar Eggemann, Steven Rostedt, Josh Don, LKML,
    linux-fsdevel@vger.kernel.org, Stephane Eranian,
    Arnaldo Carvalho de Melo
Subject: Re: [PATCH] sched: watchdog: Touch kernel watchdog in sched code
Message-ID: <20201005111920.GO2611@hirez.programming.kicks-ass.net>
References: <20200304213941.112303-1-xii@google.com>
 <20200305075742.GR2596@hirez.programming.kicks-ass.net>
 <20200306084039.GC12561@hirez.programming.kicks-ass.net>

On Fri, Mar 06, 2020 at 02:34:20PM -0800, Xi Wang wrote:
> On Fri, Mar 6, 2020 at 12:40 AM Peter Zijlstra wrote:
> >
> > On Thu, Mar 05, 2020 at 02:11:49PM -0800, Paul Turner wrote:
> > > The goal is to improve jitter, since we're constantly and
> > > periodically preempting other classes to run the watchdog. Even on
> > > a single CPU this is measurable as jitter in the us range. But what
> > > increases the motivation is that this disruption has recently been
> > > magnified by CPU "gifts", which require evicting the whole core
> > > when one of the siblings schedules one of these watchdog threads.
> > >
> > > The majority outcome being asserted here is that we could actually
> > > exercise pick_next_task if required -- there are other potential
> > > things this will catch, but generally speaking they are much more
> > > braindead (e.g. a bug in pick_next_task itself).
> >
> > I still utterly hate what the patch does, though; there is no way
> > I'll have watchdog code hook into the scheduler like this. That's
> > just asking for trouble.
> >
> > Why isn't it sufficient to sample the existing context-switch
> > counters from the watchdog? And why can't we fix that?
>
> We could go through pick_next_task and repick the same task. There
> won't be a context switch, but we still want to touch the watchdog. I
> assume such a counter also needs to be per-CPU and inside the rq lock.
> There doesn't seem to be an existing one that fits this purpose.

Sorry, your reply got lost, but I just ran into something that reminded
me of this.

There's sched_count. That's currently a schedstat, but if you can find
a spot in a hot cacheline (from schedule()'s perspective) then it
should be cheap to increment unconditionally.

If only someone were to write a useful cacheline perf tool (and no,
that c2c trainwreck doesn't count).
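[Editor's illustration: to make the counter-sampling idea concrete,
below is a small standalone C model of it. Every name in it --
sched_pass_count, watchdog_check, NR_CPUS -- is made up for
illustration; this is a sketch of the technique under discussion, not
kernel code and not the actual sched_count/schedstat implementation.
It shows the mechanism both sides describe: the scheduler bumps a
per-CPU counter on every pass through schedule(), including the
repick-the-same-task case Xi mentions, and the watchdog detects
scheduler progress by comparing snapshots of that counter instead of
scheduling a thread that preempts the CPU.]

/*
 * Userspace model of watchdog-by-counter-sampling. All names are
 * hypothetical; compile with: cc -std=c11 -o wd wd.c
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

struct cpu_state {
	/* bumped unconditionally by the "scheduler" on this CPU */
	atomic_ulong sched_pass_count;
	/* last value the watchdog saw for this CPU */
	unsigned long watchdog_snapshot;
};

static struct cpu_state cpus[NR_CPUS];

/*
 * Called on every trip through the model scheduler, even when the
 * same task is repicked and no context switch happens -- the case a
 * plain context-switch counter (nr_switches) would miss.
 */
static void sched_pass(int cpu)
{
	atomic_fetch_add_explicit(&cpus[cpu].sched_pass_count, 1,
				  memory_order_relaxed);
}

/*
 * Periodic watchdog check: returns true if the scheduler made
 * progress on @cpu since the last check, so the soft-lockup timeout
 * can be re-armed without ever running a watchdog thread there.
 */
static bool watchdog_check(int cpu)
{
	unsigned long now =
		atomic_load_explicit(&cpus[cpu].sched_pass_count,
				     memory_order_relaxed);
	bool progressed = (now != cpus[cpu].watchdog_snapshot);

	cpus[cpu].watchdog_snapshot = now;
	return progressed;
}

int main(void)
{
	/* CPU 0 schedules twice (one may be a repick); CPU 1 stalls. */
	sched_pass(0);
	sched_pass(0);

	for (int cpu = 0; cpu < 2; cpu++)
		printf("cpu%d: %s\n", cpu,
		       watchdog_check(cpu) ? "scheduler ran"
					   : "possible stall");
	return 0;
}

[The watchdog only needs eventual visibility, so a relaxed increment
suffices; and if the counter lives on a cacheline schedule() already
dirties, the increment is effectively free -- which, presumably, is
the point of the "hot cacheline" remark above.]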