Date: Fri, 28 Aug 2020 11:41:29 +0200
From: Jan Kara
To: peterz@infradead.org
Cc: Xianting Tian, viro@zeniv.linux.org.uk, bcrl@kvack.org, mingo@redhat.com,
        juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
        rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, jack@suse.cz,
        linux-fsdevel@vger.kernel.org, linux-aio@kvack.org, linux-kernel@vger.kernel.org,
        Tejun Heo
Subject: Re: [PATCH] aio: make aio wait path to account iowait time
Message-ID: <20200828094129.GF7072@quack2.suse.cz>
References: <20200828060712.34983-1-tian.xianting@h3c.com>
 <20200828090729.GT1362448@hirez.programming.kicks-ass.net>
In-Reply-To: <20200828090729.GT1362448@hirez.programming.kicks-ass.net>

On Fri 28-08-20 11:07:29, peterz@infradead.org wrote:
> On Fri, Aug 28, 2020 at 02:07:12PM +0800, Xianting Tian wrote:
> > As the normal aio wait path (read_events() ->
> > wait_event_interruptible_hrtimeout()) doesn't account iowait time, use
> > this patch to make it account iowait time, which can truly reflect
> > the system IO situation when using a tool like 'top'.
>
> Do be aware though that io_schedule() is potentially far more expensive
> than regular schedule() and io-wait accounting as a whole is a
> trainwreck.

Hum, I didn't know that io_schedule() is that much more expensive. Thanks
for the info.

> When in_iowait is set, schedule() and ttwu() will have to do additional
> atomic ops, and (much) worse, PSI will take additional locks.
>
> And all that for a number that, IMO, is mostly useless, see the comment
> with nr_iowait().

Well, I understand the limited usefulness of the system-wide or even
per-CPU percentage of time spent in IO wait. However, whether a particular
task is sleeping waiting for IO or not is IMO useful diagnostic
information, and there are several places in the kernel that take that
into account (PSI, hangcheck timer, cpufreq, ...). So I don't see that
properly accounting that a task is waiting for IO is just an "expensive
random number generator", as you mention below :). But I'm open to being
educated...

> But, if you don't care about performance, and want to see a shiny random
> number generator, by all means, use io_schedule().

								Honza
--
Jan Kara
SUSE Labs, CR
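
For context, a simplified sketch of what io_schedule() adds on top of a
plain schedule(), paraphrased from kernel/sched/core.c of roughly this era.
This is a condensed illustration, not verbatim kernel source; the PSI and
delay-accounting work triggered from __schedule()/try_to_wake_up() is only
summarized in comments:

    /* Simplified sketch -- not verbatim kernel code. */
    int io_schedule_prepare(void)
    {
            int old_iowait = current->in_iowait;

            current->in_iowait = 1;                   /* mark task as blocked on I/O */
            blk_schedule_flush_plug(current);         /* submit any plugged block requests */
            return old_iowait;
    }

    void io_schedule_finish(int token)
    {
            current->in_iowait = token;               /* restore the previous state */
    }

    void io_schedule(void)
    {
            int token = io_schedule_prepare();

            /*
             * Because current->in_iowait is set, __schedule() additionally
             * does atomic_inc(&rq->nr_iowait) plus delayacct and PSI
             * (TSK_IOWAIT) bookkeeping when the task blocks, and
             * try_to_wake_up() does the matching atomic_dec() and PSI
             * update on wakeup -- the extra cost referred to above.
             */
            schedule();
            io_schedule_finish(token);
    }

This is also why the existing read_events() ->
wait_event_interruptible_hrtimeout() path does not show up in iowait
statistics: it sleeps via a plain schedule() with in_iowait left clear.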