Date: Fri, 6 Sep 2019 09:48:21 +0800
From: Ming Lei
To: Daniel Lezcano
Cc: Keith Busch, Hannes Reinecke, Bart Van Assche, linux-scsi@vger.kernel.org,
    Peter Zijlstra, Long Li, John Garry, LKML, linux-nvme@lists.infradead.org,
    Jens Axboe, Ingo Molnar, Thomas Gleixner, Christoph Hellwig, Sagi Grimberg
Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
Message-ID: <20190906014819.GB27116@ming.t460p>
In-Reply-To: <6a36ccc7-24cd-1d92-fef1-2c5e0f798c36@linaro.org>

Hi Daniel,

On Thu, Sep 05, 2019 at 12:37:13PM +0200, Daniel Lezcano wrote:
>
> Hi Ming,
>
> On 05/09/2019 11:06, Ming Lei wrote:
> > On Wed, Sep 04, 2019 at 07:31:48PM +0200, Daniel Lezcano wrote:
> >> Hi,
> >>
> >> On 04/09/2019 19:07, Bart Van Assche wrote:
> >>> On 9/3/19 12:50 AM, Daniel Lezcano wrote:
> >>>> On 03/09/2019 09:28, Ming Lei wrote:
> >>>>> On Tue, Sep 03, 2019 at 08:40:35AM +0200, Daniel Lezcano wrote:
> >>>>>> It is a scheduler problem then ?
> >>>>>
> >>>>> Scheduler can do nothing if the CPU is taken completely by handling
> >>>>> interrupt & softirq, so seems not a scheduler problem, IMO.
> >>>>
> >>>> Why? If there is a irq pressure on one CPU reducing its capacity, the
> >>>> scheduler will balance the tasks on another CPU, no?
> >>>
> >>> Only if CONFIG_IRQ_TIME_ACCOUNTING has been enabled. However, I don't
> >>> know any Linux distro that enables that option. That's probably because
> >>> that option introduces two rdtsc() calls in each interrupt. Given the
> >>> overhead introduced by this option, I don't think this is the solution
> >>> Ming is looking for.
> >>
> >> Was this overhead reported somewhere ?
> >
> > The syscall of gettimeofday() calls ktime_get_real_ts64() which finally
> > calls tk_clock_read() which calls rdtsc too.
> >
> > But gettimeofday() is often used in fast path, and block IO_STAT needs to
> > read it too.
> >
> >>
> >>> See also irqtime_account_irq() in kernel/sched/cputime.c.
> >>
> >> From my POV, this framework could be interesting to detect this situation.
> >
> > Now we are talking about IRQ_TIME_ACCOUNTING instead of IRQ_TIMINGS, and the
> > former one could be used to implement the detection. And the only sharing
> > should be the read of timestamp.
>
> You did not share yet the analysis of the problem (the kernel warnings
> give the symptoms) and gave the reasoning for the solution. It is hard
> to understand what you are looking for exactly and how to connect the dots.

Let me explain it one more time.

When an IRQ flood happens on one CPU:

1) softirq handling on this CPU can't make progress

2) kernel threads bound to this CPU can't make progress

For example, the network stack may need softirq to xmit packets, another
irq thread may be handling keyboards/mice or whatever, or rcu_sched may
depend on that CPU to make progress; the irq flood then stalls the whole
system.

> AFAIU, there are fast medium where the responses to requests are faster
> than the time to process them, right?

Usually the medium is not faster than the CPU. What we are talking about
here is interrupts, which can originate from many devices concurrently;
for example, in Long Li's test there are 8 NVMe drives involved.

> I don't see how detecting IRQ flooding and use a threaded irq is the
> solution, can you explain?

When an IRQ flood is detected, we reserve a little time so that softirqs
and threads get a chance to be scheduled by the scheduler; then the above
problem can be avoided.
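Just to make that concrete, the detection side can be thought of as per-CPU
accounting of how much of the recent time was spent in hard interrupt
context. The sketch below is for illustration only and is not the code of
this series; the structure, the helper names, the 10ms window and the 50%
threshold are all made up here:

#include <linux/types.h>
#include <linux/percpu-defs.h>
#include <linux/sched/clock.h>		/* local_clock() */
#include <linux/time64.h>		/* NSEC_PER_MSEC */

/* everything below is hypothetical, for illustration only */
struct irq_flood_stat {
	u64	irq_start;	/* entry time of the current hardirq */
	u64	irq_time;	/* ns spent in hardirq in this window */
	u64	win_start;	/* start of the current sampling window */
};

static DEFINE_PER_CPU(struct irq_flood_stat, irq_flood_stat);

/* to be called on hardirq entry */
static void irq_flood_enter(void)
{
	this_cpu_ptr(&irq_flood_stat)->irq_start = local_clock();
}

/* to be called on hardirq exit; returns true if this CPU looks flooded */
static bool irq_flood_exit(void)
{
	struct irq_flood_stat *st = this_cpu_ptr(&irq_flood_stat);
	u64 now = local_clock();
	u64 window = now - st->win_start;
	bool flooded = false;

	st->irq_time += now - st->irq_start;

	/*
	 * Evaluate every 10ms: consider the CPU flooded if more than
	 * half of the window was spent in hardirq context.
	 */
	if (window >= 10 * NSEC_PER_MSEC) {
		flooded = st->irq_time * 2 > window;
		st->win_start = now;
		st->irq_time = 0;
	}

	return flooded;
}

The hooks would sit in the hardirq entry/exit paths, and the per-interrupt
cost is just the local_clock() reads, which is the timestamp sharing
mentioned above.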
> If the responses are coming at a very high rate, whatever the solution
> (interrupts, threaded interrupts, polling), we are still in the same
> situation.

When we move the interrupt handling into an irq thread, other softirqs,
threaded interrupts and threads get a chance to be scheduled, so we can
avoid stalling the whole system (see the minimal example in the P.S.
below).

> My suggestion was initially to see if the interrupt load will be taken
> into accounts in the cpu load and favorize task migration with the
> scheduler load balance to a less loaded CPU, thus the CPU processing
> interrupts will end up doing only that while other CPUs will handle the
> "threaded" side.
>
> Beside that, I'm wondering if the block scheduler should be somehow
> involved in that [1]

For NVMe or any multi-queue storage, the default I/O scheduler is 'none',
which basically does nothing except submit the IO as soon as possible.

Thanks,
Ming
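P.S. A minimal illustration of the threaded-irq point above, made up for
this mail and not taken from this series or from any real driver: the
hardirq handler only reports that there is work and wakes the irq thread,
and the heavy completion processing runs in the thread, where the
scheduler can interleave it with softirq and other tasks.

#include <linux/interrupt.h>

static irqreturn_t demo_hardirq(int irq, void *dev_id)
{
	/* keep the hardirq part minimal, defer the real work */
	return IRQ_WAKE_THREAD;
}

static irqreturn_t demo_thread_fn(int irq, void *dev_id)
{
	/* heavy completion processing runs here, in a schedulable thread */
	return IRQ_HANDLED;
}

static int demo_setup_irq(unsigned int irq, void *dev_id)
{
	/* IRQF_ONESHOT keeps the line masked until the thread has run */
	return request_threaded_irq(irq, demo_hardirq, demo_thread_fn,
				    IRQF_ONESHOT, "demo-threaded-irq", dev_id);
}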