From: longli@linuxonhyperv.com
To: Ingo Molnar, Peter Zijlstra, Keith Busch, Jens Axboe, Christoph Hellwig,
	Sagi Grimberg, linux-nvme@lists.infradead.org,
	linux-kernel@vger.kernel.org
Cc: Long Li
Subject: [PATCH 3/3] nvme: complete request in work queue on CPU with flooded interrupts
Date: Mon, 19 Aug 2019 23:14:29 -0700
Message-Id: <1566281669-48212-4-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1566281669-48212-1-git-send-email-longli@linuxonhyperv.com>
References: <1566281669-48212-1-git-send-email-longli@linuxonhyperv.com>

From: Long Li

When an NVMe hardware queue is mapped to several CPU queues, it is
possible that the CPU this hardware queue is bound to is flooded by
completion interrupts for I/O issued from other CPUs. For example,
consider the following scenario:

1. CPUs 0, 1, 2 and 3 share the same hardware queue;
2. the hardware queue interrupts CPU 0 for I/O completions;
3. processes on CPUs 1, 2 and 3 keep issuing I/Os.

CPU 0 may be flooded with interrupts from the NVMe device that are
completions for I/O issued by CPUs 1, 2 and 3. Under heavy I/O load,
CPU 0 can spend all of its time servicing NVMe and other system
interrupts without ever getting a chance to run in process context.

To fix this, CPU 0 can schedule a work item to complete the I/O
request when it detects that the scheduler is not making progress.
This serves multiple purposes:

1. This CPU has to be scheduled to complete the request. The other
   CPUs can't issue more I/Os until some previous I/Os are completed,
   which helps this CPU get out of NVMe interrupt context.
2. It acts as a throttling mechanism for the NVMe device, so the
   device cannot starve a CPU while serving I/Os for other CPUs.
3. This CPU can make progress on RCU and other work items on its
   queue.
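The flood detection below relies on get_cpu_rq_switches(), which is
introduced by patch 1/3 of this series and is not part of this patch.
A minimal sketch of what that helper is assumed to provide, based on
the description above (cpu_rq and nr_switches are existing scheduler
internals; the exact form in patch 1/3 may differ):

	/* Sketch only: assumed shape of the helper added by patch 1/3.
	 * Returns the number of context switches that have occurred on @cpu.
	 */
	u64 get_cpu_rq_switches(int cpu)
	{
		return cpu_rq(cpu)->nr_switches;
	}

nvme_complete_rq() compares successive readings of this counter; if it
has not changed for MAX_SCHED_TIMEOUT ns while the CPU is not idle, the
CPU is assumed to be stuck in interrupt context and the completion is
deferred to a work queue.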
Signed-off-by: Long Li
---
 drivers/nvme/host/core.c | 57 +++++++++++++++++++++++++++++++++++++++-
 drivers/nvme/host/nvme.h |  1 +
 2 files changed, 57 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 6a9dd68c0f4f..576bb6fce293 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include <linux/sched/clock.h>
 
 #define CREATE_TRACE_POINTS
 #include "trace.h"
@@ -97,6 +98,15 @@ static dev_t nvme_chr_devt;
 static struct class *nvme_class;
 static struct class *nvme_subsys_class;
 
+/*
+ * The following are for detecting if this CPU is flooded with interrupts.
+ * The timestamp for the last context switch is recorded. If that is at least
+ * MAX_SCHED_TIMEOUT ago, try to recover from interrupt flood.
+ */
+static DEFINE_PER_CPU(u64, last_switch);
+static DEFINE_PER_CPU(u64, last_clock);
+#define MAX_SCHED_TIMEOUT 2000000000 // 2 seconds in ns
+
 static int nvme_revalidate_disk(struct gendisk *disk);
 static void nvme_put_subsystem(struct nvme_subsystem *subsys);
 static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
@@ -260,9 +270,54 @@ static void nvme_retry_req(struct request *req)
 	blk_mq_delay_kick_requeue_list(req->q, delay);
 }
 
+static void nvme_complete_rq_work(struct work_struct *work)
+{
+	struct nvme_request *nvme_rq =
+		container_of(work, struct nvme_request, work);
+	struct request *req = blk_mq_rq_from_pdu(nvme_rq);
+
+	nvme_complete_rq(req);
+}
+
+
 void nvme_complete_rq(struct request *req)
 {
-	blk_status_t status = nvme_error_status(req);
+	blk_status_t status;
+	int cpu;
+	u64 switches;
+	struct nvme_request *nvme_rq;
+
+	if (!in_interrupt())
+		goto skip_check;
+
+	nvme_rq = nvme_req(req);
+	cpu = smp_processor_id();
+	if (idle_cpu(cpu))
+		goto skip_check;
+
+	/* Check if this CPU is flooded with interrupts */
+	switches = get_cpu_rq_switches(cpu);
+	if (this_cpu_read(last_switch) == switches) {
+		/*
+		 * If this CPU hasn't made a context switch in
+		 * MAX_SCHED_TIMEOUT ns (and it's not idle), schedule a work
+		 * item to complete this I/O. This forces this CPU to run
+		 * non-interrupt code and throttles the other CPUs issuing I/O.
+		 */
+		if (sched_clock() - this_cpu_read(last_clock)
+				> MAX_SCHED_TIMEOUT) {
+			INIT_WORK(&nvme_rq->work, nvme_complete_rq_work);
+			schedule_work_on(cpu, &nvme_rq->work);
+			return;
+		}
+
+	} else {
+		this_cpu_write(last_switch, switches);
+		this_cpu_write(last_clock, sched_clock());
+	}
+
+skip_check:
+	status = nvme_error_status(req);
 
 	trace_nvme_complete_rq(req);
 
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 0a4a7f5e0de7..a8876e69e476 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -113,6 +113,7 @@ struct nvme_request {
	u8 flags;
	u16 status;
	struct nvme_ctrl *ctrl;
+	struct work_struct work;
 };
 
 /*
-- 
2.17.1