Date: Wed, 13 Feb 2019 09:07:24 -0600
From: Bjorn Helgaas
To: Ming Lei
Cc: Christoph Hellwig, Thomas Gleixner, Jens Axboe,
    linux-block@vger.kernel.org, Sagi Grimberg,
    linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-pci@vger.kernel.org, Keith Busch
Subject: Re: [PATCH V3 2/5] genirq/affinity: store irq set vectors in 'struct irq_affinity'
Message-ID: <20190213150723.GC96272@google.com>
References: <20190213105041.13537-1-ming.lei@redhat.com>
 <20190213105041.13537-3-ming.lei@redhat.com>
In-Reply-To: <20190213105041.13537-3-ming.lei@redhat.com>

On Wed, Feb 13, 2019 at 06:50:38PM +0800, Ming Lei wrote:
> Currently the array of irq set vectors is provided by driver.
> 
> irq_create_affinity_masks() can be simplied a bit by treating the
> non-irq-set case as single irq set.

s/simplied a bit/simplified/

> So move this array into 'struct irq_affinity', and pre-define the max
> set number as 4, which should be enough for normal cases.

s/irq/IRQ/

You have a real mixture of capitalization across the changelogs.

> Signed-off-by: Ming Lei
> ---
>  drivers/nvme/host/pci.c   |  5 ++---
>  include/linux/interrupt.h |  6 ++++--
>  kernel/irq/affinity.c     | 18 +++++++++++-------
>  3 files changed, 17 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 022ea1ee63f8..0086bdf80ea1 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -2081,12 +2081,11 @@ static void nvme_calc_io_queues(struct nvme_dev *dev, unsigned int irq_queues)
>  static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
>  {
>  	struct pci_dev *pdev = to_pci_dev(dev->dev);
> -	int irq_sets[2];
>  	struct irq_affinity affd = {
>  		.pre_vectors = 1,
> -		.nr_sets = ARRAY_SIZE(irq_sets),
> -		.sets = irq_sets,
> +		.nr_sets = 2,
>  	};
> +	int *irq_sets = affd.set_vectors;
>  	int result = 0;
>  	unsigned int irq_queues, this_p_queues;
>  
> diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
> index 1ed1014c9684..a20150627a32 100644
> --- a/include/linux/interrupt.h
> +++ b/include/linux/interrupt.h
> @@ -259,6 +259,8 @@ struct irq_affinity_notify {
>  	void (*release)(struct kref *ref);
>  };
>  
> +#define IRQ_MAX_SETS  4

This is a pretty generic name.  Maybe it should include a hint that
it's related to affinity?

> +
>  /**
>   * struct irq_affinity - Description for automatic irq affinity assignements
>   * @pre_vectors:	Don't apply affinity to @pre_vectors at beginning of
> @@ -266,13 +268,13 @@ struct irq_affinity_notify {
>   * @post_vectors:	Don't apply affinity to @post_vectors at end of
>   *			the MSI(-X) vector space
>   * @nr_sets:		Length of passed in *sets array
> - * @sets:		Number of affinitized sets
> + * @set_vectors:	Number of affinitized sets
>   */
>  struct irq_affinity {
>  	int	pre_vectors;
>  	int	post_vectors;
>  	int	nr_sets;
> -	int	*sets;
> +	int	set_vectors[IRQ_MAX_SETS];
>  };
>  
>  /**
> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> index 9200d3b26f7d..b868b9d3df7f 100644
> --- a/kernel/irq/affinity.c
> +++ b/kernel/irq/affinity.c
> @@ -244,7 +244,7 @@ irq_create_affinity_masks(int nvecs, struct irq_affinity *affd)
>  	int affvecs = nvecs - affd->pre_vectors - affd->post_vectors;
>  	int curvec, usedvecs;
>  	struct irq_affinity_desc *masks = NULL;
> -	int i, nr_sets;
> +	int i;
>  
>  	/*
>  	 * If there aren't any vectors left after applying the pre/post
> @@ -253,6 +253,9 @@ irq_create_affinity_masks(int nvecs, struct irq_affinity *affd)
>  	if (nvecs == affd->pre_vectors + affd->post_vectors)
>  		return NULL;
>  
> +	if (affd->nr_sets > IRQ_MAX_SETS)
> +		return NULL;
> +
>  	masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
>  	if (!masks)
>  		return NULL;
> @@ -264,12 +267,13 @@ irq_create_affinity_masks(int nvecs, struct irq_affinity *affd)
>  	 * Spread on present CPUs starting from affd->pre_vectors. If we
>  	 * have multiple sets, build each sets affinity mask separately.
>  	 */
> -	nr_sets = affd->nr_sets;
> -	if (!nr_sets)
> -		nr_sets = 1;
> +	if (!affd->nr_sets) {
> +		affd->nr_sets = 1;
> +		affd->set_vectors[0] = affvecs;
> +	}
>  
> -	for (i = 0, usedvecs = 0; i < nr_sets; i++) {
> -		int this_vecs = affd->sets ? affd->sets[i] : affvecs;
> +	for (i = 0, usedvecs = 0; i < affd->nr_sets; i++) {
> +		int this_vecs = affd->set_vectors[i];
>  		int ret;
>  
>  		ret = irq_build_affinity_masks(affd, curvec, this_vecs,
> @@ -316,7 +320,7 @@ int irq_calc_affinity_vectors(int minvec, int maxvec, const struct irq_affinity
>  		int i;
>  
>  		for (i = 0, set_vecs = 0; i < affd->nr_sets; i++)
> -			set_vecs += affd->sets[i];
> +			set_vecs += affd->set_vectors[i];
>  	} else {
>  		get_online_cpus();
>  		set_vecs = cpumask_weight(cpu_possible_mask);
> -- 
> 2.9.5
> 
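
For anyone following along, here is a rough sketch of what a caller
looks like once this patch is applied.  It is pieced together from the
hunks above; the function name and the queue-count parameters
(example_setup_irqs(), nr_default_queues, nr_poll_queues, max_vecs) are
made up for illustration, so this is not the real nvme_setup_irqs():

  #include <linux/pci.h>
  #include <linux/interrupt.h>

  /*
   * Hypothetical caller, showing how a driver fills the new fixed-size
   * set_vectors[] member instead of passing its own 'sets' pointer.
   */
  static int example_setup_irqs(struct pci_dev *pdev, unsigned int max_vecs,
  				int nr_default_queues, int nr_poll_queues)
  {
  	struct irq_affinity affd = {
  		.pre_vectors = 1,	/* e.g. one non-affinitized admin vector */
  		.nr_sets = 2,		/* must stay <= IRQ_MAX_SETS */
  	};

  	/* The per-set vector counts now live inside 'affd' itself. */
  	affd.set_vectors[0] = nr_default_queues;
  	affd.set_vectors[1] = nr_poll_queues;

  	return pci_alloc_irq_vectors_affinity(pdev, 1, max_vecs,
  			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
  }

In other words, the per-set counts are carried in 'struct irq_affinity'
itself, and irq_create_affinity_masks() now rejects any nr_sets larger
than IRQ_MAX_SETS.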