Date: Tue, 30 Oct 2018 17:25:11 +0800
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-kernel@vger.kernel.org, Thomas Gleixner
Subject: Re: [PATCH 11/14] irq: add support for allocating (and affinitizing) sets of IRQs
Message-ID: <20181030092510.GB13582@ming.t460p>
References: <20181029163738.10172-1-axboe@kernel.dk> <20181029163738.10172-12-axboe@kernel.dk>
In-Reply-To: <20181029163738.10172-12-axboe@kernel.dk>
On Mon, Oct 29, 2018 at 10:37:35AM -0600, Jens Axboe wrote:
> A driver may have a need to allocate multiple sets of MSI/MSI-X
> interrupts, and have them appropriately affinitized. Add support for
> defining a number of sets in the irq_affinity structure, of varying
> sizes, and get each set affinitized correctly across the machine.
> 
> Cc: Thomas Gleixner
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Hannes Reinecke
> Signed-off-by: Jens Axboe
> ---
>  include/linux/interrupt.h |  4 ++++
>  kernel/irq/affinity.c     | 40 ++++++++++++++++++++++++++++++---------
>  2 files changed, 35 insertions(+), 9 deletions(-)
> 
> diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
> index 1d6711c28271..ca397ff40836 100644
> --- a/include/linux/interrupt.h
> +++ b/include/linux/interrupt.h
> @@ -247,10 +247,14 @@ struct irq_affinity_notify {
>   *			the MSI(-X) vector space
>   * @post_vectors:	Don't apply affinity to @post_vectors at end of
>   *			the MSI(-X) vector space
> + * @nr_sets:		Length of passed in *sets array
> + * @sets:		Number of affinitized sets
>   */
>  struct irq_affinity {
>  	int	pre_vectors;
>  	int	post_vectors;
> +	int	nr_sets;
> +	int	*sets;
>  };
>  
>  #if defined(CONFIG_SMP)
> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> index f4f29b9d90ee..2046a0f0f0f1 100644
> --- a/kernel/irq/affinity.c
> +++ b/kernel/irq/affinity.c
> @@ -180,6 +180,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
>  	int curvec, usedvecs;
>  	cpumask_var_t nmsk, npresmsk, *node_to_cpumask;
>  	struct cpumask *masks = NULL;
> +	int i, nr_sets;
>  
>  	/*
>  	 * If there aren't any vectors left after applying the pre/post
> @@ -210,10 +211,23 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
>  	get_online_cpus();
>  	build_node_to_cpumask(node_to_cpumask);
>  
> -	/* Spread on present CPUs starting from affd->pre_vectors */
> -	usedvecs = irq_build_affinity_masks(affd, curvec, affvecs,
> -					    node_to_cpumask, cpu_present_mask,
> -					    nmsk, masks);
> +	/*
> +	 * Spread on present CPUs starting from affd->pre_vectors. If we
> +	 * have multiple sets, build each sets affinity mask separately.
> +	 */
> +	nr_sets = affd->nr_sets;
> +	if (!nr_sets)
> +		nr_sets = 1;
> +
> +	for (i = 0, usedvecs = 0; i < nr_sets; i++) {
> +		int this_vecs = affd->sets ? affd->sets[i] : affvecs;
> +		int nr;
> +
> +		nr = irq_build_affinity_masks(affd, curvec, this_vecs,
> +					      node_to_cpumask, cpu_present_mask,
> +					      nmsk, masks + usedvecs);
> +		usedvecs += nr;
> +	}
>  
>  	/*
>  	 * Spread on non present CPUs starting from the next vector to be
> @@ -258,13 +272,21 @@ int irq_calc_affinity_vectors(int minvec, int maxvec, const struct irq_affinity
>  {
>  	int resv = affd->pre_vectors + affd->post_vectors;
>  	int vecs = maxvec - resv;
> -	int ret;
> +	int set_vecs;
>  
>  	if (resv > minvec)
>  		return 0;
>  
> -	get_online_cpus();
> -	ret = min_t(int, cpumask_weight(cpu_possible_mask), vecs) + resv;
> -	put_online_cpus();
> -	return ret;
> +	if (affd->nr_sets) {
> +		int i;
> +
> +		for (i = 0, set_vecs = 0; i < affd->nr_sets; i++)
> +			set_vecs += affd->sets[i];
> +	} else {
> +		get_online_cpus();
> +		set_vecs = cpumask_weight(cpu_possible_mask);
> +		put_online_cpus();
> +	}
> +
> +	return resv + min(set_vecs, vecs);
>  }
> -- 
> 2.17.1
> 

Looks fine:

Reviewed-by: Ming Lei

-- 
Ming