Date: Wed, 4 Apr 2018 14:45:18 +0200 (CEST)
From: Thomas Gleixner
To: Ming Lei
Cc: Jens Axboe, Christoph Hellwig, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, Laurence Oberman
Subject: Re: [PATCH V3 4/4] genirq/affinity: irq vector spread among online
    CPUs as far as possible
References: <20180308105358.1506-1-ming.lei@redhat.com>
    <20180308105358.1506-5-ming.lei@redhat.com>
    <20180403160001.GA25255@ming.t460p>
List-ID: <linux-kernel.vger.kernel.org>

On Wed, 4 Apr 2018, Thomas Gleixner wrote:
> I'm aware how that hw-queue stuff works.
> But that only works if the spreading algorithm makes the interrupts
> affine to offline/not-present CPUs when the block device is initialized.
>
> In the example above:
>
> > > > > irq 39, cpu list 0,4
> > > > > irq 40, cpu list 1,6
> > > > > irq 41, cpu list 2,5
> > > > > irq 42, cpu list 3,7
>
> and assumed that at driver init time only CPU 0-3 are online then the
> hotplug of CPU 4-7 will not result in any interrupt delivered to CPU 4-7.
>
> So the extra assignment to CPU 4-7 in the affinity mask has no effect
> whatsoever and even if the spreading result is 'perfect' it just looks
> perfect as it is not making any difference versus the original result:
>
> > > > > irq 39, cpu list 0
> > > > > irq 40, cpu list 1
> > > > > irq 41, cpu list 2
> > > > > irq 42, cpu list 3

And looking deeper into the changes, I think that the first spreading step
has to use cpu_present_mask and not cpu_online_mask.

Assume the following scenario:

A machine with 8 present CPUs is booted and the last 4 CPUs are then
unplugged. A device with 4 queues is initialized. The resulting spread is
going to be exactly your example:

  irq 39, cpu list 0,4
  irq 40, cpu list 1,6
  irq 41, cpu list 2,5
  irq 42, cpu list 3,7

Now the 4 offline CPUs are plugged in again. These CPUs won't ever get an
interrupt, as all interrupts stay on CPU 0-3 unless one of those CPUs is
unplugged.

Using cpu_present_mask, the spread would be:

  irq 39, cpu list 0,1
  irq 40, cpu list 2,3
  irq 41, cpu list 4,5
  irq 42, cpu list 6,7

while on a machine where CPU 4-7 are NOT present, but advertised as
possible, the spread would be:

  irq 39, cpu list 0,4
  irq 40, cpu list 1,6
  irq 41, cpu list 2,5
  irq 42, cpu list 3,7

Hmm?

Thanks,

	tglx
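P.S.: A toy Python model of the two spreads discussed above. This is a
deliberately simplified stand-in for the real code in kernel/irq/affinity.c
(chunk_spread is invented for illustration; there is no NUMA-node awareness,
and the real second-pass ordering differs slightly, e.g. 1,6 / 2,5 instead of
1,5 / 2,6), but it reproduces the shape of the resulting masks:

```python
def chunk_spread(cpus, nvec):
    """Split the CPU list into nvec contiguous chunks, one per vector.

    Simplified model of spreading a single cpumask over nvec vectors.
    """
    per_vec, extra = divmod(len(cpus), nvec)
    out, i = [], 0
    for v in range(nvec):
        n = per_vec + (1 if v < extra else 0)
        out.append(cpus[i:i + n])
        i += n
    return out

present = [0, 1, 2, 3, 4, 5, 6, 7]
online = [0, 1, 2, 3]

# Spreading over cpu_present_mask: every present CPU ends up in some
# vector's mask, so CPU 4-7 get interrupts as soon as they come online.
print(chunk_spread(present, 4))   # [[0, 1], [2, 3], [4, 5], [6, 7]]

# Spreading cpu_online_mask first and then distributing the remaining
# possible CPUs over the already-assigned vectors:
stage1 = chunk_spread(online, 4)  # one online CPU per vector
leftover = [c for c in present if c not in online]
two_stage = [s + [leftover[v]] for v, s in enumerate(stage1)]
print(two_stage)                  # [[0, 4], [1, 5], [2, 6], [3, 7]]
```

In the two-stage result each vector's effective target set at init time is
still just its single online CPU, which is why the extra assignments to CPU
4-7 change nothing until one of CPU 0-3 goes away.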