Date: Fri, 6 Apr 2018 17:13:04 +0800
From: Ming Lei
To: Thomas Gleixner
Cc: Jens Axboe, Christoph Hellwig, linux-kernel@vger.kernel.org,
        linux-block@vger.kernel.org, Laurence Oberman
Subject: Re: [PATCH V3 4/4] genirq/affinity: irq vector spread among online
        CPUs as far as possible
Message-ID: <20180406091257.GA9605@ming.t460p>
References: <20180308105358.1506-1-ming.lei@redhat.com>
        <20180308105358.1506-5-ming.lei@redhat.com>
        <20180403160001.GA25255@ming.t460p>
        <20180404150759.GA24824@ming.t460p>

Hi Thomas,

On Wed, Apr 04, 2018 at 09:38:26PM +0200, Thomas Gleixner wrote:
> On Wed, 4 Apr 2018, Ming Lei wrote:
> > On Wed, Apr 04, 2018 at 10:25:16AM +0200, Thomas Gleixner wrote:
> > > In the example above:
> > > > > > irq 39, cpu list 0,4
> > > > > > irq 40, cpu list 1,6
> > > > > > irq 41, cpu list 2,5
> > > > > > irq 42, cpu list 3,7
> > >
> > > and assumed that at driver init time only CPU 0-3 are online then the
> > > hotplug of CPU 4-7 will not result in any interrupt delivered to CPU 4-7.
> >
> > Indeed, and I just tested this case, and found that no interrupts are
> > delivered to CPU 4-7.
> >
> > In theory, the affinity has been assigned to these irq vectors, and
> > programmed into the interrupt controller, so I would expect it to work.
> >
> > Could you explain a bit why interrupts aren't delivered to CPU 4-7?
>
> As I explained before:
>
> "If the device is already in use when the offline CPUs get hot plugged, then
> the interrupts still stay on cpu 0-3 because the effective affinity of
> interrupts on X86 (and other architectures) is always a single CPU."
>
> IOW, if you set the affinity mask so that it contains more than one CPU, then
> the kernel selects a single CPU as target. The selected CPU must be online,
> and if there is more than one online CPU in the mask then the kernel picks
> the one which has the least number of interrupts targeted at it. This
> selected CPU target is programmed into the corresponding interrupt chip
> (IOAPIC/MSI/MSI-X...) and it stays that way until the selected target CPU
> goes offline or the affinity mask changes.
>
> The reasons why we use single target delivery on X86 are:
>
> 1) Not all X86 systems support multi target delivery.
>
> 2) If a system supports multi target delivery, then the interrupt is
>    preferably delivered to the CPU with the lowest APIC ID (which
>    usually corresponds to the lowest CPU number) due to hardware magic,
>    and only a very small percentage of interrupts are delivered to the
>    other CPUs in the multi target set. So the benefit is rather dubious
>    and extensive performance testing did not show any significant
>    difference.
>
> 3) The management of multi targets on the software side is painful as
>    the same low level vector number has to be allocated on all possible
>    target CPUs. That makes a lot of things, including hotplug, more
>    complex for very little - if any - benefit.
>
> So at some point we ripped out the multi target support on X86 and moved
> everything to single target delivery mode.
>
> Other architectures never supported multi target delivery either, due to
> hardware restrictions or for reasons similar to why X86 dropped it. There
> might be a few architectures which support it, but I have no overview at
> the moment.
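
Just to double-check my understanding of that selection rule, here is a toy
model of it. This is illustration only with made-up numbers -- not the real
x86 vector allocation code, and pick_effective_cpu() is a name I invented for
the sketch:

/*
 * Toy model of the single-target selection rule described above.
 * Illustration only -- not the actual x86 vector allocation code.
 */
#include <stdio.h>
#include <stdbool.h>

#define NR_CPUS 8

/*
 * Pick the online CPU from the affinity mask with the fewest
 * interrupts already targeted at it; return -1 if none is online.
 */
static int pick_effective_cpu(const bool *affinity, const bool *online,
                              const unsigned int *irqs_on_cpu)
{
        int cpu, best = -1;

        for (cpu = 0; cpu < NR_CPUS; cpu++) {
                if (!affinity[cpu] || !online[cpu])
                        continue;
                if (best < 0 || irqs_on_cpu[cpu] < irqs_on_cpu[best])
                        best = cpu;
        }
        return best;
}

int main(void)
{
        /* smp_affinity_list 0-3, CPU 3 offline, CPU 2 least loaded */
        bool affinity[NR_CPUS] = { true, true, true, true };
        bool online[NR_CPUS]   = { true, true, true, false };
        unsigned int irqs_on_cpu[NR_CPUS] = { 10, 7, 3, 0 };

        printf("effective CPU: %d\n",
               pick_effective_cpu(affinity, online, irqs_on_cpu));
        return 0;
}

With CPU 3 offline and CPU 2 least loaded this prints 2, which matches the
effective_affinity example you show below.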
>
> The information is in procfs:
>
> # cat /proc/irq/9/smp_affinity_list
> 0-3
> # cat /proc/irq/9/effective_affinity_list
> 1
>
> # cat /proc/irq/10/smp_affinity_list
> 0-3
> # cat /proc/irq/10/effective_affinity_list
> 2
>
> smp_affinity[_list] is the affinity which is set either by the kernel or by
> writing to /proc/irq/$N/smp_affinity[_list].
>
> effective_affinity[_list] is the affinity which is effective, i.e. the
> single target CPU to which the interrupt is affine at this point.
>
> As you can see in the above examples, the target CPU is selected from the
> given possible target set, and the internal spreading of the low level x86
> vector allocation code picks a CPU which has the lowest number of
> interrupts targeted at it.
>
> Let's assume for the example below
>
> # cat /proc/irq/10/smp_affinity_list
> 0-3
> # cat /proc/irq/10/effective_affinity_list
> 2
>
> that CPU 3 was offline when the device was initialized. So there was no way
> to select it, and when CPU 3 comes online there is no reason to change the
> affinity of that interrupt, at least not from the kernel's POV. Actually we
> don't even have a mechanism to do so automagically.
>
> If I offline CPU 2 after onlining CPU 3, then the kernel has to move the
> interrupt away from CPU 2, so it selects CPU 3 as it's the one with the
> lowest number of interrupts targeted at it.
>
> Now this is a bit different if you use affinity managed interrupts like
> NVMe and other devices do.
>
> Many of these devices create one queue per possible CPU, so the spreading
> is simple; one interrupt per possible CPU. Pretty boring.
>
> When the device has fewer queues than possible CPUs, then stuff gets more
> interesting. The queues and therefore the interrupts must be targeted at
> multiple CPUs. There is some logic which spreads them over the NUMA nodes
> and takes siblings into account when Hyperthreading is enabled.
>
> In both cases the managed interrupts are handled across CPU soft
> hotplug/unplug:
>
> 1) If a CPU is soft unplugged and an interrupt is targeted at that CPU,
>    then the interrupt is either moved to a still online CPU in the
>    affinity mask, or, if the outgoing CPU is the last one in the affinity
>    mask, it is shut down.
>
> 2) If a CPU is soft plugged, then the interrupts are scanned, and for the
>    ones which are managed and shut down it is checked whether the affinity
>    mask contains the upcoming CPU. If that's the case, then the interrupt
>    is started up and can deliver interrupts for the corresponding queue.
>
>    If an interrupt is managed and already started, then nothing happens
>    and the effective affinity is untouched, even if the upcoming CPU is in
>    the affinity set.
>
> Let's briefly talk about the 3 cpu masks:
>
> 1) cpu_possible_mask:
>
>    The CPUs which are possible on a system.
>
> 2) cpu_present_mask:
>
>    The CPUs which are present on a system. Present means physically
>    present. Physical hotplug sets or removes CPUs from that mask.
>
>    "Physical" hotplug is used in virtualization as well.
>
> 3) cpu_online_mask:
>
>    The CPUs which are soft onlined. If a present CPU is not soft onlined,
>    then it's cleared in the online mask, but still set in the present
>    mask.

Thanks a lot for the detailed explanation.

>
> Now back to my suggestion at the other end of this thread, that we should
> use cpu_present_mask instead of cpu_online_mask.
>
> The reason why I suggested this is that we have to differentiate between
> soft plugging and physical plugging of CPUs.
>
> If CPUs are in the present mask, i.e. physically available, but not in the
> online mask, then it's trivial to plug them soft by writing to the
> corresponding online file in sysfs. CPU soft plugging is used for power
> management nowadays, so the scenario I described in the other mail is not
> completely unrealistic.
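
For reference, the three masks are easy to inspect from user space through
the standard sysfs files; a minimal sketch with error handling trimmed:

/*
 * Dump the possible/present/online CPU masks via sysfs.
 * Minimal sketch; the paths are the standard sysfs ones.
 */
#include <stdio.h>

static void show_mask(const char *name)
{
        char path[64], buf[256];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/devices/system/cpu/%s", name);
        f = fopen(path, "r");
        if (!f)
                return;
        if (fgets(buf, sizeof(buf), f))
                printf("%-8s: %s", name, buf);  /* buf keeps its newline */
        fclose(f);
}

int main(void)
{
        show_mask("possible");
        show_mask("present");
        show_mask("online");
        return 0;
}

Soft plugging or unplugging a present CPU is then just a write of 1 or 0 to
its /sys/devices/system/cpu/cpuN/online file.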

OK, got it, and this scenario can be emulated easily by offlining CPUs
before loading the device driver.

I will post a V4 soon which uses cpu_present_mask in the 1st stage of the
irq spread, and it should work fine for Kashyap's case in normal situations.

It might not work as well when there are lots of offline CPUs before device
initialization, in which case fewer active irq vectors will be assigned.

But given that drivers are usually loaded during kernel boot, when generally
all CPUs are online, that doesn't look like an issue we need to consider.

>
> In case of physical hotplug it's a different story. Neither the kernel nor
> user space can plug a CPU physically. It needs interaction by the operator,
> i.e. in the real world by inserting/removing hardware, or in the
> virtualization space by changing the current CPU allocation. So here the
> present mask won't help when the number of queues is less than the number
> of possible CPUs and an initially not present CPU gets 'physically' plugged
> in.
>
> To make things worse, we have the unfortunate case of qualiteee BIOS/ACPI
> tables which claim that there are more possible CPUs than present CPUs on
> systems which cannot support physical hotplug due to lack of hardware
> support. Unfortunately there is no simple way to figure out whether a
> system supports physical hotplug or not, so we cannot make an informed
> decision here. But we can look at the present mask, which tells us how many
> CPUs are physically available. In a regular boot up the present mask and
> the online mask are identical, so there is no difference.
>
> For the physical hotplug case - real or virtual - neither of the spreading
> algorithms is ideal. Solving this needs more thought as it would require
> recalculating the spreading once the physically plugged CPUs become
> available/online.

I agree, that may be another improvement in this field.

>
> Hope that clarifies the internals.

Sure, it does, thanks again for your clarification!

Thanks,
Ming