From: Thomas Gleixner
To: Dexuan Cui, Ming Lei, Christoph Hellwig, Christian Borntraeger,
    Stefan Haberland, Jens Axboe, Marc Zyngier,
    linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Long Li, Haiyang Zhang, Michael Kelley
Subject: Re: irq_build_affinity_masks() allocates improper affinity if num_possible_cpus() > num_present_cpus()?
Date: Tue, 06 Oct 2020 20:57:39 +0200
Message-ID: <87lfgj6v30.fsf@nanos.tec.linutronix.de>

On Tue, Oct 06 2020 at 06:47, Dexuan Cui wrote:
> I'm running a single-CPU Linux VM on Hyper-V. The Linux kernel is v5.9-rc7
> and I have CONFIG_NR_CPUS=256.
>
> The Hyper-V host (version 17763-10.0-1-0.1457) provides a guest firmware
> which always reports 128 Local APIC entries in the ACPI MADT table. Only
> the first Local APIC entry's "Processor Enabled" flag is 1, since this
> Linux VM is configured with only 1 CPU. As a result, in the Linux kernel
> "cpu_present_mask" and "cpu_online_mask" contain only 1 CPU (CPU0),
> while "cpu_possible_mask" contains 128 CPUs and "nr_cpu_ids" is 128.
>
> I pass through an MSI-X-capable PCI device to this Linux VM (which has
> only 1 virtual CPU). The code below does *not* report any error
> (pci_alloc_irq_vectors_affinity() returns 2 and request_irq() returns 0),
> but it does not work: the second MSI-X interrupt never fires, while the
> first interrupt works fine.
>
> int nr_irqs = 2;
> int i, nvec, irq;
>
> nvec = pci_alloc_irq_vectors_affinity(pdev, nr_irqs, nr_irqs,
>                 PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, NULL);

Why should it return an error?

> for (i = 0; i < nvec; i++) {
>         irq = pci_irq_vector(pdev, i);
>         err = request_irq(irq, test_intr, 0, "test_intr", &intr_cxt[i]);
> }

And why do you expect that the second interrupt works?

This is about managed interrupts, and the spreading code has two vectors
to which it can spread the interrupts. One is assigned to one half of the
possible CPUs and the other one to the other half. Now you have only one
CPU online, so only the interrupt which has the online CPU in its
assigned affinity mask is started up.

That's how managed interrupts work. If you don't want managed interrupts
then don't use them.

Thanks,

        tglx