In-Reply-To: <87czucfdtf.ffs@nanos.tec.linutronix.de>
From: Nitesh Lal
Date: Fri, 30 Apr 2021 12:14:08 -0400
Subject: Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs
To: Thomas Gleixner
Cc: Jesse Brandeburg, frederic@kernel.org, juri.lelli@redhat.com, Marcelo Tosatti,
    abelits@marvell.com, Robin Murphy, linux-kernel@vger.kernel.org,
    linux-api@vger.kernel.org, bhelgaas@google.com, linux-pci@vger.kernel.org,
    rostedt@goodmis.org, mingo@kernel.org, peterz@infradead.org,
    davem@davemloft.net, akpm@linux-foundation.org, sfr@canb.auug.org.au,
    stephen@networkplumber.org, rppt@linux.vnet.ibm.com, jinyuqi@huawei.com,
    zhangshaokun@hisilicon.com, netdev@vger.kernel.org, chris.friesen@windriver.com
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Apr 30, 2021 at 3:10 AM Thomas Gleixner wrote:
>
> Nitesh,
>
> On Thu, Apr 29 2021 at 17:44, Nitesh Lal wrote:
>
> First of all: Nice analysis, well done!

Thanks, Thomas.

> > So to understand further what the problem was with the older kernel based
> > on Jesse's description, and whether it is still there, I did some more
> > digging. Following are some of the findings (kindly correct me if
> > there is a gap in my understanding):
> >
> > I think this explains why, even if we have multiple CPUs in the SMP affinity
> > mask, the interrupts may only land on CPU0.
>
> There are two issues in the pre-rework vector management:
>
>  1) The allocation logic itself, which preferred lower-numbered CPUs and
>     did not try to spread out the vectors across CPUs. This was pretty
>     much true for any APIC addressing mode.
>
>  2) The multi-CPU affinity support, where supported by the APIC mode.
>     That is restricted to logical APIC addressing mode, which is
>     available for non-X2APIC with up to 8 CPUs and, with X2APIC,
>     requires cluster mode.
>
>     All other addressing modes had a single CPU target selected under
>     the hood, which due to #1 ended up on CPU0 most of the time, at
>     least up to the point where it still had vectors available.
>
>     Also, logical addressing mode with multiple target CPUs was subject
>     to #1, and due to the delivery logic the lowest-numbered CPU (APIC)
>     was where most interrupts ended up.
>

Right, thank you for confirming. Based on this analysis, and the fact that
with your rework the interrupts seem to be naturally spread across the
CPUs, would it be safe to revert Jesse's patch

  e2e64a932 genirq: Set initial affinity in irq_set_affinity_hint()

since it overwrites the previously set IRQ affinity mask for some of the
devices?

IMHO, if we think that this patch still solves some issue other than the
one Jesse mentioned, then perhaps we should reproduce that and fix it
directly from the request_irq code path.

--
Nitesh