Subject: Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs
From: Nitesh Narayan Lal
To: Thomas Gleixner, Marcelo Tosatti, Jesse Brandeburg
Cc: Robin Murphy, linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, frederic@kernel.org, juri.lelli@redhat.com, abelits@marvell.com, bhelgaas@google.com, linux-pci@vger.kernel.org, rostedt@goodmis.org, mingo@kernel.org, peterz@infradead.org, davem@davemloft.net, akpm@linux-foundation.org, sfr@canb.auug.org.au, stephen@networkplumber.org, rppt@linux.vnet.ibm.com, jinyuqi@huawei.com, zhangshaokun@hisilicon.com
References: <20200625223443.2684-1-nitesh@redhat.com> <20200625223443.2684-2-nitesh@redhat.com> <3e9ce666-c9cd-391b-52b6-3471fe2be2e6@arm.com> <20210127121939.GA54725@fuller.cnet> <87r1m5can2.fsf@nanos.tec.linutronix.de> <20210128165903.GB38339@fuller.cnet> <87h7n0de5a.fsf@nanos.tec.linutronix.de> <20210204181546.GA30113@fuller.cnet> <20210204190647.GA32868@fuller.cnet> <87y2g26tnt.fsf@nanos.tec.linutronix.de> <7780ae60-efbd-2902-caaa-0249a1f277d9@redhat.com>
Organization: Red Hat Inc
Message-ID: <07c04bc7-27f0-9c07-9f9e-2d1a450714ef@redhat.com>
Date: Thu, 4 Mar 2021 13:15:31 -0500
In-Reply-To: <7780ae60-efbd-2902-caaa-0249a1f277d9@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2/11/21 10:55 AM, Nitesh Narayan Lal wrote:
> On 2/6/21 7:43 PM, Nitesh Narayan Lal wrote:
>> On 2/5/21 5:23 PM, Thomas Gleixner wrote:
>>> On Thu, Feb 04 2021 at 14:17, Nitesh Narayan Lal wrote:
>>>> On 2/4/21 2:06 PM, Marcelo Tosatti wrote:
>>>>>>> How about adding a new flag for isolcpus instead?
>>>>>>>
>>>>>> Do you mean a flag based on which we can switch the affinity mask to
>>>>>> housekeeping for all the devices at the time of IRQ distribution?
>>>>> Yes a new flag for isolcpus. HK_FLAG_IRQ_SPREAD or some better name.
>>>> Does sounds like a nice idea to explore, lets see what Thomas thinks about it.
>
>
>>>> When the affinity mask of the interrupt at the time when it is actually
>>>> requested contains an isolated CPU then nothing prevents the kernel from
>>>> steering it at an isolated CPU. But that has absolutely nothing to do
>>>> with that spreading thingy.
>>>>
>>>> The only difference which this change makes is the fact that the
>>>> affinity hint changes. Nothing else.
>>>>
>> Thanks for the detailed explanation.
>>
>> Before I posted this patch, I was doing some debugging on a setup where I
>> was observing some latency issues due to the iavf IRQs that were pinned on
>> the isolated CPUs.
>>
>> Based on some initial traces I had this impression that the affinity hint
>> or cpumask_local_spread was somehow playing a role in deciding the affinity
>> mask of these IRQs. Although, that does look incorrect after going through
>> your explanation.
>> For some reason, with a kernel that had this patch when I tried creating
>> VFs iavf IRQs always ended up on the HK CPUs.
>>
>> The reasoning for the above is still not very clear to me. I will investigate
>> this further to properly understand this behavior.
>>
>>
> After a little more digging, I found out why cpumask_local_spread change
> affects the general/initial smp_affinity for certain device IRQs.
>
> After the introduction of the commit:
>
>     e2e64a932 genirq: Set initial affinity in irq_set_affinity_hint()
>

Continuing the conversation about the above commit and adding Jesse.

I was trying to understand the problem that the commit message describes:
"The default behavior of the kernel is somewhat undesirable as all
requested interrupts end up on CPU0 after registration." I have also been
trying to reproduce this behavior without the patch, but I failed to do so,
maybe because I am missing something here.

@Jesse, can you please explain? From what I understand, IRQ affinity should
be decided based on the default affinity mask.

The problem with the commit is that when we overwrite the affinity mask based
on the hinting mask, we completely ignore the default SMP affinity mask. If we
do want to overwrite the affinity based on the hint mask, we should at least
consider the default SMP affinity.

--
Thanks
Nitesh
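For readers following the thread, below is a small standalone sketch (not
kernel code and not taken from this discussion) of the masking behavior
being debated: applying a driver's affinity hint verbatim versus first
intersecting it with the default SMP affinity mask. The mask values, CPU
layout, and variable names are made up purely for illustration.

/*
 * Illustration only: bit i set => CPU i allowed.
 * Assumes CPUs 0-3 are housekeeping and CPUs 4-5 are isolated.
 */
#include <stdio.h>

int main(void)
{
	unsigned long default_smp_affinity = 0x0f; /* CPUs 0-3: housekeeping */
	unsigned long affinity_hint        = 0x30; /* CPUs 4-5: suggested by the driver */

	/* behavior under discussion: the hint is applied as-is */
	unsigned long applied_as_is = affinity_hint;

	/* alternative argued for above: honor the default mask as well */
	unsigned long intersection = affinity_hint & default_smp_affinity;
	unsigned long applied_constrained =
		intersection ? intersection : default_smp_affinity;

	printf("hint applied verbatim:        0x%lx (IRQ lands on isolated CPUs)\n",
	       applied_as_is);
	printf("hint ANDed with default mask: 0x%lx (IRQ stays on housekeeping CPUs)\n",
	       applied_constrained);
	return 0;
}

Built with any stock C compiler, the first line shows the interrupt ending
up on the isolated CPUs when the hint is taken verbatim, while the second
shows it falling back to the housekeeping set once the default mask is
taken into account.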