From: Ming Lei
To: Thomas Gleixner
Cc: linux-kernel@vger.kernel.org, Ming Lei, Jens Axboe, Christoph Hellwig, Keith Busch, linux-nvme@lists.infradead.org, Jon Derrick
Subject: [PATCH V5 0/2] genirq/affinity: Make vectors allocation fair
Date: Fri, 16 Aug 2019 10:28:47 +0800
Message-Id: <20190816022849.14075-1-ming.lei@redhat.com>
Hi Thomas,

The 1st patch makes __irq_build_affinity_masks() more reliable; in particular, all nodes can now be covered in the spread.

The 2nd patch spreads vectors on each node according to the ratio of that node's CPU count to the number of all remaining CPUs, so vector assignment becomes fairer. It also fixes the warning reported by Jon Derrick.

Please consider it for v5.4.

V5:
	- remove patch 1 of V4, which is wrong
	- handle vector wrapping, because the 'start vector' may begin
	  anywhere, especially for the 2nd-stage spread
	- add more comments on the vector allocation algorithm
	- clean up code a bit
	- run more tests to verify the change, which always produce the
	  expected result, covering lots of num_queues, NUMA topologies
	  and CPU-unpresent settings

V4:
	- provide proof of why the number of vectors allocated for each
	  node is <= the CPU count of that node

V3:
	- re-order the patchset
	- add helper irq_spread_vectors_on_node()
	- handle vector spread correctly in case numvecs > ncpus
	- return -ENOMEM to the API's caller

V2:
	- add patch 3
	- start allocating vectors from the node with the fewest CPUs,
	  so every node is guaranteed to be allocated at least one vector
	- avoid cross-node spread

Ming Lei (2):
  genirq/affinity: Improve __irq_build_affinity_masks()
  genirq/affinity: Spread vectors on node according to nr_cpu ratio

 kernel/irq/affinity.c | 215 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 194 insertions(+), 21 deletions(-)

Cc: Jens Axboe
Cc: Christoph Hellwig
Cc: Keith Busch
Cc: linux-nvme@lists.infradead.org
Cc: Jon Derrick
-- 
2.20.1