Date: Mon, 18 Mar 2019 14:21:50 +0800
From: Peter Xu
To: Christoph Hellwig
Cc: Thomas Gleixner, Jason Wang, Luiz Capitulino, Linux Kernel Mailing List, "Michael S. Tsirkin"
Subject: Virtio-scsi multiqueue irq affinity
Message-ID: <20190318062150.GC6654@xz-x1>

Hi, Christoph & all,

I noticed that starting from commit 0d9f0a52c8b9 ("virtio_scsi: use virtio
IRQ affinity", 2017-02-27) the virtio-scsi driver uses a new way (via
irq_create_affinity_masks()) to automatically initialize the IRQ affinities
of its multi-queues.  This is different from all the other virtio devices
(like virtio-net, which still uses virtqueue_set_affinity(), which is in
turn irq_set_affinity_hint()).

Firstly, this will definitely break some userspace programs: scripts that
used to set the bindings explicitly will now simply fail with -EIO every
time they echo to /proc/irq/N/smp_affinity of any of the multi-queues
(see write_irq_affinity()).

Is there any specific reason to do it the new way?  AFAIU we should still
allow system admins to decide what to do for such configurations, e.g.,
what if we only want to provision half of the CPU resources to handle IRQs
for a specific virtio-scsi controller?  We won't be able to achieve that
with the current policy.

Or could this be a question for the IRQ subsystem
(irq_create_affinity_masks()) in general?  Are there any special
considerations behind the big picture?

I believe I must have missed some context here and there, but I'd like to
raise the question anyway.  If the new way is preferred and attempted,
maybe it would be worth spreading the idea to the rest of the virtio
drivers that support multi-queues as well.

Thanks,

-- 
Peter Xu
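[For illustration, the admin workflow described above could be sketched
roughly as below.  The IRQ numbers and the CPU count are hypothetical; on a
kernel where these IRQs carry managed affinities (post-0d9f0a52c8b9), the
write to smp_affinity is expected to fail with -EIO.]

```shell
#!/bin/sh
# Sketch: try to pin each virtio-scsi queue IRQ onto the first half of
# the online CPUs, as an admin script might have done before managed
# affinities.  IRQ numbers 40-43 and nr_cpus=8 are made-up examples.

nr_cpus=8                                   # assume 8 online CPUs
half=$(( nr_cpus / 2 ))
mask=$(printf '%x' $(( (1 << half) - 1 )))  # CPUs 0..3 -> mask "f"

for irq in 40 41 42 43; do                  # hypothetical queue IRQs
    if ! echo "$mask" > "/proc/irq/$irq/smp_affinity" 2>/dev/null; then
        echo "irq $irq: write failed (managed affinity?)"
    fi
done
```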