From: Yongji Xie
Date: Tue, 28 Mar 2023 11:33:00 +0800
Subject: Re: [PATCH v4 03/11] virtio-vdpa: Support interrupt affinity spreading mechanism
To: Jason Wang
Cc: "Michael S. Tsirkin", Thomas Gleixner, Christoph Hellwig, virtualization, linux-kernel
References: <20230323053043.35-1-xieyongji@bytedance.com> <20230323053043.35-4-xieyongji@bytedance.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Tsirkin" , Thomas Gleixner , Christoph Hellwig , virtualization , linux-kernel Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Spam-Status: No, score=-0.2 required=5.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Tue, Mar 28, 2023 at 11:14=E2=80=AFAM Jason Wang w= rote: > > On Tue, Mar 28, 2023 at 11:03=E2=80=AFAM Yongji Xie wrote: > > > > On Fri, Mar 24, 2023 at 2:28=E2=80=AFPM Jason Wang wrote: > > > > > > On Thu, Mar 23, 2023 at 1:31=E2=80=AFPM Xie Yongji wrote: > > > > > > > > To support interrupt affinity spreading mechanism, > > > > this makes use of group_cpus_evenly() to create > > > > an irq callback affinity mask for each virtqueue > > > > of vdpa device. Then we will unify set_vq_affinity > > > > callback to pass the affinity to the vdpa device driver. > > > > > > > > Signed-off-by: Xie Yongji > > > > > > Thinking hard of all the logics, I think I've found something interes= ting. > > > > > > Commit ad71473d9c437 ("virtio_blk: use virtio IRQ affinity") tries to > > > pass irq_affinity to transport specific find_vqs(). This seems a > > > layer violation since driver has no knowledge of > > > > > > 1) whether or not the callback is based on an IRQ > > > 2) whether or not the device is a PCI or not (the details are hided b= y > > > the transport driver) > > > 3) how many vectors could be used by a device > > > > > > This means the driver can't actually pass a real affinity masks so th= e > > > commit passes a zero irq affinity structure as a hint in fact, so the > > > PCI layer can build a default affinity based that groups cpus evenly > > > based on the number of MSI-X vectors (the core logic is the > > > group_cpus_evenly). I think we should fix this by replacing the > > > irq_affinity structure with > > > > > > 1) a boolean like auto_cb_spreading > > > > > > or > > > > > > 2) queue to cpu mapping > > > > > > > But only the driver knows which queues are used in the control path > > which don't need the automatic irq affinity assignment. > > Is this knowledge awarded by the transport driver now? > This knowledge is awarded by the device driver rather than the transport dr= iver. E.g. virtio-scsi uses: struct irq_affinity desc =3D { .pre_vectors =3D 2 }; // vq0 is control queue, vq1 is event queue > E.g virtio-blk uses: > > struct irq_affinity desc =3D { 0, }; > > Atleast we can tell the transport driver which vq requires automatic > irq affinity. > I think that is what the current implementation does. > > So I think the > > irq_affinity structure can only be created by device drivers and > > passed to the virtio-pci/virtio-vdpa driver. > > This could be not easy since the driver doesn't even know how many > interrupts will be used by the transport driver, so it can't built the > actual affinity structure. > The actual affinity mask is built by the transport driver, device driver only passes a hint on which queues don't need the automatic irq affinity assignment. Thanks, Yongji