From: Wu Zongyong <wuzongyong@linux.alibaba.com>
To: wuzongyong@linux.alibaba.com, jasowang@redhat.com, virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org, mst@redhat.com
Cc: wei.yang1@linux.alibaba.com
Subject: [PATCH v6 3/8] vp_vdpa: add vq irq offloading support
Date: Fri, 22 Oct 2021 10:44:18 +0800
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-ID: <linux-kernel.vger.kernel.org>

This patch implements the get_vq_irq() callback for virtio pci devices
to allow irq offloading.
Signed-off-by: Wu Zongyong <wuzongyong@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vdpa/virtio_pci/vp_vdpa.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
index 5bcd00246d2e..e3ff7875e123 100644
--- a/drivers/vdpa/virtio_pci/vp_vdpa.c
+++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
@@ -76,6 +76,17 @@ static u8 vp_vdpa_get_status(struct vdpa_device *vdpa)
 	return vp_modern_get_status(mdev);
 }
 
+static int vp_vdpa_get_vq_irq(struct vdpa_device *vdpa, u16 idx)
+{
+	struct vp_vdpa *vp_vdpa = vdpa_to_vp(vdpa);
+	int irq = vp_vdpa->vring[idx].irq;
+
+	if (irq == VIRTIO_MSI_NO_VECTOR)
+		return -EINVAL;
+
+	return irq;
+}
+
 static void vp_vdpa_free_irq(struct vp_vdpa *vp_vdpa)
 {
 	struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev;
@@ -427,6 +438,7 @@ static const struct vdpa_config_ops vp_vdpa_ops = {
 	.get_config	= vp_vdpa_get_config,
 	.set_config	= vp_vdpa_set_config,
 	.set_config_cb	= vp_vdpa_set_config_cb,
+	.get_vq_irq	= vp_vdpa_get_vq_irq,
 };
 
 static void vp_vdpa_free_irq_vectors(void *data)
-- 
2.31.1
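For readers outside the kernel tree, the contract of the new callback can be
sketched as a standalone userspace mock. Everything here (the struct layout,
the vring array size, mock_* names) is a simplified stand-in for the kernel
definitions, not the actual driver code; the point is only the error
convention: return the per-virtqueue irq number so the vDPA bus can offload
it, or -EINVAL when no MSI-X vector has been assigned.

```c
#include <errno.h>
#include <stdint.h>

/* Sentinel used by virtio-pci to mean "no MSI-X vector assigned". */
#define VIRTIO_MSI_NO_VECTOR 0xffff

/* Simplified stand-ins for the kernel's vp_vdpa structures. */
struct mock_vring {
	int irq;	/* irq number, or VIRTIO_MSI_NO_VECTOR */
};

struct mock_vp_vdpa {
	struct mock_vring vring[4];
};

/* Mirrors the shape of vp_vdpa_get_vq_irq() from the patch:
 * a valid irq is handed back for offloading; an unassigned
 * vector is reported as -EINVAL so the caller falls back to
 * the non-offloaded interrupt path. */
static int mock_get_vq_irq(struct mock_vp_vdpa *vp_vdpa, uint16_t idx)
{
	int irq = vp_vdpa->vring[idx].irq;

	if (irq == VIRTIO_MSI_NO_VECTOR)
		return -EINVAL;

	return irq;
}
```

Note that the callback deliberately returns a negative errno rather than the
sentinel itself, matching how other vdpa_config_ops callbacks signal failure.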