From: Paul Durrant
Date: Mon, 14 Oct 2019 11:32:40 +0100
Subject: Re: [PATCH 2/2] xen/netback: cleanup init and deinit code
To: Juergen Gross
Cc: xen-devel, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Wei Liu, Paul Durrant, "David S. Miller"

On Mon, 14 Oct 2019 at 10:09, Juergen Gross wrote:
>
> Do some cleanup of the netback init and deinit code:
>
> - add an omnipotent queue deinit function usable from
>   xenvif_disconnect_data() and the error path of xenvif_connect_data()
> - only install the irq handlers after initializing all relevant items
>   (especially the kthreads related to the queue)
> - there is no need to use get_task_struct() after creating a kthread
>   and using put_task_struct() again after having stopped it.
> - use kthread_run() instead of kthread_create() to spare the call of
>   wake_up_process().

I guess the reason it was done that way was to ensure that queue->task
and queue->dealloc_task would be set before the relevant threads
executed, but I don't see anything relying on this, so I guess the
change is safe.

The rest of it looks fine.
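To illustrate the ordering point for anyone following along: here is a minimal userspace analogue using pthreads (NOT the kernel kthread API — the names `worker`, `ran` and `run_demo()` are invented for the sketch). With kthread_run(), as with pthread_create() here, the new thread may begin executing before the caller has stored its handle, so the thread itself must never rely on reading that handle.

```c
#include <pthread.h>

static pthread_t worker;   /* analogue of queue->task */
static int ran;            /* observable side effect of the worker */

static void *worker_fn(void *arg)
{
	(void)arg;
	/* Safe: the worker never reads the global 'worker' handle,
	 * which may not have been assigned yet when this runs. */
	ran = 1;
	return NULL;
}

/* Create-and-run in one step, as kthread_run() does, then store the
 * handle afterwards; the thread may already be executing by then. */
void run_demo(void)
{
	pthread_t t;

	if (pthread_create(&t, NULL, worker_fn, NULL) != 0)
		return;
	worker = t;        /* this assignment races with the worker starting */
	pthread_join(worker, NULL);
}
```

(Compile with `-lpthread`.) The old kthread_create()+wake_up_process() sequence avoided this window by assigning queue->task before waking the thread; the patch is safe precisely because, as noted above, nothing in the woken threads depends on those fields.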
>
> Signed-off-by: Juergen Gross

Reviewed-by: Paul Durrant

> ---
>  drivers/net/xen-netback/interface.c | 114 +++++++++++++++++-------------------
>  1 file changed, 54 insertions(+), 60 deletions(-)
>
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index 103ed00775eb..68dd7bb07ca6 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -626,6 +626,38 @@ int xenvif_connect_ctrl(struct xenvif *vif, grant_ref_t ring_ref,
>  	return err;
>  }
>
> +static void xenvif_disconnect_queue(struct xenvif_queue *queue)
> +{
> +	if (queue->tx_irq) {
> +		unbind_from_irqhandler(queue->tx_irq, queue);
> +		if (queue->tx_irq == queue->rx_irq)
> +			queue->rx_irq = 0;
> +		queue->tx_irq = 0;
> +	}
> +
> +	if (queue->rx_irq) {
> +		unbind_from_irqhandler(queue->rx_irq, queue);
> +		queue->rx_irq = 0;
> +	}
> +
> +	if (queue->task) {
> +		kthread_stop(queue->task);
> +		queue->task = NULL;
> +	}
> +
> +	if (queue->dealloc_task) {
> +		kthread_stop(queue->dealloc_task);
> +		queue->dealloc_task = NULL;
> +	}
> +
> +	if (queue->napi.poll) {
> +		netif_napi_del(&queue->napi);
> +		queue->napi.poll = NULL;
> +	}
> +
> +	xenvif_unmap_frontend_data_rings(queue);
> +}
> +
>  int xenvif_connect_data(struct xenvif_queue *queue,
>  			unsigned long tx_ring_ref,
>  			unsigned long rx_ring_ref,
> @@ -651,13 +683,27 @@ int xenvif_connect_data(struct xenvif_queue *queue,
>  	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
>  		       XENVIF_NAPI_WEIGHT);
>
> +	queue->stalled = true;
> +
> +	task = kthread_run(xenvif_kthread_guest_rx, queue,
> +			   "%s-guest-rx", queue->name);
> +	if (IS_ERR(task))
> +		goto kthread_err;
> +	queue->task = task;
> +
> +	task = kthread_run(xenvif_dealloc_kthread, queue,
> +			   "%s-dealloc", queue->name);
> +	if (IS_ERR(task))
> +		goto kthread_err;
> +	queue->dealloc_task = task;
> +
>  	if (tx_evtchn == rx_evtchn) {
>  		/* feature-split-event-channels == 0 */
>  		err = bind_interdomain_evtchn_to_irqhandler(
>  			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
>  			queue->name, queue);
>  		if (err < 0)
> -			goto err_unmap;
> +			goto err;
>  		queue->tx_irq = queue->rx_irq = err;
>  		disable_irq(queue->tx_irq);
>  	} else {
> @@ -668,7 +714,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
>  			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
>  			queue->tx_irq_name, queue);
>  		if (err < 0)
> -			goto err_unmap;
> +			goto err;
>  		queue->tx_irq = err;
>  		disable_irq(queue->tx_irq);
>
> @@ -678,47 +724,18 @@ int xenvif_connect_data(struct xenvif_queue *queue,
>  			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
>  			queue->rx_irq_name, queue);
>  		if (err < 0)
> -			goto err_tx_unbind;
> +			goto err;
>  		queue->rx_irq = err;
>  		disable_irq(queue->rx_irq);
>  	}
>
> -	queue->stalled = true;
> -
> -	task = kthread_create(xenvif_kthread_guest_rx,
> -			      (void *)queue, "%s-guest-rx", queue->name);
> -	if (IS_ERR(task)) {
> -		pr_warn("Could not allocate kthread for %s\n", queue->name);
> -		err = PTR_ERR(task);
> -		goto err_rx_unbind;
> -	}
> -	queue->task = task;
> -	get_task_struct(task);
> -
> -	task = kthread_create(xenvif_dealloc_kthread,
> -			      (void *)queue, "%s-dealloc", queue->name);
> -	if (IS_ERR(task)) {
> -		pr_warn("Could not allocate kthread for %s\n", queue->name);
> -		err = PTR_ERR(task);
> -		goto err_rx_unbind;
> -	}
> -	queue->dealloc_task = task;
> -
> -	wake_up_process(queue->task);
> -	wake_up_process(queue->dealloc_task);
> -
>  	return 0;
>
> -err_rx_unbind:
> -	unbind_from_irqhandler(queue->rx_irq, queue);
> -	queue->rx_irq = 0;
> -err_tx_unbind:
> -	unbind_from_irqhandler(queue->tx_irq, queue);
> -	queue->tx_irq = 0;
> -err_unmap:
> -	xenvif_unmap_frontend_data_rings(queue);
> -	netif_napi_del(&queue->napi);
> +kthread_err:
> +	pr_warn("Could not allocate kthread for %s\n", queue->name);
> +	err = PTR_ERR(task);
>  err:
> +	xenvif_disconnect_queue(queue);
>  	return err;
>  }
>
> @@ -746,30 +763,7 @@ void xenvif_disconnect_data(struct xenvif *vif)
>  	for (queue_index = 0; queue_index < num_queues; ++queue_index) {
>  		queue = &vif->queues[queue_index];
>
> -		netif_napi_del(&queue->napi);
> -
> -		if (queue->task) {
> -			kthread_stop(queue->task);
> -			put_task_struct(queue->task);
> -			queue->task = NULL;
> -		}
> -
> -		if (queue->dealloc_task) {
> -			kthread_stop(queue->dealloc_task);
> -			queue->dealloc_task = NULL;
> -		}
> -
> -		if (queue->tx_irq) {
> -			if (queue->tx_irq == queue->rx_irq)
> -				unbind_from_irqhandler(queue->tx_irq, queue);
> -			else {
> -				unbind_from_irqhandler(queue->tx_irq, queue);
> -				unbind_from_irqhandler(queue->rx_irq, queue);
> -			}
> -			queue->tx_irq = 0;
> -		}
> -
> -		xenvif_unmap_frontend_data_rings(queue);
> +		xenvif_disconnect_queue(queue);
>  	}
>
>  	xenvif_mcast_addr_list_free(vif);
> --
> 2.16.4
>