Date: Sun, 24 Mar 2019 12:31:44 +0100
From: Lukas Wunner
To: Mika Westerberg
Cc: linux-kernel@vger.kernel.org, Michael Jamet, Yehezkel Bernat,
	Andreas Noever, "David S. Miller", Andy Shevchenko,
	netdev@vger.kernel.org
Miller" , Andy Shevchenko , netdev@vger.kernel.org Subject: Re: [PATCH v2 17/28] thunderbolt: Add support for full PCIe daisy chains Message-ID: <20190324113144.wubme46hby7rj6r2@wunner.de> References: <20190206131738.43696-1-mika.westerberg@linux.intel.com> <20190206131738.43696-18-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20190206131738.43696-18-mika.westerberg@linux.intel.com> User-Agent: NeoMutt/20170113 (1.7.2) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Wed, Feb 06, 2019 at 04:17:27PM +0300, Mika Westerberg wrote: > @@ -63,6 +71,16 @@ static void tb_discover_tunnels(struct tb_switch *sw) > } > } > > +static void tb_switch_authorize(struct work_struct *work) > +{ > + struct tb_switch *sw = container_of(work, typeof(*sw), work); > + > + mutex_lock(&sw->tb->lock); > + if (!sw->is_unplugged) > + tb_domain_approve_switch(sw->tb, sw); > + mutex_unlock(&sw->tb->lock); > +} > + You're establishing PCI tunnels by having tb_scan_port() schedule tb_switch_authorize() via a work item, which in turn calls tb_domain_approve_switch(), which in turn calls tb_approve_switch(), which in turn calls tb_tunnel_pci(). This seems kind of like a roundabout way of doing things, in particular since all switches are hardcoded to be automatically authorized. Why don't you just invoke tb_tunnel_pci() directly from tb_scan_port()? And why is the work item needed? I'm also confused that the work item has been present in struct tb_switch for 2 years but is put to use only now. > -static void tb_activate_pcie_devices(struct tb *tb) > +static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw) > { [...] > + /* > + * Look up available down port. Since we are chaining, it is > + * typically found right above this switch. > + */ > + down = NULL; > + parent_sw = tb_to_switch(sw->dev.parent); > + while (parent_sw) { > + down = tb_find_unused_down_port(parent_sw); > + if (down) > + break; > + parent_sw = tb_to_switch(parent_sw->dev.parent); > + } The problem I see here is that there's no guarantee that the switch on which you're selecting a down port is actually itself connected with a PCI tunnel. E.g., allocation of a tunnel to that parent switch may have failed. In that case you end up establishing a tunnel between that parent switch and the newly connected switch but the tunnel is of no use. It would seem more logical to me to walk down the chain of newly connected switches and try to establish a PCI tunnel to each of them in order. By deferring tunnel establishment to a work item, I think the tunnels may be established in an arbitrary order, right? Thanks, Lukas