Hi,
The software connection manager (drivers/thunderbolt/tb.c) is used on older
Apple hardware with Light Ridge, Cactus Ridge or Falcon Ridge controllers
to create PCIe tunnels when a Thunderbolt device is connected. Currently
only one PCIe tunnel is supported. On newer Alpine Ridge based Apple
systems the driver starts the firmware, which then takes care of creating
the tunnels.
This series improves the software connection manager so that it will
support:
- Full PCIe daisy chains (up to 6 devices)
- Display Port tunneling
- P2P networking
We also add support for Titan Ridge based Apple systems where we can use
the same flows as with Alpine Ridge to start the firmware.
Note that in order to prevent possible DMA attacks on these systems we
should make sure the IOMMU is enabled. One option is to force
dmar_platform_optin() to return true on Apple hardware. However, that is
not part of this series. I'm trusting people using Linux on such systems
to take care of it. :-)
Previous version of the patch series can be viewed here:
https://lkml.org/lkml/2019/1/29/924
Changes from v1:
* Added ACK from David
* Add constant (TMU_ACCESS_EN) for BIT(20) when TMU access is enabled. We
keep it in cap.c close to the LR/ER workaround. Also we enable/disable
only during the capability walk. If it turns out we need to have it enabled
elsewhere we can move it to switch.c and enable just once during
switch enumeration.
* Use 0 to mean no cap_adap instead of a negative value. This follows
cap_phy.
* Use correct PCI IDs (_BRIDGE) in the last patch where we start firmware
on Titan Ridge. It wrongly used NHI PCI IDs in v1.
Mika Westerberg (28):
net: thunderbolt: Unregister ThunderboltIP protocol handler when suspending
thunderbolt: Do not allocate switch if depth is greater than 6
thunderbolt: Enable TMU access when accessing port space on legacy devices
thunderbolt: Add dummy read after port capability list walk on Light Ridge
thunderbolt: Move LC specific functionality into a separate file
thunderbolt: Configure lanes when switch is initialized
thunderbolt: Set sleep bit when suspending switch
thunderbolt: Properly disable path
thunderbolt: Cache adapter specific capability offset into struct port
thunderbolt: Rename tunnel_pci to tunnel
thunderbolt: Generalize tunnel creation functionality
thunderbolt: Add functions for allocating and releasing hop IDs
thunderbolt: Add helper function to iterate from one port to another
thunderbolt: Extend tunnel creation to more than 2 adjacent switches
thunderbolt: Deactivate all paths before restarting them
thunderbolt: Discover preboot PCIe paths the boot firmware established
thunderbolt: Add support for full PCIe daisy chains
thunderbolt: Scan only valid NULL adapter ports in hotplug
thunderbolt: Generalize port finding routines to support all port types
thunderbolt: Rework NFC credits handling
thunderbolt: Add support for Display Port tunnels
thunderbolt: Run tb_xdp_handle_request() in system workqueue
thunderbolt: Add XDomain UUID exchange support
thunderbolt: Add support for DMA tunnels
thunderbolt: Make tb_switch_alloc() return ERR_PTR()
thunderbolt: Add support for XDomain connections
thunderbolt: Make rest of the logging to happen at debug level
thunderbolt: Start firmware on Titan Ridge Apple systems
drivers/net/thunderbolt.c | 3 +
drivers/thunderbolt/Makefile | 4 +-
drivers/thunderbolt/cap.c | 90 +++-
drivers/thunderbolt/ctl.c | 2 +-
drivers/thunderbolt/icm.c | 15 +-
drivers/thunderbolt/lc.c | 179 ++++++++
drivers/thunderbolt/path.c | 326 +++++++++++++--
drivers/thunderbolt/switch.c | 466 ++++++++++++++++++---
drivers/thunderbolt/tb.c | 529 ++++++++++++++++++------
drivers/thunderbolt/tb.h | 67 ++-
drivers/thunderbolt/tb_msgs.h | 11 +
drivers/thunderbolt/tb_regs.h | 50 ++-
drivers/thunderbolt/tunnel.c | 681 +++++++++++++++++++++++++++++++
drivers/thunderbolt/tunnel.h | 75 ++++
drivers/thunderbolt/tunnel_pci.c | 226 ----------
drivers/thunderbolt/tunnel_pci.h | 31 --
drivers/thunderbolt/xdomain.c | 142 ++++++-
include/linux/thunderbolt.h | 8 +
18 files changed, 2389 insertions(+), 516 deletions(-)
create mode 100644 drivers/thunderbolt/lc.c
create mode 100644 drivers/thunderbolt/tunnel.c
create mode 100644 drivers/thunderbolt/tunnel.h
delete mode 100644 drivers/thunderbolt/tunnel_pci.c
delete mode 100644 drivers/thunderbolt/tunnel_pci.h
--
2.20.1
So far the software connection manager (tb.c) has only supported creating
a single PCIe tunnel; PCIe device daisy chaining has not been supported.
This updates the software connection manager so that it can now create
PCIe tunnels for a full chain of six devices.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/tb.c | 174 +++++++++++++++++++++++----------------
1 file changed, 104 insertions(+), 70 deletions(-)
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 371633e17916..f2b23b290b63 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -1,8 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/*
- * Thunderbolt Cactus Ridge driver - bus logic (NHI independent)
+ * Thunderbolt driver - bus logic (NHI independent)
*
* Copyright (c) 2014 Andreas Noever <[email protected]>
+ * Copyright (C) 2019, Intel Corporation
*/
#include <linux/slab.h>
@@ -50,8 +51,15 @@ static void tb_discover_tunnels(struct tb_switch *sw)
}
/* Find and add existing tunnels */
- if (tunnel)
+ if (tunnel) {
+ struct tb_port *p;
+
+ /* Firmware added switches are always authorized */
+ tb_for_each_port(p, tunnel->src_port, tunnel->dst_port)
+ p->sw->boot = true;
+
list_add_tail(&tunnel->list, &tcm->tunnel_list);
+ }
}
for (i = 1; i <= sw->config.max_port_number; i++) {
@@ -63,6 +71,16 @@ static void tb_discover_tunnels(struct tb_switch *sw)
}
}
+static void tb_switch_authorize(struct work_struct *work)
+{
+ struct tb_switch *sw = container_of(work, typeof(*sw), work);
+
+ mutex_lock(&sw->tb->lock);
+ if (!sw->is_unplugged)
+ tb_domain_approve_switch(sw->tb, sw);
+ mutex_unlock(&sw->tb->lock);
+}
+
static void tb_scan_port(struct tb_port *port);
/**
@@ -80,6 +98,7 @@ static void tb_scan_switch(struct tb_switch *sw)
*/
static void tb_scan_port(struct tb_port *port)
{
+ struct tb_cm *tcm = tb_priv(port->sw->tb);
struct tb_switch *sw;
if (tb_is_upstream_port(port))
return;
@@ -106,6 +125,14 @@ static void tb_scan_port(struct tb_port *port)
return;
}
+ /*
+ * Do not send uevents until we have discovered all existing
+ * tunnels and know which switches were authorized already by
+ * the boot firmware.
+ */
+ if (!tcm->hotplug_active)
+ dev_set_uevent_suppress(&sw->dev, true);
+
sw->authorized = true;
if (tb_switch_add(sw)) {
@@ -113,6 +140,9 @@ static void tb_scan_port(struct tb_port *port)
return;
}
+ INIT_WORK(&sw->work, tb_switch_authorize);
+ queue_work(sw->tb->wq, &sw->work);
+
port->remote = tb_upstream_port(sw);
tb_upstream_port(sw)->remote = port;
tb_scan_switch(sw);
@@ -149,6 +179,7 @@ static void tb_free_unplugged_children(struct tb_switch *sw)
if (!port->remote)
continue;
if (port->remote->sw->is_unplugged) {
+ cancel_work_sync(&port->remote->sw->work);
tb_switch_remove(port->remote->sw);
port->remote = NULL;
} else {
@@ -197,72 +228,58 @@ static struct tb_port *tb_find_unused_down_port(struct tb_switch *sw)
return NULL;
}
-/**
- * tb_activate_pcie_devices() - scan for and activate PCIe devices
- *
- * This method is somewhat ad hoc. For now it only supports one device
- * per port and only devices at depth 1.
- */
-static void tb_activate_pcie_devices(struct tb *tb)
+static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
{
- int i;
- int cap;
- u32 data;
- struct tb_switch *sw;
- struct tb_port *up_port;
- struct tb_port *down_port;
- struct tb_tunnel *tunnel;
struct tb_cm *tcm = tb_priv(tb);
+ struct tb_switch *parent_sw;
+ struct tb_port *up, *down;
+ struct tb_tunnel *tunnel;
- /* scan for pcie devices at depth 1*/
- for (i = 1; i <= tb->root_switch->config.max_port_number; i++) {
- if (tb_is_upstream_port(&tb->root_switch->ports[i]))
- continue;
- if (tb->root_switch->ports[i].config.type != TB_TYPE_PORT)
- continue;
- if (!tb->root_switch->ports[i].remote)
- continue;
- sw = tb->root_switch->ports[i].remote->sw;
- up_port = tb_find_pci_up_port(sw);
- if (!up_port) {
- tb_sw_info(sw, "no PCIe devices found, aborting\n");
- continue;
- }
+ up = tb_find_pci_up_port(sw);
+ if (!up)
+ return 0;
- /* check whether port is already activated */
- cap = up_port->cap_adap;
- if (!cap)
- continue;
- if (tb_port_read(up_port, &data, TB_CFG_PORT, cap, 1))
- continue;
- if (data & 0x80000000) {
- tb_port_info(up_port,
- "PCIe port already activated, aborting\n");
- continue;
- }
+ /*
+ * Look up available down port. Since we are chaining, it is
+ * typically found right above this switch.
+ */
+ down = NULL;
+ parent_sw = tb_to_switch(sw->dev.parent);
+ while (parent_sw) {
+ down = tb_find_unused_down_port(parent_sw);
+ if (down)
+ break;
+ parent_sw = tb_to_switch(parent_sw->dev.parent);
+ }
- down_port = tb_find_unused_down_port(tb->root_switch);
- if (!down_port) {
- tb_port_info(up_port,
- "All PCIe down ports are occupied, aborting\n");
- continue;
- }
- tunnel = tb_tunnel_alloc_pci(tb, up_port, down_port);
- if (!tunnel) {
- tb_port_info(up_port,
- "PCIe tunnel allocation failed, aborting\n");
- continue;
- }
+ if (!down)
+ return 0;
- if (tb_tunnel_activate(tunnel)) {
- tb_port_info(up_port,
- "PCIe tunnel activation failed, aborting\n");
- tb_tunnel_free(tunnel);
- continue;
- }
+ tunnel = tb_tunnel_alloc_pci(tb, up, down);
+ if (!tunnel)
+ return -EIO;
- list_add(&tunnel->list, &tcm->tunnel_list);
+ if (tb_tunnel_activate(tunnel)) {
+ tb_port_info(up,
+ "PCIe tunnel activation failed, aborting\n");
+ tb_tunnel_free(tunnel);
+ return -EIO;
}
+ list_add_tail(&tunnel->list, &tcm->tunnel_list);
+
+ return 0;
+}
+
+static int tb_approve_switch(struct tb *tb, struct tb_switch *sw)
+{
+ /*
+ * Already authorized by the boot firmware so no need to do
+ * anything here.
+ */
+ if (sw->boot)
+ return 0;
+
+ return tb_tunnel_pci(tb, sw);
}
/* hotplug handling */
@@ -316,6 +333,7 @@ static void tb_handle_hotplug(struct work_struct *work)
tb_port_info(port, "unplugged\n");
tb_sw_set_unplugged(port->remote->sw);
tb_free_invalid_tunnels(tb);
+ cancel_work_sync(&sw->work);
tb_switch_remove(port->remote->sw);
port->remote = NULL;
} else {
@@ -328,16 +346,8 @@ static void tb_handle_hotplug(struct work_struct *work)
} else {
tb_port_info(port, "hotplug: scanning\n");
tb_scan_port(port);
- if (!port->remote) {
+ if (!port->remote)
tb_port_info(port, "hotplug: no switch found\n");
- } else if (port->remote->sw->config.depth > 1) {
- tb_sw_warn(port->remote->sw,
- "hotplug: chaining not supported\n");
- } else {
- tb_sw_info(port->remote->sw,
- "hotplug: activating pcie devices\n");
- tb_activate_pcie_devices(tb);
- }
}
out:
mutex_unlock(&tb->lock);
@@ -395,6 +405,27 @@ static void tb_stop(struct tb *tb)
tcm->hotplug_active = false; /* signal tb_handle_hotplug to quit */
}
+static int tb_scan_finalize_switch(struct device *dev, void *data)
+{
+ if (tb_is_switch(dev)) {
+ struct tb_switch *sw = tb_to_switch(dev);
+
+ /*
+ * If we found that the switch was already setup by the
+ * boot firmware, mark it as authorized now before we
+ * send uevent to userspace.
+ */
+ if (sw->boot)
+ sw->authorized = 1;
+
+ dev_set_uevent_suppress(dev, false);
+ kobject_uevent(&dev->kobj, KOBJ_ADD);
+ device_for_each_child(dev, NULL, tb_scan_finalize_switch);
+ }
+
+ return 0;
+}
+
static int tb_start(struct tb *tb)
{
struct tb_cm *tcm = tb_priv(tb);
@@ -428,7 +459,9 @@ static int tb_start(struct tb *tb)
tb_scan_switch(tb->root_switch);
/* Find out tunnels created by the boot firmware */
tb_discover_tunnels(tb->root_switch);
- tb_activate_pcie_devices(tb);
+ /* Make the discovered switches available to the userspace */
+ device_for_each_child(&tb->root_switch->dev, NULL,
+ tb_scan_finalize_switch);
/* Allow tb_handle_hotplug to progress events */
tcm->hotplug_active = true;
@@ -483,6 +516,7 @@ static const struct tb_cm_ops tb_cm_ops = {
.suspend_noirq = tb_suspend_noirq,
.resume_noirq = tb_resume_noirq,
.handle_event = tb_handle_event,
+ .approve_switch = tb_approve_switch,
};
struct tb *tb_probe(struct tb_nhi *nhi)
--
2.20.1
Now that the driver can handle every possible tunnel type, there is no
point in logging everything at info level, so make these messages happen
at debug level instead.
While at it, remove a duplicated tunnel activation log message
(tb_tunnel_activate() calls tb_tunnel_restart(), which prints the same
message).
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/ctl.c | 2 +-
drivers/thunderbolt/icm.c | 2 +-
drivers/thunderbolt/path.c | 30 +++++++++++++++---------------
drivers/thunderbolt/switch.c | 19 +++++++++----------
drivers/thunderbolt/tb.c | 11 +++++------
drivers/thunderbolt/tunnel.c | 10 ++++------
6 files changed, 35 insertions(+), 39 deletions(-)
diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index 73b386de4d15..2427d73be731 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -720,7 +720,7 @@ int tb_cfg_error(struct tb_ctl *ctl, u64 route, u32 port,
.port = port,
.error = error,
};
- tb_ctl_info(ctl, "resetting error on %llx:%x.\n", route, port);
+ tb_ctl_dbg(ctl, "resetting error on %llx:%x.\n", route, port);
return tb_ctl_tx(ctl, &pkg, sizeof(pkg), TB_CFG_PKG_ERROR);
}
diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
index e28a4255d56a..c44906fac2a4 100644
--- a/drivers/thunderbolt/icm.c
+++ b/drivers/thunderbolt/icm.c
@@ -1559,7 +1559,7 @@ static int icm_firmware_start(struct tb *tb, struct tb_nhi *nhi)
if (val & REG_FW_STS_ICM_EN)
return 0;
- dev_info(&nhi->pdev->dev, "starting ICM firmware\n");
+ dev_dbg(&nhi->pdev->dev, "starting ICM firmware\n");
ret = icm_firmware_reset(tb, nhi);
if (ret)
diff --git a/drivers/thunderbolt/path.c b/drivers/thunderbolt/path.c
index afdb667fcc0d..1aefdef403ef 100644
--- a/drivers/thunderbolt/path.c
+++ b/drivers/thunderbolt/path.c
@@ -354,12 +354,12 @@ void tb_path_deactivate(struct tb_path *path)
tb_WARN(path->tb, "trying to deactivate an inactive path\n");
return;
}
- tb_info(path->tb,
- "deactivating path from %llx:%x to %llx:%x\n",
- tb_route(path->hops[0].in_port->sw),
- path->hops[0].in_port->port,
- tb_route(path->hops[path->path_length - 1].out_port->sw),
- path->hops[path->path_length - 1].out_port->port);
+ tb_dbg(path->tb,
+ "deactivating path from %llx:%x to %llx:%x\n",
+ tb_route(path->hops[0].in_port->sw),
+ path->hops[0].in_port->port,
+ tb_route(path->hops[path->path_length - 1].out_port->sw),
+ path->hops[path->path_length - 1].out_port->port);
__tb_path_deactivate_hops(path, 0);
__tb_path_deallocate_nfc(path, 0);
path->activated = false;
@@ -382,12 +382,12 @@ int tb_path_activate(struct tb_path *path)
return -EINVAL;
}
- tb_info(path->tb,
- "activating path from %llx:%x to %llx:%x\n",
- tb_route(path->hops[0].in_port->sw),
- path->hops[0].in_port->port,
- tb_route(path->hops[path->path_length - 1].out_port->sw),
- path->hops[path->path_length - 1].out_port->port);
+ tb_dbg(path->tb,
+ "activating path from %llx:%x to %llx:%x\n",
+ tb_route(path->hops[0].in_port->sw),
+ path->hops[0].in_port->port,
+ tb_route(path->hops[path->path_length - 1].out_port->sw),
+ path->hops[path->path_length - 1].out_port->port);
/* Clear counters. */
for (i = path->path_length - 1; i >= 0; i--) {
@@ -438,8 +438,8 @@ int tb_path_activate(struct tb_path *path)
& out_mask;
hop.unknown3 = 0;
- tb_port_info(path->hops[i].in_port, "Writing hop %d, index %d",
- i, path->hops[i].in_hop_index);
+ tb_port_dbg(path->hops[i].in_port, "Writing hop %d, index %d",
+ i, path->hops[i].in_hop_index);
tb_dump_hop(path->hops[i].in_port, &hop);
res = tb_port_write(path->hops[i].in_port, &hop, TB_CFG_HOPS,
2 * path->hops[i].in_hop_index, 2);
@@ -450,7 +450,7 @@ int tb_path_activate(struct tb_path *path)
}
}
path->activated = true;
- tb_info(path->tb, "path activation complete\n");
+ tb_dbg(path->tb, "path activation complete\n");
return 0;
err:
tb_WARN(path->tb, "path activation failed\n");
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 79391c3d834e..2b84b770b8d2 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -500,23 +500,22 @@ int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged)
if (state < 0)
return state;
if (state == TB_PORT_DISABLED) {
- tb_port_info(port, "is disabled (state: 0)\n");
+ tb_port_dbg(port, "is disabled (state: 0)\n");
return 0;
}
if (state == TB_PORT_UNPLUGGED) {
if (wait_if_unplugged) {
/* used during resume */
- tb_port_info(port,
- "is unplugged (state: 7), retrying...\n");
+ tb_port_dbg(port,
+ "is unplugged (state: 7), retrying...\n");
msleep(100);
continue;
}
- tb_port_info(port, "is unplugged (state: 7)\n");
+ tb_port_dbg(port, "is unplugged (state: 7)\n");
return 0;
}
if (state == TB_PORT_UP) {
- tb_port_info(port,
- "is connected, link is up (state: 2)\n");
+ tb_port_dbg(port, "is connected, link is up (state: 2)\n");
return 1;
}
@@ -524,9 +523,9 @@ int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged)
* After plug-in the state is TB_PORT_CONNECTING. Give it some
* time.
*/
- tb_port_info(port,
- "is connected, link is not up (state: %d), retrying...\n",
- state);
+ tb_port_dbg(port,
+ "is connected, link is not up (state: %d), retrying...\n",
+ state);
msleep(100);
}
tb_port_warn(port,
@@ -592,7 +591,7 @@ int tb_port_set_initial_credits(struct tb_port *port, u32 credits)
int tb_port_clear_counter(struct tb_port *port, int counter)
{
u32 zero[3] = { 0, 0, 0 };
- tb_port_info(port, "clearing counter %d\n", counter);
+ tb_port_dbg(port, "clearing counter %d\n", counter);
return tb_port_write(port, zero, TB_CFG_COUNTERS, 3 * counter, 3);
}
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 94dd1fd12967..3fda11d45a03 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -514,18 +514,17 @@ static void tb_handle_hotplug(struct work_struct *work)
} else if (tb_port_is_dpout(port)) {
tb_teardown_dp(tb, port);
} else {
- tb_port_info(port,
- "got unplug event for disconnected port, ignoring\n");
+ tb_port_dbg(port,
+ "got unplug event for disconnected port, ignoring\n");
}
} else if (port->remote) {
- tb_port_info(port,
- "got plug event for connected port, ignoring\n");
+ tb_port_dbg(port, "got plug event for connected port, ignoring\n");
} else {
if (tb_port_is_null(port)) {
- tb_port_info(port, "hotplug: scanning\n");
+ tb_port_dbg(port, "hotplug: scanning\n");
tb_scan_port(port);
if (!port->remote)
- tb_port_info(port, "hotplug: no switch found\n");
+ tb_port_dbg(port, "hotplug: no switch found\n");
} else if (tb_port_is_dpout(port)) {
tb_tunnel_dp(tb, port);
}
diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index f10a0a15b873..daacd700df6d 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -47,8 +47,8 @@ static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA" };
__TB_TUNNEL_PRINT(tb_WARN, tunnel, fmt, ##arg)
#define tb_tunnel_warn(tunnel, fmt, arg...) \
__TB_TUNNEL_PRINT(tb_warn, tunnel, fmt, ##arg)
-#define tb_tunnel_info(tunnel, fmt, arg...) \
- __TB_TUNNEL_PRINT(tb_info, tunnel, fmt, ##arg)
+#define tb_tunnel_dbg(tunnel, fmt, arg...) \
+ __TB_TUNNEL_PRINT(tb_dbg, tunnel, fmt, ##arg)
static struct tb_tunnel *tb_tunnel_alloc(struct tb *tb, size_t npaths,
enum tb_tunnel_type type)
@@ -610,7 +610,7 @@ int tb_tunnel_restart(struct tb_tunnel *tunnel)
{
int res, i;
- tb_tunnel_info(tunnel, "activating\n");
+ tb_tunnel_dbg(tunnel, "activating\n");
/* Make sure all paths are properly disabled before enable them again */
for (i = 0; i < tunnel->npaths; i++) {
@@ -650,8 +650,6 @@ int tb_tunnel_activate(struct tb_tunnel *tunnel)
{
int i;
- tb_tunnel_info(tunnel, "activating\n");
-
for (i = 0; i < tunnel->npaths; i++) {
if (tunnel->paths[i]->activated) {
tb_tunnel_WARN(tunnel,
@@ -671,7 +669,7 @@ void tb_tunnel_deactivate(struct tb_tunnel *tunnel)
{
int i;
- tb_tunnel_info(tunnel, "deactivating\n");
+ tb_tunnel_dbg(tunnel, "deactivating\n");
if (tunnel->activate)
tunnel->activate(tunnel, false);
--
2.20.1
The Titan Ridge flow to start the firmware is the same as on Alpine
Ridge, so we can do the same on Titan Ridge based Apple systems.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/icm.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
index c44906fac2a4..95cd391bfa8d 100644
--- a/drivers/thunderbolt/icm.c
+++ b/drivers/thunderbolt/icm.c
@@ -1190,6 +1190,8 @@ static struct pci_dev *get_upstream_port(struct pci_dev *pdev)
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_BRIDGE:
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE:
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE:
return parent;
}
--
2.20.1
Display Port tunnels are somewhat more complex than PCIe tunnels as they
require three paths (Video and AUX Rx/Tx). In addition we are not
supposed to create the tunnels immediately when a DP OUT adapter is
enumerated. Instead we need to wait until we get a hotplug event for that
adapter port, or check whether the port has HPD set, before tunnels can
be established. This adds Display Port tunneling support to the software
connection manager.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/switch.c | 111 ++++++++++++++
drivers/thunderbolt/tb.c | 119 ++++++++++++---
drivers/thunderbolt/tb.h | 17 +++
drivers/thunderbolt/tb_regs.h | 22 +++
drivers/thunderbolt/tunnel.c | 277 +++++++++++++++++++++++++++++++++-
drivers/thunderbolt/tunnel.h | 21 +++
6 files changed, 545 insertions(+), 22 deletions(-)
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 7cc1f534e776..a1876dcd1d10 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -760,6 +760,10 @@ bool tb_port_is_enabled(struct tb_port *port)
case TB_TYPE_PCIE_DOWN:
return tb_pci_port_is_enabled(port);
+ case TB_TYPE_DP_HDMI_IN:
+ case TB_TYPE_DP_HDMI_OUT:
+ return tb_dp_port_is_enabled(port);
+
default:
return false;
}
@@ -792,6 +796,113 @@ int tb_pci_port_enable(struct tb_port *port, bool enable)
return tb_port_write(port, &word, TB_CFG_PORT, port->cap_adap, 1);
}
+/**
+ * tb_dp_port_hpd_is_active() - Is HPD already active
+ * @port: DP out port to check
+ *
+ * Checks if the DP OUT adapter port has the HPD bit already set.
+ */
+int tb_dp_port_hpd_is_active(struct tb_port *port)
+{
+ u32 data;
+ int ret;
+
+ ret = tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap + 2, 1);
+ if (ret)
+ return ret;
+
+ return !!(data & TB_DP_HDP);
+}
+
+/**
+ * tb_dp_port_hpd_clear() - Clear HPD from DP IN port
+ * @port: Port to clear HPD
+ *
+ * If the DP IN port has HPD set, this function can be used to clear it.
+ */
+int tb_dp_port_hpd_clear(struct tb_port *port)
+{
+ u32 data;
+ int ret;
+
+ ret = tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap + 3, 1);
+ if (ret)
+ return ret;
+
+ data |= TB_DP_HPDC;
+ return tb_port_write(port, &data, TB_CFG_PORT, port->cap_adap + 3, 1);
+}
+
+/**
+ * tb_dp_port_set_hops() - Set video/aux Hop IDs for DP port
+ * @port: DP IN/OUT port to set hops
+ * @video: Video Hop ID
+ * @aux_tx: AUX TX Hop ID
+ * @aux_rx: AUX RX Hop ID
+ *
+ * Programs specified Hop IDs for DP IN/OUT port.
+ */
+int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
+ unsigned int aux_tx, unsigned int aux_rx)
+{
+ u32 data[2];
+ int ret;
+
+ ret = tb_port_read(port, data, TB_CFG_PORT, port->cap_adap,
+ ARRAY_SIZE(data));
+ if (ret)
+ return ret;
+
+ data[0] &= ~TB_DP_VIDEO_HOPID_MASK;
+ data[1] &= ~(TB_DP_AUX_RX_HOPID_MASK | TB_DP_AUX_TX_HOPID_MASK);
+
+ data[0] |= (video << TB_DP_VIDEO_HOPID_SHIFT) & TB_DP_VIDEO_HOPID_MASK;
+ data[1] |= aux_tx & TB_DP_AUX_TX_HOPID_MASK;
+ data[1] |= (aux_rx << TB_DP_AUX_RX_HOPID_SHIFT) & TB_DP_AUX_RX_HOPID_MASK;
+
+ return tb_port_write(port, data, TB_CFG_PORT, port->cap_adap,
+ ARRAY_SIZE(data));
+}
+
+/**
+ * tb_dp_port_is_enabled() - Is DP adapter port enabled
+ * @port: DP adapter port to check
+ */
+bool tb_dp_port_is_enabled(struct tb_port *port)
+{
+ u32 data;
+
+ if (tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap, 1))
+ return false;
+
+ return !!(data & (TB_DP_VIDEO_EN | TB_DP_AUX_EN));
+}
+
+/**
+ * tb_dp_port_enable() - Enables/disables DP paths of a port
+ * @port: DP IN/OUT port
+ * @enable: Enable/disable DP path
+ *
+ * Once Hop IDs are programmed DP paths can be enabled or disabled by
+ * calling this function.
+ */
+int tb_dp_port_enable(struct tb_port *port, bool enable)
+{
+ u32 data;
+ int ret;
+
+ ret = tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap, 1);
+ if (ret)
+ return ret;
+
+ if (enable)
+ data |= TB_DP_VIDEO_EN | TB_DP_AUX_EN;
+ else
+ data &= ~(TB_DP_VIDEO_EN | TB_DP_AUX_EN);
+
+ return tb_port_write(port, &data, TB_CFG_PORT, port->cap_adap, 1);
+}
+
/* switch utility functions */
static void tb_dump_switch(struct tb *tb, struct tb_regs_switch_header *sw)
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 98c993259759..2565e30cdb96 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -28,6 +28,32 @@ struct tb_cm {
bool hotplug_active;
};
+struct tb_hotplug_event {
+ struct work_struct work;
+ struct tb *tb;
+ u64 route;
+ u8 port;
+ bool unplug;
+};
+
+static void tb_handle_hotplug(struct work_struct *work);
+
+static void tb_queue_hotplug(struct tb *tb, u64 route, u8 port, bool unplug)
+{
+ struct tb_hotplug_event *ev;
+
+ ev = kmalloc(sizeof(*ev), GFP_KERNEL);
+ if (!ev)
+ return;
+
+ ev->tb = tb;
+ ev->route = route;
+ ev->port = port;
+ ev->unplug = unplug;
+ INIT_WORK(&ev->work, tb_handle_hotplug);
+ queue_work(tb->wq, &ev->work);
+}
+
/* enumeration & hot plug handling */
static void tb_discover_tunnels(struct tb_switch *sw)
@@ -42,6 +68,10 @@ static void tb_discover_tunnels(struct tb_switch *sw)
port = &sw->ports[i];
switch (port->config.type) {
+ case TB_TYPE_DP_HDMI_IN:
+ tunnel = tb_tunnel_discover_dp(tb, port);
+ break;
+
case TB_TYPE_PCIE_DOWN:
tunnel = tb_tunnel_discover_pci(tb, port);
break;
@@ -102,6 +132,14 @@ static void tb_scan_port(struct tb_port *port)
struct tb_switch *sw;
if (tb_is_upstream_port(port))
return;
+
+ if (tb_port_is_dpout(port) && tb_dp_port_hpd_is_active(port) == 1) {
+ tb_port_dbg(port, "DP adapter HPD set, queuing hotplug\n");
+ tb_queue_hotplug(port->sw->tb, tb_route(port->sw), port->port,
+ false);
+ return;
+ }
+
if (port->config.type != TB_TYPE_PORT)
return;
if (port->dual_link_port && port->link_nr)
@@ -148,6 +186,26 @@ static void tb_scan_port(struct tb_port *port)
tb_scan_switch(sw);
}
+static int tb_free_tunnel(struct tb *tb, enum tb_tunnel_type type,
+ struct tb_port *src_port, struct tb_port *dst_port)
+{
+ struct tb_cm *tcm = tb_priv(tb);
+ struct tb_tunnel *tunnel;
+
+ list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
+ if (tunnel->type == type &&
+ ((src_port && src_port == tunnel->src_port) ||
+ (dst_port && dst_port == tunnel->dst_port))) {
+ tb_tunnel_deactivate(tunnel);
+ list_del(&tunnel->list);
+ tb_tunnel_free(tunnel);
+ return 0;
+ }
+ }
+
+ return -ENODEV;
+}
+
/**
* tb_free_invalid_tunnels() - destroy tunnels of devices that have gone away
*/
@@ -227,6 +285,44 @@ static struct tb_port *tb_find_unused_port(struct tb_switch *sw,
return NULL;
}
+static int tb_tunnel_dp(struct tb *tb, struct tb_port *out)
+{
+ struct tb_cm *tcm = tb_priv(tb);
+ struct tb_switch *sw = out->sw;
+ struct tb_tunnel *tunnel;
+ struct tb_port *in;
+
+ if (tb_port_is_enabled(out))
+ return 0;
+
+ do {
+ sw = tb_to_switch(sw->dev.parent);
+ if (!sw)
+ return 0;
+ in = tb_find_unused_port(sw, TB_TYPE_DP_HDMI_IN);
+ } while (!in);
+
+ tunnel = tb_tunnel_alloc_dp(tb, in, out);
+ if (!tunnel) {
+ tb_port_dbg(out, "DP tunnel allocation failed\n");
+ return -EIO;
+ }
+
+ if (tb_tunnel_activate(tunnel)) {
+ tb_port_info(out, "DP tunnel activation failed, aborting\n");
+ tb_tunnel_free(tunnel);
+ return -EIO;
+ }
+
+ list_add_tail(&tunnel->list, &tcm->tunnel_list);
+ return 0;
+}
+
+static void tb_teardown_dp(struct tb *tb, struct tb_port *out)
+{
+ tb_free_tunnel(tb, TB_TUNNEL_DP, NULL, out);
+}
+
static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
{
struct tb_cm *tcm = tb_priv(tb);
@@ -283,14 +379,6 @@ static int tb_approve_switch(struct tb *tb, struct tb_switch *sw)
/* hotplug handling */
-struct tb_hotplug_event {
- struct work_struct work;
- struct tb *tb;
- u64 route;
- u8 port;
- bool unplug;
-};
-
/**
* tb_handle_hotplug() - handle hotplug event
*
@@ -335,6 +423,8 @@ static void tb_handle_hotplug(struct work_struct *work)
cancel_work_sync(&sw->work);
tb_switch_remove(port->remote->sw);
port->remote = NULL;
+ } else if (tb_port_is_dpout(port)) {
+ tb_teardown_dp(tb, port);
} else {
tb_port_info(port,
"got unplug event for disconnected port, ignoring\n");
@@ -348,6 +438,8 @@ static void tb_handle_hotplug(struct work_struct *work)
tb_scan_port(port);
if (!port->remote)
tb_port_info(port, "hotplug: no switch found\n");
+ } else if (tb_port_is_dpout(port)) {
+ tb_tunnel_dp(tb, port);
}
}
out:
@@ -364,7 +456,6 @@ static void tb_handle_event(struct tb *tb, enum tb_cfg_pkg_type type,
const void *buf, size_t size)
{
const struct cfg_event_pkg *pkg = buf;
- struct tb_hotplug_event *ev;
u64 route;
if (type != TB_CFG_PKG_EVENT) {
@@ -380,15 +471,7 @@ static void tb_handle_event(struct tb *tb, enum tb_cfg_pkg_type type,
pkg->port);
}
- ev = kmalloc(sizeof(*ev), GFP_KERNEL);
- if (!ev)
- return;
- INIT_WORK(&ev->work, tb_handle_hotplug);
- ev->tb = tb;
- ev->route = route;
- ev->port = pkg->port;
- ev->unplug = pkg->unplug;
- queue_work(tb->wq, &ev->work);
+ tb_queue_hotplug(tb, route, pkg->port, pkg->unplug);
}
static void tb_stop(struct tb *tb)
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index d16a67898a34..e06e5a944998 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -291,6 +291,16 @@ static inline bool tb_port_is_null(const struct tb_port *port)
return port->port && port->config.type == TB_TYPE_PORT;
}
+static inline bool tb_port_is_dpin(const struct tb_port *port)
+{
+ return port->config.type == TB_TYPE_DP_HDMI_IN;
+}
+
+static inline bool tb_port_is_dpout(const struct tb_port *port)
+{
+ return port->config.type == TB_TYPE_DP_HDMI_OUT;
+}
+
static inline int tb_sw_read(struct tb_switch *sw, void *buffer,
enum tb_cfg_space space, u32 offset, u32 length)
{
@@ -467,6 +477,13 @@ bool tb_port_is_enabled(struct tb_port *port);
bool tb_pci_port_is_enabled(struct tb_port *port);
int tb_pci_port_enable(struct tb_port *port, bool enable);
+int tb_dp_port_hpd_is_active(struct tb_port *port);
+int tb_dp_port_hpd_clear(struct tb_port *port);
+int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
+ unsigned int aux_tx, unsigned int aux_rx);
+bool tb_dp_port_is_enabled(struct tb_port *port);
+int tb_dp_port_enable(struct tb_port *port, bool enable);
+
struct tb_path *tb_path_discover(struct tb_port *port, int start_hopid,
struct tb_port **last);
struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src,
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index 74c0f4a5606d..420d2a623f31 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -213,6 +213,28 @@ struct tb_regs_port_header {
/* DWORD 4 */
#define TB_PORT_NFC_CREDITS_MASK GENMASK(19, 0)
+#define TB_PORT_MAX_CREDITS_SHIFT 20
+#define TB_PORT_MAX_CREDITS_MASK GENMASK(26, 20)
+
+/* Display Port adapter registers */
+
+/* DWORD 0 */
+#define TB_DP_VIDEO_EN BIT(31)
+#define TB_DP_AUX_EN BIT(30)
+#define TB_DP_VIDEO_HOPID_SHIFT 16
+#define TB_DP_VIDEO_HOPID_MASK GENMASK(26, 16)
+/* DWORD 1 */
+#define TB_DP_AUX_TX_HOPID_MASK GENMASK(10, 0)
+#define TB_DP_AUX_RX_HOPID_SHIFT 11
+#define TB_DP_AUX_RX_HOPID_MASK GENMASK(21, 11)
+/* DWORD 2 */
+#define TB_DP_HDP BIT(6)
+/* DWORD 3 */
+#define TB_DP_HPDC BIT(9)
+/* DWORD 4 */
+#define TB_DP_LOCAL_CAP 4
+/* DWORD 5 */
+#define TB_DP_REMOTE_CAP 5
/* PCIe adapter registers */
diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index 1a5e2aa395c6..7aab7e07739b 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -18,14 +18,25 @@
#define TB_PCI_PATH_DOWN 0
#define TB_PCI_PATH_UP 1
+/* DP adapters use hop ID 8 for AUX and 9 for video */
+#define TB_DP_AUX_TX_HOPID 8
+#define TB_DP_VIDEO_HOPID 9
+
+#define TB_DP_VIDEO_PATH_OUT 0
+#define TB_DP_AUX_PATH_OUT 1
+#define TB_DP_AUX_PATH_IN 2
+
+static const char * const tb_tunnel_names[] = { "PCI", "DP" };
+
#define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) \
do { \
struct tb_tunnel *__tunnel = (tunnel); \
- level(__tunnel->tb, "%llx:%x <-> %llx:%x (PCI): " fmt, \
+ level(__tunnel->tb, "%llx:%x <-> %llx:%x (%s): " fmt, \
tb_route(__tunnel->src_port->sw), \
__tunnel->src_port->port, \
tb_route(__tunnel->dst_port->sw), \
__tunnel->dst_port->port, \
+ tb_tunnel_names[__tunnel->type], \
## arg); \
} while (0)
@@ -36,7 +47,8 @@
#define tb_tunnel_info(tunnel, fmt, arg...) \
__TB_TUNNEL_PRINT(tb_info, tunnel, fmt, ##arg)
-static struct tb_tunnel *tb_tunnel_alloc(struct tb *tb, size_t npaths)
+static struct tb_tunnel *tb_tunnel_alloc(struct tb *tb, size_t npaths,
+ enum tb_tunnel_type type)
{
struct tb_tunnel *tunnel;
@@ -53,6 +65,7 @@ static struct tb_tunnel *tb_tunnel_alloc(struct tb *tb, size_t npaths)
INIT_LIST_HEAD(&tunnel->list);
tunnel->tb = tb;
tunnel->npaths = npaths;
+ tunnel->type = type;
return tunnel;
}
@@ -99,7 +112,7 @@ struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down)
if (!tb_pci_port_is_enabled(down))
return NULL;
- tunnel = tb_tunnel_alloc(tb, 2);
+ tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_PCI);
if (!tunnel)
return NULL;
@@ -165,7 +178,7 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
struct tb_tunnel *tunnel;
struct tb_path *path;
- tunnel = tb_tunnel_alloc(tb, 2);
+ tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_PCI);
if (!tunnel)
return NULL;
@@ -192,6 +205,262 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
return tunnel;
}
+static int tb_dp_xchg_caps(struct tb_port *in, struct tb_port *out)
+{
+ u32 in_dp_cap, out_dp_cap;
+ int ret;
+
+ /*
+ * Copy the DP_LOCAL_CAP register of each adapter to the other
+ * adapter's DP_REMOTE_CAP. Only needed on generation 2 and newer.
+ */
+ if (in->sw->generation < 2 || out->sw->generation < 2)
+ return 0;
+
+ /* Read both DP_LOCAL_CAP registers */
+ ret = tb_port_read(in, &in_dp_cap, TB_CFG_PORT,
+ in->cap_adap + TB_DP_LOCAL_CAP, 1);
+ if (ret)
+ return ret;
+
+ ret = tb_port_read(out, &out_dp_cap, TB_CFG_PORT,
+ out->cap_adap + TB_DP_LOCAL_CAP, 1);
+ if (ret)
+ return ret;
+
+ /* Write them to the opposite adapter port */
+ ret = tb_port_write(out, &in_dp_cap, TB_CFG_PORT,
+ out->cap_adap + TB_DP_REMOTE_CAP, 1);
+ if (ret)
+ return ret;
+
+ return tb_port_write(in, &out_dp_cap, TB_CFG_PORT,
+ in->cap_adap + TB_DP_REMOTE_CAP, 1);
+}
+
+static int tb_dp_activate(struct tb_tunnel *tunnel, bool active)
+{
+ int ret;
+
+ if (active) {
+ struct tb_path **paths;
+ int last;
+
+ ret = tb_dp_xchg_caps(tunnel->src_port, tunnel->dst_port);
+ if (ret)
+ return ret;
+
+ paths = tunnel->paths;
+ last = paths[TB_DP_VIDEO_PATH_OUT]->path_length - 1;
+
+ tb_dp_port_set_hops(tunnel->src_port,
+ paths[TB_DP_VIDEO_PATH_OUT]->hops[0].in_hop_index,
+ paths[TB_DP_AUX_PATH_OUT]->hops[0].in_hop_index,
+ paths[TB_DP_AUX_PATH_IN]->hops[last].next_hop_index);
+
+ tb_dp_port_set_hops(tunnel->dst_port,
+ paths[TB_DP_VIDEO_PATH_OUT]->hops[last].next_hop_index,
+ paths[TB_DP_AUX_PATH_IN]->hops[0].in_hop_index,
+ paths[TB_DP_AUX_PATH_OUT]->hops[last].next_hop_index);
+ } else {
+ tb_dp_port_hpd_clear(tunnel->src_port);
+ }
+
+ ret = tb_dp_port_enable(tunnel->src_port, active);
+ if (ret)
+ return ret;
+
+ return tb_dp_port_enable(tunnel->dst_port, active);
+}
+
+static void tb_dp_init_aux_path(struct tb_path *path)
+{
+ int i;
+
+ path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
+ path->egress_shared_buffer = TB_PATH_NONE;
+ path->ingress_fc_enable = TB_PATH_ALL;
+ path->ingress_shared_buffer = TB_PATH_NONE;
+ path->priority = 2;
+ path->weight = 1;
+
+ path->hops[0].initial_credits = 1;
+ for (i = 1; i < path->path_length; i++)
+ path->hops[i].initial_credits = 1;
+}
+
+static void tb_dp_init_video_path(struct tb_path *path, bool discover)
+{
+ const struct tb_port *in = path->hops[0].in_port;
+
+ path->egress_fc_enable = TB_PATH_NONE;
+ path->egress_shared_buffer = TB_PATH_NONE;
+ path->ingress_fc_enable = TB_PATH_NONE;
+ path->ingress_shared_buffer = TB_PATH_NONE;
+ path->priority = 1;
+ path->weight = 1;
+
+ if (discover) {
+ path->nfc_credits =
+ in->config.nfc_credits & TB_PORT_NFC_CREDITS_MASK;
+ } else {
+ u32 max_credits;
+
+ max_credits = in->config.nfc_credits & TB_PORT_MAX_CREDITS_MASK;
+ max_credits >>= TB_PORT_MAX_CREDITS_SHIFT;
+
+ /* Leave some credits for AUX path */
+ path->nfc_credits = min_t(u32, max_credits - 2, 12);
+ }
+}
+
+/**
+ * tb_tunnel_discover_dp() - Discover existing Display Port tunnels
+ * @tb: Pointer to the domain structure
+ * @in: DP in adapter
+ *
+ * If the @in adapter is active, follows the tunnel to the DP out
+ * adapter and back to verify that the tunnel is complete.
+ *
+ * Return: Pointer to the discovered DP tunnel or %NULL if no tunnel
+ * was found.
+ */
+struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in)
+{
+ struct tb_tunnel *tunnel;
+ struct tb_path *path;
+ struct tb_port *out;
+ int ret, hopid;
+ u32 data[4];
+
+ ret = tb_port_read(in, data, TB_CFG_PORT, in->cap_adap,
+ ARRAY_SIZE(data));
+ if (ret < 0)
+ return NULL;
+
+ /* Both video and AUX need to be enabled for now */
+ if (!(data[0] & TB_DP_VIDEO_EN) || !(data[0] & TB_DP_AUX_EN))
+ return NULL;
+
+ tunnel = tb_tunnel_alloc(tb, 3, TB_TUNNEL_DP);
+ if (!tunnel)
+ return NULL;
+
+ tunnel->activate = tb_dp_activate;
+ tunnel->src_port = in;
+
+ hopid = (data[0] & TB_DP_VIDEO_HOPID_MASK) >> TB_DP_VIDEO_HOPID_SHIFT;
+ path = tb_path_discover(in, hopid, &out);
+ if (!path)
+ goto err_free;
+
+ if (out->config.type != TB_TYPE_DP_HDMI_OUT) {
+ tb_port_warn(in, "path does not end at a DP adapter\n");
+ goto err_free;
+ }
+
+ tunnel->paths[TB_DP_VIDEO_PATH_OUT] = path;
+ tunnel->dst_port = out;
+
+ hopid = data[1] & TB_DP_AUX_TX_HOPID_MASK;
+ path = tb_path_discover(in, hopid, NULL);
+ if (!path)
+ goto err_free;
+ tunnel->paths[TB_DP_AUX_PATH_OUT] = path;
+
+ ret = tb_port_read(out, data, TB_CFG_PORT, out->cap_adap,
+ ARRAY_SIZE(data));
+ if (ret < 0)
+ goto err_free;
+
+ hopid = data[1] & TB_DP_AUX_TX_HOPID_MASK;
+
+ if (!(data[0] & TB_DP_VIDEO_EN) || !(data[0] & TB_DP_AUX_EN))
+ goto err_free;
+
+ path = tb_path_discover(out, hopid, &in);
+ if (!path)
+ goto err_free;
+
+ tunnel->paths[TB_DP_AUX_PATH_IN] = path;
+
+ if (in != tunnel->src_port) {
+ tb_tunnel_warn(tunnel, "path is not complete, skipping\n");
+ goto err_free;
+ }
+
+ /* Activated by the boot firmware */
+ tunnel->paths[TB_DP_VIDEO_PATH_OUT]->activated = true;
+ tunnel->paths[TB_DP_AUX_PATH_OUT]->activated = true;
+ tunnel->paths[TB_DP_AUX_PATH_IN]->activated = true;
+
+ tb_dp_init_video_path(tunnel->paths[TB_DP_VIDEO_PATH_OUT], true);
+ tb_dp_init_aux_path(tunnel->paths[TB_DP_AUX_PATH_OUT]);
+ tb_dp_init_aux_path(tunnel->paths[TB_DP_AUX_PATH_IN]);
+
+ return tunnel;
+
+err_free:
+ tb_tunnel_free(tunnel);
+ return NULL;
+}
+
+/**
+ * tb_tunnel_alloc_dp() - allocate a Display Port tunnel
+ * @tb: Pointer to the domain structure
+ * @in: DP in adapter port
+ * @out: DP out adapter port
+ *
+ * Allocates a tunnel between @in and @out that is capable of tunneling
+ * Display Port traffic.
+ *
+ * Return: Pointer to the allocated tunnel or %NULL on failure.
+ */
+struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
+ struct tb_port *out)
+{
+ struct tb_tunnel *tunnel;
+ struct tb_path **paths;
+ struct tb_path *path;
+
+ if (WARN_ON(!in->cap_adap || !out->cap_adap))
+ return NULL;
+
+ tunnel = tb_tunnel_alloc(tb, 3, TB_TUNNEL_DP);
+ if (!tunnel)
+ return NULL;
+
+ tunnel->activate = tb_dp_activate;
+ tunnel->src_port = in;
+ tunnel->dst_port = out;
+
+ paths = tunnel->paths;
+
+ path = tb_path_alloc(tb, in, out, TB_DP_VIDEO_HOPID, -1, 1);
+ if (!path)
+ goto err_free;
+ tb_dp_init_video_path(path, false);
+ paths[TB_DP_VIDEO_PATH_OUT] = path;
+
+ path = tb_path_alloc(tb, in, out, TB_DP_AUX_TX_HOPID, -1, 1);
+ if (!path)
+ goto err_free;
+ tb_dp_init_aux_path(path);
+ paths[TB_DP_AUX_PATH_OUT] = path;
+
+ path = tb_path_alloc(tb, out, in, TB_DP_AUX_TX_HOPID, -1, 1);
+ if (!path)
+ goto err_free;
+ tb_dp_init_aux_path(path);
+ paths[TB_DP_AUX_PATH_IN] = path;
+
+ return tunnel;
+
+err_free:
+ tb_tunnel_free(tunnel);
+ return NULL;
+}
+
/**
* tb_tunnel_free() - free a tunnel
* @tunnel: Tunnel to be freed
diff --git a/drivers/thunderbolt/tunnel.h b/drivers/thunderbolt/tunnel.h
index 7e801a31f9d1..07583f8247c1 100644
--- a/drivers/thunderbolt/tunnel.h
+++ b/drivers/thunderbolt/tunnel.h
@@ -11,6 +11,11 @@
#include "tb.h"
+enum tb_tunnel_type {
+ TB_TUNNEL_PCI,
+ TB_TUNNEL_DP,
+};
+
/**
* struct tb_tunnel - Tunnel between two ports
* @tb: Pointer to the domain
@@ -20,6 +25,7 @@
* @npaths: Number of paths in @paths
* @activate: Optional tunnel specific activation/deactivation
* @list: Tunnels are linked using this field
+ * @type: Type of the tunnel
*/
struct tb_tunnel {
struct tb *tb;
@@ -29,16 +35,31 @@ struct tb_tunnel {
size_t npaths;
int (*activate)(struct tb_tunnel *tunnel, bool activate);
struct list_head list;
+ enum tb_tunnel_type type;
};
struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down);
struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
struct tb_port *down);
+struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in);
+struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
+ struct tb_port *out);
+
void tb_tunnel_free(struct tb_tunnel *tunnel);
int tb_tunnel_activate(struct tb_tunnel *tunnel);
int tb_tunnel_restart(struct tb_tunnel *tunnel);
void tb_tunnel_deactivate(struct tb_tunnel *tunnel);
bool tb_tunnel_is_invalid(struct tb_tunnel *tunnel);
+static inline bool tb_tunnel_is_pci(const struct tb_tunnel *tunnel)
+{
+ return tunnel->type == TB_TUNNEL_PCI;
+}
+
+static inline bool tb_tunnel_is_dp(const struct tb_tunnel *tunnel)
+{
+ return tunnel->type == TB_TUNNEL_DP;
+}
+
#endif
--
2.20.1
In order to detect possible connections to other domains we need to be
able to find out why tb_switch_alloc() fails, so make it return ERR_PTR()
instead. This allows the caller to differentiate between errors such as
-ENOMEM, which comes from the kernel, and for instance -EIO, which comes
from the hardware when trying to access the prospective switch.
Convert all the current call sites to handle this properly.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/icm.c | 6 +++---
drivers/thunderbolt/switch.c | 36 ++++++++++++++++++++----------------
drivers/thunderbolt/tb.c | 6 +++---
3 files changed, 26 insertions(+), 22 deletions(-)
diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
index 041e7ab0efd3..e28a4255d56a 100644
--- a/drivers/thunderbolt/icm.c
+++ b/drivers/thunderbolt/icm.c
@@ -468,7 +468,7 @@ static void add_switch(struct tb_switch *parent_sw, u64 route,
pm_runtime_get_sync(&parent_sw->dev);
sw = tb_switch_alloc(parent_sw->tb, &parent_sw->dev, route);
- if (!sw)
+ if (IS_ERR(sw))
goto out;
sw->uuid = kmemdup(uuid, sizeof(*uuid), GFP_KERNEL);
@@ -1852,8 +1852,8 @@ static int icm_start(struct tb *tb)
tb->root_switch = tb_switch_alloc_safe_mode(tb, &tb->dev, 0);
else
tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0);
- if (!tb->root_switch)
- return -ENODEV;
+ if (IS_ERR(tb->root_switch))
+ return PTR_ERR(tb->root_switch);
/*
* NVM upgrade has not been tested on Apple systems and they
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 13eed95fc667..ba7dfce0b96f 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -1481,30 +1481,32 @@ static int tb_switch_get_generation(struct tb_switch *sw)
* separately. The returned switch should be released by calling
* tb_switch_put().
*
- * Return: Pointer to the allocated switch or %NULL in case of failure
+ * Return: Pointer to the allocated switch or ERR_PTR() in case of
+ * failure.
*/
struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
u64 route)
{
struct tb_switch *sw;
int upstream_port;
- int i, cap, depth;
+ int i, ret, depth;
/* Make sure we do not exceed maximum topology limit */
depth = tb_route_length(route);
if (depth > TB_SWITCH_MAX_DEPTH)
- return NULL;
+ return ERR_PTR(-EADDRNOTAVAIL);
upstream_port = tb_cfg_get_upstream_port(tb->ctl, route);
if (upstream_port < 0)
- return NULL;
+ return ERR_PTR(upstream_port);
sw = kzalloc(sizeof(*sw), GFP_KERNEL);
if (!sw)
- return NULL;
+ return ERR_PTR(-ENOMEM);
sw->tb = tb;
- if (tb_cfg_read(tb->ctl, &sw->config, route, 0, TB_CFG_SWITCH, 0, 5))
+ ret = tb_cfg_read(tb->ctl, &sw->config, route, 0, TB_CFG_SWITCH, 0, 5);
+ if (ret)
goto err_free_sw_ports;
tb_dbg(tb, "current switch config:\n");
@@ -1520,8 +1522,10 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
/* initialize ports */
sw->ports = kcalloc(sw->config.max_port_number + 1, sizeof(*sw->ports),
GFP_KERNEL);
- if (!sw->ports)
+ if (!sw->ports) {
+ ret = -ENOMEM;
goto err_free_sw_ports;
+ }
for (i = 0; i <= sw->config.max_port_number; i++) {
/* minimum setup for tb_find_cap and tb_drom_read to work */
@@ -1531,16 +1535,16 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
sw->generation = tb_switch_get_generation(sw);
- cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_PLUG_EVENTS);
- if (cap < 0) {
+ ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_PLUG_EVENTS);
+ if (ret < 0) {
tb_sw_warn(sw, "cannot find TB_VSE_CAP_PLUG_EVENTS aborting\n");
goto err_free_sw_ports;
}
- sw->cap_plug_events = cap;
+ sw->cap_plug_events = ret;
- cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER);
- if (cap > 0)
- sw->cap_lc = cap;
+ ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER);
+ if (ret > 0)
+ sw->cap_lc = ret;
/* Root switch is always authorized */
if (!route)
@@ -1559,7 +1563,7 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
kfree(sw->ports);
kfree(sw);
- return NULL;
+ return ERR_PTR(ret);
}
/**
@@ -1574,7 +1578,7 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
*
* The returned switch must be released by calling tb_switch_put().
*
- * Return: Pointer to the allocated switch or %NULL in case of failure
+ * Return: Pointer to the allocated switch or ERR_PTR() in case of failure
*/
struct tb_switch *
tb_switch_alloc_safe_mode(struct tb *tb, struct device *parent, u64 route)
@@ -1583,7 +1587,7 @@ tb_switch_alloc_safe_mode(struct tb *tb, struct device *parent, u64 route)
sw = kzalloc(sizeof(*sw), GFP_KERNEL);
if (!sw)
- return NULL;
+ return ERR_PTR(-ENOMEM);
sw->tb = tb;
sw->config.depth = tb_route_length(route);
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 2565e30cdb96..65a206f01941 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -155,7 +155,7 @@ static void tb_scan_port(struct tb_port *port)
}
sw = tb_switch_alloc(port->sw->tb, &port->sw->dev,
tb_downstream_route(port));
- if (!sw)
+ if (IS_ERR(sw))
return;
if (tb_switch_configure(sw)) {
@@ -516,8 +516,8 @@ static int tb_start(struct tb *tb)
int ret;
tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0);
- if (!tb->root_switch)
- return -ENOMEM;
+ if (IS_ERR(tb->root_switch))
+ return PTR_ERR(tb->root_switch);
/*
* ICM firmware upgrade needs running firmware and in native
--
2.20.1
The XDomain protocol messages may start as soon as the Thunderbolt
control channel is started. This means that if the other host starts
sending ThunderboltIP packets early enough, they will be passed to the
network driver, which then gets confused because its resume hook has not
been called yet.
Fix this by unregistering the ThunderboltIP protocol handler when
suspending and registering it back on resume.
Signed-off-by: Mika Westerberg <[email protected]>
Acked-by: David S. Miller <[email protected]>
---
drivers/net/thunderbolt.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt.c
index c48c3a1eb1f8..fcf31335a8b6 100644
--- a/drivers/net/thunderbolt.c
+++ b/drivers/net/thunderbolt.c
@@ -1282,6 +1282,7 @@ static int __maybe_unused tbnet_suspend(struct device *dev)
tbnet_tear_down(net, true);
}
+ tb_unregister_protocol_handler(&net->handler);
return 0;
}
@@ -1290,6 +1291,8 @@ static int __maybe_unused tbnet_resume(struct device *dev)
struct tb_service *svc = tb_to_service(dev);
struct tbnet *net = tb_service_get_drvdata(svc);
+ tb_register_protocol_handler(&net->handler);
+
netif_carrier_off(net->dev);
if (netif_running(net->dev)) {
netif_device_attach(net->dev);
--
2.20.1
We will need these routines to find Display Port adapters as well, so
modify them to take the port type as a second parameter.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/switch.c | 16 ++++++++++++++++
drivers/thunderbolt/tb.c | 35 +++++++++++++++++------------------
drivers/thunderbolt/tb.h | 1 +
3 files changed, 34 insertions(+), 18 deletions(-)
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index c6cb6c44a571..29bf9119e0ae 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -743,6 +743,22 @@ struct tb_port *tb_port_get_next(struct tb_port *start, struct tb_port *end,
return next;
}
+/**
+ * tb_port_is_enabled() - Is the adapter port enabled
+ * @port: Port to check
+ */
+bool tb_port_is_enabled(struct tb_port *port)
+{
+ switch (port->config.type) {
+ case TB_TYPE_PCIE_UP:
+ case TB_TYPE_PCIE_DOWN:
+ return tb_pci_port_is_enabled(port);
+
+ default:
+ return false;
+ }
+}
+
/**
* tb_pci_port_is_enabled() - Is the PCIe adapter port enabled
* @port: PCIe port to check
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index a450bebfeb92..98c993259759 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -188,40 +188,39 @@ static void tb_free_unplugged_children(struct tb_switch *sw)
}
}
-
/**
- * find_pci_up_port() - return the first PCIe up port on @sw or NULL
+ * tb_find_port() - return the first port of @type on @sw or NULL
+ * @sw: Switch to find the port from
+ * @type: Port type to look for
*/
-static struct tb_port *tb_find_pci_up_port(struct tb_switch *sw)
+static struct tb_port *tb_find_port(struct tb_switch *sw,
+ enum tb_port_type type)
{
int i;
for (i = 1; i <= sw->config.max_port_number; i++)
- if (sw->ports[i].config.type == TB_TYPE_PCIE_UP)
+ if (sw->ports[i].config.type == type)
return &sw->ports[i];
return NULL;
}
/**
- * find_unused_down_port() - return the first inactive PCIe down port on @sw
+ * tb_find_unused_port() - return the first inactive port on @sw
+ * @sw: Switch to find the port on
+ * @type: Port type to look for
*/
-static struct tb_port *tb_find_unused_down_port(struct tb_switch *sw)
+static struct tb_port *tb_find_unused_port(struct tb_switch *sw,
+ enum tb_port_type type)
{
int i;
- int cap;
- int res;
- int data;
+
for (i = 1; i <= sw->config.max_port_number; i++) {
if (tb_is_upstream_port(&sw->ports[i]))
continue;
- if (sw->ports[i].config.type != TB_TYPE_PCIE_DOWN)
- continue;
- cap = sw->ports[i].cap_adap;
- if (!cap)
+ if (sw->ports[i].config.type != type)
continue;
- res = tb_port_read(&sw->ports[i], &data, TB_CFG_PORT, cap, 1);
- if (res < 0)
+ if (!sw->ports[i].cap_adap)
continue;
- if (data & 0x80000000)
+ if (tb_port_is_enabled(&sw->ports[i]))
continue;
return &sw->ports[i];
}
@@ -235,7 +234,7 @@ static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
struct tb_port *up, *down;
struct tb_tunnel *tunnel;
- up = tb_find_pci_up_port(sw);
+ up = tb_find_port(sw, TB_TYPE_PCIE_UP);
if (!up)
return 0;
@@ -246,7 +245,7 @@ static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
down = NULL;
parent_sw = tb_to_switch(sw->dev.parent);
while (parent_sw) {
- down = tb_find_unused_down_port(parent_sw);
+ down = tb_find_unused_port(parent_sw, TB_TYPE_PCIE_DOWN);
if (down)
break;
parent_sw = tb_to_switch(parent_sw->dev.parent);
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 8906ee0a8a6a..d16a67898a34 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -462,6 +462,7 @@ struct tb_port *tb_port_get_next(struct tb_port *start, struct tb_port *end,
int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap);
+bool tb_port_is_enabled(struct tb_port *port);
bool tb_pci_port_is_enabled(struct tb_port *port);
int tb_pci_port_enable(struct tb_port *port, bool enable);
--
2.20.1
In order to tunnel non-PCIe traffic as well, rename tunnel_pci.[ch] to
tunnel.[ch] to reflect this fact. No functional changes.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/Makefile | 2 +-
drivers/thunderbolt/tb.c | 2 +-
drivers/thunderbolt/{tunnel_pci.c => tunnel.c} | 4 ++--
drivers/thunderbolt/{tunnel_pci.h => tunnel.h} | 6 +++---
4 files changed, 7 insertions(+), 7 deletions(-)
rename drivers/thunderbolt/{tunnel_pci.c => tunnel.c} (98%)
rename drivers/thunderbolt/{tunnel_pci.h => tunnel.h} (87%)
diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile
index 8531f15d3b3c..833bdee3cec7 100644
--- a/drivers/thunderbolt/Makefile
+++ b/drivers/thunderbolt/Makefile
@@ -1,3 +1,3 @@
obj-${CONFIG_THUNDERBOLT} := thunderbolt.o
-thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o tunnel_pci.o eeprom.o
+thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o
thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 7fd88b41d082..931612143896 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -12,7 +12,7 @@
#include "tb.h"
#include "tb_regs.h"
-#include "tunnel_pci.h"
+#include "tunnel.h"
/**
* struct tb_cm - Simple Thunderbolt connection manager
diff --git a/drivers/thunderbolt/tunnel_pci.c b/drivers/thunderbolt/tunnel.c
similarity index 98%
rename from drivers/thunderbolt/tunnel_pci.c
rename to drivers/thunderbolt/tunnel.c
index 2de4edccbd6d..1e470564e99d 100644
--- a/drivers/thunderbolt/tunnel_pci.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
- * Thunderbolt Cactus Ridge driver - PCIe tunnel
+ * Thunderbolt Cactus Ridge driver - Tunneling support
*
* Copyright (c) 2014 Andreas Noever <[email protected]>
*/
@@ -8,7 +8,7 @@
#include <linux/slab.h>
#include <linux/list.h>
-#include "tunnel_pci.h"
+#include "tunnel.h"
#include "tb.h"
#define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) \
diff --git a/drivers/thunderbolt/tunnel_pci.h b/drivers/thunderbolt/tunnel.h
similarity index 87%
rename from drivers/thunderbolt/tunnel_pci.h
rename to drivers/thunderbolt/tunnel.h
index f9b65fa1fd4d..dff0f27d6ab5 100644
--- a/drivers/thunderbolt/tunnel_pci.h
+++ b/drivers/thunderbolt/tunnel.h
@@ -1,12 +1,12 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
- * Thunderbolt Cactus Ridge driver - PCIe tunnel
+ * Thunderbolt Cactus Ridge driver - Tunneling support
*
* Copyright (c) 2014 Andreas Noever <[email protected]>
*/
-#ifndef TB_PCI_H_
-#define TB_PCI_H_
+#ifndef TB_TUNNEL_H_
+#define TB_TUNNEL_H_
#include "tb.h"
--
2.20.1
Thunderbolt 2 devices and beyond need to have additional bits set in
link controller (LC) specific registers. This includes two bits in
LC_SX_CTRL that tell the link controller which lane is connected and
whether it is upstream facing or not.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/lc.c | 114 ++++++++++++++++++++++++++++++++++
drivers/thunderbolt/switch.c | 9 +++
drivers/thunderbolt/tb.h | 2 +
drivers/thunderbolt/tb_regs.h | 11 ++++
4 files changed, 136 insertions(+)
diff --git a/drivers/thunderbolt/lc.c b/drivers/thunderbolt/lc.c
index 2134a55ed837..a5dddf176546 100644
--- a/drivers/thunderbolt/lc.c
+++ b/drivers/thunderbolt/lc.c
@@ -19,3 +19,117 @@ int tb_lc_read_uuid(struct tb_switch *sw, u32 *uuid)
return -EINVAL;
return tb_sw_read(sw, uuid, TB_CFG_SWITCH, sw->cap_lc + TB_LC_FUSE, 4);
}
+
+static int read_lc_desc(struct tb_switch *sw, u32 *desc)
+{
+ if (!sw->cap_lc)
+ return -EINVAL;
+ return tb_sw_read(sw, desc, TB_CFG_SWITCH, sw->cap_lc + TB_LC_DESC, 1);
+}
+
+static int find_port_lc_cap(struct tb_port *port)
+{
+ struct tb_switch *sw = port->sw;
+ int start, phys, ret, size;
+ u32 desc;
+
+ ret = read_lc_desc(sw, &desc);
+ if (ret)
+ return ret;
+
+ /* Start of port LC registers */
+ start = (desc & TB_LC_DESC_SIZE_MASK) >> TB_LC_DESC_SIZE_SHIFT;
+ size = (desc & TB_LC_DESC_PORT_SIZE_MASK) >> TB_LC_DESC_PORT_SIZE_SHIFT;
+ phys = tb_phy_port_from_link(port->port);
+
+ return sw->cap_lc + start + phys * size;
+}
+
+static int tb_lc_configure_lane(struct tb_port *port, bool configure)
+{
+ bool upstream = tb_is_upstream_port(port);
+ struct tb_switch *sw = port->sw;
+ u32 ctrl, lane;
+ int cap, ret;
+
+ if (sw->generation < 2)
+ return 0;
+
+ cap = find_port_lc_cap(port);
+ if (cap < 0)
+ return cap;
+
+ ret = tb_sw_read(sw, &ctrl, TB_CFG_SWITCH, cap + TB_LC_SX_CTRL, 1);
+ if (ret)
+ return ret;
+
+ /* Resolve correct lane */
+ if (port->port % 2)
+ lane = TB_LC_SX_CTRL_L1C;
+ else
+ lane = TB_LC_SX_CTRL_L2C;
+
+ if (configure) {
+ ctrl |= lane;
+ if (upstream)
+ ctrl |= TB_LC_SX_CTRL_UPSTREAM;
+ } else {
+ ctrl &= ~lane;
+ if (upstream)
+ ctrl &= ~TB_LC_SX_CTRL_UPSTREAM;
+ }
+
+ return tb_sw_write(sw, &ctrl, TB_CFG_SWITCH, cap + TB_LC_SX_CTRL, 1);
+}
+
+/**
+ * tb_lc_configure_link() - Let LC know about configured link
+ * @sw: Switch that is being added
+ *
+ * Informs LC of both parent switch and @sw that there is established
+ * link between the two.
+ */
+int tb_lc_configure_link(struct tb_switch *sw)
+{
+ struct tb_port *up, *down;
+ int ret;
+
+ if (!sw->config.enabled || !tb_route(sw))
+ return 0;
+
+ up = tb_upstream_port(sw);
+ down = tb_port_at(tb_route(sw), tb_to_switch(sw->dev.parent));
+
+ /* Configure parent link toward this switch */
+ ret = tb_lc_configure_lane(down, true);
+ if (ret)
+ return ret;
+
+ /* Configure upstream link from this switch to the parent */
+ ret = tb_lc_configure_lane(up, true);
+ if (ret)
+ tb_lc_configure_lane(down, false);
+
+ return ret;
+}
+
+/**
+ * tb_lc_unconfigure_link() - Let LC know about unconfigured link
+ * @sw: Switch to unconfigure
+ *
+ * Informs LC of both parent switch and @sw that the link between the
+ * two does not exist anymore.
+ */
+void tb_lc_unconfigure_link(struct tb_switch *sw)
+{
+ struct tb_port *up, *down;
+
+ if (sw->is_unplugged || !sw->config.enabled || !tb_route(sw))
+ return;
+
+ up = tb_upstream_port(sw);
+ down = tb_port_at(tb_route(sw), tb_to_switch(sw->dev.parent));
+
+ tb_lc_configure_lane(up, false);
+ tb_lc_configure_lane(down, false);
+}
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index bd96eebd8248..760332f57b5c 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -1301,6 +1301,10 @@ int tb_switch_configure(struct tb_switch *sw)
if (ret)
return ret;
+ ret = tb_lc_configure_link(sw);
+ if (ret)
+ return ret;
+
return tb_plug_events_active(sw, true);
}
@@ -1504,6 +1508,7 @@ void tb_switch_remove(struct tb_switch *sw)
if (!sw->is_unplugged)
tb_plug_events_active(sw, false);
+ tb_lc_unconfigure_link(sw);
tb_switch_nvm_remove(sw);
@@ -1563,6 +1568,10 @@ int tb_switch_resume(struct tb_switch *sw)
if (err)
return err;
+ err = tb_lc_configure_link(sw);
+ if (err)
+ return err;
+
err = tb_plug_events_active(sw, true);
if (err)
return err;
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 530464b25dcb..e61c2409021d 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -451,6 +451,8 @@ int tb_drom_read(struct tb_switch *sw);
int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid);
int tb_lc_read_uuid(struct tb_switch *sw, u32 *uuid);
+int tb_lc_configure_link(struct tb_switch *sw);
+void tb_lc_unconfigure_link(struct tb_switch *sw);
static inline int tb_route_length(u64 route)
{
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index 4895ae9f0b40..e0f867dad5cf 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -238,6 +238,17 @@ struct tb_regs_hop {
} __packed;
/* Common link controller registers */
+#define TB_LC_DESC 0x02
+#define TB_LC_DESC_SIZE_SHIFT 8
+#define TB_LC_DESC_SIZE_MASK GENMASK(15, 8)
+#define TB_LC_DESC_PORT_SIZE_SHIFT 16
+#define TB_LC_DESC_PORT_SIZE_MASK GENMASK(27, 16)
#define TB_LC_FUSE 0x03
+/* Link controller registers */
+#define TB_LC_SX_CTRL 0x96
+#define TB_LC_SX_CTRL_L1C BIT(16)
+#define TB_LC_SX_CTRL_L2C BIT(20)
+#define TB_LC_SX_CTRL_UPSTREAM BIT(30)
+
#endif
--
2.20.1
Two domains (hosts) can be connected through a Thunderbolt cable, in
which case they can start software services such as networking over the
high-speed DMA paths. Now that we have all the basic building blocks in
place to create DMA tunnels over the Thunderbolt fabric, we can add this
support to the software connection manager as well.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/switch.c | 29 ++++++--
drivers/thunderbolt/tb.c | 131 ++++++++++++++++++++++++++++++++++-
2 files changed, 153 insertions(+), 7 deletions(-)
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index ba7dfce0b96f..79391c3d834e 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -1885,6 +1885,17 @@ int tb_switch_resume(struct tb_switch *sw)
if (tb_route(sw)) {
u64 uid;
+ /*
+ * Check first that we can still read the switch config
+ * space. It may be that there is now another domain
+ * connected.
+ */
+ err = tb_cfg_get_upstream_port(sw->tb->ctl, tb_route(sw));
+ if (err < 0) {
+ tb_sw_info(sw, "switch not present anymore\n");
+ return err;
+ }
+
err = tb_drom_read_uid_only(sw, &uid);
if (err) {
tb_sw_warn(sw, "uid read failed\n");
@@ -1916,13 +1927,23 @@ int tb_switch_resume(struct tb_switch *sw)
struct tb_port *port = &sw->ports[i];
if (tb_is_upstream_port(port))
continue;
- if (!port->remote)
+
+ if (!port->remote && !port->xdomain)
continue;
- if (tb_wait_for_port(port, true) <= 0
- || tb_switch_resume(port->remote->sw)) {
+
+ if (tb_wait_for_port(port, true) <= 0) {
tb_port_warn(port,
"lost during suspend, disconnecting\n");
- tb_sw_set_unplugged(port->remote->sw);
+ if (port->remote)
+ tb_sw_set_unplugged(port->remote->sw);
+ else if (port->xdomain)
+ port->xdomain->is_unplugged = true;
+ } else if (port->remote) {
+ if (tb_switch_resume(port->remote->sw)) {
+ tb_port_warn(port,
+ "lost during suspend, disconnecting\n");
+ tb_sw_set_unplugged(port->remote->sw);
+ }
}
}
return 0;
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 65a206f01941..94dd1fd12967 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -111,6 +111,28 @@ static void tb_switch_authorize(struct work_struct *work)
mutex_unlock(&sw->tb->lock);
}
+static void tb_scan_xdomain(struct tb_port *port)
+{
+ struct tb_switch *sw = port->sw;
+ struct tb *tb = sw->tb;
+ struct tb_xdomain *xd;
+ u64 route;
+
+ route = tb_downstream_route(port);
+ xd = tb_xdomain_find_by_route(tb, route);
+ if (xd) {
+ tb_xdomain_put(xd);
+ return;
+ }
+
+ xd = tb_xdomain_alloc(tb, &sw->dev, route, tb->root_switch->uuid,
+ NULL);
+ if (xd) {
+ tb_port_at(route, sw)->xdomain = xd;
+ tb_xdomain_add(xd);
+ }
+}
+
static void tb_scan_port(struct tb_port *port);
/**
@@ -150,19 +172,36 @@ static void tb_scan_port(struct tb_port *port)
if (tb_wait_for_port(port, false) <= 0)
return;
if (port->remote) {
- tb_port_WARN(port, "port already has a remote!\n");
+ tb_port_dbg(port, "port already has a remote\n");
return;
}
sw = tb_switch_alloc(port->sw->tb, &port->sw->dev,
tb_downstream_route(port));
- if (IS_ERR(sw))
+ if (IS_ERR(sw)) {
+ /*
+ * If there is an error accessing the connected switch
+ * it may be connected to another domain. Also we allow
+ * the other domain to be connected to a max depth switch.
+ */
+ if (PTR_ERR(sw) == -EIO || PTR_ERR(sw) == -EADDRNOTAVAIL)
+ tb_scan_xdomain(port);
return;
+ }
if (tb_switch_configure(sw)) {
tb_switch_put(sw);
return;
}
+ /*
+ * If there was previously another domain connected remove it
+ * first.
+ */
+ if (port->xdomain) {
+ tb_xdomain_remove(port->xdomain);
+ port->xdomain = NULL;
+ }
+
/*
* Do not send uevents until we have discovered all existing
* tunnels and know which switches were authorized already by
@@ -377,6 +416,51 @@ static int tb_approve_switch(struct tb *tb, struct tb_switch *sw)
return tb_tunnel_pci(tb, sw);
}
+static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
+{
+ struct tb_cm *tcm = tb_priv(tb);
+ struct tb_port *nhi_port, *dst_port;
+ struct tb_tunnel *tunnel;
+ struct tb_switch *sw;
+
+ sw = tb_to_switch(xd->dev.parent);
+ dst_port = tb_port_at(xd->route, sw);
+ nhi_port = tb_find_port(tb->root_switch, TB_TYPE_NHI);
+
+ tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, xd->transmit_ring,
+ xd->transmit_path, xd->receive_ring,
+ xd->receive_path);
+ if (!tunnel)
+ return -ENOMEM;
+
+ if (tb_tunnel_activate(tunnel)) {
+ tb_port_info(nhi_port,
+ "DMA tunnel activation failed, aborting\n");
+ tb_tunnel_free(tunnel);
+ return -EIO;
+ }
+ list_add_tail(&tunnel->list, &tcm->tunnel_list);
+
+ return 0;
+}
+
+static int tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
+{
+ struct tb_port *dst_port;
+ struct tb_switch *sw;
+
+ sw = tb_to_switch(xd->dev.parent);
+ dst_port = tb_port_at(xd->route, sw);
+
+ /*
+ * It is possible that the tunnel was already torn down (in
+ * case of cable disconnect) so it is fine if we cannot find it
+ * here anymore.
+ */
+ tb_free_tunnel(tb, TB_TUNNEL_DMA, NULL, dst_port);
+ return 0;
+}
+
/* hotplug handling */
/**
@@ -417,12 +501,16 @@ static void tb_handle_hotplug(struct work_struct *work)
}
if (ev->unplug) {
if (port->remote) {
- tb_port_info(port, "unplugged\n");
+ tb_port_dbg(port, "switch unplugged\n");
tb_sw_set_unplugged(port->remote->sw);
tb_free_invalid_tunnels(tb);
cancel_work_sync(&sw->work);
tb_switch_remove(port->remote->sw);
port->remote = NULL;
+ } else if (port->xdomain) {
+ tb_port_dbg(port, "xdomain unplugged\n");
+ tb_xdomain_remove(port->xdomain);
+ port->xdomain = NULL;
} else if (tb_port_is_dpout(port)) {
tb_teardown_dp(tb, port);
} else {
@@ -594,13 +682,50 @@ static int tb_resume_noirq(struct tb *tb)
return 0;
}
+static int tb_free_unplugged_xdomains(struct tb_switch *sw)
+{
+ int i, ret = 0;
+
+ for (i = 1; i <= sw->config.max_port_number; i++) {
+ struct tb_port *port = &sw->ports[i];
+
+ if (tb_is_upstream_port(port))
+ continue;
+ if (port->xdomain && port->xdomain->is_unplugged) {
+ tb_xdomain_remove(port->xdomain);
+ port->xdomain = NULL;
+ ret++;
+ } else if (port->remote) {
+ ret += tb_free_unplugged_xdomains(port->remote->sw);
+ }
+ }
+
+ return ret;
+}
+
+static void tb_complete(struct tb *tb)
+{
+ /*
+ * Release any unplugged XDomains and if there is a case where
+ * another domain is swapped in place of unplugged XDomain we
+ * need to run another rescan.
+ */
+ mutex_lock(&tb->lock);
+ if (tb_free_unplugged_xdomains(tb->root_switch))
+ tb_scan_switch(tb->root_switch);
+ mutex_unlock(&tb->lock);
+}
+
static const struct tb_cm_ops tb_cm_ops = {
.start = tb_start,
.stop = tb_stop,
.suspend_noirq = tb_suspend_noirq,
.resume_noirq = tb_resume_noirq,
+ .complete = tb_complete,
.handle_event = tb_handle_event,
.approve_switch = tb_approve_switch,
+ .approve_xdomain_paths = tb_approve_xdomain_paths,
+ .disconnect_xdomain_paths = tb_disconnect_xdomain_paths,
};
struct tb *tb_probe(struct tb_nhi *nhi)
--
2.20.1
The NFC (non flow control) credits value is actually a 20-bit field, so
update tb_port_add_nfc_credits() to handle this properly. This allows us
to set NFC credits for the Display Port path in subsequent patches.
Also make sure the function does not update the hardware if the
underlying switch is already unplugged.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/switch.c | 20 +++++++++++++-------
drivers/thunderbolt/tb_regs.h | 3 +++
2 files changed, 16 insertions(+), 7 deletions(-)
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 29bf9119e0ae..7cc1f534e776 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -544,14 +544,20 @@ int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged)
*/
int tb_port_add_nfc_credits(struct tb_port *port, int credits)
{
- if (credits == 0)
+ u32 nfc_credits;
+
+ if (credits == 0 || port->sw->is_unplugged)
return 0;
- tb_port_info(port,
- "adding %#x NFC credits (%#x -> %#x)",
- credits,
- port->config.nfc_credits,
- port->config.nfc_credits + credits);
- port->config.nfc_credits += credits;
+
+ nfc_credits = port->config.nfc_credits & TB_PORT_NFC_CREDITS_MASK;
+ nfc_credits += credits;
+
+ tb_port_dbg(port, "adding %d NFC credits to %lu",
+ credits, port->config.nfc_credits & TB_PORT_NFC_CREDITS_MASK);
+
+ port->config.nfc_credits &= ~TB_PORT_NFC_CREDITS_MASK;
+ port->config.nfc_credits |= nfc_credits;
+
return tb_port_write(port, &port->config.nfc_credits,
TB_CFG_PORT, 4, 1);
}
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index 75e935acade5..74c0f4a5606d 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -211,6 +211,9 @@ struct tb_regs_port_header {
} __packed;
+/* DWORD 4 */
+#define TB_PORT_NFC_CREDITS_MASK GENMASK(19, 0)
+
/* PCIe adapter registers */
#define TB_PCI_EN BIT(31)
--
2.20.1
In addition to PCIe and Display Port tunnels it is also possible to
create tunnels that forward DMA traffic from the host interface adapter
(NHI) to a NULL port that is connected to another domain through a
Thunderbolt cable. These tunnels can be used to carry software messages
such as networking packets.
To support this we introduce another tunnel type (TB_TUNNEL_DMA) that
supports paths from NHI to NULL port and back.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/path.c | 20 ++++++--
drivers/thunderbolt/switch.c | 22 ++++++++
drivers/thunderbolt/tb.h | 2 +
drivers/thunderbolt/tb_regs.h | 3 ++
drivers/thunderbolt/tunnel.c | 94 ++++++++++++++++++++++++++++++++++-
drivers/thunderbolt/tunnel.h | 10 ++++
6 files changed, 147 insertions(+), 4 deletions(-)
diff --git a/drivers/thunderbolt/path.c b/drivers/thunderbolt/path.c
index ada60d4aa99b..afdb667fcc0d 100644
--- a/drivers/thunderbolt/path.c
+++ b/drivers/thunderbolt/path.c
@@ -284,7 +284,8 @@ static void __tb_path_deallocate_nfc(struct tb_path *path, int first_hop)
}
}
-static int __tb_path_deactivate_hop(struct tb_port *port, int hop_index)
+static int __tb_path_deactivate_hop(struct tb_port *port, int hop_index,
+ bool clear_fc)
{
struct tb_regs_hop hop;
ktime_t timeout;
@@ -311,8 +312,20 @@ static int __tb_path_deactivate_hop(struct tb_port *port, int hop_index)
if (ret)
return ret;
- if (!hop.pending)
+ if (!hop.pending) {
+ if (clear_fc) {
+ /* Clear flow control */
+ hop.ingress_fc = 0;
+ hop.egress_fc = 0;
+ hop.ingress_shared_buffer = 0;
+ hop.egress_shared_buffer = 0;
+
+ return tb_port_write(port, &hop, TB_CFG_HOPS,
+ 2 * hop_index, 2);
+ }
+
return 0;
+ }
usleep_range(10, 20);
} while (ktime_before(ktime_get(), timeout));
@@ -326,7 +339,8 @@ static void __tb_path_deactivate_hops(struct tb_path *path, int first_hop)
for (i = first_hop; i < path->path_length; i++) {
res = __tb_path_deactivate_hop(path->hops[i].in_port,
- path->hops[i].in_hop_index);
+ path->hops[i].in_hop_index,
+ path->clear_fc);
if (res)
tb_port_warn(path->hops[i].in_port,
"hop deactivation failed for hop %d, index %d\n",
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index a1876dcd1d10..13eed95fc667 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -562,6 +562,28 @@ int tb_port_add_nfc_credits(struct tb_port *port, int credits)
TB_CFG_PORT, 4, 1);
}
+/**
+ * tb_port_set_initial_credits() - Set initial port link credits allocated
+ * @port: Port to set the initial credits
+ * @credits: Number of credits to allocate
+ *
+ * Set initial credits value to be used for ingress shared buffering.
+ */
+int tb_port_set_initial_credits(struct tb_port *port, u32 credits)
+{
+ u32 data;
+ int ret;
+
+ ret = tb_port_read(port, &data, TB_CFG_PORT, 5, 1);
+ if (ret)
+ return ret;
+
+ data &= ~TB_PORT_LCA_MASK;
+ data |= (credits << TB_PORT_LCA_SHIFT) & TB_PORT_LCA_MASK;
+
+ return tb_port_write(port, &data, TB_CFG_PORT, 5, 1);
+}
+
/**
* tb_port_clear_counter() - clear a counter in TB_CFG_COUNTER
*
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 7e155eed1fee..3a42a47df69f 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -198,6 +198,7 @@ struct tb_path {
int weight:4;
bool drop_packages;
bool activated;
+ bool clear_fc;
struct tb_path_hop *hops;
int path_length; /* number of hops */
};
@@ -465,6 +466,7 @@ static inline struct tb_switch *tb_to_switch(struct device *dev)
int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged);
int tb_port_add_nfc_credits(struct tb_port *port, int credits);
+int tb_port_set_initial_credits(struct tb_port *port, u32 credits);
int tb_port_clear_counter(struct tb_port *port, int counter);
int tb_port_alloc_in_hopid(struct tb_port *port, int hopid, int max_hopid);
void tb_port_release_in_hopid(struct tb_port *port, int hopid);
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index 420d2a623f31..4591c8b1d546 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -215,6 +215,9 @@ struct tb_regs_port_header {
#define TB_PORT_NFC_CREDITS_MASK GENMASK(19, 0)
#define TB_PORT_MAX_CREDITS_SHIFT 20
#define TB_PORT_MAX_CREDITS_MASK GENMASK(26, 20)
+/* DWORD 5 */
+#define TB_PORT_LCA_SHIFT 22
+#define TB_PORT_LCA_MASK GENMASK(28, 22)
/* Display Port adapter registers */
diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index 7aab7e07739b..f10a0a15b873 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -26,7 +26,10 @@
#define TB_DP_AUX_PATH_OUT 1
#define TB_DP_AUX_PATH_IN 2
-static const char * const tb_tunnel_names[] = { "PCI", "DP" };
+#define TB_DMA_PATH_OUT 0
+#define TB_DMA_PATH_IN 1
+
+static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA" };
#define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) \
do { \
@@ -461,6 +464,95 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
return NULL;
}
+static u32 tb_dma_credits(struct tb_port *nhi)
+{
+ u32 max_credits;
+
+ max_credits = nhi->config.nfc_credits & TB_PORT_MAX_CREDITS_MASK;
+ max_credits >>= TB_PORT_MAX_CREDITS_SHIFT;
+
+ return min(max_credits, 13U);
+}
+
+static int tb_dma_activate(struct tb_tunnel *tunnel, bool active)
+{
+ struct tb_port *nhi = tunnel->src_port;
+ u32 credits;
+
+ credits = active ? tb_dma_credits(nhi) : 0;
+ return tb_port_set_initial_credits(nhi, credits);
+}
+
+static void tb_dma_init_path(struct tb_path *path, unsigned int isb,
+ unsigned int efc, u32 credits)
+{
+ int i;
+
+ path->egress_fc_enable = efc;
+ path->ingress_fc_enable = TB_PATH_ALL;
+ path->egress_shared_buffer = TB_PATH_NONE;
+ path->ingress_shared_buffer = isb;
+ path->priority = 5;
+ path->weight = 1;
+ path->clear_fc = true;
+
+ for (i = 0; i < path->path_length; i++)
+ path->hops[i].initial_credits = credits;
+}
+
+/**
+ * tb_tunnel_alloc_dma() - allocate a DMA tunnel
+ * @tb: Pointer to the domain structure
+ * @nhi: Host controller port
+ * @dst: Destination null port which the other domain is connected to
+ * @transmit_ring: NHI ring number used to send packets towards the
+ * other domain
+ * @transmit_path: HopID used for transmitting packets
+ * @receive_ring: NHI ring number used to receive packets from the
+ * other domain
+ * @receive_path: HopID used for receiving packets
+ *
+ * Return: Returns a tb_tunnel on success or NULL on failure.
+ */
+struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
+ struct tb_port *dst, int transmit_ring,
+ int transmit_path, int receive_ring,
+ int receive_path)
+{
+ struct tb_tunnel *tunnel;
+ struct tb_path *path;
+ u32 credits;
+
+ tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_DMA);
+ if (!tunnel)
+ return NULL;
+
+ tunnel->activate = tb_dma_activate;
+ tunnel->src_port = nhi;
+ tunnel->dst_port = dst;
+
+ credits = tb_dma_credits(nhi);
+
+ path = tb_path_alloc(tb, dst, nhi, receive_path, receive_ring, 0);
+ if (!path) {
+ tb_tunnel_free(tunnel);
+ return NULL;
+ }
+ tb_dma_init_path(path, TB_PATH_NONE, TB_PATH_SOURCE | TB_PATH_INTERNAL,
+ credits);
+ tunnel->paths[TB_DMA_PATH_IN] = path;
+
+ path = tb_path_alloc(tb, nhi, dst, transmit_ring, transmit_path, 0);
+ if (!path) {
+ tb_tunnel_free(tunnel);
+ return NULL;
+ }
+ tb_dma_init_path(path, TB_PATH_SOURCE, TB_PATH_ALL, credits);
+ tunnel->paths[TB_DMA_PATH_OUT] = path;
+
+ return tunnel;
+}
+
/**
* tb_tunnel_free() - free a tunnel
* @tunnel: Tunnel to be freed
diff --git a/drivers/thunderbolt/tunnel.h b/drivers/thunderbolt/tunnel.h
index 07583f8247c1..fa51217e5925 100644
--- a/drivers/thunderbolt/tunnel.h
+++ b/drivers/thunderbolt/tunnel.h
@@ -14,6 +14,7 @@
enum tb_tunnel_type {
TB_TUNNEL_PCI,
TB_TUNNEL_DP,
+ TB_TUNNEL_DMA,
};
/**
@@ -44,6 +45,10 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in);
struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
struct tb_port *out);
+struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
+ struct tb_port *dst, int transmit_ring,
+ int transmit_path, int receive_ring,
+ int receive_path);
void tb_tunnel_free(struct tb_tunnel *tunnel);
int tb_tunnel_activate(struct tb_tunnel *tunnel);
@@ -61,5 +66,10 @@ static inline bool tb_tunnel_is_dp(const struct tb_tunnel *tunnel)
return tunnel->type == TB_TUNNEL_DP;
}
+static inline bool tb_tunnel_is_dma(const struct tb_tunnel *tunnel)
+{
+ return tunnel->type == TB_TUNNEL_DMA;
+}
+
#endif
--
2.20.1
In Apple Macs the boot firmware (EFI) connects all devices automatically
when the system is started, before it hands over to the OS. Instead of
ignoring them, we discover all those PCIe tunnels and record them using
our internal structures, just as we do when a device is connected after
the OS is already up.
By doing this we can properly tear down tunnels when devices are
disconnected. This also allows us to resume the existing tunnels after a
system suspend/resume cycle.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/path.c | 144 +++++++++++++++++++++++++++++------
drivers/thunderbolt/switch.c | 14 ++++
drivers/thunderbolt/tb.c | 35 +++++++++
drivers/thunderbolt/tb.h | 4 +
drivers/thunderbolt/tunnel.c | 68 +++++++++++++++++
drivers/thunderbolt/tunnel.h | 1 +
6 files changed, 244 insertions(+), 22 deletions(-)
diff --git a/drivers/thunderbolt/path.c b/drivers/thunderbolt/path.c
index 122e6a1daf34..ada60d4aa99b 100644
--- a/drivers/thunderbolt/path.c
+++ b/drivers/thunderbolt/path.c
@@ -1,8 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/*
- * Thunderbolt Cactus Ridge driver - path/tunnel functionality
+ * Thunderbolt driver - path/tunnel functionality
*
* Copyright (c) 2014 Andreas Noever <[email protected]>
+ * Copyright (C) 2019, Intel Corporation
*/
#include <linux/slab.h>
@@ -12,6 +13,7 @@
#include "tb.h"
+#define MAX_PATH_HOPS 7
static void tb_dump_hop(struct tb_port *port, struct tb_regs_hop *hop)
{
@@ -30,6 +32,124 @@ static void tb_dump_hop(struct tb_port *port, struct tb_regs_hop *hop)
hop->unknown1, hop->unknown2, hop->unknown3);
}
+static struct tb_port *tb_port_remote(struct tb_port *port)
+{
+ struct tb_port *remote = port->remote;
+
+ /*
+ * If we have a dual link, the remote is available through the
+ * primary link.
+ */
+ if (!remote && port->dual_link_port && port->dual_link_port->remote)
+ return port->dual_link_port->remote->dual_link_port;
+ return remote;
+}
+
+/**
+ * tb_path_discover() - Discover a path starting from given hopid
+ * @port: First input port of a path
+ * @start_hopid: Starting hop ID of a path
+ * @last: Last port on a path will be filled here if not %NULL
+ *
+ * Follows a path starting from @port and @hopid to the last output port
+ * of the path. Allocates hop IDs for the visited ports. Call
+ * tb_path_free() to release the path and allocated hop IDs when the
+ * path is not needed anymore.
+ *
+ * Return: Discovered path on success, %NULL in case of failure
+ */
+struct tb_path *tb_path_discover(struct tb_port *port, int start_hopid,
+ struct tb_port **last)
+{
+ struct tb_port *out_port;
+ struct tb_regs_hop hop;
+ struct tb_path *path;
+ struct tb_switch *sw;
+ struct tb_port *p;
+ size_t num_hops;
+ int ret, i, h;
+
+ p = port;
+ h = start_hopid;
+
+ for (i = 0; p && i < MAX_PATH_HOPS; i++) {
+ sw = p->sw;
+
+ ret = tb_port_read(p, &hop, TB_CFG_HOPS, 2 * h, 2);
+ if (ret) {
+ tb_port_warn(p, "failed to read path at %d\n", h);
+ return NULL;
+ }
+
+ if (!hop.enable)
+ return NULL;
+
+ out_port = &sw->ports[hop.out_port];
+ if (last)
+ *last = out_port;
+
+ h = hop.next_hop;
+ p = tb_port_remote(out_port);
+ }
+
+ num_hops = i;
+ path = kzalloc(sizeof(*path), GFP_KERNEL);
+ if (!path)
+ return NULL;
+
+ path->tb = port->sw->tb;
+ path->path_length = num_hops;
+
+ path->hops = kcalloc(num_hops, sizeof(*path->hops), GFP_KERNEL);
+ if (!path->hops) {
+ kfree(path);
+ return NULL;
+ }
+
+ p = port;
+ h = start_hopid;
+
+ for (i = 0; i < num_hops; i++) {
+ int next_hop;
+
+ sw = p->sw;
+
+ ret = tb_port_read(p, &hop, TB_CFG_HOPS, 2 * h, 2);
+ if (ret) {
+ tb_port_warn(p, "failed to read path at %d\n", h);
+ goto err;
+ }
+
+ if (tb_port_alloc_in_hopid(p, h, h) < 0)
+ goto err;
+
+ out_port = &sw->ports[hop.out_port];
+ next_hop = hop.next_hop;
+
+ if (tb_port_alloc_out_hopid(out_port, next_hop, next_hop) < 0) {
+ tb_port_release_in_hopid(p, h);
+ goto err;
+ }
+
+ path->hops[i].in_port = p;
+ path->hops[i].in_hop_index = h;
+ path->hops[i].in_counter_index = -1;
+ path->hops[i].out_port = out_port;
+ path->hops[i].next_hop_index = next_hop;
+
+ h = next_hop;
+ p = tb_port_remote(out_port);
+ }
+
+ return path;
+
+err:
+ tb_port_warn(port, "failed to discover path starting at hop %d\n",
+ start_hopid);
+ tb_path_free(path);
+ return NULL;
+}
+
/**
* tb_path_alloc() - allocate a thunderbolt path between two ports
* @tb: Domain pointer
@@ -279,30 +399,10 @@ int tb_path_activate(struct tb_path *path)
for (i = path->path_length - 1; i >= 0; i--) {
struct tb_regs_hop hop = { 0 };
- /*
- * We do (currently) not tear down paths setup by the firmeware.
- * If a firmware device is unplugged and plugged in again then
- * it can happen that we reuse some of the hops from the (now
- * defunct) firmeware path. This causes the hotplug operation to
- * fail (the pci device does not show up). Clearing the hop
- * before overwriting it fixes the problem.
- *
- * Should be removed once we discover and tear down firmeware
- * paths.
- */
- res = tb_port_write(path->hops[i].in_port, &hop, TB_CFG_HOPS,
- 2 * path->hops[i].in_hop_index, 2);
- if (res) {
- __tb_path_deactivate_hops(path, i);
- __tb_path_deallocate_nfc(path, 0);
- goto err;
- }
-
/* dword 0 */
hop.next_hop = path->hops[i].next_hop_index;
hop.out_port = path->hops[i].out_port->port;
- /* TODO: figure out why these are good values */
- hop.initial_credits = (i == path->path_length - 1) ? 16 : 7;
+ hop.initial_credits = path->hops[i].initial_credits;
hop.unknown1 = 0;
hop.enable = 1;
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 23b6bae8362e..c6cb6c44a571 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -743,6 +743,20 @@ struct tb_port *tb_port_get_next(struct tb_port *start, struct tb_port *end,
return next;
}
+/**
+ * tb_pci_port_is_enabled() - Is the PCIe adapter port enabled
+ * @port: PCIe port to check
+ */
+bool tb_pci_port_is_enabled(struct tb_port *port)
+{
+ u32 data;
+
+ if (tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap, 1))
+ return false;
+
+ return !!(data & TB_PCI_EN);
+}
+
/**
* tb_pci_port_enable() - Enable PCIe adapter port
* @port: PCIe port to enable
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 99f1c7e28d12..371633e17916 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -29,6 +29,39 @@ struct tb_cm {
/* enumeration & hot plug handling */
+static void tb_discover_tunnels(struct tb_switch *sw)
+{
+ struct tb *tb = sw->tb;
+ struct tb_cm *tcm = tb_priv(tb);
+ struct tb_port *port;
+ int i;
+
+ for (i = 1; i <= sw->config.max_port_number; i++) {
+ struct tb_tunnel *tunnel = NULL;
+
+ port = &sw->ports[i];
+ switch (port->config.type) {
+ case TB_TYPE_PCIE_DOWN:
+ tunnel = tb_tunnel_discover_pci(tb, port);
+ break;
+
+ default:
+ break;
+ }
+
+ /* Find and add existing tunnels */
+ if (tunnel)
+ list_add_tail(&tunnel->list, &tcm->tunnel_list);
+ }
+
+ for (i = 1; i <= sw->config.max_port_number; i++) {
+ port = &sw->ports[i];
+ if (tb_is_upstream_port(port))
+ continue;
+ if (port->remote)
+ tb_discover_tunnels(port->remote->sw);
+ }
+}
static void tb_scan_port(struct tb_port *port);
@@ -393,6 +426,8 @@ static int tb_start(struct tb *tb)
/* Full scan to discover devices added before the driver was loaded. */
tb_scan_switch(tb->root_switch);
+ /* Find out tunnels created by the boot firmware */
+ tb_discover_tunnels(tb->root_switch);
tb_activate_pcie_devices(tb);
/* Allow tb_handle_hotplug to progress events */
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 0e4d9088faf6..5a0b831a37ad 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -165,6 +165,7 @@ struct tb_path_hop {
int in_hop_index;
int in_counter_index; /* write -1 to disable counters for this hop. */
int next_hop_index;
+ unsigned int initial_credits;
};
/**
@@ -457,8 +458,11 @@ struct tb_port *tb_port_get_next(struct tb_port *start, struct tb_port *end,
int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap);
+bool tb_pci_port_is_enabled(struct tb_port *port);
int tb_pci_port_enable(struct tb_port *port, bool enable);
+struct tb_path *tb_path_discover(struct tb_port *port, int start_hopid,
+ struct tb_port **last);
struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src,
struct tb_port *dst, int start_hopid,
int end_hopid, int link_nr);
diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index b48c66efe87a..1a5e2aa395c6 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -78,6 +78,74 @@ static void tb_pci_init_path(struct tb_path *path)
path->weight = 1;
path->drop_packages = 0;
path->nfc_credits = 0;
+ path->hops[0].initial_credits = 7;
+ path->hops[1].initial_credits = 16;
+}
+
+/**
+ * tb_tunnel_discover_pci() - Discover existing PCIe tunnels
+ * @tb: Pointer to the domain structure
+ * @down: PCIe downstream adapter
+ *
+ * If @down adapter is active, follows the tunnel to the PCIe upstream
+ * adapter and back. Returns the discovered tunnel or %NULL if there was
+ * no tunnel.
+ */
+struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down)
+{
+ struct tb_tunnel *tunnel;
+ struct tb_path *path;
+
+ if (!tb_pci_port_is_enabled(down))
+ return NULL;
+
+ tunnel = tb_tunnel_alloc(tb, 2);
+ if (!tunnel)
+ return NULL;
+
+ tunnel->activate = tb_pci_activate;
+ tunnel->src_port = down;
+
+ path = tb_path_discover(down, TB_PCI_HOPID, &tunnel->dst_port);
+ if (!path)
+ goto err_free;
+
+ if (tunnel->dst_port->config.type != TB_TYPE_PCIE_UP) {
+ tb_port_warn(tunnel->dst_port,
+ "path does not end to a PCIe adapter\n");
+ goto err_free;
+ }
+
+ tunnel->paths[TB_PCI_PATH_UP] = path;
+
+ path = tb_path_discover(tunnel->dst_port, TB_PCI_HOPID, &down);
+ if (!path)
+ goto err_free;
+ tunnel->paths[TB_PCI_PATH_DOWN] = path;
+
+ if (down != tunnel->src_port) {
+ tb_tunnel_warn(tunnel, "path is not complete, skipping\n");
+ goto err_free;
+ }
+
+ if (!tb_pci_port_is_enabled(tunnel->dst_port)) {
+ tb_tunnel_warn(tunnel,
+ "tunnel is not fully activated, skipping\n");
+ goto err_free;
+ }
+
+ /* Activated by the boot firmware */
+ tunnel->paths[TB_PCI_PATH_UP]->activated = true;
+ tunnel->paths[TB_PCI_PATH_DOWN]->activated = true;
+
+ tb_pci_init_path(tunnel->paths[TB_PCI_PATH_UP]);
+ tb_pci_init_path(tunnel->paths[TB_PCI_PATH_DOWN]);
+
+ return tunnel;
+
+err_free:
+ tb_tunnel_free(tunnel);
+ return NULL;
}
/**
diff --git a/drivers/thunderbolt/tunnel.h b/drivers/thunderbolt/tunnel.h
index b4e992165e56..7e801a31f9d1 100644
--- a/drivers/thunderbolt/tunnel.h
+++ b/drivers/thunderbolt/tunnel.h
@@ -31,6 +31,7 @@ struct tb_tunnel {
struct list_head list;
};
+struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down);
struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
struct tb_port *down);
void tb_tunnel_free(struct tb_tunnel *tunnel);
--
2.20.1
We can't be sure the paths are actually properly deactivated when a
tunnel is restarted after resume. So instead of just marking all paths
as inactive, we go ahead and deactivate them explicitly.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/tunnel.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index cdf9ca1c043e..b48c66efe87a 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -183,8 +183,15 @@ int tb_tunnel_restart(struct tb_tunnel *tunnel)
tb_tunnel_info(tunnel, "activating\n");
+ /* Make sure all paths are properly disabled before enabling them again */
+ for (i = 0; i < tunnel->npaths; i++) {
+ if (tunnel->paths[i]->activated) {
+ tb_path_deactivate(tunnel->paths[i]);
+ tunnel->paths[i]->activated = false;
+ }
+ }
+
for (i = 0; i < tunnel->npaths; i++) {
- tunnel->paths[i]->activated = false;
res = tb_path_activate(tunnel->paths[i]);
if (res)
goto err;
--
2.20.1
On Thunderbolt 2 devices and beyond, the link controller needs to be
notified when a switch is going to be suspended. This is done by setting
bit 31 in the LC_SX_CTRL register. Add this functionality to the
software connection manager.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/lc.c | 44 +++++++++++++++++++++++++++++++++++
drivers/thunderbolt/switch.c | 6 ++---
drivers/thunderbolt/tb.h | 1 +
drivers/thunderbolt/tb_regs.h | 2 ++
4 files changed, 49 insertions(+), 4 deletions(-)
diff --git a/drivers/thunderbolt/lc.c b/drivers/thunderbolt/lc.c
index a5dddf176546..ae1e92611c3e 100644
--- a/drivers/thunderbolt/lc.c
+++ b/drivers/thunderbolt/lc.c
@@ -133,3 +133,47 @@ void tb_lc_unconfigure_link(struct tb_switch *sw)
tb_lc_configure_lane(up, false);
tb_lc_configure_lane(down, false);
}
+
+/**
+ * tb_lc_set_sleep() - Inform LC that the switch is going to sleep
+ * @sw: Switch to set sleep
+ *
+ * Let the switch link controllers know that the switch is going to
+ * sleep.
+ */
+int tb_lc_set_sleep(struct tb_switch *sw)
+{
+ int start, size, nlc, ret, i;
+ u32 desc;
+
+ if (sw->generation < 2)
+ return 0;
+
+ ret = read_lc_desc(sw, &desc);
+ if (ret)
+ return ret;
+
+ /* Figure out number of link controllers */
+ nlc = desc & TB_LC_DESC_NLC_MASK;
+ start = (desc & TB_LC_DESC_SIZE_MASK) >> TB_LC_DESC_SIZE_SHIFT;
+ size = (desc & TB_LC_DESC_PORT_SIZE_MASK) >> TB_LC_DESC_PORT_SIZE_SHIFT;
+
+ /* For each link controller set sleep bit */
+ for (i = 0; i < nlc; i++) {
+ unsigned int offset = sw->cap_lc + start + i * size;
+ u32 ctrl;
+
+ ret = tb_sw_read(sw, &ctrl, TB_CFG_SWITCH,
+ offset + TB_LC_SX_CTRL, 1);
+ if (ret)
+ return ret;
+
+ ctrl |= TB_LC_SX_CTRL_SLP;
+ ret = tb_sw_write(sw, &ctrl, TB_CFG_SWITCH,
+ offset + TB_LC_SX_CTRL, 1);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 760332f57b5c..1eee2502b5ba 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -1604,10 +1604,8 @@ void tb_switch_suspend(struct tb_switch *sw)
if (!tb_is_upstream_port(&sw->ports[i]) && sw->ports[i].remote)
tb_switch_suspend(sw->ports[i].remote->sw);
}
- /*
- * TODO: invoke tb_cfg_prepare_to_sleep here? does not seem to have any
- * effect?
- */
+
+ tb_lc_set_sleep(sw);
}
struct tb_sw_lookup {
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index e61c2409021d..3160169389cc 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -453,6 +453,7 @@ int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid);
int tb_lc_read_uuid(struct tb_switch *sw, u32 *uuid);
int tb_lc_configure_link(struct tb_switch *sw);
void tb_lc_unconfigure_link(struct tb_switch *sw);
+int tb_lc_set_sleep(struct tb_switch *sw);
static inline int tb_route_length(u64 route)
{
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index e0f867dad5cf..1ab6e0fb31c0 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -239,6 +239,7 @@ struct tb_regs_hop {
/* Common link controller registers */
#define TB_LC_DESC 0x02
+#define TB_LC_DESC_NLC_MASK GENMASK(3, 0)
#define TB_LC_DESC_SIZE_SHIFT 8
#define TB_LC_DESC_SIZE_MASK GENMASK(15, 8)
#define TB_LC_DESC_PORT_SIZE_SHIFT 16
@@ -250,5 +251,6 @@ struct tb_regs_hop {
#define TB_LC_SX_CTRL_L1C BIT(16)
#define TB_LC_SX_CTRL_L2C BIT(20)
#define TB_LC_SX_CTRL_UPSTREAM BIT(30)
+#define TB_LC_SX_CTRL_SLP BIT(31)
#endif
--
2.20.1
So far the ICM firmware has been handling the XDomain UUID exchange, so
there was no need to implement it in the driver. However, since we are
now adding the same capabilities to the software connection manager, it
needs to be handled properly.
For this reason, modify the driver XDomain protocol handling so that if
the remote domain UUID is not yet filled in, the core queries it first
and only then starts the normal property exchange flow.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/tb_msgs.h | 11 +++
drivers/thunderbolt/xdomain.c | 136 +++++++++++++++++++++++++++++++---
include/linux/thunderbolt.h | 8 ++
3 files changed, 145 insertions(+), 10 deletions(-)
diff --git a/drivers/thunderbolt/tb_msgs.h b/drivers/thunderbolt/tb_msgs.h
index 02c84aa3d018..afbe1d29bb03 100644
--- a/drivers/thunderbolt/tb_msgs.h
+++ b/drivers/thunderbolt/tb_msgs.h
@@ -492,6 +492,17 @@ struct tb_xdp_header {
u32 type;
};
+struct tb_xdp_uuid {
+ struct tb_xdp_header hdr;
+};
+
+struct tb_xdp_uuid_response {
+ struct tb_xdp_header hdr;
+ uuid_t src_uuid;
+ u32 src_route_hi;
+ u32 src_route_lo;
+};
+
struct tb_xdp_properties {
struct tb_xdp_header hdr;
uuid_t src_uuid;
diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index 59789bdd93ac..7aa8b9da78c1 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -18,6 +18,7 @@
#include "tb.h"
#define XDOMAIN_DEFAULT_TIMEOUT 5000 /* ms */
+#define XDOMAIN_UUID_RETRIES 10
#define XDOMAIN_PROPERTIES_RETRIES 60
#define XDOMAIN_PROPERTIES_CHANGED_RETRIES 10
@@ -222,6 +223,50 @@ static int tb_xdp_handle_error(const struct tb_xdp_header *hdr)
return 0;
}
+static int tb_xdp_uuid_request(struct tb_ctl *ctl, u64 route, int retry,
+ uuid_t *uuid)
+{
+ struct tb_xdp_uuid_response res;
+ struct tb_xdp_uuid req;
+ int ret;
+
+ memset(&req, 0, sizeof(req));
+ tb_xdp_fill_header(&req.hdr, route, retry % 4, UUID_REQUEST,
+ sizeof(req));
+
+ memset(&res, 0, sizeof(res));
+ ret = __tb_xdomain_request(ctl, &req, sizeof(req),
+ TB_CFG_PKG_XDOMAIN_REQ, &res, sizeof(res),
+ TB_CFG_PKG_XDOMAIN_RESP,
+ XDOMAIN_DEFAULT_TIMEOUT);
+ if (ret)
+ return ret;
+
+ ret = tb_xdp_handle_error(&res.hdr);
+ if (ret)
+ return ret;
+
+ uuid_copy(uuid, &res.src_uuid);
+ return 0;
+}
+
+static int tb_xdp_uuid_response(struct tb_ctl *ctl, u64 route, u8 sequence,
+ const uuid_t *uuid)
+{
+ struct tb_xdp_uuid_response res;
+
+ memset(&res, 0, sizeof(res));
+ tb_xdp_fill_header(&res.hdr, route, sequence, UUID_RESPONSE,
+ sizeof(res));
+
+ uuid_copy(&res.src_uuid, uuid);
+ res.src_route_hi = upper_32_bits(route);
+ res.src_route_lo = lower_32_bits(route);
+
+ return __tb_xdomain_response(ctl, &res, sizeof(res),
+ TB_CFG_PKG_XDOMAIN_RESP);
+}
+
static int tb_xdp_error_response(struct tb_ctl *ctl, u64 route, u8 sequence,
enum tb_xdp_error error)
{
@@ -512,7 +557,14 @@ static void tb_xdp_handle_request(struct work_struct *work)
break;
}
+ case UUID_REQUEST_OLD:
+ case UUID_REQUEST:
+ ret = tb_xdp_uuid_response(ctl, route, sequence, uuid);
+ break;
+
default:
+ tb_xdp_error_response(ctl, route, sequence,
+ ERROR_NOT_SUPPORTED);
break;
}
@@ -828,6 +880,55 @@ static void tb_xdomain_restore_paths(struct tb_xdomain *xd)
}
}
+static void tb_xdomain_get_uuid(struct work_struct *work)
+{
+ struct tb_xdomain *xd = container_of(work, typeof(*xd),
+ get_uuid_work.work);
+ struct tb *tb = xd->tb;
+ uuid_t uuid;
+ int ret;
+
+ ret = tb_xdp_uuid_request(tb->ctl, xd->route, xd->uuid_retries, &uuid);
+ if (ret < 0) {
+ if (xd->uuid_retries-- > 0) {
+ queue_delayed_work(xd->tb->wq, &xd->get_uuid_work,
+ msecs_to_jiffies(100));
+ } else {
+ dev_dbg(&xd->dev, "failed to read remote UUID\n");
+ }
+ return;
+ }
+
+ if (uuid_equal(&uuid, xd->local_uuid)) {
+ dev_dbg(&xd->dev, "intra-domain loop detected\n");
+ return;
+ }
+
+ /*
+ * If the UUID is different, there is another domain connected
+ * so mark this one unplugged and wait for the connection
+ * manager to replace it.
+ */
+ if (xd->remote_uuid && !uuid_equal(&uuid, xd->remote_uuid)) {
+ dev_dbg(&xd->dev, "remote UUID is different, unplugging\n");
+ xd->is_unplugged = true;
+ return;
+ }
+
+ /* First time fill in the missing UUID */
+ if (!xd->remote_uuid) {
+ xd->remote_uuid = kmemdup(&uuid, sizeof(uuid_t), GFP_KERNEL);
+ if (!xd->remote_uuid)
+ return;
+ }
+
+ /* Now we can start the normal properties exchange */
+ queue_delayed_work(xd->tb->wq, &xd->properties_changed_work,
+ msecs_to_jiffies(100));
+ queue_delayed_work(xd->tb->wq, &xd->get_properties_work,
+ msecs_to_jiffies(1000));
+}
+
static void tb_xdomain_get_properties(struct work_struct *work)
{
struct tb_xdomain *xd = container_of(work, typeof(*xd),
@@ -1034,21 +1135,29 @@ static void tb_xdomain_release(struct device *dev)
static void start_handshake(struct tb_xdomain *xd)
{
+ xd->uuid_retries = XDOMAIN_UUID_RETRIES;
xd->properties_retries = XDOMAIN_PROPERTIES_RETRIES;
xd->properties_changed_retries = XDOMAIN_PROPERTIES_CHANGED_RETRIES;
- /* Start exchanging properties with the other host */
- queue_delayed_work(xd->tb->wq, &xd->properties_changed_work,
- msecs_to_jiffies(100));
- queue_delayed_work(xd->tb->wq, &xd->get_properties_work,
- msecs_to_jiffies(1000));
+ if (xd->needs_uuid) {
+ queue_delayed_work(xd->tb->wq, &xd->get_uuid_work,
+ msecs_to_jiffies(100));
+ } else {
+ /* Start exchanging properties with the other host */
+ queue_delayed_work(xd->tb->wq, &xd->properties_changed_work,
+ msecs_to_jiffies(100));
+ queue_delayed_work(xd->tb->wq, &xd->get_properties_work,
+ msecs_to_jiffies(1000));
+ }
}
static void stop_handshake(struct tb_xdomain *xd)
{
+ xd->uuid_retries = 0;
xd->properties_retries = 0;
xd->properties_changed_retries = 0;
+ cancel_delayed_work_sync(&xd->get_uuid_work);
cancel_delayed_work_sync(&xd->get_properties_work);
cancel_delayed_work_sync(&xd->properties_changed_work);
}
@@ -1091,7 +1200,7 @@ EXPORT_SYMBOL_GPL(tb_xdomain_type);
* other domain is reached).
* @route: Route string used to reach the other domain
* @local_uuid: Our local domain UUID
- * @remote_uuid: UUID of the other domain
+ * @remote_uuid: UUID of the other domain (optional)
*
* Allocates new XDomain structure and returns pointer to that. The
* object must be released by calling tb_xdomain_put().
@@ -1110,6 +1219,7 @@ struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent,
xd->route = route;
ida_init(&xd->service_ids);
mutex_init(&xd->lock);
+ INIT_DELAYED_WORK(&xd->get_uuid_work, tb_xdomain_get_uuid);
INIT_DELAYED_WORK(&xd->get_properties_work, tb_xdomain_get_properties);
INIT_DELAYED_WORK(&xd->properties_changed_work,
tb_xdomain_properties_changed);
@@ -1118,9 +1228,14 @@ struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent,
if (!xd->local_uuid)
goto err_free;
- xd->remote_uuid = kmemdup(remote_uuid, sizeof(uuid_t), GFP_KERNEL);
- if (!xd->remote_uuid)
- goto err_free_local_uuid;
+ if (remote_uuid) {
+ xd->remote_uuid = kmemdup(remote_uuid, sizeof(uuid_t),
+ GFP_KERNEL);
+ if (!xd->remote_uuid)
+ goto err_free_local_uuid;
+ } else {
+ xd->needs_uuid = true;
+ }
device_initialize(&xd->dev);
xd->dev.parent = get_device(parent);
@@ -1291,7 +1406,8 @@ static struct tb_xdomain *switch_find_xdomain(struct tb_switch *sw,
xd = port->xdomain;
if (lookup->uuid) {
- if (uuid_equal(xd->remote_uuid, lookup->uuid))
+ if (xd->remote_uuid &&
+ uuid_equal(xd->remote_uuid, lookup->uuid))
return xd;
} else if (lookup->link &&
lookup->link == xd->link &&
diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h
index bf6ec83e60ee..2d7e012db03f 100644
--- a/include/linux/thunderbolt.h
+++ b/include/linux/thunderbolt.h
@@ -181,6 +181,8 @@ void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir);
* @device_name: Name of the device (or %NULL if not known)
* @is_unplugged: The XDomain is unplugged
* @resume: The XDomain is being resumed
+ * @needs_uuid: If the XDomain does not have @remote_uuid it will be
+ * queried first
* @transmit_path: HopID which the remote end expects us to transmit
* @transmit_ring: Local ring (hop) where outgoing packets are pushed
* @receive_path: HopID which we expect the remote end to transmit
@@ -189,6 +191,9 @@ void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir);
* @properties: Properties exported by the remote domain
* @property_block_gen: Generation of @properties
* @properties_lock: Lock protecting @properties.
+ * @get_uuid_work: Work used to retrieve @remote_uuid
+ * @uuid_retries: Number of times left to request @remote_uuid before
+ * giving up
* @get_properties_work: Work used to get remote domain properties
* @properties_retries: Number of times left to read properties
* @properties_changed_work: Work used to notify the remote domain that
@@ -220,6 +225,7 @@ struct tb_xdomain {
const char *device_name;
bool is_unplugged;
bool resume;
+ bool needs_uuid;
u16 transmit_path;
u16 transmit_ring;
u16 receive_path;
@@ -227,6 +233,8 @@ struct tb_xdomain {
struct ida service_ids;
struct tb_property_dir *properties;
u32 property_block_gen;
+ struct delayed_work get_uuid_work;
+ int uuid_retries;
struct delayed_work get_properties_work;
int properties_retries;
struct delayed_work properties_changed_work;
--
2.20.1
The maximum depth of a Thunderbolt topology is 6, so make sure it is not
possible to allocate switches that exceed this limit.
While at it, update tb_switch_alloc() to use upper/lower_32_bits(),
following tb_switch_alloc_safe_mode().
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/icm.c | 5 ++---
drivers/thunderbolt/switch.c | 18 ++++++++++++------
drivers/thunderbolt/tb.h | 1 +
3 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
index e3fc920af682..041e7ab0efd3 100644
--- a/drivers/thunderbolt/icm.c
+++ b/drivers/thunderbolt/icm.c
@@ -42,7 +42,6 @@
#define ICM_TIMEOUT 5000 /* ms */
#define ICM_APPROVE_TIMEOUT 10000 /* ms */
#define ICM_MAX_LINK 4
-#define ICM_MAX_DEPTH 6
/**
* struct icm - Internal connection manager private data
@@ -709,7 +708,7 @@ icm_fr_device_disconnected(struct tb *tb, const struct icm_pkg_header *hdr)
depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >>
ICM_LINK_INFO_DEPTH_SHIFT;
- if (link > ICM_MAX_LINK || depth > ICM_MAX_DEPTH) {
+ if (link > ICM_MAX_LINK || depth > TB_SWITCH_MAX_DEPTH) {
tb_warn(tb, "invalid topology %u.%u, ignoring\n", link, depth);
return;
}
@@ -739,7 +738,7 @@ icm_fr_xdomain_connected(struct tb *tb, const struct icm_pkg_header *hdr)
depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >>
ICM_LINK_INFO_DEPTH_SHIFT;
- if (link > ICM_MAX_LINK || depth > ICM_MAX_DEPTH) {
+ if (link > ICM_MAX_LINK || depth > TB_SWITCH_MAX_DEPTH) {
tb_warn(tb, "invalid topology %u.%u, ignoring\n", link, depth);
return;
}
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index cd96994dc094..a90d21abed88 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -1155,10 +1155,16 @@ static int tb_switch_get_generation(struct tb_switch *sw)
struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
u64 route)
{
- int i;
- int cap;
struct tb_switch *sw;
- int upstream_port = tb_cfg_get_upstream_port(tb->ctl, route);
+ int upstream_port;
+ int i, cap, depth;
+
+ /* Make sure we do not exceed maximum topology limit */
+ depth = tb_route_length(route);
+ if (depth > TB_SWITCH_MAX_DEPTH)
+ return NULL;
+
+ upstream_port = tb_cfg_get_upstream_port(tb->ctl, route);
if (upstream_port < 0)
return NULL;
@@ -1175,9 +1181,9 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
/* configure switch */
sw->config.upstream_port_number = upstream_port;
- sw->config.depth = tb_route_length(route);
- sw->config.route_lo = route;
- sw->config.route_hi = route >> 32;
+ sw->config.depth = depth;
+ sw->config.route_hi = upper_32_bits(route);
+ sw->config.route_lo = lower_32_bits(route);
sw->config.enabled = 0;
/* initialize ports */
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 52584c4003e3..5faec5a8eb98 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -43,6 +43,7 @@ struct tb_switch_nvm {
};
#define TB_SWITCH_KEY_SIZE 32
+#define TB_SWITCH_MAX_DEPTH 6
/**
* struct tb_switch - a thunderbolt switch
--
2.20.1
The only way to expand Thunderbolt topology is through the NULL adapter
ports (typically ports 1, 2, 3 and 4). There is no point handling
Thunderbolt hotplug events on any other port.
Add a helper function (tb_port_is_null()) that can be used to determine
if the port is a NULL port, and use it in the software connection manager
code when a hotplug event is handled.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/tb.c | 10 ++++++----
drivers/thunderbolt/tb.h | 5 +++++
2 files changed, 11 insertions(+), 4 deletions(-)
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index f2b23b290b63..a450bebfeb92 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -344,10 +344,12 @@ static void tb_handle_hotplug(struct work_struct *work)
tb_port_info(port,
"got plug event for connected port, ignoring\n");
} else {
- tb_port_info(port, "hotplug: scanning\n");
- tb_scan_port(port);
- if (!port->remote)
- tb_port_info(port, "hotplug: no switch found\n");
+ if (tb_port_is_null(port)) {
+ tb_port_info(port, "hotplug: scanning\n");
+ tb_scan_port(port);
+ if (!port->remote)
+ tb_port_info(port, "hotplug: no switch found\n");
+ }
}
out:
mutex_unlock(&tb->lock);
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 5a0b831a37ad..8906ee0a8a6a 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -286,6 +286,11 @@ static inline struct tb_port *tb_port_at(u64 route, struct tb_switch *sw)
return &sw->ports[port];
}
+static inline bool tb_port_is_null(const struct tb_port *port)
+{
+ return port->port && port->config.type == TB_TYPE_PORT;
+}
+
static inline int tb_sw_read(struct tb_switch *sw, void *buffer,
enum tb_cfg_space space, u32 offset, u32 length)
{
--
2.20.1
We need to be able to walk from one port to another when creating paths
that span multiple switches between the two ports. For this reason
introduce a new function tb_port_get_next() and a new macro
tb_for_each_port().
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/switch.c | 60 ++++++++++++++++++++++++++++++++++++
drivers/thunderbolt/tb.h | 6 ++++
2 files changed, 66 insertions(+)
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 320f64ebe8b8..23b6bae8362e 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -683,6 +683,66 @@ void tb_port_release_out_hopid(struct tb_port *port, int hopid)
ida_simple_remove(&port->out_hopids, hopid);
}
+/**
+ * tb_port_get_next() - Return next port for given port
+ * @start: Start port of the walk
+ * @end: End port of the walk
+ * @prev: Previous port (%NULL if this is the first)
+ *
+ * This function can be used to walk from one port to another if they
+ * are connected through zero or more switches. If the @prev is dual
+ * link port, the function follows that link and returns another end on
+ * that same link.
+ *
+ * If the walk cannot be continued, returns %NULL.
+ *
+ * Domain tb->lock must be held when this function is called.
+ */
+struct tb_port *tb_port_get_next(struct tb_port *start, struct tb_port *end,
+ struct tb_port *prev)
+{
+ struct tb_port *port, *next;
+
+ if (!prev)
+ return start;
+
+ if (prev->sw == end->sw) {
+ if (prev != end)
+ return end;
+ return NULL;
+ }
+
+ /* Switch back to use primary links for walking */
+ if (prev->dual_link_port && prev->link_nr)
+ port = prev->dual_link_port;
+ else
+ port = prev;
+
+ if (start->sw->config.depth < end->sw->config.depth) {
+ if (port->remote &&
+ port->remote->sw->config.depth > port->sw->config.depth)
+ next = port->remote;
+ else
+ next = tb_port_at(tb_route(end->sw), port->sw);
+ } else if (start->sw->config.depth > end->sw->config.depth) {
+ if (tb_is_upstream_port(port))
+ next = port->remote;
+ else
+ next = tb_upstream_port(port->sw);
+ } else {
+ /* Must be the same switch then */
+ if (start->sw != end->sw)
+ return NULL;
+ return end;
+ }
+
+ /* If prev was a dual link port, return the other end of that link */
+ if (next->dual_link_port && next->link_nr != prev->link_nr)
+ return next->dual_link_port;
+
+ return next;
+}
+
/**
* tb_pci_port_enable() - Enable PCIe adapter port
* @port: PCIe port to enable
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index bfa1cee193fd..683725915ff7 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -447,6 +447,12 @@ int tb_port_alloc_in_hopid(struct tb_port *port, int hopid, int max_hopid);
void tb_port_release_in_hopid(struct tb_port *port, int hopid);
int tb_port_alloc_out_hopid(struct tb_port *port, int hopid, int max_hopid);
void tb_port_release_out_hopid(struct tb_port *port, int hopid);
+struct tb_port *tb_port_get_next(struct tb_port *start, struct tb_port *end,
+ struct tb_port *prev);
+
+#define tb_for_each_port(port, start, end) \
+ for (port = tb_port_get_next(start, end, NULL); port; \
+ port = tb_port_get_next(start, end, port))
int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap);
--
2.20.1
To be able to tunnel non-PCIe traffic, separate tunnel functionality
into generic and PCIe specific parts. Rename struct tb_pci_tunnel to
tb_tunnel, and make it hold an array of paths instead of just two.
Update all the tunneling functions to take this structure as a parameter.
We also move tb_pci_port_active() to switch.c (and rename it) where we
will be keeping all port- and switch-related functions.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/switch.c | 13 ++
drivers/thunderbolt/tb.c | 30 ++--
drivers/thunderbolt/tb.h | 2 +
drivers/thunderbolt/tb_regs.h | 4 +
drivers/thunderbolt/tunnel.c | 298 ++++++++++++++++++++--------------
drivers/thunderbolt/tunnel.h | 38 +++--
6 files changed, 235 insertions(+), 150 deletions(-)
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index ec3c274ff278..b20af050ce9a 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -606,6 +606,19 @@ static int tb_init_port(struct tb_port *port)
}
+/**
+ * tb_pci_port_enable() - Enable PCIe adapter port
+ * @port: PCIe port to enable
+ * @enable: Enable/disable the PCIe adapter
+ */
+int tb_pci_port_enable(struct tb_port *port, bool enable)
+{
+ u32 word = enable ? TB_PCI_EN : 0x0;
+ if (!port->cap_adap)
+ return -ENXIO;
+ return tb_port_write(port, &word, TB_CFG_PORT, port->cap_adap, 1);
+}
+
/* switch utility functions */
static void tb_dump_switch(struct tb *tb, struct tb_regs_switch_header *sw)
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 931612143896..99f1c7e28d12 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -91,14 +91,14 @@ static void tb_scan_port(struct tb_port *port)
static void tb_free_invalid_tunnels(struct tb *tb)
{
struct tb_cm *tcm = tb_priv(tb);
- struct tb_pci_tunnel *tunnel;
- struct tb_pci_tunnel *n;
+ struct tb_tunnel *tunnel;
+ struct tb_tunnel *n;
list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) {
- if (tb_pci_is_invalid(tunnel)) {
- tb_pci_deactivate(tunnel);
+ if (tb_tunnel_is_invalid(tunnel)) {
+ tb_tunnel_deactivate(tunnel);
list_del(&tunnel->list);
- tb_pci_free(tunnel);
+ tb_tunnel_free(tunnel);
}
}
}
@@ -178,7 +178,7 @@ static void tb_activate_pcie_devices(struct tb *tb)
struct tb_switch *sw;
struct tb_port *up_port;
struct tb_port *down_port;
- struct tb_pci_tunnel *tunnel;
+ struct tb_tunnel *tunnel;
struct tb_cm *tcm = tb_priv(tb);
/* scan for pcie devices at depth 1*/
@@ -214,17 +214,17 @@ static void tb_activate_pcie_devices(struct tb *tb)
"All PCIe down ports are occupied, aborting\n");
continue;
}
- tunnel = tb_pci_alloc(tb, up_port, down_port);
+ tunnel = tb_tunnel_alloc_pci(tb, up_port, down_port);
if (!tunnel) {
tb_port_info(up_port,
"PCIe tunnel allocation failed, aborting\n");
continue;
}
- if (tb_pci_activate(tunnel)) {
+ if (tb_tunnel_activate(tunnel)) {
tb_port_info(up_port,
"PCIe tunnel activation failed, aborting\n");
- tb_pci_free(tunnel);
+ tb_tunnel_free(tunnel);
continue;
}
@@ -350,13 +350,13 @@ static void tb_handle_event(struct tb *tb, enum tb_cfg_pkg_type type,
static void tb_stop(struct tb *tb)
{
struct tb_cm *tcm = tb_priv(tb);
- struct tb_pci_tunnel *tunnel;
- struct tb_pci_tunnel *n;
+ struct tb_tunnel *tunnel;
+ struct tb_tunnel *n;
/* tunnels are only present after everything has been initialized */
list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) {
- tb_pci_deactivate(tunnel);
- tb_pci_free(tunnel);
+ tb_tunnel_deactivate(tunnel);
+ tb_tunnel_free(tunnel);
}
tb_switch_remove(tb->root_switch);
tcm->hotplug_active = false; /* signal tb_handle_hotplug to quit */
@@ -415,7 +415,7 @@ static int tb_suspend_noirq(struct tb *tb)
static int tb_resume_noirq(struct tb *tb)
{
struct tb_cm *tcm = tb_priv(tb);
- struct tb_pci_tunnel *tunnel, *n;
+ struct tb_tunnel *tunnel, *n;
tb_dbg(tb, "resuming...\n");
@@ -426,7 +426,7 @@ static int tb_resume_noirq(struct tb *tb)
tb_free_invalid_tunnels(tb);
tb_free_unplugged_children(tb->root_switch);
list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list)
- tb_pci_restart(tunnel);
+ tb_tunnel_restart(tunnel);
if (!list_empty(&tcm->tunnel_list)) {
/*
* the pcie links need some time to get going.
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index bab451ab31ff..a13d1cd53bc3 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -443,6 +443,8 @@ int tb_port_clear_counter(struct tb_port *port, int counter);
int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap);
+int tb_pci_port_enable(struct tb_port *port, bool enable);
+
struct tb_path *tb_path_alloc(struct tb *tb, int num_hops);
void tb_path_free(struct tb_path *path);
int tb_path_activate(struct tb_path *path);
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index 82ac4ec8757f..75e935acade5 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -211,6 +211,10 @@ struct tb_regs_port_header {
} __packed;
+/* PCIe adapter registers */
+
+#define TB_PCI_EN BIT(31)
+
/* Hop register from TB_CFG_HOPS. 8 byte per entry. */
struct tb_regs_hop {
/* DWORD 0 */
diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index 1e470564e99d..20ce28276f7a 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -1,8 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/*
- * Thunderbolt Cactus Ridge driver - Tunneling support
+ * Thunderbolt driver - Tunneling support
*
* Copyright (c) 2014 Andreas Noever <[email protected]>
+ * Copyright (C) 2019, Intel Corporation
*/
#include <linux/slab.h>
@@ -11,14 +12,17 @@
#include "tunnel.h"
#include "tb.h"
+#define TB_PCI_PATH_DOWN 0
+#define TB_PCI_PATH_UP 1
+
#define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) \
do { \
- struct tb_pci_tunnel *__tunnel = (tunnel); \
+ struct tb_tunnel *__tunnel = (tunnel); \
level(__tunnel->tb, "%llx:%x <-> %llx:%x (PCI): " fmt, \
- tb_route(__tunnel->down_port->sw), \
- __tunnel->down_port->port, \
- tb_route(__tunnel->up_port->sw), \
- __tunnel->up_port->port, \
+ tb_route(__tunnel->src_port->sw), \
+ __tunnel->src_port->port, \
+ tb_route(__tunnel->dst_port->sw), \
+ __tunnel->dst_port->port, \
## arg); \
} while (0)
@@ -29,6 +33,38 @@
#define tb_tunnel_info(tunnel, fmt, arg...) \
__TB_TUNNEL_PRINT(tb_info, tunnel, fmt, ##arg)
+static struct tb_tunnel *tb_tunnel_alloc(struct tb *tb, size_t npaths)
+{
+ struct tb_tunnel *tunnel;
+
+ tunnel = kzalloc(sizeof(*tunnel), GFP_KERNEL);
+ if (!tunnel)
+ return NULL;
+
+ tunnel->paths = kcalloc(npaths, sizeof(tunnel->paths[0]), GFP_KERNEL);
+ if (!tunnel->paths) {
+ tb_tunnel_free(tunnel);
+ return NULL;
+ }
+
+ INIT_LIST_HEAD(&tunnel->list);
+ tunnel->tb = tb;
+ tunnel->npaths = npaths;
+
+ return tunnel;
+}
+
+static int tb_pci_activate(struct tb_tunnel *tunnel, bool activate)
+{
+ int res;
+
+ res = tb_pci_port_enable(tunnel->src_port, activate);
+ if (res)
+ return res;
+
+ return tb_pci_port_enable(tunnel->dst_port, activate);
+}
+
static void tb_pci_init_path(struct tb_path *path)
{
path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
@@ -42,7 +78,10 @@ static void tb_pci_init_path(struct tb_path *path)
}
/**
- * tb_pci_alloc() - allocate a pci tunnel
+ * tb_tunnel_alloc_pci() - allocate a pci tunnel
+ * @tb: Pointer to the domain structure
+ * @up: PCIe upstream adapter port
+ * @down: PCIe downstream adapter port
*
* Allocate a PCI tunnel. The ports must be of type TB_TYPE_PCIE_UP and
* TB_TYPE_PCIE_DOWN.
@@ -54,170 +93,185 @@ static void tb_pci_init_path(struct tb_path *path)
* my thunderbolt devices). Therefore at most ONE path per device may be
* activated.
*
- * Return: Returns a tb_pci_tunnel on success or NULL on failure.
+ * Return: Returns a tb_tunnel on success or NULL on failure.
*/
-struct tb_pci_tunnel *tb_pci_alloc(struct tb *tb, struct tb_port *up,
- struct tb_port *down)
+struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
+ struct tb_port *down)
{
- struct tb_pci_tunnel *tunnel = kzalloc(sizeof(*tunnel), GFP_KERNEL);
+ struct tb_path *path_to_up;
+ struct tb_path *path_to_down;
+ struct tb_tunnel *tunnel;
+
+ tunnel = tb_tunnel_alloc(tb, 2);
if (!tunnel)
- goto err;
- tunnel->tb = tb;
- tunnel->down_port = down;
- tunnel->up_port = up;
- INIT_LIST_HEAD(&tunnel->list);
- tunnel->path_to_up = tb_path_alloc(up->sw->tb, 2);
- if (!tunnel->path_to_up)
- goto err;
- tunnel->path_to_down = tb_path_alloc(up->sw->tb, 2);
- if (!tunnel->path_to_down)
- goto err;
- tb_pci_init_path(tunnel->path_to_up);
- tb_pci_init_path(tunnel->path_to_down);
-
- tunnel->path_to_up->hops[0].in_port = down;
- tunnel->path_to_up->hops[0].in_hop_index = 8;
- tunnel->path_to_up->hops[0].in_counter_index = -1;
- tunnel->path_to_up->hops[0].out_port = tb_upstream_port(up->sw)->remote;
- tunnel->path_to_up->hops[0].next_hop_index = 8;
-
- tunnel->path_to_up->hops[1].in_port = tb_upstream_port(up->sw);
- tunnel->path_to_up->hops[1].in_hop_index = 8;
- tunnel->path_to_up->hops[1].in_counter_index = -1;
- tunnel->path_to_up->hops[1].out_port = up;
- tunnel->path_to_up->hops[1].next_hop_index = 8;
-
- tunnel->path_to_down->hops[0].in_port = up;
- tunnel->path_to_down->hops[0].in_hop_index = 8;
- tunnel->path_to_down->hops[0].in_counter_index = -1;
- tunnel->path_to_down->hops[0].out_port = tb_upstream_port(up->sw);
- tunnel->path_to_down->hops[0].next_hop_index = 8;
-
- tunnel->path_to_down->hops[1].in_port =
- tb_upstream_port(up->sw)->remote;
- tunnel->path_to_down->hops[1].in_hop_index = 8;
- tunnel->path_to_down->hops[1].in_counter_index = -1;
- tunnel->path_to_down->hops[1].out_port = down;
- tunnel->path_to_down->hops[1].next_hop_index = 8;
- return tunnel;
+ return NULL;
-err:
- if (tunnel) {
- if (tunnel->path_to_down)
- tb_path_free(tunnel->path_to_down);
- if (tunnel->path_to_up)
- tb_path_free(tunnel->path_to_up);
- kfree(tunnel);
+ tunnel->activate = tb_pci_activate;
+ tunnel->src_port = down;
+ tunnel->dst_port = up;
+
+ path_to_up = tb_path_alloc(tb, 2);
+ if (!path_to_up) {
+ tb_tunnel_free(tunnel);
+ return NULL;
}
- return NULL;
+ tunnel->paths[TB_PCI_PATH_UP] = path_to_up;
+
+ path_to_down = tb_path_alloc(tb, 2);
+ if (!path_to_down) {
+ tb_tunnel_free(tunnel);
+ return NULL;
+ }
+ tunnel->paths[TB_PCI_PATH_DOWN] = path_to_down;
+
+ tb_pci_init_path(path_to_up);
+ tb_pci_init_path(path_to_down);
+
+ path_to_up->hops[0].in_port = down;
+ path_to_up->hops[0].in_hop_index = 8;
+ path_to_up->hops[0].in_counter_index = -1;
+ path_to_up->hops[0].out_port = tb_upstream_port(up->sw)->remote;
+ path_to_up->hops[0].next_hop_index = 8;
+
+ path_to_up->hops[1].in_port = tb_upstream_port(up->sw);
+ path_to_up->hops[1].in_hop_index = 8;
+ path_to_up->hops[1].in_counter_index = -1;
+ path_to_up->hops[1].out_port = up;
+ path_to_up->hops[1].next_hop_index = 8;
+
+ path_to_down->hops[0].in_port = up;
+ path_to_down->hops[0].in_hop_index = 8;
+ path_to_down->hops[0].in_counter_index = -1;
+ path_to_down->hops[0].out_port = tb_upstream_port(up->sw);
+ path_to_down->hops[0].next_hop_index = 8;
+
+ path_to_down->hops[1].in_port = tb_upstream_port(up->sw)->remote;
+ path_to_down->hops[1].in_hop_index = 8;
+ path_to_down->hops[1].in_counter_index = -1;
+ path_to_down->hops[1].out_port = down;
+ path_to_down->hops[1].next_hop_index = 8;
+
+ return tunnel;
}
/**
- * tb_pci_free() - free a tunnel
+ * tb_tunnel_free() - free a tunnel
+ * @tunnel: Tunnel to be freed
*
* The tunnel must have been deactivated.
*/
-void tb_pci_free(struct tb_pci_tunnel *tunnel)
+void tb_tunnel_free(struct tb_tunnel *tunnel)
{
- if (tunnel->path_to_up->activated || tunnel->path_to_down->activated) {
- tb_tunnel_WARN(tunnel, "trying to free an activated tunnel\n");
+ int i;
+
+ if (!tunnel)
return;
+
+ for (i = 0; i < tunnel->npaths; i++) {
+ if (tunnel->paths[i] && tunnel->paths[i]->activated) {
+ tb_tunnel_WARN(tunnel,
+ "trying to free an activated tunnel\n");
+ return;
+ }
}
- tb_path_free(tunnel->path_to_up);
- tb_path_free(tunnel->path_to_down);
+
+ for (i = 0; i < tunnel->npaths; i++) {
+ if (tunnel->paths[i])
+ tb_path_free(tunnel->paths[i]);
+ }
+
+ kfree(tunnel->paths);
kfree(tunnel);
}
/**
- * tb_pci_is_invalid - check whether an activated path is still valid
+ * tb_tunnel_is_invalid - check whether an activated path is still valid
+ * @tunnel: Tunnel to check
*/
-bool tb_pci_is_invalid(struct tb_pci_tunnel *tunnel)
+bool tb_tunnel_is_invalid(struct tb_tunnel *tunnel)
{
- WARN_ON(!tunnel->path_to_up->activated);
- WARN_ON(!tunnel->path_to_down->activated);
+ int i;
- return tb_path_is_invalid(tunnel->path_to_up)
- || tb_path_is_invalid(tunnel->path_to_down);
-}
+ for (i = 0; i < tunnel->npaths; i++) {
+ WARN_ON(!tunnel->paths[i]->activated);
+ if (tb_path_is_invalid(tunnel->paths[i]))
+ return true;
+ }
-/**
- * tb_pci_port_active() - activate/deactivate PCI capability
- *
- * Return: Returns 0 on success or an error code on failure.
- */
-static int tb_pci_port_active(struct tb_port *port, bool active)
-{
- u32 word = active ? 0x80000000 : 0x0;
- if (!port->cap_adap)
- return -ENXIO;
- return tb_port_write(port, &word, TB_CFG_PORT, port->cap_adap, 1);
+ return false;
}
/**
- * tb_pci_restart() - activate a tunnel after a hardware reset
+ * tb_tunnel_restart() - activate a tunnel after a hardware reset
+ * @tunnel: Tunnel to restart
+ *
+ * Return: 0 on success and negative errno in case of failure
*/
-int tb_pci_restart(struct tb_pci_tunnel *tunnel)
+int tb_tunnel_restart(struct tb_tunnel *tunnel)
{
- int res;
- tunnel->path_to_up->activated = false;
- tunnel->path_to_down->activated = false;
+ int res, i;
tb_tunnel_info(tunnel, "activating\n");
- res = tb_path_activate(tunnel->path_to_up);
- if (res)
- goto err;
- res = tb_path_activate(tunnel->path_to_down);
- if (res)
- goto err;
+ for (i = 0; i < tunnel->npaths; i++) {
+ tunnel->paths[i]->activated = false;
+ res = tb_path_activate(tunnel->paths[i]);
+ if (res)
+ goto err;
+ }
- res = tb_pci_port_active(tunnel->down_port, true);
- if (res)
- goto err;
+ if (tunnel->activate) {
+ res = tunnel->activate(tunnel, true);
+ if (res)
+ goto err;
+ }
- res = tb_pci_port_active(tunnel->up_port, true);
- if (res)
- goto err;
return 0;
+
err:
tb_tunnel_warn(tunnel, "activation failed\n");
- tb_pci_deactivate(tunnel);
+ tb_tunnel_deactivate(tunnel);
return res;
}
/**
- * tb_pci_activate() - activate a tunnel
+ * tb_tunnel_activate() - activate a tunnel
+ * @tunnel: Tunnel to activate
*
* Return: Returns 0 on success or an error code on failure.
*/
-int tb_pci_activate(struct tb_pci_tunnel *tunnel)
+int tb_tunnel_activate(struct tb_tunnel *tunnel)
{
- if (tunnel->path_to_up->activated || tunnel->path_to_down->activated) {
- tb_tunnel_WARN(tunnel,
- "trying to activate an already activated tunnel\n");
- return -EINVAL;
- }
+ int i;
- return tb_pci_restart(tunnel);
-}
+ tb_tunnel_info(tunnel, "activating\n");
+ for (i = 0; i < tunnel->npaths; i++) {
+ if (tunnel->paths[i]->activated) {
+ tb_tunnel_WARN(tunnel,
+ "trying to activate an already activated tunnel\n");
+ return -EINVAL;
+ }
+ }
+ return tb_tunnel_restart(tunnel);
+}
/**
- * tb_pci_deactivate() - deactivate a tunnel
+ * tb_tunnel_deactivate() - deactivate a tunnel
+ * @tunnel: Tunnel to deactivate
*/
-void tb_pci_deactivate(struct tb_pci_tunnel *tunnel)
+void tb_tunnel_deactivate(struct tb_tunnel *tunnel)
{
+ int i;
+
tb_tunnel_info(tunnel, "deactivating\n");
- /*
- * TODO: enable reset by writing 0x04000000 to TB_CAP_PCIE + 1 on up
- * port. Seems to have no effect?
- */
- tb_pci_port_active(tunnel->up_port, false);
- tb_pci_port_active(tunnel->down_port, false);
- if (tunnel->path_to_down->activated)
- tb_path_deactivate(tunnel->path_to_down);
- if (tunnel->path_to_up->activated)
- tb_path_deactivate(tunnel->path_to_up);
-}
+ if (tunnel->activate)
+ tunnel->activate(tunnel, false);
+
+ for (i = 0; i < tunnel->npaths; i++) {
+ if (tunnel->paths[i]->activated)
+ tb_path_deactivate(tunnel->paths[i]);
+ }
+}
diff --git a/drivers/thunderbolt/tunnel.h b/drivers/thunderbolt/tunnel.h
index dff0f27d6ab5..b4e992165e56 100644
--- a/drivers/thunderbolt/tunnel.h
+++ b/drivers/thunderbolt/tunnel.h
@@ -1,8 +1,9 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
- * Thunderbolt Cactus Ridge driver - Tunneling support
+ * Thunderbolt driver - Tunneling support
*
* Copyright (c) 2014 Andreas Noever <[email protected]>
+ * Copyright (C) 2019, Intel Corporation
*/
#ifndef TB_TUNNEL_H_
@@ -10,22 +11,33 @@
#include "tb.h"
-struct tb_pci_tunnel {
+/**
+ * struct tb_tunnel - Tunnel between two ports
+ * @tb: Pointer to the domain
+ * @src_port: Source port of the tunnel
+ * @dst_port: Destination port of the tunnel
+ * @paths: All paths required by the tunnel
+ * @npaths: Number of paths in @paths
+ * @activate: Optional tunnel specific activation/deactivation
+ * @list: Tunnels are linked using this field
+ */
+struct tb_tunnel {
struct tb *tb;
- struct tb_port *up_port;
- struct tb_port *down_port;
- struct tb_path *path_to_up;
- struct tb_path *path_to_down;
+ struct tb_port *src_port;
+ struct tb_port *dst_port;
+ struct tb_path **paths;
+ size_t npaths;
+ int (*activate)(struct tb_tunnel *tunnel, bool activate);
struct list_head list;
};
-struct tb_pci_tunnel *tb_pci_alloc(struct tb *tb, struct tb_port *up,
- struct tb_port *down);
-void tb_pci_free(struct tb_pci_tunnel *tunnel);
-int tb_pci_activate(struct tb_pci_tunnel *tunnel);
-int tb_pci_restart(struct tb_pci_tunnel *tunnel);
-void tb_pci_deactivate(struct tb_pci_tunnel *tunnel);
-bool tb_pci_is_invalid(struct tb_pci_tunnel *tunnel);
+struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
+ struct tb_port *down);
+void tb_tunnel_free(struct tb_tunnel *tunnel);
+int tb_tunnel_activate(struct tb_tunnel *tunnel);
+int tb_tunnel_restart(struct tb_tunnel *tunnel);
+void tb_tunnel_deactivate(struct tb_tunnel *tunnel);
+bool tb_tunnel_is_invalid(struct tb_tunnel *tunnel);
#endif
--
2.20.1
Each port has a separate path configuration space that is used for
finding the next hop (switch) in the path. Hop ID is an index into this
configuration space, and hop IDs 0-7 are reserved.
In order to get the next available hop ID for each direction we provide
two pairs of helper functions that can be used to allocate and release
hop IDs for a given port.
While there remove obsolete TODO comment.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/switch.c | 87 +++++++++++++++++++++++++++++++++++-
drivers/thunderbolt/tb.h | 8 ++++
2 files changed, 94 insertions(+), 1 deletion(-)
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index b20af050ce9a..320f64ebe8b8 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -601,11 +601,88 @@ static int tb_init_port(struct tb_port *port)
tb_dump_port(port->sw->tb, &port->config);
- /* TODO: Read dual link port, DP port and more from EEPROM. */
+ /* Control port does not need Hop ID allocation */
+ if (port->port) {
+ ida_init(&port->in_hopids);
+ ida_init(&port->out_hopids);
+ }
+
return 0;
}
+static int tb_port_alloc_hopid(struct tb_port *port, bool in, int min_hopid,
+ int max_hopid)
+{
+ int port_max_hopid;
+ struct ida *ida;
+
+ if (in) {
+ port_max_hopid = port->config.max_in_hop_id;
+ ida = &port->in_hopids;
+ } else {
+ port_max_hopid = port->config.max_out_hop_id;
+ ida = &port->out_hopids;
+ }
+
+ /* Hop IDs 0-7 are reserved */
+ if (min_hopid < 8)
+ min_hopid = 8;
+
+ if (max_hopid < 0 || max_hopid > port_max_hopid)
+ max_hopid = port_max_hopid;
+
+ return ida_simple_get(ida, min_hopid, max_hopid + 1, GFP_KERNEL);
+}
+
+/**
+ * tb_port_alloc_in_hopid() - Allocate input hop ID from port
+ * @port: Port to allocate hop ID for
+ * @min_hopid: Minimum acceptable input hop ID
+ * @max_hopid: Maximum acceptable input hop ID
+ *
+ * Return: Hop ID between @min_hopid and @max_hopid or negative errno in
+ * case of error.
+ */
+int tb_port_alloc_in_hopid(struct tb_port *port, int min_hopid, int max_hopid)
+{
+ return tb_port_alloc_hopid(port, true, min_hopid, max_hopid);
+}
+
+/**
+ * tb_port_alloc_out_hopid() - Allocate output hop ID from port
+ * @port: Port to allocate hop ID for
+ * @min_hopid: Minimum acceptable output hop ID
+ * @max_hopid: Maximum acceptable output hop ID
+ *
+ * Return: Hop ID between @min_hopid and @max_hopid or negative errno in
+ * case of error.
+ */
+int tb_port_alloc_out_hopid(struct tb_port *port, int min_hopid, int max_hopid)
+{
+ return tb_port_alloc_hopid(port, false, min_hopid, max_hopid);
+}
+
+/**
+ * tb_port_release_in_hopid() - Release allocated input hop ID from port
+ * @port: Port whose hop ID to release
+ * @hopid: Hop ID to release
+ */
+void tb_port_release_in_hopid(struct tb_port *port, int hopid)
+{
+ ida_simple_remove(&port->in_hopids, hopid);
+}
+
+/**
+ * tb_port_release_out_hopid() - Release allocated output hop ID from port
+ * @port: Port whose hop ID to release
+ * @hopid: Hop ID to release
+ */
+void tb_port_release_out_hopid(struct tb_port *port, int hopid)
+{
+ ida_simple_remove(&port->out_hopids, hopid);
+}
+
/**
* tb_pci_port_enable() - Enable PCIe adapter port
* @port: PCIe port to enable
@@ -1080,9 +1157,17 @@ static const struct attribute_group *switch_groups[] = {
static void tb_switch_release(struct device *dev)
{
struct tb_switch *sw = tb_to_switch(dev);
+ int i;
dma_port_free(sw->dma_port);
+ for (i = 1; i <= sw->config.max_port_number; i++) {
+ if (!sw->ports[i].disabled) {
+ ida_destroy(&sw->ports[i].in_hopids);
+ ida_destroy(&sw->ports[i].out_hopids);
+ }
+ }
+
kfree(sw->uuid);
kfree(sw->device_name);
kfree(sw->vendor_name);
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index a13d1cd53bc3..bfa1cee193fd 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -130,6 +130,8 @@ struct tb_switch {
* @dual_link_port: If the switch is connected using two ports, points
* to the other port.
* @link_nr: Is this primary or secondary port on the dual_link.
+ * @in_hopids: Currently allocated input hopids
+ * @out_hopids: Currently allocated output hopids
*/
struct tb_port {
struct tb_regs_port_header config;
@@ -142,6 +144,8 @@ struct tb_port {
bool disabled;
struct tb_port *dual_link_port;
u8 link_nr:1;
+ struct ida in_hopids;
+ struct ida out_hopids;
};
/**
@@ -439,6 +443,10 @@ static inline struct tb_switch *tb_to_switch(struct device *dev)
int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged);
int tb_port_add_nfc_credits(struct tb_port *port, int credits);
int tb_port_clear_counter(struct tb_port *port, int counter);
+int tb_port_alloc_in_hopid(struct tb_port *port, int hopid, int max_hopid);
+void tb_port_release_in_hopid(struct tb_port *port, int hopid);
+int tb_port_alloc_out_hopid(struct tb_port *port, int hopid, int max_hopid);
+void tb_port_release_out_hopid(struct tb_port *port, int hopid);
int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap);
--
2.20.1
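The allocation semantics the patch builds on top of the IDA API can be
sketched in plain userspace C. This is a minimal illustrative model, not
the driver code: `struct hopid_pool` stands in for the per-direction
`struct ida`, and the reserved range and clamping mirror what
`tb_port_alloc_hopid()` does with `min_hopid`/`max_hopid`.

```c
/* Minimal userspace sketch of per-direction hop ID allocation: IDs 0-7
 * are reserved, and each direction keeps its own pool, mirroring the
 * in_hopids/out_hopids IDA pair added to struct tb_port. */
#define HOPID_RESERVED 8
#define HOPID_MAX      64   /* illustrative; real ports report max_in/out_hop_id */

struct hopid_pool {
	unsigned char used[HOPID_MAX];
};

/* Returns the first free ID in [min, max], clamped to the valid range,
 * or -1 if nothing is free (the kernel returns a negative errno here). */
static int hopid_alloc(struct hopid_pool *p, int min, int max)
{
	int i;

	if (min < HOPID_RESERVED)
		min = HOPID_RESERVED;	/* hop IDs 0-7 are reserved */
	if (max < 0 || max >= HOPID_MAX)
		max = HOPID_MAX - 1;	/* -1 means "port maximum" */

	for (i = min; i <= max; i++) {
		if (!p->used[i]) {
			p->used[i] = 1;
			return i;
		}
	}
	return -1;
}

static void hopid_release(struct hopid_pool *p, int id)
{
	p->used[id] = 0;
}
```

A released ID becomes available again for the next allocation, which is
what lets tunnels be torn down and re-established on the same port.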
We run all XDomain requests during discovery in tb->wq, and since it
only runs one work at a time, sending a reply back to the other domain
may be delayed too much when an XDomain discovery request is already
running.
To make sure we can send the reply to the other domain as soon as
possible, run tb_xdp_handle_request() in the system workqueue instead.
Since the device can be hot-removed in the middle, we need to make sure
the domain structure is still around when the function runs, so increase
the reference count before scheduling the reply work.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/tb.h | 7 +++++++
drivers/thunderbolt/xdomain.c | 6 ++++--
2 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index e06e5a944998..7e155eed1fee 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -410,6 +410,13 @@ int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd);
int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd);
int tb_domain_disconnect_all_paths(struct tb *tb);
+static inline struct tb *tb_domain_get(struct tb *tb)
+{
+ if (tb)
+ get_device(&tb->dev);
+ return tb;
+}
+
static inline void tb_domain_put(struct tb *tb)
{
put_device(&tb->dev);
diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index e27dd8beb94b..59789bdd93ac 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -524,6 +524,8 @@ static void tb_xdp_handle_request(struct work_struct *work)
out:
kfree(xw->pkg);
kfree(xw);
+
+ tb_domain_put(tb);
}
static void
@@ -538,9 +540,9 @@ tb_xdp_schedule_request(struct tb *tb, const struct tb_xdp_header *hdr,
INIT_WORK(&xw->work, tb_xdp_handle_request);
xw->pkg = kmemdup(hdr, size, GFP_KERNEL);
- xw->tb = tb;
+ xw->tb = tb_domain_get(tb);
- queue_work(tb->wq, &xw->work);
+ schedule_work(&xw->work);
}
/**
--
2.20.1
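The lifetime rule the patch enforces (take a reference before scheduling
deferred work, drop it at the end of the handler) can be modeled in a
few lines of userspace C. This is an illustrative sketch only; the names
below are hypothetical stand-ins, not the driver's API, and the kernel
of course uses `get_device()`/`put_device()` underneath.

```c
/* Sketch of reference-counting around deferred work: the object cannot
 * be freed while a scheduled work item still holds a reference. */
struct domain {
	int refcount;
	int freed;	/* stand-in for the final release actually running */
};

static struct domain *domain_get(struct domain *d)
{
	if (d)
		d->refcount++;
	return d;
}

static void domain_put(struct domain *d)
{
	if (--d->refcount == 0)
		d->freed = 1;
}

/* The deferred handler: does its work, then drops the reference that
 * was taken when the work was scheduled. */
static void handle_request(struct domain *d)
{
	/* ... send the XDomain reply here ... */
	domain_put(d);
}
```

The key property is that the owner dropping its own reference after the
work is queued does not free the object; the last put happens in the
handler itself.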
Light Ridge and Eagle Ridge both need TMU access enabled before port
space can be fully accessed, so make sure it is enabled on those
controllers. This allows us to get rid of the offset quirk in
tb_port_find_cap().
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/cap.c | 74 ++++++++++++++++++++++++++++++---------
1 file changed, 57 insertions(+), 17 deletions(-)
diff --git a/drivers/thunderbolt/cap.c b/drivers/thunderbolt/cap.c
index 9553305c63ea..0de548bda663 100644
--- a/drivers/thunderbolt/cap.c
+++ b/drivers/thunderbolt/cap.c
@@ -13,6 +13,7 @@
#define CAP_OFFSET_MAX 0xff
#define VSE_CAP_OFFSET_MAX 0xffff
+#define TMU_ACCESS_EN BIT(20)
struct tb_cap_any {
union {
@@ -22,28 +23,43 @@ struct tb_cap_any {
};
} __packed;
-/**
- * tb_port_find_cap() - Find port capability
- * @port: Port to find the capability for
- * @cap: Capability to look
- *
- * Returns offset to start of capability or %-ENOENT if no such
- * capability was found. Negative errno is returned if there was an
- * error.
- */
-int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap)
+static int tb_port_enable_tmu(struct tb_port *port, bool enable)
{
- u32 offset;
+ struct tb_switch *sw = port->sw;
+ u32 value, offset;
+ int ret;
/*
- * DP out adapters claim to implement TMU capability but in
- * reality they do not so we hard code the adapter specific
- * capability offset here.
+ * Legacy devices need to have TMU access enabled before port
+ * space can be fully accessed.
*/
- if (port->config.type == TB_TYPE_DP_HDMI_OUT)
- offset = 0x39;
+ switch (sw->config.device_id) {
+ case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
+ offset = 0x26;
+ break;
+ case PCI_DEVICE_ID_INTEL_EAGLE_RIDGE:
+ offset = 0x2a;
+ break;
+
+ default:
+ return 0;
+ }
+
+ ret = tb_sw_read(sw, &value, TB_CFG_SWITCH, offset, 1);
+ if (ret)
+ return ret;
+
+ if (enable)
+ value |= TMU_ACCESS_EN;
else
- offset = 0x1;
+ value &= ~TMU_ACCESS_EN;
+
+ return tb_sw_write(sw, &value, TB_CFG_SWITCH, offset, 1);
+}
+
+static int __tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap)
+{
+ u32 offset = 1;
do {
struct tb_cap_any header;
@@ -62,6 +78,30 @@ int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap)
return -ENOENT;
}
+/**
+ * tb_port_find_cap() - Find port capability
+ * @port: Port to find the capability for
+ * @cap: Capability to look for
+ *
+ * Returns offset to start of capability or %-ENOENT if no such
+ * capability was found. Negative errno is returned if there was an
+ * error.
+ */
+int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap)
+{
+ int ret;
+
+ ret = tb_port_enable_tmu(port, true);
+ if (ret)
+ return ret;
+
+ ret = __tb_port_find_cap(port, cap);
+
+ tb_port_enable_tmu(port, false);
+
+ return ret;
+}
+
static int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap)
{
int offset = sw->config.first_cap_offset;
--
2.20.1
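The shape of the new tb_port_find_cap() is a classic enable/operate/
disable bracket: turn the TMU access bit on, walk the capabilities, and
turn it off again even when the walk fails. Below is a hedged userspace
sketch of that pattern; the functions are illustrative stand-ins, not
the driver's API.

```c
/* Sketch of the bracket pattern tb_port_find_cap() uses after this
 * patch: enable, operate, then unconditionally disable again. */
static int tmu_enabled;

static int set_tmu(int enable)
{
	tmu_enabled = enable;
	return 0;		/* the real write can fail; see below */
}

static int walk_caps(int want)
{
	/* pretend the walk only succeeds while TMU access is on */
	return tmu_enabled ? want : -1;
}

static int find_cap(int want)
{
	int ret;

	ret = set_tmu(1);
	if (ret)
		return ret;	/* could not enable: bail out early */

	ret = walk_caps(want);	/* may fail; we still disable below */

	set_tmu(0);		/* best effort, result intentionally ignored */
	return ret;
}
```

Note that the disable step deliberately does not override the walk's
return value, matching how the patch ignores the result of the second
tb_port_enable_tmu() call.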
The adapter specific capability is only present when the port actually
holds an adapter. Instead of finding it on demand every time, we read
the offset just once when the port is initialized.
While there we update the struct port documentation to follow kernel-doc
format.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/switch.c | 4 ++++
drivers/thunderbolt/tb.c | 8 ++++----
drivers/thunderbolt/tb.h | 2 ++
drivers/thunderbolt/tunnel_pci.c | 9 +++------
4 files changed, 13 insertions(+), 10 deletions(-)
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 1eee2502b5ba..ec3c274ff278 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -593,6 +593,10 @@ static int tb_init_port(struct tb_port *port)
port->cap_phy = cap;
else
tb_port_WARN(port, "non switch port without a PHY\n");
+ } else if (port->port != 0) {
+ cap = tb_port_find_cap(port, TB_PORT_CAP_ADAP);
+ if (cap > 0)
+ port->cap_adap = cap;
}
tb_dump_port(port->sw->tb, &port->config);
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 30e02c716f6c..7fd88b41d082 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -151,8 +151,8 @@ static struct tb_port *tb_find_unused_down_port(struct tb_switch *sw)
continue;
if (sw->ports[i].config.type != TB_TYPE_PCIE_DOWN)
continue;
- cap = tb_port_find_cap(&sw->ports[i], TB_PORT_CAP_ADAP);
- if (cap < 0)
+ cap = sw->ports[i].cap_adap;
+ if (!cap)
continue;
res = tb_port_read(&sw->ports[i], &data, TB_CFG_PORT, cap, 1);
if (res < 0)
@@ -197,8 +197,8 @@ static void tb_activate_pcie_devices(struct tb *tb)
}
/* check whether port is already activated */
- cap = tb_port_find_cap(up_port, TB_PORT_CAP_ADAP);
- if (cap < 0)
+ cap = up_port->cap_adap;
+ if (!cap)
continue;
if (tb_port_read(up_port, &data, TB_CFG_PORT, cap, 1))
continue;
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 3160169389cc..bab451ab31ff 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -124,6 +124,7 @@ struct tb_switch {
* @remote: Remote port (%NULL if not connected)
* @xdomain: Remote host (%NULL if not connected)
* @cap_phy: Offset, zero if not found
+ * @cap_adap: Offset of the adapter specific capability (%0 if not present)
* @port: Port number on switch
* @disabled: Disabled by eeprom
* @dual_link_port: If the switch is connected using two ports, points
@@ -136,6 +137,7 @@ struct tb_port {
struct tb_port *remote;
struct tb_xdomain *xdomain;
int cap_phy;
+ int cap_adap;
u8 port;
bool disabled;
struct tb_port *dual_link_port;
diff --git a/drivers/thunderbolt/tunnel_pci.c b/drivers/thunderbolt/tunnel_pci.c
index 0637537ea53f..2de4edccbd6d 100644
--- a/drivers/thunderbolt/tunnel_pci.c
+++ b/drivers/thunderbolt/tunnel_pci.c
@@ -148,12 +148,9 @@ bool tb_pci_is_invalid(struct tb_pci_tunnel *tunnel)
static int tb_pci_port_active(struct tb_port *port, bool active)
{
u32 word = active ? 0x80000000 : 0x0;
- int cap = tb_port_find_cap(port, TB_PORT_CAP_ADAP);
- if (cap < 0) {
- tb_port_warn(port, "TB_PORT_CAP_ADAP not found: %d\n", cap);
- return cap;
- }
- return tb_port_write(port, &word, TB_CFG_PORT, cap, 1);
+ if (!port->cap_adap)
+ return -ENXIO;
+ return tb_port_write(port, &word, TB_CFG_PORT, port->cap_adap, 1);
}
/**
--
2.20.1
We will be adding more link controller functionality in subsequent
patches and it does not make sense to keep all that in switch.c, so
separate LC functionality into its own file.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/Makefile | 2 +-
drivers/thunderbolt/lc.c | 21 +++++++++++++++++++++
drivers/thunderbolt/switch.c | 12 +++++++-----
drivers/thunderbolt/tb.h | 3 +++
drivers/thunderbolt/tb_regs.h | 2 ++
5 files changed, 34 insertions(+), 6 deletions(-)
create mode 100644 drivers/thunderbolt/lc.c
diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile
index f2f0de27252b..8531f15d3b3c 100644
--- a/drivers/thunderbolt/Makefile
+++ b/drivers/thunderbolt/Makefile
@@ -1,3 +1,3 @@
obj-${CONFIG_THUNDERBOLT} := thunderbolt.o
thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o tunnel_pci.o eeprom.o
-thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o
+thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o
diff --git a/drivers/thunderbolt/lc.c b/drivers/thunderbolt/lc.c
new file mode 100644
index 000000000000..2134a55ed837
--- /dev/null
+++ b/drivers/thunderbolt/lc.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Thunderbolt link controller support
+ *
+ * Copyright (C) 2019, Intel Corporation
+ * Author: Mika Westerberg <[email protected]>
+ */
+
+#include "tb.h"
+
+/**
+ * tb_lc_read_uuid() - Read switch UUID from link controller common register
+ * @sw: Switch whose UUID is read
+ * @uuid: UUID is placed here
+ */
+int tb_lc_read_uuid(struct tb_switch *sw, u32 *uuid)
+{
+ if (!sw->cap_lc)
+ return -EINVAL;
+ return tb_sw_read(sw, uuid, TB_CFG_SWITCH, sw->cap_lc + TB_LC_FUSE, 4);
+}
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index a90d21abed88..bd96eebd8248 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -1207,6 +1207,10 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
}
sw->cap_plug_events = cap;
+ cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER);
+ if (cap > 0)
+ sw->cap_lc = cap;
+
/* Root switch is always authorized */
if (!route)
sw->authorized = true;
@@ -1303,7 +1307,7 @@ int tb_switch_configure(struct tb_switch *sw)
static void tb_switch_set_uuid(struct tb_switch *sw)
{
u32 uuid[4];
- int cap;
+ int ret;
if (sw->uuid)
return;
@@ -1312,10 +1316,8 @@ static void tb_switch_set_uuid(struct tb_switch *sw)
* The newer controllers include fused UUID as part of link
* controller specific registers
*/
- cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER);
- if (cap > 0) {
- tb_sw_read(sw, uuid, TB_CFG_SWITCH, cap + 3, 4);
- } else {
+ ret = tb_lc_read_uuid(sw, uuid);
+ if (ret) {
/*
* ICM generates UUID based on UID and fills the upper
* two words with ones. This is not strictly following
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 5faec5a8eb98..530464b25dcb 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -63,6 +63,7 @@ struct tb_switch_nvm {
* @device_name: Name of the device (or %NULL if not known)
* @generation: Switch Thunderbolt generation
* @cap_plug_events: Offset to the plug events capability (%0 if not found)
+ * @cap_lc: Offset to the link controller capability (%0 if not found)
* @is_unplugged: The switch is going away
* @drom: DROM of the switch (%NULL if not found)
* @nvm: Pointer to the NVM if the switch has one (%NULL otherwise)
@@ -98,6 +99,7 @@ struct tb_switch {
const char *device_name;
unsigned int generation;
int cap_plug_events;
+ int cap_lc;
bool is_unplugged;
u8 *drom;
struct tb_switch_nvm *nvm;
@@ -448,6 +450,7 @@ bool tb_path_is_invalid(struct tb_path *path);
int tb_drom_read(struct tb_switch *sw);
int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid);
+int tb_lc_read_uuid(struct tb_switch *sw, u32 *uuid);
static inline int tb_route_length(u64 route)
{
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index 6f1ff04ee195..4895ae9f0b40 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -237,5 +237,7 @@ struct tb_regs_hop {
u32 unknown3:4; /* set to zero */
} __packed;
+/* Common link controller registers */
+#define TB_LC_FUSE 0x03
#endif
--
2.20.1
We need to wait until all buffers have been drained before the path can
be considered disabled. Do this for every hop in a path. Also, if the
switch is physically disconnected, do not bother disabling it anymore
(it is not present anyway).
This adds another bit field to struct tb_regs_hop even though we are
trying to get rid of them, but we can clean them up another day.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/path.c | 44 ++++++++++++++++++++++++++++++++---
drivers/thunderbolt/tb_regs.h | 3 ++-
2 files changed, 43 insertions(+), 4 deletions(-)
diff --git a/drivers/thunderbolt/path.c b/drivers/thunderbolt/path.c
index a11956522bac..48cb15ff4446 100644
--- a/drivers/thunderbolt/path.c
+++ b/drivers/thunderbolt/path.c
@@ -7,6 +7,8 @@
#include <linux/slab.h>
#include <linux/errno.h>
+#include <linux/delay.h>
+#include <linux/ktime.h>
#include "tb.h"
@@ -74,13 +76,49 @@ static void __tb_path_deallocate_nfc(struct tb_path *path, int first_hop)
}
}
+static int __tb_path_deactivate_hop(struct tb_port *port, int hop_index)
+{
+ struct tb_regs_hop hop;
+ ktime_t timeout;
+ int ret;
+
+ if (port->sw->is_unplugged)
+ return 0;
+
+ /* Disable the path */
+ ret = tb_port_read(port, &hop, TB_CFG_HOPS, 2 * hop_index, 2);
+ if (ret)
+ return ret;
+
+ hop.enable = 0;
+
+ ret = tb_port_write(port, &hop, TB_CFG_HOPS, 2 * hop_index, 2);
+ if (ret)
+ return ret;
+
+ /* Wait until it is drained */
+ timeout = ktime_add_ms(ktime_get(), 500);
+ do {
+ ret = tb_port_read(port, &hop, TB_CFG_HOPS, 2 * hop_index, 2);
+ if (ret)
+ return ret;
+
+ if (!hop.pending)
+ return 0;
+
+ usleep_range(10, 20);
+ } while (ktime_before(ktime_get(), timeout));
+
+ return -ETIMEDOUT;
+}
+
static void __tb_path_deactivate_hops(struct tb_path *path, int first_hop)
{
int i, res;
- struct tb_regs_hop hop = { };
+
for (i = first_hop; i < path->path_length; i++) {
- res = tb_port_write(path->hops[i].in_port, &hop, TB_CFG_HOPS,
- 2 * path->hops[i].in_hop_index, 2);
+ res = __tb_path_deactivate_hop(path->hops[i].in_port,
+ path->hops[i].in_hop_index);
if (res)
tb_port_warn(path->hops[i].in_port,
"hop deactivation failed for hop %d, index %d\n",
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index 1ab6e0fb31c0..82ac4ec8757f 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -234,7 +234,8 @@ struct tb_regs_hop {
bool egress_fc:1;
bool ingress_shared_buffer:1;
bool egress_shared_buffer:1;
- u32 unknown3:4; /* set to zero */
+ bool pending:1;
+ u32 unknown3:3; /* set to zero */
} __packed;
/* Common link controller registers */
--
2.20.1
Now that we can allocate hop IDs per port on a path, we can take
advantage of this and create tunnels covering longer paths than just
between two adjacent switches. PCIe actually does not need this as it is
always a daisy chain between two adjacent switches, but this way we do
not need to hard-code the creation of the tunnel.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/path.c | 94 ++++++++++++++++++++++++++++++++++--
drivers/thunderbolt/tb.h | 4 +-
drivers/thunderbolt/tunnel.c | 54 +++++----------------
3 files changed, 106 insertions(+), 46 deletions(-)
diff --git a/drivers/thunderbolt/path.c b/drivers/thunderbolt/path.c
index 48cb15ff4446..122e6a1daf34 100644
--- a/drivers/thunderbolt/path.c
+++ b/drivers/thunderbolt/path.c
@@ -31,23 +31,97 @@ static void tb_dump_hop(struct tb_port *port, struct tb_regs_hop *hop)
}
/**
- * tb_path_alloc() - allocate a thunderbolt path
+ * tb_path_alloc() - allocate a thunderbolt path between two ports
+ * @tb: Domain pointer
+ * @src: Source port of the path
+ * @dst: Destination port of the path
+ * @start_hopid: Hop ID used for the first ingress port in the path
+ * @end_hopid: Hop ID used for the last egress port in the path (%-1 for
+ * automatic allocation)
+ * @link_nr: Preferred link if there are dual links on the path
+ *
+ * Creates path between two ports starting with given @start_hopid. Reserves
+ * hop IDs for each port (they can be different from @start_hopid depending on
+ * how many hop IDs each port already have reserved). If there are dual
+ * links on the path, prioritizes using @link_nr.
*
* Return: Returns a tb_path on success or NULL on failure.
*/
-struct tb_path *tb_path_alloc(struct tb *tb, int num_hops)
+struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src,
+ struct tb_port *dst, int start_hopid,
+ int end_hopid, int link_nr)
{
- struct tb_path *path = kzalloc(sizeof(*path), GFP_KERNEL);
+ struct tb_port *in_port, *out_port;
+ int in_hopid, out_hopid;
+ struct tb_path *path;
+ size_t num_hops;
+ int i, ret;
+
+ path = kzalloc(sizeof(*path), GFP_KERNEL);
if (!path)
return NULL;
+
+ i = 0;
+ tb_for_each_port(in_port, src, dst)
+ i++;
+
+ /* Each hop takes two ports */
+ num_hops = i / 2;
+
path->hops = kcalloc(num_hops, sizeof(*path->hops), GFP_KERNEL);
if (!path->hops) {
kfree(path);
return NULL;
}
+
+ in_hopid = start_hopid;
+ out_port = NULL;
+ out_hopid = -1;
+
+ for (i = 0; i < num_hops; i++) {
+ in_port = tb_port_get_next(src, dst, out_port);
+
+ if (in_port->dual_link_port && in_port->link_nr != link_nr)
+ in_port = in_port->dual_link_port;
+
+ ret = tb_port_alloc_in_hopid(in_port, in_hopid, -1);
+ if (ret < 0)
+ goto err;
+ in_hopid = ret;
+
+ out_port = tb_port_get_next(src, dst, in_port);
+ if (!out_port)
+ goto err;
+
+ if (out_port->dual_link_port && out_port->link_nr != link_nr)
+ out_port = out_port->dual_link_port;
+
+ if (end_hopid && i == num_hops - 1)
+ ret = tb_port_alloc_out_hopid(out_port, end_hopid,
+ end_hopid);
+ else
+ ret = tb_port_alloc_out_hopid(out_port, -1, -1);
+
+ if (ret < 0)
+ goto err;
+ out_hopid = ret;
+
+ path->hops[i].in_hop_index = in_hopid;
+ path->hops[i].in_port = in_port;
+ path->hops[i].in_counter_index = -1;
+ path->hops[i].out_port = out_port;
+ path->hops[i].next_hop_index = out_hopid;
+
+ in_hopid = out_hopid;
+ }
+
path->tb = tb;
path->path_length = num_hops;
return path;
+
+err:
+ tb_path_free(path);
+ return NULL;
}
/**
@@ -55,10 +129,24 @@ struct tb_path *tb_path_alloc(struct tb *tb, int num_hops)
*/
void tb_path_free(struct tb_path *path)
{
+ int i;
+
if (path->activated) {
tb_WARN(path->tb, "trying to free an activated path\n")
return;
}
+
+ for (i = 0; i < path->path_length; i++) {
+ const struct tb_path_hop *hop = &path->hops[i];
+
+ if (hop->in_port)
+ tb_port_release_in_hopid(hop->in_port,
+ hop->in_hop_index);
+ if (hop->out_port)
+ tb_port_release_out_hopid(hop->out_port,
+ hop->next_hop_index);
+ }
+
kfree(path->hops);
kfree(path);
}
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 683725915ff7..0e4d9088faf6 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -459,7 +459,9 @@ int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap);
int tb_pci_port_enable(struct tb_port *port, bool enable);
-struct tb_path *tb_path_alloc(struct tb *tb, int num_hops);
+struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src,
+ struct tb_port *dst, int start_hopid,
+ int end_hopid, int link_nr);
void tb_path_free(struct tb_path *path);
int tb_path_activate(struct tb_path *path);
void tb_path_deactivate(struct tb_path *path);
diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index 20ce28276f7a..cdf9ca1c043e 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -12,6 +12,9 @@
#include "tunnel.h"
#include "tb.h"
+/* PCIe adapters always use hop ID 8 for both directions */
+#define TB_PCI_HOPID 8
+
#define TB_PCI_PATH_DOWN 0
#define TB_PCI_PATH_UP 1
@@ -86,21 +89,13 @@ static void tb_pci_init_path(struct tb_path *path)
* Allocate a PCI tunnel. The ports must be of type TB_TYPE_PCIE_UP and
* TB_TYPE_PCIE_DOWN.
*
- * Currently only paths consisting of two hops are supported (that is the
- * ports must be on "adjacent" switches).
- *
- * The paths are hard-coded to use hop 8 (the only working hop id available on
- * my thunderbolt devices). Therefore at most ONE path per device may be
- * activated.
- *
* Return: Returns a tb_tunnel on success or NULL on failure.
*/
struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
struct tb_port *down)
{
- struct tb_path *path_to_up;
- struct tb_path *path_to_down;
struct tb_tunnel *tunnel;
+ struct tb_path *path;
tunnel = tb_tunnel_alloc(tb, 2);
if (!tunnel)
@@ -110,46 +105,21 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
tunnel->src_port = down;
tunnel->dst_port = up;
- path_to_up = tb_path_alloc(tb, 2);
- if (!path_to_up) {
+ path = tb_path_alloc(tb, down, up, TB_PCI_HOPID, -1, 0);
+ if (!path) {
tb_tunnel_free(tunnel);
return NULL;
}
- tunnel->paths[TB_PCI_PATH_UP] = path_to_up;
+ tb_pci_init_path(path);
+ tunnel->paths[TB_PCI_PATH_UP] = path;
- path_to_down = tb_path_alloc(tb, 2);
- if (!path_to_down) {
+ path = tb_path_alloc(tb, up, down, TB_PCI_HOPID, -1, 0);
+ if (!path) {
tb_tunnel_free(tunnel);
return NULL;
}
- tunnel->paths[TB_PCI_PATH_DOWN] = path_to_down;
-
- tb_pci_init_path(path_to_up);
- tb_pci_init_path(path_to_down);
-
- path_to_up->hops[0].in_port = down;
- path_to_up->hops[0].in_hop_index = 8;
- path_to_up->hops[0].in_counter_index = -1;
- path_to_up->hops[0].out_port = tb_upstream_port(up->sw)->remote;
- path_to_up->hops[0].next_hop_index = 8;
-
- path_to_up->hops[1].in_port = tb_upstream_port(up->sw);
- path_to_up->hops[1].in_hop_index = 8;
- path_to_up->hops[1].in_counter_index = -1;
- path_to_up->hops[1].out_port = up;
- path_to_up->hops[1].next_hop_index = 8;
-
- path_to_down->hops[0].in_port = up;
- path_to_down->hops[0].in_hop_index = 8;
- path_to_down->hops[0].in_counter_index = -1;
- path_to_down->hops[0].out_port = tb_upstream_port(up->sw);
- path_to_down->hops[0].next_hop_index = 8;
-
- path_to_down->hops[1].in_port = tb_upstream_port(up->sw)->remote;
- path_to_down->hops[1].in_hop_index = 8;
- path_to_down->hops[1].in_counter_index = -1;
- path_to_down->hops[1].out_port = down;
- path_to_down->hops[1].next_hop_index = 8;
+ tb_pci_init_path(path);
+ tunnel->paths[TB_PCI_PATH_DOWN] = path;
return tunnel;
}
--
2.20.1
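The core of the new tb_path_alloc() loop is the chaining invariant: each
hop's output hop ID becomes the input hop ID programmed at the next
ingress port. A reduced sketch of just that invariant, with `alloc_id()`
as a hypothetical stand-in for the per-port IDA allocation (here it
simply hands out increasing IDs from the non-reserved range):

```c
/* Sketch of the hop chaining in tb_path_alloc(): out_hopid of hop i is
 * the in_hop_index of hop i + 1, so the route can be programmed one hop
 * table entry at a time. */
#define MAX_HOPS 6

struct hop {
	int in_hop_index;	/* index into this port's hop table */
	int next_hop_index;	/* hop ID used on the next ingress port */
};

static int next_free_id = 8;	/* hop IDs 0-7 are reserved */

static int alloc_id(void)
{
	return next_free_id++;
}

/* Builds a chain of num_hops entries starting from start_hopid. */
static void build_path(struct hop *hops, int num_hops, int start_hopid)
{
	int in_hopid = start_hopid;
	int i;

	for (i = 0; i < num_hops; i++) {
		hops[i].in_hop_index = in_hopid;
		hops[i].next_hop_index = alloc_id();
		in_hopid = hops[i].next_hop_index;	/* chain to next hop */
	}
}
```

In the real driver each port has its own allocator, so the IDs along a
path can differ from @start_hopid; only the chaining between adjacent
hops must hold.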
Light Ridge has an issue where the read data is not cleared when the
next capability pointer location in port config space is read. Reading
capabilities one after another is fine, so the only thing we need to do
is issue a dummy read after tb_port_find_cap() is finished, avoiding
stale data on the next read.
Signed-off-by: Mika Westerberg <[email protected]>
---
drivers/thunderbolt/cap.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/drivers/thunderbolt/cap.c b/drivers/thunderbolt/cap.c
index 0de548bda663..8aceb0d97a63 100644
--- a/drivers/thunderbolt/cap.c
+++ b/drivers/thunderbolt/cap.c
@@ -57,6 +57,21 @@ static int tb_port_enable_tmu(struct tb_port *port, bool enable)
return tb_sw_write(sw, &value, TB_CFG_SWITCH, offset, 1);
}
+static void tb_port_dummy_read(struct tb_port *port)
+{
+ /*
+ * When reading from next capability pointer location in port
+ * config space the read data is not cleared on LR. To avoid
+ * reading stale data on next read perform one dummy read after
+ * port capabilities are walked.
+ */
+ if (port->sw->config.device_id == PCI_DEVICE_ID_INTEL_LIGHT_RIDGE) {
+ u32 dummy;
+
+ tb_port_read(port, &dummy, TB_CFG_PORT, 0, 1);
+ }
+}
+
static int __tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap)
{
u32 offset = 1;
@@ -97,6 +112,7 @@ int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap)
ret = __tb_port_find_cap(port, cap);
+ tb_port_dummy_read(port);
tb_port_enable_tmu(port, false);
return ret;
--
2.20.1
On Wed, Feb 06, 2019 at 04:17:10PM +0300, Mika Westerberg wrote:
> Hi,
>
> Software connection manager (drivers/thunderbolt/tb.c) is used on older
> Apple hardware with Light Ridge, Cactus Ridge or Falcon Ridge controllers
> to create PCIe tunnels when a Thunderbolt device is connected. Currently
> only one PCIe tunnel is supported. On newer Alpine Ridge based Apple
> systems the driver starts the firmware which then takes care creating
> tunnels.
>
> This series improves the software connection manager so that it will
> support:
>
> - Full PCIe daisy chains (up to 6 devices)
> - Display Port tunneling
> - P2P networking
>
> We also add support for Titan Ridge based Apple systems where we can use
> the same flows than with Alpine Ridge to start the firmware.
>
> Note in order to prevent possible DMA attacks on these systems we should
> make sure IOMMU is enabled. One option is to force dmar_platform_optin()
> return true on Apple hardware. However, it is not part of this series. I'm
> trusting people using Linux on such systems to take care of it. :-)
>
> Previous version of the patch series can be viewed here:
>
> https://lkml.org/lkml/2019/1/29/924
From the code style and some other things we discussed off-list, the
series looks good to me.
Reviewed-by: Andy Shevchenko <[email protected]>
Though minor comments per individual patches are provided.
>
> Changes from v1:
>
> * Added ACK from David
>
> * Add constant (TMU_ACCESS_EN) for BIT(20) when TMU access is enabled. We
> keep it in cap.c close to the LR/ER workaround. Also we enable/disable
> only during capability walk. If it turns we need to have it enabled
> elsewhere we can move it to switch.c and enable just once during
> switch enumeration.
>
> * Use 0 to mean no cap_adap instead of negative value. This follows
> cap_phy.
>
> * Use correct PCI IDs (_BRIDGE) in the last patch where we start firmware
> on Titan Ridge. It wrongly used NHI PCI IDs in v1.
>
> Mika Westerberg (28):
> net: thunderbolt: Unregister ThunderboltIP protocol handler when suspending
> thunderbolt: Do not allocate switch if depth is greater than 6
> thunderbolt: Enable TMU access when accessing port space on legacy devices
> thunderbolt: Add dummy read after port capability list walk on Light Ridge
> thunderbolt: Move LC specific functionality into a separate file
> thunderbolt: Configure lanes when switch is initialized
> thunderbolt: Set sleep bit when suspending switch
> thunderbolt: Properly disable path
> thunderbolt: Cache adapter specific capability offset into struct port
> thunderbolt: Rename tunnel_pci to tunnel
> thunderbolt: Generalize tunnel creation functionality
> thunderbolt: Add functions for allocating and releasing hop IDs
> thunderbolt: Add helper function to iterate from one port to another
> thunderbolt: Extend tunnel creation to more than 2 adjacent switches
> thunderbolt: Deactivate all paths before restarting them
> thunderbolt: Discover preboot PCIe paths the boot firmware established
> thunderbolt: Add support for full PCIe daisy chains
> thunderbolt: Scan only valid NULL adapter ports in hotplug
> thunderbolt: Generalize port finding routines to support all port types
> thunderbolt: Rework NFC credits handling
> thunderbolt: Add support for Display Port tunnels
> thunderbolt: Run tb_xdp_handle_request() in system workqueue
> thunderbolt: Add XDomain UUID exchange support
> thunderbolt: Add support for DMA tunnels
> thunderbolt: Make tb_switch_alloc() return ERR_PTR()
> thunderbolt: Add support for XDomain connections
> thunderbolt: Make rest of the logging to happen at debug level
> thunderbolt: Start firmware on Titan Ridge Apple systems
>
> drivers/net/thunderbolt.c | 3 +
> drivers/thunderbolt/Makefile | 4 +-
> drivers/thunderbolt/cap.c | 90 +++-
> drivers/thunderbolt/ctl.c | 2 +-
> drivers/thunderbolt/icm.c | 15 +-
> drivers/thunderbolt/lc.c | 179 ++++++++
> drivers/thunderbolt/path.c | 326 +++++++++++++--
> drivers/thunderbolt/switch.c | 466 ++++++++++++++++++---
> drivers/thunderbolt/tb.c | 529 ++++++++++++++++++------
> drivers/thunderbolt/tb.h | 67 ++-
> drivers/thunderbolt/tb_msgs.h | 11 +
> drivers/thunderbolt/tb_regs.h | 50 ++-
> drivers/thunderbolt/tunnel.c | 681 +++++++++++++++++++++++++++++++
> drivers/thunderbolt/tunnel.h | 75 ++++
> drivers/thunderbolt/tunnel_pci.c | 226 ----------
> drivers/thunderbolt/tunnel_pci.h | 31 --
> drivers/thunderbolt/xdomain.c | 142 ++++++-
> include/linux/thunderbolt.h | 8 +
> 18 files changed, 2389 insertions(+), 516 deletions(-)
> create mode 100644 drivers/thunderbolt/lc.c
> create mode 100644 drivers/thunderbolt/tunnel.c
> create mode 100644 drivers/thunderbolt/tunnel.h
> delete mode 100644 drivers/thunderbolt/tunnel_pci.c
> delete mode 100644 drivers/thunderbolt/tunnel_pci.h
>
> --
> 2.20.1
>
--
With Best Regards,
Andy Shevchenko
On Wed, Feb 06, 2019 at 04:17:31PM +0300, Mika Westerberg wrote:
> Display Port tunnels are somewhat more complex than PCIe tunnels as they
> require 3 tunnels (AUX Rx/Tx and Video). In addition we are not
> supposed to create the tunnels immediately when a DP OUT is enumerated.
> Instead we need to wait until we get a hotplug event to that adapter port
> or check if the port has HPD set before tunnels can be established. This
> adds Display Port tunneling support to the software connection manager.
> +static int tb_tunnel_dp(struct tb *tb, struct tb_port *out)
> +{
> + struct tb_cm *tcm = tb_priv(tb);
> + struct tb_switch *sw = out->sw;
> + struct tb_tunnel *tunnel;
> + struct tb_port *in;
> +
> + if (tb_port_is_enabled(out))
> + return 0;
> +
> + do {
> + sw = tb_to_switch(sw->dev.parent);
> + if (!sw)
> + return 0;
> + in = tb_find_unused_port(sw, TB_TYPE_DP_HDMI_IN);
> + } while (!in);
> +
> + tunnel = tb_tunnel_alloc_dp(tb, in, out);
> + if (!tunnel) {
> + tb_port_dbg(out, "DP tunnel allocation failed\n");
> + return -EIO;
In the same way as you did for XDomains, it makes sense to return -ENOMEM here.
> + }
> +
> + if (tb_tunnel_activate(tunnel)) {
> + tb_port_info(out, "DP tunnel activation failed, aborting\n");
> + tb_tunnel_free(tunnel);
> + return -EIO;
> + }
> +
> + list_add_tail(&tunnel->list, &tcm->tunnel_list);
> + return 0;
> +}
--
With Best Regards,
Andy Shevchenko
On Wed, Feb 06, 2019 at 04:17:23PM +0300, Mika Westerberg wrote:
> We need to be able to walk from one port to another when we are creating
> paths where there are multiple switches between two ports. For this
> reason introduce a new function tb_port_get_next() and a new macro
> tb_for_each_port().
> +struct tb_port *tb_port_get_next(struct tb_port *start, struct tb_port *end,
> + struct tb_port *prev)
> +{
> + struct tb_port *port, *next;
> +
> + if (!prev)
> + return start;
> +
> + if (prev->sw == end->sw) {
> + if (prev != end)
> + return end;
> + return NULL;
I would prefer to see the same pattern you used below, i.e. when we have
a "bail out" condition, check for it:
if (prev == end)
return NULL;
return end;
> + }
> +
> + /* Switch back to use primary links for walking */
> + if (prev->dual_link_port && prev->link_nr)
> + port = prev->dual_link_port;
> + else
> + port = prev;
> +
> + if (start->sw->config.depth < end->sw->config.depth) {
> + if (port->remote &&
> + port->remote->sw->config.depth > port->sw->config.depth)
> + next = port->remote;
> + else
> + next = tb_port_at(tb_route(end->sw), port->sw);
> + } else if (start->sw->config.depth > end->sw->config.depth) {
> + if (tb_is_upstream_port(port))
> + next = port->remote;
> + else
> + next = tb_upstream_port(port->sw);
> + } else {
> + /* Must be the same switch then */
> + if (start->sw != end->sw)
> + return NULL;
> + return end;
Here is a good pattern.
> + }
> +
> + /* If prev was dual link return another end of that link then */
> + if (next->dual_link_port && next->link_nr != prev->link_nr)
> + return next->dual_link_port;
> +
> + return next;
> +}
--
With Best Regards,
Andy Shevchenko
On Wed, Feb 06, 2019 at 04:17:22PM +0300, Mika Westerberg wrote:
> Each port has a separate path configuration space that is used for
> finding the next hop (switch) in the path. Hop ID is an index to this
> configuration space and hop IDs 0 - 7 are reserved.
>
> In order to get next available hop ID for each direction we provide two
> pairs of helper functions that can be used to allocate and release hop
> IDs for a given port.
[...]
> +static int tb_port_alloc_hopid(struct tb_port *port, bool in, int min_hopid,
> + int max_hopid)
> +{
> + int port_max_hopid;
> + struct ida *ida;
> +
> + if (in) {
> + port_max_hopid = port->config.max_in_hop_id;
> + ida = &port->in_hopids;
> + } else {
> + port_max_hopid = port->config.max_out_hop_id;
> + ida = &port->out_hopids;
> + }
> +
> + /* Hop IDs 0-7 are reserved */
> + if (min_hopid < 8)
> + min_hopid = 8;
> +
> + if (max_hopid < 0 || max_hopid > port_max_hopid)
> + max_hopid = port_max_hopid;
> +
> + return ida_simple_get(ida, min_hopid, max_hopid + 1, GFP_KERNEL);
> +}
If there are two Macs at the ends of the daisy-chain with Thunderbolt
devices in-between, the other Mac may already have established tunnels
to some of the devices and therefore has occupied hop entries in the
devices' path config space. How do you ensure that you don't allocate
the same entries and overwrite the other Mac's hop entries, thereby
breaking its tunnels? Because you're only allocating the hop entries
locally here. Maybe there's some check in a later patch whether a
hop entry is already occupied, I'm not even halfway through this patch
bomb.
Thanks,
Lukas
On Wed, Feb 06, 2019 at 04:17:24PM +0300, Mika Westerberg wrote:
> Now that we can allocate hop IDs per port on a path, we can take
> advantage of this and create tunnels covering longer paths than just
> between two adjacent switches. PCIe actually does not need this as it is
> always a daisy chain between two adjacent switches but this way we do
> not need to hard-code creation of the tunnel.
That doesn't seem to be correct, at the bottom of this page there's
a figure showing a PCI tunnel between non-adjacent switches (blue line):
https://developer.apple.com/library/archive/documentation/HardwareDrivers/Conceptual/ThunderboltDevGuide/Basics/Basics.html
I'm not sure if there are advantages to such tunnels: Reduced latency
perhaps because packets need not pass through PCIe adapters on the
in-between device? Or maybe this allows for more fine-grained traffic
prioritization?
> + i = 0;
> + tb_for_each_port(in_port, src, dst)
> + i++;
This looks more complicated than necessary. Isn't the path length
always the length of the route string from in_port switch to out_port
switch, plus 2 for the adapter on each end? Or do paths without
adapters exist?
> + for (i = 0; i < num_hops; i++) {
> + in_port = tb_port_get_next(src, dst, out_port);
> +
> + if (in_port->dual_link_port && in_port->link_nr != link_nr)
> + in_port = in_port->dual_link_port;
> +
> + ret = tb_port_alloc_in_hopid(in_port, in_hopid, -1);
> + if (ret < 0)
> + goto err;
> + in_hopid = ret;
> +
> + out_port = tb_port_get_next(src, dst, in_port);
> + if (!out_port)
> + goto err;
There's a NULL pointer check here, but the invocation of tb_port_get_next()
further up to assign in_port lacks such a check. Is it guaranteed to never
be NULL?
Thanks,
Lukas
On Wed, Feb 06, 2019 at 04:17:23PM +0300, Mika Westerberg wrote:
> We need to be able to walk from one port to another when we are creating
> paths where there are multiple switches between two ports. For this
> reason introduce a new function tb_port_get_next() and a new macro
> tb_for_each_port().
These names seem fairly generic, they might as well refer to the next port
on a switch or iterate over the ports on a switch. E.g. I've proposed a
tb_sw_for_each_port() macro in this patch:
https://lore.kernel.org/patchwork/patch/983863/
I'd suggest renaming tb_port_get_next() to something like
tb_next_port_on_path() or tb_path_next_port() or tb_path_walk().
And I'd suggest dropping tb_for_each_port() because there are only
two occurrences where it's used, one calculates the path length,
and I think that's simply the route string length plus 2, and the
other one in patch 17 isn't even interested in the ports along a path,
but rather in the switches between the root switch and the end of a path.
It seems simpler to just iterate from the switch at the end upwards to
the root switch by following the parent pointer in the switch's
struct device, or alternatively by bytewise iterating over the route
string and calling get_switch_at_route() each time.
> +/**
> + * tb_port_get_next() - Return next port for given port
> + * @start: Start port of the walk
> + * @end: End port of the walk
> + * @prev: Previous port (%NULL if this is the first)
> + *
> + * This function can be used to walk from one port to another if they
> + * are connected through zero or more switches. If the @prev is dual
> + * link port, the function follows that link and returns another end on
> + * that same link.
> + *
> + * If the walk cannot be continued, returns %NULL.
This sounds as if NULL is returned if an error occurs but that doesn't
seem to be what the function does. I'd suggest:
"If the @end port has been reached, return %NULL."
> +struct tb_port *tb_port_get_next(struct tb_port *start, struct tb_port *end,
> + struct tb_port *prev)
> +{
> + struct tb_port *port, *next;
> +
> + if (!prev)
> + return start;
> +
> + if (prev->sw == end->sw) {
> + if (prev != end)
> + return end;
> + return NULL;
> + }
> +
> + /* Switch back to use primary links for walking */
"Switch back" requires that you switched to something else before,
which you didn't. I'd suggest something like:
"use primary link to discover next port"
Why is it necessary to use the primary link anyway? Is the
->remote member not set on the secondary link port? The reason
should probably be spelled out in the code comment.
> + if (prev->dual_link_port && prev->link_nr)
> + port = prev->dual_link_port;
> + else
> + port = prev;
> +
> + if (start->sw->config.depth < end->sw->config.depth) {
> + if (port->remote &&
> + port->remote->sw->config.depth > port->sw->config.depth)
Can we use "if (!tb_is_upstream_port(port))" for consistency with the
if-clause below?
> + next = port->remote;
> + else
> + next = tb_port_at(tb_route(end->sw), port->sw);
> + } else if (start->sw->config.depth > end->sw->config.depth) {
> + if (tb_is_upstream_port(port))
> + next = port->remote;
> + else
> + next = tb_upstream_port(port->sw);
> + } else {
> + /* Must be the same switch then */
> + if (start->sw != end->sw)
> + return NULL;
> + return end;
> + }
The else-clause here appears to be dead code, you've already checked
further up whether prev and end are on the same switch.
> +
> + /* If prev was dual link return another end of that link then */
*Here* a "switch back" comment would be appropriate. Nit: Please either
end code comments with a period or don't start them with an upper case
letter.
> + if (next->dual_link_port && next->link_nr != prev->link_nr)
> + return next->dual_link_port;
> +
> + return next;
> +}
Thanks,
Lukas
On Wed, Feb 06, 2019 at 04:17:25PM +0300, Mika Westerberg wrote:
> We can't be sure the paths are actually properly deactivated when a
> tunnel is restarted after resume.
Why can't we be sure? Please provide proper reasoning.
> So instead of marking all paths as
> inactive we go ahead and deactivate them explicitly.
This seems like a bad idea if the root partition is on a Thunderbolt-
attached drive, the system is waking from hibernate and the EFI NHI
driver has already established a tunnel to that drive. It would seem
more appropriate to discover tunnels already existing on resume from
system sleep and then attempt to establish any others that might be
missing.
> @@ -183,8 +183,15 @@ int tb_tunnel_restart(struct tb_tunnel *tunnel)
>
> tb_tunnel_info(tunnel, "activating\n");
>
> + /* Make sure all paths are properly disabled before enable them again */
This isn't proper English, s/enable/enabling/.
Thanks,
Lukas
On Sun, Feb 10, 2019 at 01:13:53PM +0100, Lukas Wunner wrote:
> On Wed, Feb 06, 2019 at 04:17:22PM +0300, Mika Westerberg wrote:
> > Each port has a separate path configuration space that is used for
> > finding the next hop (switch) in the path. Hop ID is an index to this
> > configuration space and hop IDs 0 - 7 are reserved.
> >
> > In order to get next available hop ID for each direction we provide two
> > pairs of helper functions that can be used to allocate and release hop
> > IDs for a given port.
> [...]
> > +static int tb_port_alloc_hopid(struct tb_port *port, bool in, int min_hopid,
> > + int max_hopid)
> > +{
> > + int port_max_hopid;
> > + struct ida *ida;
> > +
> > + if (in) {
> > + port_max_hopid = port->config.max_in_hop_id;
> > + ida = &port->in_hopids;
> > + } else {
> > + port_max_hopid = port->config.max_out_hop_id;
> > + ida = &port->out_hopids;
> > + }
> > +
> > + /* Hop IDs 0-7 are reserved */
> > + if (min_hopid < 8)
> > + min_hopid = 8;
> > +
> > + if (max_hopid < 0 || max_hopid > port_max_hopid)
> > + max_hopid = port_max_hopid;
> > +
> > + return ida_simple_get(ida, min_hopid, max_hopid + 1, GFP_KERNEL);
> > +}
>
> If there are two Macs at the ends of the daisy-chain with Thunderbolt
> devices in-between, the other Mac may already have established tunnels
> to some of the devices and therefore has occupied hop entries in the
> devices' path config space. How do you ensure that you don't allocate
> the same entries and overwrite the other Mac's hop entries, thereby
> breaking its tunnels?
If the other Mac has enumerated the device (set the upstream port,
route, depth) then this Mac cannot access the device. You get an
error (we deal with that in the later patch in the series when we
identify XDomain connections). The Hop ID allocation is only relevant in
a single domain. Crossing one requires a protocol, such as the one we have
in the case of ThunderboltIP, to negotiate the Hop IDs used on the link
between the two domains.
On Sun, Feb 10, 2019 at 04:33:28PM +0100, Lukas Wunner wrote:
> On Wed, Feb 06, 2019 at 04:17:24PM +0300, Mika Westerberg wrote:
> > Now that we can allocate hop IDs per port on a path, we can take
> > advantage of this and create tunnels covering longer paths than just
> > between two adjacent switches. PCIe actually does not need this as it is
> > always a daisy chain between two adjacent switches but this way we do
> > not need to hard-code creation of the tunnel.
>
> That doesn't seem to be correct, at the bottom of this page there's
> a figure showing a PCI tunnel between non-adjacent switches (blue line):
>
> https://developer.apple.com/library/archive/documentation/HardwareDrivers/Conceptual/ThunderboltDevGuide/Basics/Basics.html
>
> I'm not sure if there are advantages to such tunnels: Reduced latency
> perhaps because packets need not pass through PCIe adapters on the
> in-between device? Or maybe this allows for more fine-grained traffic
> prioritization?
Interesting.
Are you sure Apple actually uses a setup like that? I think I have never
seen such a configuration on any of the devices I have.
I can update the changelog to mention that if you think it is useful.
Something like below maybe?
PCIe actually does not need this as it is typically a daisy chain
between two adjacent switches but this way we do not need to hard-code
creation of the tunnel.
> > + i = 0;
> > + tb_for_each_port(in_port, src, dst)
> > + i++;
>
> This looks more complicated than necessary. Isn't the path length
> always the length of the route string from in_port switch to out_port
> switch, plus 2 for the adapter on each end? Or do paths without
> adapters exist?
Yes, I think you are right.
> > + for (i = 0; i < num_hops; i++) {
> > + in_port = tb_port_get_next(src, dst, out_port);
> > +
> > + if (in_port->dual_link_port && in_port->link_nr != link_nr)
> > + in_port = in_port->dual_link_port;
> > +
> > + ret = tb_port_alloc_in_hopid(in_port, in_hopid, -1);
> > + if (ret < 0)
> > + goto err;
> > + in_hopid = ret;
> > +
> > + out_port = tb_port_get_next(src, dst, in_port);
> > + if (!out_port)
> > + goto err;
>
> There's a NULL pointer check here, but the invocation of tb_port_get_next()
> further up to assign in_port lacks such a check. Is it guaranteed to never
> be NULL?
No, I'll add NULL check there.
On Mon, Feb 11, 2019 at 07:35:55AM +0100, Lukas Wunner wrote:
> On Wed, Feb 06, 2019 at 04:17:25PM +0300, Mika Westerberg wrote:
> > We can't be sure the paths are actually properly deactivated when a
> > tunnel is restarted after resume.
>
> Why can't we be sure? Please provide proper reasoning.
IIRC the reason was that when you suspend, then reconfigure parts of the
topology and resume, establishing the tunnel again went wrong. I'll
update the changelog with better reasoning.
> > So instead of marking all paths as
> > inactive we go ahead and deactivate them explicitly.
>
> This seems like a bad idea if the root partition is on a Thunderbolt-
> attached drive, the system is waking from hibernate and the EFI NHI
> driver has already established a tunnel to that drive. It would seem
> more appropriate to discover tunnels already existing on resume from
> system sleep and then attempt to establish any others that might be
> missing.
That's what we do in the following patch, no? We discover the EFI
created paths and use that information to re-establish tunnels upon S3
resume and also when they are torn down.
> > @@ -183,8 +183,15 @@ int tb_tunnel_restart(struct tb_tunnel *tunnel)
> >
> > tb_tunnel_info(tunnel, "activating\n");
> >
> > + /* Make sure all paths are properly disabled before enable them again */
>
> This isn't proper English, s/enable/enabling/.
Thanks, I'll fix it up.
On Mon, Feb 11, 2019 at 07:16:00AM +0100, Lukas Wunner wrote:
> On Wed, Feb 06, 2019 at 04:17:23PM +0300, Mika Westerberg wrote:
> > We need to be able to walk from one port to another when we are creating
> > paths where there are multiple switches between two ports. For this
> > reason introduce a new function tb_port_get_next() and a new macro
> > tb_for_each_port().
>
> These names seem fairly generic, they might as well refer to the next port
> on a switch or iterate over the ports on a switch. E.g. I've proposed a
> tb_sw_for_each_port() macro in this patch:
>
> https://lore.kernel.org/patchwork/patch/983863/
>
> I'd suggest renaming tb_port_get_next() to something like
> tb_next_port_on_path() or tb_path_next_port() or tb_path_walk().
OK, tb_next_port_on_path() sounds good to me.
> And I'd suggest dropping tb_for_each_port() because there are only
> two occurrences where it's used, one calculates the path length,
> and I think that's simply the route string length plus 2, and the
> other one in patch 17 isn't even interested in the ports along a path,
> but rather in the switches between the root switch and the end of a path.
> It seems simpler to just iterate from the switch at the end upwards to
> the root switch by following the parent pointer in the switch's
> struct device, or alternatively by bytewise iterating over the route
> string and calling get_switch_at_route() each time.
OK.
> > +/**
> > + * tb_port_get_next() - Return next port for given port
> > + * @start: Start port of the walk
> > + * @end: End port of the walk
> > + * @prev: Previous port (%NULL if this is the first)
> > + *
> > + * This function can be used to walk from one port to another if they
> > + * are connected through zero or more switches. If the @prev is dual
> > + * link port, the function follows that link and returns another end on
> > + * that same link.
> > + *
> > + * If the walk cannot be continued, returns %NULL.
>
> This sounds as if NULL is returned if an error occurs but that doesn't
> seem to be what the function does. I'd suggest:
>
> "If the @end port has been reached, return %NULL."
It returns NULL if @end cannot be reached. So what about:
"If @end cannot be reached, returns %NULL"
?
> > +struct tb_port *tb_port_get_next(struct tb_port *start, struct tb_port *end,
> > + struct tb_port *prev)
> > +{
> > + struct tb_port *port, *next;
> > +
> > + if (!prev)
> > + return start;
> > +
> > + if (prev->sw == end->sw) {
> > + if (prev != end)
> > + return end;
> > + return NULL;
> > + }
> > +
> > + /* Switch back to use primary links for walking */
>
> "Switch back" requires that you switched to something else before,
> which you didn't. I'd suggest something like:
>
> "use primary link to discover next port"
OK.
> Why is it necessary to use the primary link anyway? Is the
> ->remote member not set on the secondary link port? The reason
> should probably be spelled out in the code comment.
IIRC it was because you may have something in the middle with only one
port (the primary). I'll add a comment here explaining that.
> > + if (prev->dual_link_port && prev->link_nr)
> > + port = prev->dual_link_port;
> > + else
> > + port = prev;
> > +
> > + if (start->sw->config.depth < end->sw->config.depth) {
> > + if (port->remote &&
> > + port->remote->sw->config.depth > port->sw->config.depth)
>
> Can we use "if (!tb_is_upstream_port(port))" for consistency with the
> if-clause below?
Yes, I think that should work.
> > + next = port->remote;
> > + else
> > + next = tb_port_at(tb_route(end->sw), port->sw);
> > + } else if (start->sw->config.depth > end->sw->config.depth) {
> > + if (tb_is_upstream_port(port))
> > + next = port->remote;
> > + else
> > + next = tb_upstream_port(port->sw);
> > + } else {
> > + /* Must be the same switch then */
> > + if (start->sw != end->sw)
> > + return NULL;
> > + return end;
> > + }
>
> The else-clause here appears to be dead code, you've already checked
> further up whether prev and end are on the same switch.
OK.
> > +
> > + /* If prev was dual link return another end of that link then */
>
> *Here* a "switch back" comment would be appropriate. Nit: Please either
> end code comments with a period or don't start them with an upper case
> letter.
That's the style I've been using in this driver and elsewhere and is my
preference anyway.
I'll update the comment content, though. :)
On Mon, Feb 11, 2019 at 10:30:43AM +0200, Mika Westerberg wrote:
> On Sun, Feb 10, 2019 at 01:13:53PM +0100, Lukas Wunner wrote:
> > If there are two Macs at the ends of the daisy-chain with Thunderbolt
> > devices in-between, the other Mac may already have established tunnels
> > to some of the devices and therefore has occupied hop entries in the
> > devices' path config space. How do you ensure that you don't allocate
> > the same entries and overwrite the other Mac's hop entries, thereby
> > breaking its tunnels?
>
> If the other Mac has enumerated the device (set the upstream port,
> route, depth) then the other Mac cannot access the device. You get an
> error (we deal with that in the later patch in the series when we
> identify XDomain connections). The Hop ID allocation is only relevant in
> a single domain. Crossing one needs to have protocol such as we have in
> case of ThunderboltIP to negotiate Hop IDs used in the link between two
> domains.
Understood now, thanks. (Well, in part at least.)
It looks like there's a race condition currently in tb_switch_configure()
wherein two machines on the daisy chain may write the config simultaneously
and overwrite each other's changes. Isn't there some kind of synchronization
mechanism available to prevent such an outcome?
Thanks,
Lukas
On Tue, Feb 12, 2019 at 01:43:33PM +0100, Lukas Wunner wrote:
> On Mon, Feb 11, 2019 at 10:30:43AM +0200, Mika Westerberg wrote:
> > On Sun, Feb 10, 2019 at 01:13:53PM +0100, Lukas Wunner wrote:
> > > If there are two Macs at the ends of the daisy-chain with Thunderbolt
> > > devices in-between, the other Mac may already have established tunnels
> > > to some of the devices and therefore has occupied hop entries in the
> > > devices' path config space. How do you ensure that you don't allocate
> > > the same entries and overwrite the other Mac's hop entries, thereby
> > > breaking its tunnels?
> >
> > If the other Mac has enumerated the device (set the upstream port,
> > route, depth) then the other Mac cannot access the device. You get an
> > error (we deal with that in the later patch in the series when we
> > identify XDomain connections). The Hop ID allocation is only relevant in
> > a single domain. Crossing one needs to have protocol such as we have in
> > case of ThunderboltIP to negotiate Hop IDs used in the link between two
> > domains.
>
> Understood now, thanks. (Well, in part at least.)
>
> It looks like there's a race condition currently in tb_switch_configure()
> wherein two machines on the daisy chain may write the config simultaneously
> and overwrite each other's changes. Isn't there some kind of synchronization
> mechanism available to prevent such an outcome?
AFAICT that's expected. The host that first enumerated the device wins.
On Mon, Feb 11, 2019 at 10:45:58AM +0200, Mika Westerberg wrote:
> On Sun, Feb 10, 2019 at 04:33:28PM +0100, Lukas Wunner wrote:
> > at the bottom of this page there's
> > a figure showing a PCI tunnel between non-adjacent switches (blue line):
> >
> > https://developer.apple.com/library/archive/documentation/HardwareDrivers/Conceptual/ThunderboltDevGuide/Basics/Basics.html
> >
> Are you sure Apple actually uses setup like that? I think I have never
> seen such configuration happening on any of the devices I have.
Sorry, I don't know if they actually use that.
> I can update the changelog to mention that if you think it is useful.
> Something like below maybe?
>
> PCIe actually does not need this as it is typically a daisy chain
> between two adjacent switches but this way we do not need to hard-code
> creation of the tunnel.
LGTM, thanks.
> > > + i = 0;
> > > + tb_for_each_port(in_port, src, dst)
> > > + i++;
> >
> > This looks more complicated than necessary. Isn't the path length
> > always the length of the route string from in_port switch to out_port
> > switch, plus 2 for the adapter on each end? Or do paths without
> > adapters exist?
>
> Yes, I think you are right.
Simply subtracting the depths of the start and end port's switch also yields
the path length. Of course this assumes that tunnels aren't established
between non-adjacent switches, but your algorithm doesn't do that.
Thanks,
Lukas
On Tue, Feb 12, 2019 at 01:59:27PM +0100, Lukas Wunner wrote:
> On Tue, Feb 12, 2019 at 02:51:25PM +0200, Mika Westerberg wrote:
> > On Tue, Feb 12, 2019 at 01:43:33PM +0100, Lukas Wunner wrote:
> > > On Mon, Feb 11, 2019 at 10:30:43AM +0200, Mika Westerberg wrote:
> > > > On Sun, Feb 10, 2019 at 01:13:53PM +0100, Lukas Wunner wrote:
> > > > > If there are two Macs at the ends of the daisy-chain with Thunderbolt
> > > > > devices in-between, the other Mac may already have established tunnels
> > > > > to some of the devices and therefore has occupied hop entries in the
> > > > > devices' path config space. How do you ensure that you don't allocate
> > > > > the same entries and overwrite the other Mac's hop entries, thereby
> > > > > breaking its tunnels?
> > > >
> > > > If the other Mac has enumerated the device (set the upstream port,
> > > > route, depth) then the other Mac cannot access the device. You get an
> > > > error (we deal with that in the later patch in the series when we
> > > > identify XDomain connections). The Hop ID allocation is only relevant in
> > > > a single domain. Crossing one needs to have protocol such as we have in
> > > > case of ThunderboltIP to negotiate Hop IDs used in the link between two
> > > > domains.
> > >
> > > Understood now, thanks. (Well, in part at least.)
> > >
> > > It looks like there's a race condition currently in tb_switch_configure()
> > > wherein two machines on the daisy chain may write the config simultaneously
> > > and overwrite each other's changes. Isn't there some kind of synchronization
> > > mechanism available to prevent such an outcome?
> >
> > AFAICT that's expected. The host that first enumerated the device wins.
>
> Yes but tb_switch_configure() goes on to blindly call
> tb_plug_events_active(). Does that or any other subsequently called
> function fail if another machine managed to overwrite the switch config?
Yes, once the switch is enumerated the other domain cannot access it
anymore but instead gets back errors.
On Tue, Feb 12, 2019 at 02:51:25PM +0200, Mika Westerberg wrote:
> On Tue, Feb 12, 2019 at 01:43:33PM +0100, Lukas Wunner wrote:
> > On Mon, Feb 11, 2019 at 10:30:43AM +0200, Mika Westerberg wrote:
> > > On Sun, Feb 10, 2019 at 01:13:53PM +0100, Lukas Wunner wrote:
> > > > If there are two Macs at the ends of the daisy-chain with Thunderbolt
> > > > devices in-between, the other Mac may already have established tunnels
> > > > to some of the devices and therefore has occupied hop entries in the
> > > > devices' path config space. How do you ensure that you don't allocate
> > > > the same entries and overwrite the other Mac's hop entries, thereby
> > > > breaking its tunnels?
> > >
> > > If the other Mac has enumerated the device (set the upstream port,
> > > route, depth) then the other Mac cannot access the device. You get an
> > > error (we deal with that in the later patch in the series when we
> > > identify XDomain connections). The Hop ID allocation is only relevant in
> > > a single domain. Crossing one needs to have protocol such as we have in
> > > case of ThunderboltIP to negotiate Hop IDs used in the link between two
> > > domains.
> >
> > Understood now, thanks. (Well, in part at least.)
> >
> > It looks like there's a race condition currently in tb_switch_configure()
> > wherein two machines on the daisy chain may write the config simultaneously
> > and overwrite each other's changes. Isn't there some kind of synchronization
> > mechanism available to prevent such an outcome?
>
> AFAICT that's expected. The host that first enumerated the device wins.
Yes but tb_switch_configure() goes on to blindly call
tb_plug_events_active(). Does that or any other subsequently called
function fail if another machine managed to overwrite the switch config?
On Mon, Feb 11, 2019 at 11:54:36AM +0200, Mika Westerberg wrote:
> On Mon, Feb 11, 2019 at 07:16:00AM +0100, Lukas Wunner wrote:
> > On Wed, Feb 06, 2019 at 04:17:23PM +0300, Mika Westerberg wrote:
> > > +/**
> > > + * tb_port_get_next() - Return next port for given port
> > > + * @start: Start port of the walk
> > > + * @end: End port of the walk
> > > + * @prev: Previous port (%NULL if this is the first)
> > > + *
> > > + * This function can be used to walk from one port to another if they
> > > + * are connected through zero or more switches. If the @prev is dual
> > > + * link port, the function follows that link and returns another end on
> > > + * that same link.
> > > + *
> > > + * If the walk cannot be continued, returns %NULL.
> >
> > This sounds as if NULL is returned if an error occurs but that doesn't
> > seem to be what the function does. I'd suggest:
> >
> > "If the @end port has been reached, return %NULL."
>
> It returns NULL if @end cannot be reached. So what about:
>
> "If @end cannot be reached, returns %NULL"
>
> ?
That doesn't appear to match what the function does. There are two places
where NULL is returned:
The first is at the top of the function and returns NULL if
((prev->sw == end->sw) && (prev == end)). So this happens when the
entire path has been traversed and "end" is passed in as prev argument.
The second is at the bottom and is presumably never executed because
it only happens if (start->sw->config.depth == end->sw->config.depth),
which I believe is only the case if (start->sw == end->sw), which implies
that prev can only be either "start" or "end", and both cases are already
handled at the top of the function.
Bottom line is that NULL is returned once the traversal has concluded.
Am I missing something?
> > Why is it necessary to use the primary link anyway? Is the
> > ->remote member not set on the secondary link port? The reason
> > should probably be spelled out in the code comment.
>
> IIRC it was because you may have something in the middle with only one
> port (the primary). I'll add a comment here explaining that.
Hm, I'm wondering if it wouldn't be more straightforward to also set
the remote member on secondary links to avoid all this special casing?
Any downside to that?
Thanks,
Lukas
On Wed, Feb 06, 2019 at 04:17:28PM +0300, Mika Westerberg wrote:
> The only way to expand Thunderbolt topology is through the NULL adapter
> ports (typically ports 1, 2, 3 and 4). There is no point handling
> Thunderbolt hotplug events on any other port.
>
> Add a helper function (tb_port_is_null()) that can be used to determine
> if the port is NULL port, and use it in software connection manager code
> when hotplug event is handled.
Andreas called these ports TB_TYPE_PORT. If the official name is NULL,
then renaming to TB_TYPE_NULL might be a useful cleanup. (Though it
seems the control port, i.e. port 0, is also of type TB_TYPE_PORT?)
> --- a/drivers/thunderbolt/tb.c
> +++ b/drivers/thunderbolt/tb.c
> @@ -344,10 +344,12 @@ static void tb_handle_hotplug(struct work_struct *work)
> tb_port_info(port,
> "got plug event for connected port, ignoring\n");
> } else {
> - tb_port_info(port, "hotplug: scanning\n");
> - tb_scan_port(port);
> - if (!port->remote)
> - tb_port_info(port, "hotplug: no switch found\n");
> + if (tb_port_is_null(port)) {
> + tb_port_info(port, "hotplug: scanning\n");
> + tb_scan_port(port);
> + if (!port->remote)
> + tb_port_info(port, "hotplug: no switch found\n");
> + }
There are several other sanity checks further up in this function.
Why not move the tb_port_is_null() check near them, e.g. below the
check for tb_is_upstream_port()?
Thanks,
Lukas
On Wed, Feb 06, 2019 at 04:17:26PM +0300, Mika Westerberg wrote:
> /* dword 0 */
> hop.next_hop = path->hops[i].next_hop_index;
> hop.out_port = path->hops[i].out_port->port;
> - /* TODO: figure out why these are good values */
> - hop.initial_credits = (i == path->path_length - 1) ? 16 : 7;
> + hop.initial_credits = path->hops[i].initial_credits;
> hop.unknown1 = 0;
> hop.enable = 1;
[...]
> @@ -78,6 +78,74 @@ static void tb_pci_init_path(struct tb_path *path)
> path->weight = 1;
> path->drop_packages = 0;
> path->nfc_credits = 0;
> + path->hops[0].initial_credits = 7;
> + path->hops[1].initial_credits = 16;
I guess Andreas' algorithm (the last hop in the path is assigned
16 and all the ones before are assigned 7) was reverse-engineered
from Apple's driver. The fact that this algorithm works for paths
of arbitrary length could indicate that Apple indeed does establish
tunnels between non-adjacent switches.
Also, why are these good values? (You've deleted the comment.)
Thanks,
Lukas
On Tue, Feb 12, 2019 at 06:49:42PM +0100, Lukas Wunner wrote:
> On Wed, Feb 06, 2019 at 04:17:26PM +0300, Mika Westerberg wrote:
> > /* dword 0 */
> > hop.next_hop = path->hops[i].next_hop_index;
> > hop.out_port = path->hops[i].out_port->port;
> > - /* TODO: figure out why these are good values */
> > - hop.initial_credits = (i == path->path_length - 1) ? 16 : 7;
> > + hop.initial_credits = path->hops[i].initial_credits;
> > hop.unknown1 = 0;
> > hop.enable = 1;
> [...]
> > @@ -78,6 +78,74 @@ static void tb_pci_init_path(struct tb_path *path)
> > path->weight = 1;
> > path->drop_packages = 0;
> > path->nfc_credits = 0;
> > + path->hops[0].initial_credits = 7;
> > + path->hops[1].initial_credits = 16;
>
> I guess Andreas' algorithm (the last hop in the path is assigned
> 16 and all the ones before are assigned 7) was reverse-engineered
> from Apple's driver. The fact that this algorithm works for paths
> of arbitrary length could indicate that Apple indeed does establish
> tunnels between non-adjacent switches.
We do it as well for DP and DMA paths in subsequent patches. For those
there are NULL ports in the middle which get assigned a different amount
of credits. PCIe paths, on the other hand, only need two hops when we
daisy-chain them in this patch series.
> Also, why are these good values? (You've deleted the comment.)
To be honest, I don't know all the details. Credits are used for flow
control to make sure the receiving port always has enough buffers before
the sending port can send more packets (assuming the path uses flow
control). I don't know where 7 and 16 came from, but they seem to work
pretty well for PCIe paths so I kept using them.
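[Editor's note: the credit rule described in this exchange (last hop gets 16,
every earlier hop gets 7) can be sketched in plain C. The struct layouts
below are simplified stand-ins, not the kernel's definitions, so treat this
as an illustration of the rule rather than the actual patch.]

```c
#include <assert.h>

/* Simplified stand-ins for the kernel structures (illustrative only) */
struct tb_path_hop {
	int initial_credits;
};

struct tb_path {
	int path_length;
	struct tb_path_hop hops[8];
};

/*
 * Andreas' original rule, reverse-engineered behavior: the last hop in
 * the path gets 16 initial credits, every hop before it gets 7.
 */
static void assign_initial_credits(struct tb_path *path)
{
	int i;

	for (i = 0; i < path->path_length; i++)
		path->hops[i].initial_credits =
			(i == path->path_length - 1) ? 16 : 7;
}
```

For the two-hop PCIe paths used in this series the rule reduces to exactly
the hops[0] = 7, hops[1] = 16 assignment shown in the quoted hunk.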
On Tue, Feb 12, 2019 at 03:04:22PM +0100, Lukas Wunner wrote:
> On Wed, Feb 06, 2019 at 04:17:28PM +0300, Mika Westerberg wrote:
> > The only way to expand Thunderbolt topology is through the NULL adapter
> > ports (typically ports 1, 2, 3 and 4). There is no point handling
> > Thunderbolt hotplug events on any other port.
> >
> > Add a helper function (tb_port_is_null()) that can be used to determine
> > if the port is NULL port, and use it in software connection manager code
> > when hotplug event is handled.
>
> Andreas called these ports TB_TYPE_PORT. If the official name is NULL,
> then renaming to TB_TYPE_NULL might be a useful cleanup. (Though it
> seems the control port, i.e. port 0, is also of type TB_TYPE_PORT?)
Yes, they are called NULL ports. The control port does not have a port
config space, but it is still accounted for in the switch's max port
number and also has type TB_TYPE_PORT; you access it when you talk to
the switch config space.
Since the type is the same, I would like to keep it like this and
differentiate the control port using port number 0 where needed.
> > --- a/drivers/thunderbolt/tb.c
> > +++ b/drivers/thunderbolt/tb.c
> > @@ -344,10 +344,12 @@ static void tb_handle_hotplug(struct work_struct *work)
> > tb_port_info(port,
> > "got plug event for connected port, ignoring\n");
> > } else {
> > - tb_port_info(port, "hotplug: scanning\n");
> > - tb_scan_port(port);
> > - if (!port->remote)
> > - tb_port_info(port, "hotplug: no switch found\n");
> > + if (tb_port_is_null(port)) {
> > + tb_port_info(port, "hotplug: scanning\n");
> > + tb_scan_port(port);
> > + if (!port->remote)
> > + tb_port_info(port, "hotplug: no switch found\n");
> > + }
>
> There are several other sanity checks further up in this function.
> Why not move the tb_port_is_null() check near them, e.g. below the
> check for tb_is_upstream_port()?
DP adapters also get hotplug events and in subsequent patches we add
handling for those in this function as well.
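[Editor's note: a helper along the lines discussed in this exchange might
look as follows. The enum values and struct layout are simplified stand-ins
for the kernel's definitions, so this is a sketch, not the actual patch; it
also shows the "port number 0" distinction for the control port that Mika
mentions above.]

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for the kernel definitions (illustrative only) */
enum tb_port_type {
	TB_TYPE_PORT,		/* NULL adapter and the control port */
	TB_TYPE_PCIE_UP,
};

struct tb_port_config {
	enum tb_port_type type;
};

struct tb_port {
	struct tb_port_config config;
	int port;		/* port number; 0 is the control port */
};

/*
 * A NULL adapter port has type TB_TYPE_PORT but is not the control
 * port; the control port shares the type and is told apart only by
 * its port number being 0.
 */
static bool tb_port_is_null(const struct tb_port *port)
{
	return port->port && port->config.type == TB_TYPE_PORT;
}
```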
On Tue, Feb 12, 2019 at 02:55:42PM +0100, Lukas Wunner wrote:
> On Mon, Feb 11, 2019 at 11:54:36AM +0200, Mika Westerberg wrote:
> > On Mon, Feb 11, 2019 at 07:16:00AM +0100, Lukas Wunner wrote:
> > > On Wed, Feb 06, 2019 at 04:17:23PM +0300, Mika Westerberg wrote:
> > > > +/**
> > > > + * tb_port_get_next() - Return next port for given port
> > > > + * @start: Start port of the walk
> > > > + * @end: End port of the walk
> > > > + * @prev: Previous port (%NULL if this is the first)
> > > > + *
> > > > + * This function can be used to walk from one port to another if they
> > > > + * are connected through zero or more switches. If the @prev is dual
> > > > + * link port, the function follows that link and returns another end on
> > > > + * that same link.
> > > > + *
> > > > + * If the walk cannot be continued, returns %NULL.
> > >
> > > This sounds as if NULL is returned if an error occurs but that doesn't
> > > seem to be what the function does. I'd suggest:
> > >
> > > "If the @end port has been reached, return %NULL."
> >
> > It returns NULL if @end cannot be reached. So what about:
> >
> > "If @end cannot be reached, returns %NULL"
> >
> > ?
>
> That doesn't appear to match what the function does. There are two places
> where NULL is returned:
>
> The first is at the top of the function and returns NULL if
> ((prev->sw == end->sw) && (prev == end)). So this happens when the
> entire path has been traversed and "end" is passed in as prev argument.
>
> The second is at the bottom and is presumably never executed because
> it only happens if (start->sw->config.depth == end->sw->config.depth),
> which I believe is only the case if (start->sw == end->sw), which implies
> that prev can only be either "start" or "end", and both cases are already
> handled at the top of the function.
>
> Bottom line is that NULL is returned once the traversal has concluded.
> Am I missing something?
No, you are right. I'll update the comment accordingly.
> > > Why is it necessary to use the primary link anyway? Is the
> > > ->remote member not set on the secondary link port? The reason
> > > should probably be spelled out in the code comment.
> >
> > IIRC it was because you may have something in the middle with only one
> > port (the primary). I'll add a comment here explaining that.
>
> Hm, I'm wondering if it wouldn't be more straightforward to also set
> the remote member on secondary links to avoid all this special casing?
> Any downside to that?
I think it is useful to distinguish between primary and secondary links
as we do when we establish DP tunnels. Granted we could rename them to
"primary" and "secondary" instead of "remote" and "dual_link_port". Or
alternatively have two remotes and then link_nr or something like that.
I'll try and see if it simplifies the code.
On Wed, Feb 06, 2019 at 04:17:26PM +0300, Mika Westerberg wrote:
> +static struct tb_port *tb_port_remote(struct tb_port *port)
> +{
> + struct tb_port *remote = port->remote;
> +
> + /*
> + * If we have a dual link, the remote is available through the
> + * primary link.
> + */
> + if (!remote && port->dual_link_port && port->dual_link_port->remote)
> + return port->dual_link_port->remote->dual_link_port;
> + return remote;
> +}
Yet more special-casing for dual-link ports. :-(
> + if (tunnel->dst_port->config.type != TB_TYPE_PCIE_UP) {
> + tb_port_warn(tunnel->dst_port,
> + "path does not end to a PCIe adapter\n");
Nit: I think the proper preposition is "on" or "at", not "to".
The tunnel discovery algorithm looks solid to me, so:
Reviewed-by: Lukas Wunner <[email protected]>
When the module is unloaded, tb_stop() currently deactivates all PCI
tunnels. Is this still a good idea now that tunnels are discovered
on probe? We could just leave the tunnels in place and rediscover
them when the module is reloaded. If something was unplugged in the
meantime, pciehp will have disconnected the devices and we should
notice on reprobe that certain tunnels cannot be rediscovered, so no
harm no foul. Thoughts?
Thanks,
Lukas
On Tue, Feb 12, 2019 at 08:42:49PM +0100, Lukas Wunner wrote:
> On Wed, Feb 06, 2019 at 04:17:26PM +0300, Mika Westerberg wrote:
> > +static struct tb_port *tb_port_remote(struct tb_port *port)
> > +{
> > + struct tb_port *remote = port->remote;
> > +
> > + /*
> > + * If we have a dual link, the remote is available through the
> > + * primary link.
> > + */
> > + if (!remote && port->dual_link_port && port->dual_link_port->remote)
> > + return port->dual_link_port->remote->dual_link_port;
> > + return remote;
> > +}
>
> Yet more special-casing for dual-link ports. :-(
>
>
> > + if (tunnel->dst_port->config.type != TB_TYPE_PCIE_UP) {
> > + tb_port_warn(tunnel->dst_port,
> > + "path does not end to a PCIe adapter\n");
>
> Nit: I think the proper preposition is "on" or "at", not "to".
>
> The tunnel discovery algorithm looks solid to me, so:
> Reviewed-by: Lukas Wunner <[email protected]>
Thanks!
> When the module is unloaded, tb_stop() currently deactivates all PCI
> tunnels. Is this still a good idea now that tunnels are discovered
> on probe? We could just leave the tunnels in place and rediscover
> them when the module is reloaded. If something was unplugged in the
> meantime, pciehp will have disconnected the devices and we should
> notice on reprobe that certain tunnels cannot be rediscovered, so no
> harm no foul. Thoughts?
I agree it makes sense and that's actually what we already do with the
firmware CM.
On Wed, Feb 06, 2019 at 04:17:27PM +0300, Mika Westerberg wrote:
> @@ -63,6 +71,16 @@ static void tb_discover_tunnels(struct tb_switch *sw)
> }
> }
>
> +static void tb_switch_authorize(struct work_struct *work)
> +{
> + struct tb_switch *sw = container_of(work, typeof(*sw), work);
> +
> + mutex_lock(&sw->tb->lock);
> + if (!sw->is_unplugged)
> + tb_domain_approve_switch(sw->tb, sw);
> + mutex_unlock(&sw->tb->lock);
> +}
> +
You're establishing PCI tunnels by having tb_scan_port() schedule
tb_switch_authorize() via a work item, which in turn calls
tb_domain_approve_switch(), which in turn calls tb_approve_switch(),
which in turn calls tb_tunnel_pci().
This seems kind of like a roundabout way of doing things, in particular
since all switches are hardcoded to be automatically authorized.
Why don't you just invoke tb_tunnel_pci() directly from tb_scan_port()?
And why is the work item needed? I'm also confused that the work item
has been present in struct tb_switch for 2 years but is put to use only
now.
> -static void tb_activate_pcie_devices(struct tb *tb)
> +static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
> {
[...]
> + /*
> + * Look up available down port. Since we are chaining, it is
> + * typically found right above this switch.
> + */
> + down = NULL;
> + parent_sw = tb_to_switch(sw->dev.parent);
> + while (parent_sw) {
> + down = tb_find_unused_down_port(parent_sw);
> + if (down)
> + break;
> + parent_sw = tb_to_switch(parent_sw->dev.parent);
> + }
The problem I see here is that there's no guarantee that the switch
on which you're selecting a down port is actually itself connected
with a PCI tunnel. E.g., allocation of a tunnel to that parent
switch may have failed. In that case you end up establishing a
tunnel between that parent switch and the newly connected switch
but the tunnel is of no use.
It would seem more logical to me to walk down the chain of newly
connected switches and try to establish a PCI tunnel to each of
them in order. By deferring tunnel establishment to a work item,
I think the tunnels may be established in an arbitrary order, right?
Thanks,
Lukas
On Sun, Mar 24, 2019 at 12:31:44PM +0100, Lukas Wunner wrote:
> On Wed, Feb 06, 2019 at 04:17:27PM +0300, Mika Westerberg wrote:
> > @@ -63,6 +71,16 @@ static void tb_discover_tunnels(struct tb_switch *sw)
> > }
> > }
> >
> > +static void tb_switch_authorize(struct work_struct *work)
> > +{
> > + struct tb_switch *sw = container_of(work, typeof(*sw), work);
> > +
> > + mutex_lock(&sw->tb->lock);
> > + if (!sw->is_unplugged)
> > + tb_domain_approve_switch(sw->tb, sw);
> > + mutex_unlock(&sw->tb->lock);
> > +}
> > +
>
> You're establishing PCI tunnels by having tb_scan_port() schedule
> tb_switch_authorize() via a work item, which in turn calls
> tb_domain_approve_switch(), which in turn calls tb_approve_switch(),
> which in turn calls tb_tunnel_pci().
>
> This seems kind of like a roundabout way of doing things, in particular
> since all switches are hardcoded to be automatically authorized.
>
> Why don't you just invoke tb_tunnel_pci() directly from tb_scan_port()?
Indeed, it does not make much sense to schedule a separate work item
just for this.
I will remove it in v3. However, instead of always creating PCIe
tunnels I'm going to propose that we implement the "user" security level
in the software connection manager by default. While DMA protection
relies on the IOMMU, doing this allows the user to turn off PCIe
tunneling completely (or implement their own whitelisting of known good
devices, for example).
> And why is the work item needed? I'm also confused that the work item
> has been present in struct tb_switch for 2 years but is put to use only
> now.
Yes, you are right - the work item here is not needed. It is actually a
remnant from the original patch series. I'll cook up a patch removing it.
> > -static void tb_activate_pcie_devices(struct tb *tb)
> > +static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
> > {
> [...]
> > + /*
> > + * Look up available down port. Since we are chaining, it is
> > + * typically found right above this switch.
> > + */
> > + down = NULL;
> > + parent_sw = tb_to_switch(sw->dev.parent);
> > + while (parent_sw) {
> > + down = tb_find_unused_down_port(parent_sw);
> > + if (down)
> > + break;
> > + parent_sw = tb_to_switch(parent_sw->dev.parent);
> > + }
>
> The problem I see here is that there's no guarantee that the switch
> on which you're selecting a down port is actually itself connected
> with a PCI tunnel. E.g., allocation of a tunnel to that parent
> switch may have failed. In that case you end up establishing a
> tunnel between that parent switch and the newly connected switch
> but the tunnel is of no use.
Since this is going through tb_domain_approve_switch() it does not allow
PCIe tunnel creation if the parent is not authorized first.
> It would seem more logical to me to walk down the chain of newly
> connected switches and try to establish a PCI tunnel to each of
> them in order. By deferring tunnel establishment to a work item,
> I think the tunnels may be established in an arbitrary order, right?
The workqueue is ordered so AFAIK the work items should run in the order
the hotplug events happened. In any case I'm going to remove the work
item so this should not be an issue.
On Mon, Mar 25, 2019 at 11:57:33AM +0200, Mika Westerberg wrote:
> On Sun, Mar 24, 2019 at 12:31:44PM +0100, Lukas Wunner wrote:
> > On Wed, Feb 06, 2019 at 04:17:27PM +0300, Mika Westerberg wrote:
> > > -static void tb_activate_pcie_devices(struct tb *tb)
> > > +static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
> > > {
> > [...]
> > > + /*
> > > + * Look up available down port. Since we are chaining, it is
> > > + * typically found right above this switch.
> > > + */
> > > + down = NULL;
> > > + parent_sw = tb_to_switch(sw->dev.parent);
> > > + while (parent_sw) {
> > > + down = tb_find_unused_down_port(parent_sw);
> > > + if (down)
> > > + break;
> > > + parent_sw = tb_to_switch(parent_sw->dev.parent);
> > > + }
> >
> > The problem I see here is that there's no guarantee that the switch
> > on which you're selecting a down port is actually itself connected
> > with a PCI tunnel. E.g., allocation of a tunnel to that parent
> > switch may have failed. In that case you end up establishing a
> > tunnel between that parent switch and the newly connected switch
> > but the tunnel is of no use.
>
> Since this is going through tb_domain_approve_switch() it does not allow
> PCIe tunnel creation if the parent is not authorized first.
Yes, but my point is that it doesn't make much sense to establish a tunnel
between a switch and one of its parent switches unless that parent switch
is reachable from the root switch over a PCI tunnel or a series of PCI
tunnels.
It may be worth checking that condition, or, if new tunnels are
established top-down in the daisy-chain, not establishing tunnels for
switches beyond one where tunnel establishment has failed.
Thanks,
Lukas
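[Editor's note: the top-down rule Lukas suggests can be modeled with a toy
loop. The names and data layout below are purely illustrative, not the
driver's API: walk the chain of newly connected switches root-first and stop
at the first switch for which tunnel establishment fails, leaving every
switch beyond it untunneled, since a tunnel whose parent is unreachable over
PCIe would be of no use.]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model: can_tunnel[i] says whether a PCIe tunnel to the i-th
 * switch in the chain (root-first order) can be established. Fills
 * tunneled[] accordingly and returns how many switches got a tunnel;
 * everything past the first failure is skipped.
 */
static int tunnel_chain_top_down(const bool can_tunnel[], int n,
				 bool tunneled[])
{
	int i, count = 0;

	for (i = 0; i < n; i++)
		tunneled[i] = false;

	for (i = 0; i < n && can_tunnel[i]; i++) {
		tunneled[i] = true;
		count++;
	}
	return count;
}
```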
On Mon, Mar 25, 2019 at 12:16:16PM +0100, Lukas Wunner wrote:
> Yes, but my point is that it doesn't make much sense to establish a tunnel
> between a switch and one of its parent switches unless that parent switch
> is reachable from the root switch over a PCI tunnel or a series of PCI
> tunnels.
>
> It may be worth checking that condition, or, if new tunnels are
> established top-down in the daisy-chain, not establishing tunnels for
> switches beyond one where tunnel establishment has failed.
OK, got it :)