2017-06-06 12:25:26

by Mika Westerberg

Subject: [PATCH v4 00/27] Thunderbolt security levels and NVM firmware upgrade

Hi,

This is the fourth version of the patch series adding support for Thunderbolt
security levels and NVM firmware upgrade. PCs running Intel Falcon Ridge or
newer need these in order to connect devices when the security level is set
to "user" (SL1) or "secure" (SL2) in the BIOS.

The security levels were added to prevent DMA attacks when PCIe is
tunneled over the Thunderbolt fabric and an IOMMU is not available or cannot
be enabled for various reasons.

This series converts the Thunderbolt driver to a Linux bus and makes each
Thunderbolt device (aka Thunderbolt switch, router or endpoint) a Linux
device connected to that bus and addressed by its routing string
(the addressing scheme Thunderbolt itself uses).

With these patches the user can authorize devices using sysfs attributes
(following the USB model), for example:

# echo 1 > /sys/bus/thunderbolt/devices/0-1/authorized

If the BIOS security level is set to "secure" and the device supports
secure connect, a random key is uploaded to the device on the first connect:

# key=$(openssl rand -hex 32)
# echo $key > /sys/bus/thunderbolt/devices/0-1/key
# echo 1 > /sys/bus/thunderbolt/devices/0-1/authorized

Next time the device is plugged in the user can verify (challenge) the
device using the same key:

# echo $key > /sys/bus/thunderbolt/devices/0-1/key
# echo 2 > /sys/bus/thunderbolt/devices/0-1/authorized

The device identification information is found below each device sysfs
node and includes the following attributes:

device - Device ID
device_name - Name of the device
vendor - Vendor ID
vendor_name - Name of the vendor
unique_id - Unique ID string identifying this particular device

In addition, these patches add support for upgrading the NVM firmware running
on a host or device by running something like:

# dd if=KYK_TBT_FW_0018.bin of=/sys/bus/thunderbolt/devices/0-0/nvm_non_active0/nvmem
# echo 1 > /sys/bus/thunderbolt/devices/0-0/nvm_authenticate

More information on how to use the Thunderbolt bus can be found in patch [26/27].

Where Internal Connection Manager (ICM) firmware is available and usable,
we use it in the driver. This also includes newer Apple MacBooks with
Alpine Ridge. For older MacBooks the driver works as before, but in addition
the Thunderbolt bus is available there as well (including the possibility to
upgrade the NVM firmware of connected devices).

Note that on Macs the native Linux PCIe hotplug support does not work well
with Thunderbolt PCIe topologies, where all available resources need to be
put to the PCIe downstream port from which the PCIe chain is extended.
This is something we need to fix. In the meantime it can be worked
around by passing "pci=hpbussize=10,hpmemsize=2M" or so on the kernel
command line.

We are currently porting Amir's network driver on top of the Thunderbolt
bus.

The previous version (v3) of the series can be found in:

https://lwn.net/Articles/724465/

Changes from v3:
- Rename tb_switch_find_vsec_cap() to tb_switch_find_vse_cap() to
be consistent with PCI where VSEC stands for Vendor Specific
Extended Capability.
- Rename TB_VSEC_CAP_* to TB_VSE_CAP_* accordingly.
- Add missing call to mutex_destroy() for tb->lock
- Disable host NVM upgrade when in native mode
- Add SoB from Andreas

Greg, I did not add your ack to the two patches (9 and 26) you acked because
I think you are going to add your SoB anyway when you apply the series. Let
me know if you want me to add them. Thanks.

Lukas Wunner (1):
thunderbolt: Refactor and fix parsing of port drom entries

Mika Westerberg (26):
thunderbolt: Use const buffer pointer in write operations
thunderbolt: No need to read UID of the root switch on resume
thunderbolt: Do not try to read UID if DROM offset is read as 0
thunderbolt: Do not warn about newer DROM versions
thunderbolt: Add MSI-X support
thunderbolt: Rework capability handling
thunderbolt: Allow passing NULL to tb_ctl_free()
thunderbolt: Introduce thunderbolt bus and connection manager
thunderbolt: Convert switch to a device
thunderbolt: Fail switch adding operation if reading DROM fails
thunderbolt: Do not fail if DROM data CRC32 is invalid
thunderbolt: Read vendor and device name from DROM
thunderbolt: Move control channel messages to tb_msgs.h
thunderbolt: Expose get_route() to other files
thunderbolt: Expose make_header() to other files
thunderbolt: Let the connection manager handle all notifications
thunderbolt: Rework control channel to be more reliable
thunderbolt: Add new Thunderbolt PCI IDs
thunderbolt: Add support for NHI mailbox
thunderbolt: Store Thunderbolt generation in the switch structure
thunderbolt: Add support for DMA configuration based mailbox
thunderbolt: Do not touch the hardware if the NHI is gone on resume
thunderbolt: Add support for Internal Connection Manager (ICM)
thunderbolt: Add support for host and device NVM firmware upgrade
thunderbolt: Add documentation how Thunderbolt bus can be used
MAINTAINERS: Add maintainers for Thunderbolt driver

Documentation/ABI/testing/sysfs-bus-thunderbolt | 110 +++
Documentation/admin-guide/index.rst | 1 +
Documentation/admin-guide/thunderbolt.rst | 199 ++++
MAINTAINERS | 3 +
drivers/thunderbolt/Kconfig | 13 +-
drivers/thunderbolt/Makefile | 2 +-
drivers/thunderbolt/cap.c | 169 ++--
drivers/thunderbolt/ctl.c | 665 +++++++++----
drivers/thunderbolt/ctl.h | 105 +-
drivers/thunderbolt/dma_port.c | 524 ++++++++++
drivers/thunderbolt/dma_port.h | 34 +
drivers/thunderbolt/domain.c | 456 +++++++++
drivers/thunderbolt/eeprom.c | 119 ++-
drivers/thunderbolt/icm.c | 1089 +++++++++++++++++++++
drivers/thunderbolt/nhi.c | 306 +++++-
drivers/thunderbolt/nhi.h | 93 +-
drivers/thunderbolt/nhi_regs.h | 27 +
drivers/thunderbolt/switch.c | 1178 +++++++++++++++++++++--
drivers/thunderbolt/tb.c | 240 +++--
drivers/thunderbolt/tb.h | 251 ++++-
drivers/thunderbolt/tb_msgs.h | 260 +++++
drivers/thunderbolt/tb_regs.h | 50 +-
drivers/thunderbolt/tunnel_pci.c | 17 +-
23 files changed, 5329 insertions(+), 582 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-bus-thunderbolt
create mode 100644 Documentation/admin-guide/thunderbolt.rst
create mode 100644 drivers/thunderbolt/dma_port.c
create mode 100644 drivers/thunderbolt/dma_port.h
create mode 100644 drivers/thunderbolt/domain.c
create mode 100644 drivers/thunderbolt/icm.c
create mode 100644 drivers/thunderbolt/tb_msgs.h

--
2.11.0


2017-06-06 12:25:40

by Mika Westerberg

Subject: [PATCH v4 26/27] thunderbolt: Add documentation how Thunderbolt bus can be used

Since there is no tool yet that handles all the low-level details
of connecting devices and upgrading their firmware, add a small document
that shows how the Thunderbolt bus can be used directly from the command
line.

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
Documentation/admin-guide/index.rst | 1 +
Documentation/admin-guide/thunderbolt.rst | 199 ++++++++++++++++++++++++++++++
2 files changed, 200 insertions(+)
create mode 100644 Documentation/admin-guide/thunderbolt.rst

diff --git a/Documentation/admin-guide/index.rst b/Documentation/admin-guide/index.rst
index 8c60a8a32a1a..6d99a7ce6e21 100644
--- a/Documentation/admin-guide/index.rst
+++ b/Documentation/admin-guide/index.rst
@@ -61,6 +61,7 @@ configure specific aspects of kernel behavior to your liking.
java
ras
pm/index
+ thunderbolt

.. only:: subproject and html

diff --git a/Documentation/admin-guide/thunderbolt.rst b/Documentation/admin-guide/thunderbolt.rst
new file mode 100644
index 000000000000..6a4cd1f159ca
--- /dev/null
+++ b/Documentation/admin-guide/thunderbolt.rst
@@ -0,0 +1,199 @@
+=============
+ Thunderbolt
+=============
+The interface presented here is not meant for end users. Instead there
+should be a userspace tool that handles all the low-level details, keeps
+a database of the authorized devices and prompts the user for new connections.
+
+More details about the sysfs interface for Thunderbolt devices can be
+found in ``Documentation/ABI/testing/sysfs-bus-thunderbolt``.
+
+Those users who just want to connect any device without any sort of
+manual work can add the following line to
+``/etc/udev/rules.d/99-local.rules``::
+
+ ACTION=="add", SUBSYSTEM=="thunderbolt", ATTR{authorized}=="0", ATTR{authorized}="1"
+
+This will authorize all devices automatically when they appear. However,
+keep in mind that this bypasses the security levels and makes the system
+vulnerable to DMA attacks.
+
+Security levels and how to use them
+-----------------------------------
+Starting from the Intel Falcon Ridge Thunderbolt controller there are 4
+security levels available. The reason for these is that the
+connected devices can be DMA masters and thus read the contents of host
+memory without the CPU and OS knowing about it. There are ways to prevent
+this by setting up an IOMMU but it is not always available for various
+reasons.
+
+The security levels are as follows:
+
+ none
+ All devices are automatically connected by the firmware. No user
+ approval is needed. In BIOS settings this is typically called
+ *Legacy mode*.
+
+ user
+ User is asked whether the device is allowed to be connected.
+ The user can then make the decision based on the device identification
+ information available through ``/sys/bus/thunderbolt/devices``.
+ In BIOS settings this is typically called *Unique ID*.
+
+ secure
+ User is asked whether the device is allowed to be connected. In
+ addition to the UUID, the device (if it supports secure connect) is
+ sent a challenge that should match the expected one based on a random
+ key written to the ``key`` sysfs attribute. In BIOS settings this is
+ typically called *One time saved key*.
+
+ dponly
+ The firmware automatically creates tunnels for Display Port and
+ USB. No PCIe tunneling is done. In BIOS settings this is
+ typically called *Display Port Only*.
+
+The current security level can be read from
+``/sys/bus/thunderbolt/devices/domainX/security`` where ``domainX`` is
+the Thunderbolt domain the host controller manages. There is typically
+one domain per Thunderbolt host controller.
+
+If the security level reads as ``user`` or ``secure`` the connected
+device must be authorized by the user before PCIe tunnels are created
+(i.e. before the PCIe device appears).
+
+Each Thunderbolt device plugged in will appear in sysfs under
+``/sys/bus/thunderbolt/devices``. The device directory carries
+information that can be used to identify the particular device,
+including its name and UUID.
+
+Authorizing devices when security level is ``user`` or ``secure``
+-----------------------------------------------------------------
+When a device is plugged in it will appear in sysfs as follows::
+
+ /sys/bus/thunderbolt/devices/0-1/authorized - 0
+ /sys/bus/thunderbolt/devices/0-1/device - 0x8004
+ /sys/bus/thunderbolt/devices/0-1/device_name - Thunderbolt to FireWire Adapter
+ /sys/bus/thunderbolt/devices/0-1/vendor - 0x1
+ /sys/bus/thunderbolt/devices/0-1/vendor_name - Apple, Inc.
+ /sys/bus/thunderbolt/devices/0-1/unique_id - e0376f00-0300-0100-ffff-ffffffffffff
+
+The ``authorized`` attribute reads 0, which means no PCIe tunnels are
+created yet. The user can authorize the device by simply::
+
+ # echo 1 > /sys/bus/thunderbolt/devices/0-1/authorized
+
+This will create the PCIe tunnels and the device is now connected.
+
+If the device supports secure connect, and the domain security level is
+set to ``secure``, it has an additional attribute ``key`` which can hold
+a random 32 byte value used for authorization and challenging the device in
+future connects::
+
+ /sys/bus/thunderbolt/devices/0-3/authorized - 0
+ /sys/bus/thunderbolt/devices/0-3/device - 0x305
+ /sys/bus/thunderbolt/devices/0-3/device_name - AKiTiO Thunder3 PCIe Box
+ /sys/bus/thunderbolt/devices/0-3/key -
+ /sys/bus/thunderbolt/devices/0-3/vendor - 0x41
+ /sys/bus/thunderbolt/devices/0-3/vendor_name - inXtron
+ /sys/bus/thunderbolt/devices/0-3/unique_id - dc010000-0000-8508-a22d-32ca6421cb16
+
+Notice the key is empty by default.
+
+If the user does not want to use secure connect, they can just ``echo 1``
+to the ``authorized`` attribute and the PCIe tunnels will be created in
+the same way as in the ``user`` security level.
+
+If the user wants to use secure connect, the first time the device is
+plugged in a key needs to be created and sent to the device::
+
+ # key=$(openssl rand -hex 32)
+ # echo $key > /sys/bus/thunderbolt/devices/0-3/key
+ # echo 1 > /sys/bus/thunderbolt/devices/0-3/authorized
+
+Now the device is connected (PCIe tunnels are created) and in addition
+the key is stored on the device NVM.
+
+Next time the device is plugged in the user can verify (challenge) the
+device using the same key::
+
+ # echo $key > /sys/bus/thunderbolt/devices/0-3/key
+ # echo 2 > /sys/bus/thunderbolt/devices/0-3/authorized
+
+If the challenge the device returns matches the one we expect based
+on the key, the device is connected and the PCIe tunnels are created.
+However, if the challenge fails, no tunnels are created and an error is
+returned to the user.
+
+If the user still wants to connect the device, they can either approve
+the device without a key or write a new key and write 1 to the
+``authorized`` file to get the new key stored on the device NVM.
+
+Upgrading NVM on Thunderbolt device or host
+-------------------------------------------
+Since most of the functionality is handled in firmware running on a
+host controller or a device, it is important that the firmware can be
+upgraded to the latest version where possible bugs have been fixed.
+Typically OEMs provide this firmware from their support site.
+
+There is also a central site which has links to firmware downloads
+for some machines:
+
+ `Thunderbolt Updates <https://thunderbolttechnology.net/updates>`_
+
+Before you upgrade firmware on a device or host, please make sure it is
+a suitable one. Failing to do that may leave the device (or host) in a
+state where it cannot be used properly anymore without special tools!
+
+Host NVM upgrade on Apple Macs is not supported.
+
+Once the NVM image has been downloaded, you need to plug in a
+Thunderbolt device so that the host controller appears. It does not
+matter which device is connected (unless you are upgrading NVM on a
+device - then you need to connect that particular device).
+
+Note that an OEM-specific method to power the controller up ("force power")
+may be available for your system, in which case there is no need to plug in
+a Thunderbolt device.
+
+After that we can write the firmware to the non-active parts of the NVM
+of the host or device. As an example, here is how the Intel NUC6i7KYK
+(Skull Canyon) Thunderbolt controller NVM is upgraded::
+
+ # dd if=KYK_TBT_FW_0018.bin of=/sys/bus/thunderbolt/devices/0-0/nvm_non_active0/nvmem
+
+Once the operation completes we can trigger the NVM authentication and
+upgrade process as follows::
+
+ # echo 1 > /sys/bus/thunderbolt/devices/0-0/nvm_authenticate
+
+If no errors are returned, the host controller shortly disappears. Once
+it comes back the driver notices it and initiates a full power cycle.
+After a while the host controller appears again and this time it should
+be fully functional.
+
+We can verify that the new NVM firmware is active by running the
+following commands::
+
+ # cat /sys/bus/thunderbolt/devices/0-0/nvm_authenticate
+ 0x0
+ # cat /sys/bus/thunderbolt/devices/0-0/nvm_version
+ 18.0
+
+If ``nvm_authenticate`` contains anything other than 0x0, it is the error
+code from the last authentication cycle, which means the authentication
+of the NVM image failed.
+
+Note that the names of the NVMem devices ``nvm_activeN`` and ``nvm_non_activeN``
+depend on the order in which they are registered in the NVMem subsystem. N in
+the name is the identifier added by the NVMem subsystem.
+
+Upgrading NVM when host controller is in safe mode
+--------------------------------------------------
+If the existing NVM is not properly authenticated (or is missing) the
+host controller goes into safe mode, which means that the only available
+functionality is flashing a new NVM image. When in this mode, reading
+``nvm_version`` fails with ``ENODATA`` and the device identification
+information is missing.
+
+To recover from this mode, one needs to flash a valid NVM image to the
+host controller in the same way as described in the previous chapter.
--
2.11.0

2017-06-06 12:25:50

by Mika Westerberg

Subject: [PATCH v4 17/27] thunderbolt: Let the connection manager handle all notifications

Currently the control channel (ctl.c) handles the one supported
notification (PLUG_EVENT) and sends back an ACK accordingly. However, we
are going to add support for the internal connection manager (ICM) that
needs to handle different notifications. So instead of handling
everything in the control channel, we change the callback to take an
arbitrary Thunderbolt packet and convert the native connection manager
to handle the event itself.

In addition we only push replies we know of to the response FIFO.
Everything else is treated as a notification (or request) and is expected
to be dealt with by the connection manager implementation.
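
To sketch the idea (the types and constants below come from this patch,
but the stub handler itself is only illustrative), a connection manager
now receives the raw packet and its type and decides itself how to react:

    static void example_handle_event(struct tb *tb, enum tb_cfg_pkg_type type,
                                     const void *buf, size_t size)
    {
            /* a native connection manager only cares about plug events */
            if (type != TB_CFG_PKG_EVENT)
                    return;

            /* ack the plug event and schedule hotplug handling here */
    }

    static const struct tb_cm_ops example_cm_ops = {
            .handle_event = example_handle_event,
    };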

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/ctl.c | 86 ++++++++++++++++++++++++++++++--------------
drivers/thunderbolt/ctl.h | 5 +--
drivers/thunderbolt/domain.c | 15 +++++++-
drivers/thunderbolt/tb.c | 30 ++++++++++++----
drivers/thunderbolt/tb.h | 5 +--
5 files changed, 103 insertions(+), 38 deletions(-)

diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index c6633da582b8..5417ed244edc 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -35,7 +35,7 @@ struct tb_ctl {
DECLARE_KFIFO(response_fifo, struct ctl_pkg*, 16);
struct completion response_ready;

- hotplug_cb callback;
+ event_cb callback;
void *callback_data;
};

@@ -52,6 +52,9 @@ struct tb_ctl {
#define tb_ctl_info(ctl, format, arg...) \
dev_info(&(ctl)->nhi->pdev->dev, format, ## arg)

+#define tb_ctl_dbg(ctl, format, arg...) \
+ dev_dbg(&(ctl)->nhi->pdev->dev, format, ## arg)
+
/* utility functions */

static int check_header(struct ctl_pkg *pkg, u32 len, enum tb_cfg_pkg_type type,
@@ -272,24 +275,12 @@ static int tb_ctl_tx(struct tb_ctl *ctl, const void *data, size_t len,
}

/**
- * tb_ctl_handle_plug_event() - acknowledge a plug event, invoke ctl->callback
+ * tb_ctl_handle_event() - pass an event packet to ctl->callback
*/
-static void tb_ctl_handle_plug_event(struct tb_ctl *ctl,
- struct ctl_pkg *response)
+static void tb_ctl_handle_event(struct tb_ctl *ctl, enum tb_cfg_pkg_type type,
+ struct ctl_pkg *pkg, size_t size)
{
- struct cfg_event_pkg *pkg = response->buffer;
- u64 route = tb_cfg_get_route(&pkg->header);
-
- if (check_header(response, sizeof(*pkg), TB_CFG_PKG_EVENT, route)) {
- tb_ctl_warn(ctl, "malformed TB_CFG_PKG_EVENT\n");
- return;
- }
-
- if (tb_cfg_error(ctl, route, pkg->port, TB_CFG_ERROR_ACK_PLUG_EVENT))
- tb_ctl_warn(ctl, "could not ack plug event on %llx:%x\n",
- route, pkg->port);
- WARN(pkg->zero, "pkg->zero is %#x\n", pkg->zero);
- ctl->callback(ctl->callback_data, route, pkg->port, pkg->unplug);
+ ctl->callback(ctl->callback_data, type, pkg->buffer, size);
}

static void tb_ctl_rx_submit(struct ctl_pkg *pkg)
@@ -302,10 +293,29 @@ static void tb_ctl_rx_submit(struct ctl_pkg *pkg)
*/
}

+static int tb_async_error(const struct ctl_pkg *pkg)
+{
+ const struct cfg_error_pkg *error = (const struct cfg_error_pkg *)pkg;
+
+ if (pkg->frame.eof != TB_CFG_PKG_ERROR)
+ return false;
+
+ switch (error->error) {
+ case TB_CFG_ERROR_LINK_ERROR:
+ case TB_CFG_ERROR_HEC_ERROR_DETECTED:
+ case TB_CFG_ERROR_FLOW_CONTROL_ERROR:
+ return true;
+
+ default:
+ return false;
+ }
+}
+
static void tb_ctl_rx_callback(struct tb_ring *ring, struct ring_frame *frame,
bool canceled)
{
struct ctl_pkg *pkg = container_of(frame, typeof(*pkg), frame);
+ __be32 crc32;

if (canceled)
return; /*
@@ -320,18 +330,42 @@ static void tb_ctl_rx_callback(struct tb_ring *ring, struct ring_frame *frame,
}

frame->size -= 4; /* remove checksum */
- if (*(__be32 *) (pkg->buffer + frame->size)
- != tb_crc(pkg->buffer, frame->size)) {
- tb_ctl_err(pkg->ctl,
- "RX: checksum mismatch, dropping packet\n");
- goto rx;
- }
+ crc32 = tb_crc(pkg->buffer, frame->size);
be32_to_cpu_array(pkg->buffer, pkg->buffer, frame->size / 4);

- if (frame->eof == TB_CFG_PKG_EVENT) {
- tb_ctl_handle_plug_event(pkg->ctl, pkg);
+ switch (frame->eof) {
+ case TB_CFG_PKG_READ:
+ case TB_CFG_PKG_WRITE:
+ case TB_CFG_PKG_ERROR:
+ case TB_CFG_PKG_OVERRIDE:
+ case TB_CFG_PKG_RESET:
+ if (*(__be32 *)(pkg->buffer + frame->size) != crc32) {
+ tb_ctl_err(pkg->ctl,
+ "RX: checksum mismatch, dropping packet\n");
+ goto rx;
+ }
+ if (tb_async_error(pkg)) {
+ tb_ctl_handle_event(pkg->ctl, frame->eof,
+ pkg, frame->size);
+ goto rx;
+ }
+ break;
+
+ case TB_CFG_PKG_EVENT:
+ if (*(__be32 *)(pkg->buffer + frame->size) != crc32) {
+ tb_ctl_err(pkg->ctl,
+ "RX: checksum mismatch, dropping packet\n");
+ goto rx;
+ }
+ tb_ctl_handle_event(pkg->ctl, frame->eof, pkg, frame->size);
+ goto rx;
+
+ default:
+ tb_ctl_dbg(pkg->ctl, "RX: unknown package %#x, dropping\n",
+ frame->eof);
goto rx;
}
+
if (!kfifo_put(&pkg->ctl->response_fifo, pkg)) {
tb_ctl_err(pkg->ctl, "RX: fifo is full\n");
goto rx;
@@ -379,7 +413,7 @@ static struct tb_cfg_result tb_ctl_rx(struct tb_ctl *ctl, void *buffer,
*
* Return: Returns a pointer on success or NULL on failure.
*/
-struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, hotplug_cb cb, void *cb_data)
+struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data)
{
int i;
struct tb_ctl *ctl = kzalloc(sizeof(*ctl), GFP_KERNEL);
diff --git a/drivers/thunderbolt/ctl.h b/drivers/thunderbolt/ctl.h
index 914da86ec77d..2b23e030a85b 100644
--- a/drivers/thunderbolt/ctl.h
+++ b/drivers/thunderbolt/ctl.h
@@ -13,9 +13,10 @@
/* control channel */
struct tb_ctl;

-typedef void (*hotplug_cb)(void *data, u64 route, u8 port, bool unplug);
+typedef void (*event_cb)(void *data, enum tb_cfg_pkg_type type,
+ const void *buf, size_t size);

-struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, hotplug_cb cb, void *cb_data);
+struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data);
void tb_ctl_start(struct tb_ctl *ctl);
void tb_ctl_stop(struct tb_ctl *ctl);
void tb_ctl_free(struct tb_ctl *ctl);
diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c
index 3302f4c59638..54bc15f9bf6b 100644
--- a/drivers/thunderbolt/domain.c
+++ b/drivers/thunderbolt/domain.c
@@ -95,6 +95,19 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
return NULL;
}

+static void tb_domain_event_cb(void *data, enum tb_cfg_pkg_type type,
+ const void *buf, size_t size)
+{
+ struct tb *tb = data;
+
+ if (!tb->cm_ops->handle_event) {
+ tb_warn(tb, "domain does not have event handler\n");
+ return;
+ }
+
+ tb->cm_ops->handle_event(tb, type, buf, size);
+}
+
/**
* tb_domain_add() - Add domain to the system
* @tb: Domain to add
@@ -115,7 +128,7 @@ int tb_domain_add(struct tb *tb)

mutex_lock(&tb->lock);

- tb->ctl = tb_ctl_alloc(tb->nhi, tb->cm_ops->hotplug, tb);
+ tb->ctl = tb_ctl_alloc(tb->nhi, tb_domain_event_cb, tb);
if (!tb->ctl) {
ret = -ENOMEM;
goto err_unlock;
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 94ecac012428..ea9de49b5e10 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -311,18 +311,34 @@ static void tb_handle_hotplug(struct work_struct *work)
*
* Delegates to tb_handle_hotplug.
*/
-static void tb_schedule_hotplug_handler(void *data, u64 route, u8 port,
- bool unplug)
+static void tb_handle_event(struct tb *tb, enum tb_cfg_pkg_type type,
+ const void *buf, size_t size)
{
- struct tb *tb = data;
- struct tb_hotplug_event *ev = kmalloc(sizeof(*ev), GFP_KERNEL);
+ const struct cfg_event_pkg *pkg = buf;
+ struct tb_hotplug_event *ev;
+ u64 route;
+
+ if (type != TB_CFG_PKG_EVENT) {
+ tb_warn(tb, "unexpected event %#x, ignoring\n", type);
+ return;
+ }
+
+ route = tb_cfg_get_route(&pkg->header);
+
+ if (tb_cfg_error(tb->ctl, route, pkg->port,
+ TB_CFG_ERROR_ACK_PLUG_EVENT)) {
+ tb_warn(tb, "could not ack plug event on %llx:%x\n", route,
+ pkg->port);
+ }
+
+ ev = kmalloc(sizeof(*ev), GFP_KERNEL);
if (!ev)
return;
INIT_WORK(&ev->work, tb_handle_hotplug);
ev->tb = tb;
ev->route = route;
- ev->port = port;
- ev->unplug = unplug;
+ ev->port = pkg->port;
+ ev->unplug = pkg->unplug;
queue_work(tb->wq, &ev->work);
}

@@ -419,7 +435,7 @@ static const struct tb_cm_ops tb_cm_ops = {
.stop = tb_stop,
.suspend_noirq = tb_suspend_noirq,
.resume_noirq = tb_resume_noirq,
- .hotplug = tb_schedule_hotplug_handler,
+ .handle_event = tb_handle_event,
};

struct tb *tb_probe(struct tb_nhi *nhi)
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 6d4910ef2eb9..5bb9a5d60d2c 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -118,14 +118,15 @@ struct tb_path {
* @stop: Stops the domain
* @suspend_noirq: Connection manager specific suspend_noirq
* @resume_noirq: Connection manager specific resume_noirq
- * @hotplug: Handle hotplug event
+ * @handle_event: Handle thunderbolt event
*/
struct tb_cm_ops {
int (*start)(struct tb *tb);
void (*stop)(struct tb *tb);
int (*suspend_noirq)(struct tb *tb);
int (*resume_noirq)(struct tb *tb);
- hotplug_cb hotplug;
+ void (*handle_event)(struct tb *tb, enum tb_cfg_pkg_type,
+ const void *buf, size_t size);
};

/**
--
2.11.0

2017-06-06 12:25:34

by Mika Westerberg

Subject: [PATCH v4 07/27] thunderbolt: Allow passing NULL to tb_ctl_free()

Following the usual pattern used in many places, we allow passing a NULL
pointer to tb_ctl_free(). Then the caller can call the function regardless
of whether it has allocated a control channel or not, making the code a bit
simpler.
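
For illustration only (the teardown helper below is a made-up sketch, not
code from this series), an error or cleanup path can now free the control
channel unconditionally:

    static void example_teardown(struct tb *tb)
    {
            /* tb_ctl_free(NULL) is a no-op now, so no NULL check is needed */
            tb_ctl_free(tb->ctl);
            tb->ctl = NULL;
    }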

Suggested-by: Andy Shevchenko <[email protected]>
Signed-off-by: Mika Westerberg <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/ctl.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index 889a32dd21e7..f8290a577b2b 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -520,6 +520,10 @@ struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, hotplug_cb cb, void *cb_data)
void tb_ctl_free(struct tb_ctl *ctl)
{
int i;
+
+ if (!ctl)
+ return;
+
if (ctl->rx)
ring_free(ctl->rx);
if (ctl->tx)
--
2.11.0

2017-06-06 12:25:41

by Mika Westerberg

Subject: [PATCH v4 14/27] thunderbolt: Move control channel messages to tb_msgs.h

We will be forwarding notifications received from the control channel to
the connection manager implementations. This way they can decide what, if
anything, to do when a notification is received.

To be able to use control channel messages from other files, move them
to tb_msgs.h.

No functional changes intended.

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/ctl.c | 76 -----------------------------
drivers/thunderbolt/ctl.h | 16 +------
drivers/thunderbolt/tb_msgs.h | 108 ++++++++++++++++++++++++++++++++++++++++++
3 files changed, 109 insertions(+), 91 deletions(-)
create mode 100644 drivers/thunderbolt/tb_msgs.h

diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index f8290a577b2b..24118c60b062 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -52,82 +52,6 @@ struct tb_ctl {
#define tb_ctl_info(ctl, format, arg...) \
dev_info(&(ctl)->nhi->pdev->dev, format, ## arg)

-
-/* configuration packets definitions */
-
-enum tb_cfg_pkg_type {
- TB_CFG_PKG_READ = 1,
- TB_CFG_PKG_WRITE = 2,
- TB_CFG_PKG_ERROR = 3,
- TB_CFG_PKG_NOTIFY_ACK = 4,
- TB_CFG_PKG_EVENT = 5,
- TB_CFG_PKG_XDOMAIN_REQ = 6,
- TB_CFG_PKG_XDOMAIN_RESP = 7,
- TB_CFG_PKG_OVERRIDE = 8,
- TB_CFG_PKG_RESET = 9,
- TB_CFG_PKG_PREPARE_TO_SLEEP = 0xd,
-};
-
-/* common header */
-struct tb_cfg_header {
- u32 route_hi:22;
- u32 unknown:10; /* highest order bit is set on replies */
- u32 route_lo;
-} __packed;
-
-/* additional header for read/write packets */
-struct tb_cfg_address {
- u32 offset:13; /* in dwords */
- u32 length:6; /* in dwords */
- u32 port:6;
- enum tb_cfg_space space:2;
- u32 seq:2; /* sequence number */
- u32 zero:3;
-} __packed;
-
-/* TB_CFG_PKG_READ, response for TB_CFG_PKG_WRITE */
-struct cfg_read_pkg {
- struct tb_cfg_header header;
- struct tb_cfg_address addr;
-} __packed;
-
-/* TB_CFG_PKG_WRITE, response for TB_CFG_PKG_READ */
-struct cfg_write_pkg {
- struct tb_cfg_header header;
- struct tb_cfg_address addr;
- u32 data[64]; /* maximum size, tb_cfg_address.length has 6 bits */
-} __packed;
-
-/* TB_CFG_PKG_ERROR */
-struct cfg_error_pkg {
- struct tb_cfg_header header;
- enum tb_cfg_error error:4;
- u32 zero1:4;
- u32 port:6;
- u32 zero2:2; /* Both should be zero, still they are different fields. */
- u32 zero3:16;
-} __packed;
-
-/* TB_CFG_PKG_EVENT */
-struct cfg_event_pkg {
- struct tb_cfg_header header;
- u32 port:6;
- u32 zero:25;
- bool unplug:1;
-} __packed;
-
-/* TB_CFG_PKG_RESET */
-struct cfg_reset_pkg {
- struct tb_cfg_header header;
-} __packed;
-
-/* TB_CFG_PKG_PREPARE_TO_SLEEP */
-struct cfg_pts_pkg {
- struct tb_cfg_header header;
- u32 data;
-} __packed;
-
-
/* utility functions */

static u64 get_route(struct tb_cfg_header header)
diff --git a/drivers/thunderbolt/ctl.h b/drivers/thunderbolt/ctl.h
index 83ae54947082..610980e3232f 100644
--- a/drivers/thunderbolt/ctl.h
+++ b/drivers/thunderbolt/ctl.h
@@ -8,6 +8,7 @@
#define _TB_CFG

#include "nhi.h"
+#include "tb_msgs.h"

/* control channel */
struct tb_ctl;
@@ -23,21 +24,6 @@ void tb_ctl_free(struct tb_ctl *ctl);

#define TB_CFG_DEFAULT_TIMEOUT 5000 /* msec */

-enum tb_cfg_space {
- TB_CFG_HOPS = 0,
- TB_CFG_PORT = 1,
- TB_CFG_SWITCH = 2,
- TB_CFG_COUNTERS = 3,
-};
-
-enum tb_cfg_error {
- TB_CFG_ERROR_PORT_NOT_CONNECTED = 0,
- TB_CFG_ERROR_INVALID_CONFIG_SPACE = 2,
- TB_CFG_ERROR_NO_SUCH_PORT = 4,
- TB_CFG_ERROR_ACK_PLUG_EVENT = 7, /* send as reply to TB_CFG_PKG_EVENT */
- TB_CFG_ERROR_LOOP = 8,
-};
-
struct tb_cfg_result {
u64 response_route;
u32 response_port; /*
diff --git a/drivers/thunderbolt/tb_msgs.h b/drivers/thunderbolt/tb_msgs.h
new file mode 100644
index 000000000000..761d56287149
--- /dev/null
+++ b/drivers/thunderbolt/tb_msgs.h
@@ -0,0 +1,108 @@
+/*
+ * Thunderbolt control channel messages
+ *
+ * Copyright (C) 2014 Andreas Noever <[email protected]>
+ * Copyright (C) 2017, Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _TB_MSGS
+#define _TB_MSGS
+
+#include <linux/types.h>
+
+enum tb_cfg_pkg_type {
+ TB_CFG_PKG_READ = 1,
+ TB_CFG_PKG_WRITE = 2,
+ TB_CFG_PKG_ERROR = 3,
+ TB_CFG_PKG_NOTIFY_ACK = 4,
+ TB_CFG_PKG_EVENT = 5,
+ TB_CFG_PKG_XDOMAIN_REQ = 6,
+ TB_CFG_PKG_XDOMAIN_RESP = 7,
+ TB_CFG_PKG_OVERRIDE = 8,
+ TB_CFG_PKG_RESET = 9,
+ TB_CFG_PKG_PREPARE_TO_SLEEP = 0xd,
+
+};
+
+enum tb_cfg_space {
+ TB_CFG_HOPS = 0,
+ TB_CFG_PORT = 1,
+ TB_CFG_SWITCH = 2,
+ TB_CFG_COUNTERS = 3,
+};
+
+enum tb_cfg_error {
+ TB_CFG_ERROR_PORT_NOT_CONNECTED = 0,
+ TB_CFG_ERROR_LINK_ERROR = 1,
+ TB_CFG_ERROR_INVALID_CONFIG_SPACE = 2,
+ TB_CFG_ERROR_NO_SUCH_PORT = 4,
+ TB_CFG_ERROR_ACK_PLUG_EVENT = 7, /* send as reply to TB_CFG_PKG_EVENT */
+ TB_CFG_ERROR_LOOP = 8,
+ TB_CFG_ERROR_HEC_ERROR_DETECTED = 12,
+ TB_CFG_ERROR_FLOW_CONTROL_ERROR = 13,
+};
+
+/* common header */
+struct tb_cfg_header {
+ u32 route_hi:22;
+ u32 unknown:10; /* highest order bit is set on replies */
+ u32 route_lo;
+} __packed;
+
+/* additional header for read/write packets */
+struct tb_cfg_address {
+ u32 offset:13; /* in dwords */
+ u32 length:6; /* in dwords */
+ u32 port:6;
+ enum tb_cfg_space space:2;
+ u32 seq:2; /* sequence number */
+ u32 zero:3;
+} __packed;
+
+/* TB_CFG_PKG_READ, response for TB_CFG_PKG_WRITE */
+struct cfg_read_pkg {
+ struct tb_cfg_header header;
+ struct tb_cfg_address addr;
+} __packed;
+
+/* TB_CFG_PKG_WRITE, response for TB_CFG_PKG_READ */
+struct cfg_write_pkg {
+ struct tb_cfg_header header;
+ struct tb_cfg_address addr;
+ u32 data[64]; /* maximum size, tb_cfg_address.length has 6 bits */
+} __packed;
+
+/* TB_CFG_PKG_ERROR */
+struct cfg_error_pkg {
+ struct tb_cfg_header header;
+ enum tb_cfg_error error:4;
+ u32 zero1:4;
+ u32 port:6;
+ u32 zero2:2; /* Both should be zero, still they are different fields. */
+ u32 zero3:16;
+} __packed;
+
+/* TB_CFG_PKG_EVENT */
+struct cfg_event_pkg {
+ struct tb_cfg_header header;
+ u32 port:6;
+ u32 zero:25;
+ bool unplug:1;
+} __packed;
+
+/* TB_CFG_PKG_RESET */
+struct cfg_reset_pkg {
+ struct tb_cfg_header header;
+} __packed;
+
+/* TB_CFG_PKG_PREPARE_TO_SLEEP */
+struct cfg_pts_pkg {
+ struct tb_cfg_header header;
+ u32 data;
+} __packed;
+
+#endif
--
2.11.0

2017-06-06 12:26:01

by Mika Westerberg

Subject: [PATCH v4 03/27] thunderbolt: Do not try to read UID if DROM offset is read as 0

At least Falcon Ridge, when in host mode, does not have any kind of DROM
available, and reading the DROM offset returns 0 for these controllers. Do
not try to read the DROM any further in that case.

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/eeprom.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
index 6392990c984d..e4e64b130514 100644
--- a/drivers/thunderbolt/eeprom.c
+++ b/drivers/thunderbolt/eeprom.c
@@ -276,6 +276,9 @@ int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid)
if (res)
return res;

+ if (drom_offset == 0)
+ return -ENODEV;
+
/* read uid */
res = tb_eeprom_read_n(sw, drom_offset, data, 9);
if (res)
--
2.11.0

2017-06-06 12:26:05

by Mika Westerberg

Subject: [PATCH v4 24/27] thunderbolt: Add support for Internal Connection Manager (ICM)

Starting from Intel Falcon Ridge, the internal connection manager running
on the Thunderbolt host controller has supported 4 security
levels. One reason for this is to prevent DMA attacks and only allow
connecting devices the user trusts.

The internal connection manager (ICM) is the preferred way of connecting
Thunderbolt devices over the software-only implementation typically used on
Macs. The driver communicates with the ICM using special Thunderbolt ring 0
(control channel) messages. In order to handle these messages we add
support for the ICM messages to the control channel.

The security levels are as follows:

none - No security, all tunnels are created automatically
user - User needs to approve the device before tunnels are created
secure - User needs to approve the device before tunnels are created.
The device is sent a challenge on future connects to be able
to verify it is actually the approved device.
dponly - Only Display Port and USB tunnels can be created and those
are created automatically.

The security levels are typically configurable from the system BIOS and
by default the level is set to "user" on many systems.

In this patch each Thunderbolt device will have either one or two new
sysfs attributes: authorized and key. The latter appears for devices
that support secure connect.

In order to identify the device, the user can read identification
information, including the UUID and name of the device, from sysfs and
based on that make a decision to authorize the device. The device is
authorized by simply writing 1 to the "authorized" sysfs attribute. This
follows the USB bus device authorization mechanism. The secure
connect flow requires an additional challenge step (writing 2 to the
"authorized" attribute) on future connects when the key has already been
stored to the NVM of the device.
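
As a rough sketch of the dispatch this implies (the wrapper below is
hypothetical; only the tb_domain_*() helpers come from this series):

    static int example_authorize(struct tb_switch *sw, int val)
    {
            switch (val) {
            case 1:
                    /* add the key first if one was written, then approve */
                    if (sw->key)
                            return tb_domain_approve_switch_key(sw->tb, sw);
                    return tb_domain_approve_switch(sw->tb, sw);
            case 2:
                    /* challenge against the key already stored in device NVM */
                    return tb_domain_challenge_switch_key(sw->tb, sw);
            default:
                    return -EINVAL;
            }
    }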

Non-ICM systems (before Alpine Ridge) continue to use the existing
functionality and the security level is set to none. For systems with
Alpine Ridge, even on Apple hardware, we will use ICM.

This code is based on the work done by Amir Levy and Michael Jamet.

Signed-off-by: Michael Jamet <[email protected]>
Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
Documentation/ABI/testing/sysfs-bus-thunderbolt | 48 +
drivers/thunderbolt/Kconfig | 12 +-
drivers/thunderbolt/Makefile | 2 +-
drivers/thunderbolt/ctl.c | 2 +
drivers/thunderbolt/domain.c | 195 +++++
drivers/thunderbolt/icm.c | 1058 +++++++++++++++++++++++
drivers/thunderbolt/nhi.c | 33 +-
drivers/thunderbolt/nhi_regs.h | 7 +
drivers/thunderbolt/switch.c | 222 +++++
drivers/thunderbolt/tb.c | 7 +
drivers/thunderbolt/tb.h | 79 ++
drivers/thunderbolt/tb_msgs.h | 152 ++++
12 files changed, 1805 insertions(+), 12 deletions(-)
create mode 100644 drivers/thunderbolt/icm.c

diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt
index 29a516f53d2c..05b7f9a6431f 100644
--- a/Documentation/ABI/testing/sysfs-bus-thunderbolt
+++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt
@@ -1,3 +1,51 @@
+What: /sys/bus/thunderbolt/devices/.../domainX/security
+Date: Sep 2017
+KernelVersion: 4.13
+Contact: [email protected]
+Description: This attribute holds the current Thunderbolt security level
+ set by the system BIOS. Possible values are:
+
+ none: All devices are automatically authorized
+ user: Devices are only authorized based on writing
+ appropriate value to the authorized attribute
+ secure: Require devices that support secure connect at
+ minimum. User needs to authorize each device.
+ dponly: Automatically tunnel Display port (and USB). No
+ PCIe tunnels are created.
+
+What: /sys/bus/thunderbolt/devices/.../authorized
+Date: Sep 2017
+KernelVersion: 4.13
+Contact: [email protected]
+Description: This attribute is used to authorize Thunderbolt devices
+ after they have been connected. If the device is not
+ authorized, no connections such as PCIe and Display Port
+ are made available to the system.
+
+ Contents of this attribute will be 0 when the device is not
+ yet authorized.
+
+ The following values are supported:
+ 1: The device will be authorized and connected
+
+ When the key attribute contains a 32 byte hex string the possible
+ values are:
+ 1: The 32 byte hex string is added to the device NVM and
+ the device is authorized.
+ 2: Send a challenge based on the 32 byte hex string. If the
+ challenge response from device is valid, the device is
+ authorized. In case of failure errno will be ENOKEY if
+ the device did not contain a key at all, and
+ EKEYREJECTED if the challenge response did not match.
+
+What: /sys/bus/thunderbolt/devices/.../key
+Date: Sep 2017
+KernelVersion: 4.13
+Contact: [email protected]
+Description: When a device supports Thunderbolt secure connect it will
+ have this attribute. Writing a 32 byte hex string changes
+ authorization to use the secure connection method instead.
+
What: /sys/bus/thunderbolt/devices/.../device
Date: Sep 2017
KernelVersion: 4.13
diff --git a/drivers/thunderbolt/Kconfig b/drivers/thunderbolt/Kconfig
index d35db16aa43f..a9cc724985ad 100644
--- a/drivers/thunderbolt/Kconfig
+++ b/drivers/thunderbolt/Kconfig
@@ -1,15 +1,15 @@
menuconfig THUNDERBOLT
- tristate "Thunderbolt support for Apple devices"
+ tristate "Thunderbolt support"
depends on PCI
depends on X86 || COMPILE_TEST
select APPLE_PROPERTIES if EFI_STUB && X86
select CRC32
+ select CRYPTO
+ select CRYPTO_HASH
help
- Cactus Ridge Thunderbolt Controller driver
- This driver is required if you want to hotplug Thunderbolt devices on
- Apple hardware.
-
- Device chaining is currently not supported.
+ Thunderbolt Controller driver. This driver is required if you
+ want to hotplug Thunderbolt devices on Apple hardware or on PCs
+ with Intel Falcon Ridge or newer.

To compile this driver a module, choose M here. The module will be
called thunderbolt.
diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile
index 9828e862dd35..4900febc6c8a 100644
--- a/drivers/thunderbolt/Makefile
+++ b/drivers/thunderbolt/Makefile
@@ -1,3 +1,3 @@
obj-${CONFIG_THUNDERBOLT} := thunderbolt.o
thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o tunnel_pci.o eeprom.o
-thunderbolt-objs += domain.o dma_port.o
+thunderbolt-objs += domain.o dma_port.o icm.o
diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index 27c30ff79a84..69c0232a22f8 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -463,6 +463,8 @@ static void tb_ctl_rx_callback(struct tb_ring *ring, struct ring_frame *frame,
"RX: checksum mismatch, dropping packet\n");
goto rx;
}
+ /* Fall through */
+ case TB_CFG_PKG_ICM_EVENT:
tb_ctl_handle_event(pkg->ctl, frame->eof, pkg, frame->size);
goto rx;

diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c
index 54bc15f9bf6b..f71b63e90016 100644
--- a/drivers/thunderbolt/domain.c
+++ b/drivers/thunderbolt/domain.c
@@ -13,11 +13,43 @@
#include <linux/idr.h>
#include <linux/module.h>
#include <linux/slab.h>
+#include <linux/random.h>
+#include <crypto/hash.h>

#include "tb.h"

static DEFINE_IDA(tb_domain_ida);

+static const char * const tb_security_names[] = {
+ [TB_SECURITY_NONE] = "none",
+ [TB_SECURITY_USER] = "user",
+ [TB_SECURITY_SECURE] = "secure",
+ [TB_SECURITY_DPONLY] = "dponly",
+};
+
+static ssize_t security_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct tb *tb = container_of(dev, struct tb, dev);
+
+ return sprintf(buf, "%s\n", tb_security_names[tb->security_level]);
+}
+static DEVICE_ATTR_RO(security);
+
+static struct attribute *domain_attrs[] = {
+ &dev_attr_security.attr,
+ NULL,
+};
+
+static struct attribute_group domain_attr_group = {
+ .attrs = domain_attrs,
+};
+
+static const struct attribute_group *domain_attr_groups[] = {
+ &domain_attr_group,
+ NULL,
+};
+
struct bus_type tb_bus_type = {
.name = "thunderbolt",
};
@@ -82,6 +114,7 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
tb->dev.parent = &nhi->pdev->dev;
tb->dev.bus = &tb_bus_type;
tb->dev.type = &tb_domain_type;
+ tb->dev.groups = domain_attr_groups;
dev_set_name(&tb->dev, "domain%d", tb->index);
device_initialize(&tb->dev);

@@ -140,6 +173,12 @@ int tb_domain_add(struct tb *tb)
*/
tb_ctl_start(tb->ctl);

+ if (tb->cm_ops->driver_ready) {
+ ret = tb->cm_ops->driver_ready(tb);
+ if (ret)
+ goto err_ctl_stop;
+ }
+
ret = device_add(&tb->dev);
if (ret)
goto err_ctl_stop;
@@ -231,6 +270,162 @@ int tb_domain_resume_noirq(struct tb *tb)
return ret;
}

+int tb_domain_suspend(struct tb *tb)
+{
+ int ret;
+
+ mutex_lock(&tb->lock);
+ if (tb->cm_ops->suspend) {
+ ret = tb->cm_ops->suspend(tb);
+ if (ret) {
+ mutex_unlock(&tb->lock);
+ return ret;
+ }
+ }
+ mutex_unlock(&tb->lock);
+ return 0;
+}
+
+void tb_domain_complete(struct tb *tb)
+{
+ mutex_lock(&tb->lock);
+ if (tb->cm_ops->complete)
+ tb->cm_ops->complete(tb);
+ mutex_unlock(&tb->lock);
+}
+
+/**
+ * tb_domain_approve_switch() - Approve switch
+ * @tb: Domain the switch belongs to
+ * @sw: Switch to approve
+ *
+ * This will approve switch by connection manager specific means. In
+ * case of success the connection manager will create tunnels for all
+ * supported protocols.
+ */
+int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw)
+{
+ struct tb_switch *parent_sw;
+
+ if (!tb->cm_ops->approve_switch)
+ return -EPERM;
+
+ /* The parent switch must be authorized before this one */
+ parent_sw = tb_to_switch(sw->dev.parent);
+ if (!parent_sw || !parent_sw->authorized)
+ return -EINVAL;
+
+ return tb->cm_ops->approve_switch(tb, sw);
+}
+
+/**
+ * tb_domain_approve_switch_key() - Approve switch and add key
+ * @tb: Domain the switch belongs to
+ * @sw: Switch to approve
+ *
+ * For switches that support secure connect, this function first adds
+ * key to the switch NVM using connection manager specific means. If
+ * adding the key is successful, the switch is approved and connected.
+ *
+ * Return: %0 on success and negative errno in case of failure.
+ */
+int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw)
+{
+ struct tb_switch *parent_sw;
+ int ret;
+
+ if (!tb->cm_ops->approve_switch || !tb->cm_ops->add_switch_key)
+ return -EPERM;
+
+ /* The parent switch must be authorized before this one */
+ parent_sw = tb_to_switch(sw->dev.parent);
+ if (!parent_sw || !parent_sw->authorized)
+ return -EINVAL;
+
+ ret = tb->cm_ops->add_switch_key(tb, sw);
+ if (ret)
+ return ret;
+
+ return tb->cm_ops->approve_switch(tb, sw);
+}
+
+/**
+ * tb_domain_challenge_switch_key() - Challenge and approve switch
+ * @tb: Domain the switch belongs to
+ * @sw: Switch to approve
+ *
+ * For switches that support secure connect, this function generates
+ * random challenge and sends it to the switch. The switch responds to
+ * this and if the response matches our random challenge, the switch is
+ * approved and connected.
+ *
+ * Return: %0 on success and negative errno in case of failure.
+ */
+int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw)
+{
+ u8 challenge[TB_SWITCH_KEY_SIZE];
+ u8 response[TB_SWITCH_KEY_SIZE];
+ u8 hmac[TB_SWITCH_KEY_SIZE];
+ struct tb_switch *parent_sw;
+ struct crypto_shash *tfm;
+ struct shash_desc *shash;
+ int ret;
+
+ if (!tb->cm_ops->approve_switch || !tb->cm_ops->challenge_switch_key)
+ return -EPERM;
+
+ /* The parent switch must be authorized before this one */
+ parent_sw = tb_to_switch(sw->dev.parent);
+ if (!parent_sw || !parent_sw->authorized)
+ return -EINVAL;
+
+ get_random_bytes(challenge, sizeof(challenge));
+ ret = tb->cm_ops->challenge_switch_key(tb, sw, challenge, response);
+ if (ret)
+ return ret;
+
+ tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
+ if (IS_ERR(tfm))
+ return PTR_ERR(tfm);
+
+ ret = crypto_shash_setkey(tfm, sw->key, TB_SWITCH_KEY_SIZE);
+ if (ret)
+ goto err_free_tfm;
+
+ shash = kzalloc(sizeof(*shash) + crypto_shash_descsize(tfm),
+ GFP_KERNEL);
+ if (!shash) {
+ ret = -ENOMEM;
+ goto err_free_tfm;
+ }
+
+ shash->tfm = tfm;
+ shash->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+
+ memset(hmac, 0, sizeof(hmac));
+ ret = crypto_shash_digest(shash, challenge, sizeof(hmac), hmac);
+ if (ret)
+ goto err_free_shash;
+
+ /* The returned HMAC must match the one we calculated */
+ if (memcmp(response, hmac, sizeof(hmac))) {
+ ret = -EKEYREJECTED;
+ goto err_free_shash;
+ }
+
+ crypto_free_shash(tfm);
+ kfree(shash);
+
+ return tb->cm_ops->approve_switch(tb, sw);
+
+err_free_shash:
+ kfree(shash);
+err_free_tfm:
+ crypto_free_shash(tfm);
+
+ return ret;
+}
+
int tb_domain_init(void)
{
return bus_register(&tb_bus_type);
diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
new file mode 100644
index 000000000000..0ffa4ec249ac
--- /dev/null
+++ b/drivers/thunderbolt/icm.c
@@ -0,0 +1,1058 @@
+/*
+ * Internal Thunderbolt Connection Manager. This is a firmware running on
+ * the Thunderbolt host controller performing most of the low-level
+ * handling.
+ *
+ * Copyright (C) 2017, Intel Corporation
+ * Authors: Michael Jamet <[email protected]>
+ * Mika Westerberg <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/delay.h>
+#include <linux/dmi.h>
+#include <linux/mutex.h>
+#include <linux/pci.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+
+#include "ctl.h"
+#include "nhi_regs.h"
+#include "tb.h"
+
+#define PCIE2CIO_CMD 0x30
+#define PCIE2CIO_CMD_TIMEOUT BIT(31)
+#define PCIE2CIO_CMD_START BIT(30)
+#define PCIE2CIO_CMD_WRITE BIT(21)
+#define PCIE2CIO_CMD_CS_MASK GENMASK(20, 19)
+#define PCIE2CIO_CMD_CS_SHIFT 19
+#define PCIE2CIO_CMD_PORT_MASK GENMASK(18, 13)
+#define PCIE2CIO_CMD_PORT_SHIFT 13
+
+#define PCIE2CIO_WRDATA 0x34
+#define PCIE2CIO_RDDATA 0x38
+
+#define PHY_PORT_CS1 0x37
+#define PHY_PORT_CS1_LINK_DISABLE BIT(14)
+#define PHY_PORT_CS1_LINK_STATE_MASK GENMASK(29, 26)
+#define PHY_PORT_CS1_LINK_STATE_SHIFT 26
+
+#define ICM_TIMEOUT 5000 /* ms */
+#define ICM_MAX_LINK 4
+#define ICM_MAX_DEPTH 6
+
+/**
+ * struct icm - Internal connection manager private data
+ * @request_lock: Makes sure only one message is sent to the ICM at a time
+ * @rescan_work: Work used to rescan the surviving switches after resume
+ * @upstream_port: Pointer to the PCIe upstream port this host
+ * controller is connected to. This is only set for systems
+ * where ICM needs to be started manually
+ * @vnd_cap: Vendor defined capability where PCIe2CIO mailbox resides
+ * (only set when @upstream_port is not %NULL)
+ * @is_supported: Checks if we can support ICM on this controller
+ * @get_mode: Read and return the ICM firmware mode (optional)
+ * @get_route: Find a route string for given switch
+ * @device_connected: Handle device connected ICM message
+ * @device_disconnected: Handle device disconnected ICM message
+ */
+struct icm {
+ struct mutex request_lock;
+ struct delayed_work rescan_work;
+ struct pci_dev *upstream_port;
+ int vnd_cap;
+ bool (*is_supported)(struct tb *tb);
+ int (*get_mode)(struct tb *tb);
+ int (*get_route)(struct tb *tb, u8 link, u8 depth, u64 *route);
+ void (*device_connected)(struct tb *tb,
+ const struct icm_pkg_header *hdr);
+ void (*device_disconnected)(struct tb *tb,
+ const struct icm_pkg_header *hdr);
+};
+
+struct icm_notification {
+ struct work_struct work;
+ struct icm_pkg_header *pkg;
+ struct tb *tb;
+};
+
+static inline struct tb *icm_to_tb(struct icm *icm)
+{
+ return ((void *)icm - sizeof(struct tb));
+}
+
+static inline u8 phy_port_from_route(u64 route, u8 depth)
+{
+ return tb_switch_phy_port_from_link(route >> ((depth - 1) * 8));
+}
+
+static inline u8 dual_link_from_link(u8 link)
+{
+ return link ? ((link - 1) ^ 0x01) + 1 : 0;
+}
+
+static inline u64 get_route(u32 route_hi, u32 route_lo)
+{
+ return (u64)route_hi << 32 | route_lo;
+}
+
+static inline bool is_apple(void)
+{
+ return dmi_match(DMI_BOARD_VENDOR, "Apple Inc.");
+}
+
+static bool icm_match(const struct tb_cfg_request *req,
+ const struct ctl_pkg *pkg)
+{
+ const struct icm_pkg_header *res_hdr = pkg->buffer;
+ const struct icm_pkg_header *req_hdr = req->request;
+
+ if (pkg->frame.eof != req->response_type)
+ return false;
+ if (res_hdr->code != req_hdr->code)
+ return false;
+
+ return true;
+}
+
+static bool icm_copy(struct tb_cfg_request *req, const struct ctl_pkg *pkg)
+{
+ const struct icm_pkg_header *hdr = pkg->buffer;
+
+ if (hdr->packet_id < req->npackets) {
+ size_t offset = hdr->packet_id * req->response_size;
+
+ memcpy(req->response + offset, pkg->buffer, req->response_size);
+ }
+
+ return hdr->packet_id == hdr->total_packets - 1;
+}
+
+static int icm_request(struct tb *tb, const void *request, size_t request_size,
+ void *response, size_t response_size, size_t npackets,
+ unsigned int timeout_msec)
+{
+ struct icm *icm = tb_priv(tb);
+ int retries = 3;
+
+ do {
+ struct tb_cfg_request *req;
+ struct tb_cfg_result res;
+
+ req = tb_cfg_request_alloc();
+ if (!req)
+ return -ENOMEM;
+
+ req->match = icm_match;
+ req->copy = icm_copy;
+ req->request = request;
+ req->request_size = request_size;
+ req->request_type = TB_CFG_PKG_ICM_CMD;
+ req->response = response;
+ req->npackets = npackets;
+ req->response_size = response_size;
+ req->response_type = TB_CFG_PKG_ICM_RESP;
+
+ mutex_lock(&icm->request_lock);
+ res = tb_cfg_request_sync(tb->ctl, req, timeout_msec);
+ mutex_unlock(&icm->request_lock);
+
+ tb_cfg_request_put(req);
+
+ if (res.err != -ETIMEDOUT)
+ return res.err == 1 ? -EIO : res.err;
+
+ usleep_range(20, 50);
+ } while (retries--);
+
+ return -ETIMEDOUT;
+}
+
+static bool icm_fr_is_supported(struct tb *tb)
+{
+ return !is_apple();
+}
+
+static inline int icm_fr_get_switch_index(u32 port)
+{
+ int index;
+
+ if ((port & ICM_PORT_TYPE_MASK) != TB_TYPE_PORT)
+ return 0;
+
+ index = port >> ICM_PORT_INDEX_SHIFT;
+ return index != 0xff ? index : 0;
+}
+
+static int icm_fr_get_route(struct tb *tb, u8 link, u8 depth, u64 *route)
+{
+ struct icm_fr_pkg_get_topology_response *switches, *sw;
+ struct icm_fr_pkg_get_topology request = {
+ .hdr = { .code = ICM_GET_TOPOLOGY },
+ };
+ size_t npackets = ICM_GET_TOPOLOGY_PACKETS;
+ int ret, index;
+ u8 i;
+
+ switches = kcalloc(npackets, sizeof(*switches), GFP_KERNEL);
+ if (!switches)
+ return -ENOMEM;
+
+ ret = icm_request(tb, &request, sizeof(request), switches,
+ sizeof(*switches), npackets, ICM_TIMEOUT);
+ if (ret)
+ goto err_free;
+
+ sw = &switches[0];
+ index = icm_fr_get_switch_index(sw->ports[link]);
+ if (!index) {
+ ret = -ENODEV;
+ goto err_free;
+ }
+
+ sw = &switches[index];
+ for (i = 1; i < depth; i++) {
+ unsigned int j;
+
+ if (!(sw->first_data & ICM_SWITCH_USED)) {
+ ret = -ENODEV;
+ goto err_free;
+ }
+
+ for (j = 0; j < ARRAY_SIZE(sw->ports); j++) {
+ index = icm_fr_get_switch_index(sw->ports[j]);
+ if (index > sw->switch_index) {
+ sw = &switches[index];
+ break;
+ }
+ }
+ }
+
+ *route = get_route(sw->route_hi, sw->route_lo);
+
+err_free:
+ kfree(switches);
+ return ret;
+}
+
+static int icm_fr_approve_switch(struct tb *tb, struct tb_switch *sw)
+{
+ struct icm_fr_pkg_approve_device request;
+ struct icm_fr_pkg_approve_device reply;
+ int ret;
+
+ memset(&request, 0, sizeof(request));
+ memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid));
+ request.hdr.code = ICM_APPROVE_DEVICE;
+ request.connection_id = sw->connection_id;
+ request.connection_key = sw->connection_key;
+
+ memset(&reply, 0, sizeof(reply));
+ /* Use larger timeout as establishing tunnels can take some time */
+ ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+ 1, 10000);
+ if (ret)
+ return ret;
+
+ if (reply.hdr.flags & ICM_FLAGS_ERROR) {
+ tb_warn(tb, "PCIe tunnel creation failed\n");
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int icm_fr_add_switch_key(struct tb *tb, struct tb_switch *sw)
+{
+ struct icm_fr_pkg_add_device_key request;
+ struct icm_fr_pkg_add_device_key_response reply;
+ int ret;
+
+ memset(&request, 0, sizeof(request));
+ memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid));
+ request.hdr.code = ICM_ADD_DEVICE_KEY;
+ request.connection_id = sw->connection_id;
+ request.connection_key = sw->connection_key;
+ memcpy(request.key, sw->key, TB_SWITCH_KEY_SIZE);
+
+ memset(&reply, 0, sizeof(reply));
+ ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+ 1, ICM_TIMEOUT);
+ if (ret)
+ return ret;
+
+ if (reply.hdr.flags & ICM_FLAGS_ERROR) {
+ tb_warn(tb, "Adding key to switch failed\n");
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int icm_fr_challenge_switch_key(struct tb *tb, struct tb_switch *sw,
+ const u8 *challenge, u8 *response)
+{
+ struct icm_fr_pkg_challenge_device request;
+ struct icm_fr_pkg_challenge_device_response reply;
+ int ret;
+
+ memset(&request, 0, sizeof(request));
+ memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid));
+ request.hdr.code = ICM_CHALLENGE_DEVICE;
+ request.connection_id = sw->connection_id;
+ request.connection_key = sw->connection_key;
+ memcpy(request.challenge, challenge, TB_SWITCH_KEY_SIZE);
+
+ memset(&reply, 0, sizeof(reply));
+ ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+ 1, ICM_TIMEOUT);
+ if (ret)
+ return ret;
+
+ if (reply.hdr.flags & ICM_FLAGS_ERROR)
+ return -EKEYREJECTED;
+ if (reply.hdr.flags & ICM_FLAGS_NO_KEY)
+ return -ENOKEY;
+
+ memcpy(response, reply.response, TB_SWITCH_KEY_SIZE);
+
+ return 0;
+}
+
+static void remove_switch(struct tb_switch *sw)
+{
+ struct tb_switch *parent_sw;
+
+ parent_sw = tb_to_switch(sw->dev.parent);
+ tb_port_at(tb_route(sw), parent_sw)->remote = NULL;
+ tb_switch_remove(sw);
+}
+
+static void
+icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
+{
+ const struct icm_fr_event_device_connected *pkg =
+ (const struct icm_fr_event_device_connected *)hdr;
+ struct tb_switch *sw, *parent_sw;
+ struct icm *icm = tb_priv(tb);
+ bool authorized = false;
+ u8 link, depth;
+ u64 route;
+ int ret;
+
+ link = pkg->link_info & ICM_LINK_INFO_LINK_MASK;
+ depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >>
+ ICM_LINK_INFO_DEPTH_SHIFT;
+ authorized = pkg->link_info & ICM_LINK_INFO_APPROVED;
+
+ ret = icm->get_route(tb, link, depth, &route);
+ if (ret) {
+ tb_err(tb, "failed to find route string for switch at %u.%u\n",
+ link, depth);
+ return;
+ }
+
+ sw = tb_switch_find_by_uuid(tb, &pkg->ep_uuid);
+ if (sw) {
+ u8 phy_port, sw_phy_port;
+
+ parent_sw = tb_to_switch(sw->dev.parent);
+ sw_phy_port = phy_port_from_route(tb_route(sw), sw->depth);
+ phy_port = phy_port_from_route(route, depth);
+
+ /*
+ * On resume ICM will send us connected events for the
+ * devices that are still present. However, that
+ * information might have changed, for example because a
+ * switch on a dual-link connection might now have been
+ * enumerated using the other link. Make sure our
+ * bookkeeping matches that.
+ */
+ if (sw->depth == depth && sw_phy_port == phy_port &&
+ !!sw->authorized == authorized) {
+ tb_port_at(tb_route(sw), parent_sw)->remote = NULL;
+ tb_port_at(route, parent_sw)->remote =
+ tb_upstream_port(sw);
+ sw->config.route_hi = upper_32_bits(route);
+ sw->config.route_lo = lower_32_bits(route);
+ sw->connection_id = pkg->connection_id;
+ sw->connection_key = pkg->connection_key;
+ sw->link = link;
+ sw->depth = depth;
+ sw->is_unplugged = false;
+ tb_switch_put(sw);
+ return;
+ }
+
+ /*
+ * User connected the same switch to another physical
+ * port or to another part of the topology. Remove the
+ * existing switch now before adding the new one.
+ */
+ remove_switch(sw);
+ tb_switch_put(sw);
+ }
+
+ /*
+ * If the switch was not found by UUID, look for a switch on the
+ * same physical port (taking possible link aggregation into
+ * account) and depth. If we find one it is definitely a stale
+ * one so remove it first.
+ */
+ sw = tb_switch_find_by_link_depth(tb, link, depth);
+ if (!sw) {
+ u8 dual_link;
+
+ dual_link = dual_link_from_link(link);
+ if (dual_link)
+ sw = tb_switch_find_by_link_depth(tb, dual_link, depth);
+ }
+ if (sw) {
+ remove_switch(sw);
+ tb_switch_put(sw);
+ }
+
+ parent_sw = tb_switch_find_by_link_depth(tb, link, depth - 1);
+ if (!parent_sw) {
+ tb_err(tb, "failed to find parent switch for %u.%u\n",
+ link, depth);
+ return;
+ }
+
+ sw = tb_switch_alloc(tb, &parent_sw->dev, route);
+ if (!sw) {
+ tb_switch_put(parent_sw);
+ return;
+ }
+
+ sw->uuid = kmemdup(&pkg->ep_uuid, sizeof(pkg->ep_uuid), GFP_KERNEL);
+ sw->connection_id = pkg->connection_id;
+ sw->connection_key = pkg->connection_key;
+ sw->link = link;
+ sw->depth = depth;
+ sw->authorized = authorized;
+ sw->security_level = (pkg->hdr.flags & ICM_FLAGS_SLEVEL_MASK) >>
+ ICM_FLAGS_SLEVEL_SHIFT;
+
+ /* Link the two switches now */
+ tb_port_at(route, parent_sw)->remote = tb_upstream_port(sw);
+ tb_upstream_port(sw)->remote = tb_port_at(route, parent_sw);
+
+ ret = tb_switch_add(sw);
+ if (ret) {
+ tb_port_at(tb_route(sw), parent_sw)->remote = NULL;
+ tb_switch_put(sw);
+ }
+ tb_switch_put(parent_sw);
+}
+
+static void
+icm_fr_device_disconnected(struct tb *tb, const struct icm_pkg_header *hdr)
+{
+ const struct icm_fr_event_device_disconnected *pkg =
+ (const struct icm_fr_event_device_disconnected *)hdr;
+ struct tb_switch *sw;
+ u8 link, depth;
+
+ link = pkg->link_info & ICM_LINK_INFO_LINK_MASK;
+ depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >>
+ ICM_LINK_INFO_DEPTH_SHIFT;
+
+ if (link > ICM_MAX_LINK || depth > ICM_MAX_DEPTH) {
+ tb_warn(tb, "invalid topology %u.%u, ignoring\n", link, depth);
+ return;
+ }
+
+ sw = tb_switch_find_by_link_depth(tb, link, depth);
+ if (!sw) {
+ tb_warn(tb, "no switch exists at %u.%u, ignoring\n", link,
+ depth);
+ return;
+ }
+
+ remove_switch(sw);
+ tb_switch_put(sw);
+}
+
+static struct pci_dev *get_upstream_port(struct pci_dev *pdev)
+{
+ struct pci_dev *parent;
+
+ parent = pci_upstream_bridge(pdev);
+ while (parent) {
+ if (!pci_is_pcie(parent))
+ return NULL;
+ if (pci_pcie_type(parent) == PCI_EXP_TYPE_UPSTREAM)
+ break;
+ parent = pci_upstream_bridge(parent);
+ }
+
+ if (!parent)
+ return NULL;
+
+ switch (parent->device) {
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE:
+ return parent;
+ }
+
+ return NULL;
+}
+
+static bool icm_ar_is_supported(struct tb *tb)
+{
+ struct pci_dev *upstream_port;
+ struct icm *icm = tb_priv(tb);
+
+ /*
+ * Starting from Alpine Ridge we can use ICM on Apple machines
+ * as well. We just need to reset and re-enable it first.
+ */
+ if (!is_apple())
+ return true;
+
+ /*
+ * Find the upstream PCIe port in case we need to do a reset
+ * through its vendor specific registers.
+ */
+ upstream_port = get_upstream_port(tb->nhi->pdev);
+ if (upstream_port) {
+ int cap;
+
+ cap = pci_find_ext_capability(upstream_port,
+ PCI_EXT_CAP_ID_VNDR);
+ if (cap > 0) {
+ icm->upstream_port = upstream_port;
+ icm->vnd_cap = cap;
+
+ return true;
+ }
+ }
+
+ return false;
+}
+
+static int icm_ar_get_mode(struct tb *tb)
+{
+ struct tb_nhi *nhi = tb->nhi;
+ int retries = 5;
+ u32 val;
+
+ do {
+ val = ioread32(nhi->iobase + REG_FW_STS);
+ if (val & REG_FW_STS_NVM_AUTH_DONE)
+ break;
+ msleep(30);
+ } while (--retries);
+
+ if (!retries) {
+ dev_err(&nhi->pdev->dev, "ICM firmware not authenticated\n");
+ return -ENODEV;
+ }
+
+ return nhi_mailbox_mode(nhi);
+}
+
+static int icm_ar_get_route(struct tb *tb, u8 link, u8 depth, u64 *route)
+{
+ struct icm_ar_pkg_get_route_response reply;
+ struct icm_ar_pkg_get_route request = {
+ .hdr = { .code = ICM_GET_ROUTE },
+ .link_info = depth << ICM_LINK_INFO_DEPTH_SHIFT | link,
+ };
+ int ret;
+
+ memset(&reply, 0, sizeof(reply));
+ ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+ 1, ICM_TIMEOUT);
+ if (ret)
+ return ret;
+
+ if (reply.hdr.flags & ICM_FLAGS_ERROR)
+ return -EIO;
+
+ *route = get_route(reply.route_hi, reply.route_lo);
+ return 0;
+}
+
+static void icm_handle_notification(struct work_struct *work)
+{
+ struct icm_notification *n = container_of(work, typeof(*n), work);
+ struct tb *tb = n->tb;
+ struct icm *icm = tb_priv(tb);
+
+ mutex_lock(&tb->lock);
+
+ switch (n->pkg->code) {
+ case ICM_EVENT_DEVICE_CONNECTED:
+ icm->device_connected(tb, n->pkg);
+ break;
+ case ICM_EVENT_DEVICE_DISCONNECTED:
+ icm->device_disconnected(tb, n->pkg);
+ break;
+ }
+
+ mutex_unlock(&tb->lock);
+
+ kfree(n->pkg);
+ kfree(n);
+}
+
+static void icm_handle_event(struct tb *tb, enum tb_cfg_pkg_type type,
+ const void *buf, size_t size)
+{
+ struct icm_notification *n;
+
+ n = kmalloc(sizeof(*n), GFP_KERNEL);
+ if (!n)
+ return;
+
+ INIT_WORK(&n->work, icm_handle_notification);
+ n->pkg = kmemdup(buf, size, GFP_KERNEL);
+ n->tb = tb;
+
+ queue_work(tb->wq, &n->work);
+}
+
+static int
+__icm_driver_ready(struct tb *tb, enum tb_security_level *security_level)
+{
+ struct icm_pkg_driver_ready_response reply;
+ struct icm_pkg_driver_ready request = {
+ .hdr.code = ICM_DRIVER_READY,
+ };
+ unsigned int retries = 10;
+ int ret;
+
+ memset(&reply, 0, sizeof(reply));
+ ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
+ 1, ICM_TIMEOUT);
+ if (ret)
+ return ret;
+
+ if (security_level)
+ *security_level = reply.security_level & 0xf;
+
+ /*
+ * Hold on here until the switch config space is accessible so
+ * that we can read root switch config successfully.
+ */
+ do {
+ struct tb_cfg_result res;
+ u32 tmp;
+
+ res = tb_cfg_read_raw(tb->ctl, &tmp, 0, 0, TB_CFG_SWITCH,
+ 0, 1, 100);
+ if (!res.err)
+ return 0;
+
+ msleep(50);
+ } while (--retries);
+
+ return -ETIMEDOUT;
+}
+
+static int pci2cio_wait_completion(struct icm *icm, unsigned long timeout_msec)
+{
+ unsigned long end = jiffies + msecs_to_jiffies(timeout_msec);
+ u32 cmd;
+
+ do {
+ pci_read_config_dword(icm->upstream_port,
+ icm->vnd_cap + PCIE2CIO_CMD, &cmd);
+ if (!(cmd & PCIE2CIO_CMD_START)) {
+ if (cmd & PCIE2CIO_CMD_TIMEOUT)
+ break;
+ return 0;
+ }
+
+ msleep(50);
+ } while (time_before(jiffies, end));
+
+ return -ETIMEDOUT;
+}
+
+static int pcie2cio_read(struct icm *icm, enum tb_cfg_space cs,
+ unsigned int port, unsigned int index, u32 *data)
+{
+ struct pci_dev *pdev = icm->upstream_port;
+ int ret, vnd_cap = icm->vnd_cap;
+ u32 cmd;
+
+ cmd = index;
+ cmd |= (port << PCIE2CIO_CMD_PORT_SHIFT) & PCIE2CIO_CMD_PORT_MASK;
+ cmd |= (cs << PCIE2CIO_CMD_CS_SHIFT) & PCIE2CIO_CMD_CS_MASK;
+ cmd |= PCIE2CIO_CMD_START;
+ pci_write_config_dword(pdev, vnd_cap + PCIE2CIO_CMD, cmd);
+
+ ret = pci2cio_wait_completion(icm, 5000);
+ if (ret)
+ return ret;
+
+ pci_read_config_dword(pdev, vnd_cap + PCIE2CIO_RDDATA, data);
+ return 0;
+}
+
+static int pcie2cio_write(struct icm *icm, enum tb_cfg_space cs,
+ unsigned int port, unsigned int index, u32 data)
+{
+ struct pci_dev *pdev = icm->upstream_port;
+ int vnd_cap = icm->vnd_cap;
+ u32 cmd;
+
+ pci_write_config_dword(pdev, vnd_cap + PCIE2CIO_WRDATA, data);
+
+ cmd = index;
+ cmd |= (port << PCIE2CIO_CMD_PORT_SHIFT) & PCIE2CIO_CMD_PORT_MASK;
+ cmd |= (cs << PCIE2CIO_CMD_CS_SHIFT) & PCIE2CIO_CMD_CS_MASK;
+ cmd |= PCIE2CIO_CMD_WRITE | PCIE2CIO_CMD_START;
+ pci_write_config_dword(pdev, vnd_cap + PCIE2CIO_CMD, cmd);
+
+ return pci2cio_wait_completion(icm, 5000);
+}
+
+static int icm_firmware_reset(struct tb *tb, struct tb_nhi *nhi)
+{
+ struct icm *icm = tb_priv(tb);
+ u32 val;
+
+ /* Put ARC to wait for CIO reset event to happen */
+ val = ioread32(nhi->iobase + REG_FW_STS);
+ val |= REG_FW_STS_CIO_RESET_REQ;
+ iowrite32(val, nhi->iobase + REG_FW_STS);
+
+ /* Re-start ARC */
+ val = ioread32(nhi->iobase + REG_FW_STS);
+ val |= REG_FW_STS_ICM_EN_INVERT;
+ val |= REG_FW_STS_ICM_EN_CPU;
+ iowrite32(val, nhi->iobase + REG_FW_STS);
+
+ /* Trigger CIO reset now */
+ return pcie2cio_write(icm, TB_CFG_SWITCH, 0, 0x50, BIT(9));
+}
+
+static int icm_firmware_start(struct tb *tb, struct tb_nhi *nhi)
+{
+ unsigned int retries = 10;
+ int ret;
+ u32 val;
+
+ /* Check if the ICM firmware is already running */
+ val = ioread32(nhi->iobase + REG_FW_STS);
+ if (val & REG_FW_STS_ICM_EN)
+ return 0;
+
+ dev_info(&nhi->pdev->dev, "starting ICM firmware\n");
+
+ ret = icm_firmware_reset(tb, nhi);
+ if (ret)
+ return ret;
+
+ /* Wait until the ICM firmware tells us it is up and running */
+ do {
+ /* Check that the ICM firmware is running */
+ val = ioread32(nhi->iobase + REG_FW_STS);
+ if (val & REG_FW_STS_NVM_AUTH_DONE)
+ return 0;
+
+ msleep(300);
+ } while (--retries);
+
+ return -ETIMEDOUT;
+}
+
+static int icm_reset_phy_port(struct tb *tb, int phy_port)
+{
+ struct icm *icm = tb_priv(tb);
+ u32 state0, state1;
+ int port0, port1;
+ u32 val0, val1;
+ int ret;
+
+ if (!icm->upstream_port)
+ return 0;
+
+ if (phy_port) {
+ port0 = 3;
+ port1 = 4;
+ } else {
+ port0 = 1;
+ port1 = 2;
+ }
+
+ /*
+ * Read link status of both null ports belonging to a single
+ * physical port.
+ */
+ ret = pcie2cio_read(icm, TB_CFG_PORT, port0, PHY_PORT_CS1, &val0);
+ if (ret)
+ return ret;
+ ret = pcie2cio_read(icm, TB_CFG_PORT, port1, PHY_PORT_CS1, &val1);
+ if (ret)
+ return ret;
+
+ state0 = val0 & PHY_PORT_CS1_LINK_STATE_MASK;
+ state0 >>= PHY_PORT_CS1_LINK_STATE_SHIFT;
+ state1 = val1 & PHY_PORT_CS1_LINK_STATE_MASK;
+ state1 >>= PHY_PORT_CS1_LINK_STATE_SHIFT;
+
+ /* If they are both up we need to reset them now */
+ if (state0 != TB_PORT_UP || state1 != TB_PORT_UP)
+ return 0;
+
+ val0 |= PHY_PORT_CS1_LINK_DISABLE;
+ ret = pcie2cio_write(icm, TB_CFG_PORT, port0, PHY_PORT_CS1, val0);
+ if (ret)
+ return ret;
+
+ val1 |= PHY_PORT_CS1_LINK_DISABLE;
+ ret = pcie2cio_write(icm, TB_CFG_PORT, port1, PHY_PORT_CS1, val1);
+ if (ret)
+ return ret;
+
+ /* Wait a bit and then re-enable both ports */
+ usleep_range(10, 100);
+
+ ret = pcie2cio_read(icm, TB_CFG_PORT, port0, PHY_PORT_CS1, &val0);
+ if (ret)
+ return ret;
+ ret = pcie2cio_read(icm, TB_CFG_PORT, port1, PHY_PORT_CS1, &val1);
+ if (ret)
+ return ret;
+
+ val0 &= ~PHY_PORT_CS1_LINK_DISABLE;
+ ret = pcie2cio_write(icm, TB_CFG_PORT, port0, PHY_PORT_CS1, val0);
+ if (ret)
+ return ret;
+
+ val1 &= ~PHY_PORT_CS1_LINK_DISABLE;
+ return pcie2cio_write(icm, TB_CFG_PORT, port1, PHY_PORT_CS1, val1);
+}
+
+static int icm_firmware_init(struct tb *tb)
+{
+ struct icm *icm = tb_priv(tb);
+ struct tb_nhi *nhi = tb->nhi;
+ int ret;
+
+ ret = icm_firmware_start(tb, nhi);
+ if (ret) {
+ dev_err(&nhi->pdev->dev, "could not start ICM firmware\n");
+ return ret;
+ }
+
+ if (icm->get_mode) {
+ ret = icm->get_mode(tb);
+
+ switch (ret) {
+ case NHI_FW_CM_MODE:
+ /* Ask ICM to accept all Thunderbolt devices */
+ nhi_mailbox_cmd(nhi, NHI_MAILBOX_ALLOW_ALL_DEVS, 0);
+ break;
+
+ default:
+ tb_err(tb, "ICM firmware is in wrong mode: %u\n", ret);
+ return -ENODEV;
+ }
+ }
+
+ /*
+ * Reset both physical ports if there is anything connected to
+ * them already.
+ */
+ ret = icm_reset_phy_port(tb, 0);
+ if (ret)
+ dev_warn(&nhi->pdev->dev, "failed to reset links on port0\n");
+ ret = icm_reset_phy_port(tb, 1);
+ if (ret)
+ dev_warn(&nhi->pdev->dev, "failed to reset links on port1\n");
+
+ return 0;
+}
+
+static int icm_driver_ready(struct tb *tb)
+{
+ int ret;
+
+ ret = icm_firmware_init(tb);
+ if (ret)
+ return ret;
+
+ return __icm_driver_ready(tb, &tb->security_level);
+}
+
+static int icm_suspend(struct tb *tb)
+{
+ return nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_SAVE_DEVS, 0);
+}
+
+/*
+ * Mark all switches (except the root switch) below this one unplugged. ICM
+ * firmware will send us an updated list of switches after we have sent
+ * it the driver ready command. If a switch is not in that list it will
+ * be removed when we perform the rescan.
+ */
+static void icm_unplug_children(struct tb_switch *sw)
+{
+ unsigned int i;
+
+ if (tb_route(sw))
+ sw->is_unplugged = true;
+
+ for (i = 1; i <= sw->config.max_port_number; i++) {
+ struct tb_port *port = &sw->ports[i];
+
+ if (tb_is_upstream_port(port))
+ continue;
+ if (!port->remote)
+ continue;
+
+ icm_unplug_children(port->remote->sw);
+ }
+}
+
+static void icm_free_unplugged_children(struct tb_switch *sw)
+{
+ unsigned int i;
+
+ for (i = 1; i <= sw->config.max_port_number; i++) {
+ struct tb_port *port = &sw->ports[i];
+
+ if (tb_is_upstream_port(port))
+ continue;
+ if (!port->remote)
+ continue;
+
+ if (port->remote->sw->is_unplugged) {
+ tb_switch_remove(port->remote->sw);
+ port->remote = NULL;
+ } else {
+ icm_free_unplugged_children(port->remote->sw);
+ }
+ }
+}
+
+static void icm_rescan_work(struct work_struct *work)
+{
+ struct icm *icm = container_of(work, struct icm, rescan_work.work);
+ struct tb *tb = icm_to_tb(icm);
+
+ mutex_lock(&tb->lock);
+ if (tb->root_switch)
+ icm_free_unplugged_children(tb->root_switch);
+ mutex_unlock(&tb->lock);
+}
+
+static void icm_complete(struct tb *tb)
+{
+ struct icm *icm = tb_priv(tb);
+
+ if (tb->nhi->going_away)
+ return;
+
+ icm_unplug_children(tb->root_switch);
+
+ /*
+ * Now all existing children should be resumed; start events
+ * from the ICM to get updated status.
+ */
+ __icm_driver_ready(tb, NULL);
+
+ /*
+ * We do not get notifications of devices that have been
+ * unplugged during suspend, so schedule a rescan to clean
+ * them up if there are any.
+ */
+ queue_delayed_work(tb->wq, &icm->rescan_work, msecs_to_jiffies(500));
+}
+
+static int icm_start(struct tb *tb)
+{
+ int ret;
+
+ tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0);
+ if (!tb->root_switch)
+ return -ENODEV;
+
+ ret = tb_switch_add(tb->root_switch);
+ if (ret)
+ tb_switch_put(tb->root_switch);
+
+ return ret;
+}
+
+static void icm_stop(struct tb *tb)
+{
+ struct icm *icm = tb_priv(tb);
+
+ cancel_delayed_work(&icm->rescan_work);
+ tb_switch_remove(tb->root_switch);
+ tb->root_switch = NULL;
+ nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DRV_UNLOADS, 0);
+}
+
+/* Falcon Ridge and Alpine Ridge */
+static const struct tb_cm_ops icm_fr_ops = {
+ .driver_ready = icm_driver_ready,
+ .start = icm_start,
+ .stop = icm_stop,
+ .suspend = icm_suspend,
+ .complete = icm_complete,
+ .handle_event = icm_handle_event,
+ .approve_switch = icm_fr_approve_switch,
+ .add_switch_key = icm_fr_add_switch_key,
+ .challenge_switch_key = icm_fr_challenge_switch_key,
+};
+
+struct tb *icm_probe(struct tb_nhi *nhi)
+{
+ struct icm *icm;
+ struct tb *tb;
+
+ tb = tb_domain_alloc(nhi, sizeof(struct icm));
+ if (!tb)
+ return NULL;
+
+ icm = tb_priv(tb);
+ INIT_DELAYED_WORK(&icm->rescan_work, icm_rescan_work);
+ mutex_init(&icm->request_lock);
+
+ switch (nhi->pdev->device) {
+ case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
+ case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
+ icm->is_supported = icm_fr_is_supported;
+ icm->get_route = icm_fr_get_route;
+ icm->device_connected = icm_fr_device_connected;
+ icm->device_disconnected = icm_fr_device_disconnected;
+ tb->cm_ops = &icm_fr_ops;
+ break;
+
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_NHI:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_NHI:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI:
+ icm->is_supported = icm_ar_is_supported;
+ icm->get_mode = icm_ar_get_mode;
+ icm->get_route = icm_ar_get_route;
+ icm->device_connected = icm_fr_device_connected;
+ icm->device_disconnected = icm_fr_device_disconnected;
+ tb->cm_ops = &icm_fr_ops;
+ break;
+ }
+
+ if (!icm->is_supported || !icm->is_supported(tb)) {
+ dev_dbg(&nhi->pdev->dev, "ICM not supported on this controller\n");
+ tb_domain_put(tb);
+ return NULL;
+ }
+
+ return tb;
+}
diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index 14311535661d..05af126a2435 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -13,7 +13,6 @@
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/module.h>
-#include <linux/dmi.h>
#include <linux/delay.h>

#include "nhi.h"
@@ -668,6 +667,22 @@ static int nhi_resume_noirq(struct device *dev)
return tb_domain_resume_noirq(tb);
}

+static int nhi_suspend(struct device *dev)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct tb *tb = pci_get_drvdata(pdev);
+
+ return tb_domain_suspend(tb);
+}
+
+static void nhi_complete(struct device *dev)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct tb *tb = pci_get_drvdata(pdev);
+
+ tb_domain_complete(tb);
+}
+
static void nhi_shutdown(struct tb_nhi *nhi)
{
int i;
@@ -784,10 +799,16 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
/* magic value - clock related? */
iowrite32(3906250 / 10000, nhi->iobase + 0x38c00);

- dev_info(&nhi->pdev->dev, "NHI initialized, starting thunderbolt\n");
- tb = tb_probe(nhi);
+ tb = icm_probe(nhi);
if (!tb)
+ tb = tb_probe(nhi);
+ if (!tb) {
+ dev_err(&nhi->pdev->dev,
+ "failed to determine connection manager, aborting\n");
return -ENODEV;
+ }
+
+ dev_info(&nhi->pdev->dev, "NHI initialized, starting thunderbolt\n");

res = tb_domain_add(tb);
if (res) {
@@ -826,6 +847,10 @@ static const struct dev_pm_ops nhi_pm_ops = {
* pci-tunnels stay alive.
*/
.restore_noirq = nhi_resume_noirq,
+ .suspend = nhi_suspend,
+ .freeze = nhi_suspend,
+ .poweroff = nhi_suspend,
+ .complete = nhi_complete,
};

static struct pci_device_id nhi_ids[] = {
@@ -886,8 +911,6 @@ static int __init nhi_init(void)
{
int ret;

- if (!dmi_match(DMI_BOARD_VENDOR, "Apple Inc."))
- return -ENOSYS;
ret = tb_domain_init();
if (ret)
return ret;
diff --git a/drivers/thunderbolt/nhi_regs.h b/drivers/thunderbolt/nhi_regs.h
index 322fe1fa3a3c..09ed574e92ff 100644
--- a/drivers/thunderbolt/nhi_regs.h
+++ b/drivers/thunderbolt/nhi_regs.h
@@ -118,4 +118,11 @@ struct ring_desc {
#define REG_OUTMAIL_CMD_OPMODE_SHIFT 8
#define REG_OUTMAIL_CMD_OPMODE_MASK GENMASK(11, 8)

+#define REG_FW_STS 0x39944
+#define REG_FW_STS_NVM_AUTH_DONE BIT(31)
+#define REG_FW_STS_CIO_RESET_REQ BIT(30)
+#define REG_FW_STS_ICM_EN_CPU BIT(2)
+#define REG_FW_STS_ICM_EN_INVERT BIT(1)
+#define REG_FW_STS_ICM_EN BIT(0)
+
#endif
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 4b47e0999cda..1524edf42ee8 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -9,6 +9,9 @@

#include "tb.h"

+/* Switch authorization from userspace is serialized by this lock */
+static DEFINE_MUTEX(switch_lock);
+
/* port utility functions */

static const char *tb_port_type(struct tb_regs_port_header *port)
@@ -310,6 +313,75 @@ static int tb_plug_events_active(struct tb_switch *sw, bool active)
sw->cap_plug_events + 1, 1);
}

+static ssize_t authorized_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct tb_switch *sw = tb_to_switch(dev);
+
+ return sprintf(buf, "%u\n", sw->authorized);
+}
+
+static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val)
+{
+ int ret = -EINVAL;
+
+ if (mutex_lock_interruptible(&switch_lock))
+ return -ERESTARTSYS;
+
+ if (sw->authorized)
+ goto unlock;
+
+ switch (val) {
+ /* Approve switch */
+ case 1:
+ if (sw->key)
+ ret = tb_domain_approve_switch_key(sw->tb, sw);
+ else
+ ret = tb_domain_approve_switch(sw->tb, sw);
+ break;
+
+ /* Challenge switch */
+ case 2:
+ if (sw->key)
+ ret = tb_domain_challenge_switch_key(sw->tb, sw);
+ break;
+
+ default:
+ break;
+ }
+
+ if (!ret) {
+ sw->authorized = val;
+ /* Notify status change to the userspace */
+ kobject_uevent(&sw->dev.kobj, KOBJ_CHANGE);
+ }
+
+unlock:
+ mutex_unlock(&switch_lock);
+ return ret;
+}
+
+static ssize_t authorized_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct tb_switch *sw = tb_to_switch(dev);
+ unsigned int val;
+ ssize_t ret;
+
+ ret = kstrtouint(buf, 0, &val);
+ if (ret)
+ return ret;
+ if (val > 2)
+ return -EINVAL;
+
+ ret = tb_switch_set_authorized(sw, val);
+
+ return ret ? ret : count;
+}
+static DEVICE_ATTR_RW(authorized);
+
static ssize_t device_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
@@ -328,6 +400,54 @@ device_name_show(struct device *dev, struct device_attribute *attr, char *buf)
}
static DEVICE_ATTR_RO(device_name);

+static ssize_t key_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct tb_switch *sw = tb_to_switch(dev);
+ ssize_t ret;
+
+ if (mutex_lock_interruptible(&switch_lock))
+ return -ERESTARTSYS;
+
+ if (sw->key)
+ ret = sprintf(buf, "%*phN\n", TB_SWITCH_KEY_SIZE, sw->key);
+ else
+ ret = sprintf(buf, "\n");
+
+ mutex_unlock(&switch_lock);
+ return ret;
+}
+
+static ssize_t key_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct tb_switch *sw = tb_to_switch(dev);
+ u8 key[TB_SWITCH_KEY_SIZE];
+ ssize_t ret = count;
+
+ if (count < 64)
+ return -EINVAL;
+
+ if (hex2bin(key, buf, sizeof(key)))
+ return -EINVAL;
+
+ if (mutex_lock_interruptible(&switch_lock))
+ return -ERESTARTSYS;
+
+ if (sw->authorized) {
+ ret = -EBUSY;
+ } else {
+ kfree(sw->key);
+ sw->key = kmemdup(key, sizeof(key), GFP_KERNEL);
+ if (!sw->key)
+ ret = -ENOMEM;
+ }
+
+ mutex_unlock(&switch_lock);
+ return ret;
+}
+static DEVICE_ATTR_RW(key);
+
static ssize_t vendor_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
@@ -356,15 +476,35 @@ static ssize_t unique_id_show(struct device *dev, struct device_attribute *attr,
static DEVICE_ATTR_RO(unique_id);

static struct attribute *switch_attrs[] = {
+ &dev_attr_authorized.attr,
&dev_attr_device.attr,
&dev_attr_device_name.attr,
+ &dev_attr_key.attr,
&dev_attr_vendor.attr,
&dev_attr_vendor_name.attr,
&dev_attr_unique_id.attr,
NULL,
};

+static umode_t switch_attr_is_visible(struct kobject *kobj,
+ struct attribute *attr, int n)
+{
+ struct device *dev = container_of(kobj, struct device, kobj);
+ struct tb_switch *sw = tb_to_switch(dev);
+
+ if (attr == &dev_attr_key.attr) {
+ if (tb_route(sw) &&
+ sw->tb->security_level == TB_SECURITY_SECURE &&
+ sw->security_level == TB_SECURITY_SECURE)
+ return attr->mode;
+ return 0;
+ }
+
+ return attr->mode;
+}
+
static struct attribute_group switch_group = {
+ .is_visible = switch_attr_is_visible,
.attrs = switch_attrs,
};

@@ -384,6 +524,7 @@ static void tb_switch_release(struct device *dev)
kfree(sw->vendor_name);
kfree(sw->ports);
kfree(sw->drom);
+ kfree(sw->key);
kfree(sw);
}

@@ -490,6 +631,10 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
}
sw->cap_plug_events = cap;

+ /* Root switch is always authorized */
+ if (!route)
+ sw->authorized = true;
+
device_initialize(&sw->dev);
sw->dev.parent = parent;
sw->dev.bus = &tb_bus_type;
@@ -754,3 +899,80 @@ void tb_switch_suspend(struct tb_switch *sw)
* effect?
*/
}
+
+struct tb_sw_lookup {
+ struct tb *tb;
+ u8 link;
+ u8 depth;
+ const uuid_be *uuid;
+};
+
+static int tb_switch_match(struct device *dev, void *data)
+{
+ struct tb_switch *sw = tb_to_switch(dev);
+ struct tb_sw_lookup *lookup = data;
+
+ if (!sw)
+ return 0;
+ if (sw->tb != lookup->tb)
+ return 0;
+
+ if (lookup->uuid)
+ return !memcmp(sw->uuid, lookup->uuid, sizeof(*lookup->uuid));
+
+ /* Root switch is matched only by depth */
+ if (!lookup->depth)
+ return !sw->depth;
+
+ return sw->link == lookup->link && sw->depth == lookup->depth;
+}
+
+/**
+ * tb_switch_find_by_link_depth() - Find switch by link and depth
+ * @tb: Domain the switch belongs to
+ * @link: Link number the switch is connected to
+ * @depth: Depth of the switch in the link
+ *
+ * Returned switch has reference count increased so the caller needs to
+ * call tb_switch_put() when done with the switch.
+ */
+struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link, u8 depth)
+{
+ struct tb_sw_lookup lookup;
+ struct device *dev;
+
+ memset(&lookup, 0, sizeof(lookup));
+ lookup.tb = tb;
+ lookup.link = link;
+ lookup.depth = depth;
+
+ dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match);
+ if (dev)
+ return tb_to_switch(dev);
+
+ return NULL;
+}
+
+/**
+ * tb_switch_find_by_uuid() - Find switch by UUID
+ * @tb: Domain the switch belongs to
+ * @uuid: UUID to look for
+ *
+ * Returned switch has reference count increased so the caller needs to
+ * call tb_switch_put() when done with the switch.
+ */
+struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_be *uuid)
+{
+ struct tb_sw_lookup lookup;
+ struct device *dev;
+
+ memset(&lookup, 0, sizeof(lookup));
+ lookup.tb = tb;
+ lookup.uuid = uuid;
+
+ dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match);
+ if (dev)
+ return tb_to_switch(dev);
+
+ return NULL;
+}
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index ea9de49b5e10..ad2304bad592 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -7,6 +7,7 @@
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/delay.h>
+#include <linux/dmi.h>

#include "tb.h"
#include "tb_regs.h"
@@ -71,6 +72,8 @@ static void tb_scan_port(struct tb_port *port)
return;
}

+ sw->authorized = true;
+
if (tb_switch_add(sw)) {
tb_switch_put(sw);
return;
@@ -443,10 +446,14 @@ struct tb *tb_probe(struct tb_nhi *nhi)
struct tb_cm *tcm;
struct tb *tb;

+ if (!dmi_match(DMI_BOARD_VENDOR, "Apple Inc."))
+ return NULL;
+
tb = tb_domain_alloc(nhi, sizeof(*tcm));
if (!tb)
return NULL;

+ tb->security_level = TB_SECURITY_NONE;
tb->cm_ops = &tb_cm_ops;

tcm = tb_priv(tb);
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 31521c531715..a998b3a251d5 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -15,6 +15,24 @@
#include "dma_port.h"

/**
+ * enum tb_security_level - Thunderbolt security level
+ * @TB_SECURITY_NONE: No security, legacy mode
+ * @TB_SECURITY_USER: User approval required at minimum
+ * @TB_SECURITY_SECURE: One time saved key required at minimum
+ * @TB_SECURITY_DPONLY: Only tunnel DisplayPort (and USB)
+ */
+enum tb_security_level {
+ TB_SECURITY_NONE,
+ TB_SECURITY_USER,
+ TB_SECURITY_SECURE,
+ TB_SECURITY_DPONLY,
+};
+
+#define TB_SWITCH_KEY_SIZE 32
+/* Each physical port contains 2 links on modern controllers */
+#define TB_SWITCH_LINKS_PER_PHY_PORT 2
+
+/**
* struct tb_switch - a thunderbolt switch
* @dev: Device for the switch
* @config: Switch configuration
@@ -33,6 +51,19 @@
* @cap_plug_events: Offset to the plug events capability (%0 if not found)
* @is_unplugged: The switch is going away
* @drom: DROM of the switch (%NULL if not found)
+ * @authorized: Whether the switch is authorized by user or policy
+ * @work: Work used to automatically authorize a switch
+ * @security_level: Switch supported security level
+ * @key: Contains the key used to challenge the device or %NULL if not
+ * supported. Size of the key is %TB_SWITCH_KEY_SIZE.
+ * @connection_id: Connection ID used with ICM messaging
+ * @connection_key: Connection key used with ICM messaging
+ * @link: Root switch link this switch is connected to (ICM only)
+ * @depth: Depth in the chain this switch is connected at (ICM only)
+ *
+ * When the switch is being added to or removed from the domain (other
+ * switches) the domain lock must be held. For switch authorization the
+ * internal switch_lock is enough.
*/
struct tb_switch {
struct device dev;
@@ -50,6 +81,14 @@ struct tb_switch {
int cap_plug_events;
bool is_unplugged;
u8 *drom;
+ unsigned int authorized;
+ struct work_struct work;
+ enum tb_security_level security_level;
+ u8 *key;
+ u8 connection_id;
+ u8 connection_key;
+ u8 link;
+ u8 depth;
};

/**
@@ -121,19 +160,33 @@ struct tb_path {

/**
* struct tb_cm_ops - Connection manager specific operations vector
+ * @driver_ready: Called right after control channel is started. Used by
+ * ICM to send driver ready message to the firmware.
* @start: Starts the domain
* @stop: Stops the domain
* @suspend_noirq: Connection manager specific suspend_noirq
* @resume_noirq: Connection manager specific resume_noirq
+ * @suspend: Connection manager specific suspend
+ * @complete: Connection manager specific complete
* @handle_event: Handle thunderbolt event
+ * @approve_switch: Approve switch
+ * @add_switch_key: Add key to switch
+ * @challenge_switch_key: Challenge switch using key
*/
struct tb_cm_ops {
+ int (*driver_ready)(struct tb *tb);
int (*start)(struct tb *tb);
void (*stop)(struct tb *tb);
int (*suspend_noirq)(struct tb *tb);
int (*resume_noirq)(struct tb *tb);
+ int (*suspend)(struct tb *tb);
+ void (*complete)(struct tb *tb);
void (*handle_event)(struct tb *tb, enum tb_cfg_pkg_type,
const void *buf, size_t size);
+ int (*approve_switch)(struct tb *tb, struct tb_switch *sw);
+ int (*add_switch_key)(struct tb *tb, struct tb_switch *sw);
+ int (*challenge_switch_key)(struct tb *tb, struct tb_switch *sw,
+ const u8 *challenge, u8 *response);
};

/**
@@ -147,6 +200,7 @@ struct tb_cm_ops {
* @root_switch: Root switch of this domain
* @cm_ops: Connection manager specific operations vector
* @index: Linux assigned domain number
+ * @security_level: Current security level
* @privdata: Private connection manager specific data
*/
struct tb {
@@ -158,6 +212,7 @@ struct tb {
struct tb_switch *root_switch;
const struct tb_cm_ops *cm_ops;
int index;
+ enum tb_security_level security_level;
unsigned long privdata[0];
};

@@ -188,6 +243,16 @@ static inline u64 tb_route(struct tb_switch *sw)
return ((u64) sw->config.route_hi) << 32 | sw->config.route_lo;
}

+static inline struct tb_port *tb_port_at(u64 route, struct tb_switch *sw)
+{
+ u8 port;
+
+ port = route >> (sw->config.depth * 8);
+ if (WARN_ON(port > sw->config.max_port_number))
+ return NULL;
+ return &sw->ports[port];
+}
+
static inline int tb_sw_read(struct tb_switch *sw, void *buffer,
enum tb_cfg_space space, u32 offset, u32 length)
{
@@ -266,6 +331,7 @@ static inline int tb_port_write(struct tb_port *port, const void *buffer,
#define tb_port_info(port, fmt, arg...) \
__TB_PORT_PRINT(tb_info, port, fmt, ##arg)

+struct tb *icm_probe(struct tb_nhi *nhi);
struct tb *tb_probe(struct tb_nhi *nhi);

extern struct bus_type tb_bus_type;
@@ -280,6 +346,11 @@ int tb_domain_add(struct tb *tb);
void tb_domain_remove(struct tb *tb);
int tb_domain_suspend_noirq(struct tb *tb);
int tb_domain_resume_noirq(struct tb *tb);
+int tb_domain_suspend(struct tb *tb);
+void tb_domain_complete(struct tb *tb);
+int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw);
+int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw);
+int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw);

static inline void tb_domain_put(struct tb *tb)
{
@@ -296,6 +367,14 @@ int tb_switch_resume(struct tb_switch *sw);
int tb_switch_reset(struct tb *tb, u64 route);
void tb_sw_set_unplugged(struct tb_switch *sw);
struct tb_switch *get_switch_at_route(struct tb_switch *sw, u64 route);
+struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link,
+ u8 depth);
+struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_be *uuid);
+
+static inline unsigned int tb_switch_phy_port_from_link(unsigned int link)
+{
+ return (link - 1) / TB_SWITCH_LINKS_PER_PHY_PORT;
+}

static inline void tb_switch_put(struct tb_switch *sw)
{
diff --git a/drivers/thunderbolt/tb_msgs.h b/drivers/thunderbolt/tb_msgs.h
index 761d56287149..85b6d33c0919 100644
--- a/drivers/thunderbolt/tb_msgs.h
+++ b/drivers/thunderbolt/tb_msgs.h
@@ -13,6 +13,7 @@
#define _TB_MSGS

#include <linux/types.h>
+#include <linux/uuid.h>

enum tb_cfg_pkg_type {
TB_CFG_PKG_READ = 1,
@@ -24,6 +25,9 @@ enum tb_cfg_pkg_type {
TB_CFG_PKG_XDOMAIN_RESP = 7,
TB_CFG_PKG_OVERRIDE = 8,
TB_CFG_PKG_RESET = 9,
+ TB_CFG_PKG_ICM_EVENT = 10,
+ TB_CFG_PKG_ICM_CMD = 11,
+ TB_CFG_PKG_ICM_RESP = 12,
TB_CFG_PKG_PREPARE_TO_SLEEP = 0xd,

};
@@ -105,4 +109,152 @@ struct cfg_pts_pkg {
u32 data;
} __packed;

+/* ICM messages */
+
+enum icm_pkg_code {
+ ICM_GET_TOPOLOGY = 0x1,
+ ICM_DRIVER_READY = 0x3,
+ ICM_APPROVE_DEVICE = 0x4,
+ ICM_CHALLENGE_DEVICE = 0x5,
+ ICM_ADD_DEVICE_KEY = 0x6,
+ ICM_GET_ROUTE = 0xa,
+};
+
+enum icm_event_code {
+ ICM_EVENT_DEVICE_CONNECTED = 3,
+ ICM_EVENT_DEVICE_DISCONNECTED = 4,
+};
+
+struct icm_pkg_header {
+ u8 code;
+ u8 flags;
+ u8 packet_id;
+ u8 total_packets;
+} __packed;
+
+#define ICM_FLAGS_ERROR BIT(0)
+#define ICM_FLAGS_NO_KEY BIT(1)
+#define ICM_FLAGS_SLEVEL_SHIFT 3
+#define ICM_FLAGS_SLEVEL_MASK GENMASK(4, 3)
+
+struct icm_pkg_driver_ready {
+ struct icm_pkg_header hdr;
+} __packed;
+
+struct icm_pkg_driver_ready_response {
+ struct icm_pkg_header hdr;
+ u8 romver;
+ u8 ramver;
+ u16 security_level;
+} __packed;
+
+/* Falcon Ridge & Alpine Ridge common messages */
+
+struct icm_fr_pkg_get_topology {
+ struct icm_pkg_header hdr;
+} __packed;
+
+#define ICM_GET_TOPOLOGY_PACKETS 14
+
+struct icm_fr_pkg_get_topology_response {
+ struct icm_pkg_header hdr;
+ u32 route_lo;
+ u32 route_hi;
+ u8 first_data;
+ u8 second_data;
+ u8 drom_i2c_address_index;
+ u8 switch_index;
+ u32 reserved[2];
+ u32 ports[16];
+ u32 port_hop_info[16];
+} __packed;
+
+#define ICM_SWITCH_USED BIT(0)
+#define ICM_SWITCH_UPSTREAM_PORT_MASK GENMASK(7, 1)
+#define ICM_SWITCH_UPSTREAM_PORT_SHIFT 1
+
+#define ICM_PORT_TYPE_MASK GENMASK(23, 0)
+#define ICM_PORT_INDEX_SHIFT 24
+#define ICM_PORT_INDEX_MASK GENMASK(31, 24)
+
+struct icm_fr_event_device_connected {
+ struct icm_pkg_header hdr;
+ uuid_be ep_uuid;
+ u8 connection_key;
+ u8 connection_id;
+ u16 link_info;
+ u32 ep_name[55];
+} __packed;
+
+#define ICM_LINK_INFO_LINK_MASK 0x7
+#define ICM_LINK_INFO_DEPTH_SHIFT 4
+#define ICM_LINK_INFO_DEPTH_MASK GENMASK(7, 4)
+#define ICM_LINK_INFO_APPROVED BIT(8)
+
+struct icm_fr_pkg_approve_device {
+ struct icm_pkg_header hdr;
+ uuid_be ep_uuid;
+ u8 connection_key;
+ u8 connection_id;
+ u16 reserved;
+} __packed;
+
+struct icm_fr_event_device_disconnected {
+ struct icm_pkg_header hdr;
+ u16 reserved;
+ u16 link_info;
+} __packed;
+
+struct icm_fr_pkg_add_device_key {
+ struct icm_pkg_header hdr;
+ uuid_be ep_uuid;
+ u8 connection_key;
+ u8 connection_id;
+ u16 reserved;
+ u32 key[8];
+} __packed;
+
+struct icm_fr_pkg_add_device_key_response {
+ struct icm_pkg_header hdr;
+ uuid_be ep_uuid;
+ u8 connection_key;
+ u8 connection_id;
+ u16 reserved;
+} __packed;
+
+struct icm_fr_pkg_challenge_device {
+ struct icm_pkg_header hdr;
+ uuid_be ep_uuid;
+ u8 connection_key;
+ u8 connection_id;
+ u16 reserved;
+ u32 challenge[8];
+} __packed;
+
+struct icm_fr_pkg_challenge_device_response {
+ struct icm_pkg_header hdr;
+ uuid_be ep_uuid;
+ u8 connection_key;
+ u8 connection_id;
+ u16 reserved;
+ u32 challenge[8];
+ u32 response[8];
+} __packed;
+
+/* Alpine Ridge only messages */
+
+struct icm_ar_pkg_get_route {
+ struct icm_pkg_header hdr;
+ u16 reserved;
+ u16 link_info;
+} __packed;
+
+struct icm_ar_pkg_get_route_response {
+ struct icm_pkg_header hdr;
+ u16 reserved;
+ u16 link_info;
+ u32 route_hi;
+ u32 route_lo;
+} __packed;
+
#endif
--
2.11.0

2017-06-06 12:25:37

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 12/27] thunderbolt: Refactor and fix parsing of port drom entries

From: Lukas Wunner <[email protected]>

Currently tb_drom_parse_entry() is only able to parse drom entries of
type TB_DROM_ENTRY_PORT. Rename it to tb_drom_parse_entry_port().
Fold tb_drom_parse_port_entry() into it.

Its return value is currently ignored. Evaluate it and abort parsing on
error.

Change tb_drom_parse_entries() to accommodate parsing of entry types
other than TB_DROM_ENTRY_PORT.
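
As a rough standalone sketch of the loop shape this moves to (not the
driver code; the entry layout and names below are made up for the
example), the parser walks the entries, dispatches on the type field
and aborts as soon as one entry fails to parse:

#include <stddef.h>
#include <stdint.h>

enum entry_type { ENTRY_GENERIC, ENTRY_PORT };

struct entry_header {
	uint8_t len;	/* length of the whole entry in bytes */
	uint8_t type;
};

static int parse_entry_port(const struct entry_header *hdr)
{
	/* validate the length, fill in port data, ... */
	return hdr->len >= sizeof(*hdr) ? 0 : -1;
}

static int parse_entries(const uint8_t *buf, size_t size)
{
	size_t pos = 0;

	while (pos + sizeof(struct entry_header) <= size) {
		const struct entry_header *hdr =
			(const struct entry_header *)(buf + pos);
		/* initialized so entry types we skip do not abort the walk */
		int res = 0;

		if (!hdr->len || pos + hdr->len > size)
			return -1;	/* corrupt entry, stop parsing */

		switch (hdr->type) {
		case ENTRY_GENERIC:
			break;		/* nothing to parse yet */
		case ENTRY_PORT:
			res = parse_entry_port(hdr);
			break;
		}
		if (res)
			return res;	/* abort parsing on error */

		pos += hdr->len;
	}
	return 0;
}

Keeping res initialized per iteration means entry types that are merely
skipped cannot leave a stale error value behind.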

Signed-off-by: Lukas Wunner <[email protected]>
Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/eeprom.c | 32 ++++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
index e2c1f8a45522..5c7d80a109b1 100644
--- a/drivers/thunderbolt/eeprom.c
+++ b/drivers/thunderbolt/eeprom.c
@@ -295,25 +295,13 @@ int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid)
return 0;
}

-static void tb_drom_parse_port_entry(struct tb_port *port,
- struct tb_drom_entry_port *entry)
-{
- port->link_nr = entry->link_nr;
- if (entry->has_dual_link_port)
- port->dual_link_port =
- &port->sw->ports[entry->dual_link_port_nr];
-}
-
-static int tb_drom_parse_entry(struct tb_switch *sw,
- struct tb_drom_entry_header *header)
+static int tb_drom_parse_entry_port(struct tb_switch *sw,
+ struct tb_drom_entry_header *header)
{
struct tb_port *port;
int res;
enum tb_port_type type;

- if (header->type != TB_DROM_ENTRY_PORT)
- return 0;
-
port = &sw->ports[header->index];
port->disabled = header->port_disabled;
if (port->disabled)
@@ -332,7 +320,10 @@ static int tb_drom_parse_entry(struct tb_switch *sw,
header->len, sizeof(struct tb_drom_entry_port));
return -EIO;
}
- tb_drom_parse_port_entry(port, entry);
+ port->link_nr = entry->link_nr;
+ if (entry->has_dual_link_port)
+ port->dual_link_port =
+ &port->sw->ports[entry->dual_link_port_nr];
}
return 0;
}
@@ -347,6 +338,7 @@ static int tb_drom_parse_entries(struct tb_switch *sw)
struct tb_drom_header *header = (void *) sw->drom;
u16 pos = sizeof(*header);
u16 drom_size = header->data_len + TB_DROM_DATA_START;
+ int res = 0;

while (pos < drom_size) {
struct tb_drom_entry_header *entry = (void *) (sw->drom + pos);
@@ -356,7 +348,15 @@ static int tb_drom_parse_entries(struct tb_switch *sw)
return -EIO;
}

- tb_drom_parse_entry(sw, entry);
+ switch (entry->type) {
+ case TB_DROM_ENTRY_GENERIC:
+ break;
+ case TB_DROM_ENTRY_PORT:
+ res = tb_drom_parse_entry_port(sw, entry);
+ break;
+ }
+ if (res)
+ return res;

pos += entry->len;
}
--
2.11.0

2017-06-06 12:26:17

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 19/27] thunderbolt: Add new Thunderbolt PCI IDs

Add Intel Win Ridge (Thunderbolt 2) and Alpine Ridge (Thunderbolt 3)
controller PCI IDs to the list of supported devices.

This code is based on the work done by Amir Levy and Michael Jamet.

Signed-off-by: Michael Jamet <[email protected]>
Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/nhi.c | 11 +++++++++++
drivers/thunderbolt/nhi.h | 17 +++++++++++++++++
drivers/thunderbolt/switch.c | 19 ++++++++++++++-----
3 files changed, 42 insertions(+), 5 deletions(-)

diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index c1113a3c4128..fa4c2745dba2 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -787,6 +787,17 @@ static struct pci_device_id nhi_ids[] = {
.device = PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI,
.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID,
},
+
+ /* Thunderbolt 3 */
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_USBONLY_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_USBONLY_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_USBONLY_NHI) },
+
{ 0,}
};

diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
index 630f44140530..8bd9b4e5a0b1 100644
--- a/drivers/thunderbolt/nhi.h
+++ b/drivers/thunderbolt/nhi.h
@@ -143,4 +143,21 @@ static inline int ring_tx(struct tb_ring *ring, struct ring_frame *frame)
return __ring_enqueue(ring, frame);
}

+/*
+ * PCI IDs used in this driver from Win Ridge forward. There is no
+ * need for the PCI quirk anymore as we will use ICM also on Apple
+ * hardware.
+ */
+#define PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_NHI 0x157d
+#define PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_BRIDGE 0x157e
+#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI 0x15bf
+#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_BRIDGE 0x15c0
+#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI 0x15d2
+#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE 0x15d3
+#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI 0x15d9
+#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE 0x15da
+#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_USBONLY_NHI 0x15dc
+#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_USBONLY_NHI 0x15dd
+#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_USBONLY_NHI 0x15de
+
#endif
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 11f16a141686..1497518aceff 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -491,13 +491,22 @@ int tb_switch_configure(struct tb_switch *sw)
tb_sw_warn(sw, "unknown switch vendor id %#x\n",
sw->config.vendor_id);

- if (sw->config.device_id != PCI_DEVICE_ID_INTEL_LIGHT_RIDGE &&
- sw->config.device_id != PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C &&
- sw->config.device_id != PCI_DEVICE_ID_INTEL_PORT_RIDGE &&
- sw->config.device_id != PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_BRIDGE &&
- sw->config.device_id != PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_BRIDGE)
+ switch (sw->config.device_id) {
+ case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
+ case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
+ case PCI_DEVICE_ID_INTEL_PORT_RIDGE:
+ case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE:
+ break;
+
+ default:
tb_sw_warn(sw, "unsupported switch device id %#x\n",
sw->config.device_id);
+ }

sw->config.enabled = 1;

--
2.11.0

2017-06-06 12:26:34

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 10/27] thunderbolt: Fail switch adding operation if reading DROM fails

All non-root switches are expected to have a DROM, so if the operation
fails, it might be because the user unplugged the device. There is no
point in continuing to add the switch in that case. Just bail out.

For root switches (hosts) the DROM is either retrieved from an EFI
variable, NVM, or hard-coded.

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/switch.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 0475bfc6c208..2390f08b94da 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -535,8 +535,11 @@ int tb_switch_add(struct tb_switch *sw)
int i, ret;

/* read drom */
- if (tb_drom_read(sw))
- tb_sw_warn(sw, "tb_eeprom_read_rom failed, continuing\n");
+ ret = tb_drom_read(sw);
+ if (ret) {
+ tb_sw_warn(sw, "tb_eeprom_read_rom failed\n");
+ return ret;
+ }
tb_sw_info(sw, "uid: %#llx\n", sw->uid);

tb_switch_set_uuid(sw);
--
2.11.0

2017-06-06 12:26:26

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 16/27] thunderbolt: Expose make_header() to other files

We will be using this function in files introduced in subsequent
patches. While at it, the function is renamed to tb_cfg_make_header()
to follow the naming of tb_cfg_get_route().
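
To illustrate why the helper keeps the overflow check, here is a tiny
standalone model (illustrative only: the field width and names below
are made up; the point is that route_hi in the real header is narrower
than a full 32 bits):

#include <assert.h>
#include <stdint.h>

struct hdr {
	uint32_t route_hi : 24;	/* hypothetical width, narrower than 32 bits */
	uint32_t route_lo;
};

static uint64_t hdr_get_route(const struct hdr *h)
{
	return (uint64_t)h->route_hi << 32 | h->route_lo;
}

static struct hdr hdr_make(uint64_t route)
{
	struct hdr h = {
		.route_hi = route >> 32,
		.route_lo = (uint32_t)route,
	};

	/* A route wider than the bitfield would not survive the round trip. */
	assert(hdr_get_route(&h) == route);
	return h;
}

int main(void)
{
	struct hdr h = hdr_make(0x1020304050ULL);

	return hdr_get_route(&h) != 0x1020304050ULL;
}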

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/ctl.c | 19 ++++---------------
drivers/thunderbolt/ctl.h | 11 +++++++++++
2 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index 8352ee8662aa..c6633da582b8 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -54,17 +54,6 @@ struct tb_ctl {

/* utility functions */

-static struct tb_cfg_header make_header(u64 route)
-{
- struct tb_cfg_header header = {
- .route_hi = route >> 32,
- .route_lo = route,
- };
- /* check for overflow, route_hi is not 32 bits! */
- WARN_ON(tb_cfg_get_route(&header) != route);
- return header;
-}
-
static int check_header(struct ctl_pkg *pkg, u32 len, enum tb_cfg_pkg_type type,
u64 route)
{
@@ -501,7 +490,7 @@ int tb_cfg_error(struct tb_ctl *ctl, u64 route, u32 port,
enum tb_cfg_error error)
{
struct cfg_error_pkg pkg = {
- .header = make_header(route),
+ .header = tb_cfg_make_header(route),
.port = port,
.error = error,
};
@@ -520,7 +509,7 @@ struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
int timeout_msec)
{
int err;
- struct cfg_reset_pkg request = { .header = make_header(route) };
+ struct cfg_reset_pkg request = { .header = tb_cfg_make_header(route) };
struct tb_cfg_header reply;

err = tb_ctl_tx(ctl, &request, sizeof(request), TB_CFG_PKG_RESET);
@@ -542,7 +531,7 @@ struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,
{
struct tb_cfg_result res = { 0 };
struct cfg_read_pkg request = {
- .header = make_header(route),
+ .header = tb_cfg_make_header(route),
.addr = {
.port = port,
.space = space,
@@ -579,7 +568,7 @@ struct tb_cfg_result tb_cfg_write_raw(struct tb_ctl *ctl, const void *buffer,
{
struct tb_cfg_result res = { 0 };
struct cfg_write_pkg request = {
- .header = make_header(route),
+ .header = tb_cfg_make_header(route),
.addr = {
.port = port,
.space = space,
diff --git a/drivers/thunderbolt/ctl.h b/drivers/thunderbolt/ctl.h
index 9812b1c86d4f..914da86ec77d 100644
--- a/drivers/thunderbolt/ctl.h
+++ b/drivers/thunderbolt/ctl.h
@@ -43,6 +43,17 @@ static inline u64 tb_cfg_get_route(const struct tb_cfg_header *header)
return (u64) header->route_hi << 32 | header->route_lo;
}

+static inline struct tb_cfg_header tb_cfg_make_header(u64 route)
+{
+ struct tb_cfg_header header = {
+ .route_hi = route >> 32,
+ .route_lo = route,
+ };
+ /* check for overflow, route_hi is not 32 bits! */
+ WARN_ON(tb_cfg_get_route(&header) != route);
+ return header;
+}
+
int tb_cfg_error(struct tb_ctl *ctl, u64 route, u32 port,
enum tb_cfg_error error);
struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
--
2.11.0

2017-06-06 12:26:32

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 04/27] thunderbolt: Do not warn about newer DROM versions

DROM version 2 is compatible with the previous generation so no need to
warn about that.

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/eeprom.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
index e4e64b130514..eb2179c98b09 100644
--- a/drivers/thunderbolt/eeprom.c
+++ b/drivers/thunderbolt/eeprom.c
@@ -488,7 +488,7 @@ int tb_drom_read(struct tb_switch *sw)
goto err;
}

- if (header->device_rom_revision > 1)
+ if (header->device_rom_revision > 2)
tb_sw_warn(sw, "drom device_rom_revision %#x unknown\n",
header->device_rom_revision);

--
2.11.0

2017-06-06 12:26:50

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 18/27] thunderbolt: Rework control channel to be more reliable

If a request times out, the response might arrive right after the
request has failed. This response is pushed to the kfifo and the next
request will read it instead. Since it most likely will not pass our
validation checks in parse_header(), the next request will fail as
well, and the response to that request will be pushed to the kfifo, ad
infinitum.

We end up in a situation where all requests fail and no devices can be
added anymore until the driver is unloaded and reloaded again.

To overcome this, rework the control channel so that we have a queue of
outstanding requests. Each request is handled in turn and the response
is validated against what is expected. Unexpected packets (for example,
responses for requests that have already timed out) are dropped. This
model is copied from the Greybus implementation with small changes here
and there to make it cope with Thunderbolt control packets.

In addition, configuration packets carry a sequence number which the
switch is supposed to copy from the request to the response. We use
this to drop responses that have already timed out. Taking advantage of
the sequence number, we automatically retry configuration reads/writes
4 times before giving up.
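
As a rough sketch of the resulting retry flow (standalone C, not the
driver code; ctl_send() and ctl_recv() below are hypothetical stand-ins
for the real control channel primitives), each attempt carries a
sequence number and only a reply carrying the matching number completes
the request:

#include <errno.h>

#define TB_CTL_RETRIES	4

struct reply {
	int err;		/* error reported by the far end, 0 on success */
	unsigned int seq;	/* sequence number copied back by the switch */
};

/* Hypothetical transport hooks, declarations only for the sketch. */
int ctl_send(unsigned int seq);
int ctl_recv(struct reply *r, int timeout_msec);

static int config_request(void)
{
	unsigned int seq;
	int ret = -ETIMEDOUT;

	for (seq = 0; seq < TB_CTL_RETRIES; seq++) {
		struct reply r;

		ret = ctl_send(seq);
		if (ret)
			return ret;

		/* Drop stale replies; only a matching seq completes us. */
		do {
			ret = ctl_recv(&r, 500);
		} while (!ret && r.seq != seq);

		if (ret == -ETIMEDOUT)
			continue;	/* retry with the next sequence number */
		if (ret)
			return ret;

		return r.err;
	}

	return ret;
}

The driver does the matching in the RX path against the queue of
outstanding requests rather than in the caller, but the idea is the
same.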

Also, a timeout is not a programming error, so there is no need to
trigger a scary backtrace (WARN); instead we just log a warning. After
all, Thunderbolt devices are hot-pluggable by definition, which means
the user can unplug a device at any time and that is totally
acceptable.

With this change there is no need to take the global domain lock when
sending configuration packets anymore. This is useful when we add
support for cross-domain (XDomain) communication later on.

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/ctl.c | 477 +++++++++++++++++++++++++++++++++++++++-------
drivers/thunderbolt/ctl.h | 65 +++++++
drivers/thunderbolt/tb.h | 2 +-
3 files changed, 473 insertions(+), 71 deletions(-)

diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index 5417ed244edc..27c30ff79a84 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -5,22 +5,17 @@
*/

#include <linux/crc32.h>
+#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/pci.h>
#include <linux/dmapool.h>
#include <linux/workqueue.h>
-#include <linux/kfifo.h>

#include "ctl.h"


-struct ctl_pkg {
- struct tb_ctl *ctl;
- void *buffer;
- struct ring_frame frame;
-};
-
-#define TB_CTL_RX_PKG_COUNT 10
+#define TB_CTL_RX_PKG_COUNT 10
+#define TB_CTL_RETRIES 4

/**
* struct tb_cfg - thunderbolt control channel
@@ -32,8 +27,9 @@ struct tb_ctl {

struct dma_pool *frame_pool;
struct ctl_pkg *rx_packets[TB_CTL_RX_PKG_COUNT];
- DECLARE_KFIFO(response_fifo, struct ctl_pkg*, 16);
- struct completion response_ready;
+ struct mutex request_queue_lock;
+ struct list_head request_queue;
+ bool running;

event_cb callback;
void *callback_data;
@@ -55,10 +51,121 @@ struct tb_ctl {
#define tb_ctl_dbg(ctl, format, arg...) \
dev_dbg(&(ctl)->nhi->pdev->dev, format, ## arg)

+static DECLARE_WAIT_QUEUE_HEAD(tb_cfg_request_cancel_queue);
+/* Serializes access to request kref_get/put */
+static DEFINE_MUTEX(tb_cfg_request_lock);
+
+/**
+ * tb_cfg_request_alloc() - Allocates a new config request
+ *
+ * This is a refcounted object so when you are done with it, call
+ * tb_cfg_request_put() on it.
+ */
+struct tb_cfg_request *tb_cfg_request_alloc(void)
+{
+ struct tb_cfg_request *req;
+
+ req = kzalloc(sizeof(*req), GFP_KERNEL);
+ if (!req)
+ return NULL;
+
+ kref_init(&req->kref);
+
+ return req;
+}
+
+/**
+ * tb_cfg_request_get() - Increase refcount of a request
+ * @req: Request whose refcount is increased
+ */
+void tb_cfg_request_get(struct tb_cfg_request *req)
+{
+ mutex_lock(&tb_cfg_request_lock);
+ kref_get(&req->kref);
+ mutex_unlock(&tb_cfg_request_lock);
+}
+
+static void tb_cfg_request_destroy(struct kref *kref)
+{
+ struct tb_cfg_request *req = container_of(kref, typeof(*req), kref);
+
+ kfree(req);
+}
+
+/**
+ * tb_cfg_request_put() - Decrease refcount and possibly release the request
+ * @req: Request whose refcount is decreased
+ *
+ * Call this function when you are done with the request. When refcount
+ * goes to %0 the object is released.
+ */
+void tb_cfg_request_put(struct tb_cfg_request *req)
+{
+ mutex_lock(&tb_cfg_request_lock);
+ kref_put(&req->kref, tb_cfg_request_destroy);
+ mutex_unlock(&tb_cfg_request_lock);
+}
+
+static int tb_cfg_request_enqueue(struct tb_ctl *ctl,
+ struct tb_cfg_request *req)
+{
+ WARN_ON(test_bit(TB_CFG_REQUEST_ACTIVE, &req->flags));
+ WARN_ON(req->ctl);
+
+ mutex_lock(&ctl->request_queue_lock);
+ if (!ctl->running) {
+ mutex_unlock(&ctl->request_queue_lock);
+ return -ENOTCONN;
+ }
+ req->ctl = ctl;
+ list_add_tail(&req->list, &ctl->request_queue);
+ set_bit(TB_CFG_REQUEST_ACTIVE, &req->flags);
+ mutex_unlock(&ctl->request_queue_lock);
+ return 0;
+}
+
+static void tb_cfg_request_dequeue(struct tb_cfg_request *req)
+{
+ struct tb_ctl *ctl = req->ctl;
+
+ mutex_lock(&ctl->request_queue_lock);
+ list_del(&req->list);
+ clear_bit(TB_CFG_REQUEST_ACTIVE, &req->flags);
+ if (test_bit(TB_CFG_REQUEST_CANCELED, &req->flags))
+ wake_up(&tb_cfg_request_cancel_queue);
+ mutex_unlock(&ctl->request_queue_lock);
+}
+
+static bool tb_cfg_request_is_active(struct tb_cfg_request *req)
+{
+ return test_bit(TB_CFG_REQUEST_ACTIVE, &req->flags);
+}
+
+static struct tb_cfg_request *
+tb_cfg_request_find(struct tb_ctl *ctl, struct ctl_pkg *pkg)
+{
+ struct tb_cfg_request *req;
+ bool found = false;
+
+ mutex_lock(&pkg->ctl->request_queue_lock);
+ list_for_each_entry(req, &pkg->ctl->request_queue, list) {
+ tb_cfg_request_get(req);
+ if (req->match(req, pkg)) {
+ found = true;
+ break;
+ }
+ tb_cfg_request_put(req);
+ }
+ mutex_unlock(&pkg->ctl->request_queue_lock);
+
+ return found ? req : NULL;
+}
+
/* utility functions */

-static int check_header(struct ctl_pkg *pkg, u32 len, enum tb_cfg_pkg_type type,
- u64 route)
+
+static int check_header(const struct ctl_pkg *pkg, u32 len,
+ enum tb_cfg_pkg_type type, u64 route)
{
struct tb_cfg_header *header = pkg->buffer;

@@ -100,8 +207,6 @@ static int check_config_address(struct tb_cfg_address addr,
if (WARN(length != addr.length, "wrong space (expected %x, got %x\n)",
length, addr.length))
return -EIO;
- if (WARN(addr.seq, "addr.seq is %#x\n", addr.seq))
- return -EIO;
/*
* We cannot check addr->port as it is set to the upstream port of the
* sender.
@@ -109,7 +214,7 @@ static int check_config_address(struct tb_cfg_address addr,
return 0;
}

-static struct tb_cfg_result decode_error(struct ctl_pkg *response)
+static struct tb_cfg_result decode_error(const struct ctl_pkg *response)
{
struct cfg_error_pkg *pkg = response->buffer;
struct tb_cfg_result res = { 0 };
@@ -130,7 +235,7 @@ static struct tb_cfg_result decode_error(struct ctl_pkg *response)

}

-static struct tb_cfg_result parse_header(struct ctl_pkg *pkg, u32 len,
+static struct tb_cfg_result parse_header(const struct ctl_pkg *pkg, u32 len,
enum tb_cfg_pkg_type type, u64 route)
{
struct tb_cfg_header *header = pkg->buffer;
@@ -198,7 +303,7 @@ static void be32_to_cpu_array(u32 *dst, __be32 *src, size_t len)
dst[i] = be32_to_cpu(src[i]);
}

-static __be32 tb_crc(void *data, size_t len)
+static __be32 tb_crc(const void *data, size_t len)
{
return cpu_to_be32(~__crc32c_le(~0, data, len));
}
@@ -315,6 +420,7 @@ static void tb_ctl_rx_callback(struct tb_ring *ring, struct ring_frame *frame,
bool canceled)
{
struct ctl_pkg *pkg = container_of(frame, typeof(*pkg), frame);
+ struct tb_cfg_request *req;
__be32 crc32;

if (canceled)
@@ -361,48 +467,135 @@ static void tb_ctl_rx_callback(struct tb_ring *ring, struct ring_frame *frame,
goto rx;

default:
- tb_ctl_dbg(pkg->ctl, "RX: unknown package %#x, dropping\n",
- frame->eof);
- goto rx;
+ break;
}

- if (!kfifo_put(&pkg->ctl->response_fifo, pkg)) {
- tb_ctl_err(pkg->ctl, "RX: fifo is full\n");
- goto rx;
+ /*
+ * The received packet will be processed only if there is an
+ * active request and the packet is what is expected. This
+ * prevents packets such as replies coming after timeout has
+ * triggered from messing with the active requests.
+ */
+ req = tb_cfg_request_find(pkg->ctl, pkg);
+ if (req) {
+ if (req->copy(req, pkg))
+ schedule_work(&req->work);
+ tb_cfg_request_put(req);
}
- complete(&pkg->ctl->response_ready);
- return;
+
rx:
tb_ctl_rx_submit(pkg);
}

+static void tb_cfg_request_work(struct work_struct *work)
+{
+ struct tb_cfg_request *req = container_of(work, typeof(*req), work);
+
+ if (!test_bit(TB_CFG_REQUEST_CANCELED, &req->flags))
+ req->callback(req->callback_data);
+
+ tb_cfg_request_dequeue(req);
+ tb_cfg_request_put(req);
+}
+
/**
- * tb_ctl_rx() - receive a packet from the control channel
+ * tb_cfg_request() - Start control request without waiting for it to complete
+ * @ctl: Control channel to use
+ * @req: Request to start
+ * @callback: Callback called when the request is completed
+ * @callback_data: Data to be passed to @callback
+ *
+ * This queues @req on the given control channel without waiting for it
+ * to complete. When the request completes @callback is called.
*/
-static struct tb_cfg_result tb_ctl_rx(struct tb_ctl *ctl, void *buffer,
- size_t length, int timeout_msec,
- u64 route, enum tb_cfg_pkg_type type)
+int tb_cfg_request(struct tb_ctl *ctl, struct tb_cfg_request *req,
+ void (*callback)(void *), void *callback_data)
{
- struct tb_cfg_result res;
- struct ctl_pkg *pkg;
+ int ret;

- if (!wait_for_completion_timeout(&ctl->response_ready,
- msecs_to_jiffies(timeout_msec))) {
- tb_ctl_WARN(ctl, "RX: timeout\n");
- return (struct tb_cfg_result) { .err = -ETIMEDOUT };
- }
- if (!kfifo_get(&ctl->response_fifo, &pkg)) {
- tb_ctl_WARN(ctl, "empty kfifo\n");
- return (struct tb_cfg_result) { .err = -EIO };
- }
+ req->flags = 0;
+ req->callback = callback;
+ req->callback_data = callback_data;
+ INIT_WORK(&req->work, tb_cfg_request_work);
+ INIT_LIST_HEAD(&req->list);

- res = parse_header(pkg, length, type, route);
- if (!res.err)
- memcpy(buffer, pkg->buffer, length);
- tb_ctl_rx_submit(pkg);
- return res;
+ tb_cfg_request_get(req);
+ ret = tb_cfg_request_enqueue(ctl, req);
+ if (ret)
+ goto err_put;
+
+ ret = tb_ctl_tx(ctl, req->request, req->request_size,
+ req->request_type);
+ if (ret)
+ goto err_dequeue;
+
+ if (!req->response)
+ schedule_work(&req->work);
+
+ return 0;
+
+err_dequeue:
+ tb_cfg_request_dequeue(req);
+err_put:
+ tb_cfg_request_put(req);
+
+ return ret;
+}
+
+/**
+ * tb_cfg_request_cancel() - Cancel a control request
+ * @req: Request to cancel
+ * @err: Error to assign to the request
+ *
+ * This function can be used to cancel an ongoing request. It will wait
+ * until the request is no longer active.
+ */
+void tb_cfg_request_cancel(struct tb_cfg_request *req, int err)
+{
+ set_bit(TB_CFG_REQUEST_CANCELED, &req->flags);
+ schedule_work(&req->work);
+ wait_event(tb_cfg_request_cancel_queue, !tb_cfg_request_is_active(req));
+ req->result.err = err;
}

+static void tb_cfg_request_complete(void *data)
+{
+ complete(data);
+}
+
+/**
+ * tb_cfg_request_sync() - Start control request and wait until it completes
+ * @ctl: Control channel to use
+ * @req: Request to start
+ * @timeout_msec: Timeout (in ms) how long to wait for @req to complete
+ *
+ * Starts a control request and waits until it completes. If the timeout
+ * triggers, the request is canceled before the function returns. Note the
+ * caller needs to make sure only one message for a given switch is active
+ * at a time.
+ */
+struct tb_cfg_result tb_cfg_request_sync(struct tb_ctl *ctl,
+ struct tb_cfg_request *req,
+ int timeout_msec)
+{
+ unsigned long timeout = msecs_to_jiffies(timeout_msec);
+ struct tb_cfg_result res = { 0 };
+ DECLARE_COMPLETION_ONSTACK(done);
+ int ret;
+
+ ret = tb_cfg_request(ctl, req, tb_cfg_request_complete, &done);
+ if (ret) {
+ res.err = ret;
+ return res;
+ }
+
+ if (!wait_for_completion_timeout(&done, timeout))
+ tb_cfg_request_cancel(req, -ETIMEDOUT);
+
+ flush_work(&req->work);
+
+ return req->result;
+}

/* public interface, alloc/start/stop/free */

@@ -423,8 +616,8 @@ struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data)
ctl->callback = cb;
ctl->callback_data = cb_data;

- init_completion(&ctl->response_ready);
- INIT_KFIFO(ctl->response_fifo);
+ mutex_init(&ctl->request_queue_lock);
+ INIT_LIST_HEAD(&ctl->request_queue);
ctl->frame_pool = dma_pool_create("thunderbolt_ctl", &nhi->pdev->dev,
TB_FRAME_SIZE, 4, 0);
if (!ctl->frame_pool)
@@ -492,6 +685,8 @@ void tb_ctl_start(struct tb_ctl *ctl)
ring_start(ctl->rx);
for (i = 0; i < TB_CTL_RX_PKG_COUNT; i++)
tb_ctl_rx_submit(ctl->rx_packets[i]);
+
+ ctl->running = true;
}

/**
@@ -504,12 +699,16 @@ void tb_ctl_start(struct tb_ctl *ctl)
*/
void tb_ctl_stop(struct tb_ctl *ctl)
{
+ mutex_lock(&ctl->request_queue_lock);
+ ctl->running = false;
+ mutex_unlock(&ctl->request_queue_lock);
+
ring_stop(ctl->rx);
ring_stop(ctl->tx);

- if (!kfifo_is_empty(&ctl->response_fifo))
- tb_ctl_WARN(ctl, "dangling response in response_fifo\n");
- kfifo_reset(&ctl->response_fifo);
+ if (!list_empty(&ctl->request_queue))
+ tb_ctl_WARN(ctl, "dangling request in request_queue\n");
+ INIT_LIST_HEAD(&ctl->request_queue);
tb_ctl_info(ctl, "control channel stopped\n");
}

@@ -532,6 +731,49 @@ int tb_cfg_error(struct tb_ctl *ctl, u64 route, u32 port,
return tb_ctl_tx(ctl, &pkg, sizeof(pkg), TB_CFG_PKG_ERROR);
}

+static bool tb_cfg_match(const struct tb_cfg_request *req,
+ const struct ctl_pkg *pkg)
+{
+ u64 route = tb_cfg_get_route(pkg->buffer) & ~BIT_ULL(63);
+
+ if (pkg->frame.eof == TB_CFG_PKG_ERROR)
+ return true;
+
+ if (pkg->frame.eof != req->response_type)
+ return false;
+ if (route != tb_cfg_get_route(req->request))
+ return false;
+ if (pkg->frame.size != req->response_size)
+ return false;
+
+ if (pkg->frame.eof == TB_CFG_PKG_READ ||
+ pkg->frame.eof == TB_CFG_PKG_WRITE) {
+ const struct cfg_read_pkg *req_hdr = req->request;
+ const struct cfg_read_pkg *res_hdr = pkg->buffer;
+
+ if (req_hdr->addr.seq != res_hdr->addr.seq)
+ return false;
+ }
+
+ return true;
+}
+
+static bool tb_cfg_copy(struct tb_cfg_request *req, const struct ctl_pkg *pkg)
+{
+ struct tb_cfg_result res;
+
+ /* Now make sure it is in expected format */
+ res = parse_header(pkg, req->response_size, req->response_type,
+ tb_cfg_get_route(req->request));
+ if (!res.err)
+ memcpy(req->response, pkg->buffer, req->response_size);
+
+ req->result = res;
+
+ /* Always complete when first response is received */
+ return true;
+}
+
/**
* tb_cfg_reset() - send a reset packet and wait for a response
*
@@ -542,16 +784,31 @@ int tb_cfg_error(struct tb_ctl *ctl, u64 route, u32 port,
struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
int timeout_msec)
{
- int err;
struct cfg_reset_pkg request = { .header = tb_cfg_make_header(route) };
+ struct tb_cfg_result res = { 0 };
struct tb_cfg_header reply;
+ struct tb_cfg_request *req;
+
+ req = tb_cfg_request_alloc();
+ if (!req) {
+ res.err = -ENOMEM;
+ return res;
+ }
+
+ req->match = tb_cfg_match;
+ req->copy = tb_cfg_copy;
+ req->request = &request;
+ req->request_size = sizeof(request);
+ req->request_type = TB_CFG_PKG_RESET;
+ req->response = &reply;
+ req->response_size = sizeof(reply);
+ req->response_type = TB_CFG_PKG_RESET;
+
+ res = tb_cfg_request_sync(ctl, req, timeout_msec);

- err = tb_ctl_tx(ctl, &request, sizeof(request), TB_CFG_PKG_RESET);
- if (err)
- return (struct tb_cfg_result) { .err = err };
+ tb_cfg_request_put(req);

- return tb_ctl_rx(ctl, &reply, sizeof(reply), timeout_msec, route,
- TB_CFG_PKG_RESET);
+ return res;
}

/**
@@ -574,13 +831,39 @@ struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,
},
};
struct cfg_write_pkg reply;
+ int retries = 0;

- res.err = tb_ctl_tx(ctl, &request, sizeof(request), TB_CFG_PKG_READ);
- if (res.err)
- return res;
+ while (retries < TB_CTL_RETRIES) {
+ struct tb_cfg_request *req;
+
+ req = tb_cfg_request_alloc();
+ if (!req) {
+ res.err = -ENOMEM;
+ return res;
+ }
+
+ request.addr.seq = retries++;
+
+ req->match = tb_cfg_match;
+ req->copy = tb_cfg_copy;
+ req->request = &request;
+ req->request_size = sizeof(request);
+ req->request_type = TB_CFG_PKG_READ;
+ req->response = &reply;
+ req->response_size = 12 + 4 * length;
+ req->response_type = TB_CFG_PKG_READ;
+
+ res = tb_cfg_request_sync(ctl, req, timeout_msec);
+
+ tb_cfg_request_put(req);
+
+ if (res.err != -ETIMEDOUT)
+ break;
+
+ /* Wait a bit (arbitrary time) until we send a retry */
+ usleep_range(10, 100);
+ }

- res = tb_ctl_rx(ctl, &reply, 12 + 4 * length, timeout_msec, route,
- TB_CFG_PKG_READ);
if (res.err)
return res;

@@ -611,15 +894,41 @@ struct tb_cfg_result tb_cfg_write_raw(struct tb_ctl *ctl, const void *buffer,
},
};
struct cfg_read_pkg reply;
+ int retries = 0;

memcpy(&request.data, buffer, length * 4);

- res.err = tb_ctl_tx(ctl, &request, 12 + 4 * length, TB_CFG_PKG_WRITE);
- if (res.err)
- return res;
+ while (retries < TB_CTL_RETRIES) {
+ struct tb_cfg_request *req;
+
+ req = tb_cfg_request_alloc();
+ if (!req) {
+ res.err = -ENOMEM;
+ return res;
+ }
+
+ request.addr.seq = retries++;
+
+ req->match = tb_cfg_match;
+ req->copy = tb_cfg_copy;
+ req->request = &request;
+ req->request_size = 12 + 4 * length;
+ req->request_type = TB_CFG_PKG_WRITE;
+ req->response = &reply;
+ req->response_size = sizeof(reply);
+ req->response_type = TB_CFG_PKG_WRITE;
+
+ res = tb_cfg_request_sync(ctl, req, timeout_msec);
+
+ tb_cfg_request_put(req);
+
+ if (res.err != -ETIMEDOUT)
+ break;
+
+ /* Wait a bit (arbitrary time) until we send a retry */
+ usleep_range(10, 100);
+ }

- res = tb_ctl_rx(ctl, &reply, sizeof(reply), timeout_msec, route,
- TB_CFG_PKG_WRITE);
if (res.err)
return res;

@@ -633,11 +942,25 @@ int tb_cfg_read(struct tb_ctl *ctl, void *buffer, u64 route, u32 port,
{
struct tb_cfg_result res = tb_cfg_read_raw(ctl, buffer, route, port,
space, offset, length, TB_CFG_DEFAULT_TIMEOUT);
- if (res.err == 1) {
+ switch (res.err) {
+ case 0:
+ /* Success */
+ break;
+
+ case 1:
+ /* Thunderbolt error, tb_error holds the actual number */
tb_cfg_print_error(ctl, &res);
return -EIO;
+
+ case -ETIMEDOUT:
+ tb_ctl_warn(ctl, "timeout reading config space %u from %#x\n",
+ space, offset);
+ break;
+
+ default:
+ WARN(1, "tb_cfg_read: %d\n", res.err);
+ break;
}
- WARN(res.err, "tb_cfg_read: %d\n", res.err);
return res.err;
}

@@ -646,11 +969,25 @@ int tb_cfg_write(struct tb_ctl *ctl, const void *buffer, u64 route, u32 port,
{
struct tb_cfg_result res = tb_cfg_write_raw(ctl, buffer, route, port,
space, offset, length, TB_CFG_DEFAULT_TIMEOUT);
- if (res.err == 1) {
+ switch (res.err) {
+ case 0:
+ /* Success */
+ break;
+
+ case 1:
+ /* Thunderbolt error, tb_error holds the actual number */
tb_cfg_print_error(ctl, &res);
return -EIO;
+
+ case -ETIMEDOUT:
+ tb_ctl_warn(ctl, "timeout writing config space %u to %#x\n",
+ space, offset);
+ break;
+
+ default:
+ WARN(1, "tb_cfg_write: %d\n", res.err);
+ break;
}
- WARN(res.err, "tb_cfg_write: %d\n", res.err);
return res.err;
}

diff --git a/drivers/thunderbolt/ctl.h b/drivers/thunderbolt/ctl.h
index 2b23e030a85b..36fd28b1c1c5 100644
--- a/drivers/thunderbolt/ctl.h
+++ b/drivers/thunderbolt/ctl.h
@@ -7,6 +7,8 @@
#ifndef _TB_CFG
#define _TB_CFG

+#include <linux/kref.h>
+
#include "nhi.h"
#include "tb_msgs.h"

@@ -39,6 +41,69 @@ struct tb_cfg_result {
enum tb_cfg_error tb_error; /* valid if err == 1 */
};

+struct ctl_pkg {
+ struct tb_ctl *ctl;
+ void *buffer;
+ struct ring_frame frame;
+};
+
+/**
+ * struct tb_cfg_request - Control channel request
+ * @kref: Reference count
+ * @ctl: Pointer to the control channel structure. Only set when the
+ * request is queued.
+ * @request_size: Size of the request packet (in bytes)
+ * @request_type: Type of the request packet
+ * @response: Response is stored here
+ * @response_size: Maximum size of one response packet
+ * @response_type: Expected type of the response packet
+ * @npackets: Number of packets expected to be returned with this request
+ * @match: Function used to match the incoming packet
+ * @copy: Function used to copy the incoming packet to @response
+ * @callback: Callback called when the request is finished successfully
+ * @callback_data: Data to be passed to @callback
+ * @flags: Flags for the request
+ * @work: Work item used to complete the request
+ * @result: Result after the request has been completed
+ * @list: Requests are queued using this field
+ *
+ * An arbitrary request over Thunderbolt control channel. For standard
+ * control channel messages, one should use tb_cfg_read/write() and
+ * friends if possible.
+ */
+struct tb_cfg_request {
+ struct kref kref;
+ struct tb_ctl *ctl;
+ const void *request;
+ size_t request_size;
+ enum tb_cfg_pkg_type request_type;
+ void *response;
+ size_t response_size;
+ enum tb_cfg_pkg_type response_type;
+ size_t npackets;
+ bool (*match)(const struct tb_cfg_request *req,
+ const struct ctl_pkg *pkg);
+ bool (*copy)(struct tb_cfg_request *req, const struct ctl_pkg *pkg);
+ void (*callback)(void *callback_data);
+ void *callback_data;
+ unsigned long flags;
+ struct work_struct work;
+ struct tb_cfg_result result;
+ struct list_head list;
+};
+
+#define TB_CFG_REQUEST_ACTIVE 0
+#define TB_CFG_REQUEST_CANCELED 1
+
+struct tb_cfg_request *tb_cfg_request_alloc(void);
+void tb_cfg_request_get(struct tb_cfg_request *req);
+void tb_cfg_request_put(struct tb_cfg_request *req);
+int tb_cfg_request(struct tb_ctl *ctl, struct tb_cfg_request *req,
+ void (*callback)(void *), void *callback_data);
+void tb_cfg_request_cancel(struct tb_cfg_request *req, int err);
+struct tb_cfg_result tb_cfg_request_sync(struct tb_ctl *ctl,
+ struct tb_cfg_request *req, int timeout_msec);
+
static inline u64 tb_cfg_get_route(const struct tb_cfg_header *header)
{
return (u64) header->route_hi << 32 | header->route_lo;
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 5bb9a5d60d2c..98a405384596 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -132,7 +132,7 @@ struct tb_cm_ops {
/**
* struct tb - main thunderbolt bus structure
* @dev: Domain device
- * @lock: Big lock. Must be held when accessing cfg or any struct
+ * @lock: Big lock. Must be held when accessing any struct
* tb_switch / struct tb_port.
* @nhi: Pointer to the NHI structure
* @ctl: Control channel for this domain
--
2.11.0

2017-06-06 12:26:49

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 23/27] thunderbolt: Do not touch the hardware if the NHI is gone on resume

On PCs the NHI host controller is only present when there is a device
connected. When the last device is disconnected the host controller will
disappear shortly (within 10s). If that happens while we are suspended
we should not try to touch the hardware anymore, so add a flag for this
and check it before we re-enable the rings.

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/nhi.c | 12 ++++++++++++
drivers/thunderbolt/nhi.h | 3 +++
2 files changed, 15 insertions(+)

diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index c358c074f925..14311535661d 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -403,6 +403,8 @@ void ring_start(struct tb_ring *ring)
{
mutex_lock(&ring->nhi->lock);
mutex_lock(&ring->lock);
+ if (ring->nhi->going_away)
+ goto err;
if (ring->running) {
dev_WARN(&ring->nhi->pdev->dev, "ring already started\n");
goto err;
@@ -449,6 +451,8 @@ void ring_stop(struct tb_ring *ring)
mutex_lock(&ring->lock);
dev_info(&ring->nhi->pdev->dev, "stopping %s %d\n",
RING_TYPE(ring), ring->hop);
+ if (ring->nhi->going_away)
+ goto err;
if (!ring->running) {
dev_WARN(&ring->nhi->pdev->dev, "%s %d already stopped\n",
RING_TYPE(ring), ring->hop);
@@ -653,6 +657,14 @@ static int nhi_resume_noirq(struct device *dev)
struct pci_dev *pdev = to_pci_dev(dev);
struct tb *tb = pci_get_drvdata(pdev);

+ /*
+ * Check that the device is still there. It may be that the user
+ * unplugged the last device, which causes the host controller to go
+ * away on PCs.
+ */
+ if (!pci_device_is_present(pdev))
+ tb->nhi->going_away = true;
+
return tb_domain_resume_noirq(tb);
}

diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
index 446ff6dac91d..953864ae0ab3 100644
--- a/drivers/thunderbolt/nhi.h
+++ b/drivers/thunderbolt/nhi.h
@@ -20,6 +20,8 @@
* @tx_rings: All Tx rings available on this host controller
* @rx_rings: All Rx rings available on this host controller
* @msix_ida: Used to allocate MSI-X vectors for rings
+ * @going_away: The host controller device is about to disappear so when
+ * this flag is set, avoid touching the hardware anymore.
* @interrupt_work: Work scheduled to handle ring interrupt when no
* MSI-X is used.
* @hop_count: Number of rings (end point hops) supported by NHI.
@@ -31,6 +33,7 @@ struct tb_nhi {
struct tb_ring **tx_rings;
struct tb_ring **rx_rings;
struct ida msix_ida;
+ bool going_away;
struct work_struct interrupt_work;
u32 hop_count;
};
--
2.11.0

2017-06-06 12:26:16

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 02/27] thunderbolt: No need to read UID of the root switch on resume

The root switch is part of the host controller and cannot be physically
removed, so there is no point in reading the UID again on resume to
check whether the root switch is still the same.

Suggested-by: Andreas Noever <[email protected]>
Signed-off-by: Mika Westerberg <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/switch.c | 29 ++++++++++++++++++-----------
1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index c6f30b1695a9..81f5164a6364 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -452,19 +452,26 @@ void tb_sw_set_unplugged(struct tb_switch *sw)
int tb_switch_resume(struct tb_switch *sw)
{
int i, err;
- u64 uid;
tb_sw_info(sw, "resuming switch\n");

- err = tb_drom_read_uid_only(sw, &uid);
- if (err) {
- tb_sw_warn(sw, "uid read failed\n");
- return err;
- }
- if (sw != sw->tb->root_switch && sw->uid != uid) {
- tb_sw_info(sw,
- "changed while suspended (uid %#llx -> %#llx)\n",
- sw->uid, uid);
- return -ENODEV;
+ /*
+ * Check the UID of the connected switches except for the root
+ * switch which we assume cannot be removed.
+ */
+ if (tb_route(sw)) {
+ u64 uid;
+
+ err = tb_drom_read_uid_only(sw, &uid);
+ if (err) {
+ tb_sw_warn(sw, "uid read failed\n");
+ return err;
+ }
+ if (sw->uid != uid) {
+ tb_sw_info(sw,
+ "changed while suspended (uid %#llx -> %#llx)\n",
+ sw->uid, uid);
+ return -ENODEV;
+ }
}

/* upload configuration */
--
2.11.0

2017-06-06 12:26:04

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 27/27] MAINTAINERS: Add maintainers for Thunderbolt driver

We will be helping Andreas to maintain the Thunderbolt driver.

Signed-off-by: Michael Jamet <[email protected]>
Signed-off-by: Yehezkel Bernat <[email protected]>
Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
MAINTAINERS | 3 +++
1 file changed, 3 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 7a28acd7f525..c29756cf0a76 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11328,6 +11328,9 @@ F: Documentation/tee.txt

THUNDERBOLT DRIVER
M: Andreas Noever <[email protected]>
+M: Michael Jamet <[email protected]>
+M: Mika Westerberg <[email protected]>
+M: Yehezkel Bernat <[email protected]>
S: Maintained
F: drivers/thunderbolt/

--
2.11.0

2017-06-06 12:28:12

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 20/27] thunderbolt: Add support for NHI mailbox

The host controller includes two sets of registers that are used to
communicate with the firmware. Add functions that can be used to access
these registers.
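
As a rough usage sketch (not part of the patch), a caller holding a
struct tb_nhi could combine the two helpers added below like this. The
function name, the command choice and the zero data word are purely
illustrative:

/* Hypothetical caller, for illustration only */
static int example_notify_fw(struct tb_nhi *nhi)
{
        /* Nothing to do unless the firmware runs the connection manager */
        if (nhi_mailbox_mode(nhi) != NHI_FW_CM_MODE)
                return 0;

        /* Tell the firmware the driver is unloading; data word unused here */
        return nhi_mailbox_cmd(nhi, NHI_MAILBOX_DRV_UNLOADS, 0);
}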

This code is based on the work done by Amir Levy and Michael Jamet.

Signed-off-by: Michael Jamet <[email protected]>
Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/nhi.c | 58 ++++++++++++++++++++++++++++++++++++++++++
drivers/thunderbolt/nhi.h | 16 ++++++++++++
drivers/thunderbolt/nhi_regs.h | 11 ++++++++
3 files changed, 85 insertions(+)

diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index fa4c2745dba2..c358c074f925 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -14,6 +14,7 @@
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/dmi.h>
+#include <linux/delay.h>

#include "nhi.h"
#include "nhi_regs.h"
@@ -28,6 +29,8 @@
#define MSIX_MIN_VECS 6
#define MSIX_MAX_VECS 16

+#define NHI_MAILBOX_TIMEOUT 500 /* ms */
+
static int ring_interrupt_index(struct tb_ring *ring)
{
int bit = ring->hop;
@@ -525,6 +528,61 @@ void ring_free(struct tb_ring *ring)
kfree(ring);
}

+/**
+ * nhi_mailbox_cmd() - Send a command through NHI mailbox
+ * @nhi: Pointer to the NHI structure
+ * @cmd: Command to send
+ * @data: Data to be sent with the command
+ *
+ * Sends mailbox command to the firmware running on NHI. Returns %0 in
+ * case of success and negative errno in case of failure.
+ */
+int nhi_mailbox_cmd(struct tb_nhi *nhi, enum nhi_mailbox_cmd cmd, u32 data)
+{
+ ktime_t timeout;
+ u32 val;
+
+ iowrite32(data, nhi->iobase + REG_INMAIL_DATA);
+
+ val = ioread32(nhi->iobase + REG_INMAIL_CMD);
+ val &= ~(REG_INMAIL_CMD_MASK | REG_INMAIL_ERROR);
+ val |= REG_INMAIL_OP_REQUEST | cmd;
+ iowrite32(val, nhi->iobase + REG_INMAIL_CMD);
+
+ timeout = ktime_add_ms(ktime_get(), NHI_MAILBOX_TIMEOUT);
+ do {
+ val = ioread32(nhi->iobase + REG_INMAIL_CMD);
+ if (!(val & REG_INMAIL_OP_REQUEST))
+ break;
+ usleep_range(10, 20);
+ } while (ktime_before(ktime_get(), timeout));
+
+ if (val & REG_INMAIL_OP_REQUEST)
+ return -ETIMEDOUT;
+ if (val & REG_INMAIL_ERROR)
+ return -EIO;
+
+ return 0;
+}
+
+/**
+ * nhi_mailbox_mode() - Return current firmware operation mode
+ * @nhi: Pointer to the NHI structure
+ *
+ * The function reads current firmware operation mode using NHI mailbox
+ * registers and returns it to the caller.
+ */
+enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi)
+{
+ u32 val;
+
+ val = ioread32(nhi->iobase + REG_OUTMAIL_CMD);
+ val &= REG_OUTMAIL_CMD_OPMODE_MASK;
+ val >>= REG_OUTMAIL_CMD_OPMODE_SHIFT;
+
+ return (enum nhi_fw_mode)val;
+}
+
static void nhi_interrupt_work(struct work_struct *work)
{
struct tb_nhi *nhi = container_of(work, typeof(*nhi), interrupt_work);
diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
index 8bd9b4e5a0b1..446ff6dac91d 100644
--- a/drivers/thunderbolt/nhi.h
+++ b/drivers/thunderbolt/nhi.h
@@ -143,6 +143,22 @@ static inline int ring_tx(struct tb_ring *ring, struct ring_frame *frame)
return __ring_enqueue(ring, frame);
}

+enum nhi_fw_mode {
+ NHI_FW_SAFE_MODE,
+ NHI_FW_AUTH_MODE,
+ NHI_FW_EP_MODE,
+ NHI_FW_CM_MODE,
+};
+
+enum nhi_mailbox_cmd {
+ NHI_MAILBOX_SAVE_DEVS = 0x05,
+ NHI_MAILBOX_DRV_UNLOADS = 0x07,
+ NHI_MAILBOX_ALLOW_ALL_DEVS = 0x23,
+};
+
+int nhi_mailbox_cmd(struct tb_nhi *nhi, enum nhi_mailbox_cmd cmd, u32 data);
+enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi);
+
/*
* PCI IDs used in this driver from Win Ridge forward. There is no
* need for the PCI quirk anymore as we will use ICM also on Apple
diff --git a/drivers/thunderbolt/nhi_regs.h b/drivers/thunderbolt/nhi_regs.h
index 48b98d3c7e6a..322fe1fa3a3c 100644
--- a/drivers/thunderbolt/nhi_regs.h
+++ b/drivers/thunderbolt/nhi_regs.h
@@ -107,4 +107,15 @@ struct ring_desc {
#define REG_DMA_MISC 0x39864
#define REG_DMA_MISC_INT_AUTO_CLEAR BIT(2)

+#define REG_INMAIL_DATA 0x39900
+
+#define REG_INMAIL_CMD 0x39904
+#define REG_INMAIL_CMD_MASK GENMASK(7, 0)
+#define REG_INMAIL_ERROR BIT(30)
+#define REG_INMAIL_OP_REQUEST BIT(31)
+
+#define REG_OUTMAIL_CMD 0x3990c
+#define REG_OUTMAIL_CMD_OPMODE_SHIFT 8
+#define REG_OUTMAIL_CMD_OPMODE_MASK GENMASK(11, 8)
+
#endif
--
2.11.0

2017-06-06 12:28:11

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 05/27] thunderbolt: Add MSI-X support

Intel Thunderbolt controllers support up to 16 MSI-X vectors. Using
MSI-X is preferred over MSI or legacy interrupts and may bring additional
performance because there is no need to check the status registers to
find out which interrupt was triggered.
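
As a worked example of the per-ring vector routing added below (the hop
numbers are made up): on an NHI with 12 hops, the Rx ring of hop 5 gets
interrupt index 5 + 12 = 17. Each 32-bit Interrupt Vector Allocation
register holds eight 4-bit entries, so the ring's MSI-X vector number is
written at

        step  = index / 8 * 4 = 8   (byte offset from REG_INT_VEC_ALLOC_BASE)
        shift = index % 8 * 4 = 4   (bit position inside that register)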

While there we convert comments in structs tb_ring and tb_nhi to follow
kernel-doc format more closely.

This code is based on the work done by Amir Levy and Michael Jamet.

Signed-off-by: Michael Jamet <[email protected]>
Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/ctl.c | 4 +-
drivers/thunderbolt/nhi.c | 165 +++++++++++++++++++++++++++++++++++------
drivers/thunderbolt/nhi.h | 56 +++++++++++---
drivers/thunderbolt/nhi_regs.h | 9 +++
4 files changed, 198 insertions(+), 36 deletions(-)

diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index 1031d97407a8..889a32dd21e7 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -488,11 +488,11 @@ struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, hotplug_cb cb, void *cb_data)
if (!ctl->frame_pool)
goto err;

- ctl->tx = ring_alloc_tx(nhi, 0, 10);
+ ctl->tx = ring_alloc_tx(nhi, 0, 10, RING_FLAG_NO_SUSPEND);
if (!ctl->tx)
goto err;

- ctl->rx = ring_alloc_rx(nhi, 0, 10);
+ ctl->rx = ring_alloc_rx(nhi, 0, 10, RING_FLAG_NO_SUSPEND);
if (!ctl->rx)
goto err;

diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index a8c20413dbda..ed75c49748f5 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -21,6 +21,12 @@

#define RING_TYPE(ring) ((ring)->is_tx ? "TX ring" : "RX ring")

+/*
+ * Minimal number of vectors when we use MSI-X. Two for control channel
+ * Rx/Tx and the remaining four are for cross domain DMA paths.
+ */
+#define MSIX_MIN_VECS 6
+#define MSIX_MAX_VECS 16

static int ring_interrupt_index(struct tb_ring *ring)
{
@@ -42,6 +48,37 @@ static void ring_interrupt_active(struct tb_ring *ring, bool active)
int bit = ring_interrupt_index(ring) & 31;
int mask = 1 << bit;
u32 old, new;
+
+ if (ring->irq > 0) {
+ u32 step, shift, ivr, misc;
+ void __iomem *ivr_base;
+ int index;
+
+ if (ring->is_tx)
+ index = ring->hop;
+ else
+ index = ring->hop + ring->nhi->hop_count;
+
+ /*
+ * Ask the hardware to clear interrupt status bits automatically
+ * since we already know which interrupt was triggered.
+ */
+ misc = ioread32(ring->nhi->iobase + REG_DMA_MISC);
+ if (!(misc & REG_DMA_MISC_INT_AUTO_CLEAR)) {
+ misc |= REG_DMA_MISC_INT_AUTO_CLEAR;
+ iowrite32(misc, ring->nhi->iobase + REG_DMA_MISC);
+ }
+
+ ivr_base = ring->nhi->iobase + REG_INT_VEC_ALLOC_BASE;
+ step = index / REG_INT_VEC_ALLOC_REGS * REG_INT_VEC_ALLOC_BITS;
+ shift = index % REG_INT_VEC_ALLOC_REGS * REG_INT_VEC_ALLOC_BITS;
+ ivr = ioread32(ivr_base + step);
+ ivr &= ~(REG_INT_VEC_ALLOC_MASK << shift);
+ if (active)
+ ivr |= ring->vector << shift;
+ iowrite32(ivr, ivr_base + step);
+ }
+
old = ioread32(ring->nhi->iobase + reg);
if (active)
new = old | mask;
@@ -239,8 +276,50 @@ int __ring_enqueue(struct tb_ring *ring, struct ring_frame *frame)
return ret;
}

+static irqreturn_t ring_msix(int irq, void *data)
+{
+ struct tb_ring *ring = data;
+
+ schedule_work(&ring->work);
+ return IRQ_HANDLED;
+}
+
+static int ring_request_msix(struct tb_ring *ring, bool no_suspend)
+{
+ struct tb_nhi *nhi = ring->nhi;
+ unsigned long irqflags;
+ int ret;
+
+ if (!nhi->pdev->msix_enabled)
+ return 0;
+
+ ret = ida_simple_get(&nhi->msix_ida, 0, MSIX_MAX_VECS, GFP_KERNEL);
+ if (ret < 0)
+ return ret;
+
+ ring->vector = ret;
+
+ ring->irq = pci_irq_vector(ring->nhi->pdev, ring->vector);
+ if (ring->irq < 0)
+ return ring->irq;
+
+ irqflags = no_suspend ? IRQF_NO_SUSPEND : 0;
+ return request_irq(ring->irq, ring_msix, irqflags, "thunderbolt", ring);
+}
+
+static void ring_release_msix(struct tb_ring *ring)
+{
+ if (ring->irq <= 0)
+ return;
+
+ free_irq(ring->irq, ring);
+ ida_simple_remove(&ring->nhi->msix_ida, ring->vector);
+ ring->vector = 0;
+ ring->irq = 0;
+}
+
static struct tb_ring *ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
- bool transmit)
+ bool transmit, unsigned int flags)
{
struct tb_ring *ring = NULL;
dev_info(&nhi->pdev->dev, "allocating %s ring %d of size %d\n",
@@ -271,9 +350,14 @@ static struct tb_ring *ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
ring->hop = hop;
ring->is_tx = transmit;
ring->size = size;
+ ring->flags = flags;
ring->head = 0;
ring->tail = 0;
ring->running = false;
+
+ if (ring_request_msix(ring, flags & RING_FLAG_NO_SUSPEND))
+ goto err;
+
ring->descriptors = dma_alloc_coherent(&ring->nhi->pdev->dev,
size * sizeof(*ring->descriptors),
&ring->descriptors_dma, GFP_KERNEL | __GFP_ZERO);
@@ -295,14 +379,16 @@ static struct tb_ring *ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
return NULL;
}

-struct tb_ring *ring_alloc_tx(struct tb_nhi *nhi, int hop, int size)
+struct tb_ring *ring_alloc_tx(struct tb_nhi *nhi, int hop, int size,
+ unsigned int flags)
{
- return ring_alloc(nhi, hop, size, true);
+ return ring_alloc(nhi, hop, size, true, flags);
}

-struct tb_ring *ring_alloc_rx(struct tb_nhi *nhi, int hop, int size)
+struct tb_ring *ring_alloc_rx(struct tb_nhi *nhi, int hop, int size,
+ unsigned int flags)
{
- return ring_alloc(nhi, hop, size, false);
+ return ring_alloc(nhi, hop, size, false, flags);
}

/**
@@ -413,6 +499,8 @@ void ring_free(struct tb_ring *ring)
RING_TYPE(ring), ring->hop);
}

+ ring_release_msix(ring);
+
dma_free_coherent(&ring->nhi->pdev->dev,
ring->size * sizeof(*ring->descriptors),
ring->descriptors, ring->descriptors_dma);
@@ -428,9 +516,9 @@ void ring_free(struct tb_ring *ring)

mutex_unlock(&ring->nhi->lock);
/**
- * ring->work can no longer be scheduled (it is scheduled only by
- * nhi_interrupt_work and ring_stop). Wait for it to finish before
- * freeing the ring.
+ * ring->work can no longer be scheduled (it is scheduled only
+ * by nhi_interrupt_work, ring_stop and ring_msix). Wait for it
+ * to finish before freeing the ring.
*/
flush_work(&ring->work);
mutex_destroy(&ring->lock);
@@ -528,9 +616,52 @@ static void nhi_shutdown(struct tb_nhi *nhi)
* We have to release the irq before calling flush_work. Otherwise an
* already executing IRQ handler could call schedule_work again.
*/
- devm_free_irq(&nhi->pdev->dev, nhi->pdev->irq, nhi);
- flush_work(&nhi->interrupt_work);
+ if (!nhi->pdev->msix_enabled) {
+ devm_free_irq(&nhi->pdev->dev, nhi->pdev->irq, nhi);
+ flush_work(&nhi->interrupt_work);
+ }
mutex_destroy(&nhi->lock);
+ ida_destroy(&nhi->msix_ida);
+}
+
+static int nhi_init_msi(struct tb_nhi *nhi)
+{
+ struct pci_dev *pdev = nhi->pdev;
+ int res, irq, nvec;
+
+ /* In case someone left them on. */
+ nhi_disable_interrupts(nhi);
+
+ ida_init(&nhi->msix_ida);
+
+ /*
+ * The NHI has 16 MSI-X vectors or a single MSI. We first try to
+ * get all MSI-X vectors and if we succeed, each ring will have
+ * one MSI-X. If for some reason that does not work out, we
+ * fall back to a single MSI.
+ */
+ nvec = pci_alloc_irq_vectors(pdev, MSIX_MIN_VECS, MSIX_MAX_VECS,
+ PCI_IRQ_MSIX);
+ if (nvec < 0) {
+ nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
+ if (nvec < 0)
+ return nvec;
+
+ INIT_WORK(&nhi->interrupt_work, nhi_interrupt_work);
+
+ irq = pci_irq_vector(nhi->pdev, 0);
+ if (irq < 0)
+ return irq;
+
+ res = devm_request_irq(&pdev->dev, irq, nhi_msi,
+ IRQF_NO_SUSPEND, "thunderbolt", nhi);
+ if (res) {
+ dev_err(&pdev->dev, "request_irq failed, aborting\n");
+ return res;
+ }
+ }
+
+ return 0;
}

static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
@@ -545,12 +676,6 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
return res;
}

- res = pci_enable_msi(pdev);
- if (res) {
- dev_err(&pdev->dev, "cannot enable MSI, aborting\n");
- return res;
- }
-
res = pcim_iomap_regions(pdev, 1 << 0, "thunderbolt");
if (res) {
dev_err(&pdev->dev, "cannot obtain PCI resources, aborting\n");
@@ -568,7 +693,6 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (nhi->hop_count != 12 && nhi->hop_count != 32)
dev_warn(&pdev->dev, "unexpected hop count: %d\n",
nhi->hop_count);
- INIT_WORK(&nhi->interrupt_work, nhi_interrupt_work);

nhi->tx_rings = devm_kcalloc(&pdev->dev, nhi->hop_count,
sizeof(*nhi->tx_rings), GFP_KERNEL);
@@ -577,12 +701,9 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (!nhi->tx_rings || !nhi->rx_rings)
return -ENOMEM;

- nhi_disable_interrupts(nhi); /* In case someone left them on. */
- res = devm_request_irq(&pdev->dev, pdev->irq, nhi_msi,
- IRQF_NO_SUSPEND, /* must work during _noirq */
- "thunderbolt", nhi);
+ res = nhi_init_msi(nhi);
if (res) {
- dev_err(&pdev->dev, "request_irq failed, aborting\n");
+ dev_err(&pdev->dev, "cannot enable MSI, aborting\n");
return res;
}

diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
index 317242939b31..630f44140530 100644
--- a/drivers/thunderbolt/nhi.h
+++ b/drivers/thunderbolt/nhi.h
@@ -7,45 +7,75 @@
#ifndef DSL3510_H_
#define DSL3510_H_

+#include <linux/idr.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

/**
* struct tb_nhi - thunderbolt native host interface
+ * @lock: Must be held during ring creation/destruction. Is acquired by
+ * interrupt_work when dispatching interrupts to individual rings.
+ * @pdev: Pointer to the PCI device
+ * @iobase: MMIO space of the NHI
+ * @tx_rings: All Tx rings available on this host controller
+ * @rx_rings: All Rx rings available on this host controller
+ * @msix_ida: Used to allocate MSI-X vectors for rings
+ * @interrupt_work: Work scheduled to handle ring interrupt when no
+ * MSI-X is used.
+ * @hop_count: Number of rings (end point hops) supported by NHI.
*/
struct tb_nhi {
- struct mutex lock; /*
- * Must be held during ring creation/destruction.
- * Is acquired by interrupt_work when dispatching
- * interrupts to individual rings.
- **/
+ struct mutex lock;
struct pci_dev *pdev;
void __iomem *iobase;
struct tb_ring **tx_rings;
struct tb_ring **rx_rings;
+ struct ida msix_ida;
struct work_struct interrupt_work;
- u32 hop_count; /* Number of rings (end point hops) supported by NHI. */
+ u32 hop_count;
};

/**
* struct tb_ring - thunderbolt TX or RX ring associated with a NHI
+ * @lock: Lock serializing actions to this ring. Must be acquired after
+ * nhi->lock.
+ * @nhi: Pointer to the native host controller interface
+ * @size: Size of the ring
+ * @hop: Hop (DMA channel) associated with this ring
+ * @head: Head of the ring (write next descriptor here)
+ * @tail: Tail of the ring (complete next descriptor here)
+ * @descriptors: Allocated descriptors for this ring
+ * @queue: Queue holding frames to be transferred over this ring
+ * @in_flight: Queue holding frames that are currently in flight
+ * @work: Interrupt work structure
+ * @is_tx: Is the ring Tx or Rx
+ * @running: Is the ring running
+ * @irq: MSI-X irq number if the ring uses MSI-X. %0 otherwise.
+ * @vector: MSI-X vector number the ring uses (only set if @irq is > 0)
+ * @flags: Ring specific flags
*/
struct tb_ring {
- struct mutex lock; /* must be acquired after nhi->lock */
+ struct mutex lock;
struct tb_nhi *nhi;
int size;
int hop;
- int head; /* write next descriptor here */
- int tail; /* complete next descriptor here */
+ int head;
+ int tail;
struct ring_desc *descriptors;
dma_addr_t descriptors_dma;
struct list_head queue;
struct list_head in_flight;
struct work_struct work;
- bool is_tx:1; /* rx otherwise */
+ bool is_tx:1;
bool running:1;
+ int irq;
+ u8 vector;
+ unsigned int flags;
};

+/* Leave ring interrupt enabled on suspend */
+#define RING_FLAG_NO_SUSPEND BIT(0)
+
struct ring_frame;
typedef void (*ring_cb)(struct tb_ring*, struct ring_frame*, bool canceled);

@@ -64,8 +94,10 @@ struct ring_frame {

#define TB_FRAME_SIZE 0x100 /* minimum size for ring_rx */

-struct tb_ring *ring_alloc_tx(struct tb_nhi *nhi, int hop, int size);
-struct tb_ring *ring_alloc_rx(struct tb_nhi *nhi, int hop, int size);
+struct tb_ring *ring_alloc_tx(struct tb_nhi *nhi, int hop, int size,
+ unsigned int flags);
+struct tb_ring *ring_alloc_rx(struct tb_nhi *nhi, int hop, int size,
+ unsigned int flags);
void ring_start(struct tb_ring *ring);
void ring_stop(struct tb_ring *ring);
void ring_free(struct tb_ring *ring);
diff --git a/drivers/thunderbolt/nhi_regs.h b/drivers/thunderbolt/nhi_regs.h
index 75cf0691e6c5..48b98d3c7e6a 100644
--- a/drivers/thunderbolt/nhi_regs.h
+++ b/drivers/thunderbolt/nhi_regs.h
@@ -95,7 +95,16 @@ struct ring_desc {
#define REG_RING_INTERRUPT_BASE 0x38200
#define RING_INTERRUPT_REG_COUNT(nhi) ((31 + 2 * nhi->hop_count) / 32)

+/* Interrupt Vector Allocation */
+#define REG_INT_VEC_ALLOC_BASE 0x38c40
+#define REG_INT_VEC_ALLOC_BITS 4
+#define REG_INT_VEC_ALLOC_MASK GENMASK(3, 0)
+#define REG_INT_VEC_ALLOC_REGS (32 / REG_INT_VEC_ALLOC_BITS)
+
/* The last 11 bits contain the number of hops supported by the NHI port. */
#define REG_HOP_COUNT 0x39640

+#define REG_DMA_MISC 0x39864
+#define REG_DMA_MISC_INT_AUTO_CLEAR BIT(2)
+
#endif
--
2.11.0

2017-06-06 12:25:44

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 22/27] thunderbolt: Add support for DMA configuration based mailbox

The DMA (NHI) port of a switch provides access to the NVM of the host
controller (and of devices starting from Intel Alpine Ridge). The NVM
also contains a more complete DROM for the root switch, including vendor
and device identification strings.

This looks for the DMA port capability of each switch and, if found,
populates sw->dma_port. We then teach tb_drom_read() to read the DROM
information from the NVM of the root switch, if available.

The DMA port capability also supports upgrading the NVM for both host
controller and devices which will be added in subsequent patches.
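
As a rough usage sketch (not part of the patch): in the driver the DMA
port is set up once in tb_switch_add_dma_port() and kept in
sw->dma_port, but a self-contained read through the new interface could
look roughly like this (the helper name and the zero offset are made up):

/* Hypothetical helper, assumes the dma_port.h API added below */
static int example_read_active_nvm(struct tb_switch *sw, void *buf, size_t len)
{
        struct tb_dma_port *dma;
        int ret;

        dma = dma_port_alloc(sw);
        if (!dma)
                return -ENODEV; /* no DMA port capability found */

        /* Read @len bytes from the start of the active flash region */
        ret = dma_port_flash_read(dma, 0, buf, len);

        dma_port_free(dma);
        return ret;
}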

This code is based on the work done by Amir Levy and Michael Jamet.

Signed-off-by: Michael Jamet <[email protected]>
Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/Makefile | 2 +-
drivers/thunderbolt/dma_port.c | 524 +++++++++++++++++++++++++++++++++++++++++
drivers/thunderbolt/dma_port.h | 34 +++
drivers/thunderbolt/eeprom.c | 51 +++-
drivers/thunderbolt/switch.c | 30 +++
drivers/thunderbolt/tb.h | 5 +
6 files changed, 644 insertions(+), 2 deletions(-)
create mode 100644 drivers/thunderbolt/dma_port.c
create mode 100644 drivers/thunderbolt/dma_port.h

diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile
index e276a9a62261..9828e862dd35 100644
--- a/drivers/thunderbolt/Makefile
+++ b/drivers/thunderbolt/Makefile
@@ -1,3 +1,3 @@
obj-${CONFIG_THUNDERBOLT} := thunderbolt.o
thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o tunnel_pci.o eeprom.o
-thunderbolt-objs += domain.o
+thunderbolt-objs += domain.o dma_port.o
diff --git a/drivers/thunderbolt/dma_port.c b/drivers/thunderbolt/dma_port.c
new file mode 100644
index 000000000000..af6dde347bee
--- /dev/null
+++ b/drivers/thunderbolt/dma_port.c
@@ -0,0 +1,524 @@
+/*
+ * Thunderbolt DMA configuration based mailbox support
+ *
+ * Copyright (C) 2017, Intel Corporation
+ * Authors: Michael Jamet <[email protected]>
+ * Mika Westerberg <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/delay.h>
+#include <linux/slab.h>
+
+#include "dma_port.h"
+#include "tb_regs.h"
+
+#define DMA_PORT_CAP 0x3e
+
+#define MAIL_DATA 1
+#define MAIL_DATA_DWORDS 16
+
+#define MAIL_IN 17
+#define MAIL_IN_CMD_SHIFT 28
+#define MAIL_IN_CMD_MASK GENMASK(31, 28)
+#define MAIL_IN_CMD_FLASH_WRITE 0x0
+#define MAIL_IN_CMD_FLASH_UPDATE_AUTH 0x1
+#define MAIL_IN_CMD_FLASH_READ 0x2
+#define MAIL_IN_CMD_POWER_CYCLE 0x4
+#define MAIL_IN_DWORDS_SHIFT 24
+#define MAIL_IN_DWORDS_MASK GENMASK(27, 24)
+#define MAIL_IN_ADDRESS_SHIFT 2
+#define MAIL_IN_ADDRESS_MASK GENMASK(23, 2)
+#define MAIL_IN_CSS BIT(1)
+#define MAIL_IN_OP_REQUEST BIT(0)
+
+#define MAIL_OUT 18
+#define MAIL_OUT_STATUS_RESPONSE BIT(29)
+#define MAIL_OUT_STATUS_CMD_SHIFT 4
+#define MAIL_OUT_STATUS_CMD_MASK GENMASK(7, 4)
+#define MAIL_OUT_STATUS_MASK GENMASK(3, 0)
+#define MAIL_OUT_STATUS_COMPLETED 0
+#define MAIL_OUT_STATUS_ERR_AUTH 1
+#define MAIL_OUT_STATUS_ERR_ACCESS 2
+
+#define DMA_PORT_TIMEOUT 5000 /* ms */
+#define DMA_PORT_RETRIES 3
+
+/**
+ * struct tb_dma_port - DMA control port
+ * @sw: Switch the DMA port belongs to
+ * @port: Switch port number where DMA capability is found
+ * @base: Start offset of the mailbox registers
+ * @buf: Temporary buffer to store a single block
+ */
+struct tb_dma_port {
+ struct tb_switch *sw;
+ u8 port;
+ u32 base;
+ u8 *buf;
+};
+
+/*
+ * When the switch is in safe mode it supports very little functionality
+ * so we don't validate that much here.
+ */
+static bool dma_port_match(const struct tb_cfg_request *req,
+ const struct ctl_pkg *pkg)
+{
+ u64 route = tb_cfg_get_route(pkg->buffer) & ~BIT_ULL(63);
+
+ if (pkg->frame.eof == TB_CFG_PKG_ERROR)
+ return true;
+ if (pkg->frame.eof != req->response_type)
+ return false;
+ if (route != tb_cfg_get_route(req->request))
+ return false;
+ if (pkg->frame.size != req->response_size)
+ return false;
+
+ return true;
+}
+
+static bool dma_port_copy(struct tb_cfg_request *req, const struct ctl_pkg *pkg)
+{
+ memcpy(req->response, pkg->buffer, req->response_size);
+ return true;
+}
+
+static int dma_port_read(struct tb_ctl *ctl, void *buffer, u64 route,
+ u32 port, u32 offset, u32 length, int timeout_msec)
+{
+ struct cfg_read_pkg request = {
+ .header = tb_cfg_make_header(route),
+ .addr = {
+ .seq = 1,
+ .port = port,
+ .space = TB_CFG_PORT,
+ .offset = offset,
+ .length = length,
+ },
+ };
+ struct tb_cfg_request *req;
+ struct cfg_write_pkg reply;
+ struct tb_cfg_result res;
+
+ req = tb_cfg_request_alloc();
+ if (!req)
+ return -ENOMEM;
+
+ req->match = dma_port_match;
+ req->copy = dma_port_copy;
+ req->request = &request;
+ req->request_size = sizeof(request);
+ req->request_type = TB_CFG_PKG_READ;
+ req->response = &reply;
+ req->response_size = 12 + 4 * length;
+ req->response_type = TB_CFG_PKG_READ;
+
+ res = tb_cfg_request_sync(ctl, req, timeout_msec);
+
+ tb_cfg_request_put(req);
+
+ if (res.err)
+ return res.err;
+
+ memcpy(buffer, &reply.data, 4 * length);
+ return 0;
+}
+
+static int dma_port_write(struct tb_ctl *ctl, const void *buffer, u64 route,
+ u32 port, u32 offset, u32 length, int timeout_msec)
+{
+ struct cfg_write_pkg request = {
+ .header = tb_cfg_make_header(route),
+ .addr = {
+ .seq = 1,
+ .port = port,
+ .space = TB_CFG_PORT,
+ .offset = offset,
+ .length = length,
+ },
+ };
+ struct tb_cfg_request *req;
+ struct cfg_read_pkg reply;
+ struct tb_cfg_result res;
+
+ memcpy(&request.data, buffer, length * 4);
+
+ req = tb_cfg_request_alloc();
+ if (!req)
+ return -ENOMEM;
+
+ req->match = dma_port_match;
+ req->copy = dma_port_copy;
+ req->request = &request;
+ req->request_size = 12 + 4 * length;
+ req->request_type = TB_CFG_PKG_WRITE;
+ req->response = &reply;
+ req->response_size = sizeof(reply);
+ req->response_type = TB_CFG_PKG_WRITE;
+
+ res = tb_cfg_request_sync(ctl, req, timeout_msec);
+
+ tb_cfg_request_put(req);
+
+ return res.err;
+}
+
+static int dma_find_port(struct tb_switch *sw)
+{
+ int port, ret;
+ u32 type;
+
+ /*
+ * The DMA (NHI) port is either 3 or 5 depending on the
+ * controller. Try both starting from 5 which is more common.
+ */
+ port = 5;
+ ret = dma_port_read(sw->tb->ctl, &type, tb_route(sw), port, 2, 1,
+ DMA_PORT_TIMEOUT);
+ if (!ret && (type & 0xffffff) == TB_TYPE_NHI)
+ return port;
+
+ port = 3;
+ ret = dma_port_read(sw->tb->ctl, &type, tb_route(sw), port, 2, 1,
+ DMA_PORT_TIMEOUT);
+ if (!ret && (type & 0xffffff) == TB_TYPE_NHI)
+ return port;
+
+ return -ENODEV;
+}
+
+/**
+ * dma_port_alloc() - Finds DMA control port from a switch pointed by route
+ * @sw: Switch from which to find the DMA port
+ *
+ * Function checks if the switch NHI port supports DMA configuration
+ * based mailbox capability and if it does, allocates and initializes
+ * DMA port structure. Returns %NULL if the capability was not found.
+ *
+ * The DMA control port is functional also when the switch is in safe
+ * mode.
+ */
+struct tb_dma_port *dma_port_alloc(struct tb_switch *sw)
+{
+ struct tb_dma_port *dma;
+ int port;
+
+ port = dma_find_port(sw);
+ if (port < 0)
+ return NULL;
+
+ dma = kzalloc(sizeof(*dma), GFP_KERNEL);
+ if (!dma)
+ return NULL;
+
+ dma->buf = kmalloc_array(MAIL_DATA_DWORDS, sizeof(u32), GFP_KERNEL);
+ if (!dma->buf) {
+ kfree(dma);
+ return NULL;
+ }
+
+ dma->sw = sw;
+ dma->port = port;
+ dma->base = DMA_PORT_CAP;
+
+ return dma;
+}
+
+/**
+ * dma_port_free() - Release DMA control port structure
+ * @dma: DMA control port
+ */
+void dma_port_free(struct tb_dma_port *dma)
+{
+ if (dma) {
+ kfree(dma->buf);
+ kfree(dma);
+ }
+}
+
+static int dma_port_wait_for_completion(struct tb_dma_port *dma,
+ unsigned int timeout)
+{
+ unsigned long end = jiffies + msecs_to_jiffies(timeout);
+ struct tb_switch *sw = dma->sw;
+
+ do {
+ int ret;
+ u32 in;
+
+ ret = dma_port_read(sw->tb->ctl, &in, tb_route(sw), dma->port,
+ dma->base + MAIL_IN, 1, 50);
+ if (ret) {
+ if (ret != -ETIMEDOUT)
+ return ret;
+ } else if (!(in & MAIL_IN_OP_REQUEST)) {
+ return 0;
+ }
+
+ usleep_range(50, 100);
+ } while (time_before(jiffies, end));
+
+ return -ETIMEDOUT;
+}
+
+static int status_to_errno(u32 status)
+{
+ switch (status & MAIL_OUT_STATUS_MASK) {
+ case MAIL_OUT_STATUS_COMPLETED:
+ return 0;
+ case MAIL_OUT_STATUS_ERR_AUTH:
+ return -EINVAL;
+ case MAIL_OUT_STATUS_ERR_ACCESS:
+ return -EACCES;
+ }
+
+ return -EIO;
+}
+
+static int dma_port_request(struct tb_dma_port *dma, u32 in,
+ unsigned int timeout)
+{
+ struct tb_switch *sw = dma->sw;
+ u32 out;
+ int ret;
+
+ ret = dma_port_write(sw->tb->ctl, &in, tb_route(sw), dma->port,
+ dma->base + MAIL_IN, 1, DMA_PORT_TIMEOUT);
+ if (ret)
+ return ret;
+
+ ret = dma_port_wait_for_completion(dma, timeout);
+ if (ret)
+ return ret;
+
+ ret = dma_port_read(sw->tb->ctl, &out, tb_route(sw), dma->port,
+ dma->base + MAIL_OUT, 1, DMA_PORT_TIMEOUT);
+ if (ret)
+ return ret;
+
+ return status_to_errno(out);
+}
+
+static int dma_port_flash_read_block(struct tb_dma_port *dma, u32 address,
+ void *buf, u32 size)
+{
+ struct tb_switch *sw = dma->sw;
+ u32 in, dwaddress, dwords;
+ int ret;
+
+ dwaddress = address / 4;
+ dwords = size / 4;
+
+ in = MAIL_IN_CMD_FLASH_READ << MAIL_IN_CMD_SHIFT;
+ if (dwords < MAIL_DATA_DWORDS)
+ in |= (dwords << MAIL_IN_DWORDS_SHIFT) & MAIL_IN_DWORDS_MASK;
+ in |= (dwaddress << MAIL_IN_ADDRESS_SHIFT) & MAIL_IN_ADDRESS_MASK;
+ in |= MAIL_IN_OP_REQUEST;
+
+ ret = dma_port_request(dma, in, DMA_PORT_TIMEOUT);
+ if (ret)
+ return ret;
+
+ return dma_port_read(sw->tb->ctl, buf, tb_route(sw), dma->port,
+ dma->base + MAIL_DATA, dwords, DMA_PORT_TIMEOUT);
+}
+
+static int dma_port_flash_write_block(struct tb_dma_port *dma, u32 address,
+ const void *buf, u32 size)
+{
+ struct tb_switch *sw = dma->sw;
+ u32 in, dwaddress, dwords;
+ int ret;
+
+ dwords = size / 4;
+
+ /* Write the block to MAIL_DATA registers */
+ ret = dma_port_write(sw->tb->ctl, buf, tb_route(sw), dma->port,
+ dma->base + MAIL_DATA, dwords, DMA_PORT_TIMEOUT);
+
+ in = MAIL_IN_CMD_FLASH_WRITE << MAIL_IN_CMD_SHIFT;
+
+ /* CSS header write is always done to the same magic address */
+ if (address >= DMA_PORT_CSS_ADDRESS) {
+ dwaddress = DMA_PORT_CSS_ADDRESS;
+ in |= MAIL_IN_CSS;
+ } else {
+ dwaddress = address / 4;
+ }
+
+ in |= ((dwords - 1) << MAIL_IN_DWORDS_SHIFT) & MAIL_IN_DWORDS_MASK;
+ in |= (dwaddress << MAIL_IN_ADDRESS_SHIFT) & MAIL_IN_ADDRESS_MASK;
+ in |= MAIL_IN_OP_REQUEST;
+
+ return dma_port_request(dma, in, DMA_PORT_TIMEOUT);
+}
+
+/**
+ * dma_port_flash_read() - Read from active flash region
+ * @dma: DMA control port
+ * @address: Address relative to the start of active region
+ * @buf: Buffer where the data is read
+ * @size: Size of the buffer
+ */
+int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
+ void *buf, size_t size)
+{
+ unsigned int retries = DMA_PORT_RETRIES;
+ unsigned int offset;
+
+ offset = address & 3;
+ address = address & ~3;
+
+ do {
+ u32 nbytes = min_t(u32, size, MAIL_DATA_DWORDS * 4);
+ int ret;
+
+ ret = dma_port_flash_read_block(dma, address, dma->buf,
+ ALIGN(nbytes, 4));
+ if (ret) {
+ if (ret == -ETIMEDOUT) {
+ if (retries--)
+ continue;
+ ret = -EIO;
+ }
+ return ret;
+ }
+
+ memcpy(buf, dma->buf + offset, nbytes);
+
+ size -= nbytes;
+ address += nbytes;
+ buf += nbytes;
+ } while (size > 0);
+
+ return 0;
+}
+
+/**
+ * dma_port_flash_write() - Write to non-active flash region
+ * @dma: DMA control port
+ * @address: Address relative to the start of non-active region
+ * @buf: Data to write
+ * @size: Size of the buffer
+ *
+ * Writes a block of data to the non-active flash region of the switch. If
+ * the address is given as %DMA_PORT_CSS_ADDRESS the block is written
+ * using the CSS command.
+ */
+int dma_port_flash_write(struct tb_dma_port *dma, unsigned int address,
+ const void *buf, size_t size)
+{
+ unsigned int retries = DMA_PORT_RETRIES;
+ unsigned int offset;
+
+ if (address >= DMA_PORT_CSS_ADDRESS) {
+ offset = 0;
+ if (size > DMA_PORT_CSS_MAX_SIZE)
+ return -E2BIG;
+ } else {
+ offset = address & 3;
+ address = address & ~3;
+ }
+
+ do {
+ u32 nbytes = min_t(u32, size, MAIL_DATA_DWORDS * 4);
+ int ret;
+
+ memcpy(dma->buf + offset, buf, nbytes);
+
+ ret = dma_port_flash_write_block(dma, address, buf, nbytes);
+ if (ret) {
+ if (ret == -ETIMEDOUT) {
+ if (retries--)
+ continue;
+ ret = -EIO;
+ }
+ return ret;
+ }
+
+ size -= nbytes;
+ address += nbytes;
+ buf += nbytes;
+ } while (size > 0);
+
+ return 0;
+}
+
+/**
+ * dma_port_flash_update_auth() - Starts flash authenticate cycle
+ * @dma: DMA control port
+ *
+ * Starts the flash update authentication cycle. If the image in the
+ * non-active area was valid, the switch starts the upgrade process where
+ * the active and non-active areas get swapped in the end. The caller
+ * should call dma_port_flash_update_auth_status() to get the status of
+ * this command, because if the switch in question is the root switch the
+ * Thunderbolt host controller gets reset as well.
+ */
+int dma_port_flash_update_auth(struct tb_dma_port *dma)
+{
+ u32 in;
+
+ in = MAIL_IN_CMD_FLASH_UPDATE_AUTH << MAIL_IN_CMD_SHIFT;
+ in |= MAIL_IN_OP_REQUEST;
+
+ return dma_port_request(dma, in, 150);
+}
+
+/**
+ * dma_port_flash_update_auth_status() - Reads status of update auth command
+ * @dma: DMA control port
+ * @status: Status code of the operation
+ *
+ * The function checks if there is status available from the last update
+ * auth command. Returns %0 if there is no status and no further
+ * action is required. If there is status, %1 is returned instead and
+ * @status holds the failure code.
+ *
+ * Negative return means there was an error reading status from the
+ * switch.
+ */
+int dma_port_flash_update_auth_status(struct tb_dma_port *dma, u32 *status)
+{
+ struct tb_switch *sw = dma->sw;
+ u32 out, cmd;
+ int ret;
+
+ ret = dma_port_read(sw->tb->ctl, &out, tb_route(sw), dma->port,
+ dma->base + MAIL_OUT, 1, DMA_PORT_TIMEOUT);
+ if (ret)
+ return ret;
+
+ /* Check if the status relates to flash update auth */
+ cmd = (out & MAIL_OUT_STATUS_CMD_MASK) >> MAIL_OUT_STATUS_CMD_SHIFT;
+ if (cmd == MAIL_IN_CMD_FLASH_UPDATE_AUTH) {
+ if (status)
+ *status = out & MAIL_OUT_STATUS_MASK;
+
+ /* Reset is needed in any case */
+ return 1;
+ }
+
+ return 0;
+}
+
+/**
+ * dma_port_power_cycle() - Power cycles the switch
+ * @dma: DMA control port
+ *
+ * Triggers power cycle to the switch.
+ */
+int dma_port_power_cycle(struct tb_dma_port *dma)
+{
+ u32 in;
+
+ in = MAIL_IN_CMD_POWER_CYCLE << MAIL_IN_CMD_SHIFT;
+ in |= MAIL_IN_OP_REQUEST;
+
+ return dma_port_request(dma, in, 150);
+}
diff --git a/drivers/thunderbolt/dma_port.h b/drivers/thunderbolt/dma_port.h
new file mode 100644
index 000000000000..c4a69e0fbff7
--- /dev/null
+++ b/drivers/thunderbolt/dma_port.h
@@ -0,0 +1,34 @@
+/*
+ * Thunderbolt DMA configuration based mailbox support
+ *
+ * Copyright (C) 2017, Intel Corporation
+ * Authors: Michael Jamet <[email protected]>
+ * Mika Westerberg <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef DMA_PORT_H_
+#define DMA_PORT_H_
+
+#include "tb.h"
+
+struct tb_switch;
+struct tb_dma_port;
+
+#define DMA_PORT_CSS_ADDRESS 0x3fffff
+#define DMA_PORT_CSS_MAX_SIZE SZ_128
+
+struct tb_dma_port *dma_port_alloc(struct tb_switch *sw);
+void dma_port_free(struct tb_dma_port *dma);
+int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
+ void *buf, size_t size);
+int dma_port_flash_update_auth(struct tb_dma_port *dma);
+int dma_port_flash_update_auth_status(struct tb_dma_port *dma, u32 *status);
+int dma_port_flash_write(struct tb_dma_port *dma, unsigned int address,
+ const void *buf, size_t size);
+int dma_port_power_cycle(struct tb_dma_port *dma);
+
+#endif
diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
index d40a5f07fc4c..996c6e29c8ad 100644
--- a/drivers/thunderbolt/eeprom.c
+++ b/drivers/thunderbolt/eeprom.c
@@ -429,6 +429,50 @@ static int tb_drom_copy_efi(struct tb_switch *sw, u16 *size)
return -EINVAL;
}

+static int tb_drom_copy_nvm(struct tb_switch *sw, u16 *size)
+{
+ u32 drom_offset;
+ int ret;
+
+ if (!sw->dma_port)
+ return -ENODEV;
+
+ ret = tb_sw_read(sw, &drom_offset, TB_CFG_SWITCH,
+ sw->cap_plug_events + 12, 1);
+ if (ret)
+ return ret;
+
+ if (!drom_offset)
+ return -ENODEV;
+
+ ret = dma_port_flash_read(sw->dma_port, drom_offset + 14, size,
+ sizeof(*size));
+ if (ret)
+ return ret;
+
+ /* Size includes CRC8 + UID + CRC32 */
+ *size += 1 + 8 + 4;
+ sw->drom = kzalloc(*size, GFP_KERNEL);
+ if (!sw->drom)
+ return -ENOMEM;
+
+ ret = dma_port_flash_read(sw->dma_port, drom_offset, sw->drom, *size);
+ if (ret)
+ goto err_free;
+
+ /*
+ * Read UID from the minimal DROM because the one in NVM is just
+ * a placeholder.
+ */
+ tb_drom_read_uid_only(sw, &sw->uid);
+ return 0;
+
+err_free:
+ kfree(sw->drom);
+ sw->drom = NULL;
+ return ret;
+}
+
/**
* tb_drom_read - copy drom to sw->drom and parse it
*/
@@ -450,6 +494,10 @@ int tb_drom_read(struct tb_switch *sw)
if (tb_drom_copy_efi(sw, &size) == 0)
goto parse;

+ /* Non-Apple hardware has the DROM as part of NVM */
+ if (tb_drom_copy_nvm(sw, &size) == 0)
+ goto parse;
+
/*
* The root switch contains only a dummy drom (header only,
* no entries). Hardcode the configuration here.
@@ -510,7 +558,8 @@ int tb_drom_read(struct tb_switch *sw)
header->uid_crc8, crc);
goto err;
}
- sw->uid = header->uid;
+ if (!sw->uid)
+ sw->uid = header->uid;
sw->vendor = header->vendor_id;
sw->device = header->model_id;

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 6384061100b0..4b47e0999cda 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -377,6 +377,8 @@ static void tb_switch_release(struct device *dev)
{
struct tb_switch *sw = tb_to_switch(dev);

+ dma_port_free(sw->dma_port);
+
kfree(sw->uuid);
kfree(sw->device_name);
kfree(sw->vendor_name);
@@ -570,6 +572,25 @@ static void tb_switch_set_uuid(struct tb_switch *sw)
sw->uuid = kmemdup(uuid, sizeof(uuid), GFP_KERNEL);
}

+static void tb_switch_add_dma_port(struct tb_switch *sw)
+{
+ switch (sw->generation) {
+ case 3:
+ break;
+
+ case 2:
+ /* Only root switch can be upgraded */
+ if (tb_route(sw))
+ return;
+ break;
+
+ default:
+ return;
+ }
+
+ sw->dma_port = dma_port_alloc(sw);
+}
+
/**
* tb_switch_add() - Add a switch to the domain
* @sw: Switch to add
@@ -586,6 +607,15 @@ int tb_switch_add(struct tb_switch *sw)
{
int i, ret;

+ /*
+ * Initialize DMA control port now before we read DROM. Recent
+ * host controllers have a more complete DROM in NVM that includes
+ * vendor and model identification strings which we then expose to
+ * userspace. The NVM can be accessed through the DMA configuration
+ * based mailbox.
+ */
+ tb_switch_add_dma_port(sw);
+
/* read drom */
ret = tb_drom_read(sw);
if (ret) {
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 39d24dff82c5..31521c531715 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -12,12 +12,16 @@

#include "tb_regs.h"
#include "ctl.h"
+#include "dma_port.h"

/**
* struct tb_switch - a thunderbolt switch
* @dev: Device for the switch
* @config: Switch configuration
* @ports: Ports in this switch
+ * @dma_port: If the switch has port supporting DMA configuration based
+ * mailbox this will hold the pointer to that (%NULL
+ * otherwise).
* @tb: Pointer to the domain the switch belongs to
* @uid: Unique ID of the switch
* @uuid: UUID of the switch (or %NULL if not supported)
@@ -34,6 +38,7 @@ struct tb_switch {
struct device dev;
struct tb_regs_switch_header config;
struct tb_port *ports;
+ struct tb_dma_port *dma_port;
struct tb *tb;
u64 uid;
uuid_be *uuid;
--
2.11.0

2017-06-06 12:25:46

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 06/27] thunderbolt: Rework capability handling

The organization of capabilities in switches and ports is not so random
after all. Rework the capability handling so that it follows how the
capabilities are actually organized, and provide two new functions
(tb_switch_find_vse_cap() and tb_port_find_cap()) which can be used to
find capabilities of switches and ports. Then convert the current users
over to these.
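
For illustration, the intended lookup pattern after this change looks
roughly like the following (a simplified sketch based on the converted
users in this patch, not an extra hunk; error handling trimmed):

	cap = tb_port_find_cap(port, TB_PORT_CAP_PHY);
	if (cap < 0)
		return cap;	/* -ENOENT if the port has no such capability */
	port->cap_phy = cap;

	cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_PLUG_EVENTS);
	if (cap < 0)
		return cap;
	sw->cap_plug_events = cap;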

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/cap.c | 169 +++++++++++++++++++++------------------
drivers/thunderbolt/switch.c | 6 +-
drivers/thunderbolt/tb.c | 8 +-
drivers/thunderbolt/tb.h | 3 +-
drivers/thunderbolt/tb_regs.h | 50 +++++++++---
drivers/thunderbolt/tunnel_pci.c | 8 +-
6 files changed, 142 insertions(+), 102 deletions(-)

diff --git a/drivers/thunderbolt/cap.c b/drivers/thunderbolt/cap.c
index a7b47e7cddbd..38bc27a5ce4f 100644
--- a/drivers/thunderbolt/cap.c
+++ b/drivers/thunderbolt/cap.c
@@ -9,6 +9,8 @@

#include "tb.h"

+#define CAP_OFFSET_MAX 0xff
+#define VSE_CAP_OFFSET_MAX 0xffff

struct tb_cap_any {
union {
@@ -18,99 +20,110 @@ struct tb_cap_any {
};
} __packed;

-static bool tb_cap_is_basic(struct tb_cap_any *cap)
-{
- /* basic.cap is u8. This checks only the lower 8 bit of cap. */
- return cap->basic.cap != 5;
-}
-
-static bool tb_cap_is_long(struct tb_cap_any *cap)
+/**
+ * tb_port_find_cap() - Find port capability
+ * @port: Port to find the capability for
+ * @cap: Capability to look for
+ *
+ * Returns offset to start of capability or %-ENOENT if no such
+ * capability was found. Negative errno is returned if there was an
+ * error.
+ */
+int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap)
{
- return !tb_cap_is_basic(cap)
- && cap->extended_short.next == 0
- && cap->extended_short.length == 0;
-}
+ u32 offset;

-static enum tb_cap tb_cap(struct tb_cap_any *cap)
-{
- if (tb_cap_is_basic(cap))
- return cap->basic.cap;
+ /*
+ * DP out adapters claim to implement TMU capability but in
+ * reality they do not so we hard code the adapter specific
+ * capability offset here.
+ */
+ if (port->config.type == TB_TYPE_DP_HDMI_OUT)
+ offset = 0x39;
else
- /* extended_short/long have cap at the same offset. */
- return cap->extended_short.cap;
+ offset = 0x1;
+
+ do {
+ struct tb_cap_any header;
+ int ret;
+
+ ret = tb_port_read(port, &header, TB_CFG_PORT, offset, 1);
+ if (ret)
+ return ret;
+
+ if (header.basic.cap == cap)
+ return offset;
+
+ offset = header.basic.next;
+ } while (offset);
+
+ return -ENOENT;
}

-static u32 tb_cap_next(struct tb_cap_any *cap, u32 offset)
+static int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap)
{
- int next;
- if (offset == 1) {
- /*
- * The first pointer is part of the switch header and always
- * a simple pointer.
- */
- next = cap->basic.next;
- } else {
- /*
- * Somehow Intel decided to use 3 different types of capability
- * headers. It is not like anyone could have predicted that
- * single byte offsets are not enough...
- */
- if (tb_cap_is_basic(cap))
- next = cap->basic.next;
- else if (!tb_cap_is_long(cap))
- next = cap->extended_short.next;
- else
- next = cap->extended_long.next;
+ int offset = sw->config.first_cap_offset;
+
+ while (offset > 0 && offset < CAP_OFFSET_MAX) {
+ struct tb_cap_any header;
+ int ret;
+
+ ret = tb_sw_read(sw, &header, TB_CFG_SWITCH, offset, 1);
+ if (ret)
+ return ret;
+
+ if (header.basic.cap == cap)
+ return offset;
+
+ offset = header.basic.next;
}
- /*
- * "Hey, we could terminate some capability lists with a null offset
- * and others with a pointer to the last element." - "Great idea!"
- */
- if (next == offset)
- return 0;
- return next;
+
+ return -ENOENT;
}

/**
- * tb_find_cap() - find a capability
+ * tb_switch_find_vse_cap() - Find switch vendor specific capability
+ * @sw: Switch to find the capability for
+ * @vsec: Vendor specific capability to look for
*
- * Return: Returns a positive offset if the capability was found and 0 if not.
- * Returns an error code on failure.
+ * This function enumerates the vendor specific capabilities (VSEC) of a
+ * switch and returns the offset when a capability matching @vsec is
+ * found. If no such capability is found %-ENOENT is returned. In case
+ * of error a negative errno is returned.
*/
-int tb_find_cap(struct tb_port *port, enum tb_cfg_space space, enum tb_cap cap)
+int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec)
{
- u32 offset = 1;
struct tb_cap_any header;
- int res;
- int retries = 10;
- while (retries--) {
- res = tb_port_read(port, &header, space, offset, 1);
- if (res) {
- /* Intel needs some help with linked lists. */
- if (space == TB_CFG_PORT && offset == 0xa
- && port->config.type == TB_TYPE_DP_HDMI_OUT) {
- offset = 0x39;
- continue;
- }
- return res;
- }
- if (offset != 1) {
- if (tb_cap(&header) == cap)
+ int offset;
+
+ offset = tb_switch_find_cap(sw, TB_SWITCH_CAP_VSE);
+ if (offset < 0)
+ return offset;
+
+ while (offset > 0 && offset < VSE_CAP_OFFSET_MAX) {
+ int ret;
+
+ ret = tb_sw_read(sw, &header, TB_CFG_SWITCH, offset, 2);
+ if (ret)
+ return ret;
+
+ /*
+ * Extended vendor specific capabilities come in two
+ * flavors: short and long. The latter is used when
+ * offset is over 0xff.
+ */
+ if (offset >= CAP_OFFSET_MAX) {
+ if (header.extended_long.vsec_id == vsec)
return offset;
- if (tb_cap_is_long(&header)) {
- /* tb_cap_extended_long is 2 dwords */
- res = tb_port_read(port, &header, space,
- offset, 2);
- if (res)
- return res;
- }
+ offset = header.extended_long.next;
+ } else {
+ if (header.extended_short.vsec_id == vsec)
+ return offset;
+ if (!header.extended_short.length)
+ return -ENOENT;
+ offset = header.extended_short.next;
}
- offset = tb_cap_next(&header, offset);
- if (!offset)
- return 0;
}
- tb_port_WARN(port,
- "run out of retries while looking for cap %#x in config space %d, last offset: %#x\n",
- cap, space, offset);
- return -EIO;
+
+ return -ENOENT;
}
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 81f5164a6364..b379b4183bac 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -192,7 +192,7 @@ static int tb_init_port(struct tb_port *port)

/* Port 0 is the switch itself and has no PHY. */
if (port->config.type == TB_TYPE_PORT && port->port != 0) {
- cap = tb_find_cap(port, TB_CFG_PORT, TB_CAP_PHY);
+ cap = tb_port_find_cap(port, TB_PORT_CAP_PHY);

if (cap > 0)
port->cap_phy = cap;
@@ -394,9 +394,9 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, u64 route)
sw->ports[i].port = i;
}

- cap = tb_find_cap(&sw->ports[0], TB_CFG_SWITCH, TB_CAP_PLUG_EVENTS);
+ cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_PLUG_EVENTS);
if (cap < 0) {
- tb_sw_warn(sw, "cannot find TB_CAP_PLUG_EVENTS aborting\n");
+ tb_sw_warn(sw, "cannot find TB_VSE_CAP_PLUG_EVENTS aborting\n");
goto err;
}
sw->cap_plug_events = cap;
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 24b6d30c3c86..6b44076e1380 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -121,8 +121,8 @@ static struct tb_port *tb_find_unused_down_port(struct tb_switch *sw)
continue;
if (sw->ports[i].config.type != TB_TYPE_PCIE_DOWN)
continue;
- cap = tb_find_cap(&sw->ports[i], TB_CFG_PORT, TB_CAP_PCIE);
- if (cap <= 0)
+ cap = tb_port_find_cap(&sw->ports[i], TB_PORT_CAP_ADAP);
+ if (cap < 0)
continue;
res = tb_port_read(&sw->ports[i], &data, TB_CFG_PORT, cap, 1);
if (res < 0)
@@ -165,8 +165,8 @@ static void tb_activate_pcie_devices(struct tb *tb)
}

/* check whether port is already activated */
- cap = tb_find_cap(up_port, TB_CFG_PORT, TB_CAP_PCIE);
- if (cap <= 0)
+ cap = tb_port_find_cap(up_port, TB_PORT_CAP_ADAP);
+ if (cap < 0)
continue;
if (tb_port_read(up_port, &data, TB_CFG_PORT, cap, 1))
continue;
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index ba2b85750335..0b78bc4fbe61 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -233,7 +233,8 @@ int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged);
int tb_port_add_nfc_credits(struct tb_port *port, int credits);
int tb_port_clear_counter(struct tb_port *port, int counter);

-int tb_find_cap(struct tb_port *port, enum tb_cfg_space space, enum tb_cap cap);
+int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
+int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap);

struct tb_path *tb_path_alloc(struct tb *tb, int num_hops);
void tb_path_free(struct tb_path *path);
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index 1e2a4a8046be..582bd1f156dc 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -23,15 +23,22 @@
*/
#define TB_MAX_CONFIG_RW_LENGTH 60

-enum tb_cap {
- TB_CAP_PHY = 0x0001,
- TB_CAP_TIME1 = 0x0003,
- TB_CAP_PCIE = 0x0004,
- TB_CAP_I2C = 0x0005,
- TB_CAP_PLUG_EVENTS = 0x0105, /* also EEPROM */
- TB_CAP_TIME2 = 0x0305,
- TB_CAP_IECS = 0x0405,
- TB_CAP_LINK_CONTROLLER = 0x0605, /* also IECS */
+enum tb_switch_cap {
+ TB_SWITCH_CAP_VSE = 0x05,
+};
+
+enum tb_switch_vse_cap {
+ TB_VSE_CAP_PLUG_EVENTS = 0x01, /* also EEPROM */
+ TB_VSE_CAP_TIME2 = 0x03,
+ TB_VSE_CAP_IECS = 0x04,
+ TB_VSE_CAP_LINK_CONTROLLER = 0x06, /* also IECS */
+};
+
+enum tb_port_cap {
+ TB_PORT_CAP_PHY = 0x01,
+ TB_PORT_CAP_TIME1 = 0x03,
+ TB_PORT_CAP_ADAP = 0x04,
+ TB_PORT_CAP_VSE = 0x05,
};

enum tb_port_state {
@@ -49,15 +56,34 @@ struct tb_cap_basic {
u8 cap; /* if cap == 0x05 then we have a extended capability */
} __packed;

+/**
+ * struct tb_cap_extended_short - Switch extended short capability
+ * @next: Pointer to the next capability. If @next and @length are zero
+ * then we have a long cap.
+ * @cap: Base capability ID (see &enum tb_switch_cap)
+ * @vsec_id: Vendor specific capability ID (see &enum tb_switch_vse_cap)
+ * @length: Length of this capability
+ */
struct tb_cap_extended_short {
- u8 next; /* if next and length are zero then we have a long cap */
- enum tb_cap cap:16;
+ u8 next;
+ u8 cap;
+ u8 vsec_id;
u8 length;
} __packed;

+/**
+ * struct tb_cap_extended_long - Switch extended long capability
+ * @zero1: This field should be zero
+ * @cap: Base capability ID (see &enum tb_switch_cap)
+ * @vsec_id: Vendor specific capability ID (see &enum tb_switch_vse_cap)
+ * @zero2: This field should be zero
+ * @next: Pointer to the next capability
+ * @length: Length of this capability
+ */
struct tb_cap_extended_long {
u8 zero1;
- enum tb_cap cap:16;
+ u8 cap;
+ u8 vsec_id;
u8 zero2;
u16 next;
u16 length;
diff --git a/drivers/thunderbolt/tunnel_pci.c b/drivers/thunderbolt/tunnel_pci.c
index baf1cd370446..f4ce9845e42a 100644
--- a/drivers/thunderbolt/tunnel_pci.c
+++ b/drivers/thunderbolt/tunnel_pci.c
@@ -147,10 +147,10 @@ bool tb_pci_is_invalid(struct tb_pci_tunnel *tunnel)
static int tb_pci_port_active(struct tb_port *port, bool active)
{
u32 word = active ? 0x80000000 : 0x0;
- int cap = tb_find_cap(port, TB_CFG_PORT, TB_CAP_PCIE);
- if (cap <= 0) {
- tb_port_warn(port, "TB_CAP_PCIE not found: %d\n", cap);
- return cap ? cap : -ENXIO;
+ int cap = tb_port_find_cap(port, TB_PORT_CAP_ADAP);
+ if (cap < 0) {
+ tb_port_warn(port, "TB_PORT_CAP_ADAP not found: %d\n", cap);
+ return cap;
}
return tb_port_write(port, &word, TB_CFG_PORT, cap, 1);
}
--
2.11.0

2017-06-06 12:29:09

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 15/27] thunderbolt: Expose get_route() to other files

We are going to use it when we change the connection manager to handle
events itself. Also rename it to follow the naming convention used for
other functions exposed in ctl.h.

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/ctl.c | 19 +++++++------------
drivers/thunderbolt/ctl.h | 4 ++++
2 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index 24118c60b062..8352ee8662aa 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -54,11 +54,6 @@ struct tb_ctl {

/* utility functions */

-static u64 get_route(struct tb_cfg_header header)
-{
- return (u64) header.route_hi << 32 | header.route_lo;
-}
-
static struct tb_cfg_header make_header(u64 route)
{
struct tb_cfg_header header = {
@@ -66,7 +61,7 @@ static struct tb_cfg_header make_header(u64 route)
.route_lo = route,
};
/* check for overflow, route_hi is not 32 bits! */
- WARN_ON(get_route(header) != route);
+ WARN_ON(tb_cfg_get_route(&header) != route);
return header;
}

@@ -91,9 +86,9 @@ static int check_header(struct ctl_pkg *pkg, u32 len, enum tb_cfg_pkg_type type,
if (WARN(header->unknown != 1 << 9,
"header->unknown is %#x\n", header->unknown))
return -EIO;
- if (WARN(route != get_route(*header),
+ if (WARN(route != tb_cfg_get_route(header),
"wrong route (expected %llx, got %llx)",
- route, get_route(*header)))
+ route, tb_cfg_get_route(header)))
return -EIO;
return 0;
}
@@ -126,10 +121,10 @@ static struct tb_cfg_result decode_error(struct ctl_pkg *response)
{
struct cfg_error_pkg *pkg = response->buffer;
struct tb_cfg_result res = { 0 };
- res.response_route = get_route(pkg->header);
+ res.response_route = tb_cfg_get_route(&pkg->header);
res.response_port = 0;
res.err = check_header(response, sizeof(*pkg), TB_CFG_PKG_ERROR,
- get_route(pkg->header));
+ tb_cfg_get_route(&pkg->header));
if (res.err)
return res;

@@ -153,7 +148,7 @@ static struct tb_cfg_result parse_header(struct ctl_pkg *pkg, u32 len,
return decode_error(pkg);

res.response_port = 0; /* will be updated later for cfg_read/write */
- res.response_route = get_route(*header);
+ res.response_route = tb_cfg_get_route(header);
res.err = check_header(pkg, len, type, route);
return res;
}
@@ -294,7 +289,7 @@ static void tb_ctl_handle_plug_event(struct tb_ctl *ctl,
struct ctl_pkg *response)
{
struct cfg_event_pkg *pkg = response->buffer;
- u64 route = get_route(pkg->header);
+ u64 route = tb_cfg_get_route(&pkg->header);

if (check_header(response, sizeof(*pkg), TB_CFG_PKG_EVENT, route)) {
tb_ctl_warn(ctl, "malformed TB_CFG_PKG_EVENT\n");
diff --git a/drivers/thunderbolt/ctl.h b/drivers/thunderbolt/ctl.h
index 610980e3232f..9812b1c86d4f 100644
--- a/drivers/thunderbolt/ctl.h
+++ b/drivers/thunderbolt/ctl.h
@@ -38,6 +38,10 @@ struct tb_cfg_result {
enum tb_cfg_error tb_error; /* valid if err == 1 */
};

+static inline u64 tb_cfg_get_route(const struct tb_cfg_header *header)
+{
+ return (u64) header->route_hi << 32 | header->route_lo;
+}

int tb_cfg_error(struct tb_ctl *ctl, u64 route, u32 port,
enum tb_cfg_error error);
--
2.11.0

2017-06-06 12:29:32

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 21/27] thunderbolt: Store Thunderbolt generation in the switch structure

In some cases it is useful to know which Thunderbolt generation the
switch supports. This introduces a new field to struct tb_switch that
stores the generation of the switch based on its device ID. Unknown
switches (there should be none) are assumed to be first generation to be
on the safe side.

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/switch.c | 55 ++++++++++++++++++++++++++++++--------------
drivers/thunderbolt/tb.h | 2 ++
2 files changed, 40 insertions(+), 17 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 1497518aceff..6384061100b0 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -390,6 +390,42 @@ struct device_type tb_switch_type = {
.release = tb_switch_release,
};

+static int tb_switch_get_generation(struct tb_switch *sw)
+{
+ switch (sw->config.device_id) {
+ case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
+ case PCI_DEVICE_ID_INTEL_EAGLE_RIDGE:
+ case PCI_DEVICE_ID_INTEL_LIGHT_PEAK:
+ case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_2C:
+ case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
+ case PCI_DEVICE_ID_INTEL_PORT_RIDGE:
+ case PCI_DEVICE_ID_INTEL_REDWOOD_RIDGE_2C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_REDWOOD_RIDGE_4C_BRIDGE:
+ return 1;
+
+ case PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_BRIDGE:
+ return 2;
+
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE:
+ case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE:
+ return 3;
+
+ default:
+ /*
+ * For unknown switches assume generation to be 1 to be
+ * on the safe side.
+ */
+ tb_sw_warn(sw, "unsupported switch device id %#x\n",
+ sw->config.device_id);
+ return 1;
+ }
+}
+
/**
* tb_switch_alloc() - allocate a switch
* @tb: Pointer to the owning domain
@@ -443,6 +479,8 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
sw->ports[i].port = i;
}

+ sw->generation = tb_switch_get_generation(sw);
+
cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_PLUG_EVENTS);
if (cap < 0) {
tb_sw_warn(sw, "cannot find TB_VSE_CAP_PLUG_EVENTS aborting\n");
@@ -491,23 +529,6 @@ int tb_switch_configure(struct tb_switch *sw)
tb_sw_warn(sw, "unknown switch vendor id %#x\n",
sw->config.vendor_id);

- switch (sw->config.device_id) {
- case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
- case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
- case PCI_DEVICE_ID_INTEL_PORT_RIDGE:
- case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_BRIDGE:
- case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_BRIDGE:
- case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_BRIDGE:
- case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_BRIDGE:
- case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE:
- case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE:
- break;
-
- default:
- tb_sw_warn(sw, "unsupported switch device id %#x\n",
- sw->config.device_id);
- }
-
sw->config.enabled = 1;

/* upload configuration */
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 98a405384596..39d24dff82c5 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -25,6 +25,7 @@
* @device: Device ID of the switch
* @vendor_name: Name of the vendor (or %NULL if not known)
* @device_name: Name of the device (or %NULL if not known)
+ * @generation: Switch Thunderbolt generation
* @cap_plug_events: Offset to the plug events capability (%0 if not found)
* @is_unplugged: The switch is going away
* @drom: DROM of the switch (%NULL if not found)
@@ -40,6 +41,7 @@ struct tb_switch {
u16 device;
const char *vendor_name;
const char *device_name;
+ unsigned int generation;
int cap_plug_events;
bool is_unplugged;
u8 *drom;
--
2.11.0

2017-06-06 12:29:33

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 25/27] thunderbolt: Add support for host and device NVM firmware upgrade

Starting from Intel Falcon Ridge the NVM firmware can be upgraded by
using DMA configuration based mailbox commands. If we detect that the
host or device (device support starts from Intel Alpine Ridge) has the
DMA configuration based mailbox, we expose the NVM information to the
userspace as two separate Linux NVMem devices: nvm_active and
nvm_non_active. The former is a read-only portion of the active NVM
which firmware upgrade tools can use to find a suitable NVM image if
the device identification strings are not enough.

The latter is a write-only portion where the new NVM image is to be
written by userspace. It is up to userspace to find the right NVM image
(the kernel does only very minimal validation). The ICM firmware itself
authenticates the new NVM image and fails the operation if it is not
what is expected.

We also expose two new sysfs files for each switch: nvm_version and
nvm_authenticate, which can be used to read the active NVM version and
to start the upgrade process.

We also introduce safe mode, which is the mode a switch enters when it
does not have properly authenticated firmware. In this mode the switch
only accepts a couple of commands, including flashing a new NVM
firmware image and triggering a power cycle.
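
As a rough sketch of the resulting userspace flow (the device path and
image file name below are made up for illustration only), an update
tool could do something like this:

	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		const char *nvmem =
			"/sys/bus/thunderbolt/devices/0-1/nvm_non_active0/nvmem";
		const char *auth =
			"/sys/bus/thunderbolt/devices/0-1/nvm_authenticate";
		char buf[4096];
		ssize_t n;
		int in, out;

		/* Copy the new image to the non-active NVM area */
		in = open("image.nvm", O_RDONLY);
		out = open(nvmem, O_WRONLY);
		if (in < 0 || out < 0)
			return 1;
		while ((n = read(in, buf, sizeof(buf))) > 0)
			if (write(out, buf, n) != n)
				return 1;
		close(in);
		close(out);

		/* Start authentication; on success the device power cycles */
		out = open(auth, O_WRONLY);
		if (out < 0 || write(out, "1", 1) != 1)
			return 1;
		close(out);
		return 0;
	}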

This code is based on the work done by Amir Levy and Michael Jamet.

Signed-off-by: Michael Jamet <[email protected]>
Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
Documentation/ABI/testing/sysfs-bus-thunderbolt | 26 +
drivers/thunderbolt/Kconfig | 1 +
drivers/thunderbolt/domain.c | 18 +
drivers/thunderbolt/icm.c | 33 +-
drivers/thunderbolt/nhi.h | 1 +
drivers/thunderbolt/switch.c | 603 +++++++++++++++++++++++-
drivers/thunderbolt/tb.c | 7 +
drivers/thunderbolt/tb.h | 40 +-
8 files changed, 706 insertions(+), 23 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt
index 05b7f9a6431f..2a98149943ea 100644
--- a/Documentation/ABI/testing/sysfs-bus-thunderbolt
+++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt
@@ -82,3 +82,29 @@ Description: This attribute contains unique_id string of this device.
This is either read from hardware registers (UUID on
newer hardware) or based on UID from the device DROM.
Can be used to uniquely identify particular device.
+
+What: /sys/bus/thunderbolt/devices/.../nvm_version
+Date: Sep 2017
+KernelVersion: 4.13
+Contact: [email protected]
+Description: If the device has upgradeable firmware the version
+ number is available here. Format: %x.%x, major.minor.
+ If the device is in safe mode reading the file returns
+ -ENODATA instead as the NVM version is not available.
+
+What: /sys/bus/thunderbolt/devices/.../nvm_authenticate
+Date: Sep 2017
+KernelVersion: 4.13
+Contact: [email protected]
+Description: When a new NVM image is written to the non-active NVM
+ area (through non_activeX NVMem device), the
+ authentication procedure is started by writing 1 to
+ this file. If everything goes well, the device is
+ restarted with the new NVM firmware. If the image
+ verification fails an error code is returned instead.
+
+ When read, holds the status of the last authentication
+ operation if an error occurred during the process. This
+ is directly the status value from the DMA configuration
+ based mailbox before the device is power cycled. Writing
+ 0 here clears the status.
diff --git a/drivers/thunderbolt/Kconfig b/drivers/thunderbolt/Kconfig
index a9cc724985ad..f4869c38c7e4 100644
--- a/drivers/thunderbolt/Kconfig
+++ b/drivers/thunderbolt/Kconfig
@@ -6,6 +6,7 @@ menuconfig THUNDERBOLT
select CRC32
select CRYPTO
select CRYPTO_HASH
+ select NVMEM
help
Thunderbolt Controller driver. This driver is required if you
want to hotplug Thunderbolt devices on Apple hardware or on PCs
diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c
index f71b63e90016..9f2dcd48974d 100644
--- a/drivers/thunderbolt/domain.c
+++ b/drivers/thunderbolt/domain.c
@@ -426,6 +426,23 @@ int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw)
return ret;
}

+/**
+ * tb_domain_disconnect_pcie_paths() - Disconnect all PCIe paths
+ * @tb: Domain whose PCIe paths to disconnect
+ *
+ * This needs to be called in preparation for NVM upgrade of the host
+ * controller. Makes sure all PCIe paths are disconnected.
+ *
+ * Return %0 on success and negative errno in case of error.
+ */
+int tb_domain_disconnect_pcie_paths(struct tb *tb)
+{
+ if (!tb->cm_ops->disconnect_pcie_paths)
+ return -EPERM;
+
+ return tb->cm_ops->disconnect_pcie_paths(tb);
+}
+
int tb_domain_init(void)
{
return bus_register(&tb_bus_type);
@@ -435,4 +452,5 @@ void tb_domain_exit(void)
{
bus_unregister(&tb_bus_type);
ida_destroy(&tb_domain_ida);
+ tb_switch_exit();
}
diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
index 0ffa4ec249ac..8ee340290219 100644
--- a/drivers/thunderbolt/icm.c
+++ b/drivers/thunderbolt/icm.c
@@ -54,6 +54,7 @@
* where ICM needs to be started manually
* @vnd_cap: Vendor defined capability where PCIe2CIO mailbox resides
* (only set when @upstream_port is not %NULL)
+ * @safe_mode: ICM is in safe mode
* @is_supported: Checks if we can support ICM on this controller
* @get_mode: Read and return the ICM firmware mode (optional)
* @get_route: Find a route string for given switch
@@ -65,6 +66,7 @@ struct icm {
struct delayed_work rescan_work;
struct pci_dev *upstream_port;
int vnd_cap;
+ bool safe_mode;
bool (*is_supported)(struct tb *tb);
int (*get_mode)(struct tb *tb);
int (*get_route)(struct tb *tb, u8 link, u8 depth, u64 *route);
@@ -852,6 +854,10 @@ static int icm_firmware_init(struct tb *tb)
ret = icm->get_mode(tb);

switch (ret) {
+ case NHI_FW_SAFE_MODE:
+ icm->safe_mode = true;
+ break;
+
case NHI_FW_CM_MODE:
/* Ask ICM to accept all Thunderbolt devices */
nhi_mailbox_cmd(nhi, NHI_MAILBOX_ALLOW_ALL_DEVS, 0);
@@ -879,12 +885,20 @@ static int icm_firmware_init(struct tb *tb)

static int icm_driver_ready(struct tb *tb)
{
+ struct icm *icm = tb_priv(tb);
int ret;

ret = icm_firmware_init(tb);
if (ret)
return ret;

+ if (icm->safe_mode) {
+ tb_info(tb, "Thunderbolt host controller is in safe mode.\n");
+ tb_info(tb, "You need to update NVM firmware of the controller before it can be used.\n");
+ tb_info(tb, "For latest updates check https://thunderbolttechnology.net/updates.\n");
+ return 0;
+ }
+
return __icm_driver_ready(tb, &tb->security_level);
}

@@ -975,12 +989,23 @@ static void icm_complete(struct tb *tb)

static int icm_start(struct tb *tb)
{
+ struct icm *icm = tb_priv(tb);
int ret;

- tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0);
+ if (icm->safe_mode)
+ tb->root_switch = tb_switch_alloc_safe_mode(tb, &tb->dev, 0);
+ else
+ tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0);
if (!tb->root_switch)
return -ENODEV;

+ /*
+ * NVM upgrade has not been tested on Apple systems and they
+ * don't provide images publicly either. To be on the safe side
+ * prevent root switch NVM upgrade on Macs for now.
+ */
+ tb->root_switch->no_nvm_upgrade = is_apple();
+
ret = tb_switch_add(tb->root_switch);
if (ret)
tb_switch_put(tb->root_switch);
@@ -998,6 +1023,11 @@ static void icm_stop(struct tb *tb)
nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DRV_UNLOADS, 0);
}

+static int icm_disconnect_pcie_paths(struct tb *tb)
+{
+ return nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DISCONNECT_PCIE_PATHS, 0);
+}
+
/* Falcon Ridge and Alpine Ridge */
static const struct tb_cm_ops icm_fr_ops = {
.driver_ready = icm_driver_ready,
@@ -1009,6 +1039,7 @@ static const struct tb_cm_ops icm_fr_ops = {
.approve_switch = icm_fr_approve_switch,
.add_switch_key = icm_fr_add_switch_key,
.challenge_switch_key = icm_fr_challenge_switch_key,
+ .disconnect_pcie_paths = icm_disconnect_pcie_paths,
};

struct tb *icm_probe(struct tb_nhi *nhi)
diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
index 953864ae0ab3..5b5bb2c436be 100644
--- a/drivers/thunderbolt/nhi.h
+++ b/drivers/thunderbolt/nhi.h
@@ -155,6 +155,7 @@ enum nhi_fw_mode {

enum nhi_mailbox_cmd {
NHI_MAILBOX_SAVE_DEVS = 0x05,
+ NHI_MAILBOX_DISCONNECT_PCIE_PATHS = 0x06,
NHI_MAILBOX_DRV_UNLOADS = 0x07,
NHI_MAILBOX_ALLOW_ALL_DEVS = 0x23,
};
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 1524edf42ee8..ab3e8f410444 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -5,13 +5,395 @@
*/

#include <linux/delay.h>
+#include <linux/idr.h>
+#include <linux/nvmem-provider.h>
+#include <linux/sizes.h>
#include <linux/slab.h>
+#include <linux/vmalloc.h>

#include "tb.h"

/* Switch authorization from userspace is serialized by this lock */
static DEFINE_MUTEX(switch_lock);

+/* Switch NVM support */
+
+#define NVM_DEVID 0x05
+#define NVM_VERSION 0x08
+#define NVM_CSS 0x10
+#define NVM_FLASH_SIZE 0x45
+
+#define NVM_MIN_SIZE SZ_32K
+#define NVM_MAX_SIZE SZ_512K
+
+static DEFINE_IDA(nvm_ida);
+
+struct nvm_auth_status {
+ struct list_head list;
+ uuid_be uuid;
+ u32 status;
+};
+
+/*
+ * Hold NVM authentication failure status per switch. This information
+ * needs to stay around even when the switch gets power cycled so we
+ * keep it separately.
+ */
+static LIST_HEAD(nvm_auth_status_cache);
+static DEFINE_MUTEX(nvm_auth_status_lock);
+
+static struct nvm_auth_status *__nvm_get_auth_status(const struct tb_switch *sw)
+{
+ struct nvm_auth_status *st;
+
+ list_for_each_entry(st, &nvm_auth_status_cache, list) {
+ if (!uuid_be_cmp(st->uuid, *sw->uuid))
+ return st;
+ }
+
+ return NULL;
+}
+
+static void nvm_get_auth_status(const struct tb_switch *sw, u32 *status)
+{
+ struct nvm_auth_status *st;
+
+ mutex_lock(&nvm_auth_status_lock);
+ st = __nvm_get_auth_status(sw);
+ mutex_unlock(&nvm_auth_status_lock);
+
+ *status = st ? st->status : 0;
+}
+
+static void nvm_set_auth_status(const struct tb_switch *sw, u32 status)
+{
+ struct nvm_auth_status *st;
+
+ if (WARN_ON(!sw->uuid))
+ return;
+
+ mutex_lock(&nvm_auth_status_lock);
+ st = __nvm_get_auth_status(sw);
+
+ if (!st) {
+ st = kzalloc(sizeof(*st), GFP_KERNEL);
+ if (!st)
+ goto unlock;
+
+ memcpy(&st->uuid, sw->uuid, sizeof(st->uuid));
+ INIT_LIST_HEAD(&st->list);
+ list_add_tail(&st->list, &nvm_auth_status_cache);
+ }
+
+ st->status = status;
+unlock:
+ mutex_unlock(&nvm_auth_status_lock);
+}
+
+static void nvm_clear_auth_status(const struct tb_switch *sw)
+{
+ struct nvm_auth_status *st;
+
+ mutex_lock(&nvm_auth_status_lock);
+ st = __nvm_get_auth_status(sw);
+ if (st) {
+ list_del(&st->list);
+ kfree(st);
+ }
+ mutex_unlock(&nvm_auth_status_lock);
+}
+
+static int nvm_validate_and_write(struct tb_switch *sw)
+{
+ unsigned int image_size, hdr_size;
+ const u8 *buf = sw->nvm->buf;
+ u16 ds_size;
+ int ret;
+
+ if (!buf)
+ return -EINVAL;
+
+ image_size = sw->nvm->buf_data_size;
+ if (image_size < NVM_MIN_SIZE || image_size > NVM_MAX_SIZE)
+ return -EINVAL;
+
+ /*
+ * FARB pointer must point inside the image and must at least
+ * contain parts of the digital section we will be reading here.
+ */
+ hdr_size = (*(u32 *)buf) & 0xffffff;
+ if (hdr_size + NVM_DEVID + 2 >= image_size)
+ return -EINVAL;
+
+ /* Digital section start should be aligned to 4k page */
+ if (!IS_ALIGNED(hdr_size, SZ_4K))
+ return -EINVAL;
+
+ /*
+ * Read digital section size and check that it also fits inside
+ * the image.
+ */
+ ds_size = *(u16 *)(buf + hdr_size);
+ if (ds_size >= image_size)
+ return -EINVAL;
+
+ if (!sw->safe_mode) {
+ u16 device_id;
+
+ /*
+ * Make sure the device ID in the image matches the one
+ * we read from the switch config space.
+ */
+ device_id = *(u16 *)(buf + hdr_size + NVM_DEVID);
+ if (device_id != sw->config.device_id)
+ return -EINVAL;
+
+ if (sw->generation < 3) {
+ /* Write CSS headers first */
+ ret = dma_port_flash_write(sw->dma_port,
+ DMA_PORT_CSS_ADDRESS, buf + NVM_CSS,
+ DMA_PORT_CSS_MAX_SIZE);
+ if (ret)
+ return ret;
+ }
+
+ /* Skip headers in the image */
+ buf += hdr_size;
+ image_size -= hdr_size;
+ }
+
+ return dma_port_flash_write(sw->dma_port, 0, buf, image_size);
+}
+
+static int nvm_authenticate_host(struct tb_switch *sw)
+{
+ int ret;
+
+ /*
+ * Root switch NVM upgrade requires that we disconnect the
+ * existing PCIe paths first (in case it is not in safe mode
+ * already).
+ */
+ if (!sw->safe_mode) {
+ ret = tb_domain_disconnect_pcie_paths(sw->tb);
+ if (ret)
+ return ret;
+ /*
+ * The host controller goes away pretty soon after this if
+ * everything goes well so getting timeout is expected.
+ */
+ ret = dma_port_flash_update_auth(sw->dma_port);
+ return ret == -ETIMEDOUT ? 0 : ret;
+ }
+
+ /*
+ * From safe mode we can get out by just power cycling the
+ * switch.
+ */
+ dma_port_power_cycle(sw->dma_port);
+ return 0;
+}
+
+static int nvm_authenticate_device(struct tb_switch *sw)
+{
+ int ret, retries = 10;
+
+ ret = dma_port_flash_update_auth(sw->dma_port);
+ if (ret && ret != -ETIMEDOUT)
+ return ret;
+
+ /*
+ * Poll here for the authentication status. It takes some time
+ * for the device to respond (we get timeout for a while). Once
+ * we get a response the device needs to be power cycled in order
+ * for the new NVM to be taken into use.
+ */
+ do {
+ u32 status;
+
+ ret = dma_port_flash_update_auth_status(sw->dma_port, &status);
+ if (ret < 0 && ret != -ETIMEDOUT)
+ return ret;
+ if (ret > 0) {
+ if (status) {
+ tb_sw_warn(sw, "failed to authenticate NVM\n");
+ nvm_set_auth_status(sw, status);
+ }
+
+ tb_sw_info(sw, "power cycling the switch now\n");
+ dma_port_power_cycle(sw->dma_port);
+ return 0;
+ }
+
+ msleep(500);
+ } while (--retries);
+
+ return -ETIMEDOUT;
+}
+
+static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val,
+ size_t bytes)
+{
+ struct tb_switch *sw = priv;
+
+ return dma_port_flash_read(sw->dma_port, offset, val, bytes);
+}
+
+static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val,
+ size_t bytes)
+{
+ struct tb_switch *sw = priv;
+ int ret = 0;
+
+ if (mutex_lock_interruptible(&switch_lock))
+ return -ERESTARTSYS;
+
+ /*
+ * Since writing the NVM image might require some special steps,
+ * for example when CSS headers are written, we cache the image
+ * locally here and handle the special cases when the user asks
+ * us to authenticate the image.
+ */
+ if (!sw->nvm->buf) {
+ sw->nvm->buf = vmalloc(NVM_MAX_SIZE);
+ if (!sw->nvm->buf) {
+ ret = -ENOMEM;
+ goto unlock;
+ }
+ }
+
+ sw->nvm->buf_data_size = offset + bytes;
+ memcpy(sw->nvm->buf + offset, val, bytes);
+
+unlock:
+ mutex_unlock(&switch_lock);
+
+ return ret;
+}
+
+static struct nvmem_device *register_nvmem(struct tb_switch *sw, int id,
+ size_t size, bool active)
+{
+ struct nvmem_config config;
+
+ memset(&config, 0, sizeof(config));
+
+ if (active) {
+ config.name = "nvm_active";
+ config.reg_read = tb_switch_nvm_read;
+ } else {
+ config.name = "nvm_non_active";
+ config.reg_write = tb_switch_nvm_write;
+ }
+
+ config.id = id;
+ config.stride = 4;
+ config.word_size = 4;
+ config.size = size;
+ config.dev = &sw->dev;
+ config.owner = THIS_MODULE;
+ config.root_only = true;
+ config.priv = sw;
+
+ return nvmem_register(&config);
+}
+
+static int tb_switch_nvm_add(struct tb_switch *sw)
+{
+ struct nvmem_device *nvm_dev;
+ struct tb_switch_nvm *nvm;
+ u32 val;
+ int ret;
+
+ if (!sw->dma_port)
+ return 0;
+
+ nvm = kzalloc(sizeof(*nvm), GFP_KERNEL);
+ if (!nvm)
+ return -ENOMEM;
+
+ nvm->id = ida_simple_get(&nvm_ida, 0, 0, GFP_KERNEL);
+
+ /*
+ * If the switch is in safe-mode the only accessible portion of
+ * the NVM is the non-active one where userspace is expected to
+ * write new functional NVM.
+ */
+ if (!sw->safe_mode) {
+ u32 nvm_size, hdr_size;
+
+ ret = dma_port_flash_read(sw->dma_port, NVM_FLASH_SIZE, &val,
+ sizeof(val));
+ if (ret)
+ goto err_ida;
+
+ hdr_size = sw->generation < 3 ? SZ_8K : SZ_16K;
+ nvm_size = (SZ_1M << (val & 7)) / 8;
+ nvm_size = (nvm_size - hdr_size) / 2;
+
+ ret = dma_port_flash_read(sw->dma_port, NVM_VERSION, &val,
+ sizeof(val));
+ if (ret)
+ goto err_ida;
+
+ nvm->major = val >> 16;
+ nvm->minor = val >> 8;
+
+ nvm_dev = register_nvmem(sw, nvm->id, nvm_size, true);
+ if (IS_ERR(nvm_dev)) {
+ ret = PTR_ERR(nvm_dev);
+ goto err_ida;
+ }
+ nvm->active = nvm_dev;
+ }
+
+ nvm_dev = register_nvmem(sw, nvm->id, NVM_MAX_SIZE, false);
+ if (IS_ERR(nvm_dev)) {
+ ret = PTR_ERR(nvm_dev);
+ goto err_nvm_active;
+ }
+ nvm->non_active = nvm_dev;
+
+ mutex_lock(&switch_lock);
+ sw->nvm = nvm;
+ mutex_unlock(&switch_lock);
+
+ return 0;
+
+err_nvm_active:
+ if (nvm->active)
+ nvmem_unregister(nvm->active);
+err_ida:
+ ida_simple_remove(&nvm_ida, nvm->id);
+ kfree(nvm);
+
+ return ret;
+}
+
+static void tb_switch_nvm_remove(struct tb_switch *sw)
+{
+ struct tb_switch_nvm *nvm;
+
+ mutex_lock(&switch_lock);
+ nvm = sw->nvm;
+ sw->nvm = NULL;
+ mutex_unlock(&switch_lock);
+
+ if (!nvm)
+ return;
+
+ /* Remove authentication status in case the switch is unplugged */
+ if (!nvm->authenticating)
+ nvm_clear_auth_status(sw);
+
+ nvmem_unregister(nvm->non_active);
+ if (nvm->active)
+ nvmem_unregister(nvm->active);
+ ida_simple_remove(&nvm_ida, nvm->id);
+ vfree(nvm->buf);
+ kfree(nvm);
+}
+
/* port utility functions */

static const char *tb_port_type(struct tb_regs_port_header *port)
@@ -448,6 +830,83 @@ static ssize_t key_store(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RW(key);

+static ssize_t nvm_authenticate_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct tb_switch *sw = tb_to_switch(dev);
+ u32 status;
+
+ nvm_get_auth_status(sw, &status);
+ return sprintf(buf, "%#x\n", status);
+}
+
+static ssize_t nvm_authenticate_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct tb_switch *sw = tb_to_switch(dev);
+ bool val;
+ int ret;
+
+ if (mutex_lock_interruptible(&switch_lock))
+ return -ERESTARTSYS;
+
+ /* If NVMem devices are not yet added */
+ if (!sw->nvm) {
+ ret = -EAGAIN;
+ goto exit_unlock;
+ }
+
+ ret = kstrtobool(buf, &val);
+ if (ret)
+ goto exit_unlock;
+
+ /* Always clear the authentication status */
+ nvm_clear_auth_status(sw);
+
+ if (val) {
+ ret = nvm_validate_and_write(sw);
+ if (ret)
+ goto exit_unlock;
+
+ sw->nvm->authenticating = true;
+
+ if (!tb_route(sw))
+ ret = nvm_authenticate_host(sw);
+ else
+ ret = nvm_authenticate_device(sw);
+ }
+
+exit_unlock:
+ mutex_unlock(&switch_lock);
+
+ if (ret)
+ return ret;
+ return count;
+}
+static DEVICE_ATTR_RW(nvm_authenticate);
+
+static ssize_t nvm_version_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct tb_switch *sw = tb_to_switch(dev);
+ int ret;
+
+ if (mutex_lock_interruptible(&switch_lock))
+ return -ERESTARTSYS;
+
+ if (sw->safe_mode)
+ ret = -ENODATA;
+ else if (!sw->nvm)
+ ret = -EAGAIN;
+ else
+ ret = sprintf(buf, "%x.%x\n", sw->nvm->major, sw->nvm->minor);
+
+ mutex_unlock(&switch_lock);
+
+ return ret;
+}
+static DEVICE_ATTR_RO(nvm_version);
+
static ssize_t vendor_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
@@ -480,6 +939,8 @@ static struct attribute *switch_attrs[] = {
&dev_attr_device.attr,
&dev_attr_device_name.attr,
&dev_attr_key.attr,
+ &dev_attr_nvm_authenticate.attr,
+ &dev_attr_nvm_version.attr,
&dev_attr_vendor.attr,
&dev_attr_vendor_name.attr,
&dev_attr_unique_id.attr,
@@ -498,9 +959,14 @@ static umode_t switch_attr_is_visible(struct kobject *kobj,
sw->security_level == TB_SECURITY_SECURE)
return attr->mode;
return 0;
+ } else if (attr == &dev_attr_nvm_authenticate.attr ||
+ attr == &dev_attr_nvm_version.attr) {
+ if (sw->dma_port)
+ return attr->mode;
+ return 0;
}

- return attr->mode;
+ return sw->safe_mode ? 0 : attr->mode;
}

static struct attribute_group switch_group = {
@@ -652,6 +1118,45 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
}

/**
+ * tb_switch_alloc_safe_mode() - allocate a switch that is in safe mode
+ * @tb: Pointer to the owning domain
+ * @parent: Parent device for this switch
+ * @route: Route string for this switch
+ *
+ * This creates a switch in safe mode. This means the switch pretty much
+ * lacks all capabilities except DMA configuration port before it is
+ * flashed with a valid NVM firmware.
+ *
+ * The returned switch must be released by calling tb_switch_put().
+ *
+ * Return: Pointer to the allocated switch or %NULL in case of failure
+ */
+struct tb_switch *
+tb_switch_alloc_safe_mode(struct tb *tb, struct device *parent, u64 route)
+{
+ struct tb_switch *sw;
+
+ sw = kzalloc(sizeof(*sw), GFP_KERNEL);
+ if (!sw)
+ return NULL;
+
+ sw->tb = tb;
+ sw->config.depth = tb_route_length(route);
+ sw->config.route_hi = upper_32_bits(route);
+ sw->config.route_lo = lower_32_bits(route);
+ sw->safe_mode = true;
+
+ device_initialize(&sw->dev);
+ sw->dev.parent = parent;
+ sw->dev.bus = &tb_bus_type;
+ sw->dev.type = &tb_switch_type;
+ sw->dev.groups = switch_groups;
+ dev_set_name(&sw->dev, "%u-%llx", tb->index, tb_route(sw));
+
+ return sw;
+}
+
+/**
* tb_switch_configure() - Uploads configuration to the switch
* @sw: Switch to configure
*
@@ -717,8 +1222,11 @@ static void tb_switch_set_uuid(struct tb_switch *sw)
sw->uuid = kmemdup(uuid, sizeof(uuid), GFP_KERNEL);
}

-static void tb_switch_add_dma_port(struct tb_switch *sw)
+static int tb_switch_add_dma_port(struct tb_switch *sw)
{
+ u32 status;
+ int ret;
+
switch (sw->generation) {
case 3:
break;
@@ -726,14 +1234,49 @@ static void tb_switch_add_dma_port(struct tb_switch *sw)
case 2:
/* Only root switch can be upgraded */
if (tb_route(sw))
- return;
+ return 0;
break;

default:
- return;
+ /*
+ * DMA port is the only thing available when the switch
+ * is in safe mode.
+ */
+ if (!sw->safe_mode)
+ return 0;
+ break;
}

+ if (sw->no_nvm_upgrade)
+ return 0;
+
sw->dma_port = dma_port_alloc(sw);
+ if (!sw->dma_port)
+ return 0;
+
+ /*
+ * Check status of the previous flash authentication. If there
+ * is one we need to power cycle the switch in any case to make
+ * it functional again.
+ */
+ ret = dma_port_flash_update_auth_status(sw->dma_port, &status);
+ if (ret <= 0)
+ return ret;
+
+ if (status) {
+ tb_sw_info(sw, "switch flash authentication failed\n");
+ tb_switch_set_uuid(sw);
+ nvm_set_auth_status(sw, status);
+ }
+
+ tb_sw_info(sw, "power cycling the switch now\n");
+ dma_port_power_cycle(sw->dma_port);
+
+ /*
+ * We return an error here which causes adding the switch to fail.
+ * It should appear back after the power cycle is complete.
+ */
+ return -ESHUTDOWN;
}

/**
@@ -759,29 +1302,41 @@ int tb_switch_add(struct tb_switch *sw)
* to the userspace. NVM can be accessed through DMA
* configuration based mailbox.
*/
- tb_switch_add_dma_port(sw);
-
- /* read drom */
- ret = tb_drom_read(sw);
- if (ret) {
- tb_sw_warn(sw, "tb_eeprom_read_rom failed\n");
+ ret = tb_switch_add_dma_port(sw);
+ if (ret)
return ret;
- }
- tb_sw_info(sw, "uid: %#llx\n", sw->uid);

- tb_switch_set_uuid(sw);
+ if (!sw->safe_mode) {
+ /* read drom */
+ ret = tb_drom_read(sw);
+ if (ret) {
+ tb_sw_warn(sw, "tb_eeprom_read_rom failed\n");
+ return ret;
+ }
+ tb_sw_info(sw, "uid: %#llx\n", sw->uid);

- for (i = 0; i <= sw->config.max_port_number; i++) {
- if (sw->ports[i].disabled) {
- tb_port_info(&sw->ports[i], "disabled by eeprom\n");
- continue;
+ tb_switch_set_uuid(sw);
+
+ for (i = 0; i <= sw->config.max_port_number; i++) {
+ if (sw->ports[i].disabled) {
+ tb_port_info(&sw->ports[i], "disabled by eeprom\n");
+ continue;
+ }
+ ret = tb_init_port(&sw->ports[i]);
+ if (ret)
+ return ret;
}
- ret = tb_init_port(&sw->ports[i]);
- if (ret)
- return ret;
}

- return device_add(&sw->dev);
+ ret = device_add(&sw->dev);
+ if (ret)
+ return ret;
+
+ ret = tb_switch_nvm_add(sw);
+ if (ret)
+ device_del(&sw->dev);
+
+ return ret;
}

/**
@@ -808,6 +1363,7 @@ void tb_switch_remove(struct tb_switch *sw)
if (!sw->is_unplugged)
tb_plug_events_active(sw, false);

+ tb_switch_nvm_remove(sw);
device_unregister(&sw->dev);
}

@@ -976,3 +1532,8 @@ struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_be *uuid)

return NULL;
}
+
+void tb_switch_exit(void)
+{
+ ida_destroy(&nvm_ida);
+}
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index ad2304bad592..1b02ca0b6129 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -369,6 +369,13 @@ static int tb_start(struct tb *tb)
if (!tb->root_switch)
return -ENOMEM;

+ /*
+ * ICM firmware upgrade needs running firmware and in native
+ * mode that is not available so disable firmware upgrade of the
+ * root switch.
+ */
+ tb->root_switch->no_nvm_upgrade = true;
+
ret = tb_switch_configure(tb->root_switch);
if (ret) {
tb_switch_put(tb->root_switch);
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index a998b3a251d5..3d9f64676e58 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -7,6 +7,7 @@
#ifndef TB_H_
#define TB_H_

+#include <linux/nvmem-provider.h>
#include <linux/pci.h>
#include <linux/uuid.h>

@@ -15,6 +16,30 @@
#include "dma_port.h"

/**
+ * struct tb_switch_nvm - Structure holding switch NVM information
+ * @major: Major version number of the active NVM portion
+ * @minor: Minor version number of the active NVM portion
+ * @id: Identifier used with both NVM portions
+ * @active: Active portion NVMem device
+ * @non_active: Non-active portion NVMem device
+ * @buf: Buffer where the NVM image is stored before it is written to
+ * the actual NVM flash device
+ * @buf_data_size: Number of bytes actually consumed by the new NVM
+ * image
+ * @authenticating: The switch is authenticating the new NVM
+ */
+struct tb_switch_nvm {
+ u8 major;
+ u8 minor;
+ int id;
+ struct nvmem_device *active;
+ struct nvmem_device *non_active;
+ void *buf;
+ size_t buf_data_size;
+ bool authenticating;
+};
+
+/**
* enum tb_security_level - Thunderbolt security level
* @TB_SECURITY_NONE: No security, legacy mode
* @TB_SECURITY_USER: User approval required at minimum
@@ -39,7 +64,8 @@ enum tb_security_level {
* @ports: Ports in this switch
* @dma_port: If the switch has port supporting DMA configuration based
* mailbox this will hold the pointer to that (%NULL
- * otherwise).
+ * otherwise). If set it also means the switch has
+ * upgradeable NVM.
* @tb: Pointer to the domain the switch belongs to
* @uid: Unique ID of the switch
* @uuid: UUID of the switch (or %NULL if not supported)
@@ -51,6 +77,9 @@ enum tb_security_level {
* @cap_plug_events: Offset to the plug events capability (%0 if not found)
* @is_unplugged: The switch is going away
* @drom: DROM of the switch (%NULL if not found)
+ * @nvm: Pointer to the NVM if the switch has one (%NULL otherwise)
+ * @no_nvm_upgrade: Prevent NVM upgrade of this switch
+ * @safe_mode: The switch is in safe-mode
* @authorized: Whether the switch is authorized by user or policy
* @work: Work used to automatically authorize a switch
* @security_level: Switch supported security level
@@ -81,6 +110,9 @@ struct tb_switch {
int cap_plug_events;
bool is_unplugged;
u8 *drom;
+ struct tb_switch_nvm *nvm;
+ bool no_nvm_upgrade;
+ bool safe_mode;
unsigned int authorized;
struct work_struct work;
enum tb_security_level security_level;
@@ -172,6 +204,7 @@ struct tb_path {
* @approve_switch: Approve switch
* @add_switch_key: Add key to switch
* @challenge_switch_key: Challenge switch using key
+ * @disconnect_pcie_paths: Disconnects PCIe paths before NVM update
*/
struct tb_cm_ops {
int (*driver_ready)(struct tb *tb);
@@ -187,6 +220,7 @@ struct tb_cm_ops {
int (*add_switch_key)(struct tb *tb, struct tb_switch *sw);
int (*challenge_switch_key)(struct tb *tb, struct tb_switch *sw,
const u8 *challenge, u8 *response);
+ int (*disconnect_pcie_paths)(struct tb *tb);
};

/**
@@ -340,6 +374,7 @@ extern struct device_type tb_switch_type;

int tb_domain_init(void);
void tb_domain_exit(void);
+void tb_switch_exit(void);

struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize);
int tb_domain_add(struct tb *tb);
@@ -351,6 +386,7 @@ void tb_domain_complete(struct tb *tb);
int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw);
int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw);
int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw);
+int tb_domain_disconnect_pcie_paths(struct tb *tb);

static inline void tb_domain_put(struct tb *tb)
{
@@ -359,6 +395,8 @@ static inline void tb_domain_put(struct tb *tb)

struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
u64 route);
+struct tb_switch *tb_switch_alloc_safe_mode(struct tb *tb,
+ struct device *parent, u64 route);
int tb_switch_configure(struct tb_switch *sw);
int tb_switch_add(struct tb_switch *sw);
void tb_switch_remove(struct tb_switch *sw);
--
2.11.0

2017-06-06 12:25:36

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 09/27] thunderbolt: Convert switch to a device

A Thunderbolt domain consists of switches that are connected to each
other, forming a bus. This converts each switch into a real Linux
device structure and adds it to the domain. The advantage here is that
we get all the goodies from the driver core, like reference counting
and the sysfs hierarchy, for free.

Also expose device identification information to the userspace via new
sysfs attributes.

In order to support the internal connection manager (ICM), we separate
switch configuration into its own function (tb_switch_configure())
which is only called by the existing native connection manager
implementation used on Macs.
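
To make the resulting split concrete, a native connection manager is
expected to drive a newly discovered switch roughly as follows
(simplified sketch using the functions introduced here, error handling
trimmed; the parent device shown is the one used for the root switch):

	sw = tb_switch_alloc(tb, &tb->dev, route);
	if (!sw)
		return;
	/* Native CM only; an ICM based flow skips tb_switch_configure() */
	if (tb_switch_configure(sw)) {
		tb_switch_put(sw);
		return;
	}
	if (tb_switch_add(sw))
		tb_switch_put(sw);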

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
Documentation/ABI/testing/sysfs-bus-thunderbolt | 22 ++
drivers/thunderbolt/eeprom.c | 2 +
drivers/thunderbolt/switch.c | 261 +++++++++++++++++++-----
drivers/thunderbolt/tb.c | 40 +++-
drivers/thunderbolt/tb.h | 45 +++-
5 files changed, 303 insertions(+), 67 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-bus-thunderbolt

diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt
new file mode 100644
index 000000000000..9f1bd0086938
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt
@@ -0,0 +1,22 @@
+What: /sys/bus/thunderbolt/devices/.../device
+Date: Sep 2017
+KernelVersion: 4.13
+Contact: [email protected]
+Description: This attribute contains id of this device extracted from
+ the device DROM.
+
+What: /sys/bus/thunderbolt/devices/.../vendor
+Date: Sep 2017
+KernelVersion: 4.13
+Contact: [email protected]
+Description: This attribute contains vendor id of this device extracted
+ from the device DROM.
+
+What: /sys/bus/thunderbolt/devices/.../unique_id
+Date: Sep 2017
+KernelVersion: 4.13
+Contact: [email protected]
+Description: This attribute contains unique_id string of this device.
+ This is either read from hardware registers (UUID on
+ newer hardware) or based on UID from the device DROM.
+ Can be used to uniquely identify particular device.
diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
index eb2179c98b09..7e485e3ef27e 100644
--- a/drivers/thunderbolt/eeprom.c
+++ b/drivers/thunderbolt/eeprom.c
@@ -479,6 +479,8 @@ int tb_drom_read(struct tb_switch *sw)
goto err;
}
sw->uid = header->uid;
+ sw->vendor = header->vendor_id;
+ sw->device = header->model_id;

crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
if (crc != header->data_crc32) {
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index b379b4183bac..0475bfc6c208 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -281,6 +281,9 @@ static int tb_plug_events_active(struct tb_switch *sw, bool active)
u32 data;
int res;

+ if (!sw->config.enabled)
+ return 0;
+
sw->config.plug_events_delay = 0xff;
res = tb_sw_write(sw, ((u32 *) &sw->config) + 4, TB_CFG_SWITCH, 4, 1);
if (res)
@@ -307,36 +310,79 @@ static int tb_plug_events_active(struct tb_switch *sw, bool active)
sw->cap_plug_events + 1, 1);
}

+static ssize_t device_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct tb_switch *sw = tb_to_switch(dev);

-/**
- * tb_switch_free() - free a tb_switch and all downstream switches
- */
-void tb_switch_free(struct tb_switch *sw)
+ return sprintf(buf, "%#x\n", sw->device);
+}
+static DEVICE_ATTR_RO(device);
+
+static ssize_t vendor_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
{
- int i;
- /* port 0 is the switch itself and never has a remote */
- for (i = 1; i <= sw->config.max_port_number; i++) {
- if (tb_is_upstream_port(&sw->ports[i]))
- continue;
- if (sw->ports[i].remote)
- tb_switch_free(sw->ports[i].remote->sw);
- sw->ports[i].remote = NULL;
- }
+ struct tb_switch *sw = tb_to_switch(dev);

- if (!sw->is_unplugged)
- tb_plug_events_active(sw, false);
+ return sprintf(buf, "%#x\n", sw->vendor);
+}
+static DEVICE_ATTR_RO(vendor);
+
+static ssize_t unique_id_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct tb_switch *sw = tb_to_switch(dev);
+
+ return sprintf(buf, "%pUb\n", sw->uuid);
+}
+static DEVICE_ATTR_RO(unique_id);
+
+static struct attribute *switch_attrs[] = {
+ &dev_attr_device.attr,
+ &dev_attr_vendor.attr,
+ &dev_attr_unique_id.attr,
+ NULL,
+};
+
+static struct attribute_group switch_group = {
+ .attrs = switch_attrs,
+};

+static const struct attribute_group *switch_groups[] = {
+ &switch_group,
+ NULL,
+};
+
+static void tb_switch_release(struct device *dev)
+{
+ struct tb_switch *sw = tb_to_switch(dev);
+
+ kfree(sw->uuid);
kfree(sw->ports);
kfree(sw->drom);
kfree(sw);
}

+struct device_type tb_switch_type = {
+ .name = "thunderbolt_device",
+ .release = tb_switch_release,
+};
+
/**
- * tb_switch_alloc() - allocate and initialize a switch
+ * tb_switch_alloc() - allocate a switch
+ * @tb: Pointer to the owning domain
+ * @parent: Parent device for this switch
+ * @route: Route string for this switch
*
- * Return: Returns a NULL on failure.
+ * Allocates and initializes a switch. Will not upload configuration to
+ * the switch. For that you need to call tb_switch_configure()
+ * separately. The returned switch should be released by calling
+ * tb_switch_put().
+ *
+ * Return: Pointer to the allocated switch or %NULL in case of failure
*/
-struct tb_switch *tb_switch_alloc(struct tb *tb, u64 route)
+struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
+ u64 route)
{
int i;
int cap;
@@ -351,11 +397,9 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, u64 route)

sw->tb = tb;
if (tb_cfg_read(tb->ctl, &sw->config, route, 0, TB_CFG_SWITCH, 0, 5))
- goto err;
- tb_info(tb,
- "initializing Switch at %#llx (depth: %d, up port: %d)\n",
- route, tb_route_length(route), upstream_port);
- tb_info(tb, "old switch config:\n");
+ goto err_free_sw_ports;
+
+ tb_info(tb, "current switch config:\n");
tb_dump_switch(tb, &sw->config);

/* configure switch */
@@ -363,30 +407,13 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, u64 route)
sw->config.depth = tb_route_length(route);
sw->config.route_lo = route;
sw->config.route_hi = route >> 32;
- sw->config.enabled = 1;
- /* from here on we may use the tb_sw_* functions & macros */
-
- if (sw->config.vendor_id != 0x8086)
- tb_sw_warn(sw, "unknown switch vendor id %#x\n",
- sw->config.vendor_id);
-
- if (sw->config.device_id != PCI_DEVICE_ID_INTEL_LIGHT_RIDGE &&
- sw->config.device_id != PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C &&
- sw->config.device_id != PCI_DEVICE_ID_INTEL_PORT_RIDGE &&
- sw->config.device_id != PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_BRIDGE &&
- sw->config.device_id != PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_BRIDGE)
- tb_sw_warn(sw, "unsupported switch device id %#x\n",
- sw->config.device_id);
-
- /* upload configuration */
- if (tb_sw_write(sw, 1 + (u32 *) &sw->config, TB_CFG_SWITCH, 1, 3))
- goto err;
+ sw->config.enabled = 0;

/* initialize ports */
sw->ports = kcalloc(sw->config.max_port_number + 1, sizeof(*sw->ports),
GFP_KERNEL);
if (!sw->ports)
- goto err;
+ goto err_free_sw_ports;

for (i = 0; i <= sw->config.max_port_number; i++) {
/* minimum setup for tb_find_cap and tb_drom_read to work */
@@ -397,35 +424,161 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, u64 route)
cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_PLUG_EVENTS);
if (cap < 0) {
tb_sw_warn(sw, "cannot find TB_VSE_CAP_PLUG_EVENTS aborting\n");
- goto err;
+ goto err_free_sw_ports;
}
sw->cap_plug_events = cap;

+ device_initialize(&sw->dev);
+ sw->dev.parent = parent;
+ sw->dev.bus = &tb_bus_type;
+ sw->dev.type = &tb_switch_type;
+ sw->dev.groups = switch_groups;
+ dev_set_name(&sw->dev, "%u-%llx", tb->index, tb_route(sw));
+
+ return sw;
+
+err_free_sw_ports:
+ kfree(sw->ports);
+ kfree(sw);
+
+ return NULL;
+}
+
+/**
+ * tb_switch_configure() - Uploads configuration to the switch
+ * @sw: Switch to configure
+ *
+ * Call this function before the switch is added to the system. It will
+ * upload the configuration to the switch and make it available for the
+ * connection manager to use.
+ *
+ * Return: %0 in case of success and negative errno in case of failure
+ */
+int tb_switch_configure(struct tb_switch *sw)
+{
+ struct tb *tb = sw->tb;
+ u64 route;
+ int ret;
+
+ route = tb_route(sw);
+ tb_info(tb,
+ "initializing Switch at %#llx (depth: %d, up port: %d)\n",
+ route, tb_route_length(route), sw->config.upstream_port_number);
+
+ if (sw->config.vendor_id != PCI_VENDOR_ID_INTEL)
+ tb_sw_warn(sw, "unknown switch vendor id %#x\n",
+ sw->config.vendor_id);
+
+ if (sw->config.device_id != PCI_DEVICE_ID_INTEL_LIGHT_RIDGE &&
+ sw->config.device_id != PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C &&
+ sw->config.device_id != PCI_DEVICE_ID_INTEL_PORT_RIDGE &&
+ sw->config.device_id != PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_BRIDGE &&
+ sw->config.device_id != PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_BRIDGE)
+ tb_sw_warn(sw, "unsupported switch device id %#x\n",
+ sw->config.device_id);
+
+ sw->config.enabled = 1;
+
+ /* upload configuration */
+ ret = tb_sw_write(sw, 1 + (u32 *)&sw->config, TB_CFG_SWITCH, 1, 3);
+ if (ret)
+ return ret;
+
+ return tb_plug_events_active(sw, true);
+}
+
+static void tb_switch_set_uuid(struct tb_switch *sw)
+{
+ u32 uuid[4];
+ int cap;
+
+ if (sw->uuid)
+ return;
+
+ /*
+ * The newer controllers include fused UUID as part of link
+ * controller specific registers
+ */
+ cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER);
+ if (cap > 0) {
+ tb_sw_read(sw, uuid, TB_CFG_SWITCH, cap + 3, 4);
+ } else {
+ /*
+ * ICM generates UUID based on UID and fills the upper
+ * two words with ones. This does not strictly follow the
+ * UUID format but we want to be compatible with it so
+ * we do the same here.
+ */
+ uuid[0] = sw->uid & 0xffffffff;
+ uuid[1] = (sw->uid >> 32) & 0xffffffff;
+ uuid[2] = 0xffffffff;
+ uuid[3] = 0xffffffff;
+ }
+
+ sw->uuid = kmemdup(uuid, sizeof(uuid), GFP_KERNEL);
+}
+
+/**
+ * tb_switch_add() - Add a switch to the domain
+ * @sw: Switch to add
+ *
+ * This is the last step in adding a switch to the domain. It will read
+ * identification information from the DROM and initialize ports so that
+ * they can be used to connect other switches. The switch will be
+ * exposed to userspace when this function returns successfully. To
+ * remove and release the switch, call tb_switch_remove().
+ *
+ * Return: %0 in case of success and negative errno in case of failure
+ */
+int tb_switch_add(struct tb_switch *sw)
+{
+ int i, ret;
+
/* read drom */
if (tb_drom_read(sw))
tb_sw_warn(sw, "tb_eeprom_read_rom failed, continuing\n");
tb_sw_info(sw, "uid: %#llx\n", sw->uid);

+ tb_switch_set_uuid(sw);
+
for (i = 0; i <= sw->config.max_port_number; i++) {
if (sw->ports[i].disabled) {
tb_port_info(&sw->ports[i], "disabled by eeprom\n");
continue;
}
- if (tb_init_port(&sw->ports[i]))
- goto err;
+ ret = tb_init_port(&sw->ports[i]);
+ if (ret)
+ return ret;
}

- /* TODO: I2C, IECS, link controller */
+ return device_add(&sw->dev);
+}

- if (tb_plug_events_active(sw, true))
- goto err;
+/**
+ * tb_switch_remove() - Remove and release a switch
+ * @sw: Switch to remove
+ *
+ * This will remove the switch from the domain and release it once the
+ * last reference has been dropped. If there are switches connected below
+ * this switch, they will be removed as well.
+ */
+void tb_switch_remove(struct tb_switch *sw)
+{
+ int i;

- return sw;
-err:
- kfree(sw->ports);
- kfree(sw->drom);
- kfree(sw);
- return NULL;
+ /* port 0 is the switch itself and never has a remote */
+ for (i = 1; i <= sw->config.max_port_number; i++) {
+ if (tb_is_upstream_port(&sw->ports[i]))
+ continue;
+ if (sw->ports[i].remote)
+ tb_switch_remove(sw->ports[i].remote->sw);
+ sw->ports[i].remote = NULL;
+ }
+
+ if (!sw->is_unplugged)
+ tb_plug_events_active(sw, false);
+
+ device_unregister(&sw->dev);
}

/**
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 9f00a0f28d53..94ecac012428 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -61,9 +61,21 @@ static void tb_scan_port(struct tb_port *port)
tb_port_WARN(port, "port already has a remote!\n");
return;
}
- sw = tb_switch_alloc(port->sw->tb, tb_downstream_route(port));
+ sw = tb_switch_alloc(port->sw->tb, &port->sw->dev,
+ tb_downstream_route(port));
if (!sw)
return;
+
+ if (tb_switch_configure(sw)) {
+ tb_switch_put(sw);
+ return;
+ }
+
+ if (tb_switch_add(sw)) {
+ tb_switch_put(sw);
+ return;
+ }
+
port->remote = tb_upstream_port(sw);
tb_upstream_port(sw)->remote = port;
tb_scan_switch(sw);
@@ -100,7 +112,7 @@ static void tb_free_unplugged_children(struct tb_switch *sw)
if (!port->remote)
continue;
if (port->remote->sw->is_unplugged) {
- tb_switch_free(port->remote->sw);
+ tb_switch_remove(port->remote->sw);
port->remote = NULL;
} else {
tb_free_unplugged_children(port->remote->sw);
@@ -266,7 +278,7 @@ static void tb_handle_hotplug(struct work_struct *work)
tb_port_info(port, "unplugged\n");
tb_sw_set_unplugged(port->remote->sw);
tb_free_invalid_tunnels(tb);
- tb_switch_free(port->remote->sw);
+ tb_switch_remove(port->remote->sw);
port->remote = NULL;
} else {
tb_port_info(port,
@@ -325,22 +337,32 @@ static void tb_stop(struct tb *tb)
tb_pci_deactivate(tunnel);
tb_pci_free(tunnel);
}
-
- if (tb->root_switch)
- tb_switch_free(tb->root_switch);
- tb->root_switch = NULL;
-
+ tb_switch_remove(tb->root_switch);
tcm->hotplug_active = false; /* signal tb_handle_hotplug to quit */
}

static int tb_start(struct tb *tb)
{
struct tb_cm *tcm = tb_priv(tb);
+ int ret;

- tb->root_switch = tb_switch_alloc(tb, 0);
+ tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0);
if (!tb->root_switch)
return -ENOMEM;

+ ret = tb_switch_configure(tb->root_switch);
+ if (ret) {
+ tb_switch_put(tb->root_switch);
+ return ret;
+ }
+
+ /* Announce the switch to the world */
+ ret = tb_switch_add(tb->root_switch);
+ if (ret) {
+ tb_switch_put(tb->root_switch);
+ return ret;
+ }
+
/* Full scan to discover devices added before the driver was loaded. */
tb_scan_switch(tb->root_switch);
tb_activate_pcie_devices(tb);
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 5fab4c44f124..f7dfe733d71a 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -8,20 +8,36 @@
#define TB_H_

#include <linux/pci.h>
+#include <linux/uuid.h>

#include "tb_regs.h"
#include "ctl.h"

/**
* struct tb_switch - a thunderbolt switch
+ * @dev: Device for the switch
+ * @config: Switch configuration
+ * @ports: Ports in this switch
+ * @tb: Pointer to the domain the switch belongs to
+ * @uid: Unique ID of the switch
+ * @uuid: UUID of the switch (or %NULL if not supported)
+ * @vendor: Vendor ID of the switch
+ * @device: Device ID of the switch
+ * @cap_plug_events: Offset to the plug events capability (%0 if not found)
+ * @is_unplugged: The switch is going away
+ * @drom: DROM of the switch (%NULL if not found)
*/
struct tb_switch {
+ struct device dev;
struct tb_regs_switch_header config;
struct tb_port *ports;
struct tb *tb;
u64 uid;
- int cap_plug_events; /* offset, zero if not found */
- bool is_unplugged; /* unplugged, will go away */
+ uuid_be *uuid;
+ u16 vendor;
+ u16 device;
+ int cap_plug_events;
+ bool is_unplugged;
u8 *drom;
};

@@ -242,6 +258,7 @@ struct tb *tb_probe(struct tb_nhi *nhi);

extern struct bus_type tb_bus_type;
extern struct device_type tb_domain_type;
+extern struct device_type tb_switch_type;

int tb_domain_init(void);
void tb_domain_exit(void);
@@ -257,14 +274,34 @@ static inline void tb_domain_put(struct tb *tb)
put_device(&tb->dev);
}

-struct tb_switch *tb_switch_alloc(struct tb *tb, u64 route);
-void tb_switch_free(struct tb_switch *sw);
+struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
+ u64 route);
+int tb_switch_configure(struct tb_switch *sw);
+int tb_switch_add(struct tb_switch *sw);
+void tb_switch_remove(struct tb_switch *sw);
void tb_switch_suspend(struct tb_switch *sw);
int tb_switch_resume(struct tb_switch *sw);
int tb_switch_reset(struct tb *tb, u64 route);
void tb_sw_set_unplugged(struct tb_switch *sw);
struct tb_switch *get_switch_at_route(struct tb_switch *sw, u64 route);

+static inline void tb_switch_put(struct tb_switch *sw)
+{
+ put_device(&sw->dev);
+}
+
+static inline bool tb_is_switch(const struct device *dev)
+{
+ return dev->type == &tb_switch_type;
+}
+
+static inline struct tb_switch *tb_to_switch(struct device *dev)
+{
+ if (tb_is_switch(dev))
+ return container_of(dev, struct tb_switch, dev);
+ return NULL;
+}
+
int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged);
int tb_port_add_nfc_credits(struct tb_port *port, int credits);
int tb_port_clear_counter(struct tb_port *port, int counter);
--
2.11.0

2017-06-06 12:30:48

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 08/27] thunderbolt: Introduce thunderbolt bus and connection manager

A Thunderbolt fabric consists of one or more switches. This fabric is
called a domain and it is controlled by an entity called the connection
manager. The connection manager can be either internal (driven by
firmware running on the host controller) or external (a software driver).
This driver currently implements support for the latter.

In order to manage switches and their properties more easily we model
this domain structure as a Linux bus. Each host controller adds a domain
device to this bus, and these devices are named domainN where N is the
index (id) of the domain.

We then abstract connection manager specific operations into a new
structure, tb_cm_ops, and convert the existing tb.c to fill it
accordingly. This makes it easier to add support for the internal
connection manager in subsequent patches.
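
As a rough sketch of how the pieces fit together (condensed from the
tb.c and nhi.c changes below; the ops table name here is only
illustrative), a connection manager allocates a domain, points cm_ops
at its operations and leaves the rest to the domain code:

	static const struct tb_cm_ops example_cm_ops = {
		.start		= tb_start,
		.stop		= tb_stop,
		.suspend_noirq	= tb_suspend_noirq,
		.resume_noirq	= tb_resume_noirq,
		.hotplug	= tb_schedule_hotplug_handler,
	};

	struct tb *tb_probe(struct tb_nhi *nhi)
	{
		struct tb_cm *tcm;
		struct tb *tb;

		/* Reserve room for the CM private data after struct tb */
		tb = tb_domain_alloc(nhi, sizeof(*tcm));
		if (!tb)
			return NULL;

		tb->cm_ops = &example_cm_ops;

		tcm = tb_priv(tb);
		INIT_LIST_HEAD(&tcm->tunnel_list);

		return tb;
	}

The NHI driver then calls tb_probe() followed by tb_domain_add(), which
starts the control channel and invokes the cm_ops->start() callback.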

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/Makefile | 2 +-
drivers/thunderbolt/domain.c | 230 +++++++++++++++++++++++++++++++++++++++
drivers/thunderbolt/nhi.c | 31 ++++--
drivers/thunderbolt/tb.c | 156 ++++++++++++--------------
drivers/thunderbolt/tb.h | 70 +++++++++---
drivers/thunderbolt/tunnel_pci.c | 9 +-
6 files changed, 377 insertions(+), 121 deletions(-)
create mode 100644 drivers/thunderbolt/domain.c

diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile
index 5d1053cdfa54..e276a9a62261 100644
--- a/drivers/thunderbolt/Makefile
+++ b/drivers/thunderbolt/Makefile
@@ -1,3 +1,3 @@
obj-${CONFIG_THUNDERBOLT} := thunderbolt.o
thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o tunnel_pci.o eeprom.o
-
+thunderbolt-objs += domain.o
diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c
new file mode 100644
index 000000000000..3302f4c59638
--- /dev/null
+++ b/drivers/thunderbolt/domain.c
@@ -0,0 +1,230 @@
+/*
+ * Thunderbolt bus support
+ *
+ * Copyright (C) 2017, Intel Corporation
+ * Author: Mika Westerberg <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/device.h>
+#include <linux/idr.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+#include "tb.h"
+
+static DEFINE_IDA(tb_domain_ida);
+
+struct bus_type tb_bus_type = {
+ .name = "thunderbolt",
+};
+
+static void tb_domain_release(struct device *dev)
+{
+ struct tb *tb = container_of(dev, struct tb, dev);
+
+ tb_ctl_free(tb->ctl);
+ destroy_workqueue(tb->wq);
+ ida_simple_remove(&tb_domain_ida, tb->index);
+ mutex_destroy(&tb->lock);
+ kfree(tb);
+}
+
+struct device_type tb_domain_type = {
+ .name = "thunderbolt_domain",
+ .release = tb_domain_release,
+};
+
+/**
+ * tb_domain_alloc() - Allocate a domain
+ * @nhi: Pointer to the host controller
+ * @privsize: Size of the connection manager private data
+ *
+ * Allocates and initializes a new Thunderbolt domain. Connection
+ * managers are expected to call this and then fill in @cm_ops
+ * accordingly.
+ *
+ * Call tb_domain_put() to release the domain before it has been added
+ * to the system.
+ *
+ * Return: allocated domain structure or %NULL in case of error
+ */
+struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
+{
+ struct tb *tb;
+
+ /*
+ * Make sure the structure sizes match what the hardware expects
+ * because bit-fields are being used.
+ */
+ BUILD_BUG_ON(sizeof(struct tb_regs_switch_header) != 5 * 4);
+ BUILD_BUG_ON(sizeof(struct tb_regs_port_header) != 8 * 4);
+ BUILD_BUG_ON(sizeof(struct tb_regs_hop) != 2 * 4);
+
+ tb = kzalloc(sizeof(*tb) + privsize, GFP_KERNEL);
+ if (!tb)
+ return NULL;
+
+ tb->nhi = nhi;
+ mutex_init(&tb->lock);
+
+ tb->index = ida_simple_get(&tb_domain_ida, 0, 0, GFP_KERNEL);
+ if (tb->index < 0)
+ goto err_free;
+
+ tb->wq = alloc_ordered_workqueue("thunderbolt%d", 0, tb->index);
+ if (!tb->wq)
+ goto err_remove_ida;
+
+ tb->dev.parent = &nhi->pdev->dev;
+ tb->dev.bus = &tb_bus_type;
+ tb->dev.type = &tb_domain_type;
+ dev_set_name(&tb->dev, "domain%d", tb->index);
+ device_initialize(&tb->dev);
+
+ return tb;
+
+err_remove_ida:
+ ida_simple_remove(&tb_domain_ida, tb->index);
+err_free:
+ kfree(tb);
+
+ return NULL;
+}
+
+/**
+ * tb_domain_add() - Add domain to the system
+ * @tb: Domain to add
+ *
+ * Starts the domain and adds it to the system. Hotplugging devices will
+ * work after this function has returned successfully. In order to remove
+ * and release the domain after this function has been called, call
+ * tb_domain_remove().
+ *
+ * Return: %0 in case of success and negative errno in case of error
+ */
+int tb_domain_add(struct tb *tb)
+{
+ int ret;
+
+ if (WARN_ON(!tb->cm_ops))
+ return -EINVAL;
+
+ mutex_lock(&tb->lock);
+
+ tb->ctl = tb_ctl_alloc(tb->nhi, tb->cm_ops->hotplug, tb);
+ if (!tb->ctl) {
+ ret = -ENOMEM;
+ goto err_unlock;
+ }
+
+ /*
+ * tb_schedule_hotplug_handler may be called as soon as the config
+ * channel is started. That's why we have to hold the lock here.
+ */
+ tb_ctl_start(tb->ctl);
+
+ ret = device_add(&tb->dev);
+ if (ret)
+ goto err_ctl_stop;
+
+ /* Start the domain */
+ if (tb->cm_ops->start) {
+ ret = tb->cm_ops->start(tb);
+ if (ret)
+ goto err_domain_del;
+ }
+
+ /* This starts event processing */
+ mutex_unlock(&tb->lock);
+
+ return 0;
+
+err_domain_del:
+ device_del(&tb->dev);
+err_ctl_stop:
+ tb_ctl_stop(tb->ctl);
+err_unlock:
+ mutex_unlock(&tb->lock);
+
+ return ret;
+}
+
+/**
+ * tb_domain_remove() - Removes and releases a domain
+ * @tb: Domain to remove
+ *
+ * Stops the domain, removes it from the system and releases all
+ * resources once the last reference has been dropped.
+ */
+void tb_domain_remove(struct tb *tb)
+{
+ mutex_lock(&tb->lock);
+ if (tb->cm_ops->stop)
+ tb->cm_ops->stop(tb);
+ /* Stop the domain control traffic */
+ tb_ctl_stop(tb->ctl);
+ mutex_unlock(&tb->lock);
+
+ flush_workqueue(tb->wq);
+ device_unregister(&tb->dev);
+}
+
+/**
+ * tb_domain_suspend_noirq() - Suspend a domain
+ * @tb: Domain to suspend
+ *
+ * Suspends all devices in the domain and stops the control channel.
+ */
+int tb_domain_suspend_noirq(struct tb *tb)
+{
+ int ret = 0;
+
+ /*
+ * The control channel interrupt is left enabled during suspend
+ * and taking the lock here prevents any events happening before
+ * we actually have stopped the domain and the control channel.
+ */
+ mutex_lock(&tb->lock);
+ if (tb->cm_ops->suspend_noirq)
+ ret = tb->cm_ops->suspend_noirq(tb);
+ if (!ret)
+ tb_ctl_stop(tb->ctl);
+ mutex_unlock(&tb->lock);
+
+ return ret;
+}
+
+/**
+ * tb_domain_resume_noirq() - Resume a domain
+ * @tb: Domain to resume
+ *
+ * Re-starts the control channel, and resumes all devices connected to
+ * the domain.
+ */
+int tb_domain_resume_noirq(struct tb *tb)
+{
+ int ret = 0;
+
+ mutex_lock(&tb->lock);
+ tb_ctl_start(tb->ctl);
+ if (tb->cm_ops->resume_noirq)
+ ret = tb->cm_ops->resume_noirq(tb);
+ mutex_unlock(&tb->lock);
+
+ return ret;
+}
+
+int tb_domain_init(void)
+{
+ return bus_register(&tb_bus_type);
+}
+
+void tb_domain_exit(void)
+{
+ bus_unregister(&tb_bus_type);
+ ida_destroy(&tb_domain_ida);
+}
diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index ed75c49748f5..c1113a3c4128 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -586,16 +586,16 @@ static int nhi_suspend_noirq(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct tb *tb = pci_get_drvdata(pdev);
- thunderbolt_suspend(tb);
- return 0;
+
+ return tb_domain_suspend_noirq(tb);
}

static int nhi_resume_noirq(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct tb *tb = pci_get_drvdata(pdev);
- thunderbolt_resume(tb);
- return 0;
+
+ return tb_domain_resume_noirq(tb);
}

static void nhi_shutdown(struct tb_nhi *nhi)
@@ -715,12 +715,17 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
iowrite32(3906250 / 10000, nhi->iobase + 0x38c00);

dev_info(&nhi->pdev->dev, "NHI initialized, starting thunderbolt\n");
- tb = thunderbolt_alloc_and_start(nhi);
- if (!tb) {
+ tb = tb_probe(nhi);
+ if (!tb)
+ return -ENODEV;
+
+ res = tb_domain_add(tb);
+ if (res) {
/*
* At this point the RX/TX rings might already have been
* activated. Do a proper shutdown.
*/
+ tb_domain_put(tb);
nhi_shutdown(nhi);
return -EIO;
}
@@ -733,7 +738,8 @@ static void nhi_remove(struct pci_dev *pdev)
{
struct tb *tb = pci_get_drvdata(pdev);
struct tb_nhi *nhi = tb->nhi;
- thunderbolt_shutdown_and_free(tb);
+
+ tb_domain_remove(tb);
nhi_shutdown(nhi);
}

@@ -797,14 +803,23 @@ static struct pci_driver nhi_driver = {

static int __init nhi_init(void)
{
+ int ret;
+
if (!dmi_match(DMI_BOARD_VENDOR, "Apple Inc."))
return -ENOSYS;
- return pci_register_driver(&nhi_driver);
+ ret = tb_domain_init();
+ if (ret)
+ return ret;
+ ret = pci_register_driver(&nhi_driver);
+ if (ret)
+ tb_domain_exit();
+ return ret;
}

static void __exit nhi_unload(void)
{
pci_unregister_driver(&nhi_driver);
+ tb_domain_exit();
}

module_init(nhi_init);
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 6b44076e1380..9f00a0f28d53 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -12,6 +12,18 @@
#include "tb_regs.h"
#include "tunnel_pci.h"

+/**
+ * struct tb_cm - Simple Thunderbolt connection manager
+ * @tunnel_list: List of active tunnels
+ * @hotplug_active: tb_handle_hotplug will stop progressing plug
+ * events and exit if this is not set (it needs to
+ * acquire the lock one more time). Used to drain wq
+ * after cfg has been paused.
+ */
+struct tb_cm {
+ struct list_head tunnel_list;
+ bool hotplug_active;
+};

/* enumeration & hot plug handling */

@@ -62,12 +74,14 @@ static void tb_scan_port(struct tb_port *port)
*/
static void tb_free_invalid_tunnels(struct tb *tb)
{
+ struct tb_cm *tcm = tb_priv(tb);
struct tb_pci_tunnel *tunnel;
struct tb_pci_tunnel *n;
- list_for_each_entry_safe(tunnel, n, &tb->tunnel_list, list)
- {
+
+ list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) {
if (tb_pci_is_invalid(tunnel)) {
tb_pci_deactivate(tunnel);
+ list_del(&tunnel->list);
tb_pci_free(tunnel);
}
}
@@ -149,6 +163,8 @@ static void tb_activate_pcie_devices(struct tb *tb)
struct tb_port *up_port;
struct tb_port *down_port;
struct tb_pci_tunnel *tunnel;
+ struct tb_cm *tcm = tb_priv(tb);
+
/* scan for pcie devices at depth 1*/
for (i = 1; i <= tb->root_switch->config.max_port_number; i++) {
if (tb_is_upstream_port(&tb->root_switch->ports[i]))
@@ -195,6 +211,7 @@ static void tb_activate_pcie_devices(struct tb *tb)
tb_pci_free(tunnel);
}

+ list_add(&tunnel->list, &tcm->tunnel_list);
}
}

@@ -217,10 +234,11 @@ static void tb_handle_hotplug(struct work_struct *work)
{
struct tb_hotplug_event *ev = container_of(work, typeof(*ev), work);
struct tb *tb = ev->tb;
+ struct tb_cm *tcm = tb_priv(tb);
struct tb_switch *sw;
struct tb_port *port;
mutex_lock(&tb->lock);
- if (!tb->hotplug_active)
+ if (!tcm->hotplug_active)
goto out; /* during init, suspend or shutdown */

sw = get_switch_at_route(tb->root_switch, ev->route);
@@ -296,22 +314,14 @@ static void tb_schedule_hotplug_handler(void *data, u64 route, u8 port,
queue_work(tb->wq, &ev->work);
}

-/**
- * thunderbolt_shutdown_and_free() - shutdown everything
- *
- * Free all switches and the config channel.
- *
- * Used in the error path of thunderbolt_alloc_and_start.
- */
-void thunderbolt_shutdown_and_free(struct tb *tb)
+static void tb_stop(struct tb *tb)
{
+ struct tb_cm *tcm = tb_priv(tb);
struct tb_pci_tunnel *tunnel;
struct tb_pci_tunnel *n;

- mutex_lock(&tb->lock);
-
/* tunnels are only present after everything has been initialized */
- list_for_each_entry_safe(tunnel, n, &tb->tunnel_list, list) {
+ list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) {
tb_pci_deactivate(tunnel);
tb_pci_free(tunnel);
}
@@ -320,98 +330,44 @@ void thunderbolt_shutdown_and_free(struct tb *tb)
tb_switch_free(tb->root_switch);
tb->root_switch = NULL;

- if (tb->ctl) {
- tb_ctl_stop(tb->ctl);
- tb_ctl_free(tb->ctl);
- }
- tb->ctl = NULL;
- tb->hotplug_active = false; /* signal tb_handle_hotplug to quit */
-
- /* allow tb_handle_hotplug to acquire the lock */
- mutex_unlock(&tb->lock);
- if (tb->wq) {
- flush_workqueue(tb->wq);
- destroy_workqueue(tb->wq);
- tb->wq = NULL;
- }
- mutex_destroy(&tb->lock);
- kfree(tb);
+ tcm->hotplug_active = false; /* signal tb_handle_hotplug to quit */
}

-/**
- * thunderbolt_alloc_and_start() - setup the thunderbolt bus
- *
- * Allocates a tb_cfg control channel, initializes the root switch, enables
- * plug events and activates pci devices.
- *
- * Return: Returns NULL on error.
- */
-struct tb *thunderbolt_alloc_and_start(struct tb_nhi *nhi)
+static int tb_start(struct tb *tb)
{
- struct tb *tb;
-
- BUILD_BUG_ON(sizeof(struct tb_regs_switch_header) != 5 * 4);
- BUILD_BUG_ON(sizeof(struct tb_regs_port_header) != 8 * 4);
- BUILD_BUG_ON(sizeof(struct tb_regs_hop) != 2 * 4);
-
- tb = kzalloc(sizeof(*tb), GFP_KERNEL);
- if (!tb)
- return NULL;
-
- tb->nhi = nhi;
- mutex_init(&tb->lock);
- mutex_lock(&tb->lock);
- INIT_LIST_HEAD(&tb->tunnel_list);
-
- tb->wq = alloc_ordered_workqueue("thunderbolt", 0);
- if (!tb->wq)
- goto err_locked;
-
- tb->ctl = tb_ctl_alloc(tb->nhi, tb_schedule_hotplug_handler, tb);
- if (!tb->ctl)
- goto err_locked;
- /*
- * tb_schedule_hotplug_handler may be called as soon as the config
- * channel is started. Thats why we have to hold the lock here.
- */
- tb_ctl_start(tb->ctl);
+ struct tb_cm *tcm = tb_priv(tb);

tb->root_switch = tb_switch_alloc(tb, 0);
if (!tb->root_switch)
- goto err_locked;
+ return -ENOMEM;

/* Full scan to discover devices added before the driver was loaded. */
tb_scan_switch(tb->root_switch);
tb_activate_pcie_devices(tb);

/* Allow tb_handle_hotplug to progress events */
- tb->hotplug_active = true;
- mutex_unlock(&tb->lock);
- return tb;
-
-err_locked:
- mutex_unlock(&tb->lock);
- thunderbolt_shutdown_and_free(tb);
- return NULL;
+ tcm->hotplug_active = true;
+ return 0;
}

-void thunderbolt_suspend(struct tb *tb)
+static int tb_suspend_noirq(struct tb *tb)
{
+ struct tb_cm *tcm = tb_priv(tb);
+
tb_info(tb, "suspending...\n");
- mutex_lock(&tb->lock);
tb_switch_suspend(tb->root_switch);
- tb_ctl_stop(tb->ctl);
- tb->hotplug_active = false; /* signal tb_handle_hotplug to quit */
- mutex_unlock(&tb->lock);
+ tcm->hotplug_active = false; /* signal tb_handle_hotplug to quit */
tb_info(tb, "suspend finished\n");
+
+ return 0;
}

-void thunderbolt_resume(struct tb *tb)
+static int tb_resume_noirq(struct tb *tb)
{
+ struct tb_cm *tcm = tb_priv(tb);
struct tb_pci_tunnel *tunnel, *n;
+
tb_info(tb, "resuming...\n");
- mutex_lock(&tb->lock);
- tb_ctl_start(tb->ctl);

/* remove any pci devices the firmware might have setup */
tb_switch_reset(tb, 0);
@@ -419,9 +375,9 @@ void thunderbolt_resume(struct tb *tb)
tb_switch_resume(tb->root_switch);
tb_free_invalid_tunnels(tb);
tb_free_unplugged_children(tb->root_switch);
- list_for_each_entry_safe(tunnel, n, &tb->tunnel_list, list)
+ list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list)
tb_pci_restart(tunnel);
- if (!list_empty(&tb->tunnel_list)) {
+ if (!list_empty(&tcm->tunnel_list)) {
/*
* the pcie links need some time to get going.
* 100ms works for me...
@@ -430,7 +386,33 @@ void thunderbolt_resume(struct tb *tb)
msleep(100);
}
/* Allow tb_handle_hotplug to progress events */
- tb->hotplug_active = true;
- mutex_unlock(&tb->lock);
+ tcm->hotplug_active = true;
tb_info(tb, "resume finished\n");
+
+ return 0;
+}
+
+static const struct tb_cm_ops tb_cm_ops = {
+ .start = tb_start,
+ .stop = tb_stop,
+ .suspend_noirq = tb_suspend_noirq,
+ .resume_noirq = tb_resume_noirq,
+ .hotplug = tb_schedule_hotplug_handler,
+};
+
+struct tb *tb_probe(struct tb_nhi *nhi)
+{
+ struct tb_cm *tcm;
+ struct tb *tb;
+
+ tb = tb_domain_alloc(nhi, sizeof(*tcm));
+ if (!tb)
+ return NULL;
+
+ tb->cm_ops = &tb_cm_ops;
+
+ tcm = tb_priv(tb);
+ INIT_LIST_HEAD(&tcm->tunnel_list);
+
+ return tb;
}
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 0b78bc4fbe61..5fab4c44f124 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -92,29 +92,52 @@ struct tb_path {
int path_length; /* number of hops */
};

+/**
+ * struct tb_cm_ops - Connection manager specific operations vector
+ * @start: Starts the domain
+ * @stop: Stops the domain
+ * @suspend_noirq: Connection manager specific suspend_noirq
+ * @resume_noirq: Connection manager specific resume_noirq
+ * @hotplug: Handle hotplug event
+ */
+struct tb_cm_ops {
+ int (*start)(struct tb *tb);
+ void (*stop)(struct tb *tb);
+ int (*suspend_noirq)(struct tb *tb);
+ int (*resume_noirq)(struct tb *tb);
+ hotplug_cb hotplug;
+};

/**
* struct tb - main thunderbolt bus structure
+ * @dev: Domain device
+ * @lock: Big lock. Must be held when accessing cfg or any struct
+ * tb_switch / struct tb_port.
+ * @nhi: Pointer to the NHI structure
+ * @ctl: Control channel for this domain
+ * @wq: Ordered workqueue for all domain specific work
+ * @root_switch: Root switch of this domain
+ * @cm_ops: Connection manager specific operations vector
+ * @index: Linux assigned domain number
+ * @privdata: Private connection manager specific data
*/
struct tb {
- struct mutex lock; /*
- * Big lock. Must be held when accessing cfg or
- * any struct tb_switch / struct tb_port.
- */
+ struct device dev;
+ struct mutex lock;
struct tb_nhi *nhi;
struct tb_ctl *ctl;
- struct workqueue_struct *wq; /* ordered workqueue for plug events */
+ struct workqueue_struct *wq;
struct tb_switch *root_switch;
- struct list_head tunnel_list; /* list of active PCIe tunnels */
- bool hotplug_active; /*
- * tb_handle_hotplug will stop progressing plug
- * events and exit if this is not set (it needs to
- * acquire the lock one more time). Used to drain
- * wq after cfg has been paused.
- */
-
+ const struct tb_cm_ops *cm_ops;
+ int index;
+ unsigned long privdata[0];
};

+static inline void *tb_priv(struct tb *tb)
+{
+ return (void *)tb->privdata;
+}
+
/* helper functions & macros */

/**
@@ -215,11 +238,24 @@ static inline int tb_port_write(struct tb_port *port, const void *buffer,
#define tb_port_info(port, fmt, arg...) \
__TB_PORT_PRINT(tb_info, port, fmt, ##arg)

+struct tb *tb_probe(struct tb_nhi *nhi);
+
+extern struct bus_type tb_bus_type;
+extern struct device_type tb_domain_type;
+
+int tb_domain_init(void);
+void tb_domain_exit(void);

-struct tb *thunderbolt_alloc_and_start(struct tb_nhi *nhi);
-void thunderbolt_shutdown_and_free(struct tb *tb);
-void thunderbolt_suspend(struct tb *tb);
-void thunderbolt_resume(struct tb *tb);
+struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize);
+int tb_domain_add(struct tb *tb);
+void tb_domain_remove(struct tb *tb);
+int tb_domain_suspend_noirq(struct tb *tb);
+int tb_domain_resume_noirq(struct tb *tb);
+
+static inline void tb_domain_put(struct tb *tb)
+{
+ put_device(&tb->dev);
+}

struct tb_switch *tb_switch_alloc(struct tb *tb, u64 route);
void tb_switch_free(struct tb_switch *sw);
diff --git a/drivers/thunderbolt/tunnel_pci.c b/drivers/thunderbolt/tunnel_pci.c
index f4ce9845e42a..ca4475907d7a 100644
--- a/drivers/thunderbolt/tunnel_pci.c
+++ b/drivers/thunderbolt/tunnel_pci.c
@@ -194,19 +194,13 @@ int tb_pci_restart(struct tb_pci_tunnel *tunnel)
*/
int tb_pci_activate(struct tb_pci_tunnel *tunnel)
{
- int res;
if (tunnel->path_to_up->activated || tunnel->path_to_down->activated) {
tb_tunnel_WARN(tunnel,
"trying to activate an already activated tunnel\n");
return -EINVAL;
}

- res = tb_pci_restart(tunnel);
- if (res)
- return res;
-
- list_add(&tunnel->list, &tunnel->tb->tunnel_list);
- return 0;
+ return tb_pci_restart(tunnel);
}


@@ -227,6 +221,5 @@ void tb_pci_deactivate(struct tb_pci_tunnel *tunnel)
tb_path_deactivate(tunnel->path_to_down);
if (tunnel->path_to_up->activated)
tb_path_deactivate(tunnel->path_to_up);
- list_del_init(&tunnel->list);
}

--
2.11.0

2017-06-06 12:31:09

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 13/27] thunderbolt: Read vendor and device name from DROM

The device DROM contains the names of the vendor and the device among
other things. Extract this information and expose it to userspace via
two new attributes.
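
As a rough sketch of the decoding added below (an illustrative helper
only, not code from the patch): the length field of a DROM entry covers
its 2-byte header as well, so the string payload of a generic entry is
header->len minus the header size:

	/* Hypothetical helper mirroring tb_drom_parse_entry_generic() */
	static char *tb_drom_dup_string(const struct tb_drom_entry_generic *entry)
	{
		size_t len = entry->header.len - sizeof(entry->header);

		/* index 1 carries the vendor name, index 2 the device name */
		return kstrndup(entry->data, len, GFP_KERNEL);
	}

The resulting strings are stored in sw->vendor_name and sw->device_name
and exposed through the new vendor_name and device_name attributes.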

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
Documentation/ABI/testing/sysfs-bus-thunderbolt | 14 +++++++++++
drivers/thunderbolt/eeprom.c | 32 +++++++++++++++++++++++++
drivers/thunderbolt/switch.c | 22 +++++++++++++++++
drivers/thunderbolt/tb.h | 4 ++++
4 files changed, 72 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt
index 9f1bd0086938..29a516f53d2c 100644
--- a/Documentation/ABI/testing/sysfs-bus-thunderbolt
+++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt
@@ -5,6 +5,13 @@ Contact: [email protected]
Description: This attribute contains id of this device extracted from
the device DROM.

+What: /sys/bus/thunderbolt/devices/.../device_name
+Date: Sep 2017
+KernelVersion: 4.13
+Contact: [email protected]
+Description: This attribute contains name of this device extracted from
+ the device DROM.
+
What: /sys/bus/thunderbolt/devices/.../vendor
Date: Sep 2017
KernelVersion: 4.13
@@ -12,6 +19,13 @@ Contact: [email protected]
Description: This attribute contains vendor id of this device extracted
from the device DROM.

+What: /sys/bus/thunderbolt/devices/.../vendor_name
+Date: Sep 2017
+KernelVersion: 4.13
+Contact: [email protected]
+Description: This attribute contains vendor name of this device extracted
+ from the device DROM.
+
What: /sys/bus/thunderbolt/devices/.../unique_id
Date: Sep 2017
KernelVersion: 4.13
diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
index 5c7d80a109b1..d40a5f07fc4c 100644
--- a/drivers/thunderbolt/eeprom.c
+++ b/drivers/thunderbolt/eeprom.c
@@ -204,6 +204,11 @@ struct tb_drom_entry_header {
enum tb_drom_entry_type type:1;
} __packed;

+struct tb_drom_entry_generic {
+ struct tb_drom_entry_header header;
+ u8 data[0];
+} __packed;
+
struct tb_drom_entry_port {
/* BYTES 0-1 */
struct tb_drom_entry_header header;
@@ -295,6 +300,32 @@ int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid)
return 0;
}

+static int tb_drom_parse_entry_generic(struct tb_switch *sw,
+ struct tb_drom_entry_header *header)
+{
+ const struct tb_drom_entry_generic *entry =
+ (const struct tb_drom_entry_generic *)header;
+
+ switch (header->index) {
+ case 1:
+ /* Length includes the 2-byte header so remove it before copying */
+ sw->vendor_name = kstrndup(entry->data,
+ header->len - sizeof(*header), GFP_KERNEL);
+ if (!sw->vendor_name)
+ return -ENOMEM;
+ break;
+
+ case 2:
+ sw->device_name = kstrndup(entry->data,
+ header->len - sizeof(*header), GFP_KERNEL);
+ if (!sw->device_name)
+ return -ENOMEM;
+ break;
+ }
+
+ return 0;
+}
+
static int tb_drom_parse_entry_port(struct tb_switch *sw,
struct tb_drom_entry_header *header)
{
@@ -350,6 +381,7 @@ static int tb_drom_parse_entries(struct tb_switch *sw)

switch (entry->type) {
case TB_DROM_ENTRY_GENERIC:
+ res = tb_drom_parse_entry_generic(sw, entry);
break;
case TB_DROM_ENTRY_PORT:
res = tb_drom_parse_entry_port(sw, entry);
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 2390f08b94da..11f16a141686 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -319,6 +319,15 @@ static ssize_t device_show(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RO(device);

+static ssize_t
+device_name_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct tb_switch *sw = tb_to_switch(dev);
+
+ return sprintf(buf, "%s\n", sw->device_name ? sw->device_name : "");
+}
+static DEVICE_ATTR_RO(device_name);
+
static ssize_t vendor_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
@@ -328,6 +337,15 @@ static ssize_t vendor_show(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RO(vendor);

+static ssize_t
+vendor_name_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct tb_switch *sw = tb_to_switch(dev);
+
+ return sprintf(buf, "%s\n", sw->vendor_name ? sw->vendor_name : "");
+}
+static DEVICE_ATTR_RO(vendor_name);
+
static ssize_t unique_id_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
@@ -339,7 +357,9 @@ static DEVICE_ATTR_RO(unique_id);

static struct attribute *switch_attrs[] = {
&dev_attr_device.attr,
+ &dev_attr_device_name.attr,
&dev_attr_vendor.attr,
+ &dev_attr_vendor_name.attr,
&dev_attr_unique_id.attr,
NULL,
};
@@ -358,6 +378,8 @@ static void tb_switch_release(struct device *dev)
struct tb_switch *sw = tb_to_switch(dev);

kfree(sw->uuid);
+ kfree(sw->device_name);
+ kfree(sw->vendor_name);
kfree(sw->ports);
kfree(sw->drom);
kfree(sw);
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index f7dfe733d71a..6d4910ef2eb9 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -23,6 +23,8 @@
* @uuid: UUID of the switch (or %NULL if not supported)
* @vendor: Vendor ID of the switch
* @device: Device ID of the switch
+ * @vendor_name: Name of the vendor (or %NULL if not known)
+ * @device_name: Name of the device (or %NULL if not known)
* @cap_plug_events: Offset to the plug events capability (%0 if not found)
* @is_unplugged: The switch is going away
* @drom: DROM of the switch (%NULL if not found)
@@ -36,6 +38,8 @@ struct tb_switch {
uuid_be *uuid;
u16 vendor;
u16 device;
+ const char *vendor_name;
+ const char *device_name;
int cap_plug_events;
bool is_unplugged;
u8 *drom;
--
2.11.0

2017-06-06 12:31:24

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 11/27] thunderbolt: Do not fail if DROM data CRC32 is invalid

There are devices out there where the CRC32 of the DROM is not correct.
One reason for this is that the ICM firmware does not validate it, and it
seems that neither does the Apple driver. To be able to support such
devices we continue parsing the DROM contents regardless of whether the
CRC32 check failed. We still keep the warning there.

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/eeprom.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
index 7e485e3ef27e..e2c1f8a45522 100644
--- a/drivers/thunderbolt/eeprom.c
+++ b/drivers/thunderbolt/eeprom.c
@@ -485,9 +485,8 @@ int tb_drom_read(struct tb_switch *sw)
crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
if (crc != header->data_crc32) {
tb_sw_warn(sw,
- "drom data crc32 mismatch (expected: %#x, got: %#x), aborting\n",
+ "drom data crc32 mismatch (expected: %#x, got: %#x), continuing\n",
header->data_crc32, crc);
- goto err;
}

if (header->device_rom_revision > 2)
--
2.11.0

2017-06-06 12:31:38

by Mika Westerberg

[permalink] [raw]
Subject: [PATCH v4 01/27] thunderbolt: Use const buffer pointer in write operations

These functions should not (and do not) modify the argument in any way
so make it const.

Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Reviewed-by: Michael Jamet <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Andreas Noever <[email protected]>
---
drivers/thunderbolt/ctl.c | 8 ++++----
drivers/thunderbolt/ctl.h | 4 ++--
drivers/thunderbolt/tb.h | 2 +-
3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index 1146ff4210a9..1031d97407a8 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -273,7 +273,7 @@ static void tb_cfg_print_error(struct tb_ctl *ctl,
}
}

-static void cpu_to_be32_array(__be32 *dst, u32 *src, size_t len)
+static void cpu_to_be32_array(__be32 *dst, const u32 *src, size_t len)
{
int i;
for (i = 0; i < len; i++)
@@ -333,7 +333,7 @@ static void tb_ctl_tx_callback(struct tb_ring *ring, struct ring_frame *frame,
*
* Return: Returns 0 on success or an error code on failure.
*/
-static int tb_ctl_tx(struct tb_ctl *ctl, void *data, size_t len,
+static int tb_ctl_tx(struct tb_ctl *ctl, const void *data, size_t len,
enum tb_cfg_pkg_type type)
{
int res;
@@ -650,7 +650,7 @@ struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,
*
* Offset and length are in dwords.
*/
-struct tb_cfg_result tb_cfg_write_raw(struct tb_ctl *ctl, void *buffer,
+struct tb_cfg_result tb_cfg_write_raw(struct tb_ctl *ctl, const void *buffer,
u64 route, u32 port, enum tb_cfg_space space,
u32 offset, u32 length, int timeout_msec)
{
@@ -695,7 +695,7 @@ int tb_cfg_read(struct tb_ctl *ctl, void *buffer, u64 route, u32 port,
return res.err;
}

-int tb_cfg_write(struct tb_ctl *ctl, void *buffer, u64 route, u32 port,
+int tb_cfg_write(struct tb_ctl *ctl, const void *buffer, u64 route, u32 port,
enum tb_cfg_space space, u32 offset, u32 length)
{
struct tb_cfg_result res = tb_cfg_write_raw(ctl, buffer, route, port,
diff --git a/drivers/thunderbolt/ctl.h b/drivers/thunderbolt/ctl.h
index ba87d6e731dd..83ae54947082 100644
--- a/drivers/thunderbolt/ctl.h
+++ b/drivers/thunderbolt/ctl.h
@@ -61,13 +61,13 @@ struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,
u64 route, u32 port,
enum tb_cfg_space space, u32 offset,
u32 length, int timeout_msec);
-struct tb_cfg_result tb_cfg_write_raw(struct tb_ctl *ctl, void *buffer,
+struct tb_cfg_result tb_cfg_write_raw(struct tb_ctl *ctl, const void *buffer,
u64 route, u32 port,
enum tb_cfg_space space, u32 offset,
u32 length, int timeout_msec);
int tb_cfg_read(struct tb_ctl *ctl, void *buffer, u64 route, u32 port,
enum tb_cfg_space space, u32 offset, u32 length);
-int tb_cfg_write(struct tb_ctl *ctl, void *buffer, u64 route, u32 port,
+int tb_cfg_write(struct tb_ctl *ctl, const void *buffer, u64 route, u32 port,
enum tb_cfg_space space, u32 offset, u32 length);
int tb_cfg_get_upstream_port(struct tb_ctl *ctl, u64 route);

diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 61d57ba64035..ba2b85750335 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -173,7 +173,7 @@ static inline int tb_port_read(struct tb_port *port, void *buffer,
length);
}

-static inline int tb_port_write(struct tb_port *port, void *buffer,
+static inline int tb_port_write(struct tb_port *port, const void *buffer,
enum tb_cfg_space space, u32 offset, u32 length)
{
return tb_cfg_write(port->sw->tb->ctl,
--
2.11.0

2017-06-06 13:08:14

by Andy Shevchenko

[permalink] [raw]
Subject: Re: [PATCH v4 02/27] thunderbolt: No need to read UID of the root switch on resume

On Tue, 2017-06-06 at 15:24 +0300, Mika Westerberg wrote:
> The root switch is part of the host controller and cannot be
> physically
> removed, so there is no point of reading UID again on resume in order
> to
> check if the root switch is still the same.
>

FWIW,
Reviewed-by: Andy Shevchenko <[email protected]>


> Suggested-by: Andreas Noever <[email protected]>
> Signed-off-by: Mika Westerberg <[email protected]>
> Signed-off-by: Andreas Noever <[email protected]>
> ---
>  drivers/thunderbolt/switch.c | 29 ++++++++++++++++++-----------
>  1 file changed, 18 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/thunderbolt/switch.c
> b/drivers/thunderbolt/switch.c
> index c6f30b1695a9..81f5164a6364 100644
> --- a/drivers/thunderbolt/switch.c
> +++ b/drivers/thunderbolt/switch.c
> @@ -452,19 +452,26 @@ void tb_sw_set_unplugged(struct tb_switch *sw)
>  int tb_switch_resume(struct tb_switch *sw)
>  {
>   int i, err;
> - u64 uid;
>   tb_sw_info(sw, "resuming switch\n");
>  
> - err = tb_drom_read_uid_only(sw, &uid);
> - if (err) {
> - tb_sw_warn(sw, "uid read failed\n");
> - return err;
> - }
> - if (sw != sw->tb->root_switch && sw->uid != uid) {
> - tb_sw_info(sw,
> - "changed while suspended (uid %#llx ->
> %#llx)\n",
> - sw->uid, uid);
> - return -ENODEV;
> + /*
> +  * Check for UID of the connected switches except for root
> +  * switch which we assume cannot be removed.
> +  */
> + if (tb_route(sw)) {
> + u64 uid;
> +
> + err = tb_drom_read_uid_only(sw, &uid);
> + if (err) {
> + tb_sw_warn(sw, "uid read failed\n");
> + return err;
> + }
> + if (sw->uid != uid) {
> + tb_sw_info(sw,
> + "changed while suspended (uid %#llx
> -> %#llx)\n",
> + sw->uid, uid);
> + return -ENODEV;
> + }
>   }
>  
>   /* upload configuration */

--
Andy Shevchenko <[email protected]>
Intel Finland Oy

2017-06-09 09:45:29

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v4 00/27] Thunderbolt security levels and NVM firmware upgrade

On Tue, Jun 06, 2017 at 03:24:52PM +0300, Mika Westerberg wrote:
> Hi,
>
> This is fourth version of the patch series adding support for Thunderbolt
> security levels and NVM firmware upgrade. PCs running Intel Falcon Ridge or
> newer need these in order to connect devices if the security level is set
> to "user(SL1) or secure(SL2)" from BIOS.

All now queued up, nice work!

greg k-h

2017-06-09 12:33:07

by Mika Westerberg

[permalink] [raw]
Subject: Re: [PATCH v4 00/27] Thunderbolt security levels and NVM firmware upgrade

On Fri, Jun 09, 2017 at 11:44:07AM +0200, Greg Kroah-Hartman wrote:
> On Tue, Jun 06, 2017 at 03:24:52PM +0300, Mika Westerberg wrote:
> > Hi,
> >
> > This is fourth version of the patch series adding support for Thunderbolt
> > security levels and NVM firmware upgrade. PCs running Intel Falcon Ridge or
> > newer need these in order to connect devices if the security level is set
> > to "user(SL1) or secure(SL2)" from BIOS.
>
> All now queued up, nice work!

Thanks!

2017-07-20 16:11:57

by Christian Kellner

[permalink] [raw]
Subject: Re: [PATCH v4 00/27] Thunderbolt security levels and NVM firmware upgrade

Hi!

> This is fourth version of the patch series adding support for Thunderbolt
> security levels and NVM firmware upgrade.

While prototyping the user-space bits for GNOME, I stumbled upon an
oops on the Lenovo T470s (see below) when attaching a Dell TB16
thunderbolt 3 dock. As a result /sys/bus/thunderbolt/devices has only
domain0 in it but not the dock itself. Everything works fine on a Dell
XPS 13 9630. The oops happens with linux-next and when I backport the
patches to 4.11.11.
Happy to provide any more information and test future patches.

Cheers,
Christian

--- 8< ---
lspci output:

pcilib: Cannot open /sys/bus/pci/devices/0000:03:00.0/config
lspci: Unable to read the standard configuration space header of device 0000:03:00.0
00:00.0 Host bridge: Intel Corporation Device 5904 (rev 02)
00:02.0 VGA compatible controller: Intel Corporation Device 5916 (rev 02)
00:14.0 USB controller: Intel Corporation Sunrise Point-LP USB 3.0 xHCI Controller (rev 21)
00:14.2 Signal processing controller: Intel Corporation Sunrise Point-LP Thermal subsystem (rev 21)
00:16.0 Communication controller: Intel Corporation Sunrise Point-LP CSME HECI #1 (rev 21)
00:16.3 Serial controller: Intel Corporation Device 9d3d (rev 21)
00:1c.0 PCI bridge: Intel Corporation Device 9d10 (rev f1)
00:1c.2 PCI bridge: Intel Corporation Device 9d12 (rev f1)
00:1d.0 PCI bridge: Intel Corporation Sunrise Point-LP PCI Express Root Port #9 (rev f1)
00:1f.0 ISA bridge: Intel Corporation Device 9d4e (rev 21)
00:1f.2 Memory controller: Intel Corporation Sunrise Point-LP PMC (rev 21)
00:1f.3 Audio device: Intel Corporation Device 9d71 (rev 21)
00:1f.4 SMBus: Intel Corporation Sunrise Point-LP SMBus (rev 21)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (4) I219-LM (rev 21)
01:00.0 PCI bridge: Intel Corporation JHL6240 Thunderbolt 3 Bridge (Low Power) [Alpine Ridge LP 2016] (rev ff)
02:00.0 PCI bridge: Intel Corporation JHL6240 Thunderbolt 3 Bridge (Low Power) [Alpine Ridge LP 2016] (rev ff)
02:01.0 PCI bridge: Intel Corporation JHL6240 Thunderbolt 3 Bridge (Low Power) [Alpine Ridge LP 2016] (rev ff)
02:02.0 PCI bridge: Intel Corporation JHL6240 Thunderbolt 3 Bridge (Low Power) [Alpine Ridge LP 2016] (rev ff)
3a:00.0 Network controller: Intel Corporation Wireless 8265 / 8275 (rev 78)
3c:00.0 Non-Volatile memory controller: Toshiba America Info Systems Device 0115 (rev 01)

--- >8 ---
Oops:

[ 69.886978] thunderbolt 0000:03:00.0: current switch config:
[ 69.886983] thunderbolt 0000:03:00.0: Switch: 8086:15c0 (Revision: 1, TB Version: 2)
[ 69.886986] thunderbolt 0000:03:00.0: Max Port Number: 5
[ 69.886987] thunderbolt 0000:03:00.0: Config:
[ 69.886991] thunderbolt 0000:03:00.0: Upstream Port Number: 3 Depth: 0 Route String: 0x0 Enabled: 1, PlugEventsDelay: 254ms
[ 69.886994] thunderbolt 0000:03:00.0: unknown1: 0x0 unknown4: 0x0
[ 69.920748] BUG: unable to handle kernel NULL pointer dereference at 00000000000002ec
[ 69.920834] IP: tb_drom_read+0x383/0x890 [thunderbolt]
[ 69.920873] PGD 0
[ 69.920874] P4D 0

[ 69.920925] Oops: 0000 [#1] SMP
[ 69.920952] Modules linked in: thunderbolt(+) fuse rfcomm ccm xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 tun nf_conntrack_netbios_ns nf_conntrack_broadcast xt_CT ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack libcrc32c iptable_mangle iptable_raw iptable_security ebtable_filter ebtables ip6table_filter ip6_tables cmac bnep sunrpc arc4 wmi_bmof iTCO_wdt iwlmvm iTCO_vendor_support snd_soc_skl mei_wdt snd_soc_skl_ipc snd_soc_sst_ipc snd_soc_sst_dsp snd_hda_ext_core tpm_crb snd_soc_sst_match intel_rapl x86_pkg_temp_thermal intel_powerclamp mac80211 coretemp
[ 69.921479] kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel snd_hda_codec_hdmi intel_cstate intel_uncore intel_rapl_perf snd_soc_core iwlwifi snd_hda_codec_realtek snd_hda_codec_generic snd_compress snd_pcm_dmaengine ac97_bus snd_hda_intel snd_hda_codec snd_hda_core cfg80211 snd_hwdep snd_seq pcspkr uvcvideo hci_uart joydev snd_seq_device snd_pcm videobuf2_vmalloc videobuf2_memops videobuf2_v4l2 videobuf2_core btusb videodev btrtl btbcm snd_timer i2c_i801 btqca btintel media bluetooth qcserial mei_me usb_wwan mei shpchp thinkpad_acpi intel_pch_thermal snd wmi tpm_tis soundcore tpm_tis_core pinctrl_sunrisepoint intel_lpss_acpi tpm ecdh_generic intel_lpss pinctrl_intel rfkill acpi_pad cdc_mbim cdc_wdm cdc_ncm usbnet mii uas usb_storage i915 i2c_algo_bit drm_kms_helper drm e1000e crc32c_intel
[ 69.922013] ptp serio_raw nvme nvme_core pps_core video i2c_hid
[ 69.922062] CPU: 3 PID: 12248 Comm: systemd-udevd Not tainted 4.13.0-rc1-next-20170719 #1
[ 69.922116] Hardware name: LENOVO 20HF000YGE/20HF000YGE, BIOS N1WET32W (1.11 ) 05/23/2017
[ 69.922170] task: ffff8a293e4bcd80 task.stack: ffffa698027a8000
[ 69.922218] RIP: 0010:tb_drom_read+0x383/0x890 [thunderbolt]
[ 69.922256] RSP: 0018:ffffa698027ab990 EFLAGS: 00010246
[ 69.922292] RAX: 0000000000000000 RBX: ffff8a2940af7800 RCX: 0000000000000000
[ 69.922339] RDX: ffff8a2940ebb400 RSI: 0000000000000000 RDI: ffffa698027ab9a0
[ 69.922385] RBP: ffffa698027ab9d0 R08: 0000000000000001 R09: 0000000000000002
[ 69.922432] R10: ffff8a2940ebb5b0 R11: 0000000000000000 R12: ffff8a293bfa968c
[ 69.922478] R13: 000000000000002c R14: 0000000000000056 R15: 0000000000000056
[ 69.922526] FS: 00007f0a945a38c0(0000) GS:ffff8a2961580000(0000) knlGS:0000000000000000
[ 69.922579] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 69.922617] CR2: 00000000000002ec CR3: 000000043e785000 CR4: 00000000003606e0
[ 69.922665] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 69.922712] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 69.922760] Call Trace:
[ 69.922789] tb_switch_add+0x9d/0x730 [thunderbolt]
[ 69.922846] ? tb_switch_alloc+0x3cd/0x4d0 [thunderbolt]
[ 69.922893] icm_start+0x5a/0xa0 [thunderbolt]
[ 69.922934] tb_domain_add+0xc3/0xf0 [thunderbolt]
[ 69.922976] nhi_probe+0x19e/0x310 [thunderbolt]
[ 69.923017] local_pci_probe+0x42/0xa0
[ 69.923048] pci_device_probe+0x18d/0x1a0
[ 69.923082] driver_probe_device+0x2ff/0x450
[ 69.923116] __driver_attach+0xa4/0xe0
[ 69.923146] ? driver_probe_device+0x450/0x450
[ 69.923184] bus_for_each_dev+0x6e/0xb0
[ 69.923215] driver_attach+0x1e/0x20
[ 69.923246] bus_add_driver+0x1d0/0x270
[ 69.923277] ? 0xffffffffc0bbb000
[ 69.923304] driver_register+0x60/0xe0
[ 69.923334] ? 0xffffffffc0bbb000
[ 69.923372] __pci_register_driver+0x4c/0x50
[ 69.923411] nhi_init+0x28/0x1000 [thunderbolt]
[ 69.923449] do_one_initcall+0x50/0x190
[ 69.923481] ? __vunmap+0x81/0xb0
[ 69.923511] ? _cond_resched+0x1a/0x50
[ 69.923544] ? kmem_cache_alloc_trace+0x15f/0x1c0
[ 69.923581] ? do_init_module+0x27/0x1e9
[ 69.923613] do_init_module+0x5f/0x1e9
[ 69.923643] load_module+0x24e7/0x2a60
[ 69.923677] ? vfs_read+0x115/0x130
[ 69.923707] SYSC_finit_module+0xfc/0x120
[ 69.923738] ? SYSC_finit_module+0xfc/0x120
[ 69.923773] SyS_finit_module+0xe/0x10
[ 69.923805] do_syscall_64+0x67/0x170
[ 69.923850] entry_SYSCALL64_slow_path+0x25/0x25
[ 69.923882] RIP: 0033:0x7f0a93214529
[ 69.923908] RSP: 002b:00007fffddc70308 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[ 69.923959] RAX: ffffffffffffffda RBX: 000000c52e0e6450 RCX: 00007f0a93214529
[ 69.924005] RDX: 0000000000000000 RSI: 00007f0a93d509c5 RDI: 0000000000000007
[ 69.924052] RBP: 00007f0a93d509c5 R08: 0000000000000000 R09: 00007fffddc70420
[ 69.926454] R10: 0000000000000007 R11: 0000000000000246 R12: 0000000000000000
[ 69.928766] R13: 000000c52e0e5590 R14: 0000000000020000 R15: 000000c52df83f4a
[ 69.931055] Code: 8d 0c c9 83 e0 01 84 c0 4c 8d 14 ca 41 88 42 35 75 85 49 8b 42 20 41 0f b6 4a 34 41 b9 02 00 00 00 41 b8 01 00 00 00 4c 89 55 c0 <8b> 90 ec 02 00 00 8b b0 e8 02 00 00 48 8b 80 08 03 00 00 81 e2
[ 69.933380] RIP: tb_drom_read+0x383/0x890 [thunderbolt] RSP: ffffa698027ab990
[ 69.935708] CR2: 00000000000002ec
[ 69.950221] ---[ end trace a3f247a8fe7705ab ]---

--- 8< ---

--
Dr. Christian J. Kellner
Red Hat Desktop Hardware Enablement
[email protected]

2017-07-20 16:16:05

by Mario Limonciello

[permalink] [raw]
Subject: RE: [PATCH v4 00/27] Thunderbolt security levels and NVM firmware upgrade

> -----Original Message-----
> From: Christian Kellner [mailto:[email protected]]
> Sent: Thursday, July 20, 2017 11:12 AM
> To: Mika Westerberg <[email protected]>
> Cc: Andreas Noever <[email protected]>; Greg Kroah-Hartman
> <[email protected]>; Michael Jamet <[email protected]>;
> Yehezkel Bernat <[email protected]>; Lukas Wunner
> <[email protected]>; Amir Levy <[email protected]>; Andy Lutomirski
> <[email protected]>; Limonciello, Mario <[email protected]>;
> Dominguez, Jared <[email protected]>; Andy Shevchenko
> <[email protected]>; [email protected]; Peter FP1
> Zhang <[email protected]>
> Subject: Re: [PATCH v4 00/27] Thunderbolt security levels and NVM firmware
> upgrade
>
> Hi!
>
> > This is fourth version of the patch series adding support for Thunderbolt
> > security levels and NVM firmware upgrade.
>
> While prototyping the user-space bits for GNOME, I stumbled upon an
> oops on the Lenovo T470s (see below) when attaching a Dell TB16
> thunderbolt 3 dock. As a result /sys/bus/thunderbolt/devices has only
> domain0 in it but not the dock itself. Everything works fine on a Dell
> XPS 13 9630. The oops happens with linux-next and when I backport the
> patches to 4.11.11.
> Happy to provide any more information and test future patches.
>
Christian,

Did you by chance boot with the dock "plugged in"? Or was this the result of
plugging in after boot?

There is a known issue that Mika is tracking regarding some problems with
Thunderbolt devices plugged in at boot. It causes a few things to explode
down the line too (which I suspect is what happened).

Thanks,

> Cheers,
> Christian
>
> --- 8< ---
> lspci output:
>
> pcilib: Cannot open /sys/bus/pci/devices/0000:03:00.0/config
> lspci: Unable to read the standard configuration space header of device
> 0000:03:00.0
> 00:00.0 Host bridge: Intel Corporation Device 5904 (rev 02)
> 00:02.0 VGA compatible controller: Intel Corporation Device 5916 (rev 02)
> 00:14.0 USB controller: Intel Corporation Sunrise Point-LP USB 3.0 xHCI Controller
> (rev 21)
> 00:14.2 Signal processing controller: Intel Corporation Sunrise Point-LP Thermal
> subsystem (rev 21)
> 00:16.0 Communication controller: Intel Corporation Sunrise Point-LP CSME HECI
> #1 (rev 21)
> 00:16.3 Serial controller: Intel Corporation Device 9d3d (rev 21)
> 00:1c.0 PCI bridge: Intel Corporation Device 9d10 (rev f1)
> 00:1c.2 PCI bridge: Intel Corporation Device 9d12 (rev f1)
> 00:1d.0 PCI bridge: Intel Corporation Sunrise Point-LP PCI Express Root Port #9 (rev
> f1)
> 00:1f.0 ISA bridge: Intel Corporation Device 9d4e (rev 21)
> 00:1f.2 Memory controller: Intel Corporation Sunrise Point-LP PMC (rev 21)
> 00:1f.3 Audio device: Intel Corporation Device 9d71 (rev 21)
> 00:1f.4 SMBus: Intel Corporation Sunrise Point-LP SMBus (rev 21)
> 00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (4) I219-LM (rev
> 21)
> 01:00.0 PCI bridge: Intel Corporation JHL6240 Thunderbolt 3 Bridge (Low Power)
> [Alpine Ridge LP 2016] (rev ff)
> 02:00.0 PCI bridge: Intel Corporation JHL6240 Thunderbolt 3 Bridge (Low Power)
> [Alpine Ridge LP 2016] (rev ff)
> 02:01.0 PCI bridge: Intel Corporation JHL6240 Thunderbolt 3 Bridge (Low Power)
> [Alpine Ridge LP 2016] (rev ff)
> 02:02.0 PCI bridge: Intel Corporation JHL6240 Thunderbolt 3 Bridge (Low Power)
> [Alpine Ridge LP 2016] (rev ff)
> 3a:00.0 Network controller: Intel Corporation Wireless 8265 / 8275 (rev 78)
> 3c:00.0 Non-Volatile memory controller: Toshiba America Info Systems Device
> 0115 (rev 01)
>
> --- >8 ---
> Ops:
>
> [ 69.886978] thunderbolt 0000:03:00.0: current switch config:
> [ 69.886983] thunderbolt 0000:03:00.0: Switch: 8086:15c0 (Revision: 1, TB
> Version: 2)
> [ 69.886986] thunderbolt 0000:03:00.0: Max Port Number: 5
> [ 69.886987] thunderbolt 0000:03:00.0: Config:
> [ 69.886991] thunderbolt 0000:03:00.0: Upstream Port Number: 3 Depth: 0
> Route String: 0x0 Enabled: 1, PlugEventsDelay: 254ms
> [ 69.886994] thunderbolt 0000:03:00.0: unknown1: 0x0 unknown4: 0x0
> [ 69.920748] BUG: unable to handle kernel NULL pointer dereference at
> 00000000000002ec
> [ 69.920834] IP: tb_drom_read+0x383/0x890 [thunderbolt]
> [ 69.920873] PGD 0
> [ 69.920874] P4D 0
>
> [ 69.920925] Oops: 0000 [#1] SMP
> [ 69.920952] Modules linked in: thunderbolt(+) fuse rfcomm ccm xt_CHECKSUM
> ipt_MASQUERADE nf_nat_masquerade_ipv4 tun nf_conntrack_netbios_ns
> nf_conntrack_broadcast xt_CT ip6t_rpfilter ip6t_REJECT nf_reject_ipv6
> xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc
> ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle
> ip6table_raw ip6table_security iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4
> nf_nat_ipv4 nf_nat nf_conntrack libcrc32c iptable_mangle iptable_raw
> iptable_security ebtable_filter ebtables ip6table_filter ip6_tables cmac bnep
> sunrpc arc4 wmi_bmof iTCO_wdt iwlmvm iTCO_vendor_support snd_soc_skl
> mei_wdt snd_soc_skl_ipc snd_soc_sst_ipc snd_soc_sst_dsp snd_hda_ext_core
> tpm_crb snd_soc_sst_match intel_rapl x86_pkg_temp_thermal intel_powerclamp
> mac80211 coretemp
> [ 69.921479] kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel
> snd_hda_codec_hdmi intel_cstate intel_uncore intel_rapl_perf snd_soc_core
> iwlwifi snd_hda_codec_realtek snd_hda_codec_generic snd_compress
> snd_pcm_dmaengine ac97_bus snd_hda_intel snd_hda_codec snd_hda_core
> cfg80211 snd_hwdep snd_seq pcspkr uvcvideo hci_uart joydev snd_seq_device
> snd_pcm videobuf2_vmalloc videobuf2_memops videobuf2_v4l2 videobuf2_core
> btusb videodev btrtl btbcm snd_timer i2c_i801 btqca btintel media bluetooth
> qcserial mei_me usb_wwan mei shpchp thinkpad_acpi intel_pch_thermal snd wmi
> tpm_tis soundcore tpm_tis_core pinctrl_sunrisepoint intel_lpss_acpi tpm
> ecdh_generic intel_lpss pinctrl_intel rfkill acpi_pad cdc_mbim cdc_wdm cdc_ncm
> usbnet mii uas usb_storage i915 i2c_algo_bit drm_kms_helper drm e1000e
> crc32c_intel
> [ 69.922013] ptp serio_raw nvme nvme_core pps_core video i2c_hid
> [ 69.922062] CPU: 3 PID: 12248 Comm: systemd-udevd Not tainted 4.13.0-rc1-
> next-20170719 #1
> [ 69.922116] Hardware name: LENOVO 20HF000YGE/20HF000YGE, BIOS
> N1WET32W (1.11 ) 05/23/2017
> [ 69.922170] task: ffff8a293e4bcd80 task.stack: ffffa698027a8000
> [ 69.922218] RIP: 0010:tb_drom_read+0x383/0x890 [thunderbolt]
> [ 69.922256] RSP: 0018:ffffa698027ab990 EFLAGS: 00010246
> [ 69.922292] RAX: 0000000000000000 RBX: ffff8a2940af7800 RCX:
> 0000000000000000
> [ 69.922339] RDX: ffff8a2940ebb400 RSI: 0000000000000000 RDI:
> ffffa698027ab9a0
> [ 69.922385] RBP: ffffa698027ab9d0 R08: 0000000000000001 R09:
> 0000000000000002
> [ 69.922432] R10: ffff8a2940ebb5b0 R11: 0000000000000000 R12:
> ffff8a293bfa968c
> [ 69.922478] R13: 000000000000002c R14: 0000000000000056 R15:
> 0000000000000056
> [ 69.922526] FS: 00007f0a945a38c0(0000) GS:ffff8a2961580000(0000)
> knlGS:0000000000000000
> [ 69.922579] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 69.922617] CR2: 00000000000002ec CR3: 000000043e785000 CR4:
> 00000000003606e0
> [ 69.922665] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [ 69.922712] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
> 0000000000000400
> [ 69.922760] Call Trace:
> [ 69.922789] tb_switch_add+0x9d/0x730 [thunderbolt]
> [ 69.922846] ? tb_switch_alloc+0x3cd/0x4d0 [thunderbolt]
> [ 69.922893] icm_start+0x5a/0xa0 [thunderbolt]
> [ 69.922934] tb_domain_add+0xc3/0xf0 [thunderbolt]
> [ 69.922976] nhi_probe+0x19e/0x310 [thunderbolt]
> [ 69.923017] local_pci_probe+0x42/0xa0
> [ 69.923048] pci_device_probe+0x18d/0x1a0
> [ 69.923082] driver_probe_device+0x2ff/0x450
> [ 69.923116] __driver_attach+0xa4/0xe0
> [ 69.923146] ? driver_probe_device+0x450/0x450
> [ 69.923184] bus_for_each_dev+0x6e/0xb0
> [ 69.923215] driver_attach+0x1e/0x20
> [ 69.923246] bus_add_driver+0x1d0/0x270
> [ 69.923277] ? 0xffffffffc0bbb000
> [ 69.923304] driver_register+0x60/0xe0
> [ 69.923334] ? 0xffffffffc0bbb000
> [ 69.923372] __pci_register_driver+0x4c/0x50
> [ 69.923411] nhi_init+0x28/0x1000 [thunderbolt]
> [ 69.923449] do_one_initcall+0x50/0x190
> [ 69.923481] ? __vunmap+0x81/0xb0
> [ 69.923511] ? _cond_resched+0x1a/0x50
> [ 69.923544] ? kmem_cache_alloc_trace+0x15f/0x1c0
> [ 69.923581] ? do_init_module+0x27/0x1e9
> [ 69.923613] do_init_module+0x5f/0x1e9
> [ 69.923643] load_module+0x24e7/0x2a60
> [ 69.923677] ? vfs_read+0x115/0x130
> [ 69.923707] SYSC_finit_module+0xfc/0x120
> [ 69.923738] ? SYSC_finit_module+0xfc/0x120
> [ 69.923773] SyS_finit_module+0xe/0x10
> [ 69.923805] do_syscall_64+0x67/0x170
> [ 69.923850] entry_SYSCALL64_slow_path+0x25/0x25
> [ 69.923882] RIP: 0033:0x7f0a93214529
> [ 69.923908] RSP: 002b:00007fffddc70308 EFLAGS: 00000246 ORIG_RAX:
> 0000000000000139
> [ 69.923959] RAX: ffffffffffffffda RBX: 000000c52e0e6450 RCX:
> 00007f0a93214529
> [ 69.924005] RDX: 0000000000000000 RSI: 00007f0a93d509c5 RDI:
> 0000000000000007
> [ 69.924052] RBP: 00007f0a93d509c5 R08: 0000000000000000 R09:
> 00007fffddc70420
> [ 69.926454] R10: 0000000000000007 R11: 0000000000000246 R12:
> 0000000000000000
> [ 69.928766] R13: 000000c52e0e5590 R14: 0000000000020000 R15:
> 000000c52df83f4a
> [ 69.931055] Code: 8d 0c c9 83 e0 01 84 c0 4c 8d 14 ca 41 88 42 35 75 85 49 8b 42
> 20 41 0f b6 4a 34 41 b9 02 00 00 00 41 b8 01 00 00 00 4c 89 55 c0 <8b> 90 ec 02 00
> 00 8b b0 e8 02 00 00 48 8b 80 08 03 00 00 81 e2
> [ 69.933380] RIP: tb_drom_read+0x383/0x890 [thunderbolt] RSP:
> ffffa698027ab990
> [ 69.935708] CR2: 00000000000002ec
> [ 69.950221] ---[ end trace a3f247a8fe7705ab ]---
>
> --- 8< ---
>
> --
> Dr. Christian J. Kellner
> Red Hat Desktop Hardware Enablement
> [email protected]

2017-07-20 16:47:39

by Christian Kellner

[permalink] [raw]
Subject: RE: [PATCH v4 00/27] Thunderbolt security levels and NVM firmware upgrade


Hi,

> Did you by chance boot with the dock "plugged in"? Or was this the
> result of plugging in after boot?
The dock was plugged in after boot. I have included another dmesg output
below, marking the point at which I plugged the device in.

> There is a known issue that Mika is tracking regarding some problems with
> Thunderbolt devices plugged in at boot. It causes a few things to explode
> down the line too (which I suspect is what happened).
Good to know.

Cheers,
Christian

--- 8< ---

[...]
[ 13.302868] IPv6: ADDRCONF(NETDEV_CHANGE): wlp58s0: link becomes
ready
[ 13.401493] wlp58s0: Limiting TX power to 23 (23 - 0) dBm as
advertised by 6c:70:9f:e6:d0:0b
[ 15.033372] Bluetooth: RFCOMM TTY layer initialized
[ 15.033377] Bluetooth: RFCOMM socket layer initialized
[ 15.033421] Bluetooth: RFCOMM ver 1.11
[ 15.576238] fuse init (API version 7.26)
[ 16.206665] rfkill: input handler disabled
--- DEVICE PLUGGED IN HERE ---
[ 53.696945] thinkpad_acpi: EC reports that Thermal Table has changed
[ 68.432897] thinkpad_acpi: EC reports that Thermal Table has changed
[ 68.559713] pci 0000:01:00.0: [8086:15c0] type 01 class 0x060400
[ 68.559829] pci 0000:01:00.0: supports D1 D2
[ 68.559830] pci 0000:01:00.0: PME# supported from D0 D1 D2 D3hot
D3cold
[ 68.560001] pci 0000:02:00.0: [8086:15c0] type 01 class 0x060400
[ 68.560118] pci 0000:02:00.0: supports D1 D2
[ 68.560119] pci 0000:02:00.0: PME# supported from D0 D1 D2 D3hot
D3cold
[ 68.560195] pci 0000:02:01.0: [8086:15c0] type 01 class 0x060400
[ 68.560311] pci 0000:02:01.0: supports D1 D2
[ 68.560312] pci 0000:02:01.0: PME# supported from D0 D1 D2 D3hot
D3cold
[ 68.560388] pci 0000:02:02.0: [8086:15c0] type 01 class 0x060400
[ 68.560505] pci 0000:02:02.0: supports D1 D2
[ 68.560505] pci 0000:02:02.0: PME# supported from D0 D1 D2 D3hot
D3cold
[ 68.560605] pci 0000:01:00.0: PCI bridge to [bus 02-39]
[ 68.560613] pci 0000:01:00.0: bridge window [mem
0xd4000000-0xea0fffff]
[ 68.560618] pci 0000:01:00.0: bridge window [mem
0xb0000000-0xd1ffffff 64bit pref]
[ 68.560674] pci 0000:03:00.0: [8086:15bf] type 00 class 0x088000
[ 68.560710] pci 0000:03:00.0: reg 0x10: [mem 0xea000000-0xea03ffff]
[ 68.560721] pci 0000:03:00.0: reg 0x14: [mem 0xea040000-0xea040fff]
[ 68.560859] pci 0000:03:00.0: supports D1 D2
[ 68.560859] pci 0000:03:00.0: PME# supported from D0 D1 D2 D3hot
D3cold
[ 68.560980] pci 0000:02:00.0: PCI bridge to [bus 03]
[ 68.560988] pci 0000:02:00.0: bridge window [mem
0xea000000-0xea0fffff]
[ 68.561059] pci 0000:04:00.0: [8086:1578] type 01 class 0x060400
[ 68.561305] pci 0000:04:00.0: supports D1 D2
[ 68.561306] pci 0000:04:00.0: PME# supported from D0 D1 D2 D3hot
D3cold
[ 68.563738] pci 0000:02:01.0: PCI bridge to [bus 04-38]
[ 68.563747] pci 0000:02:01.0: bridge window [mem
0xd4000000-0xe9efffff]
[ 68.563752] pci 0000:02:01.0: bridge window [mem
0xb0000000-0xd1ffffff 64bit pref]
[ 68.563861] pci 0000:05:01.0: [8086:1578] type 01 class 0x060400
[ 68.564105] pci 0000:05:01.0: supports D1 D2
[ 68.564106] pci 0000:05:01.0: PME# supported from D0 D1 D2 D3hot
D3cold
[ 68.564263] pci 0000:05:04.0: [8086:1578] type 01 class 0x060400
[ 68.564511] pci 0000:05:04.0: supports D1 D2
[ 68.564512] pci 0000:05:04.0: PME# supported from D0 D1 D2 D3hot
D3cold
[ 68.564668] pci 0000:04:00.0: PCI bridge to [bus 05-38]
[ 68.564682] pci 0000:04:00.0: bridge window [mem
0xd4000000-0xe9efffff]
[ 68.564701] pci 0000:04:00.0: bridge window [mem
0xb0000000-0xd1ffffff 64bit pref]
[ 68.564769] pci 0000:05:01.0: PCI bridge to [bus 06]
[ 68.564862] pci 0000:05:04.0: PCI bridge to [bus 07-38]
[ 68.564877] pci 0000:05:04.0: bridge window [mem
0xd4000000-0xe9efffff]
[ 68.564886] pci 0000:05:04.0: bridge window [mem
0xb0000000-0xd1ffffff 64bit pref]
[ 68.564979] pci 0000:02:02.0: PCI bridge to [bus 39]
[ 68.564986] pci 0000:02:02.0: bridge window [mem
0xe9f00000-0xe9ffffff]
[ 68.565011] pci_bus 0000:02: Allocating resources
[ 68.565072] pci 0000:05:04.0: bridge window [io 0x1000-0x0fff] to
[bus 07-38] add_size 1000
[ 68.565087] pci 0000:04:00.0: bridge window [io 0x1000-0x0fff] to
[bus 05-38] add_size 1000
[ 68.565094] pci 0000:02:01.0: bridge window [io 0x1000-0x0fff] to
[bus 04-38] add_size 1000
[ 68.565101] pci 0000:02:02.0: bridge window [io 0x1000-0x0fff] to
[bus 39] add_size 1000
[ 68.565104] pci 0000:02:02.0: bridge window [mem
0x00100000-0x000fffff 64bit pref] to [bus 39] add_size 200000 add_align
100000
[ 68.565112] pci 0000:01:00.0: bridge window [io 0x1000-0x0fff] to
[bus 02-39] add_size 2000
[ 68.565114] pci 0000:01:00.0: BAR 13: no space for [io size 0x2000]
[ 68.565115] pci 0000:01:00.0: BAR 13: failed to assign [io size
0x2000]
[ 68.565116] pci 0000:01:00.0: BAR 13: no space for [io size 0x2000]
[ 68.565116] pci 0000:01:00.0: BAR 13: failed to assign [io size
0x2000]
[ 68.565119] pci 0000:02:02.0: BAR 15: no space for [mem size
0x00200000 64bit pref]
[ 68.565120] pci 0000:02:02.0: BAR 15: failed to assign [mem size
0x00200000 64bit pref]
[ 68.565120] pci 0000:02:01.0: BAR 13: no space for [io size 0x1000]
[ 68.565121] pci 0000:02:01.0: BAR 13: failed to assign [io size
0x1000]
[ 68.565122] pci 0000:02:02.0: BAR 13: no space for [io size 0x1000]
[ 68.565122] pci 0000:02:02.0: BAR 13: failed to assign [io size
0x1000]
[ 68.565124] pci 0000:02:02.0: BAR 15: no space for [mem size
0x00200000 64bit pref]
[ 68.565125] pci 0000:02:02.0: BAR 15: failed to assign [mem size
0x00200000 64bit pref]
[ 68.565125] pci 0000:02:02.0: BAR 13: no space for [io size 0x1000]
[ 68.565126] pci 0000:02:02.0: BAR 13: failed to assign [io size
0x1000]
[ 68.565127] pci 0000:02:01.0: BAR 13: no space for [io size 0x1000]
[ 68.565128] pci 0000:02:01.0: BAR 13: failed to assign [io size
0x1000]
[ 68.565129] pci 0000:02:00.0: PCI bridge to [bus 03]
[ 68.565133] pci 0000:02:00.0: bridge window [mem
0xea000000-0xea0fffff]
[ 68.565140] pci 0000:04:00.0: BAR 13: no space for [io size 0x1000]
[ 68.565141] pci 0000:04:00.0: BAR 13: failed to assign [io size
0x1000]
[ 68.565142] pci 0000:04:00.0: BAR 13: no space for [io size 0x1000]
[ 68.565143] pci 0000:04:00.0: BAR 13: failed to assign [io size
0x1000]
[ 68.565143] pci 0000:05:04.0: BAR 13: no space for [io size 0x1000]
[ 68.565144] pci 0000:05:04.0: BAR 13: failed to assign [io size
0x1000]
[ 68.565145] pci 0000:05:04.0: BAR 13: no space for [io size 0x1000]
[ 68.565145] pci 0000:05:04.0: BAR 13: failed to assign [io size
0x1000]
[ 68.565146] pci 0000:05:01.0: PCI bridge to [bus 06]
[ 68.565168] pci 0000:05:04.0: PCI bridge to [bus 07-38]
[ 68.565176] pci 0000:05:04.0: bridge window [mem
0xd4000000-0xe9efffff]
[ 68.565182] pci 0000:05:04.0: bridge window [mem
0xb0000000-0xd1ffffff 64bit pref]
[ 68.565192] pci 0000:04:00.0: PCI bridge to [bus 05-38]
[ 68.565201] pci 0000:04:00.0: bridge window [mem
0xd4000000-0xe9efffff]
[ 68.565206] pci 0000:04:00.0: bridge window [mem
0xb0000000-0xd1ffffff 64bit pref]
[ 68.565216] pci 0000:02:01.0: PCI bridge to [bus 04-38]
[ 68.565220] pci 0000:02:01.0: bridge window [mem
0xd4000000-0xe9efffff]
[ 68.565223] pci 0000:02:01.0: bridge window [mem
0xb0000000-0xd1ffffff 64bit pref]
[ 68.565228] pci 0000:02:02.0: PCI bridge to [bus 39]
[ 68.565232] pci 0000:02:02.0: bridge window [mem
0xe9f00000-0xe9ffffff]
[ 68.565239] pci 0000:01:00.0: PCI bridge to [bus 02-39]
[ 68.565244] pci 0000:01:00.0: bridge window [mem
0xd4000000-0xea0fffff]
[ 68.565246] pci 0000:01:00.0: bridge window [mem
0xb0000000-0xd1ffffff 64bit pref]
[ 68.584816] thunderbolt 0000:03:00.0: NHI initialized, starting
thunderbolt
[ 68.584817] thunderbolt 0000:03:00.0: allocating TX ring 0 of size 10
[ 68.584826] thunderbolt 0000:03:00.0: allocating RX ring 0 of size 10
[ 68.584832] thunderbolt 0000:03:00.0: control channel created
[ 68.584832] thunderbolt 0000:03:00.0: control channel starting...
[ 68.584833] thunderbolt 0000:03:00.0: starting TX ring 0
[ 68.584838] thunderbolt 0000:03:00.0: enabling interrupt at register
0x38200 bit 0 (0x0 -> 0x1)
[ 68.584839] thunderbolt 0000:03:00.0: starting RX ring 0
[ 68.584845] thunderbolt 0000:03:00.0: enabling interrupt at register
0x38200 bit 12 (0x1 -> 0x1001)
[ 73.790039] pci 0000:07:00.0: [8086:1578] type 01 class 0x060400
[ 73.790413] pci 0000:07:00.0: supports D1 D2
[ 73.790414] pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot
D3cold
[ 73.790876] pci 0000:08:01.0: [8086:1578] type 01 class 0x060400
[ 73.791260] pci 0000:08:01.0: supports D1 D2
[ 73.791261] pci 0000:08:01.0: PME# supported from D0 D1 D2 D3hot
D3cold
[ 73.791463] pci 0000:08:04.0: [8086:1578] type 01 class 0x060400
[ 73.791844] pci 0000:08:04.0: supports D1 D2
[ 73.791845] pci 0000:08:04.0: PME# supported from D0 D1 D2 D3hot
D3cold
[ 73.792091] pci 0000:07:00.0: PCI bridge to [bus 08-38]
[ 73.792115] pci 0000:07:00.0: bridge window [mem
0xd4000000-0xe9efffff]
[ 73.792130] pci 0000:07:00.0: bridge window [mem
0xb0000000-0xd1ffffff 64bit pref]
[ 73.792290] pci 0000:09:00.0: [1b21:1142] type 00 class 0x0c0330
[ 73.792390] pci 0000:09:00.0: reg 0x10: [mem 0xd4000000-0xd4007fff
64bit]
[ 73.792778] pci 0000:09:00.0: PME# supported from D3cold
[ 73.793096] pci 0000:08:01.0: PCI bridge to [bus 09]
[ 73.793119] pci 0000:08:01.0: bridge window [mem
0xd4000000-0xd40fffff]
[ 73.793251] pci 0000:08:04.0: PCI bridge to [bus 0a-38]
[ 73.793274] pci 0000:08:04.0: bridge window [mem
0xd4100000-0xe9efffff]
[ 73.793290] pci 0000:08:04.0: bridge window [mem
0xb0000000-0xd1ffffff 64bit pref]
[ 73.793408] pci_bus 0000:02: Allocating resources
[ 73.793533] pci 0000:08:04.0: bridge window [io 0x1000-0x0fff] to
[bus 0a-38] add_size 1000
[ 73.793557] pci 0000:07:00.0: bridge window [io 0x1000-0x0fff] to
[bus 08-38] add_size 1000
[ 73.793572] pcieport 0000:05:04.0: bridge window [io 0x1000-0x0fff]
to [bus 07-38] add_size 1000
[ 73.793588] pcieport 0000:04:00.0: bridge window [io 0x1000-0x0fff]
to [bus 05-38] add_size 1000
[ 73.793596] pcieport 0000:02:01.0: bridge window [io 0x1000-0x0fff]
to [bus 04-38] add_size 1000
[ 73.793604] pcieport 0000:02:02.0: bridge window [io 0x1000-0x0fff]
to [bus 39] add_size 1000
[ 73.793606] pcieport 0000:02:02.0: bridge window [mem
0x00100000-0x000fffff 64bit pref] to [bus 39] add_size 200000 add_align
100000
[ 73.793613] pcieport 0000:01:00.0: bridge window [io 0x1000-0x0fff]
to [bus 02-39] add_size 2000
[ 73.793616] pcieport 0000:01:00.0: BAR 13: no space for [io size
0x2000]
[ 73.793617] pcieport 0000:01:00.0: BAR 13: failed to assign [io
size 0x2000]
[ 73.793619] pcieport 0000:01:00.0: BAR 13: no space for [io size
0x2000]
[ 73.793620] pcieport 0000:01:00.0: BAR 13: failed to assign [io
size 0x2000]
[ 73.793623] pcieport 0000:02:02.0: BAR 15: no space for [mem size
0x00200000 64bit pref]
[ 73.793624] pcieport 0000:02:02.0: BAR 15: failed to assign [mem
size 0x00200000 64bit pref]
[ 73.793625] pcieport 0000:02:01.0: BAR 13: no space for [io size
0x1000]
[ 73.793626] pcieport 0000:02:01.0: BAR 13: failed to assign [io
size 0x1000]
[ 73.793627] pcieport 0000:02:02.0: BAR 13: no space for [io size
0x1000]
[ 73.793628] pcieport 0000:02:02.0: BAR 13: failed to assign [io
size 0x1000]
[ 73.793630] pcieport 0000:02:02.0: BAR 15: no space for [mem size
0x00200000 64bit pref]
[ 73.793631] pcieport 0000:02:02.0: BAR 15: failed to assign [mem
size 0x00200000 64bit pref]
[ 73.793632] pcieport 0000:02:02.0: BAR 13: no space for [io size
0x1000]
[ 73.793633] pcieport 0000:02:02.0: BAR 13: failed to assign [io
size 0x1000]
[ 73.793633] pcieport 0000:02:01.0: BAR 13: no space for [io size
0x1000]
[ 73.793634] pcieport 0000:02:01.0: BAR 13: failed to assign [io
size 0x1000]
[ 73.793636] pcieport 0000:04:00.0: BAR 13: no space for [io size
0x1000]
[ 73.793637] pcieport 0000:04:00.0: BAR 13: failed to assign [io
size 0x1000]
[ 73.793638] pcieport 0000:04:00.0: BAR 13: no space for [io size
0x1000]
[ 73.793639] pcieport 0000:04:00.0: BAR 13: failed to assign [io
size 0x1000]
[ 73.793640] pcieport 0000:05:04.0: BAR 13: no space for [io size
0x1000]
[ 73.793641] pcieport 0000:05:04.0: BAR 13: failed to assign [io
size 0x1000]
[ 73.793642] pcieport 0000:05:04.0: BAR 13: no space for [io size
0x1000]
[ 73.793643] pcieport 0000:05:04.0: BAR 13: failed to assign [io
size 0x1000]
[ 73.793644] pci 0000:07:00.0: BAR 13: no space for [io size 0x1000]
[ 73.793645] pci 0000:07:00.0: BAR 13: failed to assign [io size
0x1000]
[ 73.793647] pci 0000:07:00.0: BAR 13: no space for [io size 0x1000]
[ 73.793647] pci 0000:07:00.0: BAR 13: failed to assign [io size
0x1000]
[ 73.793649] pci 0000:08:04.0: BAR 13: no space for [io size 0x1000]
[ 73.793650] pci 0000:08:04.0: BAR 13: failed to assign [io size
0x1000]
[ 73.793651] pci 0000:08:04.0: BAR 13: no space for [io size 0x1000]
[ 73.793652] pci 0000:08:04.0: BAR 13: failed to assign [io size
0x1000]
[ 73.793653] pci 0000:08:01.0: PCI bridge to [bus 09]
[ 73.793669] pci 0000:08:01.0: bridge window [mem
0xd4000000-0xd40fffff]
[ 73.793692] pci 0000:08:04.0: PCI bridge to [bus 0a-38]
[ 73.793704] pci 0000:08:04.0: bridge window [mem
0xd4100000-0xe9efffff]
[ 73.793713] pci 0000:08:04.0: bridge window [mem
0xb0000000-0xd1ffffff 64bit pref]
[ 73.793728] pci 0000:07:00.0: PCI bridge to [bus 08-38]
[ 73.793740] pci 0000:07:00.0: bridge window [mem
0xd4000000-0xe9efffff]
[ 73.793748] pci 0000:07:00.0: bridge window [mem
0xb0000000-0xd1ffffff 64bit pref]
[ 73.796162] xhci_hcd 0000:09:00.0: xHCI Host Controller
[ 73.796585] xhci_hcd 0000:09:00.0: new USB bus registered, assigned
bus number 3
[ 73.862879] xhci_hcd 0000:09:00.0: hcc params 0x0200e081 hci version
0x100 quirks 0x00000410
[ 73.863145] usb usb3: New USB device found, idVendor=1d6b,
idProduct=0002
[ 73.863146] usb usb3: New USB device strings: Mfr=3, Product=2,
SerialNumber=1
[ 73.863146] usb usb3: Product: xHCI Host Controller
[ 73.863147] usb usb3: Manufacturer: Linux 4.13.0-rc1-next-20170719
xhci-hcd
[ 73.863148] usb usb3: SerialNumber: 0000:09:00.0
[ 73.863427] hub 3-0:1.0: USB hub found
[ 73.863440] hub 3-0:1.0: 2 ports detected
[ 73.863522] xhci_hcd 0000:09:00.0: xHCI Host Controller
[ 73.868230] xhci_hcd 0000:09:00.0: new USB bus registered, assigned
bus number 4
[ 73.869047] usb usb4: We don't know the algorithms for LPM for this
host, disabling LPM.
[ 73.869077] usb usb4: New USB device found, idVendor=1d6b,
idProduct=0003
[ 73.869079] usb usb4: New USB device strings: Mfr=3, Product=2,
SerialNumber=1
[ 73.869081] usb usb4: Product: xHCI Host Controller
[ 73.869081] usb usb4: Manufacturer: Linux 4.13.0-rc1-next-20170719
xhci-hcd
[ 73.869082] usb usb4: SerialNumber: 0000:09:00.0
[ 73.871464] hub 4-0:1.0: USB hub found
[ 73.871527] hub 4-0:1.0: 2 ports detected
[ 74.229725] usb 3-1: new high-speed USB device number 2 using
xhci_hcd
[ 74.423625] usb 3-1: New USB device found, idVendor=0424,
idProduct=2137
[ 74.423631] usb 3-1: New USB device strings: Mfr=1, Product=2,
SerialNumber=0
[ 74.423635] usb 3-1: Product: USB2137B
[ 74.423638] usb 3-1: Manufacturer: SMSC
[ 74.443536] hub 3-1:1.0: USB hub found
[ 74.443620] hub 3-1:1.0: 7 ports detected
[ 74.537017] usb 4-1: new SuperSpeed USB device number 2 using
xhci_hcd
[ 74.553183] usb 4-1: New USB device found, idVendor=0424,
idProduct=5537
[ 74.553186] usb 4-1: New USB device strings: Mfr=2, Product=3,
SerialNumber=0
[ 74.553188] usb 4-1: Product: USB5537B
[ 74.553189] usb 4-1: Manufacturer: SMSC
[ 74.573098] hub 4-1:1.0: USB hub found
[ 74.573700] hub 4-1:1.0: 7 ports detected
[ 74.765719] usb 3-1.5: new high-speed USB device number 3 using
xhci_hcd
[ 74.948261] usb 3-1.5: New USB device found, idVendor=0bda,
idProduct=4014
[ 74.948266] usb 3-1.5: New USB device strings: Mfr=3, Product=1,
SerialNumber=2
[ 74.948269] usb 3-1.5: Product: USB Audio
[ 74.948272] usb 3-1.5: Manufacturer: Generic
[ 74.948274] usb 3-1.5: SerialNumber: 200901010001
[ 75.014095] usb 4-1.2: new SuperSpeed USB device number 3 using
xhci_hcd
[ 75.031783] usb 4-1.2: New USB device found, idVendor=0bda,
idProduct=8153
[ 75.031784] usb 4-1.2: New USB device strings: Mfr=1, Product=2,
SerialNumber=6
[ 75.031785] usb 4-1.2: Product: USB 10/100/1000 LAN
[ 75.031785] usb 4-1.2: Manufacturer: Realtek
[ 75.031786] usb 4-1.2: SerialNumber: 000001000000
[ 75.782301] usbcore: registered new interface driver r8152
[ 75.798094] usbcore: registered new interface driver cdc_ether
[ 76.102592] usb 4-1.2: reset SuperSpeed USB device number 3 using
xhci_hcd
[ 76.116181] thunderbolt 0000:03:00.0: current switch config:
[ 76.116187] thunderbolt 0000:03:00.0: Switch: 8086:15c0 (Revision:
1, TB Version: 2)
[ 76.116190] thunderbolt 0000:03:00.0: Max Port Number: 5
[ 76.116191] thunderbolt 0000:03:00.0: Config:
[ 76.116196] thunderbolt 0000:03:00.0: Upstream Port Number: 3
Depth: 0 Route String: 0x0 Enabled: 1, PlugEventsDelay: 254ms
[ 76.116198] thunderbolt 0000:03:00.0: unknown1: 0x0 unknown4: 0x0
[ 76.134639] BUG: unable to handle kernel NULL pointer dereference at
00000000000002ec
[ 76.134743] IP: tb_drom_read+0x383/0x890 [thunderbolt]
[ 76.134788] PGD 0
[ 76.134789] P4D 0

[ 76.134845] Oops: 0000 [#1] SMP
[ 76.134873] Modules linked in: cdc_ether r8152 snd_usb_audio(+)
snd_usbmidi_lib snd_rawmidi thunderbolt(+) fuse rfcomm ccm xt_CHECKSUM
ipt_MASQUERADE nf_nat_masquerade_ipv4 tun nf_conntrack_netbios_ns
nf_conntrack_broadcast xt_CT ip6t_rpfilter ip6t_REJECT nf_reject_ipv6
xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc
ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6
ip6table_mangle ip6table_raw ip6table_security iptable_nat
nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack
libcrc32c iptable_mangle iptable_raw iptable_security ebtable_filter
ebtables ip6table_filter ip6_tables cmac bnep sunrpc arc4 iwlmvm
snd_soc_skl intel_rapl x86_pkg_temp_thermal intel_powerclamp
snd_soc_skl_ipc mac80211 snd_soc_sst_ipc snd_soc_sst_dsp coretemp
snd_hda_ext_core wmi_bmof
[ 76.135458] iTCO_wdt iTCO_vendor_support mei_wdt snd_hda_codec_hdmi
kvm tpm_crb snd_soc_sst_match irqbypass crct10dif_pclmul crc32_pclmul
ghash_clmulni_intel snd_hda_codec_realtek intel_cstate
snd_hda_codec_generic intel_uncore iwlwifi intel_rapl_perf snd_soc_core
snd_compress snd_pcm_dmaengine ac97_bus snd_hda_intel snd_hda_codec
snd_hda_core cfg80211 snd_hwdep snd_seq joydev uvcvideo pcspkr
snd_seq_device snd_pcm hci_uart videobuf2_vmalloc videobuf2_memops
videobuf2_v4l2 videobuf2_core videodev btusb i2c_i801 snd_timer btrtl
btqca btbcm btintel thinkpad_acpi mei_me qcserial bluetooth media
usb_wwan mei shpchp intel_pch_thermal snd wmi soundcore ecdh_generic
tpm_tis pinctrl_sunrisepoint tpm_tis_core rfkill pinctrl_intel
intel_lpss_acpi intel_lpss tpm acpi_pad cdc_mbim cdc_wdm cdc_ncm usbnet
mii
[ 76.136045] uas usb_storage i915 i2c_algo_bit drm_kms_helper drm
e1000e crc32c_intel nvme ptp serio_raw pps_core nvme_core video i2c_hid
[ 76.136160] CPU: 0 PID: 5044 Comm: systemd-udevd Not tainted
4.13.0-rc1-next-20170719 #1
[ 76.136225] Hardware name: LENOVO 20HF000YGE/20HF000YGE, BIOS
N1WET32W (1.11 ) 05/23/2017
[ 76.136290] task: ffff8da48433cd80 task.stack: ffffb64f48104000
[ 76.136361] RIP: 0010:tb_drom_read+0x383/0x890 [thunderbolt]
[ 76.136404] RSP: 0018:ffffb64f48107990 EFLAGS: 00010246
[ 76.136444] RAX: 0000000000000000 RBX: ffff8da3f390d400 RCX:
0000000000000000
[ 76.136495] RDX: ffff8da4849d2600 RSI: 0000000000000000 RDI:
ffffb64f481079a0
[ 76.136546] RBP: ffffb64f481079d0 R08: 0000000000000001 R09:
0000000000000002
[ 76.136597] R10: ffff8da4849d27b0 R11: 0000000000000000 R12:
ffff8da48c444ecc
[ 76.136648] R13: 000000000000002c R14: 0000000000000056 R15:
0000000000000056
[ 76.136700] FS: 00007f6155a4d8c0(0000) GS:ffff8da4a1400000(0000)
knlGS:0000000000000000
[ 76.136758] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 76.136800] CR2: 00000000000002ec CR3: 000000044dddf000 CR4:
00000000003606f0
[ 76.136852] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[ 76.136904] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
0000000000000400
[ 76.136956] Call Trace:
[ 76.136988] tb_switch_add+0x9d/0x730 [thunderbolt]
[ 76.137031] ? tb_switch_alloc+0x3cd/0x4d0 [thunderbolt]
[ 76.137078] icm_start+0x5a/0xa0 [thunderbolt]
[ 76.137118] tb_domain_add+0xc3/0xf0 [thunderbolt]
[ 76.137161] nhi_probe+0x19e/0x310 [thunderbolt]
[ 76.137202] local_pci_probe+0x42/0xa0
[ 76.137233] pci_device_probe+0x18d/0x1a0
[ 76.137268] driver_probe_device+0x2ff/0x450
[ 76.137302] __driver_attach+0xa4/0xe0
[ 76.137333] ? driver_probe_device+0x450/0x450
[ 76.137369] bus_for_each_dev+0x6e/0xb0
[ 76.137400] driver_attach+0x1e/0x20
[ 76.137431] bus_add_driver+0x1d0/0x270
[ 76.137461] ? 0xffffffffc0c1c000
[ 76.137496] driver_register+0x60/0xe0
[ 76.137540] ? 0xffffffffc0c1c000
[ 76.137584] __pci_register_driver+0x4c/0x50
[ 76.137638] nhi_init+0x28/0x1000 [thunderbolt]
[ 76.137677] do_one_initcall+0x50/0x190
[ 76.137709] ? __vunmap+0x81/0xb0
[ 76.137739] ? _cond_resched+0x1a/0x50
[ 76.137772] ? kmem_cache_alloc_trace+0x15f/0x1c0
[ 76.137810] ? do_init_module+0x27/0x1e9
[ 76.137841] do_init_module+0x5f/0x1e9
[ 76.137872] load_module+0x24e7/0x2a60
[ 76.137906] ? vfs_read+0x115/0x130
[ 76.137937] SYSC_finit_module+0xfc/0x120
[ 76.137968] ? SYSC_finit_module+0xfc/0x120
[ 76.138003] SyS_finit_module+0xe/0x10
[ 76.138047] do_syscall_64+0x67/0x170
[ 76.138078] entry_SYSCALL64_slow_path+0x25/0x25
[ 76.138113] RIP: 0033:0x7f61546be529
[ 76.138142] RSP: 002b:00007ffe4ec72f28 EFLAGS: 00000246 ORIG_RAX:
0000000000000139
[ 76.138197] RAX: ffffffffffffffda RBX: 00000092f6a08b60 RCX:
00007f61546be529
[ 76.138249] RDX: 0000000000000000 RSI: 00007f61551fa9c5 RDI:
0000000000000007
[ 76.138300] RBP: 00007f61551fa9c5 R08: 0000000000000000 R09:
00007ffe4ec73040
[ 76.140994] R10: 0000000000000007 R11: 0000000000000246 R12:
0000000000000000
[ 76.143583] R13: 00000092f6a04580 R14: 0000000000020000 R15:
00000092f4e60f4a
[ 76.146145] Code: 8d 0c c9 83 e0 01 84 c0 4c 8d 14 ca 41 88 42 35 75
85 49 8b 42 20 41 0f b6 4a 34 41 b9 02 00 00 00 41 b8 01 00 00 00 4c 89
55 c0 <8b> 90 ec 02 00 00 8b b0 e8 02 00 00 48 8b 80 08 03 00 00 81 e2
[ 76.148716] RIP: tb_drom_read+0x383/0x890 [thunderbolt] RSP:
ffffb64f48107990
[ 76.151254] CR2: 00000000000002ec
[ 76.166697] usbcore: registered new interface driver snd-usb-audio
[ 76.167182] ---[ end trace 8d5cd480d046bff9 ]---
[ 76.215165] r8152 4-1.2:1.0 eth0: v1.09.9
[ 76.358446] r8152 4-1.2:1.0 enp9s0u1u2: renamed from eth0
[ 76.390687] IPv6: ADDRCONF(NETDEV_UP): enp9s0u1u2: link is not ready
[ 76.412379] IPv6: ADDRCONF(NETDEV_UP): enp9s0u1u2: link is not ready
[ 79.762371] r8152 4-1.2:1.0 enp9s0u1u2: carrier on
[ 79.762409] IPv6: ADDRCONF(NETDEV_CHANGE): enp9s0u1u2: link becomes
ready



2017-07-24 06:55:18

by Mika Westerberg

[permalink] [raw]
Subject: Re: [PATCH v4 00/27] Thunderbolt security levels and NVM firmware upgrade

Hi,

On Thu, Jul 20, 2017 at 06:11:49PM +0200, Christian Kellner wrote:
> Hi!
>
> > This is fourth version of the patch series adding support for Thunderbolt
> > security levels and NVM firmware upgrade.
>
> While prototyping the user-space bits for GNOME, I stumbled upon an
> oops on the Lenovo T470s (see below) when attaching a Dell TB16
> Thunderbolt 3 dock. As a result /sys/bus/thunderbolt/devices has only
> domain0 in it but not the dock itself. Everything works fine on a Dell
> XPS 13 9630. The oops happens with linux-next and when I backport the
> patches to 4.11.11.
> Happy to provide any more information and test future patches.
>
> Cheers,
> Christian
>
> --- 8< ---
> lspci output:
>
> pcilib: Cannot open /sys/bus/pci/devices/0000:03:00.0/config
> lspci: Unable to read the standard configuration space header of device 0000:03:00.0
> 00:00.0 Host bridge: Intel Corporation Device 5904 (rev 02)
> 00:02.0 VGA compatible controller: Intel Corporation Device 5916 (rev 02)
> 00:14.0 USB controller: Intel Corporation Sunrise Point-LP USB 3.0 xHCI Controller (rev 21)
> 00:14.2 Signal processing controller: Intel Corporation Sunrise Point-LP Thermal subsystem (rev 21)
> 00:16.0 Communication controller: Intel Corporation Sunrise Point-LP CSME HECI #1 (rev 21)
> 00:16.3 Serial controller: Intel Corporation Device 9d3d (rev 21)
> 00:1c.0 PCI bridge: Intel Corporation Device 9d10 (rev f1)
> 00:1c.2 PCI bridge: Intel Corporation Device 9d12 (rev f1)
> 00:1d.0 PCI bridge: Intel Corporation Sunrise Point-LP PCI Express Root Port #9 (rev f1)
> 00:1f.0 ISA bridge: Intel Corporation Device 9d4e (rev 21)
> 00:1f.2 Memory controller: Intel Corporation Sunrise Point-LP PMC (rev 21)
> 00:1f.3 Audio device: Intel Corporation Device 9d71 (rev 21)
> 00:1f.4 SMBus: Intel Corporation Sunrise Point-LP SMBus (rev 21)
> 00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (4) I219-LM (rev 21)
> 01:00.0 PCI bridge: Intel Corporation JHL6240 Thunderbolt 3 Bridge (Low Power) [Alpine Ridge LP 2016] (rev ff)
> 02:00.0 PCI bridge: Intel Corporation JHL6240 Thunderbolt 3 Bridge (Low Power) [Alpine Ridge LP 2016] (rev ff)
> 02:01.0 PCI bridge: Intel Corporation JHL6240 Thunderbolt 3 Bridge (Low Power) [Alpine Ridge LP 2016] (rev ff)
> 02:02.0 PCI bridge: Intel Corporation JHL6240 Thunderbolt 3 Bridge (Low Power) [Alpine Ridge LP 2016] (rev ff)
> 3a:00.0 Network controller: Intel Corporation Wireless 8265 / 8275 (rev 78)
> 3c:00.0 Non-Volatile memory controller: Toshiba America Info Systems Device 0115 (rev 01)
>
> --- >8 ---
> Oops:
>
> [ 69.886978] thunderbolt 0000:03:00.0: current switch config:
> [ 69.886983] thunderbolt 0000:03:00.0: Switch: 8086:15c0 (Revision: 1, TB Version: 2)
> [ 69.886986] thunderbolt 0000:03:00.0: Max Port Number: 5
> [ 69.886987] thunderbolt 0000:03:00.0: Config:
> [ 69.886991] thunderbolt 0000:03:00.0: Upstream Port Number: 3 Depth: 0 Route String: 0x0 Enabled: 1, PlugEventsDelay: 254ms
> [ 69.886994] thunderbolt 0000:03:00.0: unknown1: 0x0 unknown4: 0x0
> [ 69.920748] BUG: unable to handle kernel NULL pointer dereference at 00000000000002ec
> [ 69.920834] IP: tb_drom_read+0x383/0x890 [thunderbolt]

I've seen this once on Alpine Ridge LP (which you have here), where the
DROM contents of the older NVM image listed too many ports. Can you check
if the patch below helps?

diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
index 996c6e2..bdf7f80 100644
--- a/drivers/thunderbolt/eeprom.c
+++ b/drivers/thunderbolt/eeprom.c
@@ -333,6 +333,12 @@ static int tb_drom_parse_entry_port(struct tb_switch *sw,
 	int res;
 	enum tb_port_type type;
 
+	if (header->index > sw->config.max_port_number) {
+		tb_sw_warn(sw, "DROM has too many entries %u (expected %u)\n",
+			   header->index, sw->config.max_port_number);
+		return 0;
+	}
+
 	port = &sw->ports[header->index];
 	port->disabled = header->port_disabled;
 	if (port->disabled)
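
For illustration, here is the same guard pattern as a self-contained
user-space sketch. The sketch_* names, the array size of 6 and all the
scaffolding around the check are made up for this example; only the guard
condition and the warning text mirror the patch above.

#include <stdio.h>

/*
 * Simplified stand-ins for the switch and the DROM port entry. The real
 * driver structures are different; these carry only the fields the check
 * needs.
 */
struct sketch_port {
	int disabled;
};

struct sketch_switch {
	unsigned int max_port_number;	/* highest valid index into ports[] */
	struct sketch_port ports[6];	/* sized for max_port_number + 1 */
};

struct sketch_entry_header {
	unsigned int index;
	int port_disabled;
};

/*
 * Skip (and warn about) port entries whose index is out of range instead
 * of indexing past the end of ports[], which is the out-of-range access
 * the patch above guards against.
 */
static int sketch_parse_entry_port(struct sketch_switch *sw,
				   const struct sketch_entry_header *header)
{
	struct sketch_port *port;

	if (header->index > sw->max_port_number) {
		fprintf(stderr, "DROM has too many entries %u (expected %u)\n",
			header->index, sw->max_port_number);
		return 0;	/* ignore the bogus entry, keep parsing */
	}

	port = &sw->ports[header->index];
	port->disabled = header->port_disabled;
	return 0;
}

int main(void)
{
	struct sketch_switch sw = { .max_port_number = 5 };
	/* index 11 mimics the stale DROM content seen on Alpine Ridge LP */
	struct sketch_entry_header entries[] = {
		{ .index = 1, .port_disabled = 0 },
		{ .index = 11, .port_disabled = 1 },
	};
	unsigned int i;

	for (i = 0; i < sizeof(entries) / sizeof(entries[0]); i++)
		sketch_parse_entry_port(&sw, &entries[i]);

	return 0;
}

An entry with a bad index only produces a warning and is skipped, so the
rest of the DROM is still parsed. That is also why the patch returns 0
rather than an error: the visible effect is just one warning line per
bogus entry instead of a failed device probe.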

2017-07-25 09:11:12

by Christian Kellner

[permalink] [raw]
Subject: Re: [PATCH v4 00/27] Thunderbolt security levels and NVM firmware upgrade

Hi Mika,


> I've seen this once on Alpine Ridge LP (which you have here), where the
> DROM contents of the older NVM image listed too many ports. Can you check
> if the patch below helps?
You were right. The patch works fine, and I indeed see the expected
warnings in the logs:

[ 102.739663] thunderbolt 0000:03:00.0: 0: DROM has too many entries 6 (expected 5)
[ 102.739665] thunderbolt 0000:03:00.0: 0: DROM has too many entries 7 (expected 5)
[ 102.739667] thunderbolt 0000:03:00.0: 0: DROM has too many entries 8 (expected 5)
[ 102.739668] thunderbolt 0000:03:00.0: 0: DROM has too many entries 9 (expected 5)
[ 102.739669] thunderbolt 0000:03:00.0: 0: DROM has too many entries 10 (expected 5)
[ 102.739670] thunderbolt 0000:03:00.0: 0: DROM has too many entries 11 (expected 5)

Thanks a lot,
Christian

> diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
> index 996c6e2..bdf7f80 100644
> --- a/drivers/thunderbolt/eeprom.c
> +++ b/drivers/thunderbolt/eeprom.c
> @@ -333,6 +333,12 @@ static int tb_drom_parse_entry_port(struct tb_switch *sw,
>  	int res;
>  	enum tb_port_type type;
> 
> +	if (header->index > sw->config.max_port_number) {
> +		tb_sw_warn(sw, "DROM has too many entries %u (expected %u)\n",
> +			   header->index, sw->config.max_port_number);
> +		return 0;
> +	}
> +
>  	port = &sw->ports[header->index];
>  	port->disabled = header->port_disabled;
>  	if (port->disabled)

Tested-by: Christian Kellner <[email protected]>

2017-07-25 09:19:58

by Mika Westerberg

[permalink] [raw]
Subject: Re: [PATCH v4 00/27] Thunderbolt security levels and NVM firmware upgrade

On Tue, Jul 25, 2017 at 11:11:05AM +0200, Christian Kellner wrote:
> Hi Mika,
>
>
> > I've seen this once on Alpine Ridge LP (which you have here), where the
> > DROM contents of the older NVM image listed too many ports. Can you check
> > if the patch below helps?
> You were right. The patch works fine, and I indeed see the expected warnings in
> the logs:
>
> [ 102.739663] thunderbolt 0000:03:00.0: 0: DROM has too many entries 6
> (expected 5)
> [ 102.739665] thunderbolt 0000:03:00.0: 0: DROM has too many entries 7
> (expected 5)
> [ 102.739667] thunderbolt 0000:03:00.0: 0: DROM has too many entries 8
> (expected 5)
> [ 102.739668] thunderbolt 0000:03:00.0: 0: DROM has too many entries 9
> (expected 5)
> [ 102.739669] thunderbolt 0000:03:00.0: 0: DROM has too many entries 10
> (expected 5)
> [ 102.739670] thunderbolt 0000:03:00.0: 0: DROM has too many entries 11
> (expected 5)

Thanks for confirming! I will send a proper patch (with slight changes)
later today.