This patch series drops the previous patches in [1] that were incorporated
by Kees Cook into the patch series
"Introduce partial kernel_read_file() support" [2].
The remaining patches, which add the Broadcom VK driver, are contained in
this series (the driver depends on the request_firmware_into_buf API
addition in patch series [2] being applied first).
Please note this patch series will not compile without [2].
[1] https://lore.kernel.org/lkml/[email protected]/
[2] https://lore.kernel.org/lkml/[email protected]/
Changes from v5:
- dropped sysfs patch from series for now, pending rework to use hwmon
- kept tty patch at end of series so it can be dropped if another solution becomes available
- updated cover letter commit to point to Kees' latest patch submission in [2]
- specified --base with Kees' patches applied (kernel branches don't have these yet)
- removed trivial comment
- moved location of const to before the struct in two declarations
- changed dev_info to dev_warn and only print when irqs don't match expected
- changed dev_info to dev_dbg when printing debug QSTATS
- removed unnecessary %p print
Changes from v4:
- fixed memory leak in probe function on failure
- changed -1 to -EBUSY in bcm_vk_tty return code
- moved bcm_vk_tty patch to end of patch series so it
can be dropped from the current series and
rearchitected if needed
Changes from v3:
- split driver into more incremental commits for acceptance/review
- lowered some dev_info to dev_dbg
- removed ANSI stdint types and replaced them with Linux u8, etc. types
- changed an EIO return to EPFNOSUPPORT
- moved vk_msg_cmd internal to the driver so it is not exposed to UAPI at this time
Changes from v2:
- open code BIT macro in uapi header
- A0/B0 boot improvements
Changes from v1:
- declare bcm_vk_intf_ver_chk as static
Scott Branden (14):
bcm-vk: add bcm_vk UAPI
misc: bcm-vk: add Broadcom VK driver
misc: bcm-vk: add autoload support
misc: bcm-vk: add misc device to Broadcom VK driver
misc: bcm-vk: add triggers when host panic or reboots to notify card
misc: bcm-vk: add open/release
misc: bcm-vk: add ioctl load_image
misc: bcm-vk: add get_card_info, peerlog_info, and proc_mon_info
misc: bcm-vk: add VK messaging support
misc: bcm-vk: reset_pid support
misc: bcm-vk: add BCM_VK_QSTATS
misc: bcm-vk: add mmap function for exposing BAR2
MAINTAINERS: bcm-vk: add maintainer for Broadcom VK Driver
misc: bcm-vk: add ttyVK support
MAINTAINERS | 7 +
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/bcm-vk/Kconfig | 29 +
drivers/misc/bcm-vk/Makefile | 12 +
drivers/misc/bcm-vk/bcm_vk.h | 507 ++++++++++
drivers/misc/bcm-vk/bcm_vk_dev.c | 1629 ++++++++++++++++++++++++++++++
drivers/misc/bcm-vk/bcm_vk_msg.c | 1363 +++++++++++++++++++++++++
drivers/misc/bcm-vk/bcm_vk_msg.h | 169 ++++
drivers/misc/bcm-vk/bcm_vk_sg.c | 275 +++++
drivers/misc/bcm-vk/bcm_vk_sg.h | 61 ++
drivers/misc/bcm-vk/bcm_vk_tty.c | 333 ++++++
include/uapi/linux/misc/bcm_vk.h | 84 ++
13 files changed, 4471 insertions(+)
create mode 100644 drivers/misc/bcm-vk/Kconfig
create mode 100644 drivers/misc/bcm-vk/Makefile
create mode 100644 drivers/misc/bcm-vk/bcm_vk.h
create mode 100644 drivers/misc/bcm-vk/bcm_vk_dev.c
create mode 100644 drivers/misc/bcm-vk/bcm_vk_msg.c
create mode 100644 drivers/misc/bcm-vk/bcm_vk_msg.h
create mode 100644 drivers/misc/bcm-vk/bcm_vk_sg.c
create mode 100644 drivers/misc/bcm-vk/bcm_vk_sg.h
create mode 100644 drivers/misc/bcm-vk/bcm_vk_tty.c
create mode 100644 include/uapi/linux/misc/bcm_vk.h
base-commit: ea39bd280a47b40ed7e756a2f7d669f4a0a0811f
--
2.17.1
Add user space API for the bcm-vk driver.
Provide an ioctl API to load images and issue a reset command to the card.
FW status registers in PCI BAR space are also defined as part of the API
so that user space is able to interpret these memory locations as needed
via direct PCIe access.
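For illustration, a minimal user-space sketch of the load-image ioctl.
This is a hedged example, not part of the patch: the device node name
/dev/bcm-vk.0 is an assumption based on the driver name and IDA index,
and the node only exists once the misc device and ioctl patches later
in this series are applied.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/misc/bcm_vk.h>

    int main(void)
    {
        struct vk_image image = { .type = VK_IMAGE_TYPE_BOOT1 };
        int fd = open("/dev/bcm-vk.0", O_RDWR); /* assumed node name */

        if (fd < 0)
            return 1;
        /* filename of the image to load */
        strncpy((char *)image.filename, "vk-boot1.bin",
                sizeof(image.filename) - 1);
        if (ioctl(fd, VK_IOCTL_LOAD_IMAGE, &image) < 0)
            perror("VK_IOCTL_LOAD_IMAGE");
        close(fd);
        return 0;
    }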
Signed-off-by: Scott Branden <[email protected]>
---
include/uapi/linux/misc/bcm_vk.h | 84 ++++++++++++++++++++++++++++++++
1 file changed, 84 insertions(+)
create mode 100644 include/uapi/linux/misc/bcm_vk.h
diff --git a/include/uapi/linux/misc/bcm_vk.h b/include/uapi/linux/misc/bcm_vk.h
new file mode 100644
index 000000000000..ec28e0bd46a9
--- /dev/null
+++ b/include/uapi/linux/misc/bcm_vk.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Copyright 2018-2020 Broadcom.
+ */
+
+#ifndef __UAPI_LINUX_MISC_BCM_VK_H
+#define __UAPI_LINUX_MISC_BCM_VK_H
+
+#include <linux/ioctl.h>
+#include <linux/types.h>
+
+#define BCM_VK_MAX_FILENAME 64
+
+struct vk_image {
+ __u32 type; /* Type of image */
+#define VK_IMAGE_TYPE_BOOT1 1 /* 1st stage (load to SRAM) */
+#define VK_IMAGE_TYPE_BOOT2 2 /* 2nd stage (load to DDR) */
+ __u8 filename[BCM_VK_MAX_FILENAME]; /* Filename of image */
+};
+
+struct vk_reset {
+ __u32 arg1;
+ __u32 arg2;
+};
+
+#define VK_MAGIC 0x5e
+
+/* Load image to Valkyrie */
+#define VK_IOCTL_LOAD_IMAGE _IOW(VK_MAGIC, 0x2, struct vk_image)
+
+/* Send Reset to Valkyrie */
+#define VK_IOCTL_RESET _IOW(VK_MAGIC, 0x4, struct vk_reset)
+
+/*
+ * Firmware Status accessed directly via BAR space
+ */
+#define VK_BAR_FWSTS 0x41c
+#define VK_BAR_COP_FWSTS 0x428
+/* VK_FWSTS definitions */
+#define VK_FWSTS_RELOCATION_ENTRY (1UL << 0)
+#define VK_FWSTS_RELOCATION_EXIT (1UL << 1)
+#define VK_FWSTS_INIT_START (1UL << 2)
+#define VK_FWSTS_ARCH_INIT_DONE (1UL << 3)
+#define VK_FWSTS_PRE_KNL1_INIT_DONE (1UL << 4)
+#define VK_FWSTS_PRE_KNL2_INIT_DONE (1UL << 5)
+#define VK_FWSTS_POST_KNL_INIT_DONE (1UL << 6)
+#define VK_FWSTS_INIT_DONE (1UL << 7)
+#define VK_FWSTS_APP_INIT_START (1UL << 8)
+#define VK_FWSTS_APP_INIT_DONE (1UL << 9)
+#define VK_FWSTS_MASK 0xffffffff
+#define VK_FWSTS_READY (VK_FWSTS_INIT_START | \
+ VK_FWSTS_ARCH_INIT_DONE | \
+ VK_FWSTS_PRE_KNL1_INIT_DONE | \
+ VK_FWSTS_PRE_KNL2_INIT_DONE | \
+ VK_FWSTS_POST_KNL_INIT_DONE | \
+ VK_FWSTS_INIT_DONE | \
+ VK_FWSTS_APP_INIT_START | \
+ VK_FWSTS_APP_INIT_DONE)
+/* Deinit */
+#define VK_FWSTS_APP_DEINIT_START (1UL << 23)
+#define VK_FWSTS_APP_DEINIT_DONE (1UL << 24)
+#define VK_FWSTS_DRV_DEINIT_START (1UL << 25)
+#define VK_FWSTS_DRV_DEINIT_DONE (1UL << 26)
+#define VK_FWSTS_RESET_DONE (1UL << 27)
+#define VK_FWSTS_DEINIT_TRIGGERED (VK_FWSTS_APP_DEINIT_START | \
+ VK_FWSTS_APP_DEINIT_DONE | \
+ VK_FWSTS_DRV_DEINIT_START | \
+ VK_FWSTS_DRV_DEINIT_DONE)
+/* Last nibble for reboot reason */
+#define VK_FWSTS_RESET_REASON_SHIFT 28
+#define VK_FWSTS_RESET_REASON_MASK (0xf << VK_FWSTS_RESET_REASON_SHIFT)
+#define VK_FWSTS_RESET_SYS_PWRUP (0x0 << VK_FWSTS_RESET_REASON_SHIFT)
+#define VK_FWSTS_RESET_MBOX_DB (0x1 << VK_FWSTS_RESET_REASON_SHIFT)
+#define VK_FWSTS_RESET_M7_WDOG (0x2 << VK_FWSTS_RESET_REASON_SHIFT)
+#define VK_FWSTS_RESET_TEMP (0x3 << VK_FWSTS_RESET_REASON_SHIFT)
+#define VK_FWSTS_RESET_PCI_FLR (0x4 << VK_FWSTS_RESET_REASON_SHIFT)
+#define VK_FWSTS_RESET_PCI_HOT (0x5 << VK_FWSTS_RESET_REASON_SHIFT)
+#define VK_FWSTS_RESET_PCI_WARM (0x6 << VK_FWSTS_RESET_REASON_SHIFT)
+#define VK_FWSTS_RESET_PCI_COLD (0x7 << VK_FWSTS_RESET_REASON_SHIFT)
+#define VK_FWSTS_RESET_L1 (0x8 << VK_FWSTS_RESET_REASON_SHIFT)
+#define VK_FWSTS_RESET_L0 (0x9 << VK_FWSTS_RESET_REASON_SHIFT)
+#define VK_FWSTS_RESET_UNKNOWN (0xf << VK_FWSTS_RESET_REASON_SHIFT)
+
+#endif /* __UAPI_LINUX_MISC_BCM_VK_H */
--
2.17.1
Add support to load and boot images on the card automatically. The kernel
module parameter auto_load can be passed as false to disable such support
at probe time. In addition, nr_scratch_pages can be specified to allocate
more or less scratch memory at init, as needed for the desired card
operation (see the example below).
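For example, to disable autoload and double the default scratch
allocation at module load (module name per the Makefile in this series;
the values are illustrative):

    modprobe bcm_vk auto_load=0 nr_scratch_pages=64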
Co-developed-by: Desmond Yan <[email protected]>
Signed-off-by: Desmond Yan <[email protected]>
Co-developed-by: James Hu <[email protected]>
Signed-off-by: James Hu <[email protected]>
Signed-off-by: Scott Branden <[email protected]>
---
drivers/misc/bcm-vk/bcm_vk.h | 250 +++++++++++
drivers/misc/bcm-vk/bcm_vk_dev.c | 718 +++++++++++++++++++++++++++++++
2 files changed, 968 insertions(+)
diff --git a/drivers/misc/bcm-vk/bcm_vk.h b/drivers/misc/bcm-vk/bcm_vk.h
index 9152785199ab..c4fb61a84e41 100644
--- a/drivers/misc/bcm-vk/bcm_vk.h
+++ b/drivers/misc/bcm-vk/bcm_vk.h
@@ -6,10 +6,187 @@
#ifndef BCM_VK_H
#define BCM_VK_H
+#include <linux/firmware.h>
#include <linux/pci.h>
+#include <linux/sched/signal.h>
#define DRV_MODULE_NAME "bcm-vk"
+/*
+ * Load Image is completed in two stages:
+ *
+ * 1) When the VK device boots up, the M7 CPU runs and executes the BootROM.
+ * The Secure Boot Loader (SBL), as part of the BootROM, runs
+ * to open up ITCM for the host to push the BOOT1 image.
+ * SBL authenticates the image before jumping to the BOOT1 image.
+ *
+ * 2) Because the BOOT1 image is a secured image, it is also called the
+ * Secure Boot Image (SBI). In this second stage, SBI initializes DDR
+ * and waits for the host to push the BOOT2 image to DDR.
+ * SBI authenticates the image before jumping to the BOOT2 image.
+ *
+ */
+/* Location of registers of interest in BAR0 */
+
+/* Request register for Secure Boot Loader (SBL) download */
+#define BAR_CODEPUSH_SBL 0x400
+/* Start of ITCM */
+#define CODEPUSH_BOOT1_ENTRY 0x00400000
+#define CODEPUSH_MASK 0xfffff000
+#define CODEPUSH_BOOTSTART BIT(0)
+
+/* Boot Status register */
+#define BAR_BOOT_STATUS 0x404
+
+#define SRAM_OPEN BIT(16)
+#define DDR_OPEN BIT(17)
+
+/* Firmware loader progress status definitions */
+#define FW_LOADER_ACK_SEND_MORE_DATA BIT(18)
+#define FW_LOADER_ACK_IN_PROGRESS BIT(19)
+#define FW_LOADER_ACK_RCVD_ALL_DATA BIT(20)
+
+/* Boot1/2 is running in standalone mode */
+#define BOOT_STDALONE_RUNNING BIT(21)
+
+/* definitions for boot status register */
+#define BOOT_STATE_MASK (0xffffffff & \
+ ~(FW_LOADER_ACK_SEND_MORE_DATA | \
+ FW_LOADER_ACK_IN_PROGRESS | \
+ BOOT_STDALONE_RUNNING))
+
+#define BOOT_ERR_SHIFT 4
+#define BOOT_ERR_MASK (0xf << BOOT_ERR_SHIFT)
+#define BOOT_PROG_MASK 0xf
+
+#define BROM_STATUS_NOT_RUN 0x2
+#define BROM_NOT_RUN (SRAM_OPEN | BROM_STATUS_NOT_RUN)
+#define BROM_STATUS_COMPLETE 0x6
+#define BROM_RUNNING (SRAM_OPEN | BROM_STATUS_COMPLETE)
+#define BOOT1_STATUS_COMPLETE 0x6
+#define BOOT1_RUNNING (DDR_OPEN | BOOT1_STATUS_COMPLETE)
+#define BOOT2_STATUS_COMPLETE 0x6
+#define BOOT2_RUNNING (FW_LOADER_ACK_RCVD_ALL_DATA | \
+ BOOT2_STATUS_COMPLETE)
+
+/* Boot request for Secure Boot Image (SBI) */
+#define BAR_CODEPUSH_SBI 0x408
+/* 64M mapped to BAR2 */
+#define CODEPUSH_BOOT2_ENTRY 0x60000000
+
+#define BAR_CARD_STATUS 0x410
+
+#define BAR_BOOT1_STDALONE_PROGRESS 0x420
+#define BOOT1_STDALONE_SUCCESS (BIT(13) | BIT(14))
+#define BOOT1_STDALONE_PROGRESS_MASK BOOT1_STDALONE_SUCCESS
+
+#define BAR_METADATA_VERSION 0x440
+#define BAR_OS_UPTIME 0x444
+#define BAR_CHIP_ID 0x448
+#define MAJOR_SOC_REV(_chip_id) (((_chip_id) >> 20) & 0xf)
+
+#define BAR_CARD_TEMPERATURE 0x45c
+
+#define BAR_CARD_VOLTAGE 0x460
+
+#define BAR_CARD_ERR_LOG 0x464
+
+#define BAR_CARD_ERR_MEM 0x468
+
+#define BAR_CARD_PWR_AND_THRE 0x46c
+
+#define BAR_CARD_STATIC_INFO 0x470
+
+#define BAR_INTF_VER 0x47c
+#define BAR_INTF_VER_MAJOR_SHIFT 16
+#define BAR_INTF_VER_MASK 0xffff
+/*
+ * major and minor semantic version numbers supported
+ * Please update as required on interface changes
+ */
+#define SEMANTIC_MAJOR 1
+#define SEMANTIC_MINOR 0
+
+/*
+ * first door bell reg, ie for queue = 0. Only need the first one, as
+ * we will use the queue number to derive the others
+ */
+#define VK_BAR0_REGSEG_DB_BASE 0x484
+#define VK_BAR0_REGSEG_DB_REG_GAP 8 /*
+ * DB register gap,
+ * DB1 at 0x48c and DB2 at 0x494
+ */
+
+/* reset register and specific values */
+#define VK_BAR0_RESET_DB_NUM 3
+#define VK_BAR0_RESET_DB_SOFT 0xffffffff
+#define VK_BAR0_RESET_DB_HARD 0xfffffffd
+#define VK_BAR0_RESET_RAMPDUMP 0xa0000000
+
+#define VK_BAR0_Q_DB_BASE(q_num) (VK_BAR0_REGSEG_DB_BASE + \
+ ((q_num) * VK_BAR0_REGSEG_DB_REG_GAP))
+#define VK_BAR0_RESET_DB_BASE (VK_BAR0_REGSEG_DB_BASE + \
+ (VK_BAR0_RESET_DB_NUM * VK_BAR0_REGSEG_DB_REG_GAP))
+
+#define BAR_BOOTSRC_SELECT 0xc78
+/* BOOTSRC definitions */
+#define BOOTSRC_SOFT_ENABLE BIT(14)
+
+/* Card OS Firmware version size */
+#define BAR_FIRMWARE_TAG_SIZE 50
+#define FIRMWARE_STATUS_PRE_INIT_DONE 0x1f
+
+/*
+ * BAR1
+ */
+
+/* BAR1 message q definition */
+
+/* indicate if msgq ctrl in BAR1 is populated */
+#define VK_BAR1_MSGQ_DEF_RDY 0x60c0
+/* ready marker value for the above location, normal boot2 */
+#define VK_BAR1_MSGQ_RDY_MARKER 0xbeefcafe
+/* ready marker value for the above location, boot2 in diags mode */
+#define VK_BAR1_DIAG_RDY_MARKER 0xdeadcafe
+/* number of msgqs in BAR1 */
+#define VK_BAR1_MSGQ_NR 0x60c4
+/* BAR1 queue control structure offset */
+#define VK_BAR1_MSGQ_CTRL_OFF 0x60c8
+
+/* BAR1 ucode and boot1 version tag */
+#define VK_BAR1_UCODE_VER_TAG 0x6170
+#define VK_BAR1_BOOT1_VER_TAG 0x61b0
+#define VK_BAR1_VER_TAG_SIZE 64
+
+/* Memory to hold the DMA buffer memory address allocated for boot2 download */
+#define VK_BAR1_DMA_BUF_OFF_HI 0x61e0
+#define VK_BAR1_DMA_BUF_OFF_LO (VK_BAR1_DMA_BUF_OFF_HI + 4)
+#define VK_BAR1_DMA_BUF_SZ (VK_BAR1_DMA_BUF_OFF_HI + 8)
+
+/* Scratch memory allocated on host for VK */
+#define VK_BAR1_SCRATCH_OFF_HI 0x61f0
+#define VK_BAR1_SCRATCH_OFF_LO (VK_BAR1_SCRATCH_OFF_HI + 4)
+#define VK_BAR1_SCRATCH_SZ_ADDR (VK_BAR1_SCRATCH_OFF_HI + 8)
+#define VK_BAR1_SCRATCH_DEF_NR_PAGES 32
+
+/* BAR1 DAUTH info */
+#define VK_BAR1_DAUTH_BASE_ADDR 0x6200
+#define VK_BAR1_DAUTH_STORE_SIZE 0x48
+#define VK_BAR1_DAUTH_VALID_SIZE 0x8
+#define VK_BAR1_DAUTH_MAX 4
+#define VK_BAR1_DAUTH_STORE_ADDR(x) \
+ (VK_BAR1_DAUTH_BASE_ADDR + \
+ (x) * (VK_BAR1_DAUTH_STORE_SIZE + VK_BAR1_DAUTH_VALID_SIZE))
+#define VK_BAR1_DAUTH_VALID_ADDR(x) \
+ (VK_BAR1_DAUTH_STORE_ADDR(x) + VK_BAR1_DAUTH_STORE_SIZE)
+
+/* BAR1 SOTP AUTH and REVID info */
+#define VK_BAR1_SOTP_REVID_BASE_ADDR 0x6340
+#define VK_BAR1_SOTP_REVID_SIZE 0x10
+#define VK_BAR1_SOTP_REVID_MAX 2
+#define VK_BAR1_SOTP_REVID_ADDR(x) \
+ (VK_BAR1_SOTP_REVID_BASE_ADDR + (x) * VK_BAR1_SOTP_REVID_SIZE)
+
/* VK device supports a maximum of 3 bars */
#define MAX_BAR 3
@@ -21,9 +198,82 @@ enum pci_barno {
#define BCM_VK_NUM_TTY 2
+/* DAUTH related info */
+struct bcm_vk_dauth_key {
+ char store[VK_BAR1_DAUTH_STORE_SIZE];
+ char valid[VK_BAR1_DAUTH_VALID_SIZE];
+};
+
+struct bcm_vk_dauth_info {
+ struct bcm_vk_dauth_key keys[VK_BAR1_DAUTH_MAX];
+};
+
struct bcm_vk {
struct pci_dev *pdev;
void __iomem *bar[MAX_BAR];
+
+ struct bcm_vk_dauth_info dauth_info;
+
+ int devid; /* dev id allocated */
+
+ struct workqueue_struct *wq_thread;
+ struct work_struct wq_work; /* work queue for deferred job */
+ unsigned long wq_offload[1]; /* various flags on wq requested */
+ void *tdma_vaddr; /* test dma segment virtual addr */
+ dma_addr_t tdma_addr; /* test dma segment bus addr */
+};
+
+/* wq offload work items bits definitions */
+enum bcm_vk_wq_offload_flags {
+ BCM_VK_WQ_DWNLD_PEND = 0,
+ BCM_VK_WQ_DWNLD_AUTO = 1,
};
+/*
+ * check if the PCIe interface is down on a read. Use it when it is
+ * certain that val should never be all ones.
+ */
+#define BCM_VK_INTF_IS_DOWN(val) ((val) == 0xffffffff)
+
+static inline u32 vkread32(struct bcm_vk *vk, enum pci_barno bar, u64 offset)
+{
+ return readl(vk->bar[bar] + offset);
+}
+
+static inline void vkwrite32(struct bcm_vk *vk,
+ u32 value,
+ enum pci_barno bar,
+ u64 offset)
+{
+ writel(value, vk->bar[bar] + offset);
+}
+
+static inline u8 vkread8(struct bcm_vk *vk, enum pci_barno bar, u64 offset)
+{
+ return readb(vk->bar[bar] + offset);
+}
+
+static inline void vkwrite8(struct bcm_vk *vk,
+ u8 value,
+ enum pci_barno bar,
+ u64 offset)
+{
+ writeb(value, vk->bar[bar] + offset);
+}
+
+static inline bool bcm_vk_msgq_marker_valid(struct bcm_vk *vk)
+{
+ u32 rdy_marker = 0;
+ u32 fw_status;
+
+ fw_status = vkread32(vk, BAR_0, VK_BAR_FWSTS);
+
+ if ((fw_status & VK_FWSTS_READY) == VK_FWSTS_READY)
+ rdy_marker = vkread32(vk, BAR_1, VK_BAR1_MSGQ_DEF_RDY);
+
+ return (rdy_marker == VK_BAR1_MSGQ_RDY_MARKER);
+}
+
+int bcm_vk_auto_load_all_images(struct bcm_vk *vk);
+
#endif
diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
index 14afe2477b97..3fbfcd9b9de8 100644
--- a/drivers/misc/bcm-vk/bcm_vk_dev.c
+++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
@@ -3,16 +3,72 @@
* Copyright 2018-2020 Broadcom.
*/
+#include <linux/delay.h>
#include <linux/dma-mapping.h>
+#include <linux/firmware.h>
+#include <linux/fs.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pci_regs.h>
+#include <uapi/linux/misc/bcm_vk.h>
#include "bcm_vk.h"
#define PCI_DEVICE_ID_VALKYRIE 0x5e87
#define PCI_DEVICE_ID_VIPER 0x5e88
+static DEFINE_IDA(bcm_vk_ida);
+
+enum soc_idx {
+ VALKYRIE_A0 = 0,
+ VALKYRIE_B0,
+ VIPER,
+ VK_IDX_INVALID
+};
+
+enum img_idx {
+ IMG_PRI = 0,
+ IMG_SEC,
+ IMG_PER_TYPE_MAX
+};
+
+struct load_image_entry {
+ const u32 image_type;
+ const char *image_name[IMG_PER_TYPE_MAX];
+};
+
+#define NUM_BOOT_STAGES 2
+/* default firmware images names */
+static const struct load_image_entry image_tab[][NUM_BOOT_STAGES] = {
+ [VALKYRIE_A0] = {
+ {VK_IMAGE_TYPE_BOOT1, {"vk_a0-boot1.bin", "vk-boot1.bin"}},
+ {VK_IMAGE_TYPE_BOOT2, {"vk_a0-boot2.bin", "vk-boot2.bin"}}
+ },
+ [VALKYRIE_B0] = {
+ {VK_IMAGE_TYPE_BOOT1, {"vk_b0-boot1.bin", "vk-boot1.bin"}},
+ {VK_IMAGE_TYPE_BOOT2, {"vk_b0-boot2.bin", "vk-boot2.bin"}}
+ },
+
+ [VIPER] = {
+ {VK_IMAGE_TYPE_BOOT1, {"vp-boot1.bin", ""}},
+ {VK_IMAGE_TYPE_BOOT2, {"vp-boot2.bin", ""}}
+ },
+};
+
+/* Location of memory base addresses of interest in BAR1 */
+/* Load Boot1 to start of ITCM */
+#define BAR1_CODEPUSH_BASE_BOOT1 0x100000
+
+/* Allow minimum 1s for Load Image timeout responses */
+#define LOAD_IMAGE_TIMEOUT_MS (1 * MSEC_PER_SEC)
+
+/* Image startup timeouts */
+#define BOOT1_STARTUP_TIMEOUT_MS (5 * MSEC_PER_SEC)
+#define BOOT2_STARTUP_TIMEOUT_MS (10 * MSEC_PER_SEC)
+
+/* 1ms wait for checking the transfer complete status */
+#define TXFR_COMPLETE_TIMEOUT_MS 1
+
/* MSIX usages */
#define VK_MSIX_MSGQ_MAX 3
#define VK_MSIX_NOTF_MAX 1
@@ -24,13 +80,565 @@
/* Number of bits set in DMA mask */
#define BCM_VK_DMA_BITS 64
+/* Ucode boot wait time */
+#define BCM_VK_UCODE_BOOT_US (100 * USEC_PER_MSEC)
+/* 50% margin */
+#define BCM_VK_UCODE_BOOT_MAX_US ((BCM_VK_UCODE_BOOT_US * 3) >> 1)
+
+/* deinit time for the card os after receiving doorbell */
+#define BCM_VK_DEINIT_TIME_MS (2 * MSEC_PER_SEC)
+
+/*
+ * module parameters
+ */
+static bool auto_load = true;
+module_param(auto_load, bool, 0444);
+MODULE_PARM_DESC(auto_load,
+ "Load images automatically at PCIe probe time.\n");
+static uint nr_scratch_pages = VK_BAR1_SCRATCH_DEF_NR_PAGES;
+module_param(nr_scratch_pages, uint, 0444);
+MODULE_PARM_DESC(nr_scratch_pages,
+ "Number of pre allocated DMAable coherent pages.\n");
+
+static int bcm_vk_intf_ver_chk(struct bcm_vk *vk)
+{
+ struct device *dev = &vk->pdev->dev;
+ u32 reg;
+ u16 major, minor;
+ int ret = 0;
+
+ /* read interface register */
+ reg = vkread32(vk, BAR_0, BAR_INTF_VER);
+ major = (reg >> BAR_INTF_VER_MAJOR_SHIFT) & BAR_INTF_VER_MASK;
+ minor = reg & BAR_INTF_VER_MASK;
+
+ /*
+ * if the major number is 0, this is a pre-release and is allowed
+ * to continue; else, check versions accordingly
+ */
+ if (!major) {
+ dev_warn(dev, "Pre-release major.minor=%d.%d - drv %d.%d\n",
+ major, minor, SEMANTIC_MAJOR, SEMANTIC_MINOR);
+ } else if (major != SEMANTIC_MAJOR) {
+ dev_err(dev,
+ "Intf major.minor=%d.%d rejected - drv %d.%d\n",
+ major, minor, SEMANTIC_MAJOR, SEMANTIC_MINOR);
+ ret = -EPFNOSUPPORT;
+ } else {
+ dev_dbg(dev,
+ "Intf major.minor=%d.%d passed - drv %d.%d\n",
+ major, minor, SEMANTIC_MAJOR, SEMANTIC_MINOR);
+ }
+ return ret;
+}
+
+static inline int bcm_vk_wait(struct bcm_vk *vk, enum pci_barno bar,
+ u64 offset, u32 mask, u32 value,
+ unsigned long timeout_ms)
+{
+ struct device *dev = &vk->pdev->dev;
+ unsigned long start_time;
+ unsigned long timeout;
+ u32 rd_val, boot_status;
+
+ start_time = jiffies;
+ timeout = start_time + msecs_to_jiffies(timeout_ms);
+
+ do {
+ rd_val = vkread32(vk, bar, offset);
+ dev_dbg(dev, "BAR%d Offset=0x%llx: 0x%x\n",
+ bar, offset, rd_val);
+
+ /* check for any boot err condition */
+ boot_status = vkread32(vk, BAR_0, BAR_BOOT_STATUS);
+ if (boot_status & BOOT_ERR_MASK) {
+ dev_err(dev, "Boot Err 0x%x, progress 0x%x after %d ms\n",
+ (boot_status & BOOT_ERR_MASK) >> BOOT_ERR_SHIFT,
+ boot_status & BOOT_PROG_MASK,
+ jiffies_to_msecs(jiffies - start_time));
+ return -EFAULT;
+ }
+
+ if (time_after(jiffies, timeout))
+ return -ETIMEDOUT;
+
+ cpu_relax();
+ cond_resched();
+ } while ((rd_val & mask) != value);
+
+ return 0;
+}
+
+static int bcm_vk_sync_card_info(struct bcm_vk *vk)
+{
+ u32 rdy_marker = vkread32(vk, BAR_1, VK_BAR1_MSGQ_DEF_RDY);
+
+ /* check for marker, but allow diags mode to skip sync */
+ if (!bcm_vk_msgq_marker_valid(vk))
+ return (rdy_marker == VK_BAR1_DIAG_RDY_MARKER ? 0 : -EINVAL);
+
+ /*
+ * Write down scratch addr which is used for DMA. For
+ * signed part, BAR1 is accessible only after boot2 has come
+ * up
+ */
+ if (vk->tdma_addr) {
+ vkwrite32(vk, (u64)vk->tdma_addr >> 32, BAR_1,
+ VK_BAR1_SCRATCH_OFF_HI);
+ vkwrite32(vk, (u32)vk->tdma_addr, BAR_1,
+ VK_BAR1_SCRATCH_OFF_LO);
+ vkwrite32(vk, nr_scratch_pages * PAGE_SIZE, BAR_1,
+ VK_BAR1_SCRATCH_SZ_ADDR);
+ }
+ return 0;
+}
+
+static void bcm_vk_buf_notify(struct bcm_vk *vk, void *bufp,
+ dma_addr_t host_buf_addr, u32 buf_size)
+{
+ /* update the dma address to the card */
+ vkwrite32(vk, (u64)host_buf_addr >> 32, BAR_1,
+ VK_BAR1_DMA_BUF_OFF_HI);
+ vkwrite32(vk, (u32)host_buf_addr, BAR_1,
+ VK_BAR1_DMA_BUF_OFF_LO);
+ vkwrite32(vk, buf_size, BAR_1, VK_BAR1_DMA_BUF_SZ);
+}
+
+static int bcm_vk_load_image_by_type(struct bcm_vk *vk, u32 load_type,
+ const char *filename)
+{
+ struct device *dev = &vk->pdev->dev;
+ const struct firmware *fw = NULL;
+ void *bufp = NULL;
+ size_t max_buf, offset;
+ int ret;
+ u64 offset_codepush;
+ u32 codepush;
+ u32 value;
+ dma_addr_t boot_dma_addr;
+ bool is_stdalone;
+
+ if (load_type == VK_IMAGE_TYPE_BOOT1) {
+ /*
+ * After POR, enable VK soft BOOTSRC so the BootROM does not clear
+ * the pushed image (the TCM memories).
+ */
+ value = vkread32(vk, BAR_0, BAR_BOOTSRC_SELECT);
+ value |= BOOTSRC_SOFT_ENABLE;
+ vkwrite32(vk, value, BAR_0, BAR_BOOTSRC_SELECT);
+
+ codepush = CODEPUSH_BOOTSTART + CODEPUSH_BOOT1_ENTRY;
+ offset_codepush = BAR_CODEPUSH_SBL;
+
+ /* Write a 1 to request SRAM open bit */
+ vkwrite32(vk, CODEPUSH_BOOTSTART, BAR_0, offset_codepush);
+
+ /* Wait for VK to respond */
+ ret = bcm_vk_wait(vk, BAR_0, BAR_BOOT_STATUS, SRAM_OPEN,
+ SRAM_OPEN, LOAD_IMAGE_TIMEOUT_MS);
+ if (ret < 0) {
+ dev_err(dev, "boot1 wait SRAM err - ret(%d)\n", ret);
+ goto err_buf_out;
+ }
+
+ max_buf = SZ_256K;
+ bufp = dma_alloc_coherent(dev,
+ max_buf,
+ &boot_dma_addr, GFP_KERNEL);
+ if (!bufp) {
+ dev_err(dev, "Error allocating 0x%zx\n", max_buf);
+ ret = -ENOMEM;
+ goto err_buf_out;
+ }
+ } else if (load_type == VK_IMAGE_TYPE_BOOT2) {
+ codepush = CODEPUSH_BOOT2_ENTRY;
+ offset_codepush = BAR_CODEPUSH_SBI;
+
+ /* Wait for VK to respond */
+ ret = bcm_vk_wait(vk, BAR_0, BAR_BOOT_STATUS, DDR_OPEN,
+ DDR_OPEN, LOAD_IMAGE_TIMEOUT_MS);
+ if (ret < 0) {
+ dev_err(dev, "boot2 wait DDR open error - ret(%d)\n",
+ ret);
+ goto err_buf_out;
+ }
+
+ max_buf = SZ_4M;
+ bufp = dma_alloc_coherent(dev,
+ max_buf,
+ &boot_dma_addr, GFP_KERNEL);
+ if (!bufp) {
+ dev_err(dev, "Error allocating 0x%zx\n", max_buf);
+ ret = -ENOMEM;
+ goto err_buf_out;
+ }
+
+ bcm_vk_buf_notify(vk, bufp, boot_dma_addr, max_buf);
+ } else {
+ dev_err(dev, "Error invalid image type 0x%x\n", load_type);
+ ret = -EINVAL;
+ goto err_buf_out;
+ }
+
+ offset = 0;
+ ret = request_partial_firmware_into_buf(&fw, filename, dev,
+ bufp, max_buf, offset);
+ if (ret) {
+ dev_err(dev, "Error %d requesting firmware file: %s\n",
+ ret, filename);
+ goto err_firmware_out;
+ }
+ dev_dbg(dev, "size=0x%zx\n", fw->size);
+ if (load_type == VK_IMAGE_TYPE_BOOT1)
+ memcpy_toio(vk->bar[BAR_1] + BAR1_CODEPUSH_BASE_BOOT1,
+ bufp,
+ fw->size);
+
+ dev_dbg(dev, "Signaling 0x%x to 0x%llx\n", codepush, offset_codepush);
+ vkwrite32(vk, codepush, BAR_0, offset_codepush);
+
+ if (load_type == VK_IMAGE_TYPE_BOOT1) {
+ u32 boot_status;
+
+ /* wait until done */
+ ret = bcm_vk_wait(vk, BAR_0, BAR_BOOT_STATUS,
+ BOOT1_RUNNING,
+ BOOT1_RUNNING,
+ BOOT1_STARTUP_TIMEOUT_MS);
+
+ boot_status = vkread32(vk, BAR_0, BAR_BOOT_STATUS);
+ is_stdalone = !BCM_VK_INTF_IS_DOWN(boot_status) &&
+ (boot_status & BOOT_STDALONE_RUNNING);
+ if (ret && !is_stdalone) {
+ dev_err(dev,
+ "Timeout %ld ms waiting for boot1 to come up - ret(%d)\n",
+ BOOT1_STARTUP_TIMEOUT_MS, ret);
+ goto err_firmware_out;
+ } else if (is_stdalone) {
+ u32 reg;
+
+ reg = vkread32(vk, BAR_0, BAR_BOOT1_STDALONE_PROGRESS);
+ if ((reg & BOOT1_STDALONE_PROGRESS_MASK) ==
+ BOOT1_STDALONE_SUCCESS) {
+ dev_info(dev, "Boot1 standalone success\n");
+ ret = 0;
+ } else {
+ dev_err(dev, "Timeout %ld ms - Boot1 standalone failure\n",
+ BOOT1_STARTUP_TIMEOUT_MS);
+ ret = -EINVAL;
+ goto err_firmware_out;
+ }
+ }
+ } else if (load_type == VK_IMAGE_TYPE_BOOT2) {
+ unsigned long timeout;
+
+ timeout = jiffies + msecs_to_jiffies(LOAD_IMAGE_TIMEOUT_MS);
+
+ /* Loop to send the image to VK in chunks of at most max_buf at a time */
+ do {
+ /*
+ * Check for the ack from the card. When the ack is
+ * received, all the data has been received by the
+ * card, so exit the loop.
+ */
+ ret = bcm_vk_wait(vk, BAR_0, BAR_BOOT_STATUS,
+ FW_LOADER_ACK_RCVD_ALL_DATA,
+ FW_LOADER_ACK_RCVD_ALL_DATA,
+ TXFR_COMPLETE_TIMEOUT_MS);
+ if (ret == 0) {
+ dev_dbg(dev, "Exit boot2 download\n");
+ break;
+ } else if (ret == -EFAULT) {
+ dev_err(dev, "Error detected during ACK waiting");
+ goto err_firmware_out;
+ }
+
+ /* exit the loop, if there is no response from card */
+ if (time_after(jiffies, timeout)) {
+ dev_err(dev, "Error. No reply from card\n");
+ ret = -ETIMEDOUT;
+ goto err_firmware_out;
+ }
+
+ /* Wait for VK to open BAR space to copy new data */
+ ret = bcm_vk_wait(vk, BAR_0, offset_codepush,
+ codepush, 0,
+ TXFR_COMPLETE_TIMEOUT_MS);
+ if (ret == 0) {
+ offset += max_buf;
+ ret = request_partial_firmware_into_buf
+ (&fw,
+ filename,
+ dev, bufp,
+ max_buf,
+ offset);
+ if (ret) {
+ dev_err(dev,
+ "Error %d requesting firmware file: %s offset: 0x%zx\n",
+ ret, filename, offset);
+ goto err_firmware_out;
+ }
+ dev_dbg(dev, "size=0x%zx\n", fw->size);
+ dev_dbg(dev, "Signaling 0x%x to 0x%llx\n",
+ codepush, offset_codepush);
+ vkwrite32(vk, codepush, BAR_0, offset_codepush);
+ /* reload timeout after every codepush */
+ timeout = jiffies +
+ msecs_to_jiffies(LOAD_IMAGE_TIMEOUT_MS);
+ } else if (ret == -EFAULT) {
+ dev_err(dev, "Error detected waiting for transfer\n");
+ goto err_firmware_out;
+ }
+ } while (1);
+
+ /* wait for fw status bits to indicate app ready */
+ ret = bcm_vk_wait(vk, BAR_0, VK_BAR_FWSTS,
+ VK_FWSTS_READY,
+ VK_FWSTS_READY,
+ BOOT2_STARTUP_TIMEOUT_MS);
+ if (ret < 0) {
+ dev_err(dev, "Boot2 not ready - ret(%d)\n", ret);
+ goto err_firmware_out;
+ }
+
+ is_stdalone = vkread32(vk, BAR_0, BAR_BOOT_STATUS) &
+ BOOT_STDALONE_RUNNING;
+ if (!is_stdalone) {
+ ret = bcm_vk_intf_ver_chk(vk);
+ if (ret) {
+ dev_err(dev, "failure in intf version check\n");
+ goto err_firmware_out;
+ }
+
+ /* sync & channel other info */
+ ret = bcm_vk_sync_card_info(vk);
+ if (ret) {
+ dev_err(dev, "Syncing Card Info failure\n");
+ goto err_firmware_out;
+ }
+ }
+ }
+
+err_firmware_out:
+ release_firmware(fw);
+
+err_buf_out:
+ if (bufp)
+ dma_free_coherent(dev, max_buf, bufp, boot_dma_addr);
+
+ return ret;
+}
+
+static u32 bcm_vk_next_boot_image(struct bcm_vk *vk)
+{
+ u32 boot_status;
+ u32 fw_status;
+ u32 load_type = 0; /* default for unknown */
+
+ boot_status = vkread32(vk, BAR_0, BAR_BOOT_STATUS);
+ fw_status = vkread32(vk, BAR_0, VK_BAR_FWSTS);
+
+ if (!BCM_VK_INTF_IS_DOWN(boot_status) && (boot_status & SRAM_OPEN))
+ load_type = VK_IMAGE_TYPE_BOOT1;
+ else if (boot_status == BOOT1_RUNNING)
+ load_type = VK_IMAGE_TYPE_BOOT2;
+
+ /* Log status so that we know different stages */
+ dev_info(&vk->pdev->dev,
+ "boot-status value for next image: 0x%x : fw-status 0x%x\n",
+ boot_status, fw_status);
+
+ return load_type;
+}
+
+static enum soc_idx get_soc_idx(struct bcm_vk *vk)
+{
+ struct pci_dev *pdev = vk->pdev;
+ enum soc_idx idx = VK_IDX_INVALID;
+ u32 rev;
+ static enum soc_idx const vk_soc_tab[] = { VALKYRIE_A0, VALKYRIE_B0 };
+
+ switch (pdev->device) {
+ case PCI_DEVICE_ID_VALKYRIE:
+ /* get the chip id to decide sub-class */
+ rev = MAJOR_SOC_REV(vkread32(vk, BAR_0, BAR_CHIP_ID));
+ if (rev < ARRAY_SIZE(vk_soc_tab)) {
+ idx = vk_soc_tab[rev];
+ } else {
+ /* Default to A0 firmware for all other chip revs */
+ idx = VALKYRIE_A0;
+ dev_warn(&pdev->dev,
+ "Rev %d not in image lookup table, default to idx=%d\n",
+ rev, idx);
+ }
+ break;
+
+ case PCI_DEVICE_ID_VIPER:
+ idx = VIPER;
+ break;
+
+ default:
+ dev_err(&pdev->dev, "no images for 0x%x\n", pdev->device);
+ }
+ return idx;
+}
+
+static const char *get_load_fw_name(struct bcm_vk *vk,
+ const struct load_image_entry *entry)
+{
+ const struct firmware *fw;
+ struct device *dev = &vk->pdev->dev;
+ int ret;
+ unsigned long dummy;
+ int i;
+
+ for (i = 0; i < IMG_PER_TYPE_MAX; i++) {
+ fw = NULL;
+ ret = request_partial_firmware_into_buf(&fw,
+ entry->image_name[i],
+ dev, &dummy,
+ sizeof(dummy),
+ 0);
+ release_firmware(fw);
+ if (!ret)
+ return entry->image_name[i];
+ }
+ return NULL;
+}
+
+int bcm_vk_auto_load_all_images(struct bcm_vk *vk)
+{
+ int i, ret = -1;
+ enum soc_idx idx;
+ struct device *dev = &vk->pdev->dev;
+ u32 curr_type;
+ const char *curr_name;
+
+ idx = get_soc_idx(vk);
+ if (idx == VK_IDX_INVALID)
+ goto auto_load_all_exit;
+
+ /* log a message to know the relative loading order */
+ dev_dbg(dev, "Load All for device %d\n", vk->devid);
+
+ for (i = 0; i < NUM_BOOT_STAGES; i++) {
+ curr_type = image_tab[idx][i].image_type;
+ if (bcm_vk_next_boot_image(vk) == curr_type) {
+ curr_name = get_load_fw_name(vk, &image_tab[idx][i]);
+ if (!curr_name) {
+ dev_err(dev, "No suitable firmware exists for type %d",
+ curr_type);
+ ret = -ENOENT;
+ goto auto_load_all_exit;
+ }
+ ret = bcm_vk_load_image_by_type(vk, curr_type,
+ curr_name);
+ dev_info(dev, "Auto load %s, ret %d\n",
+ curr_name, ret);
+
+ if (ret) {
+ dev_err(dev, "Error loading default %s\n",
+ curr_name);
+ goto auto_load_all_exit;
+ }
+ }
+ }
+
+auto_load_all_exit:
+ return ret;
+}
+
+static int bcm_vk_trigger_autoload(struct bcm_vk *vk)
+{
+ if (test_and_set_bit(BCM_VK_WQ_DWNLD_PEND, vk->wq_offload) != 0)
+ return -EPERM;
+
+ set_bit(BCM_VK_WQ_DWNLD_AUTO, vk->wq_offload);
+ queue_work(vk->wq_thread, &vk->wq_work);
+
+ return 0;
+}
+
+/*
+ * deferred work queue for auto download.
+ */
+static void bcm_vk_wq_handler(struct work_struct *work)
+{
+ struct bcm_vk *vk = container_of(work, struct bcm_vk, wq_work);
+
+ if (test_bit(BCM_VK_WQ_DWNLD_AUTO, vk->wq_offload)) {
+ bcm_vk_auto_load_all_images(vk);
+
+ /*
+ * at the end of operation, clear AUTO bit and pending
+ * bit
+ */
+ clear_bit(BCM_VK_WQ_DWNLD_AUTO, vk->wq_offload);
+ clear_bit(BCM_VK_WQ_DWNLD_PEND, vk->wq_offload);
+ }
+}
+
+static void bcm_to_v_reset_doorbell(struct bcm_vk *vk, u32 db_val)
+{
+ vkwrite32(vk, db_val, BAR_0, VK_BAR0_RESET_DB_BASE);
+}
+
+static int bcm_vk_trigger_reset(struct bcm_vk *vk)
+{
+ u32 i;
+ u32 value, boot_status;
+
+ /* make tag '\0' terminated */
+ vkwrite32(vk, 0, BAR_1, VK_BAR1_BOOT1_VER_TAG);
+
+ for (i = 0; i < VK_BAR1_DAUTH_MAX; i++) {
+ vkwrite32(vk, 0, BAR_1, VK_BAR1_DAUTH_STORE_ADDR(i));
+ vkwrite32(vk, 0, BAR_1, VK_BAR1_DAUTH_VALID_ADDR(i));
+ }
+ for (i = 0; i < VK_BAR1_SOTP_REVID_MAX; i++)
+ vkwrite32(vk, 0, BAR_1, VK_BAR1_SOTP_REVID_ADDR(i));
+
+ /*
+ * When a boot request fails, the CODE_PUSH_OFFSET stays persistent,
+ * allowing us to debug the failure. When we call reset, we should
+ * clear CODE_PUSH_OFFSET so the ROM does not execute the failed
+ * boot again (and fail again) and instead waits for a new
+ * codepush. Also, if the previous boot encountered an error, the
+ * entry values need to be cleared.
+ */
+ boot_status = vkread32(vk, BAR_0, BAR_BOOT_STATUS);
+ if (boot_status & BOOT_ERR_MASK) {
+ dev_info(&vk->pdev->dev,
+ "Card in boot error 0x%x, clear CODEPUSH val\n",
+ boot_status);
+ value = 0;
+ } else {
+ value = vkread32(vk, BAR_0, BAR_CODEPUSH_SBL);
+ value &= CODEPUSH_MASK;
+ }
+ vkwrite32(vk, value, BAR_0, BAR_CODEPUSH_SBL);
+
+ /* reset fw_status with proper reason, and press db */
+ vkwrite32(vk, VK_FWSTS_RESET_MBOX_DB, BAR_0, VK_BAR_FWSTS);
+ bcm_to_v_reset_doorbell(vk, VK_BAR0_RESET_DB_SOFT);
+
+ /* clear other necessary registers records */
+ vkwrite32(vk, 0, BAR_0, BAR_OS_UPTIME);
+ vkwrite32(vk, 0, BAR_0, BAR_INTF_VER);
+
+ return 0;
+}
+
static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
int err;
int i;
+ int id;
int irq;
+ char name[20];
struct bcm_vk *vk;
struct device *dev = &pdev->dev;
+ u32 boot_status;
vk = kzalloc(sizeof(*vk), GFP_KERNEL);
if (!vk)
@@ -57,6 +665,18 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_disable_pdev;
}
+ /* The tdma is a scratch area for DMA testing. */
+ if (nr_scratch_pages) {
+ vk->tdma_vaddr = dma_alloc_coherent
+ (dev,
+ nr_scratch_pages * PAGE_SIZE,
+ &vk->tdma_addr, GFP_KERNEL);
+ if (!vk->tdma_vaddr) {
+ err = -ENOMEM;
+ goto err_disable_pdev;
+ }
+ }
+
pci_set_master(pdev);
pci_set_drvdata(pdev, vk);
@@ -85,8 +705,53 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
}
}
+ id = ida_simple_get(&bcm_vk_ida, 0, 0, GFP_KERNEL);
+ if (id < 0) {
+ err = id;
+ dev_err(dev, "unable to get id\n");
+ goto err_iounmap;
+ }
+
+ vk->devid = id;
+ snprintf(name, sizeof(name), DRV_MODULE_NAME ".%d", id);
+
+ INIT_WORK(&vk->wq_work, bcm_vk_wq_handler);
+
+ /* create dedicated workqueue */
+ vk->wq_thread = create_singlethread_workqueue(name);
+ if (!vk->wq_thread) {
+ dev_err(dev, "Fail to create workqueue thread\n");
+ err = -ENOMEM;
+ goto err_ida_remove;
+ }
+
+ /* sync other info */
+ bcm_vk_sync_card_info(vk);
+
+ /*
+ * Let's trigger an auto download. We don't want to do it inline here,
+ * because probing is not supposed to block for a long time.
+ */
+ boot_status = vkread32(vk, BAR_0, BAR_BOOT_STATUS);
+ if (auto_load) {
+ if ((boot_status & BOOT_STATE_MASK) == BROM_RUNNING) {
+ if (bcm_vk_trigger_autoload(vk))
+ goto err_destroy_workqueue;
+ } else {
+ dev_err(dev,
+ "Auto-load skipped - BROM not in proper state (0x%x)\n",
+ boot_status);
+ }
+ }
+
return 0;
+err_destroy_workqueue:
+ destroy_workqueue(vk->wq_thread);
+
+err_ida_remove:
+ ida_simple_remove(&bcm_vk_ida, id);
+
err_iounmap:
for (i = 0; i < MAX_BAR; i++) {
if (vk->bar[i])
@@ -95,6 +760,10 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
pci_release_regions(pdev);
err_disable_pdev:
+ if (vk->tdma_vaddr)
+ dma_free_coherent(&pdev->dev, nr_scratch_pages * PAGE_SIZE,
+ vk->tdma_vaddr, vk->tdma_addr);
+
pci_free_irq_vectors(pdev);
pci_disable_device(pdev);
pci_dev_put(pdev);
@@ -110,6 +779,22 @@ static void bcm_vk_remove(struct pci_dev *pdev)
int i;
struct bcm_vk *vk = pci_get_drvdata(pdev);
+ /*
+ * Trigger a reset to the card and wait long enough for the UCODE to
+ * rerun, which re-initializes the card into its default state.
+ * This ensures that when the driver is re-enumerated it will start
+ * from a completely clean state.
+ */
+ bcm_vk_trigger_reset(vk);
+ usleep_range(BCM_VK_UCODE_BOOT_US, BCM_VK_UCODE_BOOT_MAX_US);
+
+ if (vk->tdma_vaddr)
+ dma_free_coherent(&pdev->dev, nr_scratch_pages * PAGE_SIZE,
+ vk->tdma_vaddr, vk->tdma_addr);
+
+ cancel_work_sync(&vk->wq_work);
+ destroy_workqueue(vk->wq_thread);
+
for (i = 0; i < MAX_BAR; i++) {
if (vk->bar[i])
pci_iounmap(pdev, vk->bar[i]);
@@ -120,6 +805,38 @@ static void bcm_vk_remove(struct pci_dev *pdev)
pci_disable_device(pdev);
}
+static void bcm_vk_shutdown(struct pci_dev *pdev)
+{
+ struct bcm_vk *vk = pci_get_drvdata(pdev);
+ u32 reg, boot_stat;
+
+ reg = vkread32(vk, BAR_0, BAR_BOOT_STATUS);
+ boot_stat = reg & BOOT_STATE_MASK;
+
+ if (boot_stat == BOOT1_RUNNING) {
+ /* simply trigger a reset interrupt to park it */
+ bcm_vk_trigger_reset(vk);
+ } else if (boot_stat == BROM_NOT_RUN) {
+ int err;
+ u16 lnksta;
+
+ /*
+ * The boot status only reflects the boot condition since the last
+ * reset. As the ucode runs only once to configure PCIe, if multiple
+ * resets happen, we lose track of whether the ucode has run or not.
+ * Here, read the current link speed and use that to sync up the
+ * boot status properly, so that on the next boot-up it has the
+ * proper state to start autoload with.
+ */
+ err = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
+ if (!err &&
+ (lnksta & PCI_EXP_LNKSTA_CLS) != PCI_EXP_LNKSTA_CLS_2_5GB) {
+ reg |= BROM_STATUS_COMPLETE;
+ vkwrite32(vk, reg, BAR_0, BAR_BOOT_STATUS);
+ }
+ }
+}
+
static const struct pci_device_id bcm_vk_ids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_VALKYRIE), },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_VIPER), },
@@ -132,6 +849,7 @@ static struct pci_driver pci_driver = {
.id_table = bcm_vk_ids,
.probe = bcm_vk_probe,
.remove = bcm_vk_remove,
+ .shutdown = bcm_vk_shutdown,
};
module_pci_driver(pci_driver);
--
2.17.1
Add misc device base support to create and remove the devnode.
Additional misc functions for open/read/write/release/ioctl/sysfs, etc.
will be added in follow-on commits to allow for individual review.
Co-developed-by: Desmond Yan <[email protected]>
Signed-off-by: Desmond Yan <[email protected]>
Signed-off-by: Scott Branden <[email protected]>
---
drivers/misc/bcm-vk/bcm_vk.h | 2 ++
drivers/misc/bcm-vk/bcm_vk_dev.c | 36 +++++++++++++++++++++++++++++++-
2 files changed, 37 insertions(+), 1 deletion(-)
diff --git a/drivers/misc/bcm-vk/bcm_vk.h b/drivers/misc/bcm-vk/bcm_vk.h
index c4fb61a84e41..0a366db693c8 100644
--- a/drivers/misc/bcm-vk/bcm_vk.h
+++ b/drivers/misc/bcm-vk/bcm_vk.h
@@ -7,6 +7,7 @@
#define BCM_VK_H
#include <linux/firmware.h>
+#include <linux/miscdevice.h>
#include <linux/pci.h>
#include <linux/sched/signal.h>
@@ -214,6 +215,7 @@ struct bcm_vk {
struct bcm_vk_dauth_info dauth_info;
+ struct miscdevice miscdev;
int devid; /* dev id allocated */
struct workqueue_struct *wq_thread;
diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
index 3fbfcd9b9de8..c7bedd975609 100644
--- a/drivers/misc/bcm-vk/bcm_vk_dev.c
+++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
@@ -7,6 +7,7 @@
#include <linux/dma-mapping.h>
#include <linux/firmware.h>
#include <linux/fs.h>
+#include <linux/idr.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pci_regs.h>
@@ -638,6 +639,7 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
char name[20];
struct bcm_vk *vk;
struct device *dev = &pdev->dev;
+ struct miscdevice *misc_device;
u32 boot_status;
vk = kzalloc(sizeof(*vk), GFP_KERNEL);
@@ -714,6 +716,19 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
vk->devid = id;
snprintf(name, sizeof(name), DRV_MODULE_NAME ".%d", id);
+ misc_device = &vk->miscdev;
+ misc_device->minor = MISC_DYNAMIC_MINOR;
+ misc_device->name = kstrdup(name, GFP_KERNEL);
+ if (!misc_device->name) {
+ err = -ENOMEM;
+ goto err_ida_remove;
+ }
+
+ err = misc_register(misc_device);
+ if (err) {
+ dev_err(dev, "failed to register device\n");
+ goto err_kfree_name;
+ }
INIT_WORK(&vk->wq_work, bcm_vk_wq_handler);
@@ -722,7 +737,7 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (!vk->wq_thread) {
dev_err(dev, "Fail to create workqueue thread\n");
err = -ENOMEM;
- goto err_ida_remove;
+ goto err_misc_deregister;
}
/* sync other info */
@@ -744,11 +759,20 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
}
}
+ dev_dbg(dev, "BCM-VK:%u created\n", id);
+
return 0;
err_destroy_workqueue:
destroy_workqueue(vk->wq_thread);
+err_misc_deregister:
+ misc_deregister(misc_device);
+
+err_kfree_name:
+ kfree(misc_device->name);
+ misc_device->name = NULL;
+
err_ida_remove:
ida_simple_remove(&bcm_vk_ida, id);
@@ -778,6 +802,7 @@ static void bcm_vk_remove(struct pci_dev *pdev)
{
int i;
struct bcm_vk *vk = pci_get_drvdata(pdev);
+ struct miscdevice *misc_device = &vk->miscdev;
/*
* Trigger a reset to the card and wait long enough for the UCODE to
@@ -792,6 +817,13 @@ static void bcm_vk_remove(struct pci_dev *pdev)
dma_free_coherent(&pdev->dev, nr_scratch_pages * PAGE_SIZE,
vk->tdma_vaddr, vk->tdma_addr);
+ /* remove if name is set which means misc dev registered */
+ if (misc_device->name) {
+ misc_deregister(misc_device);
+ kfree(misc_device->name);
+ ida_simple_remove(&bcm_vk_ida, vk->devid);
+ }
+
cancel_work_sync(&vk->wq_work);
destroy_workqueue(vk->wq_thread);
@@ -800,6 +832,8 @@ static void bcm_vk_remove(struct pci_dev *pdev)
pci_iounmap(pdev, vk->bar[i]);
}
+ dev_dbg(&pdev->dev, "BCM-VK:%d released\n", vk->devid);
+
pci_release_regions(pdev);
pci_free_irq_vectors(pdev);
pci_disable_device(pdev);
--
2.17.1
Pass down an interrupt to the card on host panic or reboot so
that the card can take appropriate action to perform a clean reset.
This uses a kernel notifier block either directly (registered on the
panic list) or implicitly (a shutdown method added for the PCI device).
Co-developed-by: Desmond Yan <[email protected]>
Signed-off-by: Desmond Yan <[email protected]>
Signed-off-by: Scott Branden <[email protected]>
---
drivers/misc/bcm-vk/bcm_vk.h | 2 ++
drivers/misc/bcm-vk/bcm_vk_dev.c | 29 ++++++++++++++++++++++++++++-
2 files changed, 30 insertions(+), 1 deletion(-)
diff --git a/drivers/misc/bcm-vk/bcm_vk.h b/drivers/misc/bcm-vk/bcm_vk.h
index 0a366db693c8..f428ad9a0c3d 100644
--- a/drivers/misc/bcm-vk/bcm_vk.h
+++ b/drivers/misc/bcm-vk/bcm_vk.h
@@ -223,6 +223,8 @@ struct bcm_vk {
unsigned long wq_offload[1]; /* various flags on wq requested */
void *tdma_vaddr; /* test dma segment virtual addr */
dma_addr_t tdma_addr; /* test dma segment bus addr */
+
+ struct notifier_block panic_nb;
};
/* wq offload work items bits definitions */
diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
index c7bedd975609..1755493b3096 100644
--- a/drivers/misc/bcm-vk/bcm_vk_dev.c
+++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
@@ -630,6 +630,16 @@ static int bcm_vk_trigger_reset(struct bcm_vk *vk)
return 0;
}
+static int bcm_vk_on_panic(struct notifier_block *nb,
+ unsigned long e, void *p)
+{
+ struct bcm_vk *vk = container_of(nb, struct bcm_vk, panic_nb);
+
+ bcm_to_v_reset_doorbell(vk, VK_BAR0_RESET_DB_HARD);
+
+ return 0;
+}
+
static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
int err;
@@ -743,6 +753,15 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* sync other info */
bcm_vk_sync_card_info(vk);
+ /* register for panic notifier */
+ vk->panic_nb.notifier_call = bcm_vk_on_panic;
+ err = atomic_notifier_chain_register(&panic_notifier_list,
+ &vk->panic_nb);
+ if (err) {
+ dev_err(dev, "Fail to register panic notifier\n");
+ goto err_destroy_workqueue;
+ }
+
/*
* lets trigger an auto download. We don't want to do it serially here
* because at probing time, it is not supposed to block for a long time.
@@ -751,7 +770,7 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (auto_load) {
if ((boot_status & BOOT_STATE_MASK) == BROM_RUNNING) {
if (bcm_vk_trigger_autoload(vk))
- goto err_destroy_workqueue;
+ goto err_unregister_panic_notifier;
} else {
dev_err(dev,
"Auto-load skipped - BROM not in proper state (0x%x)\n",
@@ -763,6 +782,10 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
return 0;
+err_unregister_panic_notifier:
+ atomic_notifier_chain_unregister(&panic_notifier_list,
+ &vk->panic_nb);
+
err_destroy_workqueue:
destroy_workqueue(vk->wq_thread);
@@ -813,6 +836,10 @@ static void bcm_vk_remove(struct pci_dev *pdev)
bcm_vk_trigger_reset(vk);
usleep_range(BCM_VK_UCODE_BOOT_US, BCM_VK_UCODE_BOOT_MAX_US);
+ /* unregister panic notifier */
+ atomic_notifier_chain_unregister(&panic_notifier_list,
+ &vk->panic_nb);
+
if (vk->tdma_vaddr)
dma_free_coherent(&pdev->dev, nr_scratch_pages * PAGE_SIZE,
vk->tdma_vaddr, vk->tdma_addr);
--
2.17.1
Add open/release methods that replace the file private data with a
context for other methods to use. The context is needed because multiple
sessions are allowed to open the device node. For each file open, when the
upper layer queries the responses, only those tied to that specific open
should be returned, as illustrated in the sketch below.
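A hypothetical user-space illustration (the device node name is an
assumption, not part of this patch):

    #include <fcntl.h>
    #include <unistd.h>

    void two_sessions(void)
    {
        /* each open() is bound to its own bcm_vk_ctx inside the
         * driver, so responses queried through fd1 are never mixed
         * with those belonging to fd2
         */
        int fd1 = open("/dev/bcm-vk.0", O_RDWR); /* allocates one ctx */
        int fd2 = open("/dev/bcm-vk.0", O_RDWR); /* independent ctx */

        close(fd2); /* frees only the second ctx */
        close(fd1); /* last close drops the kref taken at open */
    }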
Co-developed-by: Desmond Yan <[email protected]>
Signed-off-by: Desmond Yan <[email protected]>
Signed-off-by: Scott Branden <[email protected]>
---
drivers/misc/bcm-vk/Makefile | 4 +-
drivers/misc/bcm-vk/bcm_vk.h | 15 ++++
drivers/misc/bcm-vk/bcm_vk_dev.c | 23 ++++++
drivers/misc/bcm-vk/bcm_vk_msg.c | 126 +++++++++++++++++++++++++++++++
drivers/misc/bcm-vk/bcm_vk_msg.h | 31 ++++++++
5 files changed, 198 insertions(+), 1 deletion(-)
create mode 100644 drivers/misc/bcm-vk/bcm_vk_msg.c
create mode 100644 drivers/misc/bcm-vk/bcm_vk_msg.h
diff --git a/drivers/misc/bcm-vk/Makefile b/drivers/misc/bcm-vk/Makefile
index f8a7ac4c242f..a2ae79858409 100644
--- a/drivers/misc/bcm-vk/Makefile
+++ b/drivers/misc/bcm-vk/Makefile
@@ -5,4 +5,6 @@
obj-$(CONFIG_BCM_VK) += bcm_vk.o
bcm_vk-objs := \
- bcm_vk_dev.o
+ bcm_vk_dev.o \
+ bcm_vk_msg.o
+
diff --git a/drivers/misc/bcm-vk/bcm_vk.h b/drivers/misc/bcm-vk/bcm_vk.h
index f428ad9a0c3d..5f0fcfdaf265 100644
--- a/drivers/misc/bcm-vk/bcm_vk.h
+++ b/drivers/misc/bcm-vk/bcm_vk.h
@@ -7,9 +7,14 @@
#define BCM_VK_H
#include <linux/firmware.h>
+#include <linux/kref.h>
#include <linux/miscdevice.h>
+#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/sched/signal.h>
+#include <uapi/linux/misc/bcm_vk.h>
+
+#include "bcm_vk_msg.h"
#define DRV_MODULE_NAME "bcm-vk"
@@ -218,6 +223,13 @@ struct bcm_vk {
struct miscdevice miscdev;
int devid; /* dev id allocated */
+ /* Reference-counting to handle file operations */
+ struct kref kref;
+
+ spinlock_t ctx_lock; /* Spinlock for component context */
+ struct bcm_vk_ctx ctx[VK_CMPT_CTX_MAX];
+ struct bcm_vk_ht_entry pid_ht[VK_PID_HT_SZ];
+
struct workqueue_struct *wq_thread;
struct work_struct wq_work; /* work queue for deferred job */
unsigned long wq_offload[1]; /* various flags on wq requested */
@@ -278,6 +290,9 @@ static inline bool bcm_vk_msgq_marker_valid(struct bcm_vk *vk)
return (rdy_marker == VK_BAR1_MSGQ_RDY_MARKER);
}
+int bcm_vk_open(struct inode *inode, struct file *p_file);
+int bcm_vk_release(struct inode *inode, struct file *p_file);
+void bcm_vk_release_data(struct kref *kref);
int bcm_vk_auto_load_all_images(struct bcm_vk *vk);
#endif
diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
index 1755493b3096..a2980bd0111c 100644
--- a/drivers/misc/bcm-vk/bcm_vk_dev.c
+++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
@@ -8,6 +8,7 @@
#include <linux/firmware.h>
#include <linux/fs.h>
#include <linux/idr.h>
+#include <linux/kref.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pci_regs.h>
@@ -630,6 +631,12 @@ static int bcm_vk_trigger_reset(struct bcm_vk *vk)
return 0;
}
+static const struct file_operations bcm_vk_fops = {
+ .owner = THIS_MODULE,
+ .open = bcm_vk_open,
+ .release = bcm_vk_release,
+};
+
static int bcm_vk_on_panic(struct notifier_block *nb,
unsigned long e, void *p)
{
@@ -652,10 +659,13 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct miscdevice *misc_device;
u32 boot_status;
+ /* allocate vk structure which is tied to kref for freeing */
vk = kzalloc(sizeof(*vk), GFP_KERNEL);
if (!vk)
return -ENOMEM;
+ kref_init(&vk->kref);
+
err = pci_enable_device(pdev);
if (err) {
dev_err(dev, "Cannot enable PCI device\n");
@@ -733,6 +743,7 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
err = -ENOMEM;
goto err_ida_remove;
}
+ misc_device->fops = &bcm_vk_fops;
err = misc_register(misc_device);
if (err) {
@@ -821,6 +832,16 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
return err;
}
+void bcm_vk_release_data(struct kref *kref)
+{
+ struct bcm_vk *vk = container_of(kref, struct bcm_vk, kref);
+ struct pci_dev *pdev = vk->pdev;
+
+ dev_dbg(&pdev->dev, "BCM-VK:%d release data 0x%p\n", vk->devid, vk);
+ pci_dev_put(pdev);
+ kfree(vk);
+}
+
static void bcm_vk_remove(struct pci_dev *pdev)
{
int i;
@@ -864,6 +885,8 @@ static void bcm_vk_remove(struct pci_dev *pdev)
pci_release_regions(pdev);
pci_free_irq_vectors(pdev);
pci_disable_device(pdev);
+
+ kref_put(&vk->kref, bcm_vk_release_data);
}
static void bcm_vk_shutdown(struct pci_dev *pdev)
diff --git a/drivers/misc/bcm-vk/bcm_vk_msg.c b/drivers/misc/bcm-vk/bcm_vk_msg.c
new file mode 100644
index 000000000000..eb261fb87c9d
--- /dev/null
+++ b/drivers/misc/bcm-vk/bcm_vk_msg.c
@@ -0,0 +1,126 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2018-2020 Broadcom.
+ */
+
+#include "bcm_vk.h"
+#include "bcm_vk_msg.h"
+
+/*
+ * allocate a ctx per file struct
+ */
+static struct bcm_vk_ctx *bcm_vk_get_ctx(struct bcm_vk *vk, const pid_t pid)
+{
+ u32 i;
+ struct bcm_vk_ctx *ctx = NULL;
+ u32 hash_idx = hash_32(pid, VK_PID_HT_SHIFT_BIT);
+
+ spin_lock(&vk->ctx_lock);
+
+ for (i = 0; i < ARRAY_SIZE(vk->ctx); i++) {
+ if (!vk->ctx[i].in_use) {
+ vk->ctx[i].in_use = true;
+ ctx = &vk->ctx[i];
+ break;
+ }
+ }
+
+ if (!ctx) {
+ dev_err(&vk->pdev->dev, "All context in use\n");
+
+ goto all_in_use_exit;
+ }
+
+ /* set the pid and insert it to hash table */
+ ctx->pid = pid;
+ ctx->hash_idx = hash_idx;
+ list_add_tail(&ctx->node, &vk->pid_ht[hash_idx].head);
+
+ /* increase kref */
+ kref_get(&vk->kref);
+
+all_in_use_exit:
+ spin_unlock(&vk->ctx_lock);
+
+ return ctx;
+}
+
+static int bcm_vk_free_ctx(struct bcm_vk *vk, struct bcm_vk_ctx *ctx)
+{
+ u32 idx;
+ u32 hash_idx;
+ pid_t pid;
+ struct bcm_vk_ctx *entry;
+ int count = 0;
+
+ if (!ctx) {
+ dev_err(&vk->pdev->dev, "NULL context detected\n");
+ return -EINVAL;
+ }
+ idx = ctx->idx;
+ pid = ctx->pid;
+
+ spin_lock(&vk->ctx_lock);
+
+ if (!vk->ctx[idx].in_use) {
+ dev_err(&vk->pdev->dev, "context[%d] not in use!\n", idx);
+ } else {
+ vk->ctx[idx].in_use = false;
+ vk->ctx[idx].miscdev = NULL;
+
+ /* Remove it from hash list and see if it is the last one. */
+ list_del(&ctx->node);
+ hash_idx = ctx->hash_idx;
+ list_for_each_entry(entry, &vk->pid_ht[hash_idx].head, node) {
+ if (entry->pid == pid)
+ count++;
+ }
+ }
+
+ spin_unlock(&vk->ctx_lock);
+
+ return count;
+}
+int bcm_vk_open(struct inode *inode, struct file *p_file)
+{
+ struct bcm_vk_ctx *ctx;
+ struct miscdevice *miscdev = (struct miscdevice *)p_file->private_data;
+ struct bcm_vk *vk = container_of(miscdev, struct bcm_vk, miscdev);
+ struct device *dev = &vk->pdev->dev;
+ int rc = 0;
+
+ /* get a context and set it up for file */
+ ctx = bcm_vk_get_ctx(vk, task_pid_nr(current));
+ if (!ctx) {
+ dev_err(dev, "Error allocating context\n");
+ rc = -ENOMEM;
+ } else {
+ /*
+ * Set up the context and replace the private data with the
+ * context for other methods to use. The context is needed
+ * because multiple sessions are allowed to open the device
+ * node, and for each file open, when the upper layer queries
+ * the responses, only those tied to a specific open should be
+ * returned. The context->idx will be used for such binding.
+ */
+ ctx->miscdev = miscdev;
+ p_file->private_data = ctx;
+ dev_dbg(dev, "ctx_returned with idx %d, pid %d\n",
+ ctx->idx, ctx->pid);
+ }
+ return rc;
+}
+
+int bcm_vk_release(struct inode *inode, struct file *p_file)
+{
+ int ret;
+ struct bcm_vk_ctx *ctx = p_file->private_data;
+ struct bcm_vk *vk = container_of(ctx->miscdev, struct bcm_vk, miscdev);
+
+ ret = bcm_vk_free_ctx(vk, ctx);
+
+ kref_put(&vk->kref, bcm_vk_release_data);
+
+ return ret;
+}
+
diff --git a/drivers/misc/bcm-vk/bcm_vk_msg.h b/drivers/misc/bcm-vk/bcm_vk_msg.h
new file mode 100644
index 000000000000..32516abcaf89
--- /dev/null
+++ b/drivers/misc/bcm-vk/bcm_vk_msg.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2018-2020 Broadcom.
+ */
+
+#ifndef BCM_VK_MSG_H
+#define BCM_VK_MSG_H
+
+/* context per session opening of the device node */
+struct bcm_vk_ctx {
+ struct list_head node; /* use for linkage in Hash Table */
+ unsigned int idx;
+ bool in_use;
+ pid_t pid;
+ u32 hash_idx;
+ struct miscdevice *miscdev;
+};
+
+/* pid hash table entry */
+struct bcm_vk_ht_entry {
+ struct list_head head;
+};
+
+/* total number of supported ctx, 32 ctx each for 5 components */
+#define VK_CMPT_CTX_MAX (32 * 5)
+
+/* hash table defines to store the opened FDs */
+#define VK_PID_HT_SHIFT_BIT 7 /* 128 */
+#define VK_PID_HT_SZ BIT(VK_PID_HT_SHIFT_BIT)
+
+#endif
--
2.17.1
Add ioctl support to issue the load_image operation to the VK card.
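For illustration, a hedged user-space sketch (fd is assumed to be an open
descriptor on the device node; per this patch, an empty filename asks the
driver to pick its default image for the given type):

    struct vk_image image = {
        .type = VK_IMAGE_TYPE_BOOT2,
        .filename = "", /* empty: driver falls back to default image */
    };

    if (ioctl(fd, VK_IOCTL_LOAD_IMAGE, &image) < 0)
        perror("VK_IOCTL_LOAD_IMAGE");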
Co-developed-by: Desmond Yan <[email protected]>
Signed-off-by: Desmond Yan <[email protected]>
Co-developed-by: James Hu <[email protected]>
Signed-off-by: James Hu <[email protected]>
Signed-off-by: Scott Branden <[email protected]>
---
drivers/misc/bcm-vk/bcm_vk.h | 3 +
drivers/misc/bcm-vk/bcm_vk_dev.c | 95 ++++++++++++++++++++++++++++++++
2 files changed, 98 insertions(+)
diff --git a/drivers/misc/bcm-vk/bcm_vk.h b/drivers/misc/bcm-vk/bcm_vk.h
index 5f0fcfdaf265..726aab71bb6b 100644
--- a/drivers/misc/bcm-vk/bcm_vk.h
+++ b/drivers/misc/bcm-vk/bcm_vk.h
@@ -12,6 +12,7 @@
#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/sched/signal.h>
+#include <linux/uaccess.h>
#include <uapi/linux/misc/bcm_vk.h>
#include "bcm_vk_msg.h"
@@ -220,6 +221,8 @@ struct bcm_vk {
struct bcm_vk_dauth_info dauth_info;
+ /* mutex to protect the ioctls */
+ struct mutex mutex;
struct miscdevice miscdev;
int devid; /* dev id allocated */
diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
index a2980bd0111c..c48e2329e547 100644
--- a/drivers/misc/bcm-vk/bcm_vk_dev.c
+++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
@@ -10,6 +10,7 @@
#include <linux/idr.h>
#include <linux/kref.h>
#include <linux/module.h>
+#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/pci_regs.h>
#include <uapi/linux/misc/bcm_vk.h>
@@ -580,6 +581,71 @@ static void bcm_vk_wq_handler(struct work_struct *work)
}
}
+static long bcm_vk_load_image(struct bcm_vk *vk,
+ const struct vk_image __user *arg)
+{
+ struct device *dev = &vk->pdev->dev;
+ const char *image_name;
+ struct vk_image image;
+ u32 next_loadable;
+ enum soc_idx idx;
+ int image_idx;
+ int ret = -EPERM;
+
+ if (copy_from_user(&image, arg, sizeof(image)))
+ return -EACCES;
+
+ if ((image.type != VK_IMAGE_TYPE_BOOT1) &&
+ (image.type != VK_IMAGE_TYPE_BOOT2)) {
+ dev_err(dev, "invalid image.type %u\n", image.type);
+ return ret;
+ }
+
+ next_loadable = bcm_vk_next_boot_image(vk);
+ if (next_loadable != image.type) {
+ dev_err(dev, "Next expected image %u, Loading %u\n",
+ next_loadable, image.type);
+ return ret;
+ }
+
+ /*
+ * if something is pending download already. This could only happen
+ * for now when the driver is being loaded, or if someone has issued
+ * another download command in another shell.
+ */
+ if (test_and_set_bit(BCM_VK_WQ_DWNLD_PEND, vk->wq_offload) != 0) {
+ dev_err(dev, "Download operation already pending.\n");
+ return ret;
+ }
+
+ image_name = image.filename;
+ if (image_name[0] == '\0') {
+ /* Use the default image name if none is given */
+ idx = get_soc_idx(vk);
+ if (idx == VK_IDX_INVALID)
+ goto err_idx;
+
+ /* Image idx starts with boot1 */
+ image_idx = image.type - VK_IMAGE_TYPE_BOOT1;
+ image_name = get_load_fw_name(vk, &image_tab[idx][image_idx]);
+ if (!image_name) {
+ dev_err(dev, "No suitable image found for type %d",
+ image.type);
+ ret = -ENOENT;
+ goto err_idx;
+ }
+ } else {
+ /* Ensure filename is NULL terminated */
+ image.filename[sizeof(image.filename) - 1] = '\0';
+ }
+ ret = bcm_vk_load_image_by_type(vk, image.type, image_name);
+ dev_info(dev, "Load %s, ret %d\n", image_name, ret);
+err_idx:
+ clear_bit(BCM_VK_WQ_DWNLD_PEND, vk->wq_offload);
+
+ return ret;
+}
+
static void bcm_to_v_reset_doorbell(struct bcm_vk *vk, u32 db_val)
{
vkwrite32(vk, db_val, BAR_0, VK_BAR0_RESET_DB_BASE);
@@ -631,10 +697,38 @@ static int bcm_vk_trigger_reset(struct bcm_vk *vk)
return 0;
}
+static long bcm_vk_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+ long ret = -EINVAL;
+ struct bcm_vk_ctx *ctx = file->private_data;
+ struct bcm_vk *vk = container_of(ctx->miscdev, struct bcm_vk, miscdev);
+ void __user *argp = (void __user *)arg;
+
+ dev_dbg(&vk->pdev->dev,
+ "ioctl, cmd=0x%02x, arg=0x%02lx\n",
+ cmd, arg);
+
+ mutex_lock(&vk->mutex);
+
+ switch (cmd) {
+ case VK_IOCTL_LOAD_IMAGE:
+ ret = bcm_vk_load_image(vk, argp);
+ break;
+
+ default:
+ break;
+ }
+
+ mutex_unlock(&vk->mutex);
+
+ return ret;
+}
+
static const struct file_operations bcm_vk_fops = {
.owner = THIS_MODULE,
.open = bcm_vk_open,
.release = bcm_vk_release,
+ .unlocked_ioctl = bcm_vk_ioctl,
};
static int bcm_vk_on_panic(struct notifier_block *nb,
@@ -665,6 +759,7 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
return -ENOMEM;
kref_init(&vk->kref);
+ mutex_init(&vk->mutex);
err = pci_enable_device(pdev);
if (err) {
--
2.17.1
Add reset support via ioctl.
Kill user processes that have the device open when the VK card is
reset, except for the process that issued the reset request, since
that process is the one performing the ioctl.
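For reference, a minimal user-space sketch of issuing a reset is shown
below. This is illustrative only: it assumes the vk_reset structure and
VK_IOCTL_RESET definition from the UAPI header added earlier in this
series, and the device node path is just an example.

    /* Hedged sketch, not part of the driver: issue a VK card reset. */
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/misc/bcm_vk.h>	/* struct vk_reset, VK_IOCTL_RESET */

    static int vk_reset_card(const char *path)
    {
            struct vk_reset reset = { 0 };
            int fd, rc;

            fd = open(path, O_RDWR);	/* e.g. "/dev/bcm-vk.0" */
            if (fd < 0)
                    return -1;

            /* driver fills in arg2 when a special (e.g. ramdump) reset
             * was performed instead of a normal one
             */
            rc = ioctl(fd, VK_IOCTL_RESET, &reset);
            close(fd);
            return rc;
    }

Note that, per this patch, every other process with the device open is
killed when the reset goes through; only the calling process is spared.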
Co-developed-by: Desmond Yan <[email protected]>
Signed-off-by: Desmond Yan <[email protected]>
Signed-off-by: Scott Branden <[email protected]>
---
drivers/misc/bcm-vk/bcm_vk.h | 1 +
drivers/misc/bcm-vk/bcm_vk_dev.c | 158 +++++++++++++++++++++++++++++--
drivers/misc/bcm-vk/bcm_vk_msg.c | 40 +++++++-
3 files changed, 191 insertions(+), 8 deletions(-)
diff --git a/drivers/misc/bcm-vk/bcm_vk.h b/drivers/misc/bcm-vk/bcm_vk.h
index 775d892f13f8..ff8cb33d9882 100644
--- a/drivers/misc/bcm-vk/bcm_vk.h
+++ b/drivers/misc/bcm-vk/bcm_vk.h
@@ -462,6 +462,7 @@ irqreturn_t bcm_vk_msgq_irqhandler(int irq, void *dev_id);
irqreturn_t bcm_vk_notf_irqhandler(int irq, void *dev_id);
int bcm_vk_msg_init(struct bcm_vk *vk);
void bcm_vk_msg_remove(struct bcm_vk *vk);
+void bcm_vk_drain_msg_on_reset(struct bcm_vk *vk);
int bcm_vk_sync_msgq(struct bcm_vk *vk, bool force_sync);
void bcm_vk_blk_drv_access(struct bcm_vk *vk);
s32 bcm_to_h_msg_dequeue(struct bcm_vk *vk);
diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
index 4727d7617153..718badb53100 100644
--- a/drivers/misc/bcm-vk/bcm_vk_dev.c
+++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
@@ -478,7 +478,9 @@ void bcm_vk_blk_drv_access(struct bcm_vk *vk)
int i;
/*
- * kill all the apps
+ * kill all the apps except for the process that is resetting.
+ * If not called during reset, reset_pid will be 0, and all will be
+ * killed.
*/
spin_lock(&vk->ctx_lock);
@@ -489,10 +491,12 @@ void bcm_vk_blk_drv_access(struct bcm_vk *vk)
struct bcm_vk_ctx *ctx;
list_for_each_entry(ctx, &vk->pid_ht[i].head, node) {
- dev_dbg(&vk->pdev->dev,
- "Send kill signal to pid %d\n",
- ctx->pid);
- kill_pid(find_vpid(ctx->pid), SIGKILL, 1);
+ if (ctx->pid != vk->reset_pid) {
+ dev_dbg(&vk->pdev->dev,
+ "Send kill signal to pid %d\n",
+ ctx->pid);
+ kill_pid(find_vpid(ctx->pid), SIGKILL, 1);
+ }
}
}
spin_unlock(&vk->ctx_lock);
@@ -975,6 +979,49 @@ static long bcm_vk_load_image(struct bcm_vk *vk,
return ret;
}
+static int bcm_vk_reset_successful(struct bcm_vk *vk)
+{
+ struct device *dev = &vk->pdev->dev;
+ u32 fw_status, reset_reason;
+ int ret = -EAGAIN;
+
+ /*
+	 * Reset could be triggered while the card is in one of several
+	 * states:
+	 *   i)   in bootROM
+	 *   ii)  after boot1
+	 *   iii) boot2 running
+	 *
+	 * For i) & ii), no status bits will be updated. If vkboot1 runs
+	 * automatically after reset, it will set the reset reason to unknown.
+	 * For iii), success means the reboot reason matches and deinit is
+	 * done.
+ */
+ fw_status = vkread32(vk, BAR_0, VK_BAR_FWSTS);
+ /* immediate exit if interface goes down */
+ if (BCM_VK_INTF_IS_DOWN(fw_status)) {
+ dev_err(dev, "PCIe Intf Down!\n");
+ goto reset_exit;
+ }
+
+ reset_reason = (fw_status & VK_FWSTS_RESET_REASON_MASK);
+ if ((reset_reason == VK_FWSTS_RESET_MBOX_DB) ||
+ (reset_reason == VK_FWSTS_RESET_UNKNOWN))
+ ret = 0;
+
+ /*
+ * if some of the deinit bits are set, but done
+ * bit is not, this is a failure if triggered while boot2 is running
+ */
+ if ((fw_status & VK_FWSTS_DEINIT_TRIGGERED) &&
+ !(fw_status & VK_FWSTS_RESET_DONE))
+ ret = -EAGAIN;
+
+reset_exit:
+ dev_dbg(dev, "FW status = 0x%x ret %d\n", fw_status, ret);
+
+ return ret;
+}
+
static void bcm_to_v_reset_doorbell(struct bcm_vk *vk, u32 db_val)
{
vkwrite32(vk, db_val, BAR_0, VK_BAR0_RESET_DB_BASE);
@@ -984,7 +1031,11 @@ static int bcm_vk_trigger_reset(struct bcm_vk *vk)
{
u32 i;
u32 value, boot_status;
+ bool is_stdalone, is_boot2;
+ /* clean up before pressing the door bell */
+ bcm_vk_drain_msg_on_reset(vk);
+ vkwrite32(vk, 0, BAR_1, VK_BAR1_MSGQ_DEF_RDY);
/* make tag '\0' terminated */
vkwrite32(vk, 0, BAR_1, VK_BAR1_BOOT1_VER_TAG);
@@ -995,6 +1046,11 @@ static int bcm_vk_trigger_reset(struct bcm_vk *vk)
for (i = 0; i < VK_BAR1_SOTP_REVID_MAX; i++)
vkwrite32(vk, 0, BAR_1, VK_BAR1_SOTP_REVID_ADDR(i));
+ memset(&vk->card_info, 0, sizeof(vk->card_info));
+ memset(&vk->peerlog_info, 0, sizeof(vk->peerlog_info));
+ memset(&vk->proc_mon_info, 0, sizeof(vk->proc_mon_info));
+ memset(&vk->alert_cnts, 0, sizeof(vk->alert_cnts));
+
/*
	 * When boot request fails, the CODE_PUSH_OFFSET stays persistent,
	 * allowing us to debug the failure. When we call reset,
@@ -1015,17 +1071,103 @@ static int bcm_vk_trigger_reset(struct bcm_vk *vk)
}
vkwrite32(vk, value, BAR_0, BAR_CODEPUSH_SBL);
+ /* special reset handling */
+ is_stdalone = boot_status & BOOT_STDALONE_RUNNING;
+ is_boot2 = (boot_status & BOOT_STATE_MASK) == BOOT2_RUNNING;
+ if (vk->peer_alert.flags & ERR_LOG_RAMDUMP) {
+ /*
+ * if card is in ramdump mode, it is hitting an error. Don't
+ * reset the reboot reason as it will contain valid info that
+ * is important - simply use special reset
+ */
+ vkwrite32(vk, VK_BAR0_RESET_RAMPDUMP, BAR_0, VK_BAR_FWSTS);
+ return VK_BAR0_RESET_RAMPDUMP;
+ } else if (is_stdalone && !is_boot2) {
+		dev_info(&vk->pdev->dev, "Hard reset in standalone mode\n");
+ bcm_to_v_reset_doorbell(vk, VK_BAR0_RESET_DB_HARD);
+ return VK_BAR0_RESET_DB_HARD;
+ }
+
/* reset fw_status with proper reason, and press db */
vkwrite32(vk, VK_FWSTS_RESET_MBOX_DB, BAR_0, VK_BAR_FWSTS);
bcm_to_v_reset_doorbell(vk, VK_BAR0_RESET_DB_SOFT);
- /* clear other necessary registers records */
+ /* clear other necessary registers and alert records */
vkwrite32(vk, 0, BAR_0, BAR_OS_UPTIME);
vkwrite32(vk, 0, BAR_0, BAR_INTF_VER);
+ memset(&vk->host_alert, 0, sizeof(vk->host_alert));
+ memset(&vk->peer_alert, 0, sizeof(vk->peer_alert));
+ /* clear 4096 bits of bitmap */
+ bitmap_clear(vk->bmap, 0, VK_MSG_ID_BITMAP_SIZE);
return 0;
}
+static long bcm_vk_reset(struct bcm_vk *vk, struct vk_reset __user *arg)
+{
+ struct device *dev = &vk->pdev->dev;
+ struct vk_reset reset;
+ int ret = 0;
+ u32 ramdump_reset;
+ int special_reset;
+
+ if (copy_from_user(&reset, arg, sizeof(struct vk_reset)))
+ return -EFAULT;
+
+ /* check if any download is in-progress, if so return error */
+ if (test_and_set_bit(BCM_VK_WQ_DWNLD_PEND, vk->wq_offload) != 0) {
+ dev_err(dev, "Download operation pending - skip reset.\n");
+ return -EPERM;
+ }
+
+ ramdump_reset = vk->peer_alert.flags & ERR_LOG_RAMDUMP;
+ dev_info(dev, "Issue Reset %s\n",
+ ramdump_reset ? "in ramdump mode" : "");
+
+ /*
+	 * The reset sequence is as follows:
+	 * - send a card-level graceful shutdown
+	 * - wait enough time for the VK to stop DMA and other in-flight work
+	 * - kill host apps
+	 * - trigger the reset interrupt via the doorbell
+ */
+ bcm_vk_send_shutdown_msg(vk, VK_SHUTDOWN_GRACEFUL, 0, 0);
+
+ spin_lock(&vk->ctx_lock);
+ if (!vk->reset_pid) {
+ vk->reset_pid = task_pid_nr(current);
+ } else {
+ dev_err(dev, "Reset already launched by process pid %d\n",
+ vk->reset_pid);
+ ret = -EACCES;
+ }
+ spin_unlock(&vk->ctx_lock);
+ if (ret)
+ goto err_exit;
+
+ bcm_vk_blk_drv_access(vk);
+ special_reset = bcm_vk_trigger_reset(vk);
+
+ /*
+ * Wait enough time for card os to deinit
+ * and populate the reset reason.
+ */
+ msleep(BCM_VK_DEINIT_TIME_MS);
+
+ if (special_reset) {
+ /* if it is special ramdump reset, return the type to user */
+ reset.arg2 = special_reset;
+ if (copy_to_user(arg, &reset, sizeof(reset)))
+ ret = -EFAULT;
+ } else {
+ ret = bcm_vk_reset_successful(vk);
+ }
+
+err_exit:
+ clear_bit(BCM_VK_WQ_DWNLD_PEND, vk->wq_offload);
+ return ret;
+}
+
static long bcm_vk_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
long ret = -EINVAL;
@@ -1044,6 +1186,10 @@ static long bcm_vk_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
ret = bcm_vk_load_image(vk, argp);
break;
+ case VK_IOCTL_RESET:
+ ret = bcm_vk_reset(vk, argp);
+ break;
+
default:
break;
}
diff --git a/drivers/misc/bcm-vk/bcm_vk_msg.c b/drivers/misc/bcm-vk/bcm_vk_msg.c
index 3f2876484b7a..e31d41400199 100644
--- a/drivers/misc/bcm-vk/bcm_vk_msg.c
+++ b/drivers/misc/bcm-vk/bcm_vk_msg.c
@@ -204,6 +204,15 @@ static struct bcm_vk_ctx *bcm_vk_get_ctx(struct bcm_vk *vk, const pid_t pid)
spin_lock(&vk->ctx_lock);
+ /* check if it is in reset, if so, don't allow */
+ if (vk->reset_pid) {
+ dev_err(&vk->pdev->dev,
+ "No context allowed during reset by pid %d\n",
+ vk->reset_pid);
+
+ goto in_reset_exit;
+ }
+
for (i = 0; i < ARRAY_SIZE(vk->ctx); i++) {
if (!vk->ctx[i].in_use) {
vk->ctx[i].in_use = true;
@@ -232,6 +241,7 @@ static struct bcm_vk_ctx *bcm_vk_get_ctx(struct bcm_vk *vk, const pid_t pid)
init_waitqueue_head(&ctx->rd_wq);
all_in_use_exit:
+in_reset_exit:
spin_unlock(&vk->ctx_lock);
return ctx;
@@ -376,6 +386,12 @@ static void bcm_vk_drain_all_pend(struct device *dev,
num, ctx->idx);
}
+void bcm_vk_drain_msg_on_reset(struct bcm_vk *vk)
+{
+ bcm_vk_drain_all_pend(&vk->pdev->dev, &vk->to_v_msg_chan, NULL);
+ bcm_vk_drain_all_pend(&vk->pdev->dev, &vk->to_h_msg_chan, NULL);
+}
+
/*
* Function to sync up the messages queue info that is provided by BAR1
*/
@@ -700,13 +716,22 @@ static int bcm_vk_handle_last_sess(struct bcm_vk *vk, const pid_t pid,
/*
* don't send down or do anything if message queue is not initialized
+ * and if it is the reset session, clear it.
*/
- if (!bcm_vk_drv_access_ok(vk))
+ if (!bcm_vk_drv_access_ok(vk)) {
+ if (vk->reset_pid == pid)
+ vk->reset_pid = 0;
return -EPERM;
+ }
dev_dbg(dev, "No more sessions, shut down pid %d\n", pid);
- rc = bcm_vk_send_shutdown_msg(vk, VK_SHUTDOWN_PID, pid, q_num);
+ /* only need to do it if it is not the reset process */
+ if (vk->reset_pid != pid)
+ rc = bcm_vk_send_shutdown_msg(vk, VK_SHUTDOWN_PID, pid, q_num);
+ else
+ /* put reset_pid to 0 if it is exiting last session */
+ vk->reset_pid = 0;
return rc;
}
@@ -1110,6 +1135,17 @@ ssize_t bcm_vk_write(struct file *p_file,
int dir;
struct _vk_data *data;
+ /*
+ * check if we are in reset, if so, no buffer transfer is
+ * allowed and return error.
+ */
+ if (vk->reset_pid) {
+ dev_dbg(dev, "No Transfer allowed during reset, pid %d.\n",
+ ctx->pid);
+ rc = -EACCES;
+ goto write_free_msgid;
+ }
+
num_planes = entry->to_v_msg[0].cmd & VK_CMD_PLANES_MASK;
if ((entry->to_v_msg[0].cmd & VK_CMD_MASK) == VK_CMD_DOWNLOAD)
dir = DMA_FROM_DEVICE;
--
2.17.1
Add initial version of Broadcom VK driver to enumerate the PCI device
IDs of the Valkyrie and Viper devices.
VK based cards provide a real-time, high-performance, high-throughput,
low-latency offload compute engine.
They are used for multiple parallel offload tasks such as
audio, video and image processing and crypto operations.
Further commits add features to the driver beyond probe/remove.
Signed-off-by: Scott Branden <[email protected]>
---
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/bcm-vk/Kconfig | 15 ++++
drivers/misc/bcm-vk/Makefile | 8 ++
drivers/misc/bcm-vk/bcm_vk.h | 29 +++++++
drivers/misc/bcm-vk/bcm_vk_dev.c | 141 +++++++++++++++++++++++++++++++
6 files changed, 195 insertions(+)
create mode 100644 drivers/misc/bcm-vk/Kconfig
create mode 100644 drivers/misc/bcm-vk/Makefile
create mode 100644 drivers/misc/bcm-vk/bcm_vk.h
create mode 100644 drivers/misc/bcm-vk/bcm_vk_dev.c
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index ce136d685d14..9d42b5def81b 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -469,6 +469,7 @@ source "drivers/misc/genwqe/Kconfig"
source "drivers/misc/echo/Kconfig"
source "drivers/misc/cxl/Kconfig"
source "drivers/misc/ocxl/Kconfig"
+source "drivers/misc/bcm-vk/Kconfig"
source "drivers/misc/cardreader/Kconfig"
source "drivers/misc/habanalabs/Kconfig"
source "drivers/misc/uacce/Kconfig"
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index c7bd01ac6291..766837e4b961 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -52,6 +52,7 @@ obj-$(CONFIG_ECHO) += echo/
obj-$(CONFIG_CXL_BASE) += cxl/
obj-$(CONFIG_PCI_ENDPOINT_TEST) += pci_endpoint_test.o
obj-$(CONFIG_OCXL) += ocxl/
+obj-$(CONFIG_BCM_VK) += bcm-vk/
obj-y += cardreader/
obj-$(CONFIG_PVPANIC) += pvpanic.o
obj-$(CONFIG_HABANA_AI) += habanalabs/
diff --git a/drivers/misc/bcm-vk/Kconfig b/drivers/misc/bcm-vk/Kconfig
new file mode 100644
index 000000000000..2272e47655ed
--- /dev/null
+++ b/drivers/misc/bcm-vk/Kconfig
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Broadcom VK device
+#
+config BCM_VK
+ tristate "Support for Broadcom VK Accelerators"
+ depends on PCI_MSI
+ help
+ Select this option to enable support for Broadcom
+ VK Accelerators. VK is used for performing
+ specific offload processing.
+ This driver enables userspace programs to access these
+ accelerators via /dev/bcm-vk.N devices.
+
+ If unsure, say N.
diff --git a/drivers/misc/bcm-vk/Makefile b/drivers/misc/bcm-vk/Makefile
new file mode 100644
index 000000000000..f8a7ac4c242f
--- /dev/null
+++ b/drivers/misc/bcm-vk/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for Broadcom VK driver
+#
+
+obj-$(CONFIG_BCM_VK) += bcm_vk.o
+bcm_vk-objs := \
+ bcm_vk_dev.o
diff --git a/drivers/misc/bcm-vk/bcm_vk.h b/drivers/misc/bcm-vk/bcm_vk.h
new file mode 100644
index 000000000000..9152785199ab
--- /dev/null
+++ b/drivers/misc/bcm-vk/bcm_vk.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2018-2020 Broadcom.
+ */
+
+#ifndef BCM_VK_H
+#define BCM_VK_H
+
+#include <linux/pci.h>
+
+#define DRV_MODULE_NAME "bcm-vk"
+
+/* VK device supports a maximum of 3 bars */
+#define MAX_BAR 3
+
+enum pci_barno {
+ BAR_0 = 0,
+ BAR_1,
+ BAR_2
+};
+
+#define BCM_VK_NUM_TTY 2
+
+struct bcm_vk {
+ struct pci_dev *pdev;
+ void __iomem *bar[MAX_BAR];
+};
+
+#endif
diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
new file mode 100644
index 000000000000..14afe2477b97
--- /dev/null
+++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
@@ -0,0 +1,141 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2018-2020 Broadcom.
+ */
+
+#include <linux/dma-mapping.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/pci_regs.h>
+
+#include "bcm_vk.h"
+
+#define PCI_DEVICE_ID_VALKYRIE 0x5e87
+#define PCI_DEVICE_ID_VIPER 0x5e88
+
+/* MSIX usages */
+#define VK_MSIX_MSGQ_MAX 3
+#define VK_MSIX_NOTF_MAX 1
+#define VK_MSIX_TTY_MAX BCM_VK_NUM_TTY
+#define VK_MSIX_IRQ_MAX (VK_MSIX_MSGQ_MAX + VK_MSIX_NOTF_MAX + \
+ VK_MSIX_TTY_MAX)
+#define VK_MSIX_IRQ_MIN_REQ (VK_MSIX_MSGQ_MAX + VK_MSIX_NOTF_MAX)
+
+/* Number of bits set in DMA mask */
+#define BCM_VK_DMA_BITS 64
+
+static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ int err;
+ int i;
+ int irq;
+ struct bcm_vk *vk;
+ struct device *dev = &pdev->dev;
+
+ vk = kzalloc(sizeof(*vk), GFP_KERNEL);
+ if (!vk)
+ return -ENOMEM;
+
+ err = pci_enable_device(pdev);
+ if (err) {
+ dev_err(dev, "Cannot enable PCI device\n");
+ goto err_free_exit;
+ }
+ vk->pdev = pci_dev_get(pdev);
+
+ err = pci_request_regions(pdev, DRV_MODULE_NAME);
+ if (err) {
+ dev_err(dev, "Cannot obtain PCI resources\n");
+ goto err_disable_pdev;
+ }
+
+ /* make sure DMA is good */
+ err = dma_set_mask_and_coherent(&pdev->dev,
+ DMA_BIT_MASK(BCM_VK_DMA_BITS));
+ if (err) {
+ dev_err(dev, "failed to set DMA mask\n");
+ goto err_disable_pdev;
+ }
+
+ pci_set_master(pdev);
+ pci_set_drvdata(pdev, vk);
+
+ irq = pci_alloc_irq_vectors(pdev,
+ 1,
+ VK_MSIX_IRQ_MAX,
+ PCI_IRQ_MSI | PCI_IRQ_MSIX);
+
+ if (irq < VK_MSIX_IRQ_MIN_REQ) {
+ dev_err(dev, "failed to get min %d MSIX interrupts, irq(%d)\n",
+ VK_MSIX_IRQ_MIN_REQ, irq);
+ err = (irq >= 0) ? -EINVAL : irq;
+ goto err_disable_pdev;
+ }
+
+ if (irq != VK_MSIX_IRQ_MAX)
+ dev_warn(dev, "Number of IRQs %d allocated - requested(%d).\n",
+ irq, VK_MSIX_IRQ_MAX);
+
+ for (i = 0; i < MAX_BAR; i++) {
+		/* multiply by 2 for 64-bit BAR mapping */
+ vk->bar[i] = pci_ioremap_bar(pdev, i * 2);
+ if (!vk->bar[i]) {
+ dev_err(dev, "failed to remap BAR%d\n", i);
+ goto err_iounmap;
+ }
+ }
+
+ return 0;
+
+err_iounmap:
+ for (i = 0; i < MAX_BAR; i++) {
+ if (vk->bar[i])
+ pci_iounmap(pdev, vk->bar[i]);
+ }
+ pci_release_regions(pdev);
+
+err_disable_pdev:
+ pci_free_irq_vectors(pdev);
+ pci_disable_device(pdev);
+ pci_dev_put(pdev);
+
+err_free_exit:
+ kfree(vk);
+
+ return err;
+}
+
+static void bcm_vk_remove(struct pci_dev *pdev)
+{
+ int i;
+ struct bcm_vk *vk = pci_get_drvdata(pdev);
+
+ for (i = 0; i < MAX_BAR; i++) {
+ if (vk->bar[i])
+ pci_iounmap(pdev, vk->bar[i]);
+ }
+
+ pci_release_regions(pdev);
+ pci_free_irq_vectors(pdev);
+ pci_disable_device(pdev);
+}
+
+static const struct pci_device_id bcm_vk_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_VALKYRIE), },
+ { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_VIPER), },
+ { }
+};
+MODULE_DEVICE_TABLE(pci, bcm_vk_ids);
+
+static struct pci_driver pci_driver = {
+ .name = DRV_MODULE_NAME,
+ .id_table = bcm_vk_ids,
+ .probe = bcm_vk_probe,
+ .remove = bcm_vk_remove,
+};
+module_pci_driver(pci_driver);
+
+MODULE_DESCRIPTION("Broadcom VK Host Driver");
+MODULE_AUTHOR("Scott Branden <[email protected]>");
+MODULE_LICENSE("GPL v2");
+MODULE_VERSION("1.0");
--
2.17.1
Add BCM_VK_QSTATS Kconfig option to allow enabling debug VK
queue statistics.
These statistics keep track of max, abs_max, and average occupancy
for the message queues.
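To illustrate the bookkeeping, the sketch below restates the
accumulate-and-average scheme used by bcm_vk_update_qstats in this
patch. It is a standalone illustration, not driver code; the window
size mirrors the default BCM_VK_QSTATS_ACC_CNT of 20000 samples.

    /* Illustration only: windowed max / absolute max / running average. */
    struct qs_cnts {
            unsigned int cnt;       /* samples in the current window */
            unsigned int max_occ;   /* max occupancy in this window */
            unsigned int max_abs;   /* absolute max since reset */
            unsigned int acc_sum;   /* sum of samples in this window */
    };

    static void qs_update(struct qs_cnts *q, unsigned int occupancy)
    {
            if (occupancy > q->max_occ) {
                    q->max_occ = occupancy;
                    if (occupancy > q->max_abs)
                            q->max_abs = occupancy;
            }
            q->acc_sum += occupancy;
            if (++q->cnt >= 20000) {        /* default accumulation window */
                    /* window average is acc_sum / cnt; log it, then
                     * restart the window (max_abs is never cleared)
                     */
                    q->cnt = 0;
                    q->max_occ = 0;
                    q->acc_sum = 0;
            }
    }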
Co-developed-by: Desmond Yan <[email protected]>
Signed-off-by: Desmond Yan <[email protected]>
Signed-off-by: Scott Branden <[email protected]>
---
drivers/misc/bcm-vk/Kconfig | 14 +++++++++
drivers/misc/bcm-vk/bcm_vk_dev.c | 9 ++++++
drivers/misc/bcm-vk/bcm_vk_msg.c | 52 +++++++++++++++++++++++++++++++-
drivers/misc/bcm-vk/bcm_vk_msg.h | 12 ++++++++
4 files changed, 86 insertions(+), 1 deletion(-)
diff --git a/drivers/misc/bcm-vk/Kconfig b/drivers/misc/bcm-vk/Kconfig
index 2272e47655ed..a3a020b19e3b 100644
--- a/drivers/misc/bcm-vk/Kconfig
+++ b/drivers/misc/bcm-vk/Kconfig
@@ -13,3 +13,17 @@ config BCM_VK
accelerators via /dev/bcm-vk.N devices.
If unsure, say N.
+
+if BCM_VK
+
+config BCM_VK_QSTATS
+ bool "VK Queue Statistics"
+ help
+	  Turn on to enable queue statistics. These are useful
+	  for debugging purposes.
+	  Enabling this debug config incurs some performance loss.
+	  There is no need to enable this for properly operating
+	  PCIe hardware.
+
+ If unsure, say N.
+
+endif
diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
index 718badb53100..6c2370723e0a 100644
--- a/drivers/misc/bcm-vk/bcm_vk_dev.c
+++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
@@ -1097,6 +1097,15 @@ static int bcm_vk_trigger_reset(struct bcm_vk *vk)
vkwrite32(vk, 0, BAR_0, BAR_INTF_VER);
memset(&vk->host_alert, 0, sizeof(vk->host_alert));
memset(&vk->peer_alert, 0, sizeof(vk->peer_alert));
+#if defined(CONFIG_BCM_VK_QSTATS)
+ /* clear qstats */
+ for (i = 0; i < VK_MSGQ_MAX_NR; i++) {
+ memset(&vk->to_v_msg_chan.qstats[i].qcnts, 0,
+ sizeof(vk->to_v_msg_chan.qstats[i].qcnts));
+ memset(&vk->to_h_msg_chan.qstats[i].qcnts, 0,
+ sizeof(vk->to_h_msg_chan.qstats[i].qcnts));
+ }
+#endif
/* clear 4096 bits of bitmap */
bitmap_clear(vk->bmap, 0, VK_MSG_ID_BITMAP_SIZE);
diff --git a/drivers/misc/bcm-vk/bcm_vk_msg.c b/drivers/misc/bcm-vk/bcm_vk_msg.c
index e31d41400199..6ba0a7a94dcc 100644
--- a/drivers/misc/bcm-vk/bcm_vk_msg.c
+++ b/drivers/misc/bcm-vk/bcm_vk_msg.c
@@ -91,6 +91,44 @@ u32 msgq_avail_space(const struct bcm_vk_msgq __iomem *msgq,
return (qinfo->q_size - msgq_occupied(msgq, qinfo) - 1);
}
+#if defined(CONFIG_BCM_VK_QSTATS)
+
+/* Use default value of 20000 rd/wr per update */
+#if !defined(BCM_VK_QSTATS_ACC_CNT)
+#define BCM_VK_QSTATS_ACC_CNT 20000
+#endif
+
+static void bcm_vk_update_qstats(struct bcm_vk *vk,
+ const char *tag,
+ struct bcm_vk_qstats *qstats,
+ u32 occupancy)
+{
+ struct bcm_vk_qs_cnts *qcnts = &qstats->qcnts;
+
+ if (occupancy > qcnts->max_occ) {
+ qcnts->max_occ = occupancy;
+ if (occupancy > qcnts->max_abs)
+ qcnts->max_abs = occupancy;
+ }
+
+ qcnts->acc_sum += occupancy;
+ if (++qcnts->cnt >= BCM_VK_QSTATS_ACC_CNT) {
+ /* log average and clear counters */
+ dev_dbg(&vk->pdev->dev,
+ "%s[%d]: Max: [%3d/%3d] Acc %d num %d, Aver %d\n",
+ tag, qstats->q_num,
+ qcnts->max_occ, qcnts->max_abs,
+ qcnts->acc_sum,
+ qcnts->cnt,
+ qcnts->acc_sum / qcnts->cnt);
+
+ qcnts->cnt = 0;
+ qcnts->max_occ = 0;
+ qcnts->acc_sum = 0;
+ }
+}
+#endif
+
/* number of retries when enqueue message fails before returning EAGAIN */
#define BCM_VK_H2VK_ENQ_RETRY 10
#define BCM_VK_H2VK_ENQ_RETRY_DELAY_MS 50
@@ -495,8 +533,12 @@ static int bcm_vk_msg_chan_init(struct bcm_vk_msg_chan *chan)
mutex_init(&chan->msgq_mutex);
spin_lock_init(&chan->pendq_lock);
- for (i = 0; i < VK_MSGQ_MAX_NR; i++)
+ for (i = 0; i < VK_MSGQ_MAX_NR; i++) {
INIT_LIST_HEAD(&chan->pendq[i]);
+#if defined(CONFIG_BCM_VK_QSTATS)
+ chan->qstats[i].q_num = i;
+#endif
+ }
return 0;
}
@@ -605,6 +647,10 @@ static int bcm_to_v_msg_enqueue(struct bcm_vk *vk, struct bcm_vk_wkent *entry)
avail = msgq_avail_space(msgq, qinfo);
+#if defined(CONFIG_BCM_VK_QSTATS)
+ bcm_vk_update_qstats(vk, "to_v", &chan->qstats[q_num],
+ qinfo->q_size - avail);
+#endif
	/* if not enough space, return EAGAIN and let the app handle it */
retry = 0;
while ((avail < entry->to_v_blks) &&
@@ -818,6 +864,10 @@ s32 bcm_to_h_msg_dequeue(struct bcm_vk *vk)
goto idx_err;
}
+#if defined(CONFIG_BCM_VK_QSTATS)
+ bcm_vk_update_qstats(vk, "to_h", &chan->qstats[q_num],
+ msgq_occupied(msgq, qinfo));
+#endif
num_blks = src_size + 1;
data = kzalloc(num_blks * VK_MSGQ_BLK_SIZE, GFP_KERNEL);
if (data) {
diff --git a/drivers/misc/bcm-vk/bcm_vk_msg.h b/drivers/misc/bcm-vk/bcm_vk_msg.h
index 637c5d662eb7..d00d9707bd01 100644
--- a/drivers/misc/bcm-vk/bcm_vk_msg.h
+++ b/drivers/misc/bcm-vk/bcm_vk_msg.h
@@ -125,6 +125,14 @@ struct bcm_vk_qs_cnts {
u32 max_abs; /* the abs max since reset */
};
+#if defined(CONFIG_BCM_VK_QSTATS)
+/* stats structure */
+struct bcm_vk_qstats {
+ u32 q_num;
+ struct bcm_vk_qs_cnts qcnts;
+};
+#endif
+
/* control channel structure for either to_v or to_h communication */
struct bcm_vk_msg_chan {
u32 q_nr;
@@ -138,6 +146,10 @@ struct bcm_vk_msg_chan {
struct list_head pendq[VK_MSGQ_MAX_NR];
/* static queue info from the sync */
struct bcm_vk_sync_qinfo sync_qinfo[VK_MSGQ_MAX_NR];
+#if defined(CONFIG_BCM_VK_QSTATS)
+ /* qstats */
+ struct bcm_vk_qstats qstats[VK_MSGQ_MAX_NR];
+#endif
};
/* total number of supported ctx, 32 ctx each for 5 components */
--
2.17.1
Add support to get card_info (details about the card),
peerlog_info (details of the peer log on the card),
and proc_mon_info (process monitoring on the card).
This info is used to collect logs via direct reads of
BAR space and via sysfs access (in a follow-on commit).
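The peer log is a power-of-2 ring buffer; the sketch below shows how a
consumer walks it. This is an illustration under the assumption that
the header layout matches struct bcm_vk_peer_log in this patch
(mask == buf_size - 1, character data following the header), not
driver code.

    /* Illustration only: drain a power-of-2 character ring buffer. */
    #include <stdio.h>

    struct peer_log_hdr {
            unsigned int rd_idx;
            unsigned int wr_idx;
            unsigned int buf_size;
            unsigned int mask;      /* buf_size - 1 */
    };

    static unsigned int peer_log_drain(const struct peer_log_hdr *hdr,
                                       const char *data)
    {
            unsigned int rd = hdr->rd_idx;

            while (rd != hdr->wr_idx) {
                    putchar(data[rd]);              /* emit one byte */
                    rd = (rd + 1) & hdr->mask;      /* wrap on power of 2 */
            }
            return rd;      /* caller stores this back to rd_idx */
    }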
Co-developed-by: Desmond Yan <[email protected]>
Signed-off-by: Desmond Yan <[email protected]>
Signed-off-by: Scott Branden <[email protected]>
---
drivers/misc/bcm-vk/bcm_vk.h | 58 +++++++++++++++++++++
drivers/misc/bcm-vk/bcm_vk_dev.c | 87 ++++++++++++++++++++++++++++++++
2 files changed, 145 insertions(+)
diff --git a/drivers/misc/bcm-vk/bcm_vk.h b/drivers/misc/bcm-vk/bcm_vk.h
index 726aab71bb6b..452333a32604 100644
--- a/drivers/misc/bcm-vk/bcm_vk.h
+++ b/drivers/misc/bcm-vk/bcm_vk.h
@@ -205,6 +205,21 @@ enum pci_barno {
#define BCM_VK_NUM_TTY 2
+/* VK device supports a maximum of 3 power states: full, reduced and low */
+#define MAX_OPP 3
+#define MAX_CARD_INFO_TAG_SIZE 64
+
+struct bcm_vk_card_info {
+ u32 version;
+ char os_tag[MAX_CARD_INFO_TAG_SIZE];
+ char cmpt_tag[MAX_CARD_INFO_TAG_SIZE];
+ u32 cpu_freq_mhz;
+ u32 cpu_scale[MAX_OPP];
+ u32 ddr_freq_mhz;
+ u32 ddr_size_MB;
+ u32 video_core_freq_mhz;
+};
+
/* DAUTH related info */
struct bcm_vk_dauth_key {
char store[VK_BAR1_DAUTH_STORE_SIZE];
@@ -215,10 +230,47 @@ struct bcm_vk_dauth_info {
struct bcm_vk_dauth_key keys[VK_BAR1_DAUTH_MAX];
};
+/*
+ * Control structure of logging messages from the card. This
+ * buffer is for log messages that come from the vk.
+ */
+struct bcm_vk_peer_log {
+ u32 rd_idx;
+ u32 wr_idx;
+ u32 buf_size;
+ u32 mask;
+ char data[0];
+};
+
+/* max size per line of peer log */
+#define BCM_VK_PEER_LOG_LINE_MAX 256
+
+/*
+ * single entry for processing type + utilization
+ */
+#define BCM_VK_PROC_TYPE_TAG_LEN 8
+struct bcm_vk_proc_mon_entry_t {
+ char tag[BCM_VK_PROC_TYPE_TAG_LEN];
+ u32 used;
+ u32 max; /**< max capacity */
+};
+
+/**
+ * Structure for run time utilization
+ */
+#define BCM_VK_PROC_MON_MAX 8 /* max entries supported */
+struct bcm_vk_proc_mon_info {
+ u32 num; /**< no of entries */
+ u32 entry_size; /**< per entry size */
+ struct bcm_vk_proc_mon_entry_t entries[BCM_VK_PROC_MON_MAX];
+};
+
struct bcm_vk {
struct pci_dev *pdev;
void __iomem *bar[MAX_BAR];
+ struct bcm_vk_card_info card_info;
+ struct bcm_vk_proc_mon_info proc_mon_info;
struct bcm_vk_dauth_info dauth_info;
/* mutex to protect the ioctls */
@@ -240,6 +292,12 @@ struct bcm_vk {
dma_addr_t tdma_addr; /* test dma segment bus addr */
struct notifier_block panic_nb;
+
+ /* offset of the peer log control in BAR2 */
+ u32 peerlog_off;
+ struct bcm_vk_peer_log peerlog_info; /* record of peer log info */
+ /* offset of processing monitoring info in BAR2 */
+ u32 proc_mon_off;
};
/* wq offload work items bits definitions */
diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
index c48e2329e547..6d476f1ac3c4 100644
--- a/drivers/misc/bcm-vk/bcm_vk_dev.c
+++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
@@ -172,6 +172,86 @@ static inline int bcm_vk_wait(struct bcm_vk *vk, enum pci_barno bar,
return 0;
}
+static void bcm_vk_get_card_info(struct bcm_vk *vk)
+{
+ struct device *dev = &vk->pdev->dev;
+ u32 offset;
+ int i;
+ u8 *dst;
+ struct bcm_vk_card_info *info = &vk->card_info;
+
+ /* first read the offset from spare register */
+ offset = vkread32(vk, BAR_0, BAR_CARD_STATIC_INFO);
+ offset &= (pci_resource_len(vk->pdev, BAR_2 * 2) - 1);
+
+ /* based on the offset, read info to internal card info structure */
+ dst = (u8 *)info;
+ for (i = 0; i < sizeof(*info); i++)
+ *dst++ = vkread8(vk, BAR_2, offset++);
+
+#define CARD_INFO_LOG_FMT "version : %x\n" \
+ "os_tag : %s\n" \
+ "cmpt_tag : %s\n" \
+ "cpu_freq : %d MHz\n" \
+ "cpu_scale : %d full, %d lowest\n" \
+ "ddr_freq : %d MHz\n" \
+ "ddr_size : %d MB\n" \
+ "video_freq: %d MHz\n"
+ dev_dbg(dev, CARD_INFO_LOG_FMT, info->version, info->os_tag,
+ info->cmpt_tag, info->cpu_freq_mhz, info->cpu_scale[0],
+ info->cpu_scale[MAX_OPP - 1], info->ddr_freq_mhz,
+ info->ddr_size_MB, info->video_core_freq_mhz);
+
+ /*
+	 * Get the peer log pointer; only the offset is needed. Also keep a
+	 * record of the log buffer information, which is used for validation
+	 * before dumping, in case the BAR2 memory has been corrupted.
+ */
+ vk->peerlog_off = offset;
+ memcpy_fromio(&vk->peerlog_info, vk->bar[BAR_2] + vk->peerlog_off,
+ sizeof(vk->peerlog_info));
+ dev_dbg(dev, "Peer log: Size 0x%x(0x%x), [Rd Wr] = [%d %d]\n",
+ vk->peerlog_info.buf_size,
+ vk->peerlog_info.mask,
+ vk->peerlog_info.rd_idx,
+ vk->peerlog_info.wr_idx);
+}
+
+static void bcm_vk_get_proc_mon_info(struct bcm_vk *vk)
+{
+ struct device *dev = &vk->pdev->dev;
+ struct bcm_vk_proc_mon_info *mon = &vk->proc_mon_info;
+ u32 num, entry_size, offset, buf_size;
+ u8 *dst;
+
+ /* calculate offset which is based on peerlog offset */
+ buf_size = vkread32(vk, BAR_2,
+ vk->peerlog_off
+ + offsetof(struct bcm_vk_peer_log, buf_size));
+ offset = vk->peerlog_off + sizeof(struct bcm_vk_peer_log)
+ + buf_size;
+
+ /* first read the num and entry size */
+ num = vkread32(vk, BAR_2, offset);
+ entry_size = vkread32(vk, BAR_2, offset + sizeof(num));
+
+ /* check for max allowed */
+ if (num > BCM_VK_PROC_MON_MAX) {
+		dev_err(dev, "Process monitoring entry count %d exceeds max %d\n",
+ num, BCM_VK_PROC_MON_MAX);
+ return;
+ }
+ mon->num = num;
+ mon->entry_size = entry_size;
+
+ vk->proc_mon_off = offset;
+
+	/* read it once to capture the static info */
+ dst = (u8 *)&mon->entries[0];
+ offset += sizeof(num) + sizeof(entry_size);
+ memcpy_fromio(dst, vk->bar[BAR_2] + offset, num * entry_size);
+}
+
static int bcm_vk_sync_card_info(struct bcm_vk *vk)
{
u32 rdy_marker = vkread32(vk, BAR_1, VK_BAR1_MSGQ_DEF_RDY);
@@ -193,6 +273,13 @@ static int bcm_vk_sync_card_info(struct bcm_vk *vk)
vkwrite32(vk, nr_scratch_pages * PAGE_SIZE, BAR_1,
VK_BAR1_SCRATCH_SZ_ADDR);
}
+
+ /* get static card info, only need to read once */
+ bcm_vk_get_card_info(vk);
+
+ /* get the proc mon info once */
+ bcm_vk_get_proc_mon_info(vk);
+
return 0;
}
--
2.17.1
Add message support in order to communicate with the VK card
via message queues.
This info is also used for debug purposes via collection of logs
through direct reads of BAR space and via sysfs access (in a
follow-on commit).
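Each message carries a transport id that multiplexes the target queue
and a driver-assigned message id. The sketch below restates the packing
used by the get/set helpers in this patch (queue number in the low
4 bits, a 12-bit message id above it); it is an illustration, not
driver code.

    /* Illustration only: trans_id packing used by this patch. */
    #include <stdint.h>

    #define VK_Q_SHIFT      4
    #define VK_Q_MASK       0xF     /* queue number: low 4 bits */
    #define VK_ID_MASK      0xFFF   /* message id: 12 bits above */

    static uint16_t pack_trans_id(uint16_t msg_id, uint16_t q_num)
    {
            return (uint16_t)(((msg_id & VK_ID_MASK) << VK_Q_SHIFT) |
                              (q_num & VK_Q_MASK));
    }

    static uint16_t trans_id_q(uint16_t trans_id)
    {
            return trans_id & VK_Q_MASK;
    }

    static uint16_t trans_id_msg_id(uint16_t trans_id)
    {
            return (trans_id >> VK_Q_SHIFT) & VK_ID_MASK;
    }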
Co-developed-by: Desmond Yan <[email protected]>
Signed-off-by: Desmond Yan <[email protected]>
Signed-off-by: Scott Branden <[email protected]>
---
drivers/misc/bcm-vk/Makefile | 3 +-
drivers/misc/bcm-vk/bcm_vk.h | 119 +++
drivers/misc/bcm-vk/bcm_vk_dev.c | 302 +++++++-
drivers/misc/bcm-vk/bcm_vk_msg.c | 1151 ++++++++++++++++++++++++++++++
drivers/misc/bcm-vk/bcm_vk_msg.h | 126 ++++
drivers/misc/bcm-vk/bcm_vk_sg.c | 275 +++++++
drivers/misc/bcm-vk/bcm_vk_sg.h | 61 ++
7 files changed, 2034 insertions(+), 3 deletions(-)
create mode 100644 drivers/misc/bcm-vk/bcm_vk_sg.c
create mode 100644 drivers/misc/bcm-vk/bcm_vk_sg.h
diff --git a/drivers/misc/bcm-vk/Makefile b/drivers/misc/bcm-vk/Makefile
index a2ae79858409..79b4e365c9e6 100644
--- a/drivers/misc/bcm-vk/Makefile
+++ b/drivers/misc/bcm-vk/Makefile
@@ -6,5 +6,6 @@
obj-$(CONFIG_BCM_VK) += bcm_vk.o
bcm_vk-objs := \
bcm_vk_dev.o \
- bcm_vk_msg.o
+ bcm_vk_msg.o \
+ bcm_vk_sg.o
diff --git a/drivers/misc/bcm-vk/bcm_vk.h b/drivers/misc/bcm-vk/bcm_vk.h
index 452333a32604..775d892f13f8 100644
--- a/drivers/misc/bcm-vk/bcm_vk.h
+++ b/drivers/misc/bcm-vk/bcm_vk.h
@@ -6,11 +6,13 @@
#ifndef BCM_VK_H
#define BCM_VK_H
+#include <linux/atomic.h>
#include <linux/firmware.h>
#include <linux/kref.h>
#include <linux/miscdevice.h>
#include <linux/mutex.h>
#include <linux/pci.h>
+#include <linux/poll.h>
#include <linux/sched/signal.h>
#include <linux/uaccess.h>
#include <uapi/linux/misc/bcm_vk.h>
@@ -93,14 +95,49 @@
#define MAJOR_SOC_REV(_chip_id) (((_chip_id) >> 20) & 0xf)
#define BAR_CARD_TEMPERATURE 0x45c
+/* defines for all temperature sensors */
+#define BCM_VK_TEMP_FIELD_MASK 0xff
+#define BCM_VK_CPU_TEMP_SHIFT 0
+#define BCM_VK_DDR0_TEMP_SHIFT 8
+#define BCM_VK_DDR1_TEMP_SHIFT 16
#define BAR_CARD_VOLTAGE 0x460
+/* defines for voltage rail conversion */
+#define BCM_VK_VOLT_RAIL_MASK 0xffff
+#define BCM_VK_3P3_VOLT_REG_SHIFT 16
#define BAR_CARD_ERR_LOG 0x464
+/* Error log register bit definition - register for error alerts */
+#define ERR_LOG_UECC BIT(0)
+#define ERR_LOG_SSIM_BUSY BIT(1)
+#define ERR_LOG_AFBC_BUSY BIT(2)
+#define ERR_LOG_HIGH_TEMP_ERR BIT(3)
+#define ERR_LOG_WDOG_TIMEOUT BIT(4)
+#define ERR_LOG_SYS_FAULT BIT(5)
+#define ERR_LOG_RAMDUMP BIT(6)
+#define ERR_LOG_MEM_ALLOC_FAIL BIT(8)
+#define ERR_LOG_LOW_TEMP_WARN BIT(9)
+#define ERR_LOG_ECC BIT(10)
+/* Alert bit definitions detected on host */
+#define ERR_LOG_HOST_INTF_V_FAIL BIT(13)
+#define ERR_LOG_HOST_HB_FAIL BIT(14)
+#define ERR_LOG_HOST_PCIE_DWN BIT(15)
#define BAR_CARD_ERR_MEM 0x468
+/* defines for mem err, all fields have same width */
+#define BCM_VK_MEM_ERR_FIELD_MASK 0xff
+#define BCM_VK_ECC_MEM_ERR_SHIFT 0
+#define BCM_VK_UECC_MEM_ERR_SHIFT 8
+/* threshold of event occurrences before logs start to come out */
+#define BCM_VK_ECC_THRESHOLD 10
+#define BCM_VK_UECC_THRESHOLD 1
#define BAR_CARD_PWR_AND_THRE 0x46c
+/* defines for power and temp threshold, all fields have same width */
+#define BCM_VK_PWR_AND_THRE_FIELD_MASK 0xff
+#define BCM_VK_LOW_TEMP_THRE_SHIFT 0
+#define BCM_VK_HIGH_TEMP_THRE_SHIFT 8
+#define BCM_VK_PWR_STATE_SHIFT 16
#define BAR_CARD_STATIC_INFO 0x470
@@ -143,6 +180,11 @@
#define BAR_FIRMWARE_TAG_SIZE 50
#define FIRMWARE_STATUS_PRE_INIT_DONE 0x1f
+/* VK MSG_ID defines */
+#define VK_MSG_ID_BITMAP_SIZE 4096
+#define VK_MSG_ID_BITMAP_MASK (VK_MSG_ID_BITMAP_SIZE - 1)
+#define VK_MSG_ID_OVERFLOW 0xffff
+
/*
* BAR1
*/
@@ -197,6 +239,10 @@
/* VK device supports a maximum of 3 bars */
#define MAX_BAR 3
+/* default number of msg blk for inband SGL */
+#define BCM_VK_DEF_IB_SGL_BLK_LEN 16
+#define BCM_VK_IB_SGL_BLK_MAX 24
+
enum pci_barno {
BAR_0 = 0,
BAR_1,
@@ -265,9 +311,27 @@ struct bcm_vk_proc_mon_info {
struct bcm_vk_proc_mon_entry_t entries[BCM_VK_PROC_MON_MAX];
};
+struct bcm_vk_hb_ctrl {
+ struct timer_list timer;
+ u32 last_uptime;
+ u32 lost_cnt;
+};
+
+struct bcm_vk_alert {
+ u16 flags;
+ u16 notfs;
+};
+
+/* some alert counters that the driver will keep track of */
+struct bcm_vk_alert_cnts {
+ u16 ecc;
+ u16 uecc;
+};
+
struct bcm_vk {
struct pci_dev *pdev;
void __iomem *bar[MAX_BAR];
+ int num_irqs;
struct bcm_vk_card_info card_info;
struct bcm_vk_proc_mon_info proc_mon_info;
@@ -281,9 +345,17 @@ struct bcm_vk {
/* Reference-counting to handle file operations */
struct kref kref;
+ spinlock_t msg_id_lock; /* Spinlock for msg_id */
+ u16 msg_id;
+ DECLARE_BITMAP(bmap, VK_MSG_ID_BITMAP_SIZE);
spinlock_t ctx_lock; /* Spinlock for component context */
struct bcm_vk_ctx ctx[VK_CMPT_CTX_MAX];
struct bcm_vk_ht_entry pid_ht[VK_PID_HT_SZ];
+	pid_t reset_pid; /* process that issued the reset */
+
+ atomic_t msgq_inited; /* indicate if info has been synced with vk */
+ struct bcm_vk_msg_chan to_v_msg_chan;
+ struct bcm_vk_msg_chan to_h_msg_chan;
struct workqueue_struct *wq_thread;
struct work_struct wq_work; /* work queue for deferred job */
@@ -292,6 +364,15 @@ struct bcm_vk {
dma_addr_t tdma_addr; /* test dma segment bus addr */
struct notifier_block panic_nb;
+ u32 ib_sgl_size; /* size allocated for inband sgl insertion */
+
+ /* heart beat mechanism control structure */
+ struct bcm_vk_hb_ctrl hb_ctrl;
+ /* house-keeping variable of error logs */
+ spinlock_t host_alert_lock; /* protection to access host_alert struct */
+ struct bcm_vk_alert host_alert;
+ struct bcm_vk_alert peer_alert; /* bits set by the card */
+ struct bcm_vk_alert_cnts alert_cnts;
/* offset of the peer log control in BAR2 */
u32 peerlog_off;
@@ -304,8 +385,26 @@ struct bcm_vk {
enum bcm_vk_wq_offload_flags {
BCM_VK_WQ_DWNLD_PEND = 0,
BCM_VK_WQ_DWNLD_AUTO = 1,
+ BCM_VK_WQ_NOTF_PEND = 2,
};
+/* a macro to get an individual field with mask and shift */
+#define BCM_VK_EXTRACT_FIELD(_field, _reg, _mask, _shift) \
+ (_field = (((_reg) >> (_shift)) & (_mask)))
+
+struct bcm_vk_entry {
+ const u32 mask;
+ const u32 exp_val;
+ const char *str;
+};
+
+/* alerts that could be generated from peer */
+#define BCM_VK_PEER_ERR_NUM 10
+extern struct bcm_vk_entry const bcm_vk_peer_err[BCM_VK_PEER_ERR_NUM];
+/* alerts detected by the host */
+#define BCM_VK_HOST_ERR_NUM 3
+extern struct bcm_vk_entry const bcm_vk_host_err[BCM_VK_HOST_ERR_NUM];
+
/*
* check if PCIe interface is down on read. Use it when it is
* certain that _val should never be all ones.
@@ -352,8 +451,28 @@ static inline bool bcm_vk_msgq_marker_valid(struct bcm_vk *vk)
}
int bcm_vk_open(struct inode *inode, struct file *p_file);
+ssize_t bcm_vk_read(struct file *p_file, char __user *buf, size_t count,
+ loff_t *f_pos);
+ssize_t bcm_vk_write(struct file *p_file, const char __user *buf,
+ size_t count, loff_t *f_pos);
+__poll_t bcm_vk_poll(struct file *p_file, struct poll_table_struct *wait);
int bcm_vk_release(struct inode *inode, struct file *p_file);
void bcm_vk_release_data(struct kref *kref);
+irqreturn_t bcm_vk_msgq_irqhandler(int irq, void *dev_id);
+irqreturn_t bcm_vk_notf_irqhandler(int irq, void *dev_id);
+int bcm_vk_msg_init(struct bcm_vk *vk);
+void bcm_vk_msg_remove(struct bcm_vk *vk);
+int bcm_vk_sync_msgq(struct bcm_vk *vk, bool force_sync);
+void bcm_vk_blk_drv_access(struct bcm_vk *vk);
+s32 bcm_to_h_msg_dequeue(struct bcm_vk *vk);
+int bcm_vk_send_shutdown_msg(struct bcm_vk *vk, u32 shut_type,
+ const pid_t pid, const u32 q_num);
+void bcm_to_v_q_doorbell(struct bcm_vk *vk, u32 q_num, u32 db_val);
int bcm_vk_auto_load_all_images(struct bcm_vk *vk);
+void bcm_vk_hb_init(struct bcm_vk *vk);
+void bcm_vk_hb_deinit(struct bcm_vk *vk);
+void bcm_vk_handle_notf(struct bcm_vk *vk);
+bool bcm_vk_drv_access_ok(struct bcm_vk *vk);
+void bcm_vk_set_host_alert(struct bcm_vk *vk, u32 bit_mask);
#endif
diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
index 6d476f1ac3c4..4727d7617153 100644
--- a/drivers/misc/bcm-vk/bcm_vk_dev.c
+++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
@@ -8,6 +8,7 @@
#include <linux/firmware.h>
#include <linux/fs.h>
#include <linux/idr.h>
+#include <linux/interrupt.h>
#include <linux/kref.h>
#include <linux/module.h>
#include <linux/mutex.h>
@@ -102,6 +103,51 @@ static uint nr_scratch_pages = VK_BAR1_SCRATCH_DEF_NR_PAGES;
module_param(nr_scratch_pages, uint, 0444);
MODULE_PARM_DESC(nr_scratch_pages,
"Number of pre allocated DMAable coherent pages.\n");
+static uint nr_ib_sgl_blk = BCM_VK_DEF_IB_SGL_BLK_LEN;
+module_param(nr_ib_sgl_blk, uint, 0444);
+MODULE_PARM_DESC(nr_ib_sgl_blk,
+ "Number of in-band msg blks for short SGL.\n");
+
+/*
+ * alerts that could be generated from peer
+ */
+const struct bcm_vk_entry bcm_vk_peer_err[BCM_VK_PEER_ERR_NUM] = {
+ {ERR_LOG_UECC, ERR_LOG_UECC, "uecc"},
+ {ERR_LOG_SSIM_BUSY, ERR_LOG_SSIM_BUSY, "ssim_busy"},
+ {ERR_LOG_AFBC_BUSY, ERR_LOG_AFBC_BUSY, "afbc_busy"},
+ {ERR_LOG_HIGH_TEMP_ERR, ERR_LOG_HIGH_TEMP_ERR, "high_temp"},
+ {ERR_LOG_WDOG_TIMEOUT, ERR_LOG_WDOG_TIMEOUT, "wdog_timeout"},
+ {ERR_LOG_SYS_FAULT, ERR_LOG_SYS_FAULT, "sys_fault"},
+ {ERR_LOG_RAMDUMP, ERR_LOG_RAMDUMP, "ramdump"},
+ {ERR_LOG_MEM_ALLOC_FAIL, ERR_LOG_MEM_ALLOC_FAIL, "malloc_fail warn"},
+ {ERR_LOG_LOW_TEMP_WARN, ERR_LOG_LOW_TEMP_WARN, "low_temp warn"},
+ {ERR_LOG_ECC, ERR_LOG_ECC, "ecc"},
+};
+
+/* alerts detected by the host */
+const struct bcm_vk_entry bcm_vk_host_err[BCM_VK_HOST_ERR_NUM] = {
+ {ERR_LOG_HOST_PCIE_DWN, ERR_LOG_HOST_PCIE_DWN, "PCIe_down"},
+ {ERR_LOG_HOST_HB_FAIL, ERR_LOG_HOST_HB_FAIL, "hb_fail"},
+ {ERR_LOG_HOST_INTF_V_FAIL, ERR_LOG_HOST_INTF_V_FAIL, "intf_ver_fail"},
+};
+
+irqreturn_t bcm_vk_notf_irqhandler(int irq, void *dev_id)
+{
+ struct bcm_vk *vk = dev_id;
+
+ if (!bcm_vk_drv_access_ok(vk)) {
+ dev_err(&vk->pdev->dev,
+ "Interrupt %d received when msgq not inited\n", irq);
+ goto skip_schedule_work;
+ }
+
+ /* if notification is not pending, set bit and schedule work */
+ if (test_and_set_bit(BCM_VK_WQ_NOTF_PEND, vk->wq_offload) == 0)
+ queue_work(vk->wq_thread, &vk->wq_work);
+
+skip_schedule_work:
+ return IRQ_HANDLED;
+}
static int bcm_vk_intf_ver_chk(struct bcm_vk *vk)
{
@@ -126,6 +172,7 @@ static int bcm_vk_intf_ver_chk(struct bcm_vk *vk)
dev_err(dev,
"Intf major.minor=%d.%d rejected - drv %d.%d\n",
major, minor, SEMANTIC_MAJOR, SEMANTIC_MINOR);
+ bcm_vk_set_host_alert(vk, ERR_LOG_HOST_INTF_V_FAIL);
ret = -EPFNOSUPPORT;
} else {
dev_dbg(dev,
@@ -135,6 +182,149 @@ static int bcm_vk_intf_ver_chk(struct bcm_vk *vk)
return ret;
}
+static void bcm_vk_log_notf(struct bcm_vk *vk,
+ struct bcm_vk_alert *alert,
+ struct bcm_vk_entry const *entry_tab,
+ const u32 table_size)
+{
+ u32 i;
+ u32 masked_val, latched_val;
+ struct bcm_vk_entry const *entry;
+ u32 reg;
+ u16 ecc_mem_err, uecc_mem_err;
+ struct device *dev = &vk->pdev->dev;
+
+ for (i = 0; i < table_size; i++) {
+ entry = &entry_tab[i];
+ masked_val = entry->mask & alert->notfs;
+ latched_val = entry->mask & alert->flags;
+
+ if (masked_val == ERR_LOG_UECC) {
+ /*
+			 * if the count differs from the stored one and is
+			 * greater than the threshold, log it.
+ */
+ reg = vkread32(vk, BAR_0, BAR_CARD_ERR_MEM);
+ BCM_VK_EXTRACT_FIELD(uecc_mem_err, reg,
+ BCM_VK_MEM_ERR_FIELD_MASK,
+ BCM_VK_UECC_MEM_ERR_SHIFT);
+ if ((uecc_mem_err != vk->alert_cnts.uecc) &&
+ (uecc_mem_err >= BCM_VK_UECC_THRESHOLD))
+ dev_info(dev,
+ "ALERT! %s.%d uecc RAISED - ErrCnt %d\n",
+ DRV_MODULE_NAME, vk->devid,
+ uecc_mem_err);
+ vk->alert_cnts.uecc = uecc_mem_err;
+ } else if (masked_val == ERR_LOG_ECC) {
+ reg = vkread32(vk, BAR_0, BAR_CARD_ERR_MEM);
+ BCM_VK_EXTRACT_FIELD(ecc_mem_err, reg,
+ BCM_VK_MEM_ERR_FIELD_MASK,
+ BCM_VK_ECC_MEM_ERR_SHIFT);
+ if ((ecc_mem_err != vk->alert_cnts.ecc) &&
+ (ecc_mem_err >= BCM_VK_ECC_THRESHOLD))
+ dev_info(dev, "ALERT! %s.%d ecc RAISED - ErrCnt %d\n",
+ DRV_MODULE_NAME, vk->devid,
+ ecc_mem_err);
+ vk->alert_cnts.ecc = ecc_mem_err;
+ } else if (masked_val != latched_val) {
+ /* print a log as info */
+ dev_info(dev, "ALERT! %s.%d %s %s\n",
+ DRV_MODULE_NAME, vk->devid, entry->str,
+ masked_val ? "RAISED" : "CLEARED");
+ }
+ }
+}
+
+static void bcm_vk_dump_peer_log(struct bcm_vk *vk)
+{
+ struct bcm_vk_peer_log log;
+ struct bcm_vk_peer_log *log_info = &vk->peerlog_info;
+ char loc_buf[BCM_VK_PEER_LOG_LINE_MAX];
+ int cnt;
+ struct device *dev = &vk->pdev->dev;
+ unsigned int data_offset;
+
+ memcpy_fromio(&log, vk->bar[BAR_2] + vk->peerlog_off, sizeof(log));
+
+ dev_dbg(dev, "Peer PANIC: Size 0x%x(0x%x), [Rd Wr] = [%d %d]\n",
+ log.buf_size, log.mask, log.rd_idx, log.wr_idx);
+
+ /* perform range checking for rd/wr idx */
+ if ((log.rd_idx > log_info->mask) ||
+ (log.wr_idx > log_info->mask) ||
+ (log.buf_size != log_info->buf_size) ||
+ (log.mask != log_info->mask)) {
+ dev_err(dev,
+ "Corrupted Ptrs: Size 0x%x(0x%x) Mask 0x%x(0x%x) [Rd Wr] = [%d %d], skip log dump.\n",
+ log_info->buf_size, log.buf_size,
+ log_info->mask, log.mask,
+ log.rd_idx, log.wr_idx);
+ return;
+ }
+
+ cnt = 0;
+ data_offset = vk->peerlog_off + sizeof(struct bcm_vk_peer_log);
+ loc_buf[BCM_VK_PEER_LOG_LINE_MAX - 1] = '\0';
+ while (log.rd_idx != log.wr_idx) {
+ loc_buf[cnt] = vkread8(vk, BAR_2, data_offset + log.rd_idx);
+
+ if ((loc_buf[cnt] == '\0') ||
+ (cnt == (BCM_VK_PEER_LOG_LINE_MAX - 1))) {
+ dev_err(dev, "%s", loc_buf);
+ cnt = 0;
+ } else {
+ cnt++;
+ }
+ log.rd_idx = (log.rd_idx + 1) & log.mask;
+ }
+ /* update rd idx at the end */
+ vkwrite32(vk, log.rd_idx, BAR_2,
+ vk->peerlog_off + offsetof(struct bcm_vk_peer_log, rd_idx));
+}
+
+void bcm_vk_handle_notf(struct bcm_vk *vk)
+{
+ u32 reg;
+ struct bcm_vk_alert alert;
+ bool intf_down;
+ unsigned long flags;
+
+ /* handle peer alerts and then locally detected ones */
+ reg = vkread32(vk, BAR_0, BAR_CARD_ERR_LOG);
+ intf_down = BCM_VK_INTF_IS_DOWN(reg);
+ if (!intf_down) {
+ vk->peer_alert.notfs = reg;
+ bcm_vk_log_notf(vk, &vk->peer_alert, bcm_vk_peer_err,
+ ARRAY_SIZE(bcm_vk_peer_err));
+ vk->peer_alert.flags = vk->peer_alert.notfs;
+ } else {
+ /* turn off access */
+ bcm_vk_blk_drv_access(vk);
+ }
+
+ /* check and make copy of alert with lock and then free lock */
+ spin_lock_irqsave(&vk->host_alert_lock, flags);
+ if (intf_down)
+ vk->host_alert.notfs |= ERR_LOG_HOST_PCIE_DWN;
+
+ alert = vk->host_alert;
+ vk->host_alert.flags = vk->host_alert.notfs;
+ spin_unlock_irqrestore(&vk->host_alert_lock, flags);
+
+ /* call display with copy */
+ bcm_vk_log_notf(vk, &alert, bcm_vk_host_err,
+ ARRAY_SIZE(bcm_vk_host_err));
+
+ /*
+	 * If it is a sys fault or heartbeat timeout, extract the log
+	 * messages from the card so that we know what the last fault was.
+ */
+ if (!intf_down &&
+ ((vk->host_alert.flags & ERR_LOG_HOST_HB_FAIL) ||
+ (vk->peer_alert.flags & ERR_LOG_SYS_FAULT)))
+ bcm_vk_dump_peer_log(vk);
+}
+
static inline int bcm_vk_wait(struct bcm_vk *vk, enum pci_barno bar,
u64 offset, u32 mask, u32 value,
unsigned long timeout_ms)
@@ -283,6 +473,31 @@ static int bcm_vk_sync_card_info(struct bcm_vk *vk)
return 0;
}
+void bcm_vk_blk_drv_access(struct bcm_vk *vk)
+{
+ int i;
+
+ /*
+ * kill all the apps
+ */
+ spin_lock(&vk->ctx_lock);
+
+ /* set msgq_inited to 0 so that all rd/wr will be blocked */
+ atomic_set(&vk->msgq_inited, 0);
+
+ for (i = 0; i < VK_PID_HT_SZ; i++) {
+ struct bcm_vk_ctx *ctx;
+
+ list_for_each_entry(ctx, &vk->pid_ht[i].head, node) {
+ dev_dbg(&vk->pdev->dev,
+ "Send kill signal to pid %d\n",
+ ctx->pid);
+ kill_pid(find_vpid(ctx->pid), SIGKILL, 1);
+ }
+ }
+ spin_unlock(&vk->ctx_lock);
+}
+
static void bcm_vk_buf_notify(struct bcm_vk *vk, void *bufp,
dma_addr_t host_buf_addr, u32 buf_size)
{
@@ -500,6 +715,17 @@ static int bcm_vk_load_image_by_type(struct bcm_vk *vk, u32 load_type,
goto err_firmware_out;
}
+ /*
+ * Next, initialize Message Q if we are loading boot2.
+ * Do a force sync
+ */
+ ret = bcm_vk_sync_msgq(vk, true);
+ if (ret) {
+ dev_err(dev, "Boot2 Error reading comm msg Q info\n");
+ ret = -EIO;
+ goto err_firmware_out;
+ }
+
/* sync & channel other info */
ret = bcm_vk_sync_card_info(vk);
if (ret) {
@@ -650,12 +876,20 @@ static int bcm_vk_trigger_autoload(struct bcm_vk *vk)
}
/*
- * deferred work queue for auto download.
+ * deferred work queue for draining and auto download.
*/
static void bcm_vk_wq_handler(struct work_struct *work)
{
struct bcm_vk *vk = container_of(work, struct bcm_vk, wq_work);
+ struct device *dev = &vk->pdev->dev;
+ s32 ret;
+ /* check wq offload bit map to perform various operations */
+ if (test_bit(BCM_VK_WQ_NOTF_PEND, vk->wq_offload)) {
+		/* clear the notification bit right away */
+ clear_bit(BCM_VK_WQ_NOTF_PEND, vk->wq_offload);
+ bcm_vk_handle_notf(vk);
+ }
if (test_bit(BCM_VK_WQ_DWNLD_AUTO, vk->wq_offload)) {
bcm_vk_auto_load_all_images(vk);
@@ -666,6 +900,14 @@ static void bcm_vk_wq_handler(struct work_struct *work)
clear_bit(BCM_VK_WQ_DWNLD_AUTO, vk->wq_offload);
clear_bit(BCM_VK_WQ_DWNLD_PEND, vk->wq_offload);
}
+
+ /* next, try to drain */
+ ret = bcm_to_h_msg_dequeue(vk);
+
+ if (ret == 0)
+ dev_dbg(dev, "Spurious trigger for workqueue\n");
+ else if (ret < 0)
+ bcm_vk_blk_drv_access(vk);
}
static long bcm_vk_load_image(struct bcm_vk *vk,
@@ -814,6 +1056,9 @@ static long bcm_vk_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
static const struct file_operations bcm_vk_fops = {
.owner = THIS_MODULE,
.open = bcm_vk_open,
+ .read = bcm_vk_read,
+ .write = bcm_vk_write,
+ .poll = bcm_vk_poll,
.release = bcm_vk_release,
.unlocked_ioctl = bcm_vk_ioctl,
};
@@ -846,6 +1091,12 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
return -ENOMEM;
kref_init(&vk->kref);
+ if (nr_ib_sgl_blk > BCM_VK_IB_SGL_BLK_MAX) {
+ dev_warn(dev, "Inband SGL blk %d limited to max %d\n",
+ nr_ib_sgl_blk, BCM_VK_IB_SGL_BLK_MAX);
+ nr_ib_sgl_blk = BCM_VK_IB_SGL_BLK_MAX;
+ }
+ vk->ib_sgl_size = nr_ib_sgl_blk * VK_MSGQ_BLK_SIZE;
mutex_init(&vk->mutex);
err = pci_enable_device(pdev);
@@ -909,11 +1160,35 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
}
}
+ for (vk->num_irqs = 0;
+ vk->num_irqs < VK_MSIX_MSGQ_MAX;
+ vk->num_irqs++) {
+ err = devm_request_irq(dev, pci_irq_vector(pdev, vk->num_irqs),
+ bcm_vk_msgq_irqhandler,
+ IRQF_SHARED, DRV_MODULE_NAME, vk);
+ if (err) {
+ dev_err(dev, "failed to request msgq IRQ %d for MSIX %d\n",
+ pdev->irq + vk->num_irqs, vk->num_irqs + 1);
+ goto err_irq;
+ }
+ }
+ /* one irq for notification from VK */
+ err = devm_request_irq(dev, pci_irq_vector(pdev, vk->num_irqs),
+ bcm_vk_notf_irqhandler,
+ IRQF_SHARED, DRV_MODULE_NAME, vk);
+ if (err) {
+ dev_err(dev, "failed to request notf IRQ %d for MSIX %d\n",
+ pdev->irq + vk->num_irqs, vk->num_irqs + 1);
+ goto err_irq;
+ }
+ vk->num_irqs++;
+
id = ida_simple_get(&bcm_vk_ida, 0, 0, GFP_KERNEL);
if (id < 0) {
err = id;
dev_err(dev, "unable to get id\n");
- goto err_iounmap;
+ goto err_irq;
}
vk->devid = id;
@@ -943,6 +1218,12 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_misc_deregister;
}
+ err = bcm_vk_msg_init(vk);
+ if (err) {
+ dev_err(dev, "failed to init msg queue info\n");
+ goto err_destroy_workqueue;
+ }
+
/* sync other info */
bcm_vk_sync_card_info(vk);
@@ -971,6 +1252,9 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
}
}
+ /* enable hb */
+ bcm_vk_hb_init(vk);
+
dev_dbg(dev, "BCM-VK:%u created\n", id);
return 0;
@@ -992,6 +1276,13 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
err_ida_remove:
ida_simple_remove(&bcm_vk_ida, id);
+err_irq:
+ for (i = 0; i < vk->num_irqs; i++)
+ devm_free_irq(dev, pci_irq_vector(pdev, i), vk);
+
+ pci_disable_msix(pdev);
+ pci_disable_msi(pdev);
+
err_iounmap:
for (i = 0; i < MAX_BAR; i++) {
if (vk->bar[i])
@@ -1030,6 +1321,8 @@ static void bcm_vk_remove(struct pci_dev *pdev)
struct bcm_vk *vk = pci_get_drvdata(pdev);
struct miscdevice *misc_device = &vk->miscdev;
+ bcm_vk_hb_deinit(vk);
+
/*
* Trigger a reset to card and wait enough time for UCODE to rerun,
* which re-initialize the card into its default state.
@@ -1053,6 +1346,11 @@ static void bcm_vk_remove(struct pci_dev *pdev)
kfree(misc_device->name);
ida_simple_remove(&bcm_vk_ida, vk->devid);
}
+ for (i = 0; i < vk->num_irqs; i++)
+ devm_free_irq(&pdev->dev, pci_irq_vector(pdev, i), vk);
+
+ pci_disable_msix(pdev);
+ pci_disable_msi(pdev);
cancel_work_sync(&vk->wq_work);
destroy_workqueue(vk->wq_thread);
diff --git a/drivers/misc/bcm-vk/bcm_vk_msg.c b/drivers/misc/bcm-vk/bcm_vk_msg.c
index eb261fb87c9d..3f2876484b7a 100644
--- a/drivers/misc/bcm-vk/bcm_vk_msg.c
+++ b/drivers/misc/bcm-vk/bcm_vk_msg.c
@@ -3,8 +3,195 @@
* Copyright 2018-2020 Broadcom.
*/
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/hash.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/poll.h>
+#include <linux/sizes.h>
+#include <linux/spinlock.h>
+#include <linux/timer.h>
+
#include "bcm_vk.h"
#include "bcm_vk_msg.h"
+#include "bcm_vk_sg.h"
+
+/* functions to manipulate the transport id in msg block */
+#define BCM_VK_MSG_Q_SHIFT 4
+#define BCM_VK_MSG_Q_MASK 0xF
+#define BCM_VK_MSG_ID_MASK 0xFFF
+
+#define BCM_VK_DMA_DRAIN_MAX_MS 2000
+
+/* number x q_size will be the max number of msg processed per loop */
+#define BCM_VK_MSG_PROC_MAX_LOOP 2
+
+/* module parameter */
+static bool hb_mon = true;
+module_param(hb_mon, bool, 0444);
+MODULE_PARM_DESC(hb_mon, "Monitoring heartbeat continuously.\n");
+static int batch_log = 1;
+module_param(batch_log, int, 0444);
+MODULE_PARM_DESC(batch_log, "Max num of logs per batch operation.\n");
+
+static bool hb_mon_is_on(void)
+{
+ return hb_mon;
+}
+
+static u32 get_q_num(const struct vk_msg_blk *msg)
+{
+ return (msg->trans_id & BCM_VK_MSG_Q_MASK);
+}
+
+static void set_q_num(struct vk_msg_blk *msg, u32 val)
+{
+ msg->trans_id = (msg->trans_id & ~BCM_VK_MSG_Q_MASK) | val;
+}
+
+static u32 get_msg_id(const struct vk_msg_blk *msg)
+{
+ return ((msg->trans_id >> BCM_VK_MSG_Q_SHIFT) & BCM_VK_MSG_ID_MASK);
+}
+
+static void set_msg_id(struct vk_msg_blk *msg, u32 val)
+{
+ msg->trans_id = (val << BCM_VK_MSG_Q_SHIFT) | get_q_num(msg);
+}
+
+static u32 msgq_inc(const struct bcm_vk_sync_qinfo *qinfo, u32 idx, u32 inc)
+{
+ return ((idx + inc) & qinfo->q_mask);
+}
+
+static
+struct vk_msg_blk __iomem *msgq_blk_addr(const struct bcm_vk_sync_qinfo *qinfo,
+ u32 idx)
+{
+ return qinfo->q_start + (VK_MSGQ_BLK_SIZE * idx);
+}
+
+static u32 msgq_occupied(const struct bcm_vk_msgq __iomem *msgq,
+ const struct bcm_vk_sync_qinfo *qinfo)
+{
+ u32 wr_idx, rd_idx;
+
+ wr_idx = readl_relaxed(&msgq->wr_idx);
+ rd_idx = readl_relaxed(&msgq->rd_idx);
+
+ return ((wr_idx - rd_idx) & qinfo->q_mask);
+}
+
+static
+u32 msgq_avail_space(const struct bcm_vk_msgq __iomem *msgq,
+ const struct bcm_vk_sync_qinfo *qinfo)
+{
+ return (qinfo->q_size - msgq_occupied(msgq, qinfo) - 1);
+}
+
+/* number of retries when enqueue message fails before returning EAGAIN */
+#define BCM_VK_H2VK_ENQ_RETRY 10
+#define BCM_VK_H2VK_ENQ_RETRY_DELAY_MS 50
+
+bool bcm_vk_drv_access_ok(struct bcm_vk *vk)
+{
+ return (!!atomic_read(&vk->msgq_inited));
+}
+
+void bcm_vk_set_host_alert(struct bcm_vk *vk, u32 bit_mask)
+{
+ struct bcm_vk_alert *alert = &vk->host_alert;
+ unsigned long flags;
+
+	/* use irqsave version as this may be called inside timer interrupt */
+ spin_lock_irqsave(&vk->host_alert_lock, flags);
+ alert->notfs |= bit_mask;
+ spin_unlock_irqrestore(&vk->host_alert_lock, flags);
+
+ if (test_and_set_bit(BCM_VK_WQ_NOTF_PEND, vk->wq_offload) == 0)
+ queue_work(vk->wq_thread, &vk->wq_work);
+}
+
+/*
+ * Heartbeat related defines
+ * The heartbeat from the host is a last resort. If a stuck condition
+ * happens on the card, the firmware is supposed to detect it. Therefore,
+ * the heartbeat values used by the driver are more relaxed, and need to
+ * be bigger than the watchdog timeout on the card. The watchdog timeout
+ * on the card is 20s, with a jitter of 2s => 22s. We use a value of 27s
+ * here.
+ */
+#define BCM_VK_HB_TIMER_S 3
+#define BCM_VK_HB_TIMER_VALUE (BCM_VK_HB_TIMER_S * HZ)
+#define BCM_VK_HB_LOST_MAX (27 / BCM_VK_HB_TIMER_S)
+
+static void bcm_vk_hb_poll(struct timer_list *t)
+{
+ u32 uptime_s;
+ struct bcm_vk_hb_ctrl *hb = container_of(t, struct bcm_vk_hb_ctrl,
+ timer);
+ struct bcm_vk *vk = container_of(hb, struct bcm_vk, hb_ctrl);
+
+ if (bcm_vk_drv_access_ok(vk) && hb_mon_is_on()) {
+ /* read uptime from register and compare */
+ uptime_s = vkread32(vk, BAR_0, BAR_OS_UPTIME);
+
+ if (uptime_s == hb->last_uptime)
+ hb->lost_cnt++;
+ else /* reset to avoid accumulation */
+ hb->lost_cnt = 0;
+
+ dev_dbg(&vk->pdev->dev, "Last uptime %d current %d, lost %d\n",
+ hb->last_uptime, uptime_s, hb->lost_cnt);
+
+ /*
+		 * if the interface goes down without any activity, a value
+		 * of 0xFFFFFFFF will be continuously read, and detection
+		 * will eventually happen.
+ */
+ hb->last_uptime = uptime_s;
+ } else {
+ /* reset heart beat lost cnt */
+ hb->lost_cnt = 0;
+ }
+
+ /* next, check if heartbeat exceeds limit */
+ if (hb->lost_cnt > BCM_VK_HB_LOST_MAX) {
+ dev_err(&vk->pdev->dev, "Heartbeat Misses %d times, %d s!\n",
+ BCM_VK_HB_LOST_MAX,
+ BCM_VK_HB_LOST_MAX * BCM_VK_HB_TIMER_S);
+
+ bcm_vk_blk_drv_access(vk);
+ bcm_vk_set_host_alert(vk, ERR_LOG_HOST_HB_FAIL);
+ }
+ /* re-arm timer */
+ mod_timer(&hb->timer, jiffies + BCM_VK_HB_TIMER_VALUE);
+}
+
+void bcm_vk_hb_init(struct bcm_vk *vk)
+{
+ struct bcm_vk_hb_ctrl *hb = &vk->hb_ctrl;
+
+ timer_setup(&hb->timer, bcm_vk_hb_poll, 0);
+ mod_timer(&hb->timer, jiffies + BCM_VK_HB_TIMER_VALUE);
+}
+
+void bcm_vk_hb_deinit(struct bcm_vk *vk)
+{
+ struct bcm_vk_hb_ctrl *hb = &vk->hb_ctrl;
+
+ del_timer(&hb->timer);
+}
+
+static void bcm_vk_msgid_bitmap_clear(struct bcm_vk *vk,
+ unsigned int start,
+ unsigned int nbits)
+{
+ spin_lock(&vk->msg_id_lock);
+ bitmap_clear(vk->bmap, start, nbits);
+ spin_unlock(&vk->msg_id_lock);
+}
/*
* allocate a ctx per file struct
@@ -39,12 +226,47 @@ static struct bcm_vk_ctx *bcm_vk_get_ctx(struct bcm_vk *vk, const pid_t pid)
/* increase kref */
kref_get(&vk->kref);
+ /* clear counter */
+ atomic_set(&ctx->pend_cnt, 0);
+ atomic_set(&ctx->dma_cnt, 0);
+ init_waitqueue_head(&ctx->rd_wq);
+
all_in_use_exit:
spin_unlock(&vk->ctx_lock);
return ctx;
}
+static u16 bcm_vk_get_msg_id(struct bcm_vk *vk)
+{
+ u16 rc = VK_MSG_ID_OVERFLOW;
+ u16 test_bit_count = 0;
+
+ spin_lock(&vk->msg_id_lock);
+ while (test_bit_count < (VK_MSG_ID_BITMAP_SIZE - 1)) {
+ /*
+		 * The first time through this loop msg_id will be 0,
+		 * so the first id tested will be 1. We skip
+		 * VK_SIMPLEX_MSG_ID (0), which is reserved for one-way
+		 * host2vk communication.
+ */
+ vk->msg_id++;
+ if (vk->msg_id == VK_MSG_ID_BITMAP_SIZE)
+ vk->msg_id = 1;
+
+ if (test_bit(vk->msg_id, vk->bmap)) {
+ test_bit_count++;
+ continue;
+ }
+ rc = vk->msg_id;
+ bitmap_set(vk->bmap, vk->msg_id, 1);
+ break;
+ }
+ spin_unlock(&vk->msg_id_lock);
+
+ return rc;
+}
+
static int bcm_vk_free_ctx(struct bcm_vk *vk, struct bcm_vk_ctx *ctx)
{
u32 idx;
@@ -81,6 +303,630 @@ static int bcm_vk_free_ctx(struct bcm_vk *vk, struct bcm_vk_ctx *ctx)
return count;
}
+
+static void bcm_vk_free_wkent(struct device *dev, struct bcm_vk_wkent *entry)
+{
+ int proc_cnt;
+
+ bcm_vk_sg_free(dev, entry->dma, VK_DMA_MAX_ADDRS, &proc_cnt);
+ if (proc_cnt)
+ atomic_dec(&entry->ctx->dma_cnt);
+
+ kfree(entry->to_h_msg);
+ kfree(entry);
+}
+
+static void bcm_vk_drain_all_pend(struct device *dev,
+ struct bcm_vk_msg_chan *chan,
+ struct bcm_vk_ctx *ctx)
+{
+ u32 num;
+ struct bcm_vk_wkent *entry, *tmp;
+ struct bcm_vk *vk;
+ struct list_head del_q;
+
+ if (ctx)
+ vk = container_of(ctx->miscdev, struct bcm_vk, miscdev);
+
+ INIT_LIST_HEAD(&del_q);
+ spin_lock(&chan->pendq_lock);
+ for (num = 0; num < chan->q_nr; num++) {
+ list_for_each_entry_safe(entry, tmp, &chan->pendq[num], node) {
+ if ((!ctx) || (entry->ctx->idx == ctx->idx)) {
+ list_del(&entry->node);
+ list_add_tail(&entry->node, &del_q);
+ }
+ }
+ }
+ spin_unlock(&chan->pendq_lock);
+
+ /* batch clean up */
+ num = 0;
+ list_for_each_entry_safe(entry, tmp, &del_q, node) {
+ list_del(&entry->node);
+ num++;
+ if (ctx) {
+ struct vk_msg_blk *msg;
+ int bit_set;
+ bool responded;
+ u32 msg_id;
+
+			/* if it is a specific ctx, log any stuck entries */
+ msg = entry->to_v_msg;
+ msg_id = get_msg_id(msg);
+ bit_set = test_bit(msg_id, vk->bmap);
+ responded = entry->to_h_msg ? true : false;
+ if (num <= batch_log)
+ dev_info(dev,
+ "Drained: fid %u size %u msg 0x%x(seq-%x) ctx 0x%x[fd-%d] args:[0x%x 0x%x] resp %s, bmap %d\n",
+ msg->function_id, msg->size,
+ msg_id, entry->seq_num,
+ msg->context_id, entry->ctx->idx,
+ msg->cmd, msg->arg,
+ responded ? "T" : "F", bit_set);
+ if (responded)
+ atomic_dec(&ctx->pend_cnt);
+ else if (bit_set)
+ bcm_vk_msgid_bitmap_clear(vk, msg_id, 1);
+ }
+ bcm_vk_free_wkent(dev, entry);
+ }
+ if (num && ctx)
+ dev_info(dev, "Total drained items %d [fd-%d]\n",
+ num, ctx->idx);
+}
+
+/*
+ * Function to sync up the messages queue info that is provided by BAR1
+ */
+int bcm_vk_sync_msgq(struct bcm_vk *vk, bool force_sync)
+{
+ struct bcm_vk_msgq __iomem *msgq;
+ struct device *dev = &vk->pdev->dev;
+ u32 msgq_off;
+ u32 num_q;
+ struct bcm_vk_msg_chan *chan_list[] = {&vk->to_v_msg_chan,
+ &vk->to_h_msg_chan};
+ struct bcm_vk_msg_chan *chan;
+ int i, j;
+ int ret = 0;
+
+ /*
+ * If the driver is loaded at startup while the VK OS is not up
+ * yet, the msgq info may not be available until a later time.
+ * In this case, skip the sync; this function is expected to be
+ * called again.
+ */
+ if (!bcm_vk_msgq_marker_valid(vk)) {
+ dev_info(dev, "BAR1 msgq marker not initialized.\n");
+ return -EAGAIN;
+ }
+
+ msgq_off = vkread32(vk, BAR_1, VK_BAR1_MSGQ_CTRL_OFF);
+
+ /* each side is always half the total */
+ num_q = vkread32(vk, BAR_1, VK_BAR1_MSGQ_NR) / 2;
+ vk->to_v_msg_chan.q_nr = num_q;
+ vk->to_h_msg_chan.q_nr = num_q;
+
+ /* first msgq location */
+ msgq = vk->bar[BAR_1] + msgq_off;
+
+ /*
+ * if this function is called when it is already initialized,
+ * something is wrong
+ */
+ if (bcm_vk_drv_access_ok(vk) && !force_sync) {
+ dev_err(dev, "Msgq info already in sync\n");
+ return -EPERM;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(chan_list); i++) {
+ chan = chan_list[i];
+ memset(chan->sync_qinfo, 0, sizeof(chan->sync_qinfo));
+
+ for (j = 0; j < num_q; j++) {
+ struct bcm_vk_sync_qinfo *qinfo;
+ u32 msgq_start;
+ u32 msgq_size;
+ u32 msgq_nxt;
+ u32 msgq_db_offset, q_db_offset;
+
+ chan->msgq[j] = msgq;
+ msgq_start = readl_relaxed(&msgq->start);
+ msgq_size = readl_relaxed(&msgq->size);
+ msgq_nxt = readl_relaxed(&msgq->nxt);
+ msgq_db_offset = readl_relaxed(&msgq->db_offset);
+ q_db_offset = (msgq_db_offset & ((1 << DB_SHIFT) - 1));
+ if (q_db_offset == (~msgq_db_offset >> DB_SHIFT))
+ msgq_db_offset = q_db_offset;
+ else
+ /* fall back to default */
+ msgq_db_offset = VK_BAR0_Q_DB_BASE(j);
+
+ dev_info(dev,
+ "MsgQ[%d] type %d num %d, @ 0x%x, db_offset 0x%x rd_idx %d wr_idx %d, size %d, nxt 0x%x\n",
+ j,
+ readw_relaxed(&msgq->type),
+ readw_relaxed(&msgq->num),
+ msgq_start,
+ msgq_db_offset,
+ readl_relaxed(&msgq->rd_idx),
+ readl_relaxed(&msgq->wr_idx),
+ msgq_size,
+ msgq_nxt);
+
+ qinfo = &chan->sync_qinfo[j];
+ /* formulate and record static info */
+ qinfo->q_start = vk->bar[BAR_1] + msgq_start;
+ qinfo->q_size = msgq_size;
+ /* set low threshold as 50% or 1/2 */
+ qinfo->q_low = qinfo->q_size >> 1;
+ qinfo->q_mask = qinfo->q_size - 1;
+ qinfo->q_db_offset = msgq_db_offset;
+
+ msgq++;
+ }
+ }
+ atomic_set(&vk->msgq_inited, 1);
+
+ return ret;
+}
+
+static int bcm_vk_msg_chan_init(struct bcm_vk_msg_chan *chan)
+{
+ u32 i;
+
+ mutex_init(&chan->msgq_mutex);
+ spin_lock_init(&chan->pendq_lock);
+ for (i = 0; i < VK_MSGQ_MAX_NR; i++)
+ INIT_LIST_HEAD(&chan->pendq[i]);
+
+ return 0;
+}
+
+static void bcm_vk_append_pendq(struct bcm_vk_msg_chan *chan, u16 q_num,
+ struct bcm_vk_wkent *entry)
+{
+ struct bcm_vk_ctx *ctx;
+
+ spin_lock(&chan->pendq_lock);
+ list_add_tail(&entry->node, &chan->pendq[q_num]);
+ if (entry->to_h_msg) {
+ ctx = entry->ctx;
+ atomic_inc(&ctx->pend_cnt);
+ wake_up_interruptible(&ctx->rd_wq);
+ }
+ spin_unlock(&chan->pendq_lock);
+}
+
+static u32 bcm_vk_append_ib_sgl(struct bcm_vk *vk,
+ struct bcm_vk_wkent *entry,
+ struct _vk_data *data,
+ unsigned int num_planes)
+{
+ unsigned int i;
+ unsigned int item_cnt = 0;
+ struct device *dev = &vk->pdev->dev;
+ struct bcm_vk_msg_chan *chan = &vk->to_v_msg_chan;
+ struct vk_msg_blk *msg = &entry->to_v_msg[0];
+ struct bcm_vk_msgq __iomem *msgq;
+ struct bcm_vk_sync_qinfo *qinfo;
+ u32 ib_sgl_size = 0;
+ u8 *buf = (u8 *)&entry->to_v_msg[entry->to_v_blks];
+ u32 avail;
+ u32 q_num;
+
+ /* check if high watermark is hit, and if so, skip */
+ q_num = get_q_num(msg);
+ msgq = chan->msgq[q_num];
+ qinfo = &chan->sync_qinfo[q_num];
+ avail = msgq_avail_space(msgq, qinfo);
+ if (avail < qinfo->q_low) {
+ dev_dbg(dev, "Skip inserting inband SGL, [0x%x/0x%x]\n",
+ avail, qinfo->q_size);
+ return 0;
+ }
+
+ for (i = 0; i < num_planes; i++) {
+ if (data[i].address &&
+ (ib_sgl_size + data[i].size) <= vk->ib_sgl_size) {
+ item_cnt++;
+ memcpy(buf, entry->dma[i].sglist, data[i].size);
+ ib_sgl_size += data[i].size;
+ buf += data[i].size;
+ }
+ }
+
+ dev_dbg(dev, "Num %u sgl items appended, size 0x%x, room 0x%x\n",
+ item_cnt, ib_sgl_size, vk->ib_sgl_size);
+
+ /* round up size */
+ ib_sgl_size = (ib_sgl_size + VK_MSGQ_BLK_SIZE - 1)
+ >> VK_MSGQ_BLK_SZ_SHIFT;
+
+ return ib_sgl_size;
+}
+
+void bcm_to_v_q_doorbell(struct bcm_vk *vk, u32 q_num, u32 db_val)
+{
+ struct bcm_vk_msg_chan *chan = &vk->to_v_msg_chan;
+ struct bcm_vk_sync_qinfo *qinfo = &chan->sync_qinfo[q_num];
+
+ vkwrite32(vk, db_val, BAR_0, qinfo->q_db_offset);
+}
+
+static int bcm_to_v_msg_enqueue(struct bcm_vk *vk, struct bcm_vk_wkent *entry)
+{
+ static u32 seq_num;
+ struct bcm_vk_msg_chan *chan = &vk->to_v_msg_chan;
+ struct device *dev = &vk->pdev->dev;
+ struct vk_msg_blk *src = &entry->to_v_msg[0];
+
+ struct vk_msg_blk __iomem *dst;
+ struct bcm_vk_msgq __iomem *msgq;
+ struct bcm_vk_sync_qinfo *qinfo;
+ u32 q_num = get_q_num(src);
+ u32 wr_idx; /* local copy */
+ u32 i;
+ u32 avail;
+ u32 retry;
+
+ if (entry->to_v_blks != src->size + 1) {
+ dev_err(dev, "number of blks %d not matching %d MsgId[0x%x]: func %d ctx 0x%x\n",
+ entry->to_v_blks,
+ src->size + 1,
+ get_msg_id(src),
+ src->function_id,
+ src->context_id);
+ return -EMSGSIZE;
+ }
+
+ msgq = chan->msgq[q_num];
+ qinfo = &chan->sync_qinfo[q_num];
+
+ mutex_lock(&chan->msgq_mutex);
+
+ avail = msgq_avail_space(msgq, qinfo);
+
+ /* if not enough space after retries, return -EAGAIN and let the app handle it */
+ retry = 0;
+ while ((avail < entry->to_v_blks) &&
+ (retry++ < BCM_VK_H2VK_ENQ_RETRY)) {
+ mutex_unlock(&chan->msgq_mutex);
+
+ msleep(BCM_VK_H2VK_ENQ_RETRY_DELAY_MS);
+ mutex_lock(&chan->msgq_mutex);
+ avail = msgq_avail_space(msgq, qinfo);
+ }
+ if (retry > BCM_VK_H2VK_ENQ_RETRY) {
+ mutex_unlock(&chan->msgq_mutex);
+ return -EAGAIN;
+ }
+
+ /* at this point, mutex is taken and there is enough space */
+ entry->seq_num = seq_num++; /* update debug seq number */
+ wr_idx = readl_relaxed(&msgq->wr_idx);
+
+ if (wr_idx >= qinfo->q_size) {
+ dev_crit(dev, "Invalid wr_idx 0x%x => max 0x%x!",
+ wr_idx, qinfo->q_size);
+ bcm_vk_blk_drv_access(vk);
+ bcm_vk_set_host_alert(vk, ERR_LOG_HOST_PCIE_DWN);
+ goto idx_err;
+ }
+
+ dst = msgq_blk_addr(qinfo, wr_idx);
+ for (i = 0; i < entry->to_v_blks; i++) {
+ memcpy_toio(dst, src, sizeof(*dst));
+
+ src++;
+ wr_idx = msgq_inc(qinfo, wr_idx, 1);
+ dst = msgq_blk_addr(qinfo, wr_idx);
+ }
+
+ /* flush the write pointer */
+ writel(wr_idx, &msgq->wr_idx);
+
+ /* log new info for debugging */
+ dev_dbg(dev,
+ "MsgQ[%d] [Rd Wr] = [%d %d] blks inserted %d - Q = [u-%d a-%d]/%d\n",
+ readl_relaxed(&msgq->num),
+ readl_relaxed(&msgq->rd_idx),
+ wr_idx,
+ entry->to_v_blks,
+ msgq_occupied(msgq, qinfo),
+ msgq_avail_space(msgq, qinfo),
+ readl_relaxed(&msgq->size));
+ /*
+ * Ring the doorbell for this queue. 1 is added to wr_idx so that
+ * the value 0 never appears on the VK side, distinguishing it
+ * from the initial value.
+ */
+ bcm_to_v_q_doorbell(vk, q_num, wr_idx + 1);
+idx_err:
+ mutex_unlock(&chan->msgq_mutex);
+ return 0;
+}
+
+int bcm_vk_send_shutdown_msg(struct bcm_vk *vk, u32 shut_type,
+ const pid_t pid, const u32 q_num)
+{
+ int rc = 0;
+ struct bcm_vk_wkent *entry;
+ struct device *dev = &vk->pdev->dev;
+
+ /*
+ * check if the marker is still good. Sometimes, the PCIe interface
+ * may have gone down, and if we send down things based on broken
+ * values, the kernel may panic.
+ */
+ if (!bcm_vk_msgq_marker_valid(vk)) {
+ dev_info(dev, "PCIe comm chan - invalid marker (0x%x)!\n",
+ vkread32(vk, BAR_1, VK_BAR1_MSGQ_DEF_RDY));
+ return -EINVAL;
+ }
+
+ entry = kzalloc(sizeof(*entry) +
+ sizeof(struct vk_msg_blk), GFP_KERNEL);
+ if (!entry)
+ return -ENOMEM;
+
+ /* fill up necessary data */
+ entry->to_v_msg[0].function_id = VK_FID_SHUTDOWN;
+ set_q_num(&entry->to_v_msg[0], q_num);
+ set_msg_id(&entry->to_v_msg[0], VK_SIMPLEX_MSG_ID);
+ entry->to_v_blks = 1; /* always 1 block */
+
+ entry->to_v_msg[0].cmd = shut_type;
+ entry->to_v_msg[0].arg = pid;
+
+ rc = bcm_to_v_msg_enqueue(vk, entry);
+ if (rc)
+ dev_err(dev,
+ "Sending shutdown message to q %d for pid %d fails.\n",
+ get_q_num(&entry->to_v_msg[0]), pid);
+
+ kfree(entry);
+
+ return rc;
+}
+
+static int bcm_vk_handle_last_sess(struct bcm_vk *vk, const pid_t pid,
+ const u32 q_num)
+{
+ int rc = 0;
+ struct device *dev = &vk->pdev->dev;
+
+ /*
+ * don't send down or do anything if message queue is not initialized
+ */
+ if (!bcm_vk_drv_access_ok(vk))
+ return -EPERM;
+
+ dev_dbg(dev, "No more sessions, shut down pid %d\n", pid);
+
+ rc = bcm_vk_send_shutdown_msg(vk, VK_SHUTDOWN_PID, pid, q_num);
+
+ return rc;
+}
+
+static struct bcm_vk_wkent *bcm_vk_dequeue_pending(struct bcm_vk *vk,
+ struct bcm_vk_msg_chan *chan,
+ u16 q_num,
+ u16 msg_id)
+{
+ bool found = false;
+ struct bcm_vk_wkent *entry;
+
+ spin_lock(&chan->pendq_lock);
+ list_for_each_entry(entry, &chan->pendq[q_num], node) {
+ if (get_msg_id(&entry->to_v_msg[0]) == msg_id) {
+ list_del(&entry->node);
+ found = true;
+ bcm_vk_msgid_bitmap_clear(vk, msg_id, 1);
+ break;
+ }
+ }
+ spin_unlock(&chan->pendq_lock);
+ return ((found) ? entry : NULL);
+}
+
+s32 bcm_to_h_msg_dequeue(struct bcm_vk *vk)
+{
+ struct device *dev = &vk->pdev->dev;
+ struct bcm_vk_msg_chan *chan = &vk->to_h_msg_chan;
+ struct vk_msg_blk *data;
+ struct vk_msg_blk __iomem *src;
+ struct vk_msg_blk *dst;
+ struct bcm_vk_msgq __iomem *msgq;
+ struct bcm_vk_sync_qinfo *qinfo;
+ struct bcm_vk_wkent *entry;
+ u32 rd_idx, wr_idx;
+ u32 q_num, msg_id, j;
+ u32 num_blks;
+ s32 total = 0;
+ int cnt = 0;
+ int msg_processed = 0;
+ int max_msg_to_process;
+ bool exit_loop;
+
+ /*
+ * drain all the messages from the queues, and find its pending
+ * entry in the to_v queue, based on msg_id & q_num, and move the
+ * entry to the to_h pending queue, waiting for user space
+ * program to extract
+ */
+ mutex_lock(&chan->msgq_mutex);
+
+ for (q_num = 0; q_num < chan->q_nr; q_num++) {
+ msgq = chan->msgq[q_num];
+ qinfo = &chan->sync_qinfo[q_num];
+ max_msg_to_process = BCM_VK_MSG_PROC_MAX_LOOP * qinfo->q_size;
+
+ rd_idx = readl_relaxed(&msgq->rd_idx);
+ wr_idx = readl_relaxed(&msgq->wr_idx);
+ msg_processed = 0;
+ exit_loop = false;
+ while ((rd_idx != wr_idx) && !exit_loop) {
+ u8 src_size;
+
+ /*
+ * Make a local copy and get pointer to src blk
+ * The rd_idx is masked before getting the pointer to
+ * avoid out of bound access in case the interface goes
+ * down. It will end up pointing to the last block in
+ * the buffer, but subsequent src->size check would be
+ * able to catch this.
+ */
+ src = msgq_blk_addr(qinfo, rd_idx & qinfo->q_mask);
+ src_size = readb(&src->size);
+
+ if ((rd_idx >= qinfo->q_size) ||
+ (src_size > (qinfo->q_size - 1))) {
+ dev_crit(dev,
+ "Invalid rd_idx 0x%x or size 0x%x => max 0x%x!",
+ rd_idx, src_size, qinfo->q_size);
+ bcm_vk_blk_drv_access(vk);
+ bcm_vk_set_host_alert(vk,
+ ERR_LOG_HOST_PCIE_DWN);
+ goto idx_err;
+ }
+
+ num_blks = src_size + 1;
+ data = kzalloc(num_blks * VK_MSGQ_BLK_SIZE, GFP_KERNEL);
+ if (data) {
+ /* copy messages and linearize it */
+ dst = data;
+ for (j = 0; j < num_blks; j++) {
+ memcpy_fromio(dst, src, sizeof(*dst));
+
+ dst++;
+ rd_idx = msgq_inc(qinfo, rd_idx, 1);
+ src = msgq_blk_addr(qinfo, rd_idx);
+ }
+ total++;
+ } else {
+ /*
+ * a kernel memory allocation failure is fatal,
+ * but make sure the msgq mutex is released on
+ * the way out
+ */
+ dev_crit(dev, "Kernel mem allocation failure.\n");
+ total = -ENOMEM;
+ goto idx_err;
+ }
+
+ /* flush rd pointer after a message is dequeued */
+ writel(rd_idx, &msgq->rd_idx);
+
+ /* log new info for debugging */
+ dev_dbg(dev,
+ "MsgQ[%d] [Rd Wr] = [%d %d] blks extracted %d - Q = [u-%d a-%d]/%d\n",
+ readl_relaxed(&msgq->num),
+ rd_idx,
+ wr_idx,
+ num_blks,
+ msgq_occupied(msgq, qinfo),
+ msgq_avail_space(msgq, qinfo),
+ readl_relaxed(&msgq->size));
+
+ /*
+ * No need to search if it is an autonomous one-way
+ * message from driver, as these messages do not bear
+ * a to_v pending item. Currently, only the shutdown
+ * message falls into this category.
+ */
+ if (data->function_id == VK_FID_SHUTDOWN) {
+ kfree(data);
+ continue;
+ }
+
+ msg_id = get_msg_id(data);
+ /* lookup original message in to_v direction */
+ entry = bcm_vk_dequeue_pending(vk,
+ &vk->to_v_msg_chan,
+ q_num,
+ msg_id);
+
+ /*
+ * if a message arrives that has no matching prior send,
+ * this is the place to add handling for it
+ */
+ if (entry) {
+ entry->to_h_blks = num_blks;
+ entry->to_h_msg = data;
+ bcm_vk_append_pendq(&vk->to_h_msg_chan,
+ q_num, entry);
+
+ } else {
+ if (cnt++ < batch_log)
+ dev_info(dev,
+ "Could not find MsgId[0x%x] for resp func %d bmap %d\n",
+ msg_id, data->function_id,
+ test_bit(msg_id, vk->bmap));
+ kfree(data);
+ }
+ /* Fetch wr_idx to handle more back-to-back events */
+ wr_idx = readl(&msgq->wr_idx);
+
+ /*
+ * Cap the number of messages processed so that, even when
+ * handling back-to-back events, we don't hold the CPU too
+ * long, and to guard against corrupted rd/wr indexes
+ * triggering infinite looping.
+ */
+ if (++msg_processed >= max_msg_to_process) {
+ dev_warn(dev, "Q[%d] Per loop processing exceeds %d\n",
+ q_num, max_msg_to_process);
+ exit_loop = true;
+ }
+ }
+ }
+idx_err:
+ mutex_unlock(&chan->msgq_mutex);
+ dev_dbg(dev, "total %d drained from queues\n", total);
+
+ return total;
+}
+
+/*
+ * init routine for all required data structures
+ */
+static int bcm_vk_data_init(struct bcm_vk *vk)
+{
+ int i;
+
+ spin_lock_init(&vk->ctx_lock);
+ for (i = 0; i < ARRAY_SIZE(vk->ctx); i++) {
+ vk->ctx[i].in_use = false;
+ vk->ctx[i].idx = i; /* self identity */
+ vk->ctx[i].miscdev = NULL;
+ }
+ spin_lock_init(&vk->msg_id_lock);
+ spin_lock_init(&vk->host_alert_lock);
+ vk->msg_id = 0;
+
+ /* initialize hash table */
+ for (i = 0; i < VK_PID_HT_SZ; i++)
+ INIT_LIST_HEAD(&vk->pid_ht[i].head);
+
+ return 0;
+}
+
+irqreturn_t bcm_vk_msgq_irqhandler(int irq, void *dev_id)
+{
+ struct bcm_vk *vk = dev_id;
+
+ if (!bcm_vk_drv_access_ok(vk)) {
+ dev_err(&vk->pdev->dev,
+ "Interrupt %d received when msgq not inited\n", irq);
+ goto skip_schedule_work;
+ }
+
+ queue_work(vk->wq_thread, &vk->wq_work);
+
+skip_schedule_work:
+ return IRQ_HANDLED;
+}
+
int bcm_vk_open(struct inode *inode, struct file *p_file)
{
struct bcm_vk_ctx *ctx;
@@ -111,16 +957,321 @@ int bcm_vk_open(struct inode *inode, struct file *p_file)
return rc;
}
+ssize_t bcm_vk_read(struct file *p_file,
+ char __user *buf,
+ size_t count,
+ loff_t *f_pos)
+{
+ ssize_t rc = -ENOMSG;
+ struct bcm_vk_ctx *ctx = p_file->private_data;
+ struct bcm_vk *vk = container_of(ctx->miscdev, struct bcm_vk,
+ miscdev);
+ struct device *dev = &vk->pdev->dev;
+ struct bcm_vk_msg_chan *chan = &vk->to_h_msg_chan;
+ struct bcm_vk_wkent *entry = NULL;
+ u32 q_num;
+ u32 rsp_length;
+ bool found = false;
+
+ if (!bcm_vk_drv_access_ok(vk))
+ return -EPERM;
+
+ dev_dbg(dev, "Buf count %zu\n", count);
+ found = false;
+
+ /*
+ * search through the pendq on the to_h chan, and return only those
+ * that belong to the same context. Search is always from the high to
+ * the low priority queues
+ */
+ spin_lock(&chan->pendq_lock);
+ for (q_num = 0; q_num < chan->q_nr; q_num++) {
+ list_for_each_entry(entry, &chan->pendq[q_num], node) {
+ if (entry->ctx->idx == ctx->idx) {
+ if (count >=
+ (entry->to_h_blks * VK_MSGQ_BLK_SIZE)) {
+ list_del(&entry->node);
+ atomic_dec(&ctx->pend_cnt);
+ found = true;
+ } else {
+ /* buffer not big enough */
+ rc = -EMSGSIZE;
+ }
+ goto read_loop_exit;
+ }
+ }
+ }
+read_loop_exit:
+ spin_unlock(&chan->pendq_lock);
+
+ if (found) {
+ /* retrieve the passed down msg_id */
+ set_msg_id(&entry->to_h_msg[0], entry->usr_msg_id);
+ rsp_length = entry->to_h_blks * VK_MSGQ_BLK_SIZE;
+ if (copy_to_user(buf, entry->to_h_msg, rsp_length) == 0)
+ rc = rsp_length;
+
+ bcm_vk_free_wkent(dev, entry);
+ } else if (rc == -EMSGSIZE) {
+ struct vk_msg_blk tmp_msg = entry->to_h_msg[0];
+
+ /*
+ * in this case, return just the first block, so
+ * that the app knows what size it is looking for.
+ */
+ set_msg_id(&tmp_msg, entry->usr_msg_id);
+ tmp_msg.size = entry->to_h_blks - 1;
+ if (copy_to_user(buf, &tmp_msg, VK_MSGQ_BLK_SIZE) != 0) {
+ dev_err(dev, "Error return 1st block in -EMSGSIZE\n");
+ rc = -EFAULT;
+ }
+ }
+ return rc;
+}
+
+ssize_t bcm_vk_write(struct file *p_file,
+ const char __user *buf,
+ size_t count,
+ loff_t *f_pos)
+{
+ ssize_t rc;
+ struct bcm_vk_ctx *ctx = p_file->private_data;
+ struct bcm_vk *vk = container_of(ctx->miscdev, struct bcm_vk,
+ miscdev);
+ struct bcm_vk_msgq __iomem *msgq;
+ struct device *dev = &vk->pdev->dev;
+ struct bcm_vk_wkent *entry;
+ u32 sgl_extra_blks;
+ u32 q_num;
+ u32 msg_size;
+ u32 msgq_size;
+
+ if (!bcm_vk_drv_access_ok(vk))
+ return -EPERM;
+
+ dev_dbg(dev, "Msg count %zu\n", count);
+
+ /* first, do sanity check where count should be multiple of basic blk */
+ if (count & (VK_MSGQ_BLK_SIZE - 1)) {
+ dev_err(dev, "Failure with size %zu not multiple of %zu\n",
+ count, VK_MSGQ_BLK_SIZE);
+ rc = -EINVAL;
+ goto write_err;
+ }
+
+ /* allocate the work entry + buffer for size count and inband sgl */
+ entry = kzalloc(sizeof(*entry) + count + vk->ib_sgl_size,
+ GFP_KERNEL);
+ if (!entry) {
+ rc = -ENOMEM;
+ goto write_err;
+ }
+
+ /* now copy msg from user space, and then formulate the work entry */
+ if (copy_from_user(&entry->to_v_msg[0], buf, count)) {
+ rc = -EFAULT;
+ goto write_free_ent;
+ }
+
+ entry->to_v_blks = count >> VK_MSGQ_BLK_SZ_SHIFT;
+ entry->ctx = ctx;
+
+ /* sanity check: the blk size must not exceed the queue space */
+ q_num = get_q_num(&entry->to_v_msg[0]);
+ msgq = vk->to_v_msg_chan.msgq[q_num];
+ msgq_size = readl_relaxed(&msgq->size);
+ if (entry->to_v_blks + (vk->ib_sgl_size >> VK_MSGQ_BLK_SZ_SHIFT)
+ > (msgq_size - 1)) {
+ dev_err(dev, "Blk size %d exceed max queue size allowed %d\n",
+ entry->to_v_blks, msgq_size - 1);
+ rc = -EINVAL;
+ goto write_free_ent;
+ }
+
+ /* Use internal message id */
+ entry->usr_msg_id = get_msg_id(&entry->to_v_msg[0]);
+ rc = bcm_vk_get_msg_id(vk);
+ if (rc == VK_MSG_ID_OVERFLOW) {
+ dev_err(dev, "msg_id overflow\n");
+ rc = -EOVERFLOW;
+ goto write_free_ent;
+ }
+ set_msg_id(&entry->to_v_msg[0], rc);
+ ctx->q_num = q_num;
+
+ dev_dbg(dev,
+ "[Q-%d]Message ctx id %d, usr_msg_id 0x%x sent msg_id 0x%x\n",
+ ctx->q_num, ctx->idx, entry->usr_msg_id,
+ get_msg_id(&entry->to_v_msg[0]));
+
+ /* Convert any pointers to sg list */
+ if (entry->to_v_msg[0].function_id == VK_FID_TRANS_BUF) {
+ unsigned int num_planes;
+ int dir;
+ struct _vk_data *data;
+
+ num_planes = entry->to_v_msg[0].cmd & VK_CMD_PLANES_MASK;
+ if ((entry->to_v_msg[0].cmd & VK_CMD_MASK) == VK_CMD_DOWNLOAD)
+ dir = DMA_FROM_DEVICE;
+ else
+ dir = DMA_TO_DEVICE;
+
+ /* Calculate vk_data location */
+ /* Go to end of the message */
+ msg_size = entry->to_v_msg[0].size;
+ if (msg_size > entry->to_v_blks) {
+ rc = -EMSGSIZE;
+ goto write_free_msgid;
+ }
+
+ data = (struct _vk_data *)&entry->to_v_msg[msg_size + 1];
+
+ /* Now back up to the start of the pointers */
+ data -= num_planes;
+
+ /* Convert user addresses to DMA SG List */
+ rc = bcm_vk_sg_alloc(dev, entry->dma, dir, data, num_planes);
+ if (rc)
+ goto write_free_msgid;
+
+ atomic_inc(&ctx->dma_cnt);
+ /* try to embed inband sgl */
+ sgl_extra_blks = bcm_vk_append_ib_sgl(vk, entry, data,
+ num_planes);
+ entry->to_v_blks += sgl_extra_blks;
+ entry->to_v_msg[0].size += sgl_extra_blks;
+ }
+
+ /*
+ * store work entry to pending queue until a response is received.
+ * This needs to be done before enqueuing the message
+ */
+ bcm_vk_append_pendq(&vk->to_v_msg_chan, q_num, entry);
+
+ rc = bcm_to_v_msg_enqueue(vk, entry);
+ if (rc) {
+ dev_err(dev, "Fail to enqueue msg to to_v queue\n");
+
+ /* remove message from pending list */
+ entry = bcm_vk_dequeue_pending
+ (vk,
+ &vk->to_v_msg_chan,
+ q_num,
+ get_msg_id(&entry->to_v_msg[0]));
+ goto write_free_ent;
+ }
+
+ return count;
+
+write_free_msgid:
+ bcm_vk_msgid_bitmap_clear(vk, get_msg_id(&entry->to_v_msg[0]), 1);
+write_free_ent:
+ kfree(entry);
+write_err:
+ return rc;
+}
+
+__poll_t bcm_vk_poll(struct file *p_file, struct poll_table_struct *wait)
+{
+ __poll_t ret = 0;
+ int cnt;
+ struct bcm_vk_ctx *ctx = p_file->private_data;
+ struct bcm_vk *vk = container_of(ctx->miscdev, struct bcm_vk, miscdev);
+ struct device *dev = &vk->pdev->dev;
+
+ poll_wait(p_file, &ctx->rd_wq, wait);
+
+ cnt = atomic_read(&ctx->pend_cnt);
+ if (cnt) {
+ ret = (__force __poll_t)(POLLIN | POLLRDNORM);
+ if (cnt < 0) {
+ dev_err(dev, "Error cnt %d, setting back to 0", cnt);
+ atomic_set(&ctx->pend_cnt, 0);
+ }
+ }
+
+ return ret;
+}
+
int bcm_vk_release(struct inode *inode, struct file *p_file)
{
int ret;
struct bcm_vk_ctx *ctx = p_file->private_data;
struct bcm_vk *vk = container_of(ctx->miscdev, struct bcm_vk, miscdev);
+ struct device *dev = &vk->pdev->dev;
+ pid_t pid = ctx->pid;
+ int dma_cnt;
+ unsigned long timeout, start_time;
+
+ /*
+ * if there are outstanding DMA transactions, delay long enough to
+ * ensure that the card side has stopped touching the host buffer
+ * and its SGL. A race condition can happen if the host app is killed
+ * abruptly, e.g. kill -9, while some DMA transfers are still inflight.
+ * Nothing can be done except delay, as the host side runs in a
+ * completely asynchronous fashion.
+ */
+ start_time = jiffies;
+ timeout = start_time + msecs_to_jiffies(BCM_VK_DMA_DRAIN_MAX_MS);
+ do {
+ if (time_after(jiffies, timeout)) {
+ dev_warn(dev, "%d dma still pending for [fd-%d] pid %d\n",
+ dma_cnt, ctx->idx, pid);
+ break;
+ }
+ dma_cnt = atomic_read(&ctx->dma_cnt);
+ cpu_relax();
+ cond_resched();
+ } while (dma_cnt);
+ dev_dbg(dev, "Draining for [fd-%d] pid %d - delay %d ms\n",
+ ctx->idx, pid, jiffies_to_msecs(jiffies - start_time));
+
+ bcm_vk_drain_all_pend(&vk->pdev->dev, &vk->to_v_msg_chan, ctx);
+ bcm_vk_drain_all_pend(&vk->pdev->dev, &vk->to_h_msg_chan, ctx);
ret = bcm_vk_free_ctx(vk, ctx);
+ if (ret == 0)
+ ret = bcm_vk_handle_last_sess(vk, pid, ctx->q_num);
+ else
+ ret = 0;
kref_put(&vk->kref, bcm_vk_release_data);
return ret;
}
+int bcm_vk_msg_init(struct bcm_vk *vk)
+{
+ struct device *dev = &vk->pdev->dev;
+ int ret;
+
+ if (bcm_vk_data_init(vk)) {
+ dev_err(dev, "Error initializing internal data structures\n");
+ return -EINVAL;
+ }
+
+ if (bcm_vk_msg_chan_init(&vk->to_v_msg_chan) ||
+ bcm_vk_msg_chan_init(&vk->to_h_msg_chan)) {
+ dev_err(dev, "Error initializing communication channel\n");
+ return -EIO;
+ }
+
+ /* read msgq info if ready */
+ ret = bcm_vk_sync_msgq(vk, false);
+ if (ret && (ret != -EAGAIN)) {
+ dev_err(dev, "Error reading comm msg Q info\n");
+ return -EIO;
+ }
+
+ return 0;
+}
+
+void bcm_vk_msg_remove(struct bcm_vk *vk)
+{
+ bcm_vk_blk_drv_access(vk);
+
+ /* drain all pending items */
+ bcm_vk_drain_all_pend(&vk->pdev->dev, &vk->to_v_msg_chan, NULL);
+ bcm_vk_drain_all_pend(&vk->pdev->dev, &vk->to_h_msg_chan, NULL);
+}
+
diff --git a/drivers/misc/bcm-vk/bcm_vk_msg.h b/drivers/misc/bcm-vk/bcm_vk_msg.h
index 32516abcaf89..637c5d662eb7 100644
--- a/drivers/misc/bcm-vk/bcm_vk_msg.h
+++ b/drivers/misc/bcm-vk/bcm_vk_msg.h
@@ -6,6 +6,76 @@
#ifndef BCM_VK_MSG_H
#define BCM_VK_MSG_H
+#include <uapi/linux/misc/bcm_vk.h>
+#include "bcm_vk_sg.h"
+
+/* Single message queue control structure */
+struct bcm_vk_msgq {
+ u16 type; /* queue type */
+ u16 num; /* queue number */
+ u32 start; /* offset in BAR1 where the queue memory starts */
+
+ u32 rd_idx; /* read idx */
+ u32 wr_idx; /* write idx */
+
+ u32 size; /*
+ * size, which is in number of 16byte blocks,
+ * to align with the message data structure.
+ */
+ u32 nxt; /*
+ * nxt offset to the next msg queue struct.
+ * This is to provide flexibility for alignment purposes.
+ */
+
+/* Least significant 16 bits in below field hold doorbell register offset */
+#define DB_SHIFT 16
+
+ u32 db_offset; /* queue doorbell register offset in BAR0 */
+
+ u32 rsvd;
+};
+
+/*
+ * Structure to record static info from the msgq sync. We keep local copy
+ * for some of these variables for both performance + checking purpose.
+ */
+struct bcm_vk_sync_qinfo {
+ void __iomem *q_start;
+ u32 q_size;
+ u32 q_mask;
+ u32 q_low;
+ u32 q_db_offset;
+};
+
+#define VK_MSGQ_MAX_NR 4 /* Maximum number of message queues */
+
+/*
+ * message block - basic unit in the message where a message's size is always
+ * N x sizeof(basic_block)
+ */
+struct vk_msg_blk {
+ u8 function_id;
+#define VK_FID_TRANS_BUF 5
+#define VK_FID_SHUTDOWN 8
+ u8 size; /* size of the message in number of vk_msg_blk's */
+ u16 trans_id; /* transport id, queue & msg_id */
+ u32 context_id;
+ u32 cmd;
+#define VK_CMD_PLANES_MASK 0x000f /* number of planes to up/download */
+#define VK_CMD_UPLOAD 0x0400 /* memory transfer to vk */
+#define VK_CMD_DOWNLOAD 0x0500 /* memory transfer from vk */
+#define VK_CMD_MASK 0x0f00 /* command mask */
+ u32 arg;
+};
+
+/* vk_msg_blk is 16 bytes fixed */
+#define VK_MSGQ_BLK_SIZE (sizeof(struct vk_msg_blk))
+/* shift for fast division of basic msg blk size */
+#define VK_MSGQ_BLK_SZ_SHIFT 4
+
+/* use msg_id 0 for any simplex host2vk communication */
+#define VK_SIMPLEX_MSG_ID 0
+
/* context per session opening of sysfs */
struct bcm_vk_ctx {
struct list_head node; /* use for linkage in Hash Table */
@@ -13,7 +83,11 @@ struct bcm_vk_ctx {
bool in_use;
pid_t pid;
u32 hash_idx;
+ u32 q_num; /* queue number used by the stream */
struct miscdevice *miscdev;
+ atomic_t pend_cnt; /* number of items pending to be read from host */
+ atomic_t dma_cnt; /* any dma transaction outstanding */
+ wait_queue_head_t rd_wq;
};
/* pid hash table entry */
@@ -21,6 +95,51 @@ struct bcm_vk_ht_entry {
struct list_head head;
};
+#define VK_DMA_MAX_ADDRS 4 /* Max 4 DMA Addresses */
+/* structure for house keeping a single work entry */
+struct bcm_vk_wkent {
+ struct list_head node; /* for linking purpose */
+ struct bcm_vk_ctx *ctx;
+
+ /* Store up to 4 dma pointers */
+ struct bcm_vk_dma dma[VK_DMA_MAX_ADDRS];
+
+ u32 to_h_blks; /* response */
+ struct vk_msg_blk *to_h_msg;
+
+ /*
+ * put the to_v_msg at the end so that we can simply append the
+ * to_v msg to the end of the allocated block
+ */
+ u32 usr_msg_id;
+ u32 to_v_blks;
+ u32 seq_num;
+ struct vk_msg_blk to_v_msg[0];
+};
+
+/* queue stats counters */
+struct bcm_vk_qs_cnts {
+ u32 cnt; /* general counter, used to limit output */
+ u32 acc_sum;
+ u32 max_occ; /* max during a sampling period */
+ u32 max_abs; /* the abs max since reset */
+};
+
+/* control channel structure for either to_v or to_h communication */
+struct bcm_vk_msg_chan {
+ u32 q_nr;
+ /* Mutex to access msgq */
+ struct mutex msgq_mutex;
+ /* pointing to BAR locations */
+ struct bcm_vk_msgq __iomem *msgq[VK_MSGQ_MAX_NR];
+ /* Spinlock to access pending queue */
+ spinlock_t pendq_lock;
+ /* for temporary storing pending items, one for each queue */
+ struct list_head pendq[VK_MSGQ_MAX_NR];
+ /* static queue info from the sync */
+ struct bcm_vk_sync_qinfo sync_qinfo[VK_MSGQ_MAX_NR];
+};
+
/* total number of supported ctx, 32 ctx each for 5 components */
#define VK_CMPT_CTX_MAX (32 * 5)
@@ -28,4 +147,11 @@ struct bcm_vk_ht_entry {
#define VK_PID_HT_SHIFT_BIT 7 /* 128 */
#define VK_PID_HT_SZ BIT(VK_PID_HT_SHIFT_BIT)
+/* The following are offsets of DDR info provided by the vk card */
+#define VK_BAR0_SEG_SIZE (4 * SZ_1K) /* segment size for BAR0 */
+
+/* shutdown types supported */
+#define VK_SHUTDOWN_PID 1
+#define VK_SHUTDOWN_GRACEFUL 2
+
#endif
diff --git a/drivers/misc/bcm-vk/bcm_vk_sg.c b/drivers/misc/bcm-vk/bcm_vk_sg.c
new file mode 100644
index 000000000000..c76d702f312f
--- /dev/null
+++ b/drivers/misc/bcm-vk/bcm_vk_sg.c
@@ -0,0 +1,275 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2018-2020 Broadcom.
+ */
+#include <linux/dma-mapping.h>
+#include <linux/mm.h>
+#include <linux/pagemap.h>
+#include <linux/vmalloc.h>
+
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/unaligned.h>
+
+#include <uapi/linux/misc/bcm_vk.h>
+
+#include "bcm_vk.h"
+#include "bcm_vk_msg.h"
+#include "bcm_vk_sg.h"
+
+/*
+ * Valkyrie has a hardware limitation of 16M transfer size.
+ * So limit the SGL chunks to 16M.
+ */
+#define BCM_VK_MAX_SGL_CHUNK SZ_16M
+
+static int bcm_vk_dma_alloc(struct device *dev,
+ struct bcm_vk_dma *dma,
+ int dir,
+ struct _vk_data *vkdata);
+static int bcm_vk_dma_free(struct device *dev, struct bcm_vk_dma *dma);
+
+/* Uncomment to dump SGLIST */
+/* #define BCM_VK_DUMP_SGLIST */
+
+static int bcm_vk_dma_alloc(struct device *dev,
+ struct bcm_vk_dma *dma,
+ int direction,
+ struct _vk_data *vkdata)
+{
+ dma_addr_t addr, sg_addr;
+ int err;
+ int i;
+ int offset;
+ u32 size;
+ u32 remaining_size;
+ u32 transfer_size;
+ u64 data;
+ unsigned long first, last;
+ struct _vk_data *sgdata;
+
+ /* Get 64-bit user address */
+ data = get_unaligned(&vkdata->address);
+
+ /* offset into first page */
+ offset = offset_in_page(data);
+
+ /* Calculate number of pages */
+ first = (data & PAGE_MASK) >> PAGE_SHIFT;
+ last = ((data + vkdata->size - 1) & PAGE_MASK) >> PAGE_SHIFT;
+ dma->nr_pages = last - first + 1;
+
+ /* Allocate DMA pages */
+ dma->pages = kmalloc_array(dma->nr_pages,
+ sizeof(struct page *),
+ GFP_KERNEL);
+ if (!dma->pages)
+ return -ENOMEM;
+
+ dev_dbg(dev, "Alloc DMA Pages [0x%llx+0x%x => %d pages]\n",
+ data, vkdata->size, dma->nr_pages);
+
+ dma->direction = direction;
+
+ /* Get user pages into memory */
+ err = get_user_pages_fast(data & PAGE_MASK,
+ dma->nr_pages,
+ direction == DMA_FROM_DEVICE,
+ dma->pages);
+ if (err != dma->nr_pages) {
+ dma->nr_pages = (err >= 0) ? err : 0;
+ dev_err(dev, "get_user_pages_fast, err=%d [%d]\n",
+ err, dma->nr_pages);
+ return err < 0 ? err : -EINVAL;
+ }
+
+ /* Max size of sg list is 1 per mapped page + fields at start */
+ dma->sglen = (dma->nr_pages * sizeof(*sgdata)) +
+ (sizeof(u32) * SGLIST_VKDATA_START);
+
+ /* Allocate sglist */
+ dma->sglist = dma_alloc_coherent(dev,
+ dma->sglen,
+ &dma->handle,
+ GFP_KERNEL);
+ if (!dma->sglist)
+ return -ENOMEM;
+
+ dma->sglist[SGLIST_NUM_SG] = 0;
+ dma->sglist[SGLIST_TOTALSIZE] = vkdata->size;
+ remaining_size = vkdata->size;
+ sgdata = (struct _vk_data *)&dma->sglist[SGLIST_VKDATA_START];
+
+ /* Map all pages into DMA */
+ size = min_t(size_t, PAGE_SIZE - offset, remaining_size);
+ remaining_size -= size;
+ sg_addr = dma_map_page(dev,
+ dma->pages[0],
+ offset,
+ size,
+ dma->direction);
+ transfer_size = size;
+ if (unlikely(dma_mapping_error(dev, sg_addr))) {
+ __free_page(dma->pages[0]);
+ return -EIO;
+ }
+
+ for (i = 1; i < dma->nr_pages; i++) {
+ size = min_t(size_t, PAGE_SIZE, remaining_size);
+ remaining_size -= size;
+ addr = dma_map_page(dev,
+ dma->pages[i],
+ 0,
+ size,
+ dma->direction);
+ if (unlikely(dma_mapping_error(dev, addr))) {
+ __free_page(dma->pages[i]);
+ return -EIO;
+ }
+
+ /*
+ * Compress SG list entry when pages are contiguous
+ * and the transfer size is less than or equal to
+ * BCM_VK_MAX_SGL_CHUNK
+ */
+ if ((addr == (sg_addr + transfer_size)) &&
+ ((transfer_size + size) <= BCM_VK_MAX_SGL_CHUNK)) {
+ /* pages are contiguous, add to same sg entry */
+ transfer_size += size;
+ } else {
+ /* pages are not contiguous, write sg entry */
+ sgdata->size = transfer_size;
+ put_unaligned(sg_addr, (u64 *)&sgdata->address);
+ dma->sglist[SGLIST_NUM_SG]++;
+
+ /* start new sg entry */
+ sgdata++;
+ sg_addr = addr;
+ transfer_size = size;
+ }
+ }
+ /* Write last sg list entry */
+ sgdata->size = transfer_size;
+ put_unaligned(sg_addr, (u64 *)&sgdata->address);
+ dma->sglist[SGLIST_NUM_SG]++;
+
+ /* Update pointers and size field to point to sglist */
+ put_unaligned((u64)dma->handle, &vkdata->address);
+ vkdata->size = (dma->sglist[SGLIST_NUM_SG] * sizeof(*sgdata)) +
+ (sizeof(u32) * SGLIST_VKDATA_START);
+
+#ifdef BCM_VK_DUMP_SGLIST
+ dev_dbg(dev,
+ "sgl 0x%llx handle 0x%llx, sglen: 0x%x sgsize: 0x%x\n",
+ (u64)dma->sglist,
+ dma->handle,
+ dma->sglen,
+ vkdata->size);
+ for (i = 0; i < vkdata->size / sizeof(u32); i++)
+ dev_dbg(dev, "i:0x%x 0x%x\n", i, dma->sglist[i]);
+#endif
+
+ return 0;
+}
+
+int bcm_vk_sg_alloc(struct device *dev,
+ struct bcm_vk_dma *dma,
+ int dir,
+ struct _vk_data *vkdata,
+ int num)
+{
+ int i;
+ int rc = -EINVAL;
+
+ /* Convert user addresses to DMA SG List */
+ for (i = 0; i < num; i++) {
+ if (vkdata[i].size && vkdata[i].address) {
+ /*
+ * If both size and address are non-zero
+ * then DMA alloc.
+ */
+ rc = bcm_vk_dma_alloc(dev,
+ &dma[i],
+ dir,
+ &vkdata[i]);
+ } else if (vkdata[i].size ||
+ vkdata[i].address) {
+ /*
+ * If one of size and address are zero
+ * there is a problem.
+ */
+ dev_err(dev,
+ "Invalid vkdata %x 0x%x 0x%llx\n",
+ i, vkdata[i].size, vkdata[i].address);
+ rc = -EINVAL;
+ } else {
+ /*
+ * If size and address are both zero
+ * don't convert, but return success.
+ */
+ rc = 0;
+ }
+
+ if (rc)
+ goto fail_alloc;
+ }
+ return rc;
+
+fail_alloc:
+ while (i > 0) {
+ i--;
+ if (dma[i].sglist)
+ bcm_vk_dma_free(dev, &dma[i]);
+ }
+ return rc;
+}
+
+static int bcm_vk_dma_free(struct device *dev, struct bcm_vk_dma *dma)
+{
+ dma_addr_t addr;
+ int i;
+ int num_sg;
+ u32 size;
+ struct _vk_data *vkdata;
+
+ dev_dbg(dev, "free sglist=%p sglen=0x%x\n", dma->sglist, dma->sglen);
+
+ /* Unmap all pages in the sglist */
+ num_sg = dma->sglist[SGLIST_NUM_SG];
+ vkdata = (struct _vk_data *)&dma->sglist[SGLIST_VKDATA_START];
+ for (i = 0; i < num_sg; i++) {
+ size = vkdata[i].size;
+ addr = get_unaligned(&vkdata[i].address);
+
+ dma_unmap_page(dev, addr, size, dma->direction);
+ }
+
+ /* Free allocated sglist */
+ dma_free_coherent(dev, dma->sglen, dma->sglist, dma->handle);
+
+ /* Release lock on all pages */
+ for (i = 0; i < dma->nr_pages; i++)
+ put_page(dma->pages[i]);
+
+ /* Free allocated dma pages */
+ kfree(dma->pages);
+ dma->sglist = NULL;
+
+ return 0;
+}
+
+int bcm_vk_sg_free(struct device *dev, struct bcm_vk_dma *dma, int num,
+ int *proc_cnt)
+{
+ int i;
+
+ *proc_cnt = 0;
+ /* Unmap and free all pages and sglists */
+ for (i = 0; i < num; i++) {
+ if (dma[i].sglist) {
+ bcm_vk_dma_free(dev, &dma[i]);
+ *proc_cnt += 1;
+ }
+ }
+
+ return 0;
+}
diff --git a/drivers/misc/bcm-vk/bcm_vk_sg.h b/drivers/misc/bcm-vk/bcm_vk_sg.h
new file mode 100644
index 000000000000..81b3d0976ddb
--- /dev/null
+++ b/drivers/misc/bcm-vk/bcm_vk_sg.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2018-2020 Broadcom.
+ */
+
+#ifndef BCM_VK_SG_H
+#define BCM_VK_SG_H
+
+#include <linux/dma-mapping.h>
+
+struct bcm_vk_dma {
+ /* for userland buffer */
+ struct page **pages;
+ int nr_pages;
+
+ /* common */
+ dma_addr_t handle;
+ /*
+ * sglist is of the following LE format
+ * [U32] num_sg = number of sg addresses (N)
+ * [U32] totalsize = totalsize of data being transferred in sglist
+ * [U32] size[0] = size of data in address0
+ * [U32] addr_l[0] = lower 32-bits of address0
+ * [U32] addr_h[0] = higher 32-bits of address0
+ * ..
+ * [U32] size[N-1] = size of data in addressN-1
+ * [U32] addr_l[N-1] = lower 32-bits of addressN-1
+ * [U32] addr_h[N-1] = higher 32-bits of addressN-1
+ */
+ u32 *sglist;
+#define SGLIST_NUM_SG 0
+#define SGLIST_TOTALSIZE 1
+#define SGLIST_VKDATA_START 2
+
+ int sglen; /* Length (bytes) of sglist */
+ int direction;
+};
+
+struct _vk_data {
+ u32 size; /* data size in bytes */
+ u64 address; /* Pointer to data */
+} __packed;
+
+/*
+ * Scatter-gather DMA buffer API.
+ *
+ * These functions provide a simple way to create a page list and a
+ * scatter-gather list from userspace address and map the memory
+ * for DMA operation.
+ */
+int bcm_vk_sg_alloc(struct device *dev,
+ struct bcm_vk_dma *dma,
+ int dir,
+ struct _vk_data *vkdata,
+ int num);
+
+int bcm_vk_sg_free(struct device *dev, struct bcm_vk_dma *dma, int num,
+ int *proc_cnt);
+
+#endif
+
--
2.17.1
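As a rough illustration of the sglist layout documented above (not part
of the series; the struct and helper names below are hypothetical), a
host-side walker over the little-endian word array could look like this:

    /* Illustrative sketch only: walk the LE sglist produced by
     * bcm_vk_dma_alloc(). Word indexes mirror the driver's SGLIST_*
     * defines; vk_sg_entry mirrors struct _vk_data.
     */
    #include <stdint.h>
    #include <stdio.h>

    struct vk_sg_entry {
            uint32_t size;     /* size of data at this address */
            uint32_t addr_lo;  /* lower 32 bits of DMA address */
            uint32_t addr_hi;  /* upper 32 bits of DMA address */
    };

    static void dump_sglist(const uint32_t *sgl)
    {
            uint32_t num_sg = sgl[0];  /* SGLIST_NUM_SG */
            uint32_t total = sgl[1];   /* SGLIST_TOTALSIZE */
            const struct vk_sg_entry *e =
                    (const struct vk_sg_entry *)&sgl[2]; /* SGLIST_VKDATA_START */
            uint32_t i;

            printf("num_sg=%u totalsize=%u\n", num_sg, total);
            for (i = 0; i < num_sg; i++)
                    printf("  [%u] size=0x%x addr=0x%llx\n", i, e[i].size,
                           ((unsigned long long)e[i].addr_hi << 32) |
                           e[i].addr_lo);
    }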
Add mmap function that allows a host application to open up BAR2 memory
for remotely spooling out messages from the VK logger.
Co-developed-by: Desmond Yan <[email protected]>
Signed-off-by: Desmond Yan <[email protected]>
Signed-off-by: Scott Branden <[email protected]>
---
drivers/misc/bcm-vk/bcm_vk_dev.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
index 6c2370723e0a..8b869d7f7b93 100644
--- a/drivers/misc/bcm-vk/bcm_vk_dev.c
+++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
@@ -1177,6 +1177,29 @@ static long bcm_vk_reset(struct bcm_vk *vk, struct vk_reset __user *arg)
return ret;
}
+static int bcm_vk_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct bcm_vk_ctx *ctx = file->private_data;
+ struct bcm_vk *vk = container_of(ctx->miscdev, struct bcm_vk, miscdev);
+ unsigned long pg_size;
+
+ /* only BAR2 can be mmap'd, which is PCI bar number 4 due to 64-bit BARs */
+#define VK_MMAPABLE_BAR 4
+
+ pg_size = ((pci_resource_len(vk->pdev, VK_MMAPABLE_BAR) - 1)
+ >> PAGE_SHIFT) + 1;
+ if (vma->vm_pgoff + vma_pages(vma) > pg_size)
+ return -EINVAL;
+
+ vma->vm_pgoff += (pci_resource_start(vk->pdev, VK_MMAPABLE_BAR)
+ >> PAGE_SHIFT);
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+ return io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot);
+}
+
static long bcm_vk_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
long ret = -EINVAL;
@@ -1215,6 +1238,7 @@ static const struct file_operations bcm_vk_fops = {
.write = bcm_vk_write,
.poll = bcm_vk_poll,
.release = bcm_vk_release,
+ .mmap = bcm_vk_mmap,
.unlocked_ioctl = bcm_vk_ioctl,
};
--
2.17.1
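For context, a minimal sketch of how a host application might consume
this mmap interface (illustrative only; the device node name is an
assumption, not defined by this patch):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            /* assumed node name for card instance 0 */
            int fd = open("/dev/bcm-vk.0", O_RDWR);
            void *log;

            if (fd < 0)
                    return 1;

            /* pgoff 0 maps the start of BAR2; bcm_vk_mmap validates
             * the requested range against the BAR2 size
             */
            log = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
            if (log == MAP_FAILED) {
                    close(fd);
                    return 1;
            }

            /* ... spool logger messages out of the mapped window ... */

            munmap(log, 4096);
            close(fd);
            return 0;
    }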
Add maintainer entry for new Broadcom VK Driver
Signed-off-by: Scott Branden <[email protected]>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 190c7fa2ea01..ce88e423547f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3714,6 +3714,13 @@ L: [email protected]
S: Supported
F: drivers/net/ethernet/broadcom/tg3.*
+BROADCOM VK DRIVER
+M: Scott Branden <[email protected]>
+L: [email protected]
+S: Supported
+F: drivers/misc/bcm-vk/
+F: include/uapi/linux/misc/bcm_vk.h
+
BROCADE BFA FC SCSI DRIVER
M: Anil Gurumurthy <[email protected]>
M: Sudarsana Kalluru <[email protected]>
--
2.17.1
Add ttyVK support to the driver to allow console access to the VK card
from the host.
Device node will be in the following form /dev/bcm-vk.x_ttyVKy where:
x is the instance of the VK card
y is the tty device number on the VK card
Signed-off-by: Scott Branden <[email protected]>
---
drivers/misc/bcm-vk/Makefile | 3 +-
drivers/misc/bcm-vk/bcm_vk.h | 28 +++
drivers/misc/bcm-vk/bcm_vk_dev.c | 29 ++-
drivers/misc/bcm-vk/bcm_vk_tty.c | 333 +++++++++++++++++++++++++++++++
4 files changed, 391 insertions(+), 2 deletions(-)
create mode 100644 drivers/misc/bcm-vk/bcm_vk_tty.c
diff --git a/drivers/misc/bcm-vk/Makefile b/drivers/misc/bcm-vk/Makefile
index 79b4e365c9e6..e4a1486f7209 100644
--- a/drivers/misc/bcm-vk/Makefile
+++ b/drivers/misc/bcm-vk/Makefile
@@ -7,5 +7,6 @@ obj-$(CONFIG_BCM_VK) += bcm_vk.o
bcm_vk-objs := \
bcm_vk_dev.o \
bcm_vk_msg.o \
- bcm_vk_sg.o
+ bcm_vk_sg.o \
+ bcm_vk_tty.o
diff --git a/drivers/misc/bcm-vk/bcm_vk.h b/drivers/misc/bcm-vk/bcm_vk.h
index ff8cb33d9882..be5c1d702060 100644
--- a/drivers/misc/bcm-vk/bcm_vk.h
+++ b/drivers/misc/bcm-vk/bcm_vk.h
@@ -8,12 +8,14 @@
#include <linux/atomic.h>
#include <linux/firmware.h>
+#include <linux/irq.h>
#include <linux/kref.h>
#include <linux/miscdevice.h>
#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/poll.h>
#include <linux/sched/signal.h>
+#include <linux/tty.h>
#include <linux/uaccess.h>
#include <uapi/linux/misc/bcm_vk.h>
@@ -84,6 +86,9 @@
#define CODEPUSH_BOOT2_ENTRY 0x60000000
#define BAR_CARD_STATUS 0x410
+/* CARD_STATUS definitions */
+#define CARD_STATUS_TTYVK0_READY BIT(0)
+#define CARD_STATUS_TTYVK1_READY BIT(1)
#define BAR_BOOT1_STDALONE_PROGRESS 0x420
#define BOOT1_STDALONE_SUCCESS (BIT(13) | BIT(14))
@@ -251,6 +256,19 @@ enum pci_barno {
#define BCM_VK_NUM_TTY 2
+struct bcm_vk_tty {
+ struct tty_port port;
+ u32 to_offset; /* bar offset to use */
+ u32 to_size; /* to VK buffer size */
+ u32 wr; /* write offset shadow */
+ u32 from_offset; /* bar offset to use */
+ u32 from_size; /* from VK buffer size */
+ u32 rd; /* read offset shadow */
+ pid_t pid;
+ bool irq_enabled;
+ bool is_opened; /* tracks tty open/close */
+};
+
/* VK device max power state, supports 3, full, reduced and low */
#define MAX_OPP 3
#define MAX_CARD_INFO_TAG_SIZE 64
@@ -342,6 +360,12 @@ struct bcm_vk {
struct miscdevice miscdev;
int devid; /* dev id allocated */
+ struct tty_driver *tty_drv;
+ struct timer_list serial_timer;
+ struct bcm_vk_tty tty[BCM_VK_NUM_TTY];
+ struct workqueue_struct *tty_wq_thread;
+ struct work_struct tty_wq_work;
+
/* Reference-counting to handle file operations */
struct kref kref;
@@ -460,6 +484,7 @@ int bcm_vk_release(struct inode *inode, struct file *p_file);
void bcm_vk_release_data(struct kref *kref);
irqreturn_t bcm_vk_msgq_irqhandler(int irq, void *dev_id);
irqreturn_t bcm_vk_notf_irqhandler(int irq, void *dev_id);
+irqreturn_t bcm_vk_tty_irqhandler(int irq, void *dev_id);
int bcm_vk_msg_init(struct bcm_vk *vk);
void bcm_vk_msg_remove(struct bcm_vk *vk);
void bcm_vk_drain_msg_on_reset(struct bcm_vk *vk);
@@ -470,6 +495,9 @@ int bcm_vk_send_shutdown_msg(struct bcm_vk *vk, u32 shut_type,
const pid_t pid, const u32 q_num);
void bcm_to_v_q_doorbell(struct bcm_vk *vk, u32 q_num, u32 db_val);
int bcm_vk_auto_load_all_images(struct bcm_vk *vk);
+int bcm_vk_tty_init(struct bcm_vk *vk, char *name);
+void bcm_vk_tty_exit(struct bcm_vk *vk);
+void bcm_vk_tty_terminate_tty_user(struct bcm_vk *vk);
void bcm_vk_hb_init(struct bcm_vk *vk);
void bcm_vk_hb_deinit(struct bcm_vk *vk);
void bcm_vk_handle_notf(struct bcm_vk *vk);
diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
index 8b869d7f7b93..65cef4153f82 100644
--- a/drivers/misc/bcm-vk/bcm_vk_dev.c
+++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
@@ -499,6 +499,7 @@ void bcm_vk_blk_drv_access(struct bcm_vk *vk)
}
}
}
+ bcm_vk_tty_terminate_tty_user(vk);
spin_unlock(&vk->ctx_lock);
}
@@ -1362,6 +1363,19 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
}
vk->num_irqs++;
+ for (i = 0;
+ (i < VK_MSIX_TTY_MAX) && (vk->num_irqs < irq);
+ i++, vk->num_irqs++) {
+ err = devm_request_irq(dev, pci_irq_vector(pdev, vk->num_irqs),
+ bcm_vk_tty_irqhandler,
+ IRQF_SHARED, DRV_MODULE_NAME, vk);
+ if (err) {
+ dev_err(dev, "failed request tty IRQ %d for MSIX %d\n",
+ pdev->irq + vk->num_irqs, vk->num_irqs + 1);
+ goto err_irq;
+ }
+ vk->tty[i].irq_enabled = true;
+ }
id = ida_simple_get(&bcm_vk_ida, 0, 0, GFP_KERNEL);
if (id < 0) {
@@ -1415,6 +1429,11 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_destroy_workqueue;
}
+ snprintf(name, sizeof(name), KBUILD_MODNAME ".%d_ttyVK", id);
+ err = bcm_vk_tty_init(vk, name);
+ if (err)
+ goto err_unregister_panic_notifier;
+
/*
* lets trigger an auto download. We don't want to do it serially here
* because at probing time, it is not supposed to block for a long time.
@@ -1423,7 +1442,7 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (auto_load) {
if ((boot_status & BOOT_STATE_MASK) == BROM_RUNNING) {
if (bcm_vk_trigger_autoload(vk))
- goto err_unregister_panic_notifier;
+ goto err_bcm_vk_tty_exit;
} else {
dev_err(dev,
"Auto-load skipped - BROM not in proper state (0x%x)\n",
@@ -1438,6 +1457,9 @@ static int bcm_vk_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
return 0;
+err_bcm_vk_tty_exit:
+ bcm_vk_tty_exit(vk);
+
err_unregister_panic_notifier:
atomic_notifier_chain_unregister(&panic_notifier_list,
&vk->panic_nb);
@@ -1515,6 +1537,9 @@ static void bcm_vk_remove(struct pci_dev *pdev)
atomic_notifier_chain_unregister(&panic_notifier_list,
&vk->panic_nb);
+ bcm_vk_msg_remove(vk);
+ bcm_vk_tty_exit(vk);
+
if (vk->tdma_vaddr)
dma_free_coherent(&pdev->dev, nr_scratch_pages * PAGE_SIZE,
vk->tdma_vaddr, vk->tdma_addr);
@@ -1533,6 +1558,8 @@ static void bcm_vk_remove(struct pci_dev *pdev)
cancel_work_sync(&vk->wq_work);
destroy_workqueue(vk->wq_thread);
+ cancel_work_sync(&vk->tty_wq_work);
+ destroy_workqueue(vk->tty_wq_thread);
for (i = 0; i < MAX_BAR; i++) {
if (vk->bar[i])
diff --git a/drivers/misc/bcm-vk/bcm_vk_tty.c b/drivers/misc/bcm-vk/bcm_vk_tty.c
new file mode 100644
index 000000000000..be3964949b63
--- /dev/null
+++ b/drivers/misc/bcm-vk/bcm_vk_tty.c
@@ -0,0 +1,333 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2018-2020 Broadcom.
+ */
+
+#include <linux/tty.h>
+#include <linux/tty_driver.h>
+#include <linux/tty_flip.h>
+
+#include "bcm_vk.h"
+
+/* TTYVK base offset is 0x300000 into BAR1 */
+#define BAR1_TTYVK_BASE_OFFSET 0x300000
+/* Each TTYVK channel (TO or FROM) is 0x100000 */
+#define BAR1_TTYVK_CHAN_OFFSET 0x100000
+/* Each TTYVK channel has TO and FROM, hence the * 2 */
+#define BAR1_TTYVK_BASE(index) (BAR1_TTYVK_BASE_OFFSET + \
+ ((index) * BAR1_TTYVK_CHAN_OFFSET * 2))
+/* TO TTYVK channel base comes before FROM for each index */
+#define TO_TTYK_BASE(index) BAR1_TTYVK_BASE(index)
+#define FROM_TTYK_BASE(index) (BAR1_TTYVK_BASE(index) + \
+ BAR1_TTYVK_CHAN_OFFSET)
+
+struct bcm_vk_tty_chan {
+ u32 reserved;
+ u32 size;
+ u32 wr;
+ u32 rd;
+ u32 *data;
+};
+
+#define VK_BAR_CHAN(v, DIR, e) ((v)->DIR##_offset \
+ + offsetof(struct bcm_vk_tty_chan, e))
+#define VK_BAR_CHAN_SIZE(v, DIR) VK_BAR_CHAN(v, DIR, size)
+#define VK_BAR_CHAN_WR(v, DIR) VK_BAR_CHAN(v, DIR, wr)
+#define VK_BAR_CHAN_RD(v, DIR) VK_BAR_CHAN(v, DIR, rd)
+#define VK_BAR_CHAN_DATA(v, DIR, off) (VK_BAR_CHAN(v, DIR, data) + (off))
+
+#define VK_BAR0_REGSEG_TTY_DB_OFFSET 0x86c
+
+/* Poll every 1/10 of second - temp hack till we use MSI interrupt */
+#define SERIAL_TIMER_VALUE (HZ / 10)
+
+static void bcm_vk_tty_poll(struct timer_list *t)
+{
+ struct bcm_vk *vk = from_timer(vk, t, serial_timer);
+
+ queue_work(vk->tty_wq_thread, &vk->tty_wq_work);
+ mod_timer(&vk->serial_timer, jiffies + SERIAL_TIMER_VALUE);
+}
+
+irqreturn_t bcm_vk_tty_irqhandler(int irq, void *dev_id)
+{
+ struct bcm_vk *vk = dev_id;
+
+ queue_work(vk->tty_wq_thread, &vk->tty_wq_work);
+
+ return IRQ_HANDLED;
+}
+
+static void bcm_vk_tty_wq_handler(struct work_struct *work)
+{
+ struct bcm_vk *vk = container_of(work, struct bcm_vk, tty_wq_work);
+ struct bcm_vk_tty *vktty;
+ int card_status;
+ int count;
+ unsigned char c;
+ int i;
+ int wr;
+
+ card_status = vkread32(vk, BAR_0, BAR_CARD_STATUS);
+ if (BCM_VK_INTF_IS_DOWN(card_status))
+ return;
+
+ for (i = 0; i < BCM_VK_NUM_TTY; i++) {
+ count = 0;
+ /* Check the card status that the tty channel is ready */
+ if ((card_status & BIT(i)) == 0)
+ continue;
+
+ vktty = &vk->tty[i];
+
+ /* Don't increment read index if tty app is closed */
+ if (!vktty->is_opened)
+ continue;
+
+ /* Fetch the wr offset in buffer from VK */
+ wr = vkread32(vk, BAR_1, VK_BAR_CHAN_WR(vktty, from));
+
+ /* safe to ignore until bar read gives proper size */
+ if (vktty->from_size == 0)
+ continue;
+
+ if (wr >= vktty->from_size) {
+ dev_err(&vk->pdev->dev,
+ "ERROR: wq handler ttyVK%d wr:0x%x >= 0x%x\n",
+ i, wr, vktty->from_size);
+ /* Need to signal and close device in this case */
+ continue;
+ }
+
+ /*
+ * Simple read of circular buffer and
+ * insert into tty flip buffer
+ */
+ while (vk->tty[i].rd != wr) {
+ c = vkread8(vk, BAR_1,
+ VK_BAR_CHAN_DATA(vktty, from, vktty->rd));
+ vktty->rd++;
+ if (vktty->rd >= vktty->from_size)
+ vktty->rd = 0;
+ tty_insert_flip_char(&vktty->port, c, TTY_NORMAL);
+ count++;
+ }
+
+ if (count) {
+ tty_flip_buffer_push(&vktty->port);
+
+ /* Update read offset from shadow register to card */
+ vkwrite32(vk, vktty->rd, BAR_1,
+ VK_BAR_CHAN_RD(vktty, from));
+ }
+ }
+}
+
+static int bcm_vk_tty_open(struct tty_struct *tty, struct file *file)
+{
+ int card_status;
+ struct bcm_vk *vk;
+ struct bcm_vk_tty *vktty;
+ int index;
+
+ /* initialize the pointer in case something fails */
+ tty->driver_data = NULL;
+
+ vk = (struct bcm_vk *)dev_get_drvdata(tty->dev);
+ index = tty->index;
+
+ if (index >= BCM_VK_NUM_TTY)
+ return -EINVAL;
+
+ vktty = &vk->tty[index];
+
+ vktty->pid = task_pid_nr(current);
+ vktty->to_offset = TO_TTYK_BASE(index);
+ vktty->from_offset = FROM_TTYK_BASE(index);
+
+ /* Do not allow tty device to be opened if tty on card not ready */
+ card_status = vkread32(vk, BAR_0, BAR_CARD_STATUS);
+ if (BCM_VK_INTF_IS_DOWN(card_status) || ((card_status & BIT(index)) == 0))
+ return -EBUSY;
+
+ /*
+ * Get shadow registers of the buffer sizes and the "to" write offset
+ * and "from" read offset
+ */
+ vktty->to_size = vkread32(vk, BAR_1, VK_BAR_CHAN_SIZE(vktty, to));
+ vktty->wr = vkread32(vk, BAR_1, VK_BAR_CHAN_WR(vktty, to));
+ vktty->from_size = vkread32(vk, BAR_1, VK_BAR_CHAN_SIZE(vktty, from));
+ vktty->rd = vkread32(vk, BAR_1, VK_BAR_CHAN_RD(vktty, from));
+ vktty->is_opened = true;
+
+ if (tty->count == 1 && !vktty->irq_enabled) {
+ timer_setup(&vk->serial_timer, bcm_vk_tty_poll, 0);
+ mod_timer(&vk->serial_timer, jiffies + SERIAL_TIMER_VALUE);
+ }
+ return 0;
+}
+
+static void bcm_vk_tty_close(struct tty_struct *tty, struct file *file)
+{
+ struct bcm_vk *vk = dev_get_drvdata(tty->dev);
+
+ if (tty->index >= BCM_VK_NUM_TTY)
+ return;
+
+ vk->tty[tty->index].is_opened = false;
+
+ if (tty->count == 1)
+ del_timer_sync(&vk->serial_timer);
+}
+
+static void bcm_vk_tty_doorbell(struct bcm_vk *vk, u32 db_val)
+{
+ vkwrite32(vk, db_val, BAR_0,
+ VK_BAR0_REGSEG_DB_BASE + VK_BAR0_REGSEG_TTY_DB_OFFSET);
+}
+
+static int bcm_vk_tty_write(struct tty_struct *tty,
+ const unsigned char *buffer,
+ int count)
+{
+ int index;
+ struct bcm_vk *vk;
+ struct bcm_vk_tty *vktty;
+ int i;
+
+ index = tty->index;
+ vk = dev_get_drvdata(tty->dev);
+ vktty = &vk->tty[index];
+
+ /* Simple write each byte to circular buffer */
+ for (i = 0; i < count; i++) {
+ vkwrite8(vk, buffer[i], BAR_1,
+ VK_BAR_CHAN_DATA(vktty, to, vktty->wr));
+ vktty->wr++;
+ if (vktty->wr >= vktty->to_size)
+ vktty->wr = 0;
+ }
+ /* Update write offset from shadow register to card */
+ vkwrite32(vk, vktty->wr, BAR_1, VK_BAR_CHAN_WR(vktty, to));
+ bcm_vk_tty_doorbell(vk, 0);
+
+ return count;
+}
+
+static int bcm_vk_tty_write_room(struct tty_struct *tty)
+{
+ struct bcm_vk *vk = dev_get_drvdata(tty->dev);
+
+ return vk->tty[tty->index].to_size - 1;
+}
+
+static const struct tty_operations serial_ops = {
+ .open = bcm_vk_tty_open,
+ .close = bcm_vk_tty_close,
+ .write = bcm_vk_tty_write,
+ .write_room = bcm_vk_tty_write_room,
+};
+
+int bcm_vk_tty_init(struct bcm_vk *vk, char *name)
+{
+ int i;
+ int err;
+ struct tty_driver *tty_drv;
+ struct device *dev = &vk->pdev->dev;
+
+ tty_drv = tty_alloc_driver
+ (BCM_VK_NUM_TTY,
+ TTY_DRIVER_REAL_RAW | TTY_DRIVER_DYNAMIC_DEV);
+ if (IS_ERR(tty_drv))
+ return PTR_ERR(tty_drv);
+
+ /* Save struct tty_driver for uninstalling the device */
+ vk->tty_drv = tty_drv;
+
+ /* initialize the tty driver */
+ tty_drv->driver_name = KBUILD_MODNAME;
+ tty_drv->name = kstrdup(name, GFP_KERNEL);
+ if (!tty_drv->name) {
+ err = -ENOMEM;
+ goto err_put_tty_driver;
+ }
+ tty_drv->type = TTY_DRIVER_TYPE_SERIAL;
+ tty_drv->subtype = SERIAL_TYPE_NORMAL;
+ tty_drv->init_termios = tty_std_termios;
+ tty_set_operations(tty_drv, &serial_ops);
+
+ /* register the tty driver */
+ err = tty_register_driver(tty_drv);
+ if (err) {
+ dev_err(dev, "tty_register_driver failed\n");
+ goto err_kfree_tty_name;
+ }
+
+ for (i = 0; i < BCM_VK_NUM_TTY; i++) {
+ struct device *tty_dev;
+
+ tty_port_init(&vk->tty[i].port);
+ tty_dev = tty_port_register_device(&vk->tty[i].port, tty_drv,
+ i, dev);
+ if (IS_ERR(tty_dev)) {
+ err = PTR_ERR(tty_dev);
+ goto unwind;
+ }
+ dev_set_drvdata(tty_dev, vk);
+ vk->tty[i].is_opened = false;
+ }
+
+ INIT_WORK(&vk->tty_wq_work, bcm_vk_tty_wq_handler);
+ vk->tty_wq_thread = create_singlethread_workqueue("tty");
+ if (!vk->tty_wq_thread) {
+ dev_err(dev, "Fail to create tty workqueue thread\n");
+ err = -ENOMEM;
+ goto unwind;
+ }
+ return 0;
+
+unwind:
+ while (--i >= 0)
+ tty_port_unregister_device(&vk->tty[i].port, tty_drv, i);
+ tty_unregister_driver(tty_drv);
+
+err_kfree_tty_name:
+ kfree(tty_drv->name);
+ tty_drv->name = NULL;
+
+err_put_tty_driver:
+ put_tty_driver(tty_drv);
+
+ return err;
+}
+
+void bcm_vk_tty_exit(struct bcm_vk *vk)
+{
+ int i;
+
+ del_timer_sync(&vk->serial_timer);
+ for (i = 0; i < BCM_VK_NUM_TTY; ++i) {
+ tty_port_unregister_device(&vk->tty[i].port,
+ vk->tty_drv,
+ i);
+ tty_port_destroy(&vk->tty[i].port);
+ }
+ tty_unregister_driver(vk->tty_drv);
+
+ kfree(vk->tty_drv->name);
+ vk->tty_drv->name = NULL;
+
+ put_tty_driver(vk->tty_drv);
+}
+
+void bcm_vk_tty_terminate_tty_user(struct bcm_vk *vk)
+{
+ struct bcm_vk_tty *vktty;
+ int i;
+
+ for (i = 0; i < BCM_VK_NUM_TTY; ++i) {
+ vktty = &vk->tty[i];
+ if (vktty->pid)
+ kill_pid(find_vpid(vktty->pid), SIGKILL, 1);
+ }
+}
--
2.17.1
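For context, a minimal sketch of how a host-side tool might open the
console device this patch creates (illustrative only; raw 8N1 settings
are one reasonable choice, not mandated by the driver):

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int open_vk_console(void)
    {
            struct termios tio;
            /* card instance 0, tty device 0 */
            int fd = open("/dev/bcm-vk.0_ttyVK0", O_RDWR | O_NOCTTY);

            if (fd < 0)
                    return fd;

            /* raw mode so console bytes pass through unmodified */
            tcgetattr(fd, &tio);
            cfmakeraw(&tio);
            tcsetattr(fd, TCSANOW, &tio);

            return fd;
    }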
On 10/2/2020 2:23 PM, Scott Branden wrote:
> Add BCM_VK_QSTATS Kconfig option to allow for enabling debug VK
> queue statistics.
>
> These statistics keep track of max, abs_max, and average for the
> message queues.
>
> Co-developed-by: Desmond Yan <[email protected]>
> Signed-off-by: Desmond Yan <[email protected]>
> Signed-off-by: Scott Branden <[email protected]>
Would it not make more sense to have those debug prints be trace printks
instead? Given what you explained in the previous patch version and the
desire to correlate with other system-wide activity, that might be a
better fit. Looking at the kernel's log for debugging performance or
utilization, or just to get a glimpse of what is going on, is not well
suited past probe time.
--
Florian
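For context, a sketch of what the suggested trace-printk alternative
could look like (illustrative only, not code from the series; field
names follow struct bcm_vk_qs_cnts from the QSTATS patch):

    /* Illustrative sketch: emit QSTATS samples via trace_printk() so
     * they land in the ftrace ring buffer and can be correlated with
     * other system-wide activity.
     */
    static void bcm_vk_qstats_trace(u32 q_num, struct bcm_vk_qs_cnts *qs)
    {
            trace_printk("Q[%u] occ max %u abs_max %u avg %u\n",
                         q_num, qs->max_occ, qs->max_abs,
                         qs->cnt ? qs->acc_sum / qs->cnt : 0);
    }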
Hi Florian,
On 2020-10-02 6:39 p.m., Florian Fainelli wrote:
>
>
> On 10/2/2020 2:23 PM, Scott Branden wrote:
>> Add BCM_VK_QSTATS Kconfig option to allow for enabling debug VK
>> queue statistics.
>>
>> These statistics keep track of max, abs_max, and average for the
>> message queues.
>>
>> Co-developed-by: Desmond Yan <[email protected]>
>> Signed-off-by: Desmond Yan <[email protected]>
>> Signed-off-by: Scott Branden <[email protected]>
>
> Would it not make more sense to have those debug prints be trace printks instead? Given what you explained in the previous patch version and the desire to correlate with other system-wide activity, that might be a better fit. Looking at the kernel's log for debugging performance or utilization, or just to get a glimpse of what is going on, is not well suited past probe time.
The debug prints have served our purpose up to this point.
But, in the interest of getting the other driver patches accepted, I will drop this patch from the series and we will investigate further.
Thanks,
Scott